When you create a new website or redesign your current one, you have the opportunity to really improve your search engine ranking, or to seriously screw it up, depending on what types of fancy JavaScript, CSS, or spider blocking you use. Let's explore...
The search engines each have their own software program that "crawls" the internet simply by following links from one page to the next and from one website to another. Following links is what crawlers do best. Hyperlinking from one website to another is what made the world wide web happen, and search engines still follow hyperlinks as their core method of finding new information.
In recent years, JavaScript and CSS linking techniques have evolved to create interesting user experiences. Fancy navigation menus, scrolling images, and popup sub-windows are seemingly good ways to engage users on your website, but sometimes those same methods also zap the search engine's ability to crawl your website.
Although the search engines say they have the ability to process JavaScript and CSS, there's no guarantee they will spend the time to do that on your website. If your website isn't important to them, you will likely be ignored. What makes you important? Content. You become important when you publish content regularly, as in daily.
Some things to watch out for:
1. Don't rely on CSS or script tricks for the links in your navigation menus. Make sure your navigation links use the traditional <a> HTML tag to point at your pages (see the first sketch after this list).
2. Avoid using JavaScript and the "void" trick (javascript:void(0)) to control your navigation. Not only does this block search engine spiders, it also denies more advanced users the choice to open links in a new window (see the second sketch after this list).
3. Don't use photo galleries with JavaScript popup windows that point to large image files. Those popups should point to full HTML pages with page titles, meta descriptions, and an image caption (see the third sketch after this list). When you link only to the image file, the search engines won't spend much time trying to figure out what it is. Why should they bother if you were too lazy to provide some information for them?
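Here's a minimal sketch of the navigation difference. The page names are hypothetical; the point is that the crawlable version uses a real <a> tag with a real URL:

    <!-- Crawlable: real hyperlinks the spider can follow -->
    <nav>
      <a href="/services.html">Services</a>
      <a href="/contact.html">Contact</a>
    </nav>

    <!-- Not crawlable: the "link" exists only in script -->
    <span onclick="window.location='/services.html'">Services</span>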
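And here's the "void" problem. In the first version the href carries no URL, so a spider has nothing to follow and "open in new window" goes nowhere. The second version keeps a real URL in the href and layers the script on top (openPage is a hypothetical helper, not a built-in):

    <!-- Blocks spiders and breaks "open in new window" -->
    <a href="javascript:void(0)" onclick="openPage('/specials.html')">Specials</a>

    <!-- Spiders and users both get a real URL; script still runs on a normal click -->
    <a href="/specials.html" onclick="openPage(this.href); return false;">Specials</a>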
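Finally, a sketch of a gallery done right. The filenames are made up; what matters is that the link target is an HTML page carrying a title, a meta description, and a caption, instead of a bare .jpg:

    <!-- Weak: the popup points straight at the image file -->
    <a href="/images/sunset-large.jpg">View photo</a>

    <!-- Better: point at a small HTML page that describes the image -->
    <a href="/gallery/sunset.html">View photo</a>

    <!-- /gallery/sunset.html -->
    <html>
    <head>
      <title>Sunset Over the Harbor | My Photo Gallery</title>
      <meta name="description" content="A late-summer sunset photographed from the harbor pier.">
    </head>
    <body>
      <img src="/images/sunset-large.jpg" alt="Sunset over the harbor">
      <p>A late-summer sunset photographed from the harbor pier.</p>
    </body>
    </html>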
Other things to watch out for during the (re)design of your website are the technical details that can ruin your day. Here are a few:
1. Make sure the robots.txt file on your site doesn't limit access to important pages within your site. Web programmers will oftentimes tweak the robots.txt file to improve the speed at which a search engine crawls the site, but sometimes those tweaks backfire (see the first sketch after this list).
2. Are you using a meta tag to tell search engines not to read your pages or follow links from your pages? It's the "noindex,nofollow" value that controls this. It's good to have on your privacy policy and terms of service pages, but usually nowhere else on your website (see the second sketch after this list).
3. What happens when your website returns a 404 "page not found" error? You need to set your website up to show users a default message when they request a page that doesn't exist or is broken. You might not realize it, but this happens all the time. Consider updating your 404 page often to include some type of "that didn't work" message followed by a "take a look at our current specials or events instead" (see the third sketch after this list).
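Here's a robots.txt sketch of how those tweaks backfire. The directory names are hypothetical; the dangerous part is that a single "Disallow: /" shuts crawlers out of the entire site:

    # Backfire: this blocks every crawler from every page
    User-agent: *
    Disallow: /

    # Safer: block only the areas that truly shouldn't be crawled
    User-agent: *
    Disallow: /admin/
    Disallow: /cgi-bin/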
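The meta tag in question looks like this, and it belongs in the <head> of the page:

    <!-- Appropriate on a privacy policy or terms of service page -->
    <meta name="robots" content="noindex,nofollow">

    <!-- On pages you want found, simply leave the tag out -->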
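One common way to set this up, if your site runs on Apache, is the ErrorDocument directive in an .htaccess file pointing at a 404 page you control. The filenames here are hypothetical:

    # .htaccess -- serve our own page for missing URLs
    ErrorDocument 404 /404.html

    <!-- /404.html -->
    <h1>That didn't work.</h1>
    <p>The page you asked for doesn't exist. Take a look at our
       <a href="/specials.html">current specials</a> or
       <a href="/events.html">upcoming events</a> instead.</p>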
Using these six items to guide your website design and functionality will improve the odds that the search engines crawl and index your website accurately.