
Wednesday 30 January 2013

Generate Maximum Traffic by Redesigning Your Website

Generating maximum traffic for your site is not a simple task, because attracting lots of visitors is not an overnight job; it takes experience and a lot of hard work. If your current website is not performing well and is affecting your business, then it's time to redesign it. Every website needs continuous improvement to stay competitive in its market and keep growing, so redesign your website properly before that stage arrives and use it in a way that draws steady traffic towards it.
Generally, websites are affected by minor errors that are not noticeable to the naked eye. It doesn't matter how big your website is; if it is not properly linked, it will be overlooked. So, to drive traffic towards your website, you need to follow some important tips.

Some simple ways of drawing maximum web traffic:
Use of Keywords: By using the right keywords you can rank your website high on the main search engines such as Google, MSN, and Yahoo, and strengthen your search engine optimization. SEO Services Delhi helps with "organic" or "natural" search, which complements paid search.

Advertise it well: Another way to generate maximum traffic is to advertise your website in appropriate Usenet groups (for example, via Google Groups). Banner advertising is also a good option, as are text links and placing ads for your website on other sites. Submit your website to various privately run link directories and search engines. These will generate traffic on a large scale and bring in targeted visitors.


Social Networking Sites as a Medium: Use social networking sites as a medium as much as you can. These websites generate high web traffic; you can use your creativity and start a group to promote your website. Essentially, these networking sites are used not so much to advertise as to give information about your website.

Link the website: You can also link your website with other websites, which is known as reciprocal linking. Use a link manager to stay on the safe side, as some people are not honest and may remove your link within a few days. Keep checking that your linking partners are properly reciprocating.

Email marketing: You can also rely on banner exchange programs, which don't cost much and can bring many visitors to your website. Try email marketing as well; it is one of the most cost-effective marketing methods, letting you send out information about special offers, products, and so on. But use this tool carefully: it can be annoying, people get frustrated when their inboxes are full, and your mail may go unread. Encourage your clients to write reviews for you; whether short or long, they give a good impression to new users and help them learn about the website.


This article is written by Amit Thapa, associated with India's largest web solutions agency, Techiezens Infosystems, situated in New Delhi. I have extensive experience in professional internet marketing, going back to 2007, and I am always looking to learn better internet marketing ideas. Web Designing Company Delhi offers affordable web design services, mobile website design, and Flash website design, and its ecommerce web design company offers affordable services across the world.

The 3 Most Important Aspects of SEO


As today's world heads towards an internet boom and no business can survive without an internet presence, SEO has come into play. Online marketing has become an essential part of business strategy, so one can say that SEO is now integral to businesses big and small. Many thought that after the first dot-com bubble burst, online marketing would be a forgotten story buried with time; however, things have changed, entrepreneurs have started taking it a lot more seriously, and in today's global world small businesses have been able to connect with their local customers while also reaching customers in other cities with the help of recommendations from their loyal fans. Web design Delhi provides the most affordable, high-quality design services.




Before starting with SEO or online marketing, make sure you look at the three most important aspects of SEO: identifying the keywords, on-page optimization, and link building. No matter what business you are starting, make sure you know what services you will offer and who your target audience is, and do enough market research before you start marketing the product. The keywords are as important as the website or the location of your office; keywords are what bring the customer. So make sure you know which keywords are related to your product and which are not.

Now that online marketing has developed so much, keyword stuffing is long-gone history, and spamming with keywords can now earn your website a bad reputation. So make sure you add appropriate keywords to your meta tags, as they can be helpful when someone is searching for your website. Although the meta description and meta keywords are not a popular source of traffic these days, they are still relevant and can help organize the site you want to market. Link building is another important aspect of SEO and should be taken seriously. Many have said that link building has lost ground to social media; however, there is still a lot left in link building, and earning a link back from a popular website can push your popularity to a higher level. Note, though, that getting a link back from an unrelated website will do you no good, so always try to get links from websites that are relevant to the theme or purpose of your site.


Tuesday 29 January 2013

Google Latest Updates - Is SEO Dead?

That’s just stupid, honestly. I’ve seen so many “Is SEO Dead” threads and articles popping up everywhere telling people that all hope is lost, and that’s just ridiculous. SEO will NEVER die, period!

You simply need to adjust your strategy a bit, write high-quality content and stick to white-hat SEO techniques if you’re doing any form of link building. That’s it, that’s all you need to do to “conquer” any update Google throws at you.

If you write something on your website, ask yourself the following question: "Is this what I would want to see on Google's first page if this is what I searched for?" If you can honestly answer "yes", then you should be fine. It's all about relevancy and user experience. If Google feels you're REALLY contributing to the web, they won't touch your rankings. But if you're doing anything sneaky to try to fool Google, or just trying to make a quick buck without providing sufficient value to your readers, whatever good rankings you have now certainly won't last long. People who say SEO is dead have no idea what they're talking about; they're quitters, wannabes, or most likely both.
  • Here's a breakdown of the two algorithm changes made by Google that you can inspect, fix, and master to avoid ever getting slapped again.
Farmer/Panda Updates: As stated before, the Panda update was implemented to remove article directories and thin-content sites (like minisites) from the SERPs. The de-indexing of many article directories led to a lot of sites losing backlinks on a major scale, which is what caused some sites' rankings to drop dramatically. More recent updates (Dec 2011 - Apr 2012) targeted private blog networks like SEO Link Vine (this one hit in Dec 2011, judging from personal experience), Build My Rank, and High PR Society, just to name a few. Panda was also said to target Web 2.0 sites, but this varies depending on the amount and quality of content posted on those sites.
  • How to Fix your Site that was Hit by Google Panda?
Easy, focus your link building efforts on social networks and don’t spam the living hell out of the internet with every flashy SEO tool you can get your hands on. Stick to white hat SEO and you’re good to go.

The Dreaded Penguin Updates: The Penguin update focused more on the on-page SEO of sites, penalizing sites that had too many stuffed keywords in every post or page. Sure, it also targeted anchor text backlinks and obvious footprints from backlinking, but is more focused towards your on-page content itself and not so much the off-page factors.
  • How to Fix your Site after being Penguin Slapped?
This is what I did, and it worked. Since I myself have been guilty of a bit of keyword stuffing here and there (especially in the older posts), I've had some keywords drop from page 1 into oblivion overnight. Here's what I did to fix my site after being hit by the Penguin updates.
Google Panda vs Penguin - How to Beat Google Penguin Updates
I edited each and every one of my posts that made me money and removed many duplicate keywords to get a keyword density of between 0.8% and 1.2%. My rankings are coming back in a big way; the screenshot of my Google Webmaster Tools account (on the left) shows the proof that this is the way to "beat Google Penguin". These rankings had disappeared but are making a big comeback now that I've edited my posts.
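As a minimal sketch of that calculation (the sample sentence and target phrase below are just placeholders), keyword density can be worked out like this in Python:

import re

def keyword_density(text, phrase):
    # Keyword density as a percentage: words belonging to the phrase / total words.
    words = re.findall(r"[a-z0-9']+", text.lower())
    if not words:
        return 0.0
    target = phrase.lower().split()
    n = len(target)
    hits = sum(1 for i in range(len(words) - n + 1) if words[i:i + n] == target)
    return 100.0 * hits * n / len(words)

sample = "Google Penguin hit my blog, so I rewrote the post about the Google Penguin update."
print(round(keyword_density(sample, "google penguin"), 2))

The 0.8% to 1.2% band is my own rule of thumb rather than an official Google threshold, so treat the output as a rough guide while editing.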

Another thing that triggers a visit from the wrath of the Penguin, is linking to unrelated sites. So delete any links pointing to outbound sites that are unrelated, because that will get you slapped faster than you can say “Penguin”.

That's it, that's all you need to do! I also suggest following Matt Cutts on Google Plus and keeping a close eye on what he has to say. Know that it will take some time to get your rankings back, but if you follow my advice they are sure to come back sooner rather than later. Know that SEO is NOT dead and never will be, so ignore anyone telling you otherwise.

You now have everything you need to know to recover from both Panda and Penguin updates, so get to work and focus on producing great content mixed with relevant, white hat link building techniques.

Latest Panda Refresh #24 Update on January 22nd

Despite it being just five days after extremely strong signs of a Google update, which Google denied, Google announced yesterday on Twitter that it has pushed out a new Panda refresh.



This makes it the 24th revision or refresh of the Panda update since it launched in February 2011. The update has a noticeable impact on 1.2% of English queries. The previous one, version 23, released over the holidays, had an impact of about 1.3%.

Honestly, the forums are not that noisy about this change. The January 17th change, which Google said was nothing, seemed like a much larger Google search update than this 1.2% Panda refresh they announced yesterday.



Thursday 20 December 2012

Facebook Video Ads on News Feed to be Launched Soon

Facebook Video Ad
According to a report citing ad agencies, Facebook is going to launch video ads in the news feed in early 2013. The story draws on industry executives who are aware that Facebook presents a unique marketing opportunity for businesses.

A news feed is a data format used for providing users with updated content. Facebook video ads will appear on the desktop website as well as in Facebook's apps for mobile and tablet. Each video will be around 15 seconds long, whereas the standard length of a television commercial is 30 seconds, so advertisers will have to create new content for Facebook rather than simply reusing their TV spots. Some of the videos will have an autoplay feature, which starts the video automatically as soon as it becomes visible to the user; whether the audio would be activated automatically as well is still under discussion. Video ads in the news feed should attract more of the user's attention than the ads in the left and right columns of the screen, and Facebook is also working to ensure that the videos are visible to users in its mobile apps.

Facebook users could well react to autoplay video ads with serious outrage, and the potential backlash is a real concern. The report says the foremost worry is the autoplay function itself: it seems invasive, and autoplay ads are sometimes considered a form of fraud in the video-ad market because they are counted as viewed even if nobody is actually watching them. The other concern is that users grow tired of seeing so many Facebook ads from advertisers unrelated to their friends and family, even when advertisers tailor the ads based on information in a person's profile. Facebook presents a unique marketing opportunity for businesses through Facebook Business Pages, or company pages. Facebook has not yet disclosed how much it will charge for video advertisement; however, video ads command a higher price than other forms of internet advertising.

Tuesday 18 December 2012

Why Search Engine Optimization Is a Must for Every Business




I need to ask you a question… How do you search for information?
Chances are you (like most people) are going to type that search query into Google to find out, whether it is to find the restaurant you heard about, to see if those shoes are cheaper online, or to figure out if there is a gym in the new area you've moved to. If you search for answers to your questions online, nothing is stopping your audience from doing the same and seeing your competitors.
Search Engine Optimization is the process of helping your website rank higher so that it is visible to your consumers and grows your business. The work we do at Web firm is to help you grow your business online by driving visits to your website in order to get visitors to contact you, buy from you, or look up your address and visit you in person.
The great thing about SEO is that it is trackable; you can see where your money is going.  You know how many visitors come to your site via what keywords, domains and where they enter your site. Can you say that about your mail out catalogues or your ad in the Yellow pages?
At Web firm, we have the experience and skills to help you achieve your online marketing goals through our various channels such as Search Engine Optimization, Paid Search Advertising and Social Media Marketing. Web firm’s SEO packages start from just $199 (ex GST) a month. 
Call us on 011-45525910 / +91-9650676661 (Affordable Web Solutions Company Delhi) or email us at laxman@techiezens.com or parvesh@techiezens.com for more information about our Search Engine Optimization packages and what they can do for your business.

How Internet Marketing Trends Are Changing


In today's corporate world, not having a website can limit how far you are able to go in your chosen niche or industry. This is because, nowadays, more and more people are keen on doing their daily activities without having to leave the comfort of their home, something that is readily afforded by the internet. That being said, of late there has also been an increased demand for internet marketing professionals who can really get sites into the top spots of a search engine results page (SERP) on engines like Google. This demand is further pushed by the various changes to Google's algorithm for ranking sites.

So how do you ensure that you stay on top?


Lately, Google has been paying a little more attention to the content of your site and how well it answers the needs of readers, rather than just how many links are pointing to your site. Although the latter is still given a lot of weight, Google now also considers the quality of the sites those links come from. That said, one of the things you have to keep in mind is that the more natural your site looks, the better it will be for you. In line with this, make sure your content is well written and use your keywords sparingly, as a little too much might raise red flags with the search engine giant.
Make sure that you pay particular attention to your title as well as to your site's description. If possible, make use of your keywords here, but ensure that you are not spamming your own site. Keep in mind that the site description and the title are the first things a reader sees on a search engine results page, so make the most of them!
Lastly, you have to keep in mind that internet marketing is more than just search engine optimization. You can also do email marketing, referral marketing, and affiliate marketing. You have a lot of options to choose from, so if you are not really familiar with how search engine optimization works, you may be better off engaging in some other form of internet marketing.

Monday 3 December 2012

Guidelines for Managing (Not Provided) Data

It has been more than a year since Google announced its SSL enhancement. What follows are some thoughts and tips on how to handle the growing lack of data you receive from your organic visits.

A Little Background

Google announced on October 18, 2011 that it would be “enhancing [the] default search experience for signed-in users” by making SSL search the default search for signed-in users. This change encrypts your search queries and Google’s results page and means that visits from organic search listings no longer include the information about each individual query. Instead, Google started passing the term “(not provided)” as the referring keyword for those organic search visits.

Google mitigated the issue by allowing publishers to "…receive an aggregated list of the top 1,000 search queries that drove traffic to their site for each of the past 30 days through Google Webmaster Tools." Google didn't block paid search visit data "to enable advertisers to measure the effectiveness of their campaigns and to improve the ads and offers they present to you."

What Changed

SSL (Secure Sockets Layer) is a protocol that helps provide secure Internet communications for services like web browsing, email, instant messaging, and other data transfers. When you search over SSL, your search queries and search traffic are encrypted so that intermediary parties that might have access to your network can't easily see your results and search terms.
  • Search terms are encrypted and are excluded from the referrer headers that are part of the request sent to the result site you visit.
  • The landing site still receives information that you're coming from Google, but not the query that was issued (illustrated in the sketch after this list).
  • If you click on an ad on the results page, your browser will send an unencrypted referrer that includes your query to the advertiser’s site.
  • Google logs the same information about your search when you're using SSL search as it does for unencrypted search. SSL search doesn't reduce the data that Google receives and logs when you search, or change the listing of the items in your web history.
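To see what the points above mean in practice, here is a minimal Python sketch (an illustration only, with made-up referrer URLs) of how server-side analytics code might try to pull the search query out of a Google referrer; with SSL search the q parameter is simply absent, which is why the keyword shows up as "(not provided)":

from urllib.parse import urlparse, parse_qs

def referring_keyword(referrer):
    # Extract the search query from a Google referrer URL, if it is present.
    parsed = urlparse(referrer)
    if "google." not in parsed.netloc:
        return "(not a Google referral)"
    query = parse_qs(parsed.query).get("q", [""])[0]
    return query if query else "(not provided)"

# Pre-SSL referrer: the query string carries the search term.
print(referring_keyword("http://www.google.com/search?q=waterproof+jackets"))
# SSL search referrer: no q parameter, so the keyword is lost.
print(referring_keyword("https://www.google.com/url?sa=t&rct=j"))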
[Image: standard Google search vs. encrypted Google search]

The Current Status of (not provided)

A recent Optify study (disclaimer: I work for Optify) found that the term ”(not provided)” now accounts for almost 40 percent of referring traffic data from organic search and since the introduction of the SSL enhancement, recognized referring keywords have dropped by almost 50 percent.
This means that for 1 of every 2.5 visits from organic search, you're no longer able to see the referring keywords and that almost 50 percent of the keywords you were previously able to track and measure are no longer available to you.

[Charts: "(not provided)" as a percentage of total organic search, and recognized keywords from organic search (indexed)]

What it Means

The study shows that the "(not provided)" trend is only going to continue upward until the majority of organic referrer data (search terms) has disappeared completely. But what does this mean for you, the marketer? It means you will no longer be able to:
  • Truly measure the performance of your SEO efforts by connecting a search term with website metrics such as traffic, conversion rate, leads, engagement (page views and time on site), and revenue.
  • Use referrer data to customize and/or personalize your user experience. For example, if you used to offer related content based on referring keyword, or used referring keywords in your lead nurturing rules, you will no longer be able to do that.
  • Score visitors and leads based on their referring keyword. If you use a lead scoring system that uses referring keyword as one of the rules, this option will no longer be available to you.
But you will still be able to:
  • Measure your overall SEO performance and report on ROI since the visit source will still be organic search, but you won't be able to analyze what keywords contributed to that performance nor will you be able to report ROI on specific SEO initiatives.
  • Practice SEO and work on getting more traffic from organic search. This change doesn't prevent you from following any SEO best practices; it just means that it will be harder to measure their effectiveness.

5 Tips for Handling “(not provided)” Data

What can you do as (not provided) continues to eat away at your organic search data?
  1. Make the most out of the data you have. With the “(not provided)” rate approaching 40 percent, it means that you still have more than 60 percent of organic visits with referring keywords data. Make the most out of that data since it won't be there for much longer.
  2. Use Webmaster Tools. Google offers a lot of data about your website in Google Webmaster Tools. This includes the top 1,000 daily search queries and top 1,000 daily landing pages for the past 30 days, in addition to the impressions, clicks, click-through rate (CTR), and average position in search results for each query. You can compare this to the previous 30-day period as well as export it to a CSV file to import into a different system or analyze it using Excel (a minimal parsing sketch follows this list). For most small to mid-size B2B sites, this should be more than enough data to analyze in aggregate. See Linking Google Analytics to Webmaster Tools.
  3. For personalization, use other data. If you're using keyword data to personalize the user experience you offer on your website (related content, targeted landing pages) and off your website (follow up emails, lead nurturing), you will need to start using other data instead. Form submissions, page viewed, and campaign tagging could be used to replace keyword data in your personalization efforts.
  4. For SEO work, use proxies. The problem with the aggregate data is it doesn’t give you the ability to tie a referring keyword with the subsequent website behavior like page views and time on page. More importantly, actions like form submission (B2B) and clicks on page (B2C, e.g. shopping cart actions) can't be associated with a keyword making it impossible to report on ROI for specific keywords. This means that you will have to start using proxies such as keyword rank and ranked page to estimate single keyword performance.
  5. Use PPC data to estimate keyword performance. Since Google is still passing referrer data to advertisers for clicks on their sponsored results, you can use PPC to estimate the performance of keywords you're targeting or considering.
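Building on tips 2 and 5 above, here is a minimal Python sketch for analyzing a query export outside of Excel. The filename and the Query/Impressions/Clicks column names are assumptions based on a typical Webmaster Tools CSV export, so adjust them to match your file:

import csv
from collections import defaultdict

def to_int(value):
    # Exports sometimes contain values such as "1,234" or "<10"; parse defensively.
    digits = "".join(ch for ch in value if ch.isdigit())
    return int(digits) if digits else 0

def top_queries(csv_path, limit=20):
    # Aggregate impressions and clicks per query, then rank by clicks.
    totals = defaultdict(lambda: {"impressions": 0, "clicks": 0})
    with open(csv_path, newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle):
            query = row["Query"]  # assumed column name
            totals[query]["impressions"] += to_int(row["Impressions"])
            totals[query]["clicks"] += to_int(row["Clicks"])
    ranked = sorted(totals.items(), key=lambda item: item[1]["clicks"], reverse=True)
    for query, stats in ranked[:limit]:
        ctr = 100.0 * stats["clicks"] / stats["impressions"] if stats["impressions"] else 0.0
        print(query, stats["impressions"], stats["clicks"], "{:.1f}%".format(ctr))

top_queries("webmaster_tools_queries.csv")  # hypothetical export filename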

Thursday 29 November 2012

Get more Visitors by Quality Search Engine Optimization

Search engines don’t necessarily make it easy to achieve high ranking through user keyword searches, but it is worth the effort. When writing articles, if you thoughtfully (overdoing it will backfire) shift some of your focus to SEO (search engine optimization), you can raise your site’s traffic, as well as your rank for certain keyword searches. Hopefully, these tips will help and inspire you. Competition for top rankings will only get stronger over time. You can get a head start by slowly phasing out the use of frames in your site design.

Using frames makes your site noticeably slower to load and takes users more time to navigate. It also makes it more difficult for web crawlers and spiders to access the information contained within the frame itself.

Use unique content on your website to generate traffic. Posting information that you can find on several different websites only helps you blend in rather than stand out from the crowd. By choosing unique and original content for your website you are offering something no one else has to offer.

When you have determined which popular search terms to use, be sure to place them in your HTML title tag. You should do this because search engines give title tag content the most weight of any of the other elements found on the page. Also use these phrases in the titles, tags, and descriptions of the videos that you post on video sharing sites.

Make sure that you include good headings for all of your paragraphs. These headings should be relevant to the topics covered in the paragraph and should include strong keywords to make it easier for your readers to find what they are looking for.

Search engine spiders are not big fans of Flash-based websites. Flash is extremely hard for them to crawl, and using it can keep you from even being indexed. If you have to use Flash, make sure to include alternate text that describes what the Flash is showing so that the spiders can crawl it and index your site.

Find free tools to help you submit.


Several websites offer automated submission tools that help you submit to up to hundreds of article directory sites in a short amount of time. Search hard to find the free tools that do this; most sites charge a fee for the task, but if you are lucky you will find those that do not.

Use the alt tag (HTML code) to add keywords to your images. Search engines cannot (as of yet) actively analyse images and create keywords for them; they rely on you to supply keywords for their searches. More keywords mean more hits for your site and more search visibility. (A small audit sketch follows below.)

It is also important to improve your website's functionality for your customers. To do this, you can include a search box in the top right-hand corner of your page. This gives your visitors the ability to find exactly what they want with one click of the mouse.
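As a quick way to act on the alt tag advice above, here is a small Python sketch (assuming the requests and beautifulsoup4 packages are installed, and using a placeholder URL) that lists the images on a page that are missing alt text:

import requests
from bs4 import BeautifulSoup

def images_missing_alt(url):
    # Return the src of every <img> on the page that has no (or empty) alt text.
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return [img.get("src", "(no src)")
            for img in soup.find_all("img")
            if not img.get("alt", "").strip()]

for src in images_missing_alt("http://www.example.com/"):
    print("Missing alt text:", src)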

Whenever it is possible, use your keywords in your URL. Search engines pick up on keywords that are placed in the URL. Instead of using numbers or random text in article URLs, try to use a CMS that will use real words. These should be the keywords that readers are most likely to search for.

Write website content that human beings can understand and enjoy. Keyword-injected nonsense just isn't going to work. Search engines are programmed to differentiate between actual sentences and strings of words; they know a paragraph shouldn't have the same sentence repeated over and over. If you fill your site with valuable content, the SERPs will reward you.

Be smart about where you place the keywords in your site. Make sure to place them in aspects of your site such as titles, URLs, content, and image names. Think about what terms your visitors would use to find your content and what they'd expect to see when they arrived.

Do not create a site with search engine optimization as your primary goal. SEO alone won't make your site money; the customers do. Build your site with the visitor in mind and ask yourself: "Is my site fun, enjoyable, or useful?" If you answered no to all of those, you will not see hits and clicks from interested people.

Plan your website so that the structure is clean and you avoid going too deeply into directories.

Every page you write for your website should be no more than three clicks away from the homepage. People, and search engines, like to find the information they are looking for quickly and easily.

Constantly adding fresh content, be it articles, reviews, or widgets, will ensure that people revisit your website as often as possible. If you provide them with something they really want to read, view, or use, they'll bookmark and share your site with others, as it will be a comprehensive cache of knowledge.

Avoid Flash as much as possible if you want to enhance your search engine optimization. Flash is not accessible to the algorithms search engines employ, rendering the content you create in it almost useless from a search engine optimization standpoint. Instead, use images and text menus, as these are easily detected and factored into your ranking on search results pages.

A few major web-based companies have combined to start a new website, Schema.org, to help with search engine optimization.

The site lists a common vocabulary. It shows webmasters and developers SEO terms and teaches them how to improve their ranking with the search engines; the goal of the site is to be a resource for site developers.

As you have seen, search engine optimization doesn't have to be difficult. An educated, common-sense approach goes a long way towards raising your site's visibility as well as its rank. Apply the advice you have discovered here and you will be sure to enjoy the rewards of higher traffic.

Tuesday 27 November 2012

Cheap Ecommerce Website Design Company in Delhi

Training Classes in Delhi with New Batches

Friday 23 November 2012

Common Technical SEO Problems and How to Solve Them

I love technical SEO (most of the time). However, it can be frustrating to come across the same site problems over and over again. In the years I've been doing SEO, I'm still surprised to see so many different websites suffering from the same issues.

This post outlines some of the most common problems I've encountered when doing site audits, along with some not-so-common ones at the end. Hopefully the solutions will help you when you come across these issues, because chances are that you will at some point!

1. Uppercase vs Lowercase URLs

From my experience, this problem is most common on websites that use .NET. The problem stems from the fact that the server is configured to respond to URLs with uppercase letters and not to redirect or rewrite to the lowercase version.  
I will admit that recently, this problem hasn't been as common as it was because generally, the search engines have gotten much better at choosing the canonical version and ignoring the duplicates. However, I've seen too many instances of search engines not always doing this properly, which means that you should make it explicit and not rely on the search engines to figure it out for themselves.
How to solve:
There is a URL rewrite module which can help solve this problem on IIS 7 servers. The tool has a nice option within the interface that allows you to enforce lowercase URLs. If you do this, a rule will be added to the web config file which will solve the problem.
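The IIS URL Rewrite module is the right fix on a .NET stack. Purely to illustrate the same logic elsewhere, here is a hedged Python/Flask sketch, not part of the IIS solution above, that 301-redirects any request containing uppercase letters to its lowercase equivalent:

from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def enforce_lowercase():
    # 301-redirect any path containing uppercase letters to the lowercase version.
    path = request.path
    if path != path.lower():
        query = "?" + request.query_string.decode() if request.query_string else ""
        return redirect(path.lower() + query, code=301)

@app.route("/<path:page>")
def serve(page):
    return "Serving " + page

if __name__ == "__main__":
    app.run()

Whatever the platform, the principle is the same: respond with a single 301 to the canonical lowercase URL rather than serving the same content at two addresses.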
More resources for solutions:

2.  Multiple versions of the homepage

Again, this is a problem I've encountered more with .NET websites, but it can happen quite easily on other platforms. If I start a site audit on a site which I know is .NET, I will almost immediately go and check if this page exists:
www.example.com/default.aspx
The verdict? It usually does! This is a duplicate of the homepage that the search engines can usually find via navigation or XML sitemaps.
Other platforms can also generate URLs like this:
www.example.com/index.html
www.example.com/home
I won't get into the minor details of how these pages are generated because the solution is quite simple. Again, modern search engines can deal with this problem, but it is still best practice to remove the issue in the first place and make it clear.
How to solve:
Finding these pages can be a bit tricky as different platforms can generate different URL structures, so the solution can be a bit of a guessing game. Instead, do a crawl of your site, export the crawl into a CSV, filter by the META title column, and search for the homepage title. You'll easily be able to find duplicates of your homepage.
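As a supplement to the crawl-and-filter approach, a quick script can also check whether the usual duplicate homepage paths answer with a 200 instead of redirecting. This Python sketch assumes the requests package and uses example.com as a placeholder:

import requests

CANDIDATES = ["/default.aspx", "/index.html", "/home"]

def check_homepage_duplicates(root):
    # Report the status code returned by each common duplicate-homepage path.
    for path in CANDIDATES:
        status = requests.get(root.rstrip("/") + path, allow_redirects=False, timeout=10).status_code
        if status == 200:
            note = "potential duplicate of the homepage"
        elif status in (301, 302):
            note = "redirects (check that it is a 301)"
        else:
            note = ""
        print(path, status, note)

check_homepage_duplicates("http://www.example.com")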
I always prefer to solve this problem by adding a 301 redirect to the duplicate version of the page which points to the correct version. You can also solve the issue by using the rel=canonical tag, but I stand by a 301 redirect in most cases.
Another solution is to conduct a site crawl using a tool like Screaming Frog to find internal links pointing to the duplicate page. You can then go in and edit the duplicate pages so they point directly to the correct URL, rather than having internal links going via a 301 and losing a bit of link equity.
Additional tip - you can usually decide if this is actually a problem by looking at the Google cache of each URL. If Google hasn't figured out the duplicate URLs are the same, you will often see different PageRank levels as well as different cache dates.
More resources for solutions:

3. Query parameters added to the end of URLs

This problem tends to come up most often on ecommerce websites that are database driven. It can occur on any site, but the problem tends to be bigger on ecommerce websites, as there are often loads of product attributes and filtering options such as colour, size, etc. Here is an example from Go Outdoors (not a client):
In this case, the URLs users click on are relatively friendly in terms of SEO, but quite often you can end up with URLs such as this:
www.example.com/product-category?colour=12
This example would filter the product category by a certain colour. Filtering in this capacity is good for users but may not be great for search, especially if customers do not search for the specific type of product using colour. If this is the case, this URL is not a great landing page to target with certain keywords.
Another possible issue that has a tendency to use up TONS of crawl budget is when said parameters are combined together. To make things worse, sometimes the parameters can be combined in different orders but will return the same content. For example:
www.example.com/product-category?colour=12&size=5
www.example.com/product-category?size=5&colour=12
Both of these URLs would return the same content but because the paths are different, the pages could be interpreted as duplicate content.
I worked on a client website a couple of years back who had this issue. We worked out that with all the filtering options they had, there were over a BILLION URLs that could be crawled by Google. This number was off the charts when you consider that there were only about 20,000 products offered.
Remember, Google does allocate crawl budget based on your PageRank. You need to ensure that this budget is being used in the most efficient way possible.
How to solve:
Before going further, I want to address another common, related problem: the URLs may not be SEO friendly because they are not database driven.  This isn't the issue I'm concerned about in this particular scenario as I'm more concerned about wasted crawl budget and having pages indexed which do not need to be, but it is still relevant.
The first place to start is addressing which pages you want to allow Google to crawl and index. This decision should be driven by your keyword research, and you need to cross reference all database attributes with your core target keywords. Let's continue with the theme from Go Outdoors for our example:
Here are our core keywords:
  • Waterproof jackets
  • Hiking boots
  • Women's walking trousers
On an eCommerce website, each of these products will have attributes associated with them which will be part of the database. Some common examples include:
  • Size (i.e. Large)
  • Colour (i.e. Black)
  • Price (i.e. £49.99)
  • Brand (i.e. North Face)
Your job is to find out which of these attributes are part of the keywords used to find the products. You also need to determine what combination (if any) of these attributes are used by your audience.
In doing so, you may find that there is a high search volume for keywords that include "North Face" + "waterproof jackets." This means that you will want a landing page for "North Face waterproof jackets" to be crawlable and indexable. You may also want to make sure that the database attribute has an SEO friendly URL, so rather than "waterproof-jackets/?brand=5" you will choose "waterproof-jackets/north-face/." You also want to make sure that these URLs are part of the navigation structure of your website to ensure a good flow of PageRank so that users can find these pages easily.
On the other hand, you may find that there is not much search volume for keywords that combine "North Face" with "Black" (for example, "black North Face jackets"). This means that you probably do not want the page with these two attributes to be crawlable and indexable.
Once you have a clear picture of which attributes you want indexed and which you don't, it is time for the next step, which is dependent on whether the URLs are already indexed or not.
If the URLs are not already indexed, the simplest step to take is to add the URL structure to your robots.txt file. You may need to play around with some Regex to achieve this. Make sure you test your regex properly so you don't block anything by accident. Also, be sure to use the Fetch as Google feature in Webmaster Tools. It's important to note that if the URLs are already indexed, adding them to your robots.txt file will NOT get them out of the index.
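Before deploying a new pattern, it helps to test it against real URLs. The Python sketch below is a rough approximation of Google's wildcard matching ('*' as a wildcard, '$' as an end anchor) using a hypothetical rule and sample paths; always confirm the finished file with the robots.txt tester in Webmaster Tools:

import re

def robots_pattern_blocks(pattern, path):
    # Roughly approximate Google's robots.txt matching: '*' is a wildcard, '$' anchors the end.
    regex = "^" + re.escape(pattern).replace(r"\*", ".*").replace(r"\$", "$")
    return re.match(regex, path) is not None

rule = "/*colour="  # hypothetical rule meant to block colour-filtered URLs
tests = [
    "/product-category?colour=12",          # should be blocked
    "/product-category?size=5&colour=12",   # should be blocked
    "/product-category/north-face/",        # should stay crawlable
]
for path in tests:
    print(path, "-> blocked" if robots_pattern_blocks(rule, path) else "-> allowed")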
If the URLs are indexed, I'm afraid you need to use a plaster to fix the problem: the rel=canonical tag. In many cases, you are not fortunate enough to work on a website when it is being developed. The result is that you may inherit a situation like the one above and not be able to fix the core problem. In cases such as this, the rel=canonical tag serves as a plaster put over the issue with the hope that you can fix it properly later. You'll want to add the rel=canonical tag to the URLs you do not want indexed and point to the most relevant URL which you do want indexed.
More resources for solutions:

4. Soft 404 errors 

This happens more often than you'd expect. A user will not notice anything different, but search engine crawlers sure do.  
A soft 404 is a page that looks like a 404 but returns a HTTP status code 200. In this instance, the user sees some text along the lines of "Sorry the page you requested cannot be found." But behind the scenes, a code 200 is telling search engines that the page is working correctly. This disconnect can cause problems with pages being crawled and indexed when you do not want them to be.
A soft 404 also means you cannot spot real broken pages and identify areas of your website where users are receiving a bad experience. From a link building perspective (I had to mention it somewhere!), neither solution is a good option. You may have incoming links to broken URLs, but the links will be hard to track down and redirect to the correct page.
How to solve:
Fortunately, this is a relatively simple fix for a developer, who can set the page to return a 404 status code instead of a 200. Whilst you're there, you can have some fun and make a cool 404 page for your users' enjoyment. Here are some examples of awesome 404 pages, and I have to point to Distilled's own page here :)
To find soft 404s, you can use the feature in Google Webmaster Tools which will tell you about the ones Google has detected:
You can also perform a manual check by going to a broken URL on your site (such as www.example.com/5435fdfdfd) and seeing what status code you get. A tool I really like for checking the status code is Web Sniffer, or you can use the Ayima tool if you use Google Chrome.
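The manual check can be scripted too. This Python sketch (requests assumed, example.com as a placeholder) requests a URL that should not exist and reports the status code; a 200 response on a "page not found" screen is the signature of a soft 404:

import requests

def check_soft_404(root):
    # Request a URL that should not exist; a 200 response suggests a soft 404.
    url = root.rstrip("/") + "/this-page-should-not-exist-5435fdfdfd"
    status = requests.get(url, allow_redirects=True, timeout=10).status_code
    if status == 200:
        print("Possible soft 404:", url, "returned 200")
    else:
        print(url, "returned", status, "(a plain 404 here is what you want)")

check_soft_404("http://www.example.com")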
More resources for solutions:

5. 302 redirects instead of 301 redirects

Again, this is an easy redirect for developers to get wrong because, from a user's perspective, they can't tell the difference. However, the search engines treat these redirects very differently. Just to recap, a 301 redirect is permanent and the search engines will treat it as such; they'll pass link equity across to the new page. A 302 redirect is a temporary redirect and the search engines will not pass link equity because they expect the original page to come back at some point.
How to solve:
To find 302 redirected URLs, I recommend using a deep crawler such as Screaming Frog or the IIS SEO Toolkit. You can then filter by 302s and check to see if they should really be 302s, or if they should be 301s instead.
To fix the problem, you will need to ask your developers to change the rule so that a 301 redirect is used rather than a 302 redirect.
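If you only need to spot-check a handful of URLs rather than run a full crawl, a short script can print each hop in the redirect chain so you can see where a 302 is being used instead of a 301. A Python sketch, with requests assumed and a placeholder URL:

import requests

def redirect_chain(url):
    # Print every redirect hop and its status code (301 = permanent, 302 = temporary).
    resp = requests.get(url, allow_redirects=True, timeout=10)
    for hop in resp.history:
        kind = {301: "permanent", 302: "temporary"}.get(hop.status_code, "other")
        print(hop.status_code, "(" + kind + "):", hop.url, "->", hop.headers.get("Location"))
    print("Final:", resp.status_code, resp.url)

redirect_chain("http://www.example.com/old-page")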
More resources for solutions:

6. Broken/Outdated sitemaps

Whilst not essential, XML sitemaps are very useful to the search engines to make sure they can find all URLs that you care about. They can give the search engines a nudge in the right direction. Unfortunately, some XML sitemaps are generated one-time-only and quickly become outdated, causing them to contain broken links and not contain new URLs.  
Ideally, your XML sitemaps should be updated regularly so that broken URLs are removed and new URLs are added. This is more important if you're a large website that adds new pages all the time. Bing has also said that they have a threshold for "dirt" in a sitemap and if the threshold is hit, they will not trust it as much.
How to solve:
First, you should do an audit of your current sitemap to find broken links. This great tool from Mike King can do the job.
Second, you should speak to your developers about making your XML sitemap dynamic so that it updates regularly. Depending on your resources, this could be once a day, once a week, or once a month. There will be some development time required here, but it will save you (and them) plenty of time in the long run.
An extra tip here: you can experiment and create sitemaps which only contain new products and have these particular sitemaps update more regularly than your standard sitemaps. You could also do a bit of extra-lifting if you have dev resources to create a sitemap which only contains URLs which are not indexed.
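If you would rather script the sitemap audit than rely on a third-party tool, a minimal Python sketch (requests assumed, placeholder sitemap URL) can fetch the sitemap and report every listed URL that no longer returns a 200:

import requests
import xml.etree.ElementTree as ET

NAMESPACE = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def audit_sitemap(sitemap_url):
    # Fetch an XML sitemap and report every listed URL that does not return a 200.
    root = ET.fromstring(requests.get(sitemap_url, timeout=10).content)
    for loc in root.findall(".//sm:loc", NAMESPACE):
        url = loc.text.strip()
        # Some servers reject HEAD requests; switch to requests.get if that happens.
        status = requests.head(url, allow_redirects=True, timeout=10).status_code
        if status != 200:
            print(status, url)

audit_sitemap("http://www.example.com/sitemap.xml")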
More resources for solutions:

A few uncommon technical problems

I want to include a few problems that are not common and can actually be tricky to spot. The issues I'll share have all been seen recently on my client projects.

7. Ordering your robots.txt file wrong

I came across an example of this very recently, which led to a number of pages being crawled and indexed which were blocked in robots.txt.
The reason that the URLs in this case were crawled was that the commands within the robots.txt file were wrong. Individually the commands were correct, but they didn't work together correctly.
Google explicitly says this in its guidelines, but I have to be honest, I hadn't really come across this problem before, so it was a bit of a surprise.
How to solve:
Use your robots commands carefully and if you have separate commands for Googlebot, make sure you also tell Googlebot what other commands to follow - even if they have already been mentioned in the catchall command. Make use of the testing feature in Google Webmaster Tools that allows you to test how Google will react to your robots.txt file.
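To make the precedence rule concrete, here is a small Python sketch built around a hypothetical robots.txt: the catch-all disallow looks fine on its own, but Googlebot only follows the most specific group that matches it, so the /private/ rule has to be repeated inside the Googlebot group. The group-picking function is a simplified model, not a full robots.txt parser:

ROBOTS_TXT = """
User-agent: *
Disallow: /private/

User-agent: Googlebot
Disallow: /landing-pages/
"""

FIXED_ROBOTS_TXT = """
User-agent: *
Disallow: /private/

User-agent: Googlebot
Disallow: /private/
Disallow: /landing-pages/
"""

def rules_for(robots_txt, user_agent):
    # Return the Disallow rules from the most specific matching user-agent group
    # (a named match beats the '*' catch-all), mimicking how Googlebot picks a group.
    groups, current = {}, None
    for line in robots_txt.splitlines():
        line = line.split("#")[0].strip()
        if line.lower().startswith("user-agent:"):
            current = line.split(":", 1)[1].strip()
            groups.setdefault(current, [])
        elif line.lower().startswith("disallow:") and current is not None:
            groups[current].append(line.split(":", 1)[1].strip())
    for agent in groups:
        if agent != "*" and agent.lower() in user_agent.lower():
            return groups[agent]
    return groups.get("*", [])

print(rules_for(ROBOTS_TXT, "Googlebot"))        # ['/landing-pages/'] -- /private/ is NOT blocked
print(rules_for(FIXED_ROBOTS_TXT, "Googlebot"))  # ['/private/', '/landing-pages/']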

8.  Invisible character in robots.txt

I recently did a technical audit for one of my clients and noticed a warning in Google Webmaster Tools stating that "Syntax was not understood" on one of the lines. When I viewed the file and tested it, everything looked fine. I showed the issue to Tom Anthony, who fetched the file via the command line and diagnosed the problem: an invisible character had somehow found its way into the file.
I managed to look rather silly at this point by re-opening the file and looking for it!
How to solve:
The fix is quite simple. Simply rewrite the robots.txt file and run it through the command line again to re-check. If you're unfamiliar with the command line, check out this post by Craig Bradford over at Distilled.

9.  Google crawling base64 URLs

This problem was a very interesting one we recently came across, and another one that Tom spotted. One of our clients saw a massive increase in the number of 404 errors being reported in Webmaster Tools. We went in to take a look and found that nearly all of the errors were being generated by URLs in this format:
/aWYgeW91IGhhdmUgZGVjb2RlZA0KdGhpcyB5b3Ugc2hvdWxkIGRlZmluaXRlbHkNCmdldCBhIGxpZmU=/
Webmaster Tools will tell you where these 404s are linked from, so we went to the page to find out how this URL was being generated. As hard as we tried, we couldn't find it. After lots of digging, we were able to see that these were authentication tokens generated by Ruby on Rails to try to prevent cross-site request forgery. There were a few in the code of the page, and Google was trying to crawl them!
To add to the problem, the authentication tokens are all generated on the fly and are unique, which is why we couldn't find the ones that Google was telling us about.
How to solve:
In this case, we were quite lucky because we were able to add some Regex to the robots.txt file which told Google to stop crawling these URLs. It took a bit of time for Webmaster Tools to settle down, but eventually everything was calm.

10. Misconfigured servers

This section was actually written by Tom, who worked on this particular client project. We encountered a problem with a website's main landing/login page not ranking. The page had been ranking, then at some point it dropped out, and the client was at a loss. The pages all looked fine, loaded fine, and didn't seem to be doing any cloaking as far as we could see.

After lots of investigation and digging, it turned out that there was a subtle problem with the HTTP headers coming from their server, caused by a misconfiguration of the server software.

Normally an 'Accept' header is sent by a client (your browser) to state which file types it understands, and only very rarely does this modify what the server does. When the server sends a file, it always sends a 'Content-Type' header to specify whether the file is HTML, PDF, JPEG, or something else.

Their server (they're using Nginx) was returning a 'Content-Type' that mirrored the first file type found in the client's 'Accept' header. If you sent an Accept header that started with 'text/html', then that is what the server would send back as the Content-Type header. This is peculiar behaviour, but it wasn't being noticed because browsers almost always send 'text/html' at the start of their Accept header.

However, Googlebot sends "Accept: */*" when it is crawling (meaning it accepts anything).
(See: http://webcache.googleusercontent.com/search?sourceid=chrome&ie=UTF-8&q=cache:http://www.ericgiguere.com/tools/http-header-viewer.html)
I found that if I sent an 'Accept: */*' header, the server fell over: '*/*' is not a valid content type, so the server would crumble and send an error response.

Changing your browser's user agent to Googlebot does not change these HTTP headers, and tools such as web-sniffer also don't send the same HTTP headers as Googlebot, so you would never notice this issue with them!
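Because browsers and most header-checking tools will not reproduce Googlebot's headers, one workaround is to script the request yourself. This Python sketch (requests assumed, placeholder URL) sends the wildcard Accept header the way Googlebot does and compares the response with a browser-style request:

import requests

URL = "http://www.example.com/landing-page"  # hypothetical page to test

def fetch(accept):
    # Request the page with a specific Accept header and report what comes back.
    headers = {"Accept": accept, "User-Agent": "header-test-script"}
    resp = requests.get(URL, headers=headers, timeout=10)
    return resp.status_code, resp.headers.get("Content-Type")

# A browser-style Accept header versus the wildcard header Googlebot sends.
for accept in ("text/html,application/xhtml+xml", "*/*"):
    status, content_type = fetch(accept)
    print("Accept:", accept, "-> status", status, "Content-Type:", content_type)

If the two requests come back with different status codes or content types, you have found the same class of misconfiguration described above.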

Within a few days of fixing the issue, the pages were re-indexed and the client saw a spike in revenue.

Why did our PageRank go down?

Recently a newspaper contacted me. Their PageRank had dropped from 7 to 3, and they wanted to know why. They genuinely didn't seem to know what the issue was, so I took some time to write them an in-depth reply. Part of the motivation for my blog is to provide information in more scalable ways, so I figured I'd strip any identifying information from my email and post it. Here's what I wrote:

Hi, the usual reason why a site’s PageRank drops by 30-50% like this is because the site violates our quality guidelines by selling links that pass PageRank. Here’s our documentation on that: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=66356 and here’s a video I made about this common case: http://www.youtube.com/watch?v=kFcJ7PaLoMw (it’s about 1:30 into the video). http://www.nytimes.com/2012/08/26/business/book-reviewers-for-hire-meet-a-demand-for-online-raves.html?_r=1&pagewanted=all is a good recent article about paid reviews. In Google’s world, we take paid links that pass PageRank as seriously as Amazon would take paid reviews without disclosure or as your newspaper would treat a reporter who was paid to link to a website in an article without disclosing the payment.

In particular, earlier this year on [website] we saw links labeled as sponsored that passed PageRank, such as a link like [example link]. That's a clear violation of Google's quality guidelines, and it's the reason that [website]'s PageRank, as well as our trust in the website, has declined.

In fact, we received an outside spam report about your site. The spam report passed on an email from a link seller offering to sell links on multiple pages of [website] based on their PageRank. Some pages mentioned in that email continue to have unusual links to this day. For example, [example url] has a section labeled "PARTNER LINKS" which links to [linkbuyer].

So my advice would be to investigate how paid links that pass PageRank ended up on [website]: who put them there, are any still up, and to investigate whether someone at the [newspaper] received money to post paid links that pass PageRank without disclosing that payment, e.g. using ambiguous labeling such as “Partner links.” That’s definitely where I would dig.

After that investigation is complete and any paid links that pass PageRank are removed, the site’s webmaster can do a reconsideration request using Google’s free webmaster tools console at google.com/webmasters. I would include as much detail as you can about what you found out about the paid links. That will help us assess how things look going forward.