Monday, December 15, 2008

Google Updates December 2008

Google Updates December 2008

While searching, I noticed that Google is showing results in a different way. It appears to be testing updates based on user feedback: if an irrelevant result appears, users can remove it and even leave a comment about it.

So now, if a webmaster tries some unethical SEO, it may well not help them anymore.
Good news for white hat SEOs :) and a worrying prospect for black hat webmasters :( .







Looking forward to your feedback.









Tuesday, November 11, 2008

Sifymail WIYI powered Google

Sify launches Sifymail WIYI powered by Google

Sify Technologies has launched a new mail service called Sifymail WIYI (World In Your Inbox), powered by Google. Sify had earlier signed a deal with Google under which Google Apps suite of communication and collaboration tools, including email, chat and online documents, would power Sify mail and chat, as well as other applications using the Google Apps platform.

According to Sify, Sifymail WIYI will offer its users a 7 GB mailbox, faster downloads and effective spam filters. It will also provide simultaneous chat, advanced search, Sify Documents, Sify Spread Sheets and Sify Calendar, all accessible from the inbox. A wide range of widgets and customizable windows will let users personalize their inbox so that it virtually becomes their home page.

Venkata Rao Mallineni, head of portals and consumer marketing, Sify Technologies, has said, “With internet applications and tools evolving rapidly, we felt the need to add value to Sify users by providing a one stop platform for all their Internet needs.”

“The new Sifymail WIYI is a perfect blend of email and applications that will not just enhance the user’s experience with us, but benefit them functionally. For the first time ever, all this functionality is enabled from their inbox, making it their home page on the Internet,” Venkata Rao Mallineni has added.

Source: http://www.alootechie.com/content/sify-launches-sifymail-wiyi-powered-google

Thursday, October 30, 2008

Google Image Search October 2008 Update

Google Image Search October 2008 Update

A WebmasterWorld thread has users noticing changes over at Google Image Search. The changes may be a filter or a full-blown image index update; it's hard to tell at the moment.

The last Google Image update, in September, seemed to be an image filter update. But this one seems to fall outside the scope of what we would classify as a filter.

WebmasterWorld administrator, tedster, observed:

I started seeing some really wrong captions on some images, where the algo is pulling the caption from on-page anchor text. How can on-page anchor text be a candidate for naming for an image that is also on the page?

Senior member zeus, who reported the image update, has seen tons of images drop out of Google Image Search, and he has also spotted a possible hotlink image bug. The hotlink image bug happens when a third-party site links to a specific image and Google attributes that image to the third-party domain.

Forum discussion at WebmasterWorld.


Source: http://www.seroundtable.com/archives/018496.html

Friday, October 10, 2008

Microsoft pays to search

Microsoft pays to search

OK, did you see this? Microsoft is going to pay you to use its search engine. Until the end of the year, you'll get points every time you use Microsoft's Live.com service. Pile up enough of them, and you can cash them in for free music downloads, gadgets, even frequent-flier miles. (Limited to the first 1 million people who sign up. Works with Internet Explorer for Windows only.)

I’m sorry. I mean, I know Microsoft has deep pockets and all. But has it really come to this? If Microsoft doesn’t feel as though it can compete with Google on the merits of its product, maybe, instead of trying to bribe you, the company should pour that money into making Live.com better…

Source: http://payperclickoffer.com/pay-per-click-advertising-microsoft-pays-you-to-search/

Thursday, September 18, 2008

GEO TARGETING AND SEO

GEO TARGETING AND SEO

Providing geographic metadata in Web sites and syndicated feeds can provide users with the ability to search easily for services and articles based on location and proximity.

Geolocation by IP address enables stat-counter applications and Web sites to determine users' locations automatically in order to provide specific location-based services to users and members of an on-line community. Such data also helps in combating internet fraud, and the IP address is one of many details recorded by our payment processors, 2Checkout and PayPal. Recording IP addresses also helps Newswriter target expired-domain traffic campaigns according to the client's customer audience. In this article, we present various methods by which Web sites can provide their geographic locations in static pages and syndicated feeds, in the form of meta information or geotags. Put another way, geolocation by IP address is the technique a Web site uses to determine where users are located; geotagging is the technique users employ to find out where a Web site is located. In addition, we will look at the SEO value of such services, including the results of our recent study into ICBM tags.
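
As a rough illustration of what IP-based geolocation looks like in practice, here is a minimal Python sketch. It assumes the MaxMind geoip2 library and a local GeoLite2 City database file; the article does not name a particular provider, so treat both as placeholders for whatever service your stat counter or payment processor actually uses.

import geoip2.database

def locate(ip_address, db_path="GeoLite2-City.mmdb"):
    """Return a rough (country, city, latitude, longitude) tuple for an IP address."""
    reader = geoip2.database.Reader(db_path)   # path to a GeoLite2 City database (assumed)
    try:
        response = reader.city(ip_address)
        return (response.country.iso_code,
                response.city.name,
                response.location.latitude,
                response.location.longitude)
    finally:
        reader.close()

print(locate("8.8.8.8"))   # output depends entirely on the database you load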

Geotags typically locate the Web site's principal location on the Earth. This information can contain a number of elements. Some geotagging contains latitude and longitude information, enabling a webmaster to pinpoint an exact location. Additional tags can name cities, regions and countries for more general locations. Web services, applications and users then can query this information to obtain directions (how to get from here to there), locality (what's near there) or context (where was this article written). Geotags differ from a simple address in that they usually are encoded in metadata and are not visible as part of the Web page. In the case of Newswriter, our geo targeting is hardcoded on the server and not visible. By following a standard, other services easily and reliably can find these geotags. Various semantic Web projects still are solidifying geospatial tagging standards, but several techniques already have become common and supported. This article presents these current techniques.

Why Geotag?

Providing a geographic location is beneficial particularly for retail and service businesses, tourist attractions and entertainment venues. If you want to locate a local veterinary center or a hotel near a particular landmark, this can be achieved through geotagging. Geographic link directories, such as A2B and Multimap, can index these services by location and allow users to search geographically as well as by service type. Currently, many of these services limit users in their selection of available services. But it would be possible to allow for more complex queries, such as searching for "Thai restaurants within 2 miles of Central London". Or, when using automatic geolocation, one could ask for "directions from my current location to the nearest theater".
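
To make the "within 2 miles" idea concrete, here is a small Python sketch of the kind of proximity filter a geographic directory could run over geotagged sites. It uses the standard haversine (great-circle) formula; the site names and coordinates are made up purely for illustration.

from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3958.8

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two latitude/longitude points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

# "Thai restaurants within 2 miles of Central London" style filter (made-up data)
central_london = (51.5074, -0.1278)
restaurants = {
    "thai-near.example.com": (51.5120, -0.1300),   # well inside 2 miles
    "thai-far.example.com": (51.5500, -0.2000),    # several miles away
}
nearby = {site: round(haversine_miles(*central_london, *coords), 2)
          for site, coords in restaurants.items()
          if haversine_miles(*central_london, *coords) <= 2}
print(nearby)   # only the nearby site should remain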

Current location-based services rely on the Web site administrator registering with an on-line index and specifying its location. Some of these services charge a fee, and many are not used commonly, nor are they cross-referenced. Google runs Google Maps, a free engine that allows users to search for location-based services using complex queries such as the examples above. Google Maps is an excellent example of how providing geographic information on a Web site can greatly enhance its visibility and usefulness to potential customers and users.

Geographic metadata also is useful for bloggers and photographers. Travel writers and reviewers can give context to their articles by supplying specific geographic information about where they are writing from or where the business they are reviewing is located. Are you travelling to Thailand? A geo-targeted Thailand article can put content in front of those who are looking for it.

By embedding a geographic location in the metadata of the Web site, applications and Web-based services quickly and reliably can determine the site's location relative to search criteria. Using metadata prevents the confusion of an automated search bot having to determine the location from the site's text. Geotagging has been around for some time. Yet only a minority of people know of its use and fewer still utilise the benefits of geourls. The rest of this article discusses the techniques used for embedding geographic information in your Web site or syndicated feed.

Geotagging a Web Site

For a Web site, several means of geotagging are available. In this article I will focus on meta tags for geo targeting. Below is a copy of the meta tags used on a domain of our parent company:

Blogger does not allow some HTML tags, so treat <@ as < :)

<@meta name="ICBM" content="40.746980, -73.980547">
<@meta name="DC.title" content="Watch Live Football">
<@META NAME="geo.position" content="40.746990, -73.980537">
<@META NAME="geo.placename" CONTENT="New York">
<@META NAME="geo.region" CONTENT="USA">

The problem with ICBM is that the acronym originally stands for Intercontinental Ballistic Missile. The meaning of ICBM in terms of geo targeting is:

The form used to register a site with the Usenet mapping project, back before the day of pervasive Internet, included a blank for longitude and latitude, preferably to seconds-of-arc accuracy. This was actually used for generating geographically-correct maps of Usenet links on a plotter; however, it became traditional to refer to this as one's 'ICBM address' or 'missile address', and some people include it in their sig block with that name. (A real missile address would include target elevation.)

ICBM tags are limited to latitude and longitude and do not include other regional information, such as city or country. The syntax is as follows:

<@meta name='ICBM' content="latitude, longitude" />

This tag would be included in your Web page's head section.

Another means of embedding geographic metadata is through geo-structure tags. These geo-structure tags can include latitude and longitude information as well as regional information and an extra placename. The placename could contain the specific address of the person or business. Or, it could be useful for providing a location that may not have a specific point but covers a broader region, such as a city or district. The following example describes a location in New York:

<@META NAME="geo.position" content="40.746990, -73.980537">
<@META NAME="geo.placename" CONTENT="New York">
<@META NAME="geo.region" CONTENT="USA">

Geotagging an RSS Feed

Besides geotagging a Web site, it is possible to geotag the source of an RSS feed as well as the individual articles. By geotagging each article, your feed can provide entries from various locations and reach a varied audience. Then, these entries can be displayed on a map where users can read about locations that interest them. Alternatively, by geotagging the source of the feed, a directory or opml file could provide feeds based on user-selected locations.

An example tag looks something like this. Notice the addition of altitude:

<@rdf:RDF>
<@geo:Point>
<@geo:lat>55.701<@/geo:lat>
<@geo:long>12.552<@/geo:long>
<@geo:alt>52.4<@/geo:alt>
<@/geo:Point>
<@/rdf:RDF>

The ICBM standard discussed above also can be used in tagging an RSS feed. An XML namespace is used to specify the keywords of the file, and the tags are included either in the header or within the item tags. Here is an example below:

<@rss version="2.0" >
<@item>
<@title>M 3.7, Southern Alaska<@/title>
<@description>October 02, 2006 03:55:52 GMT<@/description>
<@link>http://earthquake.usgs.gov/recenteqsww/Quakes/ak00043775.htm<@/link>
<@icbm:latitude>60.4780<@/icbm:latitude>
<@icbm:longitude>-152.4355<@/icbm:longitude>
<@dc:subject>3<@/dc:subject>
<@dc:subject>pasthour<@/dc:subject>
<@/item>
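
For readers wondering how an aggregator might consume these tags, here is a minimal Python sketch that pulls latitude/longitude pairs out of a feed. It matches elements by their local name ("lat"/"long" or "latitude"/"longitude"), so it copes with either the geo style or the ICBM style shown above regardless of which namespace URI the feed declares. This is illustrative only, not any particular aggregator's code.

import xml.etree.ElementTree as ET

def extract_coordinates(xml_text):
    """Yield (latitude, longitude) pairs found anywhere in a geotagged feed."""
    lat = lon = None
    for element in ET.fromstring(xml_text).iter():
        local_name = element.tag.rsplit("}", 1)[-1]   # strip any "{namespace}" prefix
        if local_name in ("lat", "latitude"):
            lat = float(element.text)
        elif local_name in ("long", "longitude"):
            lon = float(element.text)
        if lat is not None and lon is not None:
            yield (lat, lon)
            lat = lon = None

sample = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                     xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#">
  <geo:Point>
    <geo:lat>55.701</geo:lat>
    <geo:long>12.552</geo:long>
  </geo:Point>
</rdf:RDF>"""
print(list(extract_coordinates(sample)))   # [(55.701, 12.552)]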

Finally, some Weblog services may prevent users from adding new tags to RSS feeds. In this case, it is acceptable for some sites and packages to embed the geographic information in existing tags, as shown below:

<@rss version="2.0" >
<@item>
<@title>M 3.7, Southern Alaska<@/title>
<@description>October 02, 2006 03:55:52 GMT<@/description>
<@link>http://earthquake.usgs.gov/recenteqsww/Quakes/ak00043775.htm<@/link>
<@icbm:latitude>60.4780<@/icbm:latitude>
<@icbm:longitude>-152.4355<@/icbm:longitude>
<@dc:subject>3<@/dc:subject>
<@dc:subject>pasthour<@/dc:subject>
<@/item>

Several Weblog packages already incorporate the ability to specify a geographic location within an entry as well as for the entire Weblog. This geographic information then can be included for users when reading the Weblog through their browsers or through their own aggregators. Each entry, when posted, is assigned either a default location or is given a new location.

What to Do with Geotags

Now that your Web site has been geotagged, what can you do to share this information with users and have new users find your site? A2B is the new incarnation of geourl.com. A2B allows Web site administrators to register their sites. From there, users can search for sites based on location or geographic proximity to another Web site. It may be interesting to find out what other sites and places can be found in your area.

A2B also provides a free public API that allows application and Web site developers to query the A2B database of locations. The A2B query does not return the actual location of the Web sites, however, merely their distances and directions (compass headings) from the queried location.
To find out the latitude and longitude or city and region of a Web site, the user can view the Web site's meta information. To illustrate this, we have written an extension to the Firefox browser that alerts users that geotags are available for the Web site currently being viewed. The extension also retrieves that information without the user having to look at the Web site's markup source.
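
The extension itself is not shown here, but a rough sketch of what any geotag reader does behind the scenes looks something like the following Python, which parses a page's markup and collects the ICBM and geo.* meta tags. It is purely illustrative and uses only the standard library.

from html.parser import HTMLParser

class GeoMetaParser(HTMLParser):
    """Collect ICBM and geo.* meta tags from a page's markup."""
    GEO_NAMES = {"icbm", "geo.position", "geo.placename", "geo.region"}

    def __init__(self):
        super().__init__()
        self.geotags = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attributes = dict(attrs)                       # attribute names arrive lowercased
        name = (attributes.get("name") or "").lower()
        if name in self.GEO_NAMES:
            self.geotags[name] = attributes.get("content")

page = """<html><head>
<meta name="ICBM" content="40.746980, -73.980547">
<META NAME="geo.placename" CONTENT="New York">
</head><body>...</body></html>"""

parser = GeoMetaParser()
parser.feed(page)
print(parser.geotags)   # {'icbm': '40.746980, -73.980547', 'geo.placename': 'New York'}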

Geourl can be used in a similar way although it can be difficult to use its small map to pinpoint and generate meta tags for your site.

Other applications of geotags include creating a Web page of closely related Web sites, similar to a Web ring, and displaying their locations on a map of the Earth or a specific region. A restaurant review Web page, for example, could display a map of its reviewing regions, and users could click on locations to read reviews of the restaurants located there. Furthermore, travelers could pull up Weblogs and travel information for the area they will be visiting. Hopefully, larger services similar to Google Local or Multimap will be developed that automatically collect and use this information to provide users with a large database of services.

Future of Geotags

Geotags currently are not employed widely, and only a small number of services support their use. Geotags have been around for some time, yet very few people are aware of them. However, many could benefit from better geographic knowledge of Web sites and on-line data. Applications could provide a central location to assist users in finding out about their own locations or intended travel destinations.

E-commerce is growing exponentially, with every company worth its salt jumping on the bandwagon. Imagine if your online purchases were chosen by location rather than by SERPs. In a previous article we discussed the importance search engines place on the content of domains when ranking them. Given the world of duplicate content this has created, we are looking at what could be next in the world of SEO. Could geotags be the next decider? Instead of ranking by age, importance, content or backlinks, will the internet as we know it become more homely, with surfers looking for sites in their immediate vicinity?

Whatever the future holds for geotagging, can you afford to ignore it as a webmaster?

GeoTagging and SEO

We have been aware of the use of geotagging for quite some time. However, we recently heard a claim from one of our business associates regarding its use as an SEO tool.

Our informant claimed that, by using geotagging to place a site within the vicinity of a good cluster of established domains, any new domain could be indexed and obtain a PR of 3 after one update. This PR update was based on no other link-building work - only the use of geotagging. When I first heard of this idea I laughed it off. Geotagging has been around for a while and, while it has its bonuses, Grand Master Google isn't going to pay that much attention. It wasn't until the idea was explained to me properly that I sat up and paid attention.

Google bases part of its algorithm on backlinks to a site. Google also looks for sites similar to or associated with a site. In the past this area would contain sites of the same category or authority sites in any given industry. With geotagging you are putting your site in the same area or association as a whole host of domains. In some geographic locations, such as the New York example mentioned earlier, there are around 200 geotagged domains per mile. By tagging a domain in the same region you are basically saying that your site is geographically associated with others in the area. Based on this argument, it was put to us that using geotargeting in any significant cluster would increase the PR and SERPs for any new domain. Being one of the largest internet marketing companies in the world, we had to put this to the test.

Take three domains, all purchased this year for subsidiary sites but put on hold for the time being:

http://www.adultbulktraffic.com/
http://www.bulkcasinotraffic.com/
http://www.buytrafficwholesale.com/

Add to each site a basic text page containing content related to the domain; if these sites are going to get anywhere they need some content. We added geotagging to two of the new sites, sticking them smack bang in the middle of a huge cluster of established, good-PR sites in London, England. The third domain was left alone - 0 backlinks, no geotargeting. We allowed the test to run for one Google PR update to see the results and the claims of this technique.

Three months later, I had actually forgotten about the test sites. I only remembered them this week whilst editing another site and stumbling across geotagging. Three months on, I wish I could say here is a secret SEO method which will get you a PR of 3 on any new domain with five minutes' work. However, the truth lies with my original scepticism. None of the three test domains is indexed in the search engines and all remain PR0. For the geotagged domains, some links have been found from Geourl to their location, but in the search engine world these have been deemed to carry no importance.

As regards SEO, geotagging is not going to have any significant effect on your domain. In fact, it is more than likely to have no effect at all. For internet marketing, if your site bases its product, customer base or prestige on a location, and you are registered with the correct directories to utilise geotagging, then it can be a new weapon in your internet marketing arsenal. As for me, I guess I had better get around to developing those three domains into new subsidiary sites of Newswriter. Then again, just by placing their URLs in this article on our PR6 site, they will likely be indexed. You win some, you lose some. Still trying to find that pot of gold at the end of the rainbow.

Source: http://www.newswriter.us/ShowAdminArticle-13.htm

Wednesday, September 3, 2008

Semantic Web Marketing

Semantic Web Marketing

Semantic Web, Semantic Marketing, Effective Web Marketing – SEO Tips

The Semantic Web is an evolving extension of the World Wide Web in which the semantics of information and services on the web is defined, making it possible for the web to understand and satisfy the requests of people and machines to use the web content.

The semantic web is a vision of information that is understandable by computers, so that they can perform more of the tedious work involved in finding, sharing and combining information on the web.

One promise of the Semantic Web is increasing the relevance of websites without visitor effort - visitors will get more of what they want and less of what they don't when they arrive at a website. Ultimately, the entire Web is going to get more relevant for each of us. Semantic Marketing enables your website to deliver more meaningful interactions with the majority of visitors. When you increase the relevance of your website, more visitors will stay longer and go deeper into your content. This results in more visitors becoming prospects, and ultimately, more customers.

Semantic marketing is a very effective form of web marketing. Semantic marketing makes a determination about each visitor upon arrival and displays relevant content, simplified navigation and/or supportive imagery. Today, we use pre-session, detectable attributes in the absence of a unified system for ontological matching.

Here are 7 possible missions for "semantic marketing":

  1. Marketing becomes the champion of generating the underlying data.

  2. Marketing views categorization, metadata, RDF graphs, relevant microformats, etc., as a new kind of market positioning and placement -- "semantic branding", if you will (see the sketch after this list).

  3. Marketing takes a much broader view of distribution and promotion of its semantic web data in search engines and vertical networks (SEO++), including the sponsorship or creation of new niche semantic networks.

  4. Marketing comes up with new ways to incentivize the conversion of semantic web interactions into real business objectives.

  5. Marketing will have a real challenge with tracking and attributing distributed data in the semantic web to measure its impact -- from multi-touch marketing to micro-touch marketing. Hard problem but entrepreneurial ingenuity will prevail.

  6. Marketing will want to leverage other people's data in their own value-add mash-ups (interesting "joint venture" semantic data partnerships), as well as for internal-only apps focused on market research and competitive intelligence.

  7. Marketing will need to be concerned with brand protection in the semantic web: quality control to watch for bad data, conflicting data, competitive misuse, etc.
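
As a small, purely illustrative example of "generating the underlying data" mentioned in point 2 above, here is a Python sketch that builds a tiny RDF graph describing a page. It assumes the rdflib library, and the product vocabulary namespace is a made-up placeholder; real deployments would use an agreed vocabulary.

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DC

SHOP = Namespace("http://www.example.com/vocab#")     # placeholder vocabulary, not a real standard
page = URIRef("http://www.example.com/widgets")

g = Graph()
g.add((page, DC.title, Literal("Acme blue widgets")))      # Dublin Core title for the page
g.add((page, SHOP.category, Literal("widgets")))           # product category as machine-readable data
g.add((page, SHOP.priceUSD, Literal("9.99")))              # price, again purely illustrative

print(g.serialize(format="turtle"))   # Turtle output that other services could consume
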
Source: http://importantseotips.blogspot.com/2008/08/semantic-web-marketing-seo-tips_25.html

Media Optimization vs Media Marketing

Social Media Optimization vs Social Media Marketing


Social Media Optimization (SMO) covers the things you can do 'on site' to your website, whereas Social Media Marketing (SMM) covers the things you do off-site. This is slightly different to SEO, where the work can be done both on site and off site. Confused yet?

SMO is a subset of overall SMM, just like SEO (Search Engine Optimization) is a subset of SEM (Search Engine Marketing). Instead of repeating what has already been perfectly said around the web, here are the short definitions:

Social Media Optimization

SMO refers to the process of refining a website (optimizing it) so that its awareness and content are easily spread through social media and online communities by users and visitors of the website. This can include anything done "on-page", such as improving the design and usability of the website so that it becomes more compelling to users, in an effort to help them spread it through social media sites. The simplest example of SMO is represented by all the "digg this" and "add to delicious" icons and links that are all over the web today.
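
As a rough sketch of how those share links get generated, here is a small Python example that builds submit URLs for a page. The Digg and del.icio.us endpoints shown are assumptions based on the common 2008-era patterns and may differ from the services' actual submit URLs, so treat them as placeholders.

from urllib.parse import urlencode

# Submit endpoints are assumptions (2008-era patterns); swap in whatever services you target.
SHARE_ENDPOINTS = {
    "Digg this": "http://digg.com/submit",
    "Add to del.icio.us": "http://del.icio.us/post",
}

def share_links(page_url, page_title):
    """Return (label, href) pairs for social bookmarking buttons."""
    query = urlencode({"url": page_url, "title": page_title})
    return [(label, endpoint + "?" + query) for label, endpoint in SHARE_ENDPOINTS.items()]

for label, href in share_links("http://www.example.com/post", "Example post"):
    print('<a href="%s">%s</a>' % (href, label))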

Social Media Marketing

SMM on the other hand plays more of an active role in relation to social media by referring to the creation and distribution of content and other messages through the social web by some form of viral marketing. This can be anything from creating compelling content that gets bookmarked and even hits digg’s homepage to spreading a viral video by putting it on YouTube and other social media websites. It’s about the things that are done off-site, for example, participating in online communities where your customers hang out would be an active role that falls under SMM.

Source: http://importantseotips.blogspot.com/2008/08/social-media-optimization-vs-social.html

Tuesday, September 2, 2008

Search Rankings for Keyword

Lowering Your Search Rankings for a Keyword That is Getting You in Legal Trouble

A client of mine wanted to target high traffic terms in his industry. His industry was not very competitive, and some of the target terms were competing trademarks. We rank top 5 for them in Google, and now legal troubles are occurring. We removed all references to the competing trademark on our site, but still have some rogue inbound links that we can't get removed that have the anchor text which targets the competing mark. What should we do?

Answer: There are four options which can help get you out of this situation.

Work With the Competition

If you have a nepotistic relationship with the competitor and recommend them then perhaps you can both be strengthened as category leaders.

Is the Lawsuit Cheap Marketing?

Anything involving Google and search is still a ripe field for media exposure. If you think your chances of winning are good enough, and the potential return is much larger than the risk of losing, consider letting them sue you. It is probably not your fault that Google ranked you, but also seek legal advice outside of reading this post... I am not a lawyer.

Tank Your Rankings for that Keyword

If the links point at a page other than the homepage, consider removing the URL from your site, and then use the Google URL removal tool when the 404 error shows.

If you bought those links, cash usually works to help remove them. Use their on-site contact information and the email in their WHOIS record, or call the number in the WHOIS data, and pay them to take down the links.

Improve the Rankings of Competing Pages

The tips offered in my search engine reputation management post work here as well. Follow the tips in that post to help make other competing pages rank better.

You can also work on improving the rankings of other competing pages while lowering your rankings. Feel free to push a couple strong pages if you are just trying to end the confrontation, but if you feel they are dirty you may also want to help surface some ugly news that was ranking on page 3. Do that and they may care less about your rankings, and shift their focus to those other sites.

Source: http://convonix.blogspot.com/2008/06/lowering-your-search-rankings-for.html


Thursday, July 24, 2008

Google Rocky Search Engine Updates

Google Rocky Search Engine Updates

If you've been promoting your site on the Google search engine, you've probably experienced ups and downs at times, unless you are a strong authority site, in which case these updates have practically no effect on your rankings and the latter stay rather stable. These ups and downs are what SEO marketers call the Google updates. These updates are important because Google is tweaking its algorithm to deliver better search engine results and, hopefully, to solve problems faced by marketers, i.e. URL canonicalization, duplicate content and 302 redirects. In the past months and years, there have been several updates, e.g. Florida, Allegra, Bourbon and Jagger, and the latest one at the time of this writing (03/01/2006) is Bigdaddy, which already started about 2-3 months back. These updates are named at webmasterworld.com. You can read more about this latest update on senior Google engineer Matt Cutts' blog at http://www.mattcutts.com/blog/bigdaddyGoogle.

Google has several datacenters, and when the updates are rolling in, the different datacenters work on building new results; when everything is stabilised and settled, the final results are shown on the main Google.com. Google is no longer prone to chaotic updates whereby the search engine results are really shaken, with rankings shifting quite a lot and creating confusion. With all the different datacenters, Google is proceeding with micro updates. Presently, Bigdaddy is only live at 66.249.93.104 and 64.233.179.104. You might want to take a closer look at 66.249.93.104, as Matt Cutts says this is the preferred data center to hit.

If you have a website ranking in the search engines and, according to the datacenters, you seem to be affected, don't panic. It's of no use constantly worrying about whether your site will finally come out "safe and sound" when the update is not over yet. And if you have a good site with several quality backlinks from different sources, i.e. reciprocal links, one-way links, directory links, article links and press release links, you should not really worry. Even if you lose rankings, it might just be temporary and will get fixed sooner or later. In the meantime, continue to work on building your backlinks and content, rather than wasting too much time checking and monitoring the update. Let it do its job and you might be surprised by the final outcome.

This article can be freely published on a website as long as it's not modified in any way including the author bylines, plus all the hyperlinks must be made active just like below.

Jean Lam is a writer, author, publisher and owns several sites. Visit his Website Marketing

Tuesday, July 22, 2008

Search Engine Updates

Webmasters always anxiously wait for a search engine update. Those who rank well want to see their sites do even better. Those who didn't do well expect a major boost. Those whose sites got de-indexed anticipate a major comeback. Those who just started new sites bet that their sites will make it onto the first page of search engine result pages (SERPs) for their target keywords. Of course, not everyone will be happy with the results of a search engine update. After all, search engine traffic is a zero-sum game - someone loses and someone gains. Then the webmasters start preparing for the next update.

The Types of Search Engine Updates

Search engines are large software systems. There are three types of search engine updates.

  1. Updates of index database - Search engine crawlers continuously scan the Web for new content and changes to feed their index databases. This drives the minor SERP shifts.

  2. Minor search engine updates - Search engines need to fix software bugs and make minor algorithm changes once in a while. Minor updates seem to happen monthly.

  3. Major search engine updates - Every year, major search engines will shock the SEO community with major search engine updates. A major update involves both major algorithm changes and the re-organization of the index database. Major search engine updates are clearly driven more by business reasons than by technical reasons.


The Business Reasons Behind The Major Updates


All major search engines claim that they strive to present search results of the highest quality to their users. But the business of search engines is business. What they won't tell us is that there are many business reasons behind every major search engine update. Search engine traffic is a hot commodity - it's free and has a higher conversion rate, since searchers are very close to making their buying decisions. The downside of search traffic for webmasters is that they have no control at all. Your site may be ranked #1 today, but nowhere the next day.

Search engine companies will, no doubt, use search engine traffic to maximize value for their stakeholders. Google's Feb. 2 update (the Allegra update, or Superbowl update) once again shocked the webmaster community like the last Florida update. The noticeable change in the Superbowl update is that well-established sites rank well even for specific keywords that aren't highly relevant to their pages. You may think the move is to fight spam and improve the quality of SERPs. That's only part of the story. The result of the update is that the websites of well-established corporations (with never-ending press releases) get a major traffic boost from Google. Google does this by algorithm changes, not manual manipulation.

Think of search engine traffic from Google as an incentive to try it free before you buy. This time, Google has decided to lure the major corporations into testing the benefits of search traffic. Major corporations will likely increase their spending on online advertising, and those news agencies may even drop their lawsuits against Google if they see that the traffic from Google justifies having their sites included in the Google index database.

Is this the real driving force behind the last update? Only Google knows. If you owned Google, however, you would do exactly the same.

Will this negatively impact the user experience? Maybe, maybe not. What is the real difference between the #1 spot and the site that ranks #100? The backlinks. Backlinks don't alter the quality of a page at all.

When they say technology, they mean business. Major technology changes are always driven by business needs. It has nothing to do with "good" or "bad".


Strategies to Cope with search Engine Updates


The Internet and the Web were once hailed as the new medium and the new opportunity for small businesses and site owners. They will be disappointed as the big three peek into Fortune 500 companies' deep pockets. There are strategies they can use to cope with search engine updates, however.

  1. Create a portfolio of websites using different SEO techniques. If some of your sites get hammered in an update, the rest may benefit from it.

  2. Generate traffic from all major search engines.

  3. Use the search traffic to build loyal user bases.

  4. Build sites similar to the sites of well-established companies. Tags, on-page and off-page optimization techniques will become less and less important, as major corporations aren't interested in the kinds of things that geek webmasters are interested in.

Speculation on Coming Updates

The next Google update is around the corner. I don't expect any noticeable change. If Google decides to let the big apples try search traffic for free, they will need a couple of months to realize its value. There will be a major redistribution of search traffic in three to six months. If the same group of major corporations always gets a huge amount of free traffic from Google, they won't bother to open their wallets. The major traffic redistribution may be from one group of major corporations to another. The search traffic is unlikely to shift back to small sites.

Tuesday, July 8, 2008

Yahoo Update July 2008

Yahoo Update July 2008


Yahoo recently announced an update in their algorithm and warned people they may see fluctuations in rankings. Google also reportedly made over 45 tweaks to their algorithm last month. There is a lot of fluctuation going on in the search engine results right now.

If you have considered optimizing your site now would be a good time to do it.

You see, when rankings fluctuate, many of the same sites end up returning to their past positions after things settle - so while this fluctuation is going on, it's actually the perfect time to swoop in with a newly optimized site and try to take over some of those rankings.

If you are handling your own site, make sure you stay aware of fluctuations in the algorithms! Staying current is really important in the SEO world.

Wednesday, July 2, 2008

Alternative Search Engines

Alternative Search Engines

Search Engine name URL Category
Accoona www.accoona.com A.I. Search (HM)
AfterVote (SEM) www.aftervote.com Social Search
Agent 55 www.agent55.com MetaSearch
AllTha.at www.allth.at Continuous Search
AnswerBus www.answerbus.com Semantic Search
Blabline www.blabline.com Podcast Search
Blinkx* www.blinkx.com Video Search
Blogdigger www.blogdigger.com Blog Search
Bookmach.com* www.bookmach.com Bookmark Search
ChaCha* (#1 2006) www.chacha.com Guided Search
ClipBlast!* www.clipblast.com Video Search
Clusty* www.clusty.com Clustering Search
CogHog www.infactsolutions.com/projects/coghog/demo.htm Semantic Search
Collarity* www.collarity.com Social Search (HM)
Congoo* www.congoo.com Premium Content Search
CrossEngine (Mr. Sapo)* www.crossengine.com MetaSearch
Cydral http://en.cydral.com Image Search (French)
Decipho* www.decipho.com Filtered Search
Deepy www.deepy.com RIA Search
Ditto* www.ditto.com Visual Search
Dogpile www.dogpile.com MetaSearch
Exalead* www.exalead.com/search Visual Search
Factbites* www.factbites.com Filtered Search
FeedMiner www.feedminer.com RSS Feeds Search
Feedster www.feedster.com RSS Feeds Search
Filangy www.filangy.com Social Search
Find Forward www.findforward.com Meta Feature Search
FindSounds* www.findsounds.com Audio Search
Fisssh! www.fisssh.com Filtered Search (HM)
FyberSearch www.fybersearch.com Meta Feature Search
Gigablast* www.gigablast.com Blog Search
Girafa* www.girafa.com Visual Display
Gnosh www.gnosh.org Meta Search
GoLexa www.golexa.com Meta Feature Search
GoshMe* (SEM) www.goshme.com Meta Meta Search
GoYams* www.goyams.com Meta Search
Grokker* www.grokker.com Meta Search
Gruuve www.gruuve.com Recommendation Search
Hakia www.hakia.com Meaning Based Search
Hyper Search http://hypersearch.webhop.org.90.seekdotnet.com Filtered Search
iBoogie www.iboogie.com Clustering Search
IceRocket* www.icerocket.com Blog Search
Info.com www.info.com MetaSearch
Ixquick* www.ixquick.com Meta Search
KartOO* www.kartoo.com Clustering Search
KoolTorch (SEM) www.kooltorch.com Clustering Search
Lexxe* www.lexxe.com Natural Language Processing (NLP)
Lijit www.lijit.com Search People
Like* www.like.com Visual Search
LivePlasma* www.liveplasma.com Recommendation Search (HM)
Local.com* www.local.com Local Search
Mamma www.mamma.com MetaSearch
Mnemomap www.mnemo.org Clustering Search
Mojeek* www.mojeek.com Custom Search Engines (CSE)
Mooter* www.mooter.com Clustering Search
Mp3Realm http://mp3realm.org MP3 Search
Mrquery www.mrquery.com Clustering Search
Ms. Dewey* www.msdewey.com Unique Interface (HM)
Nutshell www.gonutshell.com MetaSearch
Omgili www.omgili.com Social Search
Pagebull* www.pagebull.com Visual Display
PeekYou www.peekyou.com People Search
Pipl http://pipl.com People Search
PlanetSearch* www.planetsearch.com MetaSearch
PodZinger www.podzinger.com Podcast Search
PolyMeta www.polymeta.com MetaSearch
Prase www.prase.us MetaSearch
PureVideo www.purevideo.com Video Search (HM)
Qksearch www.qksearch.com Clustering Search
Querycat http://querycat.com F.A.Q. Search (HM)
Quintura* www.quintura.com Clustering Search
RedZee www.redzee.com Visual Display
Retrievr http://labs.systemone.at/retrievr/ Visual Search
Searchbots www.searchbots.net Continuous Search
SearchKindly www.searchkindly.org Charity Search
Searchles* (DumbFind) www.searchles.com Social Search
SearchTheWeb2* www.searchtheweb2.com Long Tail Search
SeeIt www.seeit.com Image Search
Sidekiq* www.sidekiq.com MetaSearch
Slideshow* http://slideshow.zmpgroup.com/ Visual Display
Slifter* www.slifter.com Mobile Shopping Search (HM)
Sphere www.sphere.com Blog Search
Sproose www.sproose.com Social Search
Srchr* www.srchr.com MetaSearch
SurfWax* www.surfwax.com Meaning Based Search
Swamii www.swamii.com Continuous Search (HM)
TheFind.com* www.thefind.com Shopping Search
Trexy* www.trexy.com Search Trails
Turboscout* www.turboscout.com MetaSearch
Twerq www.twerq.com Tabbed Results
Url.com* www.url.com Social Search
WasaLive! http://en.wasalive.com RSS Search
Web 2.0* www.web20searchengine.com Web 2.0 Search
Webbrain* www.webbrain.com Clustering Search
Whonu?* www.whonu.com MetaSearch
Wikio* www.wikio.com Web 2.0 Search
WiseNut* www.wisenut.com Clustering Search
Yoono* www.yoono.com Social Search
ZabaSearch* www.zabasearch.com People Search
Zuula* www.zuula.com Tabbed Search (HM)
Twing* www.Twing.com Community / Forum Search & Discovery Engine


Ask anyone which search engine they use to find information on the Internet and they will almost certainly reply: "Google." Look a little further, and market research shows that people actually use four main search engines for 99.99% of their searches: Google, Yahoo!, MSN, and Ask.com (in that order). But in my travels as a Search Engine Optimizer (SEO), I have discovered that in that .01% lies a vast multitude of the most innovative and creative search engines you have never seen. So many, in fact, that I have had to limit my list of the very best ones to a mere 100.

But it's not just the sheer number of them that makes them worthy of attention; each one of these search engines has that standard "About Us" link at the bottom of the homepage. I call it the "why we're better than Google" page. And after reading dozens and dozens of these pages, I have come to the conclusion that, taken as a whole, they are right!

The Search Homepage

In order to address their claims systematically, it helps to group them into categories and then compare them to their Google counterparts. For example, let's look at the first thing that almost everyone sees when they go to search the Internet - the ubiquitous Google homepage. That famously sparse, clean sheet of paper with the colorful Google logo is the most popular Web page in the entire World Wide Web. For millions and millions of Internet users, that Spartan white page IS the Internet.

Google has successfully made their site the front door through which everyone passes in order to access the Internet. But staring at an almost blank sheet of paper has become, well, boring. Take Ms. Dewey for example. While some may object to her sultry demeanor, it's pretty hard to deny that interfacing with her is far more visually appealing than with an inert white screen.

A second example comes from Simply Google. Instead of squeezing through the keyhole in order to reach Google's 37 search options, Simply Google places all of those choices and many, many more all on the very first page; neatly arranged in columns.

Artificial Intelligence

A second arena is sometimes referred to as Natural Language Processing (NLP), or Artificial Intelligence (AI). It is the desire we all have of wanting to ask a search engine questions in everyday sentences, and receive a human-like answer (remember "Good Morning, HAL"?). Many of us remember Ask Jeeves, the famous butler, which was an early attempt in this direction - that unfortunately failed.

Google's approach, Google Answers, was to enlist a cadre of "experts." The concept was that you would pose a question to one of these experts, negotiate a price for an answer, and then pay up when it was found and delivered. It was such a failure, Google had to cancel the whole program. Enter ChaCha. With ChaCha, you can pose any question that you wish, click on the "Search With Guide" button, and a ChaCha Guide appears in a Chat box and dialogues with you until you find what you are looking for. There's no time limit, and no fee.

Clustering Engines

Perhaps Google's most glaring and egregious shortcoming is their insistence on displaying the outcome of a search in an impossibly long, one-dimensional list of results. We all intuitively know that the World Wide Web is just that, a three dimensional (or "3-D") web of interconnected Web pages. Several search engines, known as clustering engines, routinely present their search results on a two-dimensional map that one can navigate through in search of the best answer. Search engines like KartOO and Quintura are excellent examples.

Recommendation Search Engines

Another promising category is the recommendation search engines. While Google essentially helps you to find what you already know (you just can't find it), recommendation engines show you a whole world of things that you didn't even know existed. Check out What to Rent, Music Map, or the stunning Live Plasma display. When you input a favorite movie, book, or artist, they recommend to you a world of titles or similar artists that you may never have heard of, but would most likely enjoy.

Metasearch Engines

Next we come to the metasearch engines. When you perform a search on Google, the results that you get are all from, well, Google! But metasearch engines have been around for years. They allow you to search not only Google, but a variety of other search engines too - in one fell swoop. There are many search engines that can do this, Dogpile, for instance, searches all of the "big four" mentioned above (Google, Yahoo!, MSN, and Ask) simultaneously. You could also try Zuula or PlanetSearch - which plows through 16 search engines at a time for you. A very interesting site to watch is GoshMe. Instead of searching an incredible number of Web pages, like conventional search engines, GoshMe searches for search engines (or databases) that each tap into an incredible number of Web pages. As I perceive it, GoshMe is a meta-metasearch engine (still in Beta)!

Other Alt Search Engines

And so it goes, feature after feature after feature. TheFind is a better shopping experience than Google's Froogle, IMHO. Like is a true visual search engine, unlike Google's Images, which just matches your keywords into images that have been tagged with those same keywords. Coming soon is Mobot (see the Demo at www.mobot.com). Google Mobile does let you perform a search on your mobile phone, but check out the Slifter Mobile Demo when you get a chance!

Finally, almost prophetically, Google is silent. Silent! At least Speeglebot talks to you, and Nayio listens! But of course, why should Google worry about these upstarts (all 100 of them)? Aren't they just like flies buzzing around an elephant? Can't Google just ignore them, as their share of the search market continues to creep upwards towards 100%, or perhaps just buy them? Perhaps.

The Last Question

Isaac Asimov, the preeminent science fiction writer of our time, once said that his favorite story, by far, was The Last Question. The question, for those who have not read it, is "Can Entropy Be Reversed?" That is, can the ultimate running down of all things, the burning out of all stars (or their collapse) be stopped - or is it hopelessly inevitable?

The question for this age, I submit, is… "Can Google Be Defeated"? Or is Google's mission "to organize the world's information and make it universally accessible and useful" a fait accompli?

Perhaps the place to start is by reading (or re-reading) Asimov's "The Last Question." I won't give it away, but it does suggest The Answer….

Charles Knight is the Principal of Charles Knight SEO, a Search Engine Optimization company in Charlottesville, VA.

The Top 100

For an Excel spreadsheet of the entire Top 100 Alternative Search Engines, go to: http://charlesknightseo.com/list.aspx or email the author at Charles@CharlesKnightSEO.com.

This list is in alphabetical order. Feel free to share this list, but please retain Charles' name and email.

Update, 5 February 2007: Charles Knight has left a detailed comment (#94) in response to all the great feedback in the comments to this post. He also notes:

"...while it looks like a very simple, almost crude list of 100 names, it has taken countless hours to try and do it properly and fairly. The list will be updated all year long, and the Top 100 can only get better and better until the Best of 2007 are announced on 12/31/07."

Thursday, June 19, 2008

Google News Search Leaps Ahead

Google News Search Leaps Ahead

Google has dramatically enhanced its news search service, serving up a portal of real-time news drawn from more than 4,000 sources worldwide.

Until recently, Google's news search has been competent, but less useful than other news-aggregating services such as AllTheWeb's News Search and Yahoo's Full coverage. The new enhancements establish Google as one of the premier news finding and filtering destinations on the web.

Like Yahoo's Full Coverage, Google News Search now looks like a portal, with links to the top headlines organized into categories such as Top Stories, World, Business, Sports and so on. Each category has its area on the News Search home page, with headlines, descriptions and links for the top two or three stories.

"The page looks very different than the average Google page," said Marissa Mayer, Google product manager. That's because it's packed with headlines, descriptions, thumbnail photos and dozens of links to the sources of the articles online.

Unlike Yahoo Full Coverage, however, Google News Search isn't assembled by human editors who select and format the news. Google's process is fully automated. News stories are chosen and the page is updated without human intervention. Google crawls news sources constantly, and uses real-time ranking algorithms to determine which stories are the most important at the moment -- in theory highlighting the sources with the "best" coverage of news events.

Each top story is presented with a headline linked directly to the source. Beneath the headline is a short description, name of the source, and the time when the article was last crawled, ranging from a few minutes to several hours ago.

Beneath the main headline and description are two full headlines from other sources, followed by four or five links to stories with only the name of the publication indicated. Finally, there are links to "related" stories from other sources.

This design makes it easy to quickly scan the headlines while having the option of reading multiple accounts of a story from different news sources -- from literally thousands of sources, for some stories.

Each major category has a link at the top of its respective section that allows you to scan news just within the category. Tabs on the upper left of each page also allow you to focus on Top Stories, World, U.S., Business, Sci/Tech, Sports, Entertainment and Health categories.

Unlike many news aggregators that simply "scrape" headlines and links from news sites, Google's news crawler indexes the full text of articles. This approach offers several unique benefits.

For example, full text indexing allows true searching, rather than just browsing of headlines. Creating a full text index of news also allows Google to cluster related news stories, around what Mayer calls a "centroid" of keywords. "A cluster is defined by a centroid of keywords, and all the articles have some of those key words in them," she said.

The process uses artificial intelligence in addition to traditional information retrieval techniques to match keywords with stories. Mayer says this approach to identifying related articles means that the relative importance of each article is "baked in," which is how the top sources for each story are selected.
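
To give a feel for what clustering around a "centroid" of keywords can mean, here is a deliberately simplified Python sketch: bag-of-words vectors, cosine similarity against each cluster's centroid, and a threshold for joining an existing cluster. This is illustrative only and is not Google's algorithm.

from collections import Counter
from math import sqrt

def vectorize(text):
    """Bag-of-words keyword vector for one article."""
    return Counter(word.lower() for word in text.split())

def cosine(a, b):
    dot = sum(a[term] * b[term] for term in a if term in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def cluster(articles, threshold=0.3):
    """Attach each article to the closest centroid, or start a new cluster."""
    clusters = []   # each cluster: {"centroid": Counter, "members": [text, ...]}
    for text in articles:
        vector = vectorize(text)
        best = max(clusters, key=lambda c: cosine(vector, c["centroid"]), default=None)
        if best and cosine(vector, best["centroid"]) >= threshold:
            best["members"].append(text)
            best["centroid"] += vector          # fold the new article into the centroid
        else:
            clusters.append({"centroid": vector.copy(), "members": [text]})
    return clusters

stories = [
    "earthquake shakes southern alaska coast",
    "magnitude 3.7 earthquake reported in alaska",
    "new search engine features announced",
]
print([c["members"] for c in cluster(stories)])   # the two Alaska stories cluster together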

Other factors used in calculating the relevance of top and related stories include how recently articles were published, and the reputation of the source. When you actually do a search, these factors are also applied in addition to keyword analysis to determine how closely particular stories match your query.
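
A toy version of combining those signals might look like the following Python sketch, where keyword matches, freshness and source reputation are blended into one score. The weights and the reputation values are invented for illustration; Google's actual formula is not public.

import time

SOURCE_REPUTATION = {"example-wire.com": 0.9, "smallblog.example.com": 0.4}   # invented values

def score(story, query_terms, now=None):
    """Blend keyword matches, freshness and source reputation into one number."""
    now = now or time.time()
    text = story["text"].lower()
    keyword_score = sum(text.count(term.lower()) for term in query_terms)
    age_hours = (now - story["published"]) / 3600.0
    freshness = 1.0 / (1.0 + age_hours)                     # newer stories score higher
    reputation = SOURCE_REPUTATION.get(story["source"], 0.5)
    return 0.6 * keyword_score + 0.3 * freshness + 0.1 * reputation   # weights are arbitrary

story = {"text": "Earthquake hits southern Alaska",
         "published": time.time() - 2 * 3600,               # published two hours ago
         "source": "example-wire.com"}
print(score(story, ["alaska", "earthquake"]))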

On search results pages, a link allows you to override the default ranking by relevance and order results by date -- a feature that's particularly helpful for monitoring breaking news.

Google's decision to index the full text of news sources rather than simply scraping headlines posed a major challenge for implementing the new service. The vast diversity and typically cluttered design of most online news formats is more difficult to crawl and index than many other types of web sites. "Article extraction has proven to be one of the most difficult aspects of the project," said Mayer.

Google crawls its 4,000 sources of news continuously and in real time. According to Mayer, the crawler continuously computes what's likely to change on each news source, and when the change is likely to occur. To expedite the discovery of new stories, the crawler tends to hit hub or major section pages frequently, to see what new links are there.
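
A very rough sketch of that kind of revisit scheduling, in Python: estimate how often a source changes from past observations and plan the next fetch accordingly. Again, this is illustrative and not Google's crawler.

import time

def next_crawl_time(change_timestamps, min_interval=300, max_interval=6 * 3600):
    """Suggest when to fetch a news source again, based on how often it has changed."""
    if len(change_timestamps) < 2:
        return time.time() + max_interval                   # no history yet: check back later
    gaps = [b - a for a, b in zip(change_timestamps, change_timestamps[1:])]
    average_gap = sum(gaps) / len(gaps)
    interval = max(min_interval, min(average_gap / 2, max_interval))   # revisit roughly twice per change
    return change_timestamps[-1] + interval

now = time.time()
observed_changes = [now - 7200, now - 3600, now - 60]       # the page changed roughly hourly
print(round(next_crawl_time(observed_changes) - now))       # seconds until the next suggested fetch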

While the news sources are crawled constantly and individual news stories are updated continuously, the entire set of displayed stories is "auto generated" every 15 minutes. A message in the upper right corner of the main news page indicates when it was last generated.

Google's updated news search is an exceptionally powerful tool for web users. It's still in beta, so there are still a few rough edges, but all told it's one of the best news browse and search portals currently operating on the web.

Google News Search
http://news.google.com

News Search Engines
http://searchenginewatch.com/links/news.html

Wednesday, June 18, 2008

Hitwise Data Google Rules

Hitwise Data: Google Rules


Google Receives 68 Percent of U.S. Searches in May 2008

Search leader continues record growth - up 5 percent year-over-year;

Google accounted for 87 percent of searches in UK

NEW YORK, NY – June 10, 2008 – Google accounted for 68.29 percent of all U.S. searches in the four weeks ending May 31, 2008, Hitwise announced today. Yahoo! Search, MSN Search and Ask.com received 19.95 percent, 5.89 percent and 4.23 percent of searches, respectively. The remaining 41 search engines in the Hitwise Search Engine Analysis Tool accounted for 1.63 percent of U.S. searches.
Percentage of U.S. Searches Among Leading Search Engine Providers
Domain May-08 Apr-08 May-07
www.google.com 68.29% 67.90% 65.13%
search.yahoo.com 19.95% 20.28% 20.89%
search.msn.com 5.89% * 6.26% * 7.61% *
www.ask.com 4.23% 4.17% 3.92%

Note: Data is based on four-week rolling periods (ending 5/31/2008, 4/26/2008 and 5/26/2007) from the Hitwise sample of 10 million U.S. Internet users. * Includes executed searches on Live.com and MSN Search but does not include searches on Club.Live.com.

Source: Hitwise

In the U.K. market, Google search properties (Google.co.uk and Google.com) accounted for 87 percent of all UK searches in May 2008 representing a 12 percent increase compared to May 2007. Yahoo! search properties accounted for 4.09 percent of UK searches in May 2008, a 2 percent increase compared to April 2008. MSN search properties accounted for 3.72 percent and Ask search properties accounted for 3.07 percent of searches. MSN increased two percent compared to April 2008 and Ask increased 6 percent.

Percentage of U.K. Searches Among Leading Search Engine Providers
Domain May-08 Apr.-08 May-07
Google Properties 87.30% 87.69% 78.28%
Yahoo! Properties 4.09% 4.01% 8.58%
Microsoft Properties 3.72% 3.65% 5.46%
Ask Properties 3.07% 2.89% 4.96%

Note: Data is based on UK Internet usage over the four-week rolling periods (ending 5/31/2008, 4/26/2008 and 5/26/2007) from the Hitwise sample of 8.4 million UK Internet users. Note that the percentages for the search properties include the .uk and .com domains.

Source: Hitwise UK

Google an Increasing Source of Traffic to Key U.S. Industries
Search engines continue to be the primary way Internet users navigate to key industry categories. Comparing May 2008 to May 2007, the Travel, News and Media, Entertainment, Business and Finance, Sports, Online Video and Social Networking categories showed double digit increases in their share of traffic coming directly from search engines.

U.S. Category Upstream Traffic from Search Engines and Google - May 2008
Category | % of category traffic from search engines (May-08) | Change in share of traffic from search engines (May-08 vs May-07) | % of category traffic from Google (May-08) | Change in share of traffic from Google (May-08 vs May-07)
Health and Medical | 45.76% | 3% | 30.86% | 5%
Travel | 34.81% | 11% | 24.26% | 21%
Shopping and Classifieds | 25.48% | 2% | 16.84% | 8%
News and Media | 21.70% | 7% | 14.53% | 10%
Entertainment | 24.33% | 17% | 15.76% | 22%
Business and Finance | 18.15% | 14% | 11.73% | 22%
Sports | 13.09% | 17% | 8.81% | 24%
Online Video* | 29.94% | 37% | 20.78% | 52%
Social Networking* | 16.50% | 18% | 9.98% | 21%

All figures are based on U.S. data from the Hitwise sample of 10 million Internet users.
* denotes custom category

Source: Hitwise

About Hitwise
Hitwise is the leading online competitive intelligence service. Only Hitwise provides its 1,400 clients around the world with daily insights on how their customers interact with a broad range of competitive websites, and how their competitors use different tactics to attract online customers.

Since 1997, Hitwise has pioneered a unique, network-based approach to Internet measurement. Through relationships with ISPs around the world, Hitwise’s patented methodology anonymously captures the online usage, search and conversion behavior of 25 million Internet users. This unprecedented volume of Internet usage data is seamlessly integrated into an easy to use, web-based service, designed to help marketers better plan, implement and report on a range of online marketing programs.

Monday, June 16, 2008

SEO activities on your website

SEO activities on your website

1. Analysis & Research

* Keyword/Phrase analysis
* Keyword research using Wordtracker, Overture and Google Sets
* Competitive analysis
* Extensive Competitive Analysis for better search engine ranking performance
* Initial position analysis report
* Website Usability Analysis by a usability and copyediting expert
* Extensive Log file analysis
* Personalized Report Analysis and Monitoring

2. On-site (On-Page) optimization

* Homepage Optimization
* Meta tags placement
* Content fixing
* Monthly Manual Update to Optimized Content
* Fixing the text links
* Optimized Navigational Structure
* Site map for better crawling of your site
* Descriptive site map creation
* Link resource page creation
* Link exchange page creation
* Image Optimization
* SEO Copywriting
* Spell Checking
* HTML Validation Checking
* Browser Compatibility checking
* Website Load time checking
* Creation of a robots.txt file (see the sketch after this list)
* mod_rewrite / URL rewriting for dynamic sites, for better search engine crawling
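As a minimal illustration of the robots.txt item above, a basic file can be generated with a few lines of Python. The disallowed paths and the sitemap URL below are placeholders for illustration, not recommendations:

    # Minimal sketch: write a basic robots.txt file.
    # The disallowed paths and sitemap URL are placeholders only.
    disallowed_paths = ["/cgi-bin/", "/tmp/"]
    sitemap_url = "http://www.example.com/sitemap.xml"

    lines = ["User-agent: *"]
    lines += [f"Disallow: {path}" for path in disallowed_paths]
    lines += ["", f"Sitemap: {sitemap_url}"]

    with open("robots.txt", "w") as f:
        f.write("\n".join(lines) + "\n")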

3. Off-site (Off-Page) optimization

* Manual submission to all major search engines
* Semi-automatic submission to more than 200 Search Engines
* Resubmission of sites to certain search engines if necessary
* Submission to important paid inclusion directories
* Yahoo Directory Inclusion
* Submission to Dmoz directory
* Submission to more than 5000 free inclusion quality directories
* Re-optimization of site
* Reciprocal link building - 2-way and 3-way links
* Link Popularity through one way links
* Buying text link advertisements from relevant sites to increase the link popularity
* Article Submission
* Newsletter Submission
* Forum Posting
* Blog submission

4. Reports

* Monthly management plan
* Detailed Ranking Report
* Weekly updates & comprehensive Monthly Ranking Reports
* Submission Management with Reports
* Site Visibility Statistics Report
* Server Check and Link Check

5. Support

* Search Engine Algorithm Updates
* 100% Guaranteed Uptime during site Modifications
* Multiple CD Burned backups of Optimized pages and site
* Technical Support
* 24/7 Phone support and online support

Friday, June 13, 2008

Change Definition for Doorway Pages

Google Change Definition for Doorway Pages

According to Search Engine Watch, Google has changed the way it defines 'Doorway Pages'.

The new definition at Google Webmaster Help Center for 'Doorway Pages':

Doorway pages are typically large sets of poor-quality pages where each page is optimized for a specific keyword or phrase. In many cases, doorway pages are written to rank for a particular phrase and then funnel users to a single destination.

Whether deployed across many domains or established within one domain, doorway pages tend to frustrate users, and are in violation of our Webmaster guidelines.

However, the cached version of the page still shows the old definition:

Doorway pages are pages specifically made for search engines. Doorway pages contain many links - often several hundred - that are of little to no use to the visitor, and do not contain valuable content. HTML sitemaps are a valuable resource for your visitors, but ensure that these pages of links are easy for your visitors to navigate. If you have a number of links to include, consider organizing them into categories or into multiple pages. But in doing so, ensure that they are intended for visitors to navigate the sections of your site, and not simply for search engines.

In the new version of the definition, key sentences, words and adjectives have been replaced by more generic terms. Discussion is ongoing at the Search Engine Watch forums. It seems that Google has tweaked the definition to make the page read as less technical and more general.

Google IP Delivery Geo Location

Google IP Delivery, Geo Location and Cloaking


At the Google Webmaster Central Blog, Google has released some valuable information about its webserving techniques, especially as they relate to Googlebot. The post was written in response to the numerous requests for information Google had received about IP delivery, geolocation and cloaking techniques.

Geolocation: This is the practice of serving targeted or different content to users based on their location. Webmasters can determine a user's location from preferences stored in cookies, from information tied to the user's login, or from the user's IP address. For example, if your website is about theater, you can use geolocation techniques to highlight Broadway for a user in New York.


IP Delivery: This is the practice of serving targeted or different content to users based on their IP address, which can often be mapped to a geographic location. IP delivery is therefore a specific form of geolocation, and the techniques are much the same.
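As a rough sketch of IP delivery (an illustration only, not Google's or anyone's production implementation; the IP ranges and content strings below are made up), a site could map the visitor's IP address to a region and pick content accordingly. Real deployments would use a full geolocation database rather than a hand-written table:

    import ipaddress

    # Toy IP-to-region table for illustration only; real geolocation uses a proper database.
    REGIONS = {
        ipaddress.ip_network("203.0.113.0/24"): "new_york",
        ipaddress.ip_network("198.51.100.0/24"): "london",
    }

    CONTENT = {
        "new_york": "Broadway listings and showtimes",
        "london": "West End listings and showtimes",
        "default": "Theater listings worldwide",
    }

    def content_for_ip(ip_string):
        """Return region-targeted content for a visitor's IP address."""
        ip = ipaddress.ip_address(ip_string)
        for network, region in REGIONS.items():
            if ip in network:
                return CONTENT[region]
        return CONTENT["default"]

    print(content_for_ip("203.0.113.42"))   # -> Broadway listings and showtimes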


Cloaking: This is the practice of serving different content to Googlebot than to users. It is considered unethical, and the Google Webmaster Guidelines prohibit webmasters from using it. If the file that Googlebot crawls is different from the file served to users, the webmaster falls into a high-risk category. A program such as md5sum can compute checksums, or diff can compare the files directly, to verify that the file served to Googlebot and the file served to users are identical.
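For example, the check described above can be approximated by fetching the same URL with a Googlebot-style User-Agent and with an ordinary browser User-Agent and comparing MD5 digests of the responses. The URL below is a placeholder; note that dynamic elements such as timestamps or rotating ads can make the digests differ even without cloaking, in which case a diff of the two responses is more informative:

    import hashlib
    import urllib.request

    def md5_of_page(url, user_agent):
        """Fetch a URL with the given User-Agent and return the MD5 digest of the response body."""
        request = urllib.request.Request(url, headers={"User-Agent": user_agent})
        with urllib.request.urlopen(request) as response:
            return hashlib.md5(response.read()).hexdigest()

    url = "http://www.example.com/"   # placeholder URL
    googlebot_ua = "Googlebot/2.1 (+http://www.google.com/bot.html)"
    browser_ua = "Mozilla/5.0"

    # Matching digests suggest the same file is served to both user-agents;
    # differing digests mean different content is being returned and deserve a closer look.
    print(md5_of_page(url, googlebot_ua) == md5_of_page(url, browser_ua))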

First Click Free: If webmasters follow Google's First Click Free policy, they can include their premium or subscription-based content in Google's web search index without violating Google's quality guidelines. Webmasters allow all users who find a page through Google search to see the full text of the document, even if they have not registered or subscribed; the user's first click through to the content is free. If the user then jumps to another section of the website, the webmaster can block access to the premium or subscribed content with a login or a payment request.
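A much-simplified sketch of the first-click decision follows, assuming the site can read the Referer header and knows whether the visitor is a subscriber. The function names and the Google-host check are illustrative assumptions (among other simplifications, the check ignores Google's country-specific domains):

    from urllib.parse import urlparse

    def is_from_google_search(referer_header):
        """Rough check: did the visitor arrive by clicking a Google search result?"""
        if not referer_header:
            return False
        host = urlparse(referer_header).netloc.lower()
        return host == "google.com" or host.endswith(".google.com")

    def choose_response(referer_header, user_is_subscriber):
        # First click from a Google search result, or a subscriber: serve the full article.
        if user_is_subscriber or is_from_google_search(referer_header):
            return "full_article"
        # Anything else (e.g. a second click within the site): ask for login or payment.
        return "login_or_payment_page"

    print(choose_response("http://www.google.com/search?q=example", user_is_subscriber=False))  # full_article
    print(choose_response("http://www.example.com/other-section/", user_is_subscriber=False))   # login_or_payment_page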

There is also a thread at the Webmaster Help Group that should be quite interesting for all the webmasters out there.

Robot Exclusion Protocol

Google, Yahoo! & Microsoft Talk About 'Robot Exclusion Protocol'!

The Google Webmaster Central Blog, the Yahoo! Search Blog and the Microsoft Live Search Webmaster Center Blog have all published informative documentation about the Robots Exclusion Protocol (REP). In February last year, I put up a post informing our readers about Google's thoughts on the Robots Exclusion Protocol. Now all three have released their REP feature documentation at the same time, which makes it much easier for users to learn about the techniques each of the three search engines employs for the Robots Exclusion Protocol.

Here is what all three blogs are saying in unison:

Google Webmaster Blog:

For the last couple of years, Google, Yahoo! and Microsoft have been collaborating to bring webmasters essential tools. The REP features supported by all three search engines can be applied to all crawlers, or to specific crawlers by targeting a specific user-agent, which is how any crawler identifies itself. The following are the major REP features currently in use by all three search engines (a short robots.txt parsing sketch follows the lists below).

For Robots.txt Directives:

* Disallow
* Allow
* Wildcard Support
* Sitemaps Location

For HTML META Tag Directives:

* NOINDEX META Tag
* NOFOLLOW META Tag
* NOSNIPPET META Tag
* NOARCHIVE META Tag
* NOODP META Tag
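To see how the robots.txt directives listed above behave, here is a small sketch using Python's built-in urllib.robotparser. A few caveats: the standard-library parser applies rules in the order they appear (so the more specific Allow line is placed before the broader Disallow), it does not implement full Google-style wildcard matching, and crawl_delay() and site_maps() require Python 3.6+ and 3.8+ respectively:

    import urllib.robotparser

    # Example robots.txt content exercising the directives listed above.
    robots_lines = [
        "User-agent: *",
        "Allow: /private/public-report.html",   # more specific rule first for the stdlib parser
        "Disallow: /private/",
        "Crawl-delay: 10",
        "Sitemap: http://www.example.com/sitemap.xml",
    ]

    parser = urllib.robotparser.RobotFileParser()
    parser.parse(robots_lines)

    print(parser.can_fetch("Googlebot", "http://www.example.com/private/secret.html"))         # False
    print(parser.can_fetch("Googlebot", "http://www.example.com/private/public-report.html"))  # True
    print(parser.crawl_delay("Googlebot"))   # 10 (Python 3.6+)
    print(parser.site_maps())                # ['http://www.example.com/sitemap.xml'] (Python 3.8+)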



Yahoo! Search Blog:


Yahoo! supports the same REP features as Google, listed above. However, Yahoo! also supports some additional REP directives beyond the common set (of these, only Crawl-Delay is also supported by Microsoft, as noted below; none are supported by Google). These features are:

Crawl-Delay: Lets a site specify a minimum delay between successive crawler requests, reducing how frequently the crawler checks for new content.

NOYDIR META Tag: This is similar to the NOODP META Tag above but applies to the Yahoo! Directory, instead of the Open Directory Project.

Robots-nocontent Tag: Lets you mark out the non-content parts of a page so that the Yahoo! crawler can identify the main content and target the right pages on your site for specific search queries.



Microsoft Live Search Webmaster Center Blog:


Microsoft follows the same common REP directives as Yahoo! and Google. In addition, Microsoft supports one directive that is shared with Yahoo! but not supported by Google:

Crawl-Delay: Lets a site specify a minimum delay between successive crawler requests, reducing how frequently the crawler checks for new content.



Over at his blog, Matt Cutts also mentions the common REP directives used by Google, Microsoft and Yahoo!. He has also written about some other informative online documents that Google has published over the past few weeks. Some of the most interesting posts from Google so far have been:

IP delivery/geolocation/cloaking: In this post, Google explains, with the help of a video, its webserving techniques as they relate to Googlebot. The post covers IP delivery, geolocation and cloaking techniques.

Doorway Pages: Google has recently changed the definition for Doorway Pages at the Google Webmaster Help Center. This post provides the old and the new definition for the user to compare and understand the difference between the two.



This collaborative release is all about giving webmasters a clear picture of the actual REP functionality. Keeping track of the techniques used by different search engines is an arduous task, so Yahoo!, Microsoft and Google have provided a consolidated overview of the actual similarities and differences in how these three major search engines implement REP features.

Google SERP Dancing Possible June 2008

Google SERP Dancing Possible June 2008

According to WebmasterWorld, Google SERPs (Search Engine Result Pages) are returning fewer results for specific queries, pointing to a possible Google SERP update. Speculation is that the changes may be due to quality-control practices employed by Google, or possibly to human error.

Another concern is cache-related: the cache date is current but the cached pages are about one to three months old, and in some cases cached pages aren't being displayed at all. On google.co.uk, users are experiencing ranking changes (increased rankings) that are being attributed to the quantity of links rather than the quality of links. Some reports also suggest that a lot of irrelevant information is being displayed on the first page of Google SERPs, information that is in no way related to the search query.

Let us see what the webmasters at WebmasterWorld have to say about this possible update:

“I've wondered too as to why some search terms are affected more than others and some result pages are changing around while other barely move.

Has anyone seen any relationship between how popular a search the term is and how much movement is going on?

As to when it will end I don't think we can predict as nothing quite like this has gone on before”

“I don’t know if anybody has reported this or they could be doing some major testing in my areas.

I’m seeing some dramatic across the board cuts for returned results for many keywords in my areas. Many keywords that once returned 850-950 results are now showing only 600-725 returned results. First page though is showing about the same amount of returned results as before which is somewhat deceptive. I had a feeling this was right around the corner. They’re applying more and more of the quality control features of Adwords to the natural results.”

“I see this pattern too. It might be the result of the "human editorial army" as well as the automated quality measures.“

“Has anybody seen where cache dates may be 1-3 months old but the page showing in the cache is current? This is taking into account that the new cache date could show up shortly but doesn't.”

“ I'm noticing the rapid rise of a few sites in the google.co.uk serps. On investigation using Yahoo site-explorer it looks like shear volume of backlinks of any quality trumps a lower number of quality links. Whoever has been playing with the UK geo filter recently seems to have turned off the "high quality" bit of the algorithm. Thus creating a field day for webmasters who exploit the low pay rates of 3rd World SEOs.

Since it is generally agreed that being linked to (except in certain extreme situations) cannot harm your site should we all be paying someone $200 to get 400 links from dodgy directories?”

Well, these unexpected changes definitely have unmistakable similarities with SERP updates. However, as of now, it would be wise to just wait for Google's response.