Category Archives: Google Webmaster Central

Google is not putting logos next to your company’s results

We’re all familiar by now with the little author photos which appear in Google results. The initiative is supposed to help results from “trusted authors” stand out, and in the example below, where my own blog gets highlighted, you can get a fair idea of how effective it is:

[Screenshot: Google search results for "Ryder Cup", with my author photo shown next to my blog's listing]

There has been some talk in the last few days that Google is extending the concept to put company logos next to the relevant results, which would be of interest to a lot of us. Here’s what they have to say (and note the – rare – correct use of the most overused word of the last year, ‘iconic’):

Today, we’re launching support for the schema.org markup for organization logos, a way to connect your site with an iconic image. We want you to be able to specify which image we use as your logo in Google search results.

However, despite the implications you might draw from that, Google is not planning to start putting logos next to your company’s results in the same way as it has for authors. What the announcement above refers to is the rather odd panel Google calls the “Knowledge Graph”, which appears at the top right of the results for certain searches, usually those with Wikipedia entries (below). And should you be the lucky recipient of one of these, you can already specify your logo by having a related Google+ page. The new announcement just provides an alternative way of pointing the search engine towards your official logo.

[Screenshot: the Knowledge Graph panel shown for a search on Selfridges]

That said, I’m going to add this markup to my business website, because you never know when and where Google might start making use of the data. It’s a small addition which can’t do any harm.
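For reference, the markup in question is the schema.org Organization type with a logo property. Here’s a minimal sketch of the sort of thing involved; the domain and logo file name are placeholders, not real values:

    <!-- Minimal schema.org Organization markup pointing the search engines at a logo.
         www.example.com and logo.png are placeholders for your own domain and logo file. -->
    <div itemscope itemtype="http://schema.org/Organization">
      <a itemprop="url" href="http://www.example.com/">Home</a>
      <img itemprop="logo" src="http://www.example.com/logo.png" alt="Example Ltd logo" />
    </div>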

Official Google Webmaster Central Blog announcement

Is Google indexing pages from your website?

A new feature in Google Webmaster Tools has made a lot of full-time webmasters very happy. Hiding under the “Health” menu, the new “Index Status” report shows you how many pages from your site have been included in Google’s index. If you take a look, you should see a graph which climbs steadily. All is well unless the line flattens out or drops, and even that is only a worry if you are still adding pages to your website. Under the “Advanced” tab you’re given three further graphs, showing the cumulative number of pages crawled, the number of pages blocked by robots.txt, and the number of pages not selected for inclusion in Google’s results. This is great information. More at the Official Google Webmaster Central Blog.

[Screenshot: the Index Status graph in Google Webmaster Tools]

Presenting PDF documents on the web: a summary

A good post on the Google Webmaster Central Blog covers PDFs in Google search results, something which is probably of critical importance to most industrial and scientific companies. It’s almost certain that you have catalogues, brochures or data sheets on your website as PDF files, and these seem to be appearing ever more prominently in Google’s results (so long as you don’t hide them from your website visitors by making them available only on request). We’ve discussed how to present PDF files before, here and here and here and here and here! However, this article comes straight from the horse’s mouth, so to speak. Do remember one common failing which I see quite frequently: PDF brochures which are just scans of the original printed documents. These are images, so Google can’t read their content. If you have PDF documents which are scanned images, you need to get proper text-based versions urgently. You can spot a scanned PDF quite easily: the text won’t be selectable.
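One small, general point which applies to PDFs just as much as to ordinary pages: link to them with descriptive anchor text, so the search engines (and your visitors) know what they’re getting before they click. A quick sketch, with an invented product and file name:

    <!-- Descriptive anchor text for a link to a PDF; the product and file name are invented examples -->
    <a href="/datasheets/widget-3000-datasheet.pdf">Widget 3000 technical data sheet (PDF)</a>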

Oops, I didn’t mean to make that public

I’m sure we’ve all, at one time or another, posted something online which was private or just plain incorrect. While changing the page isn’t hard, what if you’re unlucky enough that a search engine has been round and hoovered up what you’ve written in the meantime? Don’t forget, Google and Bing effectively keep a public copy of the entire web (click the ‘cached’ link next to almost every Google result). Unfortunately, you’ll just have to wait until they come round again. If you have an “XML Sitemap” – and you should – you can mark the corrected page as high priority, which encourages the search engines to re-crawl it sooner the next time they visit. But you will still have to wait.
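In case you’ve never looked inside one, here’s a sketch of a single XML Sitemap entry with the priority turned up and the modification date refreshed. The URL and date are placeholders, and remember that priority is only a hint to the search engines, not a command:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <!-- The corrected page; the URL and date are placeholders -->
      <url>
        <loc>http://www.example.com/corrected-page/</loc>
        <lastmod>2013-05-01</lastmod>
        <priority>1.0</priority>
      </url>
    </urlset>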

Harder than this is getting a page removed from the index completely. Just deleting a page from your site is a hopeless approach; the search engines might take months of crawling and re-crawling your site before they decide that the page really no longer exists. You need to flag the page as having gone, either by redirecting its URL to a replacement page, or by telling the search engines it has gone for good, which in Google’s case you do through Webmaster Tools.
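If your site runs on Apache, for example, both options can be handled with a line in the site’s .htaccess file; the page names below are placeholders, and other servers have their own equivalents:

    # Option 1: permanently redirect the old page to a replacement (HTTP 301)
    Redirect permanent /old-page.html /replacement-page.html

    # Option 2: tell crawlers the page has gone for good (HTTP 410)
    Redirect gone /old-page.html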

(You have got a Webmaster Tools account, haven’t you? Whoever created your website should have ensured you have one, as part of the service, or they weren’t doing their job properly. But if you haven’t got one, sort it out today. It’s in week 1 of our Insider Programme; that’s how essential it is.)

Now Google has made the process of removing a URL from the results a little easier, and Easier URL removals for site owners on Google Webmaster Central gives the full story. You’ll still need – eventually – to properly indicate that the page no longer exists, or block the “Googlebot” from crawling the page, but you can fast-track its removal now.
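Blocking Googlebot, incidentally, takes just a couple of lines in your robots.txt file; a sketch, with a made-up page name:

    # Stop Googlebot from crawling one particular page (the page name is a made-up example)
    User-agent: Googlebot
    Disallow: /accidental-page.html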

Are you on top of your 404s?

“404” is one of those bits of computer code geekery which will be recognised by even the non-IT-savvy internet user. But as a website owner, it’s very important that you correctly serve up 404s – indicating that someone’s on the right domain, but the page they’ve asked for doesn’t exist. An excellent and very thorough introductory post on the Google Webmaster Central blog called Do 404s hurt my site? is a good place to brush up on what a “404” should really be used for. If your website is managed for you by someone else, I’d forward the article to them and get them to confirm that non-existent pages are being handled correctly.
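The quickest check is to ask for a page you know doesn’t exist and look at the HTTP status code which comes back. A correctly configured site returns a genuine 404 (the error page itself can still be friendly and helpful); a common mistake, the so-called “soft 404”, is to show an error page but return a 200 as if everything were fine. Roughly, with an invented page name:

    Request:         GET /this-page-does-not-exist.html HTTP/1.1

    Correct server:  HTTP/1.1 404 Not Found
    Soft 404:        HTTP/1.1 200 OK   (looks like a normal page to Google)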