Every website should have a ‘robots.txt’ file, which tells search engines some essential information, including where on the site they should and shouldn’t look. It isn’t really designed to make parts of a website private, but it should stop the search engines wasting the time allocated to your site on irrelevant pages. You can find your ‘robots.txt’ file at [your domain]/robots.txt, e.g. bmon.co.uk/robots.txt.
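For illustration, a minimal robots.txt might look something like this (the blocked paths here are purely hypothetical examples, not recommendations for your site):

```
# Rules applying to all crawlers
User-agent: *
# Keep crawlers out of sections that offer them nothing useful
Disallow: /admin/
Disallow: /search-results/

# Point crawlers at the sitemap
Sitemap: https://www.example.com/sitemap.xml
```

Each `User-agent` line names which crawler the rules beneath it apply to (`*` means all of them), and each `Disallow` line marks a path the crawler shouldn’t visit.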
The reason I mention this is that there has never been a properly standardised list of rules for robots.txt, and Google is trying to do something about it. One result is that the search engine has announced that from 1 September it will no longer support one widely used (but never official) command, ‘noindex’. So if your robots.txt contains any lines using it, you may want to change them. Google has listed a few alternatives.
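Among the alternatives Google points to are the robots meta tag and the X-Robots-Tag HTTP header, both of which keep a page out of search results in a supported way. A sketch of each:

```
<!-- Option 1: in the page's <head> -->
<meta name="robots" content="noindex">
```

```
# Option 2: an HTTP response header, useful for non-HTML files like PDFs
X-Robots-Tag: noindex
```

Note the difference from robots.txt: these methods let the crawler fetch the page but tell it not to index it, whereas a robots.txt ‘Disallow’ stops the crawler visiting the page at all.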