Yesterday Google published details of one approach it’s now taking to move beyond ‘deep learning’. I’d only suggest diving into this stuff if you’re really keen, because it’s not aimed at those of us whose brains are rather smaller than planets.
Its new system is called the Pathways Language Model (PaLM). It seems to me to be a way of getting AI to tackle new tasks without retraining it on everything that’s out there, good or bad. This ‘few-shot’ approach aims to simulate how humans learn: drawing on a relatively small number of examples to solve problems we’ve not seen before.
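In practice, ‘few-shot’ means showing a model a handful of worked examples inside the prompt itself, rather than retraining it for each new task. A minimal sketch of the idea (the function name and example questions are mine, not Google’s):

```python
# A minimal sketch of few-shot prompting: a handful of worked examples
# followed by a new question, all packed into a single prompt string.
# The function name and examples are hypothetical, not from PaLM itself.

def build_few_shot_prompt(examples, query):
    """Join (question, answer) examples and a new query into one prompt."""
    lines = [f"Q: {q}\nA: {a}" for q, a in examples]
    lines.append(f"Q: {query}\nA:")  # trailing "A:" invites the model to answer
    return "\n\n".join(lines)

examples = [
    ("What is the capital of France?", "Paris"),
    ("What is the capital of Japan?", "Tokyo"),
]
prompt = build_few_shot_prompt(examples, "What is the capital of Italy?")
print(prompt)
```

The model never sees the answer to the final question during training; the pattern in the prompt alone is what steers it.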
Why should we care? I’d say this illustrates how far down the road Google’s researchers are towards generating search results as good as those an individual could provide, if we were able to store the entire web in our heads. It reinforces the fact that when someone says to us, “Get us to the top position on Google”, the only answer should be: “First, we need to provide the best information”.
The post, ‘Pathways Language Model (PaLM): Scaling to 540 Billion Parameters for Breakthrough Performance’, is on the Google AI Blog.