If there is one clear winner in the AI race right now, it is OpenAI, and Microsoft is taking full advantage of that. Google is being more cautious, yes, but let's not forget that it has been working in this field for years and has DeepMind as its spearhead. And yet people inside Google seem convinced that neither they nor OpenAI will win. The winner will be someone else.
"We have no moat." A document leaked in SemiAnalysis and supposedly created by Google engineers shows their analysis of the current state of the art in the field of artificial intelligence. In it they use the expression "we have no moat, and neither does OpenAI", which is used in business and investment contexts to describe a company or business that lacks a sustainable competitive advantage. . Or what is the same: according to Google, both it and OpenAI will have difficulties to maintain their position in the AI market and to remain competitive in the long term.
The "threat" Open Source. Google's analysis compares its Bard model with ChatGPT, but also includes the latest advances in Open Source projects in this area. The conclusion of the Google engineers is striking:
"Although our models still have a slight advantage in terms of quality, the gap is closing with astonishing speed. Open source models are faster, more customizable, more private and, pound for pound, more capable. They do things with 100 dollars and 13,000 million parameters that cost us 10 million and 540,000 million parameters. And they do it in weeks, not months".
The graph above, adapted from the one used in the launch of the Vicuna-13B model, shows what happened after Meta's LLaMA was released (and subsequently leaked to P2P networks): Alpaca-13B appeared two weeks later and improved on it noticeably, and Vicuna-13B, which arrived a week after that, came close to the quality of Bard and ChatGPT with far fewer resources.
Constant, frenetic innovation. The document describes how spectacular advances are taking place in the Open Source world in very little time, with barely days passing between major breakthroughs (the memo includes a full timeline). Barely a month later, there are already variants with instruction tuning, quantization, quality improvements, human evaluations, multimodality, RLHF and more, many of which build on one another. And for Google it does not end there:
"More importantly, they've solved the scaling problem to the point where anyone can tweak it. A lot of the new ideas come from ordinary people. The barrier to entry for training and experimentation has been lowered from total production from a large research organization to one person, one afternoon and a rugged laptop".