In response to the threat posed by ChatGPT, Google is preparing an answer built on its own artificial intelligence. Developed by DeepMind and called Sparrow, it is meant to provide reliable, sourced answers while respecting certain limits. In this article we will look at what Sparrow is and when it could be released.
Many people see ChatGPT, the conversational artificial intelligence developed by OpenAI, as a potential competitor to Google because of its ability to provide exhaustive, complex, and above all unique answers. The tool is far from perfect: it often makes mistakes, it can spread misinformation, and some have already started using it for malicious purposes. Even so, Microsoft plans to integrate it into its Bing search engine in order to challenge Google on its own ground. No wonder Google, a pioneer in the field of AI, is worried, to the point of declaring a "code red" and reorganizing several departments to accelerate its AI projects. One of them, called Sparrow, could well compete with ChatGPT, since it takes the form of a fairly similar chatbot. Demis Hassabis, CEO of DeepMind, the Alphabet (Google's parent company) subsidiary specializing in artificial intelligence, revealed in an interview with Time that the firm plans to launch Sparrow in a private beta this year.
Google Sparrow: cautious development
DeepMind is taking a much more cautious approach than OpenAI so as not to tarnish its reputation. In a September 2022 post introducing Sparrow, the Alphabet subsidiary described its AI as "a dialogue agent that is helpful and reduces the risk of dangerous and inappropriate responses." It is based on DeepMind's Chinchilla language model, which has fewer parameters than OpenAI's GPT-3.5 but has been trained on a large amount of data. In addition, it has Internet access, which allows it to incorporate up-to-date information into its responses.
Sparrow's later launch compared to ChatGPT is intentional and, according to the company, necessary: DeepMind is working on important features that OpenAI's AI lacks, such as citing the sources used to produce an answer. For Demis Hassabis, "it's right to be cautious on that front." DeepMind also wants to establish limits that its artificial intelligence must not cross. "Our agent is designed to speak with a user, answer questions, and search the internet using Google when it's useful to find evidence to inform their answers," DeepMind said in September. The firm has therefore defined a set of rules to ensure that "model behavior is safe," including bans on making threatening statements, making hateful comments, or "pretending" to have a human identity.
Sparrow: Google's response to ChatGPT?
Will Sparrow keep all its promises? In September, testing by the Alphabet subsidiary indicated that the AI provided plausible, evidence-backed answers to factual questions 78% of the time. On the other hand, DeepMind admitted that, when it came to respecting its rules, the AI still had progress to make: testers were able to trick it into breaking them 8% of the time. For the firm, crafting better rules for Sparrow "will require both input from experts on many topics (including policymakers, social scientists and ethicists) and input from a wide range of people." We will now have to wait for the private beta to compare Google's AI to ChatGPT, both in response quality and in respect for ethics.
But Sparrow is not Google's only artificial intelligence project. The company is also working on AlphaCode, which can write code as well as a novice programmer, and LaMDA (Language Model for Dialogue Applications), a conversational AI that drew particular attention in June 2022, when one of Google's engineers publicly argued that the artificial intelligence had become conscious. Google is understandably being very cautious with its development, and we are not likely to see it in active use any time soon.