Google CEO cautions users on AI reliability in BBC interview
Sundar Pichai, chief executive of Alphabet and Google, has urged users not to "blindly trust" artificial intelligence tools, emphasizing their susceptibility to errors in an exclusive BBC interview published Tuesday. The remarks come as Google rolls out deeper AI integration across its services, including the latest iteration of its Gemini model.
Current limitations of AI technology
Pichai acknowledged that while AI models like Google's Gemini offer creative and conversational benefits, such as assisting with writing tasks, they remain "prone to errors." He stressed the importance of a "rich information ecosystem" in which users cross-reference AI outputs with other sources, including traditional search tools.
"We take pride in the amount of work we put in to give us as accurate information as possible," Pichai told the BBC, "but the current state-of-the-art AI technology is prone to some errors." He advised users to leverage AI for its strengths while recognizing its limitations: "People have to learn to use these tools for what they're good at, and not blindly trust everything they say."
Competitive pressures and market shifts
The interview coincides with Google's aggressive push to reclaim market share from rivals like OpenAI's ChatGPT. Since May 2025, the company has embedded its Gemini chatbot into search results via a new "AI Mode," framing the move as a "new phase of the AI platform shift." Analysts view the integration as a strategic response to ChatGPT's challenge to Google's long-standing search dominance.
Pichai's comments align with a BBC investigation earlier this year, which found that leading AI chatbots (including Google's Gemini, OpenAI's ChatGPT, Microsoft's Copilot, and Perplexity AI) frequently produced "significant inaccuracies" when summarizing news content. The study underscored ongoing concerns about reliability in generative AI systems.
Balancing innovation with responsibility
Addressing the rapid pace of AI development, Pichai described a "tension" between technological advancement and the need for safeguards against harm. Alphabet's approach, he said, involves being "bold and responsible at the same time," with investments in AI security keeping pace with innovation.
As an example, he cited Google's recent open-sourcing of tools designed to detect AI-generated images, a move aimed at combating misinformation. "Our consumers are demanding [rapid progress]," Pichai noted, "but we're also scaling our mitigations proportionally."
Response to Musk's warnings on AI concentration
When asked about resurfaced comments from Elon Musk warning that a single entity, such as Google-owned DeepMind, could create an AI "dictatorship," Pichai dismissed the scenario as unlikely. "No one company should own a technology as powerful as AI," he conceded, but added: "If there was only one company building AI technology and everyone else had to use it, I would be concerned, but we are so far from that scenario right now."
"We are moving fast through this moment. I think our consumers are demanding it."
Sundar Pichai, CEO of Alphabet and Google
What's next for Google's AI strategy
Google's latest Gemini 3.0 model has begun regaining traction against competitors, though the company faces ongoing scrutiny over accuracy and ethical deployment. Pichai's interview signals a dual focus: accelerating AI capabilities while reinforcing user awareness of the technology's fallibility.
Industry observers note that the tech giant's willingness to openly discuss AI's limitations, a departure from earlier hype, may reflect broader shifts toward transparency as regulatory and public pressure mounts.