Godfather of AI Shares Concerns About Trusting AI Too Much

Geoffrey Hinton, often hailed as the “Godfather of AI,” has expressed concern about placing too much trust in artificial intelligence systems—specifically OpenAI’s GPT-4. In a recent interview with CBS, Hinton admitted that despite his deep understanding of AI, he finds himself overly reliant on the chatbot for everyday tasks. “I tend to believe what it says, even though I should probably be suspicious,” he acknowledged, underlining the subtle risks of overconfidence in intelligent systems.
During the segment, Hinton tested GPT-4 with a simple logic riddle: “Sally has three brothers. Each of her brothers has two sisters. How many sisters does Sally have?” GPT-4 answered “two,” a common mistake made by both humans and machines. In reality, the correct answer is one: each brother’s two sisters are Sally and one other girl, so Sally herself has exactly one sister. Hinton expressed disappointment in the error, remarking, “It surprises me. It surprises me it still screws up on that.”
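The riddle’s logic can be checked directly with a short sketch (the variable names here are illustrative, not from the interview):

```python
# Riddle: Sally has three brothers; each brother has two sisters.
# Question: how many sisters does Sally have?

sisters_per_brother = 2

# All three brothers share the same set of sisters, so the family
# contains exactly two girls in total -- Sally and one other.
girls_in_family = sisters_per_brother

# Sally's own sister count excludes herself.
sallys_sisters = girls_in_family - 1

print(sallys_sisters)  # → 1
```

The common wrong answer, “two,” comes from counting the brothers’ sisters without removing Sally from her own count.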
He described GPT-4 as “an expert at everything,” but quickly added, “It’s not a very good expert at everything,” capturing the dual nature of AI: impressive in scope, yet inconsistent in precision. Hinton’s reflection resonates with growing concerns in the tech community about users placing blind faith in AI tools that, while powerful, are not infallible. His comments serve as a cautionary reminder, especially as these systems become increasingly embedded in daily life.
Despite highlighting GPT-4’s shortcomings, Hinton remains hopeful about the future of AI. When asked whether GPT-5 might answer the riddle correctly, he responded, “Yeah, I suspect,” suggesting that upcoming models will likely demonstrate greater reasoning capability and fewer errors. His optimism reflects the rapid pace of innovation in the field, where improvements between versions can be substantial.
Following the broadcast, several social media users reported that newer versions of OpenAI’s technology, including GPT-4o and GPT-4.1, correctly solved the riddle that tripped up GPT-4. If accurate, these reports suggest notable progress in reasoning performance, even on deceptively simple problems. The incident also highlights how quickly AI models evolve, and how small errors in one version may be fixed in the next.
OpenAI launched GPT-4 in 2023 as a major leap in reasoning, language understanding, and problem-solving. Since then, it has released GPT-4o, which delivers faster and more interactive performance, and continued development with versions like GPT-4.5 and GPT-4.1. These models are now integral to numerous industries and personal tools, but as Hinton’s remarks suggest, users must remain critical and informed when using them.
While Hinton acknowledges that GPT-4 is a groundbreaking achievement, he also reminds us that no AI system is perfect. “It’s easy to be impressed by how much these models can do,” he noted, “but we must not forget how easily they can get things wrong.” His message serves as both praise and caution: AI is powerful, but trust in it should be measured—and earned, not assumed.