OpenAI has introduced its latest artificial intelligence model, the o1 series, which features human-like reasoning and deliberate problem-solving abilities. The model is designed to spend more time thinking before responding to queries, allowing it to tackle complex, challenging problems in fields such as science, coding, and mathematics.
The new o1 series aims to simulate a more deliberate thinking process, refining its strategies and recognising mistakes in a manner similar to human cognition. According to OpenAI’s Chief Technology Officer, Mira Murati, this model marks a significant leap in AI capabilities. Murati predicts that this advancement will fundamentally change human interactions with AI systems. “We’ll see a deeper form of collaboration with technology, akin to a back-and-forth conversation that assists reasoning,” Murati stated.
Unlike existing AI models known for quick, intuitive responses, the o1 series introduces a slower, more thoughtful approach to reasoning. This model is expected to drive progress in various fields, including science, healthcare, and education, by assisting in exploring complex ethical and philosophical dilemmas, as well as abstract reasoning. Mark Chen, Vice-President of Research at OpenAI, noted that early tests by professionals in diverse fields showed the o1 series performs better at problem-solving compared to previous AI models. “An economics professor remarked that the model could solve a PhD-level exam question ‘probably better than any of the students,’” Chen revealed.
However, the model does have limitations: its knowledge base extends only to October 2023, it cannot browse the internet, and it does not support file or image uploads. Despite these constraints, the launch coincides with reports of OpenAI seeking to raise $6.5 billion, potentially reaching a valuation of $150 billion, as noted by Bloomberg News. If achieved, this valuation would place OpenAI far ahead of competitors such as Anthropic and xAI.
The rapid development of advanced generative AI has prompted safety concerns among governments and technologists about its broader societal implications. OpenAI has faced internal criticism for seemingly prioritising commercial interests over its original mission to develop AI for the benefit of humanity. Last year, CEO Sam Altman was temporarily ousted by the board over concerns about the company’s direction, an event referred to internally as “the blip.” Additionally, several safety executives, including Jan Leike, have departed the company, citing a shift in focus from safety to commercialisation.
In response to these criticisms, OpenAI has introduced a new safety training approach for the o1 series, leveraging its enhanced reasoning capabilities to ensure compliance with safety and alignment guidelines. The company has also formalised agreements with AI safety institutes in the US and UK, providing them with early access to research versions of the model to support collaborative efforts in safeguarding AI development.
As OpenAI advances its technological capabilities with the new o1 series, the organisation strives to balance innovation with a renewed commitment to safety and ethical considerations in AI deployment.