Apple Siri: OpenAI & Anthropic AI Takeover?

Bloomberg — Apple Inc. (AAPL) is considering using artificial intelligence technology from Anthropic PBC or OpenAI to power a new version of Siri, sidelining its own in-house models in a potentially blockbuster move aimed at turning around its faltering AI effort.

The iPhone maker has talked with both companies about using their large language models for Siri, according to people familiar with the discussions. It has asked them to train versions of their models that could run on Apple's cloud infrastructure for testing, said the people, who asked not to be identified discussing private deliberations.

See more: Apple doesn't need to panic and buy its way to AI glory

If Apple ultimately moves forward, it would represent a monumental reversal. Currently, the company powers most of its AI features with its own technology, which it calls Apple Foundation Models, and had been planning a new version of its voice assistant running on that technology for 2026.

A switch to Anthropic's Claude or OpenAI's ChatGPT models for Siri would be an acknowledgment that the company is struggling to compete in generative AI, the most important new technology in decades. Apple already allows ChatGPT to answer web-based search queries in Siri, but the assistant itself is powered by Apple.

Apple's investigation of third-party models is at an early stage, and the company hasn't made a final decision on using them, the people said. A competing project, internally dubbed LLM Siri, that uses in-house models remains in active development.

Making a change, which is under discussion for next year, could let Cupertino, California-based Apple offer Siri features on par with those of assistants on Android phones, helping the company shed its reputation as an AI laggard.

Representatives for Apple, Anthropic and OpenAI declined to comment. Apple shares rose more than 2% after Bloomberg reported on the deliberations.

Siri's struggles

The project to evaluate external models was initiated by Siri chief Mike Rockwell and software engineering head Craig Federighi. They were given oversight of Siri after the feature was taken out of the command of John Giannandrea, the company's AI chief, who was sidelined following a tepid response to Apple Intelligence and delays to Siri features.

Rockwell, who previously launched the Vision Pro headset, took on the Siri engineering role in March. After assuming the position, he instructed his new group to assess whether Siri would do a better job handling queries using Apple's own models or third-party technology, including Claude, ChatGPT and Google Gemini from Alphabet Inc. (GOOGL).

See more: Apple explores a possible alliance with, or purchase of, Perplexity to strengthen its AI strategy

After multiple rounds of testing, Rockwell and other executives concluded that Anthropic's technology is the most promising for Siri's needs, the people said. That led Adrian Perica, the company's vice president of corporate development, to open discussions with Anthropic about using Claude, the people said.

The Siri assistant, originally launched in 2011, has fallen behind popular AI chatbots, and Apple's attempts to upgrade the software have been hampered by engineering problems and delays.

A year ago, Apple unveiled new Siri capabilities, including ones that would let it tap into users' personal data and analyze on-screen content to better answer queries. The company also demonstrated technology that would let Siri control applications and features more precisely across Apple devices.

The improvements were far from ready. Apple initially announced plans for an early 2025 release but ultimately delayed the launch indefinitely. The features are now planned for next spring, Bloomberg News has reported.

Uncertainty about AI

People with knowledge of Apple's AI team say it is operating with a high degree of uncertainty and a lack of clarity, with executives still carefully weighing a number of possible directions. Apple has already approved a multibillion-dollar budget for 2026 to run its own models via the cloud, but its plans beyond that remain murky.

Even so, Federighi, Rockwell and other executives have grown increasingly open to the idea that adopting external technology is the key to a near-term turnaround. They don't see a need for Apple to rely on its own models, which they currently consider inferior, when it can partner with third parties instead, according to the people.

See more: Apple risks new EU fines for restrictions on the App Store

Licensing AI from third parties would mirror an approach taken by Samsung Electronics Co. While the company brands its features under the Galaxy AI umbrella, many of them are actually based on Gemini. Anthropic's technology, meanwhile, is already used by Amazon.com Inc. (AMZN) to help power the new Alexa+.

Down the road, if its own technology improves, executives believe Apple should own the models, given their growing importance to how its products operate. The company is working on a series of projects, including a desktop robot and glasses, that will make heavy use of AI.

Apple also recently considered acquiring Perplexity to help bolster its AI work, Bloomberg has reported. It also briefly held discussions with Thinking Machines Lab, the AI startup founded by former OpenAI chief technology officer Mira Murati.

Souring morale

Apple's models are developed by a team of roughly 100 people led by Ruoming Pang, an Apple distinguished engineer who joined from Google in 2021 to lead the work. He reports to Daphne Luong, a senior director in charge of AI research.

Luong is one of Giannandrea's top lieutenants, and the foundation models team is one of the few significant AI groups still reporting to Giannandrea. Even in that area, Federighi and Rockwell have taken on a larger role.

Regardless of which path Apple takes, the proposed shift has weighed on the team, which includes some of the most sought-after talent in the AI industry.

Some members have signaled internally that they are unhappy the company is considering third-party technology, as it creates the perception that they are to blame, at least in part, for the company's shortcomings in AI. They have said they could leave for the multimillion-dollar packages being offered by Meta Platforms Inc. (META) and OpenAI.

Meta, the owner of Facebook and Instagram, has been offering some engineers annual pay packages of between US$10 million and US$40 million, or even more, to join its new Superintelligence Labs group, according to people with knowledge of the matter. Apple is known, in many cases, to pay its AI engineers half, or even less, of what they can get on the open market.

One of Apple's most senior large language model researchers, Tom Gunter, left last week. He had worked at Apple for about eight years, and some colleagues say he will be hard to replace, given his unique skill set and the willingness of Apple's competitors to pay exponentially more for talent.

This month, Apple also came close to losing the team behind MLX, its key open-source system for developing machine learning models on the latest Apple chips. After the engineers threatened to leave, Apple made counteroffers to retain them, and for now they are staying.

Discussions with Anthropic and OpenAI

In its talks with both Anthropic and OpenAI, the iPhone maker requested custom versions of Claude and ChatGPT that could run on Apple's Private Cloud Compute, infrastructure based on high-end Mac chips that the company currently uses to operate its most sophisticated in-house models.

Apple believes that running the models on its own chips, housed in cloud servers it controls rather than on third-party infrastructure, will better safeguard user privacy. The company has already tested the feasibility of the idea internally.

See more: Apple presents Liquid Glass, the most ambitious redesign in its history: the features

Other Apple Intelligence features are powered by AI models that reside on consumers' devices. These models, slower and less powerful than their cloud-based counterparts, are used for tasks such as summarizing short emails and creating Genmojis.

Apple will open up the on-device models to external developers later this year, letting app makers build AI features based on its technology.

The company hasn't announced plans to give apps access to the cloud models. One reason is that the cloud servers don't yet have the capacity to handle a flood of new third-party features.

The company isn't currently planning to move away from its in-house models for on-device or developer use cases. Even so, engineers on the foundation models team fear that the shift to a third party could presage a similar move for other features in the future.

Last year, OpenAI offered to train on-device models for Apple, but the iPhone maker wasn't interested.


Since December 2024, Apple has used OpenAI to handle some features. In addition to answering world-knowledge queries in Siri, ChatGPT can compose blocks of text in Writing Tools. Later, in iOS 26, there will be a ChatGPT option for image generation and on-screen image analysis.

While they discuss a possible deal, Apple and Anthropic have disagreed over preliminary financial terms, according to the people. The AI startup is seeking a multibillion-dollar annual fee that increases sharply each year. The struggle to reach an agreement has left Apple contemplating working with OpenAI or others if it moves ahead with the third-party plan, they said.

See more: Apple iOS 26 changes: compatible devices list

Changes in management

If Apple strikes a deal, the influence of Giannandrea, who joined Apple from Google in 2018 and is a proponent of in-house development of large language models, would be further diminished.

In addition to losing Siri, Giannandrea was stripped of responsibility for Apple's robotics unit. And, in moves that hadn't been reported previously, the Core ML and App Intents teams, the groups responsible for the frameworks that let developers integrate AI into their apps, were transferred to Federighi's software engineering organization.

Apple's foundation models team had also been building large language models to help employees and external developers write code in Xcode, its programming software. The company shut down the project, announced last year as Swift Assist, about a month ago.

Instead, Apple will release a new version of Xcode later this year that can tap third-party programming models. App developers will be able to choose between ChatGPT and Claude.

Read more at Bloomberg.com
