Apple has announced that it will roll back its notification summarization feature for news apps in the upcoming iOS 18.3 update for iPhones with Apple Intelligence, following concerns raised by media organizations about misinformation.
The British Broadcasting Corporation (BBC) found in December that the technology was prone to errors: Apple Intelligence incorrectly summarized a notification about Luigi Mangione, the alleged assassin of UnitedHealth Group executive Brian Thompson, as saying that Mangione had shot himself. Ken Schwencke, a senior editor at the news site ProPublica, found one New York Times notification erroneously summarized as “Netanyahu Arrested.” In fact, the International Criminal Court had issued an arrest warrant for Benjamin Netanyahu, the prime minister of Israel, but had not arrested him.
Alongside the BBC’s complaint to Apple, the National Union of Journalists (NUJ) and Reporters Without Borders also called for Apple to remove the feature. Laura Davison, NUJ general secretary, argues, “At a time [when] access to accurate reporting has never been more important, the public must not be placed in a position of second-guessing the accuracy of news they receive.”
Apple first responded by adding a warning acknowledging that Artificial Intelligence (AI)-generated summaries are still experimental and could be wrong. Since then, the company has disabled the feature for news apps and plans to improve the AI on iPhones before re-releasing it in a future software update. Other Apple devices with the feature, however, such as MacBooks, have not been affected.
These errors are not uncommon: issues continue to plague AI products across all platforms. Google’s AI-generated summaries of search results, powered by its Gemini model, have often provided users with inaccurate and even potentially dangerous advice. These outputs have ranged from suggesting non-toxic glue to stick cheese onto pizza crust, traced back to a comment left by a Reddit user around 12 years ago, to recommending eating rocks for their mineral content. BLS Director of Technology Mr. Patrick Hourigan explains, “AI generated content can contain bias, hallucinations or otherwise incorrect information. The issue with summarizing legitimate news source[s] is that you’re taking a fact-checked news story […], and opening [it] up […] to all of AI’s flaws.”
Despite these incidents, most of the feature’s summaries have been relatively accurate. Touted as a way to reduce clutter and the time needed to go through notifications, Apple Intelligence was met with optimism at launch. Zubair Hasan (I) adds, “I think [the feature] will get better, [and they will] find a certain niche [eventually].”
These challenges have sparked debates about the technology’s viability. AI is praised for its ability to generate code, videos and 3D models, predict cancers and potentially create trillions of dollars in economic value. On the other hand, AI can severely limit creativity, spread misinformation and hinder true education.
Mr. Hourigan reflects, “Every emerging technology is initially regarded with fear and trepidation. […] If we look at AI as a replacement for human creativity and ingenuity, I think we will be in trouble. If we instead look at AI as a tool to be used to help accomplish specific tasks within closed environments, I think we will have more success with adopting AI tools in our society.”