Americans increasingly distrust the media, with half saying national news outlets intend to mislead or deceive them into adopting a specific viewpoint, a Gallup and Knight Foundation study found in February.
A recently introduced news site, Boring Report, thinks it’s found an antidote to public skepticism by enlisting artificial intelligence to rewrite news headlines from their original sources and summarize those stories. The service says it uses the technology to “aggregate, transform, and present news” in the most factual way possible, without any sensationalism or bias.
“The current media landscape and its advertising model encourage publications to use sensationalist language to drive traffic,” a representative at Boring Report told Fortune in an email. “This affects the reader as they have to parse through emotionally charging, alarming, and otherwise fluffy language before they get to the core facts about an event.”
“Reached #6 on the Magazines & Newspaper section of the App Store today! Thank you, everyone, for the support! We will continue to work hard to get you updates and new features.”
— Boring Report (@boringreport) May 8, 2023
On its website, as an example, Boring Report juxtaposed a fictional, hyperbolic headline, “Alien Invasion Imminent: Earth Doomed to Destruction,” with the one it would write instead: “Experts Discuss Possibility of Extraterrestrial Life and Potential Impact on Earth.”
Boring Report told Fortune that it doesn’t claim to remove bias; its goal, rather, is simply to use A.I. to inform readers in a way that strips out “sensationalist language.” The platform uses software from OpenAI, the Silicon Valley-based company, to generate summaries of news articles.
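Boring Report hasn’t published its pipeline, but as a rough illustration, a rewrite step of this kind might look like the sketch below, built on OpenAI’s chat-completions API. The model choice, prompt wording, and function name are assumptions for illustration, not the app’s actual code.

```python
# Minimal sketch of a de-sensationalizing rewrite step, assuming the
# OpenAI Python client (>=1.0). The model and prompt are illustrative
# guesses, not Boring Report's actual configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def boring_rewrite(headline: str, article_text: str) -> str:
    """Return a neutral, facts-only rewrite of a headline and article."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed; the app's actual model is unknown
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite the headline and summarize the article in plain, "
                    "factual language. Remove sensationalist, emotionally "
                    "charged, or speculative wording. State only the facts."
                ),
            },
            {
                "role": "user",
                "content": f"Headline: {headline}\n\nArticle: {article_text}",
            },
        ],
        temperature=0,  # favor consistent, conservative wording
    )
    return response.choices[0].message.content


# Using the article's own example, a call like
# boring_rewrite("Alien Invasion Imminent: Earth Doomed to Destruction", "...")
# would be expected to produce something closer to "Experts Discuss
# Possibility of Extraterrestrial Life and Potential Impact on Earth."
```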
“In the future, we aim to tackle bias by combining articles from multiple publications into a single generated summary,” Boring Report said, adding that humans currently don’t double-check articles before publication and review them only if a reader points out an egregious error.
The service publishes a list of headlines and includes links to original sources. For instance, one of the headlines on Tuesday was “Truck Crashes into Security Barriers near White House,” which links back to the source article on NBC titled “Driver arrested and Nazi flag seized after truck crashes into security barriers near the White House.”
Tools like OpenAI’s A.I. chatbot ChatGPT are increasingly being used across industries to do jobs that until recently were done exclusively by human workers. Some media companies, under intense financial strain, are looking to tap A.I. to handle some of the workload and cut costs.
“In some ways, the work we were doing towards optimizing for SEO and trending content was robotic,” S. Mitra Kalita, a former executive at CNN and co-founder of two media startups, told Axios in February about how newsrooms use technology to identify widely discussed subjects online and then focus stories on those topics. “Arguably, we were using what was trending on Twitter and Google to create the news agenda. What happened was a sameness across the internet.”
Newsrooms have already begun experimenting with A.I. For instance, BuzzFeed said in February that it would use A.I. to create quizzes and other content for its users in a more targeted fashion.
“To be clear, we see the breakthroughs in AI opening up a new era of creativity that will allow humans to harness creativity in new ways with endless opportunities and applications for good,” BuzzFeed CEO Jonah Peretti wrote in January before the launch of the outlet’s A.I. tool. While the company uses A.I. to help improve its quizzes, the tech doesn’t write news stories. BuzzFeed eliminated its news division last month.
Some media companies’ experiments with A.I. haven’t gone well. For instance, some articles published by the tech news site CNET using A.I., with disclosures that readers had to dig to find, included inaccuracies.
Amid the quest to change how news is written and packaged is a fear that A.I. will be misused, including to create spam sites. Earlier this month, a report by NewsGuard, a news-rating group, found that A.I.-generated news sites had become widespread and were linked to the spread of false information. The websites, some of which produced hundreds of stories daily, rarely revealed who owned or controlled them.
Boring Report, launched in March, is owned and backed by its two New York-based engineers—Vasishta Kalinadhabhotla and Akshith Ramadugu. The free service is also supported by donations and was recently ranked among the top 5 downloaded apps under the Magazines & Newspapers section of Apple’s App Store. Representatives at Boring Report declined to share specifics regarding user numbers, but told Fortune that they planned to launch a paid version in the future.
But the reason behind the new crop of A.I. media platforms is clear to NewsGuard CEO Steve Brill: readers lack mainstream news outlets that they trust. At the same time, he argues, the rise of A.I.-generated news will only make genuine sources of information harder to find.
“News consumers trust news sources less and less in part because of how hard it has become to tell a generally reliable source from a generally unreliable source,” Brill told the New York Times. “This new wave of A.I.-created sites will only make it harder for consumers to know who is feeding them the news, further reducing trust.”