NewsBreak, the most downloaded news app in the United States, was reportedly using artificial intelligence (AI) to create fake stories.

According to Reuters, the free app published a fabricated article titled "Christmas Day Tragedy Strikes Bridgeton, New Jersey Amid Rising Gun Violence in Small Towns" last December. Local police promptly debunked the story, which falsely reported a shooting incident in Bridgeton, New Jersey.

In a Facebook post, the Bridgeton police department dismissed the NewsBreak article as "entirely false," criticizing the app for circulating AI-generated fiction that "they have no problem publishing to readers."

Following the statement, NewsBreak, headquartered in Mountain View, California, with offices in Beijing and Shanghai, removed the fabricated article and told Reuters that the content had come from another source, the website findplace.xyz.

The company added that when it discovers inaccurate content or a violation of its community standards, it immediately takes action to remove the content.

However, a deeper investigation revealed that this incident was not isolated. Reuters also reported that since 2021, NewsBreak had published at least 40 false stories, many of which were AI-generated. These stories have had real-world consequences, affecting local communities.

NewsBreak claims to have more than 50 million monthly users. It publishes licensed content from major media outlets like CNN, Reuters, and Fox, as well as local news and press releases obtained through web scraping, which it then rewrites using AI.

The app billed itself as "the go-to source for all things local." However, its extensive use of AI tools has led to significant errors. In March, NewsBreak added a disclaimer to its homepage, warning that its content "may not always be error-free."


AI Journalism in Australia

AI-generated journalism remains a problem in and out of the United States. In Australia, Australian Community Media (ACM) will take no action against its in-house lawyer James Raptis, who was implicated in creating websites that later published thousands of articles using copy taken from legitimate media outlets.

The articles reportedly appeared on four websites that used AI to rewrite original news stories, with some carrying the byline James Raptis. The lawyer told the ABC that he had no involvement in writing or publishing the articles, adding that he had only set up and hosted the sites.

Hours after the media tried to contact him, the websites were all taken down, and the lawyer's social media accounts were shut down or made private. Raptis noted that the four websites, F1 Initiative, League Initiative, Surf Initiative, and AliaVera, were operated by another person without his oversight.


Australia's ACM Accepts James Raptis' Explanation

James Raptis' private firm shares AliaVera's office address. ACM, which is owned by Antony Catalano, the former boss of Domain, publishes 16 daily and 55 non-daily news brands, including the Illawarra Mercury, the Canberra Times, and the Newcastle Herald.

According to reports, ACM's management has accepted Raptis' explanation that another person was responsible.


Written by Aldohn Domingo


ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.