The UK government must implement a system for recording AI misuse and malfunctions, or it risks missing critical incidents, according to a think tank.

Research from the Centre for Long-Term Resilience (CLTR) advises the UK government to log incidents involving AI in public services and to create a UK-wide hub for collating such reports, as reported by The Guardian.

CLTR, which focuses on how governments respond to crises and extreme risks, recommends a reporting structure for AI modeled on the Air Accidents Investigation Branch (AAIB). According to Organisation for Economic Co-operation and Development (OECD) statistics, news outlets have documented more than 10,000 AI "safety incidents" since 2014, events that have caused physical, economic, and psychological harm.

Incidents logged in the OECD's AI safety monitor include a deepfake video of Labour leader Keir Starmer, inaccurate outputs from Google's Gemini model, self-driving vehicle crashes, and a chatbot implicated in an assassination plot.

Tommy Shaffer Shane of CLTR noted that the UK government currently has little visibility into such AI incidents, and argued that incident reporting, which transformed safety practice in aviation and medicine, should be adopted for AI as well.

The think tank recommends that the UK adopt a robust incident reporting system like those used in safety-critical industries. Because no regulator oversees general-purpose AI systems such as chatbots and image generators, many AI incidents go unrecorded. Labour has pledged to impose binding regulations on companies developing the most sophisticated AI.

Such a system would help detect AI failures quickly, anticipate similar incidents, and coordinate rapid responses to major concerns. Incident reporting could also give the government early warning of large-scale harms.

Even with assessments by the UK AI Safety Institute, certain AI models may only reveal risks after deployment. Incident reporting would also help the government gauge how well its regulatory framework is working.

The study also highlighted that an incident reporting system would help the Department for Science, Innovation and Technology's (DSIT) Central AI Risk Function (CAIRF) analyze and report AI threats.

UK Joins Global Effort to Promote AI Safety

The UK and 10 other nations signed a declaration on AI safety cooperation in May, which includes a commitment to track AI harms and incidents.

At the Seoul AI Safety Summit, tech giants including Microsoft, Amazon, and OpenAI struck a landmark global AI safety accord, per CNBC. Under the pact, companies from the US, China, Canada, the UK, France, South Korea, and the UAE voluntarily commit to developing advanced AI models safely. These firms will publish safety frameworks to address risks such as misuse by malicious actors.


The safety frameworks will define "red lines" for frontier AI hazards, such as automated cyberattacks and the creation of bioweapons. If a company cannot mitigate such risks, it would activate a "kill switch" and halt development of the model.

UK Prime Minister Rishi Sunak lauded the commitments to "ensure safe AI development, transparency, and accountability."

The pact builds on November's promises from generative AI software makers. Companies will announce these safety thresholds before the AI Action Summit in France in early 2025, after consulting "trusted actors," including their home governments.

(Photo : BRENDAN SMIALOWSKI/AFP via Getty Images)
A person looks at Wehead, an AI companion that can use ChatGPT, during Pepcom's Digital Experience at The Mirage resort during the Consumer Electronics Show (CES) in Las Vegas, Nevada on January 8, 2024.

Summer Travelers Warned Over Surge of AI-Powered Scams

Separately, users are being warned about a surge of AI-powered travel scams during the summer vacation season, TechTimes recently reported.

Booking.com chief information security officer Marnie Wilking reported a 500 to 900% surge in phishing attacks worldwide over the previous year and a half, a rise she attributes largely to generative AI tools that have made such attacks more sophisticated.

Phishers trick victims into handing over login credentials or financial information. Travel websites are attractive targets because travelers routinely disclose personal and payment details. With tools like ChatGPT, scammers can now send grammatically flawless phishing emails in many languages.

To counter this threat, Wilking recommends that travelers and hosts enable two-factor authentication (2FA) for their online accounts. This security mechanism requires users to confirm their identity with a one-time code sent to their phone or generated by an authenticator app.



ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.