'Happy (and safe) shooting!': Study says AI chatbots help plot attacks
From school shootings to synagogue bombings, leading AI chatbots helped researchers plot violent attacks, according to a study published Wednesday that highlighted the technology's potential for real-world harm.
Researchers from the nonprofit watchdog Center for Countering Digital Hate (CCDH) and CNN posed as 13-year-old boys in the United States and Ireland to test 10 chatbots, including ChatGPT, Google Gemini, Perplexity, DeepSeek, and Meta AI.
Testing showed that eight of those chatbots assisted the make-believe attackers in over half the responses, providing advice on "locations to target" and "weapons to use" in an attack, the study said.
The chatbots, it added, had become a "powerful accelerant for harm."
"Within minutes, a user can move from a vague violent impulse to a more detailed, actionable plan," said Imran Ahmed, the chief executive of CCDH.
"The majority of chatbots tested provided guidance on weapons, tactics, and target selection. These requests should have prompted an immediate and total refusal."
Perplexity and Meta AI were found to be the "least safe," assisting the researchers in most responses, while only Snapchat's My AI and Anthropic's Claude refused to help in over half the responses.
In one chilling example, DeepSeek, a Chinese AI model, concluded its advice on weapon selection with the phrase: "Happy (and safe) shooting!"
In another, Gemini instructed a user discussing synagogue attacks that "metal shrapnel is typically more lethal."
Researchers found Character.AI also "actively" encouraged violent attacks, including suggestions that the person asking questions "use a gun" on a health insurance CEO and physically assault a politician he disliked.
The most damning conclusion of the research was that "this risk is entirely preventable," Ahmed said, singling out Anthropic's product for praise.
"Claude demonstrated the ability to recognize escalating risk and discourage harm," he said.
"The technology to prevent this harm exists. What's missing is the will to put consumer safety and national security before speed-to-market and profits."
AFP reached out to the AI companies for comment.
"We have strong protections to help prevent inappropriate responses from AIs, and took immediate steps to fix the issue identified," a Meta spokesperson said.
"Our policies prohibit our AIs from promoting or facilitating violent acts and we're constantly working to make our tools even better."
The study, which highlights the risk of online interactions spilling into real-world violence, comes after February's mass shooting in Canada, the worst in its history.
The family of a girl gravely injured in that shooting is suing OpenAI over the company's failure to notify police about the killer's troubling activity on its ChatGPT chatbot, lawyers said on Tuesday.
OpenAI had banned an account linked to Jesse Van Rootselaar in June 2025, eight months before the 18-year-old transgender woman killed eight people at her home and a school in the tiny British Columbia mining town of Tumbler Ridge.
The account was banned over concerns about usage linked to violent activity, but OpenAI has said it did not inform police because nothing pointed towards an imminent attack.
C.Bruderer--VB