Inbred, gibberish or just MAD? Warnings rise about AI models
When academic Jathan Sadowski reached for an analogy last year to describe how AI programs decay, he landed on the term "Habsburg AI".
The Habsburgs were one of Europe's most powerful royal houses, but entire sections of their family line collapsed after centuries of inbreeding.
Recent studies have shown how AI programs underpinning products like ChatGPT go through a similar collapse when they are repeatedly fed their own data.
"I think the term Habsburg AI has aged very well," Sadowski told AFP, saying his coinage had "only become more relevant for how we think about AI systems".
The ultimate concern is that AI-generated content could take over the web, which could in turn render chatbots and image generators useless and throw a trillion-dollar industry into a tailspin.
But other experts argue that the problem is overstated, or can be fixed.
And many companies are enthusiastic about using what they call synthetic data to train AI programs. This artificially generated data is used to augment or replace real-world data. It is cheaper than human-created content and more predictable.
"The open question for researchers and companies building AI systems is: how much synthetic data is too much?" said Sadowski, lecturer in emerging technologies at Australia's Monash University.
- 'Mad cow disease' -
Training AI programs, known in the industry as large language models (LLMs), involves scraping vast quantities of text or images from the internet.
This information is broken into trillions of tiny machine-readable chunks, known as tokens.
When asked a question, a program like ChatGPT selects and assembles tokens in a way that its training data tells it is the most likely sequence to fit with the query.
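The "most likely next token" idea can be shown with a toy bigram model. This is a deliberately simplified illustration, not the actual mechanics of ChatGPT or any production system; the vocabulary and counts are invented for the example.

```python
import random

# Toy "training data": for each context token, counts of which token
# followed it in a tiny invented corpus.
bigram_counts = {
    "the": {"cat": 3, "dog": 2, "end": 1},
    "cat": {"sat": 4, "ran": 1},
    "dog": {"ran": 3, "sat": 1},
}

def next_token(context: str) -> str:
    """Pick the next token, weighted by how often it followed `context`."""
    counts = bigram_counts[context]
    tokens, weights = zip(*counts.items())
    return random.choices(tokens, weights=weights, k=1)[0]

def generate(start: str, length: int) -> list[str]:
    """Assemble a sequence token by token, as an LLM does at vastly larger scale."""
    out = [start]
    while len(out) < length and out[-1] in bigram_counts:
        out.append(next_token(out[-1]))
    return out

print(generate("the", 5))
```

Real models condition on far longer contexts and billions of learned parameters rather than raw counts, but the principle is the same: the next token is chosen according to probabilities derived from the training data.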
But even the best AI tools generate falsehoods and nonsense, and critics have long expressed concern about what would happen if a model was fed on its own outputs.
In late July, a paper in the journal Nature titled "AI models collapse when trained on recursively generated data" proved a lightning rod for discussion.
The authors described how models quickly discarded rarer elements in their original dataset and, as Nature reported, outputs degenerated into "gibberish".
A week later, researchers from Rice and Stanford universities published a paper titled "Self-consuming generative models go MAD" that reached a similar conclusion.
They tested image-generating AI programs and showed that outputs became more generic and riddled with undesirable artifacts as AI-generated data was added to the underlying model.
They labelled model collapse "Model Autophagy Disorder" (MAD) and compared it to mad cow disease, a fatal illness caused by feeding the remnants of dead cows to other cows.
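The mechanism the papers describe, rarer elements disappearing when a model is fed its own outputs, can be mimicked with a crude resampling experiment. This is a hypothetical sketch, not the experimental setup of either paper: each "generation" simply resamples from the previous one's output, so rare words in the long tail tend to vanish while common ones persist.

```python
import random

random.seed(42)

# A tiny invented "corpus": a few common words plus a long tail of
# rare ones, standing in for the diversity of human-written data.
corpus = ["the"] * 400 + ["cat"] * 80 + ["dog"] * 15 + \
         [f"rare_word_{i}" for i in range(20)]

data = corpus
for gen in range(1, 11):
    # Each "model generation" sees only samples drawn (with
    # replacement) from the previous generation's output.
    data = random.choices(data, k=len(data))
    print(f"generation {gen}: distinct words = {len(set(data))}")
```

Because resampling can only keep or lose word types, never invent new ones, diversity can only shrink, a loose analogue of the degeneration into "gibberish" the Nature authors observed.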
- 'Doomsday scenario' -
These researchers worry that AI-generated text, images and video are clearing the web of usable human-made data.
"One doomsday scenario is that if left uncontrolled for many generations, MAD could poison the data quality and diversity of the entire internet," one of the Rice University authors, Richard Baraniuk, said in a statement.
However, industry figures are unfazed.
Anthropic and Hugging Face, two leaders in the field who pride themselves on taking an ethical approach to the technology, both told AFP they used AI-generated data to fine-tune or filter their datasets.
Anton Lozhkov, machine learning engineer at Hugging Face, said the Nature paper gave an interesting theoretical perspective but its disaster scenario was not realistic.
"Training on multiple rounds of synthetic data is simply not done in reality," he said.
However, he said researchers were just as frustrated as everyone else with the state of the internet.
"A large part of the internet is trash," he said, adding that Hugging Face already made huge efforts to clean data -- sometimes jettisoning as much as 90 percent.
He hoped that web users would help clean up the internet by simply not engaging with generated content.
"I strongly believe that humans will see the effects and catch generated data way before models will," he said.
P.Vogel--VB