Gemini's flawed AI racial images seen as warning of tech titans' power
For people at the trend-setting tech festival here, the scandal that erupted after Google's Gemini chatbot cranked out images of Black and Asian Nazi soldiers was seen as a warning about the power artificial intelligence can give tech titans.
Google CEO Sundar Pichai last month slammed as "completely unacceptable" errors by his company's Gemini AI app, after gaffes such as the images of ethnically diverse Nazi troops forced it to temporarily stop users from creating pictures of people.
Social media users mocked and criticized Google for the historically inaccurate images, like those showing a female Black US senator from the 1800s -- when the first such senator was not elected until 1992.
"We definitely messed up on the image generation," Google co-founder Sergey Brin said at a recent AI "hackathon," adding that the company should have tested Gemini more thoroughly.
People interviewed at the popular South by Southwest arts and tech festival in Austin said the Gemini stumble highlights the inordinate power a handful of companies have over the artificial intelligence platforms that are poised to change the way people live and work.
"Essentially, it was too 'woke,'" said Joshua Weaver, a lawyer and tech entrepreneur, meaning Google had gone overboard in its effort to project inclusion and diversity.
Google quickly corrected its errors, but the underlying problem remains, said Charlie Burgoyne, chief executive of the Valkyrie applied science lab in Texas.
He equated Google's fix of Gemini to putting a Band-Aid on a bullet wound.
While Google long had the luxury of having time to refine its products, it is now scrambling in an AI race with Microsoft, OpenAI, Anthropic and others, Weaver noted, adding, "They are moving faster than they know how to move."
Mistakes made in an effort at cultural sensitivity are flashpoints, particularly given the tense political divisions in the United States, a situation exacerbated by Elon Musk's X platform, the former Twitter.
"People on Twitter are very gleeful to celebrate any embarrassing thing that happens in tech," Weaver said, adding that reaction to the Nazi gaffe was "overblown."
The mishap did, however, call into question the degree of control those using AI tools have over information, he maintained.
In the coming decade, the amount of information -- or misinformation -- created by AI could dwarf that generated by people, meaning those controlling AI safeguards will have huge influence on the world, Weaver said.
- Bias-in, Bias-out -
Karen Palmer, an award-winning mixed-reality creator with Interactive Films Ltd., said she could imagine a future in which someone gets into a robo-taxi and, "if the AI scans you and thinks that there are any outstanding violations against you... you'll be taken into the local police station," not your intended destination.
AI is trained on mountains of data and can be put to work on a growing range of tasks, from image or audio generation to determining who gets a loan or whether a medical scan detects cancer.
But that data comes from a world rife with cultural bias, disinformation and social inequity -- not to mention online content that can include casual chats between friends or intentionally exaggerated and provocative posts -- and AI models can echo those flaws.
With Gemini, Google engineers tried to rebalance the algorithms to provide results better reflecting human diversity.
The effort backfired.
"It can really be tricky, nuanced and subtle to figure out where bias is and how it's included," said technology lawyer Alex Shahrestani, a managing partner at Promise Legal law firm for tech companies.
Even well-intentioned engineers involved with training AI can't help but bring their own life experience and subconscious bias to the process, he and others believe.
Valkyrie's Burgoyne also castigated big tech for keeping the inner workings of generative AI hidden in "black boxes," so users are unable to detect any hidden biases.
"The capabilities of the outputs have far exceeded our understanding of the methodology," he said.
Experts and activists are calling for more diversity in teams creating AI and related tools, and greater transparency as to how they work -- particularly when algorithms rewrite users' requests to "improve" results.
A challenge is how to appropriately build in perspectives of the world's many and diverse communities, Jason Lewis of the Indigenous Futures Resource Center and related groups said here.
At Indigenous AI, Lewis works with far-flung indigenous communities to design algorithms that use their data ethically while reflecting their perspectives on the world, something he does not always see in the "arrogance" of big tech leaders.
His own work, he told a group, stands in "such a contrast from Silicon Valley rhetoric, where there's a top-down 'Oh, we're doing this because we're going to benefit all humanity' bullshit, right?"
His audience laughed.
N.Schaad--VB