Salesforce CEO Points to Possible AI “Hiroshima”

It’s safety first with artificial intelligence – or else, a major technology executive warns.

The World Economic Forum’s annual conference in Davos, Switzerland, is winding down this week, but not without a harsh warning from Marc Benioff, CEO of Salesforce.

Serious guardrails and safety functions need to be put in place for artificial intelligence, or we risk experiencing a “Hiroshima moment,” Benioff said during a January 18 WEF panel discussion sponsored by Yahoo Finance.

“This is a huge moment for AI. AI took a huge leap forward in the last year or two years,” Benioff noted, adding that allowing AI to proceed at its current breakneck pace without strict regulations could create serious societal, cultural, and business risks.

“We don’t want something to go wrong,” he noted. “That’s why [we have] safety summits. That’s why we’re talking about trust,” Benioff said.

“We don’t want to have a Hiroshima moment. We’ve seen technology go wrong, and we saw a Hiroshima. We don’t want to see an AI Hiroshima. We want to make sure that we’ve got our head around this now,” he added.

Potential for Good

Benioff said there will be “good” outcomes from managed AI implementations, especially from generative AI – if everyone is on the same page.

“I think AI has to be almost a human right,” Benioff said. “I’ve been saying … for decades that AI could be a creator of inequality. It could also be a creator of equality.”

Benioff is not alone in weighing the potential risks and rewards of artificial intelligence.

On Wednesday, OpenAI CEO Sam Altman told another WEF panel that given the fast pace and burgeoning usage of AI, public policy and business leaders will eventually have to make “uncomfortable” decisions about AI use and oversight.

AI will eventually require “quite a lot of individual customization” and “that’s going to make a lot of people uncomfortable,” Altman said. Geographic, cultural, and even personal values are already in play with AI usage, and society needs to be careful about how those realities square with AI’s growth.

“If a country said, you know, all gay people should be killed on sight, then no … that is well out of bounds,” Altman noted. “But there are probably other things that I don’t personally agree with, but a different culture might. … We have to be somewhat uncomfortable as a tool builder with some of the uses of our tools.”
