As artificial intelligence advances faster than any major technology before it, the question of who controls it looms large.
As the World Economic Forum winds down in Davos, Switzerland this week, industry’s leading lights continue to expand on what artificial intelligence means and why it needs serious societal guardrails to keep the technology in line.
The latest example comes from Sam Altman, CEO of OpenAI, the creator of ChatGPT. Heavily invested in generative AI technology, OpenAI certainly has a stake in the game, for good and bad, as Altman implied in an interview with Axios.
Here’s what he had to say during the interview.
On taking control of AI. Given the fast pace and burgeoning usage of AI, Altman says public policy and business leaders will eventually have to make “uncomfortable” decisions about AI use and oversight.
AI will eventually require “quite a lot of individual customization” and “that’s going to make a lot of people uncomfortable,” Altman said. Geographic, cultural, and even personal values are already in play with AI usage, and society needs to be careful about how those realities square with AI’s growth.
“If a country said, you know, all gay people should be killed on sight, then no … that is well out of bounds,” Altman noted. “But there are probably other things that I don’t personally agree with, but a different culture might. … We have to be somewhat uncomfortable as a tool builder with some of the uses of our tools.”
“It’ll be different for users with different values. The countries issue I think, is somewhat less important,” he added.
On AI growth (and what’s coming down the pipeline). “We’re headed toward a new way of doing knowledge work,” Altman said. “(With future AI applications) you might just be able to say, ‘What are my most important emails today,'” and have AI handle that task.
It won’t happen soon, but AI will eventually “help vastly accelerate the rate of scientific discovery.” When that happens, “it’s a big, big deal,” Altman said.
On 2024 elections and bad information. This year will feature a host of big elections, both in the U.S. and abroad. With country and regional governments pressuring technology companies to limit toxic information, Altman said he’s “nervous” about what’s being said online in 2024.
He doesn’t sound thrilled about OpenAI playing a major stakeholder role in election “disinformation,” either.
The OpenAI chief says the company wants to avoid “fighting the last war” on election misinformation but acknowledges government pressure will make election involvement likely.
On copyright issues with major media companies. While the OpenAI CEO says the company “doesn’t need” to vacuum up data from media companies to create solid generative AI models, he takes issue with copyright lawsuits – current and future.
“I wish I had an easy yes or no answer,” he told Axios. “We can respect an opt-out” from companies like the NYT he said, “but (media) content has been copied and not attributed all over the web” and OpenAI “can’t avoid” training on that.
Brian O’Connell, a former Wall Street bond trader, is a prominent figure in the finance industry. He has authored two best-selling books, ‘The 401k Millionaire’ and ‘CNBC’s Creating Wealth’, demonstrating his profound knowledge of finance and investing.
Brian is also a finance and business writer for esteemed national platforms and publications, including CNN, TheStreet.com, CBS News, The Wall Street Journal, U.S. News & World Report, Forbes, and Fox News.