What a coup!
In his first public appearance since rejoining OpenAI as CEO, Sam Altman took the stage at the HOPE Global Forums on Dec. 11 for a conversation with John Hope Bryant, founder and CEO of Operation HOPE.
Bryant waited until the very end of their conversation to announce that he and Altman would co-chair a new AI Ethics Council, with strong Atlanta representation.
It will include Dr. Bernice A. King, CEO of the King Center; Dr. Helene Gayle, president of Spelman College; and Dr. George French, president of Clark Atlanta University, with other names to follow. Bryant said former U.N. Ambassador Andrew Young also would provide guidance to the initiative.
Bryant was careful to say the Council will not be providing legal guidance. “Let the government do what the government does,” he said. “We will stay out of that.”
But the Council will try to be a constructive force in navigating through the ethical questions that surround artificial intelligence (AI).

“We can’t just sit around and throw rocks and just complain,” Bryant said while talking to Altman and the attendees at the HOPE Global Forums. “This is happening, whether you like it or not. But what we can do is be a force for good. We’re going to try to provide some ethics, some morals, some framework guidelines and hopefully some good counsel too, along with tech leaders you are going to provide — to try to be a case for optimism [the theme of the 2023 forum].”
During their conversation, Altman spoke of the concerns and potential of AI. OpenAI created ChatGPT, which has shown society the power of artificial intelligence.
First, Altman said humans quickly adapt to new technologies — after first having their minds blown.
“In a few years, kids who are just growing up, they will never have known a time when computers couldn’t understand, when AI was not a thing,” Altman said. “And we’re all just going to get used to this right away, but it’ll blow our minds for a little bit of time.”
Last week, Altman was named Time magazine’s CEO of the Year.
In one of the biggest business stories of the year, Altman was fired less than a month ago over conflicts with some of OpenAI’s board. More than 500 of the company’s employees, nearly all of its workforce, threatened to quit, and Microsoft quickly offered Altman and his team a home.
So, OpenAI’s board reversed its decision and welcomed Altman back, minus the board members who had wanted him fired in the first place.
Altman had already agreed to come talk to the HOPE Global Forums.
Bryant admitted they kept waiting for him to cancel his appearance, but that call never came.
“I really wanted to do this,” said Altman, who admitted the past few weeks have felt like a tornado. “I wish I were a little less tired and more engaged, and I’m sorry. It’s been a long few weeks.”
Altman acknowledged that “people have a lot of anxiety about AI, and I get that, and I feel that.”
But new technologies have always stirred fear until people get used to them, whether cell phones, computers or AI.
Bryant said fear exists for people whose jobs might be eliminated or replaced because of AI and new technologies, and it is important for employees to be trained for jobs in the new economy.
“I’m concerned that if you’re not intentional about educating and training a whole generation of everybody, that we won’t give people a sense of purpose and belonging and make them visible for the new economy,” Bryant said, adding that Altman has demonstrated empathy for people and how technology impacts society.

Altman admitted even he has had concerns about how rapidly AI technology is evolving.
“This will be a tool for people in many cases,” Altman said. “This is something that will change the way people will do their jobs in the same way that mobile phones did and the internet did before that. We adapt and find new, better ways to work. It’s a little scary. But I think human ingenuity, creativity, and a desire to listen to each other… is so strong.”
Altman reminded the audience that many of the jobs we have today didn’t exist 500 years ago.
“Certainly, the future is going to be like that too,” he continued. “Our kids and grandkids, great grandkids, they’re going to do different things. And again, I think that’s okay. We’ve just got to figure out how we’re going to manage through this rapid change.”
Altman said it’s important to get people’s feedback, especially those who are most impacted by new technology. It’s important for technology not to be developed in secret, and to provide an environment where people can say what’s broken.
“Let technology and society co-evolve together,” Altman said. “Let’s really listen to what people actually want.”
Altman said it’s important for society to enjoy the benefits of technology, but there needs to be some guardrails.
“We really try to listen to as broad a base of stakeholders as we can,” Altman said. “One thing we try to do differently than what other AI companies have done is we really try to get out in the world. And that permeates the whole company.”
