Sam Altman, the CEO of OpenAI, has said that the artificial intelligence startup does not fully understand how its GPT models work, and that it will need to understand them better in order to keep producing new versions.
He made the remarks in a conversation with Nicholas Thompson, CEO of The Atlantic, at the International Telecommunication Union (ITU) AI for Good Global Summit in Geneva, Switzerland, where the two discussed the potential benefits and safety of AI technology.
Sam Altman’s Explanation
Interpretability refers to understanding how AI and machine learning systems arrive at their decisions. “We certainly have not solved interpretability,” Altman stated. Thompson interjected: “Isn’t that an argument to not keep releasing new, more powerful models? If you don’t understand what’s happening?”
Altman responded, “These systems [are] generally considered safe and robust.” He drew an analogy to the human brain: although we do not understand its inner workings, we know that a person can follow instructions and answer questions about their reasoning.
Altman’s comments came in the same week that OpenAI announced it had “recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI [artificial general intelligence],” an announcement that coincided with the release of GPT-4o.
“If we are right that the trajectory of improvement is going to remain steep,” he said, referring to the creation of OpenAI’s new safety and security committee, then developing policies for these models will be crucial. “It seems to me that the more we can understand about what’s going on in these models, the better,” he continued. “That seems like it may be a part of this whole package that enables us to validate and make safety assertions.”
Source: WSJ.com