OpenAI forms a team to rein in super-intelligent AI
While regulating AI tools remains high on the agenda, OpenAI, the creator of ChatGPT, is assembling a team to rein in super-intelligent AI.
A lot has changed since OpenAI released ChatGPT. As the effects of artificial intelligence are felt in ever more areas of daily life, governments are working on the necessary legal regulations. Meanwhile, OpenAI is building a dedicated team to manage the risks of “super-intelligent” AI.
Super-intelligent AI could be the end of humanity
Superintelligence refers to a hypothetical artificial intelligence model that is smarter than even the most gifted and intelligent human, and excels across multiple areas of expertise rather than in a single domain like some previous-generation models. OpenAI believes such a model could emerge before the end of the decade. According to the company, superintelligence will be the most impactful technology humanity has ever invented and could help solve many of the world’s most pressing problems. There are also risks, however: “But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.”
The new team will be led by OpenAI Chief Scientist Ilya Sutskever and Jan Leike, the research lab’s head of alignment. In addition, OpenAI said it will dedicate 20 percent of the compute it has secured to date to this effort. The company has also begun preliminary experiments. “There are many promising ideas in preliminary experiments, we have increasingly useful measures for progress, and we can use today’s models to empirically examine many of these problems,” the announcement said.
Although OpenAI is keen to prepare for highly intelligent AI in the future, the risks and problems of existing models also need to be addressed, chief among them copyright policy and hallucinations. The company says it will publish a roadmap for the new team in the future.