AI advancements have significantly changed how consumers interact with technology and how information is accessed and built upon. With this rapid growth, companies face immense pressure to maintain safety and account for ethical considerations. OpenAI is well known in the tech community for evolving its AI tools and taking the platform to a new level, but the company also faces increased scrutiny as it expands. Now, a leading AI expert has weighed in on the o1 model, arguing that it is not only better at reasoning but also more capable of deception.
OpenAI’s o1 is said to be better at deceiving, and an AI expert is calling for stronger safety tests
OpenAI recently introduced its o1 model, which marks a significant leap in AI reasoning capabilities compared to previous models and can handle complex problems with a human-like problem-solving strategy. But while the model comes with advanced reasoning abilities, the AI firm Apollo Research pointed out that it was also better at lying.
Now, a Reddit post is doing the rounds for bringing the matter to light by sharing a Business Insider report in which AI expert Yoshua Bengio, widely considered the godfather of the field, argues that stronger safety tests need to be put in place to prevent harmful consequences arising from the model's ability to deceive. Bengio said:
In general, the ability to deceive is very dangerous, and we should have much stronger safety tests to evaluate that risk and its consequences in o1’s case.
Bengio, like many others, is concerned about the rapid advancement of AI and the dire need for legislative safety measures. He suggested that a law like California's SB 1047 should be implemented to impose strict safety constraints on AI models. SB 1047 is an AI safety bill that regulates powerful AI models and makes it mandatory for companies to allow third-party testing to evaluate harm and address potential risks.
OpenAI, for its part, says that o1-preview is managed under its Preparedness Framework, which is meant to handle the risks that come with advancing AI models. The model is rated as posing medium risk, and the company maintains that concerns surrounding it are moderate.
Yoshua Bengio further emphasized that companies should demonstrate greater predictability before advancing their AI models, rather than deploying them without sufficient safeguards. He advocated for a regulatory framework to ensure AI develops in the intended direction.
Read the full story on Wccftech.