- OpenAI is reportedly having trouble with Orion in certain areas like coding
- Progress is slower than expected due to a dwindling supply of high-quality training data
- The next-gen model could also be more expensive
OpenAI is reportedly running into difficulties with Orion, its next-generation AI model. The company is struggling to achieve the performance gains it had hoped for in certain areas with this successor to GPT-4.
This comes from a report by The Information, citing OpenAI employees, who claim that the increase in quality seen with Orion is ‘far smaller’ than that witnessed when moving from GPT-3 to GPT-4.
We’re also told that some OpenAI researchers say Orion “isn’t reliably better than its predecessor [GPT-4] in handling certain tasks.” Which tasks might those be? Coding, apparently, is a weaker point, with Orion possibly failing to outdo GPT-4 in this arena, although the report also notes that Orion’s language skills are stronger.
So, for general-use queries – and for jobs such as summarizing or rewriting text – it sounds like things are going (relatively) well. However, these rumors don’t sound quite as hopeful for those looking to use AI as a coding helper.
So, what’s the problem here?
By all accounts, OpenAI is running into something of a wall when it comes to the data available to train its AI. As the report makes clear, there’s a “dwindling supply of high-quality text and other data” that LLMs (Large Language Models) can work with in pre-release training to hone their abilities at knottier problems such as fixing coding bugs.
These LLMs have chomped through a lot of the low-hanging fruit, and now finding this…
Read full post on TechRadar