AMD’s Strix Point APUs show a strong performance advantage over Intel’s Lunar Lake offerings in AI LLM workloads.
AMD Strix Point APUs lead in AI LLM performance while delivering lower latency than competing Intel Lunar Lake SoCs
The demand for higher performance in AI workloads has not only pushed many companies to bring their own specialized hardware to market but has also intensified the competition. As LLMs (large language models) have evolved significantly, the need for faster hardware keeps growing.
To address this, AMD introduced its AI-oriented mobile processors, known as Strix Point, a while back. In a recent blog post, the company claims that its Strix Point APUs hold a sizable lead over their rivals while also cutting latency for quicker output. According to AMD, the Ryzen AI 300 processors deliver more tokens per second than Intel’s Lunar Lake chips, Intel’s AI-focused mobile lineup.
According to the comparison, the Ryzen AI 9 HX 375 delivers up to 27% higher performance than the Intel Core Ultra 7 258V in consumer LLM applications running in LM Studio. The Core Ultra 7 258V isn’t the fastest part in the Lunar Lake lineup, but it sits close to the higher-end models, since core and thread counts are identical across the range and only the clock speeds differ.
LM Studio is a consumer-friendly tool built on llama.cpp that doesn’t require users to learn the technical side of LLMs. llama.cpp is a framework optimized for x86 CPUs that uses AVX2 instructions. While…
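LM Studio reports tokens per second directly in its interface, but for readers who want to reproduce a similar throughput measurement from code, here is a minimal sketch using llama-cpp-python, the Python bindings for llama.cpp. The model file name and generation settings are placeholders for illustration, not values from AMD’s testing.

```python
# Minimal sketch: measuring LLM generation throughput (tokens/s) with llama-cpp-python.
# The model path below is a placeholder; any local GGUF model works the same way.
import time
from llama_cpp import Llama

llm = Llama(model_path="models/llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=2048)

prompt = "Explain what an APU is in one paragraph."
start = time.perf_counter()
output = llm(prompt, max_tokens=256)
elapsed = time.perf_counter() - start

# llama-cpp-python returns a completion dict with token usage statistics.
generated = output["usage"]["completion_tokens"]
print(f"{generated} tokens in {elapsed:.2f}s -> {generated / elapsed:.1f} tokens/s")
```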
Read the full article on Wccftech.