
More Than 500 AI Models Run Optimised On Core Ultra Processors: Intel

The models cover categories of local AI inferencing, including large language models, diffusion, super resolution, object detection, image classification and segmentation, and computer vision.

(Source: rawpixel.com/freepik)

Intel has announced that more than 500 artificial intelligence models are running optimised on new Intel Core Ultra processors. These models, which can be deployed across the central processing, graphics processing and neural processing units, are available across industry sources, including OpenVINO Model Zoo, Hugging Face, ONNX Model Zoo and PyTorch.
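For a sense of what that deployment looks like in practice, the following is a minimal, illustrative sketch using the OpenVINO Python API to compile one pre-trained model for each of the three compute units; the model file path is a placeholder for any IR or ONNX model obtained from the sources above.

```python
# Minimal sketch: compiling one pre-trained model for the CPU, the integrated
# GPU and the NPU on a Core Ultra machine with the OpenVINO runtime.
# "model.xml" is a placeholder path, not a specific Intel-provided model.
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")  # e.g. exported from Hugging Face or ONNX Model Zoo

# The same model can be compiled for whichever compute units are present.
for device in ("CPU", "GPU", "NPU"):
    if device in core.available_devices:
        compiled = core.compile_model(model, device)
        print(f"{device}: compiled with inputs {compiled.inputs}")
```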

The models cover categories of local AI inferencing, including large language models, diffusion, super resolution, object detection, image classification and segmentation, and computer vision, according to Intel.

“Intel has a rich history of working with the ecosystem to bring AI applications to client devices, and today we celebrate another strong chapter in the heritage of client AI by surpassing 500 pre-trained AI models running optimised on Intel Core Ultra processors,” said Robert Hallock, Intel vice president and general manager of AI and technical marketing.

Models form the backbone of AI-enhanced software features like object removal, image super resolution or text summarisation. There is a direct link between the number of enabled/optimised models and the breadth of user-facing AI features that can be brought to market. A feature cannot be designed without a model, and it cannot reach optimal performance without runtime optimisation.

AI models are an important layer in the software stack that determines the performance, stability and capabilities of an AI-driven application. AI models are trained to analyse large quantities of data, draw conclusions and take actions based on such inferences. Developers creating new AI PC features can utilise these models and build on them. The more AI models there are, the more AI PC features are enabled.

OpenVINO optimises models for Intel Core Ultra by load-balancing work across all of its compute units, compressing models so they run efficiently on an AI PC, and tuning the runtime to take advantage of the memory bandwidth and core architecture of Intel Core Ultra.
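Those two steps, compression and load-balancing across compute units, map roughly onto the hedged sketch below, which assumes the openvino and nncf Python packages; the model path is again a placeholder, not an Intel-supplied file.

```python
# Minimal sketch of the optimisation steps described above, assuming the
# openvino and nncf packages: weight compression to shrink the model, and the
# "AUTO" device plug-in, which distributes work across CPU, GPU and NPU.
import openvino as ov
import nncf  # Neural Network Compression Framework, used alongside OpenVINO

core = ov.Core()
model = core.read_model("model.xml")  # placeholder path

# Compress weights so the model fits the memory budget of an AI PC.
compressed = nncf.compress_weights(model)

# "AUTO" lets the runtime choose and balance across the available compute
# units; the performance hint tunes the runtime for low-latency local inference.
compiled = core.compile_model(compressed, "AUTO", {"PERFORMANCE_HINT": "LATENCY"})
```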

According to Intel, these more than 500 optimised AI models span over 20 categories of AI and facilitate the development and deployment of AI features. They include Phi-2, Mistral, Llama, BERT, Whisper and Stable Diffusion 1.5, and help improve system stability, reliability and performance.
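As an illustration of how one of the named models might be pulled from Hugging Face and run locally, the following is a minimal sketch assuming the optimum-intel integration between Hugging Face and OpenVINO; it is not an Intel-provided recipe, and the prompt and generation settings are arbitrary.

```python
# Minimal sketch, assuming the optimum-intel package: export Phi-2 from
# Hugging Face to OpenVINO format and run a short local generation.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# export=True converts the PyTorch checkpoint to OpenVINO IR on the fly.
model = OVModelForCausalLM.from_pretrained(model_id, export=True)

inputs = tokenizer("Summarise this paragraph:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```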