Microsoft yesterday (the 23rd) announced the Phi-3 family of compact, small language models (SLMs). The family consists of three models, each targeting a different balance of efficiency and performance.
The first of the set is Phi-3-mini, a language model with 3.8 billion parameters trained on a smaller dataset than larger competitors such as GPT-4. It is available in two variants, one with a context window of 4,000 tokens and the other with 128,000 tokens, and it has been instruction-tuned so it is practically ready to use out of the box.
Phi-3-mini is available now on Microsoft Azure AI Studio, Hugging Face, and Ollama. The model is optimized for ONNX Runtime with Windows DirectML support and will also be available as a microservice on NVIDIA NIM.
The other models in the Phi-3 family are Phi-3-small (7 billion parameters) and Phi-3-medium (14 billion parameters). Both were developed under Microsoft's Responsible AI Standard, but performance and applicability data have not been disclosed. They will be published in the Azure AI model catalog in the coming weeks.
Applications for small models
Unlike large language models (LLMs) – such as GPT-4 and Gemini – SLMs are designed for applications with lower data and training demands, such as tools that run natively on the device. These models can be embedded into smartphones and computers, for example those with dedicated hardware for AI acceleration.
Naturally, the closer a small model gets to "full-size" AI performance, the better, and that is what Microsoft says it is offering with these SLMs. According to the company, the Phi-3 family outperforms its competitors in the same size category and even beats some LLMs in certain tasks, such as language understanding, content processing, and programming.
Source: Tec Mundo