Nvidia announced on Monday (18) a set of artificial intelligence (AI) tools for enterprise use. With the new features, companies of all sizes can build and deploy custom AI applications on their own platforms.
The tools are called Nvidia NIM, short for Nvidia Inference Microservices. In practice, NIM is a set of processes and tools that make it easier to deploy dozens of popular AI models. It is aimed at companies that need to develop applications for internal use or optimize their processes.
The developer-facing microservices are bundles containing technologies the company has launched in recent years. Nvidia has become a reference in the artificial intelligence market and was one of the companies whose market value grew the most last year.
NIM allows customers to use Nvidia's catalog of tools to improve their own work environments, including adopting AI as part of their business models if they haven't done so already.
NVIDIA’s new services for artificial intelligence
According to the company, NIM reduces the time it takes to build a platform from scratch from "weeks to minutes," while also being economical in terms of carbon footprint.
NIM's prebuilt tools are based on Nvidia inference software such as Triton Inference Server and TensorRT-LLM. They include language, speech, and data-analysis APIs that can be applied to different industries and adapted to work with each organization's databases.
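To give a sense of how a developer might talk to one of these microservices: NIM endpoints expose an OpenAI-style HTTP API. The sketch below is a minimal illustration only, assuming a locally running container; the endpoint URL and model name are illustrative assumptions, not official values from this article.

```python
import json

# Hypothetical endpoint of a locally deployed NIM container (assumption).
NIM_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model: str, prompt: str, max_tokens: int = 128) -> dict:
    """Build an OpenAI-style chat-completion payload for a NIM-like endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

# Model name below is an example, not a confirmed catalog entry.
payload = build_chat_request("meta/llama3-8b-instruct",
                             "Summarize this support ticket in one sentence.")
body = json.dumps(payload)
print(body)

# To actually send the request (requires a running NIM container):
#   import urllib.request
#   req = urllib.request.Request(NIM_URL, data=body.encode(),
#                                headers={"Content-Type": "application/json"})
#   print(urllib.request.urlopen(req).read().decode())
```

Because the interface follows a widely used chat-completions shape, existing client code can often be pointed at a self-hosted endpoint by changing only the base URL.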
For example, in the healthcare industry, companies can use NIM to optimize customer-service chatbots so they generate accurate, contextual responses, or to create complete, easy-to-read charts to aid diagnosis.
Services created with the technology run on NVIDIA CUDA GPUs, whether in data centers or in infrastructures that include the company's cloud computing services.
The packages support open and proprietary models from both NVIDIA and other companies. Partners include Google, Meta, Microsoft, Mistral AI, and Stability AI, some of the biggest names in generative AI today.
In addition, NVIDIA CUDA-X accelerated software development kits, libraries, and tools will be available in the new environment.
Availability
NVIDIA's new AI microservices are now available as part of NVIDIA AI Enterprise 5.0. The license costs $4,500 per GPU per year, or $1 per GPU per hour of use.
The platform was designed to be accessible across a variety of hardware and software. In addition to running in a customer's own data center, the microservices will also be available on cloud marketplaces such as AWS, Google Cloud, Oracle Cloud, and Microsoft Azure.
Interested developers can request a free trial period on the Nvidia AI Platforms website.
Source: Tec Mundo
