Apple researchers have developed a new method for training large language models (LLMs) that integrates both textual and visual information.
Trained on a diverse dataset, the new MM1 model sets a new standard for multimodal AI: it can generate captions for images and answer questions about visual content. The company explored different types of training data and model architectures that allow the AI to understand and produce language in response to combined visual and linguistic cues. With MM1, the system learns to interpret complex images and answer questions tied to their visual elements.
The MM1 model contains 30 billion parameters. It can perform multi-step reasoning over multiple images using chain-of-thought prompting.
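To make that claim concrete, here is a minimal sketch of what a few-shot chain-of-thought prompt over multiple images might look like. The message structure, field names, and image paths are purely illustrative assumptions; Apple has not published an MM1 API, and this only demonstrates the general prompting pattern.

```python
# Hypothetical few-shot chain-of-thought prompt for a multimodal model.
# The format below is an assumption for illustration, not Apple's actual API.

# A worked example: images plus a question answered with explicit reasoning steps.
few_shot_example = [
    {"type": "image", "path": "receipt_page1.jpg"},  # hypothetical input images
    {"type": "image", "path": "receipt_page2.jpg"},
    {"type": "text",
     "content": ("Q: What is the combined total across both receipts?\n"
                 "A: Let's think step by step. Page 1 totals $12.50. "
                 "Page 2 totals $7.25. 12.50 + 7.25 = 19.75. "
                 "The combined total is $19.75.")},
]

# The new query ends mid-reasoning, so the model continues the
# step-by-step pattern established by the worked example.
new_query = [
    {"type": "image", "path": "receipt_page3.jpg"},
    {"type": "image", "path": "receipt_page4.jpg"},
    {"type": "text",
     "content": ("Q: What is the combined total across both receipts?\n"
                 "A: Let's think step by step.")},
]

# Interleave the example with the new question into a single prompt sequence.
prompt = few_shot_example + new_query

for part in prompt:
    label = part["path"] if part["type"] == "image" else part["content"]
    print(f"[{part['type']}] {label}")
```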
This research is part of Apple’s broader push to expand its artificial intelligence capabilities in the face of increasing competition. Bloomberg’s Mark Gurman previously reported that Apple is in talks with Google to license the Gemini model to power new features coming to the iPhone as part of iOS 18. He also said that iOS 18 will include broader AI support.
Source: Iphones RU
