Sber has presented a free beta version of the Kandinsky Video 1.1 neural network, which generates videos from text descriptions and images. A company representative told RB.RU about the release.
The resulting video is a continuous scene with movement of both the subject and the background. The model generates clips up to six seconds long at 8 or 32 frames per second.
The updated Kandinsky Video can create videos in 16:9, 9:16, or 1:1 aspect ratios.
A distinctive feature of the new version is that it generates video not only from text but also from images: Kandinsky Video 1.1 can "bring a static image to life."
In addition, the new version lets you control the dynamics of the generated video with a dedicated motion score parameter.
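For illustration only, here is a minimal sketch of what a generation request using the controls described above (text prompt, source image, aspect ratio, frame rate, duration, motion score) might look like. The endpoint URL, payload field names, and authentication scheme are assumptions made for this example, not the documented fusionbrain.ai API.

```python
# Hypothetical sketch of a Kandinsky Video 1.1 request. The endpoint,
# payload fields, and auth scheme are placeholders for illustration,
# not the official API.
import base64
import requests

API_URL = "https://api.example.com/kandinsky-video/generate"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"  # placeholder credential

def animate_image(image_path: str, prompt: str) -> bytes:
    """Request a short clip that 'revives' a static image (assumed schema)."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")

    payload = {
        "prompt": prompt,          # text description of the desired motion
        "image": image_b64,        # optional source frame to animate
        "aspect_ratio": "16:9",    # also 9:16 or 1:1, per the announcement
        "fps": 32,                 # the model supports 8 or 32 frames per second
        "duration_seconds": 6,     # clips up to six seconds long
        "motion_score": 0.7,       # assumed 0..1 knob controlling scene dynamics
    }
    resp = requests.post(
        API_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=600,
    )
    resp.raise_for_status()
    return resp.content  # assumed to be the rendered video bytes

if __name__ == "__main__":
    video = animate_image("landscape.jpg", "clouds drifting over a mountain lake")
    with open("output.mp4", "wb") as out:
        out.write(video)
```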
You can try the Kandinsky Video neural network on the fusionbrain.ai platform and in the official Kandinsky Telegram bot.
The architecture was developed and trained by Sber AI researchers, with the support of scientists from the AIRI Artificial Intelligence Research Institute, on a combined dataset from Sber AI and SberDevices.
Author:
Anastasia Marina
Source: RB