Self-supervised learning on unlabeled data has proved effective at object classification, but it struggles to determine an object's orientation across different poses. This limitation matters in areas such as autonomous driving, where it is critical to distinguish a car approaching at high speed from one passing by.
To address this, a team of researchers developed a new learning method that accounts for both objects and their orientations. Their approach uses a new dataset of images of the same object taken with small changes in camera angle. The images carry no labels and mimic the way a robot learns about its environment by moving around an object.
The new method improves pose estimation by 10-20% over its predecessors, significantly boosting the accuracy of orientation recognition without reducing classification performance. It also lets algorithms generalize better to images of objects the robot has never seen before.
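The core idea, recovering relative pose from two unlabeled views of the same object, can be illustrated with a toy example. The sketch below is not the researchers' method; it simply shows, under assumed toy data (a random 2-D point cloud standing in for an object, rotated by a small "camera" angle), that the relative rotation between two views is recoverable with no labels at all, here via the classical Kabsch/Procrustes alignment:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "object": a random, centered 2-D point cloud.
obj = rng.normal(size=(50, 2))
obj -= obj.mean(axis=0)

# Second view: the same object after a small, unknown camera rotation.
theta = np.deg2rad(7.0)  # ground truth, hidden from the estimator
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
view2 = obj @ R_true.T

# Kabsch/Procrustes: recover the rotation relating the two views
# from the cross-covariance of the point sets via SVD.
U, _, Vt = np.linalg.svd(view2.T @ obj)
R_est = U @ Vt
theta_est = np.arctan2(R_est[1, 0], R_est[0, 0])

print(f"true angle:      {np.rad2deg(theta):.2f} deg")
print(f"estimated angle: {np.rad2deg(theta_est):.2f} deg")
```

In the paper's setting the views are images rather than point clouds and the pose signal is learned by a neural network, but the training signal is analogous: small, unlabeled viewpoint changes of the same object.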
Source: Ferra
