Vision AI

AI Perception


Technology that recognizes a robot’s surroundings and responds to them dynamically by analyzing video from the camera, which serves as the robot’s eyes.


Object detection & tracking


Detecting and tracking objects is essential for the robot to interact with its surrounding environment. Deep learning-based object detection, which originated with R-CNN, has rapidly evolved toward anchor-free and transformer-based architectures.
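As a rough illustration of the transformer-based end of this evolution, the sketch below runs a publicly available DETR checkpoint on a single camera frame and prints the detected boxes. The model name, input file, and score threshold are illustrative assumptions, not a description of the detector used in our robots.

```python
# Minimal sketch: transformer-based, anchor-free detection with a public DETR checkpoint.
# "facebook/detr-resnet-50", the input file, and the threshold are illustrative choices.
from PIL import Image
import torch
from transformers import DetrImageProcessor, DetrForObjectDetection

processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50").eval()

image = Image.open("robot_camera_frame.jpg")            # one frame from the robot's camera
inputs = processor(images=image, return_tensors="pt")   # resize and normalize

with torch.no_grad():
    outputs = model(**inputs)                           # set prediction: no anchor boxes to tune

# Map normalized predictions back to the original image size and keep confident ones.
target_sizes = torch.tensor([image.size[::-1]])         # (height, width)
results = processor.post_process_object_detection(
    outputs, threshold=0.7, target_sizes=target_sizes
)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 2), box.tolist())
```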


We are researching these diverse, state-of-the-art detection network architectures, and we are conducting additional research to address problems such as class imbalance and domain dependency.


In addition, research is underway to improve tracking accuracy through sensor fusion, which combines data from the robot’s multiple sensors.
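To make the idea concrete, the sketch below fuses noisy 2D position measurements from two sources (assumed here to be a camera-based detector and a range sensor such as LiDAR) with a constant-velocity Kalman filter. The state model and noise values are illustrative, not tuned parameters from our system.

```python
# Sketch: fusing two noisy position sources with a constant-velocity Kalman filter.
# Matrices and noise levels are illustrative assumptions, not tuned values.
import numpy as np

dt = 0.1                                   # time step between frames [s]
F = np.array([[1, 0, dt, 0],               # state transition for [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]])
H = np.array([[1, 0, 0, 0],                # both sensors measure position only
              [0, 1, 0, 0]])
Q = np.eye(4) * 1e-3                       # process noise
x = np.zeros(4)                            # initial state
P = np.eye(4)                              # initial covariance

def kf_step(x, P, z, R):
    """One predict + update cycle with measurement z and its noise covariance R."""
    x = F @ x
    P = F @ P @ F.T + Q
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

R_cam = np.eye(2) * 0.25                   # camera: higher position noise (assumed)
R_lidar = np.eye(2) * 0.04                 # LiDAR: lower position noise (assumed)

# Each frame, sequentially fuse whichever measurements are available.
x, P = kf_step(x, P, np.array([1.02, 0.48]), R_cam)
x, P = kf_step(x, P, np.array([1.00, 0.50]), R_lidar)
print("fused position:", x[:2], "velocity estimate:", x[2:])
```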

[Detection]

[Charging port Tracking]

Recognition


The robot can decide how to interact only if it can recognize the semantic information of the objects detected around it. Using face and person re-identification technology based on representation learning, we are researching base technology that will enable robots to identify people and provide experiences tailored to their traits.
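The sketch below shows the general shape of re-identification by embedding matching: person crops are mapped to feature vectors, and a query is matched to a small gallery by cosine similarity. A generic ImageNet backbone stands in here for a re-identification network trained with representation learning, and the file names and similarity threshold are hypothetical.

```python
# Sketch: identifying a person by comparing learned embeddings against a small gallery.
# The backbone, file names, and threshold are illustrative placeholders.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()          # use the 2048-d pooled feature as the embedding
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((256, 128)),         # typical person-crop aspect ratio
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path: str) -> torch.Tensor:
    with torch.no_grad():
        feat = backbone(preprocess(Image.open(path).convert("RGB")).unsqueeze(0))
    return F.normalize(feat, dim=1)        # L2-normalize so dot product = cosine similarity

gallery = {"alice": embed("alice_crop.jpg"), "bob": embed("bob_crop.jpg")}
query = embed("unknown_person_crop.jpg")

scores = {name: float(query @ emb.T) for name, emb in gallery.items()}
best = max(scores, key=scores.get)
print(best if scores[best] > 0.6 else "unknown", scores)
```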

[Action Recognition]

Through action recognition technology, we also aim to open up additional channels through which humans and robots can interact. In addition, research is being carried out on improving the robot’s understanding of its surrounding environment and conditions through video-based text recognition (OCR).
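As a small example of what action recognition over a video clip can look like, the sketch below classifies a short clip with a pretrained 3D CNN from torchvision using Kinetics-400 labels. The clip path and the simple frame sampling are illustrative assumptions.

```python
# Sketch: classifying a short video clip with a pretrained 3D CNN (Kinetics-400 labels).
# The clip path and the naive "first 16 frames" sampling are illustrative.
import torch
from torchvision.io import read_video
from torchvision.models.video import r3d_18, R3D_18_Weights

weights = R3D_18_Weights.KINETICS400_V1
model = r3d_18(weights=weights).eval()
preprocess = weights.transforms()                     # resize, crop, rescale, normalize

frames, _, _ = read_video("person_waving.mp4", pts_unit="sec", output_format="TCHW")
clip = preprocess(frames[:16])                        # (T, C, H, W) -> (C, T, H, W)

with torch.no_grad():
    probs = model(clip.unsqueeze(0)).softmax(dim=1)[0]

top = probs.argmax().item()
print(weights.meta["categories"][top], float(probs[top]))
```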

[Face Recognition]

[OCR]

Image segmentation


A robot moves while constantly recognizing its surroundings and identifying possible routes. Although obstacles can be detected with LiDAR, image segmentation provides detailed information at the pixel level. We are developing semantic segmentation technology suited to the robot’s driving environment, while also researching sensor fusion to apply this technology to the robot’s movement.
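The sketch below illustrates pixel-level scene labeling with a pretrained DeepLabV3 model from torchvision. The model choice, the input file, and the idea of treating selected classes as free space are illustrative assumptions, not the segmentation network used in our robots.

```python
# Sketch: per-pixel scene labeling with a pretrained DeepLabV3 model.
# Model, input file, and downstream use are illustrative choices.
import torch
from torchvision import models
from PIL import Image

weights = models.segmentation.DeepLabV3_ResNet50_Weights.DEFAULT
model = models.segmentation.deeplabv3_resnet50(weights=weights).eval()
preprocess = weights.transforms()

image = Image.open("driving_view.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    logits = model(batch)["out"]            # (1, num_classes, H, W) per-pixel class scores

labels = logits.argmax(dim=1)[0]            # (H, W): one class id per pixel
class_names = weights.meta["categories"]

# A planner could treat pixels of selected classes as drivable free space;
# here we simply report how much of the view each detected class occupies.
for i in labels.unique().tolist():
    print(class_names[i], int((labels == i).sum()), "pixels")
```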

[Semantic Segmentation]