“Six perspectives” to understand Kudan

1

Artificial Perception, which is close to but different from Artificial Intelligence

“Spatial technology” as a counterpart to Artificial Intelligence

If Artificial Intelligence (AI) is the “brain of a machine,” then Artificial Perception (AP), which Kudan works on, is the “eye of a machine.”

Artificial Intelligence (AI) is a technology that, as the terms Machine Learning and Deep Learning suggest, aims to mimic human decision-making and behaviour by recognising patterns acquired through learning. Artificial Perception (AP), on the other hand, is essentially a sense that requires no learning and functions more intuitively, similar to vision.

Because of the analogy to the eye, AP is often confused with image recognition, but most image recognition in use today sits on the Artificial Intelligence (AI) side. For example, the latest smartphones can identify people from their facial images, but this capability is the result of pattern recognition learned from a vast number of facial images. To identify a monkey's face instead, the technology could only be applied after learning from a huge number of monkey images from scratch.

Kudan's Artificial Perception (AP), by contrast, functions independently of learning, intuitively grasping space and location from visual information. Because it plays an essential role in enabling robots and computers to mimic human capabilities, it is the counterpart to Artificial Intelligence (AI) in the "eye-brain relationship."

Giving a sense of direction and movement from eyesight

Intuitively, you can also understand Artificial Perception (AP) as a "sense of direction" or "sense of motion" derived from vision. For example, in a maze-like terminal station without signs, people would normally get lost. Someone with an absolute sense of direction, however, builds a map of the station in their brain from visual information each time they turn left or right or move between floors, allowing them to understand their position and movement within it.

The same holds when movement and sensation speed up: athletes who change their posture at high speed can still continuously understand their posture, their movements, and the surrounding space from visual information.

Point cloud and trajectory: the world as seen from computers (SLAM)

This video demonstration of Simultaneous Localisation and Mapping (SLAM) technology, a key component of Artificial Perception (AP), shows a robot moving around an office and using visual information to understand the space.

The two small images on the left show the view from the robot's two eyes, while the large image on the right shows the robot's "sense of direction" and "sense of movement" as processed by Artificial Perception (AP). The green dots show how AP extracts characteristic features from the observed view and instantly reconstructs them in three dimensions to understand the space. The pink lines represent the robot's understanding of its position, posture, and movement within the space it has built.
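To make the two outputs described above concrete, here is a minimal toy sketch of what a visual SLAM system computes: 3D points for the map (the green dots, triangulated from the two camera views) and an accumulated camera path (the pink lines). All function names, numbers, and the simplified pinhole/stereo model are illustrative assumptions for explanation, not Kudan's actual implementation.

```python
# Toy illustration of SLAM's two outputs: mapping and localisation.
# Assumes a simplified pinhole stereo camera with the principal point at 0.

def triangulate_stereo(u_left, u_right, v, focal, baseline):
    """Recover a 3D point from a matched pixel pair in a stereo view.
    Depth follows from disparity: z = focal * baseline / (u_left - u_right)."""
    disparity = u_left - u_right
    z = focal * baseline / disparity
    x = (u_left * z) / focal
    y = (v * z) / focal
    return (x, y, z)

def integrate_trajectory(start, relative_motions):
    """Chain per-frame motion estimates into a camera path (2D for brevity)."""
    path = [start]
    x, y = start
    for dx, dy in relative_motions:
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

# Map: triangulate one "green dot" seen at pixel 100 (left eye) and 90 (right eye)
point = triangulate_stereo(u_left=100, u_right=90, v=20, focal=500, baseline=0.12)

# Trajectory: the "pink line", built from three small forward motion estimates
trajectory = integrate_trajectory((0.0, 0.0), [(0.1, 0.0), (0.1, 0.02), (0.1, 0.02)])
```

A real system repeats both steps many times per second, refining the map and the pose simultaneously, which is precisely what "Simultaneous Localisation and Mapping" names.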

Thus, the instantaneous and simultaneous "sense of space, direction, and motion" derived from visual information is an important ability that we humans possess, and one that machines must acquire in order to imitate us.