Engineers develop new methods of deceiving recognition systems

Researchers have found new ways to deceive recognition systems, making algorithms misidentify objects, misjudge the distance to them, or fail to see them at all.

Deep learning algorithms analyze shapes and colors to tell people from animals, passenger cars from trucks, and so on. They are used in a wide range of applications and fields, often performing critical tasks such as road safety or property protection. However, a group of engineers from the Southwest Research Institute (SwRI) is working to identify the vulnerabilities of these systems so they can be fixed in the future.

The researchers have developed special patterns that, when analyzed, cause cameras to misclassify nearby objects. If a person wears a T-shirt with such a pattern, mounts it on a vehicle, or simply places it on the street, the algorithms will decide that something is there that actually is not, or that an object is somewhere other than where it really is. Moreover, the pattern does not have to cover the entire surface or face the camera head-on to fool the system.
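The article does not detail SwRI's technique, but the general idea behind such adversarial patches can be sketched as follows. This is a minimal illustration, assuming a pretrained torchvision classifier as a stand-in for a detector; the input file name, patch size, target class, and hyperparameters are purely illustrative.

```python
# A minimal sketch of the general adversarial-patch idea (not SwRI's actual method).
# Assumptions: a pretrained ImageNet classifier stands in for a detector, the patch
# is pasted into a fixed corner, and the file name, target class and hyperparameters
# are illustrative only.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])

image = preprocess(Image.open("street_scene.jpg").convert("RGB")).unsqueeze(0)  # hypothetical input photo
target = torch.tensor([859])  # ImageNet class 859 ("toaster"): the wrong label we push the model toward

patch = torch.rand(1, 3, 50, 50, requires_grad=True)  # the printable pattern being optimized
optimizer = torch.optim.Adam([patch], lr=0.01)

for step in range(200):
    patched = image.clone()
    patched[:, :, :50, :50] = patch.clamp(0, 1)  # paste the patch into the top-left corner
    loss = torch.nn.functional.cross_entropy(model(patched), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After optimization, placing the printed patch in the scene should push the
# classifier toward the wrong target label, even though it covers only a small area.
```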

Although to a person they look like ordinary colorful images, in certain situations they can disrupt detectors and wreak havoc inside the system. For example, an ill-chosen advertisement on a bus could cause the neural network of the car behind it to see not a vehicle but the advertised product, which could lead to a collision.

While auditing the algorithms, the team tests various models and evaluates how the patterns affect them. Ultimately, this work should make detection systems safer.
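In practice, such an audit amounts to running the same clean and patched inputs through several models and comparing their predictions. A self-contained sketch of that comparison, with placeholder data and an assumed list of off-the-shelf torchvision models (not SwRI's actual test set), might look like this:

```python
# Sketch of the kind of audit described above: run several off-the-shelf models on
# clean and patched versions of the same image and compare predictions. The model
# list, input tensor, and patch are placeholders, not SwRI's actual setup.
import torch
import torchvision.models as models

image = torch.rand(1, 3, 224, 224)                   # placeholder scene
patched = image.clone()
patched[:, :, :50, :50] = torch.rand(1, 3, 50, 50)   # placeholder adversarial patch

candidates = {
    "resnet18": models.resnet18(weights=models.ResNet18_Weights.DEFAULT),
    "mobilenet_v3": models.mobilenet_v3_small(weights=models.MobileNet_V3_Small_Weights.DEFAULT),
}

with torch.no_grad():
    for name, net in candidates.items():
        net.eval()
        clean = net(image).argmax(dim=1).item()
        fooled = net(patched).argmax(dim=1).item()
        print(f"{name}: clean class {clean} -> patched class {fooled}")
```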

Identity recognition systems are also beginning to appear in retail. At the end of last year, 7-Eleven, the largest Japanese chain of convenience stores