
AI Face Recognition Technology Helps Criminals Hack Databases
The digital age offers new tools that simplify many everyday internet tasks. One example is the well-known AI-based facial recognition system, which serves as a fast and reliable way to identify a user. However, such systems have also become a favorite vehicle for fraud among criminals.
Experts have long sounded the alarm about the insecurity of such AI technologies, and only recently have the popular schemes for abusing them come to light.
According to a report by ID.me Inc., a company specializing in AI technology for identifying users by facial recognition, more than 100,000 people tried to defeat face verification fraudulently in 2020, and that statistic covers the United States alone.
In just six months of the year, more than 80,000 deception attempts were made at the stage where a selfie is matched against an identity document.
Today, we'll take a closer look at this type of fraud and the dangers of facial recognition technology.
Technologies are constantly evolving. Even an iPhone can be unlocked without a passcode. Read more here.
The popularity of AI facial recognition programs
Facial recognition has become one of the most popular ways to quickly identify users when they pay via smartphone, log into accounts, and confirm their identity in various applications.

For example, Uber drivers regularly verify their identity by sending selfies to a dedicated program that confirms who they are and allows them to work. The measure was introduced to prevent account hacking and to stop drivers from sharing their Uber accounts.
Amazon.com Inc. and smaller vendors such as Idemia Group S.A.S., Thales Group, and AnyVision Interactive Technologies Ltd. build and sell user identification software. These systems work by matching a face against a stored "face fingerprint," a mathematical template of a particular person's features. Identifying a single, cooperative person this way is usually more accurate than identifying faces in a crowd.
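As a rough illustration of the idea (not any vendor's actual pipeline), a face "fingerprint" can be thought of as a numeric embedding vector, and verification as a similarity check against the enrolled template. In the sketch below, the 128-dimensional embeddings and the 0.6 threshold are assumptions for the example; in practice, the vectors would come from a trained face-encoder model.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled: np.ndarray,
           threshold: float = 0.6) -> bool:
    """Accept the probe face if it is close enough to the enrolled template.
    The 0.6 threshold is an illustrative assumption, not a vendor value."""
    return cosine_similarity(probe, enrolled) >= threshold

# Hypothetical 128-dimensional embeddings:
enrolled = np.random.rand(128)                      # template stored at enrollment
probe = enrolled + np.random.normal(0, 0.01, 128)   # new selfie, same person
print(verify(probe, enrolled))                      # True for a close match
```

The key design point is that the raw photo is never compared pixel by pixel; only the compact template is stored and matched, which is why a convincing fake face can be enough to fool the check.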
Why do criminals use the face recognition function?
Experts from Experian PLC said that more and more criminals would soon use so-called "Frankenstein faces": AI-generated composites that blend the facial features of several people into a new identity designed to deceive facial identification systems.
They added that the technique is linked to a growing type of financial crime, known as synthetic identity fraud, in which fraudsters combine real data with fabricated information to create a fake identity.
Many activists are campaigning for facial recognition to be abolished altogether. In the UK, for example, privacy campaigners have worn asymmetric makeup designed to deceive the city's CCTV cameras.
An expert at a company engaged in AI security research said that criminals exploit the feature for several purposes, from gaining access to digital wallets on other people's phones to entering highly protected places such as hotels, business centers, and hospitals.
Our smartphones are not as secure as we think. Read the article to learn more.
According to the expert, any AI facial recognition technology is dangerous because technical holes in the system can be exploited to confuse it.
A growing threat
The idea of cheating facial recognition AI dates back to 2017, when a client of the insurance company Lemonade tried to deceive its artificial intelligence by wearing a blond wig and lipstick and uploading a video claiming that his $5,000 camera had been stolen.
After analyzing the video, Lemonade found it suspicious and determined that the man had created a fake identity. The company said in a blog post that he had previously filed a successful claim for damages under his true identity. However, Lemonade declined to comment on its use of facial recognition.
Before that, there was a precedent in which the Chinese prosecutor's office accused two scammers of stealing more than $70 million. They reportedly created a fictitious company that supposedly sold bags and sent fake tax invoices to its clients, using video capable of deceiving the facial recognition system.
According to the report, the scammers bought high-resolution images of people's faces on the online black market and then used an application to animate the photos into videos in which the faces appeared to nod, blink, and open their mouths.
The scammers then used a phone with a built-in function for disabling the front camera and playing the ready-made fake video in its place. The scheme had been running since 2018.
According to John Spencer, director of strategy at Veridium LLC, a biometric identification company, it is sometimes enough to print out a picture of a face, cut out the eyes, and wear the photo as a mask. Many systems fall for the trick because they analyze blinking or moving eyes to confirm that the image is live.
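For context, one common liveness heuristic of this kind is blink detection via the eye aspect ratio (EAR): when the eye closes, the ratio of its vertical to horizontal landmark distances drops sharply. The sketch below follows the common six-landmark convention used with dlib-style detectors; the 0.2 threshold is an illustrative assumption. A photo mask with the eyes cut out defeats exactly this kind of check, because the attacker's own eyes still blink behind the print.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR for six eye landmarks ordered p1..p6 (dlib-style):
    two vertical distances over the horizontal distance."""
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])
    return float((v1 + v2) / (2.0 * h))

def blinked(ear_sequence: list[float], threshold: float = 0.2) -> bool:
    """Treat a dip of the EAR below the threshold as a blink.
    The 0.2 cutoff is an assumption for illustration."""
    return any(ear < threshold for ear in ear_sequence)

# Hypothetical EAR values over a few video frames:
print(blinked([0.31, 0.30, 0.12, 0.29, 0.32]))  # True: frame 3 looks closed
```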

By the way, the hardest system to fool is Apple's Face ID. It projects and reads more than 30,000 invisible dots to build a depth map of the face.
The resulting depth image is then analyzed on the iPhone's own chip, which converts it into a mathematical representation and compares that representation with the enrolled facial data, according to Apple's website.
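Apple's actual algorithm is proprietary, but conceptually the pipeline can be sketched as flattening a depth map into a normalized feature vector and measuring its distance to an enrolled template. Everything below, including the 0.05 tolerance and the 100x100 map size, is an assumption for illustration only.

```python
import numpy as np

def to_template(depth_map: np.ndarray) -> np.ndarray:
    """Flatten a depth map into a unit-length feature vector."""
    v = depth_map.astype(np.float64).ravel()
    return v / np.linalg.norm(v)

def matches(probe: np.ndarray, enrolled: np.ndarray,
            max_distance: float = 0.05) -> bool:
    """Compare depth templates by Euclidean distance.
    The 0.05 tolerance is an illustrative assumption."""
    d = np.linalg.norm(to_template(probe) - to_template(enrolled))
    return float(d) <= max_distance

# Hypothetical 100x100 depth maps (real hardware reads ~30,000 points):
enrolled_map = np.random.rand(100, 100)
probe_map = enrolled_map + np.random.normal(0, 0.001, (100, 100))
print(matches(probe_map, enrolled_map))  # True for a near-identical scan
```

The depth dimension is what makes this approach hard to spoof: a printed photo or a flat screen replay has no 3D structure for the dot projector to measure.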
However, Spencer said that banks and other companies prefer systems other than Apple's popular Face ID for enrolling customers in their iPhone applications.
iPhones have many tech backdoors. Learn more here.
In addition, many online services ask users to upload static photos, video selfies, and other documents for identification and greater data security. The services then send this data to third-party providers, where AI technology analyzes it and verifies the person's identity.
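A typical integration of this kind might look like the sketch below. The endpoint URL, field names, response shape, and API key are all hypothetical placeholders, not any real provider's API.

```python
import requests

API_URL = "https://verify.example.com/v1/identity"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                            # placeholder credential

def submit_for_verification(selfie_path: str, id_document_path: str) -> dict:
    """Send a selfie and an ID document to a (hypothetical) third-party
    verification service and return its JSON decision."""
    with open(selfie_path, "rb") as selfie, open(id_document_path, "rb") as doc:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"selfie": selfie, "id_document": doc},
            timeout=30,
        )
    response.raise_for_status()
    return response.json()  # e.g. {"match": true, "liveness": "passed"}
```

This hand-off is also the weak point the Chinese scammers exploited: if the service trusts whatever video the device's "camera" supplies, a pre-rendered fake can pass the check.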
In search of a solution
In any case, there are two effective ways to protect facial recognition systems from fraud. The first is to update the AI regularly, changing its algorithms in response to new threats. The second is to train the system on as many examples as possible, including examples of attempts to fool it, so that the AI learns to recognize such attacks; a sketch of this idea follows below.
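As a minimal sketch of the second approach (the synthetic feature vectors and the classifier choice here are illustrative assumptions, not any production anti-spoofing pipeline), a binary classifier can be trained on features extracted from both genuine faces and known spoof attempts:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical feature vectors extracted from face images
# (e.g. texture and frequency statistics); 1 = genuine, 0 = spoof.
rng = np.random.default_rng(0)
genuine = rng.normal(0.0, 1.0, size=(500, 32))
spoofs = rng.normal(0.8, 1.2, size=(500, 32))   # printed photos, replays, masks

X = np.vstack([genuine, spoofs])
y = np.array([1] * 500 + [0] * 500)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Retraining this model as new spoofing techniques appear is the
# "update regularly" half of the defense.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"spoof-detection accuracy: {clf.score(X_test, y_test):.2f}")
```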
Large companies such as Google, Facebook, and Apple are constantly looking for new ways to improve facial recognition AI. Facebook, for example, has released a tool for detecting deepfakes.
In addition, Blake Hall of ID.me has said that in 2021 his company was able to block almost all attempts by scammers to deceive its AI technology.
Follow security measures to protect your data from these threats.