Board Directors and Fraudulent Artificial Intelligence

Nicholas J. Price

Artificial intelligence (AI) is a branch of computer science concerned with designing and building smart machines that can perform tasks that normally require human intelligence. AI is an interdisciplinary science that can be useful in a wide variety of applications.

For example, digital marketers use AI on the back end of digital applications to learn more about consumer behavior so they can target their audiences more precisely. Manufacturers use AI in various ways, from integrating multiple departments to streamline custom orders and shorten production processes to powering robotics operations.

While legitimate businesses are exploring ways to use AI to enhance business growth, criminals are also experimenting with ways to use fraudulent artificial intelligence for cybercrime. That was the case when an energy firm based in the United Kingdom received and acted on orders from a “senior executive” who turned out to be an imposter. In fact, the imposter wasn’t a human being at all.

Cybersecurity experts had not ruled out the possibility that cybercriminals would eventually use AI in some way for criminal activity. The United Kingdom incident is prompting cybersecurity experts to develop tools to prevent future incidents of fraud involving AI-powered voice-altering software.

Artificial Intelligence Software Used to Mimic Voice of Chief Executive

In March, the CEO of an energy firm in the United Kingdom answered a call from a chief executive at the firm’s parent company in Germany. The caller told the CEO to send €220,000 (US$243,000) to a Hungarian supplier, stating that the request was urgent and had to be paid within an hour. The CEO responded quickly and transferred the funds immediately. Sometime later, the CEO received another call in which the same chief executive told him that the parent company had reimbursed the funds to the UK firm. Then a third call came in, asking for a second payment. The CEO noticed that, this time, the call came from an Austrian number. He checked whether the funds had been reimbursed and found that they hadn’t. Growing suspicious, he declined to send the second payment and reported the incident.

As it turned out, the CEO had actually been talking to an AI-based software program that was highly successful in mimicking the German executive’s voice over the phone. The software convincingly reproduced the chief executive’s German accent and speech rhythms, which made the call appear legitimate. Cybercriminals had used artificial intelligence to impersonate the chief executive.

The company’s insurance firm, Euler Hermes Group SA, didn’t disclose the name of either company that was affected. The money was transferred from a Hungarian account to an account in Mexico and was subsequently distributed to other locations. Authorities have no suspects at this time.

In this case, the company had insurance, which is one way to mitigate risk. Fortunately, the insurance company covered the claim.

Predictions of AI-Related Cybercrime Are Materializing

This is believed to be the first known case in which artificial intelligence was used to carry out this type of fraud. It’s unknown whether similar incidents have gone unreported, or whether incidents have occurred in which experts couldn’t attribute the technology used in the crime to AI. Experts also noted that they’re unsure whether the criminals used bots to respond to the CEO’s questions; if so, they believe the investigation will be even more difficult.

Prior to this incident, AI experts and law enforcement officials had predicted that it was only a matter of time before criminals would use AI to automate cyberattacks. This could be the first of many such attacks if cybercriminals find the strategy successful. Most cybersecurity experts are adept at keeping hackers out, but they don’t yet have the technology to detect spoofed voices. The next challenge for cybersecurity experts is developing products that can detect fake voices and recordings.

The criminals used commercial voice-generating software to carry out the attack. Voice-generating software is becoming sophisticated enough to impersonate voices quickly and accurately. Another tactic criminals use is splicing together audio samples so that a recording effectively mimics a person’s voice. This technique can take many hours of editing, but the end result is usually quite convincing. There is some concern that such recordings could be used to impersonate celebrities and executives and harm their careers.

The Next Phase of Cybersecurity Is Detection of Fraud

Irakli Beridze is the head of the Centre on AI and Robotics at the United Nations Interregional Crime and Justice Research Institute, where a team of researchers is working on machine learning technologies designed to detect fake video as well as fake audio. Beridze is concerned about the risk of cybercriminals using video calls that combine a particular person’s voice with video of their facial expressions and mannerisms; machine learning technology certainly has the capability to make this happen. Had such a call been used in the case of the UK energy firm, there might have been nothing to arouse the suspicions of the CEO or anyone else, and millions of dollars could have disappeared without anyone knowing where they went.
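
To make the machine learning approach a little more concrete, below is a minimal, purely illustrative sketch in Python of how a simple classifier might be trained to separate genuine recordings from synthetic ones using spectral (MFCC) features. The folder layout, file names and the use of librosa and scikit-learn are assumptions made for the example, not a description of any particular research team’s tools; real deepfake-audio detectors are considerably more sophisticated.

```python
# Illustrative sketch only: a toy classifier that flags possibly synthetic audio.
# Assumes a hypothetical folder layout with labelled recordings in
# voice_samples/genuine/ and voice_samples/synthetic/.
from pathlib import Path

import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def mfcc_summary(path, sr=16000, n_mfcc=20):
    """Summarize one recording as the mean and standard deviation of its MFCCs."""
    audio, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])


def load_dataset(root):
    """Label genuine recordings 0 and synthetic ones 1."""
    features, labels = [], []
    for label, folder in enumerate(["genuine", "synthetic"]):
        for wav in sorted(Path(root, folder).glob("*.wav")):
            features.append(mfcc_summary(wav))
            labels.append(label)
    return np.array(features), np.array(labels)


if __name__ == "__main__":
    X, y = load_dataset("voice_samples")
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0
    )
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

Even a toy example like this makes the underlying idea clear: detection is a pattern-recognition problem, and the same machine learning techniques that enable voice spoofing can be turned toward spotting it.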

Board Directors and Fraud Related to Artificial Intelligence

Imagine the same scenario in which the board was using Diligent Boards, Diligent Messenger and Diligent Secure File-Sharing tools, which are fully integrated and highly protected. With the right procedures in place, the original request could have come directly through the portal, leaving no doubt as to the identity of the chief executive. Confirmations could have been sent back the same way, again with certainty about the identity of the sender.

Diligent also offers Governance Intel, a tool that brings customized intelligence and insights directly to board directors. Directors can filter their settings so that they receive the latest news and information on cybercrime and artificial intelligence, as well as all the other important news they need to lead their companies successfully. Governance Intel and other digital tools are modern solutions for protecting entities against cybercrime.

Nicholas J. Price
Nicholas J. Price is a former Manager at Diligent. He has worked extensively in the governance space, particularly on the key governance technologies that can support leadership with the visibility, data and operating capabilities for more effective decision-making.