In a short period of time, humans have gotten quite accustomed to speaking with chatbots or intelligent virtual assistants (VAs).
But what is a VA, at its core?
At its most basic, a VA is a software program that conducts online chat conversations via text or text-to-speech, typically as an alternative to direct contact with a live human agent. The technology behind VA development is known as natural language processing (NLP). VAs — both chatbots and voice assistants like Alexa or Siri — are adept at answering questions formulated in natural language and can respond as a real person would.
They do the job so well that an increasing number of companies are using VAs — chatbots in particular — to improve and scale customer experience. In fact, 80% of companies want to use a VA by the end of 2020, especially because they are great money-savers. With VAs' 24/7 availability, reduced training expenses, optimized workforce costs and personalized customer experiences, businesses have a real opportunity to cut expenses. VAs will save businesses about $8 billion annually by 2022.
While there are a number of benefits that businesses can enjoy by embracing VAs, it is also incredibly important to first look at the VA's security. While this isn't common knowledge yet, a reliable, detailed and multilayered protection plan is necessary to make VAs secure.
A clear example of what can happen when there are no security measures in place is the 2017 data breach that led to a multimillion-dollar Delta lawsuit. In this case, Delta Air Lines sued an artificial intelligence firm that provides virtual assistant services because its poor security policies resulted in a data breach that exposed a variety of consumer records, including passwords, credit cards, email addresses and travel booking information. Delta claimed that the chatbot company failed to install security measures like multifactor authentication, which allowed hackers to modify the chatbot's source code. The attackers were able to see Delta's website activity and divert customers to a fraudulent website where they could harvest user data. In instances where customers enter confidential details, it is imperative that a chatbot system is protected, or else this data may fall into the wrong hands.
This case brought about a mainstream conversation about the need for chatbot system security, and more research has been made available about the different forms of attacks. In general, VA attacks fall into two broad categories: manipulation (internal) attacks, which modify system behavior, and extraction (external) attacks, which discreetly probe for hidden information and exploit system weaknesses.
An unprotected VA system can be responsible for significant business problems: it can set companies back millions of dollars and compromise the integrity of their customer and company data.
There are five key ways an unprotected ML-powered VA system can cause problems for your business.
1. Data Theft And Unauthorized Transactions
Largely invisible backdoor channels can give chatbot development frameworks access to confidential business information. The Delta data breach is an example of this, and it highlights just how much chatbot attacks can affect brand credibility.
Models can also be manipulated with malicious intent. For example, a user interacting with a banking chatbot could be served a malicious link that redirects them to another webpage where fraudulent transactions can occur.
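One common defense against this kind of link injection is to allow-list the domains a chatbot is permitted to link to before a reply ever reaches the user. The sketch below is a minimal illustration, not a production control; the domain names are hypothetical, and a real deployment would layer this with response signing and output filtering.

```python
from urllib.parse import urlparse

# Hypothetical allow-list of domains this bank's chatbot may link to.
ALLOWED_DOMAINS = {"bank.example.com", "support.example.com"}

def is_safe_link(url: str) -> bool:
    """Accept a URL only if it uses HTTPS and points at an allowed domain."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_DOMAINS

def sanitize_links(urls: list[str]) -> list[str]:
    """Drop any link that fails the allow-list check before the reply is sent."""
    return [u for u in urls if is_safe_link(u)]
```

Even if an attacker manages to inject a redirect into a model's output, a check like this at the response layer keeps the fraudulent URL from ever being shown.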
2. Customer Dissatisfaction And Frustration
Chatbots can be altered in such a way that the intent or user request is misclassified and the response is incorrect. This, in turn, can cause dissatisfaction and frustration among customers. Even worse, it may lead to a complete failure of the system and can cause massive problems in terms of customer retention and lifetime value.
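One way to limit the damage from misclassified intents is a confidence threshold with a human hand-off: if the classifier isn't sure, escalate rather than guess. The sketch below assumes a hypothetical intent-score dictionary; the threshold value and intent names are illustrative assumptions, not from the original article.

```python
# Assumption: the NLP model returns a score per intent; 0.7 is an
# illustrative threshold that would be tuned per deployment.
CONFIDENCE_THRESHOLD = 0.7

def route_intent(scores: dict[str, float]) -> str:
    """Pick the top-scoring intent, but fall back to a human agent
    when the model is not confident enough to answer safely."""
    intent, score = max(scores.items(), key=lambda kv: kv[1])
    if score < CONFIDENCE_THRESHOLD:
        return "handoff_to_agent"
    return intent
```

A fallback like this turns a silently wrong answer — the failure mode described above — into a visible escalation, which is far less damaging to customer trust.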
3. Increased Computational Expenses
If an attacker drains a business's computational resources by flooding the system with an extreme number of malicious bot queries, the end result is always increased expenses for the business.
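The standard mitigation for this kind of resource drain is per-client rate limiting. A minimal sketch of one common approach, a token bucket, is shown below; the capacity and refill rate are illustrative assumptions, and a real system would track buckets per client behind shared infrastructure.

```python
import time

class TokenBucket:
    """Per-client token bucket: each request consumes one token, and
    tokens refill at a fixed rate, capping how fast any one client
    can hit the chatbot."""

    def __init__(self, capacity: int = 10, refill_per_sec: float = 1.0):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens based on elapsed time, then try to spend one.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Requests rejected here never reach the expensive NLP pipeline, so a query flood costs the attacker effort but costs the business very little compute.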
4. Impacted Business Strategy
Business strategy may be altered based on analytics drawn from false chatbot engagements. For example, a business may think that Product B is suddenly gathering more interest than the usual standout Product A because of an attacker’s continued bot requests — in actuality, all of these requests are false, but the company doesn’t know that and could choose to spend precious resources developing and refining a product that people don’t really want or need.
Because most businesses make decisions based on analytics, this type of analytics poisoning can jeopardize a company’s entire business strategy.
5. Increased Manual Support
Attackers can overload the system by releasing multiple bots at once, with drastic consequences for genuine customers' access to the service. Most systems cannot tell the difference between a bot and a genuine user, as they neither analyze the text nor filter requests by source. This leads either to latency, where excess traffic slows the system considerably, or to denial of service and system crashes.
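Telling bots from genuine users is hard, but even a crude behavioral signal helps. The sketch below flags clients whose message cadence is faster than a human could plausibly type; the half-second threshold is purely an illustrative assumption, and real systems combine many signals (CAPTCHAs, device fingerprints, ML classifiers) rather than relying on timing alone.

```python
# Assumption: 0.5s is an illustrative lower bound on human typing cadence,
# not an industry standard.
MIN_HUMAN_INTERVAL = 0.5  # seconds between messages

def looks_like_bot(timestamps: list[float]) -> bool:
    """Flag a client if most gaps between its messages are faster
    than a human could plausibly produce them."""
    if len(timestamps) < 3:
        return False  # too little data to judge
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    fast = sum(1 for g in gaps if g < MIN_HUMAN_INTERVAL)
    return fast / len(gaps) > 0.8
```

Flagged traffic can then be challenged or deprioritized instead of being allowed to crowd out genuine customers.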
In effect, there will be an increased need for manual support, which costs time and money for the business.
Unfortunately, simple web application or network firewalls are not enough to protect your VA chatbot because they don't operate at the NLP level. This is a new space, and it is imperative that businesses choose to protect their VAs through whatever means possible. When security measures are taken, a VA can be developed into a reliable application that significantly enhances customer service, allowing companies to automate their operations, manage large quantities of consumer requests and deliver cost-cutting opportunities. Companies simply need to first ensure that their VAs are secure and protected from any form of attack.