Identity theft is one of the fastest-growing crimes in the U.S., with 33% of adults reporting that they’ve been victims at some point in their lives. As our businesses, livelihoods, and now our education systems increasingly depend on electronic data for daily operations, larger amounts of personally identifiable information (PII) are being entered and stored online, putting companies and individuals at risk of identity theft.
PII: What is it?
PII is any data that can be used to identify a specific person, including but not limited to:
- Social Security Number
- IP Address
- Bank Account Number
- Login ID
- Credit Card Number
- Driver’s License
As artificial intelligence (AI) has advanced, our reliance on data-driven capabilities and our expectation of speedy data retrieval have grown, requiring stronger and more expansive data protection. We want convenience at our fingertips, but that luxury comes with an increased risk of exposure.
PII in a Higher Ed Chat Bot
In the past, a college student might have called the Bursar’s office to inquire about their outstanding balance. The customer service rep would ask for the student’s ID number, look up the account, and provide the amount owed. The rep would be unlikely to retain the student’s PII from memory, so the risk of exposure from that phone call remained low.
Nowadays, we are accustomed to getting immediate results with minimal effort. Today’s student prefers to ask their school’s chat bot “how much do I owe?” and expects the bot to deliver the answer right away. The risk of exposure is much higher in this situation than it was with the phone call of yesteryear.
To obtain the answer, the bot needs to gather PII from the student, such as an account number and date of birth, and then retrieve the data from the school’s student information system (SIS). Entering and transferring this data through a chat window on a computer or mobile device leaves the student vulnerable should the school’s systems become compromised.
How Does Ivy.ai Ensure Data Privacy?
To address this issue, our team at Ivy.ai has implemented custom data obfuscation settings. First, we identify the pattern of the data our clients wish to mask. Clients can select any number of data sequences, from Social Security Numbers to credit card numbers to phone numbers and more. If a user types a sequence of numbers resembling a sensitive data pattern (for example, 123-45-6789), the actual data is erased and appears as randomized characters in the chat transcript. The data is no longer accessible to anyone, including Ivy.ai, our clients, or end users.
The second step in our process is to alert the user in the chat window. If a sequence of numbers coinciding with a sensitive data pattern is entered, our bot will issue a warning message in the chat instructing the user to refrain from entering additional sensitive data.
As David A. Teich noted in his recent Forbes article, “privacy and compliance…are a direct reflection of a company’s ethics and business practices.” Our team at Ivy.ai couldn’t agree more.
If you are interested in applying data obfuscation settings to your Ivy.ai account, please reach out to your Ivy.ai representative.