This post originally appeared on the Finn AI blog. Finn AI is now a part of Glia.
Taking their cue from industries like retail and social media, banks are exploring artificial intelligence, including machine learning algorithms and natural language processing, to help them remain relevant to their increasingly digital clientele.
When implemented correctly, AI presents amazing opportunities for the financial services industry to surprise and delight customers. AI uses algorithms that progressively improve as they consume large volumes of data. The more data they consume, the better they get at spotting patterns and making decisions. For example, patterns in customer behavior can help AI suggest a new investment product or spot fraud on an account.
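As a toy illustration of this kind of pattern spotting (not a description of any production system), the sketch below fits a simple anomaly detector to a hypothetical account's transaction history and flags an out-of-pattern charge. The amounts and the model choice are assumptions made purely for demonstration.

```python
# Toy example: learn what "normal" spending looks like for one account,
# then flag transactions that fall outside that pattern.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical history of transaction amounts (in dollars) for one account.
history = np.array([[12.5], [40.0], [8.99], [25.0], [31.2], [18.6], [22.4], [15.0]])

model = IsolationForest(contamination=0.05, random_state=0).fit(history)

# Score two new transactions: -1 marks an anomaly worth a closer look.
print(model.predict([[27.0], [4200.0]]))  # typically prints [ 1 -1]
```

The more history the model sees, the better its picture of normal behavior becomes, which is the feedback loop described above.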
As these algorithms evolve beyond the understanding of the people who created them, they combine data with other data in new ways. These new data combinations raise questions around data governance, particularly in relation to the General Data Protection Regulation (GDPR) in Europe.
In the wake of Cambridge Analytica and other data privacy scandals, it’s anticipated that the regulatory landscape in North America will shift toward alignment with Europe. It’s prudent for financial services organizations to be mindful of current and impending data regulations and take the following three considerations into account when building solutions powered by AI:
1. Build a secure and fast digital backbone
To deliver real value, AI needs access to large amounts of data. The data needs a highly secure, low-latency connection to travel from point of capture to point of analysis and back again as quickly as possible. A fast, secure, and reliable network designed for AI workloads is key to ensuring seamless data transfer between applications.
Cloud infrastructure is also recommended: legacy on-premises servers struggle to deliver the computing power that AI requires. Fintech partners can help with this. Finn AI, for example, uses an Amazon Web Services (AWS) Virtual Private Cloud (VPC) that operates on encrypted volumes, over encrypted connections, and with encrypted backups.
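As a rough sketch of what encryption at rest can look like in practice (an illustrative example, not Finn AI's actual configuration), the snippet below uses the AWS boto3 SDK to enforce default EBS encryption in a region and create an encrypted data volume. The region, size, and volume type are arbitrary placeholders, and the calls require appropriate AWS credentials and permissions.

```python
# Illustrative only: enforce encryption at rest for block storage in AWS.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# Require encryption for all newly created EBS volumes in this region.
ec2.enable_ebs_encryption_by_default()

# Create a data volume; Encrypted=True means the volume, and any
# snapshots (backups) taken from it, are encrypted at rest.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",  # placeholder AZ
    Size=100,                       # GiB, placeholder size
    VolumeType="gp3",
    Encrypted=True,
)
print(volume["VolumeId"], volume["Encrypted"])
```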
Having a strong digital backbone and appropriate infrastructure in place will help financial institutions take advantage of AI while securing client data, complying with industry regulations, and avoiding data breaches that can destroy brands and impact shareholder value.
2. Know (and adhere to) privacy and security requirements
Most of the data required to make AI useful in financial services is personal. It’s produced by individuals going about their daily lives: purchasing groceries, paying bills, booking a vacation, investing in stocks, and so on. It contains sensitive information about people’s behavior, travel habits, and lifestyles and, with AI, it can also be used to forecast further activity (creating yet more personal data).
The handling of this type of data is now governed by regulations such as the GDPR and the revised Payment Services Directive (PSD2). Financial services organizations must be more transparent about their use of customer data and be able to explain and defend how they use, store, and archive it.
3. Choose fintech partners that take security seriously
Respecting the privacy of customer data while maintaining high security standards is critical. Conversational AI consumes large amounts of customers' transactional data to learn, perform tasks, and communicate. As a result, it can be complex to track how the AI evolves and how it uses that data.
In Europe, the GDPR states that companies must minimize the amount of data they collect and store, limiting it to what is strictly necessary. Companies are also required to put limits on how long they store that data. When choosing a fintech partner for your bank’s AI solutions, make sure that they follow the Reduce, Redact, Review principles:
- Reduce: AI partners should only request or store PII that is absolutely necessary. All PII should be evaluated as part of the regular software development lifecycle.
- Redact: In cases where PII must be stored, technology partners should take due care to quickly redact and anonymize any end-user-identifiable elements. Performance logs should be anonymized in real time before storage (see the sketch after this list), and other stored PII should be redacted at defined checkpoints as the information loses its purpose.
- Review: Stored data should be reviewed continually to ensure redaction and reduction policies are working.
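As a minimal sketch of the real-time anonymization described under Redact (an illustrative example, not Finn AI's implementation), the snippet below attaches a logging filter that scrubs email addresses and card-like numbers from log messages before they reach storage. The two patterns are placeholders; production systems need far broader PII coverage.

```python
# Minimal sketch: redact PII from log messages before they are stored.
import logging
import re

# Placeholder patterns; real deployments cover many more PII types.
PII_PATTERNS = [
    (re.compile(r"\b\d{13,16}\b"), "[CARD REDACTED]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL REDACTED]"),
]

class RedactingFilter(logging.Filter):
    """Scrub PII from each log record in real time, before storage."""
    def filter(self, record: logging.LogRecord) -> bool:
        message = record.getMessage()
        for pattern, replacement in PII_PATTERNS:
            message = pattern.sub(replacement, message)
        record.msg, record.args = message, None
        return True

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("assistant")
logger.addFilter(RedactingFilter())

logger.info("User jane.doe@example.com paid with card 4111111111111111")
# Logged as: User [EMAIL REDACTED] paid with card [CARD REDACTED]
```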
Finn AI, for example, has developed its own Privacy by Design model. Minimal handling or storage of PII is built into its product design, and the PII it does collect is used to provide additional value to users of its platform. Finn AI continually refines its redaction, reduction, and transparency practices as its technology evolves, so banking partners can be confident that their client data is protected and their AI solutions comply with industry regulations.