Generative AI: A Friend or A Foe?

Kristina Avrionova, Fortanix
Published: Jun 18, 2024
Reading time: 3 mins

Gen AI is sucking the air out of every room, be it the living room or the boardroom. While many of us get excited about the seemingly limitless possibilities of AI, it is important for everyone to understand the security risks AI brings. After all, it is very easy to leak your own Personally Identifiable Information (PII) or sensitive corporate data with just a few prompts into any foundational AI model, without realizing the consequences. Similarly, AI has given bad actors new attack vectors to exploit in the hunt for valuable data. AI can be our friend, but only if we are aware of the vulnerabilities and associated risks and follow best security practices.

What are the downsides of AI from a cybersecurity standpoint?

  • Data leaks: Employees can unintentionally feed models confidential information. For example, a customer service rep may use a GenAI model to polish a response without realizing that the copy contains a customer's name, email, credit card number, and other sensitive data. Or a developer may want to quickly debug code with the help of a model, involuntarily exposing the company's source code. These examples are based on real-life scenarios, which explains why many companies have placed severe restrictions on the use of GenAI models in their organizations. Bad actors, on the other hand, can use prompt leaks or API attacks to manipulate a model into displaying internal sensitive data.
  • Phishing attacks: No matter how well the firewall is configured or how aware we, as users, have become of phishing attacks, there will always be people who forget or ignore the rules of good email hygiene. AI chatbots can now generate text that mimics real users more convincingly, easily deceiving individuals in a hurry or those with overflowing inboxes.
  • Malware injection: GenAI can now write code with ease, speed, and effectiveness, so it is tempting to use it for quick task completion. But before deploying generated code in your own environment, can you be certain the model hasn't been compromised and malicious code hasn't been inserted?
  • Misinformation: A model can hallucinate, meaning it gives incorrect or misleading results. While this can happen for various reasons, such as insufficient training data, it can also happen because bad actors deliberately insert bad information into the model, causing it to misbehave and generate harmful output. This is especially concerning for organizations using Retrieval-Augmented Generation (RAG). RAG uses internal and proprietary data to enhance the accuracy and reliability of GenAI models so they can provide tailored, relevant information. As a result, the data and the RAG pipeline together have become prime targets.

So what can be done about it? 

The answer is an extensive list of tactics. But data security, as always and even more so now, is fundamental, and it should not be an afterthought.

Data Security best practices: 

  • Limit access to data: Grant access to sensitive data only to those who absolutely need it; implement role-based access controls and regularly review permissions. In short, enforce Zero Trust principles.
  • Encrypt data throughout its lifecycle: Encryption ensures that even if data is intercepted, it cannot be read without the appropriate decryption key. Data should be encrypted at rest, in transit, and in use, and you should regularly scan for exposed, meaning unencrypted, data services.
  • Safely work with data: If you have to use sensitive data in AI models because it will produce more relevant output, make sure the data has been anonymized.
  • Ensure code integrity: Before deploying code, especially code generated with the help of AI, verify that no malware was introduced. Once you know the code is clean, you can cryptographically sign it using digital certificates. This is the most effective way to attest that the software is not malicious.
  • Manage your cryptographic assets: From encryption keys for data across datacenters, SaaS, or multicloud environments to certificates and secrets, know where your crypto assets are, control access to them, and store them securely.
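The anonymization step above can be sketched in a few lines. This is a minimal, illustrative example only; the regex patterns and placeholder labels are assumptions, and a production system would rely on a dedicated PII-detection service rather than hand-rolled rules:

```python
import re

# Illustrative patterns only -- real deployments should use a purpose-built
# PII-detection tool; regexes alone will miss names, addresses, etc.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII with placeholder tokens before the text
    leaves your environment (e.g., before it goes into a prompt)."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Refund card 4111 1111 1111 1111, email jane.doe@example.com"
print(redact(prompt))
```

Running a filter like this at the boundary, before any text reaches a third-party model, keeps the sensitive values inside your environment while still letting employees benefit from the model's help.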

Follow Fortanix for more best practices on data security and let's make AI a friend, not a foe. 

About Fortanix 

Fortanix is a global leader in data security. We prioritize data exposure management, as traditional perimeter-defense measures leave your data vulnerable to malicious threats in hybrid multi-cloud environments. The Fortanix unified data security platform makes it simple to discover, assess, and remediate data exposure risks, whether it’s to enable a Zero Trust enterprise or to prepare for the post-quantum computing era. We empower enterprises worldwide to maintain the privacy and compliance of their most sensitive and regulated data, wherever it may be.  
