October 4, 2024

Safeguarding Sensitive Information in the Age of Generative AI

Since its debut in 2022, ChatGPT has radically reshaped the way we interact with technology. Generative AI (genAI) platforms like ChatGPT, Google Gemini, and Meta AI have rapidly gained in popularity, offering capabilities that range from rewriting text to generating creative content. 

While these tools have created new opportunities for enhanced productivity, they’ve also introduced new security risks — particularly when users unknowingly share sensitive information. The convenience of these tools makes it easy to accidentally leak sensitive data and cause unintended privacy issues. As more organizations integrate these tools into their workflows, it’s critical to stay aware of their vulnerabilities. 

In this blog, we’ll delve into the emerging challenges posed by generative AI and explore strategies organizations can implement to protect their data. By understanding these risks and proactively addressing them, organizations can take advantage of AI while safeguarding their most valuable asset — their data. 

The risks of generative AI 

GenAI platforms can be both playful and practical. Someone might use generative AI for lighthearted activities, like rewriting a song in the style of their favorite cartoon character. But the practical use cases — like drafting emails or creating a presentation — are where the risks of genAI become more evident. 

When users copy information into a genAI tool, that data may be retained by the provider and used to train future versions of the model. That’s a particular problem if users are working with sensitive data, as it could be inadvertently exposed to other users and organizations using the same genAI tool. 

This type of accidental data leakage can lead to significant privacy violations — but that’s not the only risk. If proprietary data or sensitive information is uploaded to genAI platforms, it opens the door to malicious exploitation, where cybercriminals leverage this data for fraudulent activities or phishing attacks. 

While genAI can offer convenient benefits, it’s crucial for users and organizations to remain vigilant against the risk of data leakage.  

Implementing genAI security with Lookout 

As organizations continue to embrace genAI tools, there are three practical options for guarding against data leakage: 

  • Block access to genAI sites for most users 
  • Allow selected users access to specific genAI sites 
  • Implement data loss prevention (DLP) techniques to prevent data leakage 

These are all goals that can be achieved by using Lookout Secure Internet Access — part of the Lookout Cloud Security Platform. Here’s how. 

Blocking access

The first step in protecting against data leakage through genAI is completely blocking access to all genAI sites. This may sound drastic, but it’s the most effective way to prevent unauthorized use of these tools. Here’s how we do it:

  1. Configure the environment for TLS inspection: This allows us to monitor and control encrypted traffic.
  2. Create a web and application policy: This policy specifically targets and blocks access to genAI sites.
  3. Verify the setup: We attempt to access generative AI sites to ensure users receive a browser message explaining why the access is blocked.
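
As a rough illustration of the block policy in step 2 (this is a conceptual sketch, not Lookout’s actual policy engine, and the domain names are assumptions), a web policy can be thought of as matching each request’s host against a genAI block list:

```python
# Conceptual sketch of a genAI block policy: match the request host
# against a block list, including subdomains. Domains are examples only.
from urllib.parse import urlparse

BLOCKED_GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "meta.ai"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host is a blocked domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKED_GENAI_DOMAINS)

print(is_blocked("https://chat.openai.com/c/123"))  # True -> show block page
print(is_blocked("https://example.com/docs"))       # False -> allow
```

In a real secure web gateway, a "True" result would redirect the user to the browser message described in step 3 rather than simply dropping the request.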

Allowing exceptions

We recognize that blanket restrictions can hinder productivity. Certain users may need access to genAI tools for legitimate purposes. Here’s how we manage exceptions:

  1. Add exceptions to the block policy based on user groups: Specific groups, such as administrators, may be granted access.
  2. Standardize approved genAI sites: We allow access to selected, trusted generative AI sites, like ChatGPT or Google Gemini, for specific users.
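
The exception logic above can be sketched as follows (a simplified illustration; the group and domain names are hypothetical, and a real policy engine would evaluate many more attributes):

```python
# Conceptual sketch: group-based exceptions layered on a default block.
# Assumes the host has already been identified as a genAI site.
APPROVED_GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com"}
ALLOWED_GROUPS = {"administrators", "ai-pilot-team"}

def access_decision(user_groups: set, genai_host: str) -> str:
    """Allow approved genAI hosts for exempt groups; block everything else."""
    if genai_host in APPROVED_GENAI_DOMAINS and user_groups & ALLOWED_GROUPS:
        return "allow"
    return "block"

print(access_decision({"administrators"}, "chat.openai.com"))  # allow
print(access_decision({"sales"}, "chat.openai.com"))           # block
```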

Incorporating DLP 

When blocking access isn’t practical or sufficient, data loss prevention (DLP) fills the gap. DLP monitors sensitive data and prevents it from being shared. Here’s our approach:

  1. Create a policy to audit user behavior on genAI platforms: We set up policies to track what users are doing on generative AI sites.
  2. Set up DLP templates: These templates are designed to identify personally identifiable information (PII) and other sensitive data.
  3. Test the DLP setup: We post sensitive data on genAI sites to see if the platform identifies and logs these activities. Once we’re confident it works, we switch the policy from audit to block, ensuring that any attempt to share sensitive data is prevented.
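
To make the audit-versus-block distinction concrete, here is a deliberately simplified sketch of a DLP check (the PII patterns are illustrative stand-ins, far cruder than what a production DLP template would use):

```python
# Simplified DLP sketch: scan outbound text for PII-like patterns before
# it reaches a genAI site. In audit mode, matches are logged; in block
# mode, the request is stopped. Patterns are illustrative only.
import re

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def inspect(text: str, mode: str = "audit") -> str:
    """Return 'allow' when clean, 'logged' in audit mode, 'blocked' in block mode."""
    hits = [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]
    if not hits:
        return "allow"
    return "blocked" if mode == "block" else "logged"

print(inspect("Summarize this memo for me"))           # allow
print(inspect("My SSN is 123-45-6789"))                # logged (audit mode)
print(inspect("Email alice@example.com", mode="block"))  # blocked
```

Switching the policy from audit to block, as in step 3, corresponds here to changing the mode once the audit logs confirm the patterns are firing correctly.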

Future-proofing your data security

By following these steps and using Lookout Secure Internet Access, organizations can effectively protect themselves from the risks associated with generative AI tools. Implementing these measures helps maintain a balance between leveraging the benefits of AI and safeguarding sensitive information.

To learn more about protecting your sensitive data from genAI risks — and stay on top of other emerging security threats — check out our on-demand webinar, Mastering SSE: Protect Your Data From the GenAI Overlords.

Protect Your Data From the GenAI Overlords

Join us to discover the essential strategies for keeping your data safe from the increasingly ambitious plans of our GenAI overlords!

Book a personalized, no-pressure demo today to learn:

  • How adversaries are leveraging avenues outside traditional email to conduct phishing on iOS and Android devices
  • Real-world examples of phishing and app threats that have compromised organizations
  • How an integrated endpoint-to-cloud security platform can detect threats and protect your organization
