Let’s talk security: Keeping data safe and secure when using AI

Security is a critical topic in a world where systems hold hundreds of thousands, or even millions, of pieces of personally identifiable information. This is especially important for applications built on artificial intelligence (AI). There is a lot to this conversation, so we will begin it here and continue to explore the topic over time.

One critical consideration is the amount of personally identifiable information (PII) given to any AI application. Here’s how we approach this issue:

  1. Minimize PII Storage: Our system strips Social Security Numbers from documents at upload time, so they are never stored. This is the first layer of security.

  2. Limit Data Sharing: When using the foundation models that power TruePilot, we limit what those models can access, sending only the information necessary for the task. For example, if we are looking for clients at a certain income level or who file in specific states, we send only the income or state-return data associated with each client, excluding names, addresses, and other unrelated tax data. This is crucial for keeping data secure.

  3. Evaluate AI Tool Policies: We thoroughly review the security policies of every AI tool we use. All in-house hosted models are exclusively accessible by TruePrep.ai, and we utilize HIPAA-compliant data storage solutions. We only partner with third parties that are SOC-2, GDPR, and CCPA compliant, and these partners are regularly evaluated by external auditors. Any external AI system we utilize does not store our data permanently and only uses the data for abuse monitoring and compliance purposes. Thus, we ensure that all data handled by TruePrep.ai is managed with the highest level of security and compliance possible [1].

  4. Anonymization and Encryption: Utilizing techniques such as anonymization, pseudonymization, and strong encryption ensures that even if data is intercepted, it cannot be easily linked back to individuals. These practices are essential for safeguarding PII in AI applications [2].

  5. Compliance and Best Practices: Adhering to best practices and regulatory requirements, such as those outlined in GDPR and HIPAA, further secures sensitive data. Implementing strong encryption algorithms and secure key management practices ensures the confidentiality and integrity of PII [3].
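To make the first step above concrete, here is a minimal sketch of redacting Social Security Numbers from document text before anything is stored. This is a hypothetical illustration of the general idea, not TruePrep's actual pipeline; the pattern and placeholder text are assumptions.

```python
import re

# Matches the standard SSN format NNN-NN-NNN N (hypothetical pattern;
# a production system would also handle unformatted and OCR-garbled variants).
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact_ssns(text: str) -> str:
    """Replace any SSN-formatted string with a fixed placeholder before storage."""
    return SSN_PATTERN.sub("[SSN REDACTED]", text)

print(redact_ssns("Taxpayer: Jane Doe, SSN 123-45-6789"))
# → Taxpayer: Jane Doe, SSN [SSN REDACTED]
```

Redacting at ingestion means the sensitive value never reaches storage at all, which is stronger than deleting it later.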
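The data-sharing limit in step 2 amounts to field-level data minimization: before a query reaches an external model, every field it does not need is dropped. The sketch below assumes hypothetical field names (`income`, `filing_state`) purely for illustration.

```python
# Fields the model actually needs for this particular query (assumed names).
ALLOWED_FIELDS = {"income", "filing_state"}

def minimize(record: dict) -> dict:
    """Return only the allow-listed fields; everything else never leaves our system."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

client = {
    "name": "Jane Doe",        # never sent to the model
    "address": "1 Main St",    # never sent to the model
    "income": 84000,
    "filing_state": "NY",
}
print(minimize(client))  # → {'income': 84000, 'filing_state': 'NY'}
```

An allow-list (rather than a block-list) is the safer design choice here: a newly added field is excluded by default until someone deliberately approves sharing it.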
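Pseudonymization, mentioned in step 4, can be sketched with a keyed hash: a direct identifier is replaced by a stable token that cannot be linked back to the person without the secret key. This is a simplified illustration, not a full privacy scheme; a real deployment would pair it with encryption at rest using a vetted library and managed keys.

```python
import hashlib
import hmac

# Placeholder only; in practice the key lives in a secrets manager, never in code.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymize(identifier: str) -> str:
    """Derive a stable, non-reversible token for an identifier via HMAC-SHA256."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

token = pseudonymize("123-45-6789")
assert token == pseudonymize("123-45-6789")  # stable: same input, same token
assert token != pseudonymize("987-65-4321")  # distinct inputs get distinct tokens
```

Because the same input always yields the same token, records can still be joined and analyzed under their pseudonyms, while an intercepted token on its own reveals nothing about the individual.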

This ongoing focus on security helps mitigate the risks associated with handling PII in AI applications and ensures that our systems remain robust against potential threats.

Additional Reading

  1. dasera.com - Protecting PII Data in AI Applications

  2. medium.com - Best Practices for handling PII data | by Andrew Weaver

  3. virtru.com - PII Encryption Best Practices: 6 Steps to Secure PII

  4. cybersecurity.illinois.edu - Privacy considerations for Generative AI

  5. lakera.ai - A Guide to Personally Identifiable Information (PII) and ...

  6. piiano.com - PII Security Best Practices: How to Protect Sensitive Data
