Emerging Risks in Third-Party AI Solutions and How to Help Address Them

Cyber Labs


This insight is part 02 of 20 in this Collection.

December 4, 2024 · 10 min read



In today's AI-driven business landscape, organizations increasingly rely on external vendors and partners to power their AI initiatives. While this collaboration can fuel innovation, it can also introduce new risks and vulnerabilities that undermine AI's benefits. To help realize AI's potential, leaders should consider the security aspects of AI supply chains, third-party relationships, and software components. Failing to properly understand and manage these risks can lead to severe consequences, including reputational damage, financial losses, and regulatory penalties. Like any other third-party risk, this calls for a proactive and strategic approach that supports both innovation and security: effectively identifying, assessing, mitigating, and monitoring risks on an ongoing basis.

Manage Third-Party Risks Decisively

AI solutions frequently depend on third-party vendors, which can pose significant security challenges. An essential first step in managing these risks is to adopt a structured approach built on prioritization, evaluation, and transparency. Below, we outline key steps to help manage third-party risks and secure your AI ecosystem, starting with prioritizing third-party suppliers and vendors based on their level of criticality to the AI system.

  • Prioritize Your Vendors

    One way to start is by identifying, classifying, and categorizing your vendors based on their role and impact on your AI initiatives. Critical vendors, such as those providing foundational components or accessing and managing sensitive data, should be prioritized for in-depth reviews. While vendors may have different levels of criticality to your AI systems, the appropriate level of assessment should be determined by multiple factors, including:

    • Your industry's regulatory requirements
    • The type and sensitivity of data being processed
    • Compliance obligations (e.g., HIPAA, GDPR, GLBA)
    • The vendor's level of access to your systems and data
    • Potential impact on your critical business operations
    • Contractual obligations to your own customers and partners

    Even for vendors deemed less critical to core AI operations, organizations should maintain rigorous vendor management practices aligned with their regulatory requirements and risk management framework. Consult with legal and compliance teams to ensure vendor assessment protocols meet all applicable regulatory standards for your industry.

  • Determine Where to Start

    You can begin by focusing on vendors whose services are directly integrated into your AI systems. For instance, suppliers of pre-trained models or data-processing tools should be examined and risk profiled on a priority basis, as any vulnerabilities in their components could have immediate and significant impacts on your AI's performance and security.

  • Assess the Level of Effort for Vendor Due Diligence

    The level of vendor due diligence can be tailored to the vendor's importance and the sensitivity of the data it handles. High-impact or high-risk vendors require rigorous security assessments, including audits of their security controls and compliance certifications. For lower-impact vendors, it is important to ensure they follow industry-standard practices and, among other things, have proper incident reporting protocols in place.
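The tiering described above can be sketched as a simple weighted scoring model. This is a minimal illustration, not a prescribed methodology: the factor names, weights, and tier thresholds below are all hypothetical and would need to be calibrated to your own regulatory and business context.

```python
from dataclasses import dataclass

# Hypothetical weights reflecting the assessment factors listed above.
FACTOR_WEIGHTS = {
    "data_sensitivity": 3,      # type and sensitivity of data processed
    "system_access": 3,         # vendor's level of access to systems and data
    "business_impact": 2,       # impact on critical business operations
    "regulatory_scope": 2,      # e.g., HIPAA, GDPR, GLBA exposure
    "contractual_exposure": 1,  # obligations to your own customers/partners
}

@dataclass
class Vendor:
    name: str
    scores: dict  # factor -> 0 (none) through 3 (high)

def risk_score(vendor: Vendor) -> int:
    """Weighted sum of per-factor scores."""
    return sum(FACTOR_WEIGHTS[f] * vendor.scores.get(f, 0) for f in FACTOR_WEIGHTS)

def tier(vendor: Vendor) -> str:
    """Map a score to a due-diligence tier (illustrative thresholds)."""
    s = risk_score(vendor)
    if s >= 20:
        return "critical"   # in-depth review, audits, certification evidence
    if s >= 10:
        return "high"       # structured questionnaire plus supporting evidence
    return "standard"       # baseline due diligence

# Example: a supplier of pre-trained models with broad data access.
model_provider = Vendor("pretrained-model-host", {
    "data_sensitivity": 3, "system_access": 3,
    "business_impact": 3, "regulatory_scope": 2,
})
print(tier(model_provider))  # → critical
```

The value of a model like this is less the arithmetic than the consistency: every vendor is assessed against the same factors, which makes prioritization defensible to auditors and regulators.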

Some Key Considerations for Vendor Evaluation
  • Security Controls

    Assess their adherence to robust security standards, such as encryption, access control, and vulnerability management.

  • Compliance and Certifications

    Look for certifications like ISO 27001 or SOC 2, which indicate a commitment to security best practices, as well as other relevant regulations like GDPR, HIPAA, or industry-specific AI standards, particularly concerning AI’s role in decision-making or data processing.

  • Incident Response

    Require a well-documented response plan in case AI-related risks materialize. Strive to ensure the vendor has clear protocols for timely incident reporting and effective communication when security issues arise. Include contractual obligations that require vendors to comply with your security policies and report incidents promptly so issues can be managed and remediated.

  • Programmatic Considerations

    Focus on evaluating whether the vendor has strong programmatic support for secure integrations. This includes APIs, private connections (e.g., Azure Private Link), and automated mechanisms for enforcing security policies.

  • Questionnaires and Evaluation Tools

    Use structured questionnaires to gather information on vendor security practices, including data handling, access control measures, and previous incident history. This can help create a consistent baseline for evaluating vendor capabilities and weaknesses.

  • Generative AI Transparency

    Be aware of vendors who falsely claim to integrate generative AI into their solutions. Commit to verifying the actual capabilities, such as whether models are hosted in a private cloud (e.g., Azure to Azure OpenAI or AWS to Claude) or if they are merely calling third-party APIs. Assess logging, guardrails, and evaluations conducted by vendors to help ensure responsible AI use.

Secure Your AI Supply Chain

Understanding and managing third-party risks is just the beginning. To help effectively secure your AI initiatives, it is important to extend that diligence to the entire AI supply chain, ensuring that each component and integration point is secure and trustworthy. For instance, when using environments like Jupyter Notebooks, assess the extensions and plugins in use, such as nbconvert or ipywidgets, and ensure they comply with security best practices.

Securing your AI supply chain requires a layered approach that includes steps such as vetting components, real-time risk intelligence and continuous monitoring, governance, and working programmatically with vendors. To help navigate securing your AI supply chain, we've broken down some of these key elements into the following sections:

  • Vet and Validate Components

    Ensure every external component, library, and service used in your AI models is rigorously vetted and validated before integration. This includes tools like PySpark, Dask, and other distributed processing technologies.

  • Implement Continuous Monitoring

    Set up continuous monitoring to help detect emerging vulnerabilities in components such as open-source datasets or external libraries. Incorporating automated tools can contribute to making this process efficient and reliable.

  • Govern Use of External Resources

    Establish and enforce strict governance policies to manage the use of external resources, including open-source models and data libraries. This helps maintain compliance and reduce security risks.

Organizations should conduct thorough security reviews before integrating any external components, such as open-source libraries, into their AI models. Automated scanning tools can help monitor for updates and security patches, helping to prevent potential breaches and maintain the integrity of AI systems.
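A lightweight starting point for the monitoring step is to inventory the Python packages installed in an AI environment and flag any pinned below a known-safe version. The minimum-version list below is hypothetical; in practice it would be fed from an advisory source such as the OSV database or a tool like pip-audit.

```python
from importlib import metadata

# Hypothetical minimum-safe versions; replace with a real advisory feed.
MIN_SAFE = {
    "numpy": (1, 22, 0),
    "requests": (2, 31, 0),
}

def parse_version(v: str) -> tuple:
    """Parse up to three numeric version segments, padding with zeros."""
    parts = []
    for p in v.split(".")[:3]:
        digits = "".join(ch for ch in p if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    while len(parts) < 3:
        parts.append(0)
    return tuple(parts)

def flag_outdated() -> list:
    """Return installed packages pinned below their minimum-safe version."""
    flagged = []
    for dist in metadata.distributions():
        name = (dist.metadata["Name"] or "").lower()
        if name in MIN_SAFE and parse_version(dist.version) < MIN_SAFE[name]:
            flagged.append(f"{name}=={dist.version}")
    return flagged

if __name__ == "__main__":
    for pkg in flag_outdated():
        print("REVIEW:", pkg)
```

Run on a schedule (for example, in CI), a check like this turns "continuous monitoring" from a policy statement into an enforceable control.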

Gain Visibility with AI Software Bills of Materials (SBOMs)

A critical aspect of securing AI is understanding every component that makes up your systems. AI Software Bills of Materials (SBOMs) can offer this visibility. Mandate the creation and maintenance of SBOMs for all AI software assets. When properly leveraged, these documents catalog the components—including open-source libraries, third-party code, datasets, and pre-trained models—used in your AI. For example, maintaining an SBOM for machine learning frameworks like TensorFlow, including dependencies such as CUDA/cuDNN, can help ensure traceability. Integrate SBOM management into your development lifecycle and leverage automated tools to keep SBOMs up to date. An SBOM provides a clear inventory of your AI software components, which is essential for identifying vulnerabilities and managing risk. With a comprehensive inventory, you can quickly assess the impact of newly discovered vulnerabilities or compliance issues, enabling a proactive security stance.

Below is an example of how AI supply chain components can be mapped to an SBOM to help provide a comprehensive view of dependencies and support its management.

 

| Component | Category | Third-Party Dependencies | SBOM Considerations |
|---|---|---|---|
| Jupyter Notebooks | Dev Environment | Jupyter, ipywidgets, nbconvert | List all notebook extensions, plugins, and kernels. |
| Pandas | Data Handling | NumPy, Dask, Modin | Check for updates to dependencies like NumPy. |
| Hugging Face Transformers | NLP & Pre-trained Models | Tokenizers, model weights | Ensure licensing for pre-trained models (e.g., GPL). |
| PyTorch | Machine Learning Framework | CUDA, cuDNN, PyTorch nightly builds | CUDA/cuDNN licensing and GPU compatibility checks. |
| MLflow | Experiment Tracking | SQLAlchemy, boto3 for AWS S3 | Track versions of experiment tracking libraries. |
| ONNX | Model Export/Inference | Protocol Buffers, NumPy | Ensure exported models are compatible with the inference environment. |
| Triton Inference Server | Model Serving | HTTP/gRPC, NVIDIA TensorRT | Include GPU drivers and library versions. |
| Docker | Containerization | Docker Hub, cloud container registries | Container base image vulnerability scans. |
| FastAPI | API Creation | Starlette, Pydantic, Uvicorn | Ensure compliance with API data security requirements. |

Table: Sample of AI Dependencies and SBOM Considerations

Each component in the AI software pipeline would require detailed attention to the software dependencies, licensing, security, and upstream/downstream supply chain components, especially if deploying models or applications in production environments.

It's okay if some of these terms are unfamiliar. Navigating AI security and compliance is complex, and it's important to connect with the right advisors to guide you through these challenges. Remember, it takes a village to tackle these problems, and collaboration is key.

Organizations using multiple pre-trained models and third-party components should create and maintain SBOMs to keep track of all software components and versions in use. This can allow for quick identification and updating of affected components when vulnerabilities are discovered, helping to minimize security risks and potential impacts.
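The "quick identification" step can be as simple as cross-referencing SBOM components against an advisory feed. The advisory data below is invented for illustration; a real implementation would query a source such as the OSV database or vendor security bulletins.

```python
# Hypothetical advisory feed: package name -> set of affected versions.
ADVISORIES = {
    "pillow": {"9.0.0", "9.0.1"},
    "transformers": {"4.30.0"},
}

def affected_components(sbom: dict) -> list:
    """Return SBOM components whose exact version appears in an advisory."""
    hits = []
    for comp in sbom.get("components", []):
        name, version = comp["name"].lower(), comp["version"]
        if version in ADVISORIES.get(name, set()):
            hits.append(f"{name}=={version}")
    return hits

# Minimal SBOM fragment for demonstration.
sbom = {"components": [
    {"name": "Pillow", "version": "9.0.1"},
    {"name": "numpy", "version": "1.26.4"},
]}
print(affected_components(sbom))  # → ['pillow==9.0.1']
```

Because the lookup runs against the SBOM rather than the live environment, the same check can be applied retroactively to every deployed release when a new advisory is published.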

Moving Forward

Managing third-party risks in AI goes beyond adopting the latest tools—it requires a strategic, tailored approach that aligns with your organization's unique AI journey. By taking steps such as managing vendor relationships, securing your supply chain, implementing continuous risk monitoring, and maintaining detailed SBOMs, you can help build a risk-informed foundation for resilient AI adoption.

Start today: evaluate your AI vendor relationships, work to secure your supply chain, and build your SBOMs to help future-proof your AI efforts. Neglecting these actions could expose your organization to substantial risk, with impacts to both business continuity and long-term growth. The earlier you act, the stronger your position in addressing evolving threats. A proactive stance today can help safeguard your AI initiatives for the future.

 

Aon’s Thought Leaders
  • Kris Kimmerle
    Head of AI Security and Strategy
  • Scott Swanson
    Practice Leader, Security Advisory

About Cyber Solutions:

Aon’s Cyber Solutions offers holistic cyber risk management, unsurpassed investigative skills, and proprietary technologies to help clients uncover and quantify cyber risks, protect critical assets, and recover from cyber incidents.

General Disclaimer

This document is not intended to address any specific situation or to provide legal, regulatory, financial, or other advice. While care has been taken in the production of this document, Aon does not warrant, represent or guarantee the accuracy, adequacy, completeness or fitness for any purpose of the document or any part of it and can accept no liability for any loss incurred in any way by any person who may rely on it. Any recipient shall be responsible for the use to which it puts this document. This document has been compiled using information available to us up to its date of publication and is subject to any qualifications made in the document.

Terms of Use

The contents herein may not be reproduced, reused, reprinted or redistributed without the expressed written consent of Aon, unless otherwise authorized by Aon. To use information contained herein, please write to our team.
