3 Ways to Take a Risk-Based Approach to AI
Though the underlying technology of artificial intelligence (AI) has been around for years, the pace of growth and investment in AI is accelerating, leading to an explosion of practical use cases being incorporated into products and workflows in a very short period of time. Virtually every business can identify huge opportunities for growth through AI.
Wait a second…fast-moving tech, popping up in every product and business function, implemented at breakneck pace…even with so much potential, it raises the question: What about the risk?
AI is a huge part of our future here at Whistic, but it would be naive and misleading to suggest there are absolutely no risks. There are also some unique difficulties for businesses that need to assess that risk in their own supply chain. It can be hugely challenging to understand:
- Which of your third parties are using AI in the first place, or what they’re using it for
- Whether or not a vendor used AI to build their software
- How vendors approached security and controls in the development phase—especially given the speed at which things are moving right now
- Whether you have the right talent and processes in place to maintain strong AI controls and effectively achieve scale
So, what’s the right approach to addressing these AI challenges?
The Case for a Risk-based Approach to AI Assessments
Great third-party risk management helps your business weigh the benefits of software against the possible red flags. In theory, that’s the case whether you’re assessing a cloud services provider, a new content management system, or artificial intelligence.
But artificial intelligence models are highly complex systems designed to constantly learn and evolve, and at this stage, we don’t totally know what we don’t know. The way businesses react to these unknowables can be organized into three categories:
- Totally Risk Averse—Usually composed of the very largest, Fortune-level enterprise companies, this group can afford to take a “wait and see” approach to AI risk from a competitive standpoint as the technology matures.
- Highly Risk Tolerant—True cutting-edge startups can place riskier bets on AI because they believe it will help them win fast and win big in their industry. They’ll assume greater risk to achieve the greatest competitive advantage.
- Plentiful and Pragmatic—This is the category into which the vast majority of businesses fall. They can’t NOT invest in AI; all their competitors are talking about it, and if they fail to act, they’ll fall behind. A strong risk-based strategy becomes their competitive advantage.
All three categories frame the value of risk in commercial or business terms, which highlights a transformation in the role of cybersecurity and CISOs in a risk-based world. Once seen by the business as “cost centers,” security leaders have changed that perception (which was never really accurate in the first place) and are now viewed as essential business strategists.
So, the first and most important aspect of a risk-based approach to AI is to ensure InfoSec has a clear line of communication with their peers across the organization. Security professionals also need to educate themselves on business operations and market factors—in addition to the threat landscape and technology changes.
Here are three more steps your organization can take to view AI in your supply chain through a risk-management lens.
Create a strong risk-ranking methodology
Risk ranking is the process of defining tiers of risk based on a set of criteria that reflect the needs and circumstances of your business, then categorizing each third party into the appropriate tier.
This process is an essential part of any third-party risk management strategy, so it’s a good place to begin—and it’s especially important for AI, because it gives you a general, working sense of your overall risk tolerance. A great risk-ranking methodology will include:
- Vendor profiles and inventory—This is a comprehensive list of your third parties, along with the data necessary to determine the level of risk they pose. A vendor profile should include things like the systems they have access to, the volume of data they can access, the data types and classifications they can access, and how important they are to the overall health of your business.
- Alignment with the vendor intake process—Take the time to understand how third parties are onboarded to ensure that you are collecting the right data for your vendor profiles, which will in turn drive your risk-ranking system. The right TPRM software can also help you create controls for this process and help eliminate shadow IT.
- A scoring system that makes sense—To rank your vendors into tiers of risk, you’ll need a formula that generates consistent, understandable scores. Organize these scores into distinct categories, weighted for importance based on the specific drivers of risk for your business. A simple, straightforward scoring system can be very effective (see the sketch after this list); if you’re just getting started with your program, designate each third party as High, Medium, or Low risk based on the data you collect in your vendor profiles.
- A single system of record—Collecting all vendor data, contact and contract info, and profiles in a single system ensures that you have all the information you need to make a consistent ranking, gives all stakeholders greater visibility into the process, and keeps things from slipping through the cracks of multiple silos and systems.
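To make the scoring idea concrete, here’s a minimal sketch of what a weighted risk-ranking formula could look like. The criteria, weights, and tier thresholds below are hypothetical placeholders rather than recommendations; substitute the specific drivers of risk that matter to your business.

```python
# A minimal sketch of a weighted risk-ranking formula. Criteria, weights,
# and tier thresholds are hypothetical placeholders, not a prescription.
from dataclasses import dataclass

@dataclass
class VendorProfile:
    name: str
    systems_accessed: int      # number of internal systems the vendor can reach
    data_sensitivity: int      # 1 = public data ... 5 = regulated or sensitive data
    business_criticality: int  # 1 = nice-to-have ... 5 = mission-critical
    uses_ai: bool              # does the vendor embed AI in their product?

# Hypothetical weights reflecting which factors drive risk for your business.
WEIGHTS = {
    "systems_accessed": 0.2,
    "data_sensitivity": 0.4,
    "business_criticality": 0.3,
    "uses_ai": 0.1,
}

def risk_score(vendor: VendorProfile) -> float:
    """Combine normalized criteria into a single weighted score from 0 to 100."""
    components = {
        "systems_accessed": min(vendor.systems_accessed, 10) / 10,
        "data_sensitivity": vendor.data_sensitivity / 5,
        "business_criticality": vendor.business_criticality / 5,
        "uses_ai": 1.0 if vendor.uses_ai else 0.0,
    }
    return 100 * sum(WEIGHTS[key] * value for key, value in components.items())

def risk_tier(score: float) -> str:
    """Map a score onto the simple High/Medium/Low tiers described above."""
    if score >= 70:
        return "High"
    if score >= 40:
        return "Medium"
    return "Low"

vendor = VendorProfile("ExampleCRM", systems_accessed=4, data_sensitivity=5,
                       business_criticality=4, uses_ai=True)
score = risk_score(vendor)
print(f"{vendor.name}: score={score:.0f}, tier={risk_tier(score)}")
```

In this example, a vendor that touches sensitive data and embeds AI lands in the High tier, which would trigger your most rigorous assessment path.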
Leverage industry standards to assess AI risk
We are big advocates for using standards and frameworks across your entire TPRM process—it’s the reason our platform gives customers access to more than 40 questionnaires and tools based on industry standards.
Standard questionnaires and frameworks for assessment are vetted, and if you have the right TPRM platform, you can align the right standard to the right risk-ranking score to ensure you properly assess the third party. This can save you and your vendor hours and hours of time.
But when it comes to AI, these standards are especially important because of the uncertainty surrounding the technology. Whistic recommends using the three most robust standards for assessing AI use cases:
- NIST AI Risk Management Framework
- CapAI Assessment (based on the EU AI Act)
- ISO/IEC 23053 Standard
These will give software buyers a strong foundation for understanding the largest potential risks of AI in the supply chain. But they are also a great tool for vendors that utilize AI in their products, as they provide frameworks to self-assess against. Information based on these self-assessments can then be included in the vendor’s trust center or Whistic Profile.
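One way to operationalize this alignment is a simple mapping from risk tier to assessment scope. In the sketch below, the framework names come from the list above, but which tier triggers which framework is an illustrative assumption; your own mapping should follow your risk-ranking methodology and your platform’s capabilities.

```python
# Hypothetical mapping from risk tier to AI assessment scope. The framework
# names come from the standards listed above; which tier triggers which
# assessment is an illustrative assumption, not a prescription.
AI_ASSESSMENTS = {
    "High": [
        "NIST AI Risk Management Framework",
        "CapAI Assessment",
        "ISO/IEC 23053",
    ],
    "Medium": ["NIST AI Risk Management Framework"],
    "Low": [],  # a lightweight custom questionnaire may be enough here
}

def assessments_for(tier: str, vendor_uses_ai: bool) -> list[str]:
    """Return the AI-specific assessments to request for a given vendor."""
    if not vendor_uses_ai:
        return []
    return AI_ASSESSMENTS.get(tier, [])

print(assessments_for("High", vendor_uses_ai=True))
```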
Ask additional, targeted questions
Essentially, this means having candid conversations with your partners as early in the evaluation and procurement process as possible, with the aim of uncovering how AI is used in the software you already use or are considering.
Specific questions help your third-party partners be as transparent as possible. As a software vendor that works extensively with AI, Whistic recommends asking the same questions we’d want to be asked to help understand the risks:
- What kind of penetration testing have you performed on the AI in your environment and solutions?
- What kind of monitoring are you doing of your AI-based systems?
- What kind of documentation can you provide to demonstrate that you’ve done all these things correctly and with due diligence?
Again, this is really about communicating clearly and transparently with your third parties, and the answers they provide will also give you more confidence in making the best risk-based decisions.
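If you want to track these conversations alongside each vendor profile, a lightweight, structured record can help ensure nothing gets missed. The sketch below is one hypothetical way to capture the questions, answers, and supporting evidence in your system of record; the structure and field names are assumptions for illustration, not a Whistic schema.

```python
# A minimal, hypothetical structure for recording AI due-diligence questions
# and answers alongside a vendor profile. The questions mirror the list above.
from dataclasses import dataclass, field

@dataclass
class AIDueDiligenceItem:
    question: str
    answer: str = ""
    evidence: list[str] = field(default_factory=list)  # e.g., links to pen-test reports

@dataclass
class AIDueDiligenceRecord:
    vendor_name: str
    items: list[AIDueDiligenceItem]

    def unanswered(self) -> list[str]:
        """Questions the vendor has not yet addressed -- a ready-made follow-up list."""
        return [item.question for item in self.items if not item.answer]

record = AIDueDiligenceRecord(
    vendor_name="ExampleCRM",
    items=[
        AIDueDiligenceItem("What penetration testing have you performed on the AI in your environment and solutions?"),
        AIDueDiligenceItem("What monitoring are you doing of your AI-based systems?"),
        AIDueDiligenceItem("What documentation can you provide to demonstrate due diligence?"),
    ],
)
print(record.unanswered())
```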
Whistic AI Powers Great Risk Management While Helping Assess the Risk of AI
Whistic’s AI-powered, dual-sided TPRM platform is designed so that software buyers and sellers can connect instantaneously, safely, and with greater insight—shortening procurement and sales cycles and getting to value faster.
But we also understand how important it is to assess the possible risks of AI in your supply chain. That’s why we’re the first platform to provide industry-standard frameworks for assessing AI. We also offer full transparency into our security posture—available publicly through our own Whistic Security Profile.
The vast potential in AI means you might not be able to wait to take action. That doesn’t mean you should sacrifice security. A risk-based approach to AI helps you take advantage of the commercial benefits, creates a realistic picture of your risks, and gives you the tools to monitor and respond to those risks.
If you’re ready to take your next steps on the AI journey, set up a time to discuss how Whistic can help.