
3 Key Takeaways (and 5 Quick Tips) to Take the Terror Out of AI Governance

Happy Halloween! Spooky season was really made for security and risk professionals—after all, they’re constantly confronted with a coven of nasty incidents, threats, attackers, and ransomware that go bump in the night.

With the vigilance of Van Helsing, our intrepid cyber heroes drive a stake through the heart of many of these cyber threats, but one ominous specter still sends an icy chill down the spine of every CISO: AI risk, especially third-party AI risk. 

But never fear! While there are legitimate risks to be wary of before embracing AI in your business, there are things you and your third-party risk management (TPRM) team can do to make AI a whole lot less scary. In a recent webinar—Banish the AI Bogeyman: Managing & Assessing AI Risk—Whistic Vice President of Security, Risk, and Compliance John Finizio and Senior Security Analyst Chris Honda led an interactive discussion on meeting the challenge of AI risk in your third-party ecosystem. Here are three main takeaways (and five useful tidbits) from their conversation that will make AI risk and governance way less scary.

1. Focus on the fundamentals when it comes to establishing AI governance

Building an effective AI governance framework is essential but challenging, especially as AI applications proliferate within organizations. A “back to basics” approach can give your organization a tool set to build on, though. This starts with establishing clear policies to ensure consistent, responsible AI usage.

“You have to have clear documentation and policies,” says Honda. “Having those in place puts you in a better position and allows you to say, ‘This is what we do, this is what we don’t do’ when it comes to AI.” Establish these “non-negotiables” early, set clear boundaries and expectations for AI use cases, and align governance to business needs so you can better understand your risk tolerance. 

Both Finizio and Honda stressed the importance of executive buy-in to provide “air cover” for your compliance team and empower them to lead AI initiatives. Which leads us to takeaway #2…

2. Cross-functional collaboration is key to mitigating AI risks

John Finizio says, “When you’re tasked with standing up an AI governance program, the first question you need to ask is: how will your executive leadership support it?” Top-down buy-in is critical because it establishes the strategic value of your program and drives other stakeholders across the business to make AI security a priority, too—and there are going to be a lot of stakeholders if you want to get it right. AI governance must engage InfoSec, Compliance, Procurement, and Engineering, along with any other business unit that may wish to work with an AI-based solution or vendor.

Finizio and Honda also discussed the importance of communication. Duh, right? Well, everyone thinks they’re a good communicator until they get blindsided by a development they thought they were in the loop on. But when it comes to AI, it’s worth revisiting this core tenet of business acumen for several practical reasons. 

First, InfoSec and Risk—rightly or wrongly—have a reputation as the team of “No!” That core distrust can lead to things like Shadow IT or other risky behaviors. Open and constant communication to understand the true needs of your business partners builds trust and avoids what Honda calls “one-sided, unilateral decisions.” 

Second, you want to ensure that your AI governance program is tailored to actual use cases for your business. This allows you to be proactive about risk assessments, understand the necessary scope of monitoring, embed AI TPRM into existing workflows, and properly allocate resources to ongoing risk management. Without an open dialogue about the real ways AI is used in your business, you can’t establish a risk practice that makes sense. 

Honda notes that, for his team, “It’s about feedback, transparency, and accuracy. So, if there’s ever an anomaly or blip on the radar, we know exactly where the problem is and can dive right in.”

3. Leverage external frameworks for AI risk management

It can take a lot of up-front work to establish an AI governance plan. Luckily, you don’t have to start from scratch. Finizio and Honda recommend building your program with the help of established frameworks such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) or ISO standards. These guidelines can give your program structure while still leaving room for customization to your specific needs. 

5 more quick-hit takeaways on AI governance and security

During the session, Finizio and Honda also shared some pearls of AI wisdom that are perfect to keep in the back of your mind as you build your risk-based approach:

1. The AI landscape is evolving. Finizio notes that “AI isn’t static, and neither are the risks. We’re learning every day, and what worked in governance or risk management even a year ago might not be sufficient tomorrow.” Adaptability and flexibility are important qualities to bake into your AI risk program.

2. Education and training are critical. “Not everyone on your team needs to be an AI expert,” says Chris Honda, “but awareness of the basic risks and benefits helps everyone work smarter.” Take the time to cultivate a baseline understanding across the business so you can raise the overall risk acumen of the organization. Provide essential training regularly so you can keep up with any changes to your AI risk posture.

3. Create the AI “Trust Factor.” According to Honda, “Trust in AI is huge. How do we measure trust? It starts with transparency. If people don’t understand it, they won’t trust it.” Adopting safe AI practices requires over-communication and visibility. And trust is a two-way street; you can’t expect stakeholders to be open and transparent with you if you don’t lead by example. 

4. Data quality matters. This is the ol’ “garbage in, garbage out” chestnut, but good data management and hygiene are essential with AI. Bad data leads to bad outputs that negate the value of AI solutions in the first place, and it can also introduce additional risks, like unintentional bias, that can harm your business. Maintain strong oversight of data repositories and centralize wherever possible to increase control (see the sketch after this list for a minimal data-quality check).

5. AI can be a competitive differentiator. Even on Halloween, fear is not a good reason to avoid AI if there is a strong business use case that gives you a competitive advantage. Instead of shying away from the risk, have a clearly articulated (and documented) AI value proposition to guide usage. Security and TPRM frameworks don’t exist strictly to prevent all risks—they exist to help you understand the risks and take calculated ones. Build a program to encourage innovation, not stifle it. 
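To make the “garbage in, garbage out” point concrete, here’s a minimal sketch of a data-quality gate that screens records before they reach an AI pipeline. The schema, field names, and sample data are all hypothetical, for illustration only; real checks should reflect your own data sources and risk tolerance.

```python
"""Minimal sketch of a data-quality gate for records headed into an AI
pipeline. Schema, field names, and sample data are hypothetical."""

from dataclasses import dataclass

# Hypothetical required schema for each record.
REQUIRED_FIELDS = {"id", "source", "content"}


@dataclass
class QualityReport:
    """Summary of what the gate kept and why records were dropped."""
    total: int
    passed: int
    missing_fields: int
    duplicates: int


def gate_records(records: list[dict]) -> tuple[list[dict], QualityReport]:
    """Drop records with missing fields or duplicate IDs before ingestion."""
    seen_ids: set = set()
    clean: list[dict] = []
    missing = duplicates = 0

    for record in records:
        # "Garbage in, garbage out": reject records missing required fields.
        if not REQUIRED_FIELDS.issubset(record):
            missing += 1
            continue
        # Duplicates silently over-weight some inputs; keep the first copy.
        if record["id"] in seen_ids:
            duplicates += 1
            continue
        seen_ids.add(record["id"])
        clean.append(record)

    return clean, QualityReport(
        total=len(records),
        passed=len(clean),
        missing_fields=missing,
        duplicates=duplicates,
    )


if __name__ == "__main__":
    sample = [
        {"id": 1, "source": "crm", "content": "Customer note"},
        {"id": 1, "source": "crm", "content": "Duplicate ID"},
        {"id": 2, "source": "crm"},  # missing "content"
    ]
    kept, report = gate_records(sample)
    print(report)  # QualityReport(total=3, passed=1, missing_fields=1, duplicates=1)
```

Even a simple gate like this produces a measurable report of what was dropped and why, which feeds directly into the transparency and trust points above.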

We hope these tips take some of the scariness out of AI adoption. A prepared, reasoned, well-documented approach can help you govern AI: set clear parameters that maximize the business potential of the tool, surface the very real risks, and cordon them off safely.

If you’d like to check out everything that John Finizio and Chris Honda had to say, you can view their session on-demand here.

Whistic is AI-first TPRM that makes your pathway to AI value more secure

Whistic’s TPRM platform utilizes AI to make it easier and faster to assess the risks of things like…well, AI—or any other risk that may originate with your vendor ecosystem. If you’d like to learn more about how AI-first TPRM can work for your business, schedule some time with our team of experts and we’ll show you. 
