April 9, 2025 | 5 Min Read

3 Myths of AI Governance

It’s been just over two years since ChatGPT launched, gaining one million users in its first five days and triggering an unprecedented artificial intelligence (AI) boom.

Since then, we’ve seen the release of thousands of AI applications and hundreds of ChatGPT wrappers. Every major enterprise is using multiple large language models. The vast majority of employees are bringing their own AI tools to work. More than one in every four dollars invested in U.S.-based startups in 2023 went to AI companies.

While exciting, this rapid scale presents numerous risks – from intellectual property leaks and security vulnerabilities to regulatory compliance challenges and unanticipated costs. It can be tempting for organizations to lock everything down and simply block AI use. But that approach will backfire – stifling innovation, driving top employees to more progressive competitors, and spurring more unsanctioned Shadow AI use.

If your organization is charting a path forward in this new world, here are three myths worth dispelling.

Myth #1: Traditional controls and processes are sufficient

In the pre-ChatGPT era, IT teams had to vet new technology tools before use, and employees were okay with waiting. Now, employees are exposed daily to new AI apps that can transform their work. They don’t want to wait for IT approval; they want to start now.  

Traditional controls and processes simply cannot keep up with the rapid pace of AI innovation. The proliferation of generative AI has also introduced a paradigm shift: while IT teams have traditionally been most concerned with what comes into an organization (via software downloads), they are now more concerned with what goes out (via text, images, video, voice, and confidential data).
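To make that shift concrete, here is a minimal sketch of an outbound screening check – the kind of egress-focused control this new paradigm implies. The patterns and the screen_prompt helper are illustrative assumptions, not any specific product’s approach:

```python
import re

# Illustrative patterns for data that should not leave the organization.
# A real deployment would rely on a vetted DLP engine, not a short regex list.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_hostname": re.compile(r"\b[\w-]+\.corp\.example\.com\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of sensitive patterns found in outbound text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

prompt = "Summarize the incident on db7.corp.example.com using key sk-abc123def456ghi789jkl"
findings = screen_prompt(prompt)
if findings:
    # Block, redact, or route for review before the text reaches an external model.
    print(f"Blocked outbound prompt; matched: {', '.join(findings)}")
```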

The tempting solution for some organizations struggling with vetting at the speed of demand is to tighten the reins and require employees to use one “approved” tool. For example, we recently worked with one enterprise that streamlined from six coding copilots to one.

But consolidation alone is not the answer. AI is not a Swiss Army knife. Empowering your workforce means providing access to the right tool for the job. Forcing everyone down the same path prevents your team from operating at speed and holds your business back.

In addition, you may allow the use of a specific application because you deem it safe, but how people interact with it can change your risk exposure. It’s critical to determine how to identify the right tools for various functions and guide people to use them more securely.

Myth #2: Existing vendors can be trusted

Organizations have trusted relationships with numerous vendors, but many are adding AI features to their tools and turning them on by default. For example, Slack recently came under fire for utilizing user data to train its new AI services and requiring users to email to opt out.

Slack certainly is not the only company doing this. Training and data retention settings can increase data leakage exposure. The AI capabilities vendors are building within their solutions are typically very dynamic, evolving rapidly with software updates and making it nearly impossible for legal and privacy teams to stay ahead.

It’s important to understand what AI models your vendors are using and the implications for your business. We recently worked with a company that discovered, long after the fact, that a key vendor had changed its sub-processor without notice.

We are seeing some enterprises renegotiating vendor contracts to clarify what vendors can and can’t do with company data. It is critical to stay vigilant and to adapt and apply controls based on up-to-date awareness.
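One lightweight way to maintain that awareness is to record each vendor’s AI posture – models used, training opt-outs, sub-processors – in a file under version control and flag any drift at review time. The schema and field names below are hypothetical; the point is the diff, not the format:

```python
import json

# A snapshot of a vendor's AI posture captured at contract review time.
# Field names are illustrative; use whatever your vendor questionnaire collects.
baseline = {
    "vendor": "ExampleCRM",
    "models": ["gpt-4o"],
    "trains_on_customer_data": False,
    "sub_processors": ["CloudHost Inc."],
}

def detect_drift(baseline: dict, current: dict) -> list[str]:
    """Return the fields whose values changed since the baseline snapshot."""
    return [key for key in baseline if baseline[key] != current.get(key)]

# A later snapshot, e.g., re-collected from the vendor's trust page.
current = dict(baseline, sub_processors=["CloudHost Inc.", "NewAIProvider LLC"])

for field in detect_drift(baseline, current):
    print(f"Review needed: '{field}' changed for {baseline['vendor']}")
    print(f"  was {json.dumps(baseline[field])}, now {json.dumps(current[field])}")
```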

Myth #3: AI regulation is tomorrow’s problem

The International Organization for Standardization (ISO) introduced the ISO/IEC 42001 standard in December 2023 to offer guidance for developing trustworthy AI management systems. Governments worldwide are drafting AI laws, and the EU AI Act has already entered into force, with its requirements phasing in over the coming years.

These regulations aren’t coming; they’re here. And companies are changing the way they do business today to address them. For example, Microsoft recently updated its Supplier Security and Privacy Assurance (SSPA) Program to ensure its AI system suppliers comply with ISO 42001 for service delivery.

While giants like Microsoft may be leading the way, it won’t be long until companies of all sizes – to manage their own risk – ask their vendors whether they comply with ISO 42001 and follow the National Institute of Standards and Technology’s (NIST’s) AI Risk Management Framework (AI RMF).

In addition, a wave of new state-specific regulations hit the books in the 2024 legislative session, with at least 45 states, Puerto Rico, the Virgin Islands, and Washington, D.C. introducing AI bills, and 31 adopting resolutions or enacting legislation.

The sheer volume of new regulations, and the rate at which they change, make it difficult for organizations to keep up. And it’s not just new regulations; the EU’s General Data Protection Regulation (GDPR) continues to pose significant challenges in the AI era.

The shifting legal environment only compounds the risks created by the other mistaken assumptions above. After all, if you don’t know what AI tools are in your environment or how they’re being used, how can you know whether you’re compliant?
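Gaining that visibility often starts with something simple, like tallying traffic to known AI services in existing proxy or DNS logs. A generic illustration follows – the domain list and log format are assumptions, and a real inventory would be far larger and continuously updated:

```python
from collections import Counter

# Illustrative sample of AI service domains; a real inventory would be
# far larger and continuously refreshed.
KNOWN_AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def tally_ai_usage(proxy_log_lines: list[str]) -> Counter:
    """Count requests to known AI services from simple 'user domain' log lines."""
    counts: Counter = Counter()
    for line in proxy_log_lines:
        _user, _, domain = line.partition(" ")
        if domain in KNOWN_AI_DOMAINS:
            counts[domain] += 1
    return counts

log = ["alice chat.openai.com", "bob claude.ai", "alice chat.openai.com"]
print(tally_ai_usage(log))  # Counter({'chat.openai.com': 2, 'claude.ai': 1})
```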

Begin by embracing the problem

The volume of new AI technologies and the rate of innovation are challenging how every organization approaches security, privacy, and compliance.

There’s no slowing down. AI is evolving so quickly that businesses must learn to think differently. The way we govern and secure environments must be just as dynamic as the technology.

Each day you wait to tackle the problem, it only grows larger. Keep in mind – ChatGPT didn’t exist until late 2022, and by September 2024 the AI hosting platform Hugging Face had surpassed one million AI model listings.

The path forward is challenging, but it is navigable. Your team should consider developing a well-informed vetting process that can serve as a gatekeeper, helping your organization balance risk and innovation. Developing and enforcing granular policies puts the scaffolding in place for your business to leverage the power of AI responsibly and adapt at the speed of demand.
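To show what “granular” can mean in practice, here is a sketch of a per-function policy table that pairs each approved tool with conditions of use, rather than a single org-wide allow-or-block switch. The tool names and rules are hypothetical placeholders:

```python
# A hypothetical policy table: per-function tool approvals with conditions,
# rather than one org-wide allow/deny switch.
POLICY = {
    "engineering": {
        "CodeAssistantA": {"allowed": True, "conditions": ["no customer code in prompts"]},
        "ChatToolB": {"allowed": True, "conditions": ["public data only"]},
    },
    "legal": {
        "ChatToolB": {"allowed": False, "conditions": []},
    },
}

def check_usage(function: str, tool: str) -> tuple[bool, list[str]]:
    """Look up whether a tool is approved for a business function, and under what conditions."""
    rule = POLICY.get(function, {}).get(tool)
    if rule is None:
        # Unknown tools route to the vetting process instead of being silently blocked.
        return False, ["not yet vetted; submit for review"]
    return rule["allowed"], rule["conditions"]

allowed, conditions = check_usage("engineering", "CodeAssistantA")
print(f"allowed={allowed}, conditions={conditions}")
```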

As they say, the best time to plant a tree was 20 years ago; the next best time is now. Start today.

What are your numbers?

Get an AI Usage and Risk Assessment to understand what is happening across your organization.
