New AI systems need to be vetted with a risk assessment, and policies must be put in place to ensure safe AI use. Typically, an AI system and the context in which it is used together determine whether its use is safe. The underlying models and services, settings and key attributes, use case, security profile, user role, and the data entered as prompts should all be considered together when determining the risk of AI use. When an AI system or one of its components changes, another rapid vetting cycle may be needed to maintain safe AI use without causing disruption.
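To make the idea of combining these attributes concrete, here is a minimal sketch of a risk-tiering function. The factor names, weights, and thresholds are illustrative assumptions, not a standard; a real assessment would use the organization's own risk taxonomy and evidence from vendor reviews.

```python
# Hypothetical illustration: factor names and weights below are assumptions.
# Each factor is scored 0 (benign) to 3 (highest concern).
RISK_WEIGHTS = {
    "model_provider": 2,    # public/unvetted vendor scores higher
    "data_sensitivity": 3,  # prompts containing confidential data
    "user_role": 1,         # privileged users raise the stakes
    "use_case": 2,          # customer-facing vs. internal drafting
    "security_profile": 2,  # SSO, retention controls, tenancy
}

def assess_ai_risk(factors: dict) -> str:
    """Combine per-factor scores (0-3 each) into a single risk tier."""
    score = sum(RISK_WEIGHTS[name] * level for name, level in factors.items())
    max_score = 3 * sum(RISK_WEIGHTS.values())
    ratio = score / max_score
    if ratio >= 0.6:
        return "high"
    if ratio >= 0.3:
        return "medium"
    return "low"

# Example: a public co-pilot handling sensitive data lands in the high tier.
tier = assess_ai_risk({
    "model_provider": 3,
    "data_sensitivity": 3,
    "user_role": 1,
    "use_case": 2,
    "security_profile": 2,
})
print(tier)  # prints "high"
```

Because every attribute feeds one score, a change to any single component (a new model version, a settings change, a new data source) naturally triggers a re-evaluation, which is the rapid re-vetting cycle described above.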
AI adoption is accelerating, and organizations face increasing pressure from boards and executives to leverage AI for efficiency and innovation. Employees are eager to use AI-powered co-pilots and productivity tools to work smarter, not harder. However, when IT and security teams take too long to vet and approve these tools, employees become frustrated and often bypass official processes. This leads to AI sprawl, unsanctioned AI usage, and shadow AI, where employees use personal accounts or unsecured freemium tools, exposing the organization to security risks. Delays also create inefficiencies, resulting in redundant tools within the same category and unnecessary costs. By rapidly vetting and onboarding AI technologies, enterprises can streamline adoption, minimize security risks, and ensure employees have access to approved, secure, and cost-effective solutions. This approach not only reduces bottlenecks but also fosters innovation without compromising security and compliance.
There isn't one type of generative AI: public generative AI services and co-pilots, internally developed AI systems, and SaaS applications with embedded AI are three distinct vectors of AI adoption in the enterprise. Each of these operates differently, with distinct settings, controls, and ways of handling data. This diversity makes it difficult to consistently track, assess, and govern AI use across an enterprise. Compounding this challenge is the heterogeneous way in which these systems are deployed and consumed, spanning different environments, integrations, and user interactions. Each type presents unique hurdles in discovery and management. To effectively discover AI, organizations need technology that can not only detect and classify all types of AI usage but also support various deployment models while allowing seamless integration. The complexity of these requirements makes AI discovery and governance a difficult yet essential task.
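The classification step can be sketched in a few lines. The record fields, domain list, and matching rules below are illustrative assumptions; real discovery tooling would draw on much richer signals (network telemetry, SSO logs, API inventories) to tag usage by vector.

```python
# Hypothetical sketch: field names and rules are assumptions, used only to
# illustrate bucketing observed AI usage into the three adoption vectors.
PUBLIC_AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def classify_ai_vector(record: dict) -> str:
    """Tag one usage record as one of the three adoption vectors."""
    if record.get("domain") in PUBLIC_AI_DOMAINS:
        return "public generative AI service"
    if record.get("internally_developed", False):
        return "internally developed AI system"
    if record.get("embedded_ai_feature", False):
        return "SaaS application with embedded AI"
    return "unclassified"

# Example records, one per vector plus one that needs manual review.
records = [
    {"domain": "chat.openai.com"},
    {"domain": "ml.corp.internal", "internally_developed": True},
    {"domain": "app.example-saas.com", "embedded_ai_feature": True},
    {"domain": "unknown-tool.io"},
]
for r in records:
    print(classify_ai_vector(r))
```

Each vector then feeds a different governance path: public services get policy and access controls, internal systems get development-time review, and embedded AI features get vendor and configuration review.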