Foundation Model
A foundation model is a large-scale artificial intelligence model trained on broad, diverse datasets that can be adapted to a wide range of tasks without being retrained from scratch. These models (such as GPT-4, Claude, Gemini, and Llama) learn general patterns from massive amounts of text, images, or other data during training, then apply that general knowledge to specific applications through fine-tuning, prompting, or other adaptation techniques.

Foundation models matter because they have changed how organizations build AI applications. Instead of training a custom model for every use case, which requires significant data, compute, and expertise, organizations can start with a foundation model and adapt it to their specific needs. This dramatically lowers the barrier to AI adoption, which is why AI usage has exploded across industries in the past few years.

Foundation models are called foundational because they serve as the base layer for many different applications: the same model might power a customer service chatbot, a document analysis tool, a code assistant, and a research agent. This versatility comes from scale. By training on trillions of tokens, these models develop broad capabilities that transfer across domains. That generality also means they carry risks at scale: biases embedded in training data, the potential for hallucination, and security vulnerabilities that affect every downstream application built on top of them.

For enterprises, foundation models are both an enabler and a risk management challenge. Organizations need to evaluate which models they allow, understand what data was used to train them, assess their performance and safety characteristics, and maintain oversight as new model versions are released, all while enabling the business to move fast.
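The idea that one model serves as the base layer for many applications can be sketched in a few lines. This is a minimal illustration, not a real SDK: `call_model`, `APPLICATIONS`, and `run` are hypothetical names, and `call_model` is a stand-in for whatever hosted model API an organization actually uses. The point is that only the adaptation layer (here, the system prompt) changes between applications, while the underlying foundation model stays the same.

```python
def call_model(system_prompt: str, user_input: str) -> str:
    # Hypothetical stand-in for a foundation-model API call.
    # A real system would send both strings to a hosted model
    # and return its completion; here we just echo the inputs.
    return f"[{system_prompt}] {user_input}"


# One foundation model, several downstream applications. Each entry
# is an adaptation layer: a system prompt specializing the same model.
APPLICATIONS = {
    "support_bot": "You are a polite customer service agent.",
    "code_assistant": "You are an expert programmer. Answer with code.",
    "doc_analyzer": "Summarize the document and extract key entities.",
}


def run(application: str, user_input: str) -> str:
    # Look up the application's adaptation layer and route the
    # request through the shared foundation model.
    system_prompt = APPLICATIONS[application]
    return call_model(system_prompt, user_input)
```

In a production setting the dictionary of prompts would typically be replaced by richer adaptations (fine-tuned model variants, retrieval pipelines, tool configurations), but the structure is the same: many applications, one shared base model.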