Early Excerpts from My Upcoming Book on AI — Part of the AI Essentials for Leaders Series
I’m excited to announce that my upcoming book, Artificial Intelligence: Shaping the Future of Innovation, will be released by De Gruyter Brill at the end of October this year. As a special preview, I’ll be sharing selected excerpts here on Substack.
The book is part of a new series I’m co-editing with Teresa Martin-Retortillo called AI Essentials for Leaders. The series offers concise, non-technical primers for navigating the complex and fast-moving world of AI. Written by leading experts, each volume explores foundational AI topics through clear, focused lenses: technology fundamentals, business strategy, entrepreneurship and innovation, leadership and governance, and ethics and privacy.
Designed for business leaders, practitioners, policymakers, and curious learners alike, the AI Essentials for Leaders series aims to empower readers to understand core concepts and anticipate AI’s implications across industries and society.
I’ll begin with excerpts from the section of the book focused on Generative AI, one of the most transformative and widely discussed branches of AI today.
Rather than treat systems like ChatGPT as isolated tools, this section introduces a practical framework for thinking about Generative AI as a system composed of three interconnected components:
Foundation Models
Expert Models
AI Agents
In the coming weeks, I’ll also be posting the entire first chapter of the Generative AI section:
“Foundation Models: Large Language Model Basics.”
It explains how large language models are trained, how they generate responses, and why they represent a foundational shift in how AI systems are built.
Today, I’m sharing the section introduction, which lays out that framework.
Section Introduction to Generative AI
Generative Artificial Intelligence is not a single technology—it is a system composed of multiple components working together. In this part of the book, we will approach Generative AI as a system built from three foundational elements: Foundation Models, Expert Models, and AI Agents. Just as a car is more than its engine, Generative AI is more than any single model; its true power lies in how these distinct parts interact to create systems that are not just powerful, but purposeful.
To understand these components, we borrow a metaphor from philosopher Isaiah Berlin: the fox and the hedgehog. Foxes know many things; hedgehogs know one big thing. Foundation Models are foxes—broad, versatile generalists trained across massive datasets spanning many domains. They can generate fluent responses across a wide variety of topics, but often lack depth, precision, or verifiability. In contrast, Expert Models are hedgehogs—narrow specialists grounded in trusted, domain-specific knowledge. They supplement the generative breadth of Foundation Models with factual accuracy and retrieval of verified information.
Throughout this part, I use the term Foundation Models rather than Large Language Models (LLMs). The two are closely related, but the term Foundation Models better captures several important ideas. First, Foundation Models are not task-specific; they can support a wide variety of tasks, including answering questions, translating languages, generating images, and writing code. Second, Foundation Models are increasingly multi-modal, able to process and generate not just text but also images, audio, and other forms of data. Third, they are foundations on which specialized AI applications are built—serving as flexible engines that power complex AI systems.
The chapters ahead explore these components in turn:
Foundation Models: Large Language Model Basics walks through the construction of a large language model like ChatGPT, tracing the process from assembling training data to fine-tuning behavior, and explaining the fundamental mechanism that drives these models: next-word prediction.
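For readers who want a concrete feel for next-word prediction before that chapter goes up, here is a toy sketch in Python. The tiny bigram table stands in for a trained model’s learned weights and is entirely made up for illustration; a real LLM scores every token in a vocabulary of tens of thousands, conditioned on the full preceding context, but the generation loop is the same idea.

```python
import random

# Toy "model": a made-up bigram table standing in for a trained network's weights.
# A real LLM scores every token in a large vocabulary, conditioned on the whole
# preceding context, using billions of parameters; the loop below is the same idea.
BIGRAM_COUNTS = {
    "the": {"cat": 4, "dog": 3, "model": 2},
    "cat": {"sat": 5, "ran": 2},
    "dog": {"ran": 4, "sat": 1},
    "model": {"predicts": 6},
    "sat": {"down": 3},
    "ran": {"away": 3},
}

def next_word_distribution(context):
    """Turn raw counts for the last word into a probability distribution."""
    counts = BIGRAM_COUNTS.get(context[-1], {})
    total = sum(counts.values())
    return {word: c / total for word, c in counts.items()} if total else {}

def generate(prompt, max_words=5):
    """Generate text by repeatedly sampling the next word -- the core LLM loop."""
    words = prompt.split()
    for _ in range(max_words):
        dist = next_word_distribution(words)
        if not dist:
            break  # this toy vocabulary has no known continuation
        choices, probs = zip(*dist.items())
        words.append(random.choices(choices, weights=probs)[0])
    return " ".join(words)

print(generate("the cat"))  # e.g. "the cat sat down"
```

Run it a few times and the output changes, because each step samples from a probability distribution rather than picking one fixed continuation.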
Foundation Models: How Large Language Models Become Smart explores the architecture behind these systems—the transformer—and explains how innovations like attention mechanisms and context windows allow models to dynamically understand language, resolve ambiguities, and maintain coherence across long passages.
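Again purely as an illustrative aside, not an excerpt from the book: below is a minimal numerical sketch of scaled dot-product attention, the core operation inside the transformer. The matrices are tiny and random, chosen only to make the example runnable; the point is that every token’s new representation is a weighted blend of all tokens, with the weights computed on the fly.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each position builds its output as a weighted mix of all positions' values,
    weighted by how well its query matches the other positions' keys."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of every query to every key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax -> attention weights
    return weights @ V, weights  # blend the values using those weights

# Tiny example: 4 tokens with 8-dimensional representations (random, for illustration only).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
output, weights = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V
print(weights.round(2))  # each row sums to 1: how much each token "attends" to the others
```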
Expert Models: Grounding AI in Trusted Knowledge examines how retrieval-augmented generation (RAG) systems connect Foundation Models to curated, up-to-date external knowledge sources. By doing so, they overcome limitations like hallucination, outdated information, and lack of verifiability.
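As a rough illustration of the pattern (again, not the book’s text), the sketch below shows the skeleton of a RAG step: retrieve the most relevant passages from a trusted store, then assemble a prompt that asks the model to answer from those passages. The document list, the naive word-overlap scoring, and the helper names are all invented for this example; real systems use embedding-based vector search.

```python
# Minimal retrieval-augmented generation (RAG) sketch: retrieve relevant passages,
# then ground the model's answer in them. The scoring here is naive word overlap;
# production systems use vector embeddings and a vector database instead.
DOCUMENTS = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available Monday through Friday, 9am to 5pm.",
    "Premium subscribers receive priority shipping on all orders.",
]

def retrieve(question, k=2):
    """Rank stored passages by how many question words they share."""
    q_words = set(question.lower().split())
    scored = sorted(DOCUMENTS, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

def build_grounded_prompt(question):
    """Stuff the retrieved passages into the prompt so the Foundation Model
    answers from verified text instead of from memory alone."""
    context = "\n".join(f"- {p}" for p in retrieve(question))
    return f"Answer using only these sources:\n{context}\n\nQuestion: {question}"

print(build_grounded_prompt("What is the refund policy?"))
# The resulting prompt would then be sent to a Foundation Model for the final answer.
```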
AI Agents: From Knowledge to Action introduces AI agents—systems that can not only generate language but also plan, decide, use tools, and act. Agents move beyond passive generation to active orchestration, expanding the role of AI from answering questions to accomplishing goals.
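One last toy sketch, invented for illustration rather than taken from the book: a stripped-down agent loop that plans, calls a tool, and reports the result. The decide function here is a hard-coded rule standing in for what would normally be the Foundation Model’s planning step, and a small calculator is the only “tool” available.

```python
# Minimal agent loop sketch: take a goal, decide on an action, use a tool,
# then report the result. In a real agent the "decide" step is a Foundation Model
# choosing among tools; here a hard-coded rule stands in so the example runs alone.

def calculator_tool(expression: str) -> str:
    """A 'tool' the agent can call -- here, simple arithmetic."""
    allowed = set("0123456789+-*/(). ")
    if not set(expression) <= allowed:
        return "error: unsupported expression"
    return str(eval(expression))  # acceptable for this toy, whitelisted input

def decide(goal: str):
    """Stand-in for the model's planning step: pick a tool and its input."""
    if any(ch.isdigit() for ch in goal):
        return "calculator", "".join(ch for ch in goal if ch in "0123456789+-*/(). ")
    return "none", goal

def run_agent(goal: str) -> str:
    tool, tool_input = decide(goal)  # plan: which tool, with what input?
    if tool == "calculator":
        observation = calculator_tool(tool_input)  # act: call the tool
        return f"Goal: {goal}\nTool result: {observation}"
    return f"Goal: {goal}\nNo tool needed; answer directly."

print(run_agent("What is 12 * (3 + 4)?"))
```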
Understanding these elements—and how they work together—is critical for building AI systems that are not just technically impressive, but trustworthy, adaptable, and aligned with human purposes.