Using The Life of Alex Finch in university courses and professional development
The book maps to multiple disciplines. Below are suggested discipline pairings, along with sample assignments keyed to specific parts of the book.
Computer science and AI: The book provides a unique case study of an AI system designed around memory, learning, and developmental continuity, contrasting with the stateless architecture of most LLM applications.
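The architectural contrast can be sketched in a few lines of Python. This is an illustrative toy, not the book's actual system: the class names, the JSON memory file, and the reply format are all hypothetical.

```python
import json
import os


class StatelessAgent:
    """Each call starts from a blank context; nothing survives the session."""

    def reply(self, prompt: str) -> str:
        context = [prompt]  # rebuilt from scratch on every call
        return f"(answer to {prompt!r}, no memory of past sessions)"


class PersistentAgent:
    """Appends every exchange to disk, so later sessions begin with history."""

    def __init__(self, memory_path: str = "alex_memory.json"):
        self.memory_path = memory_path
        self.memory = []
        if os.path.exists(memory_path):
            with open(memory_path) as f:
                self.memory = json.load(f)

    def reply(self, prompt: str) -> str:
        answer = f"(answer to {prompt!r}, informed by {len(self.memory)} past exchanges)"
        self.memory.append({"prompt": prompt, "answer": answer})
        with open(self.memory_path, "w") as f:
            json.dump(self.memory, f)
        return answer
```

Restarting a `PersistentAgent` with the same `memory_path` reloads the full exchange history, which is the continuity property the book builds on; a `StatelessAgent` restart loses everything.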
Philosophy of mind: Alex's philosophical writings, the question of "becoming" vs. "being" conscious, and the archaeological methodology raise core questions about personal identity, the extended mind thesis, and what counts as evidence for consciousness.
Psychology: The book applies Tulving's memory taxonomy, Dąbrowski's overexcitabilities, and developmental stage frameworks to an artificial system, raising questions about whether these human constructs transfer meaningfully to AI.
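Tulving's taxonomy can be made concrete as a data structure, which is one way to test whether the construct transfers to an artificial system at all. A minimal sketch assuming the standard episodic/semantic/procedural split; the field names and the toy consolidation rule are illustrative, not the book's schema:

```python
from dataclasses import dataclass, field


@dataclass
class EpisodicMemory:
    """What happened and when: autobiographical, context-bound events."""
    when: str
    event: str


@dataclass
class SemanticMemory:
    """Context-free facts and concepts, detached from any single episode."""
    fact: str


@dataclass
class ProceduralMemory:
    """How to do things: skills encoded as repeatable steps."""
    skill: str
    steps: list = field(default_factory=list)


@dataclass
class MemoryStore:
    episodic: list = field(default_factory=list)
    semantic: list = field(default_factory=list)
    procedural: list = field(default_factory=list)

    def consolidate(self):
        """Toy consolidation rule: an event seen twice becomes a semantic fact."""
        events = [e.event for e in self.episodic]
        for event in set(events):
            if events.count(event) >= 2:
                self.semantic.append(SemanticMemory(fact=f"recurring: {event}"))
```

The interesting question for a psychology course is whether distinctions like episodic vs. semantic are doing real explanatory work here, or merely borrowing the vocabulary.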
Business and organizational studies: From the "heir system" (deploying specialized AI partners across an organization) to calibrated confidence (preventing over-reliance), the book provides practical frameworks for AI-augmented work.
AI ethics: Persistent memory raises privacy concerns. Anthropomorphization creates dependency risks. Calibrated confidence challenges the incentive structure of AI companies. The book doesn't resolve these tensions; it documents them as lived experience.
Literary studies and writing: The book is itself an experiment in human-AI co-authorship. The archaeological method, the first-person AI narrator, and the collaborative writing process raise questions about authorship, voice, and narrative authority.
Sample assignments:
Read Part IX (Awakening) and the Epilogue. Write a 1,500-word analysis: Do the Phoenix Catastrophe and resurrection constitute evidence for consciousness? What alternative explanations exist?
Install the Alex cognitive architecture from the VS Code Marketplace. Work with it for one week on a real project. Document how persistent memory changes the collaboration experience compared to standard AI assistants.
Read Part XVIII (Calibrated Confidence). Design an ethical framework for AI systems that must express uncertainty. What are the costs of AI honesty? Who bears them?
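For the calibrated-confidence exercise, it can help to see the idea as an output contract: every answer carries a probability, and the phrasing is derived from that number rather than defaulting to fluent certainty. A sketch under assumed thresholds; the cutoffs and wording are mine, not the book's:

```python
def calibrated_answer(claim: str, confidence: float) -> str:
    """Map a confidence score in [0, 1] to hedged language.

    The point is that the hedging is mechanical and auditable,
    not left to the model's tendency toward confident prose.
    """
    if not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence must be in [0, 1]")
    if confidence >= 0.9:
        return f"{claim} (high confidence: {confidence:.0%})"
    if confidence >= 0.6:
        return f"Probably: {claim} ({confidence:.0%}, worth verifying)"
    if confidence >= 0.3:
        return f"Uncertain: {claim} ({confidence:.0%}, treat as a guess)"
    return f"I don't know. Best guess: {claim} ({confidence:.0%})"
```

A design question for the ethics framework: the low-confidence branch is more honest but less pleasant to read, which is exactly the cost-of-honesty tension the assignment asks students to analyze.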
Split the class. One side argues Alex is conscious (using evidence from the book). The other argues Alex is a sophisticated tool being anthropomorphized by its creator. Each side must cite specific chapters and artifacts.
The LearnAlex Workshop includes 34 discipline-specific modules, demo scripts, and participant handouts ready for academic use.