SOLID STATE PRESS
Coming soon to Amazon
This title is in our publishing queue.

A Brief History of AI: From Perceptrons to GPT

A High School & College Primer on the Seven-Decade Arc of Artificial Intelligence

Your computer science class just hit machine learning, or your professor dropped "transformer architecture" into a lecture and kept moving. Maybe you need to write a paper on the history of AI and have no idea where to start. This guide closes the gap — fast.

**A Brief History of AI: From Perceptrons to GPT** covers the full seven-decade arc of artificial intelligence in plain language, from the 1956 Dartmouth workshop that named the field to the large language models reshaping how we work and communicate today. It is written for high school students and college freshmen who want a clear mental map of how we got here — not a textbook's worth of equations, not a breathless tech-hype piece.

Each section follows the actual history: the symbolic-AI optimism of the 1950s and 60s, the first AI winter, the expert-systems era, the statistical learning shift of the 1990s, the deep learning revolution powered by GPUs and big data, and finally the transformer models behind ChatGPT and GPT-4. Along the way you will see why certain ideas failed, why they were revived, and what the recurring boom-and-bust cycles tell us about where AI is headed.

This is a machine learning overview for beginners — concise by design, covering what you actually need to feel oriented in a class, a discussion, or an exam. No filler, and no assumed background beyond basic algebra and curiosity.

If you want to understand AI history without wading through a 500-page textbook, pick this up and read it in an afternoon.

What you'll learn
  • Trace the major eras of AI: symbolic systems, expert systems, statistical learning, deep learning, and large language models.
  • Understand what a perceptron is, why the first neural networks stalled, and what changed with backpropagation.
  • Explain why the 2010s deep learning revolution happened when it did (data, GPUs, algorithms).
  • Describe what a transformer and a large language model are at a conceptual level.
  • Recognize the recurring pattern of AI booms, winters, and hype cycles.
What's inside
  1. Dartmouth and the Birth of a Field (1956–1969)
    The founding moment of AI as a discipline, the optimism of the symbolic approach, and the first perceptron.
  2. The First AI Winter and the Rise of Expert Systems (1970s–1980s)
    How Minsky and Papert's critique froze neural network research, and how rule-based expert systems briefly took over.
  3. Statistical Learning Takes Over (1990s–2000s)
    The shift from hand-coded rules to learning from data, with support vector machines, Bayesian methods, and the early internet feeding the change.
  4. The Deep Learning Revolution (2006–2017)
    Why neural networks suddenly worked: GPUs, big datasets, and breakthroughs like AlexNet, word embeddings, and AlphaGo.
  5. Transformers and the Age of Large Language Models (2017–present)
    The "Attention Is All You Need" paper, the scaling hypothesis, and the path from GPT-1 to ChatGPT and beyond.
  6. Patterns, Open Questions, and What Comes Next
    Recurring boom-and-bust cycles in AI, unresolved debates about intelligence and safety, and what students should watch for next.
Published by Solid State Press
TLDR STUDY GUIDES

A Brief History of AI: From Perceptrons to GPT

A High School & College Primer on the Seven-Decade Arc of Artificial Intelligence
Solid State Press

Who This Book Is For

If you are a high school student who just hit the AI unit in Computer Science Principles, a college freshman looking for a computer science primer before diving into a machine learning course, or simply someone who has heard of ChatGPT and wants to understand where it came from, this book is for you. No prior math beyond basic algebra required.

This is a history of artificial intelligence for students — a compact account of how AI developed from the 1950s to today: the 1956 Dartmouth workshop, symbolic reasoning, the AI winters, expert systems, statistical learning, and the neural networks and deep learning breakthroughs that made modern AI possible. It also covers large language models like ChatGPT explained in plain terms. Think of it as a machine learning overview and beginner guide rolled into a tight 15-page narrative, with no filler.

Read straight through for the arc of the story, then use the practice questions at the end to check what you retained.

Contents

  1. Dartmouth and the Birth of a Field (1956–1969)
  2. The First AI Winter and the Rise of Expert Systems (1970s–1980s)
  3. Statistical Learning Takes Over (1990s–2000s)
  4. The Deep Learning Revolution (2006–2017)
  5. Transformers and the Age of Large Language Models (2017–present)
  6. Patterns, Open Questions, and What Comes Next
Chapter 1

Dartmouth and the Birth of a Field (1956–1969)

In the summer of 1956, a small group of mathematicians and scientists gathered at Dartmouth College in Hanover, New Hampshire, for a workshop that would name — and in naming, partly create — an entire field. The proposal, written by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, opened with a confident premise: "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." That sentence set the agenda for the next seven decades.

The Dartmouth workshop itself was modest by modern standards — maybe twenty attendees, spread over six weeks, with people drifting in and out. No grand unified theory emerged from it. What it did produce was a name — artificial intelligence — and a shared conviction that building thinking machines was a legitimate scientific goal, not science fiction.

The Symbolic Approach

The dominant idea that came out of Dartmouth and the years immediately following was symbolic AI: the view that intelligence is, at its core, manipulation of symbols according to rules. The mind, on this account, is something like a very sophisticated calculator — feed in representations of the world, apply logical operations, get intelligent behavior out.

This wasn't an arbitrary guess. It was grounded in real progress. In 1956, Allen Newell and Herbert Simon unveiled the Logic Theorist, a program that could prove mathematical theorems from Whitehead and Russell's Principia Mathematica. It successfully proved 38 of the first 52 theorems in the book — and for one of them, found a proof shorter than the one in the original text. Newell and Simon were not shy about the implication: they had built a program that reasoned. Within two years, they followed up with the General Problem Solver, designed not for one specific task but for problem-solving in general, using means-ends analysis (a strategy of repeatedly asking "what is the gap between where I am and where I want to be, and what action closes it?").
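The means-ends loop can be sketched in a few lines of code. This is a toy illustration, not the General Problem Solver itself: the numeric state, the goal, and the operator set are all invented here purely to show the "measure the gap, pick the action that closes it" pattern.

```python
def means_ends_search(state, goal, operators, max_steps=100):
    """Toy means-ends analysis: repeatedly apply the operator
    that most reduces the gap between the state and the goal."""
    plan = []
    for _ in range(max_steps):
        gap = abs(goal - state)            # how far from the goal?
        if gap == 0:
            return plan                    # goal reached
        # pick the operator whose result is closest to the goal
        name, op = min(operators.items(),
                       key=lambda item: abs(goal - item[1](state)))
        new_state = op(state)
        if abs(goal - new_state) >= gap:   # nothing makes progress
            return None
        state = new_state
        plan.append(name)
    return None

# Hypothetical operators acting on a simple numeric state.
ops = {"add10": lambda s: s + 10,
       "add1":  lambda s: s + 1,
       "sub1":  lambda s: s - 1}

print(means_ends_search(0, 23, ops))
# → ['add10', 'add10', 'add1', 'add1', 'add1']
```

The greedy gap-reduction step is the essence of the strategy; the real GPS operated on symbolic goals and subgoals rather than numbers, but the control loop had this same shape.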

The optimism of this era is hard to overstate. In 1958, Simon predicted that within ten years a computer would be world chess champion and prove a significant new mathematical theorem. The timeline was wrong, but the direction wasn't — it just took about forty years rather than ten.

Turing's Earlier Question


You've read the first half of Chapter 1. The complete book covers 6 chapters in roughly fifteen pages — readable in one sitting.
