Big-O Notation
A High School & College Primer on Algorithm Efficiency
Algorithm efficiency questions show up on AP Computer Science exams, college coding interviews, and intro CS courses — and most textbooks bury the concept in 80 pages of theory before you see a single useful example. This guide skips the filler.
**TLDR: Big-O Notation** covers exactly what a high school or early college student needs to understand algorithmic complexity and feel confident using it. You'll learn what Big-O actually measures (hint: it's about growth, not clock time), how to read and compare the most common growth rates from O(1) to O(n!), and how to derive Big-O directly from code — loops, nested loops, sequential blocks, and recursive functions. The guide also clears up the confusion between Big-O, Big-Theta, and Big-Omega, and explains best, average, and worst case in plain language.
If you're studying for an AP Computer Science exam, working through an intro data structures and algorithms course, or just trying to understand why one solution is faster than another, this primer gives you the tools without the padding. Every concept is explained in plain language, backed by worked examples with real numbers, and trimmed to what actually matters.
At 10–20 pages, it's designed to be read in one sitting — before a class, before a test, or whenever you realize you finally need to get this straight.
Pick it up, read it once, and walk into your next exam ready.
By the end, you'll be able to:
- Explain what Big-O notation measures and what it ignores
- Rank the common growth rates from O(1) to O(n!) and recognize them in code
- Analyze loops, nested loops, and simple recursive functions to determine their Big-O
- Distinguish Big-O (upper bound) from Big-Theta and Big-Omega, and best/average/worst case
- Apply Big-O thinking to compare algorithms and predict which will scale
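The objectives above come down to reading code and seeing its growth rate. As a quick taste of that skill, here are two ways to check a list for duplicates, sketched in Python (the function names and inputs are illustrative, not taken from the book):

```python
def has_duplicate_quadratic(items):
    """Compare every pair: the inner loop runs up to n times for each
    of the n outer iterations, so this is O(n^2)."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicate_linear(items):
    """One pass with a set: each membership check is O(1) on average,
    so the whole function is O(n)."""
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False

# Both give the same answer on any input; only the growth rate differs.
print(has_duplicate_quadratic([3, 1, 4, 1, 5]))  # True
print(has_duplicate_linear([2, 7, 1, 8]))        # False
```

On a few items the difference is invisible; on a million items, the O(n^2) version does roughly a trillion comparisons while the O(n) version does a million.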
What's inside:
- 1. What Big-O Actually Measures. Introduces Big-O as a way to describe how an algorithm's running time or memory grows as input size grows, and why we care about growth rate rather than exact time.
- 2. The Big-O Zoo: Common Growth Rates. Walks through O(1), O(log n), O(n), O(n log n), O(n^2), O(2^n), and O(n!) with concrete examples and a comparison of how they scale.
- 3. How to Analyze Code: Loops, Nesting, and Sequences. Teaches the practical rules for deriving Big-O from code: count loop iterations, multiply for nested loops, add for sequential blocks, and drop constants and lower-order terms.
- 4. Recursion and Divide-and-Conquer. Shows how to analyze recursive functions using recurrence relations and the intuition behind why binary search is O(log n) and merge sort is O(n log n).
- 5. Big-O vs. Big-Theta vs. Big-Omega, and Best/Average/Worst Case. Clarifies that Big-O is technically an upper bound, distinguishes it from Theta and Omega, and explains how best, average, and worst case interact with these notations.
- 6. Why It Matters: Choosing Algorithms in the Real World. Connects Big-O to real decisions like choosing between data structures, why O(n^2) algorithms break at scale, and what to study next.
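As a preview of the divide-and-conquer intuition, here is a Python sketch of binary search with a step counter bolted on (the counter is our addition for illustration; it is not part of the standard algorithm). Each comparison halves the remaining search space, which is exactly why the running time is O(log n):

```python
def binary_search(sorted_items, target):
    """Return (index, steps) if target is found, else (-1, steps)."""
    lo, hi = 0, len(sorted_items) - 1
    steps = 0
    while lo <= hi:
        steps += 1                    # one comparison per halving
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid, steps
        elif sorted_items[mid] < target:
            lo = mid + 1              # discard the lower half
        else:
            hi = mid - 1              # discard the upper half
    return -1, steps

# Searching a sorted list of 1,000,000 numbers takes at most
# floor(log2(1,000,000)) + 1 = 20 comparisons.
data = list(range(1_000_000))
index, steps = binary_search(data, 765_432)
print(index, steps)  # index is 765432; steps stays at or below 20
```

Compare that to a linear scan, which would need up to 1,000,000 comparisons on the same list: growth rate, not raw speed, is what separates the two.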