
Growth Rates, Not Absolute Costs

Big O: Measuring How Fast Costs Grow

Big O notation measures the growth rate of an algorithm's cost — not the absolute cost.

The question Big O answers: as the input size N doubles, does the work double? Quadruple? Stay flat?

The 'O' stands for 'order', as in the order of growth. The expression inside the parentheses describes the shape of the growth curve.


Key complexity classes:

- O(1) — constant: cost does not grow. Example: looking up a value by index in an array. Whether the array has 10 items or 10 million, one lookup stays one lookup.

- O(N) — linear: cost grows with input. Example: reading every item in a list once. Double the list, double the reads.

- O(N²) — quadratic: cost grows as the square of input. Example: comparing every item to every other item. Double N, quadruple the work.
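The three classes can be made concrete with a few runnable lines (illustrative Python; the variable names are arbitrary):

```python
items = list(range(1_000))

# O(1): index lookup is one step, regardless of list size
first = items[0]

# O(N): one pass touches every item once
total = 0
for x in items:
    total += x

# O(N²): every item paired with every other item
# (a small slice keeps the demo fast)
pairs = 0
for a in items[:100]:
    for b in items[:100]:
        pairs += 1
print(pairs)  # → 10000
```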

Growth comparison table:

| N | O(1) | O(N) | O(N²) |
|---|------|------|-------|
| 10 | 1 | 10 | 100 |
| 100 | 1 | 100 | 10,000 |
| 1,000 | 1 | 1,000 | 1,000,000 |

At N=10 the difference looks small. At N=1,000 the gap becomes enormous.

Compare O(N) to O(N²)

Use the growth comparison table above.

At N=1,000: O(N) requires 1,000 operations. O(N²) requires 1,000,000 operations.

At N=1,000, how many times more operations does O(N²) require compared to O(N)? Show your work.

How Code Structure Reveals Complexity

How to Identify Big O from Code

Big O hides in the shape of your code. Four patterns cover most cases:

Single loop over N items: O(N)

```python
# O(N): one pass through the list
for item in list_of_n_items:
    process(item)
```

Nested loops, each over N items: O(N²)

```python
# O(N²): every item paired with every other item
for item_a in list_of_n_items:
    for item_b in list_of_n_items:
        compare(item_a, item_b)
```

Constant-time lookup: O(1)

Array index access, hash table lookup — one step regardless of size.

Divide-and-conquer (cut the problem in half each step): O(log N)

Binary search: each check halves the remaining items.
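As a sketch, here is binary search in Python (assuming the input list is already sorted; the names are illustrative):

```python
def binary_search(sorted_items, target):
    # O(log N): each iteration halves the remaining search range
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1   # discard the lower half
        else:
            hi = mid - 1   # discard the upper half
    return -1  # not found

print(binary_search([2, 5, 8, 12, 16, 23, 38], 23))  # → 5
```

For N = 1,000 that is at most about 10 checks, versus 1,000 for a linear scan.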

---

Hidden O(N²): a list inside a loop

```python
# This looks like O(N) but it is actually O(N²)
visited = []
for item in list_of_n_items:
    if item not in visited:   # scans all of visited: O(N)
        visited.append(item)  # the whole loop: O(N²)
```

The `if item not in visited` line scans every element of visited on each iteration. A list scan costs O(N). A loop that runs N times, each doing O(N) work: O(N) × O(N) = O(N²).

This pattern appears in 1,000+ open-source projects. The fix is tiny: replace `visited = []` with `visited = set()` (and `append` with `add`). Set membership testing costs O(1) on average, not O(N). One small change. Same results. 1,000× faster at N=1,000.
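Here is the repaired loop as a self-contained sketch. Note that a set does not preserve insertion order, so if the order of results matters, keep a separate list alongside it:

```python
def dedup(items):
    # O(N): set membership checks are O(1) on average
    seen = set()
    result = []
    for item in items:
        if item not in seen:   # O(1) lookup, not an O(N) scan
            seen.add(item)
            result.append(item)  # preserves first-seen order
    return result

print(dedup([3, 1, 3, 2, 1]))  # → [3, 1, 2]
```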

Classify & Fix This Code

Read this code:

```python
result = []
for name in student_names:
    if name not in result:
        result.append(name)
```

What is the Big O complexity of this code? Explain why the `if name not in result` line makes it expensive. Then rewrite the code to make it O(N).

Theory Meets Practice

Big O in the Wild

Big O is not just theory. It separates programs that finish in seconds from programs that take 17 minutes — on the exact same task.

Real example: dependency cycle detection in a build system.

When you install software, your computer resolves a graph of dependencies: package A needs B, B needs C, and so on. A build system checks this graph for cycles.

An O(N²) cycle-detection algorithm works fine at N=100 packages. At N=50,000 packages (a real project), it takes 17 minutes. The O(N) fix: 10 seconds.

This exact defect exists in GHC (Haskell compiler), pip (Python package manager), Maven (Java build system), Cargo (Rust package manager), and 1,000+ other projects.

Not because their authors were careless. `visited = []` was written when N was small. The code calcified. N grew. Nobody noticed until the build took half an hour.

Big O is the vocabulary that lets you name what happened — and fix it.
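One way to watch quadratic growth happen is to time the list-based pattern at two sizes (a rough sketch; absolute times vary by machine, but doubling N should roughly quadruple the time):

```python
import time

def dedup_with_list(items):
    # the O(N²) pattern: list membership test inside a loop
    seen = []
    for x in items:
        if x not in seen:
            seen.append(x)
    return seen

def time_it(n):
    data = list(range(n))  # all unique: worst case for the scan
    start = time.perf_counter()
    dedup_with_list(data)
    return time.perf_counter() - start

t1 = time_it(5_000)
t2 = time_it(10_000)
print(f"N=5,000: {t1:.3f}s  N=10,000: {t2:.3f}s  ratio: {t2 / t1:.1f}x")
```

A ratio near 4 after doubling N is the fingerprint of O(N²); an O(N) version would show a ratio near 2.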

---

Want to go deeper? Our unhamming course includes a full chapter on Big O, MOAD-0001 (a real O(N²) defect found in 1,000+ open-source projects), and why naming a pattern lets you find it everywhere. See [The Missing Chapter: Algorithmic Complexity](/en/un/unhamming_algorithmic_complexity).

Predict the Build Times

A build system uses O(N²) cycle detection.

Measured build times:

- N=100 packages: 0.01 seconds

- N=1,000 packages: 1 second

For O(N²): time scales as (N_new / N_old)².

For O(N): time scales as (N_new / N_old).

Using those formulas and the measured data: (A) at N=10,000, how long does the O(N²) version take? (B) after an O(N) fix, using N=1,000 as your baseline, how long does the O(N) version take at N=10,000? Show your work for both.