Why Go Beats Chess at Breaking AI (Still)


When I first learned that computers had conquered chess in 1997 but took until 2016 to master Go, I was genuinely puzzled. Go seemed like a simpler game—a 19×19 board, rules you could explain in five minutes, no complex piece movements. Yet the world’s most powerful computers struggled against it for decades after Kasparov fell to Deep Blue. The reason lies not in the surface rules, but in the staggering mathematical complexity buried beneath them.

The Basic Difference: Two Games, Two Philosophies

Chess and Go are fundamentally different problems dressed in the language of strategy games. Chess, which emerged in India around the 6th century, is a game of perfect information with a decision tree that, while enormous, yields to calculation. Every piece moves in a precisely defined way. Every position has a measurable value. Go, originating in China over 2,500 years ago, has simpler rules but creates an exponentially more complex landscape.


In chess, you have exactly 32 pieces, 64 squares, and exactly 20 legal first moves. In Go, you start with an empty 19×19 board (361 intersections where stones can be placed) and hundreds of legal moves at each turn. A typical chess game lasts 40-60 moves. A typical Go game lasts around 150 moves, often more. But the numbers don't tell the full story: the structure of the problem does.

The Game Tree Explosion: Why Brute Force Fails

Computer scientists measure game complexity using a concept called the game tree—the branching diagram of all possible moves and responses. Chess has an average branching factor of about 35 (roughly 35 legal moves per position). Go has an average branching factor of about 250 moves per position (Sharp, 2017). This difference seems modest, but mathematically it’s catastrophic. [2]
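To see how a modest-looking gap in branching factor explodes with depth, here's a minimal Python sketch. The node count b^d is the standard back-of-envelope model; the factors 35 and 250 are the averages cited above, and the depths are arbitrary:

```python
# Toy illustration: a game tree has roughly b**d nodes for
# branching factor b and search depth d.
for depth in (2, 4, 6, 8):
    chess_nodes = 35 ** depth    # chess: ~35 legal moves per position
    go_nodes = 250 ** depth      # Go: ~250 legal moves per position
    print(f"depth {depth}: chess ~{chess_nodes:.1e} nodes, "
          f"Go ~{go_nodes:.1e} nodes ({go_nodes // chess_nodes:,}x more)")
```

At depth 8 the Go tree is already millions of times larger, even though 250 is only about seven times 35.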

The total number of possible positions tells the real story. Chess has approximately 10^43 to 10^47 unique positions. Go has approximately 10^170 unique positions. To put that in perspective: there are only about 10^80 atoms in the observable universe. Go has more possible board configurations than there are atoms in existence (Müller & Enzenberger, 2012). [1]

Deep Blue defeated Kasparov by evaluating 200 million positions per second, using purpose-built hardware and sophisticated move-ordering algorithms. Even with that power, it couldn't evaluate every possibility; it relied on pruning (eliminating obviously bad branches) and heuristics (educated guesses about position value). And here is why Go is harder than chess for computers: the sheer number of promising possibilities in Go makes pruning far less effective.

Branching Factor and Computational Depth

In chess, most moves are obviously bad, and a computer can quickly eliminate them. In Go, many moves have subtle value that only becomes apparent ten, twenty, or fifty moves later. A move that looks weak might prepare a brilliant territory-grabbing sequence. A defensive move might seem passive until it enables a devastating counterattack. This is the search-tree reason Go is harder than chess for computers: you can't safely prune as aggressively.

To evaluate a chess position, computers look ahead 20-30 half-moves, or plies (the search "depth"). Even with alpha-beta pruning, a technique that skips branches which provably cannot affect the final move choice, this is manageable. To evaluate a Go position with similar depth would require analyzing incomprehensibly more positions. The computational cost scales exponentially with depth, and Go's larger branching factor makes this exponential growth steeper.
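A minimal sketch of alpha-beta pruning over a toy game tree may make the cutoff idea concrete. The tree, node labels, and leaf scores below are invented for illustration; real engines add move ordering, transposition tables, and much more:

```python
import math

def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """Minimax with alpha-beta pruning: a branch is cut as soon as it
    provably cannot change the move chosen at the root."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        best = -math.inf
        for child in kids:
            best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                       False, children, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:  # cutoff: the opponent will never allow this line
                break
        return best
    best = math.inf
    for child in kids:
        best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                   True, children, evaluate))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best

# Toy tree: root (our move) -> A or B (opponent's reply) -> leaf scores.
tree = {"root": ["A", "B"], "A": [3, 12], "B": [2, 8]}
children = lambda n: tree.get(n, [])  # integer leaves have no children
evaluate = lambda n: n                # leaves evaluate to themselves

print(alphabeta("root", 2, -math.inf, math.inf, True, children, evaluate))
# → 3; the leaf 8 under B is pruned without ever being examined
```

Once B's first reply scores 2, worse for us than A's guaranteed 3, the rest of B is skipped. In Go, with far more replies that all look plausible, such cutoffs arrive much later.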

Evaluation Without a Clear Signal: The Pattern Recognition Problem

Another reason why Go is harder than chess for computers involves something more subtle: positional evaluation. In chess, you can relatively easily assign a numerical value to a position. A pawn is worth about 1 point, a knight or bishop about 3, a rook about 5, a queen about 9. Material advantage correlates strongly with winning chances. A computer can look at a position and quickly estimate who’s winning.
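The classic material count described above can be sketched in a few lines. The piece letters and example positions are illustrative, not any real engine's evaluation function:

```python
# Conventional chess piece values: material correlates strongly
# with winning chances, giving the evaluator a clear signal.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def material_score(white_pieces, black_pieces):
    """Positive favors White, negative favors Black."""
    score = sum(PIECE_VALUES[p] for p in white_pieces)
    score -= sum(PIECE_VALUES[p] for p in black_pieces)
    return score

# White has king, queen, rook; Black has king and two rooks.
print(material_score("KQR", "KRR"))  # → 4 (queen+rook = 14 vs. two rooks = 10)
```

Real evaluation functions add terms for king safety, pawn structure, and mobility, but material remains the backbone. Go offers no equivalent: every stone is identical, so there is nothing to count.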

Go has no such clear currency. Territory and influence are distributed across the board. A local group might be alive or dead depending on moves played 50 turns later. The value of a move emerges from global properties (influence, territory, the life-and-death status of groups) that are genuinely difficult to quantify. One stone placed in a seemingly empty area might, through a chain of tactics, determine the fate of a large group (Müller & Enzenberger, 2012). [3]

For decades, this was the core of the problem. Chess computers used evaluation functions—mathematical formulas that assigned point values to positions based on material, piece safety, pawn structure, and so on. These functions worked because they captured the essential dynamics of chess. Go’s evaluation function is vastly more complex. You can’t just count stones; you have to understand patterns, potential territory, and the likelihood of group survival—all intuitive human concepts that are hard to encode mathematically.
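Even the most basic Go primitive, finding a stone's connected group and counting its liberties, already takes a flood fill, and the liberty count alone still doesn't settle whether the group lives or dies. A hypothetical sketch (the 4×4 board and coordinates are invented for illustration):

```python
def group_and_liberties(board, row, col):
    """Flood-fill the connected group containing (row, col) and collect
    its liberties (empty points adjacent to the group).
    board: list of equal-length strings using 'b', 'w', '.'."""
    color = board[row][col]
    assert color in "bw", "start on a stone, not an empty point"
    size = len(board)
    group, liberties, stack = set(), set(), [(row, col)]
    while stack:
        r, c = stack.pop()
        if (r, c) in group:
            continue
        group.add((r, c))
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < size and 0 <= nc < size:
                if board[nr][nc] == ".":
                    liberties.add((nr, nc))   # empty neighbor: a liberty
                elif board[nr][nc] == color:
                    stack.append((nr, nc))    # same color: same group
    return group, liberties

board = [".bw.",
         "bbw.",
         ".ww.",
         "...."]
g, libs = group_and_liberties(board, 1, 0)  # the black group
print(len(g), len(libs))  # → 3 2
```

Three black stones share just two liberties here, yet whether that matters depends on who moves next and on fights elsewhere on the board. The primitive is cheap; the judgment built on it is not.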

The Intuition Gap

In my experience teaching both games, I’ve noticed that strong Go players develop almost unconscious pattern recognition. They see a shape and immediately know it’s weak or strong, alive or dead, valuable or not. This pattern recognition comes from having played thousands of games and internalized thousands of local shapes and their implications. A computer following a traditional algorithm couldn’t match this. It couldn’t “see” the pattern; it had to calculate it.

This is precisely where artificial intelligence technology (specifically deep neural networks and machine learning) eventually broke through. But before 2016, this intuition gap was a massive barrier. [5]

Why AlphaGo Changed Everything: Learning Instead of Calculating

In March 2016, DeepMind's AlphaGo defeated Lee Sedol, one of the world's strongest Go players. This wasn't a brute-force victory like Deep Blue's. Instead, AlphaGo combined two neural networks with traditional search (Silver et al., 2016):

  • A policy network, trained on millions of expert moves and refined through self-play, that proposes a short list of promising moves in any position.
  • A value network that estimates the probability of winning from a position, replacing the hand-crafted evaluation functions that had failed in Go.

Both networks guided a Monte Carlo tree search, so AlphaGo explored only the branches its learned intuition flagged as worth the effort.
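The full AlphaGo pipeline is beyond a short sketch, but its core selection rule (a PUCT variant from Silver et al., 2016) shows how a policy prior steers search toward promising moves instead of expanding all ~250. The constant and the numbers below are illustrative assumptions, not AlphaGo's actual parameters:

```python
import math

def puct_score(q, prior, parent_visits, child_visits, c_puct=1.5):
    """Score used to pick which child to explore next: exploit high value
    estimates (q) while biasing exploration toward moves the policy
    network rated likely (prior). Rarely visited moves get a bonus."""
    return q + c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)

# Two candidate moves after 100 simulations at the parent node:
# move A: decent value estimate, high prior, already heavily visited
# move B: no value estimate yet, moderate prior, barely visited
a = puct_score(q=0.52, prior=0.40, parent_visits=100, child_visits=60)
b = puct_score(q=0.00, prior=0.20, parent_visits=100, child_visits=2)
print(b > a)  # → True: the next simulation explores the neglected move
```

Over thousands of simulations this rule concentrates effort on the handful of moves the policy network likes, which is exactly the pruning that hand-written heuristics could never do safely in Go.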

Last updated: 2026-04-01


About the Author

Written by the Rational Growth editorial team. Our health and psychology content is informed by peer-reviewed research, clinical guidelines, and real-world experience. We follow strict editorial standards and cite primary sources throughout.

