
The GTO We Trust Is Bound to Solver Abstractions

You open a solver and see "fold 100% here." The natural response is to memorize it as "this is a folding spot" and stop thinking about it. Then you watch live players actually bluff this spot, and you start to wonder whether the solver's 100% fold was simply wrong. The catch is that a 100% fold in a solver does not always mean folding is the truth. It is computed inside a specific abstraction: a specific set of betting options, a specific assumed opponent range, a specific set of allowed actions. Swap the abstraction and that 100% fold may disappear. A large part of the GTO we currently trust is bound to this kind of model-dependent 100%.

The raw game tree of poker is too large to solve directly. Even heads-up no-limit hold'em at 100bb, once you expand flop × turn × river × every action available at each node, has decision points on the order of 10^161, the scale commonly cited in the Libratus literature. At each node many bet sizes, raise sizes, street sequences, and hand-state details are theoretically possible. A solver does not calculate that full universe directly. It compresses the game into a model that can actually be solved.

Abstraction can happen at several layers: hand strength can be grouped, boards can be bucketed, bet sizes can be discretized, and some actions can be included or removed. This article is mainly about action abstraction, meaning which actions the solver is allowed to choose at each node: where can a player check, bet, or raise? Which bet sizes are available? Can OOP donk? Can SB limp?

Abstraction: the decision tree the solver sees

Same spot, different abstractions expose different actions

[Figure: an OOP river decision node with candidate actions Check, Bet 33%, Bet 75%, Bet 125%, and Bet 200%. Solid branches are in the tree (solver evaluates); dashed branches are not in this abstraction (invisible to the solver).]

The solver searches for equilibrium only along the solid branches. The dashed actions all exist in real poker, but this tree did not include them, so the solver never had the chance to choose them. Swap the abstraction and "100% bet 75% pot" may become "125% pot is optimal."

An action that is not included in the abstraction can never be discovered by the solver. The solver is not exploring the real game freely. It is optimizing on the action tree it was given.
Solver output ≈ Nash solution inside G_abstract (the abstracted game, not the full game)
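The point can be sketched in a few lines of Python. All EV numbers below are hypothetical, purely for illustration: the "solver" maximizes only over the actions its tree allows, so an action missing from the abstraction can never appear in its output.

```python
# Hypothetical river EVs for OOP, in big blinds (illustrative numbers only,
# not real solver output).
EV = {"check": 4.0, "bet_33": 4.6, "bet_75": 5.1, "bet_125": 5.4, "bet_200": 5.2}

def solve(action_menu):
    """Return the EV-maximizing action among the actions the tree allows."""
    return max(action_menu, key=EV.get)

narrow_tree = ["check", "bet_75"]                                  # coarse abstraction
wide_tree = ["check", "bet_33", "bet_75", "bet_125", "bet_200"]    # richer abstraction

print(solve(narrow_tree))  # -> bet_75  ("100% bet 75%" inside this tree)
print(solve(wide_tree))    # -> bet_125 (the overbet only exists in this tree)
```

The same spot yields "always bet 75%" or "the overbet is best" depending purely on which branches were included, exactly the swap described above.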

Back to that 100% fold from the opening. When a solution shows 100% fold at a node, it can mean two very different things. First, inside the opponent range and sizing menu the solver assumed, folding really does outperform calling or raising. Second, in real games opponents bluff this spot more often than the model assumed, or they take a line the solver never simulated, and the 100% you read no longer reflects reality. These two 100%s look identical on a frequency chart, but they mean opposite things. Memorizing the second kind as "this is a folding spot" deletes the bluffs that actually exist in live play from your own decision space.

How Does the Solver Actually "Discover" New Strategies?

Players often look back at solver history and imagine ideas like SB limping, overbetting, donk betting, and block betting as continents the solver suddenly discovered. A more accurate model is that most modern discoveries happen in two steps.

  1. A developer adds an action to the abstraction: SB limp, 125% pot, 200% pot, OOP donk, small block bet, or additional raise sizes.
  2. The solver recalculates and tells you how aggressively that action is used, on which textures, and with which parts of range.

So a solver discovery is not a one-sided breakthrough. It is an iteration between humans and the solver. Humans propose a richer action menu. The solver tests whether actions in that menu have EV. When a new size is included and appears at meaningful frequency, we later label it modern strategy.
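This two-step loop can be sketched directly (again with made-up EV numbers, only to illustrate the shape of the iteration): a human proposes a candidate action, the "solver" re-solves, and the action is kept only if it carries EV.

```python
# Hypothetical EVs in big blinds; in a real workflow these come from a
# full re-solve, not a lookup table.
TOY_EV = {"check": 4.0, "bet_66": 5.0, "donk": 4.1, "overbet_150": 5.35}

def resolve(menu):
    """Toy stand-in for 'recalculate the equilibrium on this action tree'."""
    best = max(menu, key=TOY_EV.get)
    return best, TOY_EV[best]

menu = ["check", "bet_66"]
for candidate in ("donk", "overbet_150"):       # step 1: human proposes an action
    _, old_ev = resolve(menu)
    _, new_ev = resolve(menu + [candidate])     # step 2: solver re-evaluates
    if new_ev > old_ev:
        menu.append(candidate)
        print(f"{candidate}: adopted, gains {new_ev - old_ev:.2f} bb")
    else:
        print(f"{candidate}: no EV gain, menu unchanged")
```

In this toy run the donk never makes the menu while the 150% overbet does; a few iterations later, the surviving actions get labeled "modern strategy."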

Not Unwanted, Just Missing

At PokerAlpha we ran the same spot through several action abstractions and compared the outputs. A consistent pattern emerged: many strategies that look like "the solver dislikes this action" are, once you open the tree, actions the abstraction never allowed in the first place. Once we added these actions to the tree, the solver derived new strategies the original abstraction had never surfaced.

SB Limp

If a heads-up or blind-vs-blind preflop abstraction only allows raise or fold, the solver can never output SB limp. That does not prove limping has no theoretical value. It only proves that this model gave limping no space to exist. Once limp is added to the action tree, the real question becomes visible: at which stack depths, rake structures, antes, and positional setups should limp appear, and at what frequency?

Overbet

Overbetting follows the same logic. If the flop, turn, or river menu only contains 33%, 66%, and 100% pot, the solver cannot produce 150% pot. Early simplified solutions without overbets do not necessarily prove overbets are bad. The size abstraction may simply be closed. Once 125%, 150%, and 200% are included, the solver can reveal which boards have enough range advantage, nut advantage, or equity-denial pressure to support overbets.
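A classic toy model shows why a capped size menu hides this. In the standard polarized-range vs. bluff-catcher simplification (pot normalized to 1, bet size s in pots), the defender must call with frequency 1/(1+s) to keep bluffs indifferent, so a value bet earns 1 + s/(1+s) pots, which is strictly increasing in s. This is a textbook sketch, not a full solve:

```python
def value_hand_ev(s):
    """EV (in pots) of a size-s value bet against a bluff-catcher that
    calls 1/(1+s) of the time, the frequency making bluffs indifferent."""
    call_freq = 1 / (1 + s)
    fold_freq = 1 - call_freq
    # win the pot when they fold, pot + s when they call (value hand always wins)
    return fold_freq * 1 + call_freq * (1 + s)

for s in (0.33, 0.75, 1.0, 1.5, 2.0):
    print(f"bet {s:>4} pot -> EV {value_hand_ev(s):.3f} pots")
```

If the abstraction stops at 100% pot, the extra EV of the 150% and 200% rows simply does not exist for the solver, even though the model says it is there.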

Donk Bet

Donk betting is especially easy to misunderstand. Older teaching often turned "the preflop defender should not lead" into a rule, but that was frequently entangled with tree design. If OOP is only allowed to check at certain nodes, the solver will of course never donk. Once donk actions are allowed, they do not appear everywhere. They appear in specific turns and rivers where the board shifts nut ownership or changes range interaction.

Block Bets and Small Sizes

River block betting is the same story. If OOP on the river can only check or bet 75% pot, you will never see a 20-33% block-bet strategy. After the small size is included, the solver can express its dual purpose: thin value, low-cost bluffing, and denying IP the chance to build a polarized large-bet range. The solver did not suddenly become smarter. The action space became more complete.

Why Do Abstractions Keep Expanding?

The first reason is compute. Every additional size increases node fan-out. Every street with more actions grows the game tree dramatically. Early poker AI needed serious engineering even for limit hold'em. Later came simplified no-limit heads-up abstractions, then multi-size, multi-player, ICM, PKO, and rake-specific modern solutions. More compute allows developers to include more actions.
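A back-of-envelope calculation makes the fan-out cost concrete. If every betting node offers roughly a actions and a line passes through d such nodes, the tree has on the order of a^d leaves. Real trees are irregular, so this is only an order-of-magnitude illustration:

```python
def rough_tree_size(actions_per_node, depth):
    """Order-of-magnitude leaf count for a uniform tree: a ** d."""
    return actions_per_node ** depth

# e.g. 8 decision nodes along a hand, with increasingly rich size menus:
for a in (3, 5, 8):
    print(f"{a} actions/node -> ~{rough_tree_size(a, 8):,} leaves")
```

Going from 3 actions per node to 8 multiplies the toy tree by more than a factor of 2,500, which is why each extra size in the menu has to earn its compute budget.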

The second reason is accumulated intuition. Developers observe how the solver behaves inside the current menu. Does a spot constantly push frequency into the largest size? Do medium-strength hands look like they need a smaller size? Is OOP checking too much on a turn card? These signals suggest the menu may be incomplete, so a new action is added and tested.

The third reason is feedback from elite play. Strong pros notice when a solver line feels strange in practice, or when a human line works repeatedly despite being absent from the solution. That does not automatically mean humans are smarter than the solver, and it does not automatically mean the solver is wrong. Often it means the current abstraction does not fully describe that spot.

This Does Not Make Solvers Useless. It Means You Must Read Them More Precisely

The worst conclusion is "abstractions have limits, so solvers are not trustworthy." That is backwards. Solvers are still the strongest microscope humans have for studying poker. You simply cannot forget the shape of the lens. You are seeing a strategy projected through a specific abstraction, not an unconditional truth without model boundaries.

A mature solver workflow asks four questions whenever reading output:

  • Which actions are allowed at this node? Limp, donk, raise, overbet, and small bet may or may not exist.
  • What is the size menu? 33 / 75 / 125 can produce a different strategy than 25 / 66 / 150.
  • How large are the EV gaps between sizes? If EV is close, practical simplification usually matters more than memorizing exact frequencies.
  • Is this solution for learning a baseline or studying a boundary? The two goals call for different abstraction widths.
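The third question above lends itself to a simple mechanical check. The EVs and the 0.05bb threshold here are illustrative assumptions, not solver output or an established cutoff:

```python
EVS = {"bet_33": 5.02, "bet_75": 5.05, "bet_125": 5.04}  # hypothetical EVs, in bb

def worth_memorizing_mix(evs, threshold=0.05):
    """If the EV spread across sizes exceeds the threshold, the size choice
    matters; otherwise a one-size simplification loses almost nothing."""
    gap = max(evs.values()) - min(evs.values())
    return gap > threshold

print(worth_memorizing_mix(EVS))  # -> False: pick one size and move on
```

When this kind of check comes back False across the spots you study, practical simplification beats memorizing exact frequencies.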

The Practical Lesson for Players

The point is not that every in-game spot should use ten sizes. That is impossible and unnecessary. The real point is to stop treating solver output as static doctrine. Treat it as a research result with known boundaries.

When you learn a rule like "do not donk here," "only bet small here," or "there is no overbet here," the next step is not blind memorization. Ask: under what abstraction was this conclusion produced? If one more reasonable action is added, does the conclusion survive? If it survives, the rule is more robust. If it does not, you were not seeing GTO itself. You were seeing the shadow of a simplified model.

PokerAlpha should be used in the same spirit: not to make you blindly trust one answer, but to break a hand into board texture, ranges, opponent type, and possible actions. Once you start asking why a line exists and whether a new size would change the answer, you are no longer memorizing solvers. You are understanding them.

Conclusion: GTO Is Not a Map. It Is a Resolution Problem

The GTO we currently trust is shaped by compute, developer intuition, product design, and action abstraction. It is extremely valuable, but it is not the endpoint. The next time someone treats a solver screenshot as the final answer, ask for the details first: how was the opponent range defined? How wide is the action menu in this solution? How granular are the sizings? At which nodes is raising even allowed? Those settings are the real strategy behind the screenshot.

In the end, everyone at the table is running their own abstraction of poker and computing inside their head. A solver compresses an infinite action space into a tree that can be solved. A human compresses opponents, boards, and stack structures into a handful of fast-decision categories. The only difference is the resolution of the abstraction. The less you abstract and the more detail you keep, the more precisely you play.

References

  1. Zinkevich, Johanson, Bowling, Piccione. "Regret Minimization in Games with Incomplete Information." A foundational CFR paper explaining regret minimization for large imperfect-information games.
  2. "Solving Heads-up Limit Texas Hold'em." Science. The Cepheus paper and a major milestone in solving large poker games under computational constraints.
  3. "Libratus: The Superhuman AI for No-Limit Poker." IJCAI. Explains abstraction, nested subgame solving, and self-improvement in a superhuman no-limit poker AI.
  4. "What is a Solver in Poker? How Solvers Work & How to Think About Them." Upswing Poker. A player-oriented explanation of how solvers work, emphasizing that solvers do not solve full NLHE but optimize within user-specified sizing and action constraints, and that understanding why matters more than memorizing frequencies.
