Exploring Backtracking Techniques in Data Structures
Backtracking is an algorithmic technique that solves problems by incrementally building a solution one candidate at a time, abandoning a candidate as soon as it cannot possibly lead to a valid result.
TLDR: Backtracking is "Recursion with Undo." You try a path, explore it deeply, and if it fails, you undo your last decision and try the next option. It systematically explores the search space but prunes invalid branches early, making it far more efficient than naive brute force.
Solving a Maze by Walking Backward
When solving a maze, you don't try every possible path simultaneously. You pick one corridor, follow it until you hit a dead end, then backtrack to the last junction and try the next corridor.
Backtracking algorithms apply exactly this logic to decision trees:
- Choose a candidate.
- Recurse deeper with that candidate applied.
- If the recursion fails or the constraint is violated, undo the choice and try the next one.
This differs from brute force: you don't generate all combinations and filter at the end. You prune the moment you know a branch can't work.
The Backtracking Template: Choose → Explore → Un-Choose
Every backtracking solution fits this skeleton:
```python
def backtrack(state, choices):
    if is_solution(state):
        record(state)
        return
    for choice in choices:
        if is_valid(state, choice):
            apply(state, choice)                           # CHOOSE
            backtrack(state, next_choices(state, choice))  # EXPLORE
            undo(state, choice)                            # UN-CHOOSE
```
The critical line is `undo`: this is what makes it backtracking rather than plain recursion. Without the undo, earlier choices would permanently pollute later branches.
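To see the template in action on something smaller than N-Queens, here is a minimal sketch that enumerates all subsets of a list. The function name `subsets` and the `path` list are illustrative choices, not from the original post; the CHOOSE / EXPLORE / UN-CHOOSE comments mark where each template step lands.

```python
def subsets(nums):
    results = []
    path = []  # the current partial solution (the "state")

    def backtrack(start):
        results.append(path[:])          # every partial state is itself a valid subset
        for i in range(start, len(nums)):
            path.append(nums[i])         # CHOOSE
            backtrack(i + 1)             # EXPLORE deeper with the choice applied
            path.pop()                   # UN-CHOOSE: restore state for the next branch

    backtrack(0)
    return results

print(subsets([1, 2, 3]))  # 8 subsets, from [] up to [1, 2, 3]
```

Note that `path.pop()` is the `undo` step: deleting it would leave elements stuck in `path` across branches and produce wrong answers.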
N-Queens: A Step-by-Step Walkthrough
Place N queens on an N×N chessboard so no two attack each other (no shared row, column, or diagonal).
4×4 Board:

| Step | Action | Board state | Valid? |
| --- | --- | --- | --- |
| 1 | Place Q at row 0, col 0 | Q . . . | ✅ |
| 2 | Try row 1, col 0 | Same column as Q | ❌ Prune |
| 3 | Try row 1, col 1 | Diagonal with Q | ❌ Prune |
| 4 | Try row 1, col 2 | Q . Q . | ✅ |
| 5 | Try row 2, col 0 | Diagonal conflict | ❌ Prune |
| 6 | Try row 2, col 1–3 | All blocked | ❌ Backtrack to step 4 |
| 7 | Undo row 1, col 2; try row 1, col 3 | Q . . Q | ✅ |
| ... | Continue | | |
The algorithm never reaches invalid states: it prunes at the moment a constraint is violated.
```python
def solve_n_queens(n):
    results = []
    board = [-1] * n  # board[row] = col of queen in that row

    def is_safe(board, row, col):
        for r in range(row):
            c = board[r]
            if c == col or abs(c - col) == row - r:  # column or diagonal clash
                return False
        return True

    def backtrack(row):
        if row == n:
            results.append(board[:])
            return
        for col in range(n):
            if is_safe(board, row, col):
                board[row] = col      # CHOOSE
                backtrack(row + 1)    # EXPLORE
                board[row] = -1       # UN-CHOOSE (undo)

    backtrack(0)
    return results
```
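A quick way to sanity-check the solver is to run it on small boards. The snippet below reproduces the solver from above (so it runs standalone) and prints the results; the 4×4 board has exactly two solutions, and the classic 8×8 board has 92.

```python
def solve_n_queens(n):
    results = []
    board = [-1] * n  # board[row] = col of queen in that row

    def is_safe(row, col):
        return all(board[r] != col and abs(board[r] - col) != row - r
                   for r in range(row))

    def backtrack(row):
        if row == n:
            results.append(board[:])
            return
        for col in range(n):
            if is_safe(row, col):
                board[row] = col
                backtrack(row + 1)
                board[row] = -1  # undo

    backtrack(0)
    return results

print(solve_n_queens(4))       # [[1, 3, 0, 2], [2, 0, 3, 1]]
print(len(solve_n_queens(8)))  # 92
```

Each solution is a list where index = row and value = column, so `[1, 3, 0, 2]` means queens at (0,1), (1,3), (2,0), (3,2).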
When to Use Backtracking

| Problem type | Backtracking applies? | Example |
| --- | --- | --- |
| Constraint satisfaction | ✅ Strong fit | Sudoku, N-Queens, crossword fill |
| Permutations / combinations | ✅ Strong fit | All subsets, all permutations |
| Path-finding with constraints | ✅ Strong fit | Hamiltonian path, word ladder |
| Shortest path | ❌ Use dynamic programming/BFS | Dijkstra, BFS |
| Count occurrences | ❌ Use DP | Coin change, staircase |
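For the permutations row of the table, the standard backtracking move is to swap an element into the current position, recurse, then swap it back. This is a sketch with illustrative names (`permutations` is not a function from the post); the swap-back line is the UN-CHOOSE step.

```python
def permutations(nums):
    results = []

    def backtrack(i):
        if i == len(nums):                         # all positions fixed
            results.append(nums[:])
            return
        for j in range(i, len(nums)):
            nums[i], nums[j] = nums[j], nums[i]    # CHOOSE: fix nums[j] at position i
            backtrack(i + 1)                       # EXPLORE the remaining positions
            nums[i], nums[j] = nums[j], nums[i]    # UN-CHOOSE: swap back to restore order

    backtrack(0)
    return results

print(len(permutations([1, 2, 3])))  # 6
```

Skipping the swap-back would scramble `nums` between branches, so some permutations would be produced twice and others never.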
Time Complexity and Pruning Importance
Without pruning, N-Queens has $O(N!)$ candidates. With pruning (column + diagonal checks), the effective branching factor drops dramatically:
- $N=8$: 8! = 40,320 raw nodes; with pruning ~1,965 nodes explored
- $N=12$: brute force = 479M candidates; pruned ≈ 320K nodes
Effective pruning is the difference between unusable and practical. The earlier you detect a constraint violation, the more of the search tree you skip.
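One way to see the effect concretely is to instrument the solver and count how many placements actually survive pruning. This is a sketch under the same column-plus-diagonal pruning as the solver above; `count_pruned_search` and the `stats` dictionary are names introduced here for illustration.

```python
def count_pruned_search(n):
    """Run the N-Queens search and count placements that survive pruning."""
    board = [-1] * n
    stats = {"nodes": 0, "solutions": 0}

    def is_safe(row, col):
        return all(board[r] != col and abs(board[r] - col) != row - r
                   for r in range(row))

    def backtrack(row):
        if row == n:
            stats["solutions"] += 1
            return
        for col in range(n):
            if is_safe(row, col):
                stats["nodes"] += 1   # this placement was not pruned
                board[row] = col
                backtrack(row + 1)
                board[row] = -1

    backtrack(0)
    return stats

stats = count_pruned_search(8)
print(stats["nodes"], "nodes expanded vs", 40320, "raw candidates")
```

Comparing `stats["nodes"]` against 8! = 40,320 shows how much of the tree the early constraint checks skip.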
Key Takeaways
- Backtracking = recursion + undo. The undo step is what allows the algorithm to explore multiple branches from the same state.
- Template: Choose → Explore → Un-Choose.
- Pruning makes backtracking practical: reject invalid states as early as possible.
- N-Queens is the canonical example: place by row, prune on column and diagonal conflict.
- Use backtracking for constraint satisfaction, combinations, and permutations; for shortest-path problems, use DP or BFS instead.
Test Your Understanding
- What is the difference between backtracking and depth-first search?
- In the N-Queens Python solution, what does `board[row] = -1` on the undo line accomplish?
- Why does early pruning dramatically reduce the number of nodes explored?
- You want all permutations of `[1, 2, 3]`. Which part of the backtracking template handles swapping and unswapping?
Written by
Abstract Algorithms
@abstractalgorithms