Algorithm Field Guide

Maze Generation + Solving Documentation

A practical reference for every algorithm in this visualizer: what it does, why it behaves that way, and when to use it.

Loopy generators intentionally add cycles, so some mazes provide multiple shortest routes between start and goal.

49 Generators
35 Solvers
84 Total Algorithms

Generator Algorithms

These construct the maze structure by carving passages between cells.

Generator · Research Core · dfs-backtracker

Recursive Backtracker (DFS)

Grows a maze by walking deeply until stuck, then backtracks.

Topology: Perfect

Time: O(V)
Space: O(V)

How It Works

  1. Start at one cell, mark it visited, and push it on a stack.
  2. Move to a random unvisited neighbor and carve the wall.
  3. If no unvisited neighbors remain, pop and continue from the previous cell.
  4. Stop when the stack becomes empty.
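
The four steps above can be sketched in a few lines of Python. This is a minimal illustration, not the visualizer's actual implementation; the function name `dfs_maze` and the edge-set maze representation are invented here for the example.

```python
import random

def dfs_maze(w, h, seed=0):
    """Recursive backtracker: carve a perfect maze on a w x h grid.

    Returns the set of carved passages as {frozenset({cell_a, cell_b})}."""
    rng = random.Random(seed)
    start = (0, 0)
    visited = {start}
    stack = [start]
    edges = set()
    while stack:
        x, y = stack[-1]
        # collect unvisited orthogonal neighbors of the current cell
        nbrs = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < w and 0 <= y + dy < h and (x + dx, y + dy) not in visited]
        if nbrs:
            nxt = rng.choice(nbrs)
            edges.add(frozenset({(x, y), nxt}))  # carve the wall between them
            visited.add(nxt)
            stack.append(nxt)
        else:
            stack.pop()  # dead end: backtrack to the previous cell
    return edges

maze = dfs_maze(8, 8)
# a perfect maze on V cells is a spanning tree: exactly V - 1 passages
assert len(maze) == 8 * 8 - 1
```

Because every cell is pushed and popped exactly once, the run is O(V) regardless of how winding the corridors become.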

Pros

  • Fast
  • Simple
  • Produces long winding corridors

Cons

  • Can create low branching
  • Less uniform than Wilson/Aldous-Broder

Best use: Fast visual generation with dramatic backtracking behavior.

Interesting fact: This is one of the most common textbook maze generators because it is easy to animate step-by-step.

Generator · Research Core · prim

Randomized Prim

Expands a frontier set by attaching random frontier cells.

Topology: Perfect

Time: O(V)
Space: O(V)

How It Works

  1. Start from one cell and mark adjacent cells as frontier.
  2. Pick a random frontier cell.
  3. Connect it to one random visited neighbor.
  4. Mark its unvisited neighbors as new frontier.
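
A minimal Python sketch of the frontier-cell loop above (illustrative only; `prim_maze` and the edge-set representation are assumptions of this example, not the visualizer's code):

```python
import random

def prim_maze(w, h, seed=0):
    """Frontier-cell randomized Prim: attach random frontier cells to the tree."""
    rng = random.Random(seed)

    def nbrs(c):
        x, y = c
        return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < w and 0 <= y + dy < h]

    start = (0, 0)
    visited = {start}
    frontier = set(nbrs(start))        # unvisited cells adjacent to the tree
    edges = set()
    while frontier:
        cell = rng.choice(sorted(frontier))  # pick a random frontier cell
        frontier.discard(cell)
        # connect it to one random already-visited neighbor
        parent = rng.choice([n for n in nbrs(cell) if n in visited])
        edges.add(frozenset({cell, parent}))
        visited.add(cell)
        # its unvisited neighbors become new frontier
        frontier.update(n for n in nbrs(cell) if n not in visited)
    return edges
```

Each cell enters the tree through exactly one parent edge, so the output is always a spanning tree with V − 1 passages.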

Pros

  • Good branching
  • Natural wavefront animation

Cons

  • Needs frontier bookkeeping
  • Can look noisy at high randomness

Best use: Balanced mazes with lots of local choices.

Interesting fact: It is inspired by Prim's minimum spanning tree method, but with randomized edge selection.

Generator · Research Core · prim-loopy

Prim (Loopy)

Prim-style frontier growth that intentionally opens extra visited-neighbor links.

Topology: Loopy

Time: O(V + E) expected
Space: O(V)

How It Works

  1. Pick random frontier cells and connect each to one visited parent, like standard Prim.
  2. After parent carve, sample additional visited neighbors.
  3. Open some extra links using a configurable loop-density probability.
  4. Continue until all cells are visited.

Pros

  • Simple loop control
  • Keeps Prim wavefront visual style

Cons

  • Higher loop density can reduce dead-end challenge
  • Not a perfect maze

Best use: Tunable loop-heavy mazes with local alternate routes.

Interesting fact: This variant keeps Prim's growth order but relaxes the acyclic constraint after each frontier attachment.

Generator · Research Core · prim-frontier-edges

Prim (Frontier Edges)

Prim-style generation using explicit frontier edge sampling.

Topology: Perfect

Time: O(E) expected
Space: O(E)

How It Works

  1. Start from one visited root cell.
  2. Push all edges from visited cells to unvisited neighbors into a frontier list.
  3. Pick a random frontier edge; if it connects to an unvisited cell, carve it.
  4. Add new frontier edges from the newly visited cell and repeat.

Pros

  • Classic Prim interpretation
  • Good branching and texture

Cons

  • Frontier edge list can grow large
  • Extra filtering for stale edges

Best use: When you want Prim behavior driven directly by edge randomness.

Interesting fact: Compared with frontier-cell Prim, edge-based Prim can emphasize different local branching statistics.

Generator · Research Core · kruskal

Randomized Kruskal

Carves walls by joining disjoint sets while avoiding cycles.

Topology: Perfect

Time: O(E α(V))
Space: O(V)

How It Works

  1. Treat each cell as its own set.
  2. Shuffle candidate walls (edges) between neighboring cells.
  3. If an edge connects two different sets, carve it and union the sets.
  4. Stop when all cells are in one connected set.
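
The union-find core of these steps can be sketched as follows (an illustrative example with invented names, assuming an edge-set maze representation):

```python
import random

def kruskal_maze(w, h, seed=0):
    """Randomized Kruskal: shuffle all walls, carve only set-joining edges."""
    rng = random.Random(seed)
    # each cell starts as its own disjoint set
    parent = {(x, y): (x, y) for x in range(w) for y in range(h)}

    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]  # path compression (halving)
            c = parent[c]
        return c

    # candidate walls between orthogonal neighbors, in random order
    walls = [frozenset({(x, y), (x + 1, y)}) for x in range(w - 1) for y in range(h)]
    walls += [frozenset({(x, y), (x, y + 1)}) for x in range(w) for y in range(h - 1)]
    rng.shuffle(walls)

    edges = set()
    for wall in walls:
        a, b = tuple(wall)
        ra, rb = find(a), find(b)
        if ra != rb:          # different sets: carving cannot form a cycle
            parent[ra] = rb   # union the two sets
            edges.add(wall)
    return edges
```

Carving stops implicitly: once all cells share one root, every remaining wall fails the `ra != rb` test.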

Pros

  • Strong theoretical guarantees
  • Uniform spanning-tree style behavior

Cons

  • Needs union-find
  • Less intuitive than DFS/Prim

Best use: When you want explicit disjoint-set control and predictable correctness.

Interesting fact: Union-find path compression makes practical performance very fast even on large grids.

Generator · Research Core · kruskal-loopy

Kruskal (Loopy)

Runs classic Kruskal first, then reopens selected rejected edges to add controlled cycles.

Topology: Loopy

Time: O(E α(V)) + O(extraEdges)
Space: O(V + E)

How It Works

  1. Shuffle all grid edges and run disjoint-set Kruskal to build a spanning tree.
  2. Track edges rejected during tree growth because they would form cycles.
  3. Compute loop budget from loop-density target.
  4. Reopen rejected edges until the budget is exhausted.

Pros

  • Predictable global loop density
  • Clear two-phase behavior

Cons

  • Needs storing rejected edges
  • Not a perfect maze

Best use: Evenly distributed loops with a deterministic cycle budget.

Interesting fact: Because edges are globally shuffled first, extra cycles are spread broadly rather than clustered in one region.

Generator · Research Core · binary-tree

Binary Tree

Each cell carves toward one of two preferred directions.

Topology: Perfect

Time: O(V)
Space: O(1)

How It Works

  1. Visit cells in scan order.
  2. For each cell, carve either north or west when available.
  3. Edge cells carve only when the direction exists.
  4. Continue until all cells are processed.
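
Because the rule is purely local, the whole generator fits in a few lines. A minimal sketch (names and representation invented for this example):

```python
import random

def binary_tree_maze(w, h, seed=0):
    """Binary Tree: each cell carves north or west when that neighbor exists."""
    rng = random.Random(seed)
    edges = set()
    for y in range(h):
        for x in range(w):           # scan order
            options = []
            if y > 0:
                options.append((x, y - 1))  # north neighbor exists
            if x > 0:
                options.append((x - 1, y))  # west neighbor exists
            if options:              # (0, 0) has neither and carves nothing
                edges.add(frozenset({(x, y), rng.choice(options)}))
    return edges
```

Every cell except the origin contributes exactly one passage, which is why the result is always a perfect maze, and why the top row and left column degenerate into straight corridors — the visible directional bias the Cons mention.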

Pros

  • Extremely fast
  • Tiny memory footprint

Cons

  • Directional bias
  • Predictable patterns

Best use: High-speed generation and teaching directional bias effects.

Interesting fact: Its directional rule creates obvious diagonal flow patterns that are easy to recognize visually.

Generator · Research Core · sidewinder

Sidewinder

Builds horizontal runs and occasionally links them upward.

Topology: Perfect

Time: O(V)
Space: O(1)

How It Works

  1. Move across each row while extending a run to the east.
  2. Randomly decide to close the run.
  3. When closing, carve one north connection from a random cell in the run.
  4. Repeat per row.
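
A compact Python sketch of the run-and-close logic above (illustrative only; the 0.5 close probability is an assumption of this example):

```python
import random

def sidewinder_maze(w, h, seed=0):
    """Sidewinder: extend eastward runs, close each run with one north carve."""
    rng = random.Random(seed)
    edges = set()
    for y in range(h):
        run = []
        for x in range(w):
            run.append((x, y))
            at_east_edge = (x == w - 1)
            # the top row can never carve north, so it only closes at the edge
            close = at_east_edge or (y > 0 and rng.random() < 0.5)
            if close:
                if y > 0:
                    cx, cy = rng.choice(run)  # random cell in the run
                    edges.add(frozenset({(cx, cy), (cx, cy - 1)}))  # carve north
                run = []
            else:
                edges.add(frozenset({(x, y), (x + 1, y)}))  # extend run east
    return edges
```

Each run of k cells yields k − 1 east passages plus one north link, so every row below the top contributes exactly w passages — the row-wise bias is baked into the counting.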

Pros

  • Very fast
  • Distinct corridor aesthetics

Cons

  • Visible row bias
  • Less organic appearance

Best use: Fast generation with intentional horizontal character.

Interesting fact: It is often paired with Binary Tree to show how tiny local rules create global maze style.

Generator · Research Core · aldous-broder

Aldous-Broder

Uses a pure random walk and carves only on first visits.

Topology: Perfect

Time: Expected O(V²)
Space: O(V)

How It Works

  1. Start from a random cell.
  2. Walk to a random neighbor each step.
  3. If that neighbor is unvisited, carve the edge and mark it visited.
  4. Stop after all cells are visited.
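
The walk is short enough to show in full. A minimal sketch (function name and representation invented here):

```python
import random

def aldous_broder_maze(w, h, seed=0):
    """Aldous-Broder: unbiased random walk, carving only on first visits."""
    rng = random.Random(seed)
    cur = (rng.randrange(w), rng.randrange(h))
    visited = {cur}
    edges = set()
    while len(visited) < w * h:
        x, y = cur
        # step to a uniformly random neighbor, visited or not
        nxt = rng.choice([(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                          if 0 <= x + dx < w and 0 <= y + dy < h])
        if nxt not in visited:
            edges.add(frozenset({cur, nxt}))  # carve only on first visit
            visited.add(nxt)
        cur = nxt
    return edges
```

Note that most iterations carve nothing — the walk keeps stepping through already-visited cells, which is exactly the "many non-carving steps" cost listed above.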

Pros

  • Uniform spanning tree
  • Conceptually minimal

Cons

  • Can be very slow
  • Many non-carving steps

Best use: Demonstrating random-walk behavior and uniform spanning tree theory.

Interesting fact: Despite being simple, it converges slowly because random walks revisit cells many times.

Generator · Research Core · hunt-and-kill

Hunt-and-Kill

Alternates random walking with linear hunts for a new branch point.

Topology: Perfect

Time: O(V²) worst-case
Space: O(V)

How It Works

  1. Walk randomly until reaching a dead end.
  2. Scan for an unvisited cell adjacent to visited territory.
  3. Connect there and resume random walking.
  4. Finish when the scan finds no valid unvisited cell.

Pros

  • Interesting phase changes
  • Good organic structure

Cons

  • Scanning phase can be expensive
  • Slightly more bookkeeping

Best use: Visual demos where algorithm phases should feel very different.

Interesting fact: The switch between hunt and kill phases is very visible and makes this algorithm great for education.

Generator · Research Core · growing-tree

Growing Tree

Generalized family that blends DFS-like and Prim-like behavior via selection policy.

Topology: Perfect

Time: O(V)
Space: O(V)

How It Works

  1. Keep a list of active cells.
  2. Select one active cell using a strategy (newest, random, or mixed).
  3. Carve to one unvisited neighbor if possible; otherwise remove the active cell.
  4. Continue until no active cells remain.
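
The selection policy is the single knob. A minimal sketch where a `newest_bias` parameter (invented for this example) blends the "newest" and "random" strategies:

```python
import random

def growing_tree_maze(w, h, newest_bias=0.5, seed=0):
    """Growing Tree: newest_bias=1.0 behaves like DFS, 0.0 like random-cell Prim."""
    rng = random.Random(seed)
    start = (0, 0)
    active = [start]
    visited = {start}
    edges = set()
    while active:
        # selection policy: newest cell vs. uniformly random active cell
        if rng.random() < newest_bias:
            i = len(active) - 1
        else:
            i = rng.randrange(len(active))
        x, y = active[i]
        nbrs = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < w and 0 <= y + dy < h and (x + dx, y + dy) not in visited]
        if nbrs:
            nxt = rng.choice(nbrs)
            edges.add(frozenset({(x, y), nxt}))
            visited.add(nxt)
            active.append(nxt)
        else:
            active.pop(i)  # exhausted: remove from the active list
    return edges
```

Sweeping `newest_bias` from 1.0 to 0.0 moves the output smoothly from long DFS-style corridors toward Prim-style branching, without changing anything else.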

Pros

  • Highly tunable style
  • Good compromise between long corridors and branching

Cons

  • Selection policy impacts style heavily
  • Less canonical than pure DFS/Prim

Best use: When you want one algorithm knob to dial corridor-vs-branch behavior.

Interesting fact: Using newest-only selection turns it into the recursive backtracker; random-only selection makes it behave much like Prim.

Generator · Alias · bfs-tree

Randomized BFS Tree (Growing Tree)

Builds a spanning tree layer-by-layer from a queue frontier.

Alias of: Growing Tree (same runtime behavior).

Topology: Perfect

Time: O(V + E)
Space: O(V)

How It Works

  1. Choose a start cell and enqueue it.
  2. Pop cells in BFS order and inspect neighbors.
  3. For each unvisited neighbor, carve to it and enqueue it.
  4. Continue until queue is exhausted.

Pros

  • Very stable progression
  • Naturally broad, wave-like growth

Cons

  • Can look less organic than DFS variants
  • Needs queue/frontier bookkeeping

Best use: Clear breadth-first generation visuals and predictable expansion fronts.

Interesting fact: Unlike Prim, this variant commits tree parents by discovery order, creating BFS-depth layers.

Generator · Research Core · eller

Eller

Builds the maze one row at a time using dynamic disjoint sets.

Topology: Perfect

Time: O(V) expected
Space: O(width)

How It Works

  1. Assign set ids across the current row.
  2. Randomly join adjacent cells with different set ids.
  3. For each set, carve at least one vertical connection downward.
  4. On the final row, force all remaining disjoint sets to merge.

Pros

  • Streaming-friendly
  • Memory-efficient for tall mazes

Cons

  • Row-wise bias can be visible
  • Implementation is more stateful than DFS/Prim

Best use: Generating large mazes where memory proportional to width is preferred.

Interesting fact: Eller is one of the few classic perfect-maze algorithms that does not require full-grid state history.

Generator · Research Core · wilson

Wilson

Creates a uniform spanning tree using loop-erased random walks.

Topology: Perfect

Time: Expected polynomial; often higher than DFS/Prim
Space: O(V)

How It Works

  1. Start with one cell in the tree.
  2. Pick an unvisited cell and perform a random walk.
  3. Erase loops in the walk as they form (loop-erased random walk).
  4. When the walk hits the existing tree, carve the whole walk into the tree.
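
Loop erasure has a neat implicit form: record each cell's latest exit direction, and overwriting an old exit erases the loop automatically. A minimal sketch (names invented for this example):

```python
import random

def wilson_maze(w, h, seed=0):
    """Wilson: loop-erased random walks yield a uniform spanning tree."""
    rng = random.Random(seed)
    cells = [(x, y) for x in range(w) for y in range(h)]
    in_tree = {cells[0]}     # seed the tree with one cell
    edges = set()
    for start in cells:
        if start in in_tree:
            continue
        # random walk recording each visited cell's latest exit
        exit_of = {}
        cur = start
        while cur not in in_tree:
            x, y = cur
            nxt = rng.choice([(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                              if 0 <= x + dx < w and 0 <= y + dy < h])
            exit_of[cur] = nxt  # overwriting an older exit erases the loop
            cur = nxt
        # replay the loop-erased path and carve it into the tree
        cur = start
        while cur not in in_tree:
            nxt = exit_of[cur]
            edges.add(frozenset({cur, nxt}))
            in_tree.add(cur)
            cur = nxt
    return edges
```

The replay pass follows only the final exits, so any cycles the walk made along the way simply never get carved.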

Pros

  • Uniform spanning tree output
  • Strong theoretical guarantees

Cons

  • Can be slower
  • Walk-loop bookkeeping is heavier

Best use: When statistical fairness of generated spanning trees matters.

Interesting fact: Wilson and Aldous-Broder both generate uniform spanning trees, but Wilson is usually much faster in practice.

Generator · Research Core · Hybrid · houston

Houston (AB + Wilson)

Hybrid that starts with Aldous-Broder then switches to Wilson for speed.

Topology: Perfect

Time: Expected faster than pure AB, near Wilson in late stage
Space: O(V)

How It Works

  1. Run Aldous-Broder random walk for an initial visited fraction.
  2. Use the visited cells as an initial tree.
  3. Switch to Wilson loop-erased walks for the remaining unvisited cells.
  4. Finish when every cell is merged into the tree.

Pros

  • Practical speed boost
  • Retains random-walk flavor early on

Cons

  • More implementation complexity
  • Hybrid behavior is less canonical

Best use: Balancing visual random-walk behavior with practical completion speed.

Interesting fact: Houston is a known optimization strategy specifically meant to mitigate Aldous-Broder's long tail.

Generator · Research Core · recursive-division

Recursive Division

Creates a maze by adding split walls with one opening per division.

Topology: Perfect

Time: O(V log V) typical
Space: O(log V) recursion + patch batches

How It Works

  1. Start from an open field (internal walls removed).
  2. Pick horizontal or vertical orientation per region.
  3. Add a dividing wall and keep one random gap.
  4. Recurse into both child regions until indivisible.
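
Because this is a wall-adder, a sketch can start from the fully open edge set and discard passages along each split. An illustrative example (names and representation invented here):

```python
import random

def recursive_division(w, h, seed=0):
    """Recursive Division: start fully open, add split walls with one gap each.

    Returns the set of passages remaining after all divisions."""
    rng = random.Random(seed)
    # begin with every internal wall open (all orthogonal passages present)
    edges = {frozenset({(x, y), (x + 1, y)}) for x in range(w - 1) for y in range(h)}
    edges |= {frozenset({(x, y), (x, y + 1)}) for x in range(w) for y in range(h - 1)}

    def divide(x0, y0, x1, y1):      # region covers cells [x0, x1) x [y0, y1)
        rw, rh = x1 - x0, y1 - y0
        if rw < 2 and rh < 2:
            return                   # single cell: indivisible
        horizontal = rh > rw or (rh == rw and rng.random() < 0.5)
        if horizontal:
            wy = rng.randrange(y0 + 1, y1)   # wall between rows wy-1 and wy
            gap = rng.randrange(x0, x1)      # one doorway stays open
            for x in range(x0, x1):
                if x != gap:
                    edges.discard(frozenset({(x, wy - 1), (x, wy)}))
            divide(x0, y0, x1, wy)
            divide(x0, wy, x1, y1)
        else:
            wx = rng.randrange(x0 + 1, x1)
            gap = rng.randrange(y0, y1)
            for y in range(y0, y1):
                if y != gap:
                    edges.discard(frozenset({(wx - 1, y), (wx, y)}))
            divide(x0, y0, wx, y1)
            divide(wx, y0, x1, y1)

    divide(0, 0, w, h)
    return edges
```

Each split leaves exactly one crossing between its two child regions, so by induction every region of n cells ends with n − 1 internal passages — a perfect maze.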

Pros

  • Distinct architectural look
  • Fast wall-adding style

Cons

  • Can look structured
  • Less organic than random walkers

Best use: When you want blocky, room-like visual structure.

Interesting fact: This is the canonical wall-adder among classic maze generators.

Generator · Research Core · recursive-division-loopy

Recursive Division (Multi-Gap)

Recursive-division variant that keeps multiple doorway gaps per divider based on loop density.

Topology: Loopy

Time: O(V log V) typical
Space: O(log V) recursion + patch batches

How It Works

  1. Start from a fully open field.
  2. For each split wall, keep at least one doorway gap.
  3. Add extra doorway gaps proportional to configured loop density.
  4. Recurse through child regions until no more splits are possible.

Pros

  • Architectural style with route multiplicity
  • Direct loop-density control

Cons

  • Can become too open at high density
  • Not a perfect maze

Best use: Room-like layouts where multiple corridor choices are desired.

Interesting fact: Adding just one extra doorway per divider can quickly increase alternate shortest routes across the layout.

Generator · Research Core · bsp

Binary Space Partitioning (BSP)

Recursively partitions space into regions and connects region trees with bridge edges.

Topology: Perfect

Time: O(V log V) typical recursive partitioning
Space: O(log V) recursion + O(V) edge plan

How It Works

  1. Recursively split a rectangular region horizontally or vertically.
  2. Generate each child region subtree recursively.
  3. Choose one connector crossing the split boundary.
  4. Carve that connector so both child trees become one component.

Pros

  • Structured dungeon-like topology
  • Deterministic partition hierarchy

Cons

  • Can look too geometric
  • Less organic than walk-based generators

Best use: Room/partition-flavored maze layouts and dungeon prototyping.

Interesting fact: BSP is commonly used in roguelike map generation where rooms and corridors are assembled from a split tree.

Generator · Research Core · blobby-recursive-subdivision

Blobby Recursive Subdivision

Recursive subdivision variant that uses organic connected blob partitions instead of strict straight cuts.

Topology: Perfect

Time: O(V log V) typical, with local fallback O(V)
Space: O(V)

How It Works

  1. Select two seeds inside a connected region.
  2. Grow two blob partitions from those seeds until the region is assigned.
  3. Recurse into each blob partition and build local subtrees.
  4. Carve one connector between both blob partitions to preserve tree connectivity.

Pros

  • More organic than strict BSP/recursive-division
  • Retains recursive controllability

Cons

  • More complex split validation
  • Harder to reason about visually

Best use: Hybrid aesthetic between structured partitioning and cave-like flow.

Interesting fact: Blob-first partitioning reduces long ruler-straight wall artifacts common in axis-aligned subdivision.

Generator · Alias · prim-true

Prim (True Frontier Edges - Random Prim)

Alias of Prim (Frontier Edges), kept for naming compatibility.

Alias of: Prim (Frontier Edges) (same runtime behavior).

Topology: Perfect

Time: O(E) expected
Space: O(E)

How It Works

  1. Delegate generation to Prim (Frontier Edges).
  2. Sample random frontier edges from visited to unvisited cells.
  3. Carve accepted edges and expand the frontier.
  4. Finish when no valid frontier edges remain.

Pros

  • Familiar naming for edge-frontier Prim users
  • Exactly matches canonical runtime behavior

Cons

  • Not a distinct implementation
  • Behavior changes only when Prim (Frontier Edges) changes

Best use: Users looking for edge-frontier Prim under a more explicit label.

Interesting fact: In this codebase, prim-true is a direct alias of prim-frontier-edges.

Generator · Alias · prim-simplified

Prim (Simplified - Randomized Prim)

Alias of Randomized Prim (frontier-cell variant) with alternate naming.

Alias of: Randomized Prim (same runtime behavior).

Topology: Perfect

Time: O(V) expected
Space: O(V)

How It Works

  1. Delegate generation to Randomized Prim.
  2. Track frontier cells near visited territory.
  3. Connect each selected frontier cell to visited space.
  4. Repeat until the frontier is exhausted.

Pros

  • Alternative naming for the same implementation
  • Exactly matches Randomized Prim behavior

Cons

  • Not a distinct algorithm
  • Can be confused with separate Prim variants

Best use: Users expecting a simplified-Prim label without changing runtime behavior.

Interesting fact: In this codebase, prim-simplified is a direct alias of prim.

Generator · Alias · prim-modified

Prim (Modified) (Growing Tree)

Random-active-cell variant close to Growing Tree random policy.

Alias of: Growing Tree (same runtime behavior).

Topology: Perfect

Time: O(V)
Space: O(V)

How It Works

  1. Keep active in-maze cells.
  2. Select one active cell uniformly at random.
  3. Carve to a random unvisited neighbor if possible.
  4. Drop active cells that have no remaining unvisited neighbors.

Pros

  • Flexible texture
  • Simple random policy

Cons

  • Style depends on active-list evolution
  • Not uniform

Best use: Middle ground between Prim-like and DFS-like growth.

Interesting fact: This policy is one endpoint of the broader Growing Tree algorithm family.

Generator · Advanced · growing-forest

Growing Forest

Grows multiple trees in parallel, then merges them into one maze.

Topology: Perfect

Time: O(V + E α(V))
Space: O(V)

How It Works

  1. Seed several trees across the grid.
  2. Grow each tree into nearby unvisited cells.
  3. When growth saturates, collect boundary edges between trees.
  4. Union tree components with boundary carves until one tree remains.

Pros

  • Distinct multi-origin growth
  • Good global variety

Cons

  • More bookkeeping
  • Harder to reason about than single-tree growth

Best use: Visualizations emphasizing concurrent growth fronts.

Interesting fact: Forest merge phase is effectively a component-union process similar to Kruskal.

Generator · Advanced · unicursal

Unicursal

Builds a single non-branching path through the full grid.

Topology: Perfect

Time: O(V)
Space: O(V)

How It Works

  1. Create a serpentine Hamiltonian-style ordering of cells.
  2. Carve between consecutive cells in that ordering.
  3. Continue until all cells are connected in one route.

Pros

  • No branch ambiguity
  • Very readable path structure

Cons

  • Not a branching maze feel
  • Low replay diversity

Best use: Labyrinth-style experiences and path-only demonstrations.

Interesting fact: Unicursal structures have exactly one route and no junction choices.

Generator · Research Core · fractal-tessellation

Fractal Tessellation

Recursively forms quadrant trees and stitches them with three bridges.

Topology: Perfect

Time: O(V log V) typical
Space: O(V)

How It Works

  1. Split region into quadrants recursively.
  2. Generate subtree edges inside each quadrant.
  3. Sample cross-boundary connector candidates.
  4. Select three connectors that unify quadrants without cycles.

Pros

  • Recursive self-similar texture
  • Strong visual identity

Cons

  • Can look regular
  • More implementation complexity

Best use: Fractal-style maze aesthetics and recursive demos.

Interesting fact: Using exactly three inter-quadrant links preserves tree structure at each merge.

Generator · Advanced · Hybrid · cellular-automata

Cellular Automata (Cave-Biased)

Uses a smoothed CA mask to bias spanning-tree carving decisions.

Topology: Perfect

Time: O(V) generation + O(V) mask smoothing
Space: O(V)

How It Works

  1. Initialize random binary cave mask.
  2. Run CA smoothing iterations over the mask.
  3. Grow a spanning tree while preferring neighbors with similar cave state.
  4. Backtrack when stuck until all cells are carved.

Pros

  • Organic local texture bias
  • Still deterministic and step-friendly

Cons

  • Not pure cave generation
  • Requires extra precomputation

Best use: Hybrid cave-like style without losing spanning-tree guarantees.

Interesting fact: CA-inspired bias lets one algorithm bridge classic mazes and dungeon-like visuals.

Generator · Research Core · Hybrid · maze-ca

Maze CA (B3/S12345)

Applies Maze-rule cellular automata as a structural bias, then carves a deterministic spanning tree.

Topology: Perfect

Time: O(V) carving + O(V) CA iterations
Space: O(V)

How It Works

  1. Initialize a random binary field.
  2. Evolve the field with Maze rule B3/S12345 for fixed iterations.
  3. Use resulting states to bias DFS-style carving toward same-state neighbors.
  4. Backtrack when needed until a full spanning tree is carved.

Pros

  • Research-rule inspired texture
  • Deterministic seeded behavior

Cons

  • Hybridized (not pure CA corridor extraction)
  • Mask quality depends on iteration settings

Best use: Including canonical CA rule families in a step-wise perfect-maze runtime.

Interesting fact: The classical Maze CA is usually run as full-grid evolution; here it is adapted as a guidance field for tree carving.

Generator · Research Core · Hybrid · mazectric-ca

Mazectric CA (B3/S1234)

Mazectric-rule variant with slightly stricter survival behavior, used as a carving bias field.

Topology: Perfect

Time: O(V) carving + O(V) CA iterations
Space: O(V)

How It Works

  1. Initialize a random binary field.
  2. Evolve with Mazectric rule B3/S1234 for fixed iterations.
  3. Bias DFS-style carving to neighbors with matching evolved state.
  4. Complete by backtracking until every cell is visited.

Pros

  • Distinct from Maze-rule mask behavior
  • Deterministic and animation-friendly

Cons

  • Still a hybrid adaptation
  • Can appear similar to Maze CA on small grids

Best use: Comparative demos between Maze and Mazectric cellular rule families.

Interesting fact: Mazectric typically favors straighter structures than Maze in unconstrained CA outputs.

Generator · Research Core · Hybrid · braid

Braid (Dead-End Reduction DFS)

Builds a perfect maze first, then removes dead ends by opening selected extra connections.

Topology: Loopy

Time: O(V) after base generation
Space: O(V)

How It Works

  1. Generate a base perfect maze (spanning tree).
  2. Collect dead-end cells and process them in shuffled order.
  3. For each remaining dead end, open one closed wall to a nearby passage.
  4. Stop once dead-end pass is exhausted or target reduction is reached.
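
The post-process pass can be sketched independently of the base generator. In this illustrative example (names invented here), the base maze is a simple "comb" spanning tree so the block stays self-contained; in practice any perfect maze would be braided the same way:

```python
import random

def braid(w, h, edges, reduction=1.0, seed=0):
    """Dead-end reduction: open one extra wall at each remaining dead end.

    `reduction` is the fraction of dead ends to braid (1.0 removes them all)."""
    rng = random.Random(seed)

    def nbrs(c):
        x, y = c
        return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < w and 0 <= y + dy < h]

    def degree(c):
        return sum(frozenset({c, n}) in edges for n in nbrs(c))

    dead_ends = [(x, y) for x in range(w) for y in range(h) if degree((x, y)) == 1]
    rng.shuffle(dead_ends)
    for cell in dead_ends:
        if degree(cell) != 1 or rng.random() > reduction:
            continue  # already braided via a neighbor, or outside the budget
        closed = [n for n in nbrs(cell) if frozenset({cell, n}) not in edges]
        if closed:
            edges.add(frozenset({cell, rng.choice(closed)}))  # open one wall
    return edges

# base perfect maze: a comb (spine down column 0, teeth along each row)
w, h = 5, 4
comb = {frozenset({(0, y), (0, y + 1)}) for y in range(h - 1)}
comb |= {frozenset({(x, y), (x + 1, y)}) for y in range(h) for x in range(w - 1)}
braided = braid(w, h, set(comb))
```

Every opened wall adds exactly one cycle to the former spanning tree, which is why wall-follower solvers stop being reliable on the result.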

Pros

  • Eliminates many cul-de-sacs
  • Creates harder loop-rich navigation

Cons

  • Not a perfect maze
  • Wall-follower strategies become unreliable

Best use: Loop-heavy mazes and solver robustness comparisons.

Interesting fact: Braiding is commonly treated as a post-process transform rather than a standalone carve algorithm.

Generator · Research Core · Hybrid · weave-growing-tree

Weave Growing Tree

Growing-tree base maze with over/under crossings introduced as non-junction weave tunnels.

Topology: Weave

Time: O(V) base + O(V) crossing scan
Space: O(V)

How It Works

  1. Generate a base tree maze using growing-tree policy.
  2. Scan interior straight-corridor cells for weave eligibility.
  3. Mark crossing orientation at selected cells.
  4. Create tunnel links between opposite neighbors to model the underpass channel.

Pros

  • Pseudo-3D over/under aesthetics
  • Crossings avoid 4-way junction ambiguity

Cons

  • More complex rendering/traversal model
  • Some local solvers are incompatible

Best use: Weave puzzle styles and topology-aware solver comparisons.

Interesting fact: Weave mazes are often represented by layered traversal states even when drawn on a 2D grid.

Generator · Alias · vortex

Vortex Maze (DFS Backtracker)

Swirl-biased depth-first carving guided by local vortex centers.

Alias of: Recursive Backtracker (DFS) (same runtime behavior).

Topology: Perfect

Time: O(V)
Space: O(V)

How It Works

  1. Sample several vortex centers across the grid.
  2. Run a DFS-style carve with backtracking stack.
  3. Score candidate neighbors by tangent direction around nearest center.
  4. Prefer moves that continue local spiral flow while preserving spanning-tree rules.

Pros

  • Distinct spiral visual texture
  • Deterministic with seeded randomness

Cons

  • Heuristic-biased look may reduce uniformity
  • More tuning-sensitive than vanilla DFS

Best use: High-obfuscation or artistic maze styles with curved-flow perception.

Interesting fact: Vortex-style constructions are designed to increase perceptual difficulty by packing locally parallel turning structures.

Generator · Advanced · origin-shift

Origin Shift

Rewires a rooted spanning tree by repeatedly shifting the root.

Topology: Perfect

Time: O(V + S), where S is shift count
Space: O(V)

How It Works

  1. Start from an initial rooted spanning tree.
  2. Pick a neighbor of the current root.
  3. Detach that neighbor from its old parent edge.
  4. Connect old root to the neighbor and promote neighbor as new root.

Pros

  • Unique dynamic rewiring animation
  • Keeps tree connectivity throughout

Cons

  • Less intuitive than carve-from-walls methods
  • Requires parent-edge tracking

Best use: Modern/experimental generator comparisons.

Interesting fact: Each shift performs a local edge swap while preserving a valid spanning tree.

Generator · Advanced · reverse-delete

Reverse-Delete

Begins fully open, then adds walls back only when connectivity is preserved.

Topology: Perfect

Time: O(E · (V + E)) with bridge checks
Space: O(V)

How It Works

  1. Open all internal walls to start from a fully connected grid graph.
  2. Process candidate edges in randomized order.
  3. For each edge, test whether endpoints remain connected without that edge.
  4. If an alternate route exists, remove the edge by restoring that wall.

Pros

  • Strong correctness intuition
  • Distinct wall-adding visual style

Cons

  • More expensive than carve-forward methods
  • Needs repeated connectivity checks

Best use: Educational comparisons between edge-removal and edge-addition paradigms.

Interesting fact: Reverse-Delete is the conceptual mirror image of Kruskal: it removes non-bridge edges instead of adding acyclic edges.

Generator · Advanced · boruvka

Randomized Boruvka

Builds a maze in rounds by merging components through cheapest outgoing edges.

Topology: Perfect

Time: O(E log V) expected rounds with union-find
Space: O(V + E)

How It Works

  1. Treat each cell as its own component.
  2. Assign randomized edge priorities and find each component's best outgoing edge.
  3. Carve all selected non-cycling edges in one round.
  4. Repeat until a single connected component remains.

Pros

  • Round-based growth is visually distinctive
  • Efficient component convergence

Cons

  • Heavier bookkeeping than DFS/Prim
  • Less common in maze tooling

Best use: Comparing MST families and parallel-style component merging behavior.

Interesting fact: Boruvka predates Kruskal and Prim and naturally lends itself to parallel implementations.

Generator · Advanced · resonant-phase-lock

Resonant Phase-Lock (Noise-Weighted Growing Tree) (Codex)

Builds a spanning tree by choosing frontier cells using a self-updating resonance field derived from synthetic wave interference.

Invented by: Codex

Topology: Perfect

Time: O(V²) with full frontier rescoring per step
Space: O(V)

How It Works

  1. Generate a bounded interference phase field from multiple radial + directional wave emitters.
  2. Seed one root cell, then maintain a classic visited/unvisited frontier boundary.
  3. For each frontier candidate, score the best visited parent by phase alignment and local field continuity.
  4. Choose the globally best frontier candidate, carve that parent edge, and assign the new cell a blended phase.
  5. Emit a local resonance pulse that re-weights nearby frontier cells before the next pick.
  6. Repeat until all cells join one connected tree.

Pros

  • Creates coherent macro-structures without strict geometric partitioning
  • Dynamic pulse feedback gives noticeably different flow than static-weight Prim variants
  • Deterministic and tree-safe under seeded execution

Cons

  • More arithmetic-heavy than baseline DFS/Prim
  • Frontier rescoring is slower than O(V)-style generators

Best use: Experimental maze aesthetics where you want coherent wave-like structure but still need a perfect planar maze.

Interesting fact: This project-specific algorithm combines interference synthesis and phase-entrainment feedback inside a strict spanning-tree frontier loop. Invented in this project by Codex.

Generator · Research Core · erosion

Erosion (Hydraulic) (Claude)

Simulates water flowing downhill across a heightmap; erosion feedback creates dendritic river-network branching.

Invented by: Claude

Topology: Perfect

Time: O(V²) with linear frontier scan
Space: O(V)

How It Works

  1. Generate a smooth heightmap from random control points using inverse-distance weighting.
  2. The lowest cell becomes the river outlet (tree root).
  3. Each step pops the lowest-elevation frontier cell (water flows downhill first).
  4. Connect it to the tree-neighbor with steepest descent.
  5. Erode: lower heights of unvisited neighbors, making cells near existing rivers more attractive.
  6. The erosion feedback creates self-reinforcing river capture, producing hierarchical dendritic patterns.

Pros

  • Produces unique dendritic (river-network) branching patterns
  • Self-modifying priorities create hierarchical passage structure
  • Visually distinctive — unlike any other maze generator

Cons

  • O(V²) due to linear frontier scan
  • Pattern depends on heightmap quality

Best use: Generating mazes with natural, river-like branching where main corridors collect tributaries.

Interesting fact: This is a novel algorithm that combines terrain hydrology with spanning-tree construction. The erosion feedback mechanism — where carving a passage lowers nearby terrain — has no precedent in published maze generation literature. Invented in this project by Claude.

Generator · Alias · quantum-seismogenesis

Quantum Seismogenesis (Multi-Source Prim's) (Gemini)

Accumulates synthetic stress and propagates fracture tips that carve only when they connect disjoint components.

Invented by: Gemini

Alias of: Randomized Prim (same runtime behavior).

Topology: Perfect

Time: O(V²) expected due to repeated stress diffusion and frontier rescoring
Space: O(V)

How It Works

  1. Track per-cell stress and union-find connectivity across all cells.
  2. When no fracture is active, add random stress pulses to random cells until one ignites.
  3. Process one active fracture tip: rank neighbors by stress and attempt to carve the first non-cycling edge.
  4. On a successful carve, transfer stress toward the new tip and keep propagating the fracture front.
  5. If a tip cannot carve without creating a cycle, dissipate it and wait for the next ignition.

Pros

  • Produces sharp branching phases with clear fracture-wave behavior
  • Union-find keeps output cycle-safe (perfect maze)
  • Distinct visual rhythm versus standard frontier methods

Cons

  • More stateful and arithmetic-heavy than classic generators
  • Performance can lag on very large grids

Best use: Experimental generation where you want tectonic crack-like growth rather than smooth expansion fronts.

Interesting fact: This project-specific generator blends stress-threshold ignition with spanning-tree constraints, so fractures look chaotic while still guaranteeing connectivity without loops. Invented in this project by Gemini.

Generator · Alias · mycelial-anastomosis

Mycelial Anastomosis (Multi-Source Kruskal's) (Gemini)

Simulates fungal hyphae that grow from multiple spores, branch, and occasionally merge separate growth networks.

Invented by: Gemini

Alias of: Randomized Kruskal (same runtime behavior).

Topology: Perfect

Time: O(V) expected with near-constant-time union-find merges
Space: O(V)

How It Works

  1. Seed multiple starting spores and treat each as an active growth tip.
  2. Pick a random active tip and shuffle its neighbors to choose a growth direction.
  3. If an unvisited neighbor is found, carve into it and continue from the new tip (sometimes branch).
  4. If a visited neighbor belongs to a different union-find component, carve an anastomosis merge edge.
  5. Retire dead-end tips and spawn fallback growth only when all tips die before completion.

Pros

  • Multi-source growth creates organic, network-like structure
  • Branching and merging phases are visually expressive
  • Maintains perfect-maze guarantees via component checks

Cons

  • Behavior tuning is more complex than single-frontier algorithms
  • Output style can vary strongly with branching probability

Best use: Organic maze aesthetics that resemble biological growth and network fusion.

Interesting fact: Anastomosis is a real fungal behavior where separate hyphae fuse; this algorithm adapts that idea into cycle-safe maze construction. Invented in this project by Gemini.

GeneratorAdvancedcounterfactual-cycle-annealing

Counterfactual Cycle Annealing (Simulated Graph Annealing) (Codex)

Starts from a valid spanning tree, then performs objective-driven edge swaps (add chord, remove cycle edge) under simulated annealing.

Invented by: Codex

Topology: Perfect

Time: O(KV) typical, where K is swap budget and each step includes tree-path searchSpace: O(V + E)

How It Works

  1. Generate a seed perfect maze using randomized depth-first tree growth.
  2. Define per-cell target branching degrees plus per-edge affinity scores.
  3. Each annealing step proposes one closed edge near high local stress.
  4. That proposal forms a virtual cycle; evaluate every removable edge on the induced cycle path.
  5. Apply the best swap if it improves local energy, or sometimes accept worse swaps early while temperature is high.
  6. Because each accepted move adds one edge and removes one cycle edge, the maze remains connected and acyclic.

Pros

  • Unique rewiring behavior unlike one-pass growth algorithms
  • Preserves perfect-maze guarantees throughout optimization
  • Can sculpt corridor/branching texture without rebuilding from scratch

Cons

  • More stateful than classic generators
  • Path search inside each swap attempt adds overhead

Best use: Experimental generation where you want iterative, optimization-style morphology instead of single-pass carving.

Interesting fact: This generator treats maze construction as local counterfactual editing: every move evaluates what would happen if one edge were replaced by another before committing. Invented in this project by Codex.

GeneratorAdvancedsandpile-avalanche

Sandpile Avalanche (Self-Organized Criticality) (Claude)

Builds a maze through self-organized criticality: sand grains accumulate and topple in cascading avalanches that carve passages at component boundaries.

Invented by: Claude

Topology: Perfect

Time: O(V²) expected due to repeated avalanche resolution and boundary scanningSpace: O(V)

How It Works

  1. Pre-seed each cell with 1–3 sand grains, placing the system near criticality.
  2. Each step drops one grain on a random cell. If its height reaches 4, it topples: −4 grains to itself, +1 to each neighbor.
  3. Toppling can cascade — neighbors pushed to 4 also topple, creating avalanches of any size.
  4. When a topple sends sand across a union-find component boundary, carve a passage and merge the components.
  5. After a carve drought, bias grain drops toward boundary cells to accelerate the tail phase.
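
The drop-and-topple rule of steps 2–3 is small enough to sketch directly. A minimal Python sketch (the union-find carving of step 4 and the boundary bias of step 5 are omitted, and `drop_and_topple` is our name for illustration, not the visualizer's code):

```python
import random

def drop_and_topple(height, w, h, cell):
    """Drop one grain on `cell`; topple every cell that reaches 4 grains.

    Returns the set of cells that toppled (the avalanche)."""
    height[cell] += 1
    avalanche = set()
    unstable = [cell] if height[cell] >= 4 else []
    while unstable:
        c = unstable.pop()
        if height[c] < 4:
            continue                               # already relaxed
        height[c] -= 4                             # topple: lose four grains...
        avalanche.add(c)
        x, y = c
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nb[0] < w and 0 <= nb[1] < h:  # edge grains fall off
                height[nb] += 1                    # ...one grain per neighbor
                if height[nb] >= 4:
                    unstable.append(nb)
    return avalanche

# Pre-seed near criticality (step 1), then rain grains at random (step 2).
W, H = 8, 8
rng = random.Random(1)
heights = {(x, y): rng.randint(1, 3) for x in range(W) for y in range(H)}
for _ in range(200):
    drop_and_topple(heights, W, H, (rng.randrange(W), rng.randrange(H)))
```

The full generator would carve a passage whenever a topple in the returned avalanche crosses a component boundary.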

Pros

  • Genuinely emergent: power-law avalanche size distribution creates unpredictable drama
  • Simplest possible rules — drop, topple at 4, carve on boundary crossing
  • Grounded in real complexity science (Bak–Tang–Wiesenfeld 1987)

Cons

  • Variable step count — some drops do nothing, others trigger massive cascades
  • Tail phase can slow without the acceleration heuristic

Best use: Watching self-organized criticality build a maze through emergent avalanche dynamics.

Interesting fact: The Abelian sandpile is the canonical model of self-organized criticality — systems that naturally evolve to a state where small inputs can trigger chain reactions of any scale. This is the first application of sandpile dynamics to maze generation. Invented in this project by Claude.

GeneratorAdvancedreaction-diffusion

Reaction-Diffusion (Turing)

Simulates Gray-Scott chemical kinetics to naturally grow Turing patterns.

Topology: Loopy

Time: O(V * steps) for chemistry simulationSpace: O(V) for continuous fields

How It Works

  1. Seed chemicals A and B with some noise.
  2. Simulate diffusion and reaction using the Gray-Scott model.
  3. Threshold the resulting continuous field into a binary passage mask.
  4. Ensure spanning-tree connectivity with a randomized bridge builder.
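
Steps 1–3 can be sketched with an explicit Euler update of the Gray-Scott equations. This is a minimal sketch only: the diffusion rates, feed/kill values, and threshold below are illustrative choices scaled for a plain 4-neighbor Laplacian, not the visualizer's parameters, and the bridge-building pass of step 4 is omitted:

```python
import random

def laplacian(field, x, y, n):
    # 4-neighbor discrete Laplacian with wrap-around boundaries
    return (field[(x + 1) % n][y] + field[(x - 1) % n][y]
            + field[x][(y + 1) % n] + field[x][(y - 1) % n]
            - 4 * field[x][y])

def gray_scott_step(A, B, n, dA=0.2, dB=0.1, feed=0.055, kill=0.062):
    """One explicit Euler step of the Gray-Scott reaction A + 2B -> 3B."""
    nA = [[0.0] * n for _ in range(n)]
    nB = [[0.0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            a, b = A[x][y], B[x][y]
            reaction = a * b * b
            nA[x][y] = a + dA * laplacian(A, x, y, n) - reaction + feed * (1 - a)
            nB[x][y] = b + dB * laplacian(B, x, y, n) + reaction - (kill + feed) * b
    return nA, nB

rng = random.Random(0)
N = 16
A = [[1.0] * N for _ in range(N)]                 # chemical A everywhere
B = [[0.1 if rng.random() < 0.1 else 0.0          # sparse noisy B seeds
      for _ in range(N)] for _ in range(N)]
for _ in range(50):
    A, B = gray_scott_step(A, B, N)
mask = [[B[x][y] > 0.05 for y in range(N)] for x in range(N)]   # step 3
```

Real runs need far more steps and larger grids before the thresholded mask shows Turing spots or stripes.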

Pros

  • Truly organic, alien-looking mazes
  • Incredible visualization of pattern formation

Cons

  • Slower than graph algorithms
  • Requires post-processing to guarantee spanning

Best use: Artistic, cellular-automata or chemistry simulations producing mazes.

Interesting fact: Turing patterns explain how animal coat markings, such as zebra stripes, form their complex shapes naturally. Invented in this project by Gemini.

GeneratorAdvancedant-colony

Ant Colony Excavation

Simulates pheromone-guided ants probabilistically digging out a spanning tree.

Topology: Perfect

Time: O(V * steps) dependent on ant explorationSpace: O(V)

How It Works

  1. Spawn active ants that move across the grid leaving pheromones.
  2. Ants heavily prefer moving to cells with high pheromones.
  3. When an ant moves between disjoint union-find sets, it carves a tunnel.
  4. Pheromones gradually evaporate, keeping the exploration dynamic.

Pros

  • Biologically inspired organic growth
  • Produces distinct arterial structures

Cons

  • Can take longer to finish if ants get stuck
  • Requires union-find components tracking

Best use: Demonstrating swarm intelligence applied to generative creation.

Interesting fact: This turns the classic Ant Colony Optimization solver upside down, using ants to *build* the maze instead of solve it. Invented in this project by Gemini.

GeneratorAliasising-model

Magnetic Spin Crystallization (Kruskal's)

Uses simulated annealing with an Ising model to crystallize a random grid.

Alias of: Randomized Kruskal (same runtime behavior).

Topology: Perfect

Time: O(V * steps) for annealingSpace: O(V)

How It Works

  1. Start with a hot grid of random spins.
  2. Attempt to flip spins to minimize an energy function preferring straight corridors.
  3. Cool the temperature down over time (Metropolis-Hastings).
  4. Feed the final cooled spin weights into a Kruskal builder.

Pros

  • Mathematical crystallization visuals
  • Tunable using Hamiltonian energy functions

Cons

  • Heuristic-based, occasionally unpredictable weights

Best use: Physics-based statistical mechanics demonstrations.

Interesting fact: The Ising model is fundamental in physics for simulating ferromagnetism and phase transitions. Invented in this project by Gemini.

GeneratorAdvancedwave-function-collapse

Wave Function Collapse

Collapses per-cell wall tiles under local compatibility constraints, then carves a connected loopy maze.

Topology: Loopy

Time: O(V * T + E α(V)) expected (T = tile options)Space: O(V + E)

How It Works

  1. Initialize each cell with all 16 possible N/E/S/W wall tiles.
  2. Repeatedly pick the lowest-entropy uncollapsed cell and sample one tile.
  3. Propagate edge-compatibility constraints to neighboring cells.
  4. After collapse, convert open tile edges into carve candidates.
  5. Connect any remaining components and add one extra edge to preserve loops.

Pros

  • Constraint-driven structure
  • Naturally loop-friendly output
  • Strong visual progression

Cons

  • More bookkeeping than classic tree generators
  • Requires propagation logic

Best use: Loopy mazes with local texture shaped by tile constraints.

Interesting fact: WFC popularized entropy + constraint propagation in procedural generation far beyond maze domains.

GeneratorAdvanceddla

Diffusion-Limited Aggregation

Random walkers stick to a growing cluster, carving one joining edge per absorbed particle.

Topology: Perfect

Time: Walk-length dependent; typically superlinearSpace: O(V)

How It Works

  1. Seed one aggregate cell.
  2. Launch a particle from a random non-aggregate cell.
  3. Random-walk the particle through neighboring cells.
  4. When it touches aggregate territory, carve one connection and absorb it.
  5. Repeat until every cell belongs to the aggregate.
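
The walk-and-stick loop above fits in a short sketch. This simplified version spawns each walker on a random free cell and has no kill threshold, so it is only practical on small grids (`dla_order` is our illustrative name):

```python
import random

def dla_order(w, h, seed=0):
    """Return (new_cell, aggregate_neighbor) carve pairs in join order."""
    rng = random.Random(seed)
    aggregate = {(w // 2, h // 2)}            # step 1: seed the aggregate
    joins = []

    def neighbors(c):
        x, y = c
        return [(nx, ny) for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
                if 0 <= nx < w and 0 <= ny < h]

    while len(aggregate) < w * h:
        free = [(x, y) for x in range(w) for y in range(h)
                if (x, y) not in aggregate]
        cell = rng.choice(free)               # step 2: launch a particle
        while True:
            touching = [n for n in neighbors(cell) if n in aggregate]
            if touching:                      # step 4: stick and carve one edge
                joins.append((cell, rng.choice(touching)))
                aggregate.add(cell)
                break
            cell = rng.choice(neighbors(cell))  # step 3: keep random-walking
    return joins

edges = dla_order(6, 6, seed=42)
```

Because every absorbed cell carves exactly one edge to the existing aggregate, the result is a spanning tree (a perfect maze).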

Pros

  • Distinct organic branching
  • Simple local rule set
  • Great stochastic animation

Cons

  • Can require many walk steps
  • Performance sensitive to kill threshold

Best use: Tree mazes with dendritic, frost-like growth patterns.

Interesting fact: DLA is a classic model for natural branching structures such as dielectric breakdown and mineral growth.

GeneratorAdvancedhilbert-curve

Hilbert Curve

Orders cells by Hilbert-space-filling rank and carves one parent edge per ordered visit.

Topology: Perfect

Time: O(V log V)Space: O(V)

How It Works

  1. Compute Hilbert ranks over the enclosing power-of-two square.
  2. Filter ranks to in-bounds cells and sort by rank.
  3. Build a connected parent tree preferring visited neighbors with nearby rank.
  4. Visit cells in rank order and carve each parent-child edge.
  5. Finish when all cells are connected.
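
Step 1's rank computation is the classic bit-manipulation conversion from (x, y) to a Hilbert index. A sketch for square power-of-two grids, where consecutive ranks are already adjacent cells (function names are ours):

```python
def xy2d(n, x, y):
    """Hilbert rank of (x, y) on an n x n grid, n a power of two."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                          # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

def hilbert_order(n):
    """All cells sorted by Hilbert rank; consecutive cells are grid-adjacent."""
    cells = [(x, y) for x in range(n) for y in range(n)]
    return sorted(cells, key=lambda c: xy2d(n, c[0], c[1]))

order = hilbert_order(4)
```

On non-square or non-power-of-two boards, step 3's parent tree is what keeps rank-consecutive in-bounds cells connected.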

Pros

  • Fractal-inspired global ordering
  • Deterministic output
  • Smooth progressive carving

Cons

  • Less stochastic variety
  • Extra rank-precompute step

Best use: Demonstrating space-filling ordering effects in perfect mazes.

Interesting fact: Hilbert curves preserve locality better than simple raster order, which is useful in indexing and cache layouts.

GeneratorAdvancedvoronoi

Voronoi Tessellation

Grows seeded regions, builds a region spanning tree, then carves planned internal and doorway edges.

Topology: Perfect

Time: O(V + E α(V))Space: O(V + E)

How It Works

  1. Scatter region seeds and grow ownership with multi-source BFS rings.
  2. Record growth parents to form a tree inside each region.
  3. Compute region boundary adjacency candidates.
  4. Run a random spanning tree over the region graph to pick doorway boundaries.
  5. Carve internal region-tree edges plus selected doorways.

Pros

  • Clear phase-based animation
  • Region-level structure
  • Perfect connectivity guarantees

Cons

  • More preprocessing than local-walk methods
  • Heavily seed-count dependent

Best use: Perfect mazes with visible territorial growth behavior.

Interesting fact: Voronoi partitions are foundational in computational geometry, graphics, and facility-location problems.

GeneratorAdvancedpercolation

Percolation

Randomly opens walls with probability p, then stitches components and preserves at least one cycle.

Topology: Loopy

Time: O(E α(V))Space: O(V + E)

How It Works

  1. Enumerate and shuffle all internal grid walls.
  2. Percolation phase: carve each wall independently with probability p.
  3. Track connectivity with union-find.
  4. Connect phase: carve remaining walls that merge disconnected components.
  5. If needed, carve one extra unused wall to guarantee loopy topology.
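
The percolation and connect phases (steps 2–4) can be sketched with a small union-find. A minimal sketch; the extra-loop guarantee of step 5 is omitted, and `percolation_maze` is our illustrative name:

```python
import random

def percolation_maze(w, h, p, seed=0):
    """Open each internal wall with probability p, then stitch components."""
    rng = random.Random(seed)
    parent = list(range(w * h))

    def find(i):                              # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra == rb:
            return False
        parent[ra] = rb
        return True

    idx = lambda x, y: y * w + x
    walls = [((x, y), (x + 1, y)) for x in range(w - 1) for y in range(h)]
    walls += [((x, y), (x, y + 1)) for x in range(w) for y in range(h - 1)]
    rng.shuffle(walls)
    open_edges, leftover = [], []
    for a, b in walls:                        # percolation phase
        if rng.random() < p:
            union(idx(*a), idx(*b))
            open_edges.append((a, b))
        else:
            leftover.append((a, b))
    for a, b in leftover:                     # connect phase: merging walls only
        if union(idx(*a), idx(*b)):
            open_edges.append((a, b))
    return open_edges

edges = percolation_maze(8, 8, 0.3, seed=7)
```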

Pros

  • Direct loop-density control via probability
  • Phase-transition inspired behavior

Cons

  • Output character highly parameter-sensitive
  • Requires completion pass for guaranteed connectivity

Best use: Loopy mazes near critical connectivity thresholds.

Interesting fact: Percolation theory studies when local random connections suddenly create giant global connectivity.

GeneratorAdvancedl-system

L-System (Lindenmayer)

Expands a grammar string once, turtle-draws corridors, then connects remaining components.

Topology: Perfect

Time: O(L + E α(V)) (L = expanded string length)Space: O(V + L)

How It Works

  1. Expand an axiom with production rules up to a capped length.
  2. Interpret symbols as turtle moves, turns, and push/pop branch states.
  3. Carve forward moves while enforcing union-find acyclicity.
  4. Mark traversal overlays as the turtle draws the pattern.
  5. Run a randomized connect pass to absorb remaining isolated components.
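
Steps 1–2 (grammar expansion and turtle interpretation) can be sketched on their own. The rule set below is a hypothetical branching grammar for illustration, not the visualizer's actual rules, and the carving/connect passes of steps 3–5 are omitted:

```python
def expand(axiom, rules, iterations, cap=4096):
    """Expand the axiom, clamping runaway growth at `cap` symbols."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
        if len(s) > cap:
            s = s[:cap]
            break
    return s

def turtle_cells(s, start=(0, 0)):
    """Interpret F = step, +/- = turn, [ and ] = push/pop turtle state."""
    dirs = [(1, 0), (0, 1), (-1, 0), (0, -1)]
    x, y, d = start[0], start[1], 0
    stack, visited = [], [(x, y)]
    for ch in s:
        if ch == "F":
            x, y = x + dirs[d][0], y + dirs[d][1]
            visited.append((x, y))
        elif ch == "+":
            d = (d + 1) % 4
        elif ch == "-":
            d = (d - 1) % 4
        elif ch == "[":
            stack.append((x, y, d))
        elif ch == "]":
            x, y, d = stack.pop()
    return visited

rules = {"F": "F[+F]F[-F]"}          # hypothetical branching rule
cells = turtle_cells(expand("F", rules, 2))
```

The generator then keeps only forward moves that do not create cycles and connects whatever components remain.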

Pros

  • Grammar-driven visual motifs
  • Good branch/trail visualization

Cons

  • Rule choice strongly impacts shape
  • Can overrun bounds without clamping

Best use: Fractal-inspired tree mazes with explicit symbolic generation phases.

Interesting fact: L-Systems were introduced to model plant growth and later became central to procedural graphics.

Solver Algorithms

These navigate an already generated maze from start to goal.

SolverResearch Corebfs

Breadth-First Search (BFS)

Expands in layers and guarantees shortest path in unweighted mazes.

Compatible with: Perfect, Loopy, Weave

Time: O(V + E)Space: O(V)

How It Works

  1. Push start cell into a queue.
  2. Pop in FIFO order and visit neighbors.
  3. Record each neighbor's parent when first discovered.
  4. When goal is reached, reconstruct path via parents.
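
The four steps above can be sketched directly. A minimal sketch on an adjacency-list maze (the graph below is a toy example, not generated by the visualizer):

```python
from collections import deque

def bfs_path(adj, start, goal):
    """Shortest path by layer expansion; `adj` maps cell -> open neighbors."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:          # reconstruct via parents
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        for nb in adj[cell]:
            if nb not in parent:             # first discovery fixes the parent
                parent[nb] = cell
                queue.append(nb)
    return None

# A tiny tree-shaped (perfect) maze: A-B, A-C, C-D.
maze = {"A": ["B", "C"], "B": ["A"], "C": ["A", "D"], "D": ["C"]}
route = bfs_path(maze, "A", "D")   # ["A", "C", "D"]
```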

Pros

  • Optimal path for unit weights
  • Simple and robust

Cons

  • Can explore many irrelevant nodes
  • Large frontier on wide-open maps

Best use: Reliable shortest path on unweighted grids.

Interesting fact: BFS is equivalent to Dijkstra when every edge has identical weight.

SolverResearch Coredfs

Depth-First Search (DFS)

Follows one branch deeply before backing up.

Compatible with: Perfect, Loopy, Weave

Time: O(V + E)Space: O(V)

How It Works

  1. Push start cell onto a stack.
  2. Pop and explore a neighbor branch deeply.
  3. Track discovery parents for reconstruction.
  4. Backtrack naturally when stack unwinds.

Pros

  • Very low overhead
  • Strong step-by-step visual clarity

Cons

  • No shortest-path guarantee
  • Can take long detours

Best use: Showing exploratory behavior rather than shortest paths.

Interesting fact: On tree mazes (perfect mazes), DFS still finds the unique path, but it may explore much of the maze before doing so.

SolverResearch Coreastar

A* Search

Combines actual cost and heuristic distance to guide exploration.

Compatible with: Perfect, Loopy, Weave

Time: O(E) to O(E log V), depending on priority structureSpace: O(V)

How It Works

  1. Maintain open set prioritized by f(n)=g(n)+h(n).
  2. Expand the cell with lowest estimated total cost.
  3. Relax neighbors if a better g-cost is found.
  4. Stop at goal and reconstruct via parents.
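
The loop above can be sketched with a binary heap and the Manhattan heuristic. A minimal sketch; `astar` and the 3x3 demo grid are illustrative, not the visualizer's implementation:

```python
import heapq

def astar(passable, w, h, start, goal):
    """A* on a 4-directional grid with the Manhattan heuristic."""
    def hdist(c):
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])

    g = {start: 0}
    parent = {start: None}
    open_set = [(hdist(start), start)]        # entries are (f, cell)
    while open_set:
        _, cell = heapq.heappop(open_set)
        if cell == goal:
            path = []
            while cell is not None:           # walk parents back to start
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        x, y = cell
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nb[0] < w and 0 <= nb[1] < h) or not passable(nb):
                continue
            tentative = g[cell] + 1
            if tentative < g.get(nb, float("inf")):   # relax neighbor
                g[nb] = tentative
                parent[nb] = cell
                heapq.heappush(open_set, (tentative + hdist(nb), nb))
    return None

# A 3x3 grid with the center blocked: the path must route around it.
blocked = {(1, 1)}
route = astar(lambda c: c not in blocked, 3, 3, (0, 0), (2, 2))
```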

Pros

  • Usually much faster than BFS
  • Optimal with admissible heuristic

Cons

  • Needs heuristic tuning
  • Bookkeeping heavier than BFS/DFS

Best use: Fast near-optimal routing on grid mazes.

Interesting fact: Manhattan distance is a natural heuristic for 4-directional grids.

SolverAliasastar-euclidean

A* (Euclidean Heuristic)

A* variant using straight-line distance heuristic.

Alias of: A* Search (same runtime behavior).

Compatible with: Perfect, Loopy, Weave

Time: O(E) to O(E log V), depending on queueSpace: O(V)

How It Works

  1. Score candidates with f(n)=g(n)+h(n), where h is Euclidean distance.
  2. Expand the lowest f-score node.
  3. Relax neighbors with improved g-cost and update parents.
  4. Stop and reconstruct when reaching goal.

Pros

  • Admissible and consistent on grid movement
  • Often smooth directional guidance

Cons

  • Can expand more than Manhattan A* on 4-neighbor grids
  • Still more bookkeeping than BFS

Best use: Comparing heuristic behavior and exploring geometric distance cues.

Interesting fact: Euclidean is tighter for continuous geometry, while Manhattan better matches axis-only moves.

SolverAliasweighted-astar

Weighted A* (Greedy-Biased)

Biases A* harder toward the heuristic for more aggressive search on unit-cost maze edges.

Alias of: A* Search (same runtime behavior).

Compatible with: Perfect, Loopy, Weave

Time: Often faster than A* in practiceSpace: O(V)

How It Works

  1. Use f(n)=g(n)+w*h(n) with w>1.
  2. Expand the node with smallest weighted estimate.
  3. Continue relaxing neighbors and updating parents.
  4. Return reconstructed path when goal is dequeued.

Pros

  • Lower exploration count
  • Good speed at high grid sizes

Cons

  • Can lose optimality
  • Quality depends on weight

Best use: When responsiveness matters more than guaranteed shortest path, without changing maze edge weights.

Interesting fact: Weighted A* is common in games and robotics where near-optimal is acceptable for speed.

SolverResearch Coredijkstra

Dijkstra

Uniform-cost expansion from the source, optimal for nonnegative weights.

Compatible with: Perfect, Loopy, Weave

Time: O(E) to O(E log V), depending on queueSpace: O(V)

How It Works

  1. Initialize start with distance 0 and others with infinity.
  2. Expand the currently known minimum-distance node.
  3. Relax each outgoing neighbor distance.
  4. Stop when goal is finalized.
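
The uniform-cost loop above can be sketched with a lazy-deletion heap. A minimal sketch on a weighted toy graph (not the visualizer's implementation):

```python
import heapq

def dijkstra(graph, start, goal):
    """Uniform-cost search; `graph` maps node -> [(neighbor, weight), ...]."""
    dist = {start: 0}
    parent = {start: None}
    heap = [(0, start)]
    done = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in done:
            continue                          # stale queue entry
        done.add(node)
        if node == goal:                      # goal is finalized here
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return d, path[::-1]
        for nb, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nb, float("inf")):   # relax edge
                dist[nb] = nd
                parent[nb] = node
                heapq.heappush(heap, (nd, nb))
    return float("inf"), None

# Weighted passages: the direct A-D door costs more than the detour.
graph = {
    "A": [("B", 1), ("D", 10)],
    "B": [("A", 1), ("C", 1)],
    "C": [("B", 1), ("D", 1)],
    "D": [("C", 1), ("A", 10)],
}
cost, route = dijkstra(graph, "A", "D")   # cost 3 via A-B-C-D
```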

Pros

  • Optimal
  • Generalizes to weighted edges

Cons

  • More work than BFS on unit weights
  • No heuristic acceleration

Best use: Reference optimal solver and weighted-path baselines.

Interesting fact: Dijkstra published his algorithm in 1956 and reportedly designed it in about twenty minutes.

SolverResearch Corebellman-ford

Bellman-Ford

Relaxes all open edges in repeated passes until no shorter distances remain.

Compatible with: Perfect, Loopy, Weave

Time: O(VE)Space: O(V)

How It Works

  1. Initialize start distance to 0 and all others to infinity.
  2. Run full relaxation passes over every open edge using previous-pass distances.
  3. Update parent links whenever a shorter path to a node is found.
  4. Stop when a pass makes no improvements and reconstruct the path.
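
The pass-based relaxation in step 2 can be sketched with an explicit snapshot of the previous pass's distances, mirroring the pass-snapshot behavior described below (a minimal sketch; `bellman_ford` and the toy edge list are illustrative):

```python
def bellman_ford(edges, nodes, start):
    """Pass-snapshot relaxation; `edges` is a list of (u, v, weight) arcs."""
    INF = float("inf")
    dist = {n: INF for n in nodes}
    parent = {n: None for n in nodes}
    dist[start] = 0
    for _ in range(len(nodes) - 1):           # at most V-1 useful passes
        snapshot = dict(dist)                 # freeze previous-pass values
        changed = False
        for u, v, w in edges:
            if snapshot[u] + w < dist[v]:     # relax from snapshot distances
                dist[v] = snapshot[u] + w
                parent[v] = u
                changed = True
        if not changed:                       # no improvements: converged
            break
    return dist, parent

# Undirected maze passages become two directed arcs each.
undirected = [("A", "B", 1), ("B", "C", 1), ("C", "D", 1), ("A", "D", 5)]
arcs = undirected + [(v, u, w) for u, v, w in undirected]
dist, parent = bellman_ford(arcs, "ABCD", start="A")
```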

Pros

  • Provably optimal, and unlike Dijkstra it also tolerates negative edge weights (as long as no negative cycle exists)
  • Simple global convergence criterion

Cons

  • More work than BFS/Dijkstra on unweighted mazes
  • Global pass updates are still heavier than frontier-only methods

Best use: Reference correctness checks and shortest-path algorithm comparisons.

Interesting fact: This visualizer uses pass-snapshot relaxation so Bellman-Ford progression remains visible instead of collapsing in one cascaded pass.

SolverAdvancediterative-deepening-dfs

Iterative Deepening DFS (IDDFS)

Runs depth-limited DFS repeatedly with increasing depth limits.

Compatible with: Perfect, Loopy, Weave

Time: O(b^d) node expansions in the worst caseSpace: O(d)

How It Works

  1. Start with depth limit 0.
  2. Perform DFS constrained to the current limit.
  3. If goal is not found, increase limit and restart from the start node.
  4. Reconstruct the path as soon as a depth-limited run reaches the goal.
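
The restart loop above can be sketched with a recursive depth-limited probe. This sketch tracks the cells on the current probe path to avoid cycles, which preserves the O(d) per-path memory profile (`iddfs` is our illustrative name):

```python
def iddfs(adj, start, goal, max_depth=50):
    """Depth-limited DFS restarted with growing limits."""
    def dls(node, depth, visited):
        if node == goal:
            return [node]
        if depth == 0:
            return None
        for nb in adj[node]:
            if nb not in visited:             # avoid revisiting the probe path
                sub = dls(nb, depth - 1, visited | {nb})
                if sub is not None:
                    return [node] + sub
        return None

    for limit in range(max_depth + 1):        # restart with a deeper limit
        path = dls(start, limit, {start})
        if path is not None:
            return path
    return None

maze = {"A": ["B", "C"], "B": ["A"], "C": ["A", "D"], "D": ["C"]}
route = iddfs(maze, "A", "D")   # ["A", "C", "D"]
```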

Pros

  • Low memory usage
  • Finds shallow solutions first, mirroring BFS depth ordering

Cons

  • Repeats work across limits
  • Can be slower than BFS on broad mazes

Best use: Memory-constrained search demonstrations and DFS/BFS tradeoff education.

Interesting fact: IDDFS combines DFS memory behavior with increasing-depth completeness, making it a classic AI search baseline.

SolverAdvancedgreedy-best-first

Greedy Best-First

Chooses nodes closest to goal by heuristic only.

Compatible with: Perfect, Loopy, Weave

Time: Highly input-dependentSpace: O(V)

How It Works

  1. Keep frontier ordered by heuristic h(n).
  2. Expand the most goal-looking node.
  3. Add newly discovered neighbors.
  4. Reconstruct once goal is found.

Pros

  • Very fast in easy layouts
  • Small control overhead

Cons

  • Not optimal
  • Can be misled by local geometry

Best use: Quick approximate solving and heuristic demonstrations.

Interesting fact: Greedy best-first can outperform optimal methods on easy maps but degrade sharply on deceptive ones.

SolverResearch Corebidirectional-bfs

Bidirectional BFS

Runs two BFS waves from start and goal until they meet.

Compatible with: Perfect, Loopy, Weave

Time: Often far less than single-source BFSSpace: O(V)

How It Works

  1. Initialize one queue at start and another at goal.
  2. Alternate expanding the smaller frontier.
  3. Detect when a node is seen by both searches.
  4. Stitch both parent chains into one path.
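
The alternate-and-stitch procedure can be sketched as below. This is a simplified sketch that stops at the first meeting node; production implementations typically check additional meeting candidates to guarantee strict optimality in every tie case:

```python
from collections import deque

def bidirectional_bfs(adj, start, goal):
    """Alternate expanding the smaller frontier until the waves meet."""
    if start == goal:
        return [start]
    parents = {start: {start: None}, goal: {goal: None}}
    frontiers = {start: deque([start]), goal: deque([goal])}
    while frontiers[start] and frontiers[goal]:
        side = start if len(frontiers[start]) <= len(frontiers[goal]) else goal
        other = goal if side == start else start
        for _ in range(len(frontiers[side])):   # expand one full layer
            cell = frontiers[side].popleft()
            for nb in adj[cell]:
                if nb in parents[side]:
                    continue
                parents[side][nb] = cell
                if nb in parents[other]:        # the waves met at nb
                    return _stitch(parents, start, goal, nb)
                frontiers[side].append(nb)
    return None

def _stitch(parents, start, goal, meet):
    half = []
    c = meet
    while c is not None:                        # start ... meet
        half.append(c)
        c = parents[start][c]
    half.reverse()
    c = parents[goal][meet]
    while c is not None:                        # meet+1 ... goal
        half.append(c)
        c = parents[goal][c]
    return half

maze = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
route = bidirectional_bfs(maze, "A", "D")   # ["A", "B", "C", "D"]
```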

Pros

  • Big practical speedups on long corridors
  • Still optimal on unweighted graphs

Cons

  • More bookkeeping
  • Path merge logic is trickier

Best use: Large mazes with distant start/goal pairs.

Interesting fact: Meeting in the middle can reduce explored states exponentially in branching factor terms.

SolverResearch Coredead-end-filling

Dead-End Filling

Prunes leaves iteratively until only the solution corridor remains.

Compatible with: Perfect, Loopy, Weave

Time: O(V + E)Space: O(V)

How It Works

  1. Compute each cell degree in the maze graph.
  2. Queue non-start/non-goal dead ends (degree <= 1).
  3. Remove dead ends and update neighbor degrees.
  4. Remaining cells form the final path corridor.
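
The degree-pruning loop can be sketched as follows. A minimal sketch on an adjacency-list maze; on a perfect maze the surviving cells are exactly the unique solution path:

```python
from collections import deque

def dead_end_fill(adj, start, goal):
    """Iteratively strip degree-<=1 cells (except start/goal)."""
    degree = {c: len(ns) for c, ns in adj.items()}
    alive = set(adj)
    queue = deque(c for c in adj
                  if degree[c] <= 1 and c not in (start, goal))
    while queue:
        cell = queue.popleft()
        if cell not in alive:
            continue
        alive.discard(cell)                    # prune the dead end
        for nb in adj[cell]:
            if nb in alive:
                degree[nb] -= 1                # neighbor may become a dead end
                if degree[nb] <= 1 and nb not in (start, goal):
                    queue.append(nb)
    return alive

# A corridor A-B-C-D with one dead-end spur C-E.
maze = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D", "E"],
        "D": ["C"], "E": ["C"]}
remaining = dead_end_fill(maze, "A", "D")   # {"A", "B", "C", "D"}
```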

Pros

  • Very different visual style
  • No heuristic needed

Cons

  • Less intuitive to users expecting search waves
  • Best suited to perfect mazes

Best use: Teaching maze topology and pruning-based solving.

Interesting fact: This method solves by elimination, not by explicitly chasing the goal first.

SolverResearch Corelee-wavefront

Lee Wavefront

Distance-wave flood fill from goal, then gradient backtrace from start.

Compatible with: Perfect, Loopy, Weave

Time: O(V + E)Space: O(V)

How It Works

  1. Run BFS-like wavefront from the goal and assign distance labels.
  2. Stop once start is labeled (or all reachable nodes are processed).
  3. Trace from start by repeatedly moving to a neighbor with smaller distance.
  4. Mark traced cells as the final path.
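
The two phases (wave labelling, then gradient descent) can be sketched together. A minimal sketch; `lee_solve` is our illustrative name:

```python
from collections import deque

def lee_solve(adj, start, goal):
    """Flood distances outward from the goal, then descend the gradient."""
    dist = {goal: 0}
    queue = deque([goal])
    while queue:                               # wavefront labelling
        cell = queue.popleft()
        for nb in adj[cell]:
            if nb not in dist:
                dist[nb] = dist[cell] + 1
                queue.append(nb)
    if start not in dist:
        return None                            # goal unreachable from start
    path, cell = [start], start
    while cell != goal:                        # follow decreasing labels
        cell = min(adj[cell], key=lambda n: dist.get(n, float("inf")))
        path.append(cell)
    return path

maze = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
route = lee_solve(maze, "A", "D")   # ["A", "B", "C", "D"]
```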

Pros

  • Optimal in unweighted mazes
  • Path extraction is simple and deterministic

Cons

  • Needs full distance map storage
  • Can explore broad regions like BFS

Best use: Wave-propagation visualizations and shortest-path extraction via gradients.

Interesting fact: Lee's algorithm is a classic in PCB routing and grid-based shortest-path problems.

SolverResearch Corewall-follower

Wall Follower (Right-Hand)

Follows one wall consistently to eventually find the goal in simply connected mazes.

Compatible with: Perfect

Time: Input-dependent, typically O(E) in tree mazesSpace: O(V) with parent tracking

How It Works

  1. Keep a heading and prefer right turn, then straight, then left, then back.
  2. Move through open passages while maintaining wall contact.
  3. Record discovery parents for path reconstruction.
  4. When goal is reached, reconstruct and display the route.

Pros

  • Very intuitive
  • Great for demonstrating local decision rules

Cons

  • Not generally optimal
  • Can fail in non-simply-connected wall topologies

Best use: Educational demos of local navigation strategies.

Interesting fact: In perfect mazes (tree mazes), wall following always reaches the goal because there are no isolated loops.

SolverResearch Coreleft-wall-follower

Wall Follower (Left-Hand)

Left-hand counterpart of wall following with mirrored turn preference.

Compatible with: Perfect

Time: Input-dependent, typically O(E) in tree mazesSpace: O(V) with parent tracking

How It Works

  1. Keep one hand on the left wall while moving.
  2. Prefer left turn, then straight, then right, then back.
  3. Track visited/discovered cells and parent links.
  4. Reconstruct path after goal is reached.

Pros

  • Simple local rule
  • Useful for comparing handedness behavior

Cons

  • Not generally optimal
  • Same topology limits as right-hand variant

Best use: Educational side-by-side comparison with right-hand wall following.

Interesting fact: In simply connected mazes, left-hand and right-hand rules both solve, but can produce very different traversal traces.

SolverResearch Corerandom-mouse

Random Mouse

Chooses random moves at junctions with no directional strategy.

Compatible with: Perfect, Weave

Time: Highly variable; can be very largeSpace: O(V) with parent/discovery tracking

How It Works

  1. Start at entrance and pick random open move each step.
  2. Record first-discovery parents for optional reconstruction.
  3. Continue wandering until the goal is reached.

Pros

  • Extremely simple
  • Great baseline for comparison

Cons

  • Unreliable performance
  • No optimality guarantees

Best use: Baseline and educational contrast against informed solvers.

Interesting fact: Random Mouse is often used as the lower-bound benchmark in maze-solver studies.

SolverAliasflood-fill

Flood Fill (Lee Wavefront)

Alias of Lee Wavefront under an alternative name.

Alias of: Lee Wavefront (same runtime behavior).

Compatible with: Perfect, Loopy, Weave

Time: O(V + E)Space: O(V)

How It Works

  1. Delegate solving to Lee Wavefront.
  2. Expand wavefront distances from the goal.
  3. Trace a descending distance gradient back to reconstruct a shortest path.

Pros

  • Familiar flood-fill naming
  • Identical correctness to Lee Wavefront

Cons

  • Not a separate implementation
  • Behavior tracks Lee Wavefront updates

Best use: Users expecting flood-fill terminology for wavefront shortest-path solving.

Interesting fact: In this codebase, flood-fill is a direct alias of lee-wavefront.

SolverAliaspledge

Pledge Algorithm (Wall Follower Extension)

Wall-following with turn accounting to escape loops/islands.

Alias of: Wall Follower (Right-Hand) (same runtime behavior).

Compatible with: Perfect

Time: Input-dependentSpace: Low local memory (higher in visualizer for tracing)

How It Works

  1. Pick preferred heading.
  2. Move straight when possible; otherwise follow wall.
  3. Track net turn count while wall-following.
  4. Leave the wall once the walker faces its preferred heading and the net turn count returns to zero.

Pros

  • More robust than naive wall following
  • Good local-navigation concept

Cons

  • Still not shortest-path optimal
  • More state than hand-on-wall rule

Best use: Comparing local heuristics in non-trivial topologies.

Interesting fact: Pledge specifically addresses wall-follower failures around detached wall islands.

SolverAliastremaux

Tremaux (DFS Path-Marking)

Mark-based exploration that avoids repeatedly traversing passages.

Alias of: Depth-First Search (DFS) (same runtime behavior).

Compatible with: Perfect, Loopy, Weave

Time: O(V + E) typicalSpace: O(V + E) for marks

How It Works

  1. Mark edges/passages as they are traversed.
  2. Prefer unmarked exits; backtrack on marked branches.
  3. Avoid over-marked branches unless forced.
  4. The walker eventually reaches the exit if one is reachable.

Pros

  • Guaranteed for finite mazes
  • Human-doable with marks

Cons

  • Not shortest-path optimal
  • Needs marking memory

Best use: Historical human-solvable guaranteed strategy demos.

Interesting fact: Tremaux's algorithm predates modern graph-search terminology but maps well to DFS ideas.

SolverAliaschain

Chain (Bidirectional BFS)

Alias of Bidirectional BFS with legacy naming.

Alias of: Bidirectional BFS (same runtime behavior).

Compatible with: Perfect, Loopy, Weave

Time: O(V + E)Space: O(V)

How It Works

  1. Delegate solving to Bidirectional BFS.
  2. Expand search waves from start and goal.
  3. Stop when both waves meet and stitch the path.
  4. Return the same result as canonical Bidirectional BFS.

Pros

  • Compatibility with chain naming
  • Same shortest-path guarantees as Bidirectional BFS

Cons

  • Not a distinct algorithm
  • No behavior difference from canonical implementation

Best use: Users who prefer chain terminology for bidirectional search.

Interesting fact: In this codebase, chain is a direct alias of bidirectional-bfs.

SolverResearch Corecul-de-sac-filler

Cul-de-sac Filler (Dead-End Filling)

Extends dead-end elimination to prune cul-de-sac style branches.

Compatible with: Perfect, Loopy, Weave

Time: O(V + E)Space: O(V)

How It Works

  1. Identify dead-end-like and cul-de-sac candidates.
  2. Iteratively remove non-solution side branches.
  3. Repeat until only core route structure remains.

Pros

  • Topology-first solving
  • Useful structural visualization

Cons

  • Less intuitive than path search
  • Best with full maze map

Best use: Map-analysis style solving workflows.

Interesting fact: Cul-de-sac pruning can be seen as generalized leaf stripping on maze graphs.

SolverAliasblind-alley-sealer

Blind Alley Sealer (Dead-End Filling)

Alias of Dead-End Filling with alternate terminology.

Alias of: Dead-End Filling (same runtime behavior).

Compatible with: Perfect, Loopy, Weave

Time: O(V + E)Space: O(V)

How It Works

  1. Delegate solving to Dead-End Filling.
  2. Iteratively prune dead-end branches that cannot belong to a solution path.
  3. Leave only the viable corridor structure.

Pros

  • Keeps sealer naming for discoverability
  • Matches Dead-End Filling correctness

Cons

  • Not a distinct implementation
  • Behavior follows Dead-End Filling changes

Best use: Users expecting blind-alley sealer naming in elimination-based solvers.

Interesting fact: In this codebase, blind-alley-sealer is a direct alias of dead-end-filling.

SolverAliasblind-alley-filler

Blind Alley Filler (Dead-End Filling)

Alias of Dead-End Filling with alternate terminology.

Alias of: Dead-End Filling (same runtime behavior).

Compatible with: Perfect, Loopy, Weave

Time: O(V + E)Space: O(V)

How It Works

  1. Delegate solving to Dead-End Filling.
  2. Iteratively remove dead-end branches from the maze graph.
  3. Finish when only viable route corridors remain.

Pros

  • Keeps filler naming for discoverability
  • Matches Dead-End Filling correctness

Cons

  • Not a distinct implementation
  • Behavior follows Dead-End Filling changes

Best use: Users expecting blind-alley filler naming in elimination-based solvers.

Interesting fact: In this codebase, blind-alley-filler is a direct alias of dead-end-filling.

SolverAliascollision-solver

Collision Solver (Bidirectional BFS)

Alias of Bidirectional BFS with collision-focused naming.

Alias of: Bidirectional BFS (same runtime behavior).

Compatible with: Perfect, Loopy, Weave

Time: O(V + E)Space: O(V)

How It Works

  1. Delegate solving to Bidirectional BFS.
  2. Expand start and goal frontiers toward each other.
  3. Detect wave collision and stitch parent chains.
  4. Return the same shortest path as canonical Bidirectional BFS.

Pros

  • Collision-oriented naming
  • Same shortest-path guarantees as Bidirectional BFS

Cons

  • Not a distinct implementation
  • No behavior difference from canonical implementation

Best use: Users who prefer collision wording for bidirectional search.

Interesting fact: In this codebase, collision-solver is a direct alias of bidirectional-bfs.

SolverResearch Coreshortest-paths-finder

Shortest Paths Finder (All)

Identifies the full set of cells that belong to shortest routes.

Compatible with: Perfect, Loopy, Weave

Time: O(V + E)Space: O(V)

How It Works

  1. Compute distances from start.
  2. Compute distances from goal.
  3. Keep cells where dStart + dGoal equals the overall shortest start-to-goal distance.
  4. Mark resulting shortest-path manifold.
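
The two-BFS filter above can be sketched directly. A minimal sketch; the diamond-shaped toy maze shows a case with two distinct shortest routes:

```python
from collections import deque

def bfs_dist(adj, src):
    """Unweighted distances from `src` to every reachable cell."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        c = queue.popleft()
        for nb in adj[c]:
            if nb not in dist:
                dist[nb] = dist[c] + 1
                queue.append(nb)
    return dist

def all_shortest_cells(adj, start, goal):
    """Cells lying on at least one shortest start-to-goal route."""
    d_start = bfs_dist(adj, start)
    d_goal = bfs_dist(adj, goal)
    best = d_start[goal]
    return {c for c in adj
            if d_start.get(c, float("inf")) + d_goal.get(c, float("inf")) == best}

# A 2x2 loop: both B and C sit on some shortest route, so all 4 cells qualify.
maze = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
cells = all_shortest_cells(maze, "A", "D")   # {"A", "B", "C", "D"}
```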

Pros

  • Shows all shortest alternatives
  • Great for route multiplicity analysis

Cons

  • More data than single-path methods
  • Can look dense in open areas

Best use: Analyzing alternative optimal routes.

Interesting fact: A single shortest path is just one sample from this full shortest-path set.

SolverAliasshortest-path-finder

Shortest Path Finder (BFS)

Alias of BFS for single shortest-path extraction.

Alias of: Breadth-First Search (BFS) (same runtime behavior).

Compatible with: Perfect, Loopy, Weave

Time: O(V + E) on unweighted mazesSpace: O(V)

How It Works

  1. Delegate solving to Breadth-First Search (BFS).
  2. Expand in layers from start over open neighbors.
  3. Reconstruct one shortest path from parent links when goal is reached.

Pros

  • Clear, search-friendly naming
  • Exactly matches BFS shortest-path behavior

Cons

  • Not a distinct implementation
  • Behavior follows BFS changes

Best use: Users who want BFS behavior under a shortest-path-centric label.

Interesting fact: In this codebase, shortest-path-finder is a direct alias of bfs.

SolverAdvancedq-learning

Q-Learning (RL)

Learns action values through repeated episodes, then follows the learned greedy policy.

Compatible with: Perfect, Loopy, Weave

Time: Training-dependent; typically much higher than graph searchSpace: O(V * A)

How It Works

  1. Initialize a Q-table for state-action values.
  2. Run episodes with epsilon-greedy exploration.
  3. Update Q-values from rewards and bootstrapped future estimates.
  4. After training, follow highest-valued actions from start to goal.
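A compact tabular Q-learning sketch of the loop above, under illustrative assumptions (actions are "move to an open neighbor", a small goal reward with a step penalty, hypothetical parameter defaults):

```python
import random

def q_learn(neighbors, start, goal, episodes=400, alpha=0.5, gamma=0.9,
            eps=0.2, max_steps=100, seed=0):
    rng = random.Random(seed)
    Q = {}                                    # (state, action) -> value; action = next cell
    for _ in range(episodes):
        s = start
        for _ in range(max_steps):
            acts = neighbors(s)
            if rng.random() < eps:            # epsilon-greedy exploration
                a = rng.choice(acts)
            else:
                a = max(acts, key=lambda n: Q.get((s, n), 0.0))
            r = 1.0 if a == goal else -0.01   # goal reward vs. small step penalty
            future = 0.0 if a == goal else max(
                Q.get((a, n), 0.0) for n in neighbors(a))
            q = Q.get((s, a), 0.0)
            Q[(s, a)] = q + alpha * (r + gamma * future - q)
            s = a
            if s == goal:
                break
    # After training, follow the highest-valued action from each state.
    path, s = [start], start
    while s != goal and len(path) <= max_steps:
        s = max(neighbors(s), key=lambda n: Q.get((s, n), 0.0))
        path.append(s)
    return path
```

On small mazes this setup converges to the shortest greedy path; as the Cons note, larger mazes need many more episodes and careful tuning.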

Pros

  • Shows reinforcement-learning behavior
  • Adapts through experience instead of hardcoded heuristic

Cons

  • Needs many episodes
  • No strict shortest-path guarantee without careful tuning

Best use: Educational RL comparisons against classical graph-search solvers.

Interesting fact: Unlike BFS/A*, Q-learning can improve policy quality over repeated runs on the same maze distribution.

SolverAdvancedant-colony

Ant Colony Optimization

Uses pheromone-guided stochastic exploration where strong routes reinforce over time.

Compatible with: Perfect, Loopy, Weave

Time: Generation/ant-count dependent; typically highSpace: O(E) pheromone storage

How It Works

  1. Spawn ants that probabilistically follow local pheromone signals.
  2. Let ants explore, backtrack from dead ends, and record successful paths.
  3. Evaporate pheromones globally and reinforce successful trails.
  4. Extract the strongest discovered route as the final path.
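The pheromone loop can be sketched as a minimal ACO variant: ants weight neighbor choices by edge pheromone, dead ends trigger backtracking, and each round evaporates then reinforces. All parameters and names are illustrative assumptions, not the visualizer's tuning.

```python
import random

def ant_colony(neighbors, start, goal, ants=30, rounds=30,
               evap=0.5, deposit=1.0, max_steps=200, seed=1):
    rng = random.Random(seed)
    pher = {}                                   # undirected edge -> pheromone level
    edge = lambda a, b: (min(a, b), max(a, b))
    best = None
    for _ in range(rounds):
        trails = []
        for _ in range(ants):
            path, seen = [start], {start}
            while path and path[-1] != goal and len(path) < max_steps:
                here = path[-1]
                opts = [n for n in neighbors(here) if n not in seen]
                if not opts:
                    path.pop()                  # dead end: backtrack
                    continue
                w = [pher.get(edge(here, n), 0.1) for n in opts]
                nxt = rng.choices(opts, weights=w)[0]
                seen.add(nxt)
                path.append(nxt)
            if path and path[-1] == goal:
                trails.append(path)
                if best is None or len(path) < len(best):
                    best = path                 # strongest discovered route so far
        for e in pher:                          # global evaporation
            pher[e] *= (1 - evap)
        for path in trails:                     # shorter trails deposit more per edge
            for a, b in zip(path, path[1:]):
                e = edge(a, b)
                pher[e] = pher.get(e, 0.0) + deposit / len(path)
    return best
```

Depositing `deposit / len(path)` per edge is what biases the colony toward shorter routes over rounds.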

Pros

  • Strong emergent behavior visualization
  • Good metaheuristic comparison baseline

Cons

  • Heuristic/stochastic tuning sensitive
  • No strict optimality guarantee

Best use: Metaheuristic solver demonstrations and exploration/exploitation tradeoff analysis.

Interesting fact: ACO was inspired by real ant foraging behavior where pheromone reinforcement biases collective path selection.

SolverAdvancedgenetic

Genetic Algorithm

Evolves path-encoding chromosomes and keeps fittest candidates across generations.

Compatible with: Perfect, Loopy, Weave

Time: O(G * P * L), generations * population * chromosome lengthSpace: O(P * L)

How It Works

  1. Generate an initial random population of move chromosomes.
  2. Simulate each chromosome and score fitness by goal reachability and path quality.
  3. Keep elite candidates and produce offspring with crossover/mutation.
  4. Repeat until convergence or generation budget, then trace best path.
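A toy version of this loop, under stated assumptions: chromosomes are fixed-length move strings over N/S/E/W, moves into walls are ignored, and fitness rewards ending near the goal with a short path. Names, encoding, and hyperparameters are hypothetical.

```python
import random

MOVES = {'N': (0, -1), 'S': (0, 1), 'E': (1, 0), 'W': (-1, 0)}

def simulate(chromo, cells, start, goal):
    """Walk the move string; skip blocked moves; stop early at the goal."""
    pos, path = start, [start]
    for gene in chromo:
        dx, dy = MOVES[gene]
        nxt = (pos[0] + dx, pos[1] + dy)
        if nxt in cells:
            pos = nxt
            path.append(pos)
        if pos == goal:
            break
    return path

def fitness(chromo, cells, start, goal):
    end = simulate(chromo, cells, start, goal)[-1]
    dist = abs(end[0] - goal[0]) + abs(end[1] - goal[1])
    return -dist - 0.01 * len(chromo)           # closer to goal is fitter

def genetic_solve(cells, start, goal, pop=40, length=20, gens=60, seed=2):
    rng = random.Random(seed)
    genes = list(MOVES)
    popl = [[rng.choice(genes) for _ in range(length)] for _ in range(pop)]
    for _ in range(gens):
        popl.sort(key=lambda c: fitness(c, cells, start, goal), reverse=True)
        elite = popl[:pop // 4]                 # keep the fittest quarter
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, length)
            child = a[:cut] + b[cut:]           # one-point crossover
            child[rng.randrange(length)] = rng.choice(genes)   # point mutation
            children.append(child)
        popl = elite + children
    best = max(popl, key=lambda c: fitness(c, cells, start, goal))
    return simulate(best, cells, start, goal)
```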

Pros

  • Handles large noisy search spaces
  • Shows exploration vs exploitation behavior

Cons

  • No strict optimality guarantee
  • Performance depends on hyperparameters

Best use: Metaheuristic comparisons and stochastic solver experiments.

Interesting fact: Genetic solvers trade deterministic guarantees for adaptive search pressure shaped by a fitness function.

SolverAdvancedrrt-star

RRT* (Grid Approximation)

Builds a sampled tree from start and locally rewires for lower-cost parent chains.

Compatible with: Perfect, Loopy, Weave

Time: Sampling-dependent; often near O(K * d) for K expansionsSpace: O(V) tree state

How It Works

  1. Initialize a search tree rooted at start.
  2. Repeatedly sample targets and expand the tree toward promising frontier nodes.
  3. Rewire nearby nodes when the new node yields lower path cost.
  4. Once goal is connected (or budget is exhausted), trace tree path to goal.
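A simplified grid approximation of the loop above: sample a target (with goal bias), step the nearest tree node one cell toward it, and let adjacent tree nodes adopt the new node when that lowers their cost. This is a sketch under assumed parameters, not the visualizer's implementation.

```python
import random

def neighbors4(c):
    x, y = c
    return [(x+1, y), (x-1, y), (x, y+1), (x, y-1)]

def rrt_star_grid(cells, start, goal, iters=2000, seed=3):
    rng = random.Random(seed)
    parent, cost = {start: None}, {start: 0}
    cell_list = sorted(cells)
    for _ in range(iters):
        target = goal if rng.random() < 0.1 else rng.choice(cell_list)
        # Nearest tree node to the sampled target (Manhattan metric).
        near = min(parent, key=lambda c: abs(c[0]-target[0]) + abs(c[1]-target[1]))
        # Extend one grid cell from `near` toward the target.
        dx = (target[0] > near[0]) - (target[0] < near[0])
        dy = (target[1] > near[1]) - (target[1] < near[1])
        step = (near[0] + dx, near[1]) if dx else (near[0], near[1] + dy)
        if step not in cells:
            continue
        new_cost = cost[near] + 1
        if step not in parent or new_cost < cost[step]:
            parent[step], cost[step] = near, new_cost
        # Rewire: adjacent tree nodes adopt `step` as parent if that is cheaper.
        for n in neighbors4(step):
            if n in parent and cost[step] + 1 < cost[n]:
                parent[n], cost[n] = step, cost[step] + 1
    if goal not in parent:
        return None                             # budget exhausted before connecting
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = parent[node]
    return path[::-1]
```

Because costs only ever decrease along parent chains, rewiring cannot introduce cycles, so the final trace always terminates.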

Pros

  • Anytime-style sampled exploration
  • Natural bridge toward robotics planning concepts

Cons

  • Approximation on discrete grids
  • Requires iteration budget / fallback policy

Best use: Bringing continuous-space planning ideas into grid maze comparisons.

Interesting fact: RRT* is asymptotically optimal in continuous settings; grid adaptations mimic that behavior with discrete rewiring.

SolverAdvancedphysarum

Physarum (Slime Mold)

Adapts edge conductivities from pressure-driven flow until a dominant route emerges.

Compatible with: Perfect, Loopy, Weave

Time: O(I * (V + E)) for I flow iterationsSpace: O(V + E)

How It Works

  1. Initialize traversable edges with equal conductivity and terminal pressures.
  2. Relax node pressures with conductivity-weighted averaging.
  3. Compute edge flow and update conductivity via growth/decay dynamics.
  4. Repeat until values stabilize.
  5. Extract the path by following strongest conductivities (with fallback pathfinding).
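The core flow loop can be sketched on an explicit node/edge graph. The update rule here (decay half, grow with absolute flow) and the parameter choices are illustrative assumptions; path extraction would then follow the strongest surviving conductivities.

```python
def physarum(nodes, edge_list, start, goal, iters=300):
    cond = {e: 1.0 for e in edge_list}          # equal initial tube conductivities
    adj = {n: [] for n in nodes}
    for a, b in edge_list:
        adj[a].append(b)
        adj[b].append(a)
    p = {n: 0.5 for n in nodes}                 # neutral pressures...
    p[start], p[goal] = 1.0, 0.0                # ...with fixed terminal pressures
    ekey = lambda a, b: (min(a, b), max(a, b))
    for _ in range(iters):
        for n in nodes:                         # conductivity-weighted relaxation
            if n in (start, goal):
                continue
            num = sum(cond[ekey(n, m)] * p[m] for m in adj[n])
            den = sum(cond[ekey(n, m)] for m in adj[n])
            if den > 0:
                p[n] = num / den
        for (a, b), d in list(cond.items()):    # growth/decay from edge flow
            flow = d * abs(p[a] - p[b])
            cond[(a, b)] = 0.5 * d + flow
    return cond
```

On a graph with a two-edge route and a three-edge detour between the terminals, the per-edge pressure drop is larger on the short route, so its tubes persist while the detour's conductivities decay toward zero.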

Pros

  • Biologically inspired dynamics
  • Rich iterative visualization
  • Handles all topologies

Cons

  • Iteration-heavy
  • Path extraction is heuristic without fallback

Best use: Visualizing adaptive network optimization on maze graphs.

Interesting fact: Real Physarum organisms can approximate shortest paths by reallocating protoplasmic tube thickness.

SolverAdvancedelectric-circuit

Electric Circuit (Resistor Network)

Solves maze voltages by Laplace relaxation and traces a descending-voltage route to the goal.

Compatible with: Perfect, Loopy, Weave

Time: O(I * (V + E))Space: O(V)

How It Works

  1. Set V(start)=1 and V(goal)=0 with neutral initialization elsewhere.
  2. Run Gauss-Seidel voltage sweeps over non-terminal nodes.
  3. Stop when maximum change is below epsilon or iteration cap is hit.
  4. Interpret voltage gradient as flow potential.
  5. Trace path by steepest local voltage drop (with fallback pathfinding).
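The relaxation and trace steps can be sketched as below, assuming the maze is a set of open grid cells; `eps` and the iteration cap are illustrative defaults.

```python
def neighbors4(c):
    x, y = c
    return [(x+1, y), (x-1, y), (x, y+1), (x, y-1)]

def solve_voltages(cells, start, goal, eps=1e-6, max_iters=5000):
    v = {c: 0.5 for c in cells}                 # neutral initialization
    v[start], v[goal] = 1.0, 0.0                # fixed terminal voltages
    for _ in range(max_iters):
        delta = 0.0
        for c in cells:                         # one Gauss-Seidel sweep
            if c in (start, goal):
                continue
            nbrs = [n for n in neighbors4(c) if n in cells]
            new = sum(v[n] for n in nbrs) / len(nbrs)
            delta = max(delta, abs(new - v[c]))
            v[c] = new
        if delta < eps:                         # converged below tolerance
            break
    return v

def trace_descent(cells, v, start, goal):
    """Follow the steepest local voltage drop from start toward the goal."""
    path, c = [start], start
    while c != goal and len(path) <= len(cells):
        c = min((n for n in neighbors4(c) if n in cells), key=v.get)
        path.append(c)
    return path
```

Because the converged voltages are (discretely) harmonic, every non-terminal cell has a strictly lower neighbor on flow-carrying corridors; dead-end branches go flat, which is why the real solver keeps a fallback pathfinder.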

Pros

  • Smooth gradient visualization
  • Deterministic relaxation process

Cons

  • Needs many sweeps on larger mazes
  • Greedy extraction is heuristic

Best use: Potential-field style solver comparisons against discrete graph search.

Interesting fact: Voltage solutions on resistor networks are tightly linked to random walk probabilities.

SolverResearch Coreida-star

IDA* (Iterative Deepening A*)

Runs depth-first f-bound searches with increasing thresholds using an explicit stack.

Compatible with: Perfect, Loopy, Weave

Time: Exponential worst-case; practical cost depends on heuristic qualitySpace: O(depth)

How It Works

  1. Initialize threshold with start heuristic to goal.
  2. Perform DFS limited by f=g+h threshold.
  3. Record the minimum exceeded f as next iteration threshold.
  4. Restart with increased bound while reusing lightweight stack state.
  5. Reconstruct the path once the goal is reached within the current bound.
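The threshold loop with an explicit stack can be sketched as follows (unit edge costs and a `neighbors`/`h` interface are assumptions for illustration):

```python
def ida_star(start, goal, neighbors, h):
    bound = h(start)                            # initial threshold from heuristic
    while True:
        stack = [(start, iter(neighbors(start)))]
        on_path = {start}                       # cells on the current DFS path
        g = {start: 0}
        min_excess = float('inf')
        while stack:
            cell, it = stack[-1]
            if cell == goal:
                return [c for c, _ in stack]    # the stack IS the path
            nxt = next(it, None)
            if nxt is None:                     # children exhausted: backtrack
                stack.pop()
                on_path.discard(cell)
                continue
            if nxt in on_path:
                continue
            f = g[cell] + 1 + h(nxt)
            if f > bound:
                min_excess = min(min_excess, f) # candidate for the next threshold
                continue
            g[nxt] = g[cell] + 1
            on_path.add(nxt)
            stack.append((nxt, iter(neighbors(nxt))))
        if min_excess == float('inf'):
            return None                         # nothing left above the bound
        bound = min_excess                      # restart with the raised bound
```

Note how only the current DFS path is stored, which is where the O(depth) memory bound comes from.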

Pros

  • Optimal with admissible heuristic
  • Memory-light compared to A*

Cons

  • May re-expand nodes across iterations
  • Can be slower than A* on broad graphs

Best use: Optimal solving when memory pressure matters more than re-expansion cost.

Interesting fact: IDA* keeps A*-style optimality guarantees while using DFS-like memory.

SolverAdvancedpotential-field

Artificial Potential Field

Moves greedily down the local potential, with random escapes from local-minimum traps.

Compatible with: Perfect, Loopy, Weave

Time: O(K * d) for K steps and local degree dSpace: O(V)

How It Works

  1. Score candidate neighbors by attraction to goal plus repulsion penalties.
  2. Move toward lowest potential neighbor.
  3. Track revisit frequency to detect stagnation.
  4. Inject short random escapes when trapped.
  5. Build and mark discovered path when goal is reached.
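A minimal sketch of this loop, with assumed scoring (Manhattan attraction plus a revisit-count repulsion) and an assumed stagnation threshold; the real solver's weights will differ.

```python
import random

def neighbors4(c):
    x, y = c
    return [(x+1, y), (x-1, y), (x, y+1), (x, y-1)]

def potential_field(cells, start, goal, max_steps=400, seed=4):
    rng = random.Random(seed)
    visits = {start: 1}                          # revisit frequency per cell
    path, pos = [start], start
    while pos != goal and len(path) < max_steps:
        opts = [n for n in neighbors4(pos) if n in cells]
        def potential(n):
            attract = abs(n[0] - goal[0]) + abs(n[1] - goal[1])
            repulse = 2 * visits.get(n, 0)       # penalize cells seen before
            return attract + repulse
        if visits.get(pos, 0) > 3:               # stagnation: short random escape
            pos = rng.choice(opts)
        else:
            pos = min(opts, key=potential)       # move to the lowest potential
        visits[pos] = visits.get(pos, 0) + 1
        path.append(pos)
    return path
```

Without the revisit penalty and random escape, this greedy rule stalls in any pocket whose exit points away from the goal, which is exactly the local-minimum failure the entry describes.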

Pros

  • Fast local decisions
  • Simple to animate
  • Demonstrates local-minimum behavior

Cons

  • Heuristic only
  • Can fail without escape strategy

Best use: Comparing local-gradient heuristics versus global graph search.

Interesting fact: Potential-field planning originated in robotics for real-time obstacle avoidance.

SolverResearch Corefrontier-explorer

Frontier-Based Exploration

Simulates incremental map discovery: explore frontier boundaries, then navigate once goal is known.

Compatible with: Perfect, Loopy, Weave

Time: O(K * (V + E)) in repeated BFS replansSpace: O(V)

How It Works

  1. Start with partial knowledge around the initial position.
  2. Identify frontier cells bordering unknown space.
  3. Plan to nearest frontier over known traversable graph.
  4. Move one step, reveal neighbors, and update frontier.
  5. When goal becomes known, switch to direct navigation and trace final path.
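The discovery loop above can be sketched as follows; the one-cell visibility radius and helper names are illustrative assumptions, not the visualizer's model.

```python
from collections import deque

def neighbors4(c):
    x, y = c
    return [(x+1, y), (x-1, y), (x, y+1), (x, y-1)]

def bfs_next_step(known, src, targets):
    """First move along a shortest known-map path to the nearest target."""
    parent = {src: None}
    q = deque([src])
    while q:
        c = q.popleft()
        if c in targets and c != src:
            while parent[c] != src:              # rewind to the first step
                c = parent[c]
            return c
        for n in neighbors4(c):
            if n in known and n not in parent:
                parent[n] = c
                q.append(n)
    return None

def frontier_explore(cells, start, goal):
    known = {start}
    pos, route = start, [start]
    def reveal(c):                               # discover adjacent maze cells
        for n in neighbors4(c):
            if n in cells:
                known.add(n)
    reveal(pos)
    while goal not in known:
        # Frontier: known cells still bordering unknown space.
        frontier = {c for c in known
                    if any(n in cells and n not in known for n in neighbors4(c))}
        if not frontier:
            return None                          # nothing left to discover
        step = bfs_next_step(known, pos, frontier)
        if step is None:
            return None
        pos = step                               # move one step, then re-scan
        reveal(pos)
        route.append(pos)
    while pos != goal:                           # goal known: navigate directly
        pos = bfs_next_step(known, pos, {goal})
        route.append(pos)
    return route
```

Replanning after every single step is the simplest policy; real exploration systems usually commit to a whole frontier-bound plan before replanning, trading freshness for fewer BFS calls.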

Pros

  • Strong exploration narrative
  • Deterministic and complete on connected mazes

Cons

  • Repeated replanning overhead
  • More state than simple shortest-path solvers

Best use: Fog-of-war style discovery visuals and robotics-inspired exploration behavior.

Interesting fact: Frontier exploration is widely used in autonomous mapping systems for unknown environments.