A decade-old folder, handwritten notes, and a deceptively simple robot.
Introduction
Wrapping up a third personal fun project in two months? Check!! And this is the longest-standing one, and possibly one of my favourites ever. It goes back to when I was barely past my first steps exploring both Python and Computer Science. This project was fun because it had to do with solving puzzles. I am happy to share it with you, my readers, today.
If you’ve ever watched a Roomba bump into a wall, spin around, and trundle off in a seemingly random direction, you’ve witnessed a real-world version of the problem I’m about to describe. How does a robot that can only sense what’s immediately around it — no map, no memory of where it’s been, no grand plan — manage to cover every square inch of a room?
In January 2015, I was working through Harvey Mudd College’s “CS for All” materials on my own — no live instruction, no solutions to check against — and I encountered Picobot: a simulated robot even simpler than a Roomba. Picobot became one of my favourite puzzles. I scribbled diagrams, wrote copious notes, tested rules, and eventually optimized my solutions down to what I believed was the minimum number of rules needed to cover the whole room. I kept everything in a well-worn file folder. This was my very first serious foray into CS, and I loved it!
That folder has survived multiple reorganizations over the years – every once in a while I’d open it, think about writing it up properly, and close it again. But after positive experiences wrapping up projects collaboratively with Claude — the colormap app, the Mill’s Methods post — Picobot was next in line.
With the help of Claude Opus 4.5, I verified those old solutions, built a Python simulator, and finally documented the work properly.
This post is about the optimization journey. The reasoning. The moments when things click.
What is Picobot?
Picobot is a pedagogical robot created for Harvey Mudd’s introductory computer science course. It lives in a grid world and has one job: visit every empty cell. The catch? Picobot is nearly blind.
The Constraints
Picobot can only sense its four immediate neighbours: North, East, West, and South. For each direction, it knows one thing: is there a wall, or is it empty? That’s it. No memory of where it’s been. No coordinates. No global view.
Here’s an example of what Picobot “sees”:
  N
W ● E   ← Picobot sees: N=empty, E=wall, W=empty, S=empty
  S
We encode this as a 4-character string: xExx
- x means empty (nothing there)
- N, E, W, or S means wall in that direction
- Position order is always: North, East, West, South
So xExx means “wall to the East, everything else empty.”
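As a quick illustration, this encoding takes only a couple of lines of Python. This is a sketch of my own; the function name is hypothetical, not from the lab materials:

```python
def encode(n_wall, e_wall, w_wall, s_wall):
    """Turn four wall sensors into Picobot's NEWS surroundings string."""
    sensors = (n_wall, e_wall, w_wall, s_wall)
    return "".join(letter if wall else "x"
                   for letter, wall in zip("NEWS", sensors))

print(encode(False, True, False, False))  # wall to the East only → "xExx"
```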
The Rules
Picobot follows rules that say: “If I’m in this state and I see this pattern, then move this direction and switch to this state.”
The format is:
STATE SURROUNDINGS -> MOVE NEW_STATE
For example:
0 Nx** -> E 1
This means: “In State 0, if there’s a wall to the North and East is empty, move East and switch to State 1.”
The wildcard * matches anything:
0 x*** -> N 0
“In State 0, if North is empty (don’t care about the rest), move North and stay in State 0.”
There’s also a special move: X (stay put). The robot doesn’t move but can change state. This seems useless at first. It’s not.
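The wildcard matching itself is simple enough to sketch in Python (a hypothetical helper, not the lab’s own code):

```python
def matches(pattern, surroundings):
    """True if a 4-character rule pattern matches the surroundings string.
    '*' matches anything; any other character must agree exactly."""
    return all(p == "*" or p == s for p, s in zip(pattern, surroundings))

print(matches("Nx**", "Nxxx"))  # True: wall North, East open
print(matches("Nx**", "NExx"))  # False: East is a wall, pattern needs it open
```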
The Goal
Write the smallest set of rules that makes Picobot visit every empty cell in a room, regardless of where it starts.
The Harvey Mudd Picobot lab posed two main challenges, below, and several optional ones.


- Empty Room: A rectangular room with walls only on the boundary
- Maze: A maze with single-cell-wide corridors
The lab simulator is actually still live at https://www.cs.hmc.edu/picobot/
Give it a shot, it’s fun!

Back to the story.
The Empty Room: From 7 to 6 Rules
The Strategy: Boustrophedon
The word comes from Greek: “ox-turning.” It’s how you plow a field — go one direction, turn around at the end, come back the other way. It’s how you mow a lawn. It’s even how you write one line of text, then the next (if you are Etruscan).
For Picobot, the boustrophedon pattern looks like this:

The robot sweeps East, drops down, sweeps West, drops down, repeats. But first, it needs to get to the top of the room — so it goes North until it hits the wall.
My Initial Solution: January 6, 2015
I have an email I sent to myself at 12:44 AM on January 6, 2015 — working late (on a Tuesday night!!!) on this puzzle. It shows my first experiments:
First experiment: go to origin:
# go to origin
0 **** -> X 3
3 ***x -> S 3
3 ***S -> W 2
2 **x* -> W 2
2 **W* -> X 0
And then my first complete solution:
Final solution program 1
0 x*** -> N 0 # (initial) state 0 with nothing N: go N
0 Nx** -> E 1 # state 0 with a wall N but none E: go E, AND
1 *x** -> E 1 # state 1 with nothing E: go E
# OR, instead of previous 2. This is if initially by E wall
0 NE** -> W 2 # state 0 with a wall N and one E: go W
# once it reaches east wall
1 *E** -> W 2 # state 1 with a wall E: go W
2 **x* -> W 2 # state 2 with nothing W: go W
2 **W* -> S 1 # state 2 with a wall W: go S
That’s 7 rules. The comments show my thinking — I was handling the case where Picobot starts by the East wall separately.
The Harvey Mudd lecture slides posed an extra challenge: “how FEW rules can you use? The current record is six rules.” The solution wasn’t shown — just the target. That became the question that hooked me: how do you get there? I was one rule away.
The Insight: “C and F Are the Same”

My handwritten notes show positions labelled A through F, representing different situations Picobot might encounter. The breakthrough came when I realized:
Position C (just finished going North, need to decide: East or West?) and Position F (at a wall during the sweep, need to decide direction) were being handled by separate rules — but they didn’t need to be.
The key insight: after going North and hitting the wall, I don’t need a separate rule to check East. I can use the X move (stay put) to transition to State 1, and let State 1’s existing rules handle it.
This is counter-intuitive. The X move looks like wasted time — the robot just sits there! But it’s not wasted. It’s a state transition without movement that lets me reuse existing rules instead of duplicating logic.
The Final Solution: January 24, 2015
Eighteen days later, I emailed myself the optimized solution — Saturday, January 24, 2015 at 5:05 PM (weekend fun work):
# Optimized EMPTY ROOM program:
0 x*** -> N 0
0 N*** -> X 1
1 *x** -> E 1
1 *E** -> W 2
2 **x* -> W 2
2 **W* -> S 1
Six rules. Let me walk through why this works:
State 0 handles “going North.” When Picobot hits the North wall, it executes X 1 — stays put but switches to State 1. Now State 1 takes over.
State 1 is dual-purpose:
- If East is empty → go East (continuing the sweep)
- If East is wall → start going West (end of row)
Because Picobot stays put when transitioning from State 0 to State 1, it’s in the exact same position, and State 1 correctly determines whether to go East or start heading West.
State 2 sweeps West. When it hits the West wall, it goes South and switches back to State 1. Again, State 1 determines: East or end of row?
The elegance is that State 1 does double duty. It handles both “continue going East” and “decide what to do at the end of a row.” The X move is what makes this possible.
Verified
I tested this against all 529 possible starting positions in a 25×25 room. Every single one reaches 100% coverage. Maximum steps: 1,013. The solution works.
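The check is easy to reproduce with a throwaway simulator. The sketch below is my own minimal version (independent of the repository code), run here on a smaller 5×5 interior rather than the full 25×25 room:

```python
# A throwaway re-check of the 6-rule empty-room solution on a 5x5
# interior (boundary walls implied by the grid edges).

RULES = [  # (state, pattern, move, next_state)
    ("0", "x***", "N", "0"), ("0", "N***", "X", "1"),
    ("1", "*x**", "E", "1"), ("1", "*E**", "W", "2"),
    ("2", "**x*", "W", "2"), ("2", "**W*", "S", "1"),
]
MOVES = {"N": (-1, 0), "E": (0, 1), "W": (0, -1), "S": (1, 0), "X": (0, 0)}

def surroundings(r, c, h, w):
    # NEWS order: the direction letter if that side is a wall, else 'x'
    return ("N" if r == 0 else "x") + ("E" if c == w - 1 else "x") + \
           ("W" if c == 0 else "x") + ("S" if r == h - 1 else "x")

def covers(start, h=5, w=5, max_steps=1000):
    """Run the rules from cell `start` in State 0; True on full coverage."""
    state, (r, c) = "0", start
    visited = {start}
    for _ in range(max_steps):
        if len(visited) == h * w:
            return True
        env = surroundings(r, c, h, w)
        rule = next((x for x in RULES if x[0] == state and
                     all(p in ("*", s) for p, s in zip(x[1], env))), None)
        if rule is None:
            return False  # no rule applies: Picobot halts short of coverage
        dr, dc = MOVES[rule[2]]
        r, c, state = r + dr, c + dc, rule[3]
        visited.add((r, c))
    return False

# every one of the 25 starting cells should reach 100% coverage
print(all(covers((r, c)) for r in range(5) for c in range(5)))  # prints True
```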



The Maze: From 16 to 12 Rules
The maze challenge is different. Corridors are one cell wide. There are dead ends, branches, and loops. The boustrophedon strategy won’t work here.
The Strategy: Right-Hand Wall Following
The classic maze-solving algorithm: keep your right hand on the wall and walk. You’ll eventually visit everywhere (in a simply-connected maze).
For Picobot, “right hand on wall” translates to:
- If you can turn right, turn right
- Otherwise, if you can go forward, go forward
- Otherwise, if you can turn left, turn left
- Otherwise, turn around (dead end)
With four directions (North, East, West, South) and the “right-hand” rule relative to each, we need four states — one for each direction Picobot is “facing.”
- State 0: Going North (right hand on East wall)
- State 1: Going East (right hand on South wall)
- State 2: Going West (right hand on North wall)
- State 3: Going South (right hand on West wall)
Initial Solution: 16 Rules
The straightforward implementation uses 4 rules per state:
# State 0: Facing North (right hand = East)
0 *x** -> E 1 # Can turn right → turn right (now facing East)
0 xE** -> N 0 # Can't turn right, but forward is open → go North
0 NEx* -> W 2 # Can't go forward either → turn left (face West)
0 NEWx -> S 3 # Dead end → turn around (face South)
# ... and similarly for States 1, 2, 3
16 rules total. It works. But can we do better?
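One hint that we can: the four states are rotations of one another, so the table doesn’t really carry 16 rules’ worth of information. As a sketch (my own construction, not part of the lab, with state numbers following the State 0–3 definitions above), the full table can be generated from a single right/forward/left/back template:

```python
ORDER = "NEWS"                                     # sensor/pattern position order
STATE = {"N": "0", "E": "1", "W": "2", "S": "3"}   # facing direction → state number
# (right, forward, left, back) relative to each facing direction
FRAME = {"N": "ENWS", "E": "SENW", "W": "NWSE", "S": "WSEN"}

def pattern(walls, open_dir):
    """Build a 4-char pattern: `walls` must be blocked, `open_dir` empty."""
    chars = ["*"] * 4
    for d in walls:
        chars[ORDER.index(d)] = d
    chars[ORDER.index(open_dir)] = "x"
    return "".join(chars)

def wall_follower_rules():
    rules = []
    for facing in ORDER:
        right, fwd, left, back = FRAME[facing]
        s = STATE[facing]
        rules.append((s, pattern("", right), right, STATE[right]))               # turn right
        rules.append((s, pattern(right, fwd), fwd, s))                           # go forward
        rules.append((s, pattern(right + fwd, left), left, STATE[left]))         # turn left
        rules.append((s, pattern(right + fwd + left, back), back, STATE[back]))  # turn around
    return rules

for r in wall_follower_rules()[:4]:   # the four State-0 (facing North) rules
    print(f"{r[0]} {r[1]} -> {r[2]} {r[3]}")
```

Generating the rules this way makes the redundancy visible: one template, stamped out four times.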
Two-Phase Optimization
My maze notes show two distinct approaches:
Phase 1: Working from principles. The small diagram in my notes shows me reasoning about the state transitions theoretically. What’s the minimum information needed at each decision point? Where is there redundancy?



Phase 2: Empirical debugging. The large diagram shows positions A through K — specific spots in a maze where I tested rules. When the principled approach hit edge cases, I sketched the situation, walked through it (“what would I do here?”), and translated my intuition into rules.

The note “Key is G” appears on the page. Position G was where the solution got validated — when it handled G correctly, the logic was proven.
The Iteration: A Failed Attempt
That same January 24 email shows me trying to adapt the empty room optimization for the maze — and failing:
This, optimized for maze, does not work. At dead ends it turns around but then it goes to the other end and enters an infinite loop...
The attempt that followed didn’t handle dead ends properly. The robot would turn around, walk to the other end, and loop forever.
The Final Solution
Then, in the same email:
This works!!
0 *x** -> E 1
0 xE** -> N 0
0 NE** -> X 2
1 ***x -> S 3
1 *x*S -> E 1
1 *E*S -> X 0
2 x*** -> N 0
2 N*x* -> W 2
2 N*W* -> X 3
3 **x* -> W 2
3 **Wx -> S 3
3 **WS -> X 1
12 rules: 3 per state instead of 4. A 25% reduction.
The key insight: each state now handles only three cases:
- Right is open → turn right
- Forward is open → go forward
- Both blocked → stay put, rotate to next state (which will check left/behind)
The X move chains states together. If right and forward are blocked, we stay put and try the next state. That state checks its right (our left). If that’s blocked too, it chains again. The sequence continues until we find a way forward.
Verified
Tested against all 287 reachable positions in a 25×25 maze, and all 280 cells in the actual Harvey Mudd lab maze. 100% coverage every time. Here’s one simulation:

The right-hand rule doesn’t just guarantee coverage — it collapses the state space. The rules are ordered to check “right side open” first. In State 0 (facing North), rule 1 asks: is East open? If yes, go East — Picobot never evaluates what’s ahead. That’s how rule ordering implements “keep your hand on the wall.” Different physical positions with the same wall-relationship become equivalent, and that’s what makes 4 states and 12 rules possible. Take a look at the simulations below of the two equivalent positions sketched in my handwritten notes, shown earlier:


Making It Explicit: Starting State Matters
Here’s something worth highlighting — something that’s in the Harvey Mudd lab instructions but easy to overlook.
The 6-rule empty room solution requires Picobot to start in State 0.
The Harvey Mudd simulator always starts in State 0, and the lab materials mention this. Whether I consciously accounted for this in 2015, I don’t remember — I didn’t document it in my notes. But when I built my own simulator in 2025, I could test explicitly: what happens if Picobot starts in State 1 or State 2?
| Start State | Initial Direction | Coverage |
|---|---|---|
| 0 | North | 100% ✓ |
| 1 | East | ~50% ✗ |
| 2 | West | ~45% ✗ |
Starting in State 1 or 2, Picobot gets stuck. It begins the East-West sweep from wherever it starts — never going North to reach the top first. The rows above its starting position never get visited.
This isn’t a bug in the solution. It’s a constraint: the boustrophedon pattern assumes you start by going North. The 6-rule minimum only works because State 0 guarantees that first trip to the top wall.
A truly state-agnostic solution — one that works regardless of starting state — would need more rules. The elegance of 6 rules comes from working within the standard initial conditions.
What I Learned
- The X move is not wasted time. It’s a state transition that enables rule reuse. This is the key to minimizing rule count.
- Different problems, different methods. The empty room yielded to analytical insight (“C and F are the same”). The maze required two phases: principled derivation, then empirical debugging.
- Implicit assumptions matter. The starting state requirement was in the lab materials all along, but easy to overlook. Building my own tools made it explicit.
- Old projects are worth revisiting. With fresh eyes — and some help — I found new ways to understand and share work I already knew.
- How I approached it. Looking back at my notes, I see a pattern that’s familiar from my day-to-day work: diagrams everywhere, positions A-K labeled, “me walking in the maze.” Try something → watch where it fails → sketch that spot → ask “what would I do here?” → translate to rules → repeat. “C and F are the same” collapsed the problem by seeing equivalence the formal notation obscured. The notes weren’t just records — they were how I thought. And 18 days between 7 rules and 6 rules: no rushing, no giving up. This is field scientist methodology applied to computer science. Maybe that’s why I loved it.
- There is no free lunch in AI collaboration. This project — both the technical verification and this blog post — would not have been possible without deep understanding of the subject matter. That understanding came from me (the 2015 work, the insights, the diagrams), from the extensive documentation I’d kept, and from all the iterative work we did together. This isn’t “vanilla coding” where you prompt an AI and get a finished product. It’s genuine collaboration: human insight plus AI execution. The AI didn’t optimize Picobot — I did, in 2015. The AI helped me verify, document, and communicate that work in 2025.
Try It Yourself
The full Python implementation is on GitHub: https://github.com/mycarta/picobot-optimizer
It includes:
- picobot_simulator.py — The core engine
- picobot_rooms.py — Empty room and maze generators
- picobot_visualizer.py — GIF animation creator
- optimized_solutions.py — The 6-rule and 12-rule solutions
- test_solutions.py — Exhaustive verification
All documented and ready to explore.
What’s Next
Part 2: How I revisited this project with AI assistance — and what that collaboration actually looked like.
Part 3: Educational materials. Exercises, concept checks, and scaffolded challenges for those learning to code.
The Picobot simulator was created for Harvey Mudd College’s “CS for All” course. My optimization work is from January 2015. Verification, documentation, and visualization were completed in January 2025 with AI assistance.
AI/HI (Human Intelligence) Transparency Statement
Modified from Brewin
| Has any text been generated using HI? | Yes |
| Has any text been generated using AI? | Yes |
| Has any text been improved or corrected using HI? | Yes |
| Have any methods of analysis been suggested using HI? | Yes |
| Have any methods of analysis been suggested using AI? | Yes |
| Do any analyses utilize AI technologies, such as Large Language Models, for tasks like analyzing, summarizing, or retrieving information from data? | Yes |
Additional context:
The Picobot optimization work described in this post — the solutions, the insights, the handwritten diagrams, the reasoning behind “C and F are the same” and “Key is G” — was done entirely by me in January 2015, working alone through Harvey Mudd’s CS for All materials with no live instruction and no solutions to check against. The emails quoted in this post are timestamped records from that work.
In January 2025, I revisited this project with Claude AI (Anthropic). Claude built the Python simulator, ran exhaustive verification tests, created the GIF visualizations, and helped document the reasoning. The explicit testing of starting states emerged from our joint exploration — I asked the question, Claude ran the tests.
This post was drafted collaboratively. I provided the source materials (my 2015 notes, emails, the verified solutions, our session transcripts), direction, and editorial judgment throughout. Claude drafted based on these inputs and our discussion of structure and framing. I reviewed, revised, and made all final decisions about what went to publication.
A note on AI collaboration: This kind of work is not “vanilla coding” — prompting an AI and receiving a polished output. It required deep domain knowledge (mine), extensive primary documentation (my 2015 notes and emails), iterative correction (many rounds), and genuine intellectual engagement from both sides. The AI contributed too — not the original insights, but meta-insights: recognizing patterns in my notes, naming things I’d done but hadn’t articulated (like “C and F are the same” as a key moment), and seeing that I’d used different methodologies for the empty room versus the maze. The AI did not and could not have done this alone. Neither could I have done the verification, visualization, and documentation at this scale without AI assistance. That’s what real collaboration looks like.
The intellectual work is mine. The documentation, verification, and articulation is collaborative.