Monday, April 20, 2026 · 5 min read

How gamification fixes stall cleaning quality, not just speed

stall cleaning · design

The first version of any “gamified” barn cleaning app makes the same mistake: it tracks stalls per shift. The leaderboard rewards whoever finishes fastest. Within a week, your top cleaner is rushing through twelve stalls and leaving manure in the corners, while your conscientious cleaner does six perfectly and feels punished.

The real metric is throughput at acceptable quality. The challenge is measuring quality without making the manager type a paragraph after every cleaning. Here's how we solved it.

The chain

A worker can't start a cleaning without an active shift. They can't start the timer without taking a Before photo of the stall. They can't submit without taking an After photo. The result is two photos for every cleaning, taken from roughly the same angle by the same person. The phone's native camera does the work — we don't need any clever AI.
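
Concretely, the chain is just two guards on two transitions. Here's a rough sketch; the type, field, and function names (CleaningDraft, beforePhotoUrl, and so on) are illustrative, not the app's actual schema:

```typescript
// Illustrative shape of an in-progress cleaning. All names are assumptions.
interface CleaningDraft {
  stallId: string;
  workerId: string;
  beforePhotoUrl?: string; // set when the Before photo is taken
  afterPhotoUrl?: string;  // set when the After photo is taken
  startedAt?: Date;
  submittedAt?: Date;
}

// The timer can't start without an active shift and a Before photo.
function startCleaning(draft: CleaningDraft, shiftIsActive: boolean): void {
  if (!shiftIsActive) throw new Error("Clock in before starting a cleaning.");
  if (!draft.beforePhotoUrl) throw new Error("Take a Before photo first.");
  draft.startedAt = new Date();
}

// Submission requires an After photo, so every cleaning ends with a photo pair.
function submitCleaning(draft: CleaningDraft): void {
  if (!draft.startedAt) throw new Error("Start the cleaning before submitting.");
  if (!draft.afterPhotoUrl) throw new Error("Take an After photo first.");
  draft.submittedAt = new Date();
}
```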

The cleaning row lands in your Review Queue with status pending_review. You see Before and After side by side. You tap a star (1 to 5). That's it. The status flips to rated. The whole rating action is one tap and about three seconds per stall, and it leaves you a defensible record of quality that survives the worker leaving.
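
The review step can be just as small in code. In this sketch, only the status values pending_review and rated come from the flow above; the record shape and function name are assumptions:

```typescript
// The two statuses named in the flow; everything else is illustrative.
type ReviewStatus = "pending_review" | "rated";
type Stars = 1 | 2 | 3 | 4 | 5;

interface CleaningRecord {
  id: string;
  beforePhotoUrl: string;
  afterPhotoUrl: string;
  status: ReviewStatus;
  rating?: Stars;
}

// One tap from the Review Queue: store the stars, flip the status.
function rateCleaning(record: CleaningRecord, stars: Stars): void {
  if (record.status !== "pending_review") {
    throw new Error("This cleaning has already been rated.");
  }
  record.rating = stars;
  record.status = "rated";
}
```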

How quality feeds the leaderboard

The leaderboard sorts by stalls completed in a window, counting only cleanings rated at acceptable quality toward the total; the secondary sort is average quality rating, and the tertiary sort is average time. So a cleaner who rushes 15 stalls at a 3.0-star average can rank below one who finishes 12 at 4.8. In practice this nudges everyone toward the middle of the speed/quality tradeoff: fast-and-sloppy gets called out by their rating average; slow-and-perfect feels rewarded but is also incentivized to find efficiency.
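
Here's a sketch of how that ranking could be computed. The 4-star cutoff for “acceptable quality,” and every field name below, are assumptions for illustration, not the app's actual values:

```typescript
// Assumption: a cleaning counts toward the leaderboard total at 4 stars or better.
const QUALITY_BAR = 4;

interface RatedCleaning {
  workerId: string;
  rating: number;  // 1–5 stars from the manager
  minutes: number; // timer duration for the cleaning
}

interface LeaderboardRow {
  workerId: string;
  qualifyingStalls: number;
  avgRating: number;
  avgMinutes: number;
}

function rankLeaderboard(cleanings: RatedCleaning[]): LeaderboardRow[] {
  // Group the window's rated cleanings by worker.
  const byWorker = new Map<string, RatedCleaning[]>();
  for (const c of cleanings) {
    const list = byWorker.get(c.workerId) ?? [];
    list.push(c);
    byWorker.set(c.workerId, list);
  }

  const rows: LeaderboardRow[] = [...byWorker.entries()].map(([workerId, list]) => ({
    workerId,
    qualifyingStalls: list.filter((c) => c.rating >= QUALITY_BAR).length,
    avgRating: list.reduce((sum, c) => sum + c.rating, 0) / list.length,
    avgMinutes: list.reduce((sum, c) => sum + c.minutes, 0) / list.length,
  }));

  return rows.sort(
    (a, b) =>
      b.qualifyingStalls - a.qualifyingStalls || // primary: stalls at acceptable quality
      b.avgRating - a.avgRating ||               // secondary: higher average rating
      a.avgMinutes - b.avgMinutes                // tertiary: less time per stall
  );
}
```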

We resisted the urge to bake quality into the pay calculation directly. Pay is hours × hourly-rate or stalls × stall-rate, whichever is higher. Quality affects rank, not paycheck. Mixing the two creates more conflict than it solves — workers will argue every star.
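
The pay rule fits in one line; the field names here are hypothetical:

```typescript
// Pay is the better of hourly pay and piece-rate pay; stars never touch it.
interface PayInput {
  hoursWorked: number;
  stallsCleaned: number;
  hourlyRate: number; // dollars per hour
  stallRate: number;  // dollars per stall
}

function shiftPay({ hoursWorked, stallsCleaned, hourlyRate, stallRate }: PayInput): number {
  return Math.max(hoursWorked * hourlyRate, stallsCleaned * stallRate);
}
```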

What we don't do

We don't auto-grade with image classifiers. We tried it. Manure detection from a phone photo is solvable but unreliable enough that one false positive (a clean stall flagged as dirty) makes the worker furious for a week. The 3-second human tap is faster than chasing edge cases.

We don't timestamp the stars to track manager review speed. That would gamify your work and create a different kind of pathology.

We don't show the worker their Before and After photos in the app until the cleaning has been rated. Showing them earlier creates a feedback loop where they retake After photos until they look right, defeating the point.

The result

After a few weeks of running this, the natural distribution shows up: top cleaners are fast and average 4.5 stars or better. Bottom cleaners are usually fast too, but they consistently rate 3.0 or lower, and they're the conversation you needed to have anyway; now you have evidence. Quality goes up because workers know the photos get looked at, and the manager actually looks at them because it only takes three seconds.

Gamification done badly is just a faster path to bad work. The trick is picking the metric that matches the goal. For stall cleaning the goal is clean stalls — so the metric is photo evidence + a star, not the number on a stopwatch.
