Six bugs the AI wrote, and the one it missed
ChronoCrosser is a cannabis breeding simulator I built in an evening. It is a toy. The codebase is small; there is no network, no auth, no backend. I wrote it with heavy AI assistance because I wanted to see how far I could push that style on something fun and low-stakes.
v1.0 shipped to GitHub Pages. v1.0.1 shipped six days later, with six fixes. Here are the ones worth writing down, because in every case the generated code compiled, ran, and looked right. The bugs were only visible if you knew what the output should have been.
The parameter-order bug
A genetics function took two strains and a mutation probability. The AI wrote the call site with the arguments in the wrong order: strain, probability, strain. The function still received three arguments, nothing type-checked them (everything was an object or a number), and there was no runtime error. The output was wrong in a way that only showed up if you tracked trait inheritance manually across ten generations.
I caught it because I could tell a particular trait was not propagating the way it should. Then I had to read the call site carefully to see the swap. A linter would not have caught this. A test suite with fixture strains would have caught it on the first run.
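Here is a hypothetical sketch of the failure mode; the function and field names are invented for illustration, not ChronoCrosser's real API. Defensive fallbacks are exactly what let a swapped call run cleanly, and a fixture test exposes it immediately:

```javascript
// Invented sketch of the bug class, not the actual ChronoCrosser code.
function crossStrains(parentA, parentB, mutationChance) {
  const traits = {};
  for (const key in parentA.traits) {
    const fromA = Math.random() < 0.5;
    // If parentB is accidentally a number, parentB?.traits is undefined and
    // the child silently inherits everything from parentA. No error, wrong output.
    traits[key] = fromA
      ? parentA.traits[key]
      : (parentB?.traits?.[key] ?? parentA.traits[key]);
    if (Math.random() < mutationChance) traits[key] = "mutant";
  }
  return { traits };
}

// Fixture strains that differ in every trait expose the swap on the first run:
// breed them many times and assert parent B's traits ever show up at all.
const pine = { traits: { aroma: "pine" } };
const citrus = { traits: { aroma: "citrus" } };
let sawParentB = false;
for (let i = 0; i < 200; i++) {
  // Correct order. The buggy call shape was crossStrains(pine, 0.01, citrus).
  if (crossStrains(pine, citrus, 0.01).traits.aroma === "citrus") sawParentB = true;
}
```

With the swapped call, `mutationChance` becomes a strain object (so `Math.random() < mutationChance` compares against `NaN` and is always false) and parent B's traits never appear.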
The double mutation chance
A mutation check ran in two places in the breeding pipeline: the AI had added a second call in a new helper without removing the original. The end user saw mutation rates that felt too high. It was not random noise; it was the probability being applied twice, which roughly doubles the effective rate for rare traits.
This one was only visible over hundreds of runs. A casual tester would have shrugged and called it RNG.
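The arithmetic behind "felt too high" is easy to sketch. If the same independent check runs twice at probability p, the chance of at least one mutation is 1 - (1 - p)^2, about 2p when p is small. The rates below are illustrative, not the game's actual tuning:

```javascript
// Chance of at least one mutation when an independent check runs `checks` times.
function effectiveRate(p, checks) {
  return 1 - Math.pow(1 - p, checks);
}

// Illustrative probability, not ChronoCrosser's real mutation rate.
console.log(effectiveRate(0.05, 1).toFixed(4)); // "0.0500"
console.log(effectiveRate(0.05, 2).toFixed(4)); // "0.0975" — nearly double
```

Almost double is exactly the kind of shift that reads as bad luck in any single session and only becomes undeniable over hundreds of runs.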
The missing version display
Small thing, but worth calling out. The AI did not add a version string to the UI. I only noticed after shipping, because users on Discord were reporting bugs from different versions and I could not tell which build they were running. v1.0.1 added a visible version tag in the corner and in the page title.
The lesson: AI output optimizes for shipping the feature, not for the operational needs of the shipped thing. You still have to think about what you need to support the feature six months later.
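For reference, a minimal sketch of the kind of thing v1.0.1 added, assuming a plain static page; the constant name, element id, and injected `doc` parameter are my invention, not the game's actual code:

```javascript
// Hand-maintained version constant; bump it alongside each release tag.
const APP_VERSION = "1.0.1";

// Stamp the version into the page title and a corner tag so bug reports
// can name the build they came from. `doc` is injected for testability.
function showVersion(doc) {
  doc.title = `ChronoCrosser v${APP_VERSION}`;
  const tag = doc.createElement("div");
  tag.id = "version-tag";
  tag.textContent = `v${APP_VERSION}`;
  doc.body.appendChild(tag);
  return tag;
}
```

In the browser you would call `showVersion(document)` once on load; five lines of code, and every Discord screenshot suddenly tells you which build it came from.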
The .nojekyll file
GitHub Pages runs Jekyll by default. Any file or directory that starts with an underscore is treated as private and not served. The project had a _data directory with breeding tables. They worked locally and then did not work on GitHub Pages. The fix is one empty file, named .nojekyll, at the repo root.
I spent twenty minutes diffing local vs. deployed output before I remembered this. The AI did not suggest it because the AI did not know where I was deploying. Context is still the thing the human has to bring.
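The whole fix, for anyone hitting the same wall:

```shell
# An empty .nojekyll file at the repo root tells GitHub Pages to skip Jekyll,
# so underscore-prefixed paths like _data/ are served instead of hidden.
touch .nojekyll
```

Commit and push it like any other file; the next Pages deploy serves the underscore directories as-is.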
Two more and the one I missed
The other two fixes were smaller: a typo in a strain name and an off-by-one in the lineage display. Both were human-level errors that generated code does not particularly help or hurt with.
The bug I am still chasing is in a pheno-stability calculation that looks correct but produces a distribution with too-heavy tails compared to what the original research paper describes. I do not know yet whether the AI misread the paper, whether I misread the AI, or whether the paper itself has an error. I will figure it out and ship it as v1.0.2. That is the deliverable.
What this session taught me
AI-assisted is not AI-only. Every line of code on this project passes through a human who has to vouch for it. The six fixes in v1.0.1 are the proof that I did the vouching, not just the prompting.
That is the thing I want people looking at my work to notice. The human caught these. The human can explain why. The artifact you are looking at had a mind in the loop.
The game is playable at massentropy.github.io/chronocrosser. Source at github.com/massEntropy/chronocrosser.