I take several approaches, often in combination:
- Determine the test fixture — what do I need to set up to actually debug or test? How do I isolate the fixture? How can it be automated so I’m not clicking through the UI?
- Write the test first. I actually do this very rarely because I find it hard to express the expected results ahead of time. It’s only when I write the actual code that I think, OK, this needs to be tested, that needs to be tested. Also, when dealing with new technologies, I often don’t know what the format of the data is until I get the hardware or API working — what does the card scanner actually give me, what does the JSON actually look like?
- Step through the code. I do this almost all the time to make sure I correctly understand the tech I’m interfacing with.
- Write the test after writing the code. For certain things, I definitely do this. Once the baseline code is working, I can throw different scenarios at it — useful scenarios that represent anticipated use cases, not just mindless contract testing or the like.
- Debug the tests.
- Try out the code via the UI. Funny thing is, that often reveals things I didn’t consider.
- Put the app in front of someone else. Not so funny is seeing how other people go about using the app, and the bugs that are revealed in that process. Not just algorithmic bugs, but also UX bugs — if the user experience sucks, I consider that a bug.
- Put the app in front of select customers. Very revealing, and the best way to discover that the spec itself is buggy.
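The fixture and test-after points above can be sketched with the standard library’s `unittest`. Everything here is hypothetical — `parse_scan` and the `"id|name"` scanner format are invented stand-ins for whatever the hardware actually returns:

```python
import unittest

# Hypothetical baseline code under test: parse the raw string a card
# scanner might emit. The "id|name" format is an invented stand-in.
def parse_scan(raw: str) -> dict:
    ident, name = raw.strip().split("|", 1)
    return {"id": ident, "name": name}

class ParseScanTests(unittest.TestCase):
    def setUp(self):
        # The fixture: canned scanner output, set up automatically on
        # every run -- no clicking through the UI to produce a scan.
        self.raw = "12345|Alice Example"

    def test_happy_path(self):
        self.assertEqual(parse_scan(self.raw),
                         {"id": "12345", "name": "Alice Example"})

    # Scenario tests written after the baseline worked: anticipated
    # real-world inputs, not mindless contract testing.
    def test_trailing_newline_from_hardware(self):
        self.assertEqual(parse_scan("12345|Alice Example\n")["name"],
                         "Alice Example")

    def test_name_containing_delimiter(self):
        # split("|", 1) keeps everything after the first pipe intact
        self.assertEqual(parse_scan("9|O'Brien|Jr")["name"], "O'Brien|Jr")

if __name__ == "__main__":
    unittest.main(exit=False)
```

Because the fixture is just data handed to `setUp`, the whole suite runs headlessly with `python -m unittest`, which is the point of isolating it from the UI in the first place.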
So debugging is not just about code. It’s also about the user experience, about whether the spec accurately captured the user’s needs, and about understanding how stuff is used in the wild.