Dissecting Some Anti-TDD Statements

"You can't use TDD unless you know the requirements"

Requirements Are Bogus

The requirements are wrong. Your understanding of them is wrong. The user's needs will change as you develop and deliver. Repeat.

https://medium.com/defense-unicorns/5-minute-devops-the-three-wrongs-6c660f1287e7

Delivering Requirements is Not What We Do Here

Your success as a dev is whether you deliver software that's valuable to the user, not whether you built to the requirements. Your job is also to help understand and redefine those requirements as time goes on.

"TDD is pointless because when things change, you have to change the tests, or even delete them + the code. Why do twice the amount of work for zero point?"

Test What it Does, Not How it Does It

Testing behavior means you can refactor the code, and the tests don't have to change. Testing just behavior and not implementation details is harder than most testing zealots make it out to be. You can, however, get better with practice. So practice.
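
To make that concrete, here's a minimal sketch in TypeScript with Vitest (the applyDiscount function and its rounding helper are invented for illustration). The first test asserts only behavior; the second couples itself to internals and will break on a harmless refactor.

```typescript
import { describe, it, expect, vi } from "vitest";

// Hypothetical code under test, inlined here so the example is self-contained.
const internals = {
  roundToCents: (n: number) => Math.round(n * 100) / 100,
};

function applyDiscount(cart: { total: number }): { total: number } {
  const discounted = cart.total > 100 ? cart.total * 0.9 : cart.total;
  return { total: internals.roundToCents(discounted) };
}

describe("applyDiscount", () => {
  // Behavior test: asserts only on inputs and outputs. The internal math,
  // helpers, and data structures can all change and this still passes.
  it("takes 10% off orders over $100", () => {
    expect(applyDiscount({ total: 200 })).toEqual({ total: 180 });
  });

  // Implementation test (the kind to avoid): asserts HOW the work was done.
  // Renaming or inlining roundToCents breaks it even though behavior is identical.
  it("calls the rounding helper", () => {
    const roundSpy = vi.spyOn(internals, "roundToCents");
    applyDiscount({ total: 50 });
    expect(roundSpy).toHaveBeenCalled(); // brittle: coupled to internals
  });
});
```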

Visualize the API, Then Test It Into Existence

Anti-TDD devs will "explore the problem through writing code" and eventually arrive at something that feels right. Testing that code afterwards can be hard, unless they're one of those rare devs who can write testable code without writing the tests first.

You can arrive at the same design testing first, without risking code that's hard to test or has low coverage. While the tests and types you write help guide the design, they too are not infallible or immutable. Doesn't feel right? Don't like your design in a particular part? Change it. The types and tests will tell you what needs to change and what doesn't. As you iterate, they'll keep telling you which parts of your design are good (e.g. easy to test) and which aren't (e.g. hard to test, or requiring lots of mock/stub setup).
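
Here's a hedged sketch of what "testing an API into existence" can look like in TypeScript with Vitest (parseOrder and its shape are invented): the test is written against the API you wish you had, and the implementation follows.

```typescript
import { describe, it, expect } from "vitest";

// Step 1: write the test against the API you *wish* existed.
// parseOrder doesn't exist yet; the failing test (and the compiler) drive its shape.
describe("parseOrder", () => {
  it("turns a raw JSON string into a typed order", () => {
    const order = parseOrder('{"id": "a1", "quantity": 2}');
    expect(order).toEqual({ id: "a1", quantity: 2 });
  });

  it("rejects malformed input instead of returning garbage", () => {
    expect(() => parseOrder("not json")).toThrow();
  });
});

// Step 2: implement just enough to make the tests pass. If this function is
// awkward to call from the test, that's the design feedback described above.
type Order = { id: string; quantity: number };

function parseOrder(raw: string): Order {
  const data = JSON.parse(raw);
  if (typeof data.id !== "string" || typeof data.quantity !== "number") {
    throw new Error("invalid order");
  }
  return { id: data.id, quantity: data.quantity };
}
```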

Test Coverage & Coupling

When you're "done" for the day, week, or month, you'll have tests covering the parts you're not working on, or didn't realize were coupled, which is a nice side benefit.

"I hate updating tests."

Empathy on Factors at Play

Me too. There are a lot of factors at play here, specifically:

  • programming language & types
  • framework
  • skill level

Languages like Elm, Scala, or OCaml have such good type systems that they negate the need for many unit tests (does the code work to a dev's approval), so you can focus more on acceptance tests (does the code work according to users/business/product people).
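
TypeScript only approximates what those languages give you, but a small, hypothetical sketch shows the flavor: when invalid states can't be constructed, a whole family of unit tests disappears.

```typescript
// With a plain { loading, error, data } object you'd need tests for
// "loading with data", "error with data", and so on. With a discriminated
// union those states can't even be constructed, so the compiler replaces
// that whole family of unit tests.
type RemoteData<T> =
  | { state: "loading" }
  | { state: "error"; message: string }
  | { state: "loaded"; data: T };

function render(users: RemoteData<string[]>): string {
  switch (users.state) {
    case "loading":
      return "Loading…";
    case "error":
      return `Something went wrong: ${users.message}`;
    case "loaded":
      return users.data.join(", ");
    // No default needed: the compiler knows every case is handled.
  }
}
```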

Languages like JavaScript or Python are so error prone that you have to write tests just to ensure you can successfully import modules, and this work can be quite tiresome. So you can see how TDD practitioners in something like Haskell are confused when someone in JavaScript has so much irritation writing unit tests.

Frameworks can make it difficult to test. Angular, for example, requires an immense amount of setup just to test one class method. In addition, you test class methods using return values or assertions on class properties, HTTP calls require expectations with two manual steps, and testing the DOM requires yet another approach. This can make testing not fun at all, or push someone to prefer Acceptance Tests only, in Cypress or Playwright.
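
As a rough illustration of that setup cost, here's approximately what testing one HTTP-backed service method looks like with Angular's HttpClientTestingModule (the UserService is invented, and newer Angular versions offer slightly different provider helpers, but the shape is similar): note the TestBed wiring plus the expectOne/flush pair of manual steps for a single call.

```typescript
import { TestBed } from "@angular/core/testing";
import { HttpClientTestingModule, HttpTestingController } from "@angular/common/http/testing";
import { HttpClient } from "@angular/common/http";
import { Injectable } from "@angular/core";

// Hypothetical service under test: one method, one HTTP call.
@Injectable({ providedIn: "root" })
class UserService {
  constructor(private http: HttpClient) {}
  getUserName(id: number) {
    return this.http.get<{ name: string }>(`/api/users/${id}`);
  }
}

// Jasmine globals (describe/it/expect) as in a standard Angular CLI setup.
describe("UserService", () => {
  let service: UserService;
  let httpMock: HttpTestingController;

  beforeEach(() => {
    // Wire up the testing module before every test.
    TestBed.configureTestingModule({ imports: [HttpClientTestingModule] });
    service = TestBed.inject(UserService);
    httpMock = TestBed.inject(HttpTestingController);
  });

  afterEach(() => httpMock.verify()); // fail if any request went unhandled

  it("fetches a user's name", () => {
    service.getUserName(1).subscribe((user) => expect(user.name).toBe("Jane"));

    const req = httpMock.expectOne("/api/users/1"); // manual step 1: expect the request
    req.flush({ name: "Jane" });                    // manual step 2: flush a fake response
  });
});
```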

Finally, skill level can prevent many from making progress, despite evidence from DORA (though be wary of the non-transparent research) that TDD is the only known way for Juniors to not create a big ball of mud (e.g. a large, untestable, technical-debt-filled mess).

Solutions To Challenges

Regardless of language, focus on Acceptance Tests first. For Web UIs, that means things like Playwright/Cypress/Puppeteer where you stub all HTTP calls to ensure your tests work every time (aka Whitebox aka Component tests). For unit tests, only cover the places where your code makes decisions, such as if/thens, switch statements, or raw data transformations (e.g. JSON.parse, Zod parsing, file reads, fetch response parsing).
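
For example, a minimal Playwright sketch (the /api/users endpoint and app URL are placeholders): every HTTP call is stubbed, so the test drives the real UI without depending on a live backend.

```typescript
import { test, expect } from "@playwright/test";

test("shows the list of users", async ({ page }) => {
  // Stub the HTTP call so the test is deterministic and needs no live backend.
  await page.route("**/api/users", (route) =>
    route.fulfill({
      status: 200,
      contentType: "application/json",
      body: JSON.stringify([{ id: 1, name: "Jane" }]),
    })
  );

  await page.goto("http://localhost:3000/users"); // placeholder URL for your app
  await expect(page.getByText("Jane")).toBeVisible();
});
```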

Regardless of app or testing framework, try to follow Pure Core, Imperative Shell (aka Functional Core, Imperative Shell). It'll result in roughly 80% of your code being pure and much easier to test. The original screencast is here: https://www.destroyallsoftware.com/screencasts/catalog/functional-core-imperative-shell/ and Scott Wlaschin has a good talk outlining how you can do that: https://www.youtube.com/watch?v=P1vES9AgfC4
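
A tiny TypeScript sketch of the idea (the functions are invented for illustration): the decisions live in a pure function you can unit test with plain inputs and outputs, while the thin shell does the fetching and printing and is left to acceptance tests.

```typescript
// Pure core: all decisions live here. No I/O, no mocks needed to test it.
type User = { name: string; lastLoginDays: number };

export function pickInactiveUsers(users: User[], cutoffDays: number): string[] {
  return users
    .filter((u) => u.lastLoginDays > cutoffDays)
    .map((u) => u.name)
    .sort();
}

// Imperative shell: a thin layer that does the side-effects (HTTP, logging)
// and delegates every decision to the pure core. Cover this in acceptance tests.
export async function reportInactiveUsers(apiUrl: string): Promise<void> {
  const response = await fetch(`${apiUrl}/users`);
  const users: User[] = await response.json();
  console.log(pickInactiveUsers(users, 30).join("\n"));
}
```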

As soon as you hit the 20% that has side-effects, where you start needing to use Mocks, Spies, or Expectations (as opposed to pure function Stubs), don't. Just cover those cases in your Acceptance Tests to give yourself a break.

If you're a Junior, try following the Chicago method of testing, where you start unit testing little functions/classes that you wire together into larger ones: https://devlead.io/DevTips/LondonVsChicago The tradeoff is that all the pieces might not fit together at the end, but you'll have learned how to write testable code, and how using dependencies and side-effects in your code makes it harder to test. Try not to use mutation, as it's a side-effect, and treat all data as immutable; it's easier to test. If your function/class method needs to do a side-effect, take that class instance/function in as a parameter so you can stub it in the test and pass a concrete implementation in the real code. Get nervous when you see class methods/functions not returning values; they're probably doing side-effects.
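
A sketch of that last point in TypeScript with Vitest (registerUser and the stub are invented): the side-effect comes in as a function parameter, gets stubbed in the test, and is given the concrete implementation in real code.

```typescript
import { it, expect } from "vitest";

type User = { id: string; email: string };

// The side-effect (persisting the user) is passed in as a function parameter,
// so the logic returns a value we can assert on and the effect can be stubbed.
async function registerUser(
  email: string,
  saveUser: (user: User) => Promise<void>
): Promise<User> {
  if (!email.includes("@")) {
    throw new Error("invalid email");
  }
  const user = { id: Math.random().toString(36).slice(2), email };
  await saveUser(user);
  return user;
}

it("registers a user with a valid email", async () => {
  // Stub: a pure fake standing in for the real database write.
  const saved: User[] = [];
  const user = await registerUser("jane@example.com", async (u) => {
    saved.push(u);
  });

  expect(user.email).toBe("jane@example.com");
  expect(saved).toEqual([user]);
});

// In real code, you'd pass the concrete implementation instead, e.g.:
// await registerUser(email, (u) => db.insert("users", u));
```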

Join us in Lamdera land, brother! The robots will soon be up to speed, and then even the "there are no devs" argument will evaporate. With you by our side we could take over the world!

Patrick Mahloy (3 months ago):

I knew that requirements always changed, but the table in the DevOps article you referenced was illuminating. It takes 4-5 people NOT INCLUDING THE CLIENT to be 100% correct in their individual understanding of the work needed in order to prevent re-work… which renders my idea of requirements being a blocker of TDD null.
