Test-Driven Development is Stupid (geometrian.com)
112 points by henrik_w on Nov 24, 2015 | 149 comments


>You are writing code to test something that doesn't even exist yet. I am not rightly able to apprehend the kind of confusion of ideas that could provoke such a method.

Yeah, the fact that you can't comprehend why people do this is very clear; if you could, you wouldn't have written this terrible rant. It feels like the author is criticizing this before coming anywhere close to understanding why people do it.

I really wanted to find some kind of point to disprove, but there aren't any in this post. The strongest proof the author offers for his point is one-word, tautological sentences that repeat what he just said: "Tests don't work because they just. don't. work."

The more I read, the more I realized this post isn't about testing, it's about the author letting the world know how fucking awesome and smart they are. They build huge software suites (from the sound of it, completely alone) with no tests, and everything works out fine, even better than fine, spectacular. It seems like the author deleted this line from the post in response to ridicule in the comments:

> "As it happens, I do write code others depend on--as it happens, a lot of it--and, my code has never, even once, failed in production: a record I am extremely proud of."

I don't even particularly subscribe to TDD, but the arrogance, dismissiveness, and contempt of this guy made me want to see him proved completely and utterly wrong.


>You are writing code to test something that doesn't even exist yet.

To piggyback on this... sure, the code doesn't exist yet, but the project specs do. And unit tests can help by making the required specification explicit.
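
For instance (a hypothetical sketch, not taken from the article or the thread): a spec line like "usernames are 3 to 20 lowercase alphanumeric characters" can be pinned down as executable examples before the code exists. The name validate_username is invented for the example.

    # Hypothetical spec-as-tests sketch; validate_username does not exist yet,
    # which is exactly the situation the parent comment describes. These tests
    # fail until someone writes it, and they state the spec concretely.
    import unittest


    class TestUsernameSpec(unittest.TestCase):
        """Spec: usernames are 3 to 20 lowercase alphanumeric characters."""

        def test_accepts_minimal_valid_name(self):
            self.assertTrue(validate_username("abc"))

        def test_rejects_too_short_name(self):
            self.assertFalse(validate_username("ab"))

        def test_rejects_uppercase(self):
            self.assertFalse(validate_username("Abc"))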


In any other field, specifying a function by its value at a handful of points would be a bad joke.

If you don't know what you want your code to do then tests will just make it harder to experiment, and if you do know then there's no harm in writing them after the fact.


Bench vices pin down a board at a handful of points. Other things about the environment (shape of board, material properties of wood) help me make sure it generalizes in the ways I care about.


> In any other field, specifying a function by its value at a handful of points would be a bad joke.

There is a difference between making the spec explicit (which is providing concrete examples of the spec's meaning) and replacing the spec. You seem to be talking about tests doing the latter, while the grandparent comment talked about the former.

> If you don't know what you want your code to do then tests will just make it harder to experiment, and if you do know then there's no harm in writing them after the fact.

If different people are responsible for the unit under design vs. the parts of the software that will consume that unit, then the concrete examples provided by test cases can help confirm that there is a common understanding of the spec; as such, tests serve to validate the spec.

You want to do that before you code the implementation. And once you've done that, the tests are also available to validate the implementation.


When theseatoms says the specs already exist and the tests help, and you then talk as if the test were the spec, it sounds like you're replying before you read.


I agree, but the corollary of that is that if the project specs exist (and they do, at least conceptually), then it is they that are ultimately driving development. This is outside of the scope of TDD, and so TDD is not a complete methodology, a fact that often seems to be forgotten or overlooked.


I've been wanting (and, off and on, building) a system for tying documentation to supporting tests. On the one hand, it should serve as citations, making sure I don't make claims I can't hold up. On the other, it should help avoid the docs drifting out of sync with the code without undue cognitive overhead - telling me what I need to consider changing when the code changes.

One option would be to start with a spec and gradually add citations, doing TDD along the way for a sort of Documentation Driven Development, although I've no particular confidence that's actually a sweet spot.

doctest is a move in this direction, but only really makes sense for code you wish to surface as examples. Cucumber is also obviously related, but - so far as I understand - isn't really suitable for producing arbitrary documentation. Types are (amongst other things) another form of machine checked documentation, but the audience (and, usually, expressiveness) is naturally limited.
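
To make the doctest point concrete, here is a minimal sketch (invented function, not from the comment): the examples live in the docstring, double as documentation, and are checked mechanically.

    import doctest
    import re


    def slugify(title):
        """Convert a title to a URL slug.

        >>> slugify("Hello, World!")
        'hello-world'
        >>> slugify("  spaces   everywhere ")
        'spaces-everywhere'
        """
        words = re.findall(r"[a-z0-9]+", title.lower())
        return "-".join(words)


    if __name__ == "__main__":
        # Fails loudly if the documented examples drift out of sync with the code.
        doctest.testmod()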


Anecdotal, but I've always found that if you can't write the test first it's a good indication that you probably don't yet have a good enough understanding of the problem you are trying to solve.

Regarding the rest of the article it is pure trolling with nothing really useful to add to the debate.


> I've always found that if you can't write the test first it's a good indication that you probably don't yet have a good enough understanding of the problem you are trying to solve.

The author's point, in the painting comparison and when mentioning other fields, is that you never have a good enough understanding of the problem you are trying to solve when you're just starting to work on it. Developing the code is an iterative process that refines your understanding of the problem and the correct solution, and only once you've done that do you know enough to write appropriate tests.

There are exceptions to this, but they tend to be less common or trivial. If you're implementing a known algorithm, such as an implementation of a mathematical function, you know enough to write the test cases first. You can do it for very generic libraries too, like a library of sorting methods. But in practice most code is not so easy to spec completely before you start to implement it.


> But in practice most code is not so easy to spec completely before you start to implement it.

No TDD advocates say you should spec completely before you start to implement. Most say you should write a single, very simple test. It's not much more than the function signature, and almost everybody in almost every language writes the function signature before writing the code.
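
As a concrete illustration (a hypothetical sketch, names invented for the example), that first test really is barely more than a signature:

    # Hypothetical first test in the cycle; parse_duration does not exist yet,
    # so this fails first ("red"), which is the intended starting point.
    from durations import parse_duration


    def test_parse_duration_handles_the_simplest_case():
        # Barely more than the function signature: a name, one argument, one expected value.
        assert parse_duration("0s") == 0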


And my point is the opposite:

Developing the tests is an iterative process that refines your understanding of the problem and the correct solution, and only once you've done that do you know enough to write appropriate code.


Maybe it's a matter of personal style. Some people are better if they focus on "how do I write the code that's intended to solve my problem", and some people are better if they focus on "how do I write tests that can determine that my problem is solved, and that the solution is correct."


On a related note, when it comes to types I find very odd the persistent notion that a statically typed language means it's vital that you get the types right at the outset.

Quite the contrary, as we build anything, we necessarily pick some way of organizing our code and our data. When we inevitably discover that we were horribly wrong, it is tremendously valuable to be told more about what needs to change in concert as we reorganize.


Yes. I was going to come on here and say the same thing: this guy doesn't know what the heck he is talking about.

Your point is an excellent one. If you see hundreds of really smart people doing X and seemingly very happy with it, there's probably something going on there that's more important than just a bunch of fools with a fad. It might not be appropriate in your situation, it might be oversold by those folks -- but it's highly unlikely that it's stupid and dumb.

I work with training developers all the time. It continues to amaze me how people go from being a novice to being an expert. One day they can't code at all. Next day they know everything there is to know about it -- and can go on at length about why process X is a terrible thing, without even trying or understanding it.

Of course, we have the opposite problem in development also. People get sold on ideas that will never work simply because they sound cool or the cool kids are doing them. This reminds me of the old saying "Always be open-minded -- just not empty-headed"


Unfortunately when it comes to software development, you can count on many techniques, processes and tools being exactly that: fads packaged expertly by consultants and salesmen to be gobbled up by the masses of managers, architects and developers in exchange for big bucks from trainings, courses and certificates. And you can bet that they will oversell it, hype it and denounce everyone that's trying to be reasonable as old fashioned, conservative and inflexible.

Extreme Programming has, through its "extremeness", brought automated unit testing front and center. That doesn't mean it's necessarily a good idea to do XP or TDD. Unit tests are now a mainstream idea, and being pedantic and militant about exactly when they should be written and what steps are needed to be blessed as using the approved way is not helpful.


Damn. The misunderstanding just goes on and on.

TDD is not a unit testing religion. It's a way of designing software. Some have even advocated calling it Test-Driven Design, not Test-Driven Development, because the end result is a design, not a series of tests.

This is important to know, because the key factor in the red-green-blue cycle is the blue step: refactoring. The example given by the author is of a system that kept getting tests -- but was never refactored. Instead, more junk kept getting thrown in. In this case, they were not doing the most important thing in TDD, refactoring. Not really sure you can even call this TDD. Looks more like a CF created by a bunch of folks under time pressure who thought it would be easier to add stuff than refactor.

If you don't understand what it is, it's probably not a good idea to criticize it. TDD is a way of looking at constructing OO programs. Its main competitor is probably more along the lines of Model-Driven Development.

TDD is a response to the large amount of commercial code being written in complex object-oriented environments. The goal is to ensure that changes to the code maintain the economic value of the code. In my opinion it is not appropriate where there is little or no economic value, like personal, academic, and start-up code. In cases where there are not complex object graphs full of dependencies, such as small pure FP projects, it's also not indicated -- there's no OO system for TDD to help you design. And if you're not designing stuff, you're not doing TDD.

People get religious about TDD, yep, and it drives me nuts. But like I said, that's no reason not to understand what the hell is going on before criticizing it. Unfortunately, people getting excited and over-selling things is just the way it is in tech, whether it's TDD or the latest cool programming language.


Daniel, there is no misunderstanding, I know about the two definitions of TDD - one focusing on testing and one on design. I think that automated unit testing is a useful method of reducing errors and regressions, however not alone but as part of a series of quality control strategies such as integration and higher level and manual testing, static and dynamic analysis, using higher level programming concepts, etc.

I am afraid I do not consider TDD to be a legitimate design method. There's an inherent tension between well designed OO code and testable code.

Well designed code is properly encapsulated, does one thing, has a minimal interface and few dependencies. Testable code is often less encapsulated, contains extra complexity for the test machinery, among which we can count extra methods (with potentially non-private visibility) and dependencies between classes (such as strategies, policies, factories, etc).

When I design I always start from the closest-to-ideal incarnation of the objects, based on OO principles and the way clients of my components will want to use them. Then I compromise on that design just enough to be able to test it to an acceptable level (which varies per component). It could be that I will write no unit test at all, and instead test several interacting components. Someone following TDD will immediately compromise in favor of testing and automatically add maintenance overhead.

Furthermore, while designing at the class level is important, it is much more important to architect at higher levels: component, subsystem, application, system/platform. None of these are considered when writing unit tests, and what makes perfect sense in a "unit" might only be a local optimum.

Designing at the test level is to me as architecting a house at the brick level...


> There's an inherent tension between well designed OO code and testable code.

No, there isn't. Code that is not testable is not well-designed.

> Well designed code is properly encapsulated, does one thing, has a minimal interface and few dependencies.

And testing tests what the consumer of code will use, using the interface.

> Testable code is often less encapsulated, contains extra complexity for the test machinery, among which we can count extra methods (with potentially non-private visibility) and dependencies between classes (such as strategies, policies, factories, etc).

Only if you are doing testing wrong; what you should be testing, even for unit testing, is the behavior of the unit under test as provided through its exposed interfaces to consumers of the unit under test.

> Furthermore, while designing at the class level is important, it is much more important to architect at higher levels: component, subsystem, application, system/platform.

Conventional, unit-test-focused TDD is a practice isolated to a particular level which does not conflict with higher-level practices of the same type that address that concern, like Acceptance Test Driven Development (ATDD).


> Testable code is often less encapsulated, contains extra complexity for the test machinery, among which we can count extra methods (with potentially non-private visibility) and dependencies between classes (such as strategies, policies, factories, etc).

I think that's a more a consequence of poor testing libraries/frameworks than of TDD itself. A method shouldn't have to be turned non-private to be tested.

> Furthermore, while designing at the class level is important, it is much more important to architect at higher levels: component, subsystem, application, system/platform. None of these are considered when writing unit tests, and what makes perfect sense in a "unit" might only be a local optimum.

It's TDD, not UTDD.


I've had a bit of a journey in my professional career when it comes to writing apps. I started in heavy OO-mode, much as you describe, but I'm a long ways from there right now.

We're pretty far down an HN thread, but if you're interested in pursuing this further, I shot about an hour of video to show how I would code a small app. You'll find the three parts here: http://tiny-giant-books.com/blog/technical-story-slicing-1-o...

I'd enjoy continuing the conversation via email if you'd like. Let me know!


Thanks, I'll take a look.


"In this case, they were not doing the most important thing in TDD, refactoring. Not really sure you can even call this TDD. Looks more like a CF created by a bunch of folks under time pressure who thought it would be easier to add stuff than refactor."

To be fair, this is a common failure mode when people try to do TDD.

(to be clear, common amongst failures; not necessarily most common amongst failures, and not necessarily a terribly common outcome overall)


> It continues to amaze me how people go from being a novice to being an expert. One day they can't code at all. Next day they know everything there is to know about it -- and can go on at length about why process X is a terrible thing, without even trying or understanding it.

The Dunning-Kruger effect in action.


> The Dunning-Kruger effect in action.

But that also applies to ...

>> If you see hundreds of really smart people doing X and seemingly very happy with it, there's probably something going on there


That's less Dunning-Kruger and more appeal-to-authority fallacy.


> I am against writing unit tests in general, since experience shows that they actually prevent high-quality code from emerging from development

And he's lost me in his first sentence.

Unit testing, good unit testing at least, is not so much about development as it is about preventing regressions. A unit test that runs on every build ensures that something is true and that it stays true forever.


This. I agree TDD can cause more harm than good in software design, but my test suite saves me from shipping several regressions per day that I would have otherwise not noticed.

Obviously if you are against testing in general you are against TDD (it's just testing taken to the extreme).

I thought he was at least going to say that he preferred higher-level testing to low-level unit tests (which can be a fair point; you can find 90% of the issues with 10% of the test code if you accept that you can't always isolate the failure based on the test report). But no. He is against testing.


How does TDD cause more harm than good in software design?


You quickly get to the situation where the code is clean, concise, modular, and all those good words, but requires one more tweak/indirection only for the unit testing. The result is popularly called "test-induced design damage".


I work somewhere without any unit tests at all, and he still lost me at that sentence.

If he thinks test driven development can produce horrible implementations then he's right, they can. But he's horribly wrong that the solution is to abandon unit testing.

Without unit tests developers begin to live in fear of parts of the code, and refuse to clean it up "in case it breaks something", and there's no way to verify those fears aren't founded.

Any refactoring job starts with annoying a lot of people by breaking things that previously worked, and questions of "it used to work before" start to appear when some of those breakages slip through to the customer.

Refactoring becomes something to fear because "we'll have to retest everything" which is a large expense that cannot be afforded.


> Unit testing ... is not so much about development as it is about preventing regressions.

Absolutely this! Unit testing helps you during the dark years of maintenance phase, when you must fix a bug or add a new feature and you don't have the knowledge, nor the time, to build a full representation of the application in your head: you just touch as little code as possible to get the work done, and unit tests help to ensure that all the rest stays equal.


Indeed. I was ready to hear a rant about TDD with an open mind, since I have some issues with it myself, but the opening line completely ruined it. He might as well rail against refactoring, because unit tests are an essential part of that too.


I think I see part of his point, though. By its nature, a unit test is (often, albeit not necessarily) tightly coupled with the thing it is testing--which means that anyone who changes the implementation must change the test, which increases the complexity of code changes.

I know that in my own work, the single most important thing for me is to be able to massively refactor, restructure and redesign my and others' code, as I'm working with it (this is probably why I like dynamic languages and macros so much, and probably why I benefit from static typing so much); anything which gets in the way of deleting, reorganising, clarifying, duplicating, altering, reducing and shifting code is going to slow me down and make the resulting code worse.

Perhaps, though, there is a middle ground: while the internals of a library must be free to mutate, the external interface ought not to change nearly as much. Perhaps all unit tests should be written to test the external interface, not the internal details. That might also help avoid the tests-which-test-that-the-code-does-what-it-does disease.

I do really appreciate tests, and they do help detect and prevent certain types of regressions.


Which is why testing behaviour, not implementation, is so important. Admittedly this sounds easier than it is in practice. Personally I've found that in-memory, coarse-grained tests, more akin to integration tests, that test from the outside in are helpful and easier to maintain because they tend to be less concerned with the how. I can then choose to write finer-grained unit tests where appropriate.
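
A toy contrast of the two styles (hypothetical ShoppingCart, purely illustrative): the first test is coupled to a private detail and breaks on any refactor; the second only exercises observable behaviour.

    import unittest


    class ShoppingCart:
        def __init__(self):
            self._items = []          # private detail, free to change later

        def add(self, name, price):
            self._items.append((name, price))

        def total(self):
            return sum(price for _, price in self._items)


    class BrittleImplementationTest(unittest.TestCase):
        def test_internal_list_layout(self):
            cart = ShoppingCart()
            cart.add("book", 10)
            # Couples the test to the private representation; swapping the list
            # for a dict breaks this test even though behaviour is unchanged.
            self.assertEqual(cart._items, [("book", 10)])


    class BehaviourTest(unittest.TestCase):
        def test_total_reflects_added_items(self):
            cart = ShoppingCart()
            cart.add("book", 10)
            cart.add("pen", 2)
            # Only the public interface is exercised; internals can change freely.
            self.assertEqual(cart.total(), 12)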


Agreed! A code base I recently inherited (and exorcised) contained unit tests with roughly 90% coverage... And not a single assert. They didn't understand the idea of testing behavior, they just copied the implementation as tests.

I reduced 20,000 lines of tests to 5,000, and caught a dozen bugs.


Regression tests are absolutely awesome, especially if written for nontrivial components which will later be modified or expanded.

Having worked with dynamic languages quite a bit, I found that iff your language has a decent REPL, so you can code iteratively, you don't need unit tests to drive your design. You perform those tests in REPL yourself, which helps you flesh out the design, but you don't have layers of code that'll make changing the design harder.

The way I personally work with Lisp is a mix of writing code in a file, compiling particular parts of it and playing around with it in REPL until I've figured out the right design - at which point I may as well start adding some tests around tricky places. The idea is to not burden yourself with permanent tests until you have your design figured out.


> all unit tests should be written to test the external interface, not the internal details

Yes, this is the most useful approach to unit testing. If you have a functional interface which doesn't cause side effects (i.e. state changes of any kind, external actions that can't really be effectively verified by test code, etc.), then unit testing is very valuable and useful given you just test the interface using the simplest possible approach: a table of some inputs which should map to some outputs.
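
A minimal sketch of that table style (hypothetical normalize_phone function, assumed to be pure):

    import pytest


    def normalize_phone(raw):
        """Example implementation: strip everything but digits."""
        return "".join(ch for ch in raw if ch.isdigit())


    # The test is literally a table of inputs mapped to expected outputs.
    CASES = [
        ("555-1234", "5551234"),
        ("(555) 123-4567", "5551234567"),
        ("5551234", "5551234"),
    ]


    @pytest.mark.parametrize("raw,expected", CASES)
    def test_normalize_phone(raw, expected):
        assert normalize_phone(raw) == expected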

I'd go further and say you should always write the tests only after this interface has solidified. Test first makes sense only under either the assumption that your first design is always the best design, or that you are happy to spend the time to morph the tests, as well as the code when you realise your first design wasn't the best and have to iterate and tweak.

We all know from experience that the first assumption is never true. As for the second assumption, it may be true, but then the question becomes, well, why waste the time? You know those 'test-first' tests will just be rewritten anyway. Why not just write the tests last and save yourself a lot of wasted effort?


It depends what you're developing, and who you're developing with.

If you're developing something which must work 100% every time or else someone will die, then unit tests are a must.

If you're developing with people who break the build all the time, then unit tests are useful.

Unit tests that aren't 100% up to date with the code are worse than no unit tests.


On the teams I've worked on, pushing code that doesn't build and pass all unit tests is grounds for mocking. I have an alias in my zshrc file called "safepush" that runs [clean command] && [build command] && [test command] && git push

If any piece doesn't succeed, the command stops. No push without unit tests passing.


And, as the article points out nicely, this basically prevents people from refactoring their code. In any nontrivial codebase, when you have a lot of tests, then what would have been 15 minutes of work rearranging class structure turns into a whole day of rearranging test cases to fit the architecture again.


It can be, I won't entirely disagree. But it depends how highly coupled your unit tests are to the design.

Unit tests are a safety mechanism- you can easily bog yourself down with too many, tying things up in bad ways, or presuming a specific design. But do you really want to live with no safety at all?

The art, as I see it, lies in writing tests in such a way that you get the guarantees you need, the safety of not letting the next guy (or yourself) screw it all up during later changes while simultaneously not preventing you from doing needed refactoring.

It's a fun challenge.


Exactly what you wrote. That's why I am generally for testing, but I don't buy into Test Driven Development.


In any halfway decent build your tests will run whenever you build the project, to prevent tests from ever getting outdated.

If any test fails, it is because you broke it just now, so you go in and see if your test is broken, or, as is often the case, your refactored code is breaking the expected behaviour of the tested code.


> Unit tests that aren't 100% up to date with the code are worse than no unit tests.

A bit of a tautology... broken code is broken. Unmaintained code, even unit tests, should be deleted.


I'd be a lot less gung-ho about unit tests if all the tests I wrote always passed the first time I expected them to pass, and always stayed passing regardless of what changes people make to the code (including people who are not me). As neither of these things happens, I'm pretty gung-ho.


I love the first quote for how (ironically) true it is:

"Trying to improve software quality by increasing the amount of testing is like try[ing] to lose weight by weighing yourself more often."

As someone who has lost over 15kg in a few months in the past, one of the things that helped most was starting to weigh myself, as I didn't do that previously. It kept reminding me that I wasn't there yet and gave me more motivation.


And it's actually been confirmed in a recent study:

"Frequent Self-Weighing and Visual Feedback for Weight Loss in Overweight Adults"

"The major finding of this study is that the use of frequent weighing accompanied by visual feedback of weight, without a prescribed diet or exercise plan, was effective in producing a small but sustainable weight loss in overweight males."

http://www.hindawi.com/journals/jobe/2015/763680/


This is also a central theme in the popular book Willpower, which draws on a high number of psychology studies.


What is "more often" ? Weighing yourself daily is useless, since the weight fluctuates too much. Not to mention that "weight" is useless as well, while you're actually trying to loose FAT. Fat percentage and resulting lean body mass (body weight minus body fat) are the metrics to measure. But still a waste of time doing it daily, twice per week to gain insight on the delta and where it's trending is enough.


Weighing yourself daily is not useless. The idea of doing it weekly is being spread explicitly because GenPop doesn't understand the concepts of a moving average and a low-pass filter - instead, they freak out over those daily fluctuations. If you weigh yourself daily and remember to always look at the average of the last few samples, you get a more useful indicator of your current weight trend.
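
In code, the idea amounts to nothing more than a trailing mean over the last few samples (a sketch, with made-up numbers):

    def trend_weight(daily_weights, window=7):
        """Smooth noisy daily weigh-ins with a trailing moving average."""
        recent = daily_weights[-window:]
        return sum(recent) / len(recent)


    # Daily readings bounce around, but the averaged trend barely moves.
    readings = [80.2, 79.6, 80.5, 79.9, 80.1, 79.4, 79.8]
    print(round(trend_weight(readings), 1))  # 79.9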


It's not useless, it taught me that weight fluctuates a lot by the hour and some by the week (;

I found that taking the average of the last 3-4 days, measured at the same time (around waking up), is a good enough approximation. Once I learned this, "more often" meant once daily, as opposed to "once monthly" (if at all). It wasn't only about the hard number, it was about the motivation.


I do it daily as a motivation trick.


There are already people speaking about the limits of an overly strict TDD:

http://david.heinemeierhansson.com/2014/tdd-is-dead-long-liv...

And yes, it may be stupid to test before a design emerges - but only if you start with a very fine-grained test. Usually when I'm coding from scratch I write a very, very, VERY coarse-grained test that "tests something", and when I reach the point of passing it (which may involve creating and designing multiple classes) I probably have a working design and I may begin creating other, smaller unit tests for individual components. The initial test may disappear or become an integration or acceptance test.

By the way there's little content in the article. It's just a rant. And the article about not writing test cases at all is simply ridiculous - since an error can exist in a test, we shouldn't write test cases?!?


>Usually when I'm coding from scratch I write a very, very, VERY coarse-grained test that "tests something", and when I reach the point of passing it (which may involve creating and designing multiple classes) I probably have a working design and I may begin creating other, smaller unit tests for individual components. The initial test may disappear or become an integration or acceptance test.

This reads pretty much how I work too. I don't think it is unusual. Start coarse, outside in. Write the finer-grained 'unit' tests where appropriate. Focus on where you need the crutch (design and confidence), not on clambering to test all the things.


This is my view as well.

More generally, it's about not 'coding too far into the future', even (especially) with tests.

But I like to code a little ways into the future.

'README driven development' is also interesting.


> Usually when I'm coding from scratch I write a very, very, VERY coarse-grained test that "tests something", and when I reach the point of passing it (which may involve creating and designing multiple classes) I probably have a working design and I may begin creating other, smaller unit tests for individual components.

That is exactly how I start coding something from scratch, except I don't have that initial test. I don't think such a vague starting test adds anything real.


It offers the value of having an initial target, otherwise you risk coding too much without stopping.


If you're doing TDD (or software development) like this, you're doing it wrong. Yes, yes, I know. No True Scotsman [1]. But when many - I mean LOTS OF - people say "I get benefits from [Technique]", you can't just say: "It cannot work. I tried it, it sucked.".

I mean, you can say that. But doing so makes you look... ignorant - at best. You know, there is a possibility that you just got it wrong.

So, many great programmers say that they get benefits from TDD. They get benefits. Not the suits. Not their co-workers. They.

TDD is hard. I had to learn it and practice it. And I'm still learning and practicing it, even though I'm now also teaching it to others and helping teams implement it. But I get benefits from it. Writing tests helps me to think about problems and to actually improve my designs [2]. And writing tests helps me to know when to stop - to not gold-plate my designs.

Sorry, but this "article" is just an angry rant, with no real arguments. Please, don't give up on TDD early because of rants like this. If you have any questions about TDD or need help getting started, feel free to ask (here or in private - you'll find my email address in my profile)...

[1] http://davidtanzer.net/no_true_scotsman_in_agile

[2] But you have to refactor ruthlessly. And learn and practice refactoring, which is hard.


>If you're doing TDD (or software development) like this, you're doing it wrong.

What do you think is the right way to do it, then?


My point is: If TDD leads you to bad design you're doing it wrong. You are probably not listening to your tests and you are probably not taking care to refactor towards a better design. Maybe you are even writing bad unit tests [1].

BTW, if any technique in software development leads you to bad design, and you don't stop and try to improve something, you're doing software development wrong. If a technique does not help you, you have two possibilities: Trying to do it better (maybe with outside help), or trying something different.

[1] http://www.makinggoodsoftware.com/2012/01/27/the-evil-unit-t...


But that's a general counterargument for criticism against anything - "if it doesn't work for you, you're doing it wrong"!

Ultimately, we're too young a field to be able to replace common sense with a process.


Way stronger than I'd write, and I don't agree with it entirely, but the author has a point.

Personally, what annoys me the most about TDD I've seen in the wild are two things: designing for tests instead of actual problems, and tests affecting the structure of "real" code.

Designing for tests - the standard TDD approach, first we write tests, then we write code to pass the tests. Quite often the consideration of the problem being solved disappears. It's a fine approach when your task is to write a small black box that takes some data in and transforms it into something else. But I've never seen a case where someone made it work for complex tasks. It always ends up the same - your tests become more complicated than the tested code. It happens with any non-trivial problem, because the thinking you have to do to write tests that make sense is the same as the thinking you need to do to solve the problem in the first place. So you're basically writing the program twice, only in a convoluted way, and without considerations for global design.

Tests affecting the structure - this is IMO a strong code smell. If you're modifying your design to accommodate tests, by e.g. adding superfluous dependencies, hooks or injection points, you've screwed up. It only makes the code more complicated and less reliable.

The only tests I've found valuable so far (in terms of effect for effort spent) are regression tests - the ones you write to catch bugs in order to make sure they won't happen again. Everything else in TDD seems to be easily replaceable by proper iterative programming.

Maybe that's why TDD is popular in the languages without a sane REPL.


> Tests affecting the structure […] is […] a strong code smell

On the other hand, code being hard or impossible to test is often thought of as a code smell as well. Code that is easy to test is easier to understand — not least because there are tests demonstrating its use. Techniques such as dependency injection help a lot here.
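
A small sketch of what that looks like (hypothetical names throughout): the collaborator is injected, so a test can substitute a fake without reaching into internals.

    class ReportService:
        def __init__(self, clock):
            # The time source is injected rather than hard-coded,
            # which is what makes this class easy to test.
            self._clock = clock

        def build_header(self):
            return "Report generated at " + self._clock()


    def test_build_header_uses_injected_clock():
        # The test passes in a trivial fake instead of the real time source.
        service = ReportService(clock=lambda: "2015-11-24 12:00")
        assert service.build_header() == "Report generated at 2015-11-24 12:00"


    # In production you would wire in the real dependency instead, e.g.:
    #   import datetime
    #   service = ReportService(clock=lambda: datetime.datetime.now().isoformat())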


Techniques like dependency injection can be really useful -- see Angular -- but too often I see a perfectly understandable piece of code expand into a mess of multiple constructors (only one ever called in production) and helper methods, all so that strict unit testing can be done. The post's commit story was unsurprising. If TDD is being done, this mangling often just happens upfront. DI is great in the same way interfaces are great, but if your code actually just cares about a particular implementation you instantiate wherever, or, even better, one that's built into the language, it's easiest to reason with that implementation. Taken to the extreme, you get Enterprise FizzBuzz.


The effect that I'm complaining about here is visible in "weaker" languages, like Java. People are too afraid to use reflection to instrument the code for testing, so e.g. in a codebase I'm working on right now at $JOB, I get to see classes in which you can't really tell what code is there for business purposes and what code was added so that the business code could be tested.

I agree with you that code that's hard or impossible to test is a code smell. I argue that having to modify a good design to accommodate more testing is also a code smell. I think the two heuristics narrow down the design space nicely, pointing one towards designs that are easy to test because of their natural boundaries, and not because of additional testing cruft being added.


> Personally, what annoys me the most about TDD I've seen in the wild are two things

It's like saying that one is annoyed by programming in general because he's seen too many horrible things done with it! Anything can be abused, TDD is not an exception.

> your tests become more complicated than the tested code

Well, then don't TDD that code at the unit level. Keep some high-level (smoke, integration, etc.) tests that execute it, relax, and write/design it without TDD the best way you can.

> Tests affecting the structure - this is IMO a strong code smell.

Yes, and this smell (by definition) shows you a flaw in the code design. TDD helped to identify this. Apparently :)


Just wait until he has to write a system with several hundred web services that have been documented to perform in a very precise manner given particular data sets, and that have hundreds of customers who have integrated with that API and absolutely require that there be no variance in the API's output, or their (extraordinarily expensive) system integration will fail, at great cost to their business systems.

I'm not saying TDD makes sense everywhere, but being able to confirm that your 3000+ continuous integration tests all passed green before shipping a new version to all of your customers is a huge way to avoid embarrassingly predictable bugs. (Leaving, of course, the opportunity to ship the embarrassingly non-obvious bugs.)

And, honestly, it's not clear that he was opposed to TDD so much as he was opposed to being constrained to having to write huge test frameworks during the exploratory phase of code development, in which you may want to have a bit more freedom to write/discard while you feel out requirements/solutions.


What you described is having tests, not doing TDD. That is, it's important to have those test cases verifying that your APIs still work, but that doesn't mean you have to design the APIs by writing tests, as opposed to designing by thinking about what you actually want to achieve.


Fair point. I was more referring to his comment here: "I am against writing unit tests in general, since experience shows that they actually prevent high-quality code from emerging from development--a better strategy is to focus on competence and good design."


I disagree about that with the author too. I think testing and TDD should be clearly separated as two different things.


Unit tests are useless for ensuring that a web service API works, they are too low level. You're supposed to be testing only the "units", aka classes or groups of methods.


I stand corrected twice. Probably just goes to show that a non-(current)developer should spend 30 minutes of research on a topic before commenting on a thread.


I genuinely wanted a good critique (It's nice to get dissenting viewpoints) but reading text so charged with anger and so scant in information is stressing me out.


Agreed - there are certainly some disadvantages to TDD, and I was hoping for a discussion raising some ideas I'm yet to formulate fully.

Instead there were lots of parallels that I don't recognise.

It was disappointing the author didn't seem to try and find the advantages in TDD and discuss a counter argument or alternative, such as modifying a foreign code base.


OK, I'll give it a shot. This isn't a criticism of TDD per se, but of over-reliance on code coverage tools.

Because it's easy to measure whether you're exercising every code path, software engineers think in terms of how close they are to "100% coverage" of tests. What you really should be thinking about is whether you're exercising every possible input, not every possible code path.

https://twitter.com/brlewis/status/553191394639360000
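
A toy illustration of the distinction (invented code, not from the tweet): three cases execute every line, so a coverage tool reports 100%, yet the input space is barely touched.

    def days_in_month(month):
        """Days in a month of a non-leap year."""
        if month == 2:
            return 28
        if month in (4, 6, 9, 11):
            return 30
        return 31


    def test_every_line_is_covered():
        # These three cases execute every branch: 100% line coverage.
        assert days_in_month(2) == 28
        assert days_in_month(4) == 30
        assert days_in_month(1) == 31


    # But the inputs are not covered: days_in_month(13) happily returns 31,
    # and leap years are not represented at all. Coverage measured paths, not inputs.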


There have been one or two HN posts somewhat recently indicating that developers should stop calling themselves "engineers." Enter examples of how engineered solutions generally work the first time around, and how software almost universally does not. There's a fundamental reason for that: it's computer science.

One of the facets of science is that you have a hypothesis. Writing a unit test before you start is analogous to having the hypothesis: "the solution I come up with for this problem will work." The rest of the exercise of TDD is a scientific process of creating a solution and proving that your solution works (and adjusting your hypothesis if you find it to be incorrect), albeit in a slightly strange way.

I could have a hypothesis about how earth is really traveling through the stars on the back of a turtle. We know that is not true because Einstein came up with a solution, followed by him and others proving that solution. Relativity is only valuable because proof of it exists.

If computer science is a science (which it is) we must put our solutions to the same degree of rigor that other scientists do.

While I no longer follow the strictest form of TDD (starting with tests that fail to compile) I'll never forget the singular most important lesson that it has taught me:

    A solution is worthless if you cannot prove that it works.
I'd take this article more seriously if Mr. Mallett provided an alternative for correctness proofing. I can't honestly sell software to someone if I don't know if it works myself.


I disagree on multiple levels. First of all, programming is not computer science. Sure, we probably do not deserve to be called "engineers", but what we do goes in a completely different direction - away from science and towards art.

Secondly, I wouldn't try to fit unit testing into scientific process - because if the test is a hypothesis, then what you're doing is the exact opposite of how science is done. You do not design your experiment to make your hypothesis come out true!

While the quote you cited is interesting, I'd treat it with a grain of salt, given that you can't prove that your solution works - and if you think you did, it usually turns out the proof itself is wrong. It happened even for formally proven algorithms.

Testing gives you increased confidence. It's a worthwhile goal. But IMO, driving your design by tests is going a bit too far, and something you don't need to do to have tests that ensure your code works.


"Secondly, I wouldn't try to fit unit testing into scientific process - because if the test is a hypothesis, then what you're doing is the exact opposite of how science is done. You do not design your experiment to make your hypothesis come out true!"

True, but I think that objection can be trivially fixed by simply... doing that. I write tests that confirm the code does what I designed, but I also write code that tries to break my design, and tests that verify that it errors as expected. I do approach it with a scientific mindset. (And I tend to consider "scientific mindset" to be the more important part of science, versus overprivileging some checklist of specific techniques, which ought to have come from the scientific mindset in the first place.)

Of course it remains true that you can't do that perfectly, but that's a null objection in the end. Nothing ever can be, but at least I try.

I can't even count how many times I've tested a code's error case, only to discover that it unexpectedly "worked". Usually that's because there's a bug and I need to fix the error case... every once in a while it turns out my code corrects my own understanding when it reveals what I thought was an error case is actually perfectly valid and sensible, though. It's important to try to break the code.


> I do approach it with a scientific mindset. (And I tend to consider "scientific mindset" to be the more important part of science, versus overprivileging some checklist of specific techniques, which ought to have come from the scientific mindset in the first place.)

True. If you just follow the checklist without following the spirit of the scientific method, you end up doing socio^H^H^H^H^Hcargo-cult science.

As for a scientific mindset in programming, I think it's a very valuable thing to have on both the larger scale - in various forms of testing - and the smaller scale. I found that, when running your code, it's good to just ask yourself beforehand what exactly you expect to happen, and if you see any deviation, immediately go figure it out, or at least note it down. If a program does something unexpected, it means you don't understand it.


Fair enough.

> art

In stark contrast to what I've said: I also believe this to be true. Software development is a cross section of multiple disciplines: art and mathematics possibly being two of the largest within that cross section. Science, while a smaller component, is still present.

> You do not design your experiment to make your hypothesis come out true!

Indeed, as stated the TDD approach is strange and is only analogous to the formal scientific process.

> driving your design by tests is going a bit too far [...]

I didn't stick to formal TDD for very long for exactly that reason: it takes ages and constrains you to think about the tiny details instead of the grander design of the project. The upshot of formal TDD is that it usually results in low coupling but there are other ways to achieve that.


> Let me emphasize: you write the test cases for your program, and then you write your program. You are writing code to test something that doesn't even exist yet. I am not rightly able to apprehend the kind of confusion of ideas that could provoke such a method.

I don't understand this. What is so absurd about specifying facts about the program you will write? When we have tools that can prove facts, we will be doing formal specifications instead of random sampling. But still, testing is a way of statistically specifying facts about your program, and it seems sensible for those facts to be written before the program.


Because like the author says, there is no substitute for competence. If you can't write good code, then your assumptions about the future design will be wrong and/or bad, and your tests will be as poorly written. In other words, TDD (anecdotally) doesn't improve your code but only introduces extra levels of complexity. Which in the hands of an incompetent developer becomes an even bigger problem than if there was no TDD.


If you can't write good code, then you will not write good code.

But, there is another common case where unit tests (written before or after) also come in tremendously handy: if you can't write good mistake-free code 100% of the time.

Which describes all programmers who have ever existed.


> What is so absurd about specifying facts about the program you will write?

Not really absurd, just impractical. It's because all those facts are wrong, and you don't know yet why.


Honestly this doesn't belong on the front of HN. There is no discussion value, or sense of professionalism. It's just a rant riddled with misplaced anger.

There are valid reasons to be against TDD, but it is not stupid. To call a methodology "stupid" is needlessly judgmental and really has NO place in the pragmatism and trade offs that so often go along with writing software. Anyone that calls a software methodology "stupid" so casually has no business writing software at all, nor blog posts about software.

I don't care how smart you are, or how right you are. If you want people to listen to you and consider your ideas, don't write posts like this, in this tone, at this length, with so little actual substance.


When I read these kinds of articles I'm always curious to see the professional background of the author. Not to criticize, but to see if he/she's talking about something he/she saw at scale or not. Because if you're working with a very small code base then I may even understand sentences like "I am against unit tests in general". I've never met people who work (or worked) at very large companies who are against, at the very least, unit tests. Where hundreds/thousands of people touch the same code... not having unit tests is, in the long term, suicide.


I worked at a large company where we didn't unit test. We should have. I worked on embedded SW and HW projects for office multi-function printers.

Due to poor planning / management, unit tests often weren't done. Bad decisions by others ended up biting me! I got pulled into a project to do a big refactor because somehow I was considered the DSP expert and a predecessor picked a lame DSP for the new version of the product. No unit tests meant I was pretty screwed.


None. He's still in university. [0]

[0] http://geometrian.com/about/cv.pdf


I think you're being slightly disingenuous there with the "still in university" dismissal. That CV is a lot more impressive than many people I've worked with in the "real world".


A while back I described the benefits I think unit tests give you:

1. Well-tested parts

2. Decoupled design

3. Rapid feedback

4. Local context to test in.

At the same time, there are many cases where they don't help much. More here: http://henrikwarne.com/2014/09/04/a-response-to-why-most-uni...


I think you're missing the most important one: ensuring you don't unexpectedly break your own code in the future (when you come back in 6 months and forget why exactly it's arr[1:n-1] not arr[0:n-1] or arr[1:n]).
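
A sketch of that kind of guard (hypothetical function; the slice bounds are exactly the detail the test pins down for future-you):

    def interior(arr):
        """Return the elements strictly between the sentinel first and last entries."""
        n = len(arr)
        return arr[1:n - 1]   # deliberately not arr[0:n-1] or arr[1:n]


    def test_interior_drops_both_sentinels():
        # Six months from now, this is the line that explains why the bounds are 1 and n-1.
        assert interior(["head", "a", "b", "c", "tail"]) == ["a", "b", "c"]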


Sounds like a well-placed comment would be of more value than a test there.


Comments are easy to miss, though. We aren't perfect about going through our code, and testing gives us objective measures.


A test won't say why code is the way it is; it will simply provide some sample input and expected output to that code. This may be enough, but not always.


They are also a word of encouragement to whoever sees your code for the first time because he needs to make a change in it, five years from now.

"Look, this code looks strange to you, you would have done it differently, and you are afraid you might break things you don't even know exist. But fear not, there are tests, they run, and they will guide you."

It's always a relief to find that some random old crap is actually well tested.


Not just to someone else. Nothing is more pleasant than going in to refactor code you wrote years ago, and find that you wrote comprehensive unit tests. Thank you past-me!


His experience might be a lot different from mine, but I've found TDD enormously helpful when tackling legacy systems. Maybe that's not "test-driven development" and more like "test-driven refactoring", but working on complex legacy systems where the original developers are basically all gone is scary, and TDD has helped me make some sense of it and feel a lot better about making changes.


I don't necessarily use TDD all the time, but I think it provides significant value. A key one is helping guide developers to break down complexity.

This is quite apparent especially when I conduct pair programming interviews. Developers who were exposed to TDD (or to projects with significant test code) approach the problem in a far more structured manner, and their code is much more pleasant to look at :)


No you're stupid. :P

This is just a rant. I was hoping for an actual study, but the title should have given it away.


I'm not clear what technique this article is trying to describe, but anyone that spends two weeks pre-writing tests is not performing TDD. That simply isn't how it works.

The cycle is: red, green, refactor, repeat. That's per test. It shouldn't take long. It works nicely.
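
Per test, the loop looks roughly like this (hypothetical FizzBuzz fragment; each pass takes minutes, not weeks):

    # Red: one small failing test. fizzbuzz doesn't exist yet, so this fails first.
    def test_three_is_fizz():
        assert fizzbuzz(3) == "Fizz"


    # Green: the simplest code that makes that single test pass.
    def fizzbuzz(n):
        return "Fizz"


    # Refactor: tidy up with the passing test as a safety net, then repeat the whole
    # cycle with the next test (e.g. test_five_is_buzz) to force out the real logic.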


TDD assumes you know what all the interfaces are going to be beforehand. If you discover that an interface needs to change you have to refactor all of your tests. Unit tests are momentum against change.

It also assumes a bug free app with 100% code coverage is the objective, regardless of the cost it takes to write all that test code. An app that has a few bugs but takes half the resources to develop can make better business sense.


> It also assumes a bug free app with 100% code coverage is the objective, regardless of the cost it takes to write all that test code.

I disagree on this. 100% coverage is actually a bad thing, in my opinion, because there's always a lot of code you do not need to test (simple constructors, accessors/mutators, etc).

A unit test should test things that you are relying on being correct. You write a unit test as a way to protect future developers (including yourself) from totally screwing things up by accidentally changing something important.

An application with a few bugs that took half the resources to develop is nice- until the customer needs a few extra changes 18 months later, and you turn 'a few' bugs into 'a whole lot of' bugs.


> Unit tests are momentum against change.

I get the point, but I feel differently. When a project becomes large enough, changes can result in breakage in the weirdest places. Proper testing can help discover this breakage before shipping. For me, tests are a safety net for changes in the tested code.

Large projects without any tests are effectively unmaintainable, often times even for the original author, and most certainly for others.


I do agree with this, and have felt it from both ends. If you need to change a core class or interface which cascades throughout the system, those unit tests are sure nice to have. I'm not sure I'd be confident enough to change the core class without the unit tests. In that sense, lack of unit tests is momentum against change.


There is no such thing as a bug-free app, and I do not agree with 100% code coverage (rather, write tests for what's important to you, i.e. prioritise). However, tests give you a higher level of confidence when you perform refactoring.

PS. Regression is far worse than momentum against change; when your app is `broken` you can't deliver it.


Buried in the rant are a couple of sound points:

>- You're trying to make a design before you learn anything about it.

>- People write something one way, but then are afraid to change it because they'll have to rewrite the testing code that goes along with it.

Both of these suggest tests of the wrong granularity; that people are, as he says, writing silly little fencepost error checks for every single function.


> Week 3-4: Write tests.

> Week 5-10: Write code.

??? what

The feedback cycle is one test, implement code. Not implement test suite for the entire program, then implement your code base.


Please, correct me if I am wrong, but the process he described is completely against TDD, and no wonder it did not work (they wrote the function first and added tests later, lots of tests; in TDD you would rather have more, shorter functions with a few tests for each):

- Function A is 147 lines long. It is the simple core of the program.

- Function A is committed to the repository on June 26th, 2002. Function A has four test cases. Nevertheless, a bug is found in Function A on the 28th and a patch is uploaded on July 6th. It contains two new test cases.

- This continues a bit. However, by August 2002, function A is mostly stable and has no fewer than thirteen test cases--mainly for fencepost errors and other idiotic things anyone can find with a stack trace. Except for a blip in early 2003, function A, now 152 lines long, is unchanged until mid-2006.


I'm writing an SMTP server, and when I started it I looked at the problem, and for the first time since I stopped being a student, I had a well specified problem where I could try TDD.

At the time I rationalized it and said - nah, I'm too lazy for doing that on a side project. But as soon as I tried to use it in the real world, I discovered that all those RFCs are just crude approximations of how people used the server (most of them are even incomplete). There's absolutely no compliant client or server out there, and TDD would have caught none of the bugs I discovered at the time. And that's with email - a protocol that people have been working on standardizing since before the Internet existed.

Sorry, but nowadays I'm extremely skeptical about TDD having any application at all. Not even for reinventing the wheel.


TDD is truly cult-like in its extreme. Usually it's managers and slightly weaker technical folk with a scrum certification who spew the absolutist TDD path.

I'm a contract Java developer and I know how to 'play the game'. The hypocrisy: the number of FizzBuzz clones I've written in a TDD fashion in interviews, only to then see that the production code has little to no unit tests.

I once got negative feedback on an interview because I wasn't TDD enough, after expressing the opinion that TDD works great for most use cases but there are limits, e.g. positioning of a front-end element isn't always best done with TDD. The guy interviewing was non-technical, and of course any dissent from TDD meant I was a bad fit.

Anyways - end rant. Like I said I just learnt to play the game.


It's not really a rant, it is a very strongly communicated opinion. And I agree with a lot of it.

I don't mind writing a few test cases during or after I am done with my work. This is mostly to protect someone (including me) from changing my perfect design after I have found it, though ;-)


Oh look another article with a provocative (e.g. link-bait) title. I'm sure this will be a well-balanced, nuanced piece on the costs and benefits of TDD, when it makes sense, when it does not make sense, and will leave me with a better understanding of the subject.

Oh.


Does anyone have a good counter-point worth reading? I understand that it's more of an anecdotal rant, but it's not the first time I've read something like this about TDD.

I've tried TDD, but I've found that it just stifles my productivity too much.


TDD isn't about writing tests. It's about designing architecture.

http://www.drdobbs.com/tdd-is-about-design-not-testing/22921...


It seems a lot of these discussions surrounding testing are very anecdotal; this link being a prime example. I remember hearing an interview with Greg Wilson regarding his book Making Software, [1] the premise of which is to apply a more rigorous methodology to understanding what makes software work.

If I remember correctly from the interview (I think it is here[2]), one conclusion was that TDD doesn't have a clear benefit when you add it to a project. On the other hand, in a survey, TDD projects are more likely to succeed because it is a habit common to good developers. I hope I am capturing the subtlety there. Essentially, TDD is not a silver bullet, but rather a good habit shared by many good developers. That was enough to convince me of the merits.

It's another problem altogether to try to institute TDD for a project, especially for a team. Like so many things in programming, TDD could be used and abused. The same could be said for JavaScript or [insert proper noun here]. If misunderstood or used incorrectly, TDD could be a drain on the project. A benefit--and this ties back into the idea of TDD as a habit--is that it forces the code you write to have at least one other client. This requirement would alter the way you write code and arguably for the better.

[1] http://shop.oreilly.com/product/9780596808303.do

[2] https://blog.stackoverflow.com/2011/06/se-podcast-09/


> I was once one of these newly educated kids. So, for a few years, I worked exclusively with the Test-First strategy. When it was over, the results were undeniable. The code--all of it--was horrible. Class projects, research code, contract programs, indie games, everything I'd written during that time--it was all slow, hard to read, and so buggy I probably should have cried. It passed the tests, but not much else.

Sounds like most everyone's first years :)


TDD has always looked to me like a good idea taken to ridiculously dogmatic extremes (a very common occurrence in software development, IMHO), but I think many of this author’s indiscriminate potshots miss the most important problems.

In order to write a test for something, you need to know what it is supposed to do, and testing will not tell you that. In order to make something that passes your tests, you have to design it, and a test does not tell you what that design should be - it can tell you if you failed in a specific way, but not what to do about it. There is a whole lot of analytical reasoning and technical judgement to software development that is ignored by TDD, and while thinking about test cases can help with this, it is an insufficiently powerful method to complete the job.

Agile methods have not changed this. Insofar as they form a complete development methodology (and I am not sure about that), they offer an empirical approach to discovering your requirements (which may or may not be appropriate to your situation), and the use of short-cycle iteration to gauge how you are progressing, but they are largely silent on the issues I raised above.


Well, it is not a great rant, but if you just read the conclusion he does have some good points: there really is no substitute for competence, the process can and does hinder good design, and it is used as a false crutch against incompetence. And I do think that test-first, and in general a large overburden of existing tests, does subconsciously limit refactoring and better design.

That's not to say all testing is bad. I do write some unit tests and some functional/integration tests, but by no means do I strive for some arbitrary % of coverage as if that means anything. You can have 100% coverage with completely brainless, useless tests that cover lots of simple, low-risk code. Any sense of security from that is beside the point if your team is incompetent: the overall design calcifies, becomes a mess, and nobody fully understands how it all works.

Targeted testing, I guess, is how I think of it: very targeted. I work on a small project in a small team; in a much larger project with lots of devs I would maybe have to rethink that, but then again, with properly sized (two-pizza) teams, that situation doesn't even come into existence.


Ok, so I read the first sentence and felt it set the tone of the piece. I continued reading, and it pretty much played out exactly as I thought it would.

>"Trying to improve software quality by increasing the amount of testing is like try[ing] to lose weight by weighing yourself more often."

So this is really funny to me, because I just read an article that in fact states that weighing yourself more often does help people lose weight. http://www.sciencedaily.com/releases/2015/06/150617134622.ht...

Just by looking at the page and the writing style, you can tell this is someone who has not progressed in their career skills since the early '90s.


In my first big project after university, an RPC API, I used test-driven development and it felt really good. I had about 300 tests in the end, and if I changed something, a few would blow up and I could fix it.

But it slowed development down massively.

At the start, because I had to set up the testing environment just as I had to set up the real software.

Then I had to mess around with the testing framework as much as with the project's frameworks.

Also, I had to write the features AND write their tests.

And later in the project, when new features broke old stuff, not only the features had to be fixed but also the tests.

I don't know, but I had the feeling that the time I spent on the test code was about the same as I would have spent fixing the resulting bugs later.


In my opinion it's maybe too harsh to dismiss TDD. It's certainly good in some cases. But if you start using it everywhere, everything will look like a nail when all you have is a hammer. Clearly not everything is easily unit-testable.

What I like about unit testing is that it forces you to separate out unrelated code and prevents you from writing spaghetti code.

In my personal opinion you should test the core functions, as someone already said, which validates 90% of the code with 10% of the time, and leave out the parts which can fail gracefully in case of a bug. It's impractical, if not impossible, to test 100% of the use cases of a big codebase. But you can test the critical parts.


Starting out with a demonstrably false quote...

http://www.hindawi.com/journals/jobe/2015/763680/


Ha, that is hilarious. I am actually trying to gain some weight, and I find I weigh myself too often. I think I could come up with a study that shows the opposite effect. After all, it doesn't have to be true, just significant.


Apparently the author didn't test the quote against a search engine.


> I am against writing unit tests in general

> How often have you seen a program crash? If it was developed by a large software company, chances are it was written using TDD. Clearly, TDD is not a magic bullet. So, TDD does not "prove your code works".

> Developing software is like a painting commission

These are several of the reasons I gave up a third of the way in. I rarely write tests first, but I can appreciate that it works for plenty of people - my brain just doesn't work that way.

It's hard to take anything in this article seriously because of the nerdrage and the dismissal of anything he disagrees with as "stupid".


This is a bad article, based on a fundamental misunderstanding of TDD, as virtually all the comments on the blog post itself and here on HN attest.

People don't spend weeks writing tests for all the functionality and then write the code. That's not what even the most die-hard TDD advocates do.

I think many people here (like me) clicked through to read a well-reasoned article about how TDD enthusiasm may have gone too far, but sadly, this isn't that.

HN readers would do well to just move on, and the author would probably be well served by the advice of some of the commenters on his blog to take this post offline.


There is a way out that he has not found (yet): design with complete testability as a design goal. TDD with no supporting design can indeed lead to disaster. I hope he can paddle past the rant pond.


> "Trying to improve software quality by increasing the amount of testing is like try[ing] to lose weight by weighing yourself more often."

I realize this is probably here to provoke a reaction, but it's still retarded and it's obviously wrong for the obvious reasons.

> You are writing code to test something that doesn't even exist yet. I am not rightly able to apprehend the kind of confusion of ideas that could provoke such a method.

I agree TDD can be taken too far and enforced too strictly (although I've yet to encounter that in practice).

That said: it's not stupid to write tests before the code they are supposed to test. The order of these things is there for a reason: if you don't write your tests before you have working code, how do you know your tests will detect the failure mode, and thus can be used to prove that your code is now working?

Just the other day I thought I had fixed a bug, and then proceeded to write a unit-test for it. The unit-test went green and I was happy.

But then I decided to comment out my fix and re-run the test. I assumed this step probably wasn't needed; I was confident, and knew after all that I had "fixed the bug".

But that way I could at least say I had followed the TDD mantra, which claims to be there for a reason: 1. write a test to reproduce the bug, 2. write the fix, 3. rerun the test, and 4. if green, commit.

Upon commenting out my fix and "needlessly" rerunning the tests, lo and behold: The test was still green.

My test failed to detect the error-condition. Which meant that my patch had probably not fixed the reported bug. Quality had not improved.

I rewrote the tests and managed to make them go red, that is, detecting the failure mode. Uncommenting my fix and rebuilding, my test was still red. My fix was indeed invalid!

Another round of investigation showed that I had misinterpreted the error condition and produced a "fix" which didn't solve the real problem.

And the "stupid" principles of TDD helped me detect a invalid fix, find the real issue and verify a real fix for it. TDD helped me increase quality.

Stupid indeed, eh?
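
For anyone who hasn't used that workflow, here is a minimal sketch of it using Python's built-in unittest. The function, the bug, and every name in it are invented for illustration; nothing here is taken from the parent's actual project:

    import unittest

    # Hypothetical example of the workflow above: parse_port() is an invented
    # function whose (invented) bug was silently accepting out-of-range ports.

    def parse_port(value):
        """Parse a TCP port from a string, rejecting values outside 1-65535."""
        port = int(value)
        if not 1 <= port <= 65535:  # the "fix": comment this out to simulate the bug
            raise ValueError("port out of range: %d" % port)
        return port

    class TestParsePort(unittest.TestCase):
        def test_rejects_out_of_range_port(self):
            # This test must go red with the fix removed; if it stays green,
            # it never exercised the failure mode and proves nothing.
            with self.assertRaises(ValueError):
                parse_port("70000")

    if __name__ == "__main__":
        unittest.main()

Rerunning the test with the range check commented out is exactly the "needless" step described above: if the test stays green, it's the test, not the fix, that is broken.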


> "Trying to improve software quality by increasing the amount of testing is like try[ing] to lose weight by weighing yourself more often."

Even giving the benefit of the doubt that this quote is an accurate metaphor, to begin with it is to begin with the assumption that TDD's sole motivation is improving software quality. TDD is a fantastic practice for ensuring conformity to a specification, which one might describe as software's correctness: a more objective metric than "quality".


What the author describes, writing the tests for an entire program before writing any of the program, may actually be stupid. But it is not TDD. According to Wikipedia it's a feature at a time, not a program at a time. I usually see it done one class or function at a time.

If the author had bad results writing tests for an entire program before writing any of the program, I'm not terribly surprised. But since doing so isn't TDD, it doesn't add even a single data point to the TDD discussion.


>So this is the first reason TDD fails: You're trying to make a design before you learn anything about it.

Well yes, I use my unit tests to learn what works and what doesn't. You're allowed to rewrite or scrap tests if they are no longer relevant. In the end I feel that having unit tests results in a better final design.

Yes, sometimes unit tests are a little contrived. But they can also help you design cleaner interfaces and increase your code reuse.


"I am against writing unit tests in general..."

After this sentence I abandoned the article.

What kind of serious software doesn't need automated tests? Maybe the kind that no one uses.


    If you want to change what a function does, you have to change all the tests you wrote for it.
This guy doesn't know how to write good tests at all. The whole point of testing is that you can change those functions without any fear of breaking anything, because the tests will prove the code still works the same way as before.
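
To make that concrete, here is a minimal, hypothetical sketch (invented function and names) of a test pinned to observable behaviour rather than implementation details, so the function body can be rewritten freely as long as the contract holds:

    import re
    import unittest

    def slugify(title):
        """Turn a title into a URL slug; the implementation is free to change."""
        return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

    class TestSlugify(unittest.TestCase):
        # These tests describe the contract, not the internals, so rewriting
        # slugify() (say, as a character loop instead of a regex) breaks nothing.
        def test_lowercases_and_hyphenates(self):
            self.assertEqual(slugify("Hello, World!"), "hello-world")

        def test_trims_leading_and_trailing_separators(self):
            self.assertEqual(slugify("  spaces  "), "spaces")

    if __name__ == "__main__":
        unittest.main()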


I tend to agree with this. First, no, I am not against writing unit tests; they are a great way to trap regressions. But test-driven DESIGN? Really?

Do you drive your car (software) by banging against the guard rails (unit tests) to reach a destination? They are there so that you don't drive off the road accidentally, not to guide your way.


I'd love to hear both sides of this story from experienced people (and not the ones who speak loudly one way or another).

We'll probably find experience on both sides of "good" and "bad". I'm curious about the "bad" and how that comes about!


> Week 3-4: Write tests.

> Week 5-10: Write code.

I'm not necessarily a TDD advocate but I've literally never heard anyone say you should spend two solid weeks writing tests and then six solid weeks writing code against those tests.

I mean, that's just idiotic.


I don't recognise most of this criticism. TDD is designing up front, but you're designing only a little bit up front. You implement the design; if it doesn't fit, you scrap it and try again. That doesn't matter, since it's only a little bit.


Painting is a science and should be pursued as an inquiry into the laws of nature. Why, then, may not a landscape be considered as a branch of natural philosophy, of which pictures are but experiments? -- John Constable.



Good observations! At that company, sure, TDD is ridiculous.

But other companies that require TDD don't do it that way, so the article takes a narrow, biased view without considering pros, cons, and different scenarios.


Let's see the author's bio. Only a few years of programming experience - check. Never had a job doing programming outside of academic research - check. Everything checks out, folks!


Just a sorry rant about how OP tried TDD and failed, and then he points to some project from 2004 that to me seems to be missing code ownership (people afraid of changing code they don't fully understand), but to OP seems to be doing TDD wrong.

It's a waste of time to read, as it doesn't bring anything new to the table. I wish the OP would at least attempt to reflect on what TDD is, what it's not, and why, oh why, so many devs seem to like it.


I hope I never work with the author..


> I am against writing unit tests in general, since experience shows that they actually prevent high-quality code from emerging from development--a better strategy is to focus on competence and good design.

False dilemma. Are competence and good design exclusive to people who don't do TDD?

> The basic idea--amazingly, one of the most popular methods of software engineering (and growing in popularity!)--is that, after you figure out what you want to do and how you want to do it, you write up a set of small test programs--each testing some tiny piece of the program that will exist. Then, you write the program.

No. You are wrong. It is not that.

> Let me emphasize: you write the test cases for your program, and then you write your program. You are writing code to test something that doesn't even exist yet. I am not rightly able to apprehend the kind of confusion of ideas that could provoke such a method.

You are writing specifications for a program that doesn't exist yet!? Nuts.

> The most important argument is a practical one: Test-First doesn't work.

It doesn't work because it doesn't work. Sad argument.

> I was once one of these newly educated kids. So, for a few years, I worked exclusively with the Test-First strategy. When it was over, the results were undeniable. The code--all of it--was horrible. Class projects, research code, contract programs, indie games, everything I'd written during that time--it was all slow, hard to read, and so buggy I probably should have cried. It passed the tests, but not much else.

So it didn't work for you, and therefore it's everybody else's problem but yours.

> So this is the first reason TDD fails: You're trying to make a design before you learn anything about it.

No. You are not. Indeed, depending on how you decide to do your tests, you should be able to change your design and the underlying implementation and no test should break. Please take a look at the outside-in approach.

> Week 3-4: Write tests.

> Week 5-10: Write code.

Oh my god. This is terrible. And no, this is not TDD either. This is inverse waterfall :P

> Even if you somehow succeed, TDD prevents incremental drafts by functionally requiring all tests for a module to pass before you get any real results.

Probably this is a misconception too. You don't write all the tests up front: one test, a small implementation pass, another test, another small implementation pass... You get results incrementally.

> There is literally no substitute for competence. If your coders don't have it, TDD won't fix it.

I fully agree on this, and it is probably a really common misconception. But false dilemma again: I don't think TDD came into this world to turn bad programmers into good programmers. It is just another tool in the toolset. You can have terrible programmers who do TDD and excellent programmers who do it too.

My recommendation: https://vimeo.com/68375232


Up next, on 2003's hottest takes...


>I am against writing unit tests in general, since experience shows that they actually prevent high-quality code from emerging from development

The essay doesn't have the high quality of content I'd expect from a PhD candidate in computer science. I'd expect fewer outbursts of passion and more discussion of pros/cons/tradeoffs.

First, without getting into "TDD" itself, let's just isolate whether "tests" are valuable. I think the examples of SQLite's rock solid reliability (extensive Tcl regression tests[1]) and NASA's disciplined software verification prove that tests help discover bugs and increase code quality.

Tests are valuable -- but they also have a "cost" to write. Let's cover the cost issue at the end.

When to write the tests. If the sequence is: write the code first and then the tests, you might call them regression tests. If you write the tests first and then the code (e.g. iterations to turn red FAILED into green PASSED), you can call it "TDD". And those TDD tests can still function later as regression tests.

To me, TDD acts as a "design" step or "outline": the 10,000-foot view. Before any plumbing code is written, what do you want the REST API or group of functions to "look like"? Is there unity and coherence to the collection of functions modeling the abstraction? The subsequent step of fleshing out the TDD with "expects()" and "asserts()" is just mechanical work to glue the edit+compile cycles to a verification target, but it's not the most interesting aspect philosophically.
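
A minimal sketch of what that "outline" step can look like, with invented names (a toy rate limiter, not anything from a real codebase): the test is written against the API you want before any plumbing exists, and only then is a first implementation filled in.

    import unittest

    # Hypothetical: sketching the shape of a rate limiter's API first.
    # At this point RateLimiter does not exist; the test is the 10,000-foot outline.

    class TestRateLimiterDesign(unittest.TestCase):
        def test_allows_up_to_limit_then_blocks(self):
            limiter = RateLimiter(max_calls=2, per_seconds=60)
            self.assertTrue(limiter.allow("client-a"))
            self.assertTrue(limiter.allow("client-a"))
            self.assertFalse(limiter.allow("client-a"))  # third call in the window is rejected

    # A first implementation, written only after the calling shape above felt right:
    class RateLimiter:
        def __init__(self, max_calls, per_seconds):
            self.max_calls = max_calls
            self.per_seconds = per_seconds  # ignored in this toy version
            self.counts = {}

        def allow(self, client_id):
            used = self.counts.get(client_id, 0)
            if used >= self.max_calls:
                return False
            self.counts[client_id] = used + 1
            return True

    if __name__ == "__main__":
        unittest.main()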

However, even though tests have a benefit, there is a cost. The cost-benefit works in some cases but not others:

In my experience, I'm completely sold on TDD (or regression tests) for foundational, library-type code. If you're writing a core string library that 100 developers at the company (or in the open source community) will link into their projects, I prefer seeing extensive regression tests covering all edge cases that prove it actually works the way the developer intended. It's not strange at all if the regression-test code outnumbers the actual code 10-to-1.

On the other hand, TDD that is mostly UI verification is extremely brittle. If you have TDD that simulates mouse clicks and has "expects()" on reading webpage UI elements to check things like whether the sales tax calculation is correct, you could easily get overwhelmed by all the extra work that synchronizing the actual code and the TDD scenarios generates. (E.g. a UX designer moves an icon 2 pixels or adds a row to a table and ends up breaking the entire TDD validation suite for developers.) I could see how TDD at that level would be counterproductive.

[1]https://www.sqlite.org/testing.html

[2]http://www.fastcompany.com/28121/they-write-right-stuff


The "problem" with TDD is that it can only tell you that your expectations have been met. It can't tell you if you are doing the right thing. Only that the things you guessed at are working in the way you decided they should work.

If, on the other hand, your problem is not very well defined, then your tests have a good chance of eventually becoming a liability. If you ever discover that your domain model, interfaces, or even selection of algorithms is insufficient (or just plain wrong), it's more likely that your pre-existing test code will be unusable than that it will be the sort of guide for refactoring it's touted to be. If you had guessed at the interface or algorithms correctly, but merely implemented them incorrectly, yes, the tests will guide you back to correctness, but I think that tends to be a big "if". Given that you don't understand the problem, the likelihood that you've made mistakes in the design is high.

Of course, this isn't the fault of TDD; that's exactly what it's meant to be. The problem is people thinking that it is equivalent to verification.

TDD is ultimately a design tool, one that is useful in cases where we have a very good idea of the constraints and requirements of a problem, where the problem is very, very well defined.

And that's why I say it's a code-smell. The problems that are well-defined are often that way because several people have created several implementations of it already. If I'm TDDing something, it usually means I'm rewriting code that already exists somewhere.

Now, I might have good reason to do that. Perhaps I have constraints that nobody else has ever considered. I generally hate the phrase "don't reinvent the wheel". I can think of at least 3 times off the top of my head that the wheel was successfully and usefully reinvented in the 20th century alone. But it's very important that you understand that is what you're doing. If you are aware you're reimplementing a known solution, you can now choose to study your forebears and get an even better understanding of the problem.

Personally, I think saved REPL sessions are better than TDD for problems that are not very well understood or well defined. If I were to try to build an MPEG encoder in JavaScript (for whatever reason), I'd certainly use TDD, because MPEG encoders have precisely specified inputs and outputs. But if I am trying to invent a completely new UI paradigm for virtual reality, then TDD is just not an applicable tool.

Also, it's a lot easier to tell myself to just discard a saved REPL session that ceases to be valuable after a major rethink in how the problem works than it is to discard tests. Plus, I find them to be a little more informative in terms of demonstrating behavior to other developers than test code.


The tone of the article is way over the top, but I 100% agree with this quote: "[E]xperience has shown repeatedly that good designs arise only from evolutionary, exploratory interaction between one (or at most a small handful of) exceptionally able designer(s) and an active user population--and that the first try at a big new idea is always wrong."

I agree so much that the best systems I've built have always been the 2nd revision of a "build one to throw it away" prototype.

That said, I like to surround my code with lots of unit tests to eliminate absolutely stupid bugs.



