
Posted by aaron at 01:01AM, Friday, December 02nd, 2005

People who consider thinking taxing and something to be avoided shouldn't write computer software

I read an article on Cafe au Lait this afternoon that really has me worried for the future of the programming industry. The author thinks that test-driven development can replace development where the person writing the code attempts to fully understand the problem before programming a solution. I think this is incredibly wrong-headed. Is this an April Fool's joke?

Here's the nub of the article:

I'm done. I saved myself hours of hard mental effort trying to understand this code. In fact, I don't need to understand the code. I only need to understand what the code is supposed to do, and have test cases that prove it does it. This is a radical rethinking of how we program, but I think it's essential for modern programs. XOM is small enough that one person could understand it, but many programs aren't. Does anyone really know all the inner workings of Apache? or Mozilla? or MySQL? Maybe, but I doubt it; and I know no one really understands everything that's going on inside Linux or Windows XP.

The only way we can have real confidence in our programs is by testing them. Practical programmers long ago gave up on the fantasy of proving programs correct mathematically. Increasingly I think we're going to need to give up on the fantasy of even understanding many programs. We'll understand how they work at the low scale of individual lines, and we'll understand what they're supposed to do, but the working of the whole program? Forget it. It's not possible.


In fact, I think this is precisely what's making a lot of software unusable, and a lot of software shops incompetent, these days. I couldn't disagree more with the assertion that testing is the only way we can have confidence in our programs. Back in the old days, we had confidence because we understood what the hell we were doing. We understood the context of the smallest pieces, and the program as a whole. That's why computer science majors have been expected to complete all those rigorous courses on algorithm design, and theory and history of computation, and provability, and discrete mathematics, and artificial intelligence.

We took those courses so we could see arguments like this and call them out for the bullshit that they are. In the comments, people take this proposal to mean, essentially, that we could replace coding-with-understanding with fiddling around, perhaps using genetic algorithms, until we get a program that passes the unit tests. That's an interesting idea, but it misses the crucial point of genetic algorithms, which is that they rely on your fitness heuristics being very, very good -- and people who work with this stuff understand that they're never as good as they should be. They're sometimes so good they're adequate. But you have to spend a hell of a lot of time doing deep thinking in order to come up with good heuristics, and then a hell of a lot more analyzing various good heuristics in the hope of finding one adequate one.
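
To make that concrete, here's a minimal sketch of the evolve-until-the-tests-pass idea. It's mine, not the article's -- the toy problem, the names, and the numbers are all invented -- but it shows where the weight falls: the search never sees anything except the fitness score, so the whole exercise stands or falls on that heuristic.

    # A toy genetic algorithm (illustrative sketch only).
    # "Programs" are just lists of polynomial coefficients, scored
    # against test cases; the search sees nothing but fitness.
    import random

    TESTS = [(2, 4), (3, 9), (5, 25)]   # (input, expected): square the input

    def run_candidate(genome, x):
        # Interpret the genome as coefficients: c0 + c1*x + c2*x^2 + ...
        return sum(c * x ** i for i, c in enumerate(genome))

    def fitness(genome):
        # The ONLY signal the search gets: fraction of unit tests passed.
        return sum(run_candidate(genome, x) == want
                   for x, want in TESTS) / len(TESTS)

    def mutate(genome):
        g = list(genome)
        g[random.randrange(len(g))] += random.choice([-1, 1])
        return g

    def crossover(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    def evolve(pop_size=100, genome_len=4, generations=500):
        pop = [[random.randint(-2, 2) for _ in range(genome_len)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            if fitness(pop[0]) == 1.0:
                return pop[0]        # passes every test -- but is it right?
            elite = pop[:pop_size // 5]
            pop = elite + [mutate(crossover(random.choice(elite),
                                            random.choice(elite)))
                           for _ in range(pop_size - len(elite))]
        # Pass/fail fitness is a sparse signal; no guarantee this converges.
        return pop[0]

    print(evolve())

Notice that with four coefficients and only three test cases, a whole family of genomes scores a perfect 1.0 -- the search has no reason whatsoever to prefer the answer you actually meant. That's exactly why the heuristics have to be very, very good.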

The approach the author's suggesting is one that would result in millions of inept, worker-bee-like programmers trying to produce the fastest, cheapest solution to a tiny problem without understanding any of its context. He doesn't suggest that the unit tests measure performance, which was my first thought on reading his proposal. Sure, it's often possible to randomly stumble upon a solution to a hard problem that will pass unit tests, but making it pass them in a reasonable time-frame is a lot harder. And that's just the tip of the iceberg. What about other resource issues? What about disk space? What about shared resources? What about all the problems I find maintaining code, where someone made a solution that "works" but that eats up memory in subtle ways that cause huge problems later? What if I come up with a dirt-simple, contextually-blind solution to a problem that means I'll never be able to plug a whole class of devices into an extension port? Would the unit tests need to anticipate the entire panoply of use cases for the approach to work reliably? I think they would, and I think that's a pretty fatal flaw in this guy's cunning plan.
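
To put a face on that maintenance problem, here's a hypothetical pair of functions -- again mine, not the article's -- that satisfy exactly the same unit test. The test is blind to the quadratic scan and the ever-growing cache in the second one, which is precisely the kind of thing that "works" today and causes huge problems later:

    # Two implementations that satisfy the same unit test (illustrative only).
    import unittest

    def dedupe_fast(items):
        # Linear time; no state survives the call.
        seen, out = set(), []
        for x in items:
            if x not in seen:
                seen.add(x)
                out.append(x)
        return out

    _cache = {}   # grows without bound: a leak no functional test will catch

    def dedupe_leaky(items):
        key = tuple(items)
        if key not in _cache:
            out = []
            for x in items:
                if x not in out:   # O(n) membership test inside an O(n) loop
                    out.append(x)
            _cache[key] = out
        return _cache[key]

    class TestDedupe(unittest.TestCase):
        def test_both_pass(self):
            for f in (dedupe_fast, dedupe_leaky):
                self.assertEqual(f([3, 1, 3, 2, 1]), [3, 1, 2])
                self.assertEqual(f([]), [])

    if __name__ == "__main__":
        unittest.main()

Both implementations pass today; only one of them survives a year of real load.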

There's been this movement going on for years in the Java community to save programmers from having to think about anything too hard. "Use EJBs and follow these patterns, and all your problems will be magically solved!" "Use MVC and your programs will magically become maintainable!" And now, "Use unit tests and random, uninformed coding, and you won't even have to think about what you're doing!" What these all have in common is that they excuse the programmer from having to think about context -- about how pieces of code operate with one another and affect one another and the user. The problem is, if programmers don't contextualize what they're doing -- if they never think about big performance implications, never think about dramatically different ways of doing things, never think about users -- then what results is Oracle's JDeveloper on Windows 2000: a big, impressive, feature-laden program on a big, impressive, feature-laden OS, but buggy as all hell because of compatibility issues that were never tested for, and can't really be, thanks to a paradigm of drivers and extensions -- and even a general programming model -- that never steps back and considers the ultimate goal, which is to let users get the things they need done, quickly and easily.

Trying to strip context out of the mind of the programmer may seem like a worthy goal when you need to outsource something to India stat, but it's diametrically opposed to what it takes to build software that's actually useful and powerful. The shop-of-code-monkeys model is one whose tragedy we should all recognize by now.