Wednesday, December 9, 2009

Code that glows in the dark

I'm currently enjoying The Pragmatic Programmer (A. Hunt and D. Thomas), which reads fluently thanks to the fact that it's built out of tips in the form of small chapters. Currently stranded at tip 15, I discovered a principle that I've already applied on a couple of projects, never realizing it was a principle. Here's a distilled summary of the chapter, shortened and mixed with an example:

Summarized book quote:
"There are two ways to fire a machine gun in the dark. You can find out what the target coordinates are. Then determine the environment conditions (wind, temperature, humidity, ...). After that investigate the specs of the cartridges and bullets and their interactions with the model of your gun. You can calculate the bearing and elevation of the barrel by a program. If all your tables of data are correct and the environment doesn't change, your bullets should land pretty close to the target.

Or you could use tracer bullets...

A tracer bullet is loaded on the ammo belt at intervals. When fired in the dark, its phosphorus ignites and leaves a trail to whatever it hits, providing the gunner with instant feedback.

When you're starting a new project or building something that hasn't been built before, and you're dealing with multiple layers/tiers/RPCs/libraries/languages, ... there are a lot of unknowns. The classic solution is to specify the system to death, clarifying each unknown: one big calculation of everything up front, then shoot and hope. Pragmatic Programmers tend to use tracer bullets."

Example:
For instance, let's say you have multiple clients on different platforms calling your new web service, which needs to access some info in the DB in order to call an external web service. Before the last step, giving back the result of this external web service, the answer has to be transformed to another format and placed on a queue towards another system.

For each client, you could create simple tests firing a SOAP request asking about the one row present in the DB. A mock web service deployed on a local or dev server, always returning the same (validated) result, could act as the external service. Some sort of persistent medium, like a file, could act as a queue to see if the mock answer got transferred in the proper format.

Functionally there are few or no requirements implemented, but you've built a framework that allows you to find out how the application hangs together as a whole (testing the different libs, marshalling, serialization, DB access, SOAP calls, ...). You can show the users the interactions in practice and provide your development team with an architectural skeleton on which to hang code.
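The setup above can be sketched in a few lines of Java. Everything here is hypothetical (the account number, the XML answer, the CSV queue format): a mock external service returning one canned answer, a transformation step, and a plain file standing in for the queue, with a single end-to-end check as the tracer bullet.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TracerBulletTest {
    // Mock of the external web service: always the same validated answer.
    static String mockExternalService(String accountNumber) {
        return "<balance account=\"" + accountNumber + "\">100.00</balance>";
    }

    // Transformation step: the XML answer -> the (made-up) CSV queue format.
    static String transform(String xml) {
        String account = xml.replaceAll(".*account=\"([^\"]+)\".*", "$1");
        String amount  = xml.replaceAll(".*>([0-9.]+)<.*", "$1");
        return account + ";" + amount;
    }

    public static void main(String[] args) throws IOException {
        Path queue = Files.createTempFile("queue", ".txt"); // file acting as the queue

        String answer = mockExternalService("BE68539007547034");
        Files.writeString(queue, transform(answer));

        // End-to-end check: did the mock answer reach the "queue" in the right format?
        String onQueue = Files.readString(queue);
        if (!onQueue.equals("BE68539007547034;100.00"))
            throw new AssertionError("unexpected queue content: " + onQueue);
        System.out.println("tracer bullet hit: " + onQueue);
    }
}
```

No real web service, DB or queue is involved yet; the point is only that the whole chain, from request to queue, can be fired and observed.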

Sunday, December 6, 2009

That's a negative

The talk at this year's Devoxx I enjoyed the most was the keynote by Uncle Bob, aka Robert C. Martin. The whole talk was brilliant, but there was one part I particularly enjoyed. In reaction to his claim that testing is enormously crucial for a software Craftsman, he was confronted with a question from the audience stating that ensuring code has a large test coverage also comes with a lot of hassle maintaining those tests. He then replied with the single coolest answer on the topic ever:
"Ok what's the alternative? We'll just dance in circles around our code claiming that it works then shall we?!?"

It's funny because it's true. To me, it seemed there was a bitter undertone in the way he replied, as if he had been asked this question far too many times and far too few people had listened. To reach a proper coding standard, testing has to be taken seriously.

For my first contract I was lucky enough to end up in a fully compliant TDD project. The methodology was taken seriously, along with the other eXtreme Programming principles such as pair programming.
We switched keyboards every 15 minutes and always started with the writing of a test. In the beginning this really helped me think about the contract of my class: what is it supposed to do? For me, after a couple of years the test 'first' became less useful, since thinking about the interface of your class comes naturally after some time. As long as you test your class well, you'll refactor it until it's working and readable.

However, we're not paid for working code, we're paid for bug-free (*) code. What does it mean to test your class well? What all developers, including myself, constantly have to keep in mind is not to forget the negative tests.

On a system level, negative testing refers to tests that try to break the system, for instance massive load tests. I'm referring to negative testing on a unit level: seeing how your code (the class under test, or CUT) responds to behaviour different from what's expected. The non-standard scenarios, if you like.
For instance, let's say a method takes an account number as a parameter. What will the method do if the account number format is invalid? Or if the format is valid but the account can not be found in the persistent store? Or if it's found but no longer active?
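Those three scenarios could be covered with unit tests along these lines. The account service, the in-memory store and the IBAN-like format are all made up for illustration; the point is that every negative branch gets its own explicit check next to the happy path.

```java
import java.util.Map;

public class AccountServiceTest {
    enum Status { OK, INVALID_FORMAT, NOT_FOUND, INACTIVE }

    // Canned "persistent store": account number -> active flag.
    static final Map<String, Boolean> STORE =
        Map.of("BE68539007547034", true,    // existing, active account
               "BE71096123456769", false);  // existing, but no longer active

    static Status check(String accountNumber) {
        if (accountNumber == null || !accountNumber.matches("BE\\d{14}"))
            return Status.INVALID_FORMAT;             // malformed number
        Boolean active = STORE.get(accountNumber);
        if (active == null) return Status.NOT_FOUND;  // valid format, unknown account
        return active ? Status.OK : Status.INACTIVE;  // known but closed
    }

    static void expect(boolean condition, String scenario) {
        if (!condition) throw new AssertionError(scenario + " failed");
    }

    public static void main(String[] args) {
        // The happy path...
        expect(check("BE68539007547034") == Status.OK, "positive");
        // ...and the three negative scenarios from the text.
        expect(check("not-a-number") == Status.INVALID_FORMAT, "invalid format");
        expect(check("BE00000000000000") == Status.NOT_FOUND, "not in store");
        expect(check("BE71096123456769") == Status.INACTIVE, "no longer active");
        System.out.println("all negative scenarios covered");
    }
}
```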

Negative testing might sound logical or easy to do, but two things get in the way. First of all, you need to stay disciplined enough (read: don't be too lazy to implement the additional test cases). All...the...time.
Second of all, it's hard to negative test code that you've written yourself. It is inherently human to find the code you wrote two minutes ago easy and clear. It does what it needs to do, right? When you're pair programming, your pair might have an interesting view on the code you wrote, providing you with a little more feedback. This usually results in additional test cases.

But not everyone is pairing. You might be the only developer on the team. What helps for me is the black-box testing approach: looking at the interfaces you're dealing with, but not at the implementation.
For instance, on an individual class level, think about how heavily your methods depend on their parameters. If a parameter is optional for a method, how does your code react when it's not there? What if it's there but contains a functionally wrong value? If a parameter is absolutely crucial and it's not there, does this indicate a bug? If it does, you might want to fail early and add an explicit not-null check as your first statement.
On a higher level, if your class depends on the output of collaborating classes, is the return value of that class always as expected? What happens when it's not?
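A minimal sketch of such a fail-early check (the method and parameter names are hypothetical): a crucial parameter that turns out to be null indicates a bug in the caller, so the very first statement refuses it instead of letting the bad value travel deeper into the system.

```java
import java.util.Objects;

public class FailEarly {
    static String formatGreeting(String customerName) {
        // Crucial parameter: null here means the caller has a bug, so fail early.
        Objects.requireNonNull(customerName, "customerName is crucial; null here is a bug");
        return "Dear " + customerName.trim() + ",";
    }

    public static void main(String[] args) {
        System.out.println(formatGreeting("  Bob "));
        try {
            formatGreeting(null);   // the negative case
        } catch (NullPointerException expected) {
            System.out.println("failed early: " + expected.getMessage());
        }
    }
}
```

The negative test for such a method is then simply asserting that the exception is thrown, rather than asserting on some corrupted result further down the line.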

Of course, integration tests cover a lot of these scenarios at a certain point in time. However, you never know when your piece of code will be called in future, different scenarios.
That's why negative testing is important.

I'm interested in hearing about your experience in negative testing, so comments are welcome.

(*) This is a utopia of course, but we should at least strive towards it.

Friday, December 4, 2009

The birth of a blog

Ah yes, another blog is what the world needs... I've been thinking a lot about the possible usefulness of my own blog and I finally decided to go for it (praise the lord).
There is no point in learning interesting things and keeping them to yourself. By sharing them here, other people can give their opinions on my ideas and I can see if they match. Enjoy and by all means, feel free to comment.