
Showing posts with the label testing

Improve Your Unit Testing 12 Ways

Everyone these days seems to understand that having unit tests is an important part of the work of developing software. Yet, people struggle with the practice. Here are 12 ways you can immediately improve your unit tests (and at the bottom, a few ways to go well beyond these 12 rules). Select assertions to provide clear error messages. Assert.That(x.Equals(y), Is.True) is not it. Try Assert.That(x, Is.EqualTo(y)). See the difference in how the test tool explains the failure. If the failures aren't clear, add a text message to the assertion.

Go for Maximum Clarity
The rule here is "you read the tests to understand the code; you should never have to read the code in order to understand the tests." The tests need to be better documentation than the code or its comments. When this is not true, it's generally because the code is doing too many things in a given function. When code is clear and simple, the tests can be also.

Watch Out For Side Effects
There...
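The assertion-selection advice above is phrased in NUnit terms, but the same principle applies in any framework. Here is a minimal Python sketch (the values are invented for illustration) contrasting an opaque boolean assertion with one that reports both values on failure:

```python
import unittest

tc = unittest.TestCase()
x, y = 10, 12

# Opaque: on failure the report says only "False is not true" -
# neither value appears in the message.
try:
    tc.assertTrue(x == y)
except AssertionError as err:
    print("assertTrue says:", err)

# Clear: on failure the report names both values, e.g. "10 != 12",
# so the reader sees what went wrong without opening a debugger.
try:
    tc.assertEqual(x, y)
except AssertionError as err:
    print("assertEqual says:", err)
```

The same improvement is available in NUnit, JUnit, pytest, and friends: pick the assertion that knows about both operands, and add a message only when the values alone don't tell the story.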

The BS Line

Consider that everyone has in their mind a fuzzy line: On the left side of the line is what we like to call "the real work" (abbreviation: RW). It's what we get to do. It's the value we provide to the world. It is usually about 30-40% of the work we do, which pays homage to Gall's observation about the inefficiency of human systems. The stuff to the right of the line is what we call "bureaucratic silliness" (abbreviation: BS). It is the stuff we have to do. These are things we do out of obligation (when reminded), or to satisfy the internal "process police." These things feel like nonsense that we have to deal with because that's the price of being a developer, tester, manager, whatever we consider ourselves to be.

The Difference
We tend to take on Real Work happily and go right to it. It allows us to practice and pursue the skills that define who we are in the workplace. They further the goals of the company, the customer, a...

TDD: more to know

The basics are well-known: everyone knows the basic cycle of TDD. You should also know the improved Industrial Logic version of the TDD cycle. You have heard Uncle Bob's three rules. But there is so much more to know. I have been gathering little sound bites for you which may help you build your skills and knowledge. Please feel free to drop additional factoids or questions. I'm happy to explain any of these at length if you like. Here is my list:
- Your code has two parts: the part you have covered with TDD, and the part that requires you to use a debugger.
- Microtests are F.I.R.S.T. (you cannot TDD after writing the code).
- Only microtests are appropriate for TDD; other tests are useful, but not for TDD.
- Microtests are not all your tests - you need other levels of test still.
- TDD does not validate your system; it only speeds development and improves quality.
- TDD without a pair programming partner is like programming while wearing only one shoe. ...
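F.I.R.S.T. stands for Fast, Isolated, Repeatable, Self-validating, and Timely. As an illustration (the function and test names here are invented, not from the post), a microtest honoring those properties touches no file, network, database, or clock:

```python
# A microtest sketch: fast (no I/O), isolated (builds its own data),
# repeatable (no hidden state), and self-validating (asserts, no eyeballing).
def word_count(text):
    """Count occurrences of each lower-cased word; the unit under test."""
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

def test_word_count_is_case_insensitive():
    assert word_count("Red red green") == {"red": 2, "green": 1}

def test_word_count_of_empty_text_is_empty():
    assert word_count("") == {}

test_word_count_is_case_insensitive()
test_word_count_of_empty_text_is_empty()
```

Because a test like this runs in microseconds, you can run it after every small change, which is what makes the TDD cycle practical at all.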

Three Steps to Safer Development

Eric Ries has suggestions on why/how to move your engineering practice forward and gain speed and reliability while you're at it: So how can I help the engineering manager in pain? Here's my diagnosis of his problem: He has some automated tests, but his team doesn't have a continuous integration server or practice TDD. Hence, the tests tend to go stale, or are themselves intermittent. No amount of fixing is making any difference, because the fixes aren't pinned in place by tests, so they get dwarfed by the new defects being introduced with new features. It's a treadmill situation - they have to run faster and faster just to stay at the level of quality/features they're at today. The team can't get permission from the business leaders to get "extra time" for fixing. This is because they are constantly telling them that features are done as soon as they can see them in the product. Because there are no tests for new features (or operational...

Tests and Immersion in Code: The relationship

There is a relationship between how slow tests are and how much we interact with the tests while we are developing software:
- If the tests run instantaneously, I'll run them constantly.
- If the tests run in 30 seconds, I will run them 3 times in any 5-minute window of coding.
- If they run in 1 minute, I will run them 3-4 times per hour.
- If they run in 10 minutes, I will run them maybe 3 times a day.
- If they run in an hour, I'll run them 5 times a week at most.
- If they run for a day, I will almost never run them.
If the build/test cycle is not incredibly fast, I will not participate in the code as fully and richly, instead falling back on my IQ, memory, and good intentions. This is why tools like Infinitest for Java, sniffer for Python, autotest for Ruby, and the like matter so much. It is also why there have to be build servers and test servers for any significant project. It is also why manu...
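Tools like sniffer and autotest watch the filesystem and rerun the suite on every save. A minimal polling sketch of the idea (the paths and the test command are placeholders; real tools use OS change notifications rather than polling):

```python
import os
import time

def snapshot(paths):
    """Map each existing file to its last-modified time."""
    return {p: os.path.getmtime(p) for p in paths if os.path.exists(p)}

def changed_files(before, after):
    """Files that are new, or whose mtime moved, between two snapshots."""
    return [p for p, mtime in after.items() if before.get(p) != mtime]

def watch(paths, run_tests, polls, delay=0.5):
    """Rerun the test command whenever a watched file changes."""
    before = snapshot(paths)
    for _ in range(polls):
        time.sleep(delay)
        after = snapshot(paths)
        if changed_files(before, after):
            run_tests()  # e.g. subprocess.call(["pytest", "-q"])
        before = after
```

The point is not this particular loop but the feedback it buys: when the suite reruns itself on every save, the cost of running the tests drops to zero and you stay immersed in the code.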

Product Health Rules!

Dealing with product health is simple in theory. You need to have a central build-and-test server and a repo that is treated as the central repo for the developers (in git, servers have no built-in roles). It has to be set up to run all the tests, whether they are unit tests, story tests (cucumber, etc.), or what have you. Now, the thing you have to know is the state of your local machine and the state of the build server. When I say GREEN, I mean "builds and all tests pass." When I say RED, I mean that something doesn't build or did not pass all the tests.

The Rules
The rules, in precedence order, are:
1. GET TO GREEN.
2. Green to green; anything else is obscene.
You need to know that your code is good, and the server's code is good, and you can push your code to the server. "But wait," you might say, "there are states unaccounted for here. What about pushing green to red?" "But Tim!" you may cry, "my code isn...
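The green-to-green rule can be written down as a tiny state check. This is a sketch, not any real CI API; only the GREEN/RED state names come from the post:

```python
GREEN, RED = "GREEN", "RED"

def may_push(local, server):
    """Green to green; anything else is obscene."""
    return local == GREEN and server == GREEN

def next_action(local, server):
    """What the precedence-ordered rules say to do right now."""
    if local == RED:
        return "fix your local build first"  # rule 1: GET TO GREEN
    if server == RED:
        return "help fix the server build; do not push or pull"
    return "safe to push"  # rule 2: green to green
```

Encoding the rule this way makes the precedence explicit: a red local build outranks everything, and pushing is only ever considered from a green-green state.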

A Process For Naming Tests

The excitement (aside from work and family travel) lately has been at Agile In A Flash, where we released a new blog post and card which reveals a process for naming tests. After the naming papers I've written while at Object Mentor, and the chapter I supplied to Bob Martin's Clean Code and the subsequent video episode, I am known as a "naming guy." I'm expected to always have a choice name in mind, in line with my own naming rules, for any circumstance in which I might find myself. True to form, anyone pairing with me runs the risk of being exasperated at my constant two-step of "What's that for, really?" and "Can we rename it right now?" My coworkers are often surprised when they see me use a silly or meaningless name early in a test or body of code. Why would I not know exactly what to name a variable, class, method, or test? How is a test fixture not obvious to me from the very beginning? Roy Osherove, in initial shock at the...

The Build is Broken, Now What?

Your team is hard at work, testing and coding and planning, and suddenly the build breaks. Now what can you do? The broken build might not be your fault at all, and besides, you have work of your own to do. You could go ahead and practice business as usual, but this is probably a bad idea. Here's what I say:
- Never pull from a broken build. Assume that version control is poison. If you are not the one fixing the build (and shouldn't you be?), then you should leave it be. The last thing you want to do is import brokenness into your local build environment. Import from a "green" build, and you will know that any brokenness is your problem.
- Test locally before considering pushing code to the CI build, even though it takes a bit more time. There is really no good time to NOT know that you've broken something. A little "due diligence" goes a long way.
- Never push to the CI server if your local build is broken. If your build is broken, it inconveniences...
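One way to mechanize "never push red" is a pre-push hook. This is a sketch, not a prescription; pass in whatever your real local build-and-test command is:

```shell
# A sketch of a .git/hooks/pre-push helper: refuse to push while the
# local build is red. Invoke with your build-and-test command, e.g.:
#   pre_push_check make test
pre_push_check() {
  if "$@"; then
    echo "Local build is GREEN - ok to push."
  else
    echo "Local build is RED - push aborted." >&2
    return 1
  fi
}
```

Because git aborts the push when the hook exits nonzero, the "due diligence" step stops being a matter of discipline and becomes a property of the workflow.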

You Cannot Possibly Do TDD after Coding

Just for the record: it is flat out impossible to "do the tdd" after the code is finished. This is just a matter of definition. You can write tests after the code is finished, but that has no relationship to TDD whatsoever. None. Nada. Zip. In TDD you write a new, failing test. Next you write enough code to pass the test. Then you refactor. This repeats until the code is done, writing and running tests continually. It shapes the way the code is written. It is a technique for making sure you write the right code, and that you do so in an incremental and iterative way. In ATDD, you have a failing acceptance test. You take the first failing test (or the easiest) and use TDD to build the code so that that part of the AT passes. You run the AT after each major step that you've built using TDD. When the AT is all green, you have completed the feature. This helps avoid stopping early and also helps avoid gold-plating. If tests were product, then it would make no difference...
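The cycle reads as follows in miniature (a made-up example, not from the post); each comment marks one leg of red-green-refactor:

```python
# RED: write a new, failing test first. Before normalize_id exists,
# running this test fails - that failure is the point.
def test_leading_zeros_are_stripped():
    assert normalize_id("007") == "7"

# GREEN: write just enough code to pass the test.
# REFACTOR: then clean it up - here a lstrip replaces a hand-rolled loop.
def normalize_id(raw):
    stripped = raw.lstrip("0")
    return stripped if stripped else "0"  # keep "000" meaningful as "0"

# The next failing test drives the next increment of behavior.
def test_all_zeros_become_single_zero():
    assert normalize_id("000") == "0"

test_leading_zeros_are_stripped()
test_all_zeros_become_single_zero()
```

Writing these tests after normalize_id was finished would exercise the code, but it would not have shaped it, which is the distinction the post is drawing.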

Agile Progress and Branching

This week, and last, we are doing our work in the release candidate (RC) branch, which will eventually be merged to trunk. We maintain a "stable trunk" system, with the RC as our codeline (for now). This is an intermediate step on our way to continuous integration. Partly because of the change in version control, the team has learned to rely more upon the tests, and is writing them quickly. We have had a noticeable increase in both unit tests (UTs) and automated user acceptance tests (UATs) in only one week. There were some problems with people checking in code for which some tests did not pass, but they have learned very quickly that this is quite unwelcome. We are painfully aware of the time it takes to run both test suites. The UTs suffer from a common testability problem, in that they were written to use the database and they sometimes tend to be subsystem tests rather than true unit tests. When they are scoped down and mocking is applied, they should be much faster...
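Scoping a subsystem test down usually means replacing the database call with a test double. A sketch using Python's unittest.mock (the repository and service names here are invented for illustration):

```python
from unittest.mock import Mock

class OrderService:
    """Unit under test: depends on a repository, not on a live database."""
    def __init__(self, repository):
        self.repository = repository

    def total_due(self, customer_id):
        orders = self.repository.open_orders(customer_id)
        return sum(order["amount"] for order in orders)

# The mock stands in for the database-backed repository, so the test
# runs in microseconds and needs no schema, connection, or fixtures.
repo = Mock()
repo.open_orders.return_value = [{"amount": 30}, {"amount": 12}]

service = OrderService(repo)
assert service.total_due("c-1") == 42
repo.open_orders.assert_called_once_with("c-1")
```

Tests shaped this way check the service's arithmetic and its conversation with the repository separately from the database, which is what turns a slow subsystem test back into a unit test.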