I remember the heyday of Joel on Software back in the early 2000s. It was a great site with excellent articles coming out every month. The forums were good too.
One of the posts with remarkable sticking power was the Joel Test. Back in 2000 it was a revelation: a simple way to quantify how good your team was. And Microsoft, then the top of the heap, scored all 12. Joel was probably right that most teams at the time were about a 2 or 3.
Today the industry has matured, helped by tools like Jenkins, Bugzilla, and git being available for free. Most teams I've been on over the years score around 7-10 on this scale, and scores have generally improved over time. Culturally, a lot of that list is now standard: most teams, even mediocre ones, would put those practices in place on their own without being told to and without resistance from management. And since the stack is generally free, it only takes one good developer who cares about it to establish a pretty good toolchain on some spare Linux VM, at no cost to the firm.
So, using the Joel Test as a starting point, what are some guidelines for today's software development teams? Well, here goes:
1. Do you use visual tools?
2. Do you have automated testing?
3. Do you use static source code analysis?
4. Is all code reviewed?
5. Do you build on every check-in?
6. Do you level the workflow?
7. Do you use one piece flow?
1. Do you use visual tools?
For your current project, how much work is in development? How much is in code review? How much is with the test team? How many requirements and defects are in the project scope but not yet assigned to a developer? At this moment, how many issues on average are assigned to each developer?
With visual tools the process runs more smoothly, because problems such as open code reviews piling up, or defects being found faster than they are fixed, are obvious and public. This allows issues to be identified and addressed much earlier, before they become a crisis.
2. Do you have automated testing?
Since we work with computers, let's get the computers to do the testing. There are some things computers are good at, and running software tests is one of them.
With powerful modern tools like PowerMock and Mockito, many of the historical barriers that made automated testing difficult or painful are gone. Developers, in general, will write code; with automated testing we get development on board writing self-testing code, as in the sketch below.
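For illustration, here is a minimal sketch of such a test using JUnit 5 and Mockito (the RateProvider and PriceCalculator names are invented for the example). The dependency that would normally be slow or remote is mocked out, so the test runs fast and deterministically on every build:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.*;

    import org.junit.jupiter.api.Test;

    // Hypothetical collaborator that would normally call a remote service.
    interface RateProvider {
        double rateFor(String currency);
    }

    // Hypothetical class under test.
    class PriceCalculator {
        private final RateProvider rates;
        PriceCalculator(RateProvider rates) { this.rates = rates; }
        double inCurrency(double usdAmount, String currency) {
            return usdAmount * rates.rateFor(currency);
        }
    }

    class PriceCalculatorTest {
        @Test
        void convertsUsingProvidedRate() {
            // Mockito replaces the remote dependency with a canned stub.
            RateProvider rates = mock(RateProvider.class);
            when(rates.rateFor("EUR")).thenReturn(0.9);

            PriceCalculator calc = new PriceCalculator(rates);

            assertEquals(90.0, calc.inCurrency(100.0, "EUR"), 1e-9);
            verify(rates).rateFor("EUR"); // the collaboration happened as expected
        }
    }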
3. Do you use static source code analysis?
Tools like Coverity and Klocwork are awesome. They find the really hard stuff, like: did you know this API can return null in some obscure but possible scenario? What happens in our code then? Something the programmer overlooked, but the static analysis found.
They also find the "easy" stuff: uh, this will NPE. Oops, quick fix, yay. A contrived example of both is below.
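Here is a sketch of the kind of pattern these tools flag (class and method names invented for illustration):

    import java.util.HashMap;
    import java.util.Map;

    class SessionRegistry {
        private final Map<String, String> sessions = new HashMap<>();

        // Map.get returns null when the key is absent, which is easy to forget.
        String ownerOf(String sessionId) {
            return sessions.get(sessionId);
        }

        boolean isAdminSession(String sessionId) {
            // Static analysis flags this: ownerOf() can return null for an
            // unknown id, so the equals() call can throw a NullPointerException.
            return ownerOf(sessionId).equals("admin");
            // The quick fix: return "admin".equals(ownerOf(sessionId));
        }
    }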
4. Is all code reviewed?
Code review, done properly, is an effective and cost-efficient way to find and fix software errors early, when they are still very inexpensive to fix.
5. Do you build on every check-in?
If someone checks in code that doesn't compile, for whatever reason (such as forgetting to add a new file to the repo), it should be made known so it can be fixed ASAP.
This checking is also part of the automated testing: after compiling the code, it makes sense for the computer to also run the automated tests, since that's free at that point. Again, if a developer broke, or possibly broke, something, then everyone should know about it right away so it can be fixed right away. A minimal pipeline doing this is sketched below.
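Since Jenkins has already come up, here is a minimal sketch of a declarative pipeline that does exactly this on every check-in, assuming a Maven project (the specifics are illustrative; swap in your own build commands):

    // Minimal Jenkins declarative pipeline: compile and test every check-in.
    pipeline {
        agent any
        triggers {
            pollSCM('H/5 * * * *')  // or a push webhook from the repo host
        }
        stages {
            stage('Build') {
                steps {
                    sh 'mvn -B clean compile'
                }
            }
            stage('Test') {
                steps {
                    sh 'mvn -B test'  // the automated tests run for free here
                }
            }
        }
    }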
At a previous company there was an informal rule: "never sync before 10 AM." To sync was to pick up whatever had been checked in since yesterday that didn't compile or didn't work. By waiting until 10 you could be sure the offender was in, so that after spending time manually investigating the problem you could ask him to look at it and fix it. So in the old days, when someone checked in bad code it typically wasn't noticed or fixed until the next day.
6. Do you level the workflow?
One reason to use visual tools is to identify lumps and bottlenecks in the workflow. If code review is being neglected and backing up, then announce: "halt production of new code until the reviews are cleaned up." For many years I've heard people throw the term "waterfall" around like some kind of stick to hit people with, but the team has to be willing to actually do something to combat waterfall, which is something of a natural entropy state. Techniques like Kanban and lean help level out the amount of outstanding work at each station in the software development process.
7. Do you use one piece flow?
Queuing is toxic to software development; queues destroy value stream productivity. If you look at the lifecycle of a feature or reported defect on many teams, you will find that the work spent the vast majority of its time idle, sitting in some queue making zero progress, waiting for a small amount of attention from someone before going into the next queue and idling again.
A defect is reported. It sits until the 10 AM triage meeting the next day, where it is assigned to a developer who already has multiple issues assigned. It sits idle for a few days until the developer gets to it. The developer codes and tests a fix in a couple of hours, then creates a code review. That waits for the reviewers to get to it; more than a day passes for a review that is an hour of actual work. The developer checks in, but the official build isn't until next week, so the work is stalled: done but not delivered to the test team. A week later there's the build, and the item is one of a large batch delivered in it. The test team gets to it when they get to it, perhaps a few days later, after running the full regression first. It takes half an hour for the tester to retest and confirm the fix.
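Tally up that (admittedly illustrative) timeline: a day to triage, a few days assigned but idle, a day-plus waiting on review, most of a week waiting for the build, a few more days in the test queue. Call it around 13 working days elapsed, against roughly three and a half hours of actual touch time (two hours coding, one hour reviewing, half an hour retesting). That is a flow efficiency of about 3%; the item sat idle for about 97% of its life.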
With one piece flow, these disconnects, queues, asynchronous handoffs, and large batches of work are sharply reduced or eliminated. Once work on an item begins, it proceeds expeditiously, with little or no delay between stations.