Monday, August 24, 2009

Emergent behaviour in self organizing teams

In recent months I've been working within scrum. It's been a learning experience for me, since I hadn't worked in this type of environment before.

An important aspect of scrum that has taken some getting used to is the self organizing aspect of it. Historically I've always been in a model where there is the team lead/project manager who hands out assignments to programmers, monitors progress, and then hands out the next assignment. Over the years I've been both the programmer receiving and carrying out instructions, and the team lead identifying tasks and distributing them to team members.

With scrum there's just the list of tasks and people grab one. That can lead to some interesting dynamics. In a way it can lead to silos if people only do what they already know and are comfortable with. Perhaps that's not necessarily bad. I think the team lead (but more importantly the team itself) still has a role to play in ensuring that the less desirable tasks are distributed fairly and that the more interesting work is distributed fairly too.

In self organizing teams different themes may emerge. One would be the classical concept of the "chief programmer team". Once thought to be an academic construct, this could emerge in a scrum type setting, especially if one individual is a really outstanding developer. In that case you may find the others take on secondary tasks so that the key developer's time writing code is maximized. The key developer may also, explicitly or implicitly, take on the most difficult assignments while the more routine assignments are picked up by the others. In that sense it optimizes the workload in a way that the traditional team leader/task distributor model cannot.

I haven't looked into it, so I don't know how much research there is on self organizing teams. In software specifically this has only really taken hold in the last few years, so I think there are opportunities for social scientists to make their mark by studying this emerging area. There are probably some interesting and perhaps unexpected observations to be made.

This might be a good topic for someone looking for an honors project in the social sciences. Hang out with a scrum team or two for a few months, attend the daily standup, retrospectives, demos, etc., and see what you can come up with about the emerging group dynamic. This is quite doable at most universities (I'm talking about you, Laurier) as there are always tech companies near campus. Another possibly interesting angle might be to correlate DISC scores with scrum team dynamics to see if there's any influence there.

Monday, May 18, 2009

Thoughts on software source code review

Code review is an interesting subject. Software history is littered with companies that began doing code reviews with the best of intentions, then somewhere along the way became discouraged and just gave up on them.

Why does code review fail?

That's a good question. Most people agree that code review is beneficial. It catches bugs very early, when they are extremely inexpensive to fix. Knowing that the code is going to be reviewed keeps "crap" code out, as the programmer will be far less inclined to try to sneak in bad code if he knows he's going to have to answer for it. Code review can also identify and correct subtle defects that can be extremely difficult for the test team to reproduce in a lab testing environment, outside the field.

I suspect a lot of the problem is when code reviews take on a life of their own. Code review is governed by the 80/20 rule and 80% of the benefit is realized in the first 20% of time expended. After that the point of diminishing returns is quickly reached and it becomes quite inefficient and unenjoyable for all involved.

So the trick is to set it up to get that 80% of benefit quickly, the easy wins, then basically halt the review when it reaches the point of diminishing returns.

Well then, why do they drag out? What happens during that last 80% of the time? I think this is where software companies that start doing code reviews get into trouble. The reason reviews take on a life of their own is that the purpose of code review is not well defined. Some see code review as a time for mentoring, coaching, showing the "better way", being pedantic and showing off reviewer skill, or training less experienced staff.

What that comes down to is that "crap" code turns out to be extremely subjective. Here are some examples of Java code that I've seen sent back in code review for rewriting.

int x = f1();
f2(x);

The reviewer sent it back, insisting that it be changed to

f2(f1());

Here's another

void f3(int p1, String p2) {
float f4 = ...

The reviewer sent it back, insisting that it be changed to

void f3(final int p1, final String p2) {
final float f4 = ...

These are examples of what I call Programming by Proxy. We have valid, working, maintainable code being sent back for rewrite. There's nothing wrong with the code as originally written and it works properly; the reviewer just wouldn't have done it that way himself, so he demanded it be rewritten the way he would have done it. Once programming by proxy becomes the norm, code review is well on its way to the trash heap of company history as a well meaning but failed experiment.

How to keep reviews on track

The answer to this is as simple as it is obvious: impose discipline on reviewers. Specifically this: the only thing that is actionable in a code review is runtime errors. That's it, runtime errors only. Otherwise the code stands as written.
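
To make the distinction concrete, here's a hypothetical illustration of the kind of defect that is actionable under this rule (in Python rather than Java, just to keep the snippet short):

def last_item(items):
    # Actionable review comment: this indexes one past the end of the list,
    # so it raises IndexError at runtime every time it is called.
    # The fix is items[len(items) - 1], or more simply items[-1].
    return items[len(items)]

By contrast, a comment that the reviewer would simply have named or structured things differently is exactly the kind of thing that, under this rule, stands as written.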

I know some people may at first recoil at that idea. Indeed it seems counterintuitive, as though you are losing the value of doing code reviews. But you aren't losing value. By focusing on runtime errors only, you get the most valuable part of the code review: positive changes to the code that correct real, user-affecting defects. In the 80/20 rule, 80% of the benefit of code review is elimination of defects. The rest is rewriting code that already works, which is of marginal benefit at best.

What about the crap code, though? How do we keep that out if runtime errors are the only things reviewers are allowed to raise? There are a couple of answers to that. First of all, by having to submit the code for review, the programmer will be more inclined to check in good code, knowing for sure that someone is going to see it. The way bad code gets in is when a programmer is working by himself with no guidance or oversight, never having to answer for his work; then he can become sloppy and take shortcuts. Simply knowing that the code will be looked at will keep most of the crap code out.

Also, by applying the 80/20 rule the reviews will finish up much faster. That way, if the reviewer sees something in the review that he personally disagrees with (like the examples above), he is free to go in and fix it himself.

Thursday, April 02, 2009

Scrum and XP from the trenches

I recently finished reading another book about agile. This time it was Scrum and XP from the Trenches.

It was a pretty good book. Well written, an easy read. I liked it because it described real world experience using scrum over several years at a software company in Sweden. It's good because it shows that in practice there were some deviations from agile doctrine that worked for them.

For example he suggests a three week sprint length. This makes sense because then the team can get some momentum. Plus, the overhead of setting up and tearing down a sprint is very non trivial, so extending the actual clean development time is a good thing. He also recognizes that within a project the team members are constantly bombarded with issues that are not part of the sprint planning. In general only 40-60% of a team member's time will be available for the actual new feature work in a sprint. The rest may go to addressing new bugs, customer requests, the network being down, helping out people who are looking at code you know about, and the general delays and distractions that happen almost every day. Unlike the unrealistic viewpoint of "sprint safety", in the real world there is distraction, and the wise developer accounts and allows for it.

Another thing I really liked is his honest talk about personnel. In one passage he suggests the ideal sprint team size is no more than eight; if you find there are 10 people assigned to the sprint, he suggests ejecting the two weakest team members. Excellent practical advice. In another passage, about difficult to manage team members, he says in some cases to consider whether you even want this person on your team.

Some valuable common sense advice there. Far and away the real key to a successful software team is having predominantly strong developers on the team. That's more important than religious adherence to the development methodology fad of the quarter. The author recognizes that scrum will not transform weak developers into average developers, or average developers into good developers. After the standup meeting everyone just goes back to their cubicle to write good software, mediocre software, or bad software.

The author talks about the problems there were at the company before he joined, and I know he credits scrum with a lot of improvements. Still, I wonder how much of the improvement came from the undertone of the book: after he arrived he was allowed to get rid of low performing developers, something I suspect previous management was unwilling or unable to do.

I like what I would call real world scrum, like this book and the works of Scott Ambler. Some of the more dogmatic scrum theory is hard to grasp, so it's good to hear about bridging the gap between real world software development and the strange world of agile theory.

Friday, March 27, 2009

Python back on my desktop

This week I was doing some stuff with some log files. In the logs there were periodic messages that appeared every 30 seconds, which got distracting and annoying.

In UNIX this is of course swoosh, it's

tail -f logfile | grep -v pattern >> outfile

Alas, on my Windows desktop, Microsoft in its wisdom sees no need for conveniences like tail and grep.

So what to do? Of course, the same thing I did back at SupportSoft when I had to deal with this: write a Python script to tail and grep. It was pretty easy since there was already stuff on the Internet for that - of course in Python it was only about three lines anyway. And just like that I was in business, no more annoying periodic log messages.
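
A script in that spirit looks something like the sketch below (not the exact script, just the general idea; the file name and pattern to exclude come from the command line):

import sys
import time

def tail_filter(path, pattern):
    # Roughly equivalent to: tail -f path | grep -v pattern
    with open(path) as f:
        f.seek(0, 2)  # start at the end of the file, like tail -f
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.5)  # no new data yet, wait a bit
                continue
            if pattern not in line:
                sys.stdout.write(line)

if __name__ == "__main__":
    tail_filter(sys.argv[1], sys.argv[2])

Redirecting stdout to a file gives the same effect as the >> in the UNIX one-liner.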

It was a bit strange though when I went to write the script. I was surprised to notice that I didn't even have Python installed on my desktop. That's weird, have I really gone that long without it? Maybe it's because I switched jobs close to a year ago, and it takes a while to properly get settled in and comfortable, to the point of firing up Python to make your day more productive and enjoyable. Plus I've switched to a new team recently, so maybe the dynamic here is different in some more positive way that steered me back toward the habits of my past.

Anyway I'm glad to have it. Since then I've had my interpreter up all day long and it's just pleasing to see it there on my taskbar, like an old friend. I've even found some more uses for it, quick one line things. So welcome back to my desktop, Python.

Monday, March 09, 2009

Non functional requirements

This is kind of a strange one if you think about the name. I mean, why would you want the software to be non-functional?

Of course we know what non-functional requirements are. That is, stuff that an end user or even a tester cannot verify directly by using the system. Things like "all code is reviewed", "JSF will be used to generate the GUI web pages", "database access will be through stored procedures", "development will follow the S methodology". Stuff like that. But if you look at the name itself in isolation it's kind of funny.

Wednesday, February 18, 2009

Software development projects

I recall back in fall 2001 at Core Networks, shortly after I joined Core. We were kicking off a project for the next version of the CoreOS flagship product. The project lead mentioned that we were going to be doing some tweaks to the software development process for this project.

At that time Andrew, one of the earliest Core Networks employees from the founding in 1998, had this comment: "except for emergency bug fixes, no two software development projects in the history of this company have ever been done the same way."

Which was true at the time. What's interesting is that 7 years later, in 2008, when I left SupportSoft (which had acquired Core in 2004), Andrew's statement was just as true as in 2001. Except for emergency bug fixes, we still had never done two software development projects the same way.

With Core/SupportSoft, at least, there was always some desire to change the way we did software development projects. For whatever reason, management, the architects, the developers, or whoever, instead of standardizing on something that worked, were always reinventing how software was to be developed.

Perhaps it's like software itself: just as the software is perceived as infinitely flexible and changeable, the process of software development is perceived as infinitely adjustable and changeable, with some new fad or methodology always on offer. I would argue that the perception that changing the software development process is cost free encourages excessive and unnecessary tinkering and adjustments.

I think most everyone would agree that the old style late and large delivery to the test team is not a good way to go. However, once you get past that, I don't think people recognize that changing the software development process is disruptive and expensive. I suspect the point of diminishing returns is reached sooner than people would like to admit.


Some recent work on my street has made me think about it, I guess. There are 6 apartment buildings on my street. They are all wood, 3 storey, around 30 years old, built around the mid 70s. Although the ownership of the buildings is mixed, within the last 8 months or so 4 of the 6 buildings have had their exteriors renovated: front shingles and siding replaced. Two of the buildings have also gotten new vinyl windows to replace the legacy metal frame windows.

For the companies doing the work, I'm sure this is a pretty standard project: shingles and siding on a 30-something-year-old, 3 storey, 20 unit wood apartment building. I suspect that they followed the same methodology for each building, and did not make material changes from building to building in the order of work done, the number of people per building, starting on the front vs. the back, the amount of cleanup done per day and at the end, what work is done by the most experienced journeymen vs. the less experienced apprentices, etc. It was just the same standard way of working that everyone understands, that is predictable, and that works well; everyone knows what to do, and how, and when.

Thinking about it, I kind of envied them a little bit. Maybe there's something in that we could use in software.

Tuesday, January 06, 2009

swag

A little surprise to start off the work year. I guess while I was on vacation they gave out some stuff to everyone for the end of the year. There was still some left, so I went up to where HR had it and got mine.

It was a soccer theme, with company logo gear from Umbro. There was a carry bag with a soccer ball to inflate, a ball cap, and a long sleeve soccer shirt. It looks fairly nice.

In high tech the flow of swag is often a fairly good indicator of how the company is doing. At least it was pretty accurate back at SupportSoft and Core Networks. When the company is doing well the swag can flow pretty thick and fast: t-shirts, baseball caps, golf shirts, knickknacks, mouse pads, fridge magnets, pens, rugby shirts, jackets, free meals, etc. When the company is not doing well, you can tell because the flow of swag dries right up.