Unit Tests: Just a Happy Side Effect of Test-Driven Development

Interviewing developers has given me an interesting, although not scientific, sampling of the thinking and mindset of software developers in the Atlanta area. While there have been a few bright spots, most of what I’ve learned has been disheartening.

I have found that TDD is still only an idea that most developers have heard about but never practiced. Of the ones who have practiced it, almost none of them understand what TDD was meant to achieve; they view it only as automated unit testing. I believe this is due to two factors working together to derail the original intent of TDD. First, TDD was named poorly. “Test” has had a specific meaning for a long time. It naturally biases us toward a certain type of thinking (strangely enough… testing and quality assurance). Second, most developers and development organizations would rather apply a formula or recipe to what they do than take the time and effort to deeply understand what makes software development a success or a failure (and all of the degrees in between).

Scott Bellware is the guy who first got me thinking about a new way to approach TDD. It started with a new name: Behavior Driven Development (BDD). I actually think the “D” should be changed to Design to complete the mind shift. In a nutshell, BDD attempts to get back to the XP view of the tests being documentation of what the code should do. In this way of thinking, the set-up code is referred to as context and the assertions are referred to as specification. It has helped reform my thinking about TDD for the better.
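To make the context/specification idea concrete, here is a minimal sketch in C++. The Account class and all of the names are hypothetical, invented for illustration; the point is only that the fixture name states the context and each method name states one behavior, so the test reads as documentation.

```cpp
#include <cassert>

// Hypothetical domain class, invented for this illustration.
class Account {
public:
    explicit Account(double balance) : balance_(balance) {}
    void Withdraw(double amount) { balance_ -= amount; }
    double Balance() const { return balance_; }
private:
    double balance_;
};

// In context/specification style, the set-up is the "context" and each
// assertion is a "specification" of one observable behavior.
struct when_withdrawing_from_an_account_with_sufficient_funds {
    Account account{100.0};  // context: the state under discussion

    void should_reduce_the_balance_by_the_amount_withdrawn() {
        account.Withdraw(40.0);              // the behavior under specification
        assert(account.Balance() == 60.0);   // the specification itself
    }
};
```

Read aloud, the fixture and method names together form the sentence the code is supposed to make true, which is exactly the documentation role XP intended tests to play.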

You can catch Scott pontificating on the subject on the latest Hanselminutes podcast. Scott doesn’t get as specific as I would have liked, but this interview shows the side of Scott that I like so much. He’s passionately advocating a valuable practice and philosophy without the insults and vitriol that he is prone to fall into. I highly recommend it.

[Update: My reference to “Scott” in the above paragraph is to Bellware–not Hanselman, who is always pleasant and rarely vitriolic.]

Spell-Checking for ReSharper

Your code will be written once. It will be read (by you and others) many times. Code should be easy to read. This means that I should be able to scan a screen full of code and quickly get a sense of what it does. Layout and spacing play a big part in that. Naming things with real, recognizable words is just as important.

If you buy that, it goes without saying that those words that make up your identifiers must be spelled correctly. I am a terrible speller. As a C++ coder, I loved that Visual Assist would give me the red squiggles under any word that I misspelled. This has been the only regret I have about moving from VA to ReSharper. In every other way, ReSharper is a win over what VA offered.

I’ve just discovered Agent Smith as a plug-in to ReSharper. I had to make a few tweaks and drop one of the checking rules, but I am loving it. My ReSharper experience is now complete!

Desktop Developer’s Introduction to Compact Framework Development: Part 2-Windows CE versus Windows Mobile

Sorting out the difference between Windows CE and Windows Mobile is the key to understanding the platform versioning (which can be very confusing). Here it is in a nutshell…

Windows CE is a set of modules that Microsoft releases for hardware vendors to build custom operating systems for their devices. Because these devices are so resource constrained, manufacturers don’t want to put any functionality on them that isn’t necessary.

It turns out that there are a whole bunch of devices that need about the same functionality, namely smart phones and PDAs. Instead of having each smart phone and PDA manufacturer build their own version of CE and add in all of the niceties that are needed to be productive on those devices, Microsoft offers Windows Mobile. In other words, Windows Mobile releases are instances of Windows CE specifically built for handheld devices.

The group that develops Windows Mobile is a customer of the CE group. The Windows Mobile releases always trail the CE releases and have their own (crazy) naming and versioning scheme. In the beginning, it was called PocketPC (first 2000, then 2002), then there was a split, as they offered Windows Mobile 2003 for PocketPC and Windows Mobile 2003 for Smartphone. PocketPC was optimized for a touch screen and keyboard while Smartphone was optimized for menu buttons and a numeric keypad.

With Windows Mobile 5 and 6 (which are found on most devices today), there are three flavors: Pocket PC without phone (classic), Pocket PC with phone (professional), and Smartphone (standard).

Wikipedia has a decent breakdown of the various combinations.

SOLID: What is Old is New Again

SOLID

Sometime around 2001, I came across a series of articles by Robert C. Martin. At that time they were already a little old—from The C++ Report in 1996. I didn’t really care. This guy (who I now know, as everyone else does, as “Uncle Bob”) had captured and expressed core principles that I knew from experience were the key to creating maintainable software. I knew it mostly from having violated them and paying the price.

The principles put forth in those articles express fundamental truths that are in play in any object-oriented software project.

In the fifth article in the series, Martin sums up the first four…

1. The Open Closed Principle. (OCP) January, 1996. This article discussed the notion that a software module that is designed to be reusable, maintainable and robust must be extensible without requiring change. Such modules can be created in C++ by using abstract classes. The algorithms embedded in those classes make use of pure virtual functions and can therefore be extended by deriving concrete classes that implement those pure virtual functions in different ways. The net result is a set of functions written in abstract classes that can be reused in different detailed contexts and are not affected by changes to those contexts.

2. The Liskov Substitution Principle. (LSP) March, 1996. Sometimes known as “Design by Contract”. This principle describes a system of constraints for the use of public inheritance in C++. The principle says that any function which uses a base class must not be confused when a derived class is substituted for the base class. This article showed how difficult this principle is to conform to, and described some of the subtle traps that the software designer can get into that affect reusability and maintainability.

3. The Dependency Inversion Principle. (DIP) May, 1996. This principle describes the overall structure of a well designed object-oriented application. The principle states that the modules that implement high level policy should not depend upon the modules that implement low level details. Rather both high level policy and low level details should depend upon abstractions. When this principle is adhered to, both the high level policy modules, and the low level detail modules will be reusable and maintainable.

4. The Interface Segregation Principle. (ISP) Aug, 1996. This principle deals with the disadvantages of “fat” interfaces. Classes that have “fat” interfaces are classes whose interfaces are not cohesive. In other words, the interfaces of the class can be broken up into groups of member functions. Each group serves a different set of clients. Thus some clients use one group of member functions, and other clients use the other groups.

The ISP acknowledges that there are objects that require non-cohesive interfaces; however it suggests that clients should not know about them as a single class. Instead, clients should know about abstract base classes that have cohesive interfaces; and which are multiply inherited into the concrete class that describes the non-cohesive object.
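The Open Closed Principle summary above can be sketched in a few lines of C++. The Shape names are the traditional illustration, not drawn from the article itself; what matters is that TotalArea is closed to modification while the hierarchy stays open to extension.

```cpp
#include <cassert>

// The extension point: an abstract class with a pure virtual function.
class Shape {
public:
    virtual ~Shape() = default;
    virtual double Area() const = 0;
};

// This algorithm is written once against the abstraction and never
// changes, no matter how many shapes are added later.
double TotalArea(const Shape* shapes[], int count) {
    double total = 0.0;
    for (int i = 0; i < count; ++i) total += shapes[i]->Area();
    return total;
}

// New behavior arrives as new derived classes, not as edits to TotalArea.
class Square : public Shape {
public:
    explicit Square(double side) : side_(side) {}
    double Area() const override { return side_ * side_; }
private:
    double side_;
};

class Rect : public Shape {
public:
    Rect(double w, double h) : w_(w), h_(h) {}
    double Area() const override { return w_ * h_; }
private:
    double w_, h_;
};
```

Adding a Circle tomorrow means adding a class; TotalArea and every other client of Shape are untouched, which is exactly the "extensible without requiring change" property.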
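For the Liskov Substitution Principle, the Rectangle/Square trap is the classic illustration of the "subtle traps" Martin mentions (the example is standard LSP lore, not taken from this summary). Square is-a Rectangle mathematically, yet substituting it confuses a client written against Rectangle's contract.

```cpp
#include <cassert>

class Rectangle {
public:
    virtual ~Rectangle() = default;
    virtual void SetWidth(double w)  { width_ = w; }
    virtual void SetHeight(double h) { height_ = h; }
    double Area() const { return width_ * height_; }
private:
    double width_ = 0.0, height_ = 0.0;
};

// A square must keep its sides equal, so each setter changes both...
class Square : public Rectangle {
public:
    void SetWidth(double w) override  { Rectangle::SetWidth(w); Rectangle::SetHeight(w); }
    void SetHeight(double h) override { Rectangle::SetWidth(h); Rectangle::SetHeight(h); }
};

// ...which breaks any client that relies on Rectangle's behavior:
// setting width and height independently should yield width * height.
bool ClientExpectationHolds(Rectangle& r) {
    r.SetWidth(5.0);
    r.SetHeight(4.0);
    return r.Area() == 20.0;  // holds for Rectangle, fails for Square
}
```

The derived class compiles and "works," but it violates the base class contract its clients depend on; this is the sense in which a function using a base class "must not be confused" when handed a derived class.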
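The Dependency Inversion Principle can be sketched the same way. The names here (MessageSink, OrderProcessor) are invented for illustration; the structure is what matters: the high-level policy and the low-level detail both depend on an abstraction, and neither depends on the other.

```cpp
#include <cassert>
#include <string>
#include <vector>

// The abstraction that both sides depend on.
class MessageSink {
public:
    virtual ~MessageSink() = default;
    virtual void Write(const std::string& message) = 0;
};

// High-level policy: depends only on the abstraction, never on a
// concrete detail such as a file, socket, or database.
class OrderProcessor {
public:
    explicit OrderProcessor(MessageSink& sink) : sink_(sink) {}
    void Process(const std::string& orderId) {
        // ...business rules would run here...
        sink_.Write("processed " + orderId);
    }
private:
    MessageSink& sink_;
};

// Low-level detail: also depends on the abstraction, so the usual
// policy-depends-on-detail arrow is inverted.
class InMemorySink : public MessageSink {
public:
    void Write(const std::string& message) override { lines.push_back(message); }
    std::vector<std::string> lines;
};
```

Because OrderProcessor never names a concrete sink, it can be reused with a file sink in production and an in-memory sink in tests, which is the reusability payoff the principle promises.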
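The Interface Segregation Principle's prescription (cohesive abstract base classes, multiply inherited into the one non-cohesive concrete class) looks like this in C++. The device names are illustrative, not from the article.

```cpp
#include <cassert>
#include <string>

// Two cohesive interfaces, one per group of clients.
class Printer {
public:
    virtual ~Printer() = default;
    virtual std::string Print(const std::string& doc) = 0;
};

class Scanner {
public:
    virtual ~Scanner() = default;
    virtual std::string Scan() = 0;
};

// The non-cohesive object exists, but clients never see it as a single
// fat class: it multiply inherits the cohesive interfaces.
class MultiFunctionDevice : public Printer, public Scanner {
public:
    std::string Print(const std::string& doc) override { return "printed:" + doc; }
    std::string Scan() override { return "scanned"; }
};

// A client coupled only to Printer is unaffected by changes to Scanner.
std::string RunPrintJob(Printer& p) { return p.Print("report"); }
```

Each set of clients knows only the interface it uses, so a change to the scanning interface never forces print clients to recompile.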

Martin later published The Single Responsibility Principle, which says that there should never be more than one reason for a class to change.
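A minimal sketch of the Single Responsibility Principle, with names invented for illustration: instead of one Report class that both computes and formats (two reasons to change), each class owns exactly one reason to change.

```cpp
#include <cassert>
#include <string>

// Changes only when the business rule (how a total is computed) changes.
class ReportCalculator {
public:
    double Total(double subtotal, double taxRate) const {
        return subtotal * (1.0 + taxRate);
    }
};

// Changes only when the presentation (how a total is displayed) changes.
class ReportFormatter {
public:
    std::string Format(double total) const {
        return "Total: " + std::to_string(total);
    }
};
```

A tax-law change touches only ReportCalculator; a layout change touches only ReportFormatter. Neither edit risks breaking the other.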

There were several aspects of these short articles that helped me so much…

  • The articles were short but clear.
  • The articles were available on-line for anyone to download and read.
  • Martin gave these principles names.

All of these aspects made these principles easy to promote—and I did! I preached all of these principles, whether in C++ or later C#. So, I am particularly pleased to see the recent resurgence in these principles as the ALT.NET crowd has arranged them into a nice acronym (SOLID) and made back-to-basics cool again. We see the SOLID principles all over blogs and articles in the agile community. It reaffirms my belief that the fundamentals may go in and out of style, but they are always what matters most in good software development.

Craftsmanship over Compromise

(Uncle) Bob Martin gave a keynote address at Agile 2008 where he spoke about many of the concepts in his book Clean Code. In that talk, he proposed amending the Agile Manifesto to add the value of “Craftsmanship over Crap.” He later realized that this wouldn’t work because no one actually values crap. In a follow-up blog post, he changed his proposal to “Craftsmanship over Execution” and offered up a challenge to find a better word than Execution.

I hereby offer my entry as Craftsmanship over Compromise. I like it because we all value compromise, just like we value processes and tools, comprehensive documentation, contract negotiation, and following a plan. Compromise is a necessary part of the software business. However, we should value the ethic of creating an effective, lasting solution over shortcuts that are instantly gratifying, but are more costly in the long-run.

What is Architecture?

Architecture is one of those overloaded terms in the software industry that has been so abused that we should consider scrapping it and coming up with a new word to replace it. However, that isn’t going to happen anytime soon; so, we’ll have to try to recover this one.

I recently came across Grady Booch’s definition of architecture, and I really like it…

All architecture is design but not all design is architecture. Architecture represents the significant design decisions that shape a system, where significant is measured by cost of change. –Grady Booch

This succinctly captures what I think makes for good architectures and good architects. The best architects have good judgment and intuition, based on experience, about what the most important design decisions are. They recognize which aspects of a system they’ve got to get right from the start and which can be firmed up later. They focus on the design decisions that would be costly to change later.

Desktop Developer’s Introduction to Compact Framework Development: Part 1-Introduction

As I’ve made my transition from the Microsoft desktop platform (x86, Windows, .NET, C#, etc.) to the world of handheld devices (ARM, Windows CE, Windows Mobile, .NET CF, etc.), there have been a number of fairly basic things that I’ve had to learn the hard way. They are so basic to being productive in the compact world that there ought to be a brief guide to bring an experienced desktop developer up to speed in short order. I didn’t come across that; so, I intend to provide that here for the next guy (or gal) who comes this way.

I am not an expert on this stuff (yet) but I am immersed in getting up to speed on them. So, I will introduce the important topics and point you to resources that I found helpful for the details.

The sorts of things I plan (so far) to cover include topics like… What is CE and how does it relate to Windows Mobile? What is the memory model on CE? What are the memory limitations and strategies for working within and around them? What do you have to do to run your .NET code on the handheld device? How do you do unit testing with CF code? I’m also going to document a few potholes that initially slowed me down.

Just so this initial post is not completely devoid of actual value, I’ll start with the most basic thing you need to know. Most handheld devices are built on a flavor of RISC processor called ARM (or Advanced RISC Machine). You can get a good overview from Wikipedia and the company view from ARM itself. While the primary measure of desktop processors is speed and throughput, the primary concern in the handheld world is power consumption. You learn that pretty quickly.

Looking for a ScrumMaster

[Update 1/29/2009: I’m happy to report that we are no longer looking!]

We (Agentek) are looking for an experienced ScrumMaster. If you know of someone in the Atlanta area that is looking for a position with a growing company that is early in its transition to agile software development and is committed to the Scrum process, please send him or her our way.

Here is the job description…

Agentek is a leading provider of mobile field force automation and communications solutions that enable businesses to achieve a predictable, reliable and repeatable customer experience from commitment to service fulfillment in real-time. From customized and pre-packaged applications to backend enterprise integration, Agentek delivers best-in-class mobile computing solutions that address the challenges companies face in areas such as field service, transportation and logistics automation. For additional information visit www.agentek.com.

We currently seek a Software Development Manager/ScrumMaster to join our expanding technical team.


Description:


Reporting to Agentek’s Vice President of Products and Engineering, our Software Development Manager/ScrumMaster will be working in an environment specifically designed for rapid application development of wireless applications for the Field Service Automation and Logistics market. The ideal candidate will have experience managing a team of 8-10 developers AND have experience as a ScrumMaster in a highly Agile development environment. This person will be ScrumMaster for 2 teams and provide guidance to developers as a manager and Scrum coach.


Responsibilities:

  • As ScrumMaster, coordinate and facilitate daily stand-up meetings, Sprint planning sessions, Sprint reviews, and Sprint retrospectives
  • Work closely with the Product Owners, ensuring that product backlogs are updated and ready for the teams
  • Facilitate discussion and conflict resolution
  • Help team members to understand roles in the Scrum process (planning, review, estimating, etc.)
  • Work with team to estimate story points
  • Help team to choose stories and tasks in a Sprint, decompose stories into appropriate size tasks, and ensure that team does not over commit
  • Work with team to define what “done” means
  • Ensure that Sprint burndowns are updated daily and are visible to the team
  • Encourage team to adopt Agile engineering practices
  • Ensure accuracy of information tracked during Sprints
  • Manage departmental resources and staffing – enhancing and maintaining a best-of-class engineering team, hiring and terminating as appropriate to ensure a solid team
  • Conduct performance reviews of team members, coaching team members in how to be most effective in the Agile/Scrum environment

Required Qualifications:

  • Bachelor’s Degree in Computer Science or equivalent
  • Certified ScrumMaster
  • Deep understanding of Agile principles including eXtreme Programming and Scrum
  • 5-10 years’ experience in software development and 2-5 years as a Development Manager or Director managing teams of at least 6 engineers
  • Previous development experience with Visual Basic, C#, ASP.NET, and MySQL
  • History of managing projects, both successfully and unsuccessfully
  • Ability to balance the servant/leader role
  • Strong business and technical vision
  • Excellent verbal and written communication skills
  • Strong presentation, leadership, and mentoring skills as a motivated person who encourages others
  • Strong team work and interpersonal skills – ability to communicate and work well with people at all levels
  • Ability to field and manage multiple competing priorities in a fast-paced environment
  • Experience working in an early-stage or venture-funded company
  • Ability to engage conflict positively and resolve problems constructively
  • Mobile application experience a big plus, but not required

To learn more about this exciting opportunity with a leading provider of mobile computing solutions, please send an email to greg.schillo@agentek.com, including a cover letter describing your experience of required skills in the body of the letter, and include or attach a Word, HTML, or PDF version resume.

Pair Programming Resources

Looking for a Good, Free Resource

After introducing the concept of pair programming, I went looking for (free, of course) resources to further explain how it worked and what the benefits are. At first I was surprised at how little I found. Sure, there are chapters in all of the XP books on pairing, but I was looking for something that I could just send a link to in an email. I figured there would be lot of good stuff to choose from.

I think there is not a lot of profound work on the Web explaining what pair programming is because, at its core, pair programming is pretty simple. In addition, there is not much written on its costs and benefits because it is a relatively new practice and not much academic research has been done on it.

What is Pair Programming?

As is the case with a lot of things, a good starting point for a brief overview of pair programming is to go to the Wikipedia entry. It has a surprisingly good, brief overview of the what and the why of the practice.

The Costs and Benefits of Pairing

Once you’ve got a basic idea of the concept, the next step is to analyze the pros and cons of the practice. For this, I recommend Alistair Cockburn and Laurie Williams’ paper The Costs and Benefits of Pair Programming. This paper appears to be about eight years old, but is still perfectly relevant. This might be the best free, less-than-ten-printed-pages introduction to pair programming.

Installing Pair Programming

My limited personal experience was pretty successful, but there are so many dynamics that come into play when introducing a practice like this into an organization where the concept is entirely new. So, I was still looking for more resources on how to introduce the concept and the practice of pairing. I ordered a copy of Pair Programming Illuminated by Laurie Williams and Robert Kessler. I’m only on the second of twenty-seven chapters, but what I’ve read so far seems to be insightful and helpful. I’ll post more about what I find as I work through it.

Introducing Pair Programming: Trying is Believing

I was a long-time skeptic of pair programming. More than anything I think I was resistant because the prospect of sharing a keyboard with another developer sounded unappealing, even if it worked. Like most developers, I like to tinker with code… try one thing… Google for other people’s solutions to the same problem… tinker some more. How would this work in a pairing situation?

People with more agile software development experience than I had would say two things: 1) it works, and 2) you have to try it to believe it. But for years, I only dabbled in it, never getting past the discomfort of it–sort of like deciding to take up running. You put the shoes on and huff out a mile or two only to feel terrible, and you put the shoes away for another six months until the guilt wells up again. You don’t make the breakthrough to enjoying it and realizing the benefits until you get better at it and get past the initial discomfort.

It was only after trying it for a sustained period of time at VersionOne that I came to appreciate that both things I was always told were true. It worked, and I had to try it to believe it. The increased focus, and the benefit of talking through the minor design decisions that get made as we write code, more than made up for any productivity loss from not having each of us work on a different problem at the same time. At VersionOne, during early feature design stages, we even found that we would have three of us, Visual Studio, and a whiteboard. We would design and code as a team. The consensus was that we tended to craft our best solutions and write our best code during those sessions.

Now I find myself in a position of advocating pair programming at Agentek. As you might guess, I’m saying things like “It works, but you’ll have to try it out for yourself to believe it.”