Desktop Developer’s Introduction to Compact Framework Development: Part 3 – C++/CLI on the Compact Framework

It may seem like a waste to have a whole post dedicated to a feature that isn’t there, but knowing this fact up front would have saved me some mental effort and disappointment.

The Compact Framework is necessarily small. Microsoft puts in only the functionality that developers are likely to actually need. This means that a lot of the stuff you’d like to be there isn’t. It also means that you might need to interoperate with native code to do some of what you want to do.

If you are like me, this means you would like to use C++/CLI (the old-school .NET technology formerly known as Managed C++) to do that interop. You, like me, would be sorely disappointed. C++/CLI is not supported on the Compact Framework, and as far as I know there are no plans to add it. Prepare to P/Invoke.
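For the curious, here is roughly what that looks like. Below is a minimal C# sketch of a P/Invoke declaration, assuming you want to call a couple of Win32 functions from the Compact Framework; the NativeMethods class name is my own. The one wrinkle worth memorizing is that Windows CE exports the Win32 APIs from coredll.dll, not from kernel32.dll and user32.dll as on the desktop.

```csharp
using System;
using System.Runtime.InteropServices;

// A minimal P/Invoke sketch for the Compact Framework.
// Windows CE exports its Win32 surface from coredll.dll.
internal static class NativeMethods
{
    [DllImport("coredll.dll")]
    internal static extern uint GetTickCount();

    // CE is Unicode-only, so marshal strings as wide characters.
    [DllImport("coredll.dll", CharSet = CharSet.Unicode)]
    internal static extern int MessageBoxW(
        IntPtr hWnd, string lpText, string lpCaption, uint uType);
}

internal static class PInvokeDemo
{
    internal static void Run()
    {
        // Call straight into the native OS from managed code.
        uint ticks = NativeMethods.GetTickCount();
        NativeMethods.MessageBoxW(
            IntPtr.Zero,
            "Milliseconds since boot: " + ticks,
            "P/Invoke",
            0);
    }
}
```

It isn’t as convenient as C++/CLI’s mixed-mode magic, but for straightforward flat API calls it gets the job done.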

Unit Tests Just a Happy Side Effect of Test-Driven Development

Interviewing developers has given me an interesting, although not scientific, sampling of the thinking and mindset of software developers in the Atlanta area. While there have been a few bright spots, most of what I’ve learned has been disheartening.

I have found that TDD is still only an idea that most developers have heard about but never practiced. Of the ones who have practiced it, almost none understand what TDD was meant to achieve; they view it only as automated unit testing. I believe this is due to two factors working together to derail the original intent of TDD. First, TDD was named poorly. “Test” has had a specific meaning for a long time, and it naturally biases us toward a certain type of thinking (strangely enough… testing and quality assurance). Second, most developers and development organizations would rather apply a formula or recipe to what they do than take the time and effort to deeply understand what makes software development succeed or fail (and all of the degrees in between).

Scott Bellware is the guy who first got me thinking about a new way to approach TDD. It started with a new name: Behavior Driven Development (BDD). I actually think the second “D” should be changed to “Design” to complete the mind shift. In a nutshell, BDD attempts to get back to the XP view of the tests as documentation of what the code should do. In this way of thinking, the set-up code is referred to as the context and the assertions are referred to as the specification. It has helped reform my thinking about TDD for the better.
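To make the vocabulary concrete, here is a minimal sketch of a context/specification style test in C#. The Account class is hypothetical and NUnit is just one possible host; the point is that the fixture name describes the context and each test method reads as a single specification of behavior.

```csharp
using NUnit.Framework;

// Hypothetical domain class, included only so the sketch compiles.
public class Account
{
    public decimal Balance { get; private set; }

    public Account(decimal openingBalance)
    {
        Balance = openingBalance;
    }

    public bool Withdraw(decimal amount)
    {
        if (amount > Balance) return false;
        Balance -= amount;
        return true;
    }
}

// The fixture name is the context; each test is one specification.
[TestFixture]
public class When_a_withdrawal_exceeds_the_balance
{
    private Account _account;
    private bool _succeeded;

    [SetUp]
    public void EstablishContext()
    {
        _account = new Account(50m);
        _succeeded = _account.Withdraw(100m);
    }

    [Test]
    public void Should_refuse_the_withdrawal()
    {
        Assert.IsFalse(_succeeded);
    }

    [Test]
    public void Should_leave_the_balance_unchanged()
    {
        Assert.AreEqual(50m, _account.Balance);
    }
}
```

Read the fixture and method names aloud and you get a sentence: when a withdrawal exceeds the balance, it should refuse the withdrawal and leave the balance unchanged. That is the kind of documentation XP was after.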

You can catch Scott pontificating on the subject on the latest Hanselminutes podcast. Scott doesn’t get as specific as I would have liked, but this interview shows the side of Scott that I like so much. He’s passionately advocating a valuable practice and philosophy without the insults and vitriol that he is prone to fall into. I highly recommend it.

[Update: My reference to “Scott” in the above paragraph is to Bellware–not Hanselman, who is always pleasant and rarely vitriolic.]

Spell-Checking for ReSharper

Your code will be written once. It will be read (by you and others) many times. Code should be easy to read, which means I should be able to scan a screen full of code and quickly get a sense of what it does. Layout and spacing play a big part in that. Naming things with real, recognizable words is just as important.

If you buy that, it goes without saying that the words that make up your identifiers must be spelled correctly. I am a terrible speller. As a C++ coder, I loved that Visual Assist would put red squiggles under any word I misspelled. This has been my only regret about moving from VA to ReSharper. In every other way, ReSharper is a win over what VA offered.

I’ve just discovered Agent Smith as a plug-in to ReSharper. I had to make a few tweaks and drop one of the checking rules, but I am loving it. My ReSharper experience is now complete!

Desktop Developer’s Introduction to Compact Framework Development: Part 2 – Windows CE versus Windows Mobile

Sorting out the difference between Windows CE and Windows Mobile is the key to understanding the platform versioning (which can be very confusing). Here it is in a nutshell…

Windows CE is a set of modules that Microsoft releases so that hardware vendors can build operating systems customized for their devices. Because these devices are so resource-constrained, manufacturers don’t want to put any functionality on them that isn’t necessary.

It turns out that there are a whole bunch of devices that need about the same functionality, namely smart phones and PDAs. Instead of having each smart phone and PDA manufacturer build their own version of CE and add in all of the niceties that are needed to be productive on those devices, Microsoft offers Windows Mobile. In other words, Windows Mobile releases are instances of Windows CE specifically built for handheld devices.

The group that develops Windows Mobile is a customer of the CE group. The Windows Mobile releases always trail the CE releases and have their own (crazy) naming and versioning scheme. In the beginning it was called Pocket PC (first 2000, then 2002); then there was a split, with Windows Mobile 2003 for Pocket PC and Windows Mobile 2003 for Smartphone. Pocket PC was optimized for a touch screen and keyboard, while Smartphone was optimized for menu buttons and a numeric keypad.

With Windows Mobile 5 and 6 (which are found on most devices today), there are three flavors: Pocket PC without phone (classic), Pocket PC with phone (professional), and Smartphone (standard).

Wikipedia has a decent breakdown of the various combinations.

SOLID: What is Old is New Again

Sometime around 2001, I came across a series of articles by Robert C. Martin. At that time they were already a little old—from The C++ Report in 1996. I didn’t really care. This guy (whom I now know, as everyone else does, as “Uncle Bob”) had captured and expressed core principles that I knew from experience were the key to creating maintainable software. I knew it mostly from having violated them and paid the price.

The principles put forth in those articles express fundamental truths that are in play in any object-oriented software project.

In the fifth article in the series, Martin sums up the first four…

1. The Open Closed Principle. (OCP) January, 1996. This article discussed the notion that a software module that is designed to be reusable, maintainable and robust must be extensible without requiring change. Such modules can be created in C++ by using abstract classes. The algorithms embedded in those classes make use of pure virtual functions and can therefore be extended by deriving concrete classes that implement those pure virtual functions in different ways. The net result is a set of functions written in abstract classes that can be reused in different detailed contexts and are not affected by changes to those contexts.

2. The Liskov Substitution Principle. (LSP) March, 1996. Sometimes known as “Design by Contract”. This principle describes a system of constraints for the use of public inheritance in C++. The principle says that any function which uses a base class must not be confused when a derived class is substituted for the base class. This article showed how difficult this principle is to conform to, and described some of the subtle traps that the software designer can get into that affect reusability and maintainability.

3. The Dependency Inversion Principle. (DIP) May, 1996. This principle describes the overall structure of a well designed object-oriented application. The principle states that the modules that implement high level policy should not depend upon the modules that implement low level details. Rather both high level policy and low level details should depend upon abstractions. When this principle is adhered to, both the high level policy modules, and the low level detail modules will be reusable and maintainable.

4. The Interface Segregation Principle. (ISP) Aug, 1996. This principle deals with the disadvantages of “fat” interfaces. Classes that have “fat” interfaces are classes whose interfaces are not cohesive. In other words, the interfaces of the class can be broken up into groups of member functions. Each group serves a different set of clients. Thus some clients use one group of member functions, and other clients use the other groups.

The ISP acknowledges that there are objects that require non-cohesive interfaces; however it suggests that clients should not know about them as a single class. Instead, clients should know about abstract base classes that have cohesive interfaces; and which are multiply inherited into the concrete class that describes the non-cohesive object.
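To put the first and third of these in concrete terms, here is a minimal C# sketch (all of the types are invented for illustration). The high-level ReportGenerator depends only on an abstraction (DIP), and new output formats extend the system without modifying it (OCP):

```csharp
using System;

// The abstraction both sides depend on (DIP).
public interface IReportFormatter
{
    string Format(string title, string body);
}

// High-level policy: closed against modification, open for
// extension, because new formats arrive as new implementations.
public class ReportGenerator
{
    private readonly IReportFormatter _formatter;

    public ReportGenerator(IReportFormatter formatter)
    {
        _formatter = formatter;
    }

    public string Generate(string title, string body)
    {
        return _formatter.Format(title, body);
    }
}

// Low-level details: they depend on the abstraction, not vice versa.
public class PlainTextFormatter : IReportFormatter
{
    public string Format(string title, string body)
    {
        return title + Environment.NewLine + body;
    }
}

public class HtmlFormatter : IReportFormatter
{
    public string Format(string title, string body)
    {
        return "<h1>" + title + "</h1><p>" + body + "</p>";
    }
}
```

The LSP shows up here too: ReportGenerator must be able to take any IReportFormatter without being confused by which concrete class it actually got.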

Martin later published The Single Responsibility Principle, which says that there should never be more than one reason for a class to change.
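A quick illustrative sketch of that idea (mine, not Martin’s): a class that both calculates pay and formats a pay stub has two reasons to change, so the responsibilities get pulled apart.

```csharp
// Each class below has exactly one reason to change.
public class PayCalculator
{
    // Changes to pay policy land here...
    public decimal CalculatePay(decimal hours, decimal rate)
    {
        return hours * rate;
    }
}

public class PayStubFormatter
{
    // ...while changes to pay stub layout land here.
    public string Format(string employeeName, decimal pay)
    {
        return employeeName + ": " + pay.ToString("C");
    }
}
```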

There were several aspects of these short articles that helped me so much…

  • The articles were short but clear.
  • The articles were available on-line for anyone to download and read.
  • Martin gave these principles names.

All of these aspects made these principles easy to promote—and I did! I preached all of these principles, whether in C++ or later C#. So, I am particularly pleased to see the recent resurgence in these principles as the ALT.NET crowd has arranged them into a nice acronym (SOLID) and made back-to-basics cool again. We see the SOLID principles all over blogs and articles in the agile community. It reaffirms my belief that the fundamentals may go in and out of style, but they are always what matters most in good software development.