Archive for July, 2008

Windows Programming 20 Years Later

Saturday, July 26th, 2008

I’ve spent some time recently looking at the Windows Presentation Foundation (WPF). WPF is part of Vista, part of .NET 3.0 and part of Silverlight.

At some level, I’m disappointed with WPF. After hearing so much about it (but ignoring it) during the last few years, I expected it to be a radical new way of writing graphical user interfaces. Instead, it seems like a slightly different way of developing Winforms applications.

With .NET 2.0, you use Visual Studio to design your Windows “forms”. Visual Studio automatically generates code for you that creates all of the visual elements (windows, buttons, list boxes, etc.) at run time. When your form’s constructor is called, it calls “InitializeComponent()” and the generated code does the rest. The Visual Studio forms editor also lets you easily attach code to different events raised by the visual components.
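
The generated code looks roughly like this. The control and handler names below are the designer’s conventional defaults; this is an illustrative sketch rather than an exact dump of designer output:

using System;
using System.Drawing;
using System.Windows.Forms;

public class MyForm : Form
{
    private Button button1;

    public MyForm()
    {
        InitializeComponent();
    }

    // Sketch of what the forms designer generates for you.
    private void InitializeComponent()
    {
        this.button1 = new Button();
        this.button1.Location = new Point(12, 12);
        this.button1.Text = "OK";
        this.button1.Click += new EventHandler(this.button1_Click);
        this.Controls.Add(this.button1);
    }

    private void button1_Click(object sender, EventArgs e)
    {
        // Your event-handling code, attached via the forms editor.
    }
}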

With WPF and .NET 3.0, you use the Expression Blend tool to design your user interface. As with the Visual Studio forms editor, Expression Blend also lets you easily attach code to handle UI events. The output of Expression Blend, however, is not code, it’s XML (a dialect called XAML). When your window’s constructor is called, again, InitializeComponent is called but, this time, the function works by loading the XML and interpreting it (creating windows, buttons, list boxes, etc.) rather than by executing generated code.
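
You can get a feel for this runtime interpretation with WPF’s XamlReader class. The compiled InitializeComponent actually goes through Application.LoadComponent and a compiled form of the XML (BAML), but the idea is the same. A minimal sketch:

using System.IO;
using System.Windows.Controls;
using System.Windows.Markup;
using System.Xml;

class XamlDemo
{
    public static Button BuildButton()
    {
        // XML in, live UI objects out -- no generated construction code.
        string xaml =
            "<Button xmlns='http://schemas.microsoft.com/winfx/2006/xaml/presentation'>" +
            "Click Me</Button>";
        return (Button)XamlReader.Load(XmlReader.Create(new StringReader(xaml)));
    }
}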

At this level, the only real difference between WPF and .NET 2.0 Winforms is the use of XML rather than generated code. Mind you, this can be a significant advantage. By managing the UI specification as data separate from code, WPF facilitates the use of skilled graphical designers to develop user interfaces. Designers can use Expression Blend to fine-tune the UI without worrying about unintended changes to program code.

After looking at WPF further, however, I realized that it is more significant than it appears at first blush. The WPF designers have completely reimplemented the basic Windows UI elements (and more) in a much more cohesive, sensible fashion. The net result (no pun intended) is very cool.

For 20 years now, Windows programmers have been suffering the limitations of the original Windows 1.0 design from 1985. Windows 1.0 defined a basic set of UI controls: window, menu, list box, static control, text control, push button, radio button and group box (I think that’s all of them!). These controls were implemented by Windows itself and could be composited by programmers in their own applications. Additionally, programmers could subclass these controls to alter their behavior or to implement their own user-defined controls.

Subsequent versions of Windows introduced new controls. Somewhere along the line, combo boxes, context menus, rich text controls, progress bars and other controls were added. The concept of a small set of built-in controls with narrowly prescribed behavior persisted, however. You could do some things like image-based push buttons or scrolling lists of images by taking advantage of owner-draw features, but the amount of customization available with the built-in controls was minimal.

.NET 1.1 and 2.0 added new controls, too, including DataGrid and DataGridView, which had no built-in counterparts. These controls, however, resembled the built-in ones in how they could be used and customized.

With WPF, the original Windows UI elements are totally subsumed by the new WPF UI model. It is possible to use WPF to write what looks like a traditional Windows application, but it is also possible to write applications with much more sophisticated user interfaces.

WPF has a very clean notion of containment and transformation. Let me explain what I mean by these. Consider a traditional Windows 1.0 list control. It contains a list of strings and can present these strings in a vertical list, providing scrollbars if they are needed to view all the list contents. In WPF, the ListBox control is a container that will provide a scrolling list of whatever it contains. What can it contain? Anything! Well, any WPF UI element. If you put static text boxes in a WPF list, it’s a lot like a Windows 1.0 list. But if you want, you can put editable text boxes or tree views in a WPF ListBox and it will do the right thing with them. There are several container controls in WPF and all of them support this functionality.
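
Here is a minimal C# sketch of that idea, building the UI in code rather than XAML just to keep the example self-contained:

using System.Windows.Controls;

class ContainmentDemo
{
    public static ListBox BuildList()
    {
        var list = new ListBox();

        // Plain text items, much like a Windows 1.0 list box...
        list.Items.Add(new TextBlock { Text = "just some text" });

        // ...but also an editable text box, and even a whole tree view.
        list.Items.Add(new TextBox { Text = "edit me" });

        var tree = new TreeView();
        tree.Items.Add(new TreeViewItem { Header = "a tree inside a list" });
        list.Items.Add(tree);

        return list;  // the ListBox scrolls all of these for free
    }
}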

Similarly, WPF provides a consistent mechanism for visual transformation. In graphics (and, don’t forget, WPF has full support for 2D and 3D graphics), “transformation” refers to mathematical manipulations that modify the appearance of what is being displayed. There are translation, scaling and rotation transformations that can move, size and rotate graphical data. WPF supports these transformations, too. If you surround a text box with a 90-degree rotation transformation, the text box will appear (and function) vertically instead of horizontally. Transformations can apply to entire graphical elements (for example, our previous ListBox) or to contained elements (we could have one tree view rotated within our list of tree views).
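
In code, that rotated text box is nearly a one-liner. A minimal sketch:

using System.Windows.Controls;
using System.Windows.Media;

class TransformDemo
{
    public static TextBox BuildVerticalTextBox()
    {
        var box = new TextBox { Text = "I run vertically" };

        // LayoutTransform rotates the control and re-runs layout around it;
        // the text box still edits, scrolls and takes focus as usual.
        box.LayoutTransform = new RotateTransform(90);

        return box;
    }
}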

Beyond the generalized concepts of containment and transformation, WPF also adds support for animation, including keyframe animation. With keyframe animation, Expression Blend lets you specify the visual characteristics of a UI at two (or more) points in time and the WPF run-time code will take care of gradually transforming the UI for the intervening points. You can, for example, place an image at one (x,y) coordinate to start and at another (x,y) coordinate 10 seconds later. The WPF run-time code will then gradually move the image from its initial to its final location over the course of 10 seconds. Keyframe animation can be applied to scaling and rotation transformations as well as to other visual effects (transparency, for example).
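
The 10-second example might look roughly like this in code. This sketch animates the Canvas.Left property of an image that is assumed to sit on a Canvas; Expression Blend would express the same thing as a Storyboard in XAML:

using System;
using System.Windows.Controls;
using System.Windows.Media.Animation;

class AnimationDemo
{
    // Slide 'image' from x=0 to x=300 over 10 seconds using two key frames.
    public static void SlideImage(Image image)
    {
        var anim = new DoubleAnimationUsingKeyFrames();
        anim.KeyFrames.Add(new LinearDoubleKeyFrame(0,
            KeyTime.FromTimeSpan(TimeSpan.Zero)));
        anim.KeyFrames.Add(new LinearDoubleKeyFrame(300,
            KeyTime.FromTimeSpan(TimeSpan.FromSeconds(10))));

        // WPF interpolates all the intervening positions for us.
        image.BeginAnimation(Canvas.LeftProperty, anim);
    }
}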

So far, I’ve mostly read about WPF. I want to write some non-trivial software to put it through its paces. From the design perspective, I really like it. I also like the relationship between stand-alone WPF applications and Silverlight (browser-based) applications. I’ll post again on the topic when I have more to say.

Open Source vs. Proprietary Software vs. Good Software

Thursday, July 24th, 2008

I had the opportunity to spend a few hours at OSCON yesterday in Portland, Oregon. OSCON is the Open Source Convention held by O’Reilly. I was pleasantly surprised by the size of the conference, the number of exhibitors and the presence of several large companies. Open source software has definitely become mainstream and accepted by industry.

At Likewise, we consider ourselves an open source company. Likewise Open has been very successful and has opened many doors for us (no pun intended). It’s helped us tremendously, even when we end up selling our Enterprise version instead. Nevertheless, I have some observations about open source, not all of them positive.

There are several definite advantages to using open source. It enables you to build a solution without having to reengineer every component. We make use of both MIT Kerberos and OpenLDAP in our products. If we had needed to rewrite these components, it would have taken us much longer to get to market. We’ve also made use of Samba components. Samba has been around a long time, has had “a lot of eyes” on it and has figured out the subtleties of talking to Microsoft systems. Again, using open source saved us a lot of time.

There are some disadvantages to open source, too. It can be difficult to get the “owners” of an open source project to do what you think is the right thing. Although open source is “open”, certain projects are led by designated groups of people. Different projects have different guidelines around software submission and how they go about accepting external contributions. Very often, your contributions have to be vetted before they’re accepted in the main code. If your code is not accepted, your only option is to distribute your own modified version of the open source project (your branch). Branching is not a good thing.

Sometimes, code changes are rejected due to style considerations or differences in design approaches. These are objections that can be dealt with relatively easily. More difficult are rejections due to “dogma”. Some open source projects, for example, are irrationally opposed to anything that they perceive as helping Microsoft. Even though our intent is to make non-Windows systems work better, they still oppose our goal of making these systems work better with Microsoft Active Directory. This, of course, doesn’t apply to the Samba project (who had the goal before we did) but it applies to other open source projects/companies/teams with which we’ve had to deal.

There is little we can do in these cases other than to develop our own alternatives.

Another issue we’ve encountered with some open source software is a certain lack of industrial rigor. I’ve worked a lot with both commercial software developers (I spent 11 years at Microsoft) and with academic programmers (4 years at Microsoft Research). Open source software sometimes resembles the latter more than the former.

What do I mean by “academic” programmers? Say that you’re in school, you take a programming course and you’re asked to write a program that converts degrees from Celsius to Fahrenheit. You write something like: 

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int degrees = atoi(argv[1]);
    printf("%d Celsius is %g Farhenheit\n", degrees, (degrees * 9.0)/5.0 + 32);
    return 0;
}

Your professor would probably give you a passing grade for this. It works. In industry, however, your boss would likely complain about several things (a cleaned-up version is sketched after the list):

  • Crappy user interface. How is the customer supposed to know that the input should appear on the command line?
  • Poor error handling. What happens if you don’t supply a command-line argument? What if you specify a non-numeric value?
  • Bad spelling (“Farhenheit”)
  • Lack of localization support
  • Lack of comments in code
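
To make the contrast concrete, here is one way an “industrial” version might answer those complaints. This is just a sketch, written in C# to match the .NET flavor of this blog, with illustrative names and messages:

using System;

class CelsiusToFahrenheit
{
    static int Main(string[] args)
    {
        // Tell the user how to run the program instead of assuming they know.
        if (args.Length != 1)
        {
            Console.Error.WriteLine("usage: c2f <degrees-celsius>");
            return 1;
        }

        // Reject non-numeric input instead of quietly treating it as zero.
        double celsius;
        if (!double.TryParse(args[0], out celsius))
        {
            Console.Error.WriteLine("'{0}' is not a numeric temperature", args[0]);
            return 1;
        }

        // Correct spelling this time; in a localized product these strings
        // would live in resource files rather than being hard-coded English.
        Console.WriteLine("{0} Celsius is {1} Fahrenheit",
            celsius, celsius * 9.0 / 5.0 + 32);
        return 0;
    }
}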

Open source software is not always industrial-quality code. We have found many cases of memory corruption and leakage even in mature open source projects. We have also found and fixed many, many bugs.

Note that the title of this post does not suggest that proprietary software is immune from similar flaws. Many proprietary software companies (including my ex-employers) are guilty of releasing software that is not ready for prime time. “Good Software” can be either open source or proprietary. Similarly, “Bad Software” does not care about its licensing model.

What I will suggest, however, is that companies that have to support their products, keep customers happy and, ultimately, make money are much more motivated to develop Good Software than organizations which develop software but don’t actually have to deal with the consequences of poor code. There is no stronger motivator to write Good Software than an irate customer.

The Narrow Road Between Success and Failure

Monday, July 21st, 2008

Before I started Likewise Software, I was in the venture capital business. Although my role was more on the “back end”, performing due diligence once we’d decided to invest in a company, I heard many, many pitches from aspiring entrepreneurs. For the great majority of these pitches (probably 95% of them or so), I recommended against investing.

It wasn’t that 95% of the proposed ideas were bad but, in most cases, either the idea or the team was weak. Venture capital folk like to say that they don’t “bet on the horse” they “bet on the jockey.” Occasionally, I’d hear weak ideas from good teams. In these cases, I’d try to guide them towards better alternatives. Most often, though, I’d hear reasonably good ideas from weak teams. It’s much harder trying to fix a weak team than a weak idea. Once in a while, a VC will try to do this. For example, they’ll propose funding on condition that a strong CEO be hired. This doesn’t always work. One of the founders might already fashion him/herself as CEO. Even worse, a founder might agree to take on a different role but then obstruct the new CEO once hired.

VCs like to bet on people because it’s been proven to be the most successful strategy. A good team can sell a weak product much better than a weak team can sell a good product. In practice, too, the good team will fix the weak product while the weak team will find some way to ruin the good one. Think Betamax.

I’ve been a part of a couple of very strong teams during my career in the software business. From 1988 to 1994, I worked in the Developer Tools group at Microsoft. More recently, since 2004, I’ve been at Likewise (née Centeris). In both cases we dealt with uncertainty and strong competitors and, through persistence and execution, succeeded. Weaker teams would not have fared as well.

In the early ’90s, the Microsoft “compiler group” was in trouble. Borland was kicking our collective behinds. We’d released Microsoft C version 6 to very negative reactions (we were one of the first products to eliminate print documentation in favor of electronic documents). We were mired in obligations to our operating system group and to IBM (for OS/2 support). Meanwhile, Borland had Turbo C and Turbo Pascal, both of which were doing very well. Gates would frequently criticize our lack of killer product ideas. Nathan Myhrvold (effectively, the Microsoft CTO at the time) wanted us to emphasize high-quality printed code listings. We’d gone from 60% market share to 40% (Watcom C was in there, too). Meanwhile, C7, our first C++ compiler, was mired in schedule delays and lack of focus. It was very easy to despair.

And yet, we didn’t. We gave up the IBM business (Borland took it and, I’m sure, regretted it). We basically told Gates and Myhrvold to stuff it. We released QuickC for Windows and Visual C++ 1.0 for NT — products that were received reasonably well. When we released Visual C++ 2.0 (the precursor of Visual Studio), we completely turned the tide. We got back more than our previous market share and never lost it. Within a year, Borland was firing/losing people and we were hiring/finding them.

Looking back, the interesting thing to me is that, at the time, we weren’t aware of any killer idea that we were counting on to succeed. In retrospect, it was MFC, the Microsoft Foundation Classes, and its coupling to our forms editor that turned out to be the killer idea (an idea on which .NET has built a billion-dollar business). At the time, however, we were just doing our jobs and executing very well.

Back in the early ’90s, the Tools group was a very mature group for Microsoft. A lot of people wanted to work on operating systems or on cool applications. The only people who worked in the Tools group were nerdy folk who were into code optimization or writing debuggers and class libraries. Once in the Tools group, these people never left. We had old-timers who’d been there for a decade. What this meant is that everyone knew their jobs extremely well.

Visual C++ 2.0 shipped on time, with its full feature set and with extremely good quality. To this day, I still consider it the most successful project I’ve ever worked on. In the course of 18 months, the Tools group had gone from being failures to being brilliant.

The road to success is rarely clear. To abuse the metaphor, if it’s an eight-lane superhighway, it’s going to be jam-packed with competitors. More likely, the road to success is a poorly marked path through dangerous woods. Likely, you will have to bushwhack. There’s no guarantee you won’t end up at a dead end or at a cliff. Your best bet is to count on a good team, to be persistent and to not despair. Fate tends to favor the strong.