Monday, August 24, 2009

Dependencies

One of the differences between Windows and Linux is how they handle dependencies between software packages.

Linux distros have a long history of managing dependencies between packages. Distros include package managers such as 'aptitude' or 'YaST' which, when you select products for installation, identify the necessary components and install them too.
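To make the idea concrete, here is a minimal sketch (in Python, with made-up package names) of the transitive resolution a package manager performs: starting from the one package you asked for, it walks the dependency graph and schedules the dependencies for installation first. A real package manager also handles versions, conflicts, and already-installed packages; this shows only the core walk.

```python
# Hypothetical dependency table -- the package names are
# invented for illustration only.
DEPENDS = {
    "myapp":     ["webserver", "database"],
    "webserver": ["ssl"],
    "database":  [],
    "ssl":       [],
}

def resolve(package, install_order=None):
    """Return the packages to install, dependencies first."""
    if install_order is None:
        install_order = []
    for dep in DEPENDS.get(package, []):
        resolve(dep, install_order)       # recurse into sub-dependencies
    if package not in install_order:      # avoid scheduling twice
        install_order.append(package)
    return install_order

print(resolve("myapp"))   # ['ssl', 'webserver', 'database', 'myapp']
```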

Windows, on the other hand, has practically no mechanism for handling dependencies. In Windows, every install package is self-contained, or at least behaves as if it were.

This difference is possibly due to the history of product development. Windows (up until the .NET age) has had a fairly flat stack for dependencies. To install a product, you had to have Windows, MFC, and... that was about it. Everything was in either the original Windows box or the new product box.

Linux has a larger stack. In Linux, there is the kernel, the C run-time libraries, the X Window System, Qt, the desktop manager (usually KDE or GNOME), and possibly other packages such as Perl, Python, Apache, and MySQL. It's not uncommon for a single package to require a half-dozen other packages.

The difference in dependency models may be due to the cost model. In Linux, and open source in particular, there is no licensing cost to use a package in your product. I can build a system on top of Linux, Apache, MySQL, and Perl (the "LAMP stack") and distribute all components. (Or assume that the user can get the components.) Building a similar system on Microsoft technologies would mean that the customer must have (or acquire) Windows, IIS, SQL Server, and, umm... there is no direct Microsoft replacement for Perl or another scripting language. (But that's not the point.) The customer would have to have all of those components in place, or buy them. It's a lot easier to leverage sub-packages when they are freely available.

Differences in dependency management affect more than just package installation.

Open source developers have a better handle on dependencies than developers of proprietary software. They have to -- the cost of not using sub-packages is too high, and they have to deal with new versions of those packages. In the proprietary world, the typical approach I have seen is to select a base platform and then freeze the specification of it.

Some groups carry the "freeze the platform" method too far. They freeze everything and allow no changes (except possibly for security updates). They stick to the originally selected configuration and prevent updates to their compilers, IDEs, database managers, ... anything they use.

The problem with this "freeze the platform" method is that it doesn't work forever. At some point, you have to upgrade. A lot of shops are buying new PCs and downgrading Windows Vista to Windows XP. (Not just development shops, but let's focus on them.) That's a strategy that buys a little time, but eventually Microsoft will pull the plug on Windows XP. When the time comes, the effort to update is large -- usually a big, "get everyone on the new version" project that delays coding. (If you're in such a shop, ask folks about their strategy for migrating to Windows Vista or Windows 7. If the answer is "we'll stay on Windows XP until we have to change", you may want to think about your options.)

Open source, with its distributed development model and loose specification for platforms, allows developers to move from one version of a sub-package to another. They follow the "little earthquakes" model, absorbing changes in smaller doses. (I'm thinking that the use of automated tests can ease the adoption of new versions of sub-packages, but have no experience there.)
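For what it's worth, here is one way such a test might look: a small "contract" test, sketched in Python with the standard json module standing in for any third-party sub-package, that pins down the specific behavior your code relies on. Run the suite after upgrading the sub-package; a failure flags an incompatible change before it reaches customers.

```python
# A hypothetical sketch: a test that captures the behavior our
# product depends on in a sub-package. Here the json module is a
# stand-in for any third-party package we might upgrade.
import json
import unittest

class SubPackageContractTest(unittest.TestCase):
    def test_round_trip(self):
        # The contract we rely on: encoding then decoding
        # returns the original data unchanged.
        data = {"name": "widget", "count": 3}
        self.assertEqual(json.loads(json.dumps(data)), data)

if __name__ == "__main__":
    unittest.main()
```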

A process that develops software on a fixed platform will yield fragile software -- any change could break it. A process that handles dependencies as they change will yield a more robust product.

Which would you want?
