Monday, November 29, 2010

The return of frugality

We may soon see a return to frugality with computing resources.

Ages ago, computers were expensive, affordable only by the very wealthy (governments and large companies). The owners would dole out computing power in small amounts and charge for each use. They used the notion of "CPU time", the amount of time the CPU actually spent processing your task.

The computing model of the day was timesharing, the allocation of a fraction of a computer to each user, and the accounting of usage by each user. The key aspects measured were CPU time, connect time, and disk usage.

The PC broke the timesharing model. Instead of one computer shared by a number of people, the PC let each person have their own computer. The computers were small and low-powered (laughably so by today's standards) but enough for individual needs. With the PC, the timesharing mindset was discarded, and along with it went the attention to efficiency.

A PC is a very different creature from a timesharing system. The purchase is much simpler, the installation is much simpler, and the administration is (well, was) non-existent. Instead of purchasing CPU power by the minute, you purchased the PC in one lump sum.

This change was significant. The PC model had no billing for CPU time; the monthly bill disappeared. That made PC CPU time "free". And since CPU time was free, the need for tight, efficient code became non-existent. (Another factor in this calculus was the availability of faster processors. Instead of writing better code, you could buy a new, faster PC for less than the cost of the programming time.)

The cloud computing model is different from the PC model, and returns to the model of timesharing. Cloud computing is timesharing, although with virtual PCs on large servers.

With the shift to cloud computing, I think we will see a return to some of the timesharing concepts. Specifically, I think we will see the concept of billable CPU time. With the return of the monthly bill, I expect to see a renaissance of efficiency. Managers will want to reduce the monthly bill, and they will ask for efficient programs. Development teams will have to deliver.

With pressure to deliver efficient programs, development teams will look for solutions and the market will deliver them. I expect that the tool-makers will offer solutions that provide better optimization and cloud-friendly code. Class libraries will advertise efficiency on various platforms. Offshore development shops will cite certification in cloud development methods and efficiency standards. Eventually, the big consultant houses will get into the act, with efficiency-certified processes and teams.

I suspect that few folks will refer to the works of the earlier computing ages. Our predecessors had to deal with computing constraints much more severe than the cloud environments of the early twenty-first century, yet we will (probably) ignore their work and re-invent their techniques.

Sunday, November 28, 2010

Measuring the load

If I worked for a large consulting house, and offered you a new process for your software development projects, would you buy it?

Not without asking a few other questions, of course. Besides the price (and I'm amazed that companies will pay good money for a process-in-a-box) you'll ask about the phases used and the management controls and reports. You'll want to be assured that the process will let you manage (read that as "control") your projects. You'll want to know that other companies are using the process. You'll want to know that you're not taking a risk.

But here's what you probably won't ask: How much of a burden is it to your development team?

It's an important question. Every process requires that people participate in the data entry and reporting for the project: project definition, requirements, task assignment, task status updates, defects, defect corrections, defect verifications, build and test schedules, ... the list goes on. A process is not a thing as much as a way of coordinating the team's efforts.

So how much time does the process require? If I offered you a process that required four hours a day from each developer, seven hours a day from requirements analysts, and nine hours a day from architects, would you buy my process? Certainly not! Involvement in the process takes people away from their real work, and putting such a heavy "load" on your teams diverts too much effort away from the business goal.

Managers commit two sins: failure to ask the question and failure to measure the load. They don't ask the salesmen about the workload on their teams. Instead, they worry about licensing costs and compatibility with existing processes. (Both important, and neither should be ignored, but their view is too limited. One must worry about the effect on the team.) Managers should research the true cost of the process, like any other purchase.

After implementing the process, managers should measure the load the process imposes on their teams. That is, they should not take the salesman's word for it, assuming the salesman was able to supply an estimate. An effective manager will measure the "load" several times, since the load may change as people become familiar with the process. The load may also change during the development cycle, with variations at different phases.

Not measuring is bad, but assuming that the process imposes no load is worse. Any process will place some load on your staff. Using a process and thinking that it is "free" is self-delusion and will cause distortions in your project schedule. Even with a light load of one hour per day, the assumption of zero load introduces an error of 12.5 percent: you think you have eight hours available from each person, when in fact you have only seven. That missing hour will eventually catch up with you.

It's said that the first step of management is measurement. If true, then the zeroth step of management is acknowledgment. You must acknowledge that a process has a cost, a non-zero cost, and that it can affect your team. Only after that can you start to measure the cost, and only then can you start to manage it.

Wednesday, November 24, 2010

The line doesn't move; the people in the line do

An engineer friend once commented, while we were waiting in line somewhere (it may have been for fast food), that the line doesn't move; it's the people in the line who move.

The difference between the line and the people in line is a subtle one, yet obvious once it is brought to your attention. At 10:00, there is a line from the cash register out onto the floor. An hour later, the line will still be there, from the cash register out onto the floor. The line (a collection of people) has not moved -- it's still in the same store, starting at the same location. The fact that the people in the line have changed does not mean the line has moved. (If the line were to move, it would start in a different place or extend to another part of the shop.)

There is a similar subtle difference with bits and bit-built entities like songs and books and on-line magazines. We think that buying a song means buying bits, when in fact we're buying not a bunch of bits but a bunch of bits in a sequence. It's not the bits that make the song, it's the sequence of bits.

A sequence is not a tangible thing.

For a long time, sequences of bits were embedded in atoms. Books were sequences of characters (bits) embedded in ink on paper. Copying the sequence of bits meant collecting your own atoms to hold the copy of bits. In the Middle Ages, manuscripts were copied by hand and creating the new set of atoms was an expensive operation. The printing press and movable type made the copy operation less expensive, but there was still a significant cost for the atom "substrate".

In the twenty-first century, the cost of the atom substrate has dropped to a figure quite close to zero. The cost of the copy operation has also dropped to almost zero. The physical barriers to copies are gone. All that is left (to save the recording industry) is tradition, social norms, and law (and law enforcement). And while tradition and social norms may prevent folks born prior to 1980 from making copies, they don't seem to be holding back the later generations.

The RIAA, record labels, and other copyright holders want it both ways. They want bit sequences to be cheap to copy (for them) but hard to copy (for everyone else). They want bit sequences that are easily copied and distributed to consumers. But once the bits arrive in our in-box, or on our iPod, they want the bits to magically transform into non-movable, non-copyable bits. They want bit sequences that are easy to move until they want them fixed in place. That's not how bits work.

In the end, physics wins. Wile E. Coyote can run off the cliff into air and hang there, defying gravity. But when he looks down, gravity kicks in.

Tuesday, November 23, 2010

Getting from here to there

Cloud computing is much like the early days of microcomputers. Not the early days of PCs (1981 to 1984) but the early days of microcomputers (1976 to 1980).

In the pre-PC era, there was no one vendor of hardware and software, and there was no one standard format for exchangeable media (also known as "floppy disks"). The de facto standard for an operating system was CP/M, but Apple had DOS and Radio Shack had TRS-DOS, and the UCSD p-System was lurking in corners.

Even with CP/M as a standard across the multitude of hardware platforms (Imsai, Sol, North Star, Heathkit, etc.), the floppy disk formats varied. These floppy disks were true floppies, in either 8 inch or 5.25 inch forms, with differences in track density, track count, and even the number of sides. There were single-sided disks and double-sided disks. There were 48-tpi (tracks per inch) disks and 96-tpi disks. Tracks were recorded in single density, double density, quad density, and extended density.

Moving data from one computer to another was an art, not a science, and most definitely not easy. It was all too common to have data on one system and desire it on another computer. Truly, these early computers were islands of automation.

Yet the desire to move data won out. We used null modem cables, bulletin board systems, and specially-written software to read "foreign" disks. (The internet existed at the time, but not for the likes of us hobbyists.)

Over time, we replaced the null modem cables and bulletin board systems with real networks. Today, we think nothing of moving data. Indeed, the cell phone business is the business of moving data!

The situation of cloud computing is similar. Clouds can hold data and applications, but we're not in a position to move data from one cloud to another. Well, not easily. One can dump the MySQL database to a text file, FTP it to a new site, and then import it into a new MySQL database; this is the modern-day equivalent of the null modem cables of yore.

Data exchange (for PCs) grew over a period of twenty years, from the early microcomputers, to the first IBM PC, to the releases of Netware, Windows for Workgroups, IBM OS/2, Windows NT, and eventually Windows Server. The big advances came when large players arrived on the scene: first IBM with an open hardware platform that allowed for network cards, and later Novell and Microsoft with closed software platforms that established standards (or used existing ones).

I expect that data exchange for cloud apps will follow a similar track. Unfortunately, I also expect that it will take a similar period of time.

Sunday, November 21, 2010

Just how smart is your process?

Does your organization have a process? Specifically, does it have a process for the development of software?

The American mindset is one of process over skill. Define a good process and you don't need talented (that is, expensive) people. Instead of creative people, you can staff your teams with non-creative (that is, low wage) employees, and still get the same results. Or so the thinking goes.

The trend goes back to the scientific management movement of the early twentieth century.

For some tasks, the de-skilling of the workforce may make sense. Jobs that consist of repeated, well-defined steps, jobs with no unexpected factors, jobs that require no thought or creativity, can be handled by a process.

The creation of software is generally unrepeatable, has poorly-defined steps, has unexpected factors and events, and requires a great deal of thought. Yet many organizations (especially large organizations) attempt to define processes to make software development repeatable and predictable.

These organizations confuse software development with the project of software development. While the act of software development is unpredictable, a project for software development can be fit into a process. The project management tasks (status reports, personnel assignment, skills assessment, cost calculations, etc.) can be made routine. You most likely want them routine and standardized, to allow for meaningful comparison of one project to another.

Yet the core aspect of software development remains creative, and you cannot create a process for creative acts. (Well, you can create a process, and inflict it upon your people, but you will have little success with it.) Programming is an art more than a science, and by definition an art is something outside the realm of repeated processes.

Some organizations define a process that uses very specific requirements or design documents, removing all ambiguity and leaving the programming to low-skilled individuals. While this method appears to solve the "programming is an art" problem, it merely shifts the creative aspect to another group of individuals. This group (usually the "architects", "chief engineers", "tech leads", or "analysts") is doing the actual programming. (Perhaps not programming in FORTRAN or C#, but programming in English.) Shifting the creative work away from the coders introduces several problems, including the risk of poor run-time performance and the risk of specifying impossible solutions. Coders, the folks who wrangle the compiler, have the advantage of knowing that their solutions will either work or not work -- the computer tells them so. Architects and analysts who "program in English" have no such accurate and absolute feedback.

Successful management of software development consists not of reducing every task to a well-defined, repeatable set of steps, but of dividing tasks into the "repeatable" and "creative" groups, and managing both groups. For the repeatable tasks, use tools and techniques to automate the tasks and make them as friction-free as possible. For the creative tasks, provide well-defined goals and allow your teams to work on imaginative solutions.

Thursday, November 18, 2010

The new new thing

The history of personal computers (or hobbyist computers, or microcomputers) has a few defining events: the arrival of new technologies that granted non-professionals (otherwise known as amateurs) the power to develop new, cutting-edge applications. Each such event was followed by a plethora of poorly written, hard-to-maintain, mediocre applications.

Previous enabling tools were Microsoft BASIC, dBase II, and Microsoft Visual Basic. Each of these packages "made programming easy". Consequently, lots of people created applications and unleashed them upon the world.

Microsoft BASIC is on the list due to its ease of use and its pervasiveness. In 1979, every computer for sale included a version of Microsoft BASIC (with the possible exception of the TRS-80 model II and the Heathkit H-8, H-11, and H-89 computers). Microsoft BASIC made lots of applications possible, and made it possible for just about anyone to create an application. And they did, and many of those applications were poorly written and impossible to maintain.

dBase II from Ashton-Tate allowed the average Joe to create database applications, something possible in Microsoft BASIC only with lots of study and practice. dBase II used high-level commands to manipulate data, and lots of people wrote dBase II apps. The apps were poorly written and hard to maintain.

Microsoft's Visual Basic surpassed the earlier "MBASIC" and dBase II in popularity. It let anyone write apps for Windows. It was much easier than Microsoft's other Windows development tool, Visual C++. Microsoft scored a double win here, as apps in both Visual Basic and Visual C++ were poorly written and hard to maintain.

Languages and development environments since then have been designed for professional programmers and used by professional programmers. The "average Joe" does not pick up Perl and create apps.

The tide has shifted again, and now there is a new development tool, a new "new thing", that lets average people (that is, non-programmers) develop applications. It's called "salesforce.com".

salesforce.com is a cloud-based application platform for building data-intensive applications. The name is somewhat deceptive, as the applications are not limited to sales. They can be anything, although the model leads one to the traditional view of a database with master/child relationships and transaction updates to a master file. I would not use it to create a word processor, a compiler, or a social network.

The important aspects are ease-of-use and availability. salesforce.com has both, with a simple, GUI-based development environment (and a web-based one at that!) and free access for individuals to experiment. The company even offers the "App Exchange", a place to sell (or give away) apps for the salesforce.com platform.

Be prepared for a lot of salesforce.com applications, many written by amateurs and poorly designed.

Wednesday, November 17, 2010

Just like a toaster

Computers have gotten easy to use, to the point that the little hand-held computers that we call phones (or tablets) are not considered computers but simply smart appliances.

In the pre-PC days, lots of computers were sold as kits. The Altair was the first popular, practical kit computer for individuals. And while other pre-PC microcomputers such as the TRS-80 and Commodore PET were sold fully assembled, they needed some cable-plugging and lots of learning.

IBM PCs with DOS required cable-plugging too, although IBM made the cabling easier by using unique, asymmetric plugs for each type of cable. It was impossible to plug the wrong things together, and impossible to plug the right cable into the right jack but in the wrong orientation. Yet IBM PC DOS required lots of learning.

Microsoft Windows made things easier -- eventually. The early versions of Windows required a lot of set-up and configuration. Early Windows was a program that ran on top of DOS, so to run Windows you had to configure DOS and then install and configure Windows.

The Apple Macintosh was the one computer that made things easy. And today's PC with Windows pre-installed and configured with automatic updates is very easy to use. But let's ignore those computers for now. I want to focus on the "it's hard to set up and use a computer" concept.

When computers were difficult to use, only the people who wanted to use computers would use computers. Like-minded geeks would join together in user groups and share their hard-earned knowledge. User group members would respect each other for their accomplishments: installing an operating system, attaching peripheral devices, or building a computer.

In today's world, computers are easy to use and lots of people support them. One can walk into any number of consumer stores (including Wal-Mart) and buy a PC, take it home, and do interesting things with it.

Not only can you buy PCs, but the businesses that one deals with know how to support PCs. When calling a company for technical support, the company (whether it is the local internet provider, a bank, or a movie distributor) has a customer support department that understands computers and knows how to get them working.

Computers have changed from the hard-to-use, only-for-geek devices to plain consumer appliances. They are almost the equivalent of toasters.

If they are running Windows.

You can buy PCs with Windows in just about any store. You can buy Macintosh computers in a few places -- but not as many as the places that sell Windows PCs. And you can buy PCs with Linux in a very few places, if you know where to look.

Businesses have customer support departments that know how to fix Windows PCs. And a few can support Apple Macintosh PCs. And a very few will support Linux PCs.

Linux, for all of its popularity, is still a do-it-yourself operating system. As an enterprise, you can purchase Linux support services, but as an individual you are expected (by our society) to use Windows (or maybe a Mac).

Linux geeks, for the most part, buy PCs and install Linux. They don't look for PCs with Linux installed. They will buy a PC without an operating system, or they will buy a PC with Windows and then install Linux on it (possibly saving Windows, possibly not). This behavior skews the market research, since marketers count sales and the Linux PCs are not selling well.

Linux geeks also respect each other for their accomplishments: installing Linux, adding peripheral devices, and re-compiling the kernel. They have to respect each other, because they need each other. Linux users cannot count on outside entities for support like Windows users can.

Some Linux distros have made installation and upgrades very easy. These distros lower the barriers of entry for individuals and expand the potential population of Linux users. It's very easy to install Ubuntu Linux or SuSE Linux.

The difference between an easy-to-use Linux and Windows is now not in the installation of the operating system, nor in the software that is supported. The difference is in the external support. Windows users have lots of options (not always effective options, but lots); Linux users must be rugged individuals with the right social networks. Getting Linux fully accepted into the support structure will take a lot of work -- possibly more work than getting the install to work on different hardware.

Sunday, November 14, 2010

Java's necessary future

Now that Oracle has purchased Sun, we have a large cloud of uncertainty for the future of Java. Will Oracle keep Java, or will it kill it off? Several key Java folks have left Oracle, pursuing other projects, and Oracle has a reputation of dropping technologies that have no direct effect on the bottom line. (Although one has to wonder why Oracle, a database company, chose to purchase Sun, a hardware company that happened to own Java and MySQL. Purchasing Sun to get MySQL seems an expensive solution, one that is not in Oracle's usual pattern.)

Java is an interesting technology. It proved that virtual processors were feasible. (Java was not the first; the UCSD p-System was a notable predecessor. But Java was actually practical, whereas the earlier attempts were not.) But Java has aged, and it needs not just a face-lift but a re-thinking of its role in the Oracle stack.

Here's my list of improvements for "Java 2.0":

- Revamp the virtual processor. The original JRE was custom-built for the Java language. Java 2.0 needs to embrace other languages, including COBOL, FORTRAN, LISP, Python, and Ruby.

- Expand the virtual processor to support functional languages, including the new up-and-coming languages of Haskell and Erlang. This will help LISP, Python, and Ruby, too.

- Make the JRE more friendly to virtualization environments like Oracle VM, VMWare, Parallels, Xen, and even Microsoft's Virtual PC and Virtual Server.

- Contribute to the Eclipse IDE, and make it a legitimate player in the Oracle universe.

Java was the ground-breaker for virtual processor technologies. Like other ground-breakers such as FORTRAN, COBOL, and LISP, I think it will be around for a long time. Oracle can use this asset or discard it; the choice is theirs.

Thursday, November 11, 2010

Improve code with logging

I recently used a self-made logging class to improve my (and others') code. The improvements to code were a pleasant side-effect of the logging; I had wanted more information from the program, information that was not visible in the debugger, and wrote the logging class to capture and present that information. During my use of the logging class, I found the poorly structured parts of the code.

A logging class is a simple thing. My class has four key methods (Enter(), Exit(), Log(), and ToString()) and a few auxiliary methods. Each method writes information to a text file, which is specified by one of the auxiliary methods. Enter() is used to capture the entry into a function; Exit() captures the return from the function; Log() adds an arbitrary message to the log file, including variable values; and ToString() converts our variables and structures to plain text. Combined, these methods let us capture the data we need.

I use the class to capture information about the flow of a program. Some of this information is available in the debugger but some is not. We're using Microsoft's Visual Studio, a very capable IDE, but some run-time information is not available. The problem is due, in part, to our program and the data structures we use. The most common is an array of doubles, allocated by 'new' and stored in a double*. The debugger can see the first value but none of the rest. (Oh, it can if we ask for x[n], where 'n' is a specific number, but there is no way to see the whole array, and repeating the request for an array of 100 values is tiresome.)

Log files provide a different view of the run-time than the debugger. The debugger can show you values at a point in time, but you must run the program and stop at the correct point in time. And once there, you can see only the values at that point in time. The log file shows the desired values and messages in sequence, and it can extract the 100-plus values of an array into a readable form. A typical log file would be:

** Enter: foo()
i == 0
my_vars = [10 20 22 240 255 11 0]
** Enter: bar()
r_abs == 22
** Exit: bar()
** Exit: foo()

The log file contains the text that I specify, and nothing beyond it.
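
A minimal sketch of such a class might look like the following. The names Enter(), Exit(), Log(), and ToString() match the methods described above; SetFile() and the array overload of ToString() are stand-ins for the auxiliary methods (my names, not necessarily the real ones):

// logger.h -- a minimal sketch of the logging class described above
#include <cstddef>
#include <fstream>
#include <sstream>
#include <string>

class Logger
{
public:
    // Auxiliary method: specify the output text file (name is illustrative)
    static void SetFile(const std::string & filename)
    {
        file().open(filename.c_str(), std::ios::app);
    }

    static void Enter(const std::string & function)
    {
        file() << "** Enter: " << function << std::endl;
    }

    static void Exit(const std::string & function)
    {
        file() << "** Exit: " << function << std::endl;
    }

    static void Log(const std::string & label, const std::string & value)
    {
        file() << label << value << std::endl;
    }

    static std::string ToString(double value)
    {
        std::ostringstream buffer;
        buffer << value;
        return buffer.str();
    }

    // Array overload (illustrative): dump a double* buffer as one line
    static std::string ToString(const double * values, std::size_t count)
    {
        std::ostringstream buffer;
        buffer << "[";
        for (std::size_t i = 0; i < count; ++i)
            buffer << (i ? " " : "") << values[i];
        buffer << "]";
        return buffer.str();
    }

private:
    // Single shared output stream, opened by SetFile()
    static std::ofstream & file()
    {
        static std::ofstream stream;
        return stream;
    }
};

Instrumented code calls Logger::SetFile() once at start-up; the Enter(), Exit(), and Log() calls shown below do the rest, including dumping those double* arrays that the debugger cannot show.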

Log files give me a larger view than the debugger. The debugger shows values for a single point in time; the log file shows me the values over the life of the program. I can see trends much more easily with the log files.

But enough of the direct benefits of log files. Beyond showing me the run-time values of my data, they help me build better code.

Log files help me with code design by identifying the code that is poorly structured. I inject the logging methods into my code, instrumenting it. The function

double Foobar::square(double value)
{
    return (value * value);
}

Becomes

double Foobar::square(double value)
{
    Logger::Enter("Foobar::square(double)");
    Logger::Log("value: ", Logger::ToString(value));
    Logger::Exit("Foobar::square(double)");
    return (value * value);
}

A bit verbose, and perhaps a little messy, but it gets the job done. The log file will contain lines for every invocation of Foobar::square().

Note that each instrumented function has a pair of methods: Enter() and Exit(). It's useful to know when each function starts and ends.

For the simple function above, one Enter() and one Exit() is needed. But for more complex functions, multiple Exit() calls are needed. For example:

double Foobar::square_root(double value)
{
    if (value < 0.0)
        return 0.0;
    if (value == 0.0)
        return 0.0;
    return (pow(value, 0.5));
}

The instrumented version of this function must include not one but three calls to Exit(), one for each return statement.

double Foobar::square_root(double value)
{
    Logger::Enter("Foobar::square_root(double)");
    Logger::Log("value: ", Logger::ToString(value));
    if (value < 0.0)
    {
        Logger::Exit("Foobar::square_root(double)");
        return 0.0;
    }
    if (value == 0.0)
    {
        Logger::Exit("Foobar::square_root(double)");
        return 0.0;
    }
    Logger::Exit("Foobar::square_root(double)");
    return (pow(value, 0.5));
}

Notice all of the extra work needed to capture the multiple exits of this function. This extra work is a symptom of poorly designed code.

In the days of structured programming, the notion of simplified subroutines was put forward. It stated that each subroutine ("function" or "method" in today's lingo) should have only one entry point and only one exit point. This rule seems to have been dropped.

Or at least the "only one exit point" portion has been dropped. Modern languages allow only one entry point into a method, but they allow multiple exit points, and this lets us write poor code. A better (uninstrumented) version of the square root method is:

double Foobar::square_root(double value)
{
    double result = 0.0;

    if (is_rootable(value))
    {
        result = pow(value, 0.5);
    }

    return result;
}

bool Foobar::is_rootable(double value)
{
    return (value > 0.0);
}

This code is longer but more readable. Instrumenting it is less work, too.

One can visually examine the code for the "extra return" problem, but instrumenting the code with my logging class made the problems immediately visible.


Sunday, November 7, 2010

Where Apple is falling behind

Apple is popular right now. It has a well-received line of products, from MacBooks to iPhones to iPads. It has easy-to-use software, from OSX to iTunes and GarageBand. It has beaten Microsoft and Google in the markets that it chooses.

Yet in one aspect, Apple is falling behind.

Apple is using real processor technology, not the virtual processors that Microsoft and Google use. By "virtual processors", I mean the virtual layers that separate the application code from the physical processor. Microsoft has .NET with its virtual processor, its IL code, and its CLR. Google uses Java and Python, and those languages also have the virtual processing layer.

Most popular languages today have a virtual processing layer. Java uses its Java Virtual Machine (JVM). Perl, Python, and Ruby use their virtual processors.

But Apple uses Objective-C, which compiles to the physical processor. In this, Apple is alone.

Compiling to the physical processor has the advantage of performance. The virtual processors of the JVM and .NET (and Perl, and Python...) impose a performance cost. A lot of work has been done to reduce that cost, but the cost is still there. Microsoft's use of .NET for its Windows Mobile offerings means higher demands for processor power and a higher draw on the battery. An equivalent Apple product can run with less power.

Compiling to a virtual processor also has advantages. The virtual environments can be opened to debuggers and performance monitors, something much harder to do with a physical processor. Writing a debugger or a performance monitor for a virtual processor environment is therefore easier and less costly.

The languages which use virtual processors all have the ability for class introspection (or reflection, as some put it). I don't know enough about Objective-C to know if this is possible, but I do know that C and C++ don't have reflection. Reflection makes it easier to create unit tests and perform some type checking on code, which reduces the long-term cost of the application.

The other benefit of virtual processors is freedom from the physical processor, or rather from the processor line. Programs written to the virtual processor can run anywhere the virtual processor layer runs. This is how Java can run anywhere: the byte-code is the same; only the virtual processor changes from physical processor to physical processor.
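
To make the idea concrete, here is a toy sketch of a byte-code interpreter. It is not how the JVM or the CLR actually work, just an illustration of the separation: the program (the byte-code) never changes, while the interpreter -- the "virtual processor" -- is the only piece recompiled for each physical processor.

// toy_vm.cpp -- a toy byte-code interpreter, for illustration only
#include <iostream>
#include <vector>

enum Opcode { PUSH, ADD, MUL, PRINT, HALT };

// The "byte-code": identical on every platform
static const int program[] = { PUSH, 6, PUSH, 7, MUL, PRINT, HALT };

int main()
{
    std::vector<int> stack;

    // The interpreter stands in for the virtual processor: it is the part
    // that gets ported; the program above is not.
    for (const int * pc = program; ; )
    {
        switch (*pc++)
        {
        case PUSH:
            stack.push_back(*pc++);
            break;
        case ADD: {
            int b = stack.back(); stack.pop_back();
            stack.back() += b;
            break; }
        case MUL: {
            int b = stack.back(); stack.pop_back();
            stack.back() *= b;
            break; }
        case PRINT:
            std::cout << stack.back() << std::endl;
            break;
        case HALT:
            return 0;
        }
    }
}

A real virtual processor is vastly more sophisticated (just-in-time compilation, garbage collection, security checks), but the division of labor is the same: the program targets the virtual layer, and only the layer is ported.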

The advantages of performance are no longer enough to justify a physical processor. Virtual processors have advantages that help developers and reduce the development costs.

Is it possible that Apple is working on a virtual processor of its own? The technology is well within their abilities.

I suspect that Apple would build their own, rather than use any of the existing virtual processors. The two biggies, .NET and Java, are owned by companies not friendly to Apple. The engines for Perl, Python, and Ruby are nice but perhaps not suitable to the Apple set of applications. An existing engine is not in Apple's best interests. They need their own.

Apple doesn't need the virtual processor engine immediately, but they will need it soon -- perhaps within two years. But there is more to consider.

Apple has pushed the Objective-C, C, and C++ languages for its development platform. For the iPhone, iPod, and iPad, it has all but banished other languages and technologies. But C, C++, and Objective-C are poorly suited for virtual processors. Apple will need a new language.

Given Apple's desire for reliable (that is, crash-free) applications, the functional languages may appeal to them. Look for a move from Objective-C to either Haskell or Erlang, or possibly a new language developed at Apple.

It will be a big shift for Apple, and their developers, but in the long run beneficial.

Friday, November 5, 2010

Kinect, open source, and licenses

So Microsoft has introduced "Kinect", a motion-sensing competitor to the Nintendo Wii.

Beyond winning an award for a stupid product name, Microsoft has also become rather protective of the Kinect. Microsoft wants to keep Kinect for itself, and prevent anyone else from writing code for it. (Anyone without a license, apparently.)

The Adafruit company is offering a bounty for an open source Kinect driver. Such a driver could allow Kinect to work with systems other than the XBOX. Horrors! People actually using a Microsoft product for something they choose!

Microsoft's response includes the following: “With Kinect, Microsoft built in numerous hardware and software safeguards designed to reduce the chances of product tampering. Microsoft will continue to make advances in these types of safeguards and work closely with law enforcement and product safety groups to keep Kinect tamper-resistant.”

The interesting part of that statement is "law enforcement". Microsoft seems to be confusing breach of contract with criminal law.

In contract law, you can file a suit against someone for breach of contract. You win or lose, depending on the contract and the case you present (and the case that the defense presents). But here is the thing: a win does not send the defendant to prison, nor does it bring punitive damages. Punitive damages are not available in contract law -- the contract specifies the penalties for breach.

Unless Microsoft is counting on the DMCA, with its clause about bypassing copy-protection devices. They may win a criminal case on those grounds.

But Microsoft is missing the bigger point. They think that they must control Kinect and allow it to run only on the XBOX, therefore increasing sales for the XBOX and Windows. But it's a selfish strategy, and one that will limit the growth of Kinect in the future.

Microsoft must decide the more important of two goals: sales of Kinect software or control of Kinect software.

Wednesday, November 3, 2010

Apple's Brave New World

Apple is bringing the goodness of the App Store to Macintosh computers and OSX. The popular application manager from the iPhone/iPod/iPad will be part of OSX "Lion".

Apple has set a few rules for applications in the App Store, including "doesn't crash" and "doesn't duplicate existing apps".

I like and dislike this move.

First the dislikes: The App Store is a toll booth for Apple, a chokepoint on Mac applications that ensures Apple gets a cut of every sale. It also gives Apple the ability to suppress any app. (For any reason, despite Apple's propaganda.) It is a lot of power concentrated in one entity.

Now for the likes: It raises the bar for software quality, and probably reduces the price of apps. Where PC applications (and Mac applications) typically cost from $100 to $500, the lightweight apps in the App Store go for significantly less. I expect the same for Mac apps.

To shift to metaphors:

The initial days of personal computing (1977-1981) were a primitive era, requiring people to be self-sufficient, equivalent to living on the open Savannah, or in northern Britain. How they built Stonehenge (Visicalc) we will never really know.

The days of the IBM PC (DOS and Windows) were roughly equivalent to the Egyptian old kingdom and the empire, with some centralized direction and some impressive monuments (WordPerfect, Lotus 1-2-3, Microsoft Office) built with a lot of manual labor.

The brave new era of "app stores" (either Apple or Microsoft) will possibly be like the Roman Empire, with better technology but more central control and bureaucracy. Computers will finally be "safe enough" and "simple enough" for "plain users".

The new era brings benefits, but also signals the end of the old era. The days of complete independence are disappearing. Computers will be appliances that are controlled in part by the vendor. Applications will shrink in size and complexity (probably a good thing) and work reliably within the environment (also a good thing).

It's a brave new world, and developers would be wise to learn to live in it.

Tuesday, November 2, 2010

Pondering my next PC

Necessity is the mother of invention.

This week, my trusty IBM ThinkPad of ten years developed a severe case of lock-up. This is most likely a problem with the CPU card. (In a laptop, just about everything is on the CPU card, so that's where the problems lie.)

The loss of the laptop is disappointing. It has been a good friend through a number of projects. It was reliable. It had a very nice screen. I liked the keyboard.

Its passing leads me to think of a replacement. And this leads to several ideas.

First idea: Do I need to replace it? I'm not sure that I do. I've collected a number of other PCs in the past ten years, including an Apple MacBook which has better wireless connectivity. I may be able to live without a replacement.

Second idea: Replace it with a tablet. The Apple iPad comes to mind, although I am not happy with the screen (I dislike the high-gloss finish) and I would prefer a tablet that can play my Ogg Vorbis-encoded music.

Third idea: Replace it with a smart phone. Probably an Android phone, as I have the same dislikes of the iPhone as the iPad.

In brief, I am not considering a desktop PC, and considering but not committed to a laptop. This is a big change from a few years ago, when a desktop was considered "the usual" and a laptop was considered "nice to have".

Conversations with others (all tech-minded folks) show that most folks are thinking along similar lines. The techies are leaving desktop PCs and laptops. The future is in mobile devices: smart phones and tablets.