Wednesday, November 17, 2010

Just like a toaster

Computers have gotten easy to use, to the point that the little hand-held computers that we call phones (or tablets) are not considered computers but simply smart appliances.

In the pre-PC days, lots of computers were sold as kits. The Altair was the first popular, practical kit computer for individuals. And while other pre-PC microcomputers such as the TRS-80 and Commodore PET were sold fully assembled, they needed some cable-plugging and lots of learning.

IBM PCs with DOS required cable-plugging too, although IBM made the cabling easier by using unique, asymmetric plugs for each type of cable. It was nearly impossible to plug the wrong things together, or to plug the right cable into the right jack in the wrong orientation. Yet IBM PC DOS still required lots of learning.

Microsoft Windows made things easier -- eventually. The early versions of Windows required a lot of set-up and configuration. Early Windows was a program that ran on top of DOS, so to run Windows you had to configure DOS and then install and configure Windows.

The Apple Macintosh was the one computer that made things easy. And today's PC with Windows pre-installed and configured with automatic updates is very easy to use. But let's ignore those computers for now. I want to focus on the "it's hard to set up and use a computer" concept.

When computers were difficult to use, only the people who wanted to use computers would use computers. Like-minded geeks would join together in user groups and share their hard-earned knowledge. User group members would respect each other for their accomplishments: installing an operating system, attaching peripheral devices, or building a computer.

In today's world, computers are easy to use and lots of people support them. One can walk into any number of consumer stores (including Wal-Mart) and buy a PC, take it home, and do interesting things with it.

Not only can you buy PCs, but the businesses that one deals with know how to support PCs. When calling a company for technical support, the company (whether it is the local internet provider, a bank, or a movie distributor) has a customer support department that understands computers and knows how to get them working.

Computers have changed from the hard-to-use, only-for-geek devices to plain consumer appliances. They are almost the equivalent of toasters.

If they are running Windows.

You can buy PCs with Windows in just about any store. You can buy Macintosh computers in a few places -- but not as many as the places that sell Windows PCs. And you can buy PCs with Linux in a very few places, if you know where to look.

Businesses have customer support departments that know how to fix Windows PCs. And a few can support Apple Macintosh PCs. And a very few will support Linux PCs.

Linux, for all of its popularity, is still a do-it-yourself operating system. As an enterprise, you can purchase Linux support services, but as an individual you are expected (by our society) to use Windows (or maybe a Mac).

Linux geeks, for the most part, buy PCs and install Linux. They don't look for PCs with Linux installed. They will buy a PC without an operating system, or they will buy a PC with Windows and then install Linux on it (possibly saving Windows, possibly not). This behavior skews the market research: marketers count sales, and by that measure PCs with Linux pre-installed appear not to sell.

Linux geeks also respect each other for their accomplishments: installing Linux, adding peripheral devices, and re-compiling the kernel. They have to respect each other, because they need each other. Linux users cannot count on outside entities for support like Windows users can.

Some Linux distros have made installation and upgrades very easy. These distros lower the barriers to entry for individuals and expand the potential population of Linux users. It's very easy to install Ubuntu Linux or SuSE Linux.

The difference between an easy-to-use Linux and Windows is now not in the installation of the operating system, nor in the software that is supported. The difference is in the external support. Windows users have lots of options (not always effective options, but lots); Linux users must be rugged individuals with the right social networks. Getting Linux fully accepted into the support structure will take a lot of work -- possibly more work than getting the install to work on different hardware.

Sunday, November 14, 2010

Java's necessary future

Now that Oracle has purchased Sun, we have a large cloud of uncertainty for the future of Java. Will Oracle keep Java, or will it kill it off? Several key Java folks have left Oracle, pursuing other projects, and Oracle has a reputation for dropping technologies that have no direct effect on the bottom line. (Although one has to wonder why Oracle, a database company, chose to purchase Sun, a hardware company that happened to own Java and MySQL. Purchasing Sun to get MySQL seems to be an expensive solution, one that is not in Oracle's usual pattern.)

Java is an interesting technology. It proved that virtual processors were feasible. (Java was not the first; the UCSD p-System was a notable predecessor. But Java was actually practical, whereas the earlier attempts were not.) But Java has aged, and needs not just a face-lift but a re-thinking of its role in the Oracle stack.

Here's my list of improvements for "Java 2.0":

- Revamp the virtual processor. The original JVM was custom-built for the Java language. Java 2.0 needs to embrace other languages, including COBOL, FORTRAN, LISP, Python, and Ruby.

- Expand the virtual processor to support functional languages, including the new up-and-coming languages of Haskell and Erlang. This will help LISP, Python, and Ruby, too.

- Make the JRE more friendly to virtualization environments like Oracle VM, VMWare, Parallels, Xen, and even Microsoft's Virtual PC and Virtual Server.

- Contribute to the Eclipse IDE, and make it a legitimate player in the Oracle universe.

Java was the ground-breaker for virtual processor technologies. Like other ground-breakers such as FORTRAN, COBOL, and LISP, I think it will be around for a long time. Oracle can use this asset or discard it; the choice is theirs.

Thursday, November 11, 2010

Improve code with logging

I recently used a self-made logging class to improve my (and others') code. The improvements to code were a pleasant side-effect of the logging; I had wanted more information from the program, information that was not visible in the debugger, and wrote the logging class to capture and present that information. During my use of the logging class, I found the poorly structured parts of the code.

A logging class is a simple thing. My class has four key methods (Enter(), Exit(), Log(), and ToString()) and a few auxiliary methods. Each method writes information to a text file, which is specified through one of the auxiliary methods. Enter() is used to capture the entry into a function; Exit() captures the return from the function; Log() adds an arbitrary message to the log file, including variable values; and ToString() converts our variables and structures to plain text. Combined, these methods let us capture the data we need.
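A sketch of the class, reduced to its essentials (the file handling and the array-dumping ToString() overload here are simplified illustrations, not the exact implementation):

```cpp
#include <fstream>
#include <sstream>
#include <string>

// A minimal logging class; the real class has more auxiliary methods.
class Logger
{
public:
    // Auxiliary method: specify the output text file.
    static void SetFile(const std::string &filename)
    {
        Stream().open(filename.c_str(), std::ios::app);
    }

    // Record entry into a function.
    static void Enter(const std::string &function)
    {
        Stream() << "** Enter: " << function << "\n";
    }

    // Record the return from a function.
    static void Exit(const std::string &function)
    {
        Stream() << "** Exit: " << function << "\n";
    }

    // Add an arbitrary labeled message to the log.
    static void Log(const std::string &label, const std::string &value)
    {
        Stream() << label << value << "\n";
    }

    // Convert a single value to text.
    static std::string ToString(double value)
    {
        std::ostringstream oss;
        oss << value;
        return oss.str();
    }

    // Convert a whole array to text -- the view the debugger cannot give us.
    static std::string ToString(const double *values, int count)
    {
        std::ostringstream oss;
        oss << "[";
        for (int i = 0; i < count; ++i)
            oss << (i ? " " : "") << values[i];
        oss << "]";
        return oss.str();
    }

private:
    static std::ofstream &Stream()
    {
        static std::ofstream stream;
        return stream;
    }
};
```

The array overload of ToString() is the workhorse: one call renders an entire double* buffer as a single readable line in the log.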

I use the class to capture information about the flow of a program. Some of this information is available in the debugger but some is not. We're using Microsoft's Visual Studio, a very capable IDE, but some run-time information is not available. The problem is due, in part, to our program and the data structures we use. The most common is an array of doubles, allocated by 'new' and stored in a double*. The debugger can see the first value but none of the rest. (Oh, it can if we ask for x[n], where 'n' is a specific number, but there is no way to see the whole array, and repeating the request for an array of 100 values is tiresome.)

Log files provide a different view of the run-time than the debugger. The debugger can show you values at a point in time, but you must run the program and stop at the correct point in time. And once there, you can see only the values at that point in time. The log file shows the desired values and messages in sequence, and it can extract the 100-plus values of an array into a readable form. A typical log file would be:

** Enter: foo()
i == 0
my_vars = [10 20 22 240 255 11 0]
** Enter: bar()
r_abs == 22
** Exit: bar()
** Exit: foo()

The log file contains the text that I specify, and nothing beyond it.

Log files give me a larger view than the debugger. The debugger shows values for a single point in time; the log file shows me the values over the life of the program. I can see trends much more easily with the log files.

But enough of the direct benefits of log files. Beyond showing me the run-time values of my data, they help me build better code.

Log files help me with code design by identifying the code that is poorly structured. I inject the logging methods into my code, instrumenting it. The function

double Foobar::square(double value)
{
    return (value * value);
}

Becomes

double Foobar::square(double value)
{
    Logger::Enter("Foobar::square(double)");
    Logger::Log("value: ", Logger::ToString(value));
    double result = (value * value);
    Logger::Exit("Foobar::square(double)");
    return result;
}

A bit verbose, and perhaps a little messy, but it gets the job done. The log file will contain lines for every invocation of Foobar::square().

Note that each instrumented function has a pair of methods: Enter() and Exit(). It's useful to know when each function starts and ends.

For the simple function above, one Enter() and one Exit() call suffice. But for more complex functions, multiple Exit() calls are needed. For example:

double Foobar::square_root(double value)
{
    if (value < 0.0)
        return 0.0;
    if (value == 0.0)
        return 0.0;
    return (pow(value, 0.5));
}

The instrumented version of this function must include not one Exit() call but one for each return statement.

double Foobar::square_root(double value)
{
    Logger::Enter("Foobar::square_root(double)");
    Logger::Log("value: ", Logger::ToString(value));
    if (value < 0.0)
    {
        Logger::Exit("Foobar::square_root(double)");
        return 0.0;
    }
    if (value == 0.0)
    {
        Logger::Exit("Foobar::square_root(double)");
        return 0.0;
    }
    Logger::Exit("Foobar::square_root(double)");
    return (pow(value, 0.5));
}

Notice all of the extra work needed to capture the multiple exits of this function. This extra work is a symptom of poorly designed code.

In the days of structured programming, the notion of simplified subroutines was put forward. It stated that each subroutine ("function" or "method" in today's lingo) should have only one entry point and only one exit point. This rule seems to have been dropped.

At least the "only one exit point" portion of the rule has been dropped. Modern-day languages allow only one entry point into a method, but they allow multiple exit points, and this lets us write poor code. A better (uninstrumented) version of the square-root method is:

double Foobar::square_root(double value)
{
    double result = 0.0;

    if (is_rootable(value))
    {
        result = pow(value, 0.5);
    }

    return result;
}

bool Foobar::is_rootable(double value)
{
    return (value > 0.0);
}

This code is longer but more readable. Instrumenting it is less work, too.

One can visually examine the code for the "extra return" problem, but instrumenting the code with my logging class made the problems immediately visible.
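As an aside for C++ code: the Enter()/Exit() pairing can also be automated with a small scope-guard object whose destructor logs the exit on every return path. This is only a sketch (the LogScope helper, the stripped-down Logger stand-in, and the free-function form of square_root are illustrations, not part of my logging class):

```cpp
#include <cmath>
#include <iostream>
#include <string>

// Minimal stand-in for the Logger class described above.
struct Logger
{
    static void Enter(const std::string &f) { std::cout << "** Enter: " << f << "\n"; }
    static void Exit(const std::string &f)  { std::cout << "** Exit: "  << f << "\n"; }
};

// Hypothetical helper: Enter() on construction, Exit() on destruction,
// so every return path is logged without hand-written Exit() calls.
class LogScope
{
public:
    explicit LogScope(const std::string &function)
        : m_function(function)
    {
        Logger::Enter(m_function);
    }

    ~LogScope()
    {
        Logger::Exit(m_function);
    }

private:
    std::string m_function;
};

double square_root(double value)
{
    LogScope scope("square_root(double)");
    if (value <= 0.0)
        return 0.0;               // the destructor logs the exit here...
    return std::pow(value, 0.5);  // ...and here
}
```

The guard removes the instrumentation burden of multiple exits, but it does not remove the design smell; the single-exit refactoring above is still the better fix.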


Sunday, November 7, 2010

Where Apple is falling behind

Apple is popular right now. It has a well-received line of products, from MacBooks to iPhones to iPads. It has easy-to-use software, from OSX to iTunes and GarageBand. It has beaten Microsoft and Google in the markets that it chooses.

Yet in one aspect, Apple is falling behind.

Apple is using real processor technology, not the virtual processors that Microsoft and Google use. By "virtual processors", I mean the virtual layers that separate the application code from the physical processor. Microsoft has .NET with its virtual processor, its IL code, and its CLR. Google uses Java and Python, and those languages also have the virtual processing layer.

Most popular languages today have a virtual processing layer. Java uses its Java Virtual Machine (JVM). Perl, Python, and Ruby use their virtual processors.

But Apple uses Objective-C, which compiles to the physical processor. In this, Apple is alone.

Compiling to physical processor has the advantage of performance. The virtual processors of the JVM and .NET (and Perl, and Python...) impose a performance cost. A lot of work has been done to reduce that cost, but the cost is still there. Microsoft's use of .NET for its Windows Mobile offerings means higher demands for processor power and higher draw from the battery. An equivalent Apple product can run with less power.

Compiling to a virtual processor also has advantages. The virtual environments can be opened to debuggers and performance monitors, something not possible with a physical processor. Therefore, writing a debugger or a performance monitor in a virtual processor environment is easier and less costly.

The languages which use virtual processors all have the ability for class introspection (or reflection, as some put it). I don't know enough about Objective-C to know if this is possible, but I do know that C and C++ don't have reflection. Reflection makes it easier to create unit tests and perform some type checking on code, which reduces the long-term cost of the application.

The other benefit of virtual processors is freedom from the physical processor, or rather from the processor line. Programs written to the virtual processor can run anywhere, with the virtual processor layer. This is how Java can run anywhere: the byte-code is the same, only the virtual processor changes from physical processor to physical processor.

The advantages of performance are no longer enough to justify a physical processor. Virtual processors have advantages that help developers and reduce the development costs.

Is it possible that Apple is working on a virtual processor of its own? The technology is well within their abilities.

I suspect that Apple would build their own, rather than use any of the existing virtual processors. The two biggies, .NET and Java, are owned by companies not friendly to Apple. The engines for Perl, Python, and Ruby are nice but perhaps not suitable to the Apple set of applications. An existing engine is not in Apple's best interests. They need their own.

Apple doesn't need the virtual processor engine immediately, but they will need it soon -- perhaps within two years. But there is more to consider.

Apple has pushed the Objective-C, C, and C++ languages for its development platform. For the iPhone, iPod, and iPad, it has all but banished other languages and technologies. But C, C++, and Objective-C are poorly suited for virtual processors. Apple will need a new language.

Given Apple's desire for reliable (that is, crash-free) applications, the functional languages may appeal to them. Look for a move from Objective-C to either Haskell or Erlang, or possibly a new language developed at Apple.

It will be a big shift for Apple, and their developers, but in the long run beneficial.

Friday, November 5, 2010

Kinect, open source, and licenses

So Microsoft has introduced "Kinect", a motion-sensing controller for the XBOX that competes with Nintendo's Wii.

Beyond winning an award for a stupid product name, Microsoft has also become rather protective of the Kinect. Microsoft wants to keep Kinect for itself, and prevent anyone else from writing code for it. (Anyone without a license, apparently.)

The Adafruit company is offering a bounty for an open source Kinect driver. Such a driver could allow Kinect to work with systems other than the XBOX. Horrors! People actually using a Microsoft project for something they choose!

Microsoft's response includes the following: “With Kinect, Microsoft built in numerous hardware and software safeguards designed to reduce the chances of product tampering. Microsoft will continue to make advances in these types of safeguards and work closely with law enforcement and product safety groups to keep Kinect tamper-resistant.”

The interesting part of that statement is "law enforcement". Microsoft seems to be confusing breach of contract with criminal law.

In contract law, you can file a suit against someone for breach of contract. You win or lose, depending on the contract and the case you present (and the case that the defense presents). But here is the thing: a win does not condemn the accused to prison, nor do you get punitive damages. Punitive damages are not available in contract law -- the contract specifies the penalties for breach.

Unless Microsoft is counting on the DMCA, and its clause against circumventing copy-protection devices. They may win a criminal case on those grounds.

But Microsoft is missing the bigger point. They think that they must control Kinect and allow it to run only on the XBOX, thereby increasing sales for the XBOX and Windows. But it's a selfish strategy, and one that will limit the growth of Kinect in the future.

Microsoft must decide the more important of two goals: sales of Kinect software or control of Kinect software.

Wednesday, November 3, 2010

Apple's Brave New World

Apple is bringing the goodness of the App Store to Macintosh computers and OSX. The popular application storefront from the iPhone/iPod/iPad will be part of OSX "Lion".

Apple has set a few rules for applications in the App Store, including "doesn't crash" and "doesn't duplicate existing apps".

I like and dislike this move.

First the dislikes: The App Store is a toll booth for Apple, a chokepoint on Mac applications that ensures Apple gets a cut of every sale. It also gives Apple the ability to suppress any app. (For any reason, despite Apple's propaganda.) It is a lot of power concentrated in one entity.

Now for the likes: It raises the bar for software quality, and probably reduces the price of apps. Where PC applications (and Mac applications) typically cost from $100 to $500, the lightweight apps in the App Store go for significantly less. I expect the same for Mac apps.

To shift to metaphors:

The initial days of personal computing (1977-1981) were a primitive era, requiring people to be self-sufficient, equivalent to living on the open Savannah, or in northern Britain. How they built Stonehenge (Visicalc) we will never really know.

The days of the IBM PC (DOS and Windows) were roughly equivalent to the Egyptian old kingdom and the empire, with some centralized direction and some impressive monuments (WordPerfect, Lotus 1-2-3, Microsoft Office) built with a lot of manual labor.

The brave new era of "app stores" (either Apple or Microsoft) will possibly be like the Roman Empire, with better technology but more central control and bureaucracy. Computers will finally be "safe enough" and "simple enough" for "plain users".

The new era brings benefits, but also signals the end of the old era. The days of complete independence are disappearing. Computers will be appliances that are controlled in part by the vendor. Applications will shrink in size and complexity (probably a good thing) and work reliably within the environment (also a good thing).

It's a brave new world, and developers would be wise to learn to live in it.

Tuesday, November 2, 2010

Pondering my next PC

Necessity is the mother of invention.

This week, my trusty IBM ThinkPad of ten years developed a severe case of lock-up. This is most likely a problem with the CPU card. (In a laptop, just about everything is on the CPU card, so that's where the problems lie.)

The loss of the laptop is disappointing. It has been a good friend through a number of projects. It was reliable. It had a very nice screen. I liked the keyboard.

Its passing leads me to think of a replacement. And this leads to several ideas.

First idea: Do I need to replace it? I'm not sure that I do. I've collected a number of other PCs in the past ten years, including an Apple MacBook which has better wireless connectivity. I may be able to live without a replacement.

Second idea: Replace it with a tablet. The Apple iPad comes to mind, although I am not happy with the screen (I dislike the high-gloss finish) and I would prefer a tablet that can play my Ogg Vorbis-encoded music.

Third idea: Replace it with a smart phone. Probably an Android phone, as I have the same dislikes of the iPhone as the iPad.

In brief, I am not considering a desktop PC, and considering but not committed to a laptop. This is a big change from a few years ago, when a desktop was considered "the usual" and a laptop was considered "nice to have".

Conversations with others (all tech-minded folks) show that most folks are thinking along similar lines. The techies are leaving desktop PCs and laptops. The future is in mobile devices: smart phones and tablets.