Tuesday, May 31, 2011

Fast food software

Our industry of software development has used the "large software" model. By "large software", I mean software that costs a lot: not just in purchase cost (or support licenses) but in care and feeding.

I recently attended a science fiction convention. I noticed that the con used little in the way of tech. The folks running the con used computers, but few attendees used them. The con made no effort to interact with people via tech. The con had a static web site and a PDF with a list of events.

In contrast, the O'Reilly conferences use tech to involve people. They have a web site. You can create a profile. The web site has a schedule of sessions, and you can create your personal schedule, picking those sessions that interest you.

The difference between the two organizations is technical firepower. O'Reilly has the wherewithal to hire folks and make the interaction happen. Science fiction conventions are often volunteer-run and have little technical expertise on staff.

I suspect that a lot of small organizations, companies, and government agencies are in similar situations. They probably use PCs with Windows, MS-Office, and Internet Explorer... because that's the software that is on the PC when they take it out of the box. Small organizations don't have IT support teams and cannot adopt today's high-tech solutions (the effort outstrips the one guy who sets up the PCs).

Small shops are too often run on a bunch of PCs, a router, and shared spreadsheets. (Or worse, passed-around spreadsheets, and we're not really sure who has the latest changes.) They don't have the expertise to install and support custom-made systems. (I suspect that many small shops don't have the experience to install MS-Office and track licenses and activation codes.)

Smartphones and tablets (and the cloud) are a possible solution here. The model provided by smartphones and tablets is a good fit for these small organizations. Installation of software is handled by a single click. (No license, no activation code, no special instructions.) Smartphone and tablet software is also priced within the budget of a small organization. The local Mom-and-Pop store will never, ever, purchase and install an ERP system (nor should they) but they can record expenses on their phones.

The business model for small software is different from the business model for large software. Large software is like a dinner at a high-end restaurant: You have personal service and your meal is prepared according to your specifications. Small software is fast food: You pick a set of items from a limited menu and your meal is handed to you from a stack of prepared items, with little or no personalization. It is not elegant... but it is fast, predictable, and cheap.

With the right combination of front-end software (on tablets and phones) and back-end software (in the cloud), small organizations will be able to use tech and involve their customers. (And involve their customers in a way that is easy for customers to use.)

There is a market here. It's different from the current market, in terms of tech and business. It is a high-volume, low-margin market. And for those who exploit it capably, it will be very profitable.

Monday, May 30, 2011

Tablets and cloud, perfect together

If you want to see the future of computing, look at tablet computers and cloud computing. These two technologies complement each other and provide exciting opportunities for new applications.

Tablet computers (riding on wi-fi and cell networks) provide computation power to people when they want, where they want. Cloud computing can provide the support for tablet applications, whether for music, video, or plain ole' apps. Alone, either platform is nice but not so hot. Combined, the two will do well.

The two technologies will mean excitement for developers, too. Cloud apps are different from web apps and PC apps, and I believe a new language will emerge as the standard for cloud apps. (Just which language is not clear at the moment, but I suspect it will be a functional programming language.)

I don't see PC apps or web apps disappearing, but I do see less emphasis on them. I suspect that we have seen all of the major PC apps, and that only custom, highly specific applications will be developed for PCs. There will be no new "killer app" for PCs, no new "Lotus 1-2-3" or "Word", no new PC app that everyone must have. We may see new apps for the (non-tablet) web, but every new web app will have a tablet/phone counterpart.

The tablets (and smartphones) rule! We will see new tablet/phone apps backed by the cloud. Some will have web counterparts, but many will not.

The tablet/cloud future awaits!

Thursday, May 26, 2011

Build on clouds

Folks have put effort (some folks lots of effort) into making the cloud look and behave like plain old PC applications. Not just the look and feel of apps, but the techniques to build those apps. Microsoft has designed their tools to make the development of cloud apps a lot like the development of PC applications. (Their tools also make the development of web apps similar to the development of PC apps.)

I think that this is the wrong approach.

I recognize the benefits of similar environments and tools. A single method is easier to learn and support. One can move from development on one project to development on another, even on a different platform.

But there is nothing in the PC app development model that makes it the right model for all platforms.

The PC app toolset has a long history. We started with the tools and techniques of the mainframe and minicomputer platforms, but didn't keep them. We adopted the notions of compilers, interactive command lines, and version control, and then built our own tools that leveraged PC resources: the IDE, interactive debuggers, and GUI designers. We abandoned the processes of desk-checking software and symbol cross-references; we built modern processes, from code reviews to automated testing.

With cloud computing, expect new methods and tools that leverage the resources of the cloud. You won't use the new techniques if you stay in the PC mindset.

If you must have a single development platform, make it the cloud. That is, make the process and tools for the development of PC apps look and feel like the process for the development of cloud apps. Instead of adapting the cloud to the PC app process, create a process for cloud apps and then adapt it to the PC app process.

Wednesday, May 25, 2011

Files are out, web services are in

The trusty file has been with us (that is, us PC users) since before the dawn of time. (If we consider the "dawn of time" to be the introduction of the IBM PC.) It has been the workhorse of data containers, going back to the elder days of CP/M and DEC PDP-11 computers.

While files have served us well, they are poor citizens of cloud computing. Files insist on being, on existing, at some location (usually on a disk). Because they have a known physical location, they are problematic for the cloud. A file is a singular thing, existing on a singular host. Even files on a device such as a SAN are on a singular host -- the SAN, arguably, is a host. A single point of access is a bottleneck and a threat to performance.

But it turns out that we don't need files. Files are nothing more than a collection of bytes. Some consider them streams. Others think of them as collections not of bytes but of characters.

We need collections (or streams) of bytes (or characters). But we don't have to house those collections in files.

In the cloud, we can use web services. Web services can provide collections of bytes (or characters) just like files. Web services can be distributed across multiple hosts, eliminating the bottleneck of files. Unlike files, web services are identified by URIs, and this gives us flexibility. A "file" provided by a URI can come from one server or from one of many (although you must ensure that all servers agree upon the contents of the file).

The design of a classic PC application involved the specification of input and output files. The design of a cloud application must involve the specification of web services.
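
To make the idea concrete, here is a minimal sketch in Python (nothing beyond the standard library; the URI is invented for illustration). The application logic depends on "a collection of bytes", not on where those bytes live:

    # A minimal sketch: the same collection of bytes can come from a
    # local file or from a web service identified by a URI. The URI
    # used below is invented for illustration.
    from urllib.request import urlopen

    def read_bytes_from_file(path):
        # The classic PC approach: a singular file on a singular host.
        with open(path, "rb") as f:
            return f.read()

    def read_bytes_from_service(uri):
        # The cloud approach: any server behind the URI may supply the bytes.
        with urlopen(uri) as response:
            return response.read()

    # The rest of the application does not care which one supplied the data:
    # data = read_bytes_from_file("report.dat")
    # data = read_bytes_from_service("https://storage.example.com/report.dat")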

Sunday, May 22, 2011

Discoveries about discovery

Recent developments in tech have created an automated process to handle "discovery", the process of reviewing materials for a legal case.

One might think that law firms will adopt the technology, as a way to reduce costs. Or one might think that law firms will *not* adopt the tech, believing that they are traditional and unwilling to change. Or perhaps one might think that law firms are risk-averse, and do not want to try new tech that could miss something and cause the loss of a case.

I have a different outlook. I think law firms will avoid the tech of automated discovery, for economic reasons.

Law firms use the time-and-materials business model. They bill by the hour: the more hours, the higher the bill. Law firms have constructed their hourly rate to cover their costs (including labor) and to provide profit. Thus, each billable hour generates profit (not just revenue) for the firm. A reduction in labor (billable hours) equates to a reduction in profit.

Industries that have adopted technology (specifically to reduce labor hours) are industries that sell products with prices fixed by the market. Automobiles, hair dryers, books, computers... these are all sold at the market price. Profit is revenue less the cost of goods and manufacturing. The costs of labor and goods do not drive the price, and a manufacturer must manage costs to live within the market price. A reduction in labor hours equates to an increase in profit.
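
A toy calculation makes the asymmetry plain (the numbers are invented for illustration; here in Python):

    # Invented numbers, purely for illustration: how a labor-saving tool
    # affects profit under the two business models.
    rate = 300       # billable rate per hour (time-and-materials firm)
    cost = 100       # internal cost per labor hour
    price = 50000    # fixed market price for the finished product

    for hours in (400, 200):  # before and after automation halves the labor
        tm_profit = hours * (rate - cost)      # time-and-materials firm
        market_profit = price - hours * cost   # market-priced manufacturer
        print(hours, tm_profit, market_profit)

    # Halving the hours cuts the time-and-materials profit from 80000 to
    # 40000, but raises the manufacturer's profit from 10000 to 30000.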

Thus, we can expect that businesses selling goods (or services) at prices dictated by the market will adopt cost-reducing techniques and technologies. They will computerize accounting systems, automate assembly lines with robots, and outsource software development.

We can also expect that businesses selling services on a time-and-materials basis will *not* adopt technologies that reduce labor hours. (They may adopt techniques that reduce labor costs, replacing highly paid workers with lower-wage workers. But a reduction in billable hours is unlikely.)

What of this can we apply to software development?

The software development industry is a complex one. Some software is created and sold on a time-and-materials basis. Some is sold in the market. Some is given away. Some software that is sold is not sold in a free market; Microsoft enjoys a monopoly position with Windows and Office. (A monopoly that is perhaps less strong now than ten years ago, yet still part of the complexities of the market.)

Companies that build software with the time-and-materials business model have little incentive to reduce the workload in their projects. These projects benefit from high labor efforts, and therefore we can expect them to use little in the way of effort-reducing techniques. Companies that build software for sale on the open market have strong incentives to minimize their costs. Cost reduction methods include:
  • outsourcing
  • agile development (pair programming, automated tests; a minimal test example appears below)
  • modern languages (Python, Ruby, Scala, Lua, Haskell)
Companies in the time-and-materials business model do not need such cost-reducing measures. If you're not working with the above techniques, then you're probably on a time-and-materials project. Or you're in a company that will soon be out of business.
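
As a sketch of the second item above, here is what a minimal automated test looks like in Python, using the pytest convention. (The function under test, sales_tax, is invented for the example.) Once written, the check runs on every build in milliseconds, with no billable hours attached:

    # A minimal automated test, pytest style. The function under test
    # (sales_tax) is invented for this example.

    def sales_tax(amount, rate=0.06):
        # Return the tax due on a purchase amount, rounded to cents.
        return round(amount * rate, 2)

    def test_sales_tax():
        assert sales_tax(100.00) == 6.00
        assert sales_tax(0.00) == 0.00
        assert sales_tax(50.00, rate=0.05) == 2.50

    # Run with: pytest test_sales_tax.py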

Saturday, May 14, 2011

Cannonballs and fence posts

Imagine a world that has unstoppable cannonballs. Once launched, the cannonball is not stopped, or even slowed, by anything it encounters. It continues on, impervious even to friction. Also imagine that this world has immovable fence posts. Once placed, an immovable fence post cannot be moved by any means. What happens when an unstoppable cannonball collides with an immovable fence post?

The riddle is not unrelated to the announcement of Google's Chromebook and its automatic updates.

Unlike Windows and Mac OS X, the Chromebook applies updates without asking for permission -- it finds them and loads them as a matter of its general operation. Windows and OS X (and even Linux) follow a different model: they inform the user of an update but allow the user to decline it (or at least defer it).

System administrators for large corporations (and even medium-size ones) want the latter model. They want to ensure that their operations continue, and they want control over updates. A good systems administrator will test updates on a few systems before releasing them to the entire corporation.

But individuals (and possibly small companies) want automatic updates. For them, the workload of monitoring, testing, and releasing updates is a burden. They choose to trust the supplier, and gain back the time and effort that would go into verifying updates.

So here we have the two opposing forces: automatic updates (the unstoppable cannonball) and controlled updates (the immovable fence post).

The answer to the riddle is a bit of a let-down: It is not that one overpowers the other, but that the question is not valid. If such a thing as an unstoppable cannonball exists, then by definition there can be no such thing as an immovable fence post. (And vice-versa.)

The answer to the current debate about Chromebook and automatic updates is less clear. I expect that individuals will look favorably on automatic updates and large enterprises will continue to use the "test and then apply" method. I think the two can both exist in our world.

Look for consumer devices to adopt automatic updates. Look for commercial software to stay with manual updates. Software that bridges the two worlds (Linux) will allow users to set the update method.
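
A sketch of that bridging behavior, in Python with invented function names (real systems such as apt or Windows Update implement these paths natively):

    # Hypothetical updater logic: the user picks the policy and the
    # software honors it. All names here are invented for the sketch.

    def install(update):
        print("installing", update, "now")

    def notify_user(update):
        print(update, "available: apply, defer, or decline?")

    def stage_for_testing(update):
        print(update, "staged; test on a few machines, then release")

    def handle_update(update, policy):
        if policy == "automatic":    # the Chromebook model
            install(update)
        elif policy == "notify":     # the Windows / Mac OS X model
            notify_user(update)
        elif policy == "manual":     # the enterprise sysadmin model
            stage_for_testing(update)

    handle_update("security patch 42", policy="notify")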

Wednesday, May 11, 2011

Are you an A-list programmer? Do you care?

Programmers are a diverse lot. Not merely in gender, ethnic background, or age, but in programming talent. Tests of programming skills show a large difference between the best and worst programmers: some tests report a factor of twenty-five.

Such a large difference may offend the sensibilities of some managers. (Especially managers in organizations that consider programming staff to be interchangeable cogs.) Yet I think that we can agree that some programmers are better than others. I classify programmers into three tiers.

The A-list programmers are those few who are superb at the craft. Not only productive, they are knowledgeable and creative. These are the folks who create Twitter and Facebook and Amazon.com. They do not fit into large, stodgy bureaucracies; the only large companies they stay with are the ones that they found. These folks work with the cutting-edge languages; today, that list is Haskell, Scala, Lua, and perhaps Python. These programmers are mobile: if an A-list programmer does not like his current work, he moves on to something more appealing.

The B-list programmers are hard-working and passionate, but they work at a less entrepreneurial level than the A-list folks. They don't found companies, but they do have lots of ideas and creativity. They don't use the cutting-edge languages, but do use modern languages such as Ruby, Python, Objective-C, or Perl. They are not as mobile as the A-list programmers, but they do move from opportunity to opportunity.

The C-list folks are the determined programmers. They are not entrepreneurs, and they don't have ideas that are burning to get out. They are quietly sensible, and know that the paycheck is there to pay the rent. These folks avoid the (chaotic) start-ups and are comfortable with the (predictable) large bureaucracy. They use the languages that the corporation deems "acceptable": Java, C#, or maybe C++ or Visual Basic. (The last two are from legacy systems which must be maintained but not converted to new languages.) These folks move rarely, being risk-averse and preferring the known devil to the unknown demon.

The difference in performance matches the risk tolerance of the company. Small start-ups require talented performers, and cannot survive mediocrity. Large corporations find it difficult to tolerate creative folks who break the rules, and prefer conformance over performance.

I am sure that there are exceptions to these broad categories. The correlation of performance and language selection is yet to be shown, and some projects will require the use of an older language. But I think this alignment is pretty accurate. The top performers are where they are because they pick the best tools -- and languages -- for the job. The worst performers know only one language, try to solve every problem with it, and have picked a language that is "safe" (one that matches lots of job openings). They are not on the cutting edge of programming.

Where you fit into this scheme is, for the most part, up to you. If you are willing to take risks and are creative, you can be an A-list programmer. (You need to be a good programmer, too.) If you prefer a safer solution, a large company with older technology is a better fit for you. (Although I don't know that larger companies are safer, in these days.)

Tuesday, May 10, 2011

Constants aren't, variables won't

We like to think that standards are fixed and unchanging. Yet they shift and change over time, like sands on the shore.

Hardware has certainly changed. In fact, it has changed so much that today's PCs share nothing with the original IBM PC. Take any part of today's PC and you will find that it will not connect or attach to the original IBM PC. The keyboard is different (USB was not available on the IBM PC), the display is different (the IBM PC had monochrome or CGA, not VGA and certainly not DVI), the internal cards use a different bus... the list goes on.

The software platform has changed, too. From DOS, to Windows 3.1, to Windows 95, to Windows NT, and on to Windows XP and now Windows 7. While some early PC programs still run under Windows 7, no one uses them for serious work.

Development tools have changed. Original PC programs were written in assembly language. The early versions of Windows used Pascal. Later versions used C and then C++. Now the dominant language is C#.

Our standards change. Our tools change. In this environment, we must adopt development practices that accommodate change. Yet too many shops treat systems as fixed solutions: define the requirements, build the system, test it, and release it. They allow for defect corrections and enhancements (often called "maintenance") but they do not plan for changes to platforms or tools. Instead, they wait for new platforms and tools to become dominant, and then they rush to bring their system to the new environment. People rush and work under pressure to mutate a system into a new form, and the results are often dismal.

Forward-looking shops will prepare for change. They will encourage people to learn new technologies. They will strive for clean designs. They will monitor changes in technology and tools. They will look outward and observe others in their industry, in other industries, and in academia.

One does not have to predict the future precisely. But one must be prepared for the future.

Tuesday, May 3, 2011

It's a poor atom blaster that doesn't scale

In Isaac Asimov's "Foundation" novel, the character Salvor Hardin utters the phrase "It's a poor atom blaster that doesn't point both ways." The same can be said of "scaling", the process of adjusting an application or system to handle a different workload.

Most people think of scaling in the upward direction. Having built a prototype or proof-of-concept system, they ask themselves "will this design 'scale'?", meaning "will this design hold up under a large workload?". Frequently the system under discussion is a database, but it can be a social network or just about anything.

Yet scaling works in two directions. One can create a system that works well for a single user but fails with many users, and one can create a system that works for many users but fails for a small number. And the system does not have to be a database or even a system at all.

One example is Microsoft's products for version control: Visual SourceSafe and Team Foundation Server. The former is (or was, as it is no longer sold) usable by small teams, perhaps up to twenty people. Beyond that, it is hard to manage user rights and the underlying database. The latter (Team Foundation Server), on the other hand, is built for large teams -- but is unsuitable for a small team of fewer than five. TFS has a fairly heavy administration cost, and it sucks up time and energy. A large team can afford to dedicate a person or two to the administration; a small team cannot.

Languages are also affected by scaling. Microsoft's Visual Basic languages (prior to VB.NET) were very good for small Windows applications. Simple applications could be made quite easily, but complex applications were harder. Visual Basic had some object-oriented constructs, but it was mostly procedural and one needed a lot of discipline to create a middling-complex application. Also, Visual Basic had some behaviors that limited the complexity of applications, such as its inability to display modal dialogs within modal dialogs.

Microsoft's Visual C++, in contrast to Visual Basic, was better suited for complex applications. It had full object-oriented constructs. It allowed any number of dialogs (modal or non-modal). But it had a high start-up cost. A single person could use it, but it was better for teams.

Scaling down is important for languages. Developers need tools to perform all sorts of tasks, and some applications are sensible for single users. Microsoft's tools are designed for large problems with large solutions implemented by large teams. Java is in a better situation, with language hacks like Scala and Groovy.

The current crop of scripting languages (Perl, Python, and Ruby) scale well, allowing single developers, small teams, and large teams. (Perl perhaps not quite so easily with large teams.)

With the rise of smartphones, apps have to move to smaller screens and virtual keyboards. (And virtual keyboards make a difference -- one does not type a term paper on a cell phone.) The assumptions of large screens and full keyboards do not apply in the cell phone arena.

Scaling down is just as important as scaling up. Scaling applies to users, data, and developers. Not every application, system, and development effort must scale from smartphone to cloud, but the system designers should look both ways before choosing their strategies.

Sunday, May 1, 2011

The peak of PC apps

There has been talk about "peak oil", the notion that oil production peaks at a specific point and then declines as supplies are depleted. I think that there is a similar notion of "peak software" for a platform. When a platform is introduced, there is some initial interest and some software. As interest in the platform grows, the number of applications grows.

PC applications fell into five categories: office applications (word processing, spreadsheets, presentations), development tools (compilers, interpreters, debuggers, and IDEs), games, specialty applications (CAD/CAM, FedEx and UPS shipping, mortgage document preparation, etc.), and web browsers.

The first (office applications) is generic software, something that everyone can use. The second (development tools) is software used by the tool-makers, a small but diverse (and enthusiastic) crowd. Games, like development tools, appeal to a small but diverse crowd. Interestingly, web browsers are used by many folks; as a tool to get to a web page or web app, they are not an end but a means.

The rise of PC applications started prior to the PC, with the microcomputers of the late 1970s: the Apple II, the TRS-80, the Commodore PET, and others. Dominant programs at the time were compilers and interpreters (BASIC, mostly), word processors (WordStar), spreadsheets (VisiCalc), games, and some business software (accounting packages).

PC applications increased with the introduction of the IBM PC (Lotus 1-2-3, WordPerfect, and more business applications).

But something happened in the late 1990s: the number of applications stopped increasing. Part of this was due to Microsoft's dominance in office applications and development tools (who wants to compete with Microsoft?). Another part of this change was due to the appearance of the world wide web and web applications. I like to think that the "best and brightest" developers moved from PC development (a routine and dull area) to the web (a bright, shiny future).

Fast-forward to 2011, and the development of PC applications has all but stopped. Aside from updates to existing applications, I see nothing in the way of development. And even updates are few: Microsoft releases new versions of Windows and Office, but what other general software is updated?

I expect that the home PC market is about to collapse, with most folks moving to either game consoles or smartphones -- or maybe tablets. The few remaining members of the home PC community will be the folks who started the PC craze: hobbyists and hard-core enthusiasts.

Businesses will continue to use PCs and PC applications. They have large investments in such applications and cannot easily transition to smartphones and tablets. Their processes are set up for PC development; their standards define PCs (and mainframes) but not tablets. And tablet apps are very different from PC apps -- as PC apps are different from mainframe apps -- so the transition will require a change not of hardware but of mindset.

It's clear that Microsoft is prepared for this transition. They have geared their offerings for businesses, home game users, and home web users. They are prepared for the disappearance of the home PC.

Apple, too, is prepared for this change, with its line of iPhones, iPods, and iPads.

The real question is: what happens to Linux? Linux has a "business model" that is built on the use of PCs, usually old PCs that are too small for the latest version of Windows. The hobbyists and enthusiasts may want to use Linux, but how will they get hardware? And without hardware, how will they run Linux?