Saturday, February 27, 2010

Smaller source code

Let's assume that you have decided to reduce the size of your source code. (The code geeks have convinced you that smaller code has fewer defects and is therefore higher quality and less expensive to maintain.)

If you're a programmer, you have several strategies available for reducing the size of your source code.

1) Write more efficient code, or re-write existing code to reduce its size. In C++ programs, I often see code that mixes multiple levels of abstraction in a single function. I've seen code that performs high-level business logic and also allocates memory, or checks that a string object (or CString object) holding a path name has a trailing backslash. Not only does shifting between high-level and low-level contexts produce more code, it demands much more attention from the programmer.

2) Combine similar or duplicate sections into a single function or class. I've seen many systems with duplicate or near-duplicate code. The big risk of duplicate code is that changes will be made in some sections but not all of the duplicate sections. Consolidating code reduces the code size and prevents future problems.

3) Use a different programming language. I can write a program more efficiently in C++ than in C, and more efficiently in Java than in C++. Languages that are more expressive result in smaller programs. (This works for programs over a certain size. The traditional "hello world" program is shorter in C than in Java, but Java scales better than C.)
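As a sketch of the first two strategies, here is a small before-and-after in Python. The function names and the "trailing separator" scenario are invented for illustration, not taken from any real system; the point is that pulling the low-level detail out of the business logic shrinks the code and gives every caller one shared helper.

```python
import os

def ensure_trailing_sep(path: str) -> str:
    """Low-level detail, extracted from the business logic (strategy 1)
    and shared by every caller that needs it (strategy 2)."""
    return path if path.endswith(os.sep) else path + os.sep

def archive_report(report_name: str, folder: str) -> str:
    # The business logic now reads at a single level of abstraction:
    # no inline path fiddling, just "put the report in the folder".
    return ensure_trailing_sep(folder) + report_name
```

Before the extraction, every function that touched a folder path would repeat the trailing-separator check inline; after it, the check lives in one place and a change to it happens once.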

If you're a manager, you can also reduce the size of your code. You must do it indirectly, since you're not working on the code but directing the people who do work on the code.

1) Ask (or exhort, or demand) that your team write less code. A risky proposition, since you are judging the performance of your programmers, and they are quite touchy about such things. At best they will ask you "which portion of the system would you like to remove?" and at worst they will quietly start goofing off.

2) Reduce the size of the programming team.

This last idea may seem a bit strange. If you use the common "man-hours" approach to planning, fewer staff means less "capacity" -- less ability to produce functionality. But the "man-hours" approach does not take into account the size of the code, or the communications load on the team.

A smaller team can operate more quickly. It is easier to come to consensus. There are fewer meetings. There are fewer disagreements. There are fewer chances for mis-communication.

A smaller team has more freedom to reduce the code base (without reducing functionality). They can combine functions and do the things listed above in the "programmer" options. There are fewer people to object, fewer people who will become confused when code modules change.

"Too many cooks spoil the broth", goes the old saying. It's true, for broth and for programs.

Don't spoil your broth.


Saturday, February 20, 2010

The importance of being printed

I've been thinking about e-book readers and the durability of books. Books (that is, the printed dead-tree kind) are fairly durable. Even cheap acid-paper books from the 1950s are still around. E-books (the electronic kind) are ephemeral.

You can keep a book on a shelf, give it as a present, share it with a friend, and will it to your heirs. E-books are transient. You can keep them on your e-reader or in your e-reader account, but you cannot re-sell them, give them to another (effectively selling them for no charge), share them, or bequeath them.

I don't see this as a bad thing. Some may decry e-books as "not as good", but I see them as different and not necessarily better or worse. In fact, I see the electronic form as a better fit for transient publications such as newspapers and magazines. I don't need to keep a newspaper for years and years, nor magazines. They are transient in my life, coming for a short time and then leaving. (I recognize the need for "papers of record" and the permanent storage of information. Just not in my house, thank you.)

I would be happy to read the newspaper on an e-reader. I'm not quite happy enough with the current readers, so I don't have one. What would I like to see in an e-reader? A few things: a larger screen, radio or internet radio, and connections to social network sites. And one more thing: a better name. The term e-reader is clunky. The names "Kindle" and "Nook" are trademarked (and rather dumb names to boot). I think the second generation of readers will appeal to me.

Once we have the readers (the devices, not the people using them), I think we will see a few trends:

- People will drop traditional newspaper and magazine subscriptions for reader subscriptions.
- People will shift some book purchases to electronic forms.
- Some people who don't buy books or magazines will start reading them in electronic forms.
- A few people will refuse to buy books in electronic form, on principle.
- A larger set of people will split their book purchases between paper and electronic.

The last idea interests me. I've read many books, and I have varying opinions about books. Some I enjoy reading multiple times (I'm currently reading "The Lord of the Rings" for the sixth or seventh time), some I read only once, and some I stop reading before I finish.

I think the notions of permanence and transience will apply to books. When I find a book that I enjoy, a book that I want to read multiple times, I will purchase the paper edition. I might use the electronic version as a preview, and purchase only the books that I enjoy. I already have plans to read certain classes of books only in their electronic form: business books like "Selling to Urban Tribes" and "The Seven Signs of a Successful Company", social analysis like "The New Conservatism: How it Can Work for You" and "The Missing Generation: Lost in the Crash". These are not classics and not deserving of second reads. Other books, like Knuth's "The Art of Computer Programming" will have a place on my bookshelf.

If enough other people do this, we'll see a change in the book market. Books will be published first in electronic form, and then in paper form only when there is enough demand. Or perhaps the paper edition will be available as a print-on-demand service, allowing anyone to purchase a paper version of any book.

This change shifts the decision to print books from the publisher to the consumer. Readers will decide which books are worthy of printing, not publishing companies (or editors, or accountants). It will democratize the published content, and open new areas for study. Once The People decide which books are important enough for printing, we can learn about society by studying the printing trends. Which books were printed during the last boom time, or the last recession? Did printing of fantasy novels increase after the latest "Twilight" movie? Our choices will define our culture.

Bring on the readers! (And please bring a new name for them!)

Tuesday, February 16, 2010

Projects and operations

I'm stealing this topic from a posting I read on the internet. I don't know the source, though. I thought the source was the "Keep the Joint Running" blog, but I see no trace of the idea there. (Which doesn't mean it's not there -- it means that my search may be incomplete.)

In the realm of IT, there are two different worlds: development and operations. For this discussion, I am using the larger meaning of "development", the meaning that includes analysis, design, coding, and testing.

The two worlds are different, yet perhaps not as different as one might think.

Let's start with the definitions "an operation is a set of tasks that are performed repeatedly, usually on behalf of a set of users", and "a project is a set of tasks that in the end results in a change to the operation". These definitions lead to very different approaches to the management of development projects and operations. Development is a one-time effort, often requiring intense analysis, intense coding and debugging, and intense testing phases. But when it's done, it's done, and the development team can kick back and relax. Operations, on the other hand, is a round-the-clock effort to keep things running. Rather than change things, operations strives to keep things from changing. The operations team makes every effort to keep systems up and running and available to users.

Yet seasoned operations managers know that change happens. Not just the updates and fixes that are inflicted upon them by the development team, but updates and fixes inflicted upon them by all of the development teams for all of the software and hardware that they use. (Think Microsoft's "Patch Tuesday", for starters.)

These updates are projects. Small projects, compared to the analysis-design-code-test projects of the development teams, but projects nonetheless. They result in a change to the operation. Security fixes result in new versions of components in operation. Updates to virus-scan tables result (usually) in more secure virus-checking systems.

The operations folks run an operation, but also handle projects (even if they don't think of them as projects).

Meanwhile, the development teams think that they are working purely on projects, yet they have an operation. An outsider will see this easily. Observe a development team for any length of time, and you will see that they perform the same actions repeatedly: attending meetings, sharing information, preparing documents, coding and debugging, ... this is the "operation" of the developer.

The mindset of an operations manager is quite different from the mindset of a development manager. The operations manager strives for continuity. Naive operations managers attempt to block all changes. Experienced operations managers recognize the need for changes and create plans to introduce changes with a minimum of risk.

Development managers strive for deliverables but not continuity. The focus on deliverables to the exclusion of all else serves them poorly. They introduce change but manage risk poorly. Don't believe me? Look at the numbers for development project overruns. Compare them to the numbers for operations. Development managers may think that they are managing risk but the numbers tell the true story.

Development managers may benefit from the wisdom of operations managers. The development managers are leading an operation, and while it does not have tape drives and printers and weekly backup jobs it does repetitively perform a set of tasks.

The Agile Programming methods, with their use of short development cycles and automated tests, have learned from operations managers. Agile Programming treats development like an operation, and strives not only to keep the produced code in a state that can be delivered at any moment, but also to work at a sustainable pace. These notions are second nature to the operations manager.

If you want to improve your development team, look to your operations team.


Monday, February 15, 2010

Windows Mobile no more

The New York Times reports that Microsoft has released their new version of Windows Mobile, spiffed up and ready to compete in the rough-and-tough market of mobile devices.

Except that I'm not sure that Microsoft "gets" the phone market. There are two things that tell me this: the new name for Windows Mobile, and the sample picture provided by the NYT.

The name first. Microsoft has changed the name from "Windows Mobile" to "Windows Phone". This is a marketing move, partly to get away from the reputation of previous versions of Windows Mobile. I think it is also a look to the future, in which Microsoft sees people carrying "phones" and not other devices. I share this vision -- I think that in the future we will carry general-purpose devices for phone calls, internet access, books, music, and real-time purchases, and I think that we will call these devices "phones" even though they are a far cry from the land-line phone or even today's smartphone.

I'm guessing that Microsoft is going to focus on these general purpose devices, and drop the specialized devices like the Zune. Look to see Microsoft phone devices with extra capabilities, not new Zunes or e-book readers or non-phone tablets.

But I don't know that we're at the point of convergence. I see a market (for a limited time) for specialized devices. The Kindle and Nook readers are much better book readers than iPhones or iPads. I still carry a separate camera. The decision to jump to a converged platform may be a bit premature.

One other issue with the name: people may get confused. The Microsoft-powered phones will be running "Windows Phone", of course. But the devices themselves will be called phones... or "Windows phones". The name "Windows Phone" refers to the device and to the operating system. An upgrade to a Windows phone will mean new drivers... or maybe more memory. I'm not sure that Microsoft has thought this one through.

The other issue is a design issue. If you look at the NYT article and the accompanying photograph, you can see a sample of Windows Phone. (The hardware and the software.) The phone is running Office OneNote. But look closely -- the display image does not fit on the screen! The OneNote screen is bigger than the available screen on the phone, and you have to scroll to see the rest of it.

Scrolling is evil. Perhaps less evil on a smartphone than on a desktop PC, but still evil. If possible, don't make me scroll! (And it certainly seems possible to redesign the screen to avoid scrolling.) Microsoft in my opinion made a big error here. (Do all of the Office apps require scrolling? If just some, you have an inconsistent experience across the product line.)

So I give Microsoft recognition for looking at the future of phones, but I have to give only partial marks for execution. I think Microsoft needs to think a bit more before it has a ready-for-prime-time product.


Saturday, February 13, 2010

Is the Microsoft ecosystem shrinking?

Is the Microsoft ecosystem shrinking? By 'ecosystem', I mean the set of companies developing applications for Microsoft Windows, and specifically those developing applications exclusive to Microsoft Windows: apps that say "requires MS Windows" and won't run on a Mac or under Linux. (I'm ignoring emulators like WINE here.)

It seems that the number of companies developing desktop software (including Windows software) is rather small. I ran a quick, informal, and quite unscientific survey of app vendors. There were some, but not many. (My survey included sites such as NewEgg and Programmer's Paradise.)

Perhaps I am comparing the current ecosystem to the early PC market. Shortly after IBM introduced the PC, lots of companies jumped into the market. There was a "Cambrian Explosion" of software, from accounting packages to compilers to word processors. Today, the software genres seem limited. Excluding software from Microsoft, I found tax software (Intuit), accounting software (Intuit, Peachtree/Sage), CAD (Autodesk), PDF utilities (Adobe, Nuance), reports (Crystal Reports, ActiveReports), security/malware software (AVG, Symantec, bitdefender), and others, but nothing like a "Cambrian Explosion".

Perhaps the industry (ecosystem) has fragmented. In the early PC days, PCs did everything, from development to business applications to games.

If the market has fragmented, where has it gone? Here are a few ideas:

Games: Instead of using plain desktop PCs for games, there are capable gaming systems, such as the Xbox and the Wii. Games exclusive to the Xbox are part of the Microsoft ecosystem, but only the exclusive ones. Games for multiple platforms count less, and games exclusively for other (non-Microsoft) consoles are not part of it at all.

The iPhone: A lot of fun applications (games and toys) have moved to the iPhone, or live only on the iPhone. The "glass of beer" application has no purpose on a PC, but it is a lot of fun on an iPhone.

Dual-mode apps: Instead of a PC-only application, developers are creating applications for both PC and Mac. (And sometimes for PC, Mac, and Linux.)

The web: Instead of selling shiny discs in a box, applications can run on the web. In this situation, the question becomes "how many apps run only on Internet Explorer?". In the early web days, lots of web sites worked only on Internet Explorer. Today, limiting your web site to a single browser is passé. Web sites work with IE, Firefox, Safari, Chrome, and Opera. (And other browsers.) A multi-browser web app is a "lose" for Microsoft.

Changes in marketing: Perhaps my study is flawed. Perhaps marketing methods have changed. Instead of going through central distributors, software manufacturers may be distributing their software directly, using the web to reach customers.

I'm sure that all of the above have happened. But I think that they do not account for all of the shrinkage.

Here's my claim, based on nothing more than my "gut feel": The Microsoft ecosystem was most vibrant immediately after the introduction of the IBM PC XT, and has been undergoing consolidation and shrinkage ever since. (We may have considered it an "IBM ecosystem", but it was really Microsoft's.) It continues to shrink today. The number of companies producing applications for Windows is decreasing, either through attrition or acquisition. The reduction is not caused by the current economic recession; it has been occurring for the past twenty-five years.

In some ways, this has been good for Microsoft: They have acquired customers. In other ways, this is bad for Microsoft: their market/ecosystem is dying. If my theory (or gut feel, as "theory" may be too sophisticated a word) is correct, then Microsoft has been achieving short-term gains at the expense of the larger market. Microsoft will continue to gain customers, and business, and market share, as other companies leave the market. It's similar to being queen at the high school prom while everyone else leaves: yes, you are the fairest of them all -- at least among those who remain in the gymnasium. But most of your classmates have gone outside to party, and aren't willing to put up with your snobbery.


Tuesday, February 9, 2010

Error messages as metric

Here's an idea for evaluating an organization: measure their error messages.

I picked the word "measure" in that sentence to let you define the appropriate metric. It could be a raw count or the complexity of messages. Or the absence of them. (An organization with lots of error messages in its code says one thing. An organization with little or no messages says something else.)

This task is perhaps more thought experiment than feasible exercise. I suspect that most organizations are ill-prepared to simply hand over their error messages, even if they so desired. Error messages get tucked away in the darndest places, and any large system (comprising multiple programs) will have messages in different forms. Messages can be hard-coded, stored in resource files, read from external text files, and generated on-the-fly.

Yet an evaluation of error messages may be of value. Error messages are presented to the user, and are a form of communication. I suspect that they are less regulated by the marketing arms of organizations than other forms of communication, such as web pages and e-mail updates. They are sent to the user only when something goes wrong (usually the fault of the user, but not always). They are not in the forefront of marketing.

Here are the aspects that I would look at:

- Are the messages accurate? Do they present the correct information for the situation?
- Are they spelled correctly? Do they have correct grammar?
- Are they specific? Do they present details, or do they present general text such as "Required field missing"?
- Do they recommend an action? Or do they assume that the user will know what to do?
- Are there lots of them? Too many, perhaps?
- Are there too few?
- Are there messages for situations that can never occur (much like "dead source code")?
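A rough starting point for the "raw count" and "specific vs. general" questions above: a short Python sketch. The error-reporting call name, the regex, and the list of vague phrases are all invented for illustration -- real systems scatter messages across resource files and generated text, so a survey like this only catches the hard-coded subset.

```python
import re

# Assumption: errors are reported via a call like show_error("...").
# A real survey would need one pattern per reporting mechanism.
ERROR_CALL = re.compile(r'show_error\(\s*"([^"]*)"')

# A (hypothetical) blacklist of messages too general to help the user.
VAGUE_PHRASES = ("error", "unknown error", "required field missing")

def survey_messages(source: str) -> dict:
    """Count hard-coded error messages and flag the vague ones."""
    messages = ERROR_CALL.findall(source)
    return {
        "count": len(messages),
        "vague": [m for m in messages if m.lower() in VAGUE_PHRASES],
    }

sample = '''
show_error("Required field missing")
show_error("Cannot open settings file: access denied")
'''
```

Even a crude count like this gives you something to compare across releases, or across teams.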

Error messages may tell us a lot about the organization's view of its relationship to its users. The home page will have pleasant (or impressive) descriptions and pictures of smiling people. But anyone can have a welcoming or impressive home page.

Show me the messages!


Friday, February 5, 2010

Convergence in the cloud

The old convergence of PCs and TVs is underway. While WebTV was not the next big thing, YouTube and NetFlix have shown that the two technologies can work together. From here on out, the TV/PC convergence is a "done deal". On to the next convergence!

I think the next convergence will be in the computing realm.

We have multiple models of processing: stand-alone applications, client/server, web apps, smartphone apps, and (coming soon to a theater near you) cloud apps. These "platforms" are a varied lot, with different UI capabilities and different strengths.

The converged system will blend the capabilities of those different platforms and allow apps to move (or be exposed) at different levels. Some apps will work on a few levels, others will move to all levels.

This idea is not new. Once accounting systems were established on mainframes, other levels of computing built bridges and inroads to extract data. Some of the solutions were hacked together from extract tapes, 3740 floppy discs, and report writer output, but they were useful. The advantages of sharing data, sharing information, are too great to leave applications at any single level.

I see the converged platform being cloud-based, with virtual processors and languages that use virtual machines, and interfaces that vary as they move from level to level. The "local PC" will be a cloud host, running a scaled-down version of the application. A plain application will run in the cloud -- perhaps your private cloud, but a cloud. 

Your smartphone will run its own cloud (or perhaps talk to a cloud) for its processing.

The notion of a plain executable will go away. There will be a generic processor, one that is present everywhere, delivered through processor virtualization.

The dominant languages will be those that can live in the cloud. LISP, Ruby, and possibly C# will be the popular language choices. Elder languages such as COBOL and FORTRAN will be supported in emulators, interpreters, and translators. (Perhaps a just-in-time translator from COBOL to Java, and then an emulator for Java on the virtual processor.)

User interfaces will move further away from core processing. An application will talk to an interface or multiple interfaces. The interface will handle the specific device; your e-mail program (if we still use e-mail) will run on your cell phone, your tablet, your desktop, and your mainframe, using a "virtual interface" to present information.

Applications will be able to float and move from level to level. You can start a program on your desktop PC, transfer it to your cell phone as you commute to the office, and then use your tablet PC in a park at lunch.

We won't care about the processor (sorry, Intel!) or the supporting operating system (sorry, Microsoft!).

We *will* care about the brand and type of cloud. I expect that there will be multiple cloud vendors, with not necessarily compatible offerings. Microsoft is working on "Azure", Google has its "App Engine", and Amazon has its offerings. Look for Oracle to announce something soon. (And they had better be working on it, or they will be reduced to a minor player.) Also look for clouds from overseas. I expect China to create one, and another from a European consortium. India and Brazil may create clouds of their own.

We can look to clouds of different sizes. Clouds will be available for the general public, national governments (and state and local governments), corporations, and individuals. You can run a cloud on your PC and talk to your account in the public cloud at the same time.

Your choice of cloud will define your business capabilities, just as your choice of operating system today dictates your capabilities.

Clouds will be hacked and attacked, just as computers are today. But in the future, once we find an attacker, we will be able to say:

"Hey, you! Get off of my cloud!"