
Monday, January 16, 2023

The end of more

From the very beginning, PC users wanted more. More pixels and more colors on the screen. More memory. Faster processors. More floppy disks. More data on floppy disks. (Later, it would be more data on hard disks.)

When IBM announced the PC/XT, we all longed for the space (and convenience) of its built-in hard drive. When IBM announced the PC/AT we envied those with the more powerful 80286 processor (faster! more memory! protected mode!). When IBM announced the EGA (Enhanced Graphics Adapter) we all longed for the higher-resolution graphics. With the PS/2, we wanted the reliability of 3.5" floppy disks and the millions of colors on a VGA display.

The desire for more didn't stop in the 1980s. We wanted the 80386 processor, and networks, and more memory, and faster printers, and multitasking. More programs! More data!

But maybe -- just maybe -- we have reached a point that we don't need (or want) more.

To quote a recent article in Macworld:

"Ever since Apple announced its Apple silicon chip transition, the Mac Pro is the one Mac that everyone has anxiously been awaiting. Not because we’re all going to buy one–most of the people reading this (not to mention me, my editor, and other co-workers) won’t even consider the Mac Pro. It’s a pricey machine and the work that we do is handled just as well by any Mac in the current lineup".

Here's the part I find interesting:

"the work that we do is handled just as well by any Mac in the current lineup"

Let that sink in a minute.

The work done in the offices of Macworld (which I assume is typical office work) can be handled by any of Apple's Mac computers. That means that the lowliest Apple computer can handle the work. Therefore, Macworld, being a commercial enterprise and wanting to reduce expenses, should be equipping its staff with the low-end MacBook Air or Mac mini. To do otherwise would be wasteful.

It is not just the Apple computers that have outpaced computing needs. Low-end Windows PCs also handle most office work. (I myself am typing this on a Dell desktop that was made in 2007.)

The move from 32-bit processing to 64-bit processing had a negligible effect on many computing tasks. Microsoft Word, for example, ran just as well in 32-bit Windows as it did in 64-bit Windows. The move to 64-bit processing did not improve word processing.

There are some who do still want more. People who play games want the best performance from not only video cards but also central processors and memory. Folks who edit video want performance and high-resolution displays.

But the folks who need, really need, high performance are a small part of the PC landscape. Many of the demanding tasks in computation can be handled better by cloud-based systems. It is only a few tasks that require local, high-performance processing.

The majority of PC users can get by with a low-end PC. The majority of PC users are content. One may look at a new PC with more memory or more pixels, but the envy has dissipated. We have enough colors, enough pixels, and enough storage.

If we have reached "peak more" in PCs, what does that mean for the future of PCs?

An obvious change is that people will buy PCs less frequently. With no urge to upgrade, people will keep their existing equipment longer. Corporations that buy PCs for employees may continue on a "replace every three years" schedule, but that was driven by depreciation rules and tax laws. Small mom-and-pop businesses will probably keep computers until a replacement is necessary (I suspect that they have been doing that all along). Some larger corporations may choose to defer PC replacements, noting that cash outlays for new equipment are still cash outlays, and should be minimized.

PC manufacturers will probably focus on other aspects of their wares. PC makers will strive for better battery life, durability, or ergonomic design. They may even offer Linux as an alternative to Windows.

It may be that our ideas about computing are changing. It may be that instead of local PCs that do everything, we are now looking at cloud computing (and perhaps older web applications) and seeing a larger expanse of computing. Maybe, instead of wanting faster PCs, we will shift our desires to faster cloud-based systems.

If that is true, then the emphasis will be on features of cloud platforms. They won't compete on pixels or colors, but they may compete on virtual processors, administration services, availability, and supported languages and databases. Maybe we won't be envious of new video cards and local memory, but envious instead of uptime and automated replication. 

Wednesday, October 19, 2022

Businesses discover that cloud computing isn't magic

Businesses are now (just now, after more than a decade of cloud computing) discovering that cloud computing is not magic. That it doesn't make their computing cheap. That it doesn't solve their problems.

Some folks have already pointed this out. Looking back, it seems obvious: If all you have done is move your web-based system into cloud-based servers, why would things change? But they miss an important point.

Cloud computing is a form of computing, different from web-based applications and different from desktop applications. (And different from mainframe batch processing of transactions.)

A cloud-based system, to be efficient, must be designed for cloud computing. This means small independent services reading and writing to databases or other services, and everything coordinated through message queues. (If you know what those terms mean, then you understand cloud computing.)
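To make those terms concrete, here is a minimal sketch in Python: a hypothetical address-validation service reading requests from one queue and writing results to another. The service and queue names are invented for illustration; a real system would use a managed message-queue product, not an in-process queue.

```python
import queue

# In-process queues stand in for cloud message queues (illustrative only).
requests = queue.Queue()
responses = queue.Queue()

def normalize_address(raw: str) -> str:
    """A trivial 'service': collapse whitespace and upper-case the address."""
    return " ".join(raw.split()).upper()

def worker() -> None:
    """One service instance: read a request, process it, write the result.
    Any number of identical workers could drain the same request queue."""
    while not requests.empty():
        raw = requests.get()
        responses.put(normalize_address(raw))

requests.put("123  main st,  anytown")
worker()
print(responses.get())  # 123 MAIN ST, ANYTOWN
```

The callers never see which worker handled the request; they only see the response queue, which is the independence the paragraph above describes.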

Moving a web-based application into the cloud, unchanged, makes little sense. Or as much sense as moving a desktop-based application (remember those?) such as Word or Excel into the web, unchanged.

So why use cloud computing?

Cloud computing's strengths are redundancy, reliability, and variable power. Redundancy in that a properly designed cloud computing system consists of multiple services, each of which can be hosted on multiple (as in more than one per service) servers. If your system contains a service to perform address validations, that service could be running on one, two or seven different servers. Each instance does the same thing: examine a mailing address and determine the canonical form for that address.

The other components in your system, when they need to validate or normalize an address, issue a request to the validation service. They don't care which server handles the request.

Cloud systems are reliable because of this redundancy. A traditional web-based service would have one address validation server. If that server is unavailable, the service is unavailable for the entire system. Such a failure can lead to the entire system being unavailable.

Cloud systems have variable power. They can create additional instances of any of the services (including our example address validation service) to handle a heavy workload. Traditional web services, with only one server, can see slow response times when that server is overwhelmed with requests. (Sometimes a traditional web system would have more than one server for a service, but the number of servers is fixed and adding a server is a lengthy process. The result is the same: the allocated server or servers are overwhelmed and response time increases.)

Cloud services eliminate this problem by instantiating servers (and their services) as needed. When the address validation server is overwhelmed, the cloud management software detects it and "spins up" more instances. Good cloud management software works in the other direction too, shutting down idle instances.
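The "spin up" decision is, at heart, a simple calculation. Here is a toy sketch in Python; real cloud autoscalers use richer signals (CPU load, response latency) and the numbers here are invented.

```python
def desired_instances(queued_requests: int, per_instance_capacity: int,
                      minimum: int = 1, maximum: int = 10) -> int:
    """Toy scaling rule: enough instances to cover the backlog,
    clamped between a floor (so the service stays available)
    and a ceiling (so costs stay bounded)."""
    needed = -(-queued_requests // per_instance_capacity)  # ceiling division
    return max(minimum, min(maximum, needed))

print(desired_instances(0, 50))     # 1  (idle: shut down to the floor)
print(desired_instances(75, 50))    # 2  (backlog grows: spin up)
print(desired_instances(5000, 50))  # 10 (overwhelmed: capped at the ceiling)
```

Scaling down when idle is the same rule run in reverse, which is why the management software can work "in the other direction too."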

Those are the advantages of cloud systems. But none of them are free; they all require that you build your system for the cloud. That takes effort.


Thursday, August 4, 2022

Eggs and baskets

PCWorld, a venerable trade publication-now-website of the IT realm, recently lost its YouTube video channel. The channel was disabled (or suspended? or deleted?) and no content was available. For more than eight days.

From what I can discern, IDG's YouTube account was controlled by an IDG e-mail address. Everything worked until IDG was purchased by Foundry. Foundry changed all of IDG's e-mail addresses to Foundry addresses but didn't update the account at YouTube, and YouTube, seeing no activity on the IDG e-mail address (or perhaps getting bounce messages), cancelled the account.

Thus, the PCWorld video channel was unavailable for over a week.

Why didn't PCWorld restore its channel? Or make its content available on another service? 

My guess is that IDG stored all of their video content on YouTube. That is, the only copy was on YouTube. IDG probably relied on YouTube to keep backup copies and multiple servers for disaster recovery. In short, IDG followed the pattern for cloud-based computing.

The one disaster for which IDG failed to prepare was the account cancellation.

I must say here that a lot of this is speculation on my part. I don't work for PCWorld, or at IDG (um, Foundry) or at YouTube. I don't know that the sequence I have outlined is what actually happened.

My point is not to identify exactly what happened.

My point is this: cloud solutions, like any other type of technology, can be fragile. They can be fragile in ways that we do not expect.

The past half-century of computing has shown us that computers fail. They fail in many ways, from physical problems to programming errors to configuration mistakes. Those failures often cause problems with data, sometimes deleting all of it, sometimes deleting part of it, and sometimes modifying (incorrectly) part of the data. We have a lot of experience with failures, and we have built a set of good practices to recover from those failures.

Cloud-based solutions do not eliminate the need for those precautions. While cloud-based solutions offer protection against some problems, they introduce new problems.

Such as account cancellation.

Businesses (and people, often), when entering into agreements, look for some measure of security. Businesses want to know that the companies they pick to be suppliers will be around for some time. They avoid "fly by night" operations.

A risk in cloud-based solutions is account closure. The risk is not that Google (or Oracle) will go out of business, leaving you stranded. The risk is that the cloud supplier will simply stop doing business with you.

I have seen multiple stories about people or businesses who have had their accounts closed, usually for violating the terms of service. When said people or businesses reach out to the cloud provider (a difficult task in itself, as they don't provide phone support) the cloud provider refuses to discuss the issue, and refuses to provide any details about the violation. From the customer's perspective, the results are very much as if the cloud provider went out of business. But this behavior cannot be predicted from the normal signal of "a reliable business that will be around for a while".

It may take some time, and a few more stories about sudden, unexplained and uncorrectable account closures, but eventually people (and businesses) will recognize the risk and start taking preventative actions. Actions such as keeping local copies of data, backups of that data (not local and not on the main cloud provider), and a second provider for fail-over.

In other words:

Don't put all of your eggs in one cloud basket.

Wednesday, February 3, 2021

The return of timesharing

Timesharing (the style of computing from the 1970s) is making a comeback. Or rather, some of the concepts of timesharing are making a comeback. But first, what was timesharing, and how was it different from the computing of its time?

Mainframe computers in the 1960s were not the sophisticated devices of today, but simpler computers with power roughly equivalent to that of an original IBM PC (5 MHz processor, 128KB RAM). They were, of course, much larger and much more expensive. Early mainframes ran one program at a time (much like PC-DOS). When one program finished, the next could be started. They had one "user": the system operator.

Timesharing was a big change in computing. Computers had become powerful enough to support multiple interactive users at the same time. It worked because interactive users spent most of their time thinking and not typing, so a user's "job" was mostly waiting for input. A timesharing system held multiple jobs in memory and cycled among them. Timesharing allowed remote access ("remote" in this case meaning "outside of the computer room") by terminals, which meant that individuals in different parts of an organization could "use the computer".
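The cycling among jobs can be sketched as a simple round-robin loop. This is a toy model, not any particular operating system's scheduler; job names and work units are invented.

```python
from collections import deque

def run_timeshared(jobs: dict, quantum: int = 1) -> list:
    """Round-robin sketch: each job is a count of remaining work units.
    The system gives each job one quantum in turn until all are done.
    Returns the order in which jobs received time slices."""
    ready = deque(jobs.items())
    trace = []
    while ready:
        name, remaining = ready.popleft()
        trace.append(name)              # this job gets the CPU for one quantum
        remaining -= quantum
        if remaining > 0:
            ready.append((name, remaining))  # not finished: back of the line
    return trace

print(run_timeshared({"alice": 2, "bob": 1}))  # ['alice', 'bob', 'alice']
```

Because most interactive jobs are waiting for input most of the time, each user experiences the machine as if it were theirs alone.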

Timesharing raised the importance of time. Specifically, timesharing raised the importance of the time a program needed to run (the "CPU time") and the time a user was connected. The increase in computing power allowed the operating system to record these values for each session. Tracking those values was important, because it let the organization charge users and departments for the use of the computer. The computer was no longer a monolithic entity with a single (large) price tag, but a resource that could be expensed to different parts of the organization.

Now let's consider cloud computing.

It turns out that the cloud is not infinite. Nor is it free. Cloud computing platforms record charges for users (either individuals or organizations). Platforms charge for computing time, for data storage, and for many other services. Not every platform charges for the same things, with some offering a few services for free. 

The bottom line is the same: with cloud computing, an organization has the ability to "charge back" expenses to individual departments, something that was not so easy in the PC era or the web services era.
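A charge-back is, at bottom, simple arithmetic over metered usage. Here is a hedged sketch in Python; the departments, usage figures, and rates are all invented for illustration, and real cloud bills have many more line items.

```python
# Hypothetical metered-usage records and unit rates (dollars per unit).
usage = [
    {"dept": "sales",    "cpu_hours": 120.0, "gb_stored": 500.0},
    {"dept": "sales",    "cpu_hours": 30.0,  "gb_stored": 100.0},
    {"dept": "research", "cpu_hours": 800.0, "gb_stored": 50.0},
]
rates = {"cpu_hours": 0.05, "gb_stored": 0.02}

def charge_back(records: list, rates: dict) -> dict:
    """Total metered usage per department -- the cloud-era analogue of a
    timesharing system's per-user CPU-time and connect-time charges."""
    totals = {}
    for record in records:
        cost = sum(record[metric] * rate for metric, rate in rates.items())
        totals[record["dept"]] = totals.get(record["dept"], 0.0) + cost
    return totals

print(charge_back(usage, rates))
```

The operating system once recorded CPU time per session; the cloud platform records per-service usage per account. The bookkeeping is the same shape.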

Or, to put it another way, we are undergoing a change in billing (and information about expenses) that is not new, but has not been seen in half a century. How did the introduction of timesharing (and its expense information) affect organizations? Will we see the same effects again?

I think we will.

Timesharing made interactive computing possible, and it made the expense of that computing visible to users. It let users decide how much computing they wanted to use, and gave them discretion to use more or fewer computing resources.

Cloud computing provides similar information to users. (Or at least the organizations paying for the cloud services; I expect those organizations will "charge back" those expenses to users.) Users will be able to see those charges and decide how much computing resources they want to use.

As organizations move their systems from web to cloud (and from desktop to cloud), expect to see expense information allocated to the users of those systems. Internal users, and also (possibly) external users (partners and customers).

Timesharing made expense information available at a granular level. Cloud computing does the same.


Tuesday, December 29, 2020

For readability, BASIC was the best

One language stands alone in terms of readability.

That language is BASIC.

BASIC -- the old, pre-Visual Basic of the 1980s -- has a unique characteristic: a single line can be read and understood.

One may think that a line from any programming language can be read and understood. After all, we read and understand programs all the time, don't we? That's true, but we read entire programs, or large sections of programs. Those large fragments of programs contain information that defines the classes, functions, and variables in the programs, and we use that information to understand the code. But if we strip away that extra information, if we limit ourselves to a single line, then we cannot read and completely understand the code.

Let's look at an example line of code:

a = b * 5

Can you tell what this code does? For certain? I cannot.

A naive assessment is that the code retrieves the value of variable 'b', multiplies it by 5, and stores the result in the variable 'a'. It is easy to assume that the variables 'a' and 'b' are numeric. Yet we don't know that -- we only assume it.

If I tell you that the code is Python code, and that 'b' refers to a string object, then our understanding of this code changes. The code still performs a 'multiply' operation, but 'multiply' for a string object is very different from 'multiply' for a numeric object.
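We can verify this in Python, where 'multiply' on a string means repetition, not arithmetic:

```python
b = "ab"      # 'b' refers to a string object
a = b * 5     # the same line of code, a very different operation
print(a)      # ababababab
```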

Instead, if I tell you that the code is C++, then we must identify the type for 'b' (which is not provided in the single line of code) and we must know if the class for 'b' defines the '*' operator. That operation could do anything, from casting b's contents to a number and multiplying by 5 to sending some text to 'cout'.

We like to think we understand the code, but instead we are constantly making assumptions about the code and building an interpretation that is consistent with those assumptions.

But the language BASIC is different.

In BASIC, the line

a = b * 5

or, if you prefer

100 LET A = B * 5

is completely defined. We know that the variable B contains a numeric value. (The syntax and grammar rules for BASIC require that a variable with no trailing sigil is a numeric variable.) We also know that the value of B is defined. (Variables are always defined. If not initialized in our code, they have the value 0.) We know the behavior of the '*' operator -- it cannot be overridden or changed. We know that the variable 'A' is numeric, and that it can receive the results of the multiply operation.

We know these things. We do not need other parts of the program to identify the type for a variable, or a possible redefinition of an operator.

This property of BASIC means that BASIC is readable in a way that other programming languages are not. Other programming languages require knowledge of declarations. All of the C-based languages (C, Objective-C, C++, C#, and even Java) require this. Perl, Python, and Ruby don't have typed variables; they have names that can refer to any type of object. The only other programming language that comes close is FORTRAN II, which had similar rules for the names of variables and functions: a variable whose name began with I through N was an integer, and all others were reals.

BASIC's readability is possible because it requires the data type of a variable to be encoded in the name of the variable. This is completely at odds with every modern language, which allows variables to be named with no special markings for type. BASIC used static typing; not only static typing, but overt typing -- the type was expressed in the name.

Static, overt typing was possible in BASIC because BASIC used a limited number of types (numeric, integer, single-precision floating point, double-precision floating point, and string) each of which could be represented by a single punctuation character. Each variable name had a sigil for the type. (Or no sigil, in which case the type was numeric.)
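The sigil rule is simple enough to state as a lookup. Here is an illustrative sketch in Python; classic BASIC dialects varied in exactly which sigils they supported.

```python
# Classic BASIC sigils and the types they denoted (dialects varied).
SIGILS = {"$": "string", "%": "integer", "!": "single", "#": "double"}

def basic_type(name: str) -> str:
    """Infer a BASIC variable's type from its name alone.
    The point of overt typing: no other context is needed."""
    return SIGILS.get(name[-1], "numeric")

print(basic_type("A$"))  # string
print(basic_type("N%"))  # integer
print(basic_type("B"))   # numeric (no sigil)
```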

Those sigils were so useful that programmers who switched to Visual Basic kept the idea, through programming style conventions that asked for prefixes on each variable name. That effort became unwieldy, as there were many types (Visual Basic used many libraries of Windows functions and classes), and there was no all-encompassing standard and no way to enforce one.

Overt typing is possible with a language that has a limited number of types. It won't work (or it hasn't worked) for object-oriented languages. Those languages are designed for large systems with large code bases. They have built-in types and allow for user-defined types, with no support to indicate the type in the name of the variable. And as we saw with Visual Basic, expressing the different types is complicated.

But that doesn't mean the idea is useless. Overt typing worked for BASIC, a language that was designed for small programs. (BASIC was meant to be a language for teaching the skills of programming. The name was an acronym: Beginner's All-Purpose Symbolic Instruction Code.) Overt typing might be helpful for a small language, one designed for small programs.

It strikes me that cloud computing is the place for small languages. Cloud computing uses multiple processors to perform calculations, and splits systems into many small code bases. A well-designed cloud application consists of lots of small programs. Those small programs don't have to be built with object-oriented programming languages. I expect to see new programming languages for cloud-based computing, programming languages that are designed for small programs.

I'm not recommending that we switch from our current set of programming languages to BASIC. But I do think that the readability of BASIC deserves some attention.

Because programs, large or small, are easier to understand when they are readable.

Wednesday, August 19, 2020

Apple and cloud computing

Apple's tussle with Epic (the creators of Fortnite) shows a deeper problem for Apple. That problem is cloud computing.

Apple has avoided cloud computing, at least from its customers' perspective. Apple is the last company to use the "run locally" model for its applications. Apps for iPhones and iPads run on those devices; applications for Macbooks run on them. (Apple does use servers, and possibly cloud computing, for iCloud storage and Siri, but those are accessories to macOS, not a front-and-center service.)

In contrast, Google's cloud-based Documents and Sheets apps let one build documents and spreadsheets in the cloud, with data stored on Google's servers and available from any device. I can create a document on a Chromebook, then open it on my Android phone, and later update it on a desktop PC with a Chrome browser. This works because the data is stored on Google's servers, and each device pulls the latest version when it is needed.

Microsoft is moving in this direction with its online version of Office tools. Even Oracle is moving to cloud-based computing.

Apple is staying with the old "local data, local execution" model, in which computing and data are on the local device. But Apple does let one move from one device to another (such as from an iPad to a Macbook) by synchronizing the data onto Apple's servers.

The difference is the location of processing. In Google's model (and in Microsoft's new model), processing occurs on the server. Apple keeps processing on the local device.

For simple tasks such as word processing and spreadsheets, the difference is negligible. One might even claim that local processing is better as it offers more options for documents and spreadsheets. (More fonts, more chart options, more functions in spreadsheets.) The counter-argument is that the simpler cloud-based apps are better because they are simpler and easier to use.

Regardless of your preference for simple or complex word processors, cloud computing is also used by games, and games are a significant market. More and more games are shifting from a model of "local processing with some communication to other users" to a model of "some local processing plus communication with servers for more processing". In other words, games are using a hybrid model of local processing and cloud processing.

This shift is a problem for Apple, because it breaks from the "all processing is local" model that Apple uses.

What is Apple to do? They have two choices. They can stay with the "all processing is local" model (and ignore cloud computing) or they can adopt cloud computing (most likely as a hybrid form, with some processing in the cloud and some processing remaining on the local computer).

Ignoring cloud computing seems risky. Everyone is moving to cloud computing, and Apple would be left out of a lot of innovations (and markets). So let's assume that Apple adopts some form of cloud computing, and enables developers of applications for Apple devices to run some functions in the cloud.

How will Apple host their cloud platform? Here again there are two choices. Apple can build their own cloud platform, or they can use someone else's.

Building their own cloud infrastructure is not easy. Apple comes late to cloud computing, and will have a lot of work to build their infrastructure. Apple probably has the time to do it, and most definitely has the cash to build data centers and hire the engineers and system administrators.

But the alternative -- using someone else's cloud -- is also not easy. Apple is not friends with the major cloud providers (Amazon.com, Microsoft, Google) but it may form alliances with one of the smaller providers (Oracle, Dell, IBM) or it might purchase a small cloud provider (Linode, Digital Ocean, OVH). Apple has the cash to make such a purchase. While possible, a purchase may not be what Apple wants.

My guess is that Apple will build their own cloud platform, and will make it different from the existing cloud platforms. Apple will need some way to distinguish their cloud platform and make it appealing to app developers.

Perhaps Apple will focus on security, and build a self-contained cloud platform, one that offers services to Apple devices but not others, and one that is isolated from other cloud platforms and services. An "Apple only" cloud platform with no connection to the outside world would allow Apple to ensure the security of customer data -- with no connection to the outside world, data would have no way to escape or be extracted.

Apple may go so far as to mandate the use of their cloud platform, prohibiting apps from communicating with other cloud platforms. iPhone apps and Macbook applications could use Apple's cloud, but not AWS or Azure. This would be a significant restriction, but would guarantee revenue to Apple.

(Assuming that Apple charges for cloud services, which seems a reasonable assumption. Exactly how to charge for cloud services is a challenge, and may be the only thing preventing Apple from offering cloud services today.)

The outcome of the dispute between Apple and Epic may foreshadow such a strategy. If Apple prevails, they may go on to create a locked cloud platform, one that does not allow competition but does ensure security of data. (Or perhaps a milder strategy of offering the Apple cloud, but only on the condition that the Apple cloud is the only cloud used by an app. Classic apps that use other clouds may continue to run, but they could not also use the Apple cloud. Moving any services to the Apple cloud would mean moving all services to the Apple cloud.)

I don't know how things will play out. I don't know what discussions are being held in Apple's headquarters. These ideas seem reasonable to me, so they may come to pass. Of course, Apple has surprised us in the past, and they may surprise us again!

Wednesday, February 12, 2020

Advances in platforms and in programming languages

The history of computing can be described as a series of developments, alternating between computing platforms and programming languages. The predominant pattern is one in which hardware is advanced, and then programming languages. Occasionally, hardware and programming languages advance together, but that is less common. (Hardware and system software -- not programming languages -- do advance together.)

The early mainframe computers were single-purpose devices. In the early 21st century, we think of computers as general-purpose devices, handling financial transactions, personal communication, navigation, and games, because our computing devices perform all of those tasks. But in the early days of electronic computing, devices were not so flexible. Mainframe computers were designed for a single purpose; either commercial (financial) processing, or scientific computation. The distinction was visible through all aspects of the computer system, from the processor and representations for numeric values to input-output devices and the characters available.

Once we had those computers, for commercial and for scientific computation, we built languages. COBOL for commercial processing; FORTRAN for scientific processing.

And thus began the cycle of alternating developments: computing platforms and programming languages. The programming languages follow the platforms.

The next advance in hardware was the general-purpose mainframe. The IBM System/360 was designed for both types of computing, and it used COBOL and FORTRAN. But we also continued the cycle of "platform and then language" with the invention of a general-purpose programming language: PL/1.

PL/1 was the intended successor to COBOL and to FORTRAN. It improved the syntax of both languages and was supposed to replace them. It did not. But it was the language we invented after general-purpose hardware, and it fits in the general pattern of advances in platforms alternating with advances in languages.

The next advance was timesharing. This advance in hardware and in system software let people use computers interactively. It was a big change from the older style of scheduled jobs that ran on batches of data.

The language we invented for this platform? It was BASIC. BASIC was designed for interactive use, and also designed to avoid requests of system operators to load disks or tapes. A BASIC program could contain its code and its data, all in one. Such a thing was not possible in earlier languages.

The next advance was minicomputers. The minicomputer revolution (DEC's PDP-8, PDP-11, and other systems from other vendors) used BASIC (adopted from timesharing) and FORTRAN. Once again, a new platform initially used the languages from the previous platform.

We also invented languages for minicomputers. DEC invented FOCAL (a lightweight FORTRAN) and DIBOL (a lightweight COBOL). Neither replaced its corresponding "heavyweight" language, but invent them we did.

The PC revolution followed minicomputers. PCs were small computers that could be purchased and used by individuals. Initially, PCs used BASIC. It was a good choice: small enough to fit into the small computers, and simple enough that individuals could quickly understand it.

The PC revolution invented its own languages: CBASIC (a compiled form of BASIC), dBase (later named "xbase"), and most importantly, spreadsheets. While not a programming language, a spreadsheet is a form of programming. It organizes data and specifies calculations. I count it as a programming platform.

The next computing platform was GUI programming, made possible with both the Apple Macintosh and Microsoft Windows. These "operating environments" (as they were called) changed programming from text-oriented to graphics, and required more powerful hardware -- and software. But the Macintosh first used Pascal, and Windows used C, two languages that were already available.

Later, Microsoft invented Visual Basic and provided Visual C++ (a concoction of C++ and macros to handle the needs of GUI programming), which became the dominant languages of Windows. Apple switched from Pascal to Objective-C, which it enhanced for programming the Mac.

The web was another computing advance, bringing two distinct platforms: the server and the browser. At first, servers used Perl and C (or possibly C++); browsers were without a language and had to use plug-ins such as Flash. We quickly invented Java and (somewhat less quickly) adopted it for servers. We also invented JavaScript, and today all browsers provide JavaScript for web pages.

Mobile computing (phones and tablets) started with Objective-C (Apple) and Java (Android), two languages that were convenient for those devices. Apple later invented Swift, to fix problems with the syntax of Objective-C and to provide a better experience for its users. Google invented Go and made it available for Android development, but it has seen limited adoption.

Looking back, we can see a clear pattern. A new computing platform emerges. At first, it uses existing languages. Shortly after the arrival of the platform, we invent new languages for that platform. Sometimes these languages are adopted, sometimes not. Sometimes a language gains popularity much later than expected, as in the case of BASIC, invented for timesharing but used for minicomputers and PCs.

It is a consistent pattern.

Consistent, that is, until we get to cloud computing.

Cloud computing is a new platform, much like the web was a new platform, and PCs were a new platform, and general-purpose mainframes were a new platform. And while each of those platforms saw the development of new languages to take advantage of new features, the cloud computing platform has seen... nothing.

Well, "nothing" is a bit harsh and not quite true.

True to the pattern, cloud computing uses existing languages. Cloud applications can be built in Java, JavaScript, Python, C#, C++, and probably Fortran and COBOL. (And there are probably cloud applications that use these languages.)

And we have invented Node.js, a JavaScript runtime that is useful for building cloud applications.

But there is no native language for cloud computing. No language that has been designed specifically for cloud computing. (No language of which I am aware. Perhaps there is, lurking in the dark corners of the internet that I have yet to visit.)

Why no language for the cloud platform? I can think of a few reasons:

First, it may be that our current languages are suitable for the development of cloud applications. Languages such as Java and C# may have the overhead of object-oriented design, but that overhead is minimal with careful design. Languages such as Python and JavaScript are interpreted, but that may not be a problem with the scale of cloud processing. Maybe the pressure to design a new language is low.

Second, it may be that developers, managers, and everyone else connected with cloud application projects are too busy learning the platform. Cloud platforms (AWS, Azure, GCP, etc.) are complex beasts, and there is a lot to learn. It is possible that we are still learning about cloud platforms and are not ready to develop a cloud-specific language.

Third, it may be too complex to develop a cloud-specific programming language. The complexity may reside in separating cloud operations from programming, and we need to understand the cloud before we can understand its limits and the boundaries for a programming language.

I suspect that we will eventually see one or more programming languages for cloud platforms. The new languages may come from the big cloud providers (Amazon, Microsoft, Google) or smaller providers (Dell, Oracle, IBM) or possibly even someone else. Programming languages from the big providers will be applicable for their respective platforms (of course). A programming language from an independent party may work across all cloud platforms -- or may work on only one or a few.

We will have to wait this one out. But keep your eyes open. Programming languages designed for cloud applications will offer exciting advances for programming.

Thursday, January 30, 2020

The cloud revolution is different

The history of computing can be described as a series of revolutions. If we start the age of modern computing with the earliest electronic calculating machines, we have the following upheavals:

  • Standardized computers for sale (or lease)
  • General-purpose mainframes
  • Minicomputers
  • Personal computers
  • Web applications
  • Cloud applications

Each of these events was revolutionary -- each introduced a new form of computing. And all of these events (except one) saw an expansion of computing, an increase in the applications that could be performed by computers.

The first revolution (standardized computers) was in the days of the IBM 1401. Computers were large, expensive, and designed for specific purposes, but they were also consistent. One IBM 1401 was quite similar to another IBM 1401, ignoring differences in memory and tape drives. The similarity in computers made possible the idea of commonly used applications, and common programming languages such as FORTRAN and COBOL.

The second revolution (a general-purpose computer) was introduced by the IBM System/360. The System/360 was designed to run applications for different domains: scientific, commercial, and government. It built on the ideas of common applications and common programming languages.

The minicomputer revolution (minicomputers, or timesharing) expanded computing with interactive applications. Instead of batch jobs that could be run only when scheduled by operators, timesharing allowed for processing when users wanted it. In fact, timesharing expanded computing from operators to users. (Not everyone was a user, but the set of users was much larger than the set of operators.) Minicomputers were used to create the C language and write the Unix operating system.

The PC revolution brought computing to "the rest of us", or at least those who were willing to spend thousands of dollars on a small computer. Its applications were more interactive than those of timesharing, and more graphical. The "killer" app was the spreadsheet, but word processors, small databases, and project planning software were also popular, and made possible with PCs.

The web revolution introduced communication, and made applications available across a network.

Each of these changes -- revolutions, in my mind -- expanded the universe of computing. The expansions were sometimes competitive, with the "rebels" introducing new applications and the "old guard" attempting to copy the same applications onto the old platform. The expansions were sometimes divisive, with people in the "old" and "new" camps disagreeing on applications, programming languages, and techniques, and even what value the different forms of computing offered. But despite competition and disagreement, each camp had its own ground, and was relatively secure in that area.

There was no fear that minicomputers would replace mainframes. The forms of computing were too different. The efficiencies of the two forms were different. Mainframes excelled at transaction processing. Minicomputers excelled at interaction. Neither crossed into the other's territory.

When PCs arrived, there was no fear that PCs would replace mainframes. PCs would, after some time, replace stand-alone word processing systems and typewriters. But mainframes retained core business applications on big iron. (Minicomputers did die off, being caught between efficient mainframes and interactive PCs.)

When the web arrived, there was no fear that web servers would replace PCs. There was no fear that web applications would replace desktop applications. The web was a new place, with new capabilities. Instead of replacing PCs, the web expanded the capabilities of mainframe systems, providing a user interface into banking and corporate systems. PC applications such as word processing and spreadsheets remained on PCs.

Which brings us to cloud computing.

The cloud revolution is different. The approach with cloud computing, the philosophy, has been to absorb and replace existing applications. We have any number of companies ready to help "move applications to the cloud". There are any number of books, magazines, and online resources that describe tips and tricks for migrating to the cloud. The message is clear: the cloud is the place to be, convert your old applications to the cloud.

This mindset is different from the mindset of previous revolutions. The cloud revolution wants to take over all computing. The cloud revolution is predatory. It is not content with an expansion of computing; it wants to own it all.

I do not know why this revolution is different from previous changes. Why should this change, which is simply another form of computing, push people to behave differently?

At the root, it is people who are behaving differently. Cloud computing is not a sentient being; it has no feelings, no desires, and no motivations. Cloud computing does not want to take over the computing world; it is us, the people in IT, the developers and designers and managers who want cloud computing to take over the world.

I think that this desire may be driven by two factors: economics and control. The economics of cloud computing is better (cheaper) than the economics of PCs, discrete web servers, and even mainframes. But only if the application is designed for the cloud. A classic web application, "lifted and shifted" into the cloud, has the same economics as before.

The other factor is control. I think that people believe they have more control over cloud-based applications than over desktop applications or classic web applications. The first is undoubtedly true. Desktop applications, installed on users' PCs, are difficult to manage. Each PC has its own operating system, its own hardware, its own set of other applications, any of which can interfere with the application. PCs can fail, they can run out of disk space, and -- worst of all -- they can let an old version of the application continue to run. The cloud does away with all of that: control moves from the user to the cloud administrator, and support becomes much simpler.

So I can understand the desire for people to move applications to the cloud. But I think that people are missing opportunities. By focusing on moving existing applications into the cloud, we do not see the possible new applications, possible only in the cloud. Those opportunities include things such as big data and machine learning, and can include more.

Imagine the PC revolution, with small computers that fit on desktops, and applications limited to copies of existing mainframe applications. The new PCs would be running order entry systems and inventory systems and general ledger. Or at least we would be trying to get them to run those applications, and we would be ignoring the possibilities of word processing and spreadsheets.

Cloud computing is a form of computing, just as mainframes, PCs, and the web are all forms of computing. Each has its strengths (and weaknesses). Don't throw them away for efficiency, or for simpler support.

Wednesday, January 24, 2018

Cloud computing is repeating history

A note to readers: This post is a bit of a rant, driven by emotion. My 'code stat' project, hosted on Microsoft Azure's web app PaaS platform, has failed and I have yet to find a resolution.

Something has changed in Azure, and I can no longer deploy a new version to the production servers. My code works; I can test it locally. Something in the deployment sequence fails. This is a test project, using the free level of Azure, which means no monthly costs but also means no support -- other than the community help pages.

There are a few glorious advances in IT, advances which stand out above the others. They include the PC revolution (which saw individuals purchasing and using computers), the GUI (which saw people untrained in computer science using computers), and the smartphone (which saw lots more people using computers for lots more sophisticated tasks).

The PC revolution was a big change. Prior to personal computers (whether they were IBM PCs, Apple IIs, or Commodore 64s), computers were large, expensive, and complicated; they were especially difficult to administer. Mainframes and even minicomputers were large and expensive; an individual could afford one only if they were enormously wealthy and had lots of time to read manuals and try different configurations to make the thing work.

The consumer PCs changed all of that. They were expensive, but within the range of the middle class. They required little or no administration effort. (The Commodore 64 was especially easy: plug it in, attach to a television, and turn it on.)

Apple made the consumer PC easier to use with the Macintosh. The graphical user interface (lifted from Xerox PARC's Alto, and later copied by Microsoft Windows) made many operations and concepts consistent. Configuration was buried, and sometimes options were reduced to "the way Apple wants you to do it".

It strikes me that cloud computing is in a "mainframe phase". It is large and complex, and while an individual can create an account (even a free account), the complexity and time necessary to learn and use the platform are significant.

My issue with Microsoft Azure is precisely that. Something has changed and it behaves differently than it did in the past. (It's not my code, the change is in the deployment of my app.) I don't think that I have changed something in Azure's configuration -- although I could have.

The problem is that once you go beyond the 'three easy steps to deploy a web app', Azure is a vast and intimidating beast with lots of settings, each with new terminology. I could poke at various settings, but will that fix the problem or make things worse?

From my view, cloud computing is a large, complex system that requires lots of knowledge and expertise. In other words, it is much like a mainframe. (Except, of course, you don't need a large room dedicated to the equipment.)

The "starter plans" (often free) are not the equivalent of a PC. They are merely the same, enterprise-level plans with certain features turned off.

A PC is different from a mainframe reduced to tabletop size. Both have CPUs and memory and peripheral devices and operating systems, but they are two different creatures. PCs have fewer options, fewer settings, fewer things you (the user) can get wrong.

Cloud computing is still at the "mainframe level" of options and settings. It's big and complicated, and it requires a lot of expertise to keep it running.

If we repeat history, we can expect companies to offer smaller, simpler versions of cloud computing. The advantage will be an easier learning curve and less required expertise; the disadvantage will be lower functionality. (Just as minicomputers were easier and less capable than mainframes and PCs were easier and less capable than minicomputers.)

I'll go out on a limb and predict that the companies who offer simpler cloud platforms will not be the current big providers (Amazon.com, Microsoft, Google). Mainframes were challenged by minicomputers from new vendors, not the existing leaders. PCs were initially constructed by hobbyists from kits. Soon after, companies such as Radio Shack, Commodore, and the newcomer Apple offered fully-assembled, ready-to-run computers. IBM offered the PC after the success of these upstarts.

The driver for simpler cloud platforms will be cost -- direct and indirect, mostly indirect. The "cloud computing is a mainframe" analogy is not perfect, as the billed costs for cloud platforms can be inexpensive. The expense is not in the hardware, but the time to make the thing work. Current cloud platforms require expertise, and expertise that is not cheap. Companies are willing to pay for that expertise... for now.

I expect that we will see competition to the big cloud platforms, and the marketing will focus on ease of use and low Total Cost of Ownership (TCO). The newcomers will offer simpler clouds, sacrificing performance for reduced administration cost.

My project is currently stuck. Deployments fail, so I cannot update my app. Support is not really available, so I must rely on the limited web pages and perhaps trial and error. I may have to create a new app in Azure and copy my existing code to it. I'm not happy with the experience.

I'm also looking for a simpler cloud platform.

Monday, September 25, 2017

Web services are the new files

Files have been the common element of computing since at least the 1960s. Files existed before disk drives and file systems, as one could put multiple files on a magnetic tape.

MS-DOS used files. Windows used files. OS/2 used files. (Even the p-System used files.)

Files were the unit of data storage. Applications read data from files and wrote data to files. Applications shared data through files. Word processor? Files. Spreadsheet? Files. Editor? Files. Compiler? Files.

The development of databases saw another channel for sharing data. Databases were (and still are) used in specialized applications. Relational databases are good for consistently structured data, and provide transactions to update multiple tables at once. Microsoft hosts its Team Foundation Server on top of its SQL Server. (Git, in contrast, uses files exclusively.)

Despite the advantages of databases, the main method for storing and sharing data remains files.

Until now. Or in a little while.

Cloud computing and web services are changing the picture. Web services are replacing files. Web services can store data and retrieve data, just as files. But web services are cloud residents; files are for local computing. Using URLs, one can think of a web service as a file with a rather funny name.

Web services are also dynamic. A file is a static collection of bytes: what you read is exactly what was written. A web service can provide a set of bytes that is constructed "on the fly".
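The parallel is easy to see in code. Here is a toy sketch in Python: the file name, URL path, and contents are all invented for the demo, and a tiny local HTTP server stands in for a real web service.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# A local file: a static collection of bytes
with open("report.txt", "w") as f:
    f.write("quarterly totals")
with open("report.txt", "rb") as f:
    file_bytes = f.read()

# A tiny web service that builds its response on the fly
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"quarterly totals")
    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Reading the web service looks just like reading a file --
# the "file name" is simply a URL
with urlopen(f"http://127.0.0.1:{server.server_port}/report") as resp:
    service_bytes = resp.read()

print(file_bytes == service_bytes)  # -> True
server.shutdown()
```

The calling code barely changes; only the "funny name" does.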

Applications that use local computing -- desktop applications -- will continue to use files. Cloud applications will use web services.

Those web services will be, at some point, reading and writing files, or database entries, which will eventually be stored in files. Files will continue to exist, as the basement of data storage -- around, but visited by only a few people who have business there.

At the application layer, cloud applications and mobile applications will use web services. The web service will be the dominant method of storing, retrieving, and sharing data. It will become the dominant method because the cloud will become the dominant location for storing data. Local computing, long the leading form, will fall to the cloud.

The default location for data will be the cloud; new applications will store data in the cloud; everyone will think of the cloud. Local storage and local computing will be the oddball configuration. Legacy systems will use local storage; modern systems will use the cloud.

Monday, September 11, 2017

Legacy cloud applications

We have legacy web applications. We have legacy Windows desktop applications. We have legacy DOS applications (albeit few). We have legacy mainframe applications (possibly the first type to be named "legacy").

Will we have legacy cloud applications? I see no reason why not. Any technology that changes over time (which is just about every technology) has legacy applications. Cloud technology changes over time, so I am confident that, at some time, someone, somewhere, will point to an older cloud application and declare it "legacy".

What makes a legacy application a legacy application? Why do we consider some applications "legacy" and others not?

Simply put, the technology world changed and the application did not. There are multiple aspects to the technology world, and any one of them, when left unchanged, may cause us to view an application as legacy.

It may be the user interface. (Are we using an old version of HTML and CSS? An old version of JavaScript?) It may be the database. (Are we using a relational database and not a NoSQL database?) The back-end code may be difficult to read. Or the back-end code may be in a language that has fallen out of favor. (Perl, or Visual Basic, or C, or maybe an early version of Java?)

One can ask similar questions about legacy Windows desktop applications or mainframe applications. (C++ and MFC? COBOL and CICS?)

But let us come back to cloud computing. Cloud computing has been around since 2006. (There was an earlier use of the term "cloud computing", but for our purposes the year 2006 is sufficient.)

So let's assume that the earliest cloud applications were built in 2006. Cloud computing has changed since then. Have all of these applications kept up with those changes? Or have some of them languished, retaining their original design and techniques? If they have not kept up with changing technology, we can consider them legacy cloud applications.

Which means, as owners or custodians of applications, we now have more than legacy mainframe applications, legacy web applications, and legacy desktop applications to worry about. We can add legacy cloud applications to our list.

Cloud computing is a form of computing, but it is not magical. It evolves over time, just like other forms of computing. Those who look after applications must either make the effort to modify cloud applications over time (to keep up with the mainstream) or live with legacy cloud applications. That effort is an expense.

Like any other expense, it is really a business decision: invest time and money in an old (legacy) application or invest the time and money somewhere else. Both paths have benefits and costs; managers must decide which has the greater merit. Choosing to let an old system remain old is an acceptable decision, provided you recognize the cost of maintaining that older technology.

Saturday, August 19, 2017

Cloud, like other forms of computers, changes over time

Cloud computing has been with us a while. In its short life, and like other types of computing, it has changed.

"Cloud" started out as the outsourcing of system administration.

Then "cloud" was about scalability, and the ability to "spin up" servers as you needed them and "spin down" servers when they were not needed.

Shortly after, "cloud" was a cost-control measure: pay for only the servers you use.

For a while, "cloud" was a new type of system architecture with dedicated servers (web, database) connected by message queues.

Then "cloud" was about microservices, which are small web services that are less than complete applications. (Connect the right microservices in the right way, and you have an application!)

Lately, "cloud" has been all about containers, and the rapid and lightweight deployment of applications.

So what is "cloud computing", really?

Well, it's all of these things. As I see it, cloud computing is a new form of computing, different from mainframe computing, desktop computing, and web applications. As a new form of computing, it has taken us a while to fully understand it.

We had similar transitions with desktop (or PC) computing and web applications. Early desktop microcomputers (the Apple II, the TRS-80, and even the IBM PC) were small, slow, and difficult to use. Over time, we improved those PCs: more powerful processors, bigger displays, more memory, simpler attachments (USB instead of serial), and better interfaces (Windows instead of DOS).

Web applications went through their own transitions, from static web pages to CGI Perl scripts to AJAX applications to new standards for HTML.

Cloud computing is undergoing a similar process. It shouldn't be a surprise; this process of gradual improvement is less about technology and more about human creativity. We're always looking for new ways of doing things.

One can argue that PCs and web applications have not stopped changing. We've just added touchscreens to desktop and laptop computers, and we've invented NoSQL databases for web applications (and mobile applications). It may be that cloud computing will continue to change, too.

It seems we're pretty good at changing things.

Sunday, July 9, 2017

Cloud and optimizations

We all recognize that cloud computing is different.

It may be that cloud computing breaks some of our algorithms.

A colleague of mine, a long time ago, shared a story about programming early IBM mainframes. They used assembly language, because code written in assembly executed faster than code written in COBOL. (And for business applications on IBM mainframes, at the time, those were the only two options.)

Not only did they write in assembly language, they wrote code to be fast. That is, they "optimized" the code. One of the optimizations was with the "multiply" instruction.

The multiply instruction does what you think: it multiplies two numbers and stores the result. To optimize it, they wrote the code to place the larger of the two values in one register and the smaller of the two values in the other register. The multiply instruction was implemented as a "repeated addition" operation, so the second register was really a count of the number of addition operations that would be performed. By storing the smaller number in the second register, programmers reduced the number of "add" operations and improved performance.

(Technically inclined folks may balk at the notion of reducing a multiply operation to repeated additions, and observe that it works for integer values but not floating-point values. The technique was valid on early IBM equipment, because the numeric values were either integers or fixed-point values, not floating-point values.)
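The trick can be sketched in a few lines of Python -- a modern illustration of the idea, not the original assembly:

```python
def multiply(a, b):
    """Multiply two non-negative integers by repeated addition,
    as early hardware effectively did."""
    # The optimization: use the smaller value as the repetition
    # count, so fewer "add" operations are performed.
    count, addend = (a, b) if a < b else (b, a)
    result = 0
    for _ in range(count):
        result += addend
    return result

print(multiply(3, 100000))  # -> 300000, using 3 additions instead of 100000
```

Same answer either way; the only difference is how many additions the loop performs.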

It was an optimization that was useful at the time, when computers were relatively slow and relatively expensive. Today's faster, cheaper computers can perform multiplication quite quickly, and we don't need to optimize it.

Over time, changes in technology make certain optimizations obsolete.

Which brings us to cloud computing.

Cloud computing is a change in technology. It makes available a variable number of processors.

Certain problems have a large number of possible outcomes, with only certain outcomes considered good. The problems could describe the travels of a salesman, or the number of items in a sack, or playing a game of checkers. We have algorithms to solve specific configurations of these problems.

One algorithm is the brute-force, search-every-possibility method, which does just what you think. While it is guaranteed to find an optimal solution, there are sometimes so many solutions (millions upon millions, or billions, or quintillions) that this method is impractical.

Faced with an impractical algorithm, we invent others. Many are iterative algorithms which start with a set of conditions and then move closer and closer to a solution by making adjustments to the starting conditions. Other algorithms discard certain possibilities ("pruning") which are known to be no better than current solutions. Both techniques reduce the number of tested possibilities and therefore reduce the time to find a solution.
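As a small illustration of pruning, here is a sketch for a knapsack-style problem (the item values and capacity are made up): a depth-first search that abandons any branch which, even in the best case, cannot beat the best solution found so far.

```python
def best_value(items, capacity):
    """Pick items (value, weight) to maximize value within capacity,
    pruning branches that cannot improve on the best found so far."""
    best = 0

    def search(i, value, room):
        nonlocal best
        best = max(best, value)
        if i == len(items):
            return
        # Prune: even taking every remaining item cannot beat 'best'
        if value + sum(v for v, _ in items[i:]) <= best:
            return
        v, w = items[i]
        if w <= room:
            search(i + 1, value + v, room - w)  # take item i
        search(i + 1, value, room)              # skip item i

    search(0, 0, capacity)
    return best

items = [(60, 10), (100, 20), (120, 30)]
print(best_value(items, 50))  # -> 220
```

The pruning test discards whole subtrees of possibilities, which is exactly how these algorithms reduce a brute-force search to something practical.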

But observe: The improved algorithms assume a set of sequential operations. They are designed for a single computer (or a single person), and they are designed to minimize time.

With cloud computing, we no longer have a single processor. We have multiple processors, each operating in parallel. Algorithms designed to optimize for time on a single processor may not be suitable for cloud computing.

Instead of using one processor to iteratively find a solution, it may be possible to harness thousands (millions?) of cloud-based processors, each working on a distinct configuration. Instead of examining solutions in sequence, we can examine solutions in parallel. The result may be a faster solution to the problem, in terms of "wall time" -- the time we humans are waiting for the solution.
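In miniature, the parallel approach looks like this. The scoring function and candidate configurations are invented for the sketch, and local threads stand in for cloud processors; in a real cloud deployment, each candidate could be shipped to its own server.

```python
from concurrent.futures import ThreadPoolExecutor

def score(configuration):
    # Stand-in for an expensive evaluation that a cloud
    # processor would perform on one candidate solution
    return sum(x * x for x in configuration)

candidates = [(1, 2), (3, 4), (0, 5), (2, 2)]

# Examine all candidates at once instead of in sequence
with ThreadPoolExecutor() as pool:
    results = list(pool.map(score, candidates))

# Pick the best (here, lowest-scoring) configuration
best = min(zip(results, candidates))
print(best)  # -> (5, (1, 2))
```

The "wall time" is roughly the time of the slowest single evaluation, not the sum of all of them.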

I recognize that this approach has its costs. Cloud computing is not free, in terms of money or in terms of computing time. Money aside, there is a cost in creating the multiple configurations, sending them to their respective cloud processors, and then comparing the many results. That time is a cost, and it must be included in our evaluation.

None of these ideas are new to the folks who have been working with parallel processing. There are studies, papers, and ideas, most of which have been ignored by mainstream (sequential) computing.

Cloud computing will lead, I believe, to the re-evaluation of many of our algorithms. We may find that many of them have a built-in bias for single-processor operation. The work done in parallel computing will be pertinent to cloud computing.

Cloud computing is a very different form of computing. We're still learning about it. The application of concepts from parallel processing is one aspect of it. I won't be surprised if there are more. There may be all sorts of surprises ahead of us.

Sunday, June 18, 2017

Three models of computing

Computing comes in different flavors. We're probably most familiar with personal computers and web applications. Let's look at the models used by different vendors.

Apple has the simplest model: devices that compute. Apple has built its empire on high-quality personal computing devices. They do not offer cloud computing services. (They do offer their "iCloud" backup service, which is an accessory to the central computing of the iMac or MacBook.) I have argued that this model is the same as personal computing in the 1970s.

Google has a different model: web-based computing. This is obvious in their Chromebook, which is a lightweight computer that can run a browser -- and nothing else. All of the "real" computing occurs on the servers in Google's data center. The same approach is visible in most of the Google Android apps -- lightweight apps that communicate with servers. In some ways, this model is an update of the 1970s minicomputer model, with terminals connected to a central processor.

Microsoft has a third model, a hybrid of the two. In Microsoft's model, some computing occurs on the personal computer and some occurs in the data center. It is the most interesting of the three, requiring communication and coordination of two components.

Microsoft did not always have their current approach. Their original model was the same as Apple's: personal computers as complete and independent computing entities. Microsoft started with implementations of the BASIC language, and then sold PC-DOS to IBM. Even early versions of Windows were for stand-alone, independent PCs.

Change to that model started with Windows for Workgroups, and became serious with Windows NT, domains, and ActiveDirectory. Those three components allowed for networked computing and distributed processing. (There were network solutions from other vendors, but the Microsoft set was a change in Microsoft's strategy.)

Today, Microsoft offers an array of services under its "Azure" mark. Azure provides servers, message queues, databases, and other services, all hosted in its cloud environment. They allow individuals and companies to create applications that can combine PC and cloud technologies. These applications perform some computing on the local PC and some computing in the Azure cloud. You can, of course, build an application that runs completely on the PC, or completely in the cloud. That you can build those applications shows the flexibility of the Microsoft platform.

I think this hybrid model, combining local computing and server-based computing, has the best potential. It is more complex, but it can handle a wider variety of applications than either the PC-only solution (Apple's) or the server-only solution (Google's). Look for Microsoft to support this model with development tools, operating systems, and communication protocols and libraries.

Looking forward, I can see Microsoft working on a "fluid" model of computing, where some processing can move from the server to the local PC (for systems with powerful local PCs) and from the PC to the server (for systems with limited PCs).

Many things in the IT realm started in a "fixed" configuration, and over time have become more flexible. I think processing is about to join them.

Sunday, May 21, 2017

Parallel processing on the horizon

Parallel processing has been with us for years. Or at least attempts at parallel processing.

Parallel processing has failed due to the numerous challenges it faces. It requires special (usually expensive) hardware. Parallel processing on conventional CPUs is simply processing items serially, because conventional CPUs can process only serially. (Multi-core processors address this problem to a small degree.) Parallel processing requires support in compilers and run-time libraries, and often new data structures. Most importantly, parallel processing requires tasks that are partitionable. The classic example of "nine women producing a baby in one month" highlights a task that is not partitionable, not divisible into smaller tasks.

Cloud computing offers a new twist on parallel processing.

First, it offers multiple processors. Not just multiple cores, but true multiple processors -- as many as you would like.

Second, it offers these processors cheaply.

Cloud computing is a platform that can handle parallel processing -- in some areas. It has its problems.

First, creating new cloud processing systems is expensive in terms of time. A virtual machine must be instantiated, started, and given software to handle the task. Then, data must be shipped to the server. After processing, the result must be sent back, or forward to another processor. The time for all of these tasks is significant.

Second, we still have the problems of partitioning tasks and representing the data and operations in a program.

There is one area of development that I believe is ready to leverage parallel processing. That area is testing.

The typical testing effort for a project can have multiple levels: unit tests, component tests, system tests, end-to-end tests, you name it. But each level of testing follows the same general pattern:

  • Get a collection of tests, complete with input data and expected results
  • For each test:
    1) Set up a test environment (program and data)
    2) Run the test
    3) Compare output to expected output
    4) Record the results
  • Summarize the results and report

In this process, the sequence of steps I've labelled 1 through 4 is repeated for each test. Traditional testing puts all of these tests on one computer, performing each test in sequence. Parallel testing can put each test on its own cloud-based processor, effectively running all tests at once.
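The loop above can be sketched in a few lines. In this toy version each test runs on a thread; a cloud-based runner would instead dispatch each `run_one()` call to its own remote worker:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy sketch of the test loop above, run in parallel. Each "test" is a
# (function, input, expected) triple; a cloud-based test runner would
# send each run_one() call to its own server rather than a local thread.
tests = [
    (lambda x: x * 2, 3, 6),
    (lambda x: x + 1, 3, 4),
    (lambda x: x * x, 3, 9),
]

def run_one(test):
    func, data, expected = test      # 1) set up the test (program and data)
    actual = func(data)              # 2) run the test
    passed = (actual == expected)    # 3) compare output to expected output
    return passed                    # 4) record the result

with ThreadPoolExecutor() as pool:
    results = list(pool.map(run_one, tests))

print(f"{sum(results)} of {len(results)} tests passed")  # → 3 of 3 tests passed
```

Because each test is independent, the wall-clock time approaches the duration of the slowest single test rather than the sum of all of them.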

Testing has a series of well-defined and partitionable tasks. Modern testing methods use automated tests, so a test can run locally or remotely (as long as it has access to everything it needs). Testing can be a drain on resources and time, requiring lots of requests to servers and lots of time to complete all tests.

Testing in the cloud, and in parallel, addresses these issues. It reduces the time for tests and improves the feedback to developers. Cloud processing is cheap -- at least cheaper than paying developers to wait for tests to run.

I think one of the next "process improvements" for software development will be the use of cloud processing to run tests. Look for new services and changes to testing frameworks to support this new mode of testing.

Tuesday, January 3, 2017

Predictions for 2017

What will happen in the new year? Let's make some predictions!

Cloud computing and containers remain popular.

Ransomware will become more prevalent, with a few big name companies (and a number of smaller companies) suffering infections. Individuals will be affected as well. Companies may be spurred to improve their security; "traditional" malware was annoying but ransomware stops operations and costs actual money. Earlier virus programs would require effort from the support team to resolve, and that expense could be conveniently ignored by managers. But this new breed of malware requires an actual payment, and that is harder to ignore. I expect a louder cry for secure operating systems and applications, but effective changes will take time (years).

Artificial Intelligence and Machine Learning will be discussed. A few big players will advertise projects. They will have little effect on "the little guy", small companies, and slow-moving organizations.

Apple will continue to lead the design for laptops and phones. Laptop computers from other manufacturers will lose DVD readers and switch to USB-C (following Apple's design for the MacBook). Apple itself will look for ways to distinguish its MacBooks from laptops.

Tablet sales will remain weak. We don't know what to do with tablets, at home or in the office. They fill a niche between phones and laptops, but if you have those two you don't need a tablet. If you have a phone and are considering an additional device, the laptop is the better choice. If you have a laptop and are considering an additional device, the phone is the better choice. Tablets offer no unique abilities.

Laptop sales will remain strong. Desktop sales will decline. There is little need for a tower PC, and the prices for laptops are in line with prices for desktops. Laptops offer portability, which is good for telework or group meetings. Tower PCs offer expansion slots, which are good for... um, very little in today's offices.

Tower PCs won't die. They will remain the PC of choice for games, and for specific applications that need the processing power of GPUs. Some manufacturers may drop the desktop configurations, and the remaining manufacturers will be able to raise prices. I won't guess at who will stay in the desktop market.

Amazon.com will grow cloud services but lose market share to Microsoft and Google, who will grow at faster rates. Several small cloud providers will cease operations. If you're using a small provider of cloud services, be prepared to move.

Programming languages will continue to fracture. (Witness the decline on http://www.tiobe.com/tiobe-index/.) The long trend has been to move away from a few dominant languages and towards a collection of mildly popular languages. This change makes life uncomfortable for managers, because there is no one "safe" language that is "the best" for corporate development. But fear not, because...

Vendor relationships will continue to define the best programming languages for your projects: Java with Oracle, C# with Microsoft, Swift with Apple. If you are a Microsoft shop, your best language is C#. (You may consider F# for special projects.) If you are developing iOS applications, your best language is Swift. For Android apps, you want Java. Managers need not worry too much about difficult decisions for programming languages.

Those are my ideas for the new year. Let's see what really happens!

Wednesday, December 28, 2016

Moving to the cloud requires a lot. Don't be surprised.

Moving applications to the cloud is not easy. Existing applications cannot simply be dropped onto cloud servers and be expected to leverage the benefits of cloud computing. And this should not surprise people.

The cloud is a different environment than a web server. (Or a Windows desktop.) Moving to the cloud is a change in platform.

The history of IT has several examples of such changes. Each transition from one platform to another required changes to the code, and often changes to how we *think* about programs.

The operating system

The first changes occurred in the mainframe age. The very first was probably the shift from a raw hardware platform to hardware with an operating system. With raw hardware, the programmer has access to the entire computing system, including memory and devices. With an operating system, the program must request such access through the operating system. It was no longer possible to write directly to the printer; one had to request the use of each device. This change also saw the separation of tasks between programmers and system operators, the latter handling the scheduling and execution of programs. One could not use the older programs; they had to be rewritten to call the operating system rather than communicate with devices.

Timesharing and interactive systems

Timesharing was another change in the mainframe era. In contrast to batch processing (running one program at a time, each program reading and writing data as needed but with no direct interaction with the programmer), timeshare systems interacted with users. Timeshare systems saw the use of on-line terminals, something not available for batch systems. The BASIC language was developed to take advantage of these terminals. Programs had to wait for user input and verify that the input was correct and meaningful. While batch systems could merely write erroneous input to a 'reject' file, timeshare systems could prompt the user for a correction. (If they were written to detect errors.) One could not use a batch program in an interactive environment; programs had to be rewritten.

Minicomputers

The transition from mainframes to minicomputers was, interestingly, one of the simpler conversions in IT history. In many respects, minicomputers were smaller versions of mainframes. IBM minicomputers used the batch processing model of IBM's mainframes. Minicomputers from manufacturers like DEC and Data General used interactive systems, following the lead of timeshare systems. In this case, it *was* possible to move programs from mainframes to minicomputers.

Microcomputers

If minicomputers allowed for an easy transition, microcomputers were the opposite. They were small and followed the path of interactive systems. Most ran BASIC in ROM with no other possible languages. The operating systems available (CP/M, MS-DOS, and a host of others) were limited and weak compared to today's, providing no protection for hardware and no multitasking. Every program for microcomputers had to be written from scratch.

Graphical operating systems

Windows (and OS/2 and other systems, for those who remember them) introduced a number of changes to programming. The obvious difference between Windows programs and the older DOS programs was, of course, the graphical user interface. From the programmer's perspective, Windows required event-driven programming, something not available in DOS. A Windows program had to respond to mouse clicks and keyboard entries anywhere on the program's window, which was very different from the DOS text-based input methods. Old DOS programs could not be simply dropped into Windows and run; they had to be rewritten. (Yes, technically one could run the older programs in the "DOS box", but that was not really "moving to Windows".)

Web applications

Web applications, with browsers and servers, HTML and "submit" requests, with CGI scripts and JavaScript and CSS and AJAX, were completely different from Windows "desktop" applications. The intense interaction of a window with fine-grained controls and events was replaced by coarse-grained request-and-response exchanges, later supplemented by smaller AJAX and AJAX-like web services. The separation of the user interface (HTML, CSS, JavaScript, and browser) from the "back end" (the server) required a complete rewrite of applications.

Mobile apps

Small screen. Touch-based. Storage on servers, not so much on the device. Device processor for handling input; main processing on servers.

One could not drop a web application (or an old Windows desktop application) onto a mobile device. (Yes, you can run Windows applications on Microsoft's Surface tablets. But the Surface tablets are really PCs in the shape of tablets, and they do not use the model used by iOS or Android.)

You had to write new apps for mobile devices. You had to build a collection of web services to be run on the back end. (Not too different from the web application back end, but not exactly the same.)

Which brings us to cloud applications

Cloud applications use multiple instances of servers (web servers, database servers, and others), each hosting services (called "microservices" because each service is less than a full application) communicating through message queues.

One cannot simply move a web application into the cloud. You have to rewrite it to split computation from coordination, the latter handled by queues. Computation must be split into small, discrete services. You must write controller services that make requests to multiple microservices. You must design your front-end apps (which run on mobile devices and web browsers) and establish an efficient API to bridge the front-end apps with the back-end services.
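A minimal illustration of that queue-based coordination, using Python's in-process `queue.Queue` as a stand-in for a real cloud message queue:

```python
import queue
import threading

# Minimal sketch of the pattern described above: a small service that
# communicates through message queues rather than direct calls. In a
# real cloud system the service would be a separate server and the
# queues would be a hosted message-queue service; queue.Queue stands
# in for both here.
requests = queue.Queue()
results = queue.Queue()

def double_service():
    """A tiny 'microservice': read a message, compute, post the result."""
    while True:
        msg = requests.get()
        if msg is None:          # shutdown sentinel
            break
        results.put(msg * 2)

worker = threading.Thread(target=double_service)
worker.start()

for n in (1, 2, 3):
    requests.put(n)              # coordination happens via the queue
requests.put(None)
worker.join()

output = [results.get() for _ in range(3)]
print(output)  # → [2, 4, 6]
```

The caller never invokes the service directly; it only posts messages and reads replies, which is what lets the cloud run many instances of the service behind the same queue.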

In other words, you have to rewrite your applications. (Again.)

A different platform requires a different design. This should not be a surprise.


Sunday, November 22, 2015

Apple and Microsoft do sometimes agree

In the computing world, Apple and Microsoft are often considered opposites. Microsoft makes software; Apple makes hardware (primarily). Microsoft sells to enterprises; Apple sells to consumers. Microsoft products are ugly and buggy; Apple products are beautiful and "it just works".

Yet they do agree on one thing: The center of the computing world.

Both Apple and Microsoft have built their empires on local, personal-size computing devices. (I would say "PCs" but then the Apple fans would shout "MacBooks are not PCs!" and we don't need that discussion here.)

Microsoft's strategy has been to enable PC users, both individual and corporate. It supplies the operating system and application programs. It supplies software for coordinating teams of computer users (ActiveDirectory, Exchange, Outlook, etc). It supplies office software (word processor, spreadsheet), development tools (Visual Studio, among others), and games. At the center of the strategy is the assumption that the PC will be a computing engine.

Apple's strategy has also been to enable users of Apple products. It designs computing products such as the MacBook, the iMac, the iPad, and the iPhone. Like Microsoft, the center of its strategy is the assumption that these devices will be computing engines.

In contrast, Google and Amazon.com take a different approach. They offer computing services in the cloud. For them, the PCs and tablets and phones are not centers of computing; they are sophisticated input-output devices that feed the computing centers.

That Microsoft's and Apple's strategies revolve around the PC is not an accident. They were born in the microcomputing revolution of the 1970s, and in those days there was no cloud, no web, no internet. (Okay, technically there *was* an internet, but it was limited to a very small number of users.)

Google and Amazon were built in the internet age, and their business strategies reflect that fact. Google provides advertising, search technology, and cloud computing. Amazon.com started by selling books (on the web) and has moved on to selling everything (still on the web) and cloud computing (its AWS offerings).

Google's approach to computing allows it to build Chromebooks, lightweight laptops that have just enough operating system to run the Chrome browser. Everything Google offers is on the web, accessible with merely a browser.

Microsoft's PC-centric view makes it difficult to build a Windows version of a Chromebook. While Google can create Chrome OS as a derivative of Linux, Microsoft is stuck with Windows. Creating a light version of Windows is not so easy -- Windows was designed as a complete entity, not as a partitioned, shrinkable thing. Thus, a Windows Cloudbook must run Windows and be a center of computing, which is quite different from a Chromebook.

Yet Microsoft is moving to cloud computing. It has built an impressive array of services under the Azure name.

Apple's progress towards cloud computing is less obvious. It offers storage services called iCloud, but their true cloud nature is undetermined. iCloud may truly be based on cloud technology, or it may simply be a lot of servers. Apple must be using data centers to support Siri, but again, those servers may be cloud-based or may simply be servers in a data center. Apple has not been transparent in this.

Notably, Microsoft sells developer tools for its cloud-based services and Apple does not. One cannot, using Apple's tools, build and deploy a cloud-based app into Apple's cloud infrastructure. Apple remains wedded to the PC (okay, MacBook, iMac, iPad, and iPhone) as the center of computing. One can build apps for Mac OS X and iOS that use other vendors' cloud infrastructures, just not Apple's.

For now, Microsoft and Apple agree on the center of the computing world. For both of them, it is the local PC (running Windows, Mac OS X, or iOS). But that agreement will not last, as Microsoft moves to the cloud and Apple remains on the PC.

Wednesday, November 11, 2015

Big changes happen early

Software has a life cycle: It is born, it grows, and finally dies. That's not news, or interesting. What is interesting is that the big changes in software happen early in its life.

Let's review some software: PC-DOS, Windows, and Visual Basic.

PC-DOS saw several versions, from 1.0 to 6.0. There were intermediate versions, such as versions 3.3 and 3.31, so there were more than six versions.

Yet the big changes happened early. The transition from 1.0 to 2.0 saw big changes in the API, allowing new device types and especially subdirectories. Moving from version 1.0 to 2.0 required almost a complete re-write of an application. Moving applications from version 2.0 to later versions required changes, but not as significant. The big changes happened early.

Windows followed a similar path. Moving from Windows 1 to Windows 2 was a big deal, as was moving from Windows 2 to Windows 3. The transition from Windows 3 to Windows NT was big, as was the change from Windows 3.1 to Windows 95, but later changes were small. The big changes happened early.

Visual Basic versions 1, 2, and 3 all saw significant changes. Visual Basic 4 had some changes but not as many, and Visual Basic 5 and 6 were milder. The big changes happened early. (The change from VB6 to VB.NET was large, but that was a change to another underlying platform.)

There are other examples, such as Microsoft Word, Internet Explorer, and Visual Studio. The effect is not limited to Microsoft. Lotus 1-2-3 followed a similar arc, as did dBase, R:Base, the Android operating system, and Linux.

Why do big changes happen early? Why do the big jumps in progress occur early in a product's life?

I have two ideas.

One possibility is that the makers and users of an application have a target in mind, a "perfect form" of the application, and each generation of the product moves closer to that ideal form. The first version is a first attempt, and successive versions improve upon previous versions. Over the life of the application, each version moves closer to the ideal.

Another possibility is that changes to an application are constrained by the size of the user population. A product with few users can see large changes; a product with many users can tolerate only minor changes.

Both of these ideas explain the effect, yet they both have problems. The former assumes that the developers (and the users) know the ideal form and can move towards it, albeit in imperfect steps (because one never arrives at the perfect form). My experience in software development allows me to state that most development teams (if not all) are not aware of the ideal form of an application. They may think that the first version, or the current version, or the next version is the "perfect" one, but they rarely have a vision of some far-off version that is ideal.

The latter has the problem of evidence. While many applications grow their user base over time and also shrink their changes over time, not all do. Two examples are Facebook and Twitter. Both have grown (to large user bases) and both have seen significant changes.

A third possibility, one that seems less theoretical and more mundane, is that as an application grows, and its code base grows, it is harder to make changes. A small version 1 application can be changed a lot for version 2. A large version 10 application has oodles of code and oodles of connected bits of code; changing any bit can cause lots of things to break. In that situation, each change must be reviewed carefully and tested thoroughly, and those efforts take time. Thus, the older the application, the larger the code base and the slower the changes.

That may explain the effect.

Some teams go to great lengths to keep their code well-organized, which allows for easier changes. Development teams that use Agile methods will re-factor code when it becomes "messy" and reduce the couplings between components. Cleaner code allows for bigger and faster changes.

If changes are constrained not by large code but by messy code, then as more development teams use Agile methods (and keep their code clean) we will see more products with large changes not only early in the product's life but through the product's life.

Let's see what happens with cloud-based applications. These are distributed by nature, so there is already an incentive for smaller, independent modules. Cloud computing is also younger than Agile development, so all cloud-based systems could have been (I stress the "could") developed with Agile methods. It is likely that some were not, but it is also likely that many were -- more than desktop applications or web applications.

Tuesday, July 14, 2015

Public, private, and on-premise clouds

The cloud platform is flexible. Its primary degree of flexibility is scalability -- the ability to add (or remove) processing nodes as needed. Yet it has more possibilities. Clouds can be public, private, or on-premise.

Public cloud The cloud services offered by the well-known vendors (Amazon.com, Microsoft, Rackspace). The public cloud consists of virtual machines running on shared hardware. My virtual server may be on the same physical server as your virtual server. (At least today; tomorrow our virtual servers might be hosted on other shared hardware. The cloud is permitted to shift virtual servers to suit its needs.)

Private cloud These are servers and services offered by big vendors (Amazon.com, Microsoft, IBM, Oracle, and more) with dedicated hardware. (Sometimes. Different vendors have different ideas of "private cloud".) The cost is higher, but the private cloud offers more consistent performance and (theoretically) higher security as only your servers are running on the hardware.

On-premise cloud Virtual servers running on hardware that is located in your data center. The selling point is that you have control over physical access to the hardware. (You also pay for the hardware.)

Which configuration is best? The answer, as with many questions about systems, is: "it depends".

Some might think that on-premise clouds are better (even with the higher cost) because you have the most control. That's a debatable point, in today's connected world.

An aspect of the on-premise cloud configuration you may want to consider is scalability. The whole point of the cloud is to get more processors on-line quickly (within minutes) and avoid the long procurement, installation, and configuration processes associated with traditional data centers. On-premise clouds let you do that, provided that you have enough hardware to support the top level of demand. With the public cloud you share the hardware; increasing hardware capacity is the cloud vendor's responsibility. With an on-premise cloud, you must plan for the capacity. If you need more hardware, you're back in the procurement, installation, and configuration bureaucracies.

Startups that want to prepare for rapid growth benefit from the public cloud. They can defer paying for servers until they need them. (With an on-premise cloud, you have to buy the hardware to support your servers. Once bought, the hardware is yours.)

Established companies with consistent workloads benefit little from cloud processing. (Unless they are looking to distribute their processing among multiple data centers, and use cloud design for resiliency.)

Even companies with spiky workloads may want to stay with traditional data centers -- if they can accurately predict their needs. A consistent pattern over the year can be used to plan hardware for servers.

The one group that can benefit from on-premise clouds is large companies with dynamic workloads. By "dynamic", I mean a workload that shifts internally over time. If the on-line sales website needs the bulk of the processing during the day and the accounting systems need the bulk of the processing at night, and the workloads are about the same, then an on-premise cloud makes some sense. The ability to "slosh" computing power from one department to another (or one subsidiary to another) while keeping the total computing capacity (relatively) constant fits well with the on-premise cloud.

I expect that most companies will look for hybrid configurations, blending private and public clouds. The small, focused virtual servers used in cloud systems allow for rapid redeployment to different platforms. A company could run everything on its private cloud when business is slow, and when business (and processing) is heavy, shift non-critical tasks to public clouds, keeping the critical items in-house (or "in-cloud").
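A sketch of that placement rule (the capacity figure and task names are invented for illustration): critical tasks always land on the private cloud, and non-critical tasks spill over to the public cloud once the private cloud is full:

```python
# Hypothetical sketch of the hybrid placement rule described above:
# critical tasks stay on the private cloud; when the private cloud is
# at capacity, non-critical tasks spill over to a public cloud. The
# capacity number and task names are invented for illustration.
PRIVATE_CAPACITY = 2

def place_tasks(tasks):
    """tasks: list of (name, is_critical) pairs -> {name: placement}."""
    placements = {}
    private_load = 0
    # Place critical tasks first so they always land in-house.
    for name, critical in sorted(tasks, key=lambda t: not t[1]):
        if critical or private_load < PRIVATE_CAPACITY:
            placements[name] = "private"
            private_load += 1
        else:
            placements[name] = "public"
    return placements

tasks = [("web-store", True), ("accounting", True), ("reports", False)]
print(place_tasks(tasks))
# → {'web-store': 'private', 'accounting': 'private', 'reports': 'public'}
```

Classifying each task as critical or spillable is precisely the evaluation work described below; the code is trivial once that classification exists.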

Such a design requires an evaluation of the workload and the classification of tasks. You have to know which servers can be sent to the public cloud. I have yet to see anyone discussing this aspect of cloud systems -- but I won't be surprised when they do.