Wednesday, December 15, 2021

Everyone who is not Apple

Apple has direct control over the design of both its hardware and its software, a degree of control rarely seen in the history of personal computers. I expect that Apple will enjoy success -- at least for a while -- with new, powerful designs.

But what about everyone else? What about Microsoft, the maker of Windows, Office, Azure services, Surface tablets and laptops, and other things? What about Dell and Lenovo and Toshiba and HP, the makers of personal computers? What about Google, the maker of Chromebooks and cloud services?

That's a big question, and it has a number of answers.

Microsoft has a number of paths forward, and will probably pursue several of them. For its Surface devices, it can design systems on a chip that correspond to Apple's M1 chips. Microsoft could use ARM CPUs; it has already ported Windows to ARM and offers the ARM-based Surface Pro X. Microsoft could instead design a system-on-a-chip around an Intel CPU; such a design would provide binary compatibility with current Windows applications. Intel chips generate more heat, but Microsoft has had success with Intel chips in most of its Surface line, so a system-on-a-chip with Intel could be possible. These paths mirror the path that Apple has taken.

Microsoft, unlike Apple, has another possible way forward: cloud services. Microsoft could design efficient processors for the computers that run its data centers, the computers that host virtual instances of Windows and Linux. Such a move would ease the shift of processing from laptops and desktop computers to the cloud. (Such a shift is possible today; system-on-chip designs make it more efficient.) Microsoft may work with Intel, or AMD, or even IBM to design and build efficient hardware for cloud data centers.

Manufacturers of personal computers may design their own system-on-chip answers to the M1 processor. Or they may form a consortium and design a common chip that can be used by all (still allowing for custom system-on-chip designs and the current discrete component designs). Microsoft has, for a long time, provided a reference document for the requirements of Windows, and system-on-chip designs would follow that set of requirements just as laptops and desktops today follow those requirements.

PC manufacturers do lose some control when they adopt a common design. A common design is, by definition, available to all manufacturers, which prevents any one manufacturer from enhancing the design by selecting better components. Rather than shift their entire product lines to system-on-chip designs, manufacturers will probably use the system-on-chip design for only some of their offerings, keeping some products with discrete designs (and enhancements to distinguish them from the competition).

Google does not have to follow the requirements for Windows; it has its own requirements for Chromebooks. System-on-chip design is a good fit for Chromebooks, which already use both Intel and ARM chips (and few users can see the difference). The performance improvement of system-on-chip design fits in nicely with Google's plan for games on Chromebooks. The increase in power allows for an increase in the sophistication of web-based apps.

I am willing to wait for Microsoft's response and for Google's response. I think we will see innovative designs and improvements to the computing experience. I expect Microsoft to push in two directions: system-on-chip designs for their Surface tablets, and cloud-based applications running on enhanced hardware. Google will follow a similar strategy, enhancing cloud hardware and improving the capabilities of Chromebooks.

Wednesday, September 4, 2019

Don't shoot me, I'm only the OO programming language!

There has been a lot of hate for object-oriented programming of late. I use the word "hate" with care, as others have described their emotions as such. After decades of success with object-oriented programming, now people are writing articles with titles like "I hate object-oriented programming".

Why such animosity towards object-oriented programming? And why now? I have some ideas.

First, consider the age of object-oriented programming (OOP) as the primary programming paradigm. I put the acceptance of OOP somewhere after the introduction of Java (in 1995) and before Microsoft's C# and .NET initiative (around 2000), which makes OOP about 25 years old -- or one generation of programmers.

(I know that object-oriented programming was around much earlier than C# and Java, and I don't mean to imply that Java was the first object-oriented language. But Java was the first popular OOP language, the first OOP language that was widely accepted in the programming community.)

So it may be that the rejection of OOP is driven by generational forces. Object-oriented programming, for new programmers, has been around "forever" and is an old way of looking at code. OOP is not the shiny new thing; it is the dusty old thing.

Which leads to my second idea: What is the shiny new thing that replaces object-oriented programming? To answer that question, we have to answer another: what does OOP do for us?

Object-oriented programming, in brief, helps developers organize code. It is one of several techniques to organize code. Others include Structured Programming, subroutines, and functions.

Subroutines are possibly the oldest technique for organizing code. They date back to the days of assembly language, when code that was executed more than once was invoked with a "branch" or "call" or "jump subroutine" opcode. Instead of repeating code (and using precious memory), common code could be stored once and invoked as often as needed.

Functions date back to at least Fortran, consolidating common code that returns a value.

For two decades (from the mid-1950s to the mid-1970s), subroutines and functions were the only ways to organize code. In the mid-1970s, the structured programming movement introduced an additional way to organize code, with IF/THEN/ELSE and WHILE statements (and an avoidance of GOTO). These techniques worked at a more granular level than subroutines and functions. Structured programming organized code "in the small" and subroutines and functions organized code "in the medium". Notice that we had no way (at the time) to organize code "in the large".

Techniques to organize code "in the large" did come. One attempt was dynamic-link libraries (DLLs), introduced to most programmers with Microsoft Windows but also used by earlier operating systems. Another was Microsoft's COM, which organized the DLLs. Neither was particularly effective at organizing code.

Object-oriented programming was effective at organizing code at a level higher than procedures and functions. And it has been successful for the past two-plus decades. OOP let programmers build large systems, sometimes with thousands of classes and millions of lines of code.

So what technique has arrived that displaces object-oriented programming? How has the computer world changed, that object-oriented programming would become despised?

I think it is cloud programming and web services, and specifically, microservices.

OOP lets us organize a large code base into classes (and namespaces which contain classes). The concept of a web service also lets us organize our code, at a level higher than procedures and functions. A web service can be a large thing, using OOP to organize its innards.

But a microservice is different from a large web service. A microservice is, by definition, small. A large system can be composed of multiple microservices, but each microservice must be a small component.

Microservices are small enough that they can be handled by a simple script (perhaps in Python or Ruby) that performs a few specific tasks and then exits. Small programs don't need classes and object-oriented programming. Object-oriented programming adds cost to simple programs with no corresponding benefit.
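
As a concrete illustration -- a minimal sketch, with a hypothetical task and message format, not any particular real service -- such a microservice can be a plain Python script: read a request, do one job, write a response, exit. No classes required.

    #!/usr/bin/env python3
    # A minimal "microservice" as a plain script: read one request, perform one
    # specific task, write a response, and exit. The JSON-on-stdin/stdout format
    # and the temperature-conversion task are illustrative assumptions only.
    import json
    import sys

    def to_fahrenheit(celsius):
        # The single, specific task this service performs.
        return celsius * 9.0 / 5.0 + 32.0

    def main():
        request = json.load(sys.stdin)                    # e.g. {"celsius": 20}
        response = {"fahrenheit": to_fahrenheit(request["celsius"])}
        json.dump(response, sys.stdout)
        sys.stdout.write("\n")

    if __name__ == "__main__":
        main()

Fed the input {"celsius": 20} on standard input, the script prints {"fahrenheit": 68.0} and exits -- no classes, no object hierarchy, no framework.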

Programmers building microservices in languages such as Java or C# may feel that object-oriented programming is being forced upon them. Both Java and C# are object-oriented languages, and they mandate classes in your program. A simple "Hello, world!" program requires the definition of at least one class, with at least one static method.

Perhaps languages that are not object-oriented are better for microservices. Languages such as Python, Ruby, or even Perl. If performance is a concern, the compiled languages C and Go are available. (It might be that the recent interest in C is driven by the development of cloud applications and microservices.)

Object-oriented programming was (and still is) an effective way to manage code for large systems. With the advent of microservices, it is not the only way. Using object-oriented programming for microservices is overkill. OOP requires overhead that is not helpful for small programs; if your microservice is large enough to require OOP, then it isn't a microservice.

I think this is the reason for the recent animosity towards object-oriented programming. Programmers have figured out that OOP doesn't mix with microservices -- but they don't know why. They feel that something is wrong (which it is) but they don't have the ability to shake off the established programming practices and technologies (perhaps because they don't have the authority).

If you are working on a large system, and using microservices, give some thought to your programming language.

Thursday, January 10, 2019

Predictions for tech in 2019

Predictions are fun! They allow us to see into the future -- or at least claim that we can see into the future. They also allow us to step away from the usual topics and talk about almost anything we want. Who could resist making predictions?

So here are my predictions for 2019:

Programming languages: The current "market" for programming languages is fractured. There is no one language that dominates. The ten most popular languages (according to Tiobe) are Java, C, Python, C++, VB.NET, C#, JavaScript, PHP, SQL, and Objective-C. The top ten are not evenly distributed; Java and C are in a "lead group" and the remaining languages are in a second group.

O'Reilly lists Python, Java, Go, C#, Kotlin, and Rust as languages to watch for 2019. Notice that this list is different from Tiobe's "most popular" -- Rust and Kotlin appear on that index in positions 34 and 36, respectively. Notably absent from O'Reilly's list are C++ and Perl.

For 2019, I predict that the market will remain fragmented. Java will remain in the lead group unless Oracle, who owns Java, does something that discourages Java development. (And even then, so many systems are currently written in Java that Java will remain in use for years. Java will be the COBOL of the 2020s: used in important business systems but not liked very much by younger developers.) C will remain in the lead group. (The popularity of C is hard to explain. But whatever C has, people like.)

Fragmentation makes life difficult for managers. Which languages should their teams use? A single leader makes the decision easy. The current market, with multiple capable languages, allows for debates about development languages. An established project provides an argument for sticking with its current programming language; a new project (with no existing code) makes the decision somewhat harder. (My advice: pick a popular language that gets the job done for you. Don't worry about it being the best language. Good enough is... good enough.)

Operating systems: Unlike the "market" for programming languages, the "market" for operating systems is fairly uniform. I should say "markets": we can consider the desktop/laptop segment, the server segment, and possibly a cloud segment. For the desktop, Windows is dominant, and will remain dominant in 2019. Windows 10 is capable and especially good for large organizations who want centralized administration. MacOS is used in a number of shops, especially smaller organizations and startups, and will continue to have a modest share.

For servers, Linux dominates and will continue to dominate in 2019. Windows runs some servers, and will continue to, especially in organizations who consider themselves "Microsoft shops".

The interesting future for operating systems is the cloud segment. Cloud services run on operating systems, usually Linux or Windows, but this is changing on two fronts. The first is the hypervisor, which sits below the virtual operating system in a cloud environment; the second is containers, which sit above the virtual operating system (and which contain an application).

Hypervisors are well-understood and well established. Containers are new (well, new-ish) and not as well understood, but gaining acceptance. Between the two sits the operating system, which is coming under pressure as hypervisors and containers perform tasks that were traditionally performed by operating systems.

In the long run, hypervisors, containers, and operating systems will achieve a new equilibrium, with operating systems doing less than they have in the past. The question will not be "Which operating system for my cloud application?" but instead "Which combination of hypervisor, operating system, and container for my cloud application?". And even then, there may be large shops that use a mixture of hypervisor, operating system, and container for their applications.

Virtual reality and augmented reality: Both will remain experimental. We have yet to find a "killer app" for augmented reality, something that combines real-world and supplied visuals in a compelling application.

Cloud services: Amazon.com dominates the market, and I see little to change that. Microsoft and Google will maintain (and possibly increase) their market shares. Other players (IBM, Dell, Oracle) will remain small.

The list of services available from cloud providers is impressive and daunting. Amazon is in a difficult position; its services are less consistent than Microsoft's and Google's. Both Microsoft and Google came into the market after Amazon and developed their offerings more slowly. The result has been a smaller market share but a more consistent set of services (and, I dare say, a better experience for the customer). Amazon may change some services to make things more consistent.

Phones: Little will change in 2019. Apple and Android will remain dominant. 5G will get press and slow roll-out by carriers; look for true implementation and wide coverage in later years.

Tablets: 2019 may be the "last year of the tablet" -- at least the non-laptop convertible tablet. Tablet sales have been anemic, except for iPads, and even those are declining. Apple could introduce an innovation to the iPad which increases its appeal, but I don't see that. (I think Apple will focus on phones, watches, earphones, and other consumer devices.)

I see little interest in tablets from other manufacturers, probably due to the lack of demand by customers. As Android is the only other (major) operating system for tablets, innovation for Android tablets will have to come from Google, and I see little interest from Google in tablets. (I think Google is more interested in phones, location-based services, and advertising.)

In sum, I see 2019 as a year of "more of the same", with few or no major innovations. I suspect that the market for tech will, at the end of 2019, look very much like the market for tech at the beginning of 2019.

Monday, September 25, 2017

Web services are the new files

Files have been the common element of computing since at least the 1960s. Files existed before disk drives and file systems, as one could put multiple files on a magnetic tape.

MS-DOS used files. Windows used files. OS/2 used files. (Even the p-System used files.)

Files were the unit of data storage. Applications read data from files and wrote data to files. Applications shared data through files. Word processor? Files. Spreadsheet? Files. Editor? Files. Compiler? Files.

The development of databases provided another channel for sharing data. Databases were (and still are) used in specialized applications. Relational databases are good for consistently structured data, and provide transactions to update multiple tables at once. Microsoft hosts its Team Foundation Server on top of SQL Server. (Git, in contrast, uses files exclusively.)

Despite the advantages of databases, the main method for storing and sharing data remains files.

Until now. Or in a little while.

Cloud computing and web services are changing the picture. Web services are replacing files. Web services can store data and retrieve data, just as files do. But web services are cloud residents; files are for local computing. Using URLs, one can think of a web service as a file with a rather funny name.
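
To make the point concrete, here is a minimal sketch (Python standard library only; the file name and URL are placeholders) showing that, to the calling program, a web service reads much like a file:

    # Reading a local file and reading a web service look much the same to the
    # caller; only the "name" changes. The path and URL below are placeholders.
    from urllib.request import urlopen

    with open("report.txt", "rb") as f:          # a file with an ordinary name
        local_data = f.read()

    with urlopen("https://example.com/api/report") as response:
        remote_data = response.read()            # a "file" with a funny name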

Web services are also dynamic. A file is a static collection of bytes: what you read is exactly what was written. A web service can provide a set of bytes that is constructed "on the fly".

Applications that use local computing -- desktop applications -- will continue to use files. Cloud applications will use web services.

Those web services will be, at some point, reading and writing files, or database entries, which will eventually be stored in files. Files will continue to exist, as the basement of data storage -- around, but visited by only a few people who have business there.

At the application layer, cloud applications and mobile applications will use web services. The web service will be the dominant method of storing, retrieving, and sharing data. It will become the dominant method because the cloud will become the dominant location for storing data. Local computing, long the leading form, will fall to the cloud.

The default location for data will be the cloud; new applications will store data in the cloud; everyone will think of the cloud. Local storage and local computing will be the oddball configuration. Legacy systems will use local storage; modern systems will use the cloud.

Monday, June 5, 2017

The Demise of Apple

Future historians will look back at Apple, point to a specific moment, and say "Here, at this point, is when Apple started its decline. This event started Apple's fall." That point will be the construction of their new spaceship-inspired headquarters.

Why do I blame their new building? I don't, actually. I think others -- those future historians -- will. They will get the time correct, but point to the wrong event.

First things first. What do I have against Apple's shiny new headquarters?

It's round.

Apple's new building is large, elegant, expensive, and ... the wrong shape. It is a giant circle, or wheel, or doughnut, and it works poorly with human psychology and perception. The human mind works better with a grid than a circle.

Not that humans can't handle circular objects. We can, when they are small or distant. We have no problem with the moon being round, for example. We're okay with clocks and watches, and old-style speedometers in cars.

We're good when we are looking at the entire circle. Watches and clocks are smaller than us, so we can view the entire circle and process it. (Clocks in towers, such as "Big Ben" in London or the clock in the center of town, are also okay, since we view them from a distance and they appear small.)

The problems occur when we are inside the circle, when we are navigating along the circumference. We're not good at keeping track of gradual changes in direction. (Possibly why so many people get lost in the desert. They travel in a circle without realizing it.)

Apple's building looks nice, from above. I suspect the experience of working inside the building will be one of modest confusion and discomfort. Possibly at such a minor level that people do not realize that something is wrong. But this discomfort will accumulate, and eventually people will rebel.

It's ironic that Apple, the company that designs and builds products with the emphasis on "easy to use", got the design of their building wrong.

So it may be that historians, looking at Apple's (future) history, blame the design of the new headquarters for Apple's (future) failures. They will (rightly) associate the low-level confusion and additional brain processing required for navigation of such a building as draining Apple's creativity and effectiveness.

I think that they (the historians) will be wrong.

The building is a problem, no doubt. But it won't cause Apple's demise. The true cause will be overlooked.

That true cause? It is Apple's fixation on computing devices.

Apple builds (and sells) computers. They are the sole company that has survived from the 1970s microcomputer age. (Radio Shack, Commodore, Cromemco, Sol, Northstar, and the others left the market decades ago.) In that age, microcomputers were stand-alone devices -- there was no internet, no ethernet, no communication aside from floppy disks and a few on-line bulletin board systems (BBS) that required acoustic coupler modems. Microcomputers were "centers of computing" and they had to do everything.

Today, computing is changing. The combination of fast and reliable networks, cheap servers, and easy virtual machines allows the construction of cloud computing, where processing is split across multiple processors. Google is taking advantage of this with its Chromebooks, which are low-end laptops that run a browser and little else. The "real" processing is performed not on the Chromebook but on web servers, often hosted in the cloud. (I'm typing this essay on a Chromebook.)

All of the major companies are moving to cloud technology. Google, obviously, with Chromebooks and App Engine and Android devices. Microsoft has its Azure services and versions of Word and Excel that run entirely in the cloud, and they are working on a low-end laptop that runs a browser and little else. It's called the "Cloudbook" -- at least for now.

Amazon.com has its cloud services and its Kindle and Fire tablets. IBM, Oracle, Dell, HP, and others are moving tasks to the cloud.

Except Apple. Apple has no equivalent of the Chromebook, and I don't think it can provide one. Apple's business model is to sell hardware at a premium, providing a superior user experience to justify that premium. The superior user experience is possible with local processing and excellent integration of hardware and software. Apps run on the Macs, MacBooks, and iPhones. They don't run on servers.

A browser-only Apple laptop (a "Safaribook"?) would offer little value. The Apple experience does not translate to web sites.

When Apple does use cloud technology, they use it as an accessory to the PC. The processing for Siri is done in a big datacenter, but it's all for Siri and the user experience. Apple's iCloud lets users store data and synchronize it across devices, but it is simply a big, shared disk. Siri and iCloud make the PC a better PC, but don't transform the PC.

This is the problem that Apple faces. It is stuck in the 1970s, when individual computers did everything. Apple has made the experience pleasant, but it has not changed the paradigm.

Computing is changing. Apple is not. That is what will cause Apple's downfall.

Wednesday, December 28, 2016

Moving to the cloud requires a lot. Don't be surprised.

Moving applications to the cloud is not easy. Existing applications cannot simply be dropped onto cloud servers and be expected to leverage the benefits of cloud computing. And this should not surprise people.

The cloud is a different environment than a web server. (Or a Windows desktop.) Moving to the cloud is a change in platform.

The history of IT has several examples of such changes. Each transition from one platform to another required changes to the code, and often changes to how we *think* about programs.

The operating system

The first changes occurred in the mainframe age. The very first was probably the shift from a raw hardware platform to hardware with an operating system. With raw hardware, the programmer has access to the entire computing system, including memory and devices. With an operating system, the program must request such access through the operating system. It was no longer possible to write directly to the printer; one had to request the use of each device. This change also saw the separation of tasks between programmers and system operators, the latter handling the scheduling and execution of programs. One could not use the older programs; they had to be rewritten to call the operating system rather than communicate with devices directly.

Timesharing and interactive systems

Timesharing was another change in the mainframe era. In contrast to batch processing (running one program at a time, each program reading and writing data as needed but with no direct interaction with the user), timeshare systems interacted with users. Timeshare systems saw the use of on-line terminals, something not available for batch systems. The BASIC language was developed to take advantage of these terminals. Programs had to wait for user input and verify that the input was correct and meaningful. While batch systems could merely write erroneous input to a 'reject' file, timeshare systems could prompt the user for a correction. (If they were written to detect errors.) One could not use a batch program in an interactive environment; programs had to be rewritten.

Minicomputers

The transition from mainframes to minicomputers was, interestingly, one of the simpler conversions in IT history. In many respects, minicomputers were smaller versions of mainframes. IBM minicomputers used the batch processing model that matched its mainframes. Minicomputers from manufacturers like DEC and Data General used interactive systems, following the lead of timeshare systems. In this case, it *was* possible to move programs from mainframes to minicomputers.

Microcomputers

If minicomputers allowed for an easy transition, microcomputers were the opposite. They were small and followed the path of interactive systems. Most ran BASIC in ROM, with no other languages available. The operating systems available (CP/M, MS-DOS, and a host of others) were limited and weak compared to today's, providing no protection for hardware and no multitasking. Every program for microcomputers had to be written from scratch.

Graphical operating systems

Windows (and OS/2 and other systems, for those who remember them) introduced a number of changes to programming. The obvious difference between Windows programs and the older DOS programs was, of course, the graphical user interface. From the programmer's perspective, Windows required event-driven programming, something not available in DOS. A Windows program had to respond to mouse clicks and keyboard entries anywhere on the program's window, which was very different from the DOS text-based input methods. Old DOS programs could not be simply dropped into Windows and run; they had to be rewritten. (Yes, technically one could run the older programs in the "DOS box", but that was not really "moving to Windows".)

Web applications

Web applications, with browsers and servers, HTML and "submit" requests, with CGI scripts and JavaScript and CSS and AJAX, were completely different from Windows "desktop" applications. The intense interaction of a window with fine-grained controls and events was replaced with the large-scale request, eventually supplemented by smaller AJAX and AJAX-like web-service calls. The separation of the user interface (HTML, CSS, JavaScript, and browser) from the "back end" (the server) required a complete rewrite of applications.

Mobile apps

Small screen. Touch-based. Storage on servers, not so much on the device. Device processor for handling input; main processing on servers.

One could not drop a web application (or an old Windows desktop application) onto a mobile device. (Yes, you can run Windows applications on Microsoft's Surface tablets. But the Surface tablets are really PCs in the shape of tablets, and they do not use the model used by iOS or Android.)

You had to write new apps for mobile devices. You had to build a collection of web services to be run on the back end. (Not too different from the web application back end, but not exactly the same.)

Which brings us to cloud applications

Cloud applications use multiple instances of servers (web servers, database servers, and others), each hosting services (called "microservices" because each service is less than a full application) communicating through message queues.

One cannot simply move a web application into the cloud. You have to rewrite it to split computation from coordination, with the latter handled by queues. Computation must be split into small, discrete services. You must write controller services that make requests to multiple microservices. You must design your front-end apps (which run on mobile devices and web browsers) and establish an efficient API to bridge the front-end apps with the back-end services.
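
As a rough sketch of that structure (an in-process illustration only: Python's standard-library queue stands in for a real message broker, and the service names and message format are invented for the example):

    # Computation lives in small, discrete services; coordination happens
    # through queues. queue.Queue stands in for a real message broker here.
    import queue

    requests = queue.Queue()
    results = queue.Queue()

    def pricing_service():
        # A small, discrete service: price one order and nothing more.
        order = requests.get()
        results.put({"order_id": order["order_id"],
                     "price": round(9.99 * order["quantity"], 2)})

    def controller():
        # A controller service: make requests of microservices, collect results.
        requests.put({"order_id": 42, "quantity": 3})
        pricing_service()              # in a real system this runs elsewhere
        return results.get()

    print(controller())                # {'order_id': 42, 'price': 29.97}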

In other words, you have to rewrite your applications. (Again.)

A different platform requires a different design. This should not be a surprise.


Sunday, January 3, 2016

Predictions for 2016

It's the beginning of a new year, which means... predictions! Whee!

Let's start with some obvious predictions:

Mobile will be big in 2016.

Cloud will be big in 2016.

NoSQL and distributed databases will be big in 2016.

Predictions like these are easy.

Now for something a little less obvious: legacy applications.

With the continued interest in mobile, cloud, NoSQL, and distributed databases, these areas will see strong demand for architects, developers, designers, and testers. That demand will pull people away from legacy applications -- those applications built for classic, non-cloud web architectures as well as the remaining desktop applications and mainframe batch systems.

Which is unfortunate for the managers of those legacy applications, because I believe that 2016 is going to be the year that companies decide that they want to migrate those legacy applications to the cloud/mobile platform.

When the web appeared, lots of managers held back, waiting to see if the platform would prove itself. It did, and companies migrated most of their applications from desktop to web (either external or internal). Even Microsoft, stalwart of desktop applications, created a web-based version of Outlook.

Likewise, when mobile and cloud appeared, many managers held back and waited for the new technologies to prove themselves. With almost ten years of mobile and cloud, and many companies already using those technologies, it's time for the holdouts to take action.

Look for renewed interest in converting existing desktop and classic web applications. The conversions have challenges. In one sense, the job is easier than the early conversions, because we now have experience with mobile/cloud systems and we understand the architecture. In another sense, it may be harder, because the easy conversions (the "low-hanging fruit") have already been done and only the harder ones remain.

The architecture of mobile/cloud systems (with or without distributed databases) is different from classic web applications. (And very different from desktop applications.)

I think that 2016 will be the year of rude awakening, as companies look at the effort to convert their legacy systems to newer technologies.

But the rude awakening is delivered in two phases. The first is the cost and time to convert legacy applications. The second is the cost of maintaining legacy applications in their current form.

Why the cost of maintaining legacy applications, without changing them to newer technologies? Because the demand for mobile/cloud is high. New entrants to the field will know the new technologies, and select jobs that let them use that knowledge. That means that the folks with knowledge of the older technologies will be, um, older.

The folks with knowledge about older languages (C++, Visual Basic) and older APIs (Flash) will be the senior developers. And senior developers are more expensive than junior developers.

So the owners of legacy applications have a rather unpleasant choice: migrate to mobile/cloud, which is expensive, or stay on the legacy platform, which will also be expensive.

Thursday, October 22, 2015

Windows 10 means a different future for PCs

Since the beginning, PCs have always been growing.  The very first IBM PCs used 16K RAM chips (for a maximum of 64K on the CPU board); these were quickly replaced by PCs with 64K RAM chips (which allowed 256K on the CPU board).

We in the PC world are accustomed to new releases of bigger and better hardware.

It may have started with that simple memory upgrade, but it continued with hard drives (the IBM PC XT); enhanced graphics, higher-capacity floppy disks, and a more capable processor (the IBM PC AT); and an enhanced bus, even better graphics, and even better processors (the IBM PS/2 series).

Improvements were not limited to IBM. Compaq and other manufacturers revised their systems and offered larger hard drives, better processors, and more memory. Every year saw improvements.

When Microsoft became the market leader, it played an active role in the specification of hardware. Microsoft also designed new operating systems for specific minimum platforms: you needed certain hardware to run Windows NT, certain (more capable) hardware for Windows XP, and even more capable hardware for Windows Vista.

Windows 10 may change all of that.

Microsoft's approach to Windows 10 is different from previous versions of Windows. The changes are twofold. First, Windows 10 will see a constant stream of updates instead of the intermittent service packs of previous versions. Second, Windows 10 is "it" for Windows -- there will be no later release, no "Windows 11".

With no Windows 11, people running Windows 10 on their current hardware should be able to keep running it. Windows Vista forced a lot of people to purchase new hardware (which was one of the objections to Windows Vista); Windows 11 won't force that because it won't exist.

Also consider: Microsoft made it possible for just about every computer running Windows 8 or Windows 7 (or possibly Windows Vista) to upgrade to Windows 10. Thus, Windows 10 requires no more hardware than those earlier versions.

What may be happening is that Microsoft has determined that Windows is as big as it is going to be.

This makes sense for desktop PCs and for servers running Windows.

Most servers running Windows will be in the cloud. (They may not be now, but they will be soon.) Cloud-based servers don't need to be big. With the ability to "spin up" new instances of a server, an overworked server can be given another instance to handle the load. A system can provide more capacity with more servers. It is not necessary to make the server bigger.
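
For what that looks like in practice, here is a hedged sketch (assuming the boto3 library and an AWS account; the image id and instance type are placeholders, not recommendations) of asking a cloud for one more instance instead of a bigger machine:

    # "Spin up" another server instance rather than buying a bigger server.
    # Assumes boto3 and configured AWS credentials; ImageId and InstanceType
    # below are placeholders.
    import boto3

    ec2 = boto3.client("ec2")
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder machine image
        InstanceType="t3.micro",           # placeholder (small) size
        MinCount=1,
        MaxCount=1,
    )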

Desktop PCs, either in the office or at home, run a lot of applications, and these applications (in Microsoft's plan) are moving to the cloud. You won't need a faster machine to run the new version of Microsoft Word -- it runs in the cloud and all you need is a browser.

It may be that Microsoft thinks that PCs have gotten as powerful as they need to get. This is perhaps not an unreasonable assumption. PCs are powerful and can handle every task we ask of them.

As we shift our computing from PCs and discrete servers to the cloud, we eliminate the need for improvements to PCs and discrete servers. The long line of PC growth stops. Instead, growth will occur in the cloud.

Which doesn't mean that PCs will be "frozen in time", forever unchanging. It means that PC *growth* will stop, or at least slow to a glacial pace. This has already happened with CPU clock frequencies and bus widths. Today's CPUs are about as fast (in terms of clock speed) as CPUs from 2009. Today's CPUs use a 64-bit data path, which hasn't changed since 2009. PCs will change, but slowly: desktop PCs will become physically smaller, laptops will become thinner and lighter, and battery life will increase.

PCs, as we know them today, will stay as we know them today.

Tuesday, July 14, 2015

Public, private, and on-premise clouds

The cloud platform is flexible. Its primary degree of flexibility is scalability -- the ability to add (or remove) processing nodes as needed. Yet it has more possibilities. Clouds can be public, private, or on-premise.

Public cloud: The cloud services offered by the well-known vendors (Amazon.com, Microsoft, Rackspace). The public cloud consists of virtual machines running on shared hardware. My virtual server may be on the same physical server as your virtual server. (At least today; tomorrow our virtual servers might be hosted on other shared hardware. The cloud is permitted to shift virtual servers to suit its needs.)

Private cloud: Servers and services offered by big vendors (Amazon.com, Microsoft, IBM, Oracle, and more) with dedicated hardware. (Sometimes. Different vendors have different ideas of "private cloud".) The cost is higher, but the private cloud offers more consistent performance and (theoretically) higher security, as only your servers are running on the hardware.

On-premise cloud: Virtual servers running on hardware that is located in your data center. The selling point is that you have control over physical access to the hardware. (You also pay for the hardware.)

Which configuration is best? The answer, as with many questions about systems, is: "it depends".

Some might think that on-premise clouds are better (even with the higher cost) because you have the most control. That's a debatable point, in today's connected world.

An aspect of the on-premise cloud configuration you may want to consider is scalability. The whole point of the cloud is to get more processors on-line quickly (within minutes) and avoid the long procurement, installation, and configuration processes associated with traditional data centers. On-premise clouds let you do that, provided that you have enough hardware to support the top level of demand. With the public cloud you share the hardware; increasing hardware capacity is the cloud vendor's responsibility. With an on-premise cloud, you must plan for the capacity. If you need more hardware, you're back in the procurement, installation, and configuration bureaucracies.

Startups that want to prepare for rapid growth benefit from the public cloud. They can defer paying for servers until they need them. (With an on-premise cloud, you have to buy the hardware to support your servers. Once bought, the hardware is yours.)

Established companies with consistent workloads benefit little from cloud processing. (Unless they are looking to distribute their processing among multiple data centers, and use cloud design for resiliency.)

Even companies with spiky workloads may want to stay with traditional data centers -- if they can accurately predict their needs. A consistent pattern over the year can be used to plan hardware for servers.

The one group that can benefit from on-premise clouds is large companies with dynamic workloads. By "dynamic", I mean a workload that shifts internally over time. If the on-line sales website needs the bulk of the processing during the day and the accounting systems need the bulk of the processing at night, and the workloads are about the same size, then an on-premise cloud makes some sense. The ability to "slosh" computing power from one department to another (or one subsidiary to another) while keeping the total computing capacity (relatively) constant fits well with the on-premise cloud.

I expect that most companies will look for hybrid configurations, blending private and public clouds. The small, focussed virtual servers used in cloud designs allow for rapid re-deployment to different platforms. A company could run everything on its private cloud when business is slow, and when business (and processing) is heavy, shift non-critical tasks to public clouds, keeping the critical items in-house (or "in-cloud").

Such a design requires an evaluation of the workload and the classification of tasks. You have to know which servers can be sent to the public cloud. I have yet to see anyone discussing this aspect of cloud systems -- but I won't be surprised when they do.
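
A sketch of what that classification exercise might look like (the service names, attributes, and rules below are hypothetical; the point is the exercise, not the code):

    # Classify each service: may it "slosh" to the public cloud when the
    # private cloud is busy, or must it stay in-house? All entries are invented.
    SERVICES = {
        "online-sales-web": {"critical": True,  "sensitive_data": True},
        "accounting-batch": {"critical": True,  "sensitive_data": True},
        "image-resizer":    {"critical": False, "sensitive_data": False},
        "report-generator": {"critical": False, "sensitive_data": False},
    }

    def placement(name, private_cloud_busy):
        service = SERVICES[name]
        if service["critical"] or service["sensitive_data"]:
            return "private"                 # keep critical items in-house
        return "public" if private_cloud_busy else "private"

    for name in SERVICES:
        print(name, "->", placement(name, private_cloud_busy=True))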

Sunday, June 21, 2015

More and smaller data centers for cloud

We seem to repeat lessons of technology.

The latest lesson is one from the 1980s: The PC revolution. Personal computers introduced the notion of smaller, numerous computers. Previously, the notion of computers revolved around mainframe computers: large, centralized, and expensive. (I'm ignoring minicomputers, which were smaller, less centralized, and less expensive.)

The PC revolution was less a change from mainframes to PCs and more a change in mindset. The revolution made the notion of small computers a reasonable one. After PCs arrived, the "space" of computing expanded to include mainframes and PCs. Small computers were considered legitimate.

That lesson -- that computing can come in small packages as well as large ones -- can be applied to cloud data centers. The big cloud providers (Amazon.com, Microsoft, IBM, etc.) have built large data centers. And large is an apt description: enormous buildings containing racks and racks of servers, power supply distribution units, air conditioning... and more. The facilities may vary between the players: the hypervisors, operating systems, and administration systems are all different among them. But the one factor they have in common is that they are all large.

I'm not sure that data centers have to be large. They certainly don't have to be monolithic. Cloud providers maintain multiple centers ("regions", "zones", "service areas") to provide redundancy in the event of physical disasters. But aside from the issue of redundancy, it seems that the big cloud providers are thinking in mainframe terms. They build large, centralized, (and expensive) data centers.

Large, centralized mainframe computers make sense for large, centralized mainframe programs.

Cloud systems are different from mainframe programs. They are not large, centralized programs. A properly designed cloud system consists of small, distinct programs tied together by data stores and message queues. A cloud system becomes big by scaling -- by increasing the number of copies of web servers and applications -- and not by growing a single program or single database.

A large cloud system can exist on a cloud platform that lives in one large data center. For critical systems, we want redundancy, so we arrange for multiple data centers. This is easy with cloud systems, as the system can expand by creating new instances of servers, not necessarily in the same data center.

A large cloud system doesn't need a single large data center, though. A large cloud system, with its many instances of small servers, can just as easily live in a set of small data centers (provided that there are enough servers to host the virtual servers).

I think we're in for an expansion of mindset, the same expansion that we saw with personal computers. Cloud providers will supplement their large data centers with small- and medium-sized data centers.

I'm ignoring two aspects here. One is communications: network transfers are faster within a single data center than across multiple centers. But how many applications are that sensitive to time? The other aspect is the efficiency of smaller data centers. It is probably cheaper, on a per-server basis, to build large data centers. Small data centers will have to take advantage of something, like an existing small building that requires no major construction.

Cloud systems, even large cloud systems, don't need large data centers.

Tuesday, August 26, 2014

With no clear IT leader, expect lots of changes

The introduction of the IBM PC was market-wrenching. Overnight, the small, rough-and-tumble market of microcomputers with diverse designs from various small vendors became large and centered around the PC standard.

From 1981 to 1987, IBM was the technology leader. IBM led in sales and also defined the computing platform.

IBM's leadership fell to Compaq in 1987, when IBM introduced the PS/2 line with its new (incompatible) hardware. Compaq delivered old-style PCs with a faster bus (the EISA bus) and, notably, the Intel 80386 processor. (IBM stayed with the older 80286 and 8086 processors, eventually consenting to provide 80386-based PS/2 units.) Compaq even worked with Microsoft to deliver newer versions of MS-DOS that recognized larger memory capacity and optical disc readers.

But Compaq did not remain the leader. Its leadership declined gradually, passing to the clone makers and especially Dell, HP, and Gateway.

The mantle of leadership moved from a PC manufacturer to the Microsoft-Intel duopoly. The popularity of Windows, along with marketing skill and software development prowess, led to a stable configuration for Microsoft and Intel. Together, they out-competed IBM's OS/2, Motorola's 68000 processors, DEC's Alpha processor, and Apple's Macintosh line.

That configuration held for roughly two decades, from 1990 until about 2010, following Apple's introduction of the iPhone (in 2007). The genius move was not the iPhone hardware, but the App Store and iTunes, which let users easily find, install, and pay for apps on their phones.

Now Microsoft and Apple have the same problem: after years of competing in a well-defined market (the corporate PC market) they struggle to move into the world of mobile computing. Microsoft's attempts at mobile devices (Zune, Kin, Surface RT) have flopped. Intel is desperately attempting to design and build processors that are suitable for low-power devices.

I don't expect either Microsoft or Intel to disappear. (At least not for several years, possibly decades.) The PC market is strong, and Intel can sell a lot of its traditional processors (heat radiators that happen to compute data). Microsoft is a competent player in the cloud arena with its Azure services.

But I will make an observation: for the first time in the PC era, we find that there is no clear leader for technology. The last time we were leaderless was prior to the IBM PC, in the "microcomputer era" of Radio Shack TRS-80 and Apple II computers. Back then, the market was fractured and tribal. Hardware ruled, and your choice of hardware defined your tribe. Apple owners were in the Apple tribe, using Apple-specific software and exchanging data on Apple-specific floppy disks. Radio Shack owners were in the Radio Shack tribe, using software specific to the TRS-80 computers and exchanging data on TRS-80 diskettes. Exchanging data between tribes was one of the advanced arts, and changing tribes was extremely difficult.

There were some efforts to unify computing: CP/M was the most significant. Built by Digital Research (a software company with no interest in hardware), CP/M ran on many different configurations. Yet even that effort could not span the differences in processors, memory layout, and video configurations.

Today we see tribes forming around multiple architectures. For cloud computing, we have Amazon.com's AWS, Microsoft's Azure, Google's App Engine. With virtualization we see VMware, Oracle's VirtualBox, the aforementioned cloud providers, and newcomer Docker as a rough analog of CP/M. Mobile computing sees Apple's iOS, Google's Android, and Microsoft's Windows RT as a (very) distant third.

With no clear leader and no clear standard, I expect each vendor to enhance their offerings and also attempt to lock in customers with proprietary features. In the mobile space, Apple's Swift and Microsoft's C# are both proprietary languages. Google's choice of Java puts them (possibly) at odds with Oracle -- although Oracle seems to be focussed on databases, servers, and cloud offerings, so there is no direct conflict. Things are a bit more collegial in the cloud space, with vendors supporting OpenStack and Docker. But I still expect proprietary enhancements, perhaps in the form of add-ons.

All of this means that the technology world is headed for change. Not just change from desktop PC to mobile/cloud, but changes in mobile/cloud. The competition from vendors will lead to enhancements and changes, possibly significant changes, in cloud computing and mobile platforms. The mobile/cloud platform will be a moving target, with revisions as each vendor attempts to out-do the others.

Those changes mean risk. As platforms change, applications and systems may break or fail in unexpected ways. New features may offer better ways of addressing problems and the temptation to use those new features will be great. Yet re-designing a system to take advantage of new infrastructure features may mean that other work -- such as new business features -- waits for resources.

One cannot ignore mobile/cloud computing. (Well, I suppose one can, but that is probably foolish.) But one cannot, with today's market, depend on a stable platform with slow, predictable changes like we had with Microsoft Windows.

With such an environment, what should one do?

My recommendations:

Build systems of small components: This is the Unix mindset, with small tools to perform specific tasks. Avoid large, monolithic systems.

Use standard interfaces: Use web services (either SOAP or REST) to connect components into larger systems. Use JSON and Unicode to exchange data, not proprietary formats. (A small sketch of such an interface appears after these recommendations.)

Hedge your bets: Gain experience in at least two cloud platforms and two mobile platforms. Resist the temptation of "corporate standards". Standards are good with a predictable technology base. The current base is not predictable, and placing your eggs in one vendor's basket is risky.

Change your position: After a period of use, examine your systems, your tools, and your talent. Change vendors -- not for everything, but for small components. (You did build your system from small, connected components, right?) Migrate some components to another vendor; learn the process and the difficulties. You'll want to know them when you are forced to move to a different vendor.
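
To show what "standard interfaces" can mean in code, here is a minimal sketch (Python standard library only; the route, port, and payload are hypothetical) of a small component exposing a REST-style endpoint that returns JSON:

    # A small component with a standard interface: an HTTP endpoint that
    # returns JSON. Route, port, and payload are illustrative placeholders.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class StatusHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/status":
                body = json.dumps({"service": "inventory", "healthy": True}).encode("utf-8")
                self.send_response(200)
                self.send_header("Content-Type", "application/json; charset=utf-8")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_response(404)
                self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8080), StatusHandler).serve_forever()

Any component -- written in any language, hosted by any vendor -- can call an interface like this with plain HTTP and JSON, which is exactly what makes switching vendors or platforms less painful.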

Many folks involved in IT have been living in the "golden age" of a stable PC platform. They may have weathered the change from desktop to web -- which saw a brief period of uncertainty. More than likely, they think that the stable world is the norm. All that is fine -- except we're not in the normal world with mobile/cloud. Be prepared for change.

Sunday, April 7, 2013

Mobile/cloud apps will be different than PC apps

As a participant in the PC revolution, I was comfortable with the bright future of personal computers. I *knew* -- that is, I strongly believed -- that PCs were superior to mainframes.

It turned out that PCs were *different* from mainframes, but not necessarily superior.

Mainframe programs were, primarily, accounting systems. Oh, there were programs to compute ballistics tables, and programs for engineering and astronomy, and system utilities, but the big use of mainframe computers was accounting (general ledger, inventory, billing, payment processing, payables, receivables, and market forecasts). These uses were shaped by the entities that could afford mainframe computers (large corporations and governments) and the data that was most important to those organizations.

But those uses were also shaped by technology. Computers read input from punch cards and stored data on magnetic tape. Batch processing systems were useful for certain types of processing and made efficient use of transactions and master files. Even when terminals were invented, the processing remained in batch mode.

Personal computers were more interactive than mainframes. They started with terminals and interactive applications. From the beginning, personal computers were used for tasks very different than the tasks of mainframe computers. The biggest applications for PCs were word processors and spreadsheets. (They still are today.)

Some "traditional" computer applications were ported to personal computers. There were (and still are) systems for accounting and database management. There were utility programs and programming languages: BASIC, FORTRAN, COBOL, and later C and Pascal. But the biggest applications were the interactive ones, the ones that broke from the batch processing mold of mainframe computing.

(I am simplifying greatly here. There were interactive programs for mainframes. The BASIC language was designed as an interactive environment for programming, on mainframe computers.)

I cannot help but think that the typical mainframe programmer, looking at the new personal computers that appeared in the late 1970s, could only puzzle at what possible advantage they could offer. Personal computers were smaller, slower, and less capable than mainframes in every degree. Processors were slower and less capable. Memory was smaller. Storage was laughably primitive. PC software was also primitive, with nothing approaching the sophistication of mainframe operating systems, database management systems, or utilities.

The only ways in which personal computers were superior to mainframes were the BASIC language (Microsoft BASIC was more powerful than mainframe BASIC), word processors, and spreadsheets. Notice that these are all interactive programs. The cost and size of a personal computer made it possible for a person to own one, but the interactive nature of applications made it sensible for a person to own one.

That single attribute of interactive applications made the PC revolution possible. The success of modern-day PCs and the Microsoft empire was built on interactive applications.

I suspect that the success of cell phones and tablets will be built on a single attribute. But what that attribute is, I do not know. It may be portability. It may be location-aware capabilities. It may be a different level of interactivity.

I *know* -- that is, I feel very strongly -- that mobile/cloud is going to have a brilliant future.

I also feel that the key applications for mobile/cloud will be different from traditional PC applications, just as PC applications are different from mainframe applications. Any attempt to port PC applications to mobile/cloud will be doomed to failure, just as mainframe applications failed to port to PCs.

Mainframe applications live on, in their batch mode glory, to this day. Large companies and governments need accounting systems, and will continue to need them. PC applications will live through the mobile/cloud revolution, although some may fade; PowerPoint-style presentations may be better served on synchronized mobile devices than with a single PC and a projector.

Expect mobile/cloud apps to surprise us. They will not be word processors and spreadsheets. (Nor will they be accounting systems.) They will be more like Twitter and Facebook, with status updates and connections to our network of people.