Sunday, December 17, 2017
If you're going to have a single point of failure, make it replaceable.
We strive to avoid single points of failure. They carry risk: if a single point of failure fails, the entire system fails.
It is not always possible to avoid a single point of failure. Sometimes the constraint is cost. Other times the design requires a single component for a function.
If you have a single point of failure, make it easy to replace. Design the component so that you can replace it quickly and with little risk. When it fails, you can respond and install the replacement component. (Think of a spare tire on an automobile -- a car's four tires are not themselves a single point of failure, since there are four of them, but you get the idea.)
A simple design for a single point of failure (or any component) requires care and attention. You have to design the component with minimal functionality. Move what you can to other, redundant components.
You also have to guard that simplicity against change. Over time, designs change. People add to designs. They want new features, or extensions to existing features. Watch for changes that would complicate the single point of failure, and put them in other, redundant components of the system instead.
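To make that concrete, here is a minimal sketch in Python, assuming a hypothetical payment gateway as the single point of failure. The names (PaymentGateway, PrimaryGateway, SpareGateway, checkout) are invented for illustration; the point is the narrow interface, which is what makes the component replaceable.

```python
from abc import ABC, abstractmethod


class PaymentGateway(ABC):
    """The single point of failure, kept to minimal functionality."""

    @abstractmethod
    def charge(self, account_id: str, cents: int) -> str:
        """Charge an account; return a transaction id."""


class PrimaryGateway(PaymentGateway):
    def charge(self, account_id: str, cents: int) -> str:
        # Talk to the one real gateway here.
        return f"primary-{account_id}-{cents}"


class SpareGateway(PaymentGateway):
    """The 'spare tire': simple, tested, and ready to install."""

    def charge(self, account_id: str, cents: int) -> str:
        return f"spare-{account_id}-{cents}"


# The rest of the system depends only on the interface. Retries, logging,
# and reporting live in other, redundant components.
def checkout(gateway: PaymentGateway, account_id: str, cents: int) -> str:
    return gateway.charge(account_id, cents)
```

The design choice here is that replaceability lives in the interface, not the implementation: when the primary fails, installing the spare changes one line, not the system.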
Thursday, April 13, 2017
Slack, efficiency, and choice
Slack is the opposite of efficiency. Slack is excess capacity, unused space, extra money. Slack is waste and therefore considered bad.
Yet things are not that simple. Yes, slack is excess capacity and unused space. And yes, slack can be viewed as waste. But slack is not entirely bad. Slack has value.
Consider the recent (infamous) overbooking event on United Airlines. One passenger was forcibly removed from a flight to make room for a United crew member (a crew member assigned to another flight, not one working the flight of the incident). United had a fully-booked flight from Chicago to Louisville and needed to move a flight crew from Chicago to Louisville. They asked for volunteers to take other flights; three people took them up on the offer, leaving one seat to be cleared by "involuntary re-accommodation".
I won't go into the legal and moral issues of this incident. Instead, I will look at slack.
- The flight had no slack passenger capacity. It was fully booked. (That's usually a good thing for the airline, as it means maximum revenue.)
- The crew had to move from Chicago to Louisville to start their next assigned flight. It had to be that crew; there was no slack (no extra crew) in Louisville. I assume that there was no other crew in the region that could fill in for the assigned crew. (Keep in mind that crews are limited, by union contracts and federal law, in how much time they can spend working. This limits the ability of an airline to swap crews like sacks of flour.)
In a perfectly predictable world, we can design, build, and operate systems with no slack. But the world is not perfectly predictable. The world surprises us, and slack helps us cope with those surprises. Extra processing capacity is useful when demand spikes. Extra money is useful for many events, from car crashes to broken water heaters to layoffs.
Slack has value. It buffers us from harsh consequences.
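To make that value concrete, here is a trivial back-of-the-envelope sketch in Python. The demand figure and the 20% slack factor are invented for illustration; the point is that slack can be a number you choose deliberately, not an accident.

```python
def provisioned_capacity(average_demand: float, slack_fraction: float) -> float:
    """Capacity to provision: expected demand plus deliberate slack."""
    return average_demand * (1.0 + slack_fraction)


# Both numbers below are assumptions, chosen for the example.
requests_per_second = 800
capacity = provisioned_capacity(requests_per_second, slack_fraction=0.20)
print(capacity)  # 960.0 -- the extra 160 requests/second absorbs surprises
```

The arithmetic is not the point; the point is that the slack is chosen on purpose, with its cost in view.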
United ran their system with little slack, was subjected to demands greater than expected, and suffered consequences. But this is not really about United or airlines or booking systems. This is about project management, system design, budgeting, and just about any other human activity.
I'm not recommending that you build slack into your systems. I'm not insisting that airlines always leave a few empty seats on each flight.
I'm recommending that you consider slack, and that you make a conscious choice about it. Slack has a cost. It also has benefits. Which has the greater value for you depends on your situation. But don't strive to eliminate slack without thought.
Examine. Evaluate. Think. And then decide.
Labels: capacity, risk avoidance, risk management, slack, system design
Monday, August 24, 2015
The file format wars are over -- and text won
When I first started with computers, files were simple things. Most of them were source code, and a few of them were executables. The source code (BASIC, FORTRAN, and assembly) was all in plain text files. The executables were binary, since they contained machine instructions.
That simple world changed with the PC revolution and the plethora of applications it brought. WordStar used a format that was almost text: ASCII characters, with the last character of each word marked by setting its high (8th) bit. Lotus 1-2-3 used a special file format for its worksheets. dBase II (and dBase III, and dBase IV) used a special format for its data.
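As a sketch of just how close to text that format was, masking off the high bit recovers readable characters. This Python snippet handles only the flag bit described above -- not WordStar's control codes -- so treat it as an illustration, not a converter.

```python
def strip_high_bits(data: bytes) -> str:
    # Clearing bit 7 turns a flagged character back into plain ASCII.
    return bytes(b & 0x7F for b in data).decode("ascii")


# "hello world" with the last letter of "hello" flagged, as described above
sample = b"hell" + bytes([ord("o") | 0x80]) + b" world"
print(strip_high_bits(sample))  # hello world
```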
There was a "Cambrian explosion" of binary formats. Each and every application had its own format. Binary-formatted data was smaller to store, easier to parse, and somewhat proprietary. The last was important for the commercial market; once a customer had lots of data locked in a proprietary format, they were unwilling to change to a competitor's product.
The conversion from DOS to Windows changed little. Applications kept their proprietary, binary formats.
Yet recently (that is, with the rise of web services and mobile computing) binary formats have declined. The new favorites are text-based formats: XML, JSON, and YAML.
I have seen no new proprietary binary formats lately. New formats have been text-based. Even Microsoft has changed its Office applications (Word, Excel, PowerPoint, and others) to use XML-based file formats.
This is a big change. Why did it happen?
I can think of several reasons:
First is simple convention. In the "age of binary formats", a binary format was how one stored data. Everyone did it.
Second is the abundance of storage. With limited storage space, a binary format is smaller and a better fit. With today's available storage that pressure does not exist.
Third is the availability of libraries to parse and construct the text formats. We can easily read and write XML (or JSON, or YAML) with commonly-available, tested, working libraries; a proprietary format requires a new (untested) library. (A short example follows these reasons.)
Fourth is the pressure of legislation. Some countries (and some large companies) have mandated the use of open formats, to prevent the lock-in of proprietary data formats.
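Here is the example promised above: reading and writing a text format with a commonly-available library -- in this case, Python's standard json module. The record is invented; XML and YAML round-trips look much the same with their own libraries.

```python
import json

# An invented record, round-tripped through a standard library.
record = {"customer": "C-1137", "balance_cents": 25000, "active": True}

text = json.dumps(record, indent=2)  # construct: object -> JSON text
restored = json.loads(text)         # parse: JSON text -> object
assert restored == record
```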
All of these are good reasons, yet I think there is another factor.
In the past, a file format served the application program. In the data-processing world, our mindset was that applications "owned" the data, with files being nothing more than a convenient holding space to be used when the application was not running (or when it was processing data from a different file). Programs did not share data -- or on the rare occasions when they did, it was through databases or plain text files.
Today, our mobile device apps share data with cloud-based systems. The cloud-based systems are collections of independent applications performing coordinated work. The nature of mobile/cloud is to share data from one application to another. This sharing between programs (sometimes written in different languages) is easier with standard formats and difficult with proprietary formats.
New systems will be developed with open (text) formats for storage and exchange. That means that our existing systems, the dinosaurs of the processing world with their proprietary formats, will fall out of favor.
I don't expect them to vanish completely. They work, which is an important virtue. Replacing them with a new system (or simply modifying them to use text formats) would be expensive with little apparent return on investment. Yet continuing to use them implies that some amount of data (a significant amount) will be locked within proprietary non-text formats.
Expect calls for people with skills in these file formats.
* * * * *
The recent Supreme Court decision about Java's API (in which the court declined to hear an appeal) means that, for now, APIs and file formats can be considered intellectual property. It may be difficult to reverse-engineer the formats of old systems without the express permission of the vendor. (And if the vendor is out of business or has been sold to a larger company, it may be very difficult to obtain such permission.)
Companies may want to evaluate the risk of their data formats.
Labels: data formats, file formats, JSON, risk management, XML, YAML
Wednesday, May 13, 2015
The other shadow IT
The term "shadow IT" has come to mean IT products and services used within an organization without the blessing (or even knowledge) of the IT support team. Often such services and products are not on the list of approved software.
In the good old days before the open source movement, when software had to be purchased, it was easy to control the acquisition of software: the Purchasing Department would verify all purchase requests with the IT Department, and any unauthorized requests would be refused.
Even with open source and "free" software, the IT Department could set policies on individual PCs and prevent people from installing software. IT remained the gatekeepers of software.
With cloud computing, those controls can be bypassed. Software can now be used in the browser. Point your browser at a web site, register, supply a credit card, and you're ready to go. No Purchasing Department, no administrator rights. This is the situation that most people associate with the term "shadow IT".
Yet there is another technology that is used without the knowledge of the IT Department. A technology that has been used by many people, to perform many tasks within the organization. Programs have been written, databases have been designed, and systems have been implemented without the involvement of the IT Department. Worse, these systems have been used in production without extensive testing (perhaps no testing), have not been audited, and have no backups or disaster recovery plans.
I'm talking about spreadsheets.
Specifically, Microsoft Excel spreadsheets.
Microsoft Excel is a standard in corporate computing. Just about every "business" PC (as opposed to "developer" PC or "sysadmin" PC) runs Windows and has Excel. The technology is available, and often mandated by the IT Department as part of the standard configuration.
Millions of people have access to Excel, and they use it. And why not? Excel is powerful, flexible, and useful. There are tutorials for it. There are web pages with hints and tips. Microsoft has made it easy to use. There is little work needed to use Excel to perform calculations and store data. One can even connect Excel to external data sources (to avoid re-typing data) and program Excel with macros in VBA.
Excel, in other words, is a system platform complete with a programming language. It is used by millions of people in thousands (hundreds of thousands?) of organizations. Some small businesses may run completely on Excel. Larger businesses may run on a combination of "properly" designed and supported systems and Excel.
This is the other shadow IT. The spreadsheets used by people to perform mundane (or perhaps not-so-mundane) tasks. The queries to corporate databases. The programs in VBA that advertise themselves as "macros". All operating without IT's knowledge or support.
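One modest first step is simply making this shadow IT visible. Here is a sketch in Python that walks a file share and counts spreadsheets, flagging the macro-enabled extensions that can hold VBA programs. The share path is a placeholder, and only the standard library is used.

```python
from collections import Counter
from pathlib import Path

SHARE = Path("/mnt/department-share")  # placeholder path, not a real share

# Count spreadsheet files by extension across the whole share.
SPREADSHEET_TYPES = {".xls", ".xlsx", ".xlsm", ".xlsb"}
counts = Counter(p.suffix.lower() for p in SHARE.rglob("*")
                 if p.suffix.lower() in SPREADSHEET_TYPES)

# .xlsm and .xlsb workbooks can hold VBA; older .xls files can, too.
macro_capable = counts[".xlsm"] + counts[".xlsb"] + counts[".xls"]
print(counts)
print(f"{macro_capable} workbooks may contain VBA programs")
```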
Comparing two programming languages is difficult. Different languages have different capabilities and different amounts of programming "power". One line of COBOL can do the work of many lines of assembly. Ten lines of Python can do more work than ten lines of Java.
I suspect that if we could compare Excel to the corporate-approved languages of C# and Java, we would find that there is more Excel code than corporate-approved code. That is a lot of code! It means that Excel is the "dark matter" of the IT universe: existing but not observed. (I realize that this is speculation. We have no measurements for Excel code.)
Excel is the shadow technology to watch. Don't ignore file-sharing and browser-based apps; they are risks too. But keep an eye on the technology we already have and use.
Labels: Microsoft Excel, risk management, shadow IT, spreadsheets
Tuesday, August 26, 2014
With no clear IT leader, expect lots of changes
The introduction of the IBM PC was market-wrenching. Overnight, the small, rough-and-tumble market of microcomputers with diverse designs from various small vendors became large and centered around the PC standard.
From 1981 to 1987, IBM was the technology leader. IBM led in sales and also defined the computing platform.
IBM's leadership fell to Compaq in 1987, when IBM introduced the PS/2 line with its new (incompatible) hardware. Compaq delivered old-style PCs with a faster bus (the EISA bus) and, notably, the Intel 80386 processor. (IBM stayed with the older 80286 and 8086 processors, eventually consenting to provide 80386-based PS/2 units.) Compaq even worked with Microsoft to deliver newer versions of MS-DOS that recognized larger memory capacities and optical disc readers.
But Compaq did not remain the leader. Its leadership declined gradually, passing to the clone makers -- especially Dell, HP, and Gateway.
The mantle of leadership moved from a PC manufacturer to the Microsoft-Intel duopoly. The popularity of Windows, along with marketing skill and software development prowess, led to a stable configuration for Microsoft and Intel. Together, they out-competed IBM's OS/2, Motorola's 68000 processor, DEC's Alpha processor, and Apple's Macintosh line.
That configuration held for two decades, roughly from 1990 to 2010, when Apple introduced the iPhone. The genius move was not the iPhone hardware, but the App Store and iTunes, which let you easily find and install apps on your phone (and pay for them).
Now Microsoft and Intel have the same problem: after years of competing in a well-defined market (the corporate PC market), they struggle to move into the world of mobile computing. Microsoft's attempts at mobile devices (Zune, Kin, Surface RT) have flopped. Intel is desperately attempting to design and build processors that are suitable for low-power devices.
I don't expect either Microsoft or Intel to disappear. (At least not for several years, possibly decades.) The PC market is strong, and Intel can sell a lot of its traditional processors (heat radiators that happen to compute data). Microsoft is a competent player in the cloud arena with its Azure services.
But I will make an observation: for the first time in the PC era, we find that there is no clear leader for technology. The last time we were leaderless was prior to the IBM PC, in the "microcomputer era" of Radio Shack TRS-80 and Apple II computers. Back then, the market was fractured and tribal. Hardware ruled, and your choice of hardware defined your tribe. Apple owners were in the Apple tribe, using Apple-specific software and exchanging data on Apple-specific floppy disks. Radio Shack owners were in the Radio Shack tribe, using software specific to the TRS-80 computers and exchanging data on TRS-80 diskettes. Exchanging data between tribes was one of the advanced arts, and changing tribes was extremely difficult.
There were some efforts to unify computing: CP/M was the most significant. Built by Digital Research (a software company with no interest in hardware), CP/M ran on many different configurations. Yet even that effort could not span the differences in processors, memory layout, and video configurations.
Today we see tribes forming around multiple architectures. For cloud computing, we have Amazon.com's AWS, Microsoft's Azure, and Google's App Engine. With virtualization, we see VMware, Oracle's VirtualBox, the aforementioned cloud providers, and newcomer Docker as a rough analog of CP/M. Mobile computing sees Apple's iOS, Google's Android, and Microsoft's Windows RT as a (very) distant third.
With no clear leader and no clear standard, I expect each vendor to enhance their offerings and also attempt to lock in customers with proprietary features. In the mobile space, Apple's Swift and Microsoft's C# are both proprietary languages. Google's choice of Java puts them (possibly) at odds with Oracle -- although Oracle seems to be focused on databases, servers, and cloud offerings, so there is no direct conflict. Things are a bit more collegial in the cloud space, with vendors supporting OpenStack and Docker. But I still expect proprietary enhancements, perhaps in the form of add-ons.
All of this means that the technology world is headed for change. Not just change from desktop PC to mobile/cloud, but changes in mobile/cloud. The competition from vendors will lead to enhancements and changes, possibly significant changes, in cloud computing and mobile platforms. The mobile/cloud platform will be a moving target, with revisions as each vendor attempts to out-do the others.
Those changes mean risk. As platforms change, applications and systems may break or fail in unexpected ways. New features may offer better ways of addressing problems and the temptation to use those new features will be great. Yet re-designing a system to take advantage of new infrastructure features may mean that other work -- such as new business features -- waits for resources.
One cannot ignore mobile/cloud computing. (Well, I suppose one can, but that is probably foolish.) But one cannot, with today's market, depend on a stable platform with slow, predictable changes like we had with Microsoft Windows.
With such an environment, what should one do?
My recommendations:
Build systems of small components: This is the Unix mindset, with small tools to perform specific tasks. Avoid large, monolithic systems.
Use standard interfaces: Use web services (either SOAP or REST) to connect components into larger systems. Use JSON and Unicode to exchange data, not proprietary formats. (A sketch follows these recommendations.)
Hedge your bets: Gain experience in at least two cloud platforms and two mobile platforms. Resist the temptation of "corporate standards". Standards are good with a predictable technology base. The current base is not predictable, and placing your eggs in one vendor's basket is risky.
Change your position: After a period of use, examine your systems, your tools, and your talent. Change vendors -- not for everything, but for small components. (You did build your system from small, connected components, right?) Migrate some components to another vendor; learn the process and the difficulties. You'll want to know them when you are forced to move to a different vendor.
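Here is the sketch promised in the second recommendation: a small component that speaks JSON over HTTP, using only Python's standard library. The endpoint and payload are invented; the point is that any client, in any language, on any vendor's platform, can consume it.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class PriceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve an invented record as JSON, encoded as UTF-8 Unicode.
        body = json.dumps({"sku": "A-100", "price_cents": 1999}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # A standard interface: nothing here is specific to one vendor's platform.
    HTTPServer(("localhost", 8000), PriceHandler).serve_forever()
```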
Many folks involved in IT have been living in the "golden age" of a stable PC platform. They may have weathered the change from desktop to web -- which saw a brief period of uncertainty. More than likely, they think that the stable world is the norm. All that is fine -- except that, with mobile/cloud, we are not in that normal world. Be prepared for change.
Labels: Amazon.com, Apple, changes, cloud, Google, history, IBM PC, mobile, mobile/cloud, process, risk management, technology leader
Thursday, September 19, 2013
Nirvanix collapse is not a cloud failure
The cloud service Nirvanix announced this week that it was closing its doors, and directing its customers to take their data elsewhere. Now, people are claiming this is a failure of cloud technology.
Let's be clear: the problems caused by Nirvanix are not a failure of the cloud. They are a business failure for Nirvanix and a supplier failure for its clients.
Businesses rely on suppliers for many items. No business is totally independent; businesses rely on suppliers for office space, computers, printers and paper, electricity, accounting and payroll services, and many other goods and services.
Suppliers can fail. Failures can be small (a delayed delivery, or the wrong item) or large (going out of business). Businesses must evaluate their suppliers and the risk of failure. Most supplies are commodities and can easily be found through competing suppliers. (Paper, for example.)
Some suppliers are "single-source". Apple, for instance, is the only supplier for its products. IBM PC compatibles are available from a number of sources (Dell, Lenovo, and HP) but MacBooks and iPads are available only from Apple.
Some suppliers are monopolies, and therefore also single sources. Utility companies are often local monopolies; you have exactly one choice for electric power, water, and usually cable TV.
A single-source supplier is a higher risk than a commodity supplier. This is obvious; when a commodity supplier fails you can go to another supplier for the same (or equivalent) item, and when a single-source supplier fails you cannot. It is common for businesses to look for multiple suppliers for the items they purchase.
Cloud services are, for the most part, incompatible, and therefore cloud suppliers are single-source. I cannot easily move my application from Amazon's cloud to Microsoft's cloud, for example. Because they are single-source suppliers, there is a higher risk in using them.
Yet many clients of cloud services have bought the argument "when you put something into the cloud, you don't have to worry about administration or back-up". This is false. Of course you have to worry about administration and back-up. You may have less involvement, but the work is still there.
And you also have the risk of supplier failure.
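One contingency plan is to put a narrow interface between your code and the storage supplier, and to keep a second implementation working. Here is a sketch in Python: the vendor-specific class is a stub (I am not reproducing any real SDK), while the local-disk implementation is real and can serve as the tested fallback.

```python
from abc import ABC, abstractmethod
from pathlib import Path


class BlobStore(ABC):
    """The narrow interface: all the system knows about storage."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class VendorBlobStore(BlobStore):
    """Wraps one cloud supplier's SDK; the actual calls are omitted here."""

    def put(self, key: str, data: bytes) -> None:
        raise NotImplementedError("call the supplier's SDK here")

    def get(self, key: str) -> bytes:
        raise NotImplementedError("call the supplier's SDK here")


class LocalBlobStore(BlobStore):
    """A second, tested implementation -- here, just the local disk."""

    def __init__(self, root: Path) -> None:
        self.root = root

    def put(self, key: str, data: bytes) -> None:
        (self.root / key).write_bytes(data)

    def get(self, key: str) -> bytes:
        return (self.root / key).read_bytes()
```

When a supplier fails the way Nirvanix did, the migration becomes a matter of writing one new class, not rewriting the system.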
Our society chooses to regulate some suppliers. Utility companies are granted monopolies for efficiency (it makes little sense to run multiple water or power distribution networks) and are regulated to prevent failures. Some non-monopoly companies, such as banks and electricians, are regulated for the safety of the economy or of people.
Other companies, such as payroll companies, are not regulated, and clients must examine the health of a company before committing to them.
I expect that cloud services will come to be viewed the way accounting services are: important, but not so important as to need regulation. It will be up to clients to choose appropriate suppliers and to make contingency plans for failures.