Saturday, January 28, 2017

A multitude of virtual machines

In IT, the term "virtual machine" has multiple meanings. We use the term to identify pretend servers with pretend disk space and pretend devices, all hosted on a real (or "physical") server. Cloud computing and even plain old (non-cloud) data centers have multiple instances of virtual machines.
We also use the term to identify the pretend processors used by various programming languages. Java has its JVM, C# has the .NET processor, and other languages have their own imaginary processors. It is an old concept, made popular by Java in the mid-1990s but going back to the 1980s with the UCSD p-System and even into the 1960s.
The two types of virtual machines are complementary. The former duplicates the hardware (usually for a PC) and provides virtual instances of everything in a computer: disk, graphics card, network card, USB and serial ports, and even a floppy disk (if you want). The one thing it doesn't virtualize is the processor; the hypervisor (the program controlling the virtual machines) relies on the physical processor.
The latter is a fictitious processor (with a fictitious instruction set) that is emulated by software on the physical processor. It has no associated hardware, and the term "virtual processor" might have been a better choice. (I have no hope of changing the name now, but I will use the term "virtual processor" for the rest of this essay.)
It is the virtual processor that interests me. Or rather, it is the number of virtual processors that exist today.
We are blessed (or cursed) with a large number of virtual processors. Oracle's Java uses one called the "JVM". Microsoft uses one called the "CLR" (for "common language runtime"). Perl uses a virtual processor (two, actually; one for Perl 5 and a different one for Perl 6). Python uses a virtual processor. Ruby, Erlang, PHP, and JavaScript all use virtual processors.
We are awash in virtual processors. It seems that each language has its own, but that's not true. The languages Groovy, Scala, Clojure, Kotlin, JRuby, and Jython all run on JVM. Microsoft's CLR runs C#, F#, VB.NET, IronPython, and IronRuby. Even BEAM, the virtual processor for Erlang, supports "joxa", "lfe", "efene", "elixir", "eml", and others.
I will point out that not every language uses a virtual processor. C, C++, Go, and Swift all produce executable code. Their code runs on the real processor. While more efficient, an executable is bound to the processor instruction set, and you must recompile to run on a different processor.
But back to virtual processors. We have a large number of virtual processors. And I have to think: "We've been here before".
The PC world long ago settled on the Intel x86 architecture. Before it did, we had a number of processors, from Intel (the 8080, 8085, 8086, and 8088), Zilog (the Z-80), Motorola (the 6800, 6808, and 6809), and MOS (the 6502).
The mainframe world saw many processors before the rise of the IBM System/360. Its derivatives are now the standard for mainframes.
Will we converge on a single architecture for virtual processors? I see no reason for such convergence in the commercial languages. Oracle and Microsoft have nothing to gain by adopting the other's technology. Indeed, one using the other would make them beholden to the competition for improvements and corrections.
The open source community is different, and may see convergence. An independent project, providing support for open source languages, may be possible. It may also make sense, allowing the language maintainers to focus on their language-specific features and remove the burden of maintaining the virtual processor. An important factor in such a common virtual processor is the interaction between the language and the virtual processor.
Open source has both fragmented and consolidated other components over time. Sometimes we settle on a single solution, sometimes on several. The kernel settled on Linux. The windowing system settled on X, with desktop environments such as KDE built on top. File systems and compiler back ends have seen similar consolidation.
Why not the virtual processor?
Sunday, August 17, 2014
Reducing the cost of programming
Different programming languages have different capabilities. And not surprisingly, different programming languages have different costs. Over the years, we have found ways of reducing those costs.
Costs include infrastructure (disk space for compiler, memory) and programmer training (how to write programs, how to compile, how to debug). Notice that the load on the programmer can be divided into three: infrastructure (editor, compiler), housekeeping (declarations, memory allocation), and business logic (the code that gets stuff done).
Symbolic assembly code was better than machine code. In machine code, every instruction and memory location must be laid out by the programmer. With a symbolic assembler, the computer did that work.
COBOL and FORTRAN reduced cost by letting the programmer not worry about the machine architecture, register assignment, and call stack management.
BASIC (and time-sharing) made editing easy, eliminated compiling, and made running a program easy. Results were available immediately.
Today we are awash in programming languages. The big ones (C, Java, Objective-C, C++, BASIC, Python, PHP, Perl, and JavaScript -- according to Tiobe) are all good at different things. That is perhaps not a coincidence. People pick the language best suited to the task at hand.
Still, it would be nice to calculate the cost of the different languages. Or if numeric metrics are not possible, at least rank the languages. Yet even that is difficult.
One can easily state that C++ is more complex than C, and therefore conclude that programming in C++ is more expensive than C. Yet that's not quite true. Small programs in C are easier to write than equivalent programs in C++. Large programs are easier to write in C++, since the ability to encapsulate data and group functions into classes helps one organize the code. (Where 'small' and 'large' are left to the reader to define.)
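To illustrate the point about organization (with names that are purely hypothetical, invented for this essay), compare a C-style approach, where the data structure and the functions that operate on it are separate, with a C++ class that groups them together:

    // C style: the struct and the functions that manipulate it are
    // separate; every caller must remember to use them together.
    struct Account {
        double balance;
    };

    void account_deposit(Account* a, double amount) { a->balance += amount; }
    double account_balance(const Account* a) { return a->balance; }

    // C++ style: the class encapsulates the data and groups the functions,
    // which helps one organize a larger program.
    class AccountClass {
    public:
        void deposit(double amount) { balance_ += amount; }
        double balance() const { return balance_; }
    private:
        double balance_ = 0.0;
    };

    int main() {
        AccountClass account;
        account.deposit(100.0);
        return account.balance() > 0.0 ? 0 : 1;
    }

In a program of a few hundred lines the difference hardly matters; in a program of a few hundred thousand lines, the grouping pays for itself.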
Some languages are compiled and some are interpreted, and one can argue that a separate step to compile is an expense. (It certainly seems like an expense when I am waiting for the compiler to finish.) Yet languages with compilers (C, C++, Java, C#, Objective-C) all have static typing, which means that the editor built into an IDE can provide information about variables and functions. When editing a program written in one of the interpreted languages, on the other hand, one does not have that help from the editor. The interpreted languages (Perl, Python, PHP, and JavaScript) have dynamic typing, which means that the type of a variable (or function) is not fixed but can change as the program runs.
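A small (and entirely hypothetical) example of what static typing buys the editor: because the types below are declared, an IDE can offer completions for 'name' and flag the commented-out mismatch before the program is ever run; in a dynamically typed language the same mistake would surface only at run time.

    #include <string>

    int main() {
        std::string name = "Alice";
        int count = 3;

        // The editor knows 'name' is a std::string and 'count' is an int,
        // so the following line would be flagged immediately if uncommented:
        // count = name;   // error: no conversion from std::string to int

        return count;
    }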
Switching from an "expensive" programming language (let's say C++) to a "reduced cost" programming language (perhaps Python) is not always possible. Programs written in C++ perform better. (On one project, the C++ program ran for several hours; the equivalent program in Perl ran for several days.) C and C++ let one have access to the underlying hardware, something that is not possible in Java or C# (at least not without some add-in trickery, usually involving... C++.)
The line between "cost of programming" and "best language" quickly blurs, and the difficulty of nailing down costs for the different dimensions of programming (program design, speed of coding, speed of execution, ability to control hardware) gets in our way.
In the end, I find that it is easy to rank languages in the order of my preference rather than in an unbiased scheme. And even my preferences are subject to change, given the nature of the project. (Is there existing code? What are other team members using? What performance constraints must we meet?)
Reducing the cost of programming is really about trade-offs. What capabilities do we desire, and what capabilities are we willing to cede? To switch from C++ to C# may mean faster development but slower performance. To switch from PHP to Java may mean better organization of code through classes but slower development. What is it that we really want?
Wednesday, September 11, 2013
Specialization can be good or bad
Technologies have targets. Some technologies, over time, narrow their targets. Two examples are Windows and .NET.
Windows was, at first, designed to run on multiple hardware platforms. The objective was an "operating environment" that would give Microsoft an opportunity to sell software for multiple hardware platforms. There were versions of Windows for the Zenith Z-100 and the DEC Rainbow; these computers had Intel processors and ran MS-DOS but used architectures different from the IBM PC. Later versions of Windows ran on PowerPC, DEC Alpha, and MIPS processors. Those variants have all ceased; Microsoft supports only Intel PC architecture for "real" Windows and the new Windows RT variant for ARM processors, and both of these run on well-defined hardware.
The .NET platform has also narrowed. Instead of machine architectures, the narrowing has been with programming languages. When Microsoft released .NET, it supplied compilers for four languages: C++, C#, Visual Basic, and Visual J#. Microsoft also made bold proclamations about the .NET platform supporting multiple languages; the implications were that other vendors would build compilers and that Java was a "one-trick pony".
Yet the broad support for languages has narrowed. It was clear from the start that Microsoft was supporting C# as "first among equals". The documentation for C# was more complete and better organized than the documentation for other languages. Other vendors did provide compilers for other languages (and some still do), but the .NET world is pretty much C# with a small set of VB fans. Microsoft's forays into Python and Ruby (the IronPython and IronRuby engines) have been spun off as separate projects; the only "expansion" language from Microsoft is F#, used for functional programming.
Another word for this narrowing of technology is "specialization". Microsoft focused Windows on the PC platform; the code became specialized. The .NET ecosystem is narrowing to C#; our code is becoming specialized.
Specialization has its advantages. Limiting Windows to the PC architecture reduced Microsoft's costs and enabled them to optimize Windows for the platform. (Later, Microsoft would become strong enough to specify the hardware platform, and they made sure that advances in PC hardware meshed with improvements in Windows.)
Yet specialization is not without risk. When one is optimized for an environment (such as PC hardware or a language), it is hard to move to another environment. Thus, Windows is a great operating system for desktop PCs but a poor fit on tablets. Windows 8 shows that significant changes are needed to move to tablets.
Similarly, specializing in C# may lead to significant hurdles when new programming paradigms emerge. The .NET platform is optimized for C# and its object-oriented roots. Moving to another programming paradigm (such as functional programming) may prove difficult. The IronPython and IronRuby projects may provide some leverage, as may the F# language, but these are quite small compared to C# in the .NET ecosystem.
Interestingly, the "one-trick pony" environment for Java has expanded to include Clojure, Groovy, and Scala, as well as Jython and JRuby. So not all technologies narrow, and Oracle's Java may avoid the trap of over-specialization.
Picking the target for your technology is a delicate balance. A broad set of targets leads to performance issues and markets with little return. A narrow set of targets reduces costs but foregoes market penetration (and revenue) and may leave you ill-prepared for a paradigm shift. You have to chart your way between the two.
I didn't say it would be easy.
Monday, May 20, 2013
Where do COBOL programmers come from?
In the late Twentieth Century, COBOL was the standard language for business applications. There were a few other contenders (IBM's RPG, assembly language, and DEC's DIBOL) but COBOL was the undisputed king of the business world. If you were running a business, you used COBOL.
If you worked in the data processing shop of a business, you knew COBOL and programmed with it.
If you were in school, you had a pretty good chance of being taught COBOL. Not everywhere, and not during the entire second half of the century. I attended an engineering school; we learned FORTRAN, Pascal, and assembly language. (We also used the packages SPSS and CSMP.)
Schools have, for the most part, stopped teaching COBOL. A few do, but most moved on to C++, or Java, or C#. A number are now teaching Python.
Businesses have lots of COBOL code. Lots and lots of it. And they have no reason to convert that code to C++, or Java, or C#, or the "flavor of the month" in programming languages. Business code is often complex, and working business code is precious. One modifies the code only when necessary, and one converts a system to a new language only at the utmost need.
But that code, while precious, does have to be maintained. Businesses change and those changes require fixes and enhancements to the code.
Those changes and enhancements are made by COBOL programmers.
Of which very few are being minted these days. Or for the past two decades.
Which means that COBOL programmers are, as a resource, dwindling.
Now, I recognize that the production of COBOL programmers has not ceased. There are three sources that I can name with little thought.
First are the schools (real-life and on-line) that offer courses in COBOL. Several colleges still teach it, and several on-line colleges offer it.
Second are the offshore programming companies. Talent is available through outsourcing.
Third are the existing programmers who learn COBOL. A programmer who knows Visual Basic and C++, for example, may choose to learn COBOL (perhaps through an on-line college).
Yet I believe that, in any given year, the number of new COBOL programmers is less than the number of retiring COBOL programmers. Which means that the talent pool is now at risk, and therefore business applications may be at risk.
For many years businesses relied on the ubiquitous nature of COBOL to build their systems. I'm sure that the managers considered COBOL to be a "safe" language: stable and reliable for many years. And to be fair, it was. COBOL has been a useful language for almost half a century, a record that only FORTRAN can challenge.
The dominance of COBOL drove a demand for COBOL programmers, which in turn drove a demand for COBOL training. Now, competing languages are pulling talent out of the "COBOL pool", starving the training. Can businesses be far behind?
If you are running a business, and you rely on COBOL, you may want to think about the future of your programming talent.
* * * * *
Such an effect is not limited to COBOL. It can happen to any popular language. Consider Visual Basic, a dominant language in Windows shops in the 1990s. It has fallen out of favor, replaced by C#. Or consider C++, which like COBOL has a large base of installed (and working) code. It, too, is falling out of favor, albeit much more slowly than Visual Basic or COBOL.
Tuesday, August 28, 2012
The deception of C++'s 'continue' and 'break'
Pick up any C++ reference book, visit any C++ web site, and you will see that the 'continue' and 'break' keywords are grouped with the loop constructs. In many ways it makes sense, since you can use these keywords with only those constructs.
But the more I think about 'continue' and 'break', the more I realize that they are not loop constructs. Yes, they are closely associated with 'while' and 'for' and 'case' statements, but they are not really loop constructs.
Instead, 'continue' and 'break' are variations on a different construct: the 'goto' keyword.
The 'continue' and 'break' statements in loops bypass blocks of code. 'continue' transfers control to the end of the loop block and allows the next iteration to continue. 'break' transfers control to the end of the loop block and forces the loop to end (allowing code after the loop to execute). These are not loop operations but 'transfer of control' operations, or 'goto' operations.
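To make the "goto in disguise" claim concrete, here is a minimal sketch (the function and its logic are my own invention, not drawn from any particular codebase): a loop written with 'continue' and 'break', followed by the same loop with those transfers of control spelled out as explicit 'goto' statements.

    #include <vector>

    // Sum the positive values, stopping before the sum would exceed a limit.
    int sum_positives(const std::vector<int>& values, int limit) {
        int sum = 0;
        for (int v : values) {
            if (v <= 0)
                continue;              // skip the rest of this iteration
            if (sum + v > limit)
                break;                 // abandon the loop entirely
            sum += v;
        }
        return sum;
    }

    // The same function with the jumps written out: 'continue' is a goto
    // to the end of the loop body, and 'break' is a goto past the loop.
    int sum_positives_goto(const std::vector<int>& values, int limit) {
        int sum = 0;
        for (int v : values) {
            if (v <= 0)
                goto next_iteration;
            if (sum + v > limit)
                goto after_loop;
            sum += v;
        next_iteration: ;
        }
    after_loop:
        return sum;
    }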
Now, modern programmers have declared that 'goto' operations are evil and must never, ever be used. Therefore, 'continue' and 'break', as 'goto' in disguise, are evil and must never, ever be used.
(The 'break' keyword can be used in 'switch/case' statements, however. In that context, a 'goto' is exactly the construct that we want.)
Back to 'continue' and 'break'.
If 'continue' and 'break' are merely cloaked forms of 'goto', then we should strive to avoid their use. We should seek out the use of 'continue' and 'break' in loops and re-factor the code to remove them.
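As one possible re-factoring (only a sketch; a flag variable is not the only option, and whether it reads better is part of the argument to weigh), the loop from the earlier sketch can be rewritten so that the skip becomes an ordinary 'if' guard and the early exit is folded into the loop condition:

    #include <cassert>
    #include <cstddef>
    #include <vector>

    // The same computation as before, with no 'continue' or 'break':
    // the skip is an 'if' guard and the early exit is part of the condition.
    int sum_positives_refactored(const std::vector<int>& values, int limit) {
        int sum = 0;
        bool over_limit = false;
        for (std::size_t i = 0; i < values.size() && !over_limit; ++i) {
            int v = values[i];
            if (v > 0) {                    // formerly: if (v <= 0) continue;
                if (sum + v > limit)
                    over_limit = true;      // formerly: break;
                else
                    sum += v;
            }
        }
        return sum;
    }

    int main() {
        std::vector<int> values = {3, -1, 4, 1, 5};
        assert(sum_positives_refactored(values, 8) == 8);  // 3 + 4 + 1
        return 0;
    }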
I will be looking at code in this light, and searching for the 'continue' and 'break' keywords. When working on systems, I will make their removal one of my metrics for the improvement of the code.
Saturday, January 7, 2012
Predictions for 2012
Happy new year!
The turning of the year provides a time to pause, look back, and look ahead. Looking ahead can be fun, since we can make predictions.
Here are my predictions for computing in the coming year:
With the rise of mobile apps, we will see changes in project requirements and in the desires of candidates.
The best talent will work on mobile apps. The best talent will -- as always -- work on the "cool new stuff". The "cool new stuff" will be mobile apps. The C#/.NET and Java applications will be considered "that old stuff". Look for the bright, creative programmers and designers to flock to companies building mobile apps. Companies maintaining legacy applications will have to hire the less enthusiastic workers.
Less funding for desktop applications. Desktop applications will be demoted to "legacy" status. Expect a reduced emphasis on their staffing. These projects will be viewed as less important to the organization, and will see less training, less tolerance for "Fast Company"-style project teams, and lower compensation. Desktop projects will be the standard, routine, bureaucratic (and boring) projects of classic legacy shops. The C# programmers will be sitting next to, eating lunch with, and reminiscing with, the COBOL programmers.
More interest in system architects. Mobile applications are a combination of front end apps (the iPhone and iPad apps) and back-end systems that store and supply data. Applications like Facebook and Twitter work only because the front end app can call upon the back end systems to obtain data (updates submitted by other users). Successful applications will need people who can visualize, describe, and lead the team in building mobile applications.
More interest in generalists. Companies will look to bring on people skilled in multiple areas (coding, testing, and user interfaces). They will be less interested in specialists who know a single area -- with a few exceptions of the "hot new technologies".
Continued fracturing of the tech world. Amazon.com, Apple, and Google will continue to build their walled gardens of devices, apps, and media. Music and books available from Amazon.com will not be usable in the Apple world (although available on the iPod and iPad in the Amazon.com Kindle app). Music and books from Apple will not be available on Amazon.com Kindles and Google devices. Consumers will continue to accept this model. (Although, as with 33 RPM LPs and 45 RPM singles, consumers will eventually want music and books on multiple devices. But that is a year or two away.)
Cloud computing will be big, popular, and confused. Different cloud suppliers offer different types of cloud services. Amazon.com's EC2 offering is a set of virtual machines that allow one to "build up" from there, installing operating systems and applications. Microsoft's Azure is a set of virtual machines with Windows/.NET, and one may build applications starting at a higher level than Amazon's offering. Salesforce.com offers their cloud platform that lets one build applications at an even higher level. Lots of folks will want cloud computing, and vendors will supply it -- in the form that the vendor offers. When people from different "clouds" meet, they will be confused that the "other guy's cloud" is different from theirs.
Virtualization will fade into the background. It will be useful in large shops, and it will not disappear. It is necessary for cloud computing. But it will not be the big star. Instead, it will be a quiet, necessary technology, joining the ranks of power management, DASD management, telecommunications, and network administration. Companies will need smart, capable people to make it work, but they will be reluctant to pay for them.
Telework will exist, quietly. I expect that the phrase "telework" will be reserved for traditional "everyone works in the office" companies that allow some employees to work in remote locations. For them, the default will be "work in the office" and the exception will be "telework". In contrast, small companies (especially start-ups) will leverage faster networks, chat and videoconferencing, mobile devices, and social networks. Their standard mode of operation will be "work from wherever" but they won't think of themselves as offering "telework". From their point of view, it will simply be "how we do business", and they won't need a word to distinguish it. (They may, however, create a word to describe folks who insist on working in company-supplied space every day. Look for new companies to call these people "in-house employees" or "residents".)
Understand the sea change of the iPad. The single-app interface works for people consuming information. The old-fashioned multi-windowed desktop interface works for people composing and creating information. This change leads to a very different approach to the design of applications. This year people will understand the value of the "swipe" interface and the strengths of the "keyboard" interface.
Voice recognition will be the hot new tech. With the success of "Siri" (and Android's voice recognizer "Majel"), expect interest in voice recognition technology and apps designed for voice.
Content delivery becomes important. Content distributors (Amazon.com, Google, and Apple) become more important, as they provide exclusive content within their walled gardens. The old model of a large market in which anyone can write and sell software will change to a market controlled by the delivery channels. The model becomes one similar to the movie industry (a few studios producing and releasing almost all movies) and the record industry (a few record labels producing and releasing almost all music) and the book industry (a few publishing houses... you get the idea).
Content creation becomes more professional. With content delivery controlled by the few major players, the business model becomes less "anyone can put on a show" and more of "who do you know". Consumers and companies will have higher expectations of content and the abilities of those who prepare it.
Amateur producers will still exist, but with less perceived value. Content that is deemed "professional" (that is, for sale on the market) will be developed by professional teams. Other content (such as the day-to-day internal memos and letters) will be composed by amateur content creators: the typical office worker equipped with a word processor, a spreadsheet, and e-mail will be viewed as less important, since they provide no revenue.
Microsoft must serve the professional content creators. Microsoft's business has been to supply tools to amateur content creators. Their roots of BASIC, DOS, Windows, Office, and Visual Basic let anyone (with or without specific degrees or certifications) create content for the market. With the rise of the "professional content creator", expect Microsoft to supply tools labeled (and priced) for professionals.
Interest in new programming languages. Expect a transition from the object-oriented languages (C++, Java, C#) to a new breed of languages that introduce ideas from functional programming. Languages such as Scala, Lua, Python, and Ruby will gain in popularity. C# will have a long life -- but not the C# we know today. Microsoft has added functional programming capabilities to the .NET platform, and modified the C# language to use them. C# will continue to change as Microsoft adapts to the market.
The new year brings lots of changes and a bit of uncertainty, and that's how it should be.