Showing posts with label technology. Show all posts

Monday, January 3, 2022

The biggest gains of Apple's M processors are behind us

Improvements in hardware are not linear. If we look at the performance of hardware over time, we can see that performance improvements follow a pattern: a sharp rise in performance followed by a period of little improvement. (A graph looks like a staircase, with a repeating pattern of "rise, flat, rise, flat".)

The first implementation of a change provides significant increases. Over time, we refine the improvements and gain additional increases. But those later increases are smaller. Eventually, subsequent refinements provide minimal improvements. So we move on to other ideas.

The history of personal computers follows this pattern. We have made a number of changes to hardware that have improved performance. Each of those changes yielded a large initial gain, and then gradually diminishing improvements.

We have increased the clock speed.

We changed memory technology from "core" memory (ferrous rings) to transistor-based memory (static at first, and then dynamic memory).

We have added caches, to store values in the processor, reducing the dependence on memory. We liked the idea of caches so much that we did it more than once. Processors now have three levels of caches.

We have off-loaded work to (smarter) devices. Devices now have their own processors and can perform tasks independently of the CPU.

We have increased the number of CPU cores, which improves performance for systems with multiple processes and multiple threads. (Which is just about every system we have today.)

Each of these changes improved performance. A large step up at first, and then smaller increases.

Now Apple has used another method to improve performance: reducing the distance between components with a system-on-a-chip design. The M1 chips include the major components of the computer: CPU, GPU, memory, and more.

Yet the overall pattern of improvements will hold with this new design. The first M1 chip will have significant improvements over the older design of discrete components. The M1 Pro and M1 Max will have improvements over the M1 chip, but not larger than the initial M1 gains.

Later chips, such as the M2, M2 Pro, and even the M3, will have gains, but less and less (in terms of percentages) than the previous chips. The performance curve, after a sharp rise with the M1 chip, will flatten. Apple will have entered the "plain of modest gains" phase.

Apple's M1 chips are nice. They provide good performance. Newer versions will be better: faster, more powerful. But the biggest increases, I think, are already behind us.

Friday, September 28, 2018

Macbooks are not an incentive

I've seen a number of job postings that include the line "all employees use MacBooks".

I suppose that this is intended as an enticement. I suppose that a MacBook is considered a "perk", a benefit of working at the company. Apple equipment is considered "cool", for some reason.

I'm not sure why.

MacBooks in 2018 are decent computers, but I find that they are inferior to other computers, especially when it comes to development.

I've been using computers for quite some time, and programming for most of that time. I've used MacBooks and Chromebooks and modern PCs. I've used older PCs and even ancient PCs with IBM's Model M keyboard. I've worked on IBM's System/23 (which was the origin of the first IBM PC keyboard). I have even used model 33 ASR Teletype terminals, which are large mechanical beasts that print uppercase on roll paper and do a poor job of it. So I know what I like.

And I don't like Apple's MacBook and MacBook Pro computers. I dislike the keyboard; I want more travel in the keys. I dislike the touchpad in front of the keyboard; I prefer the small pointing stick embedded in Lenovo and some Dell laptop keyboards. I dislike Apple's displays, which are too bright and too reflective. I want "matte" finish displays which hide reflections from light sources such as windows and ceiling lights.

My main client provides a computer, one that I must use when working for them. The computer is a Dell laptop, with a high-gloss display and a keyboard that is a bit better than current Apple keyboards, but not by much. I supplement the PC with a matte-finish display and a Matias "Quiet Pro" keyboard. These make the configuration much more tolerable.

Just as I "fixed" the Dell laptop, I could "fix" a MacBook Pro with an additional keyboard and display. But once I do that, why bother with the MacBook? Why not use a Mac Mini, or for that matter any small form-factor PC? The latter would probably offer just as much memory and disk, and more USB ports. And cost less. And run Linux.

It may be some time before companies realize that developers have strong opinions about the equipment that they use. I think that they will, and when they do, they will provide developers with choices for equipment -- including the "bring your own" option.

And it may be some time before developers realize that Apple MacBooks are not the best for development. Apple devices have a lot of glamour, but glamour doesn't get the job done -- at least not for me. Apple designs computers for visual appeal, and I need good ergonomic design.

I'm not going to forbid developers from using Apple products, or demand that everyone use the same equipment that I use. I will suggest that developers try different equipment, see which devices work for them, and understand the benefits of those devices. Pick your equipment for the right reasons, not because it has a pretty logo.

In the end, I find the phrase "all employees use MacBooks" to be a disincentive, a reason to avoid a particular gig. Because I would rather be productive than cool.

Tuesday, July 26, 2016

The changing role of IT

The original focus of IT was efficiency and accuracy. Today, the expectation still includes efficiency and accuracy, yet adds increased revenue and expanded capabilities for customers.

IT has been with us for more than half a century, if you count IT as not only PCs and servers but also minicomputers, mainframes, and batch processing systems for accounting and finance.

Computers were originally large, expensive, and fussy beasts. They required a whole room to themselves. Mainframes cost hundreds of thousands of dollars (if not millions). They needed a coterie of attendants: operators, programmers, service technicians, and managers.

Even the early personal computers were expensive. A PC in the early 1980s cost three to five thousand dollars. They didn't need a separate room, but they were a significant investment.

The focus was on efficiency. Computers were meant to make companies more efficient, processing transactions and generating reports faster and more accurately than humans.

Because of their cost, we wanted computers to operate as efficiently as possible. Companies that purchased mainframes would monitor CPU and disk usage to ensure that they were operating in the ninety-percent range. If usage was higher than that, they knew they needed to expand their system; if lower, they had spent too much on hardware.

Today, we focus less on efficiency and more on growing the business. We view automation and big data as mechanisms for new services and ways to acquire new customers.

That's quite a shift from the "spend just enough to print accounting reports" mindset. What changed?

I can think of two underlying changes.

First, the size and cost of computers have dropped. A cell phone fits in your pocket and costs less than a thousand dollars. Laptop PCs can be acquired for similar prices; Chromebooks for significantly less. Phones, tablets, Chromebooks, and even laptops can be operated by a single person.

The drop in cost means that we can worry less about internal efficiency. Buying a mainframe computer that was too large was an expensive mistake. Buying an extra laptop is almost unnoticed. Investing in IT is like any other investment, with a potential return of new business.

Yet there is another effect.

In the early days of IT (from the 1950s to the 1980s), computers were mysterious and almost magical devices. Business managers were unfamiliar with computers. Many people weren't sure that computers would remain tame, and some feared that they would take over (the company, the country, the world). Managers didn't know how to leverage computers to their full extent. Investors were wary of the cost. Customers resisted the use of computer-generated cards that read "do not fold, spindle, or mutilate".

Today, computers are not mysterious, and certainly not magical. They are routine. They are mundane. And business managers don't fear them. Instead, managers see computers as a tool. Investors see them as equipment. Customers willingly install apps on their phones.

I'm not surprised. The business managers of the 1950s grew up with manual processes. Senior managers might have remembered an age without electricity.

Today's managers are comfortable with computers. They used them as children, playing video games and writing programs in BASIC. The thought that computers can assist the business in various tasks is a natural extension of that experience.

Our view of computers has shifted. The large, expensive, magical computation boxes have shrunk into small, flexible, and powerful ones. Simply owning (or leasing) a mainframe once provided strategic advantage through intimidation; now everyone can leverage server farms, networks, cloud computing, and real-time updates. Owning (or leasing) a server farm or a cloud network isn't enough to impress; managers, customers, and investors look for business results.

With a new view of computers as mundane, it's no surprise that businesses look at them as a way to grow.

Tuesday, July 19, 2016

How programming languages change

Programming languages change. That's not news. Yet programming languages cannot change arbitrarily; the changes are constrained. We should be aware of this, and pick our technology with this in mind.

If we think of a programming language as a set of features, then programming languages can change in three ways:

Add a feature
Modify a feature
Remove a feature

The easiest change (that is, the type with the least resistance from users) is adding a feature. That's no surprise; it allows all of the old programs to continue working.

Modifying an existing feature or removing a feature is a difficult business. It means that some programs will no longer work. (If you're lucky, they won't compile, or the interpreter will reject them. If you're not lucky, the compiler or interpreter will accept them but process them differently.)
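A concrete illustration, sketched in Python (a language outside this post, but one with a famous example of a removed feature): Python 3 dropped the Python 2 print statement in favor of a print() function, so old programs are rejected outright by the newer parser.

```python
# Python 2 accepted the "print" statement; Python 3 removed it in favor
# of the print() function. A feature removal breaks old programs.
old_program = 'print "hello"'   # legal Python 2 source

try:
    # Ask the Python 3 parser to compile the old-style program.
    compile(old_program, "<old>", "exec")
    rejected = False
except SyntaxError:
    rejected = True

# In this case we are "lucky": the program fails to compile rather
# than silently behaving differently.
print(rejected)  # True
```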

So as a programming language changes, the old features remain. Look inside a modern Fortran compiler and you will find FORMAT statements and arithmetic IF constructs, elements of Fortran's early days.

When a programming language changes enough, we change its name. We (the tech industry) modified the C language to add function prototypes, and in doing so we called the revised language "ANSI C". When Stroustrup enhanced C to handle object-oriented concepts, he called it "C with Classes". (We've since named it "C++".)

Sometimes we change not the name but the version number. Visual Basic 4 was quite different from Visual Basic 3, and Visual Basic 5 was quite different from Visual Basic 4 (two of the few examples of non-compatible upgrades). Yet the later versions retained the flavor of Visual Basic, so keeping the name made sense.

Perl 6 is different from Perl 5, yet it still runs old code with a compatibility layer.

Fortran can add features but must remain "Fortranish", otherwise we call it "BASIC" or "FOCAL" or something else. Algol must remain Algol or we call it "C". An enhanced Pascal is called "Object Pascal" or "Delphi".

Language names bound a set of features for the language. Change the feature set beyond the boundary, and you also change the name of the language. Which means that a language can change only so much, in only certain dimensions, while remaining the same language.

When we start a project and select a programming language, we're selecting a set of features for development. We're locking ourselves into a future, one that may expand over time -- or may not -- but will remain centered over its current point. COBOL will always be COBOL, C++ will always be C++, and Ruby will always be Ruby.

A lot of this is psychology. We certainly could make radical changes to a programming language (any language) and keep the name. But while we *could* do this, we don't. We make small, gradual changes. The changes to programming languages (I hesitate to use the words "improvements" or "progress") are glacial in nature.

I think that tells us something about ourselves, not the technology.

Tuesday, July 7, 2015

I can write any language in FORTRAN

Experienced programmers, when learning a new programming language, often use the patterns and idioms of their old language in the new one. Thus, a programmer experienced in Java and learning Python will write code that, while legal Python, looks and smells like Java. The code is not "Pythonic".

When writing code in a new programming language, I often write as if it were C. As I learn the language, I change my patterns to match it. The common saying is that a good programmer can write any language in FORTRAN. It's an old saying, probably from the age when most programmers learned COBOL and FORTRAN.

When the IT world shifted from structured programming languages (C, BASIC, FORTRAN) to object-oriented programming languages (C++, Java, C#) much of the code written in the new languages was in the style of the old languages. Eventually, we programmers learned to write object-oriented code.
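A small sketch of the idea, using Python (the function names are my own invention): both versions below are legal Python and compute the same result, but the first reads like transplanted Java while the second is idiomatic, "Pythonic" code.

```python
# Java-flavored Python: an explicit index loop with manual accumulation.
def squares_javaish(values):
    result = []
    i = 0
    while i < len(values):
        result.append(values[i] * values[i])
        i = i + 1
    return result

# Pythonic version: a list comprehension says the same thing directly.
def squares_pythonic(values):
    return [v * v for v in values]

print(squares_javaish([1, 2, 3]))   # [1, 4, 9]
print(squares_pythonic([1, 2, 3]))  # [1, 4, 9]
```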

Today, most programmers learn C# or Java as their first language. Perhaps we should revise our pithy saying: a good programmer can write any language in Java. (Or C#, if you prefer.)

Why is this important? Why think about the transition from structured programming ("procedural programming") to object-oriented programming?

Because we're going through another transition. Two, actually.

The first is the transition from object-oriented programming to functional programming. This is a slow change, one that will take several years and perhaps decades. Be prepared to see more about functional programming: articles, product releases, and services in programming platforms. And be prepared to see lots of functional programs written in a style that matches object-oriented code.

The second is the transition from web applications to mobile/cloud applications. This change is faster and is already well underway. Yet be prepared to see lots of mobile/cloud applications architected in the style of web applications.

Eventually, we will learn to write good functional programs. Eventually, we will learn to design good cloud systems. Some individuals (and organizations) will make the transition faster than others.

What does this mean for the average programmer? For starters, be aware of one's own skills. Second, have a plan to learn the new programming paradigms. Third, be aware of the skills of a hiring organization. When a company offers you a job, understand how that company's level matches your own.

What does this mean for companies? First, we have yet another transition in the IT world. (It may seem that the IT world has a lot of these inconvenient transitions.) Second, develop a plan to change your processes to use the new technology. (The changes are happening whether you like them or not.) Third, develop a plan to help your people learn the new technologies. Companies that value skilled employees will plan for training, pilot programs, and migration efforts. Companies that view employees as expensive resources that are little more than cost centers will choose to simply engage contractors with the skills and lay off those currently on their payrolls.

And in the short term, be prepared to see a lot of FORTRAN.

Tuesday, May 26, 2015

When technology is not the limit

The early days of computing were all about limits. Regardless of the era you pick (mainframe, minicomputer, PC, client-server, etc.) the systems were constrained and imposed hard limits on computations. CPUs were limited in speed. Memory was limited to small sizes. Disks for storage were expensive, so people used the smallest disk they could and stored as much as possible on cheaper tape.

These limitations showed through to applications.

Text editors could handle only a small amount of text at one time. Some were limited to files of that size or smaller. Others would "page out" a block of text and "page in" the next, letting you work on one section of the text at a time; but the page operations worked only in the forward direction, with no going back to a previous block.

Compilers would allow for programs of only limited sizes (the limits dependent on the memory and storage available). Early FORTRAN compilers used only the first six characters of identifiers (variable names and function names) and ignored the remainder, so the variables DVALUES1 and DVALUES2 were considered to be the same variable.

In those days, programming required knowledge not only of the language but also of the system limitations. The constraints were a constant pressure, a ceiling that could not be exceeded. Such limitations drove much innovation; we were constantly yearning for more powerful instruction sets, larger memories, and more capacious and faster storage. Over time, we achieved those goals.

The history of the PC shows such growth. The original IBM PC was equipped with an 8088 CPU, a puny (by today's standards) processor that could not even handle floating-point numbers. While the processor could address 1 MB of memory, the computer came equipped with only 64 KB of RAM and 64 KB of ROM. The display was a simple arrangement: either high-resolution monochrome text or low-resolution color graphics.

Over the years, PCs acquired more powerful processors, larger address spaces, more memory, larger disk drives (well, larger capacities but smaller physical forms), and better displays.

We are at the point where a number of applications have been "solved", that is, they are not constrained by technology. Text editors can hold the entire document (up to several gigabytes) in memory and allow sophisticated editing commands. The limits on editors have been expanded such that we do not notice them.

Word processing, too, has been solved. Today's word processing systems can handle just about any function: wrapping text to column widths, accounting for typeface variations and kerning, indexing and auto-numbering, ... you name it.

Audio processing, e-mail, web browsing, ... all of these have enough technology to get the job done. We no longer look for a larger processor or more memory to solve our problems.

Which leads to an interesting conclusion: When our technology can handle our needs, an advance in technology will not help us.

A faster processor will not help our word processors. More memory will not help us with e-mail. (When one drives in suburbia on 30 MPH roads, a Honda Civic is sufficient, and a Porsche provides no benefits.)

I recognize that there are some applications that would benefit from faster processors and "more" technology. Big data (possibly, although cloud systems seem to be handling that). Factorization of large numbers, for code-breaking. Artificial intelligence (although that may be more a problem of algorithms than raw hardware).

For the average user, today's PCs, Chromebooks, and tablets are good enough. They get the job done.

I think that this explains the longevity of Windows XP. It was a "good enough" operating system running on "good enough" hardware, supporting "good enough" applications.

Looking forward, people will have little incentive to switch from 64-bit processors to larger models (128-bit? super-scaled? variable-bit?) because they will offer little in the way of an improved experience.

The market pressure for larger systems will evaporate. What takes its place? What will drive innovation?

I see two things to spur innovation in the market: cost and security. People will look for systems with lower cost. Businesses especially are price-conscious and look to reduce expenses.

The other area is security. With more "security events" (data exposures, security breaches, and viruses) people are becoming more aware of the need for secure systems. Increased security (if there is a way to measure security) will be a selling point.

So instead of faster processors and more memory, look for cheaper systems and more secure (possibly not cheaper) offerings.

Thursday, November 6, 2014

A New Microsoft And A New Tech World

Things are just not what they used to be.

In the good old days, Microsoft defined technology for business and set the pace for change. They had built an empire on Windows and products that worked with Windows.

Not only did Microsoft build products, they built their own versions of things to work in their world. Microsoft adopted the attitude of "not invented here": they eschewed popular products and built their own versions.

They built their own operating system (DOS at first, then Windows). They built their own word processor, their own spreadsheet (two actually: Multiplan was their first attempt), their own database manager, their own presentation software. They built their own browser. They even constructed their own version of a "ZIP" file: OLE Structured Storage.

All of these technologies had one thing in common: they worked within the Microsoft world. Microsoft Office ran on Windows - and nothing else. Internet Explorer worked on Windows - and nothing else. Visual Studio ran on Windows - and... you get the idea. Microsoft technology worked with Microsoft technology and nothing else.

For two decades this strategy worked. And then the world changed.

Microsoft has shifted away from the "all things Microsoft" approach. Consider:

  • Microsoft Word uses an open (well, open-ish) format of ZIP and XML
  • So does Microsoft Excel
  • Visual Studio supports projects that use JavaScript, HTML, and CSS
  • Microsoft Azure supports Linux, PHP, Python, and node.js
  • Office 365 apps are available for Android and iOS

These are significant changes. Microsoft is no longer the self-centered (one might say solipsistic) entity that it once was.

We must give up our old prejudices. The idea that Microsoft technology is always good ("No one was fired for buying Microsoft") is not true, and never was. The weak reception of the Surface tablet and Windows phones shows that. (The anemic reception of Windows RT also shows that.)

We must also give up the notion that all Microsoft technology is large, expensive, bug-ridden, and difficult to maintain. It may be fun to hate on Microsoft, but it is not practical. Microsoft Azure is a capable set of tools. Their 'Express' products may be limited in functionality but they do work, and without much effort or expense.

The bigger change is the shift away from monoculture technology. We're entering an age of diverse technology. Instead of servers running Microsoft Windows and Microsoft databases and Microsoft applications with clients running Microsoft Windows and Microsoft browsers using Microsoft authentication, we have Microsoft applications running in Amazon.com's cloud with users holding Android tablets and Apple iPads.

Microsoft is setting a new standard for IT: multiple vendors, multiple technologies, and interoperability. What remains to be seen is how other vendors will follow.

Sunday, May 22, 2011

Discoveries about discovery

Recent developments in tech have created an automated process to handle "discovery", the process of reviewing materials for a legal case.

One might think that law firms will adopt the technology, as a way to reduce costs. Or one might think that law firms will *not* adopt the tech, believing that they are traditional and unwilling to change. Or perhaps one might think that law firms are risk-averse, and do not want to try new tech that could miss something and cause the loss of a case.

I have a different outlook. I think law firms will avoid the tech of automated discovery, for economic reasons.

Law firms use the time-and-materials business model. They bill by the hour: the more hours, the higher the bill. Law firms have constructed their hourly rate to cover their costs (including labor) and to provide profit. Thus, each billable hour generates profit (not just revenue) for the firm. A reduction in labor (billable hours) equates to a reduction in profit.

Industries that have adopted technology (specifically to reduce labor hours) are industries that sell products at prices fixed by the market. Automobiles, hair dryers, books, computers... these are all sold at the market price. Profit is revenue less the cost of goods and manufacturing. The costs of labor and goods do not drive the price, and a manufacturer must manage costs to live within the market price. A reduction in labor hours equates to an increase in profit.
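The contrast can be sketched with some arithmetic (all figures below are invented for illustration):

```python
# Time-and-materials: revenue scales with hours, so fewer billable
# hours means less profit.
rate, cost_per_hour = 300, 200          # hypothetical hourly figures

def tm_profit(hours):
    return (rate - cost_per_hour) * hours

# Market-priced goods: the price is fixed by the market, so fewer
# labor hours per unit means more profit.
price, materials = 500, 100             # hypothetical per-unit figures

def market_profit(hours):
    return price - materials - cost_per_hour * hours

print(tm_profit(100), tm_profit(80))       # 10000 8000: profit falls
print(market_profit(2), market_profit(1))  # 0 200: profit rises
```

Cutting labor hurts the first business and helps the second, which is the whole argument in two functions.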

Thus, we can expect that businesses selling goods (or services) at prices dictated by the market will adopt cost-reducing techniques and technologies. They will computerize accounting systems, automate assembly lines with robots, and outsource software development.

We can also expect that businesses selling services on a time-and-materials basis will *not* adopt technologies that reduce labor hours. (They may adopt techniques that reduce labor costs, replacing highly paid workers with lower-wage workers. But a reduction in billable hours is unlikely.)

What of this can we apply to software development?

The software development industry is a complex one. Some software is created and sold on a time-and-materials basis. Some is sold in the market. Some is given away. Some software that is sold is not sold in a free market; Microsoft enjoys a monopoly position with Windows and Office. (A monopoly that is perhaps less strong now than ten years ago, yet still part of the complexities of the market.)

Companies that build software with the time-and-materials business model have little incentive to reduce the workload in their projects. These projects benefit from high labor efforts, and therefore we can expect them to use little in the way of effort-reducing techniques. Companies that build software for sale on the open market have strong incentives to minimize their costs. Cost reduction methods include:
  • outsourcing
  • agile development (pair programming, automated tests)
  • modern languages (Python, Ruby, Scala, Lua, Haskell)
Companies in the time-and-materials business model do not need such cost-reducing measures. If you're not working with the above techniques, then you're probably on a time-and-materials project. Or you're in a company that will soon be out of business.

Sunday, January 2, 2011

Predictions for 2011

Happy New Year!

The turning of the year provides a time to pause, look back, and look ahead. Looking ahead can be the most fun, since we can make predictions.

Here are my predictions for computing in the coming year:

Tech that is no longer new

Virtualization will drop from the radar. Virtualization for servers is no longer exciting; some might say that it is "old hat". Virtualization for the desktop is "not quite fully baked". I expect modest interest in virtualization, driven by promises of cost reductions, but no major announcements.

Social networks in the enterprise are also "not quite fully baked", but here the problem is with the enterprise and its ability to use them. Enterprises are built on command-and-control models and don't expect individuals to comment on each other's projects. When enterprises shift to results-oriented models, enterprise social networks will take off. But this is a change in psychology, not technology.

Multiple Ecosystems

The technologies associated with programming are a large set, and not a single bunch. Programmers seem to enjoy "language wars" (C# or C++? Java or Visual Basic? Python or Ruby?) and the heated debates continue in 2011. But beyond languages, the core technologies are bunched: Microsoft has its "stack" of Windows, .NET, C#, and SQL Server; Oracle with its purchase of Sun has Solaris, JVM, Java, Oracle DB, and MySQL; and so forth.

We'll continue to see the fracturing of the development world. The big camps are Microsoft, Oracle, Apple, Google, and open source. Each has its own technology set, the tools cross camps poorly, and I expect the different ecosystems will continue to diverge. Since each technology set is too large for a single person to learn, individuals must pick a camp as their primary skill area and forgo the others. Look to see experts in one environment (or possibly two) but not all.

Companies will find that they are consolidating their systems into one of the big five ecosystems. They will build new systems in the new technology, and convert their old systems into the new technology. Microsoft shops will convert Java systems to C#, Oracle shops will convert Visual Basic apps to Java, and everyone will want to convert their old C++ systems to something else. (Interestingly, C++ was the one technology that spanned all camps, and it is being abandoned or at least deprecated by employers and practitioners.)

Microsoft will keep .NET and C#, and continue to "netify" its offerings. Learning from Apple, it will shift away from web applications in a browser to internet applications that run locally and connect through the network. Look for more "native apps" and fewer "web apps".

Apple will continue to thrive in the consumer space, with new versions of iPads, iPods, and iPhones. The big hole in their tech stack is the development platform, which compiles to the bare processor and not to a virtual machine. Microsoft uses .NET, Oracle uses JVM, and the open source favorites Perl, Python, and Ruby also use interpreters and virtual machines.

Virtual processors provide three advantages: 1) superior development tools, 2) improved security, and 3) independence from physical processors. Apple needs this independence; look for a new development platform for all of their devices (iPhone, iPad, iPod, and Mac). This new platform will require a relearning of development techniques, and may possibly use a new programming language.

Google has always lived in the net and does not need to "netify" its offerings. Unlike Microsoft, it will stay in the web world (that is, inside a browser). I expect modest improvements to things such as Google Search, Google Docs, and Google Chrome, and major improvements to Google cloud services such as the Google App Engine.

The open source "rebel alliance" will continue to be a gadfly with lots of followers but little commercial clout. Linux will be useful in the data center but it will not take over the desktop. Perl will continue its slow decline; Python, Ruby, and PHP will gain. Open source products such as Open Office may get a boost from the current difficult economic times.

Staffing

Companies will have a difficult time finding the right people. They will find lots of the "not quite right" people. When they find "the right person", that right person may want to work from home. Companies will have four options:

1) Adjust policies and allow employees to work from home or alternate locations. This will require revision to management practices, since one must evaluate on delivered goods and not on appearance and arrival time.

2) Keep the traditional management policies and practices and accept "not quite right" folks for the job.

3) Expand the role of off-site contractors. Companies that use off-site contractors but insist that employees show up to the office every day and attend Weekly Status Meetings of Doom will be in a difficult situation: How to justify the "work in the office every day" policy when contractors are not in the office?

4) Defer hiring.

How companies deal with staffing in an up market, after so many years of down markets, will be interesting and possibly entertaining.

New tech

Cloud computing will receive modest interest from established shops, but it will take a while longer for those shops to figure out how to use it. More interest will come from startups. The benefits of cloud computing, much like the PC revolution of the early 1980s, will be in new applications, not in improving existing applications.

We will see an interest in functional programming languages. I dislike the term "functional" since all programming languages let you define functions and are functional in the sense that they perform, but the language geeks have their reason for the term and we're stuck with it. The bottom line: Languages such as Haskell, Erlang, and even Microsoft's F# will tick up on the radar, modestly. The lead geeks are looking into these languages, just as they looked into C++ in the mid 1980s.

The cloud suppliers will be interested in functional programming. Functional languages are a better fit in the cloud, where processes can be shuffled from one processor to another. C# and Java can be used for cloud applications, but such efforts require a lot more discipline and careful planning.

Just as C++ was a big jump up from C and Pascal, functional languages are a big jump up from C++, Java, and C#. Programming in a functional language (Haskell, Erlang, or F#) requires a lot of up-front analysis and thought.
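A rough illustration of the jump, in Python (which supports both styles; the task and names are my own): the imperative version mutates state step by step, while the functional version composes pure functions with no mutation.

```python
from functools import reduce

# Imperative style: accumulate into mutable state, one step at a time.
def total_of_even_squares_imperative(numbers):
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n * n
    return total

# Functional style: no mutation, just composed pure functions.
# The whole computation is planned up front as filter -> map -> reduce.
def total_of_even_squares_functional(numbers):
    evens = filter(lambda n: n % 2 == 0, numbers)
    squares = map(lambda n: n * n, evens)
    return reduce(lambda a, b: a + b, squares, 0)

print(total_of_even_squares_imperative([1, 2, 3, 4]))  # 20
print(total_of_even_squares_functional([1, 2, 3, 4]))  # 20
```

The functional version has no variable that changes over time, which is exactly the property that makes such code easier to shuffle between processors in the cloud.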

The transition from C to C++ was driven by Windows and its event-driven model. The transition from object-oriented to functional programming will be driven by the cloud and its new model. The shift to functional languages will take time, possibly decades. Complicating the transition will be the poor state of object-oriented programming. Functional programming assumes good-to-excellent knowledge of object-oriented programming, and a lot of shops use object-oriented languages but not rigorously. These shops will have to improve their skills in object-oriented programming before attempting the move to functional programming.

These are my predictions for the coming year. I've left out quite a few technologies, including:

Ruby on Rails
Silverlight
Windows Phone 7
NoSQL databases
Perl 6
Microsoft's platforms (WPF, WWF, WCF, and whatever they have introduced since I started writing this post)
Google's "Go" language
Android phones
Salesforce.com's cloud platform

There is a lot of technology out there! Too much to cover in a single post. I've picked those items that I think will be the big shakers. Let's see how well I do! We can check in twelve months.