Sunday, July 5, 2015

Apple wins the race for laptops -- now what?

Apple is successful, in part, due to its hardware design. Its products are lovely to look at and comfortable to hold. Its portable devices are thinner and lighter than the competition. Apple has defined the concept of laptop for the last several years -- perhaps the last decade. Apple has set the standard and other manufacturers have followed.

The latest MacBook design is light, sleek, and capable. It is the ultimate in laptop design -- and I mean "ultimate" in the literal sense: the last. Apple has won the laptop competition. And now they face a challenge: what to do next?

Apple's advantage with the MacBook will last only a short time. Other manufacturers are already creating laptops just as thin and sleek as the MacBook. But Apple cannot respond with an ever thinner, sleeker MacBook, because there are physical limits. Certain things make a laptop a laptop: you need a display screen and a keyboard, along with a processor, memory, and I/O ports. While the processor, memory, and I/O ports can be reduced in size (or in number), the screen and keyboard must remain a certain size, due to human physiology.

So how will Apple maintain its lead in the laptop market?

They can use faster processors, more memory, and higher resolution displays.

They can add features to Mac OS X.

They can attempt thinner and sleeker designs.

They can add hardware features, such as wireless charging and 3G/4G connections.

But there is only so much room left for improvement.

I think the laptop market is about to change. I think that laptops have gotten good enough -- and that there are lots of improvements in other markets. I expect manufacturers will look for improvements in tablets, phones, and cloud technologies.

Such a change is not unprecedented. It happened in the desktop PC market. After the initial IBM PC was released, and after Compaq made the first clone, desktop PCs underwent an evolution that led to the units we have today -- and have had for the past decade. Desktop PCs became "good enough" and manufacturers moved on to other markets.

Now that laptops have become good enough, look for the design of laptops to stabilize, the market to fragment among several manufacturers with no leader, prices to fall, and innovation to occur in other markets.

Which doesn't mean that laptops will become unimportant. Just as PCs had a long life after they became "good enough" and their design stabilized, laptops will have a long life. They will be an important part of the IT world.

Wednesday, July 1, 2015

Oracle grabs Java and goes home

The US Supreme Court has declined to hear Google's appeal of the decision granting Oracle property rights to the Java API. This has caused some consternation in the tech industry. Rightfully so, I believe. But the biggest damage may be to Oracle.

I'm sure Oracle is protecting their property -- or so they think. By obtaining control of the Java API they have maintained ownership of the Java language. They have prevented anyone else from re-implementing Java, as Google did with Android.

This all makes sense from a corporate point of view. A corporation must do what is best for its shareholders, and it must keep control over its properties. If Google (or anyone else) re-implemented Java without Oracle's permission (read that as 'license' and 'royalty payments'), then Oracle's management could be seen as delinquent in protecting that property.

Yet I cannot help but think that Oracle's actions in this case have devalued the Java property. Consider:

Google, the immediate target, will pay Oracle for the Java API in Android. But these payments will be for a short term, perhaps the next five years. Can one doubt that Google will redesign Android to use a different language (perhaps Go) for its apps? Oracle will get payments from Google in the short term but not in the long term.

Other tech suppliers will move away from Java. Apple and Microsoft, for example, have little to gain by supporting Java. Microsoft has been burned by Java in the past (see "Visual J++") and doesn't need to be burned again.

Users of Java, namely large corporations, may re-think their commitment to Java. Prior to Oracle's grab, Java was seen as a neutral technology, one not tied to a large tech supplier. C# and Visual Basic are tied to Microsoft; Objective-C and Swift are tied to Apple. C and C++ are not tied to specific vendors, but they are considered older and more expensive technologies. Other languages not tied to vendors (Python, Ruby) have appeal. When other languages gain, Java loses.

The open source world will look at Oracle's actions dimly. Open source is about sharing, and Oracle's move is all about *not* sharing. The open source movement was already suspicious of Oracle; this move will push developers away.

Java has been losing market share. The Tiobe index shows a steady decline in Java's popularity over the past fifteen years. Oracle, by shouting "mine!", has perhaps accelerated that decline.

In the long term, Oracle may have damaged the Java property. Which is not good for their shareholders.

Java is a valuable property only while people use it. If everyone (except Oracle) abandons Java, Oracle will have a property with little value.

Tuesday, June 30, 2015

Apple's browser

Does Apple need a browser?

Apple has Safari, but maintains it poorly. Is Apple serious about its browser?

Come to think of it, why does Apple need a browser?

I can think of several reasons:

Ego: Apple has to have a browser to satisfy its ego. It needs to have a browser to meet some internal need.

Freudian? Yes. Reasonable? No.

Apple is a smart company. It doesn't invest in products to suit its ego. It invests to improve revenue.

Keeping up with Microsoft: Microsoft has a browser. Apple wants, at some level, to compete with Microsoft. Therefore, Apple needs a browser.

Doubtful. Apple doesn't have to match Microsoft one-for-one on features. They never have.

Superlative web experience: Apple knows Mac OS X and iOS better than anyone. They, and only they, can build the browser that provides the proper user experience.

Possible. But only if Apple thinks that a good web experience is necessary.

Avoid dependence on others: Mac OS X (and iOS), despite Apple's most fervent wishes, still needs a browser. Without an Apple browser, Apple would have to rely on another browser. Perhaps Google Chrome. Perhaps Mozilla Firefox. But relying on Google is risky -- Google and Apple are not the best of friends. Relying on Mozilla is also risky, but in another sense: Mozilla may not be around much longer, thanks to the popularity of other browsers (of which Safari is one).

All of these strategies have one thing in common: the assumption that Apple considers the web important.

I'm not sure Apple thinks that. It may be that Apple thinks, in the long run, that the web is unimportant. Apple focuses on native apps, not HTML 5 apps. The dominant app design processes data on the local device and uses the cloud for storage, but nothing more. Apple doesn't provide cloud services for computing; it has no service that matches Windows Azure or Google Compute Engine. In Apple's world, devices compute and the cloud stores.

* * * * *

The more I think about it, the more I convince myself that Apple doesn't need a browser. Apple has already delayed several improvements to Safari. Maybe Apple thinks that Safari is good enough for the rest of us.

In the Apple ecosystem, they may be right.

Sunday, June 28, 2015

Arrogance from Microsoft, and from Apple

Microsoft's design for Windows 8 has gotten a bit of pushback from its users. One notion I have heard (from multiple sources) is that Microsoft was 'arrogant' in their assumptions about the features users want.

It's true that Microsoft did not consult with all of their users when they designed Windows 8. (They certainly did not ask my opinion.) Is this behavior arrogant? Perhaps.

I cannot help but compare Microsoft's behavior to Apple's, and find that they operate in similar styles. Apple has changed the user interface for its operating systems, without consulting users. Yet I hear few complaints about Apple's arrogance.

Why such a difference in response from users?

Some might say that Apple's changes were "good", while Microsoft's changes were "bad". I don't see it that way. Apple made changes, and Apple users accepted them. Microsoft made changes, and many Microsoft users rejected them. Assuming that the changes were neither good nor bad, how to explain the difference?

I have a theory. It has to do with the populations of users.

Apple has a dedicated user base. People choose to use Apple equipment, and it is a conscious choice. (Some might say a rebellious choice.) Apple is not the mainstream equipment for office or home use, and one does not accidentally purchase an Apple product. If you are using Apple equipment, it is because you want to use it. You are, to some degree, a fan of Apple.

Microsoft has a different user base. Its users consist of two groups: people who want to use Microsoft products and people who use them because they have to. The former group, I suspect, is rather small -- perhaps the size of the Apple fan group. The latter group uses Microsoft software not because they want to, but because they have to. Perhaps they work in an office that has standardized on Microsoft Windows. Perhaps they purchased a PC and Windows came with it. Perhaps they use it because all of their family uses it. Regardless of the reason, they use Windows (and other Microsoft products) not out of choice.

People who choose to use Apple equipment have a favorable opinion of Apple. (Why else would they choose that equipment?) When Apple introduces a change, the Apple fans are more accepting of the change. They trust Apple.

The same can be said for the Microsoft fans.

The non-fans, the people who use Microsoft software through no choice of their own, are not in that situation. They tolerate Microsoft software; they do not (necessarily) trust Microsoft. When changes are introduced, they are less accepting.

The Microsoft user base, as a result of Microsoft's market dominance, has a lot of non-volunteers. The Apple user base is much smaller and is almost entirely volunteers.

The demographics of Microsoft and Apple user bases are changing. Apple is gaining market share; Microsoft is losing. (The numbers are complicated, due to the expanding role of mobile and cloud technologies.) If these trends continue, Apple may find that its user base has a higher percentage of "conscripts" than in the past, and that they may be less than happy with changes. Microsoft is in the opposite position: as people shift from Microsoft to other platforms, the percentage of "conscripts" will decline, meaning that the percentage of fans will increase. Microsoft may see a smaller market share with more loyal and trusting customers.

These changes are small and will occur over time; a significant shift will take years, possibly decades. But because the shift is gradual, Microsoft and Apple may be caught by surprise. For Microsoft, it may be a pleasant surprise. Apple may find that its customer base, after many years of loyal following, starts becoming grumpy.

Monday, June 22, 2015

Static languages for big and dynamic languages for small

I will admit, up front, that I have more experience with statically-typed languages (C, C++, Java, C#) than with dynamically-typed languages (Python, Ruby). I learned C before any of the others (but after BASIC, Fortran, and Pascal) and learned Python and Ruby most recently. My opinions are biased, shaped by my experience, or lack thereof, with specific programming languages.

Now that we have the disclaimer out of the way...

I have found that I write programs differently in dynamically-typed languages than I do in statically-typed languages.

There are many differences between the two language sets. C++ is a big jump up from C. Java and C# are, in my mind, a bit simpler than C++. Python and Ruby are simpler than Java and C# -- yet more powerful.

Putting language capabilities aside, I have examined the programs I have written. I have two distinct styles, one for statically-typed languages and a different style for dynamically typed languages. The big difference in my two styles? The names of things.

Programs in any language need names. Even the old Dartmouth BASIC needed names for variables, and one can argue that with its limited namespace (one letter and one optional digit) one had to give more thought to names in BASIC than in any other language.

My style for statically-typed languages is to give variables and functions semantic names. Names for functions are usually verb phrases, describing the action performed by the function. Names for variables usually describe the meaning of the data they contain.

My style for dynamically-typed languages is different. Names for functions typically describe the data structure that is returned by the function. Names for variables typically describe the data structure contained by the variable (or referenced by it, if you insist).
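To make the contrast concrete, here is a small, hypothetical Python sketch in the dynamic-language style, where every name (dict_of_totals, list_of_records -- both invented for this example) describes the shape of the data rather than its business meaning:

    def dict_of_totals(list_of_records):
        # Build a dict keyed by each record's first field, summing the
        # numbers that follow it.
        totals = {}
        for record in list_of_records:
            key = record[0]
            numbers = record[1:]
            totals[key] = totals.get(key, 0) + sum(numbers)
        return totals

    list_of_records = [("east", 10, 20), ("west", 5, 7), ("east", 3, 4)]
    print(dict_of_totals(list_of_records))   # {'east': 37, 'west': 12}

In the statically-typed style I would instead write something like summarizeSalesByRegion(salesRecords), letting the declared types carry the structural information and the names carry the meaning.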

Perhaps this difference is due to my familiarity with the older statically-typed languages. Perhaps it is due to the robust IDEs for C++, Java, and C# (for Python and Ruby I typically use a simple text editor).

I find dynamically-typed languages much harder to debug than statically-typed languages. Perhaps this is due to the difference in tools. Perhaps it is due to my unfamiliarity with dynamically-typed languages. But perhaps it is simply easier to analyze and debug statically-typed languages.

If that is true, then I can further state that it is better to use a statically-typed language for large projects. It may also be better to use a dynamically-typed language for smaller programs. I'm not sure how large 'large' is in this context, nor am I sure how small 'small' is. Nor do I know the cross-over point, at which it is better to switch from one to the other.

But I think it is worth thinking about.

Actually, I tend to write FORTRAN in all programming languages. But that is another matter.

Sunday, June 21, 2015

More and smaller data centers for cloud

We seem to repeat lessons of technology.

The latest lesson is one from the 1980s: The PC revolution. Personal computers introduced the notion of smaller, numerous computers. Previously, the notion of computers revolved around mainframe computers: large, centralized, and expensive. (I'm ignoring minicomputers, which were smaller, less centralized, and less expensive.)

The PC revolution was less a change from mainframes to PCs and more a change in mindset. The revolution made the notion of small computers a reasonable one. After PCs arrived, the "space" of computing expanded to include mainframes and PCs. Small computers were considered legitimate.

That lesson -- that computing can come in small packages as well as large ones -- can be applied to cloud data centers. The big cloud providers (Amazon.com, Microsoft, IBM, etc.) have built large data centers. And large is an apt description: enormous buildings containing racks and racks of servers, power distribution units, air conditioning... and more. The facilities vary among the players -- the hypervisors, operating systems, and administration systems all differ -- but the one factor they have in common is that they are all large.

I'm not sure that data centers have to be large. They certainly don't have to be monolithic. Cloud providers maintain multiple centers ("regions", "zones", "service areas") to provide redundancy in the event of physical disasters. But aside from the issue of redundancy, it seems that the big cloud providers are thinking in mainframe terms. They build large, centralized (and expensive) data centers.

Large, centralized mainframe computers make sense for large, centralized mainframe programs.

Cloud systems are different from mainframe programs. They are not large, centralized programs. A properly designed cloud system consists of small, distinct programs tied together by data stores and message queues. A cloud system becomes big by scaling -- by increasing the number of copies of web servers and applications -- and not by growing a single program or single database.
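A minimal sketch of that shape, in Python's standard library. The in-process queue.Queue here is only a stand-in for a hosted message queue, and the names are invented for the example; the point is that the worker is a small, self-contained unit and the system grows by running more copies of it:

    import queue
    import threading

    work_queue = queue.Queue()

    def worker(worker_id):
        # Each worker is a small, independent program; scaling the system
        # means starting more copies of it, not making this code bigger.
        while True:
            message = work_queue.get()
            if message is None:          # shutdown signal
                work_queue.task_done()
                break
            print("worker", worker_id, "handled", message)
            work_queue.task_done()

    # "Scaling out" is starting more workers; in a real system the copies
    # could run on different servers, in different data centers.
    threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
    for t in threads:
        t.start()

    for item in ["order-1", "order-2", "order-3", "order-4"]:
        work_queue.put(item)
    for _ in threads:
        work_queue.put(None)             # one shutdown signal per worker
    for t in threads:
        t.join()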

A large cloud system can exist on a cloud platform that lives in one large data center. For critical systems, we want redundancy, so we arrange for multiple data centers. This is easy with cloud systems, as the system can expand by creating new instances of servers, not necessarily in the same data center.

A large cloud system doesn't need a single large data center, though. A large cloud system, with its many instances of small servers, can just as easily live in a set of small data centers (provided that there are enough servers to host the virtual servers).

I think we're in for an expansion of mindset, the same expansion that we saw with personal computers. Cloud providers will supplement their large data centers with small- and medium-sized ones.

I'm ignoring two aspects here. One is communication: network transfers are faster within a single data center than across multiple centers. But how many applications are that sensitive to latency? The other aspect is the economics of smaller data centers. It is probably cheaper, on a per-server basis, to build large data centers. Small data centers will have to take advantage of something, such as an existing small building that requires no major construction.

Cloud systems, even large cloud systems, don't need large data centers.

Sunday, June 14, 2015

Data services are more flexible than files

Data services provide data. So do files. But the two are very different.

In the classic PC world ("classic" meaning desktop applications), the primary storage mechanism is the file. A file is, at its core, a bunch of bytes. Not just a random collection of bytes, but a meaningful collection. That collection could be a text file, a document, a spreadsheet, or any one of a number of possibilities.

In the cloud world, the primary storage mechanism is the data service. That could be an SQL database, a NoSQL database, or a web service that provides data. A data service provides a collection of values, not a collection of bytes.

Data services are active things. They can perform operations. A data service is much like a query in an SQL database. (One may think of SQL as a data service, if one likes.) You can specify a subset of the data (either columns or rows, or both), the sequence in which the data appears (again, either columns or rows, or both), and the format of the data. For sophisticated services, you can collect data from multiple sources.
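As a small illustration (the table and column names are invented for this example), an in-memory SQLite query in Python shows the idea: the subset, sequence, and format of the data are all specified in the request, not in the calling program.

    import sqlite3

    connection = sqlite3.connect(":memory:")
    connection.execute("CREATE TABLE sales (region TEXT, amount REAL, sale_date TEXT)")
    connection.executemany(
        "INSERT INTO sales VALUES (?, ?, ?)",
        [("east", 100.0, "2015-06-01"),
         ("west", 250.0, "2015-06-02"),
         ("east", 75.5,  "2015-06-03")])

    # The request names the columns, the rows, the order, and the format;
    # the program simply asks for what it needs.
    rows = connection.execute(
        "SELECT region, ROUND(SUM(amount), 2) AS total "
        "FROM sales "
        "GROUP BY region "
        "ORDER BY total DESC").fetchall()
    print(rows)   # [('west', 250.0), ('east', 175.5)]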

Data services are much more flexible and powerful than files.

But that's not what is interesting about data services.

What is interesting about data services is the mindset of the programmer.

When a programmer is working with data files, he must think about what he needs, what is in the file, and how to extract what he needs from the file. The file may have extra data: unwanted rows, undesired headers and footers, or extra columns. The data may be in a sequence different from the desired one, or in a format different from what is needed.

The programmer must compensate for all of these things, and write code to handle the unwanted data or the improper formats. Working with files means writing code to match the file.

In contrast, data services -- well-designed data services -- can format the data, filter the data, and clean the data for the programmer. Data services have capabilities that files do not; they are active and can perform operations.

A programmer using files must think "what does the file provide, and how can I convert it to what I need?"; a programmer using data services thinks "what do I need?".
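Here is a hypothetical sketch of the file-oriented mindset, using the same invented sales data as above. The program must know the file's layout and write code to compensate for everything it does not need:

    import csv
    import io

    # io.StringIO stands in for a file on disk; a real program would use
    # open("sales.csv") instead.
    raw_file = io.StringIO(
        "region,amount,sale_date\n"
        "east,100.0,2015-06-01\n"
        "west,250.0,2015-06-02\n"
        "east,75.5,2015-06-03\n")

    east_total = 0.0
    reader = csv.reader(raw_file)
    next(reader)                         # skip the heading row we did not ask for
    for region, amount, sale_date in reader:
        if region != "east":             # discard the rows we do not want
            continue
        east_total += float(amount)      # convert from the file's text format
    print(round(east_total, 2))          # 175.5

The data-service version of the same request is essentially the single query sketched earlier: the filtering, arithmetic, and formatting move out of the program and into the service.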

With data services, the programmer can think less about what is available and think more about what has to be done with the data. If you're a programmer or a manager, you understand how this change makes programmers more efficient.

If you're writing code or managing projects, think about data services. Even outside of the cloud, data services can reduce the programming effort.