Wednesday, September 4, 2019
Don't shoot me, I'm only the OO programming language!
Why such animosity towards object-oriented programming? And why now? I have some ideas.
First, consider the age of object-oriented programming (OOP) as the primary paradigm for programming. I put the acceptance of OOP somewhere after the introduction of Java (in 1995) and before Microsoft's C# and .NET initiative (in 1999), which makes OOP about 25 years old -- or one generation of programmers.
(I know that object-oriented programming was around much earlier than C# and Java, and I don't mean to imply that Java was the first object-oriented language. But Java was the first OOP language to win wide acceptance in the programming community.)
So it may be that the rejection of OOP is driven by generational forces. Object-oriented programming, for new programmers, has been around "forever" and is an old way of looking at code. OOP is not the shiny new thing; it is the dusty old thing.
Which leads to my second idea: What is the shiny new thing that replaces object-oriented programming? To answer that question, we have to answer another: what does OOP do for us?
Object-oriented programming, in brief, helps developers organize code. It is one of several techniques to organize code. Others include structured programming, subroutines, and functions.
Subroutines are possibly the oldest technique to organize code. They date back to the days of assembly language, when code that was executed more than once was called with a "branch" or "call" or "jump subroutine" opcode. Instead of repeating code (and using precious memory), common code could be stored once and invoked as often as needed.
Functions date back to at least Fortran, consolidating common code that returns a value.
For two decades (from the mid-1950s to the mid-1970s), subroutines and functions were the only way to organize code. In the mid-1970s, the structured programming movement introduced an additional way to organize code, with IF/THEN/ELSE and WHILE statements (and an avoidance of GOTO). These techniques worked at a more granular level than subroutines and functions. Structured programming organized code "in the small" and subroutines and functions organized code "in the medium". Notice that we had no way (at the time) to organize code "in the large".
Techniques to organize code "in the large" did come. One attempt was dynamic-link libraries (DLLs), introduced with Microsoft Windows but also used by earlier operating systems. Another was Microsoft's COM, which organized the DLLs. Neither was particularly effective at organizing code.
Object-oriented programming was effective at organizing code at a level higher than procedures and functions. And it has been successful for the past two-plus decades. OOP let programmers build large systems, sometimes with thousands of classes and millions of lines of code.
So what technique has arrived that displaces object-oriented programming? How has the computer world changed, that object-oriented programming would become despised?
I think it is cloud programming and web services, and specifically, microservices.
OOP lets us organize a large code base into classes (and namespaces, which contain classes). The concept of a web service also lets us organize our code, at a level higher than procedures and functions. A web service can be a large thing, using OOP to organize its innards.
But a microservice is different from a large web service. A microservice is, by definition, small. A large system can be composed of multiple microservices, but each microservice must be a small component.
Microservices are small enough that they can be handled by a simple script (perhaps in Python or Ruby) that performs a few specific tasks and then exits. Small programs don't need classes and object-oriented programming. Object-oriented programming adds cost to simple programs with no corresponding benefit.
Programmers building microservices in languages such as Java or C# may feel that object-oriented programming is being forced upon them. Both Java and C# are object-oriented languages, and they mandate classes in your program. A simple "Hello, world!" program requires the definition of at least one class, with at least one static method.
Perhaps languages that are not object-oriented are better for microservices. Languages such as Python, Ruby, or even Perl. If performance is a concern, the compiled languages C and Go are available. (It might be that the recent interest in C is driven by the development of cloud applications and microservices for them.)
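As a sketch of the point (the task and field names here are invented for illustration), a microservice in Python can be a handful of plain functions -- no classes, no object framework:

```python
# A microservice as a small script: a few functions that perform
# one specific task. No classes are required.
import json

def celsius_to_fahrenheit(c):
    # The single unit of "business logic" in this tiny service.
    return c * 9.0 / 5.0 + 32.0

def handle(request):
    # 'request' is a parsed JSON document, e.g. {"celsius": 100}
    return {"fahrenheit": celsius_to_fahrenheit(request["celsius"])}

# In deployment the request would arrive over HTTP or a message queue;
# a literal stands in here so the sketch is self-contained.
response = handle(json.loads('{"celsius": 100}'))
```

The entire service fits in one screen of code, which is the scale at which class hierarchies add cost without benefit.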
Object-oriented programming was (and still is) an effective way to manage code for large systems. With the advent of microservices, it is not the only way. Using object-oriented programming for microservices is overkill. OOP requires overhead that is not helpful for small programs; if your microservice is large enough to require OOP, then it isn't a microservice.
I think this is the reason for the recent animosity towards object-oriented programming. Programmers have figured out that OOP doesn't mix with microservices -- but they don't know why. They feel that something is wrong (which it is), but they don't have the ability to shake off the established programming practices and technologies (perhaps because they don't have the authority).
If you are working on a large system, and using microservices, give some thought to your programming language.
Wednesday, December 28, 2016
Moving to the cloud requires a lot. Don't be surprised.
The cloud is a different environment than a web server. (Or a Windows desktop.) Moving to the cloud is a change in platform.
The history of IT has several examples of such changes. Each transition from one platform to another required changes to the code, and often changes to how we *think* about programs.
The operating system
The first changes occurred in the mainframe age. The very first was probably the shift from a raw hardware platform to hardware with an operating system. With raw hardware, the programmer has access to the entire computing system, including memory and devices. With an operating system, the program must request such access through the operating system. It was no longer possible to write directly to the printer; one had to request the use of each device. This change also saw the separation of tasks between programmers and system operators, the latter handling the scheduling and execution of programs. One could not use the older programs; they had to be rewritten to call the operating system rather than communicate with devices.
Timesharing and interactive systems
Timesharing was another change in the mainframe era. In contrast to batch processing (running one program at a time, each program reading and writing data as needed but with no direct interaction with the programmer), timeshare systems interacted with users. Timeshare systems saw the use of on-line terminals, something not available for batch systems. The BASIC language was developed to take advantage of these terminals. Programs had to wait for user input and verify that the input was correct and meaningful. While batch systems could merely write erroneous input to a 'reject' file, timeshare systems could prompt the user for a correction. (If they were written to detect errors.) One could not use a batch program in an interactive environment; programs had to be rewritten.
Minicomputers
The transition from mainframes to minicomputers was, interestingly, one of the simpler conversions in IT history. In many respects, minicomputers were smaller versions of mainframes. IBM minicomputers used the batch processing model that matched its mainframes. Minicomputers from manufacturers like DEC and Data General used interactive systems, following the lead of timeshare systems. In this case, it *was* possible to move programs from mainframes to minicomputers.
Microcomputers
If minicomputers allowed for an easy transition, microcomputers were the opposite. They were small and followed the path of interactive systems. Most ran BASIC in ROM with no other possible languages. The operating systems available (CP/M, MS-DOS, and a host of others) were limited and weak compared to today's, providing no protection for hardware and no multitasking. Every program for microcomputers had to be written from scratch.
Graphical operating systems
Windows (and OS/2 and other systems, for those who remember them) introduced a number of changes to programming. The obvious difference between Windows programs and the older DOS programs was, of course, the graphical user interface. From the programmer's perspective, Windows required event-driven programming, something not available in DOS. A Windows program had to respond to mouse clicks and keyboard entries anywhere on the program's window, which was very different from the DOS text-based input methods. Old DOS programs could not be simply dropped into Windows and run; they had to be rewritten. (Yes, technically one could run the older programs in the "DOS box", but that was not really "moving to Windows".)
Web applications
Web applications, with browsers and servers, HTML and "submit" requests, with CGI scripts and JavaScript and CSS and AJAX, were completely different from Windows "desktop" applications. The intense interaction of a window with fine-grained controls and events was replaced by the large-scale request, later refined with smaller AJAX and AJAX-like web services. The separation of user interface (HTML, CSS, JavaScript, and browser) from "back end" (the server) required a complete rewrite of applications.
Mobile apps
Small screen. Touch-based. Storage on servers, not so much on the device. Device processor for handling input; main processing on servers.
One could not drop a web application (or an old Windows desktop application) onto a mobile device. (Yes, you can run Windows applications on Microsoft's Surface tablets. But the Surface tablets are really PCs in the shape of tablets, and they do not use the model used by iOS or Android.)
You had to write new apps for mobile devices. You had to build a collection of web services to be run on the back end. (Not too different from the web application back end, but not exactly the same.)
Which brings us to cloud applications
Cloud applications use multiple instances of servers (web servers, database servers, and others), each hosting services (called "microservices" because each service is less than a full application) communicating through message queues.
You cannot simply move a web application into the cloud. You have to rewrite it to split computation from coordination, the latter handled by queues. Computation must be split into small, discrete services. You must write controller services that make requests to multiple microservices. You must design your front-end apps (which run on mobile devices and web browsers) and establish an efficient API to bridge the front-end apps with the back-end services.
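The computation/coordination split can be sketched in a few lines. This is only an illustration of the pattern, using Python's in-process `queue.Queue` as a stand-in for a real message broker; the service name and message shape are invented:

```python
# Sketch: coordination through a queue, computation in a small service.
import queue

def resize_service(message):
    # Computation: a small, discrete unit of work (stubbed here).
    return {"image": message["image"], "resized": True}

def run_worker(work_queue, done_queue):
    # Coordination: pull messages until a sentinel arrives,
    # hand each one to the service, post the result.
    while True:
        message = work_queue.get()
        if message is None:
            break
        done_queue.put(resize_service(message))

work, done = queue.Queue(), queue.Queue()
work.put({"image": "photo.jpg"})
work.put(None)  # sentinel: no more work
run_worker(work, done)
```

Because the worker knows nothing about the service's internals, the service can be moved to another server (or rewritten in another language) without touching the coordination code.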
In other words, you have to rewrite your applications. (Again.)
A different platform requires a different design. This should not be a surprise.
Wednesday, October 19, 2016
We prefer horizontal layers, not vertical stacks
Looking back at the 60-plus years of computer systems, we can see a pattern of design preferences. That pattern is an initial preference for vertical design (that is, a complete system from top to bottom) followed by a change to a horizontal divide between a platform and applications on that platform.
A few examples include mainframe computers, word processors, and smart phones.
Mainframe computers, in the early part of the mainframe age, were special-purpose machines. IBM changed the game with its System/360, which was a general-purpose computer. The S/360 could be used by commercial, scientific, or government organizations. It provided a common platform upon which ran application programs. The design was revolutionary, and it has stayed with us. Minicomputers followed the "platform and applications" pattern, as did microcomputers and later IBM's own Personal Computer.
When we think of the phrase "word processor", we think of software, most often Microsoft's "Word" application (which runs on the Windows platform). But word processors were not always purely software. The original word processors were smart typewriters, machines with enhanced capabilities. In the mid-1970s, a word processor was a small computer with a keyboard, display, processing unit, floppy disks for storage, a printer, and software to make it all go.
But word processors as hardware did not last long. We moved away from the all-in-one design. In its place we used the "application on platform" approach, using PCs as the hardware and a word processing application program.
More recently, smart phones have become the platform of choice for photography, music, and navigation. We have moved away from cameras (a complete set of hardware and software for taking pictures), moved away from MP3 players (a complete set of hardware and software for playing music), and moved away from navigation units (a complete set of hardware and software for providing directions). In their place we use smart phones.
(Yes, I know that some people still prefer discrete cameras, and some people still use discrete navigation systems. I myself still use an MP3 player. But the number of people who use discrete devices for these tasks is small.)
I tried thinking of single-use devices that are still popular, and none came to mind. (I also tried thinking of applications that ran on platforms that moved to single-use devices, and also failed.)
It seems we have a definite preference for the "application on platform" design.
What does this mean for the future? For smart phones, possibly not so much -- other than they will remain popular until a new platform arrives. For the "internet of things", it means that we will see a number of task-specific devices such as thermostats and door locks until an "internet of things" platform comes along, and then all of those task-specific devices will become obsolete (like the task-specific mainframes or word processor hardware).
For cloud systems, perhaps the cloud is the platform and the virtual servers are the applications. Rather than discrete web servers and database servers, the cloud becomes the platform for web server and database server "applications" -- containerized versions of the software. If the "application on platform" pattern holds, cloud and containers will endure for some time, and they are a good choice for an architecture.
Tuesday, November 13, 2012
Which slice of the market can you afford to ignore?
Years ago, the answer was easy: build for Windows and you reached practically everyone. Everyone except for those Apple folks, but they had less than five percent of the market, and one could afford to ignore them. In fact, the economics dictated that one did ignore them -- the cost of developing a Mac-specific version of the application was larger than the expected revenue.
Those were simple days.
Today is different. Instead of a single operating system we have several. And instead of a single dominant version of an operating system, we have multiple.
Windows still rules the desktop -- but in multiple versions. Windows 8 may be the "brand new thing", yet most people have either Windows 7 or Windows XP. And a few have Windows Vista!
Outside of Windows, Apple's OSX has a strong showing. (Linux has a minor presence, and can probably be safely ignored.)
The browser world is fragmented among Microsoft's Internet Explorer, Google's Chrome, Apple's Safari, and Mozilla's Firefox.
Apple has become powerful with the mobile phones and dominant with tablets. The iOS operating system has a major market share, and one cannot easily ignore it. But there are different versions of iOS. Which ones should be supported and which ones can be ignored?
Of course, Google's Android has no small market share either. And Android exists in multiple versions. (Although most Android users want free apps, so perhaps it is possible to ignore them.)
Don't forget the Kindle and Nook e-reader/tablets!
None of these markets are completely dominant. Yet none are small. You cannot build one app that runs on all of them. Yet building multiple apps is expensive. (Lots of tools to buy, lots of techniques to learn, and lots of tests to run!)
What to do?
My suggestions:
- Include as many markets as possible
- Keep the client part of your app small
- Design your application with processing on the server, not the client
Multiple markets give you more exposure. They also force you to keep your application somewhat platform-agnostic, which means that you are not tied to a single platform. (Being tied to a platform is okay until the platform sinks, in which case you sink with it.)
Keeping the client part of your application small forces a minimal user interface and a detached back end.
Pushing the processing part of your app to the server insulates you from changes to the client (or the GUI, in 1990-speak). It also reduces the development and testing efforts for your apps, by centralizing the processing.
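The suggestions above amount to one design rule: the client renders, the server computes. A minimal sketch, with invented field names, of what the client-side half looks like when it is kept that thin:

```python
# A thin client: rendering is a small, pure function over data that
# the server prepared. Porting to a new platform means rewriting
# only this layer.
import json

def render_headlines(payload):
    # Pure presentation logic: turn the server's JSON into display lines.
    stories = json.loads(payload)["stories"]
    return ["* " + s["title"] for s in stories]

# In a real app the payload would come from a web service;
# a literal stands in here so the sketch is self-contained.
sample = json.dumps({"stories": [{"title": "Markets in motion"}]})
lines = render_headlines(sample)
```

Because the rendering layer holds no business logic, the same back end serves every platform's client unchanged.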
This technique has no surprises, perhaps. But then, it also requires no magic.
After all, which market segment can you afford to ignore?
Monday, May 28, 2012
Do it right the first time
The cynical version of this is "We don't have time to do it properly, but we always have time to do it again".
I know many developers, and not a one wakes in the morning with the thought "today I will do things wrong". Everyone I know wants to do the right thing.
Some folks believe that they are prevented from doing the "right thing" by their managers, or the bureaucracy, or marketing-originated specifications. Yet sometimes we prevent ourselves from doing the right thing.
To "do things right", one needs two things:
1) An understanding of "the right thing"
2) The belief that one has the resources to accomplish it
Inexperienced programmers will do what they think is the right thing, yet their work may be bad for the long term. They may duplicate code, or create unmaintainable code. It seems the right thing to do at the time, given their expertise. Seasoned programmers know that code endures and should be written for future developers. Even temporary fixes endure.
Experience and sophisticated knowledge are not sufficient. If one thinks that the effort for the "proper" code is too great, or takes too long, then one might decide to make a set of "improper" changes. Business managers often must decide between multiple options, and sometimes choose compromising solutions. Managers often make choices that differ from those of the technologists.
Sometimes the technologists are right. Sometimes the managers are right. In any given situation, it is hard to tell who is truly right. Sometimes the answer becomes clear with time -- but not always.
Often I find that the proper way to do things requires discipline, care, and knowledge of the business requirements and the underlying technology. And often I find that the right way of doing things uses principles from earlier eras. (For example, the Model-View-Controller design pattern is mostly a separation of concerns, and this idea was present in many system design methods of the 1960s. COBOL shops knew to separate input-output operations from processing operations; they just didn't use the phrase "MVC".)
So perhaps we should spend less time exhorting our fellow developers to "do the right thing" and a little more time on understanding the work of our predecessors. That might be the right thing to do.
Wednesday, October 26, 2011
Small is the new big thing
Applications are programs that do everything you need. Microsoft Word and Microsoft Excel are applications: They let you compose documents (or spreadsheets), manipulate them, and store them. Visual Studio is an application: It lets you compose programs, compile them, and test them. Everything you need is baked into the application, except for the low-level functionality provided by the operating system.
Apps, in contrast, contain just enough logic to get the desired data and present it to the user.
A smartphone app is not a complete application; except for the most trivial of programs, it is the user interface to an application.
The Facebook app is a small program that talks to Facebook servers and presents data. Twitter apps talk to the Twitter servers. The New York Times app talks to its servers. Simple apps such as a calculator app or rudimentary games can run without back ends, but I suspect that popular games like "Angry Birds" store data on servers.
Applications contained everything: core logic, user interface, and data storage. Apps are components in a larger system.
We've seen distributed systems before: client-server systems and web applications divide data storage and core logic from user interface and validation logic. Those application designs allowed for a single front end; current system design allows for multiple user interfaces: iPhone, iPad, Android, and web. Multiple front ends are necessary; there is no clear leader, no "IBM PC" standard.
To omit a popular platform is to walk away from business.
Small front ends are better than large front ends. A small, simple front end can be ported quickly to new platforms. It can be updated more rapidly, to stay competitive. Large, complex apps can be ported to new platforms, but as with everything else, a large program requires more effort to port.
Small apps allow a company to move quickly to new platforms.
With a dynamic market of user interface devices, an effective company must adopt new platforms or face reduced revenue. Small user interfaces (apps) allow a company to quickly adopt new platforms.
If you want to succeed, think small.
Tuesday, October 11, 2011
SOA is not DOA
Mobile apps use it. iPhone apps that get data from a server (e-mail or Twitter, for example) use web services -- a service-oriented architecture.
SOA was the big thing back in 2006. So why do we not hear about it today?
I suspect it had nothing to do with SOA's marketability.
I suspect that no one talks about SOA because no one makes money from it.
Object oriented programming was an opportunity to make money. Programmers had to learn new techniques and new languages; tool vendors had to provide new compilers, debuggers, and IDEs.
Java was a new programming language. Programmers had to learn it. Vendors provided new compilers and IDEs.
UML was big, for a while. Vendors provided tools; architects, programmers, and analysts learned it.
The "retraining load" for SOA is smaller, limited mostly to the architects of systems. (And there are far fewer architects than programmers or analysts.) SOA has no direct effect on programmers.
With no large-scale training programs for SOA (and no large-scale training budgets for SOA), vendors had no incentive to advertise it. They were better off hawking new versions of compilers.
Thus, SOA quietly faded into the background.
But it's not dead.
Mobile apps use SOA to get work done. iPhones and Android phones talk to servers, using web services. This design is SOA. We may not call it that, but that's what it is.
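The shape of that design is simple enough to sketch. Here the "service" is a local function standing in for a remote endpoint, and the data and names are invented; the point is that the client knows only a contract (a request document in, a response document out), not the implementation:

```python
# Sketch of a service-oriented call: client and service exchange
# self-describing documents, nothing more.
import json

def timeline_service(request_json):
    # Server side: accept a request document, return a response document.
    request = json.loads(request_json)
    tweets = {"alice": ["hello, world"]}  # stand-in for real storage
    return json.dumps({"tweets": tweets.get(request["user"], [])})

def fetch_timeline(user):
    # Client side (the mobile app): knows the contract, not the innards.
    response = timeline_service(json.dumps({"user": user}))
    return json.loads(response)["tweets"]
```

Swap the local call for an HTTP request and this is exactly what a phone app does when it talks to its servers.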
When the hype of SOA vanished, lots of companies dropped interest in SOA. Now, to move their applications to the mobile world, they will have to learn SOA.
So don't count SOA among the dead.
On the other hand, don't count on it for your profits. You need it, but it is infrastructure, like electricity and running water. I know of few companies that count on those utilities as a competitive advantage.