Sunday, October 31, 2010

Better code through visualization

Visualization (which is different from virtualization) renders complex data into a simpler, easier-to-understand form. We see it all the time with charts and graphs for business and economic activity, demographic trends, and other types of analysis. The pretty charts and graphs (and animated displays, for some analyses) summarize the data and make the trends or distributions clear.

The human brain is geared for visual input. We devote a significant chunk of the brain to the work of processing images.

We're now ready to use visualization (and by doing so, leveraging the brain's powerful capabilities) for software.

I'm doing this, in a very limited (and manual) way, by analyzing source code and creating object diagrams: maps of the different classes and how they relate. These maps differ from traditional class hierarchy diagrams in that they show references made from methods. (The classic class diagrams show only the references declared in member lists. I pick through the code, find "local" objects, and show those references too.)

The result is a class diagram that is more comprehensive, and also a bit messier. My process creates maps of classes with arrows showing the dependencies, and even modest, non-trivial programs have a fair number of classes and many more arrows.

The diagrams are useful. It is easy to spot classes that are never referenced and therefore contain dead code. I can also spot poorly-designed classes; they usually exist in a set with a "loop" of references (class A refers to class B, class B refers to class C, and class C refers to class A). The diagram makes such cyclic references obvious. It also makes the proper solution obvious.
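
The core of the analysis is small enough to automate. Here is a rough sketch in Python (the class names and the reference map are invented for illustration; my real process is still mostly manual): it flags unreferenced classes, reports reference loops, and emits Graphviz DOT text for the diagram.

    # A minimal sketch: given a map of class -> classes it references,
    # report unreferenced classes and cycles, and emit Graphviz DOT.
    refs = {
        "OrderController": ["OrderService"],
        "OrderService": ["OrderRepository", "Notifier"],
        "OrderRepository": ["OrderService"],   # a reference loop
        "Notifier": [],
        "LegacyHelper": [],                    # never referenced: candidate dead code
    }

    referenced = {c for targets in refs.values() for c in targets}
    print("Unreferenced:", sorted(set(refs) - referenced))

    def find_cycle(start):
        """Depth-first walk from start; return the first reference loop found."""
        stack, seen = [(start, [start])], set()
        while stack:
            node, path = stack.pop()
            for nxt in refs.get(node, []):
                if nxt == start:
                    return path + [nxt]
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append((nxt, path + [nxt]))
        return None

    for cls in refs:
        cycle = find_cycle(cls)
        if cycle:
            print("Cycle:", " -> ".join(cycle))
            break

    print("digraph classes {")
    for src, targets in refs.items():
        for dst in targets:
            print(f'  "{src}" -> "{dst}";')
    print("}")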

I'm adding the technique and the programs that make it possible to my toolbox of utilities. Visual analysis of programs helps me write better programs, and helps other members of the team understand our programs.

I'm not alone in this idea.

The ACM (the Association for Computing Machinery) ran an article on "code maps" in the August 2010 issue of their Communications magazine. (The name "Communications" refers to information shared with ACM members and does not denote the telecommunications aspect of computing.) The study team found that code maps help team members stay "oriented" within the code.

IBM has their "Many Eyes" project, which visualizes data in general (not just source code), but I'm sure that they are close to visualization of code.

The IEEE (Institute of Electrical and Electronics Engineers) has their "VisWeek", an amalgam of conferences for visualization, including the ACM SoftVis conference.

It's an idea whose time has come.



Wednesday, October 27, 2010

How to teach programming?

What approach should we use for newbie programmers? What languages should we recommend and use to let novice programmers learn the craft?

In the late nineteen-seventies and the nineteen-eighties, with the early microcomputers (Radio Shack TRS-80, Apple II, and Commodore 64) and early PCs (IBM PC and IBM XT), the natural choices were BASIC and Pascal. Both languages were designed as teaching languages, and interpreters and compilers were readily available (books, too).

Over time, fashions in programming languages changed, and the industry shifted from BASIC and Pascal to C, and then to C++, and then to Java, and now to either C# or Python.

I consider C# (and its .NET framework) a competent language, but not a teaching language. It is too complex, too encompassing, too high a climb for a simple program. I have the same complaint about Java.

Part of my reluctance about C# and Java is their object-orientedness. While object-oriented programming is currently accepted as the norm, I am not convinced that a person should learn O-O techniques from the start. Teaching someone object-oriented programming takes time, which delays their effectiveness. It may also bond the student to object-oriented programming and prevent a move to another programming style in the future.

Object-oriented programming is popular now, but I foresee a day when it is deprecated, looked upon as archaic. (Just as plain old procedural programming is viewed today.)

If we teach new programmers the techniques of object-oriented programming, will they be able to move to something like functional programming? Or will they accept object-oriented programming as The Way Things Ought To Be and resist different programming paradigms? One advantage of BASIC (Dartmouth BASIC, not Visual Basic with its sophisticated constructs) was that we students knew that things could be better. We saw the limitations of the language, and yearned for a better world. When object-oriented programming came along, we jumped at the chance to visit a better world.

If I had fourteen weeks (the length of a typical college semester), I would structure the "CS 101 Intro to Programming" course with a mix of languages. I would use BASIC and Pascal to teach the basic concepts of programming (statements, assignments, variables, arrays, loops, decisions, interpreters, and compilers). I would have a second class, "CS 102 Intermediate Programming", with Java and Ruby for object-oriented programming concepts.

For the serious students, I would have a "CS 121/122 Advanced Programming" pair of classes with assembly language and C, and advanced classes of "CS 215" with LISP and "CS 325/326 Functional Programming" with Haskell.

That's a lot of hours, possibly more than the typical undergraduate wants (or needs) and most likely more than the deans want to allocate to programming.

So the one thing I would do, regardless of the time allocated to programming classes and the number of languages, is design the classes to let students learn in pairs. Just as Agile development uses pair programming to build quality programs, I would use paired learning to build deeper knowledge of the material.


Monday, October 25, 2010

CRM for help desks

We're all familiar with CRM systems. (Or perhaps not. They were the "big thing" several years ago, but the infatuation seems to have passed. For those with questions: CRM stands for "Customer Relationship Management" and was the idea that capturing information about interactions with customers would give you knowledge that could lead to sales.)

We're also all familiar with help desks. (Or perhaps not. They are the banks of usually underpaid, underinformed workers on the other end of the call for support.)

A call to a help desk can be a frustrating experience, especially for technically-savvy folks.

Help desks are typically structured with multiple levels. Incoming calls are directed to the "first level" desk, staffed by technicians who have prepared scripts for the most common problems. Only after executing the scripts and finding that the problem is not resolved is your call "escalated" to the "second level" help desk, which usually has a second set of technicians with a different set of scripts and prepared solutions. Sometimes there is a third level of technicians who have the experience to work independently (that is, without prepared scripts) to resolve a problem.

This structure is frustrating for techno-geeks, for two reasons. First, the techno-geek has already tried the solutions that the first level help desk will recommend. Some first level help desks insist that the user try the proffered solutions anyway, even though the user has already done them. (This is a blind following of the prepared script.)

Second, many help desks have scripts that assume you are running Microsoft Windows. Help desk technicians ask you to click on the "start" menu, when you don't have one. Some help desks go as far as to deny support to anyone with operating systems other than Windows. See the XKCD comic here. Techno-geeks often pretend to click on the non-existent Windows constructs and provide the help desk with fake information from the non-existent information dialogs (usually from memory) to get their call escalated to a more advanced technician.

The design of help desks (multiple levels, prepared scripts for the first levels) is easy to comprehend and on the surface looks efficient. The first level tries the "simple" and "standard" solutions, which solve the problem most of the time. Only after dealing with the (cheap to operate) first level and not resolving the problem do you work with the (expensive to operate) higher levels.

The help desk experience is similar to a video game. You start at the first level, and only after proving your worthiness do you advance to the next level.

Which brings us back to CRM systems. While not particularly good at improving sales, they might be good at reducing the cost of help desks.

Here's why: The CRM system can identify the tech-savvy customers and route them to the advanced levels directly, avoiding the first level scripts. This reduces the load on the first level and also reduces the frustration imposed on the customers. Any competent help desk manager should be willing to jump on a solution that reduces the load on the help desk. (Help desks are measured by calls per hour and calls resolved per total calls, and escalated calls fall in the "not resolved" bucket.)
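
A sketch of the routing idea, in Python. The profile fields and thresholds are invented for illustration, not taken from any particular CRM product; a real system would tune the rules to its own customer data.

    # Use what the CRM already knows about the caller to pick a starting tier.
    def starting_tier(profile):
        """Pick the help desk tier at which a caller should start."""
        if profile.get("os", "Windows") != "Windows":
            return 3    # the level-one script assumes Windows; skip it
        if profile.get("is_certified_admin"):
            return 3    # known expert: go straight to experienced technicians
        if profile.get("calls_escalated", 0) >= 3:
            return 2    # history says level one rarely resolves this caller's problems
        return 1        # default: start with the standard script

    caller = {"os": "Linux", "calls_escalated": 5}
    print(starting_tier(caller))    # prints 3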

CRM can also give you better insight into customer problems and calling patterns. The typical help desk metrics will report the problems by frequency and often list the top ten (or maybe twenty). With CRM, you can correlate problems with your customer base and identify the problems by customer type. It's nice to know that printing is the most frequent problem, but it's nicer to know that end-of-day operations is the most frequent problem among your top customers. I know which I would consider the more important!

Saturday, October 23, 2010

Microsoft abandons intuitive for complicated

Microsoft Office 2007 introduced the "ribbon", a change to the GUI that replaced the traditional menu bar with a dynamic, deep-feature, sliding set of options. This is a bigger change than it first appears. It is a break with the concept of the graphical user interface as originally touted by Microsoft.

When Microsoft introduced Windows, it made a point of advertising the "ease of use" and "intuitive" nature of the graphical interface. The GUI was superior to the old DOS programs (some of which were command-line and some of which used block-character menus and windows). The Windows GUI was superior because it offered a consistent set of commands and was "intuitive", which most people took to mean "could be used without training", as if humans had some pre-wired knowledge of the Windows menu bar.

Non-intuitive programs were those that were hard to use. The non-intuitive programs were the command-line DOS programs, the block-character graphic DOS programs which had their own command sets, and especially Unix programs. While one could be productive with these programs, productivity required deep knowledge of the programs that was gained only over time and with practice.

Windows programs, in contrast, were usable from day one. Anyone could sit down with Notepad and change the font and underline text. Anyone could use the calculator. The Windows GUI was clearly superior in that it allowed anyone to use it. (For managers and decision makers, read "anyone" and "a less talented and costly workforce".)

Microsoft was not alone in their infatuation with GUI. IBM tried it with the OS/2 Presentation Manager, yet failed. Apple bought into the GUI concept, and succeeded. But it was Microsoft that advertised the loudest. Microsoft was a big advocate of GUI, and it became the dominant method of interacting with the operating system. Software was installed with the GUI. Configuration options were set with the GUI. Security was set up with the GUI. All of Microsoft's tools for developers were designed for the GUI. All of Microsoft's training materials were designed around the GUI. Microsoft all but abandoned the command line. (Later, they quietly created command-line utilities for system administration, because they were necessary for efficient administration of multi-workstation environments.)

Not all programs were able to limit themselves to a menu as simple as Notepad's. Microsoft Office products (Word, Excel, PowerPoint, and such) required complex menus and configuration dialogs. Each new release brought larger and longer menus. Using these programs was hard, and a cottage industry grew up around training users.

The latest version of Microsoft Office replaced the long, hard-to-navigate menus with the confusing, hard-to-search ribbon. One person recently told me that it was more effective and more efficient, once you learned how to use it.

Does anyone see the irony in that statement? Microsoft built the GUI (or stole it from Apple and IBM) to avoid the long lead times for effective use. They wanted people to use Windows immediately, so they told us that it was easy to use. Now, they need a complex menu that takes a long time to learn. 

Another person told me that Microsoft benefits from the ribbon, since once people learn it, they will be reluctant to switch to a different product. In other words, the ribbon is a lock-in device.

It's not surprising that Microsoft needs a complex menu for their Office products. The concepts in the data (words, cells, or slides) are complex concepts, much more sophisticated than the simple text in Notepad. You cannot make complex concepts simple by slapping a menu on top.

But here is the question: if I have to learn a large, complex menu (the Office ribbon), why should I limit myself to Windows programs? Why not learn whatever tool I want? Instead of Microsoft Word I can learn TeX (or LaTeX) and get better control over my output. Instead of Microsoft Access I can learn MySQL.

By introducing the ribbon, Microsoft admitted that the concept of an intuitive program is a false one, or at least limited to trivial functionality. Complex programs are not intuitive; efficient use requires investment and time.


Wednesday, October 20, 2010

Waiting for C++

I was there at the beginning, when C++ was the shiny new thing. It was bigger than C, and more complex, and it required a bit more learning, and it required a new way of thinking. Despite the bigness and the expense and the re-learning time, it was attractive. It was more than the shiny new thing -- it was the cool new thing.

Even when Microsoft introduced Visual Basic, C++ (and later, Visual C++) was the cool thing. It may not have been new, but it was cooler than Visual Basic.

The one weakness in Visual C++ (and in Visual Basic) was the tools for testing, especially tools for testing GUI programs. The testing programs were always add-ons to the basic product. Not just in the marketing or licensing sense, but in terms of technology. GUI testing was always clunky and fragile, using the minimal hooks into the application under test. It was hard to attach test programs to the real programs (the programs under test), and changes to the dialogs would break the tests.

When Java came along, the testing tools were better. They could take advantage of things that were not available in C++ programs. Consequently, the testing tools for Java were better than the testing tools for C++. (Better by a lot.)

The C#/.NET environment offered the same reflection and introspection of classes, and testing tools were better than tools for C++.
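
To see what reflection buys a test tool, here is a small sketch, in Python for brevity. Python exposes the same kind of runtime introspection that Java (java.lang.reflect) and .NET (System.Reflection) provide and that standard C++ lacks. The dialog class is invented for illustration; real GUI test tools do this against actual widget trees.

    # A test harness can discover fields and actions by name at run time,
    # without compile-time knowledge of the dialog's layout.
    class LoginDialog:
        def __init__(self):
            self.username = ""
            self.password = ""

        def click_ok(self):
            return "submitted " + self.username

    dialog = LoginDialog()

    setattr(dialog, "username", "testuser")                       # set a field by name
    actions = [n for n in dir(dialog) if n.startswith("click_")]  # discover actions
    print(actions)                                                # ['click_ok']
    print(getattr(dialog, actions[0])())                          # 'submitted testuser'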

I kept waiting for corresponding tools on the C++ side.

This week it hit me: the new tools for C++ will never arrive. The new languages, with their virtual machines and support for reflection, allow for the nifty GUI testing tools, and C++ doesn't. And it never will. It just won't happen. The bright minds in our industry are focussed on C# and Java (or Python or Ruby) and the payoff for C++ does not justify the investment. There is insufficient support in the C++ language and standard libraries for comprehensive testing, and the effort for creating new libraries that do support GUI tests is great.

GUI testing for C++ applications is as good as it will ever get. The bright young things are working on other platforms.

Which means that C++ is no longer the cool new thing.


Monday, October 18, 2010

Abstractions

Advances in programming (and computers) come not from connecting things, but from separating them.

In the early days of computing (the 1950s and 1960s), hardware, processors, and software were tied together. The early processors had instructions that assumed the presence of tape drives. Assemblers knew about specific hardware devices. EBCDIC was based on the rows available on punch cards.

Abstractions allow for the decoupling of system components. Not detachment, since we need components connected to exchange information, but decoupled so that a component can be replaced by another.

The COBOL and FORTRAN languages offered a degree of separation from the processor. While FORTRAN I was closely tied to IBM hardware (being little more than a high-powered macro assembler), later versions of FORTRAN delivered machine-independent code.

The C language showed that a language could be truly portable across multiple hardware platforms, by abstracting the programming constructs to a common set.

Unix abstracted the file system. It isolated the details of files and directories and provided a common interface to them, regardless of the device. Everything in Unix (and by inheritance, Linux) is a file or a directory. Or if you prefer, everything is a stream.
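
The payoff of that abstraction is that the same code can consume any stream. A small illustration in Python (which inherits the idea): the function below neither knows nor cares what backs the stream it is given.

    # One function, three very different sources, one interface.
    import io
    import sys
    import tempfile

    def count_lines(stream):
        """Works on anything file-like; the caller decides what backs it."""
        return sum(1 for _ in stream)

    print(count_lines(io.StringIO("one\ntwo\nthree\n")))   # in-memory buffer: 3

    with tempfile.TemporaryFile(mode="w+") as f:           # a real file on disk
        f.write("alpha\nbeta\n")
        f.seek(0)
        print(count_lines(f))                              # 2

    # count_lines(sys.stdin) works the same way on piped input.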

Microsoft Windows abstracted the graphics and printing (and later, networking) for PC applications.

The Java JVM and the .NET CLR decouple execution from the processor. Java offers "write once, run anywhere" and has done a good job of delivering on that promise. Microsoft focusses more on "write in any language and run on .NET" which has served them well.

Virtualization separates processes from real hardware. Today's virtualization environments provide the same processor as the underlying real hardware -- Intel Pentium on Intel Pentium, for example. I expect that future virtualization environments will offer cross-processor emulation, the equivalent of a 6800 on a Z-80, or an Alpha chip on an Intel Pentium. Once that happens, I expect a new, completely virtual processor to emerge. (In a sense, it already has, in the form of the Java JVM and the .NET IL.)

Cloud computing offers another form of abstraction, separating processes from underlying hardware.

Each of these abstractions allowed people to become more productive. By decoupling system components, abstraction lets us focus on a smaller space of the problem and lets us ignore other parts of the problem. If the abstracted bits are well-known and not part of the core problem, abstraction helps by eliminating work. (If the abstraction isolates part of the core problem, then it doesn't help because we will still be worrying about the abstracted bits.)


Sunday, October 17, 2010

Small as the new big

I attended the CPOSC (the one-day open source conference in Harrisburg, PA) this weekend. It was a rejuvenating experience, with excellent technical sessions on various technologies.

Open source conferences come in various sizes. The big open source conference is OSCON, with three thousand attendees and a five-day run. It is the grande dame of open source conferences. But lots of other conferences are smaller, in number of attendees or in number of days, and usually both.

The open source community has lots of small conferences. The most plentiful of these are the BarCamps, small conferences organized at the local level. They are "unconferences", where the attendees hold the sessions, not a group of selected speakers.

Beyond BarCamp, several cities have small conferences on open source. Portland, OR has Open Source Bridge (about 500 attendees), Salt Lake City has the Utah Open Source Conference, Columbus has LinuxFest, Chicago has a conference at the University of Illinois, and the metropolis of Fairlee, VT hosts a one-day conference. The CPOSC conference in Harrisburg has a whopping 150 attendees, a limit set by the size of its auditorium and the fire codes.

I've found that small conferences can be just as informative and just as energetic as large ones. The venues may be smaller and the participants usually come from the region rather than across the country, yet the speakers are just as passionate and informed as the speakers at the large conferences.

Local conferences are often volunteer-run, with low overhead and a general air of reasonableness. They have no rock stars, no prima donnas. Small conferences can't afford them, and the audience doesn't idolize them. It makes for a healthy and common-sense atmosphere.

I expect the big conferences to continue. They have a place in the ecosystem. I also expect the small conferences to continue, and to thrive. They serve the community and therefore have a place.


Monday, October 11, 2010

Farming for IT

Baseball teams have farm teams, where they can train players who are not quite ready for the major leagues.

IT doesn't have farm teams, but IT shops can establish connections with recruiters and staffing companies, and recruiters and staffing companies can establish connections with available talent. It's less formal than the baseball farm team, yet it can be quite effective.

If you're running an IT shop, you want a farm team -- unless you can grow your own talent. Some shops can, but they are few. And you want a channel that can give you the right talent when you need it.

Unlike baseball teams, IT shops see cycles of growth and reduction. The business cycle affects both, but baseball teams have a constant roster size. IT shops have a varying roster size, and they must acquire talent when business improves. If they have no farm team, they must hire talent from the "spot market", taking what talent is available.

Savvy IT shops know that talent -- true, capable talent -- is scarce, and they must work hard to find it. Less savvy shops consider their staff to be a commodity like paper or electricity. Those shops are quick to downsize and quick to upsize. They consider IT staff to be easily replaceable, and place high emphasis on reducing cost.

Yet reducing staff and increasing staff are not symmetrical operations. Appropriate new hires are much harder to find than layoffs are to make. The effort to select the correct candidate is large; even companies that treat IT workers as commodities will insist on interviews before hiring someone.

During business downturns, savvy and non-savvy IT shops can lay off workers with equal ease. During business upturns, the savvy IT shops have the easier task. They have kept relationships with recruiters, and the recruiters know their needs and can match candidates to them. The non-savvy IT shops are the Johnny-come-latelys with no working relationship. Recruiters do what they can, but the savvy shops will get the best matches, leaving the less-talented candidates for everyone else.

If you want talent, build your farm team.

If you want commodity programmers, then don't build a farm team. But then don't bother interviewing either. (Do you interview boxes of printer paper? Or the person behind the fast food counter?) And don't complain about the dearth of talent. After all, if programmers are commodities, then they are all alike, and you should accept what you can find.


Sunday, October 10, 2010

Risk or no risk?

How do we pick the programming languages of the future? One indication is the books being written about programming languages, and which of them are given away free.

One blog lists a number of freely-available books on programming languages. What's interesting is the languages listed: LISP, Ruby, Javascript, Haskell, Erlang, Python, Smalltalk, and under the heading of "miscellaneous": assembly, Perl, C, R, Prolog, Objective-C, and Scala.

Also interesting is the list of languages that are not mentioned: COBOL, FORTRAN, BASIC, Visual Basic, C#, and Java. These are the "safe" languages; projects at large companies use them because they have low risk. They have lots of support from well-established vendors, lots of existing code, and lots of programmers available for hire.

The languages listed in the "free books" list are the risky languages. For large companies (and even for small companies) these languages entail a higher risk. Support is not so plentiful. The code base is smaller. There are fewer programmers available, and when you find one it is harder to determine his skills (especially if you don't have a programmer on your staff versed in the risky language).

The illusion of safe is just that -- an illusion. A safe language will give you comfort, and perhaps a more predictable project. (As much as software projects can be comforting and predictable.)

But business advantage does not come from safe, conservative strategies. If it did, innovations would not occur. Computer companies would be producing large mainframe machines, and automobile manufacturers would be producing horse-drawn carriages.

The brave new world of cloud computing needs new programming languages. Just as PC applications used BASIC and later Visual Basic and C++ (and not COBOL and FORTRAN), cloud applications will use new languages like Erlang and Haskell.

In finance, follow the money. In knowledge, follow the free.


Monday, October 4, 2010

A Moore's Law for Programming Languages?

We're familiar with "Moore's Law", the rate of advance in hardware that allows simultaneous increases in performance and reductions in cost. (Or, more specifically, the doubling of the number of transistors on a chip roughly every eighteen months to two years.)

Is there a similar effect for programming languages? Are programming languages getting better (more powerful and cheaper to run) and if so at what rate?

It's a tricky question, since programming languages rest on top of hardware, and as the performance of hardware improves, the performance of programming languages gets a free ride. But we can filter out the effect of faster hardware and look at language design.

Even so, rating the power of a programming language is difficult. What is the value of the concept of an array? The concept of structured (GOTO-less) programming? Object-oriented programming?

The advancements made by languages can be deceptive. The LISP language, considered the most advanced language by luminaries of the field, was created in the late 1950s! LISP has features that modern languages such as Ruby and C# are just beginning to incorporate. If Ruby and C# are modern, what is LISP?

The major languages (assembly, FORTRAN, COBOL, BASIC, Pascal, C, C++, and Perl, in my arbitrary collection) have a flat improvement curve. Improvements are linear, not exponential (or even geometric). There is no Moore's Law scaling of improvements.

If not programming languages, perhaps IDEs. Yet here also, progress has been less than exponential. From the initial IDE of Turbo Pascal (compiler and editor combined), through Microsoft's integration of the CodeView debugger into its compilers, to Microsoft's SQL IntelliSense and stored procedure debugger, improvements have -- in my opinion -- been linear and not worthy of the designation of "Moore's Law".

IDEs are not a consistent measure, since languages like Perl and Ruby have bucked the trend by avoiding (for the most part) IDEs entirely and using nothing more than "print" statements for debugging.

If hardware advances at an exponential rate and programming languages advance at a linear rate, then we have quite a difference in progress.

A side effect of this gap will be the price paid for good programming talent. It's easy to make a smaller and cheaper computer, but not as easy to make a smaller and cheaper application.


Sunday, October 3, 2010

Winning the stern chase

A "stern chase" is a race between two competitors, where one is following the other. The leader's goal is to stay ahead; the laggard must overtake the leader. Both positions have challenges: the leader is ahead but has less information (he can't see the follower), the laggard must work harder but has the advantage of seeing the leader. The lagard knows when he is gaining (or losing) the leader, and can see the actions taken by the leader.

In software, a stern chase occurs when replacing a system. Here's the typical pattern:

A company builds a system, or hires an outside firm to build it. The new system has a nice, shiny design and (possibly) clean code. Over time, different people make changes to the system, and the design degrades. The nice, shiny system is no longer shiny; it's rusty. People dislike the system: users find it hard to use, and developers find it hard to modify.

At some point, someone decides to Build A New System. The old system is buggy, hard to use, hard to maintain, but exists and gets the job done. It is the leader. The new system appears shiny but has no code (and possibly no requirements other than "do what the old system does"). It is the laggard.

The goal of the development team is to build a new system that catches the old system. Thus the stern chase begins. The old system is out in front, running every day and getting updates and new features. The new system is behind, getting features added but always lacking some feature of the old system.

Frequently, the outcome of the software stern chase is a second system that has a design that is (eventually if not initially) compromised, code that is (eventually if not initially) modified multiple times and hard to read, and a user base and development team which are (eventually if not initially) asking for a re-write of the system. (A re-write of the new system, not the old one!) The stern chase fails.

But more and more, stern chases are succeeding. Not only succeeding in the delivery of the new system, but also succeeding in that the replacement system is easier to use and easier to maintain.

A stern chase in software can succeed when the following conditions exist:

The chase team knows the old system and the problem domain, and has input into the design of the new system. The team can contribute valuable insights into the design of the new system. The new system can keep the working parts of the old system, yet avoid its problems. This gives the development team a clear target to aim at.

The chase team uses better technology than the old system. Newer technology (such as C# over C++) lets the chase team move faster.

The chase team uses a process that is disciplined. The biggest advantage comes from automated tests. Automated tests let the chase team re-design their application as they move forward and learn more about the problem. Without automated tests, the team is afraid to make changes and try new designs. It's also afraid to clean up the code that has gotten hard to maintain. (Even on a new project, code can be hard to maintain.)
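
One concrete form of that discipline is the characterization test: pin down what the old system produces for a given input, and let the new implementation change freely as long as the pinned behavior holds. A minimal sketch with Python's unittest (the invoice function, its rounding rule, and the expected value are invented for illustration):

    import unittest

    def invoice_total(lines, tax_rate=0.06):
        """New-system implementation; free to be redesigned while the test passes."""
        subtotal = sum(qty * price for qty, price in lines)
        return round(subtotal * (1 + tax_rate), 2)

    class InvoiceCharacterization(unittest.TestCase):
        def test_matches_old_system_output(self):
            # Expected value captured from the old system's output for this input.
            self.assertEqual(invoice_total([(2, 9.99), (1, 5.00)]), 26.48)

    if __name__ == "__main__":
        unittest.main()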

With these conditions, the chase team moves faster than the team maintaining the old system. In doing so, they surpass the old system and win the race.


Friday, October 1, 2010

NoSQL means no SQL

There's been a bit of debate about NoSQL databases. The Association for Computing Machinery has a discussion of non-interest in NoSQL.

In sum, the folks who are not interested in NoSQL databases want to remain with SQL for two reasons: their business uses OLTP and SQL offers ACID (atomic, consistent, isolated, and durable) transactions, and SQL is a standard.

I have some thoughts on this notion of non-interest.

No one is asking people to abandon SQL databases. The NoSQL enthusiasts recognize that OLTP needs the ACID properties. Extremists may advocate the abandonment of SQL, but I disagree with that approach. If SQL is working for your current applications, keep using it.

But limiting your organization to SQL with the reasoning: "transactions are what we do, and that's all that we do" is a dangerous path. It prevents you from exploring new business opportunities. The logic is akin to sticking with telegrams and avoiding the voice telephone. (Telegrams are written records and therefore can be stored, they can be confirmed and therefore audited, and they are the standard... sound familiar?)

SQL as a standard offers little rationale for its use. The folks I talk to are committed to their database package (DB2, SQL Server, Oracle, Postgres, MySQL, etc.). They don't move applications from one to another. (Some folks have moved from MySQL to Postgres, but the numbers are few.) Selecting SQL as a standard has little value if you stick with a single database engine.

For me, the benefit of NoSQL databases is... no SQL! I find the SQL language hard to learn and harder to use. My experience with SQL databases has been consistently frustrating, even with open source packages like MySQL. My experience with proprietary, non-SQL databases, on the other hand, has been successful. The performance is better and the code is easier to write. (Easier, but still not easy.) NoSQL databases appeal to me simply because they get me away from SQL.
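
To give a flavor of what "no SQL" feels like in code, here is a sketch using Python's shelve module as a stand-in for a key-value store. Real NoSQL engines differ widely, but the access pattern is similar: no schema, no query language, just keys and structured values. (The keys and records are invented for illustration.)

    import shelve

    with shelve.open("customers_demo") as db:
        db["cust:1001"] = {"name": "Acme Corp", "open_tickets": 3}
        db["cust:1002"] = {"name": "Globex", "open_tickets": 0}

        # Retrieval is a lookup, not a SELECT statement.
        print(db["cust:1001"]["name"])

        # Filtering happens in application code rather than in a query language.
        busy = [rec["name"] for rec in db.values() if rec["open_tickets"] > 0]
        print(busy)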

My preferences aside, NoSQL databases are worthy of investigation. They have different strengths and allow for the construction of new types of applications. Insisting on SQL for every database application is just as extremist as insisting on the complete removal of SQL. Both can contribute to your application suite. Use them both to your advantage.