
Wednesday, January 8, 2025

The missing conversation about AI

For Artificial Intelligence (AI) -- or at least the latest fad that we call "AI" -- I've seen lots of announcements, lots of articles, lots of discussions, and lots of advertisements. All of them -- and I do mean "all" -- fall into the category of "hype". I have yet to see a serious discussion or article on AI.

Here's why:

In business -- and in almost every organization -- there are four dimensions for serious discussions. Those dimensions are: money, time, risk, and politics. (Politics internal to the organization, or possibly with external suppliers or customers; not national-level politics.)

Businesses don't care if an application is written in Java or C# or Rust. They *do* care that the application is delivered on time, that the development cost was reasonably close to the estimated cost, and that the application runs as expected with no ill effects. Conversations about C++ and Rust are not about the languages but about the risks of applications written in those languages. Converting from C++ to Rust is about the cost of conversion, the time it takes, opportunities lost during the conversion, and reduction of risk due to memory leaks, invalid access, and other exploits. The serious discussion ignores the issues of syntax and IDE support (unless one can tie them to money, time, or risk).

With AI, I have not seen a serious discussion about money, for either the cost to implement AI or the reduction in expenditures, other than speculation. I have not seen anyone list the time it took to implement AI with any degree of success. I have yet to see any articles or discussions about the risks of AI and how AI can provide incorrect information that seems, at first glance, quite reasonable.

These are the conversations about AI that we need to have. Without them, AI is merely a shiny new thing that has no clearly understood benefits and no place in our strategies or tactics. Without them, we do not understand the true costs to implement AI and how to decide when and where to implement it. Without them, we do not understand the risks and how to mitigate them.

The first rule of investment is: If you don't understand an investment instrument, then don't invest in it.

The first rule of business management is: If you don't understand a technology (how it can help you, what it costs, and its risks), then don't implement it. (Other than small, controlled research projects to learn about it.)

It seems to me that we don't understand AI, at least not well enough to use it for serious tasks.

Wednesday, July 5, 2023

Twitter, Elon Musk, and dignity

A lot has been said about Elon Musk's actions at Twitter. I will add a little more, with some ideas that I have not seen anywhere else. (Also, I recognize that Musk has stepped aside and is letting Linda Yaccarino run the show. But I don't know if Musk is still involved.)

Musk's behavior at Twitter has been described as chaotic, petulant, and just plain wrong. He has made decisions with wide-sweeping actions, and made them hastily and with little respect for the long-time employees at Twitter. Those decisions have had consequences.

I'm going to focus not on the decisions, and not on the consequences, but on the process. Musk is running Twitter as if it were a start-up, a company with an idea of a product or service, perhaps a prototype or minimum viable product, and few or no customers. Start-ups need to find a product or service that resonates with customers, something that makes customers ready to pay for the product or service. It is common for a start-up to try several (sometimes quite varied) approaches.

A start-up looking for its product (or its value proposition, to use MBA-speak) needs to move quickly. It has limited resources and it does not have the luxury of waiting for multiple levels of bureaucracy to review decisions and slowly reach a consensus. The CEO must make decisions quickly and with minimal delay.

That's the behavior I see in Musk at Twitter: unilateral, arbitrary decisions made with no advance notice.

While such behavior is good (and sometimes necessary) at start-ups, it is not good at established companies. Established companies are, well, established. They have well-defined products and services. They have a base of customers who pay them money on a regular basis. Those customers have expectations, based on the previous actions of the company.

Arbitrary changes to products and services, made on short notice, do not sit well with those customers. Customers want predictability, just as you and I want predictability from our internet providers and streaming services.

(Note to self: a future column might discuss consistency and predictability for streaming services.)

Back to customers of Twitter: They want predictability, and Musk is not providing it.

The users of Twitter, distinct from the customers who pay for advertising, also want consistency and predictability. Arbitrary changes can drive users away, which reduces advertising view counts, which reduces advertising rates, which reduces income for advertising.

It seems to me that Musk is well-suited to run a start-up, and poorly suited to run an established company.

(Note to self: a future column might discuss the transition from start-up to established company.)

Perhaps the best action that Musk can take is to remove himself from the management of Twitter and let others run the company. He has done that, to some extent. He should step completely aside. I'm not commenting on Yaccarino's competency to run Twitter; that is another topic.

Sometimes the best way to solve a problem is to let others handle it.

Monday, November 21, 2022

More Twitter

Elon Musk has caused quite the controversy with his latest actions at Twitter (namely, terminating employment of a large number of employees, terminating the contracts for a large number of contractors, and discontinuing many of Twitter's services). His decisions have been almost universally derided; it seems that the entire internet is against him.

Let's take a contrarian position. Let's assume -- for the moment -- that Musk knows what he is doing, and that he has good reasons for his actions. Why would he take those actions, and what is his goal?

The former is open to speculation. My thought is that Twitter is losing money (it is) and is unable to fill the gap between income and "outgo" with investments. Thus, Twitter must raise revenue or reduce spending, or some combination of both. While this fits with Musk's actions, it may or may not be his motivation. 

The question of Musk's goal may be easier to answer. His goal is to improve the performance of Twitter, making it profitable and either keeping the company or selling it. (We can rule out the goal of destroying the company.) Keeping Twitter gives Musk a large communication channel to lots of people (free advertising for Tesla?) and makes him a notable figure in the tech (software) community. If Musk can "turn Twitter around" (that is, make it profitable, whether he keeps it or sells it) he builds on his reputation as a capable business leader.

Reducing the staff at Twitter has two immediate effects. The first is obvious: reduced expenses. The second is less obvious: a smaller company with fewer teams, and therefore more responsive. Usually, a smaller organization can make decisions faster than a large one, and can act faster than a large one.

It is true that a lot of "institutional knowledge" can be lost with large decreases in staff. That knowledge ranges from the design of Twitter's core software and its databases to its processes for updates and its operations (keeping the site running). Yet a lot of knowledge can be stored in software (and database structures), and read by others if the software is well-written.

I'm not ready to bury Twitter just yet. Musk may be able to make Twitter profitable and keep a commanding presence in the tech space.

But I'm also not ready to build on top of Twitter. Musk's effort may fail, and Twitter may fail. I'm taking a cautious approach, using it for distributing and collecting non-critical information.

Wednesday, November 2, 2022

Twitter

Elon Musk has bought Twitter and started making changes. Lots of people have commented on the changes. Here are my thoughts.

Musk's actions are radical and seem reckless. (At least, they seem reckless to me.) Dissolving the board, terminating employment of senior managers, demanding that employees work 84-hour weeks to quickly implement a new feature (a fee for the blue 'authenticated' checkmark), and threatening to terminate the employment of employees who don't meet performance metrics are no way to win friends -- although it may influence people.

Musk may think that running Twitter is similar to running his other companies. But Tesla, SpaceX, and The Boring Company are quite different from Twitter.

Twitter has a number of components. It has software: the various clients that provide Twitter to devices and PCs, the database of tweets, the query routines that select the tweets to show to individuals, and advertising inventory (ads) and the functions that inject those ads into the viewed streams.

But notice that the database of tweets is not made by Twitter. It is made by Twitter's users. It is the user base that creates the tweets, not Twitter employees. (Nor are they mined from the ground or grown on trees.)

The risk that Twitter now faces is one of reputation. If the quality (or the perceived quality) of Twitter falls, people (users) will leave. And like all social media, the value of Twitter is mostly defined by how many other people are on the service. Facebook's predecessor MySpace knows this, as does MySpace's predecessor Friendster.

Social media is like a telephone. A telephone is useful when lots of people have them. If you were the only person on Earth with a phone, it would be useless to you. (Who could you call?) The more people who use Twitter, the more valuable it is.

Musk's actions are damaging Twitter's reputation. A number of people have already closed their accounts, and more are claiming that they will do so in the future. (Those future closures haven't occurred, and it is possible that those individuals will decide to stay on Twitter.)

As I see it, Twitter has technical problems (all companies do), but its larger issues are management and leadership issues. Musk may have made some unforced errors that will drive away users, advertisers, employees, and future investors.

Friday, December 8, 2017

The cult of fastest

In IT, we (well, some of us) are obsessed with speed. The speed-cravers seek the fastest hardware, the fastest software, and the fastest network connections. They have been with us since the days of the IBM PC AT, which ran at 6 MHz, faster than the 4.77 MHz of the IBM PC (and XT).

Now we see speed competition among browsers. First Firefox claims their browser is fastest. Then Google releases a new version of Chrome, and claims that it is the fastest. At some point, Microsoft will claim that their Edge browser is the fastest.

It is one thing to improve performance. When faced with a long-running job, we want the computer to be faster. That makes sense; we get results quicker and we can take actions faster. Sometimes it is reasonable to go to great lengths to improve performance.

I once had a job that compared source files for duplicate code. With 10,000 source files, and the need to compare each file against each other file, there were roughly 50 million comparisons. Each comparison took about a minute, so the total job was projected to run for decades. I revised the job significantly, using a simpler (and faster) comparison to identify whether two files had any common lines of code, and then using the more detailed (and slower) comparison on only those pairs with over 1,000 lines of common code.
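The two-pass approach can be sketched as follows. (This is a minimal sketch, not the original job; the function names and the threshold are illustrative, and the "detailed" comparison here is a stand-in for the slow, minute-long comparison.)

```python
from itertools import combinations

def shared_line_count(a_lines, b_lines):
    # Cheap first pass: count distinct source lines the two files share.
    return len(set(a_lines) & set(b_lines))

def detailed_compare(a_lines, b_lines):
    # Stand-in for the expensive, detailed comparison -- the step that
    # took about a minute per pair in the original job.
    return sorted(set(a_lines) & set(b_lines))

def find_duplicate_pairs(files, threshold):
    # files: dict mapping file name -> list of source lines.
    # Run the slow comparison only on pairs that pass the cheap filter,
    # cutting tens of millions of slow comparisons down to a handful.
    results = {}
    for (name_a, a), (name_b, b) in combinations(files.items(), 2):
        if shared_line_count(a, b) > threshold:
            results[(name_a, name_b)] = detailed_compare(a, b)
    return results
```

The design point is that the cheap filter does not need to be precise; it only needs to be fast and to never reject a pair that the detailed comparison would flag.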

Looking for faster processing in that case made sense.

But it is another thing to look for faster processing by itself.

Consider a word processor. Microsoft Word has been around for decades. (It actually started its life in MS-DOS.) Word was designed for systems with much smaller memory and much slower processors, and it still has some of that design. The code for Word is efficient. It spends most of its time not in processing words but in waiting for the user to type a key or click the mouse. Making the code twice as fast would not improve its performance (much), because the slowness comes from the user.

E-mail is another example. Most of the time for e-mail is, like Word, the computer waiting for the user to type something. When an e-mail is sent, the e-mail is passed from one e-mail server to another until it arrives at the assigned destination. Changing the servers would let the e-mail arrive quicker, but it doesn't help with the composition. The acts of writing and reading the e-mail are based on the human brain and physiology; faster processors won't help.

The pursuit of faster processing without definite benefits is, ironically, a waste of time.

Instead of blindly seeking faster hardware and software, we should think about what we want. We should identify the performance improvements that will benefit us. (For managers, this means lower cost or less time to obtain business results.)

Once we insist on benefits for improved performance, we find a new concept: the idea of "fast enough". When an improvement lets us meet a goal (a goal more specific than "go faster"), we can justify the effort or expense for faster performance. But once we meet that goal, we stop.

This is a useful tool. It allows us to eliminate effort and focus on changes that will help us. If we decide that our internet service is fast enough, then we can look at other things such as database and compilers. If we decide that our systems are fast enough, then we can look at security.

Which is not to say that we should simply declare our systems "fast enough" and ignore them. The decision should be well-considered, especially in the light of our competitors and their capabilities. The conditions that let us rate our systems as "fast enough" today may not hold in the future, so a periodic review is prudent.

We shouldn't ignore opportunities to improve performance. But we shouldn't spend all of our effort for them and avoid other things. We shouldn't pick a solution because it is the fastest. A solution that is "fast enough" is, at the end of the day, fast enough.

Thursday, October 1, 2015

The birth of multiprogramming

Early computers ran one program at a time. They were also slow. This wasn't a problem. At first.

Early computers ran in "batch mode" -- a non-interactive mode that often saw input on punch cards or magnetic tape, instead of people typing on terminals (much less on the smaller computers we use today).

Companies had programs for each task: a program to update inventory, a program to update sales information, a program to print personnel reports, etc. Each program was a "job" with its program, input data, and output data.

The advantage of batch mode processing is that the job runs as an independent unit and it can be scheduled. Your collection of programs could be planned, as each used specific data, generated specific data, and ran for a (usually) predictable amount of time. Companies would run their daily jobs every day, their weekly jobs perhaps every Saturday, their monthly jobs at the end of the month (or more often during the first days of the next month), and their annual jobs at the end of the year.

If your programs all ran successfully, and within their usual timeframes, you were happy. The problem for companies was that they tended to grow, increasing the number of customers they supported and the number of sales they created. Those increases meant an increase in the size of data for processing, and that meant increased processing time to run their computer jobs.

If you have spare computing time, you simply run jobs longer. But what happens when you don't have spare processing time? What happens when your daily jobs take more than twenty-four hours to run?

In today's world of personal computers and cloud processing, we simply order up some additional computers. That was not possible in the early days of computing: computers were expensive.

Instead of buying (or leasing) more computers, we looked for ways to make computers more efficient. One of the first methods was called "multiprogramming" and it allowed multiple programs to run at the same time.

Successful multiprogramming had a number of challenges: loading multiple programs into memory (at different locations), preventing one program from writing to memory allocated to another, and sharing the processor among the simultaneously executing programs. While these are all tasks that any modern operating system handles, in their day they were significant changes.

It was also successful. It took the time spent waiting for input/output tasks and re-allocated it to processing. The result was an increase in usable processing time, which meant that a company could run more programs without buying a larger (and more expensive) computer.

Multiprogramming shared the processor by using what we call "cooperative multitasking". A program ran until it requested an input/output operation, at which point the operating system initiated the operation and switched the processor to a different program. The input/output operation was handled by a separate device (a card reader or tape reader, or maybe a disk drive) so it could continue without the main processor. This freed the main processor to do some other work in the form of another program.
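The switching described above can be sketched with Python generators, where `yield` plays the role of a program requesting an input/output operation and handing the processor back to the operating system. (The program names and step counts are illustrative; this models only the scheduling, not real I/O.)

```python
from collections import deque

def program(name, steps):
    # A "program" computes until it needs I/O, then yields control --
    # the essence of cooperative multitasking.
    for step in range(steps):
        yield f"{name} step {step}"  # request I/O; the OS switches away

def scheduler(programs):
    # Round-robin dispatcher: resume each program in turn, switching
    # to the next whenever one yields for I/O.
    ready = deque(programs)
    log = []
    while ready:
        current = ready.popleft()
        try:
            log.append(next(current))  # run until the next I/O request
            ready.append(current)      # re-queue once its I/O "completes"
        except StopIteration:
            pass                       # program finished; drop it
    return log

log = scheduler([program("payroll", 2), program("inventory", 3)])
```

Note that a program that never yields would monopolize the processor -- exactly the weakness that pre-emptive task switching later addressed.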

Windows 3.x used a similar technique to switch between programs.

Later operating systems used "pre-emptive task switching", giving programs small amounts of processing time and then suspending one program and activating another. This was the big change for Windows NT.

Multiprogramming was driven by cost reduction (or cost avoidance) and focused on internal operations. It made computing more efficient in the sense that one got "more computer" for the same amount of hardware. (The operating system had to be more sophisticated, of course.) But it did nothing for the user; it made no changes to the user experience. One still had to schedule jobs to run programs with specific input and output data.

Cost avoidance is one driver for IT. Its focus on internal operations is appropriate. But there are other drivers, and they require other techniques.

Monday, November 25, 2013

The New Aristocracy

The workplace is an interesting study for psychologists. It has many types of interactions and many types of stratifications of employees. The divisions are always based on rank; the demonstrations of rank are varied.

I worked in one company in which rank was indicated by the type, size, and location of one's workspace. Managers were assigned offices (with doors and solid walls), senior technical people were assigned cubicles next to windows, junior technical employees were assigned cubicles without windows, and contract workers were "doubled up" in windowless cubicles.

In another company, managers were issued color monitors and non-managers were issued (cheaper) monochrome monitors.

We excel at status symbols.

The arrival of tablets (and tablet apps) gives us a new status symbol. It allows us to divide workers into those who work with keyboards and those who work without keyboards. The "new aristocracy" will be, of course, those who work without keyboards. They will be issued tablets, while the "proletariat" will continue to work with keyboards.

I don't expect that this division will occur immediately. Tablets are quite different from desktop PCs and the apps for tablets must be different from desktop apps. It will take time to adapt our current applications to the tablet.

Despite their differences, tablets are -- so far -- much better at consuming information, while PCs are better at composing information. Managers who use information to make decisions will be able to function with tablets, while employees who must prepare the information will continue to do that work on PCs.

I expect that the next big push for tablet applications will be those applications used by managers: project planning software, calendars, dashboards, and document viewers.

The new aristocracy in the office will be those who use tablets.

Friday, February 22, 2013

Software subscriptions

One of our current debates is the change from traditional, PC-installed software to web-based software.

Even Microsoft is switching. In addition to its classic PC software "Office 2013", which is installed on your local PC, Microsoft now offers the subscription package "Office 365".

It's a big change, and many folks are concerned. System admins worry about the new procedures for signing up with new software services. Managers fret that an update could change file formats, and locally stored documents in old formats may become unreadable. Ordinary users find the concept of renting, not owning, their software a bit disconcerting. A few folks are aghast to learn that their software could disappear after failing to pay for the subscription, and have railed against the increased cost and the greed of software vendors.

Yes, I know that software is usually licensed and not sold. But most folks think of current PC software as sold, regardless of the licensing agreement. It is this general understanding that I consider to be important.

Let's run a thought experiment. Suppose technology was going in the other direction. What if, instead of starting with purchased software and moving to subscriptions, we were starting with subscriptions and moving to purchased software?

In that change, people would be complaining, of course. Users would be hesitant to move from the comfort and convenience of subscription software to the strange new world of installed software. System admin types would grumble about the additional work of installing software and applying updates. Managers would fret about compatibility, fearing that some users would have old versions of software and might be unable to share files. Ordinary users might find the concept of "owning" software a bit disconcerting. I suspect that a lot of people would be aghast to learn that they would have to pay for each device they used, and rail against the increased cost and the greed of software vendors.

Such a thought experiment shows that the change from ownership to rental is big, but perhaps not a bad thing. The decision between PC-installed software and web-based software subscriptions (or mobile/cloud subscriptions) is similar to the decision to own a house or rent a condominium. Both have advantages, and drawbacks.

My advice is to experiment with this new model. Start using web-based e-mail, word processing, and spreadsheets. Try the file-sharing services of SkyDrive, Google Drive, and DropBox. Learn how they work. Learn how your organization can use them. Then you can decide which is the better method for your team.

Sunday, September 16, 2012

The Agile and Waterfall Impedance Mismatch

During a lunchtime conversation with a colleague, we chatted about project management and gained some insights into the Waterfall and Agile project management techniques.

Our first insight was that an organization cannot split its project management methods. It cannot run some projects with a Waterfall process and other projects with Agile processes. (At least not if the projects must coordinate deliverables.) A company that uses Waterfall processes may be tempted to run a pilot project with Agile processes -- but if that pilot project "plugs in" to other projects, the result will be failure.

The problem is that Waterfall and Agile make two different promises. Waterfall promises a specific set of functionality, with a specific level of quality, delivered on a specific date. Agile makes no such promise; it promises to add functionality and have a product that is always ready to ship (that is, a high level of quality), albeit with an unknown set of functionality. The Agile process adds small bits of functionality and waits to get them correct before adding others -- thus ensuring that everything that has been added is working as expected, but not promising to deliver everything desired by a specific date. (I am simplifying things here. Agile enthusiasts will point out that there is quite a bit more to Agile processes.)

Waterfall processes make promises that are very specific in terms of feature set, quality, and delivery time -- but are not that good at keeping them. Hence, we have a large number of projects that are late or have low quality. Agile makes promises that are specific in terms of quality, and are good at keeping them. But the promises of the Agile processes are limited to quality; they do not propose the specifics that are promised by Waterfall.

With two different promises, it is no surprise that Waterfall and Agile processes don't co-exist. (There are other reasons that the two methods fail to cooperate, including the "design up front" of Waterfall and the "design as you go" of Agile.)

Our second insight was that transitioning an IT shop from Waterfall to Agile methods should not be accomplished by pilot projects.

Pilot projects are suitable to introduce people to the methods -- but those pilot projects must exist in isolation from the "regular" projects. Such projects were easy to establish in the earlier age of separate systems -- "islands of automation" -- that gave each system a measure of independence. Today, it is rare to see an IT system that exists in isolation.

Rather than use pilot projects, we like the idea of introducing ideas from Agile into the standard process used within the company. Our first choice is automated testing, the unit-level automated testing that can be performed by developers. Giving each developer the ability to run automated tests introduces them to a core practice of Agile, without creating an impedance mismatch.

After automated testing, we like the idea of allowing refactoring. Often omitted from plans for Waterfall projects, refactoring is another key practice of Agile development. Once unit tests are in place, developers can refactor with confidence.
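As a minimal illustration of how unit tests support refactoring (the function names and figures here are hypothetical, not from any particular project), the same assertions can pin down behavior across an old and a refactored implementation:

```python
import unittest

# Original implementation: straightforward but repetitive.
def invoice_total(line_items):
    total = 0
    for quantity, unit_price in line_items:
        total = total + quantity * unit_price
    return total

# Refactored implementation: same behavior, more idiomatic.
def invoice_total_refactored(line_items):
    return sum(quantity * unit_price for quantity, unit_price in line_items)

class InvoiceTotalTest(unittest.TestCase):
    # The same assertions run against both versions; if the refactoring
    # changed behavior, the test would fail and the developer would
    # know immediately.
    def test_totals(self):
        for fn in (invoice_total, invoice_total_refactored):
            self.assertEqual(fn([]), 0)
            self.assertEqual(fn([(2, 10), (1, 5)]), 25)
```

Run with `python -m unittest`; because the suite passes for both versions, the old implementation can be replaced with confidence.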

Our third insight relates to project methods and project size. We think (and this is speculation) that Agile is better suited to small projects (and small systems) and Waterfall may be better suited to large systems. Thus, if you have large, complex systems, you may want to stay with Waterfall; if you have small systems (or even small applications) then you may want to use Agile.

We think that this relationship is a correlation, not a causal one. That is, you can pick one side of the equation and drive the other. If you have large systems, you will end up with Waterfall. But the equation works in both directions, and you don't have to start with the size of your systems. If you use Agile methods, you will end up with smaller, collaborating systems.

Now, we realize that a lot of companies have large systems. We don't believe that switching to Agile methods will lead to smaller systems overnight. A base of large systems contains a certain amount of inertia, and requires a certain effort to redesign into smaller systems.

What we do believe is that you have a choice, that you can choose to use Agile (or Waterfall) methods (carefully, avoiding impedance mismatches), and that you can change the size of your systems. You can manage the size and complexity of your systems by selecting your processes.

These are choices that managers should accept and make carefully. They should not assume that "the system is the system" and "things are big and complicated and will always be so".