Thursday, May 5, 2022

C but for Intel X86

Let's step away from current events in technology, and indulge in a small reverie. Let's play a game of "what if" with past technology. Specifically, the Intel 8086 processor.

I will start with the C programming language. C grew out of work on the DEC PDP-7 (where its predecessor B ran) and was developed for the PDP-11. Those processors had some interesting features, and the C language reflects that. One example is the increment and decrement operators (the '++' and '--' operators), which map closely to the auto-increment and auto-decrement addressing modes on the PDP-11.

Suppose someone had developed a programming language for the Intel 8086, just as Kernighan and Ritchie developed C for the PDP-11. What would it look like?

Let's also suppose that this programming language was developed just as the 8086 was designed, or shortly thereafter. (The 8086 was released in 1978.) We're looking at the 8086 (or the 8088, which has the identical instruction set) and not the later processors.

The computing world in 1978 was quite different from today. Personal computers were just entering the market. Apple and Commodore had computers that used the 6502 processor; Radio Shack had computers that used the Z-80.

Today's popular programming languages didn't exist. There was no Python, no Ruby, no Perl, no C#, and no Java. The common languages were COBOL, FORTRAN, BASIC, and assembler. Pascal was available, for some systems. Those would be the reference point for a new programming language. (C did exist, but only in the limited Unix community.)

Just as C leveraged features of the PDP-7 and PDP-11 processors, our hypothetical language would leverage features of the 8086. What are those features?

One feature that jumps out is text strings. The 8086 has string instructions (MOVS, CMPS, SCAS, and the rest, with the REP prefixes) for processing runs of characters, including scanning for a terminating byte. It seems reasonable that an 8086-centric language would support text strings. They might even be a built-in type for the language. (Strings do require memory management, but that is a feature of the run-time library, not the programming language itself.)

The 8086 supports BCD (binary-coded decimal) arithmetic, through its decimal-adjust and ASCII-adjust instructions. BCD math is rare today, but it was common on IBM mainframes and a common way to encode data for exchange with other computers.
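To make the encoding concrete: packed BCD stores one decimal digit per four-bit nibble, two digits per byte (the 8086's DAA and DAS instructions adjust binary addition and subtraction results to preserve this form). A small Python sketch of the idea, purely as illustration:

```python
def to_packed_bcd(n):
    """Pack a non-negative integer into packed BCD: two decimal digits per byte."""
    digits = [int(d) for d in str(n)]
    if len(digits) % 2:
        digits.insert(0, 0)                  # pad to an even digit count
    return bytes((hi << 4) | lo
                 for hi, lo in zip(digits[::2], digits[1::2]))

def from_packed_bcd(data):
    """Unpack packed-BCD bytes back into an integer."""
    n = 0
    for byte in data:
        n = n * 100 + (byte >> 4) * 10 + (byte & 0x0F)
    return n

# 1234 becomes the two bytes 0x12 0x34 -- each nibble holds one decimal digit.
```

Part of the appeal is exactly what those nibbles show: a hex dump of BCD data reads as the decimal number itself, which made interchange with mainframes straightforward.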

The 8086 had a segmented architecture, with four different segments (code, data, stack, and "extra"). Those four segments map well to Pascal's organization of code, static data, stack, and heap. (C and C-derivatives use the same organization.) A language could support dynamic allocation of memory and recursive functions (two things that were not available in COBOL, FORTRAN, or BASIC). And it could also support a "flat" organization like those used in COBOL and FORTRAN, in which all variables and functions are laid out at link time and fixed in position.

There would be no increment or decrement operators.

Who would build such a language? I suppose a natural choice would be Intel, as they knew the processor best. But then, maybe not, as they were busy with hardware design, and had no operating system on which to run a compiler or interpreter.

The two big software houses for small systems (at the time) were Microsoft and Digital Research. Both had experience with programming languages. Microsoft provided BASIC for many different computer systems, and also had FORTRAN and COBOL. Digital Research provided CBASIC (a compiled BASIC), and its founder Gary Kildall had earlier written PL/M (a derivative of IBM's PL/I) for Intel.

IBM would probably not create a language for the 8086. They had no offering that used that processor. The IBM PC would arrive in 1981, and IBM didn't consider it a serious computer -- at least not until people started buying them in large quantities.

DEC, at the time successful with minicomputers, also had no offering that used the 8086. DEC offered many languages, but used their own processors.

Our language may have been developed by a hardware vendor, such as Apple or Radio Shack, but they, like Intel, were busy with hardware and did very little in terms of software.

So it may have been either Microsoft or Digital Research. Both companies were oriented for business, so a language developed by either of them would be oriented for business. A new language for business might be modelled on COBOL, but COBOL didn't allow for variable-length strings. FORTRAN was oriented for numeric processing, and it didn't handle strings either. Even Pascal had difficulty with variable-length strings.

My guess is that our new language would mix elements of each of the popular languages. It would be close to Pascal, but with more flexibility for text strings. It would support BCD numeric values, not only in calculations but also in input-output operations. And it would be influenced by COBOL's verbose style.

We might see something like this:

PROGRAM EXAMPLE
DECLARE
  FILE DIRECT CUSTFILE = "CUST.DAT";
  RECORD CUSTREC
    BCD ACCTNUM PIC 9999,
    BCD BALANCE PIC 9999.99,
    STRING NAME PIC X(20),
    STRING ADDRESS PIC X(20);
  FILE SEQUENTIAL TRANSFILE = "TRANS.DAT";
  RECORD TRANSREC
    BCD ACCTNUM PIC 9999,
    BCD AMOUNT PIC 999.99;
  STRING NAME;
  BCD PAYMENT, TAXRATE;
PROCEDURE INIT
  OPEN CUSTFILE, TRANSFILE;
PROCEDURE MAINLINE
  WHILE NOT END(TRANSFILE) BEGIN
    READ TRANSFILE TO TRANSREC;
  END;
CALL INIT;
CALL MAINLINE;

and so forth. This borrows heavily from COBOL; it could equally borrow from Pascal.

It might have been a popular language. Less verbose than COBOL, but still able to process transactions efficiently. Structured programming from Pascal, but with better input-output. BCD data for efficient storage and data transfer to other systems.

It could have been a contender. In an alternate world, we could be using programming languages derived not from C but from this hybrid language. That might solve some problems (such as buffer overflows) but maybe have given us others. (What problems, you ask? I don't know. I just now invented the language; I'll need some time to find the problems.)


Sunday, May 1, 2022

Apple wants only professional developers

Apple got itself some news this past week: It sent intimidating letters to the developers of older applications. Specifically, Apple threatened to expel old apps -- apps that had not been updated in three years -- from the iTunes App Store. (Or is it the Apple App Store?)

On the surface, this seems a reasonable approach. Apple has received complaints about the organization of the App Store, and the ability (or lack thereof) to find specific apps. By eliminating older apps, Apple can reduce the number of apps in the store and improve the user experience.

But Apple showed a bit of its thinking when it sent out the notice. It specified a grace period of 30 days. If the developers of an app submitted a new version, the app could remain in the App Store.

The 30-day period caught my attention. I think it shows a lack of understanding on Apple's part. It shows that Apple thinks anyone can rebuild and resubmit their app in less than one month.

For professional teams, this seems a reasonable limit. Companies that develop apps should be familiar with the latest app requirements, have the latest tools, and have processes to release a new version of their app. Building apps is a full-time job, and they should be ready to go.

Individuals who build apps "for fun" are in a different situation. For them, building apps is not a full-time job. They probably have a different full-time job, and building apps is a side project. They don't spend all day working on app development, and they probably don't have the latest tools. (They may not even have the equipment to run the compilers and packaging tools that Apple's requirements demand.) For them, the 30-day period is an impossible constraint.

Apple, in specifying that limit, showed that it does not understand the situation of casual developers. (Or, more ominously, deliberately chose to make life difficult for casual developers. I see no evidence of hostility, so I will credit this limit to ignorance.)

In either case (ignorance or malice) Apple thinks -- apparently -- that developers can spin out new versions of their apps quickly. In the long term, this will become true. Casual developers will give up on Apple and stop developing their apps. (They may also drop Apple's products. Why buy a phone that won't run their app?)

As casual developers leave the field, only the serious developers will remain. Those will be the commercial developers, or the professional developers who are paid by corporations.

That sets the Apple environment as a commercial platform, one that serves commercially-developed apps for commercial purposes. The Apple platform will have online banking, email, commercial games, video streaming, location tracking for auto insurance, and other apps in the realm of commerce. But it will have very few fun apps, very few apps built by individuals for enjoyment. Every app will have a purpose, and that purpose will be to make money, either directly or indirectly.

Yet it doesn't have to be this way. Apple has an alternative, if they want it.

Right now, the Apple App Store is a single store. Apple could change that, splitting the App Store into multiple stores, or into a single store with sections. One section could hold the commercial apps and another the amateur apps. The commercial section would carry the tighter restrictions that Apple wants to ensure a good experience for users (updated apps, recent APIs, etc.), and the amateur section would hold the casually built apps. But the two sections would not be equal: apps in the commercial section would be allowed to use API calls for payments, and apps in the amateur section would not. (Apple could limit other APIs too, such as biometrics or advertising.)

A two-tiered approach to the App Store gives the developers of iOS apps a choice: play in the pro league, or play in the amateur league. It may be an approach worth considering.

Wednesday, April 20, 2022

Advances in programming come from restraints

Advances in programming come from, to a large extent, advances in programming languages. And those advances in programming languages, unlikely as it seems, are mostly not expanded features but restrictions.

That advances come from restrictions seems counter-intuitive. How do fewer choices make us better programmers?

Let's look at some selected changes in programming languages, and how they enabled better programming.

The first set of restrictions was structured programming. Structured programming introduced the concepts of the IF/THEN/ELSE statement and the WHILE loop. More importantly, structured programming banished the GOTO statement (and its cousin, the IF/GOTO statement). This restriction was an important advancement for programming.

A GOTO statement allows for arbitrary flows of control within programs. Structured programming's IF/THEN/ELSE and WHILE statements (and WHILE's cousin, the FOR statement) force structure onto programs. Arbitrary flows of control were not possible.

The result was programs that were harder to write but easier to understand, easier to debug, and easier to modify. Structured programming -- the loss of GOTO -- was an advancement in programming.
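The effect is easy to see in any structured language. A minimal sketch in Python (which never had a GOTO to banish): the flow of a simple search is built only from a WHILE loop and an IF test, so control enters each construct at the top and leaves at the bottom.

```python
def find_first_negative(values):
    """Return the index of the first negative value, or -1 if there is none."""
    i = 0
    while i < len(values):       # the only loop: entry at top, exit at bottom
        if values[i] < 0:
            return i             # structured exit for the "found" case
        i += 1
    return -1                    # structured exit for the "not found" case
```

With arbitrary GOTOs, the same search could be entered mid-loop or exited to any label, and a reader would have to trace every jump to be sure of the program's state at each line.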

A similar advance occurred with object-oriented programming. Like structured programming, object-oriented programming was a set of restrictions coupled with a set of new features. In object-oriented programming, those restrictions were encapsulation (hiding data within a class) and the limiting of functions (requiring an instance of the class to execute). Data encapsulation protected data from arbitrary changes; one had to go through functions (in well-designed systems) to change the data. Instance functions were limited to executing on instances of the class, which meant that one had to *have* an instance of the class to call the function. Functions could not be called at arbitrary points in the code.
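A sketch of those two restrictions in Python (the class and its names are mine, purely for illustration): the balance can be changed only by going through a function, and that function can run only on an instance of the class.

```python
class Account:
    def __init__(self, balance):
        self._balance = balance      # underscore marks the data as internal
                                     # (a convention in Python; enforced in
                                     # stricter languages)

    def deposit(self, amount):
        """The only sanctioned way to change the balance."""
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def balance(self):
        return self._balance

acct = Account(100)
acct.deposit(50)    # one must *have* an instance; there is no free-floating deposit()
```

The validation inside deposit() is the payoff: because every change funnels through one function, the class can guarantee an invariant (here, no non-positive deposits) that direct field access could silently violate.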

Both structured programming and object-oriented programming advanced the state of the art for programming. They did it by restricting the choices that programmers could make.

I'm going to guess that future advancements in programming will also come from restrictions in new programming languages. What could those restrictions be?

I have a few ideas.

One idea is immutable objects. This idea has been tested in the functional programming languages. Those languages often have immutable objects, objects which, once instantiated, cannot change their state.

In today's object-oriented programming languages, objects are often mutable. They can change their state, either through functions or direct access of member data.

Functional programming languages take a different view. Objects are immutable: once formed they cannot be changed. Immutable objects enforce discipline in programming: you must provide all of the ingredients when instantiating an object; you cannot partially initialize an object and add things later.

I would like to see a programming language that implements immutable objects. But not perfectly -- I want to allow for some objects that are not immutable. Why? Because the shift to "all objects are immutable" is too much, too fast. My preference is for a programming language to encourage immutable designs and require extra effort to design mutable objects.
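Python's frozen dataclasses give a taste of the hybrid I'm describing (the Point class here is my own toy example): immutability is declared for the class, every field must be supplied at construction, and "changing" an object means building a new one.

```python
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class Point:
    x: float
    y: float

    def moved(self, dx, dy):
        # "Mutation" produces a new object; the original is untouched.
        return Point(self.x + dx, self.y + dy)

p = Point(1.0, 2.0)
q = p.moved(3.0, 0.0)            # q is a new Point; p is unchanged

try:
    p.x = 5.0                    # assignment to a frozen field...
except FrozenInstanceError:
    pass                         # ...raises instead of mutating
```

Python gets the default backwards for my taste -- frozen=True must be requested, rather than mutability requiring extra effort -- but it does show that one language can offer both forms, with one of them demanding a deliberate choice.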

A second idea is a limit to the complexity of expressions.

Today's programming languages allow for any amount of complexity in an expression. Expressions can be simple (such as A + 1) or complex (such as A + B/C - sqr(B + 7) / 2), or worse.

I want expressions to be short and simple. This means breaking a complex expression into multiple statements. The only language I know of that placed restrictions on expressions was early FORTRAN, and then only for array subscripts. (Subscripts were limited to forms such as C*V+K, where C and K were integer constants, V an integer variable, and the C* and +K parts optional.)

Perhaps we could design a language that limited the number of operations in an expression. Simpler expressions are, well, simpler, and easier to understand and modify. Any expression that contained more than a specific number of operations would be an error, forcing the programmer to refactor the expression.
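Using the complex expression above (and reading its sqr as a square root, as in BASIC), a sketch in Python of what such a refactoring looks like: the dense form and the stepwise form compute the same value, but the second names each intermediate result.

```python
import math

def adjusted_dense(a, b, c):
    # One dense expression -- more operations than a restrictive
    # language might allow.
    return a + b / c - math.sqrt(b + 7) / 2

def adjusted_stepwise(a, b, c):
    # The same computation, broken into short, named steps.
    ratio = b / c
    penalty = math.sqrt(b + 7) / 2
    return a + ratio - penalty
```

The stepwise form costs a few extra lines, but each intermediate value now has a name a reader can check, and a debugger can inspect, on its own.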

A third idea is limits on the size of functions and classes. Large functions and large classes are harder to understand than small functions and small classes. Most programming languages have style-checkers available, and most style-checkers issue warnings for long functions or for classes with lots of functions.

I want to strengthen those warnings and change them to errors. A function that is too long (I'm not sure how long is too long, but that's another topic) is an error -- and the compiler or interpreter rejects it. The same applies to a class: too many data members, or too many functions, and you get an error.

But like immutable objects, I will allow for some functions to be larger than the limit, and some classes to be more complex than the limit. I recognize that some classes and functions must break the rules. (But the mechanism to allow a function or class to break the rules must be a nuisance, more than a simple '@allowcomplex' attribute.)
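A compiler could make this check much the way today's linters measure function length. A sketch in Python using the ast module; the 20-line limit is arbitrary, and I've used the post's '@allowcomplex' name as the escape hatch even though, as argued above, a real language should make the exemption more of a nuisance than a single attribute:

```python
import ast

MAX_FUNCTION_LINES = 20          # arbitrary threshold for this sketch

def oversized_functions(source):
    """Return (name, line_count) for each function that exceeds the limit
    and does not carry the hypothetical @allowcomplex decorator."""
    offenders = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            exempt = any(isinstance(d, ast.Name) and d.id == "allowcomplex"
                         for d in node.decorator_list)
            length = node.end_lineno - node.lineno + 1
            if length > MAX_FUNCTION_LINES and not exempt:
                offenders.append((node.name, length))
    return offenders
```

A real compiler would reject the offenders outright; here they are merely reported, which is as far as a sketch needs to go.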

Those are the restrictions that I think will help us advance the art of programming. Immutable objects, simple expressions, and small functions and classes.

Of these ideas, I think the immutable objects will be the first to enter mainstream programming. The concept has been implemented, some people have experience with it, and the experience has been positive. New languages that combine object-oriented programming with functional programming (much like Microsoft's F#, which is not so new) will allow more programmers to see the benefits of immutable objects.

I think programming will be better for it.

Tuesday, March 22, 2022

Apple has a software problem

Apple has a problem. Specifically, Apple has a software problem. More specifically, Apple has a problem between its hardware and its software. The problem is that Apple's hardware is getting bigger and more powerful at a much faster rate than its software.

Why is better hardware a problem? By itself, better hardware isn't a problem. But hardware doesn't exist by itself -- we use hardware with software. Faster, more powerful hardware requires bigger, more capable software.

One might think that fast hardware is a good thing, regardless of software. Faster hardware runs software faster, right? And faster is always a good thing, right? Well, not always.

For software that operates on a large set of data, such that the user is waiting for the result, yes, faster hardware is better. But faster hardware is only better if the user is waiting. Software that operates in "batch mode" or with limited interaction with the user is improved with better hardware. But software that interacts with the user, software that must wait for the user to do something, isn't necessarily improved.

Consider the word processor, a venerable tool that has been with us since the introduction of the personal computer. Word processors spend most of their time waiting for the user to press a key. This was true even with computers from prior to the IBM PC. In the forty-odd years since, hardware has gotten much, much faster but word processors have not gotten much more complicated. (There was a significant increase in complexity when we shifted from DOS and its character-based display and printing to Windows and graphics-based display and printing, but very little otherwise.)

The user experience for word processors has changed little in that time. Faster hardware has not improved the experience. If we limited computers to word processors, there would be no need for a processor more powerful than the Intel 80386. By extension, if you needed a computer for word processing (and nothing else) today's bottom-of-the-line, cheap, minimal computer would be more than enough for you. There is no point in spending on a premium computer (or even a mediocre one) because the minimal computer can do the job adequately.

The same logic applies to spreadsheets. And e-mail. And web browsers. Computers have gotten faster more quickly than these programs (and their data) have grown.

The computers I use for my day-to-day and even unusual tasks are old PCs, ranging from five to twenty years in age. All of them are fast enough for what I need to do. An aged Dell Inspiron N5010 runs Windows 10 and lets me use Zoom and Teams to join virtual meetings. I could replace it with a modern laptop, but the experience would be the same! Why should I bother?

A premium computer is needed only for those tasks that perform complex operations on large sets of data. And this is where Apple fails to provide the tools to justify its powerful (and expensive) hardware.

Apple is focussed on hardware, and it does a terrific job of designing and manufacturing powerful computers. But software is another story. Apple develops applications and then seems to lose interest in them. It built Pages, Numbers, and Keynote, and has made precious few improvements -- other than recompiling for ARM processors, or adding support for things like AirPlay. It hasn't added features.

The same goes for applications such as GarageBand, iTunes, and FaceTime. Even Xcode.

Apple has even let utilities such as grep and sed in MacOS (excuse me, "mac os". Or is it "macos"?) age with no updates. The corresponding GNU utilities have been modified and improved in various ways, to the point that developers now recommend the installation of the GNU utilities on Apple computers.

Apple may be waiting for others to build the applications that will take advantage of the latest Mac computers. I'm not sure that many will want to do that.

Building applications to leverage the new Apple processors may seem a no-brainer. But there are a number of disincentives.

First, Apple may build their own version of the application, and compete with the vendor. Independent vendors may be reluctant to enter a market when Apple is a possible competitor.

Second, developing applications to take advantage of the M1 architecture requires a lot of time and effort. The application must be multithreaded -- single-threaded applications cannot fully leverage the multiple cores on the M1. Designing, coding, and testing such an application is a lot of work.

Third, the market is limited. Applications developed to take advantage of the M1 processor are, well, developed for the M1 processor. They won't run on Windows PCs. You can't cross-build to run on Windows PCs, because PC processors are much slower than the Apple processors. The set of potential customers is limited to those who have the upper-end Apple computers. That's a small market (compared to Windows) and the potential revenue may not cover the development costs.

That is an intimidating set of disincentives.

So if Apple isn't building applications to leverage the upper-end Macs, and third-party developers aren't building applications to leverage upper-end Macs, then ... no one is building them! The result being that there are no applications (or very few) to take advantage of the higher-end processors.

I expect sales of the M1-based Macs to be robust, for a short time. But as people realize that their experience has not improved (except, perhaps for "buttery smooth" graphics) they will hesitate for the next round of new Macs (and iPads, and iPhones). As customers weigh the benefits and costs of new hardware, some will decide that their current hardware is good enough. (Just as my 15-year-old PCs are good enough for my simple word processing and spreadsheet needs.) If Apple introduces newer systems with even faster processors, more people will look at the price tags and decide to wait before upgrading.

Apple is set up to learn an important lesson: High-performance hardware is not enough. One needs the software to offer solutions to customers. Apple must work on its software.


Monday, March 14, 2022

Developer productivity is not linear

The internet has recently castigated an anonymous CEO who wanted his developers to work extra hours (without an increase in pay, presumably). Most of the reactions to the post have focussed on the issue of fairness. (And a not insignificant number of responses have been... less than polite.)

I want to look at a few other aspects of this issue.

I suspect that the CEO thinks he wants employees to have the same enthusiasm for the company that he himself has. He may want employees to put in extra hours and constantly think about improvements and new ideas, just as he does. But while the CEO (who is probably the founder) may be intrinsically motivated to work hard, employees may not share that same intrinsic motivation. And the CEO may be financially motivated to make the company as successful as possible (due to his compensation and stock holdings), but employees are typically compensated through wages and nothing else. (Some companies do offer bonuses; we have no information about compensation in this internet meme.) A salary alone is not enough to make employees work longer hours.

But it's not really motivation, or enthusiasm, or a positive attitude that the CEO wants.

What the CEO wants, I think, is an increase in productivity. Making developers work extra, uncompensated hours is, in one sense, an increase in productivity. The developers will contribute more to the project, at no additional cost to the company. Thus, the company sees an increase in productivity (an increase in output with no increase in inputs).

Economists will point out that the extra work does have a cost, and that cost is borne by the employees. Thus, while the company doesn't pay for it, the cost does exist. It is an "externality" in the language of economists.

But these are small mistakes. The CEO is making bigger mistakes.

One big mistake is the thinking that developer productivity is linear. That is, thinking that if a developer can do so much (let's call it N) in one hour, then that same developer can do 2N in 2 hours, 4N in 4 hours, and so forth. This leads to the conclusion that by working an extra 2 hours each day (from 8 hours to 10 hours) then the developer provides 10N units of productivity instead of 8N units. This is not true; developers get tired like anyone else and their productivity wanes as they tire. Even if all of the hours are paid, developer productivity can vary.

Beyond simply tiring, the productivity of a developer will vary throughout a normal workday, and good developers recognize the times that are best for the different tasks of analysis, coding, and debugging.

A second big mistake is thinking that the quality of a developer's work is constant, and unaffected by the number of hours worked. This is similar to the first mistake, but somewhat different. The first mistake is about quantity; this mistake is about quality. The act of writing (or fixing) a computer program is a creative one, requiring a programmer to draw on knowledge and experience to devise correct, workable, and efficient solutions. Programmers who are tired make mistakes. Programmers who are under stress from personal issues make mistakes. (Programmers who are under stress from company-generated issues also make mistakes.) Asking programmers to work longer hours increases mistakes and reduces quality.

If one wants to increase the productivity of developers, there are multiple ways to do it. Big gains in productivity can come from automated tests, good tools (including compilers, version control, and debuggers), clear and testable requirements, and respect in the office. But these changes require time to implement, some require expenditures, and most have delayed returns on the investments.

Which brings us to the third big mistake: That gains in productivity can occur rapidly and for little to no cost. The techniques and tools that are effective at improving productivity require time and sometimes money.

Asking programmers to work longer hours is easy to implement, can be implemented immediately, and requires no expenditure (assuming you do not pay for the extra time). It requires no additional tools, no change to process, and no investment. On the surface, it is free. But it does have costs -- it costs you the goodwill of your team, it can cause an increase in errors in the code, and it can encourage employees to seek employment elsewhere.

There is little to be gained, I think, by saying impolite things to the CEO who wants longer hours from his employees. Better to look to our own houses, and our own practices and procedures. Are we using good tools? How can we improve productivity? What changes will it require?

Improving our productivity is a big task -- as big as we choose to make it. Let's use proven techniques and respect our employees and colleagues. I think that will give us better results.

Tuesday, March 8, 2022

About those new Macs

Apple's presentation today talks about three new products: the new iPhone SE, the new iPad Air, and the new Mac Studio desktop (with a new display, so maybe that is four new products).

The iPhone SE is pretty much what you would expect in a low-end Apple phone. It still uses the A-series chips.

The iPad Air uses an M-series chip, and that is interesting. Using the M-series chip brings the iPad Air closer to the Mac line of computers. I expect that, in the somewhat-distant future, Apple will replace the iMac with the iPad line. Apple already lets one run iPhone and iPad apps on MacBooks; some day Apple may let an M-series Mac run apps for an M-series iPad.

The new M1 Ultra chip and the new Apple Studio desktop and display received the most time in the presentation.

The M1 Ultra chip is a pairing of two M1 Max chips. Instead of simply placing two M1 Max chips on a board, Apple has connected them at the chip level. The connection allows for rapid transfer of data, and results in a powerful processor.

Interestingly, the M1 Ultra design does not follow the pattern of the M1, M1 Pro, and M1 Max chips, which are expansions of the base model. Apple may have run into difficulties in building a single-chip successor to the M1 Max. The pairing of two M1 Max chips feels like a compromise, getting a faster processor on an aggressive schedule.

The Mac Studio is a computer that I was expecting: an expanded version of the Mac Mini. The Mac Studio has a beefier processor, more memory, and more ports than the Mac Mini, but the same basic design. The Mac Studio is simply a big brother to the Mac Mini, and nothing like the Mac Pro.

Apple hinted at a replacement for the Mac Pro, and my prediction for its successor stands. That is, I expect the replacement for the Mac Pro to be a more powerful Mac Mini (or a more powerful Mac Studio). It will not be like the current Mac Pro with slots for GPU cards. (There is no point, as the built-in GPU of the M1 processor provides more computational power than an external card.) It will most certainly have a processor more powerful than the M1 Ultra. That processor may be a double M1 Ultra (or four M1 Pro processors ganged together); what Apple calls it is anyone's guess. ("M1 Double Ultra"? "M1 Ultra Max"? "M1 Ultravox"?)

The new Mac Studio is a computer for specific purposes. Apple's presentation focussed on video applications, which are of interest to a limited market. For typical PC users who work in the pedestrian world of documents and spreadsheets, the new Mac Studio offers little -- the low-end Mac systems are capable, and additional processing power is simply wasted as the computer waits for the user.

Apple's presentation, and its concentration on video applications, shows a blind spot in their thinking. Apple made no announcements about changes to operating systems or application software. Another company, when announcing more powerful hardware, would also introduce more powerful software that takes advantage of that hardware. Apple could have introduced improved versions of applications such as Pages and Numbers, or features to share processing among Apple devices, or AI-like capabilities for improved security and privacy... but they did not. Perhaps they have those changes lined up for a future presentation, but I suspect that they simply don't have them. Their thinking seems to be to wow their customers with hardware alone. That, I think, may be a mistake.

Tuesday, March 1, 2022

With Chrome OS Flex, Look Before You Leap

Google made news with its "Chrome OS Flex" offering, which turns a PC into a Chromebook.

Some like the idea, seeing a way to reduce licensing costs. Others like the idea because it offers simpler administration. Yet others see it as a way of using older PCs that cannot migrate to Windows 11. 

Before committing to a conversion, consider:

Chrome OS Flex may not run on your PCs

Chrome OS Flex works on some PCs but not all PCs. Google has a list of supported PCs, and the list is rather thin. Google rates target PCs with one of three classifications: "Certified", "Expect minor issues", and "Expect major issues". Google does not explain the difference between major and minor, but let's assume that major issues would be such that the Chrome experience would be poor and not productive.

Microsoft has a large knowledge base of hardware and device drivers. Google may be building such a knowledge base, but its current set of knowledge is much smaller than Microsoft's. The result is that Chrome OS Flex can run on a limited number of PC models.

Your employees may dislike the idea

The introduction of new technology is tricky from a management perspective. Some employees will welcome Chrome OS Flex, and others will want to remain on the old, familiar system. If your roll-out is limited, some of the employees in the "stay on the old OS" group will feel relieved, and others may feel left out.

My recommendation is to communicate your plans well in advance, and focus on the ideas of efficiency and reduced costs. Avoid the notion of Chrome OS as a reward or a perk, and talk about it as simply another tool for the office.

Google is not Microsoft

Switching from Windows to Chrome OS Flex means changing a core relationship from Microsoft to Google. Microsoft has a long history of supporting technologies and products; Google has the opposite. (There are web sites dedicated to the "Google graveyard".)

Google may drop the Chrome OS Flex offering at any time, and not provide a successor product. (If they do, your best path forward may be to replace the PCs running Chrome OS Flex with Chromebooks, which should provide the same capabilities as the PCs.)

Look before you leap

My point is not to dissuade you from Google's Chrome OS Flex offering. Rather, I suggest that you consider carefully the benefits and risks of such a move. As part of your evaluation, I suggest a pilot project, moving some PCs (and employees) to the new OS. I also suggest that you compile an inventory of applications that run locally -- that is, on your PCs, not on the web or in the cloud. Those applications cannot run on Chrome OS Flex, or on regular Chromebooks.

It may be possible to replace local applications with web-based applications, or cloud-based applications, but such replacements are projects themselves. You may want to start with a pilot project for Chrome OS Flex, and then migrate PC-based applications to the web or cloud, and then migrate other employees to Chrome OS. Or not -- a hybrid solution with some PCs running Windows (or mac os) and other PCs running Chrome OS Flex is possible.

Whatever you choose to do, I suggest that you think, communicate, evaluate, and then decide.