Tuesday, June 30, 2015
Does Apple need a browser?
Apple has Safari, but maintains it poorly. Is Apple serious about its browser?
Come to think of it, why does Apple need a browser?
I can think of several reasons:
Ego: Apple has to have a browser to satisfy its ego, to meet some internal need.
Freudian? Yes. Reasonable? No.
Apple is a smart company. It doesn't invest in products to suit its ego. It invests to improve revenue.
Keeping up with Microsoft: Microsoft has a browser. Apple wants, at some level, to compete with Microsoft. Therefore, Apple needs a browser.
Doubtful. Apple doesn't have to match Microsoft one-for-one on features. They never have.
Superlative web experience: Apple knows Mac OS X and iOS better than anyone. They, and only they, can build the browser that provides the proper user experience.
Possible. But only if Apple thinks that a good web experience is necessary.
Avoid dependence on others: Mac OS X (and iOS), despite Apple's most fervent wishes, still needs a browser. Without an Apple browser, Apple would have to rely on another browser. Perhaps Google Chrome. Perhaps Mozilla Firefox. But relying on Google is risky -- Google and Apple are not the best of friends. Relying on Mozilla is also risky, but in another sense: Mozilla may not be around much longer, thanks to the popularity of other browsers (of which Safari is one).
All of these reasons have one thing in common: the assumption that Apple considers the web important.
I'm not sure Apple thinks that. It may be that Apple thinks, in the long run, that the web is unimportant. Apple focuses on native apps, not HTML5 apps. The dominant app design processes data on the local device and uses the cloud for storage, but nothing more. Apple doesn't provide cloud services for computing. Apple has no service that matches Windows Azure or Google Compute Engine. In Apple's world, devices compute and the cloud stores.
* * * * *
The more I think about it, the more I convince myself that Apple doesn't need a browser. Apple has already delayed several improvements to Safari. Maybe Apple thinks that Safari is good enough for the rest of us.
In the Apple ecosystem, they may be right.
Sunday, June 28, 2015
Arrogance from Microsoft, and from Apple
Microsoft's design for Windows 8 has gotten a bit of pushback from its users. One notion I have heard (from multiple sources) is that Microsoft was 'arrogant' in their assumptions about the features users want.
It's true that Microsoft did not consult with all of their users when they designed Windows 8. (They certainly did not ask my opinion.) Is this behavior arrogant? Perhaps.
I cannot help but compare Microsoft's behavior to Apple's, and find that they operate in similar styles. Apple has changed the user interface for its operating systems, without consulting users. Yet I hear few complaints about Apple's arrogance.
Why such a difference in response from users?
Some might say that Apple's changes were "good", while Microsoft's changes were "bad". I don't see it that way. Apple made changes, and Apple users accepted them. Microsoft made changes, and many Microsoft users rejected them. Assuming that the changes were neither good nor bad, how to explain the difference?
I have a theory. It has to do with the populations of users.
Apple has a dedicated user base. People choose to use Apple equipment, and it is a conscious choice. (Some might say a rebellious choice.) Apple is not the mainstream equipment for office or home use, and one does not accidentally purchase an Apple product. If you are using Apple equipment, it is because you want to use it. You are, to some degree, a fan of Apple.
Microsoft has a different user base. Their users consist of two groups: people who want to use Microsoft equipment and people who use it because they have to. The former group, I suspect, is rather small -- perhaps the size of the Apple fan group. The latter group consists of those who use Microsoft software not because they want to, but because they have to. Perhaps they work in an office that has standardized on Microsoft Windows. Perhaps they purchased a PC and Windows came with it. Perhaps they use it because all of their family uses it. Regardless of the reason, they use Windows (and other Microsoft products) not out of choice.
People who choose to use Apple equipment have a favorable opinion of Apple. (Why else would they choose that equipment?) When Apple introduces a change, the Apple fans are more accepting of the change. They trust Apple.
The same can be said for the Microsoft fans.
The non-fans, the people who use Microsoft software through no choice of their own, are not in that situation. They tolerate Microsoft software. They do not (necessarily) trust Microsoft. But when changes are introduced, they are less accepting.
The Microsoft user base, as a result of Microsoft's market dominance, has a lot of non-volunteers. The Apple user base is much smaller and is almost entirely volunteers.
The demographics of Microsoft and Apple user bases are changing. Apple is gaining market share; Microsoft is losing. (The numbers are complicated, due to the expanding role of mobile and cloud technologies.) If these trends continue, Apple may find that its user base has a higher percentage of "conscripts" than in the past, and that they may be less than happy with changes. Microsoft is in the opposite position: as people shift from Microsoft to other platforms, the percentage of "conscripts" will decline, meaning that the percentage of fans will increase. Microsoft may see a smaller market share with more loyal and trusting customers.
These changes are small and will occur over time; significant shifts will take years, possibly decades. Because they happen slowly, Microsoft and Apple may be caught by surprise. For Microsoft, it may be a pleasant surprise. Apple may find that its customer base, after many years of loyal following, starts becoming grumpy.
Monday, June 22, 2015
Static languages for big and dynamic languages for small
I will admit, up front, that I have more experience with statically-typed languages (C, C++, Java, C#) than with dynamically-typed languages (Python, Ruby). I learned C before any of the others (but after BASIC, Fortran, and Pascal) and learned Python and Ruby most recently. My opinions are biased, shaped by my experience, or lack thereof, with specific programming languages.
Now that we have the disclaimer out of the way...
I have found that I write programs differently in dynamically-typed languages than I do in statically-typed languages.
There are many differences between the two language sets. C++ is a big jump up from C. Java and C# are, in my mind, a bit simpler than C++. Python and Ruby are simpler than Java and C# -- yet more powerful.
Putting language capabilities aside, I have examined the programs I have written. I have two distinct styles: one for statically-typed languages and another for dynamically-typed languages. The big difference between the two? The names of things.
Programs in any language need names. Even the old Dartmouth BASIC needs names for variables, and one can argue that with the limited namespace (one letter and one optional digit) one must give more thought to names in BASIC than in any other language.
My style for statically-typed languages is to name variables and functions with semantic names. Names for functions are usually verb phrases, describing the action performed by the function. Names for variables usually describe the data they contain.
My style for dynamically-typed languages is different. Names for functions typically describe the data structure that is returned by the function. Names for variables typically describe the data structure contained by the variable (or referenced by it, if you insist).
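To make the contrast concrete, here is a small sketch of the two styles, using Python for both (type hints stand in for the statically-typed case). The function and variable names, and the payroll example, are invented for illustration.

```python
# Statically-typed style: verb-phrase function names; variables named for
# the data they hold.
def calculate_total_payroll(employees: list[dict]) -> float:
    total_payroll = 0.0
    for employee in employees:
        total_payroll += employee["salary"]
    return total_payroll

# Dynamically-typed style: names that describe the data structure returned
# or contained.
def salary_list(employee_dicts):
    return [d["salary"] for d in employee_dicts]

staff = [{"name": "Ann", "salary": 50000.0}, {"name": "Bob", "salary": 60000.0}]
print(calculate_total_payroll(staff))   # 110000.0
print(sum(salary_list(staff)))          # 110000.0
```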
Perhaps this difference is due to my familiarity with the older statically-typed languages. Perhaps it is due to the robust IDEs for C++, Java, and C# (for Python and Ruby I typically use a simple text editor).
I find dynamically-typed languages much harder to debug than statically-typed languages. Perhaps this is due to the difference in tools. Perhaps it is due to my unfamiliarity with dynamically-typed languages. But perhaps it is simply easier to analyze and debug statically-typed languages.
If that is true, then I can further state that it is better to use a statically-typed language for large projects. It may also be better to use a dynamically-typed language for smaller programs. I'm not sure how large 'large' is in this context, nor am I sure how small 'small' is. Nor do I know the cross-over point, at which it is better to switch from one to the other.
But I think it is worth thinking about.
Actually, I tend to write FORTRAN in all programming languages. But that is another matter.
Sunday, June 21, 2015
More and smaller data centers for cloud
We seem to repeat lessons of technology.
The latest lesson is one from the 1980s: The PC revolution. Personal computers introduced the notion of smaller, numerous computers. Previously, the notion of computers revolved around mainframe computers: large, centralized, and expensive. (I'm ignoring minicomputers, which were smaller, less centralized, and less expensive.)
The PC revolution was less a change from mainframes to PCs and more a change in mindset. The revolution made the notion of small computers a reasonable one. After PCs arrived, the "space" of computing expanded to include mainframes and PCs. Small computers were considered legitimate.
That lesson -- that computing can come in small packages as well as large ones -- can be applied to cloud data centers. The big cloud providers (Amazon.com, Microsoft, IBM, etc.) have built large data centers. And large is an apt description: enormous buildings containing racks and racks of servers, power supply distribution units, air conditioning... and more. The facilities may vary between the players: the hypervisors, operating systems, and administration systems are all different among them. But the one factor they have in common is that they are all large.
I'm not sure that data centers have to be large. They certainly don't have to be monolithic. Cloud providers maintain multiple centers ("regions", "zones", "service areas") to provide redundancy in the event of physical disasters. But aside from the issue of redundancy, it seems that the big cloud providers are thinking in mainframe terms. They build large, centralized (and expensive) data centers.
Large, centralized mainframe computers make sense for large, centralized mainframe programs.
Cloud systems are different from mainframe programs. They are not large, centralized programs. A properly designed cloud system consists of small, distinct programs tied together by data stores and message queues. A cloud system becomes big by scaling -- by increasing the number of copies of web servers and applications -- and not by growing a single program or single database.
A large cloud system can exist on a cloud platform that lives in one large data center. For critical systems, we want redundancy, so we arrange for multiple data centers. This is easy with cloud systems, as the system can expand by creating new instances of servers, not necessarily in the same data center.
A large cloud system doesn't need a single large data center, though. A large cloud system, with its many instances of small servers, can just as easily live in a set of small data centers (provided that there are enough servers to host the virtual servers).
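Here is a minimal sketch of that idea: a "large" system is really many small instances, and those instances can be spread across several small data centers just as easily as across one big one. The services, instance counts, and data center names below are all hypothetical.

```python
from itertools import cycle

def place_instances(service_counts, data_centers):
    """Assign each instance of each service to a data center, round-robin."""
    placement = {dc: [] for dc in data_centers}
    targets = cycle(data_centers)
    for service, count in service_counts.items():
        for i in range(count):
            placement[next(targets)].append(f"{service}-{i}")
    return placement

system = {"web": 8, "api": 6, "worker": 4}               # 18 small servers
small_centers = ["dc-east-1", "dc-east-2", "dc-west-1"]  # three small sites

for dc, instances in place_instances(system, small_centers).items():
    print(dc, len(instances), instances)
```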
I think we're in for an expansion of mindset, the same expansion that we saw with personal computers. Cloud providers will supplement their large data centers with small- and medium-sized ones.
I'm ignoring two aspects here. One is communications: network transfers are faster within a single data center than across multiple centers. But how many applications are that sensitive to time? The other aspect is the economics of smaller data centers: it is probably cheaper, on a per-server basis, to build large data centers. Small data centers will have to take advantage of something, like an existing small building that requires no major construction.
Cloud systems, even large cloud systems, don't need large data centers.
Sunday, June 14, 2015
Data services are more flexible than files
Data services provide data. So do files. But the two are very different.
In the classic PC world ("classic" meaning desktop applications), the primary storage mechanism is the file. A file is, at its core, a bunch of bytes. Not just a random collection of bytes, but a meaningful collection. That collection could be a text file, a document, a spreadsheet, or any one of a number of possibilities.
In the cloud world, the primary storage mechanism is the data service. That could be an SQL database, a NoSQL database, or a web service (a data service). A data service provides a collection of values, not a collection of bytes.
Data services are active things. They can perform operations. A data service is much like a query in an SQL database. (One may think of SQL as a data service, if one likes.) You can specify a subset of the data (either columns or rows, or both), the sequence in which the data appears (again, either columns or rows, or both), and the format of the data. For sophisticated services, you can collect data from multiple sources.
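To make that analogy concrete, here is a minimal, self-contained example of SQL acting as a data service: the query -- not the application code -- selects the columns, filters the rows, and sets the sequence. The table and its contents are invented for the example.

```python
import sqlite3

# An in-memory database standing in for a real data store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "Ann", 20.0), (2, "Bob", 15.5), (3, "Ann", 42.0)])

# Ask for exactly what is needed: a subset of columns, a subset of rows,
# in a chosen order.
rows = conn.execute(
    "SELECT customer, total FROM orders WHERE total > ? ORDER BY total DESC",
    (16,)
).fetchall()

print(rows)   # [('Ann', 42.0), ('Ann', 20.0)]
```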
Data services are much more flexible and powerful than files.
But that's not what is interesting about data services.
What is interesting about data services is the mindset of the programmer.
When a programmer is working with data files, he must think about what he needs, what is in the file, and how to extract what he needs from the file. The file may have extra data (unwanted data rows, or perhaps undesired headings and footings). The file may have extra columns of data. The data may be in a sequence different from the desired sequence. The data may be in a format that is different from what is needed.
The programmer must compensate for all of these things, and write code to handle the unwanted data or the improper formats. Working with files means writing code to match the file.
In contrast, data services -- well-designed data services -- can format the data, filter the data, and clean the data for the programmer. Data services have capabilities that files do not; they are active and can perform operations.
A programmer using files must think "what does the file provide, and how can I convert it to what I need?"; a programmer using data services thinks "what do I need?".
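A small sketch of the two mindsets, side by side. The CSV layout, the field names, and the data_service() helper below are all hypothetical; the helper is a local stub standing in for a real service that filters and shapes the data before it reaches the caller.

```python
import csv, io

raw = "id,customer,total,notes\n1,Ann,20.00,first\n2,Bob,15.50,\n3,Ann,42.00,repeat\n"

# File mindset: write code to match the file -- skip unwanted columns,
# convert formats, and filter rows yourself.
wanted = []
for row in csv.DictReader(io.StringIO(raw)):
    total = float(row["total"])                  # convert the format
    if total > 16:                               # filter the rows
        wanted.append((row["customer"], total))  # keep only the needed columns

# Data-service mindset: state what you need; the service does the rest.
def data_service(columns, where):
    records = [{"id": 1, "customer": "Ann", "total": 20.0},
               {"id": 2, "customer": "Bob", "total": 15.5},
               {"id": 3, "customer": "Ann", "total": 42.0}]
    return [tuple(r[c] for c in columns) for r in records if where(r)]

result = data_service(columns=("customer", "total"),
                      where=lambda r: r["total"] > 16)

print(wanted)   # [('Ann', 20.0), ('Ann', 42.0)]
print(result)   # [('Ann', 20.0), ('Ann', 42.0)]
```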
With data services, the programmer can think less about what is available and think more about what has to be done with the data. If you're a programmer or a manager, you understand how this change makes programmers more efficient.
If you're writing code or managing projects, think about data services. Even outside of the cloud, data services can reduce the programming effort.
Monday, June 8, 2015
OS X is special enough to get names
Apple has several product lines, and two operating systems: Mac OS and iOS. (Or perhaps three, with Apple Watch OS being the third.) The operating systems have different origins, different designs, and, interestingly, different marketing. Releases of iOS and WatchOS are numbered; releases of Mac OS are named.
Why this distinction? Why should releases of Mac OS be graced with names ("Panther", "Tiger", "Mavericks", "El Capitan") while other operating systems are limited to plain numbers?
The assignment of names to Mac OS is a marketing decision. Apple clearly believes that the users of Mac OS want names, while the users of iOS and WatchOS do not. They may be right.
The typical user of iOS and WatchOS is an ordinary, non-technical person. Apple iPads and iPhones and Watches are made for "normal", non-technical individuals.
The typical user of Mac OS, on the other hand, is not a normal, non-technical individual. At least, not in Apple's eyes. Apple may be right on this item. I have seen programmers, web designers, and developers carry Apple MacBooks. Software conferences are full of software geeks, and many carry Apple MacBooks. (Only a few carry iPads.)
If this is true, then the audience for Mac OS is different from the audience for iOS. And if that is true, then Apple has an incentive to keep Mac OS separate from iOS. So we may see separate paths (and features) for Mac OS and iOS (and WatchOS).
When Apple releases a version of Mac OS without a cute code name, then we can assume that Apple is getting ready to merge Mac OS into iOS.
Sunday, June 7, 2015
Code quality doesn't matter today
In the 1990s, people cared about code quality. We held code reviews and developed metrics to measure code. We debated the different methods of measuring code: lines of code, cyclomatic complexity, function points, and more. Today, there is little interest in code metrics, or in code quality.
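As a reminder of what those metrics measure, here is a minimal sketch of one of them: cyclomatic complexity, approximated as one plus the number of branch points in a function. It is a simplified illustration, not a replacement for a real metrics tool.

```python
import ast

# Node types counted as branch points in this rough approximation.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.IfExp, ast.ExceptHandler)

def cyclomatic_complexity(source):
    """Return an approximate complexity score for each function in source."""
    scores = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            branches = sum(isinstance(n, BRANCH_NODES) for n in ast.walk(node))
            # 'and'/'or' add one path per extra operand
            branches += sum(len(n.values) - 1 for n in ast.walk(node)
                            if isinstance(n, ast.BoolOp))
            scores[node.name] = 1 + branches
    return scores

print(cyclomatic_complexity("""
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"
"""))   # {'classify': 3}
```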
I have several possible explanations.
Agile methods: Specifically, people believe that agile methods provide high-quality code (and therefore there is no need to measure it). This is possible; most advocates of agile tout the reduction in defects, and many people equate the lack of defects with high quality. But while re-factoring occurs (or should occur) in agile methods, it doesn't guarantee high quality. Without measurements, how do we know?
Managers don't care: More specifically, managers are focused on other aspects of the development process. They care more about the short-term cost, or features, or cloud management.
Managers see little value in code: It is possible that managers think that code is a temporary thing, something that must be constantly re-written. If it has a short expected life, there is little incentive to build quality code.
I have one more idea:
We don't know what makes good code good: In the 1990s and 2000s, we built code in C++, Java, and later, C#. Those languages are designed around object-oriented principles, and we know what makes good code good. We know it so well that we can build tools to measure that code. The concept of "goodness" is well understood.
We've moved to other languages. Today we build systems in Python, Ruby, and JavaScript. These languages are more dynamic than C++, C#, and Java. Goodness in these languages is elusive. What is "good" JavaScript? What designs are good for Ruby? Or Python? Many times, programming concepts are good in a specific context and not-so-good in a different context. Evaluating the goodness of a program requires more than just the code; it requires knowledge of the business problem.
So it is possible that we've advanced our programming languages to the point that we cannot evaluate the quality of our programs, at least temporarily. I have no doubt that code metrics and code quality will return.
Monday, June 1, 2015
Waterfall or Agile -- or both
A lot has been written (and argued) about waterfall methods and agile methods. Each has advocates. Each has detractors. I take a different path: they are two techniques for managing projects, and you may want to use both.
Waterfall, the traditional method of analysis, design, coding, testing, and deployment, makes the promise of a specific deliverable at a specific time (and at a specific cost). Agile, the "young upstart," makes the promise of frequent deliverables that function -- although only with what has been coded, not necessarily everything you may want.
Waterfall and agile operate in different ways and work in different situations. Waterfall works well when you and your team know the technology, the tools, the existing code, and the business rules. Agile works when you and your team are exploring new areas (technology, code, business rules, or a combination). Agile provides the flexibility to change direction quickly, where waterfall locks you in to a plan.
Waterfall does not work when there are unknowns. A new technology, for example. Or a new team looking at an existing code base. Or perhaps significant changes to business rules (where "significant" is smaller than you think). Waterfall's approach of defining everything up front cannot handle the uncertainties, and its schedules are likely to fail.
If your shop has been building web applications and you decide to switch to mobile apps, you have a lot of uncertainties. New technologies, new designs for applications, and changes to existing web services are required. You may be unable to list all of the tasks for the project, much less assign reasonable estimates for resources and time. If your inputs are uncertain, how can the resulting plan be anything but uncertain?
In that situation, it is better to use agile methods to learn the new technologies. Complete some small projects, perhaps for internal use, that use the tools for mobile development. Gain experience with them. Learn the hazards and understand the risks.
When you have experience, use waterfall to plan your projects. With experience behind you, your estimates will be better.
You don't have to use waterfall or agile exclusively. Some projects (perhaps many) require some exploration and research. That surveying is best done with agile methods. Once the knowledge is learned, once the team is familiar with the technology and the code, a waterfall project makes good business sense. (You have to deliver on time, don't you?)
As a manager, you have two tools to plan and manage projects. Use them effectively.