
Wednesday, June 1, 2022

Ideas for Machine Learning

A lot of what is called "AI" is less "Artificial Intelligence" and more "Machine Learning". The differences between the two are technical and rather boring, so few people talk about them. From a marketing perspective, "Artificial Intelligence" sounds better, so more people use that term.

But whichever term you use, I think we can agree that the field of "computers learning to think" has yielded dismal results. Computers are fast and literal, good at numeric computation and even good at running video games, but thinking has so far eluded them.

It seems to me that our approach to Machine Learning is not the correct one. We've been at it for decades, and our best systems suffer from fragility, providing wildly different answers for similar inputs.

That approach (from what I can tell) is to build a Machine Learning system, train it on a large set of inputs, and then have it match new inputs against that training set. The approach matches aspects of the new input to similar aspects of the training examples.

I have two ideas for Machine Learning, although I suspect that they will be rejected by the community.

The first idea is to change the basic mechanism of Machine Learning. Instead of matching similar inputs, design systems which minimize errors. That is, balance the identification of objects with the identification of errors.

This is a more complex approach, as it requires some basic knowledge of an object (such as a duck or a STOP sign) and then it requires analyzing aspects and classifying them as "matching", "close match", "loose match", or "not a match". I can already hear the howls of practitioners asked to switch their mechanisms to something more complex.
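To make the idea concrete, here is a minimal sketch of what "balancing identification with error classification" might look like. The prototype features, the error measure, and the thresholds are all hypothetical choices of mine, not anything prescribed by the post; the point is only that the system reports a graded error level rather than a single nearest match.

```python
# Hypothetical sketch: score a candidate against a known prototype,
# then bucket the error into the four match tiers named in the post.
# Feature vectors and thresholds are illustrative, not from any
# real ML system.

def match_tier(candidate, prototype, thresholds=(0.05, 0.15, 0.30)):
    """Return a match tier based on mean absolute feature error."""
    errors = [abs(c - p) for c, p in zip(candidate, prototype)]
    score = sum(errors) / len(errors)   # mean error across features
    close, loose, limit = thresholds
    if score <= close:
        return "matching"
    if score <= loose:
        return "close match"
    if score <= limit:
        return "loose match"
    return "not a match"

# Hypothetical feature vectors (e.g., normalized shape descriptors)
duck_prototype = [0.9, 0.4, 0.7]
print(match_tier([0.88, 0.42, 0.69], duck_prototype))  # "matching"
print(match_tier([0.1, 0.9, 0.2], duck_prototype))     # "not a match"
```

The extra work over plain nearest-match is small here, but a real system would need these tiers for every aspect of every object it knows.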

But as loud as those complaints may be, they will be a gentle whisper compared to the reaction to my second idea: switch from 2-D photographs to stereoscopic photographs.

Stereoscopic photographs are pairs of photographs of an object, taken by two cameras some distance apart. By themselves they are simple photographs. Together, they allow for the calculation of depth of objects. (Anyone who has used an old "Viewmaster" to look at a disk of transparencies has seen the effect.)

A stereoscopic photograph should allow for better identification of objects, because one can tell that items in the photograph are in the same plane or different planes. Items in different planes are probably different objects. Items in the same plane may be the same object, or may be two objects in close proximity. It's not perfect, but it is information.
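The depth information in a stereo pair comes from disparity: a point seen by both cameras lands at slightly different horizontal positions in the two images, and for parallel cameras its depth is focal length times baseline divided by that disparity. The camera parameters below are illustrative values I chose, not anything from the post.

```python
# Sketch of why stereo pairs carry depth: for two parallel cameras,
# depth = focal_length * baseline / disparity. Parameters here
# (800 px focal length, 10 cm baseline) are assumed for illustration.

def depth_from_disparity(disparity_px, focal_px=800.0, baseline_m=0.1):
    """Depth in meters of a point seen with the given pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Two image regions with similar disparity lie in roughly the same
# plane; very different disparities imply different planes, and
# probably different objects.
near = depth_from_disparity(40.0)   # 2.0 meters
far = depth_from_disparity(8.0)     # 10.0 meters
print(near, far)
```

Grouping pixels by similar disparity is exactly the "same plane or different planes" signal described above.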

The objections are, of course, that the entire corpus of inputs must be rebuilt. All of the 2-D photographs used to train ML systems are now invalid. Worse, a new collection of stereoscopic photographs must be taken (not an easy task), stored, classified, and vetted before they can be used.

I recognize the objections to my ideas. I understand that they entail a lot of work.

But I have to ask: is the current method getting us what we want? Because if it isn't, then we need to do something else.

Wednesday, December 14, 2016

Steps to AI

The phrase "Artificial Intelligence" (AI) has been used to describe computer programs that can perform sophisticated, autonomous operations, and it has been used for decades. (One wag puts it as "artificial intelligence is twenty years away... always".)

Along with AI we have the term "Machine Learning" (ML). Are they different? Yes, but the popular usages make no distinction. And for this post, I will consider them the same.

Use of the term waxes and wanes. The AI term was popular in the 1980s and it is popular now. One difference between the 1980s and now: we may have enough computing power to actually pull it off.

Should anyone jump into AI? My guess is no. AI has preconditions: things you should be doing before you make a serious commitment to it.

First, you need a significant amount of computing power. Second, you need a significant amount of human intelligence. With AI and ML, you are teaching the computer to make decisions. Anyone who has programmed a computer can tell you that this is not trivial.

It strikes me that the necessary elements for AI are very similar to the necessary elements for analytics. Analytics is almost the same as AI - analyzing large quantities of data - except it uses humans to interpret the data, not computers. Analytics is the predecessor to AI. If you're successful at analytics, then you are ready to move on to AI. If you haven't succeeded at analytics (or even attempted it), you're not ready for AI.

Of course, one cannot simply jump into analytics and expect to be successful. Analytics has its own prerequisites. Analytics needs data, the tools to analyze the data and render it for humans, and smart humans to interpret the data. If you don't have the data, the tools, and the clever humans, you're not ready for analytics.

But we're not done with levels of prerequisites! The data for analytics (and eventually AI) has its own set of preconditions. You have to collect the data, store the data, and be able to retrieve the data. You have to understand the data, know its origin (including the origin date and time), and know its expiration date (if it has one). You have to understand the quality of your data.
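The metadata requirements in that last step can be sketched as a simple record. The field names and the example values are hypothetical, not a standard schema; the sketch only captures the items the paragraph lists: origin, origin date and time, expiration, and quality.

```python
# Hypothetical per-dataset metadata record covering the items listed
# above: origin, origin date/time, expiration (if any), and a note
# on data quality. Names and example values are illustrative.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class DatasetRecord:
    name: str
    origin: str                     # where the data came from
    collected_at: datetime          # origin date and time
    expires_at: Optional[datetime]  # None if it never expires
    quality_note: str               # known limits of the data

    def is_current(self, now: datetime) -> bool:
        """Is the data still valid at the given moment?"""
        return self.expires_at is None or now < self.expires_at

rec = DatasetRecord(
    name="store_sales_2016",
    origin="point-of-sale export",
    collected_at=datetime(2016, 12, 1),
    expires_at=datetime(2017, 12, 1),
    quality_note="missing returns from two stores",
)
print(rec.is_current(datetime(2016, 12, 14)))  # True
```

If you can't fill in a record like this for each dataset, you don't yet understand your data well enough for analytics, let alone AI.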

The steps to artificial intelligence are through data collection, metadata, and analytics. Each step has to be completed before you can advance to the next level. (Much like the Capability Maturity Model.) Don't make the mistake of starting a project without the proper experience in place.