The Peak of Inflated Expectations

[Figure: the Gartner Hype Cycle]

I work in Artificial Intelligence (AI) / Machine Learning (ML) / Deep Learning (DL) (depending on which buzzword is most popular at the moment), so I know enough to comment on this article that was sent to me over the weekend: The A.I. Diet. (Sorry, it’s from the failing NY Times, and it may be paywalled.)

Before I get to the main part of my rant, let me quote from the author:

The results? In the sweets category: Cheesecake was given an A grade, but whole-wheat fig bars were a C-. In fruits: Strawberries were an A+ for me, but grapefruit a C. In legumes: Mixed nuts were an A+, but veggie burgers a C. Needless to say, it didn’t match what I thought I knew about healthy eating.

I’m sorry, but any “algorithm” that tells you that cheesecake is an A is wrong. We may nitpick about the details of lots of things, but no one who thinks for themselves would think cheesecake is good for you.

So what really got my rant going was this statement: “In that way, an algorithm was built without the biases of the scientists.”

Nope, that’s not how it works.

Now, I do make fun of all the people who say that AI facial recognition systems are racially biased. But here, there is an inherent bias that can’t be taken out of the system. Let me tell you a little about how all ML algorithms work.

First: you have to train the algorithm to know what to look for. Generally, you give it many, many examples where you already know the right answer. The very first time, the algorithm will make a very bad guess. You look at what the algorithm guesses, compare it to the right answer, and then adjust the internal parameters of the algorithm so that the error shrinks. Whether it’s an analytic solution, an iterative numerical one, exact minimum-finding, or simple gradient descent, it’s all the same: you’re trying to find a combination of internal parameters that gives you the right answer.
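To make that concrete, here’s a toy version of that loop in Python: a one-parameter model fit by gradient descent on made-up data. This is just a sketch of the idea, not anyone’s production code:

```python
# Toy training examples: (input, known right answer), where the answer is 3*x.
examples = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

w = 0.0             # the internal parameter; the first guess is bad on purpose
learning_rate = 0.05

for epoch in range(200):
    for x, y_true in examples:
        y_guess = w * x                  # the algorithm's guess
        error = y_guess - y_true         # compare to the right answer
        w -= learning_rate * error * x   # nudge the parameter so the error shrinks

print(f"learned w = {w:.3f}")  # converges toward 3.0
```

Everything fancier, from support vector machines to deep networks, is a scaled-up version of that same guess-compare-adjust cycle.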

In my line of work it’s much easier than in the health “sciences”. If I want to train an algorithm to find a Russian Tupolev Tu-160, I show it a bunch of examples of one, and it will learn the patterns so that it can later find them in new images. (Of course, it’s not that simple for me: I’m not allowed to see the examples, so I have to write a system that a cleared operator can use to feed them in, but in the end it’s all the same.)

In my case, we know what we want to find, but in the health case, we still don’t know the right answer. There’s no magic; the AI can’t come up with something novel. It can only learn to recognize the patterns that were reinforced by the team training it. So if they say “this is what healthy people look like”, the AI will learn which inputs map to that answer.
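Here’s a little sketch of what I mean (all the field names and thresholds below are hypothetical, not from the article). Same people, but swap out the team’s definition of “healthy” and the “truth” the algorithm trains against changes with it:

```python
# Hypothetical subjects; the fields and numbers are invented for illustration.
people = [
    {"glucose_spike": 10, "bmi": 31},
    {"glucose_spike": 55, "bmi": 22},
    {"glucose_spike": 25, "bmi": 24},
]

def label_by_glucose(p):          # team A: "healthy" means a small glucose spike
    return p["glucose_spike"] < 30

def label_by_bmi(p):              # team B: "healthy" means BMI under 25
    return p["bmi"] < 25

for labeler in (label_by_glucose, label_by_bmi):
    labels = [labeler(p) for p in people]
    print(labeler.__name__, labels)   # same people, different "right answers"
```

Whatever bias goes into the labels comes straight back out in the predictions.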

But that assumes that the metrics they’re looking at on the back end are the right ones!

Now, this article says they’re looking at glucose response, so that’s good; that’s probably an important one to look at. But it’s still uni-dimensional and doesn’t take into account so much other stuff. For example, is it better to eat foods with a negligible glucose response in little bits all day long, or to eat one big meal that might spike your glucose but then return to baseline while you fast the rest of the day?

I know which one Jason Fung would recommend. I know what the hormonal milieu we evolved under most likely was. But to add that into an ML loss function would require the researcher to actually think about it.
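Here’s what that looks like in code. Both loss functions below are hypothetical sketches (as are the meals and numbers), but notice that the second one only exists because a researcher decided fasting matters and picked a weight for it:

```python
# Two hypothetical ways to score a meal (lower = "better").
# Neither is from the article; all numbers are invented.

def loss_glucose_only(meal):
    # Uni-dimensional: only the size of the spike counts.
    return meal["peak_glucose"] - meal["baseline_glucose"]

def loss_with_fasting(meal, fasting_weight=3.0):
    # The researcher had to decide that fasting matters at all,
    # and then pick how much it matters (fasting_weight).
    spike = meal["peak_glucose"] - meal["baseline_glucose"]
    return spike - fasting_weight * meal["hours_fasted_after"]

grazing      = {"peak_glucose": 105, "baseline_glucose": 90, "hours_fasted_after": 2}
one_big_meal = {"peak_glucose": 140, "baseline_glucose": 90, "hours_fasted_after": 18}

print(loss_glucose_only(grazing), loss_glucose_only(one_big_meal))  # 15 50: grazing "wins"
print(loss_with_fasting(grazing), loss_with_fasting(one_big_meal))  # 9.0 -4.0: the big meal "wins"
```

Change the loss function and the “unbiased” algorithm reverses its verdict. The bias didn’t go away; it just moved into a line of code someone had to write.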

There are good things here: this algorithm will be able to identify for you which foods create a muted glucose response. But that’s as far as it goes.

This is one case where the wisdom of the ancients is probably going to tell you much more than the peak of inflated expectations surrounding the AI world. Sometimes technology doesn’t solve your problems.

After nearly 20 years of work in the AI field - I wrote my first neural network in 2001, in C (not ++), so you youngins who “code in python” are not special! - I’m firmly in the trough of disillusionment. It’s good for what it’s good for, but it’s not going to save us.

Michael Deskevich