
Cause and Effect: Optimizing Treatment Decisions with Predictive Analytics

By Eric Siegel, Founder of Machine Learning Week, former Columbia professor, bestselling author, Prediction Impact

When machine learning is used to drive clinical and operational decisions by making predictions – for healthcare, marketing, financial risk, and beyond – it’s often also called predictive analytics or predictive AI.

But standard predictive analytics does not directly address the greatest challenge faced by healthcare and marketing: deciding, across large numbers of individuals, whom to treat in a certain way.

Yes, you heard me correctly. Predictive analytics still needs a certain tweak before it truly optimizes organizational activities.

Let’s take a step back. The world is run by organizations, which serve us as individuals by deciding, for each one, the best action to take, i.e., the proper outgoing treatment:

TREATMENTS: Marketing outreach, sales outreach, personalized pricing, political campaign outreach, medication, surgery, etc.

That is, organizations strive to analytically decide whom to investigate, incarcerate, set up on a date, or medicate.

Organizations will be more successful, saving more lives or making more profit—and the world will be a better place—if treatment decisions are driven to maximize the probability of positive outcomes, such as consumer actions or healthcare patient results:

OUTCOMES: Purchase, stay (retained), donate, vote, live/thrive, etc.

In fact, the title of my first book itself includes a list of such outcomes: Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die.

But this book title—and predictive analytics as a field in general—may lead you astray by implying that the best way to improve the probability of these actions (or, for the latter two, lie and die, the probability of averting them) is to predict them. However, predicting an outcome does not directly help an organization drive that outcome. Instead, treatment decisions are optimized when organizations predict something completely different from the outcome or behavior itself:

WHAT TO PREDICT: Whether a certain treatment will result in the outcome.

That is, predict whether the treatment will influence or persuade the individual (in the case of healthcare, you could put it that we wish to predict whether the treatment will “influence the individual’s body”). Predict whether it will cause the desired outcome.

The whole point of the massive efforts taken by our society’s organizations—all the treatments they apply across millions of individuals—is to improve outcomes. To have a positive effect on the world. To influence, in that respect. And the best way to decide on each treatment is to predict whether it will have the desired effect on outcome.

The analytical method to predict influence is uplift modeling (aka persuasion, net lift, or true lift modeling):

Uplift model—A predictive model that predicts the influence on an individual’s behavior that results from applying one treatment over another.

An uplift model predicts, for each individual, “How much more likely is this treatment to generate the desired outcome than the alternative treatment?” This directly informs the choice of treatment for the individual. I think some people refer to this as prescriptive analytics, although I’ve never been clear on the definition of that term (buzzword?).

One somewhat subtle technicality of central interest here is that, rather than predicting the influence of a treatment per se, an uplift model predicts the influence of going with one treatment over an alternative treatment (which could itself be the passive treatment, e.g., “Do not send a brochure”). Such a model outputs the change to outcome probability that results from choosing treatment A over B.
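To make this concrete, here is a minimal sketch of one common way to build such a model, the "two-model" approach: fit one outcome model on individuals who received treatment A and another on those who received the alternative B (ideally assigned at random), then take the difference in predicted outcome probabilities. The arrays X, y, treated, and X_new are hypothetical placeholders, and scikit-learn is assumed; this is an illustrative sketch, not the only way to build an uplift model.

```python
# Minimal two-model uplift sketch (hypothetical data; scikit-learn assumed).
# X: feature matrix, y: outcome (1 = desired outcome),
# treated: 1 = received treatment A, 0 = received alternative treatment B.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def fit_two_model_uplift(X, y, treated):
    # One outcome model per treatment arm.
    model_a = GradientBoostingClassifier().fit(X[treated == 1], y[treated == 1])
    model_b = GradientBoostingClassifier().fit(X[treated == 0], y[treated == 0])
    return model_a, model_b

def predict_uplift(model_a, model_b, X_new):
    # Change in outcome probability from choosing treatment A over B.
    p_a = model_a.predict_proba(X_new)[:, 1]  # P(outcome | treatment A)
    p_b = model_b.predict_proba(X_new)[:, 1]  # P(outcome | treatment B)
    return p_a - p_b                          # positive: A predicted to help

```

Individuals with the largest predicted uplift are the ones for whom choosing treatment A over B is expected to make the biggest difference to the outcome.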

This is a paradigm shift; uplift models are utterly distinct from standard predictive models. In common practice, a predictive model simply predicts the outcome, e.g., whether a customer will buy. Does that help? Not necessarily—who cares whether someone will buy? What you really want to know is whether you can nudge that behavior (outcome) with the treatments at hand. Predicting whether a treatment choice will make a difference directly informs the treatment decision. And predicting that—predicting the influence, or lack thereof, on the outcome—is an entirely different thing from predicting the outcome itself. That’s the purpose of uplift modeling.

In the name of clarity, let’s reframe. What marketers usually call a response model doesn’t simply predict who will buy per se. Rather, more specifically, it predicts, “Will the customer buy if contacted?” It is predicting the result of one treatment (contact) without any consideration for or prediction about any alternative treatment, such as not contacting or contacting with a different marketing creative. This is still just a matter of predicting outcome; it is not an uplift model and it does not predict influence. Therefore, a response model suffers from a sometimes-crippling, common limitation: The predicted outcome itself doesn’t matter so much as whether the marketing treatment should be credited for influencing that outcome.

So, as put by Daniel Porter, one of the three main hands-on quants at Obama for America who actually executed on uplift modeling, response modeling is like shooting fish in a barrel. By targeting marketing outreach to those with a propensity to buy, you may be spending a great deal of your marketing budget to contact “sure things,” customers likely to buy either way. You are overlooking the more subtle yet immensely powerful customer niche that must be identified: the sliver of customers for whom the marketing outreach is effective. One doesn’t compete well with other fishermen or other marketers by only capturing those easy to get.
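To see the "fish in a barrel" problem in code, compare how the two approaches would rank customers for outreach, reusing the hypothetical models from the sketch above (model_a trained on contacted customers, model_b on customers left alone). Ranking by response score surfaces likely buyers, including the sure things; ranking by uplift surfaces the customers the contact is predicted to sway.

```python
# Targeting by response score vs. by uplift (hypothetical models from the
# sketch above; model_a = "if contacted", model_b = "if not contacted").
response_score = model_a.predict_proba(X_new)[:, 1]               # P(buy | contacted)
uplift_score = response_score - model_b.predict_proba(X_new)[:, 1]

budget = 1000                                            # hypothetical outreach budget
top_by_response = np.argsort(-response_score)[:budget]   # includes "sure things"
top_by_uplift = np.argsort(-uplift_score)[:budget]       # persuadable customers
```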

Take a healthcare example: Will my headache go away if I take this pill? An answer of “yes” does not mean the pill worked; maybe my headache would have gone away regardless of treatment. Let’s save the pills for those patients earmarked by an uplift model, which predicts, “Will this pill increase the chance my headache will go away, compared to taking no pill?”

Influencing people, the very purpose of so many of the external actions taken by companies, is freaking elusive. You can’t detect influence, even in retrospect. For example, if I send you a brochure and you buy my product, how do I know you weren’t going to buy it anyway? How do I know I actually influenced you? The answer is, I don’t. You can’t ever be 100% confident in observing any individual act of influence, even after the fact. And yet, in business, politics, and healthcare, you must predict this unobservable thing: influence.

If you’re a quant, here’s a word on how uplift modeling gets around this gotcha and actually works. You have to model over both a treatment data set and a control data set, together, at the same time. That’s a new, off-kilter approach for many long-time practitioners of the art who are accustomed to analyzing just one data set at a time.
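As one hedged illustration of what modeling the two data sets together can look like, here is a single-model variant (sometimes called an S-learner): pool the treatment and control records, include the treatment assignment as a feature, and then score each new individual twice, once with the flag switched on and once with it off. Again, the variable names are hypothetical placeholders and scikit-learn is assumed.

```python
# Single-model sketch over pooled treatment + control data (hypothetical variables).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

X_all = np.column_stack([X, treated])                # features plus treatment indicator
model = GradientBoostingClassifier().fit(X_all, y)   # one model over both data sets

flag_on = np.column_stack([X_new, np.ones(len(X_new))])
flag_off = np.column_stack([X_new, np.zeros(len(X_new))])
uplift = (model.predict_proba(flag_on)[:, 1]
          - model.predict_proba(flag_off)[:, 1])     # predicted influence of treating
```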

The adoption of uplift modeling is rapidly growing. This is the state of the art, folks. So far I have encountered publicly presented success stories of its deployment at Fidelity, HP, Intuit, Obama for America, Schwab, Staples, Subaru, Telenor, and US Bank.

Is uplift modeling for you? Are the additional human expertise, analytical complexity, and data requirements worthwhile? I leave you with the following resources to dig in and learn more.

Where to learn more about uplift modeling:

Eric Siegel, Ph.D. is a leading consultant and former Columbia University professor who helps companies deploy machine learning. He is the founder of the long-running Machine Learning Week conference series, the instructor of the acclaimed online course “Machine Learning Leadership and Practice – End-to-End Mastery,” executive editor of The Machine Learning Times, and a frequent keynote speaker. He wrote the bestselling Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die, which has been used in courses at hundreds of universities, as well as The AI Playbook: Mastering the Rare Art of Machine Learning Deployment. Eric’s interdisciplinary work bridges the stubborn technology/business gap. At Columbia, he won the Distinguished Faculty award when teaching the graduate computer science courses in ML and AI. Later, he served as a business school professor at UVA Darden. Eric also publishes op-eds on analytics and social justice.
