Treating Risk Like Cancer

An embarrassing thing happened to me in Amsterdam.

I’d just finished dinner with a new partner at a nice restaurant. Ok – more expensive than nice but you know what I mean. I grade food in Amsterdam on a curve. We were getting to know each other, talking about where we came from and where we’re going. After the dessert the waiter brought the check. We split the bill: 167.35 euros for me, 167.35 euros for him. His card worked. Mine didn’t. WTF!

Bear in mind there was wine with each course… so I wasn’t at my sharpest when the bill arrived. I checked my balance on my bank’s mobile app. There was plenty of money in the account. Whatever. Not one of those euros was helping me.

I gave the waiter my Amex. It went through because…it always goes through.

It’s probably happened to you, too. A risk system prevents you from making a purchase. You go from enjoying yourself to rapid problem-solving mode. Not fun.

One of the biggest complaints I hear from our new partners is “My old biller was scrubbing too hard!” In other words, the biller was stopping good transactions and preventing sales. It can happen. It was the reason my card wasn’t accepted at the restaurant in Amsterdam.

This summer Visa changed its rules. If “scrubbing too hard” to stay under a 2% limit used to be annoying, scrubbing to stay under 1% can kill your business. How does a biller know which transactions to accept and which to block?

The early approach involved looking for patterns in data. Specialists would look at their data and come up with ideas to identify risk. “It looks like people in France chargeback a lot.” Programmers would query databases to find patterns. “Yes, it’s true. People in France chargeback more than average.” Then the programmers would write algorithms to identify and block those transactions.

Large billers also have risk analysts who manually review transactions looking for suspicious signs. Perhaps they could see that the same IP had been used to make 10 transactions with different cards in a short period of time. Then they could check whether those users had opened the confirmation email with the login data. If the emails had not been opened, the risk analyst could cancel those transactions.
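That kind of analyst rule can be sketched in a few lines. This is a hypothetical illustration — the field names, threshold, and data are made up, not anyone’s production logic:

```python
from collections import defaultdict

# Made-up transaction records for illustration only.
transactions = [
    {"id": 1, "ip": "203.0.113.7", "card": "4111...0001", "email_opened": False},
    {"id": 2, "ip": "203.0.113.7", "card": "4111...0002", "email_opened": False},
    {"id": 3, "ip": "203.0.113.7", "card": "4111...0003", "email_opened": True},
    {"id": 4, "ip": "198.51.100.4", "card": "4111...0004", "email_opened": True},
]

def flag_for_review(transactions, max_cards_per_ip=2):
    """Flag transactions where one IP used many different cards
    and the confirmation email was never opened."""
    cards_by_ip = defaultdict(set)
    for t in transactions:
        cards_by_ip[t["ip"]].add(t["card"])
    return [
        t["id"]
        for t in transactions
        if len(cards_by_ip[t["ip"]]) > max_cards_per_ip and not t["email_opened"]
    ]

flagged = flag_for_review(transactions)  # IDs 1 and 2: shared IP, email never opened
```

Note what the rule misses: transaction 3 used the same suspicious IP but opened its email, so it passes. Hand-written rules like this are exactly the “old way” the rest of this piece is about.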

The goal of a Risk Department is this: “Find the smallest group with the highest percentage of bad guys.” That may not make immediate sense. Risk wants to block as few transactions as possible. Ideally Risk finds all of the risky transactions in less than 1% of the total. Then they would not be blocking any good, non-risky transactions. Ideal. No one has reached that ideal but the best Risk teams are moving closer towards it each day.

One of our partners at Vendo comes from a long line of innovative doctors. His great-grandfather invented a dye that surgeons use to identify cancerous cells during an operation. It’s called “Terry’s Polychrome Methylene Blue.” Before this dye, doctors would start cutting and they would cut out too much healthy tissue…just to be sure they had removed all of the cancer. Once they applied the dye, however, the cancer cells would identify themselves by changing color. The surgeon could make sure that he only cut out the cancer leaving as much of the healthy body as possible. That’s what risk is trying to do. Only cut out the cancer.

A false positive is identifying a good transaction as risky and either blocking it or refunding it. You want to do that as little as possible. That’s me in Amsterdam, not being able to buy with my regular card and switching to Amex. That’s the surgeon before the dye. That’s the biller doing risk the old way in a world that has changed completely.

A friend of mine died of cancer a few years ago. Her doctor told me that we don’t yet understand the disease. He said, “Once we do then we will be able to write down the cure on a single sheet of paper.” Today we have lots of treatments for risk. Many different approaches. But we don’t really understand it well enough to write the solution on one sheet of paper.

Or do we?

Perhaps we do have a way of managing it that is as inexplicable and difficult to understand as the thing itself. A large insurance company recently spent tens of millions of dollars, hundreds of thousands of man-hours, and no small amount of computing power to find a better way of evaluating medical risks and setting prices for their customers. A machine learning technique produced 20% better results than the next best approach.

In the end they went with the second best approach. Why? Because they wanted to be able to understand their model and they couldn’t understand what the machine was doing. It used a kind of alien intelligence. The humans couldn’t figure it out. So they destroyed the machine they feared. In the process they turned their backs on a 20% increase that would have made them the market leader.

How does Artificial Intelligence become intelligent? How does Machine Learning learn?

Just like a child. It senses its environment and tries to get what it wants. A baby wants food. It cries. It gets food. It learns that crying brings food.

In contrast, AI doesn’t want anything naturally. It has to be told what to want. You could think of this like instilling values in a child. We teach kids the Golden Rule, “Do unto others as you would have them do unto you.”

We tell the Risk AI that it should maximize revenue within constraints. Low reversals (refunds, chargebacks, stolen card alerts, etc.) and high throughput of good transactions. It learns by trying different approaches. When it finds one that works it does more of it.
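That constrained objective — maximize throughput while keeping reversals under the limit — can be sketched as a threshold search over risk scores: pick the most permissive cutoff whose approved transactions still project under the limit. The function and the scored data below are hypothetical, a minimal sketch rather than anything a real system would ship:

```python
def pick_threshold(scored, limit=0.01):
    """Choose the most permissive risk-score cutoff that keeps the
    projected chargeback ratio of *approved* transactions under `limit`.

    `scored` is a list of (risk_score, is_chargeback) pairs, where
    is_chargeback is the known outcome from historical data.
    """
    # Try cutoffs from most to least permissive (approve everything first).
    candidates = sorted({s for s, _ in scored} | {float("inf")}, reverse=True)
    for cutoff in candidates:
        approved = [cb for s, cb in scored if s < cutoff]
        if approved and sum(approved) / len(approved) <= limit:
            return cutoff
    return 0.0  # No cutoff works: block everything.

# Made-up historical scores: the two chargebacks happen to score highest.
scored = [(0.95, True), (0.9, True), (0.4, False),
          (0.3, False), (0.2, False), (0.1, False)]
cutoff = pick_threshold(scored)  # blocks only the two chargebacks
```

The real objective is messier — scores shift daily and blocked transactions never reveal their outcome — but the shape of the problem is this: the smallest blocked group containing the highest share of bad guys.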

What are some of the ways we trained the Risk AI to perform risk tasks?

We started with linear regression. This one is familiar to anyone who has sold their home. A linear regression model compares your house with recent homes that have been sold. It gives you the value of your house based on its features. If your house has three bedrooms, was built less than ten years ago and you have recently renovated your kitchen then your house would be worth X. Improving your landscaping would increase the price of your house by $20,000. If it only costs $10,000 you would do it. If you add a fourth bedroom it would add $30,000 to the value but the cost would be $50,000. Linear regression tells you not to do it.
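The house example can be reproduced with ordinary least squares. The features, prices, and dollar amounts below are made up to match the story:

```python
import numpy as np

# Hypothetical training data: [bedrooms, renovated_kitchen (0/1), landscaping (0/1)].
# Prices follow a made-up rule: $100k base + $30k per bedroom
# + $25k for a renovated kitchen + $20k for good landscaping.
X = np.array([
    [3, 1, 0],
    [3, 0, 1],
    [4, 1, 1],
    [2, 0, 0],
    [3, 1, 1],
], dtype=float)
y = np.array([215_000, 210_000, 265_000, 160_000, 235_000], dtype=float)

# Fit price = intercept + w · features with ordinary least squares.
A = np.hstack([np.ones((len(X), 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
intercept, per_bedroom, kitchen, landscaping = coef

# The recovered coefficients answer the renovation questions directly:
# landscaping adds ~$20k (worth doing at a $10k cost), a fourth
# bedroom adds ~$30k (not worth doing at a $50k cost).
```

The appeal is exactly what the model hands back: one legible number per feature.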

The primary advantage of the linear regression model is that it is understandable. However, the results weren’t that good when we tried the algorithm on past data. There were too many clean transactions that were seen as risky. When we used linear regression on 18 months of transactions it found 50% of risk in 30% of transactions. That means that if you had a chargeback ratio of 1.4% (over the limit) and wanted to be at 0.7% (comfortably under the limit) then linear regression would cut your sales by 30%. Do you have 100 sales a day? With this approach you would be left with only 70 sales a day. No, that wasn’t going to work. The results on historical data were so bad we never even tested it on live transactions. We had to keep looking for smarter solutions.

We tried Gradient Boosting Machines. Here’s Wikipedia on gradient boosting: “Gradient boosting is a machine learning technique for regression and classification problems, which produces a prediction model in the form of an ensemble of weak prediction models, typically decision trees. It builds the model in a stage-wise fashion like other boosting methods do, and it generalizes them by allowing optimization of an arbitrary differentiable loss function.”

Sounds good – and complicated – (it is!) but it still didn’t produce the results we wanted.
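The stage-wise idea in that definition can be shown in miniature: each round fits a weak learner — here a one-split regression “stump” — to the current residuals, then adds a shrunken copy of it to the ensemble. A toy sketch on a synthetic target, nothing like a production risk model:

```python
import numpy as np

# Toy regression target: learn y = x^2 on [0, 1] with boosted stumps.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = x ** 2

def fit_stump(x, residual):
    """Weak learner: one split on x, predicting the mean residual
    on each side (a depth-1 regression tree)."""
    best = None
    for split in np.linspace(0.05, 0.95, 19):
        left, right = residual[x < split], residual[x >= split]
        if len(left) == 0 or len(right) == 0:
            continue
        pred = np.where(x < split, left.mean(), right.mean())
        err = ((residual - pred) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, split, left.mean(), right.mean())
    _, split, lv, rv = best
    return lambda z: np.where(z < split, lv, rv)

# Stage-wise fit: each stump targets the residual (the gradient of
# squared loss), shrunk by a learning rate before being added.
learning_rate = 0.5
pred = np.zeros_like(y)
for _ in range(50):
    stump = fit_stump(x, y - pred)
    pred = pred + learning_rate * stump(x)

mse = ((y - pred) ** 2).mean()
```

No single stump is any good; the ensemble of fifty is. That is the whole trick: many weak models, each correcting what the previous ones missed.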

Next we tried random forest. This also uses a collection of decision trees. You’ve seen decision trees before. They have goofy ones in the back of every issue of Wired Magazine. Your customer support people use them to decide when to give a refund or escalate. Here’s Wikipedia’s definition: “A decision tree is a decision support tool that uses a tree-like graph or model of decisions and their possible consequences, including chance event outcomes, resource costs, and utility. It is one way to display an algorithm.”

The “random” part is designed to avoid overfitting, that is, building an algorithm that works really well on past data but isn’t “street smart.” We want a system that is constantly learning, and random forest looks at the results of collections of different decision trees to be more flexible in dealing with the changing reality of risk.

A decision tree relies on patterns that a human can spot. Having large numbers of decision trees that are built by the machine enables the AI to identify patterns that no human could ever see. This is the approach to AI we use today. However, it competes with other approaches and will certainly be replaced with new, improved AI driven solutions in the future. It’s a never ending process. We’ve been investing deeply in our Risk AI for over three years and we’re still learning a lot. It’s a very long learning curve.

It is very costly to build a system that goes beyond human intelligence. There are three upfront costs. You have to gather large amounts of relevant data. You have to build teams that can work with it. You have to create tools and access tremendous amounts of computing power. All of those costs can be understood upfront, before starting the project. However, there is a fourth cost that is hidden. It is the cost of ignorance, of giving up control.

But how much conscious control do we exercise generally? Our brains perform a massive number of unconscious calculations each day. When we are driving a car we look at oncoming traffic and decide whether to enter the lane. We measure the speed of oncoming cars, we estimate our car’s ability to accelerate, etc. We do all of this unconsciously. A self-driving car also does millions of calculations before deciding to enter traffic. We can’t fully explain the information we are processing… neither can the AI driving the self-driving car.

No one fully understands how AI takes each decision. We can’t understand it because it is beyond human understanding. We design it, we feed it data and we measure the results it produces. What happens inside the servers where the AI lives is a black box, literally and figuratively.

It’s nerve-wracking. We would much rather work with a system that we can understand fully. Other billers have simpler systems that they can understand. However, those systems produce inferior results. In today’s world of tighter risk restrictions we cannot afford the comfort of old ways.

Google has gone through a similar transition. They used to rely on algorithms that they could understand. In recent years they switched to AI. Why? Search results were 15% better. The choice was clear. Switch to AI or no longer be the king of search, dethroned by an AI upstart.

Why do we feel comfortable sharing our hard-won intellectual property? Because there’s little risk in sharing. Billers always keep their risk rules close to their chest. Have you wondered why? Because they don’t want fraudsters figuring out their rules and going around them to defraud clients.

Our head of analytics is French. He lives in Barcelona. Recently he had to make a payment for his French mobile phone account. He tried from Barcelona with a French credit card and was blocked. He used a proxy so that he would appear to be in France, re-attempted the transaction, and it was successful. Clearly the risk algorithm used by his French mobile carrier checked for card/location mismatch but not for proxy use. That is exactly the kind of thing that billers don’t want you to know.

An AI doesn’t have fixed rules so we’re happy to talk about it. We used to have those rules. Back then we kept our mouths shut about what we were doing, for obvious reasons. Fraudsters focus their energies on systems they can reverse engineer. That’s only possible with simple, understandable risk systems. The best way for our industry to advance is with cutting edge treatments for maximum health.
