How we analyse and test new pricing models

Despite what you might think, working in a tech start-up isn’t all fun and excitement. Sometimes it’s table tennis and beer fridges. 🏓 But most of the time at Cuvva, it’s working hard to create insurance that’s as close to perfect as possible for our customers.

So sometimes, we have to roll up our sleeves and delve into some hefty datasets. Luckily we get pretty excited by this kind of thing. 🤓 And without it, we’d spend a lot of time guessing and making assumptions.

Take price, for instance. 💰 Until recently, if an underwriter asked to change their pricing, we wouldn’t really know what the end result would be for the customer. Would some people be unable to get a quote? Would they have to pay a lot more to get covered?

We need to know these things. So we built a tool that would show us the impact of pricing changes. 🛠

The tool: how it works

Using the tool, we can run old data through new pricing models to see what effect the changes have. It takes around an hour to process some 50,000 policies.

The process involves fetching, processing, and re-quoting existing data, and then exporting the results to our PostgreSQL database.
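As a rough sketch of that pipeline, the loop might look something like this. The function and field names here are invented for illustration; they're not Cuvva's actual code.

```python
# Hypothetical sketch of the fetch / re-quote / export loop.
# Function and field names are invented for illustration.

def requote_policies(fetch_policies, quote_with_new_model, export_row):
    """Re-quote existing policies with the new pricing model and
    export each result (e.g. as a row in a PostgreSQL table)."""
    results = []
    for policy in fetch_policies():
        new_quote = quote_with_new_model(policy)
        row = {
            "policy_id": policy["id"],
            "old_premium": policy["premium"],
            "new_premium": new_quote["premium"],
            "quotable": new_quote["quotable"],
        }
        export_row(row)
        results.append(row)
    return results
```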

Afterwards, we look at our Postgres views. They’re an easy way to read the results and see what, if anything, has changed.

What’s cool is that the entire process is “idempotent”. This means we can stop and start the processing whenever we need to. So, if the tool, which runs on the developer’s machine, loses internet access, it can pick up where it left off. And if we spot an error, we can re-run it only for the affected quotes.
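To illustrate the idea in hypothetical Python (again, names invented): each quote is keyed by its ID, so any work that's already been stored is simply skipped on a re-run. That's what makes stopping, restarting, or re-running specific quotes safe.

```python
# Illustration of the idempotent design: quotes already in the store
# are skipped, so re-running the same input never duplicates work.

def process_quotes(quote_ids, requote, store):
    for quote_id in quote_ids:
        if quote_id in store:  # already processed on an earlier run
            continue
        store[quote_id] = requote(quote_id)
    return store
```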

From this data we can work out the difference in premium, quotability, and total paid by our customers in another view. 📊 Then, we can import loss ratios using existing claims data, and we can use (yet) another view to pre-compute the whole lot in a format the underwriter can run with.

(We use a lot of views in this tool – they present the data in a format that’s really easy to process.)
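As a plain-Python illustration of the kind of summary one of those views computes (the field names are invented for the example):

```python
# A plain-Python version of the kind of summary a view might compute:
# quotability rate and premium totals across re-quoted policies.
# Field names are invented for the example.

def summarise(rows):
    quotable = [r for r in rows if r["new_quotable"]]
    return {
        "quotability": len(quotable) / len(rows),
        "total_old_premium": sum(r["old_premium"] for r in rows),
        "total_new_premium": sum(r["new_premium"] for r in quotable),
    }
```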

All of this gives us the information we need to make informed decisions around price changes. 👍

Testing, testing…

To test our shiny new tool, we ran old quotes with the new model to see how prices would be affected.

We had the claims data from April to June 2018. So we used this data to test the effect the changes would have across the board.

To keep things accurate, all the information had to match the original quote. We made sure things like the driver’s age and licence details were the same, even if that information had changed since the customer first got the quote. Even a tiny difference could mean we’d get the wrong results.
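To see why this matters, take the driver's age: it has to be calculated as of the original quote date, not today. A small helper makes the point (this is illustrative, not Cuvva's code):

```python
from datetime import date

# The driver's age has to be their age on the ORIGINAL quote date,
# not their age today. Illustrative helper, not Cuvva's actual code.

def age_at(born: date, on: date) -> int:
    """Age in whole years on a given date."""
    return on.year - born.year - ((on.month, on.day) < (born.month, born.day))
```

A driver born on 1 July 1990 was 27 on 30 June 2018 but 28 a day later; feeding today's age into a re-quote could silently shift the price.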

The results

The test gave us results roughly similar to the old model’s. But as it turned out, there were slight differences in quotability.

We rolled the new model out to 10% of our customers using our config flag service. But we found some customers couldn’t extend their policies because of restrictions in the new model.

So we rolled the test back and made some changes so that everyone who used Cuvva with the old model could still use it with the new one. 🚗 Once we’d made those changes, we rolled it out again to another random 10% of our customers. And so far, so good!
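Percentage rollouts like this are commonly done by hashing each customer ID into a fixed bucket. Here's a generic sketch of that technique; it illustrates the general idea, not Cuvva's actual config flag service.

```python
import hashlib

# Generic sketch of a percentage rollout: hash each customer ID into a
# stable 0-99 bucket and enable the new model for buckets below the
# rollout percentage. Illustrates the technique, not Cuvva's service.

def in_rollout(customer_id: str, percent: int, salt: str = "new-pricing") -> bool:
    digest = hashlib.sha256((salt + customer_id).encode()).hexdigest()
    return int(digest, 16) % 100 < percent
```

Because the hash is deterministic, each customer always lands in the same bucket, so their experience stays consistent across sessions, and raising the percentage only adds customers rather than reshuffling everyone.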

Although we had a few teething problems, being able to test the model before we rolled it out gave us confidence that our customers weren’t in for complete disaster. It gave us the peace of mind we needed to make the change for a small percentage of our customers. 💭

And the tool will keep being useful. It’s all about finding data-driven ways to make things better for our customers.

If you want to help us change the insurance industry, head over to our jobs page. We’re hiring! 🚀