Agentic AI: The Next Wave in Insurtech?

We’re witnessing the next step in the evolution of AI — it’s Agentic AI, and it’s transforming insurtech by automatically adjusting premiums, identifying emerging risks, and making underwriting decisions, all with minimal human intervention. Sapiens Marketing Director Mark Sidlauskas sits down with Sapiens Decision CTO Denzil Wasson to break down the buzz behind Agentic AI, and how it’s reshaping insurance operations in our latest podcast.

 

Subscribe to the Sapiens Insurance 360 Podcast

Mark Sidlauskas: Hi everyone! Welcome to the Sapiens Insurance 360 podcast. I’m your host, Mark Sidlauskas, marketing director at Sapiens. I’m glad that you’re out there listening; this is where we discuss the latest news, trends, and issues from across the insurance solutions and technology spectrum. So let’s get started. AI is rapidly becoming part of our daily lives, and businesses are embracing AI to transform processes and drive new levels of automation and productivity. We’re seeing use cases across the industry in risk assessment, underwriting, claims processing, and fraud prevention, just to name a few. Now there’s a lot of buzz about Agentic AI as the next game changer. With Agentic AI, systems are designed to act autonomously with minimal human intervention and work towards a goal, often interacting with other agents. Imagine AI agents that continuously gather and analyze data from multiple sources like satellite imagery, IoT sensors, public records, and social media to provide real-time scoring. They can automatically adjust premiums, identify emerging risks, and make underwriting decisions for standard policies without human oversight. To help us explore Agentic AI is Denzil Wasson, CTO of Sapiens Decision. Denzil is responsible for Sapiens Decision’s technical strategy and delivery, and brings over 30 years of diverse technology, architecture, and implementation experience to ensure customer success. Denzil, welcome to the program!

Denzil Wasson: Thanks, Mark. It’s great to be here!

Mark Sidlauskas: So let’s get started with a bit of history, and perhaps you can level-set us. Just a few years ago, decision management was its own category, and we were just beginning to incorporate AI into our products with Gen AI and machine learning models. That has since converged with decision management into a new AI decisioning category, one that promises enormous value. So how did we get from the innovative way of managing rules by extracting decision logic from the underlying technology? And how did that evolve to Agentic AI?

Denzil Wasson: Yeah. So I think that history is important as to where we’ve come from. For us, it’s a vindication of our vision. We’ve always said that business logic should be treated as a first-class asset, and it really has a faster lifecycle than the underlying systems that implement it. Traditionally, business logic was pushed into code, and we’ve recognized that really doesn’t work for these highly regulated environments where there’s a lot of contextual difference and potentially fairly rapid change. So first they imagined the business rule engines, which oftentimes achieved just the separation of code, weren’t really owned by the business, and often resulted in an anarchy of rules. Those became decision management, where the business logic really lived in the hands of the business and the business analysts, was involved through its lifecycle, and was published for consumption by systems. So that’s the traditional business decision management platform that we saw a few years ago. Then came the advent of AI and probabilistic decision making. Prior to that, all the decision making was declarative: people would essentially do the analysis manually, come up with new probabilities, and then implement those into declarative logic. With the advent of machine learning, now we’re able to combine those things. We’re able to get the probabilistic items directly at runtime, from the data sets, from the transaction flow. And that’s great, but what do you do with it? If you scored 70, what does that mean? What should I do? This is where we see the decision management world and the decision intelligence platforms, or decision AI, emerging. Because essentially, you need to combine action logic, declarative logic, with the probabilistic logic to actually achieve an outcome.
And so the emergence of this new category called decision intelligence platforms is the combination of traditional business logic with AI, both machine learning as well as the new emerging technology of Agentic AI.

Mark Sidlauskas: So if I’m a business analyst and I get a report or some sort of output from an AI model that says the probability is 0.7, what do I do with that 0.7 from the AI system? I apply my rules to it, and the decision says, well, if it’s between 0.6 and 0.8, you do X; if it’s in another interval, you do something different. Is that the combination?

Denzil Wasson: Exactly. That’s exactly the combination. We also know that AI, in these early days, is still having challenges around hallucination, and there are still trust concerns around AI. So AI definitely needs to be combined with other logic to produce an actual action, as well as with guardrails, particularly in these highly legislated use cases and industries.
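The combination the two describe here, a probabilistic score from a model routed through declarative rules, can be sketched in a few lines. This is a minimal illustration, not Sapiens code; the thresholds and action names are assumptions for the example.

```python
# Illustrative sketch: a model's probability score is mapped to a concrete
# action by declarative threshold rules, as described in the conversation.
# The 0.6/0.8 band and the action labels are assumed for illustration.

def decide(risk_score: float) -> str:
    """Map a probabilistic risk score to an underwriting action."""
    if risk_score < 0.6:
        return "auto-approve"          # low risk: straight-through processing
    elif risk_score <= 0.8:
        return "refer-to-underwriter"  # borderline band: human in the loop
    else:
        return "decline"               # high risk: guardrail rejects outright

print(decide(0.7))  # a 0.7 score falls in the referral band
```

The model supplies the probability; the declarative layer decides what that probability means operationally, which is where the guardrails live.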

Mark Sidlauskas: So that brings us to AI agents themselves, right? They are tasked with doing these types of things.

Denzil Wasson: Right. So the agents are a significant watershed moment in terms of moving us to the next level. We were an early adopter of generative AI in our decision intelligence platform, in that we would essentially help the decision analyst, the person responsible for building logic and pushing it through its lifecycle. We would help them with generative AI: take multimodal sources of natural language documents, from their own company policies, to something from the legislators or administrators, and even things from the legacy source code. Throw it all into what we called a model AI, which was our gen AI capability, and it would say, based on all this information, here’s what I think your logic actually looks like and what you need to do. Then essentially that would become a first-class citizen and they’d be able to push it through its lifecycle. We also added the ability for machine learning to be in that mix, providing a natural way to combine machine learning components together with the declarative logic. You’re in a situation then where you can say: I’ve applied my declarative logic to decide, based on the context, which machine learning model should I use? Should I even use a machine learning model? Can I just reject this outright and not incur the compute expense of a machine learning model? And then take the output from a machine learning model and actually reach an actionable conclusion. That’s where we were pre-Agentic. So now we’re in a situation where previously we assisted the analyst and helped the analyst, whereas now, with Agentic, we can actually do things on behalf of the analyst. Previously, the analyst would take our suggestions, decide whether to implement them, and take that decision logic through its lifecycle. Now we can make the suggestions and say: would you like us to apply that? Would you like us to do an impact analysis?
Would you like us to take this through the lifecycle? Would you like me to generate all the different tests? Essentially, we’re able to do things. And that’s the difference between agentic and generative: agentic does things that are actually useful. So in Decision, within the tooling we’re introducing agents to essentially do a lot of the commodity work for the human, and the human becomes more of a governing body, a human in the loop. Then at runtime, when these decisions are actually employed, instead of just producing an outcome that says, oh yes, you can go ahead and run the policy, an agent can actually be dispatched to go and run the policy. So the agents are actually doing things for us. Now we’re exposing agents within our tooling, but the runtime platform will also comprise agents that you can call and say, here’s the problem I’m trying to solve, and it will say, oh, this is the packet of logic that you should be executing to solve that problem. Whereas today, we have to know exactly what my use case is and what logic I need. So the future world becomes a lot more dynamic, based on a problem statement that I can define against an ecosystem of agents.
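The design-time flow described above, agents running the commodity steps while a human governs, can be sketched as a simple loop with an approval callback. The step names, function name, and the per-step versus end-of-run granularity switch are all assumptions for illustration, not an actual Sapiens Decision API.

```python
# Illustrative sketch of an agentic lifecycle with a human in the loop:
# agents perform each commodity step; a human approval callback governs
# either every step or only the final result, per the chosen granularity.

def run_agentic_lifecycle(steps, approve, ask_every_step=True):
    """Run lifecycle steps, pausing for human approval at the chosen granularity."""
    results = []
    for step in steps:
        results.append(f"{step}: done")        # agent performs the step
        if ask_every_step and not approve(step):
            return results, "stopped"          # human rejected this step
    if not ask_every_step and not approve("final review"):
        return results, "stopped"              # single approval of the whole run
    return results, "deployed"

steps = ["propose model", "generate tests", "impact analysis", "pre-deployment"]
results, status = run_agentic_lifecycle(steps, approve=lambda s: True)
print(status)  # -> deployed
```

Swapping the `approve` callback for a real review UI, or setting `ask_every_step=False`, models the "ask me at every step" versus "show me what you did" governance choices discussed later in the episode.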

Mark Sidlauskas: So can you walk us through a typical problem that someone might encounter? Say the problem statement of underwriting a policy in a particular state. What would the business analyst be doing, and what choices would they be given?

Denzil Wasson: So today the business analyst would have to research and ask, what are the rules for this particular state? And they would have to figure out how to model those rules. With gen AI, they would just supply the rules and it would propose a model. With Agentic AI, they would say: listen, I would like you to propose a model that takes all of these items into account. Then I would like you to take that model and generate test cases, and here are some of my criteria around test cases. I’d like you to push that all the way to pre-deployment, show me all the results of what you’ve got, and then I’ll make the decision as to whether you deploy it. So essentially, the human is just defining goals, constraints, and KPIs for the task. The agents take care of it and present the results to the user, for the user to say, yes, I’m good with that, or no, I’m not good with that. The user can also decide the granularity of governance. They can say, ask me at every step, or they can say, do the whole thing and then show me what you did. So it’s a completely variable environment, and it’s really up to the user how involved they need to be in the loop. Obviously, there will be points in the lifecycle where a human has to review, given where AI is today. And then what’s really exciting, not only from the design-time point of view but from the runtime point of view, is that today we might have KPIs being monitored, and eventually the monitoring of a KPI results in a change coming through the lifecycle from a requirements perspective. That can take weeks, could take months, to happen. With the agents, not only are they listening and recognizing that a KPI has moved, they can actually do something with it.
And so that makes this new world a lot more exciting in terms of being able to compress those cycles, because remember, originally our goal was to extract business logic as a separate asset to accelerate its lifecycle. Now we’re looking at doing that even faster with automation from agents.

Mark Sidlauskas: So if I’m, let’s say, in the mortgage industry and interest rates drop, or some change happens in the market, I could tell my agent: change the loan-to-value ratio so I can extract more from the market, or less, depending on my risk profile, and it would execute that immediately.

Denzil Wasson: Yeah, I think it executes it immediately. What’s really interesting is that change could potentially ripple through to other agents. Another agent could be going through your servicing book and saying, we should make an offer to all of these customers, because you changed that. The causality, how things happen, is going to be a lot more exciting in terms of optimizing. Instead of us saying, oh, because of this, we should think about that, we have agents that will essentially be proposing to us: you should do these things, do you want to do them? I can make the change for you, would you like me to? And then we’re in a world where we can feed back and say, I kind of like what you’re suggesting, but maybe tweak this and that and then present that use case to me again. The agent does it, and then we say, yes, let’s do that. And it goes away and does it.

Mark Sidlauskas: So I would have the appropriate guardrails in place to make sure that if I don’t like it, or if it’s going to cause me any damage, any business risk, I can shut it down or guide it to the right decision.

Denzil Wasson: Yeah, I think that’s always important. At the end of the day, the agents in today’s world are using tools, and those tools are some implementation that is doing something. When to use a tool, which agent to use when, and what the tools should do are all going to be governed by business logic in some way. I may look at the servicing book and say, let’s make an offer to all these people. But someone needs to say, well, let’s not do that for mortgages that we’re busy foreclosing on, as an example. So you have to have some kind of constraints around the logic.
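The foreclosure example above is a declarative guardrail: before an agent acts on a book of business, a constraint filters out cases it must not touch. Here is a minimal sketch; the field names and sample data are invented for illustration.

```python
# Illustrative guardrail: a declarative constraint screens an agent's
# proposed action (make an offer) against accounts it must not touch,
# mirroring the "don't offer to foreclosing mortgages" example above.

def eligible_for_offer(account: dict) -> bool:
    """Declarative guardrail: block offers to accounts in foreclosure."""
    return account["status"] != "foreclosure"

servicing_book = [
    {"id": 1, "status": "current"},
    {"id": 2, "status": "foreclosure"},
    {"id": 3, "status": "current"},
]

# The agent proposes offers only for accounts that pass the guardrail.
offers = [a["id"] for a in servicing_book if eligible_for_offer(a)]
print(offers)  # -> [1, 3]
```

The point is that the constraint is ordinary, auditable business logic sitting between the agent’s proposal and its execution, exactly the layer the guardrail discussion calls for.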

Mark Sidlauskas: So as we move further into the future, we’re optimizing around the system, then maybe across systems, and potentially across a huge ecosystem. Where do you see the future, or what’s the future vision, for AI decisioning and Agentic AI?

Denzil Wasson: Yeah. So I think the agents are going to get more powerful, and their tools are going to get more powerful. I think trust is going to evolve. The more structured use cases will reach a point where the agents become more reliable, and as they become more reliable, we can give them more autonomy. Essentially, it means that our work is going to be a lot more creatively focused and a lot less on the commodity work of today. It’s almost like we’re going to have assistants saying, I’ve figured out this whole idea, here’s my presentation. Do you like it or don’t you, and what would you change? Whereas today, we have to do all of that ourselves. So I think it really allows us to offload commodity work and be more creative. And I think that creativity, for a lot of organizations, is essentially going to result in a hyper-personalization of offerings, because the agents give us the ability to make changes very quickly. I think companies are going to have opportunities to really please their customers.

Mark Sidlauskas: Yeah, I think it takes personalization to the next level. We’re seeing that right now. So this could take us to places that we hadn’t even thought about before.

Denzil Wasson: Yep!

Mark Sidlauskas: So thanks, Denzil! It’s fascinating to see what’s going to happen here. And we should get together for another episode, maybe in six months, probably sooner, because things are changing so rapidly!

Denzil Wasson: Absolutely. Yeah. Our agents can get together.

Mark Sidlauskas: Yeah, exactly. Mine will call yours. To our listeners, we have more exciting content planned, and as always, we’d love to hear from you. Connect with us on social media, share your feedback, and don’t forget to subscribe to the podcast so you never miss an episode. Until next time, this is Sapiens Insurance 360. Thanks for listening!
