
From Vizuri's Experts

AI in Insurance - Should You Be Scared?

At this year's Digital Insurance conference, the main theme was “Everything AI.”

AI (artificial intelligence) has become the new buzzword in the insurance industry, from life to P&C (property & casualty) insurance.

Insurers constantly push to drive efficiencies and reduce costs, without sacrificing quality. This push occurs amid ever-increasing regulatory pressures, as well as the need to respond faster to market changes and to improve the client experience.

AI looks more and more like the solution to all of insurers’ problems and needs. AI has many possible applications in the insurance world, but from our experience, it can establish unprecedented efficiencies and even improve the management of risk when applied to core insurance systems that support underwriting and claims.

But how do you make sure your systems stay compliant with industry regulations? Is AI a compliance liability? Will it automate claims and underwriting professionals out of their jobs? Should you be afraid of it?

First, the basics: How do we define AI?

Artificial intelligence attempts to simulate human intelligence processes using machines, specifically computer systems. These processes include learning, reasoning, and self-correction.

Applications of AI include expert systems such as rules engines, speech recognition, smart personal assistants, spam filters, fraud detection, self-driving cars, and chatbots that act as automated responders for online customer support.

There are two main approaches to AI: symbolic and non-symbolic.

Symbolic AI, also known as Good Old-Fashioned AI (GOFAI), encompasses intelligent systems built on explicit rules and knowledge. Their actions can be explained and are understandable and interpretable by humans.

In symbolic AI, signs and symbols such as strings of characters are processed to represent real-world decisions and concepts. Symbols can be arranged in structures to form a concept or domain model that represents the relationships between them.
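The idea of symbols arranged into a domain model, with human-readable rules over them, can be sketched in a few lines of Python. The Applicant model, rule, and thresholds below are hypothetical, invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Applicant:          # domain model: symbols with real-world meaning
    age: int
    smoker: bool

def premium_class(a: Applicant) -> str:
    # An explicit, human-readable rule: anyone can inspect exactly
    # why a given premium class was assigned.
    if a.smoker and a.age >= 50:
        return "high-risk"
    return "standard"

print(premium_class(Applicant(age=55, smoker=True)))   # high-risk
print(premium_class(Applicant(age=30, smoker=False)))  # standard
```

Because the rule is written in terms of named domain concepts, the decision logic stays transparent to underwriters, auditors, and developers alike.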

Non-symbolic AI, on the other hand, strives to build computational systems mimicking the human brain. It uses algorithms to discover patterns within both structured and unstructured data.

Good examples of non-symbolic AI include machine and deep learning, neural networks, and image analysis. Non-symbolic AI does not interpret a symbolic representation to make a decision. Instead, it automatically learns and improves from examples or instruction without being explicitly programmed, searching very large amounts of data for patterns it can use to make better decisions.
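As a rough sketch of learning from examples rather than rules, the toy classifier below labels a new claim by finding the most similar labelled example (1-nearest-neighbour). Every feature value and label is invented for illustration; no rule ever states what makes a claim fraudulent.

```python
import math

# Labelled examples: (claim amount in $1000s, days since policy start in 100s)
examples = [
    ((1.0, 9.0), "legit"),
    ((2.0, 8.0), "legit"),
    ((9.0, 0.5), "fraud"),
    ((8.0, 1.0), "fraud"),
]

def predict(features):
    # Return the label of the closest training example (Euclidean distance).
    def dist(example):
        return math.dist(features, example[0])
    return min(examples, key=dist)[1]

print(predict((8.5, 0.8)))  # resembles the fraud examples
```

The pattern is learned entirely from the data: change the examples and the behaviour changes, with no explicit rule to inspect.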

Non-symbolic AI is also referred to as "heuristic programming," which means it does not always produce the right answer: heuristic decisions are based on "rules of thumb."

Non-symbolic AI has become a reality with the rise of big data. Everything from car manufacturers knowing how and where you drive to your Fitbit tracking your every move provides big data that can now be used to make predictions and recommendations.

Another great example of non-symbolic AI delivering immense benefit is TurboTax. Its Tax Knowledge Engine uses artificial intelligence in the form of machine learning to deliver a personalized, streamlined tax-preparation experience, connecting more than 80,000 pages of U.S. tax requirements and instructions to an individual's unique financial situation.

So, what is the big difference and why is it important to make a distinction?

The big difference is that symbolic AI decisions can be explained and are human readable in an understandable language.

Non-symbolic AI is pretty much a black box. It lacks interpretability and cannot explain why a particular decision was made. Because non-symbolic AI decisions rest on "rules of thumb," they cannot be relied on for critical decisions.

Knowing the difference between the two approaches helps us understand where to apply the correct type of AI to solve a specific problem.

The upside to GOFAI: It’s auditable.

It is important to prove compliance through audits and to explain exactly how risk is calculated. For example, whenever an underwriting decision is made, the insurer must be able to explain how and why it was made. The same goes for claims adjusters' decisions.

Auditability of decisions can have a huge impact on compliance and an insurer's bottom line. As described above, GOFAI is based on human-made rules and knowledge, and it makes decisions that can be directly interpreted by humans. Non-symbolic AI, on the other hand, is a black box, which is a huge challenge when a decision needs to be traced.
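One way to picture this auditability in a rules-based system: each decision can carry the exact rules that fired, giving a compliance audit a complete trace. The rule names, conditions, and thresholds below are hypothetical, for illustration only.

```python
def adjudicate_claim(claim: dict) -> tuple[str, list[str]]:
    """Return a decision plus the audit trail of rules that fired."""
    trail = []
    if claim["amount"] > 10_000:
        trail.append("R-101: amount over $10,000 -> manual review")
    if claim["days_open"] > 90:
        trail.append("R-102: open over 90 days -> manual review")
    decision = "manual review" if trail else "auto-approve"
    return decision, trail

decision, trail = adjudicate_claim({"amount": 12_500, "days_open": 10})
# decision == "manual review"; trail records exactly which rule fired
print(decision, trail)
```

A non-symbolic model produces no such trail: it can report a score, but not a human-readable chain of reasons behind it.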

The insurance industry has been using GOFAI for a while now. However, the AI technologies of today and tomorrow have come a long way and have completely revolutionized the industry.

But will it automate people out of a job?

The short answer is no. As AI matures in sophistication, it will still need human judgment to realize its full effect on the insurance industry. AI can create insight into the relationships between risks, identify patterns in data, and make suggestions. We view AI as a way to support underwriters and adjusters so they can focus on decisions that require a human touch.

The key question is not whether underwriters and adjusters will be automated away; it is what they are going to do about AI. AI needs input from subject-matter experts and their years of experience to build automated systems that improve efficiency and reduce costs through better decision making. As experts in their fields, underwriters and adjusters are the ideal candidates to manage and run these AI systems.

Therefore, underwriters and adjusters should not expect to be replaced by AI; instead, they should plan to learn how to manage AI systems so those systems can assist in decision-making processes. This will help insurance companies address the major barriers to deploying AI successfully: talent gaps, technological constraints, and the difficulty of embracing innovation and change.

AI and innovative technologies have already begun to disrupt the insurance industry and will continue to do so, making early adopters increasingly competitive. It is important not to be threatened by AI. Embrace it, and contribute to strategies that help your organization stay ahead in today's rapidly changing competitive environment.

To learn more about insurance industry disruption, check out our on-demand webinar with Forrester Principal Analyst Ellen Carney where we discuss how insurance innovators are using the deep experience their underwriters provide in a smarter way. 

Ben-Johan van der Walt

Ben-Johan van der Walt is a Software Architect/Engineer with over 20 years of experience leading successful projects of various sizes and scopes. He is a seasoned professional, with outstanding project planning, execution, mentoring and support skills. He is always ready for a challenge.