24.08.2020

Decision Management Systems and Machine Learning: Tips for Choosing the Right Provider


An interview with Fabian Cotic, Machine Learning Specialist and Presales Consultant at Actico, about the use of AI techniques such as Machine Learning in automated business decision-making and Decision Management Systems.

Anyone who has to make thousands of business decisions per day needs a high-performance Decision Management System (DMS) that also uses AI techniques such as machine learning (ML). To help organizations select the right solution, the research and advisory firm Gartner has summarized the key challenges and selection criteria in its report, “How to Choose Your Best-Fit Decision Management Suite Vendor”.

As a pioneer of artificial intelligence in DMS, Actico has an expert understanding of the typical stumbling blocks when it comes to finding the best system for specific requirements. That’s why we took the opportunity to talk to Fabian Cotic, a Machine Learning Specialist and Presales Consultant at Actico, about the most important considerations when choosing a Decision Management System.

Why should companies deploy a DMS?

Making good business decisions has never been more important. For example, let’s say an organization wants to set the price of a product or approve a loan. These are situations in which decisions are fundamental and directly determine business success. A DMS can help automate these decisions through straightforward management of decision logic and a low-code approach. You don’t need a trained programmer to write the decision logic anymore. With the right system in place, a business user can model these cases.

You mean the people who already know what decision they would make themselves …

Exactly, that’s a crucial point. Anyone who has ever performed requirements engineering for a project knows that it isn’t that easy, and that many aspects can fall by the wayside. That’s why it’s so nice to be able to graphically record the flow of the decision as defined by the business user. That helps a lot to develop a good understanding of the process. And ultimately, this is how you generate high-performance Java code. I can then easily integrate this code into my IT application landscape. The special thing about this procedure is that the decision logic is extracted from the code and the implementation is no longer just left to the IT staff. The business unit is also now able to lend a hand.

“With a DMS, you extract the decision logic from an application and significantly reduce the burden on IT.”
What are the differences between a DMS and business rules?

These applications used to be called Business Rules Management Systems (BRMS). With the developments in AI in recent years, these systems have come to combine a pure BRMS with AI and Machine Learning, and to make that difference clear, we now talk about Decision Management Systems (DMS).

So a DMS is an evolution of the concept that enables the integration of additional features such as event processing…

Not just event processing, but also Machine Learning and new features that help break down decisions more effectively (because they can be very complex). To achieve this, I need tools that BRMS couldn’t deliver. For example, DMS applications can automatically break down complex decisions into a larger number of small ones to achieve greater transparency.

How do I, as a potential customer, know whether I need Machine Learning or not?

If I understand the decision logic, then a rule-based system is probably the right choice. But if I find that a rules system isn’t enough because I don’t know how to build the decision logic, it makes sense to rely on Machine Learning. However, it’s important that you can also access the required data.

For example, we have a customer in the banking sector who determines product affinities with our software. It’s relatively difficult to write an “if-then” rule as to whether a customer might be interested in a certain product. So, the bank has set up a large data pool (“data lake”) that contains information on, for example, whether the customer recently bought a bicycle or has been active in a Facebook group focused on “real estate”. Based on these kinds of real-life touchpoints, Machine Learning can be used to determine whether the customer has an affinity for a certain product. And that works pretty well. With a large number of data sources, it would be far too complex for a human to create the relevant rules manually.
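To make the contrast concrete, here is a minimal pure-Python sketch with invented touchpoint names and weights: a hand-written if-then rule covers only the combinations its author thought of, while a learned scoring model (here reduced to a simple weighted sum) can draw on many touchpoints at once.

```python
def rule_based_affinity(customer: dict) -> bool:
    # A hand-written "if-then" rule: it only covers the exact combination
    # of touchpoints its author thought of.
    return (customer.get("bought_bicycle_recently", False)
            and customer.get("active_in_real_estate_group", False))

# In a real system these weights would come from training an ML model on
# historical data; here they are invented for illustration.
LEARNED_WEIGHTS = {
    "bought_bicycle_recently": 0.4,
    "active_in_real_estate_group": 1.2,
    "visited_mortgage_calculator": 0.9,
}

def ml_affinity_score(customer: dict) -> float:
    # A linear score over many touchpoints; real models are usually richer.
    return sum(w for feat, w in LEARNED_WEIGHTS.items() if customer.get(feat))

customer = {"active_in_real_estate_group": True, "visited_mortgage_calculator": True}
print(rule_based_affinity(customer))      # False -- the rigid rule misses this customer
print(ml_affinity_score(customer) > 1.0)  # True -- the score still flags interest
```

The point is not the specific weights but the shape of the problem: with dozens of data sources, the number of rule combinations a human would have to write explodes, whereas a trained model handles them uniformly.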

How can software architects determine when it makes sense to use a DMS?

First you have to see how complex the rules are: Do I even need a dedicated system for this, or can I just map it in my source code? Of course, business rules can also be formulated directly in common programming languages. However, if the complexity increases significantly and it becomes harder to manage in the code, it quickly becomes very confusing. In addition, you often want or have to make a lot of changes to the rules.

If, for example, you have to adjust your decision-making logic once a week, it’s pretty unrealistic to expect that your IT department will be able to program you a new release every week. That’s the key aspect of a DMS: It allows business units to act independently of the IT department and make changes in production systems themselves.

Of course, this also includes quality assurance, which takes place in the form of continuous integration in the background. But this doesn’t have to be programmed by the user; the tool has to deliver it out-of-the-box. This is the only way to ensure that the changes work as intended.

As a result, business users can adapt the decision logic (and not just the IT department). If this is an important requirement for a project, you should choose a tool that is easy for “normal” business people to use.
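One way to picture this decoupling is to keep the decision logic as data rather than compiled code, so it can be changed without a software release. This is only an illustrative sketch with invented field names and thresholds, not the actual DMS format:

```python
import json

# Decision logic kept as data (here, JSON) instead of being hard-coded,
# so a business user can change thresholds without a new release.
RULES_JSON = """
[
  {"if": {"field": "credit_score", "op": ">=", "value": 700}, "then": "approve"},
  {"if": {"field": "credit_score", "op": ">=", "value": 600}, "then": "review"}
]
"""

OPS = {">=": lambda a, b: a >= b, "<": lambda a, b: a < b}

def decide(applicant: dict, rules: list, default: str = "reject") -> str:
    # First matching rule wins, as in many rule engines.
    for rule in rules:
        cond = rule["if"]
        if OPS[cond["op"]](applicant[cond["field"]], cond["value"]):
            return rule["then"]
    return default

rules = json.loads(RULES_JSON)
print(decide({"credit_score": 720}, rules))  # approve
print(decide({"credit_score": 640}, rules))  # review
print(decide({"credit_score": 550}, rules))  # reject
```

Editing the JSON changes the decision behavior immediately, which is the essence of letting business units act independently of release cycles.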

If Machine Learning and AI are also used, a DMS makes sense because it can explain why it made the decision in a specific case. The system must be able to provide documentation of the decision algorithm at the push of a button. This is particularly important when Decision Management Systems are used in highly regulated industries such as financial services and insurance. Companies can use the DMS not just to document the decisions, but also to continuously improve them.

“Through logging within a DMS, companies can not only document decisions, but also continuously improve them.”
What is the best way to evaluate the technical properties of a DMS?

By first figuring out what you need. So, I have to think about which decision technology I need in advance. Are business rules enough for me, is Machine Learning a requirement, is streaming analysis important, and do I perhaps also need some optimization procedures? You have to be clear about these four requirements at the beginning. They should be put in writing and formulated as a basis for selecting providers via RFIs and RFPs.

The next step is often a proof-of-concept (PoC) to show that the selected solution is able to meet the requirements. Some customers ask for a trial license to experience the system for themselves and find out what it can do. Others have a specific project for which we, as a provider, set up a PoC to demonstrate how the system can be deployed.

But we also often work with IT specialists who would like to get involved themselves and are not interested in us taking on the project management for the implementation of the DMS. These are mostly people who have decided that they need a DMS solution – for example because they want to decouple the logic from the application. It’s also often about agility, because you can develop within the system very quickly. In our system, you can set up a logic snippet rapidly using drag and drop. I have already programmed a lot in Java myself, but with templates and ready-made modules you can reach your goal much faster.

What does that look like in the real world?

For example, you don’t have to write a test case and insert a new category. You click on “Create new rule test” and see the possible input and output data. Then you just have to select them and define what should come out at the other end. We also have ready-made rule sets for typical compliance scenarios that occur repeatedly at banks and insurance companies, so that you don’t have to create them from scratch. For example, a bank must ensure that transactions are not carried out with people who are on a blacklist and who could be terrorists or other criminals. There is, for example, a template that performs exactly this kind of check based on rules technology.
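A drastically simplified sketch of such a blacklist check might look like this; real compliance screening uses official sanctions lists and fuzzy name matching, and all names here are invented:

```python
import unicodedata

# Toy sanctions/blacklist screening step. The list entries are invented;
# real checks run against official sanctions lists with fuzzy matching.
BLACKLIST = {"john doe", "erika mustermann"}

def normalize(name: str) -> str:
    # Strip accents and case so spelling variants still match.
    decomposed = unicodedata.normalize("NFKD", name)
    stripped = "".join(c for c in decomposed if not unicodedata.combining(c))
    return stripped.lower().strip()

def is_blocked(counterparty: str) -> bool:
    return normalize(counterparty) in BLACKLIST

print(is_blocked("John DOE"))       # True -- transaction must be stopped
print(is_blocked("Alice Example"))  # False
```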

Are there any other decision criteria?

If, for example, you have an event stream platform that continuously produces data, and the DMS needs to be integrated into this platform, it makes sense to choose a provider who delivers this as part of the solution. In this environment, however, customers often opt for an open source solution like Kafka, which they integrate using Java and the appropriate rules.

You have already mentioned how important ease of use is for non-IT people with these systems. Are there any big differences between systems in this regard?

There are tools that use natural language. In other words, you write sentences in plain prose, such as "If that's bigger than that, then do the following." At first this works very well for business unit employees, but it soon gets confusing if there are a lot of rules involved.

Our tool, on the other hand, is graphically structured, and can define sub-areas and organize them visually. There is a dependency matrix that shows which rule calls other rules and how everything is related. The graphical models – Gartner calls them “Decision Flows” – make this very clear.

Then there are also decision tables that almost every DMS has. These work in a similar way to Microsoft Excel, which many users are already familiar with. And of course, you can also import decision tables from Excel for most applications.
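A decision table of this kind can be pictured as rows of conditions evaluated top to bottom, much like rows in a spreadsheet. The columns and values below are invented for illustration:

```python
# A toy decision table for customer discounts, evaluated row by row.
# None in a condition column means "any value matches".
DECISION_TABLE = [
    # (customer_segment, minimum_order_value, discount)
    ("gold",   None, 0.10),
    ("silver", 1000, 0.05),
    (None,     None, 0.00),  # catch-all row
]

def lookup_discount(segment: str, order_value: float) -> float:
    for seg, min_value, discount in DECISION_TABLE:
        if seg is not None and seg != segment:
            continue
        if min_value is not None and order_value < min_value:
            continue
        return discount
    raise ValueError("no matching row")

print(lookup_discount("gold", 50))      # 0.1
print(lookup_discount("silver", 1500))  # 0.05
print(lookup_discount("bronze", 2000))  # 0.0
```

Because the logic lives in the table rather than the code, maintaining it feels like editing a spreadsheet, which is exactly why decision tables are so approachable for business users.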

Can you also extract rules from existing systems? Gartner talks about Rules Harvesting here.

This doesn’t exist out of the box, but a DMS can, of course, learn this through Machine Learning. Theoretically, the findings could then be automatically converted into rules again, but in practice we don’t do that because it doesn’t deliver better results. Of course, doing so would make it easier to explain, but this is often not a requirement and therefore not necessary. And if explainability is required, we can also do this with Machine Learning using the Shapley algorithm.
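For a small number of features, Shapley values can even be computed exactly by enumerating feature coalitions. The toy scoring function and feature names below are invented; real systems use approximations such as SHAP for larger feature sets:

```python
from itertools import combinations
from math import factorial

def score(features: dict) -> float:
    # Toy model with invented weights plus an interaction term, standing in
    # for a trained ML model.
    s = 2.0 * features.get("income", 0) + 1.0 * features.get("age", 0)
    if features.get("income", 0) and features.get("age", 0):
        s += 0.5  # interaction bonus
    return s

def shapley_values(x: dict, baseline: dict) -> dict:
    # Exact Shapley values by enumerating all coalitions of the other
    # features -- feasible only for a handful of features.
    names = list(x)
    n = len(names)
    phi = {}
    for i, name in enumerate(names):
        others = names[:i] + names[i + 1:]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = {f: (x[f] if f in coalition or f == name else baseline[f])
                          for f in names}
                without_i = {f: (x[f] if f in coalition else baseline[f])
                             for f in names}
                total += weight * (score(with_i) - score(without_i))
        phi[name] = total
    return phi

phi = shapley_values({"income": 1.0, "age": 1.0}, {"income": 0.0, "age": 0.0})
print(phi)  # the values sum to score(x) - score(baseline)
```

Each feature's Shapley value is its fair share of the difference between the model's output and the baseline output, which is what makes the approach attractive for explaining individual decisions.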

From my own experience I can say that Rules Harvesting is rarely used in practice right now. Even Gartner says that this is currently at the experimental stage.

Does the issue of cloud-based or onsite deployment also play an important role in the customer’s decision?

For an RFI, of course, you have to be clear about where the applications should run – in the cloud or in your own data center. We support both, but this is not the case with all providers.

You should also find out which mechanisms and support a DMS provides for creating and automatically deploying the solution. This also includes quality assurance, which is very important.

We can generate the source code from the system, compile the program, carry out the necessary tests with test suites and then make the result available as a separate web service with SOAP or a REST API, for example. There is also the option of providing a Java API that accesses the DMS rules in the background and always uses the current rule set. This enables us to integrate into any IT landscape.

With a DMS, can you still understand years later why it made a particular decision?

This is a very important point! Every decision and transaction in the system is saved and logged. I can, for example, tell you that five years ago the model was implemented in version X and with this data set, and that the system made this specific decision. In other words, you can use a test instance to create the exact state that led to a decision weeks or years ago and run it again.
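A minimal sketch of this logging-and-replay idea, with invented ruleset versions and field names (a real DMS persists the model/ruleset version and all inputs durably for every transaction):

```python
DECISION_LOG = []  # stand-in for a durable audit store

RULESETS = {
    # Versioned decision logic; a real system keeps every version
    # under version control so old states can be reconstructed.
    "v1": lambda a: "approve" if a["credit_score"] >= 700 else "reject",
    "v2": lambda a: "approve" if a["credit_score"] >= 650 else "reject",
}

def decide_and_log(version: str, applicant: dict) -> str:
    decision = RULESETS[version](applicant)
    DECISION_LOG.append({"version": version,
                         "inputs": dict(applicant),
                         "decision": decision})
    return decision

decide_and_log("v1", {"credit_score": 660})  # rejected under v1

# Years later: re-run the archived inputs against the archived version.
entry = DECISION_LOG[0]
replayed = RULESETS[entry["version"]](entry["inputs"])
print(replayed == entry["decision"])  # True -- the decision is reproducible
```

The same applicant would be approved under "v2", which is exactly why the version that actually made the decision has to be logged alongside the inputs.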

What happens when current developments – like the Covid-19 pandemic – change people’s behavior significantly? Can the system recognize this itself?

In Machine Learning, this is referred to as concept drift. It can happen all of a sudden when something like Covid-19 appears and completely changes the buying behavior of customers. In this scenario, the predictions of the ML models no longer work. Detecting this drift automatically is relatively difficult. Nevertheless, we have made it our mission to solve the problem. The aim is to implement monitoring that detects drift and triggers a retraining of the model. The idea behind this is to build a system that is so intelligent that it updates itself when it detects a change.

“An intelligent DMS can recognize changes in the initial data and automatically retrain itself – even if nobody in Germany does that yet.”
Is the drift detection performed statistically and automatically?

To implement something like this, you need to check threshold values to see whether they are exceeded. One possible approach is to look at how important a certain feature was in training and to check whether the statistical distribution of this feature in the data has changed. Then it’s important to assess whether this had a significant impact on the statements generated by the ML model.

Using this (still theoretical) method, we could also carry out the adjustments fully automatically, but I don’t know anyone in Germany who would adjust the models without a final human check. However, in Asia, for example, there are companies that are ready to take more risks in terms of technological progress.
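One common statistic for this kind of distribution comparison is the Population Stability Index (PSI). The sketch below is illustrative; the 0.2 alert threshold is a widely used rule of thumb, not a universal constant:

```python
from math import log

def psi(expected_pcts, actual_pcts, eps=1e-6):
    # Population Stability Index over binned feature distributions.
    # Both inputs are per-bin proportions that each sum to 1.
    total = 0.0
    for e, a in zip(expected_pcts, actual_pcts):
        e, a = max(e, eps), max(a, eps)  # avoid log(0)
        total += (a - e) * log(a / e)
    return total

training = [0.25, 0.50, 0.25]  # feature distribution seen at training time
live_ok  = [0.24, 0.51, 0.25]  # small shift: no alarm
live_bad = [0.60, 0.30, 0.10]  # Covid-style behavior change

THRESHOLD = 0.2  # common rule of thumb for "significant drift"
for name, live in [("live_ok", live_ok), ("live_bad", live_bad)]:
    value = psi(training, live)
    if value > THRESHOLD:
        print(f"{name}: drift detected (PSI={value:.3f}) -> trigger retraining")
    else:
        print(f"{name}: stable (PSI={value:.3f})")
```

In a monitoring setup, a check like this would run per feature on a schedule, and crossing the threshold would queue the model for retraining, with or without a final human sign-off.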

If a company is deciding whether to use a DMS in the future: When should it choose a one-stop-shop solution, and when should it choose a multi-vendor approach?

The question is, of course: Have you already invested in this area? Data science platforms often already exist before a DMS is introduced, and this has to be taken into account in any case. But even if a company opts for a one-stop-shop approach, there is still no guarantee it will end up with a fully integrated solution. That’s because many DMS providers have bought parts of their solution portfolio through acquisitions, so the compatibility of the individual components is not as high as it should be. Only the highest possible level of compatibility can ensure that the version control system is the same, that the models are always deployed in the same way, and that uniform logging and monitoring are possible.

It’s also important to understand that data scientists do not normally operate and control rules systems because they are located in different areas of the business. These are often separate silos: the data scientists take care of the Machine Learning models, while other people create the sets of rules.

It’s often the case that customers are already using a rule technology and know how to enter the rules graphically. They also like using our ML technology because they can keep using the familiar graphical user interface. We then train the business unit users to create Machine Learning models within the interface. A one-stop-shop solution makes sense here in order to minimize training costs.

If customers already have existing Machine Learning models, they are often available as Python scripts. We can then integrate them into our system as well.

So, to be clear: What are the pros and cons of using the built-in ML solution within a DMS, and when does an external solution make sense?

Ultimately, it depends on the tools that data scientists prefer to use. Most prefer to work with open-source technologies such as Python or R. According to Gartner, these can be integrated by all of the DMS products it investigated. In other words: an external data science and machine learning (DSML) platform does not make sense here. Then there are also companies that have strategically opted for an external DSML platform because they follow a “best-of-breed” approach, for example because a certain feature is required that only an external DSML can provide. These companies don’t want to create models outside of the DSML. This usually leads to the DMS and DSML being loosely coupled via web services. However, this connection has implications in terms of latency. We have customers who need to process tens of thousands of requests per second, and that simply doesn’t work over a web connection. We’re one of the few vendors that doesn’t have to call external systems through web services to run an ML model. Ultimately, however, the decision is always case-dependent.

Fabian Cotic
PRESALES CONSULTANT

Fabian Cotic works as a Machine Learning Specialist at ACTICO, where he has been part of many AI-based projects in the financial services sector. Alongside his consulting work, he has also worked as a software engineer at ACTICO, where he was responsible for integrating AI and Machine Learning into the ACTICO Platform. During his three years at ACTICO, Fabian has built a proven track record of successful models that made their way into production.

This might also be of interest to you

Whitepaper: Successful Automation

Learn more about direct involvement of business users with low-code platforms.

Download Whitepaper
Whitepaper: Central Business Rules Engine

An Architecture Concept for Higher Business Agility and Consistency
By implementing a central business rules engine, organizations take their agility and efficiency to a new level.

Download Whitepaper
Enter The Next Level of Intelligent Automation

ACTICO Platform is a powerful software for intelligent automation and digital decisioning.

Learn more