An interview with Hans Jürgen Rieder, CEO of Actico, about the need to leave robotic process automation (RPA) behind and use centralized Decision Management Systems instead.
Leaders who want to restructure their corporate architecture and drive technological innovation often lack a clear strategy for adopting and expanding automation within their companies. As research and advisory firm Gartner describes in its report, “Move Beyond RPA to Deliver Hyperautomation”, it isn’t enough to introduce a few robotic process automation (RPA) tools. End-to-end automation requires a lot more – what Gartner calls “hyperautomation”.
We had the opportunity to talk to Hans Jürgen Rieder about the findings of the Gartner report, the use of Decision Management Systems and RPA tools, and why it’s better to stay away from RPA.
Is hyperautomation a continuation of RPA, or a step beyond it?
When one talks about automation, it always means “end-to-end”. Of course, there are various tools that achieve this throughout the entire automation chain. RPA is the dumbest tool of all in this context, as the term “robotic” suggests. The automation usually involves very simple tasks, such as copy-paste or the completion of form fields. It’s based on simple tools that cannot make complex decisions. So, when it comes to straightforward tasks, RPA is pretty good. But the fact we’re talking only about simple tasks also means the savings that can be achieved with RPA are negligible. RPA only covers a small part of the automation chain.
RPA is the dumbest tool of all in the automation tool chain, as the term “robotic” suggests.
The next problem with RPA is that it is often introduced as the result of short-term decisions. If you want to integrate RPA into the IT infrastructure, it quickly becomes complex. Because the tools are so simple, they are usually used by people within business units, rather than by IT. But if a process change alters the decision chain, the RPA element must also be adapted accordingly – or it will break. RPA is often used to plug a gap that isn’t covered by standard ERP software; that means it never gets implemented cleanly by IT as a project. This usually happens because organizations want to minimize costs when using RPA to fulfil a need quickly.
In other words, RPA is effectively a bridge technology that is used as a first step in dealing with the topic of automation, and to shift decision chains towards business. This becomes critical, for example, if you have hundreds or more of these small RPA processes in use, because you quickly lose any kind of overview. In this scenario, it becomes highly complex; keeping this networked RPA system in check is a real challenge because ultimately, the RPA snippets and scripts need to be managed. In other words, RPA scripts provide a quick ad hoc solution, but in large numbers they only create new problems because of their complexity.
RPA creates benefits in the short term, becomes questionable in the medium term, and is catastrophic in the long term.
Does employee turnover also play a role here? When the people who wrote the RPA scripts leave the organization?
In themselves, the scripts are relatively small and easy to read. It’s the large volume of scripts that creates the problems. You introduce RPA to save costs and minimize headcount for the completion of relatively simple tasks. Then, if hundreds or even thousands of these small scripts are in use, you need to go through all of them and understand what they do whenever changes are required. This costs an enormous amount of time and money and ultimately doesn’t produce much in the way of savings. The complexity creates an enormous overhead. That’s why I’m not a fan of RPA.
In themselves, RPA scripts hardly change anything in existing processes. They don’t create efficiency per se, they just take on the tasks that were previously done by people. They replace employees with simple automation, but the logic is still the same. You can improve efficiency much more effectively if you look at the entire decision-making chain end-to-end – which you should do regularly anyway.
And when the people are no longer there, there is no one around to check whether the individual steps still make sense. The system then simply reinforces itself. As a result, knowledge gets lost and nobody cares about the processing chain anymore. Disillusionment with RPA is already a reality within some companies, but it will only spread more widely in the future.
Then why do companies still use RPA?
RPA scripts can be created by business users – you don’t need IT anymore because the tools are very user-friendly these days. But in taking this approach, you are effectively decoupling this automation from the traditional IT applications. That means if IT makes changes, business users may not notice them (and vice versa). So, when I use RPA, the question is: “How do I structure the company in a more intelligent way?” I have to rethink my organization, and make sure that the individual parts are well informed. I believe that companies are truly intelligent when they are organized according to clusters, each of which contains both business and IT employees.
However, RPA is often also a KPI issue (KPI = Key Performance Indicator): The number of robots used is seen as a positive indicator and is often used as a yardstick for automation and efficiency. Analysts also play a role here because, based on these numbers, they say that companies do not use enough automation – just because the number of RPA scripts is too small. I think I just explained how nonsensical this is.
So, if that’s not a promising approach to automation, what is intelligent automation?
Intelligent automation is created using a Digital Operations Toolbox. That means you use low-code platforms, possibly also an RPA tool (with all its side effects) and a Decision Management System. The question is: When do you use which technology to best effect? Each of these tools has its place, but I use decision automation when I have to make complicated decisions, when these decisions often change, when IT and business have to work well together and – and this is the most important point – when I want to store knowledge centrally in a system that should be accessible via multiple applications. Because the other tools (like RPA) really can’t do this.
One example from the financial world shows why this is important: Here, regulation plays an important role because it is constantly changing (and also varies by region and country). It would not be wise to store these rules within individual applications.
It isn’t wise to store compliance and regulatory requirements within lots of individual applications. A more intelligent approach is to map them in a central system that other applications can access.
Doing so would mean that every time legal requirements change, I would have to track the changes within every application individually and make sure that nothing was overlooked. Above all, this would have to happen according to the same logic and the same tests within every application. So, it’s clearly more intelligent to put these rules in one place: I only have to maintain them once, and I can be sure that they will be applied consistently everywhere.
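The centralization argument can be sketched in a few lines of Java. This is a hypothetical illustration, not ACTICO’s actual API: the class name, the reporting rule, and the threshold value are all invented for the example. The point is that every application calls the same service, so a regulatory change is made in exactly one place.

```java
import java.math.BigDecimal;

// Hypothetical sketch: regulatory rules live in ONE central service,
// and every application calls it instead of embedding its own copy.
public class ComplianceService {

    // The single place where the (invented) reporting threshold is
    // maintained. When the regulation changes, only this rule is updated.
    private static final BigDecimal REPORTING_THRESHOLD = new BigDecimal("10000");

    /** Returns true if a transaction must be reported to the regulator. */
    public boolean requiresReporting(BigDecimal amount, String countryCode) {
        // Rules may vary by region and country, as noted in the interview.
        if ("US".equals(countryCode)) {
            return amount.compareTo(REPORTING_THRESHOLD) >= 0;
        }
        // Default rule for all other regions.
        return amount.compareTo(REPORTING_THRESHOLD) > 0;
    }
}
```

A payments application and an onboarding application would both call `requiresReporting(...)`; neither carries its own copy of the threshold, so the “same logic, same tests” requirement is met by construction.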
From an IT perspective, you are building a central IT service using rules-based technology. The system becomes intelligent because it uses low-code elements and can be modeled graphically (which makes it easier for business users to read and understand). The tools should also display relevant analysis and statistical data. This creates the intelligent, end-to-end overview for which the entire toolbox of automation technologies can be used.
Moreover, it’s also about retaining knowledge. If I automate a lot of tasks, then at some point the people who had this knowledge will no longer be there (not only through headcount reduction, but also through turnover). That’s why it’s important to store knowledge centrally in the form of rules for Decision Management Platforms. Ultimately, “standard” applications also store knowledge in the form of process flows.
What is the best way to approach setting up intelligent automation? Of course, IT has to set up the system, but what happens next?
The first step is to be clear that you want to use a central system. This may seem trivial, but it’s the first important decision that must be made to start providing automated decisions as a central service. Once you have set up the system, it’s a step-by-step and process-by-process approach. You store the rules centrally and use them to generate the automation – also because you know that a central service is necessary to achieve this. And even with large projects, you take the same process-by-process approach, and start by picking one example and executing on it.
Ultimately, it’s a budget issue and money within companies is a finite resource. When you start a project, it will take a month or two to realize the first efficiency gains. You can then use these to finance the next project and continue on this basis. In the past, people tried to do everything at once in a major transformation project, but nowadays you do it differently – step by step. It’s crucial to start quickly and deliver value rapidly. For me, that’s also intelligent automation.
Even if you don’t realize yet that you need a central decision-making system, you still know that you want to be more efficient and you’re thinking about how to do that. The end result is that some form of automation is always part of the solution.
So, is the main driver for implementing intelligent automation systems mainly about the potential cost savings?
The cost issue is only one aspect. The flexibility and speed that such systems allow can also bring significant benefits. Take the often-cited example of a loan approval decision: Nowadays, a customer can be granted or refused a loan immediately via numerous Internet platforms. They no longer have to go to a bank branch and provide a lot of data that is then used to eventually make a decision. They receive more or less immediate feedback as to whether data is still missing, or whether the application has been accepted or refused.
And this also applies to many other areas of normal life – whether it’s online purchases, which advertisements and offers are displayed to me online, which prices I can see: These are all automated processes that are primarily designed to increase sales rather than save costs.
And there are an almost infinite number of other examples where you can use intelligent automation, be it in production and quality assurance, in trade or in administration. Automation potential exists pretty much everywhere …
… but usually where RPA is already being used?
Yes, but it’s not just about optimizing individual processes, it’s about gaining an overview via a central system; to have a system where data converges, which I can then analyze to solve problems for the long term, instead of having to work around them with a small RPA script. Only in a central system can I see how often something is accessed and needed. The goal is to automate useful tasks and eliminate useless ones.
The goal of intelligent automation is to automate useful tasks and eliminate useless ones.
In particular, the statistical evaluation of how frequently rules are used by an organization within a centralized system allows me to see what customers actually use and what they don’t, and then optimize and refine my processes accordingly – this is also part of intelligent automation.
Are Decision Management Systems also used in support by, for example, large Internet service providers?
Yes, of course. Support processes in particular have a lot of central elements. If you call a central support function, an AI-supported system can quickly determine the causes of a problem with a few questions and suggest possible solutions. The advantage of a DMS here is that you can quickly add new attributes and aspects to the system, and integrate them into the models (for example, when the provider delivers new routers to its customers). This is one of the great advantages of a DMS: to deliver a new router, you can simply copy the logic for the support process and only have to make minor product-specific adjustments afterwards.
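The “copy the logic, adjust only the product-specific parts” idea maps naturally onto reuse in code. The sketch below is an invented illustration (the class names and diagnosis questions are assumptions, not a real support model): the generic flow is inherited unchanged, and only the new router’s attribute is added.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: generic support logic reused for a new router model,
// with only the product-specific checks extended.
public class SupportFlow {

    /** Generic diagnosis questions shared by every product. */
    public List<String> diagnosisSteps() {
        return List.of(
            "Is the device powered on?",
            "Have you restarted the device?",
            "Are all cables connected?");
    }
}

// "Copy" of the logic for the new router: everything is inherited;
// only a minor product-specific adjustment is made afterwards.
class NewRouterSupportFlow extends SupportFlow {

    @Override
    public List<String> diagnosisSteps() {
        List<String> steps = new ArrayList<>(super.diagnosisSteps());
        steps.add("Is the firmware updated to the latest version?"); // new attribute
        return steps;
    }
}
```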
Another good example is the delivery logistics of large food retailers: Enormous amounts of data (inventory data, sales, stocks, delivery times and supply chains) are generated, seasonal fluctuations have to be taken into account, and the knowledge is often in the heads of the department and logistics managers. Logistics changes are a key aspect of daily operations. A DMS helps to reduce manual checks, to establish more transparency, and to significantly reduce incorrect deliveries. Department heads in particular appreciate our graphic ACTICO Modeler, which directly generates Java code that can be easily integrated into the existing IT landscape.
And here is the crucial point when choosing a DMS: How easily can I make changes, how clearly laid out is the system, and how quickly can the customized models be taken live?
In addition to costs and sales, Gartner also considers risk minimization as a key aspect of intelligent automation.
Risk management is important in many areas of a business, for example in production. Machines cannot be allowed to break, because that would very quickly generate millions in additional costs. By using machine learning and rules, possible failures can be predicted and avoided – a practice known as predictive maintenance. The advantage of a DMS is that the production managers and the workers on site can feed their knowledge into the control system and, for example, automatically add upcoming tool changes to a ticket system.
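The combination of a machine-learning score with shop-floor rules can be sketched as follows. Everything here is an assumption for illustration: the weighted score stands in for a trained model, and the 500-hour tool-change interval represents the workers’ expert knowledge fed into the system.

```java
// Hypothetical sketch: a decision rule combines a machine-learning failure
// score with expert knowledge from the shop floor, and opens a maintenance
// ticket before the machine actually breaks.
public class PredictiveMaintenance {

    /** Illustrative stand-in for a trained ML model's failure prediction. */
    static double failureScore(double vibration, double temperature) {
        return 0.6 * vibration + 0.4 * (temperature / 100.0);
    }

    /** Expert rule: open a ticket when the predicted risk is high, or when
     *  the tool has reached its known change interval (worker knowledge). */
    static boolean openTicket(double vibration, double temperature,
                              int hoursSinceToolChange) {
        boolean mlAlert = failureScore(vibration, temperature) > 0.8;
        boolean toolChangeDue = hoursSinceToolChange >= 500; // assumed interval
        return mlAlert || toolChangeDue;
    }
}
```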
Of course, risk management is also an important topic for financial institutions. The heavy regulatory burden means financial institutions have to spend incredibly large amounts of money on reacting to changes every year. There is already a lot of automation in this area today, but the degree varies greatly from company to company. Many banks are dealing with hundreds or even thousands of applications, all of which need to be checked for changes required by supervisory authorities such as the SEC or CSA. Often these applications are also networked with each other, which creates enormous complexity that even the banks themselves struggle to understand completely.
Shouldn’t banks be using DMS a lot more then? Or is that too utopian?
No, I don’t find that utopian at all, but rather very realistic. I would strongly recommend building and extending this kind of system. This knowledge should really be centralized by the banks and then implemented bit-by-bit. It’s utopian to want to convert every system at once (because that would probably cost billions). But I think that mapping new regulatory requirements within a new system and implementing all the relevant decisions one-by-one is very realistic.