Opinion: Capital Markets and the Road to Intelligent Automation

Many firms are not yet realizing the full potential of automation technology. How can they overcome the challenges along the way? It all starts with data.

Nearly half of capital markets professionals globally say their firms are already using AI in their trading processes. Compared with their financial services peers, that puts them in pole position.

But there’s still a long way to go to reach the next level: creating real value across the enterprise using the building blocks of AI, such as machine learning and natural language processing (NLP), while also industrializing the approach.

Right now, many firms are underutilizing automation technology, typically deploying it in a single business area. So how can firms clear the hurdles that might otherwise cause them to falter along the way?

GPS signal lost

A key problem for effective adoption is how to move beyond initial pilots. Making room for new technologies and processes across the enterprise is never easy in an industry with lots of old systems, siloed departments and disparate data. These points are backed up by Capgemini Research, which found that a whopping 49% of participants cited a lack of coordination among different business units, creating an incomplete view of the business process. Interestingly, regulation is not a barrier, according to the latest Bank of England and FCA survey; rather, the biggest constraints are internal to firms, such as legacy IT systems and data limitations.

And it’s not just the coordination of different business units that is holding back the ability to scale AI projects. AI needs data, and lots of it. In many conversations we have had with industry experts, access to data and the quality of that data came up again and again.

To start with, if we look at the quality of data across the industry, data exceptions are pervasive for all types of firms and all types of data. Missing or late data is a multiple-times-daily challenge for 31% of firms, which means those firms are failing at the first hurdle. Firms need constant updates to their data to deliver even the most basic services: making investment and trading decisions, pricing securities, valuing portfolios, measuring risk and performance, and so on. The problem only gets worse with composite data sets and bespoke valuation methodologies. This kind of data is of the highest value to firms, but it is also the most complex: it creates the highest volume of exceptions, and therefore has the biggest impact on available resources and is the most prone to delay.
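To make that concrete, here is a minimal sketch of the kind of exception check described above. The field names, the one-day staleness rule and the sample feed are illustrative assumptions, not details from any particular firm’s process.

```python
# A hypothetical data-exception check over a daily security price feed:
# flag records with a missing price or a price older than the allowed age.
from datetime import date, timedelta

def find_exceptions(records, as_of=None, max_age_days=1):
    """Flag records that are missing a price or are stale."""
    as_of = as_of or date.today()
    exceptions = []
    for rec in records:
        if rec.get("price") is None:
            exceptions.append((rec["isin"], "missing price"))
        elif (as_of - rec["as_of_date"]).days > max_age_days:
            exceptions.append((rec["isin"], "stale price"))
    return exceptions

# Illustrative feed: one good record, one missing price, one stale price.
feed = [
    {"isin": "US0378331005", "price": 227.5, "as_of_date": date.today()},
    {"isin": "GB0002634946", "price": None,  "as_of_date": date.today()},
    {"isin": "DE0007164600", "price": 121.1, "as_of_date": date.today() - timedelta(days=3)},
]
print(find_exceptions(feed))  # exceptions routed to a team for resolution
```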

Second, 80% of available data is unstructured: think unwieldy emails, audio files and social media data. This data is often left out of analysis because it is just too hard to get into systems, yet it is important intel that helps provide a more complete picture.

A typical example of an AI project handling unstructured data uses natural language processing for email classification: automating the currently manual processing of 3,000-5,000 incoming client emails every day containing margin calls, netting requests and standard settlement instructions. These emails typically contain shorthand comments, which staff must read and then push to the relevant team to action. Handling that kind of data is something else entirely. By using natural language processing to classify the unstructured emails and extract the relevant data points, a high level of automation can be achieved. Exceptions will naturally still occur within such a process, and these can be flagged by validation rules, as the sketch below illustrates.
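As an illustration only, a baseline version of such a classifier might pair TF-IDF features with a linear model, with a confidence threshold acting as the validation rule that routes uncertain emails to manual review. The categories, sample texts and 0.75 threshold below are hypothetical stand-ins for a real labeled history of client emails.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in for thousands of hand-labeled historical client emails.
emails = [
    "Margin call notice: please post USD 2m collateral by EOD",
    "Collateral shortfall on account 9921, margin required today",
    "Please confirm netting of today's EUR trades under the master agreement",
    "Net the two USD swaps booked this morning before settlement",
    "Updated standard settlement instructions attached for account 4471",
    "New SSI for our custody account, effective next Monday",
]
labels = ["margin_call", "margin_call", "netting", "netting",
          "ssi_update", "ssi_update"]

# TF-IDF features plus a linear classifier: a common baseline for text routing.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

def triage(text, threshold=0.75):
    """Route an email to a team queue, or flag it as an exception."""
    probs = model.predict_proba([text])[0]
    best = probs.argmax()
    if probs[best] < threshold:
        return "manual_review"      # validation rule: low confidence -> exception
    return model.classes_[best]     # e.g. push to the margin-call team's queue

print(triage("Pls post 500k margin by close today"))
```

In a real project the threshold would be tuned against the relative cost of misrouting an email versus sending it for manual review.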

Another hurdle is that data science is still very caught up in the technology aspect of AI, with scientists typically focused on the algorithms and less on preparing the data for ingestion into AI projects. I have already mentioned the typical quality of data coming into systems, but the starting point for all AI initiatives has to be normalized, cleansed and trusted data taken from all relevant sources. There is a role for mobilizing the information that AI needs from all parts of the business. Getting the right information ready from the start is a key part of the process, and one that is often missed.
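As a hedged sketch of what “normalized, cleansed and trusted” can mean in practice, the snippet below standardizes a hypothetical raw price feed and surfaces unparseable rows as exceptions rather than letting them flow silently into a model. The column names and rules are assumptions for illustration.

```python
import pandas as pd

def prepare_prices(raw: pd.DataFrame):
    """Normalize a raw vendor price feed and separate out exception rows."""
    df = raw.copy()
    # Normalize column names so downstream code sees one convention.
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    # Coerce types; anything unparseable becomes NaN/NaT instead of bad data.
    df["trade_date"] = pd.to_datetime(df["trade_date"], errors="coerce")
    df["price"] = pd.to_numeric(df["price"], errors="coerce")
    # Keep one record per instrument per day.
    df = df.drop_duplicates(subset=["isin", "trade_date"])
    # Unparseable rows become exceptions, not silent training input.
    bad = df[["trade_date", "price"]].isna().any(axis=1)
    return df[~bad], df[bad]

raw = pd.DataFrame({
    "ISIN": ["US0378331005", "GB0002634946"],
    "Trade Date": ["2024-03-01", "not a date"],
    "Price": ["227.5", "n/a"],
})
clean, exceptions = prepare_prices(raw)
print(len(clean), "clean rows;", len(exceptions), "exceptions for review")
```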

No backseat driving, please

AI teams and the business also need to work together. The truth is, technology teams can’t know everything about the business, and the business can’t know everything about AI, so firms should get the two teams together at the very start to make sure the right data is taken from all relevant sources.

AI needs both quantity and quality of data. Machine learning, in particular, needs copious amounts of training data to produce decent outcomes, and poor data quality is a killer of AI techniques – as the old saying goes, “garbage in, garbage out.” Failure is inevitable without the ability to scale up the techniques that ensure the quality of the data being fed through systems. Not all data is created equal, and the winning firms are those that take data quality seriously.
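One simple way to make the quantity-and-quality point operational is a gate that refuses to train until the data set clears minimum volume and completeness thresholds. The 10,000-row and 2% figures below are illustrative assumptions, not industry standards.

```python
import pandas as pd

def quality_gate(df: pd.DataFrame, min_rows=10_000, max_missing_rate=0.02):
    """Pass only if the data set has both the quantity and quality to train on."""
    if len(df) < min_rows:
        return False, f"only {len(df)} rows; need {min_rows}"
    missing_rate = df.isna().mean().max()  # worst column's share of missing values
    if missing_rate > max_missing_rate:
        return False, f"missing rate {missing_rate:.1%} exceeds {max_missing_rate:.0%}"
    return True, "ok to train"

# Tiny demo: one of three prices is missing, so the gate rejects the set.
demo = pd.DataFrame({"isin": ["A", "B", "C"], "price": [100.0, None, 101.5]})
print(quality_gate(demo, min_rows=3))
```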

How to avoid total gridlock

Before embarking on ambitious intelligent automation projects, firms need to prioritize data. HFS Research describes the Holy Grail of AI as the intersection of iterative data inputs and minimal training of algorithms, and it highlights the misguided expectation that simply throwing machine learning at data is enough to take those data sets into production. Not so; rather, firms have to move to a data-centric mindset in which data is the centerpiece of the AI strategy. Simply put, it all starts with the data.