The Xceptor Blog

5 things we learned about data and AI from Xceptor's Dan Reid

22 August 2019

4 minutes read time

In the second episode of our new podcast series, Unleash Your Data, we had a chat with Dan Reid, CTO of Xceptor and one of its original founders. Here are a few key highlights:


When it comes to AI, the technology tail is still wagging the dog

The shiny newness of AI technologies such as machine learning and NLP is distracting firms from their end goals, and it is not uncommon for AI initiatives to be led by the technology itself. In the past, business needs dictated IT needs. As Dan says, we can learn from that and should get the two teams together at the very start. After all, the business data is what underpins the eventual solution.


All hail the data scientist – but don’t forget about the data

Data science, a much-needed discipline in the data-heavy banking, finance, securities and insurance industries, is getting more attention and remains a scarce resource. The focus, though, is still too caught up in the technology, especially when it comes to how data is prepared. Data scientists tend to concentrate on the algorithms rather than on preparing the data for ingestion into AI projects.


AI teams can’t know everything about the business, and the business can’t know everything about AI

Xceptor has developed some good insight into the dynamic between clients’ central AI teams and their business teams. It has become clear that there is a role for mobilising the information that AI teams need from all parts of the business, and, vice versa, for making sure the work carried out by the central AI teams is rolled out where it is needed in the business.


AI can achieve a lot, however, if data isn’t ingested properly, prepare for heartburn

Data is rarely good to go at the start of any project (see our previous post highlighting recent data quality stats in the world of finance), let alone for complex AI initiatives. Getting the right information ready from the start is a key part of the process, and one that is often missed. The starting point for all AI initiatives should be normalised, cleansed, trusted data drawn from all relevant sources.


AI needs both quantity and quality of data

Machine learning requires copious amounts of training data to produce decent outcomes. But it is also true that poor data quality is a killer of AI techniques; as the old saying goes, ‘garbage in, garbage out’. There’s no getting around it: you need both quality and quantity. Failure is inevitable if you can’t scale up the techniques that ensure the quality of the data being fed through.


Want to hear direct from Dan? Click here for more first-hand knowledge of how firms are using AI and the data considerations they need to make to succeed.