Fireside Friday with… LSEG’s Emily Prince

The TRADE sits down with group head of analytics at the London Stock Exchange Group (LSEG) and CEO of The Yield Book, Emily Prince, to discuss the ever-evolving role of AI in capital markets, including how feasible its use in trading is, how market opinion is continuing to change, and the importance of a responsible, holistic approach to the technology.

What is the sentiment towards AI and is it changing?

It is an interesting question because there is a broad spectrum of approaches across organisations. It ranges from those that are leaning in and on the front foot, to those that are watching carefully and engaged but have not yet put all of their chips on the table. There are also organisations that are unsure of how to start and how to enable themselves, and I think there’s a humility that comes with that.

There is an interesting dynamic within AI where it is accessible to all of us, but challenging to implement within financial services. As much as there are some really exciting things that we can do with AI, in financial services we have a lot of regulation and compliance that we need to think about.

Culture within firms is also a significant factor. AI is not one team’s problem to solve; you have to get a community really working together for a truly holistic approach.

We are starting to see a change in terms of the demands of the end users and in their actual way of working, which is impacting the character of these previously very tightly defined personas. I think that is going to have a really interesting effect in financial services.

How feasible is AI use when it comes to trading?

One of my rules with AI is that if you have to explain it and you have to repeat it, maybe don’t use it. The problem with AI is that it is probabilistic, so if you ask the same question twice, you’re potentially going to get a different response.

That of course doesn’t mean it’s not valuable, but it means that you have to think very carefully about which use case you are solving for. When it comes to traders, this is really fascinating because it comes down to trust – if a trader were like a magic box that gave a different answer every time you shook it, would you trust it?
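The probabilistic point can be made concrete with a toy decoding sketch. Everything below is illustrative – made-up candidate answers and scores standing in for a generative model’s sampling step, not any real trading system:

```python
import math
import random

def sample_answer(scores, temperature, rng):
    """Sample one answer from softmax-weighted scores -- a toy stand-in for
    a generative model's decoding step (scores here are hypothetical)."""
    weights = [math.exp(s / temperature) for s in scores.values()]
    return rng.choices(list(scores.keys()), weights=weights, k=1)[0]

# Hypothetical model scores for one question.
scores = {"buy": 2.0, "hold": 1.8, "sell": 0.5}
rng = random.Random(42)

# "Ask the same question" 50 times at a typical sampling temperature:
# several distinct answers come back, so the output is not repeatable.
answers = {sample_answer(scores, temperature=1.0, rng=rng) for _ in range(50)}
print(sorted(answers))

# Shrink the temperature and the distribution collapses onto the
# top-scoring answer, making the output effectively deterministic.
greedy = {sample_answer(scores, temperature=0.01, rng=rng) for _ in range(50)}
print(sorted(greedy))
```

Shaking the box 50 times at normal temperature yields more than one answer; only by driving the temperature towards zero does the same question reliably produce the same response.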

Traders are essentially accountable, so the idea that people are going to use AI and potentially get different outcomes that they can’t fully explain in these high-risk tasks seems implausible.

Furthermore, it is highly unlikely that regulators will ever get comfortable with AI giving the best answer only “most of the time”. For that reason, I think traders are going to be around for a while.

What is meant in practice when the industry says we must adopt AI in a responsible way?

When it comes to responsible AI, the key is trust. One has to delve into what it actually means to be responsible – essentially, if you’re using AI-enabled processes, can you trust them, and what does it mean to trust something?

One aspect is knowing where the data has come from, another is checking for biases and establishing the ethics of models, and another is choosing the right model for the right task and taking responsibility for that selection.

At the end of the day, we are now responsible for products that are AI-enabled, so a big part of that responsibility is reinforcing them through a secondary model and maybe a tertiary model. In the event that something catastrophic happens, we must have the ability to say, ‘not a problem, we have a back-up, we’re going to switch to something else that is more suitable and continue to uphold the status quo’.
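The back-up idea described above can be sketched as a simple routing pattern. The function names, toy “models” and validation check below are all hypothetical, purely to illustrate falling back from a primary AI-enabled process to a simpler, better-understood one:

```python
def answer_with_fallback(question, primary, fallback, validate):
    """Route a query through a primary AI-enabled process; if its output
    fails a validation check (or it crashes), switch to the back-up.
    All names here are illustrative, not any specific production system."""
    try:
        result = primary(question)
        if validate(result):
            return result, "primary"
    except Exception:
        pass  # treat a crash the same as a failed validation
    return fallback(question), "fallback"

# Toy stand-ins: the primary "model" misbehaves on some inputs,
# while the fallback is a dull but dependable rule-based answer.
primary = lambda q: "??" if "risk" in q else f"primary answer to {q!r}"
fallback = lambda q: f"fallback answer to {q!r}"
validate = lambda r: not r.startswith("??")

print(answer_with_fallback("price this bond", primary, fallback, validate))
print(answer_with_fallback("risk report", primary, fallback, validate))
```

The point of the pattern is that the switch is automatic and pre-agreed: when the primary output cannot be trusted, the process degrades to a known-good alternative rather than halting.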

How important is that holistic approach to AI?

What’s important to note – aside from teams coming together to approach this at the same time – is the fact that AI came from outside of financial services and is now being imported into it. We are seeing examples in other regulated industries of how people are implementing AI, and those lessons have the potential to be hugely transferable and relevant when we think about financial services.

I believe that this is going to have a dramatic effect on the way that financial services operate because in the past we’ve seen financial services very much segregated from everything else and tightly defined. That is now starting to change, and we are starting to see cross industry collaboration. 

Interestingly, we are not only focused on the research and investment in AI within financial services. As a community, we are beginning to see the potential relevance of AI investment within other industries. A part of this is greater mobility of talent from non-finance industries, bringing relevant new perspectives and skills.
