AI in Finance: Cutting Through the Hype (With Case Studies)

As I wrote in Calling AI’s bluff, we are currently living at the ‘peak of inflated expectations’, the point where ‘mass media hype’ begins and just before the ‘negative press’ begins (click through to the previous blog for the famous hype cycle).

The media also seems unable to recognise the limits of AI, even calling it ‘creative’ (e.g. “Machine Creativity Beats Some Modern Art”) and showing beautiful pictures generated by AI techniques:

All these pictures were generated by computers

Another example of hype is the furor among journalists just after AlphaGo beat the Go world champion:

The feat marks a milestone on the road to general-purpose AIs that can do more than thrash humans at board games. Because AlphaGo Zero learns on its own from a blank slate, its talents can now be turned to a host of real-world problems.

I will try to explain the limitations (and opportunities) of AI — elsewhere you can find examples of what AI can revolutionise, but at the end of this blog I will show you what humans can do in Finance that machines cannot (yet).

Cutting Through the Hype

In my previous blog I differentiated between Artificial General Intelligence (AGI) and Narrow Artificial Intelligence (NAI). AGI is the stuff of science fiction — when the machine can perform any task humans can do. From a business perspective we care about NAI — the activities that a machine can do better, faster and cheaper than a human can (a productivity improvement).

I found that ‘AI’ is an umbrella term covering many different techniques, each one slightly different. However, the most hyped ones are the techniques related to Machine Learning and, very importantly, they are all examples of Narrow Artificial Intelligence.

I also found the following bubble diagram quite helpful:

[source]

The diagram also allows us to figure out what Machine Learning technique we need to solve a business problem.

Another very useful diagram for understanding which technique we should use is this one (however, it lacks the Reinforcement Learning branch):

[source]

I have written a few case studies in the past that show how easy it is nowadays to implement these techniques. For example:

Identifying Credit Card Fraud: it is not directly shown in the bubble diagram, but if you follow Machine Learning — Supervised Learning — Classification you will find ‘Identity Fraud Detection’. In my case study AI as a White Box in Finance (follow the link) I explain how to identify fraudulent transactions using a supervised classification technique.
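To make the mechanics concrete, here is a minimal sketch of that supervised-classification workflow using scikit-learn. The synthetic transactions, the made-up features and the random-forest model are all illustrative assumptions, not the actual setup of the case study:

```python
# Minimal sketch of supervised fraud classification on synthetic data.
# The features and labelling rule below are fabricated for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 10_000

# Toy features: transaction amount, hour of day, distance from home (made up)
X = np.column_stack([
    rng.lognormal(3.0, 1.0, n),   # amount
    rng.integers(0, 24, n),       # hour
    rng.exponential(10.0, n),     # distance_km
])
# Noisy rule: fraud is more likely for large, distant, late-night transactions
score = 0.002 * X[:, 0] + 0.05 * X[:, 2] + 0.3 * (X[:, 1] >= 22)
y = (score + rng.normal(0, 1, n) > np.quantile(score, 0.99)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```

Note how little ceremony is involved: label historical transactions, fit, predict. The hard (and interesting) part, covered in the case study, is explaining why the model flags what it flags.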

Loan Credit Rating: another example of classification; I explain how the method used for credit card fraud can be applied verbatim to loan credit rating: Explaining AI — A Credit Rating Case Study.

Financial Regime Identification: Following the Machine Learning — Unsupervised Learning — Clustering branch, I wrote an example of a system that can automatically identify different regimes in Financial Series: Rates Clustering.
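For flavour, a minimal sketch of regime detection by clustering with k-means. The synthetic rate series, the rolling features and the choice of three clusters are illustrative assumptions; the case study works with its own data and setup:

```python
# Minimal sketch of unsupervised regime detection with k-means on a
# synthetic rate series (all choices below are illustrative assumptions).
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Synthetic "rate": a calm stretch, a volatile stretch, then calm again
rate = np.concatenate([
    2.0 + np.cumsum(rng.normal(0.000, 0.01, 500)),
    2.0 + np.cumsum(rng.normal(0.005, 0.05, 300)),
    3.5 + np.cumsum(rng.normal(0.000, 0.01, 500)),
])
s = pd.Series(rate)

# Features per day: rolling level, trend and volatility over a 20-day window
feats = pd.DataFrame({
    "level": s.rolling(20).mean(),
    "trend": s.diff().rolling(20).mean(),
    "vol":   s.diff().rolling(20).std(),
}).dropna()

X = StandardScaler().fit_transform(feats)
labels = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(X)
print(pd.Series(labels, index=feats.index).value_counts())
```

No labels are supplied anywhere: the algorithm groups days purely by how similar their statistics look, and the analyst then has to interpret what each cluster means.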

Market Forecasting: this example is mentioned in the diagram: Machine Learning — Supervised Learning — Regression. An example (not very successful, but one that allows us to understand the required process) is Why Financial Series LSTM Prediction Fails.
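The setup in that post boils down to supervised regression: predict the next value of a series from a window of past values. Here is a minimal sketch with a tiny Keras LSTM on synthetic data; the window size, network and training budget are assumptions for illustration only:

```python
# Minimal sketch of windowed next-value regression with an LSTM.
# Synthetic sine-plus-noise data stands in for a financial series.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(2)
series = np.sin(np.linspace(0, 60, 2000)) + rng.normal(0, 0.1, 2000)

window = 30
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., None]                     # shape: (samples, timesteps, features)

split = int(0.8 * len(X))
model = keras.Sequential([
    keras.layers.Input(shape=(window, 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], epochs=5, batch_size=32, verbose=0)
print("test MSE:", model.evaluate(X[split:], y[split:], verbose=0))
```

On a clean sine wave this works beautifully; the blog post explains why the same recipe disappoints on real financial series.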

Also, make sure that you notice Machine Learning — Reinforcement Learning — Game AI (the AlphaGo branch) and Machine Learning — Supervised Learning — Classification (the branch related to image classification; as a byproduct you can generate images that look like what you tried to classify). I do not have an image example, but you can run the Writing Like Cervantes example to see how a neural network can ‘learn’ a language and writing style from scratch, and then ‘write’ in the same style.
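In the same spirit as Writing Like Cervantes, here is a compact sketch of a character-level language model: the network sees a window of characters and learns to predict the next one, and ‘writing’ is just repeated sampling. The toy corpus, model size and training budget are all assumptions; real results need far more of each:

```python
# Minimal sketch of a character-level language model (toy corpus assumed).
import numpy as np
from tensorflow import keras

text = "en un lugar de la mancha de cuyo nombre no quiero acordarme " * 50
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}

seq_len = 40
X = np.array([[idx[c] for c in text[i:i + seq_len]]
              for i in range(len(text) - seq_len)])
y = np.array([idx[text[i + seq_len]] for i in range(len(text) - seq_len)])

model = keras.Sequential([
    keras.layers.Input(shape=(seq_len,)),
    keras.layers.Embedding(len(chars), 16),
    keras.layers.LSTM(64),
    keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=3, batch_size=64, verbose=0)

# Generation: sample the next character, slide the window forward, repeat
out = text[:seq_len]
for _ in range(200):
    probs = model.predict(
        np.array([[idx[c] for c in out[-seq_len:]]]), verbose=0)[0]
    probs = probs.astype("float64")
    probs /= probs.sum()
    out += chars[np.random.choice(len(chars), p=probs)]
print(out[seq_len:])
```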

Like a magic trick, once you know how it is done you notice the similarities (take lots of data, use methods that require a big computer), and with your eyes open you can see the limitations.

Limits of Narrow Artificial Intelligence

In The Book of Why, Judea Pearl introduces the following cartoon, where he helpfully pictures a robot (representing Narrow Artificial Intelligence as of now, including AlphaGo) on the first rung of a ‘causation’ ladder.

His argument against NAI can be boiled down to:

a full causal model is a form of prior knowledge that you have to add to your analysis in order to get answers to causal questions without actually carrying out interventions. Reasoning with data alone won’t be able to give you this. [source]

This [blog] explains in further detail why ‘data alone’ does not allow you to develop causal models.

In Finance, developing causal models is our bread and butter:

  • Is Italian politics driving yields higher?
  • What is the effect of the Trump election on the Stock Market?
  • What is the impact of higher gasoline prices on Inflation and the Fed?

And Pearl just showed that data alone cannot be used to automatically build these models! Because robots (today) have neither ‘prior’ knowledge nor the ability to perform randomized controlled trials, they cannot answer these questions.

Another example: statisticians are very familiar with the idea of ‘spurious correlations’, and there is even a site that collects several ridiculous ones (like the number of letters in winning words of the Scripps National Spelling Bee versus the number of people killed by venomous spiders). We find it funny because we have ‘prior’ knowledge that spiders do not care at all about how many letters there are in spelling-contest words, but a robot not only lacks this knowledge, it is unable to deduce it from the data (with the current hyped techniques — in fact this is a hot area of research).
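You can manufacture spurious correlations at will: two independent random walks routinely show a large sample correlation even though, by construction, neither causes the other. A small, purely illustrative sketch:

```python
# Independent random walks often look strongly "correlated" in-sample,
# even though neither series has any causal link to the other.
import numpy as np

rng = np.random.default_rng(3)
best = 0.0
for _ in range(100):
    a = np.cumsum(rng.normal(size=500))  # e.g. 'spelling-bee word length'
    b = np.cumsum(rng.normal(size=500))  # e.g. 'spider-bite deaths'
    best = max(best, abs(np.corrcoef(a, b)[0, 1]))
print(f"largest |correlation| among 100 independent pairs: {best:.2f}")
```

No amount of staring at the correlation coefficient tells you which pairs are causal; that knowledge has to come from outside the data.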

The final nail in the coffin for all this ‘creativity’ talk is the last rung of the ladder: Imagination. Once we see up close the mechanism the algorithms use to ‘create’ images and written text, we notice that no new concepts are being generated. Contrast this with the Lion-Man (the oldest known animal-shaped sculpture in the world), which famously imagined something not found in nature and is hailed as a differentiator between humans and Neanderthals (notice the second rung on the causality ladder — the Neanderthal stops there).

What can Financial Professionals do that (N)AI can’t?

Hence, financial analysts who can think like Neanderthals and climb to the second rung of the ladder of causation:

  • develop cause-effect economic models,
  • identify the hidden mechanisms that can impact a variable,
  • understand the impact of the variables on a new financial instrument,

will still be employable, but above all they will have to sort out and explain the outputs when AI techniques are used to sift through mountains of data.

But to be safer, the financial analyst should be able to climb to the third rung of the ladder, working on projects that require imagination:

  • imagining new scenarios (both positive and negative), the ‘black swans’,
  • developing new financial instruments,

This professional would be reaching the third rung, gaining an edge over ‘Neanderthal’ professionals.
