Conference Day Two: 17 September 2019

8:00 am - 8:50 am Welcoming tea, coffee and registration

KEYNOTES & OPENING PLENARY SESSIONS

Elizabeth Pritchard

CEO & Founder - White Rock Data Solutions
Former Head of GTM - Crux Informatics

Session preview:
This talk is about what might be called the Achilles heel of data science. It is a general talk, making reference to algorithmic trading, but applying much more generally to the applications of machine learning, AI, and statistics in the modern world of what is often called “big data”.

Data science is fundamentally statistics, with a leavening of computer science, data visualisation, mathematics, and of course heavy involvement of expertise from the application domain. Statistics is about describing data and making inferences from it. “Making inferences” means saying something about the reality underlying the data, and about the population from which your data were drawn. In trading terms, this means looking at past data and trying to describe and perhaps understand the processes which led to the kind of data you have got, and hence enabling prediction of what might happen in the future.

The key to using past data in this way is a sound understanding of how it has been drawn. Is it representative of the entire population of data? If not, in what way does it fail to be representative? Was it drawn by a probability sampling method, so that one can say how confident one is with the results? Or was it drawn in some purposive or unspecified way which means that inferences about the overall population or the future might be risky?

Since we have recently witnessed a dramatic revolution in data capture tools and methods, these questions have become particularly pointed. No longer do we painstakingly collect each data value by hand with the specific aim of understanding and future prediction – using a clipboard, ruler, or questionnaire and slowly writing down the result, later entering it into a computer for analysis. Nowadays data are captured automatically for some operational reason, and then go straight into the database. The details of trading transactions flash electronically from the exchange to the data store. Sales details are scanned, and added automatically to produce the total bill, but then also automatically accumulate in the company’s computers. Tax returns form the basis of payments, but tax records are then built up over time as each year’s payments are made. Travellers scan their travelcards to automatically pay the fare, but then details of the routes people take are electronically aggregated into a database. Web searches are made to find things out, but then those search details are gathered and stored. And so on.

In short, much of the data we analyse has been collected for some operational purpose, not with the aim of subsequent analysis to understand how what happened came about, or what might happen in the future. And this difference in aims has consequences. At the least it means that there are other data, data you did not collect but which are nonetheless very informative about the underlying process and future changes.

A very simple example is given by consumer loans. Using machine learning on past customers to distinguish those who defaulted from those who did not could well be useless as a predictive tool for a bank. After all, the data that the algorithm is trained on have all been obtained from people the bank previously thought were low risk, while it is unlikely that only low-risk customers will apply in the future.
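
A toy simulation (with entirely made-up numbers) illustrates the distortion: if the bank's historical records contain only the applicants it chose to approve, the default rate it observes, and anything trained on those records, can badly misrepresent the applicant population as a whole.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical applicant pool: lower income means a higher chance of default.
    income = rng.normal(50, 15, 100_000)
    p_default = 1 / (1 + np.exp((income - 35) / 5))
    defaulted = rng.random(100_000) < p_default

    # The bank's historical data contains only applicants it judged low risk.
    approved = income > 45

    print(f"default rate in the bank's data: {defaulted[approved].mean():.1%}")
    print(f"default rate in the full pool:   {defaulted.mean():.1%}")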

A familiar example from trading is the retrospective evaluation of predictive success rates. Companies or strategies which failed are likely to have dropped out of the data, giving a misleadingly positive impression of overall performance. And, worse, regression to the mean means that those companies or strategies which have done well in the past should be expected to do less well in the future.
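
The survivorship effect is easy to reproduce in a few lines: give a thousand hypothetical strategies no skill at all, keep only those with good past returns, and the survivors' history looks impressive while their future returns regress to the mean (a sketch with arbitrary parameters):

    import numpy as np

    rng = np.random.default_rng(1)

    # 1,000 hypothetical strategies with no true skill, observed over two periods.
    past, future = rng.normal(0.0, 0.10, (2, 1000))

    # Only strategies that performed well in the past survive into today's dataset.
    survivors = past > 0.05

    print(f"past return of survivors:   {past[survivors].mean():+.1%}")    # looks impressive
    print(f"future return of survivors: {future[survivors].mean():+.1%}")  # near zero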

Covering various examples, this talk gives a brief introduction to my forthcoming book Dark Data*, showing you why the data you don’t have can matter even more than the data you do have, how to recognise that you have a problem, and then what to do about it.

__________

* Dark Data: Why What You Don’t Know Matters, David Hand, Princeton University Press, January 2020.

David Hand

Senior Research Investigator & Emeritus Professor of Mathematics
Imperial College London

The received wisdom is that, as human and machine brains have quite different strengths and abilities, the combination of both capabilities will offer the most effective solutions. But for how long will this remain the case? Does the vast increase in data to be modelled, advanced algorithms and new deep learning techniques mean that a machine-only advantage will be reached in the near future?

Prasenjeet Bhattacharya

Lead Data Scientist - Multi Asset
NN Investment Partners

Laurent El Ghaoui

UC Berkeley EECS Professor & Chief Scientist
SumUp Analytics

Sebastien Guglietta

Co-Head, Computational Intelligence Systematic Strategies
Brevan Howard

David Hand

Senior Research Investigator & Emeritus Professor of Mathematics
Imperial College London

Daniel Mitchell

Chief Executive Officer
Hivemind

Is the reality living up to the promise?

Session preview:
Over the past decade, the financial world has familiarised itself with technologies such as Artificial Intelligence (AI) and Blockchain. Today, the two are integral to services like fraud detection in banking and share trading analytics. What is the next frontier for FinTech? The jury is still out, but polls point in the direction of commodity trading.

During the panel “The Role of FinTech companies in Asset Management – is reality living up to the promise” that I will chair on the 17th of September, we will shine a light on the impact that FinTech firms are having on commodity trading. As a taster, I will be sharing my own perspectives in this blog, focusing on the challenges that FinTech firms have specifically set out to solve.

Our panel takes place against a backdrop of rapidly changing conditions for commodity markets. During the last few years, the fundamentals of supply and demand were their main drivers. However, as was the case during the 1970s, geopolitics is increasingly influencing prices and decision-making in commodity trading. Where uncertainty is now the only certainty, new technology is being used by FinTech firms to deal with four core – yet intertwined – areas:

Better insights
Reducing costs
Improving security and transparency
Levelling the playing field for SMEs


Better insights

For decades, successful trading was focused on nimble operations and scale, but thinner margins have made market participants look more closely at accessing better data, instead of operating scale, to gain an advantage. Access to, and intelligent interpretation of, data such as satellite imagery and freight data are increasingly giving traders an edge.

As an example, at ChAI we apply machine learning techniques to satellite imagery, maritime shipping data and text data to increase accuracy in commodity price forecasting. Tradeteq, active in the trade finance space, uses network data and real-time payment behaviours to form a more accurate representation of credit scoring and tools to monitor investments – enabling new funding sources to reach underserved SMEs in need of trade finance.



Reducing costs

New technology is increasingly bringing efficiencies to all corners of the commodities industry. We see Blockchain and smart contracts as being at the forefront of this trend. With Blockchain, the reconciliation and physical documentation of trade can now be streamlined securely through an encrypted digital ledger. Digital processes that replace people, phones and paper trails have the potential to significantly reduce trading costs.

Furthermore, the decentralisation aspect of Blockchain drives accessibility. Decentralisation brings down the barriers erected by today’s international financial system that exist between asset classes and geographical borders. Therefore, Blockchain also has the potential to reduce the fees that have been an inherent part of the industry by using smart contracts to automate the functions of middlemen.

Many FinTechs focus on agile customer on-boarding, automation and e-KYC (Know Your Customer) to reduce costs. FinTech Traydstream has developed an Optical Character Recognition (OCR) technology which makes it possible to check a letter of credit in 45 seconds, in sharp contrast to the manual checking time frame of more than 2 hours. Time, as we know, is money.



Improving security and transparency

Even in markets of healthy liquidity, participating traders remain largely unknown parties. Trying to trace the path of a trade is difficult as verification systems across world markets have not been standardized. Transparency is also lacking in terms of information flow. Weaknesses can certainly be hidden in vast amounts of financial data, but even without intentional meddling information is often asymmetrical. This causes the whole environment to be inefficient as investors cannot be certain that their data is precise, nor that they have all the information needed to invest confidently. 

Coming back to Blockchain, it promises to deliver transparency and accessibility alongside speed and cost-effectiveness. This is a welcome opportunity to eradicate the sort of warehouse fraud that has plagued the metals industry in particular, where parcels of metals have been sold to multiple buyers – causing huge losses for banks.

Security is a natural product of decentralization. Whereas an adept hacker can infiltrate centralized data sources, like the servers that established trading platforms use, distributed systems don’t reward insider access. Furthermore, cryptography enables accurate transaction verification without storing sensitive identifying information. Permanently recording every verified transaction that takes place makes Blockchain an unsurpassed transparency tool – making it easier to identify stakeholders, see their complete history in the market and make confident trades accordingly.
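
A stripped-down sketch shows why a hash-linked ledger is tamper-evident (illustrative only; no real platform works exactly like this): each entry commits to its predecessor's hash, so retroactively editing a record, say to resell the same warehouse parcel, invalidates every later link.

    import hashlib
    import json

    def entry(prev_hash, tx):
        # Each ledger entry commits to its predecessor via a SHA-256 hash.
        body = json.dumps({"prev": prev_hash, "tx": tx}, sort_keys=True)
        return {"prev": prev_hash, "tx": tx,
                "hash": hashlib.sha256(body.encode()).hexdigest()}

    def valid(ledger):
        # Recompute every hash; any retroactive edit breaks the chain.
        for i, e in enumerate(ledger):
            body = json.dumps({"prev": e["prev"], "tx": e["tx"]}, sort_keys=True)
            if hashlib.sha256(body.encode()).hexdigest() != e["hash"]:
                return False
            if i > 0 and e["prev"] != ledger[i - 1]["hash"]:
                return False
        return True

    ledger = [entry("genesis", {"warehouse": "W1", "metal": "Cu", "sold_to": "A"})]
    ledger.append(entry(ledger[-1]["hash"], {"warehouse": "W1", "metal": "Cu", "sold_to": "B"}))

    print(valid(ledger))               # True
    ledger[0]["tx"]["sold_to"] = "X"   # attempt to rewrite history
    print(valid(ledger))               # False: the tampering is evident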



Levelling the playing field

By enabling better insights, lowering costs and strengthening transparency, FinTechs provide access to services and analytical power previously only enjoyed by the largest and wealthiest of market participants – thus levelling the playing field. 

Historically, banks have tended to favour larger players who are chasing larger deals. There has often been a lack of resource and knowledge to finance smaller ticket deals. In addition, the regulatory capital requirements of Basel III and IV have hindered banks from servicing smaller companies. 

FinTechs are trying to make it easier for banks and others to do lower-value deals. Managing payment terms, paperwork, exposures and transaction tracking with greater efficiency lies at the heart of their efforts. This also makes pooling of trade finance books, securitisation and bringing in non-bank investors easier – providing better trade financing for the little guys.



Conclusion

It is still early days for FinTech in commodity trading, and change may come as evolution rather than revolution in a traditionally secretive, conservative industry. Before the eureka moment can take place, hurdles need to be cleared, including agreeing common legal standards, linking different dealing platforms and persuading all participants in the supply chain to take part.

Still, it is indisputable that FinTech companies are having a dramatic effect on other parts of the financial services industry – providing better insights, lowering costs, improving security and transparency and helping the little guy play on more equal terms. It is time for commodity trading to embrace the same kind of innovation and move into the 21st century. - Tristan Fletcher, CEO, ChAI

Tristan Fletcher

Chief Executive Officer
ChAI

Mark Fletcher

Managing Director
Cardinal Analytics

Niall Hurley

Director of Business Development
Eagle Alpha

Ganesh Mani

Adjunct Faculty
Carnegie Mellon University; ex-SSgA

Darko Matovski

Co-Founder and CEO
CausaLens

10:30 am - 11:00 am Networking refreshment break in the exhibition area



Sebastien Guglietta

Co-Head, Computational Intelligence Systematic Strategies
Brevan Howard

11:20 am - 11:40 am Keynote: Advances in GPU Accelerated AI and ML

John Ashley - Director, Global Financial Services Strategy, NVIDIA
OpenAI recently blogged about the growth of compute applied to AI training since 2012. According to their statistics, compute has grown by a factor of 300,000 (doubling roughly every 3.5 months, more than five times faster than Moore’s Law). Large “real world” models like BERT can be trained today in under an hour – with 1,472 co-operating V100 GPUs in a DGX-2 based SuperPod – an architecture that has evolved over the past few years to be the premier platform for AI research. But it’s not just hardware, or the largest models – the relatively recent MLPerf benchmark results (v0.6) from NVIDIA show year-over-year performance increases of between 20% and 75% on a variety of problems on the same hardware. Software and innovation up and down the stack are what enable the industry to keep up the relentless pace of performance that is fueling research breakthroughs and real-world applications. Leave with an understanding of how GPU acceleration is advancing research and applications across data science.
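
As a quick sanity check on the doubling arithmetic quoted above (approximate figures from the OpenAI post, not exact dates):

    import math

    doubling_months = 3.5    # quoted doubling period for AI training compute
    growth = 300_000         # overall growth factor reported by OpenAI

    doublings = math.log2(growth)              # about 18.2 doublings
    years = doublings * doubling_months / 12   # about 5.3 years of sustained growth

    # Moore's Law for comparison: doubling roughly every 24 months.
    moore = 2 ** (years * 12 / 24)

    print(f"{doublings:.1f} doublings over {years:.1f} years; "
          f"Moore's Law over the same span: about {moore:.0f}x")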

John Ashley

Director, Global Financial Services Strategy
NVIDIA


Mark Ainsworth

Head of Data Insights
Schroders

Michael Beal

CEO
Data Capital Management

Katya Chupryna

SPRINT (Spread Products Investment Technologies)
Citi

Sohail Raja

Chief Digital Officer
Societe Generale

Lisa Schirf

Former COO Data Strategies Group and AI Research
Citadel

12:30 pm - 1:30 pm Networking lunch in the exhibition area


12:30 pm - 1:30 pm Diversity roundtables

Facilitated discussions with earlier panelists to debate practical solutions on how to increase diversity within the industry.

AFTERNOON STREAMS


Elliott Hann

Executive Director, Data Solutions
UBS Investment Bank

STREAM A

1:35 pm - 1:55 pm Data, a new asset to your portfolio
Glen High - Quantitative Analyst, Ostrum Asset Management
Session preview:
In our fully connected world, never has access to information been so easy. The same is true of your personal information: by surfing the web, one can guess your centres of interest, your habits, your age, your wealth, your location. Allowing unknown intermediaries to waylay your personal data is not only a threat to your security but also a financial robbery. Indeed, you spent energy and effort to generate data that will be stolen and then sold on opaque markets.

To address both issues, one of the best ideas is to enshrine in law total data ownership for data originators. To make this paradigm a reality, governments would first have to provide blockchain-secured personal data portfolios to citizens. These incorruptible folders holding personal information would be stored in publicly held data hubs. A portfolio could contain medical data, internet consumption habits, and whatever else the massive expansion of connected devices will collect about you. Only you would be able to allow others access to parts of your portfolio for a given time.
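
One way to picture the access-control piece (purely illustrative, not any real system's API) is a portfolio object that issues time-limited grants for named sections of personal data:

    from datetime import datetime, timedelta, timezone

    class DataPortfolio:
        """Toy sketch of owner-controlled, time-limited access to personal data."""

        def __init__(self, sections):
            self._sections = sections   # e.g. {"medical": [...], "web_habits": [...]}
            self._grants = {}           # (grantee, section) -> expiry time

        def grant(self, grantee, section, hours):
            # Only the owner calls this; the grant expires automatically.
            expiry = datetime.now(timezone.utc) + timedelta(hours=hours)
            self._grants[(grantee, section)] = expiry

        def read(self, grantee, section):
            expiry = self._grants.get((grantee, section))
            if expiry is None or datetime.now(timezone.utc) > expiry:
                raise PermissionError("no valid grant for this section")
            return self._sections[section]

    portfolio = DataPortfolio({"medical": ["record A"], "web_habits": ["search B"]})
    portfolio.grant("insurer", "medical", hours=24)
    print(portfolio.read("insurer", "medical"))   # allowed, for 24 hours only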

Naturally, a clearer data market will appear, with its own set of features. Data pricing, hedging and trading are all examples of new topics to be dealt with in the near future. Who will be the architects of such a utopia? Will the gap between data originators and data consumers truly be bridged? Whom will this technology really favour? Such a dense project leaves many questions with vague answers that we can try to illuminate together.

Glen High

Quantitative Analyst
Ostrum Asset Management

STREAM A

1:55 pm - 2:15 pm Advances in machine learning: A finance perspective
Gary Kazantsev - Head of Quant Technology Strategy, Office of the CTO, Bloomberg

Gary Kazantsev

Head of Quant Technology Strategy, Office of the CTO
Bloomberg

STREAM A

2:15 pm - 2:35 pm Modelling with alternative data: how to avoid propagation of uncertainty from the worst data to the best
Tomaso Aste - Professor of Complexity Science; Head - Financial Computing and Analytics Group, University College London

Tomaso Aste

Professor of Complexity Science; Head - Financial Computing and Analytics Group
University College London


Elliott Hann

Executive Director, Data Solutions
UBS Investment Bank

Tomaso Aste

Professor of Complexity Science; Head - Financial Computing and Analytics Group
University College London

Martin Goodson

Chief Scientist and CEO
Evolution AI

Syed Husain

Chief IT Architect
BCG Platinion

Andrea Nardon

Partner, Portfolio Manager, Group Head of Quant Solutions
Sarasin & Partners

STREAM B

1:30 pm - 1:35 pm QUANT & RISK METHODS
Anjelika Klamp - Managing Director, CITE Investments


Anjelika Klamp

Managing Director
CITE Investments

STREAM B

1:35 pm - 1:50 pm Latest developments in deep learning
Miquel Noguer I Alonso - Co-Founder, Artificial Intelligence Finance Institute

Miquel Noguer I Alonso

Co-Founder
Artificial Intelligence Finance Institute

STREAM B

1:50 pm - 2:05 pm Developing macro predictions using alt data
Apurv Jain - Visiting Researcher, Harvard Business School

Apurv Jain

Visiting Researcher
Harvard Business School

STREAM B

2:05 pm - 2:20 pm Risk in the financial data science research process: What investors should know
Joseph Simonian - Senior Investment Strategist, Acadian Asset Management

Joseph Simonian

Senior Investment Strategist
Acadian Asset Management

STREAM B

2:20 pm - 2:35 pm Reinventing the quant group to meet 21st century data and AI opportunities
Daniel Rosengarten - Head of ALM Quantitative Development, Barclays Investment Bank
This talk will discuss what changes are needed in quant groups to excel in an innovative AI, data driven and highly regulated risk environment.

Daniel Rosengarten

Head of ALM Quantitative Development
Barclays Investment Bank


Joseph Simonian

Senior Investment Strategist
Acadian Asset Management

Apurv Jain

Visiting Researcher
Harvard Business School

Lukas Prorokowski

Senior Quantitative Analyst
Banque Internationale a Luxembourg

Daniel Rosengarten

Head of ALM Quantitative Development
Barclays Investment Bank

STREAM C

1:30 pm - 1:35 pm START-UP SHOWCASE
Roland Fejfar - Head TechBD EMEA, APAC, Morgan Stanley


Roland Fejfar

Head TechBD EMEA, APAC
Morgan Stanley

Meet 4 leading AI-driven start-ups from within the asset management world. Each will present their business, and a winner will be voted on by you, the audience.

  • ChAI helps mitigate commodity price volatility by forecasting commodity prices using both traditional and alt data (including satellite, maritime & political risk) and the latest in AI techniques, over time horizons of one day to one year.
  • Revelio Labs leverages the latest advances in AI research methods to create structured and accurate representations of raw labour data contained in millions of resumes, online profiles, and job postings.
  • SumUp Analytics is advancing the way companies leverage unstructured text data. We provide a large-scale, high-speed text analytics platform that enables users to extract key insights in a fast, efficient and transparent way.
  • QuantsUnited provides superior and stable investment returns to asset managers with AI-based quantitative strategies, powered by selected crowd-sourced data scientists and proprietary algorithms.

Laurent El Ghaoui

UC Berkeley EECS Professor & Chief Scientist
SumUp Analytics

Axel Orgogozo

CEO
QuantsUnited

Ben Zweig

Chief Executive Officer
Revelio Labs

Stephen Butler

CCO
ChAI

3:25 pm - 3:50 pm Networking refreshment break in the exhibition area

STREAM A

3:50 pm - 3:55 pm COMPLIANCE AND SUSTAINABLE FINANCE
Marcus Hooper - Domain Expert, GFT


Marcus Hooper

Domain Expert
GFT


Nico Smuts

Investment Data Scientist
Investec Asset Management

Clayton Feick

Global Head of Sales
Quandl

David Kemp

Group Head of Compliance
GAM Investments


Marcus Hooper

Domain Expert
GFT

Kevin Bourne

Former Global Head of Sustainable Investment / Former Global Head of Electronic and Portfolio Trading
FTSE Russell / HSBC

Justin Kew

Sustainability Manager
Carmignac

Michael Solomon

Founder
Responsible 100


David Jessop

Global Head of Quantitative Research
UBS Investment Bank

STREAM B

3:55 pm - 4:15 pm Crypto-assets are a data science heaven
Jesus Rodriguez - Chief Technology Officer, IntoTheBlock

Jesus Rodriguez

Chief Technology Officer
IntoTheBlock


Vadim Kanofyev

Quantitative Researcher
Bloomberg

Ciprian Marin

Director of Quantitative Research
Lazard Investment Management

Veronika Lunina

Quantitative Analyst
NatWest Markets


Andreas Petrides

Associate, Equities Execution Research Strats
Goldman Sachs

Michael Steliaros

Global Head of Quantitative Execution Services
Goldman Sachs




Ganesh Mani

Adjunct Faculty
Carnegie Mellon University; ex-SSgA

In this talk, we give a sneak peek of The Book of Alternative Data, which is to be published in early 2020. We briefly discuss some of the challenges when using alternative data, such as structuring it and quantifying its value, as well as the risks involved. We'll go through a few of the use cases from the book. These include using automotive supply chain data to trade auto stocks, satellite imagery to model retailers' earnings per share, and news data to understand FX volatility around central bank meetings.

Saeed Amen

Chief Executive Officer
Cuemacro

Alexander Denev

Head of AI - Financial Services Advisory
Deloitte

Session preview:
Given the intrinsically complex and dynamic nature of machine learning (ML), the possibility of failure does not come as a surprise.

There are many reasons why this can happen. One is bias in the training data and method (e.g. sampling, data preparation). Another is that the ultimate scope of the ML model is not well defined and transparent. Further issues are linked to machine learning techniques that cannot tell us when the information is unclear or when they cannot effectively learn from the data. Finally, ML uses a large number of hyperparameters (e.g. how many trees to consider in a decision process like a random forest). These hyperparameters are defined by the developer and cannot be derived from the data.
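
To make the hyperparameter point concrete, here is a minimal sketch (illustrative, not tied to any particular firm's models): the number of trees in a random forest is proposed by the developer and chosen by cross-validation; it is not a quantity the data can supply by itself.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GridSearchCV

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

    # n_estimators is a hyperparameter: the developer supplies the candidates
    # and cross-validation picks among them.
    search = GridSearchCV(
        RandomForestClassifier(random_state=0),
        param_grid={"n_estimators": [50, 100, 300]},
        cv=5,
    )
    search.fit(X, y)
    print(search.best_params_, round(search.best_score_, 3))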

Of course, perfection always starts with mistakes. So, how can we make ML better? The natural starting point is the data.

First of all, it is important that the data are accurate, complete and sufficient to extract statistically significant insights. Data inputs must be interpretable, coherent with the firm's internal policies and supported by a business rationale. In addition, we need a robust approach to pre-processing the data to avoid corrupting the learning process.

Another important point is calibration. As we know, this is crucial for traditional models, and it is even more important for ML given the number of parameters, the volume of data and the frequency with which they are updated. Here we can establish specific controls to assess whether the calibration is appropriate, and develop a monitoring framework, including thresholds and triggers, to flag whether the model is working as expected. Of course, the above requires some changes in the way we review model risk for ML. First, we need to revise the model risk policy to reflect the features of ML discussed above. Secondly, validators must be sure they are equipped with the right tools to deal with the big data and computational complexity behind ML. - Maurizio Garro, Senior Manager, Market, Credit & Risk, Lloyds Banking Group
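
The thresholds-and-triggers idea can be pictured as a schematic check of a live model metric against pre-agreed amber and red levels (a sketch only; real model-risk frameworks are far richer). The example uses the population stability index (PSI), a common drift metric, with the conventional 0.10/0.25 thresholds:

    def monitor(metric_name, value, amber, red):
        """Compare a live model metric against pre-agreed thresholds and
        return the action the monitoring framework should trigger."""
        if value >= red:
            return f"{metric_name}={value:.2f}: RED - suspend model and escalate"
        if value >= amber:
            return f"{metric_name}={value:.2f}: AMBER - investigate, consider recalibration"
        return f"{metric_name}={value:.2f}: GREEN - model working as expected"

    print(monitor("PSI", 0.08, amber=0.10, red=0.25))
    print(monitor("PSI", 0.31, amber=0.10, red=0.25))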

Ganesh Mani

Adjunct Faculty
Carnegie Mellon University; ex-SSgA

Maurizio Garro

Senior Manager, Market, Credit & Risk
Lloyds Banking Group

Blanka Horvath

Honorary Lecturer in Mathematical Finance
King's College London

Iuliia Shpak

Quantitative Strategies Specialist
Sarasin & Partners LLP

STREAM C

4:55 pm - 5:15 pm Beyond historical data: Simulations using deep intelligence agents
Natraj Raman - Lead Data Scientist, S&P Global Ratings
Financial traders typically assess their investment strategies against historical market prices. Using only data from the past, however, ignores market conditions outside historical bounds. Augmenting historical prices with synthetic prices generated by simulations provides an effective supplement. This talk describes an Agent Based Model to simulate market data for various what-if scenarios such as a sudden price crash, bearish or bullish market sentiment, and shock contagion. Unlike traditional agents that make trading decisions based on rules, heuristics or simple learners, a new class of deep intelligence agents that exploit the latest advances in artificial intelligence is used for decision making. The simulation model is validated by examining its ability to replicate the main statistical properties of financial markets.
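
A heavily stripped-down sketch of the idea (toy code, not the speaker's model, and using simple rule-based agents where the talk substitutes deep-learning agents): herding agents set demand, a what-if crash is injected mid-run, and the output is checked for a stylised fact such as heavy-tailed returns.

    import numpy as np

    rng = np.random.default_rng(42)
    n_agents, n_steps = 100, 1000

    price = np.empty(n_steps)
    price[0] = 100.0
    sentiment = rng.choice([-1, 1], n_agents)   # each agent starts bullish or bearish

    for t in range(1, n_steps):
        # A few agents flip to the majority view each step (herding).
        flip = rng.random(n_agents) < 0.05
        sentiment[flip] = np.sign(sentiment.sum()) or 1
        demand = sentiment.sum() / n_agents + rng.normal(0, 0.1)
        price[t] = price[t - 1] * np.exp(0.001 * demand)
        if t == 600:            # what-if scenario: sudden price crash
            price[t] *= 0.95

    returns = np.diff(np.log(price))
    kurtosis = ((returns - returns.mean()) ** 4).mean() / returns.std() ** 4 - 3
    print(f"excess kurtosis of simulated returns: {kurtosis:.1f}")   # heavy tails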

Natraj Raman

Lead Data Scientist
S&P Global Ratings

KEYNOTES AND CLOSING PLENARY SESSIONS