Tuesday 2 May 2017

Some reflections on QuantCon 2017

As you'll know if you've been following any of my numerous social media accounts, I spent the weekend in New York at QuantCon, a conference organised by Quantopian, who provide a cloud platform for backtesting systematic trading strategies in Python.

Quantopian had kindly invited me to come and speak, and you can find the slides of my presentation here. A video of the talk will also be available in a couple of weeks to attendees and live feed subscribers. If you didn't attend, this will cost you $199, less a discount using the code CarverQuantCon2017 (that's for the whole thing, not just my presentation! I should also emphasise that I don't get any of this money, so please don't think I'm trying to flog you anything here).

Is a bit less than $200 worth it? Well, read the rest of this post for a flavour of the quality of the conference. If you're willing to wait a few months then I believe the videos will probably become publicly available at some point (this is what happened last year).

The whole event was very interesting and thought-provoking, and I thought it might be worth recording some of the more interesting thoughts that I had. I won't bother with the less interesting thoughts, like "Boy, it's much hotter here than I'd expected it to be" and "Why can't they make US dollars of different denominations more easily distinguishable from each other?!".


Machine learning (etc etc) is very much a thing


Cards on the table - I'm not super keen on machine learning (ML), artificial intelligence (AI), neural networks (NN), and deep learning (DL) (or any mention of Big Data, or people calling me a Data Scientist behind my back - or to my face for that matter). Part of that bias is down to ignorance - it's a subject I barely understand - and part is my natural suspicion of anything which has been massively over-hyped.

But it's clearly the case that all this stuff is very much in vogue right now, to the point where at the conference I was told it's almost impossible to get a QuantJob unless you profess expertise in this subject (since I have none, I'd be stuck with a McJob if I tried to break into the industry now); and universities are renaming courses on statistics "machine learning"... although the content is barely changed. And at QuantCon there was a cornucopia of presentations on these kinds of topics. Mostly I managed to avoid these. But the first keynote was about ML, and even the last keynote, which was purportedly about portfolio optimisation (by the way it was excellent, and I'll return to it later), strayed into the area, so I didn't manage to avoid it completely.

I also spent quite a bit of time during the 'offline' part of the conference talking to people from the ML / NN / DL / AI side of the fence. Most of them were smart, nice and charming, which was somewhat disconcerting (I felt like a heretic who'd met some guys from the Spanish Inquisition at a party, and discovered that they were all really nice people who just happened to have jobs that involved torturing people). Still, it's fair to say we had some very interesting, though very civilised, debates.

Most of these guys, for example, were very open about the fact that financial price forecasting is a much harder problem than forecasting likely credit card defaults or recognising pictures of cats on the internet (an example that Dr Ernie Chan was particularly fond of using in his excellent talk, which I'll return to later. I guess he likes cats. Or watches a lot of YouTube).

Also, this cartoon:

Source: https://xkcd.com/1831/ This is uncannily similar to what DJ Trump recently said about healthcare reform.


The problem I have here is that "machine learning" is a super vague term which nobody can agree on a definition for. If, for example, I run the simplest kind of optimisation, where I do a grid search over possible parameters and pick the best, is that machine learning? The machine has "learnt" what the best parameters are. Or I could use linear regression (200+ years old) to "learn" the best parameters. Or, to be a bit fancier, if I use a Markov process (~100 years old) and update my state probabilities in some rolling out-of-sample Bayesian way, isn't that what an ML guy would call reinforcement learning?
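To make the point concrete, here's a minimal sketch of that first case: a brute-force grid search picking the "best" lookback for a crude moving average rule on some made-up random-walk prices. Everything here - the rule, the data, the scoring - is my own illustration, not anything presented at the conference. Whether you file this under machine learning or plain old curve fitting is, I'd argue, mostly a matter of taste.

```python
import numpy as np

def score_ma_rule(prices, lookback):
    """Crude long/flat moving average rule: long when price is above its rolling mean.
    Returns a Sharpe-like score of the resulting daily P&L."""
    ma = np.convolve(prices, np.ones(lookback) / lookback, mode="valid")
    aligned = prices[lookback - 1:]            # prices aligned with the moving average
    positions = (aligned[:-1] > ma[:-1]).astype(float)
    pnl = positions * np.diff(aligned)
    return pnl.mean() / (pnl.std() + 1e-9)

np.random.seed(42)
prices = 100 + np.cumsum(0.5 * np.random.randn(2000))   # made-up random walk 'prices'

# The 'machine' now 'learns' the best parameter by brute force
candidates = range(5, 205, 5)
scores = {lb: score_ma_rule(prices, lb) for lb in candidates}
best = max(scores, key=scores.get)
print(f"Best lookback: {best}, in-sample score: {scores[best]:.3f}")
```

(Of course, on a pure random walk the "best" parameter is just noise - which is rather the point about in-sample fitting.)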

It strikes me as pretty arbitrary whether a particular technique is considered to be machine learning or "old school" statistics. Indeed, look at this list of ML techniques that Google just found for me, here:

  1. Linear Regression
  2. Logistic Regression
  3. Decision Tree
  4. SVM
  5. Naive Bayes
  6. KNN
  7. K-Means
  8. Random Forest
  9. Dimensionality Reduction Algorithms
  10. Gradient Boost & Adaboost

Some of these machine learning techniques don't seem to be very fancy at all. Linear and logistic regression are machine learning? And Principal Components Analysis too? (Which apparently is now a "dimensionality reduction algorithm" - which is like calling a street cleaner a "refuse clearance operative".)

Heck, I've been using clustering algorithms like k-means for donkey's years, mainly in portfolio construction (of which more later in the post). But apparently that's also now "machine learning".
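As an illustration of the sort of thing I mean, here's a bare-bones sketch that groups assets by hierarchical clustering of their correlation matrix. The asset names and returns are invented for the example; in practice you'd feed in real return history. Nobody called this machine learning when I started doing it.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

np.random.seed(0)
n_days = 1000
# Fake daily returns: two equity-like assets and two bond-like assets,
# each driven by a shared factor plus idiosyncratic noise
equity_factor = np.random.randn(n_days)
bond_factor = np.random.randn(n_days)
returns = np.column_stack([
    equity_factor + 0.5 * np.random.randn(n_days),   # "SP500"
    equity_factor + 0.5 * np.random.randn(n_days),   # "EUROSTOXX"
    bond_factor + 0.5 * np.random.randn(n_days),     # "US10Y"
    bond_factor + 0.5 * np.random.randn(n_days),     # "BUND"
])
names = ["SP500", "EUROSTOXX", "US10Y", "BUND"]

corr = np.corrcoef(returns, rowvar=False)
distance = np.sqrt(0.5 * (1.0 - corr))               # standard correlation-to-distance trick
tree = linkage(squareform(distance, checks=False), method="single")
labels = fcluster(tree, t=2, criterion="maxclust")
for name, label in zip(names, labels):
    print(f"{name} -> cluster {label}")
```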

Perhaps the only important distinction, then, is between unsupervised and supervised machine learning. It strikes me as fundamentally different from classical techniques when you let the machine go and do its learning, drawing purely from the data to determine what the model should look like. It also strikes me as potentially dangerous. As I said in my own talk, I wouldn't trust a new employee with no experience of the financial markets to do their fitting without supervision. I certainly wouldn't trust a machine.

Still, this might be the only way of discovering a genuinely novel and highly non-linear pattern in some rich financial data. Which is why I personally think high frequency trading is one of the more likely applications for these techniques (I particularly enjoyed Domeyard's Christina Qi's presentation on this subject, which most of us only know about through books like Flash Boys).

I think it's fair to say that I am a bit better disposed towards those on the other side of the fence than I was before the conference. But don't expect me to start using neural networks anytime soon.


... but "Classical" statistics are still important


One of my favourite talks, which I've already mentioned, was Dr Ernie Chan's, about using some fairly well-known techniques to identify pictures of cats on YouTube (sorry, I mean: enhance the statistical significance of backtests), with a specific example of a multi-factor equity regression.


Source: https://twitter.com/saeedamenfx

Although I didn't personally learn anything new in this talk, I found it extremely interesting and useful in reminding everyone about the core issues in financial analysis. Fancy ML algorithms can't solve the fundamental problem that we usually have insufficient data, and what we do have has a pretty low ratio of signal to noise. Indeed, most of these fancy methods need a shedload of data to work, especially if you run them on an expanding or rolling out-of-sample basis, as I would strongly suggest. There are plenty of sensible "old school" methods that can help with this conundrum, and Ernie did a great job of providing an overview of them.
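As a flavour of the kind of "old school" method I mean (this is my own example, not one of Ernie's): a simple bootstrap test of whether a backtest's Sharpe ratio could plausibly be explained by noise alone. The return series below is simulated; with real data you'd plug in your actual daily backtest returns.

```python
import numpy as np

def annual_sharpe(returns):
    """Annualised Sharpe ratio, assuming ~256 trading days and a zero risk-free rate."""
    return returns.mean() / returns.std(ddof=1) * np.sqrt(256)

np.random.seed(1)
# Stand-in for ~3 years of daily backtest returns: a weak edge buried in noise
backtest_returns = 0.0002 + 0.01 * np.random.randn(750)
observed = annual_sharpe(backtest_returns)

# Bootstrap under the null: demean the returns so the 'true' Sharpe is zero,
# then see how often resampling noise alone produces a Sharpe at least this big
demeaned = backtest_returns - backtest_returns.mean()
boot = np.array([
    annual_sharpe(np.random.choice(demeaned, size=len(demeaned), replace=True))
    for _ in range(10000)
])
p_value = (boot >= observed).mean()
print(f"Observed Sharpe {observed:.2f}, bootstrap p-value {p_value:.3f}")
```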

Another talk I went to was about detecting structural breaks in relative value fixed income trading, presented by Edith Mandel of Greenwich Street Advisors. Although I didn't actually agree with the approach being used, this stuff is important. Fundamentally this business is about trying to use the past to predict the future. It's really important to have good robust tests to distinguish when this is no longer working, so that we know the world has fundamentally changed and it isn't just bad luck. Again, this is something that classical statistical techniques like Markov chains are very much capable of doing.
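Just to illustrate the flavour of a classical structural break test (this is not Edith's method, and the returns below are simulated): a CUSUM statistic on demeaned strategy returns, where a large peak suggests the mean has genuinely shifted rather than the strategy merely having a run of bad luck.

```python
import numpy as np

def cusum_statistic(returns):
    """Scaled CUSUM of demeaned returns; a large peak hints that the mean has shifted."""
    cusum = np.cumsum(returns - returns.mean())
    return np.abs(cusum).max() / (returns.std(ddof=1) * np.sqrt(len(returns)))

np.random.seed(2)
# A strategy that 'worked' for about four years, then quietly stopped working
good_patch = 0.0004 + 0.01 * np.random.randn(1000)
bad_patch = -0.0004 + 0.01 * np.random.randn(500)
returns = np.concatenate([good_patch, bad_patch])

stat = cusum_statistic(returns)
# Rough rule of thumb: the statistic behaves like the maximum of a Brownian bridge,
# so values well above ~1.36 (the 5% Kolmogorov critical value) suggest a real break
print(f"CUSUM statistic: {stat:.2f}")
```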


It's all about the portfolio construction, baby


As some of you know, I'm currently putting the final touches to a modest volume on the ever-fascinating subject of portfolio construction. So it's something I'm particularly interested in at the moment. There were stacks of talks on this subject at QuantCon, but I only managed to attend two in person.

Firstly, the final keynote talk, which was very well received, was on "Building Diversified Portfolios that Outperform Out-of-Sample", or to be more specific Hierarchical Risk Parity (HRP), by Dr. Marcos López de Prado:

Source: https://twitter.com/quantopian. As you can see, Dr. Marcos is both intelligent and rather good-looking (at least as far as I, a heterosexual man, can tell).

HRP is basically a combination of a clustering method to group assets, plus risk parity (essentially holding positions inversely scaled to a volatility estimate). So in some ways it is not hugely dissimilar to an automated version of the "handcrafted" method I describe in my first book. Although it smells a lot like machine learning, I really enjoyed this presentation, and if you can't use handcrafting because it isn't sophisticated enough, then HRP is an excellent alternative.
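Here's a cartoon version of the idea, with made-up asset names, volatilities and groupings. The real algorithm does the grouping itself via clustering and quasi-diagonalisation, and recurses down the tree; this sketch just shows the inverse-volatility weighting within and across groups.

```python
import numpy as np

def inverse_vol_weights(vols):
    """Weights proportional to 1/volatility, normalised to sum to one."""
    weights = 1.0 / np.asarray(vols, dtype=float)
    return weights / weights.sum()

# Made-up annualised vols, grouped the way a clustering step might group them
groups = {
    "equities": {"SP500": 0.16, "EUROSTOXX": 0.18},
    "bonds": {"US10Y": 0.06, "BUND": 0.05},
}

final_weights = {}
group_vols = {}
for group_name, assets in groups.items():
    names, vols = zip(*assets.items())
    within = inverse_vol_weights(vols)              # inverse-vol within each group
    group_vols[group_name] = np.mean(vols)          # crude proxy for the group's own vol
    for name, w in zip(names, within):
        final_weights[name] = w

across = dict(zip(group_vols, inverse_vol_weights(list(group_vols.values()))))
for group_name, assets in groups.items():           # ...then inverse-vol across groups
    for name in assets:
        final_weights[name] *= across[group_name]

print({name: round(w, 3) for name, w in final_weights.items()})
```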

There were also some interesting points raised in the presentation (and in the Q&A, and in the bar afterwards) about testing portfolio construction methods more generally. Firstly, Dr Marcos is a big fan (as am I) of using random data to test things. I note in passing that you can also use bootstrapping of real data to get an idea of whether one technique is just lucky, or genuinely better.
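For what such a random-data test might look like (an entirely toy example of my own, not anything from Dr Marcos's presentation): generate lots of alternate histories from a known covariance matrix, fit a naive Markowitz portfolio to each sample, and see how often it actually beats something dumb like equal weights when judged against the true parameters.

```python
import numpy as np

np.random.seed(3)
# A world where we know the right answer: two similar 'equities' and one 'bond'
true_corr = np.array([[1.0, 0.8, 0.1],
                      [0.8, 1.0, 0.1],
                      [0.1, 0.1, 1.0]])
true_vols = np.array([0.16, 0.16, 0.08]) / 16.0     # daily vols (sqrt(256) ~= 16)
true_means = np.array([0.04, 0.04, 0.02]) / 256.0   # daily mean returns
true_cov = np.outer(true_vols, true_vols) * true_corr

def true_sharpe(weights):
    """Annualised Sharpe of a weight vector judged against the *true* parameters."""
    return (weights @ true_means) / np.sqrt(weights @ true_cov @ weights) * 16.0

equal = np.ones(3) / 3.0
n_trials, n_days = 500, 1250                        # 500 alternate histories, ~5 years each
wins = 0
for _ in range(n_trials):
    sample = np.random.multivariate_normal(true_means, true_cov, size=n_days)
    # Naive unconstrained Markowitz weights from the noisy sample estimates
    raw = np.linalg.solve(np.cov(sample, rowvar=False), sample.mean(axis=0))
    markowitz = raw / np.abs(raw).sum()
    wins += true_sharpe(markowitz) > true_sharpe(equal)
print(f"Naive Markowitz beat equal weights in {wins} of {n_trials} random histories")
```

The same harness works for comparing any two candidate methods on an equal footing.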

Secondly, one of the few criticisms I heard was that Dr Marcos chose an easy target - naive Markowitz - to benchmark his approach against. Bear in mind that (a) nobody uses naive Markowitz, and (b) there are plenty of alternatives which would provide a sterner test. Future QuantCon presenters on this subject should beware - this is not an easy audience to please! In fairness, other techniques are used as benchmarks in the actual research paper.

If you want to know more about HRP, there is further detail here.

I also found a hidden gem in one of the more obscure conference rooms: this talk by Dr. Alec (Anatoly) Schmidt on "Using Partial Correlations for Increasing Diversity of Mean-variance Portfolio".

Source: https://twitter.com/quantopian


That is more interesting than it sounds - I believe this relatively simple technique could be something genuinely special and novel, which would allow us to get bad old Markowitz to do a better job with relatively little work, without introducing the biases of techniques like shrinkage, and without causing the problems with constraints that bootstrapping does. I plan to do some of my own research on this topic in the near future, so watch this space. Until then, amuse yourself with the paper from SSRN.
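In case partial correlations are new to you, here is the standard way of computing them from an ordinary correlation matrix, via its inverse. This is just the raw ingredient, not Dr Schmidt's actual portfolio construction, and the numbers are invented.

```python
import numpy as np

def partial_correlations(corr):
    """Partial correlation matrix from a correlation matrix, via the precision matrix:
    partial_ij = -P_ij / sqrt(P_ii * P_jj)."""
    precision = np.linalg.inv(corr)
    scale = np.sqrt(np.diag(precision))
    partial = -precision / np.outer(scale, scale)
    np.fill_diagonal(partial, 1.0)
    return partial

# Three assets where 1 and 2 look correlated largely through their common link to 0
corr = np.array([[1.0, 0.7, 0.7],
                 [0.7, 1.0, 0.6],
                 [0.7, 0.6, 1.0]])
print(np.round(partial_correlations(corr), 2))
```

The partial correlation between assets 1 and 2 comes out around 0.22 rather than their raw 0.6, because much of that raw correlation is explained by their shared link to asset 0.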


Dude, QuantCon is awesome


Finance and trading conferences have a generally bad reputation, which they mostly deserve. "Retail" end conferences are normally free or very cheap, but mostly consist of a bunch of snake oil salesmen. "Professional" conferences are normally very pricey (though nobody there is buying their ticket with their own money), and mostly consist of a bunch of better-dressed, and slightly subtler, snake oil salespeople.

QuantCon is different. Snake oil salespeople wouldn't last five minutes in front of the audience at this conference, even if they'd somehow managed to get booked to speak. This was probably the single biggest concentration of collective IQ under one roof in finance conference history (both speakers and attendees). The talks I went to were technically sound, and almost without exception presented by engaging speakers.

Perhaps the only downside of QuantCon is that the sheer quantity and variety of talks makes decisions difficult, and results in a huge amount of regret at not being able to go to a talk because something only slightly better is happening in the next room. Still, I know that I will have offended many other speakers by (a) not going to their talk, and (b) not writing about it here.

So I feel obligated to mention this other review of the event from Saeed Amen, and this one from Andreas Clenow, who are amongst the speakers whose presentations I sadly missed.

PS If you're wondering how much I am getting paid by QuantCon to write this, the answer is zero. Regular readers will know me well enough to know that I do not shill for anybody; the only thing I have to gain from posting this is an invite to next year's conference!

3 comments:

  1. QuantCon
    No disrespect intended since I am a big fan of Quantopian's back testing platform but I have to say the name is MOST unfortunate for such an event. There are very few actual fools on the Quantopian forum (with one or two notable exceptions) but my belief is that the vast majority of users and of people who flock to these conferences are doomed to disappointment and failure.

  2. Is this lad salty enough Roberto?

    Great story!

  3. Just a quick comment on machine learning. An algorithm needn't be complex to be considered machine learning. I would absolutely consider linear regression and Markov processes 'machine learning.' Perhaps a more palatable term for old school practitioners would be 'statistical learning', which was actually used by several pioneers in the field including Friedman, Tibshirani and Hastie. But then again, every time something is reinvented, it needs a fancy name to generate some buzz, doesn't it?

