Gradient Boosting Machines (GBM): from Zero to Hero (with R and Python code)
This talk will get you started with gradient boosting machines (GBM), a very popular machine learning technique that provides state-of-the-art accuracy on numerous business prediction problems. After a quick intro to machine learning and the GBM algorithm, I will show how easy it is to train and then use GBMs in real-life business applications using some of the most popular open source implementations (xgboost, lightgbm and h2o). We’ll do all this in both R and Python with only a few lines of code, and the talk will be accessible to a wide audience (with limited prior knowledge of machine learning). Finally, in the last part of the talk I will provide plenty of references that can get you to the next level. GBMs are a powerful technique to have in your machine learning toolbox: despite all the latest hype about deep learning (neural nets) and “AI”, GBMs usually outperform neural networks on the structured/tabular data most often encountered in business applications.
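To give a flavor of the "few lines of code" the talk promises, here is a minimal Python sketch of training a GBM and scoring it on held-out data. The talk itself uses xgboost, lightgbm and h2o; this example substitutes scikit-learn's `GradientBoostingClassifier` (a plainly named stand-in with a very similar fit/predict API), and the synthetic dataset and hyperparameter values are illustrative assumptions, not from the talk.

```python
# Minimal GBM example: train on synthetic tabular data, evaluate on a holdout set.
# Uses scikit-learn's GradientBoostingClassifier as a stand-in for
# xgboost/lightgbm/h2o; the workflow (fit, then predict/score) is the same shape.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Illustrative synthetic "business" dataset: 1000 rows, 20 features.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Typical GBM knobs: number of trees, tree depth, learning rate
# (values here are common defaults, chosen for illustration).
model = GradientBoostingClassifier(
    n_estimators=100, max_depth=3, learning_rate=0.1, random_state=42
)
model.fit(X_train, y_train)

acc = model.score(X_test, y_test)  # holdout accuracy
print(f"holdout accuracy: {acc:.2f}")
```

With xgboost or lightgbm the code is nearly identical (e.g. swap in `xgboost.XGBClassifier`), which is part of the talk's point about how little code these libraries require.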
Chief Scientist, Epoch (USA)
Szilard studied Physics in the 90s and obtained a PhD by using statistical methods to analyze the risk of financial portfolios. He worked in finance, then more than a decade ago moved to become the Chief Scientist of a tech company in Santa Monica, California, doing everything data (analysis, modeling, data visualization, machine learning, data infrastructure etc.). He is the founder/organizer of several meetups in the Los Angeles area (R, data science etc.) and of the data science community website datascience.la. He is the author of a well-known machine learning benchmark on GitHub (1000+ stars), a frequent speaker at conferences (keynote/invited at KDD, R-finance, Crunch, eRum; contributed talks at useR!, PAW, EARL, H2O World, Data Science Pop-up, Dataworks Summit etc.), and he has developed and taught graduate data science and machine learning courses as a visiting professor at two universities (UCLA in California and CEU in Europe).