#6 Chai Time Kaggle Talks with Kaggle Competition Master Gilles Vandewiele

Hi All!
In our 6th Chai Time session we have a Kaggle Competition Master!

The Guest:
Gilles Vandewiele holds a PhD in Computer Science Engineering. He conducts research on white-box machine learning for critical domains and (semantic) knowledge models. His other research interests include bio-inspired algorithms and sports-related data science.

Gilles is often found sitting behind his desk; if not, he is most likely playing code golf in Python, competing in a data science competition, or out playing football with friends or colleagues.

In this Chai Time Kaggle Talk, Sanyam will be interviewing Kaggle Master Gilles about his Kaggle journey, and we will also learn about his team's winning solution to the Google Brain - Ventilator Pressure Prediction competition.
We'll learn how they combined CNNs, LSTMs, Transformer models, and PID matching to finish at the top of the leaderboard.

Links:


What is your favourite chai, Gilles?


I was not able to fully answer Sanyam's question yesterday about the intuition behind the deconv1d layer in our LSTM + Transformer encoder set-up. I forwarded the question to Shujun, the teammate who created that architecture, and he replied:

"Deconvolution disentangles the convoluted sequence, if you consider convolution to fuse the elements in the convolution kernel. Additionally, since I don’t use padding in the conv layers, deconv also retrieves the full dimensionality sequence. I also tried to take out the deconv layers and juts use conv layers with padding instead, but the results were significantly worse."

I hope this somewhat explains it!
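To make the shape intuition concrete, here is a minimal PyTorch sketch of the pattern Shujun describes (my own illustration, not the team's actual code; the channel count, kernel size, and the 80-step sequence length are placeholder values): an unpadded Conv1d shortens the sequence, and a matching ConvTranspose1d ("deconv") restores the full sequence length.

```python
import torch
import torch.nn as nn

class ConvDeconvBlock(nn.Module):
    """Illustrative conv -> deconv pair that preserves sequence length."""

    def __init__(self, channels: int = 64, kernel_size: int = 3):
        super().__init__()
        # No padding: output length = input length - (kernel_size - 1)
        self.conv = nn.Conv1d(channels, channels, kernel_size, padding=0)
        # A transposed conv with the same kernel size adds (kernel_size - 1)
        # back, so the original sequence length is recovered.
        self.deconv = nn.ConvTranspose1d(channels, channels, kernel_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, seq_len)
        h = torch.relu(self.conv(x))  # (batch, channels, seq_len - 2) for k=3
        return self.deconv(h)         # back to (batch, channels, seq_len)

x = torch.randn(8, 64, 80)           # e.g. an 80-step breath sequence
print(ConvDeconvBlock()(x).shape)    # torch.Size([8, 64, 80])
```

The alternative Shujun mentions, a padded conv that keeps the length directly, would replace the deconv entirely; per his comment, that variant scored significantly worse for them.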
