(Wait, shouldn’t that word be “learning”?)
(And how many keys do we have in the lab?)
Ugh, I hate machine learning. Deep learning especially.
I first met machine learning as a freshman, when Andrew Ng (I don’t need to introduce this guy, I guess) came to Tsinghua and gave a lecture about his work. It was fascinating.
But my first time actually working with machine learning was not so pleasant. We had a machine learning course. The lecturer was a theorist with high self-esteem. (He asked which book we used for our theory course; when someone mentioned the classic book by Sipser, he said we should at least be using the Modern Approach!) The course was mostly about proving one bound after another, and now I can remember none of them! We did have fun looking at some machine learning problems from a nonlinear programming or even game-theoretic point of view, though. The only algorithm mentioned in the course was the (kernel) SVM, and even that came with a ton of theoretical analysis.
And he hates deep learning. He mentioned it only once, in the concluding part of the course, and only to downplay it with sarcasm. Are all machine learning theorists like that?
(BTW the TA seemed to be a super lazy otaku girl)
And then, after a whole semester of abstract material, the final course project was a concrete one – to solve a problem on Kaggle! I didn’t even have a decent learning algorithm in my repertoire.
But, challenge accepted!
And the result? It did not give me frustration. It gave me cancer. My best score on the benchmark (a mediocre score, at that) was achieved with buggy code. Only after the course project deadline was I able to get a better score with a correct program.
Those with the best results were interning at an alumnus’s computer vision startup. Surely you can’t hope to beat state-of-the-art deep neural networks, implemented with a powerful framework and deployed on a cluster, with your crappy hand-written naive algorithm running on your laptop!
What came after that? The next semester, when I found out that the computational biology course was all about machine learning, I quit immediately. (Seriously, discovering new interactions between proteins and existing drugs by text-mining the existing literature sounds just ridiculous.)
Well, I don’t really hate machine learning itself. It really is powerful. What I don’t like is the way people work with it, and the hype.
I’m not saying that you have to give a theory to justify your method. The theory of neural networks is hard, and somehow they just work super well in practice; I understand that. But just dumping everything into your network, tuning it at random, and hoping that magic will happen – that definitely is not the right attitude! And my perception is that this is how a lot of people are doing machine learning right now.
And people are well aware of it! In China we call that “炼丹 (liàn dān)” – literally, refining elixirs of immortality (or TCM, if you prefer that) – and at last year’s 21ccc one speaker called it “alchemy” – see, we even have internationally accepted terminology for this sort of thing!
And the hype – the number of people doing machine learning is too damn high! It is not rocket science, but neither is it for dumb people who know only how to liàn dān!
I’ve avoided coming into contact with machine learning thus far. But now, it is coming for me!
Because, nearning is the key, ahahahaha!
Computer graphics today is not like decades ago, when people focused on rendering; those problems are largely solved. The field has become much broader. To cite a talk I attended at MSRA: today, computer graphics is about generating novel content from existing content, in the form of images, videos, 3D models, or even something non-visual like text or sound. It is about capturing human creativity. And how do you do that? Learning, of course!
So I started learning to do learning!
The past several days were spent getting the environment sorted. Getting CUDA working on my Arch Linux machine was a bit of a pain for a casual Linux user: the proprietary NVIDIA driver kept crashing, so I had to use Bumblebee for my Intel/NVIDIA dual-GPU laptop, and I had to work around the incompatibility between CUDA 7.5 and GCC 6. But I guess you can hardly call that trouble.
The whole procedure went like this: put in a convolution layer – then a pooling layer – then another pair of such layers – then three fully connected layers – then just dump in the data and sit there watching the error drop.
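The post doesn’t name a framework or any layer sizes, so here is just a sketch of the shape arithmetic behind that layer stack – two conv+pool pairs feeding three fully connected layers. The input size, kernel sizes, and channel count below are illustrative assumptions (an MNIST-like 28×28 image), not anything from the course project.

```python
def conv2d_out(size, kernel, stride=1, pad=0):
    """Output side length of a square convolution layer."""
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel, stride=None):
    """Output side length of a square pooling layer (stride defaults to kernel)."""
    stride = stride or kernel
    return (size - kernel) // stride + 1

size = 28                      # assumed input: a 28x28 image
size = conv2d_out(size, 5)     # conv 5x5  -> 24x24
size = pool_out(size, 2)       # pool 2x2  -> 12x12
size = conv2d_out(size, 5)     # conv 5x5  -> 8x8
size = pool_out(size, 2)       # pool 2x2  -> 4x4

channels = 16                  # assumed channel count after the second conv
flat = channels * size * size  # flatten before the 3 fully connected layers
print(flat)                    # 256 features feed the first FC layer
```

The point of writing it out: the “dump in the data” part only works once these sizes line up, which is about the only thinking the procedure above required.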
No! That does not feel good. Did I actually do anything? The days when I was hopelessly tweaking my buggy, crappy hand-written naive learning code now feel like a joke.
But anyway, this still is a step forward.
If learning is the key, then what is the key to learning? There must be something deep (no, I don’t mean a deeper network) that distinguishes groundbreaking machine learning research from liàn dān, something you have to use your brain to figure out. We shall see.