My life as a Ph.D. student, S01E07: Pondering life, mine and others’

While my life hasn’t been particularly eventful these past months, I do want to mention one thing I didn’t see coming.

Two professors came to ask me about former Yao-class classmates of mine who are applying to their open PhD positions. Incidentally, both professors will be hiring their first batch of students, so these are presumably important decisions for them.

And one of the two students in question also turned to me for help. For them, this surely matters a lot!

In the first place, I wondered why my classmates were applying for a PhD now rather than at this time last year… Maybe they did a one-year Master’s program. Maybe they took a gap year. I really didn’t know. I should have kept better track of my classmates. That’s my bad.

Well, you can’t really blame me for that. In a place like Yao class, everyone looks after themselves.

It feels strange. Just months ago you were classmates. Now you kind of have a say in their future. I don’t even feel qualified to give opinions on their performance that could be taken so seriously! But I said what I knew and what I thought. Even if I can’t help them, I hope I at least didn’t hinder them.

Life is so strange and unpredictable after all.

When you have nothing to do, you think about life. That’s what I did several days ago, when I had left my laptop in the lab at night and was sitting at my desk in my apartment, facing the wall, eating a Subway.

“If you are what you eat, then in a few years half of me will be made of Subway!”

Well, I didn’t think about sandwiches.

Why do a PhD? That’s the one question every PhD student keeps asking himself. It’s a hard one. Sometimes you also ask: why is everyone doing a PhD?

Look at us: we are Yao-class graduates. It’s good to be humble, but we can just be bold and say that we are the top CS students from China, among the most intelligent of all Chinese students. With our talent we could probably achieve success easily in many fields. Yet most of us choose to do the least cost-efficient thing of all. It seems to have become some kind of obligation: if you are the best student, you do a PhD.

Let’s not think too hard about it. For us it has been a long and lonely journey to get where we are; let’s hope the answer becomes clear further down the road.

My life as a Ph.D. student, S01E06: Entering working mode

I haven’t written anything here for two months… That’s what happens when you have research to do and no interesting story to tell. Now that several months have passed since I arrived here, the feeling of freshness is completely gone, and it’s time for serious work.

I don’t know how an average student spends the very earliest stage of his academic career – before his first publication. It must take some unusual talent to produce big results and publish first-author top-conference papers right away! I guess many have to do some mundane tasks as part of a large project, then get their first publication as credit for that work. So I should be happy that I don’t have mundane tasks to do!

I’m basically still exploring GANs. I’ve probably found a way to generate high-resolution images with a single model. It would be really exciting if I could beat the state of the art, but I’ll consider it a good experience even if I can’t. I’ve learned a lot about neural networks, and I’ve got a lot of interesting ideas to try, which I won’t tell you about before I publish 🙂

Life is kind of dull, though. This is probably the first time in my life that I’ve kept working hard for weeks on end, but also the first time I sometimes feel mentally exhausted. I know that’s gonna be the norm. The world is in turmoil, but as long as WWIII doesn’t break out and no one is shooting at you, it won’t make your life any different. As long as progress in your work keeps you cheered up, things will be fine; but if you get stuck in your work and don’t have a life at the same time, things get bleak…

The semester is over. But you don’t really get holidays as a PhD student. There is always one conference submission deadline after another, and for me next year’s SIGGRAPH will be the first. We’re not sure yet whether we can do what we want, but of course I hope we can.

A little bit of thought on solving Rubik’s Cubes, and on the possibility of doing it with neural networks

I’ve been thinking about solving combination puzzles with computers for a long time.

I believe that every CS student has learned to solve the 15-puzzle with heuristic search in an introductory AI course, or at least to solve the simpler 8-puzzle with BFS in an introductory algorithms course. At first sight, the Rubik’s Cube may seem to be a similar puzzle, just a bit bigger.

Yes, they are all “sequential move” puzzles, where you try to restore a bunch of things to some initial configuration by performing operations sequentially. But the Rubik’s Cube is definitely much more difficult, both for humans and for computers. Size is not what really matters – you can make yourself an arbitrarily large (n^2-1)-puzzle, and it will still be easier than a Rubik’s Cube, just quadratically more boring and time-consuming. Apparently there are deep group-theoretic reasons; I once read a Scientific American article about why some of these puzzles are harder than others. But I think it can be explained by the 15-puzzle being more local than the Rubik’s Cube: in a Rubik’s Cube, one move can affect 1/3 of all the pieces, whereas in the 15-puzzle, one move affects only one piece.

The consequence is that in the 15-puzzle you can see clearly which configuration is closer to being solved – you can see it easily as a human, and you can design a heuristic easily for a computer. For the Rubik’s Cube this is much more difficult; I haven’t even seen one example of a good search heuristic for it. Maybe people just don’t bother finding one: by memorizing some 200 move sequences and practicing, people can solve a Rubik’s Cube in under 10 seconds, and the total number of states of a Rubik’s Cube, 43,252,003,274,489,856,000 (I can recite this number from memory), together with God’s number, 20, is just small enough to make it tractable with a (bidirectional) brute-force search.

But when you have a larger combination puzzle – a 4×4×4, a 5×5×5 or bigger cube, or a megaminx – the number of possible states explodes, and brute-force search becomes mostly useless.

Somehow people can still solve them by finding patterns. Someone has even designed an algorithm that solves an n×n×n cube in Θ(n^2/log n) moves – what a strange complexity, with a log in the denominator!

Actually, finding solutions to all those exotic puzzles is my favorite part of this Rubik’s Cube game. The 200 or so formulas may have been found by brute-force computer search, but once you have played with a Rubik’s Cube for some time, and if you are clever enough, you can construct such formulas yourself for other puzzles. You start by just randomly – or maybe purposefully – turning the faces to see what happens. Then you suddenly find a short move sequence that does something interesting – say, rotating a small number of pieces in place while leaving their positions unchanged, or moving some corner pieces without affecting any edge pieces.

Then you get a bunch of formulas, each of which keeps a certain set of pieces unchanged, moves a second set of pieces in some desired fashion, and moves the remaining pieces in some way you don’t really care about.

Then, with what you have at hand, you define your strategy. Be it edges-first, corners-first, layer-by-layer or whatever you may want to call it, you usually have several stages. In each stage, you keep unchanged a larger set of already-restored pieces than in the previous stage, and try to restore several more pieces so as to advance to the next stage.

You might not notice it, but what you are doing is repeatedly reducing the group of possible states to one of its subgroups, until you end up with the trivial group consisting only of the solved state.

The process might not be optimal, but it works.

But it seems that we haven’t formulated this process well enough to let computers do it. I’ve been thinking about it for a while. Maybe humans and computers could cooperate: we define the stages so that the quotient group between two consecutive stages is small enough that the computer can brute-force search for a formula that brings us from one stage to the next. It would be even better if the computer could also learn to define the stages, but that part does require some insight into the structure of the puzzle.
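Here is a minimal sketch of the computer’s half of that cooperation, assuming hashable cube states and hypothetical apply_move / in_next_stage helpers supplied by an actual cube implementation:

from collections import deque

def search_to_next_stage(state, in_next_stage, apply_move, moves, max_depth=8):
    # Brute-force BFS for a move sequence that brings `state` into the
    # subgroup defining the next stage. `in_next_stage` is the
    # human-supplied stage predicate; `apply_move` and `moves` are
    # placeholders for a real cube implementation.
    queue = deque([(state, [])])
    seen = {state}
    while queue:
        s, path = queue.popleft()
        if in_next_stage(s):
            return path
        if len(path) == max_depth:
            continue
        for m in moves:
            t = apply_move(s, m)
            if t not in seen:
                seen.add(t)
                queue.append((t, path + [m]))
    return None  # no formula found within max_depth moves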

Yesterday when I was dealing with the GAN thing, I suddenly came up with the idea of solving Rubik’s Cubes with a neural network.

I think that’s quite natural. These days people seem to believe that neural networks can do everything. But after a quick Google search I found that few people have tried the same thing.

So it’s time to have a try! Brute-force search is definitely ineffective for complex combination puzzles, but maybe neural networks can do the magic. The idea could not be simpler: encode the state of each piece into one-hot vectors, and build a network that predicts, from the input state, the next move towards solving the puzzle.

Then come the encoding, the training data, and the structure of the network. I’ve seen some people represent the state of a Rubik’s Cube by one-hot encoding the color of the sticker at each position… that sounds ridiculous. I think one-hot encoding the position of each piece makes more sense. We have 20 movable pieces, and each of them has 24 possible position-and-orientation states, so the input is a 480-dimensional 0-1 vector. Note that this is actually more than what you need with stickers: 48 movable stickers, each with 6 possible colors, give a 288-dimensional 0-1 vector. But still, I think the positions of the pieces are more intrinsic.
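Here is a minimal sketch of this encoding, assuming some fixed indexing of each piece’s 24 position-and-orientation states (the indexing scheme itself is left to the cube implementation):

import numpy as np

NUM_PIECES = 20        # 8 corners + 12 edges
STATES_PER_PIECE = 24  # corners: 8 slots x 3 twists; edges: 12 slots x 2 flips

def encode_state(piece_states):
    # piece_states: length-20 sequence, piece_states[i] in [0, 24),
    # giving the position-and-orientation index of piece i.
    x = np.zeros((NUM_PIECES, STATES_PER_PIECE), dtype=np.float32)
    x[np.arange(NUM_PIECES), piece_states] = 1.0
    return x.reshape(-1)  # the 480-dimensional 0-1 input vector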

The training data would be obtained by scrambling the cube: we train the network to bring each state back to its previous state in the scramble sequence. The scramble sequence should be generated by a random walk: with a fixed probability (say 0.05, the reciprocal of God’s number, 20) we jump back to the solved state; otherwise we choose uniformly among the 12 possible moves. This makes sense because it ensures that each state is visited far more often via short paths from the solved state than via long ones.
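A sketch of that data generation, with apply_move standing in for a real cube implementation:

import random

MOVES = [(face, turns) for face in "UDLRFB" for turns in (+1, -1)]  # 12 quarter turns

def inverse(move):
    face, turns = move
    return (face, -turns)

def scramble_walk(solved_state, apply_move, p_reset=0.05):
    # Endless stream of (state, move) training pairs, where the move is
    # the one that brings the state back towards the solved state. With
    # probability p_reset (~1/20, the reciprocal of God's number) the
    # walk restarts from the solved state.
    state = solved_state
    while True:
        if random.random() < p_reset:
            state = solved_state
            continue
        move = random.choice(MOVES)
        state = apply_move(state, move)
        yield state, inverse(move)  # learn to undo the last move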

Then we come to the fun part – the structure. We could just use plain fully-connected layers. But remember that the Rubik’s Cube has symmetry, as I mentioned in a previous post. So if we rotate the input state, the same rotation should apply to the prediction as well. This can be enforced through weight sharing. Imagine that all the neurons, including the inputs, are divided into 24 classes, corresponding to the 24 elements of the chiral octahedral symmetry group O. If there is a connection between neuron a and neuron b, and under some spatial rotation neuron a becomes neuron c and neuron b becomes neuron d, then the weight between a and b and the weight between c and d should be identical! Now we have an interesting 24-fold weight sharing inside the network. It still looks like a fully-connected network, yet it is strangely entangled with itself.
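One way to realize this tying, sketched under the simplifying assumption that the two layers have the same size and that we have precomputed, for each of the 24 rotations, the permutation it induces on the neuron indices:

import numpy as np

def symmetrize(W, perms):
    # Project an arbitrary weight matrix onto the rotation-equivariant
    # subspace by averaging W over all 24 rotations. perms: list of 24
    # index arrays; perms[g][i] is the neuron that neuron i becomes
    # under rotation g (assumed precomputed from the cube's geometry).
    # Afterwards W[p[a], p[b]] == W[a, b] for every rotation p, which
    # is exactly the weight-sharing condition described above.
    acc = np.zeros_like(W)
    for p in perms:
        acc += W[np.ix_(p, p)]  # W conjugated by the permutation
    return acc / len(perms)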

While I haven’t tried any of this yet, I expect the network to do more than merely predict the next move. By examining, for each neuron, which input states cause its maximum activation, we should be able to see what patterns the network uses to find the next move – what should be kept and what should be moved – which could help us find the stages of a human solving strategy.




Generative Adversarial Networks are a kind of generative model (obviously). Neural-network-based generative models are common, but they are mostly RNNs trained on sequence-shaped data, e.g. for generating text. GANs take a different approach: adversarial training. There are two neural networks: a generator, which learns to produce samples resembling the training data from random noise vectors, and a discriminator, which learns to tell whether a sample came from the training data or from the generator. The adversarial goal: the generator tries to increase the discrimination error, while the discriminator tries to decrease it.
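In symbols, this is the minimax game from the original GAN paper:

$$\min_G \max_D \; \mathbb{E}_{x \sim p_\mathrm{data}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]$$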



Although the idea is simple, in practice it is quite tricky, and GANs are hard to train well. There are several reasons. First, the quality of the generator cannot be quantified (using the discrimination error obviously doesn’t work), so you can only judge it subjectively, and from subjective judgment it is hard to tell how far training has progressed or whether it is stuck, which in turn makes it hard to find a proper learning rate.

Second, the two networks train at very different speeds. To make sure they improve together, you have to keep their strengths roughly matched. The original paper argues that the discriminator should be trained more. But in my experiments, with the same learning rate the discriminator utterly crushes the generator, and I don’t know why. If I try to keep the discrimination error at the end of each round close to that of random guessing, I have to train the generator more than 20 times per round… I haven’t run the experiment yet, but my guess is that it has to do with the number of input noise variables.



For the same reason, the learning rate cannot be too large.


I experimented on the MNIST handwritten digit dataset. As for the network architecture… for a dataset this simple, pretty much any architecture would do. The discriminator is the input, followed by two convolutional layers, then one fully connected layer, then the output, with batch normalization. The generator is exactly the reverse.

After tuning the architecture, tuning the learning rate, and tuning the ratio of training between the two networks, all to no avail, I came up with a not-so-elegant trick. The discriminator judges only one sample at a time. What if it could judge a whole batch at once? Even if every single sample looks real, if the whole batch looks the same, it is obviously fake…
As for the chimera outputs on the boundaries between the regions that generate different digits, there is a possible improvement. Since MNIST comes with class labels, we can train with supervision: convert the class label into a one-hot vector and put it in the input seed, which keeps the regions generating different digits separated; then change the discriminator to predict real/fake plus the class, and change the generator’s objective to increasing the discrimination error while decreasing the classification error. This can be expected to produce better outputs. I am still experimenting with it.
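Here is a minimal sketch of that supervised variant in PyTorch-style code, with hypothetical stand-in networks (fully connected for brevity, while the real ones are convolutional):

import torch
import torch.nn as nn
import torch.nn.functional as F

NOISE_DIM, NUM_CLASSES = 100, 10

netG = nn.Sequential(nn.Linear(NOISE_DIM + NUM_CLASSES, 784), nn.Tanh())
trunk = nn.Sequential(nn.Linear(784, 128), nn.ReLU())  # shared discriminator trunk
real_head = nn.Linear(128, 1)              # real-vs-fake head
class_head = nn.Linear(128, NUM_CLASSES)   # classification head

labels = torch.randint(0, NUM_CLASSES, (32,))
# The class label rides along in the seed as a one-hot vector.
seed = torch.cat([torch.randn(32, NOISE_DIM),
                  F.one_hot(labels, NUM_CLASSES).float()], dim=1)
h = trunk(netG(seed))
# Generator objective: raise the discrimination error while lowering
# the classification error.
g_loss = (F.binary_cross_entropy_with_logits(real_head(h), torch.ones(32, 1))
          + F.cross_entropy(class_head(h), labels))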

My life as a Ph.D. student, S01E05: Still working on my first neural network…

I was planning to write a post after getting everything done, but it’s taking longer than I’d like…

We are working on some generative models now, and for the last week I’ve been learning how to train a Generative Adversarial Network. GANs are super cool, but also notoriously difficult to train! They probably aren’t the best choice for a machine learning newbie like me.

But doing something challenging is fun! Using something that works out of the box is just too lame. And if it already works well, it likely isn’t worth studying.

Here I’d like to talk about what I’ve learned so far. Let’s see the result first. I’m getting something like this from a GAN trained on the MNIST dataset:


It is generating something. Not exactly digits, but they do resemble some handwritten script…

I found that there are some practical issues not addressed in the DCGAN paper. It suggests using batch normalization. Batchnorm behaves differently during training and during evaluation: when training, it uses the mean and variance of the current batch; when evaluating, it uses statistics accumulated over all the examples it has seen. It seems logical to put the generator in evaluation mode while training the discriminator, and vice versa.
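In PyTorch-style code (my actual code is in torch, and the networks here are hypothetical stand-ins), one discriminator round would toggle the modes like this:

import torch
import torch.nn as nn

netG = nn.Sequential(nn.Linear(64, 784), nn.BatchNorm1d(784), nn.Tanh())
netD = nn.Sequential(nn.Linear(784, 128), nn.BatchNorm1d(128), nn.ReLU(),
                     nn.Linear(128, 1), nn.Sigmoid())

noise = torch.randn(32, 64)
netG.eval()               # generator batchnorm: accumulated statistics
with torch.no_grad():
    fake = netG(noise)    # samples for this discriminator update
netD.train()              # discriminator batchnorm: current batch
scores = netD(fake)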

In the original GAN paper, in each discriminator training round, the “true batch” from the real data and the “false batch” from the generated examples are fed into the discriminator separately. It seems logical to make the distribution of every batch the same, so true and false examples should be mixed in the same batch.

To avoid making the wrong decision on the previous two issues, I tried both ways for each. But the suggestions given in the DCGAN paper still didn’t quite work out for me. I’ve followed them as much as I can. I can’t do the fractionally-strided convolutions for the generator network, because that kind of convolution is not readily available in torch… but otherwise I did exactly what was advised. Still, the generator keeps collapsing every input to a single output. From what I’ve read, this is indeed the most commonly observed failure mode of GANs.

Then I came up with a not-so-elegant trick to solve this problem. The generator collapses because it finds the single output that best tricks the discriminator into mistaking it for a real example from the dataset, and learns to produce that output from any input. But then the discriminator quickly learns that that particular example is fake. Then the generator looks for the next output that can trick the discriminator…

This happens because the discriminator looks at a single example at a time. If we could somehow let the discriminator reject a batch in which every example looks the same, then the generator would no longer be tempted to collapse every input into the same output! How do we do this? My solution: for each batch, take its mean; then, for each example in the batch, take its difference from the mean and concatenate that difference to the example as additional channels. The expectation is that if the examples in a batch are all similar, the intensity in the additional channels will be small, and the discriminator will be able to learn that.
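Here is a minimal sketch of the trick, assuming PyTorch-style tensors:

import torch

def add_batch_stat_channels(x):
    # x: (N, C, H, W) batch of images. Append each example's deviation
    # from the batch mean as extra channels; if every example in the
    # batch looks the same, these channels are near zero, and the
    # discriminator can learn to treat that as a sign of fakeness.
    mean = x.mean(dim=0, keepdim=True)
    return torch.cat([x, x - mean], dim=1)  # (N, 2C, H, W)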

But then there is a conflict with batchnorm. Since we want the discriminator to tell the difference between a true batch and a false batch, we cannot mix them in the same batch! But on the other hand, the difference in intensity between a true batch and a false batch is exactly what makes the additional channels useful, and if batchnorm normalizes each batch individually, that difference is wiped out! To prevent this, we cannot let batchnorm normalize the batches individually – we have to mix them!

So do we just drop batchnorm? But without batchnorm, training becomes very unstable.

The solution is to mix the two batches only after their additional channels have been computed separately.
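Continuing the sketch above (real_batch, fake_batch, and netD are hypothetical stand-ins):

import torch

real_batch = torch.randn(32, 1, 28, 28)  # stand-in for real MNIST images
fake_batch = torch.randn(32, 1, 28, 28)  # stand-in for generator outputs

# Compute the deviation channels per source batch first...
real_aug = add_batch_stat_channels(real_batch)
fake_aug = add_batch_stat_channels(fake_batch)

# ...then mix, so that batchnorm inside the discriminator normalizes
# both halves together and cannot wipe out the real-vs-fake difference.
mixed = torch.cat([real_aug, fake_aug], dim=0)
targets = torch.cat([torch.ones(32), torch.zeros(32)])
netD = torch.nn.Sequential(torch.nn.Flatten(),
                           torch.nn.Linear(2 * 28 * 28, 1),
                           torch.nn.Sigmoid())
scores = netD(mixed).squeeze(1)  # to be compared against `targets`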

This method actually works! The result is not perfect, but at least the generator does not collapse anymore! During training, you can see that at some point the generator almost collapses, but then the outputs soon start to diversify again.

At this point I’m not quite sure how to get the rest of it right. Tune the training parameters, or go model shopping?

And then there is one last mystery. The GAN paper says we should train the discriminator more. But in reality the discriminator constantly beats the crap out of the generator, even when I train the generator 10 times as hard! Is this normal?

Actually, the difficulty of balancing the training of the generator and the discriminator is another major factor that makes GANs hard to train, along with generator collapse. The third factor is probably that, since two networks are competing, there is no single loss function to measure how well the training is going.

I’ll probably write something more elaborate if I do become better at training GANs.

And what is the one thing that I want to generate with a GAN?

Images of anime-style eyes!

My life as a Ph.D. student, S01E04: Game on! (Well, not really)

USC is renowned for its School of Cinematic Arts. That’s at best remotely related to my work, though.

But only after I came here did I discover that it has equally competitive game design programs.

I once played Cloud back in middle school. A beautiful game. And a while ago, I found out that this game was made at USC, and its creator is my compatriot and an alumnus of SJTU! Indie game development was definitely a rare thing back then in China, and arguably still is.

I’m taking CSCI-522 “Game Engine Development” this semester. I did plan to take a graphics course, but my boss is not teaching this semester, and the other graphics course really is just elementary, so I opted to try something different.

I’m not planning to become a game developer. (Well, strictly speaking, that possibility is not ruled out yet.) You don’t need a PhD to make games! Actually, I doubt whether such courses will make you a “game developer”. My classmates are almost exclusively Master’s students. I don’t know whether they are MSCS or MFA, but either way, being an expert in game engines will probably only get you into a big game company to work on their engines. You are still a programmer, just working on a different kind of program. You won’t be the next, say, Notch or Jonathan Blow. Doesn’t sound cool.

But being cool is one thing; actually getting something to work is another! I barely know anything about the game development community, but I guess in a big company you are guaranteed a good income, while making a living as an indie game developer is much harder. If you think you have a great idea about game design but people don’t buy it, as is often the case, you’d probably rather learn to draw cute girls than design games! At least you know someone’s gonna buy it if there is a waifu! (No, I’m joking.)

But all of that is none of my business. I’m taking this course to gain experience working on giant, complex projects. I heard that CSCI-522 is one of the more demanding courses offered by the computer science department. We are given a game engine that can barely do basic rendering, animation, scripting, etc., and we will work on it until it actually becomes something workable. Not being bold enough to read the source code of GCC or the Linux kernel, this is the most complicated thing I’ve ever gotten my hands on.

In game engines, efficiency is everything. Optimization is everything. That means you work in a lower-level language, and you don’t even want to use the standard libraries. You don’t want generic solutions; you want whatever works best for this particular program. So you do everything from scratch in C++: memory management, data structures, runtime type information, the event system, and more.

And then, on top of that, you figure out how to abstract away the differences between hardware platforms and graphics APIs. Yes, we are gonna make it work on Windows and PS Vita and iPad and more.

Then you do all the computation – physics and linear algebra and computational geometry – move objects around, pass control events around, gather draw calls and send them down the graphics pipeline.

Around that, you also need a scripting system and some sort of level editor. That really is a lot.

And while doing all that, you use all kinds of tricks to reduce coding work, do profiling, speed up compilation, and help debugging.

Yes. Engineering. But for a graphics researcher, these are very useful skills if you want your research to be more than a paper and become something actually useful.

Thus far the technical part of the course has not gone beyond my existing knowledge, but I did pick up some little tricks.

Here is one pre-class quiz question: you want to count the number of calls to certain functions. Suppose you want to implement several functions that each take two integer arguments a and b and return one integer. Define a macro called PROFILED_FUNC and a class called Tracker, such that

PROFILED_FUNC(func) {
    //do some computation
    //return some value
}

defines such a function func, and when you call Tracker::stat(), it prints out the number of times each such function has been called.

The macro would be something like

#define PROFILED_FUNC(func)                                  \
/* forward-declare the real function; ## pastes tokens */    \
int _unprofiled_ ## func(int a, int b);                      \
int func(int a, int b) {                                     \
    static int callCount = 0;                                \
    if (callCount == 0) {                                    \
        /* first call: register the counter; # stringizes */ \
        Tracker::record(#func, &callCount);                  \
    }                                                        \
    callCount++;                                             \
    return _unprofiled_ ## func(a, b);                       \
}                                                            \
int _unprofiled_ ## func(int a, int b)

The # and ## operators are really alien to me! (# turns the macro argument into a string literal, and ## pastes tokens together.) This made me even more convinced that I’ll never master C++.

And another trick: if you have a huge class definition in a header, you won’t want to include it every time you use the class, since that slows down compilation. Time is money. It is common knowledge that you can forward-declare a class, but then you can only use pointers to that class and cannot dereference them.

Now suppose you want to allocate an object of that class. Then you need to know its size. How do you avoid including the header?

You forward-declare a function that allocates the object and returns a pointer, and you put its definition in the implementation file of that class.

And then there are other tricks. Thank god we don’t have template metaprogramming!