I’ve been thinking about solving combination puzzles with computers for a long time.
I believe that every CS student has learned how to solve the 15-puzzle with heuristic search in an elementary AI course, or at least how to solve the simpler 8-puzzle with BFS in an elementary algorithms course. At first sight, Rubik’s Cube may seem to be a similar puzzle, just a bit bigger.
Yes, they are all “sequential move” puzzles, where you try to restore a bunch of things to some initial configuration by performing operations sequentially. But Rubik’s Cube is definitely much more difficult, both for humans and for computers. Size is not what really matters – you can make yourself an arbitrarily large (n^2-1)-puzzle, and it will still be easier than Rubik’s Cube, just quadratically more boring and time consuming. Apparently there are deep group-theoretic reasons; I once read a Scientific American article about why some of these puzzles are more difficult than others. But I think it can also be explained by the 15-puzzle being more local than Rubik’s Cube: in Rubik’s Cube, one move can affect a third of all the pieces, whereas in the 15-puzzle, one move affects only a single piece.
The consequence is that in the 15-puzzle you can see clearly which configuration is closer to being solved – you can judge it easily as a human, and you can design a heuristic easily for a computer. In Rubik’s Cube, this is much more difficult. I haven’t even seen one example of a good search heuristic for Rubik’s Cube. Maybe people just don’t bother finding one: by memorizing some 200 move sequences and practicing, people can solve a Rubik’s Cube in under 10 seconds, and the total number of states of a Rubik’s Cube, 43,252,003,274,489,856,000 (I can recite this number from memory), together with God’s number, 20, is just small enough to make it tractable to solve with a (bidirectional) brute-force search.
But when you have a larger combination puzzle – a 4×4×4, a 5×5×5 or bigger cube, or a megaminx – the number of possible states explodes and brute-force search becomes mostly useless.
Somehow people can still solve them by finding patterns. Someone has even designed an algorithm to solve an n×n×n cube in Θ(n^2/log n) moves – what a strange complexity, a log in the denominator!
Actually, finding solutions to all those exotic puzzles is my favorite part of this Rubik’s Cube game. The 200 or so formulas may have been found by brute-force computer search, but once you have played with a Rubik’s Cube for some time, and if you are clever enough, you can construct such formulas yourself for other puzzles. You start by just randomly – or maybe purposefully – turning the faces and seeing what happens. Then you suddenly find a short move sequence that does something interesting – say, rotating a small number of pieces in place without changing their positions, or moving some corner pieces without affecting any edge pieces.
Then you get a bunch of formulas, each of which keeps a certain set of pieces unchanged, moves a second set of pieces in some desired fashion, and moves the remaining pieces in some way you don’t really care about.
Then, with what you have at hand, you define your strategy. Be it edges-first, corners-first, layer-by-layer or whatever you may want to call it, you usually have several stages. In each stage, you keep unchanged a larger set of already-restored pieces than in the previous stage, and try to restore several more pieces in order to advance to the next stage.
You might not notice it, but what you are doing is actually repeatedly reducing the group of possible states to one of its subgroups, until you reach the trivial group consisting only of the solved state.
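(For the 3×3×3, the classic example of exactly this is Thistlethwaite’s algorithm, which walks down the chain of subgroups ⟨L, R, F, B, U, D⟩ ⊃ ⟨L, R, F, B, U2, D2⟩ ⊃ ⟨L, R, F2, B2, U2, D2⟩ ⊃ ⟨L2, R2, F2, B2, U2, D2⟩ ⊃ {1}, restricting which moves are still allowed at each stage until only the solved state remains.)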
The process might not be optimal, but it works.
But it seems that we have not formulated this process well enough to let computers do it. I’ve been thinking about it for a while. Maybe we humans and computers could cooperate: we define the stages so that the quotient group between two consecutive stages is small enough that we can brute-force search for a formula that brings us from one stage to the next. It would be even better if the computer could also learn to define the stages, but that part does involve some insight into the structure of the puzzle.
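To make the brute-force half concrete, here is a minimal sketch of that search in Python. Everything in it is a placeholder of mine rather than any existing library: apply_move(state, move) is whatever puzzle simulator you have, moves is its move list, and is_goal checks that the next stage has been reached (the pieces restored so far untouched, a few more pieces restored).

```python
from collections import deque

def find_formula(start, is_goal, apply_move, moves, max_depth=8):
    """Plain breadth-first search for a move sequence that takes `start`
    to any state accepted by `is_goal`.  States are assumed hashable."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path
        if len(path) >= max_depth:
            continue
        for m in moves:
            nxt = apply_move(state, m)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [m]))
    return None   # no formula within max_depth moves
```

A bidirectional version would roughly halve the search depth on each side, like the brute-force solver mentioned above.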
Yesterday when I was dealing with the GAN thing, I suddenly came up with the idea of solving Rubik’s Cubes with a neural network.
I think that’s quite natural. These days people seem to believe that neural networks can do everything. But after a quick google search I found that few people have considered the same thing.
So it’s time to have a try! Brute-force search is definitely ineffective for solving complex combination puzzles, but maybe neural networks could do the magic. The idea cannot be simpler: encode the state of each piece into one-hot vectors and build a network that predicts, from the input state, the next move towards solving the puzzle.
Then it comes to the encoding, the training data and the structure of the network. I’ve seen some people represent the state of a Rubik’s Cube by one-hot encoding the color of the sticker at each position… that sounds ridiculous. I think one-hot encoding the position of each piece makes more sense. We have 20 movable pieces, and each of them has 24 possible slots (a position combined with an orientation), so the input would be a 480-dimensional 0-1 vector. Note that this is actually more than what you need if you choose to use stickers: 48 movable stickers, each with 6 possible colors, giving a 288-dimensional 0-1 vector. But still, I think the positions of the pieces are more intrinsic.
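Just to pin down what I mean, here is a tiny sketch of this encoding; the slot numbering is one arbitrary convention of mine.

```python
import numpy as np

N_PIECES = 20   # 8 corners + 12 edges
N_SLOTS = 24    # corner: 8 positions x 3 orientations; edge: 12 positions x 2 orientations

def encode(piece_slots):
    """piece_slots[i] in [0, 24) is the (position, orientation) slot of piece i.
    Returns the flat 480-dimensional 0-1 vector."""
    x = np.zeros((N_PIECES, N_SLOTS), dtype=np.float32)
    x[np.arange(N_PIECES), piece_slots] = 1.0
    return x.reshape(-1)

# One possible convention for the solved state: corner i in slot 3*i,
# edge j in slot 2*j, all with orientation 0.
solved = encode([3 * i for i in range(8)] + [2 * j for j in range(12)])
```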
The training data would be obtained by scrambling the cube: we train the network to bring each state back to its previous state in the scramble sequence. The scramble sequences should be generated by a random walk: at each step, with a fixed probability (say 0.05, the reciprocal of 20, God’s number) we return to the solved state, otherwise we choose randomly from the 12 possible moves. This makes sense since it ensures that states are visited exponentially more often via short paths from the solved state than via long ones.
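A sketch of that data generation, where apply_move(state, move) and solved_state are again placeholders for whatever cube simulator is used:

```python
import random

MOVES = [face + turn for face in "UDLRFB" for turn in ("", "'")]   # the 12 quarter turns
INVERSE = {m: (m[:-1] if m.endswith("'") else m + "'") for m in MOVES}

def random_walk_scramble(p_restart=0.05, max_len=1000):
    """One scramble: at each step, stop (return to the solved state) with
    probability p_restart, otherwise append a random move."""
    seq = []
    while len(seq) < max_len:
        if random.random() < p_restart:
            break
        seq.append(random.choice(MOVES))
    return seq

def make_training_pairs(apply_move, solved_state, n_walks=10000):
    """(state, move) pairs where `move` undoes the last scramble move,
    i.e. brings the state back to its previous state in the walk."""
    pairs = []
    for _ in range(n_walks):
        state = solved_state
        for m in random_walk_scramble():
            state = apply_move(state, m)
            pairs.append((state, INVERSE[m]))
    return pairs
```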
Then we come to the fun part – the structure. We could just use some plain fully-connected layers. But remember that there is symmetry in the Rubik’s Cube, as I mentioned in a previous post. So if we rotate the input state, the same rotation should apply to the prediction as well. This can be enforced through weight sharing. Imagine that all the neurons, including the inputs, are divided into 24 classes, corresponding to the 24 elements of the chiral octahedral symmetry group O. Then if there is a link between a neuron a and a neuron b, and under some spatial rotation neuron a becomes neuron c and neuron b becomes neuron d, the weight between a and b and the weight between c and d should be identical! Now we have an interesting 24-fold weight sharing inside the network. This still looks like a fully connected network, yet it is strangely entangled with itself.
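Here is roughly the kind of layer I have in mind, sketched in PyTorch under one big assumption: that the 24 copies in each class can be indexed by the group elements themselves, so that “rotating the input” means left-multiplying those indices by a group element. The names (GroupSharedLinear, cayley, inverse) are my own, and building the 24×24 Cayley and inverse tables for O is left out.

```python
import torch
import torch.nn as nn

class GroupSharedLinear(nn.Module):
    """Fully connected layer with 24-fold weight sharing over a finite group G.
    Input and output neurons come in blocks of |G| copies, one per group element;
    the weight between copy g of an input block and copy h of an output block
    depends only on g^{-1}*h, so permuting the input copies by a group element
    permutes the output copies the same way (equivariance).
    `cayley[g, h] = g*h` and `inverse[g] = g^{-1}` are LongTensors describing G
    (for the cube, the chiral octahedral group O, |G| = 24)."""

    def __init__(self, in_blocks, out_blocks, cayley, inverse):
        super().__init__()
        G = cayley.shape[0]
        # rel[g, h] = index of g^{-1} * h
        self.register_buffer("rel", cayley[inverse])
        # one free parameter per (output block, input block, relative group element)
        self.theta = nn.Parameter(torch.randn(out_blocks, in_blocks, G) / (in_blocks * G) ** 0.5)

    def forward(self, x):                    # x: (batch, in_blocks, G)
        w = self.theta[:, :, self.rel]       # expanded weights, (out_blocks, in_blocks, G, G)
        return torch.einsum("bjg,ijgh->bih", x, w)
```

For the input layer this relies on the 24 slots of each piece really being permuted regularly by the 24 rotations, which I believe is the case but would need care when building the index tables; the final readout to the 12 move logits could either be an ordinary linear layer or let the same rotations permute the move labels.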
While I haven’t tried this yet, I expect the network to do more than merely predict the next step. By examining which input states cause each neuron to reach its maximum activation, we should be able to see what pattern the network is using to find the next move – what should be kept and what should be moved – which should in turn help us find the stages of the human solving strategy.