Hacker News

Far more impressive than the Deep Blue moment.

This time the computer did not win through pure brute force. Deep Blue relied on an opening book and massive computational power to explore the game tree; after the opening it was pretty much on its own, brute-forcing moves.

This technology used a neural network trained on hundreds of thousands of games, which provided the pattern-matching aspect, combined with brute-force move-sequence reading via Monte Carlo tree search... and 1,200 CPUs + 600 GPUs.



Assuming Titan X cards at single precision, those 600 GPUs are ~4 PFLOPS! Deep Blue extrapolated to today with Moore's law would only be ~72 TFLOPS.
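For what it's worth, the extrapolation checks out with back-of-the-envelope arithmetic. The inputs here are assumptions, not figures from the thread: Deep Blue's commonly cited ~11.38 GFLOPS in 1997, one doubling every 18 months, and ~6.6 single-precision TFLOPS per Titan X:

```python
# Back-of-the-envelope check (assumptions: Deep Blue ~11.38 GFLOPS in 1997,
# Moore's law doubling every 18 months, Titan X ~6.6 TFLOPS single precision).
deep_blue_1997 = 11.38e9                 # FLOPS
doublings = (2016 - 1997) * 12 / 18      # one doubling every 18 months
deep_blue_today = deep_blue_1997 * 2 ** doublings
alphago_gpus = 600 * 6.6e12              # 600 Titan X-class GPUs

print(f"Deep Blue extrapolated: ~{deep_blue_today / 1e12:.0f} TFLOPS")
print(f"600 GPUs: ~{alphago_gpus / 1e15:.1f} PFLOPS")
```

That lands at roughly 74 TFLOPS for Deep Blue, in the same ballpark as the ~72 TFLOPS figure, and ~4 PFLOPS for the GPUs.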

While DNN + RL + tree search is cool, the hardware requirements for AlphaGo to play at this level are staggering, and affordable only with a large marketing budget :)


Deep Blue was the 259th most powerful supercomputer of its time; what might the corresponding placement of AlphaGo be?


AlphaGo is using a 1,920 CPU / 280 GPU distributed setup for the Sedol games. One source reported the GPUs are Nvidia K40s. The GPUs give a peak possible performance of 470 teraflops. That would put it somewhere in the middle of the Top500 list, similar to Deep Blue in its time.

Note though that AlphaGo almost certainly uses single precision arithmetic -- for neural networks even single precision is overkill.
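The precision point matters for the numbers above. A quick sketch, assuming the GPUs really are K40s at Nvidia's published peak figures (~1.68 TFLOPS FP64 and ~5.04 TFLOPS FP32 with GPU Boost, both assumptions here):

```python
# Peak-FLOPS sketch for the reported 280-GPU setup (K40 peak figures assumed).
gpus = 280
fp64_peak = gpus * 1.68e12   # double precision, with GPU Boost
fp32_peak = gpus * 5.04e12   # single precision, with GPU Boost

print(f"FP64: ~{fp64_peak / 1e12:.0f} TFLOPS")
print(f"FP32: ~{fp32_peak / 1e15:.1f} PFLOPS")
```

The FP64 number comes out at ~470 TFLOPS, which suggests the quoted figure is a double-precision peak; in the single precision AlphaGo actually needs, the same cards would be roughly 3x faster.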

Also, the Top500 list is based on Linpack, which measures performance on computations that are strongly interconnected across the different processors of the system. AlphaGo's Monte Carlo tree search is more embarrassingly parallel, with evaluations of different positions being essentially independent computations.

It is much easier to build systems that handle embarrassingly parallel loads than the highly interconnected loads handled by the Top500 supercomputers. So even though the FLOPS are comparable, the systems are not.
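To illustrate what "embarrassingly parallel" means here: each position evaluation is an independent function call, so workers never need to talk to each other. A toy sketch, where `evaluate` is a made-up stand-in for a rollout or value-network call, not AlphaGo's actual code:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def evaluate(position):
    """Hypothetical stand-in for one independent rollout/evaluation."""
    rng = random.Random(position)     # deterministic fake result per position
    return position, rng.random()     # (position id, estimated win rate)

# No shared state and no cross-worker communication: this map could just as
# easily be spread over 1,000 machines, unlike a tightly coupled Linpack run.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(evaluate, range(100)))

best_position, best_score = max(results, key=lambda r: r[1])
print(f"best of 100 positions: {best_position} ({best_score:.3f})")
```

Linpack-style workloads, by contrast, exchange partial results between processors at every step, which is why interconnect bandwidth dominates supercomputer design.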



