The humble Raspberry Pi has been put to many uses by hobbyists over the years since it launched, but the latest use for the board is certainly a novel one: bundling hundreds of them together to make a low-cost supercomputer (of sorts).
As AnandTech reports, the High Performance Computing division at the Los Alamos National Laboratory has lumped 750 Raspberry Pi boards together, in a system designed (and built) by BitScope that consists of five rack-mounted Pi Cluster Modules, each of which holds 150 boards.
That means 750 processors with a total of 3,000 cores (four per board), which the National Laboratory notes makes for a highly effective HPC testbed for software developers and scientists who can't afford to pay for time on a real supercomputer.
In other words, this is not a ‘true’ supercomputer, but an inexpensive development testbed of a comparable scale, or as the makers put it, a “highly parallelized platform for test and validation of scalable systems software technologies”.
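The article doesn't say which software stack the lab runs on the cluster, but as a rough illustration of what “scalable systems software” development involves, here is a minimal sketch assuming a standard MPI toolchain (an assumption, not something the article confirms) – the smallest kind of program a developer might launch across hundreds of Pi boards to check that processes start, talk, and report in reliably:

```c
/* Minimal sketch, assuming an MPI toolchain (not specified in the article).
 * Each process reports its rank, the total process count, and its host. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's ID */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total processes in the job */

    char host[MPI_MAX_PROCESSOR_NAME];
    int len;
    MPI_Get_processor_name(host, &len);     /* which board this rank landed on */

    printf("rank %d of %d running on %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}
```

Launched across many boards at once (for example with something like `mpirun -np 3000 --hostfile nodes ./hello`, where the node count and hostfile are hypothetical here), even a trivial program like this exercises the “does it still behave at scale?” questions the testbed exists to answer.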
Gary Grider, leader of the HPC division at Los Alamos National Laboratory, commented: “It’s not like you can keep a petascale machine around for R&D work in scalable systems software. The Raspberry Pi modules let developers figure out how to write this software and get it to work reliably without having a dedicated testbed of the same size, which would cost a quarter billion dollars and use 25 megawatts of electricity.”
Nifty, eh? So it seems that the Raspberry Pi is not just about low-cost hobbyist computing, but low-cost supercomputing – or at least facilitating the development and testing of software for the latter.