Disposable HPC Nodes


Via our Twitter feed, we read an interesting article by Douglas Eadline, Ph.D. In it, Douglas argues, based on a linked research paper, that using lower-cost, lower-power nodes would enable an organisation to move to a Recovery-Oriented Computing (ROC) platform.

The idea is that each node would be deliberately constrained in power and processor speed, which is less of a sacrifice now that even so-called low-power processors are capable of 2GHz. If you could get each node under the £200 mark, then incremental failures wouldn't impact the whole.
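As a rough illustration of that trade-off (the budget, node price and failure count below are made-up figures, not numbers from the article), a few lines of Python make the point:

    # Spreading a fixed budget over many cheap nodes means each individual
    # failure removes only a small slice of capacity and money.
    # All figures here are hypothetical.
    budget_gbp = 20_000                          # total hardware spend (assumed)
    node_gbp = 200                               # target price per disposable node
    nodes = budget_gbp // node_gbp               # 100 nodes
    failed = 3                                   # assume a handful die over time

    remaining = (nodes - failed) / nodes
    print(f"{nodes} nodes bought; {failed} failures leave {remaining:.0%} "
          f"of capacity, for £{failed * node_gbp} of lost hardware")

    # A single traditional server of the same price is all-or-nothing:
    print(f"One £{budget_gbp} server failing leaves 0% of capacity")

Losing three £200 nodes barely dents throughput; losing one big box of the same total price takes everything down.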

This idea of using cheap compute nodes smacks of the approach Google has been running in its data centres. It also reminded me of a 2005 article by Robert X. Cringely, where he 'scooped' the Google Data Centre in a Crate idea. My key takeaway from this was the idea of a consumable resource. The crate, as delivered to the customer, was sealed by Google. The customer hooked up power, water and connectivity to the box via external sockets. As nodes inside the container died, compute capacity was reduced. Once the compute capacity dropped below a pre-agreed threshold (based on the customer SLA and rental cost), say 70%, Google would ship you a new crate, sync the data, and remove the old crate. Job done.
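The replacement trigger itself is easy to picture in code. Here is a minimal sketch of that lifecycle, assuming the 70% capacity floor mentioned above and an invented crate size and attrition rate:

    # The "sealed crate" lifecycle: nodes fail in place, nobody opens the
    # box, and once capacity drops below the agreed SLA floor the whole
    # crate gets swapped. Crate size and failure rate are hypothetical.
    TOTAL_NODES = 1_000        # nodes sealed inside the crate (assumed)
    SLA_FLOOR = 0.70           # replace the crate below 70% capacity
    FAILURES_PER_MONTH = 25    # assumed attrition rate

    alive, month = TOTAL_NODES, 0
    while alive / TOTAL_NODES >= SLA_FLOOR:
        month += 1
        alive -= FAILURES_PER_MONTH

    print(f"After {month} months capacity is {alive / TOTAL_NODES:.0%}: "
          "ship a new crate, sync the data, haul the old one away")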

Personally, we love this idea. Although we have standard servers from the likes of IBM and Dell, we're also investigating lower-cost commodity nodes for distributed computing. Many Mini-ITX boards that support both desktop and low-power CPUs can also take PCI Express cards (on riser boards), which lets us add a single GPU to the mix. A tiny compute node backed by GPU processing (CUDA, OpenCL) would give us an extremely cost-effective solution.
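As a first sanity check on one of those nodes, something like the following pyopencl snippet (pyopencl and a vendor OpenCL driver are assumed to be installed; neither is part of the setup described above) would confirm that a riser-mounted GPU is actually visible to the board:

    # List every OpenCL platform and device the node can see, so we can
    # check that a riser-mounted GPU is usable before the node joins the pool.
    # Assumes the pyopencl package and a vendor OpenCL driver are installed.
    import pyopencl as cl

    for platform in cl.get_platforms():
        print(f"Platform: {platform.name} ({platform.version})")
        for device in platform.get_devices():
            mem_mb = device.global_mem_size // (1024 * 1024)
            print(f"  Device: {device.name} -- "
                  f"{device.max_compute_units} compute units, {mem_mb} MB global memory")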


Feedback and comments are welcome at steven.algieri@scalabiliti.com