Researchers, including one of Indian origin, from the Massachusetts Institute of Technology (MIT) have developed a new data-centre caching system that uses flash memory and could yield substantial energy savings.
According to the researchers, flash memory, the kind of memory used in most smartphones, could dramatically reduce the number of cache servers required by a data centre.
Cache servers generally use random-access memory (RAM), which is fast but expensive and power-hungry.
The finding, presented at the International Conference on Very Large Databases in Munich, Germany, showed that per GB of memory, flash consumes about five per cent as much energy as RAM and costs about one-tenth as much.
It also has about 100 times the storage density, meaning that more data can be crammed into a smaller space.
A flash-based caching system would therefore not only cost less and draw less power; its density advantage also means far fewer servers would be needed to hold the same amount of cached data.
The drawback to flash is that it is much slower than RAM.
“That’s where the disbelief comes in. Access time in flash is 10,000 times longer than in DRAM [dynamic RAM],” said senior author Arvind.
The system, dubbed BlueCache, keeps up with the requests flooding the data centre by using the common computer science technique of “pipelining”.
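The idea behind pipelining is that every stage of handling a request works concurrently on different requests, so the long latency of any one stage is hidden rather than paid serially. The sketch below illustrates the general technique only; the stage names and threading model are illustrative assumptions, not BlueCache's actual design.

```python
import queue
import threading

def pipeline(stages, items):
    """Run each item through a chain of stage functions, with every
    stage processing a different item at the same time (pipelining).
    FIFO queues between single-threaded stages preserve request order."""
    qs = [queue.Queue() for _ in range(len(stages) + 1)]
    SENTINEL = object()  # marks the end of the request stream

    def worker(fn, q_in, q_out):
        while True:
            item = q_in.get()
            if item is SENTINEL:
                q_out.put(SENTINEL)  # pass shutdown signal downstream
                return
            q_out.put(fn(item))

    threads = [threading.Thread(target=worker, args=(fn, qs[i], qs[i + 1]))
               for i, fn in enumerate(stages)]
    for t in threads:
        t.start()

    for item in items:
        qs[0].put(item)  # feed requests in without waiting for results
    qs[0].put(SENTINEL)

    results = []
    while True:
        out = qs[-1].get()
        if out is SENTINEL:
            break
        results.append(out)
    for t in threads:
        t.join()
    return results

# Hypothetical cache-lookup stages: hash the key, then read flash.
replies = pipeline([lambda k: k + 1, lambda v: v * 2], [1, 2, 3])
```

While one request waits on a slow stage (such as a flash read), the earlier stages are already working on the next requests, which is how throughput stays high despite per-request latency.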
Inside a BlueCache server, the flash memory is connected to the central processor by a wire known as a “bus”, which, like any data connection, has a maximum capacity.
BlueCache amasses enough queries to exhaust that capacity before sending them to memory, ensuring that the system is always using communication bandwidth as efficiently as possible.
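The batching idea can be sketched as follows. This is a minimal illustration of grouping queries until a transfer fills the bus, not BlueCache's implementation; the capacity and request sizes are made-up parameters.

```python
def batch_requests(requests, bus_capacity_bytes, request_size_bytes=64):
    """Group incoming cache queries into batches sized to saturate the
    bus between the processor and flash, so no transfer wastes
    available bandwidth. Sizes here are illustrative only."""
    per_batch = max(1, bus_capacity_bytes // request_size_bytes)
    return [requests[i:i + per_batch]
            for i in range(0, len(requests), per_batch)]

# Ten queued requests, a bus that fits four 64-byte requests per transfer:
batches = batch_requests(list(range(10)), bus_capacity_bytes=256)
```

Sending one full batch per transfer amortises the bus's fixed per-transfer cost across many queries, which is the efficiency the article describes.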
With all these optimisations, BlueCache is able to perform write operations as efficiently as a DRAM-based system.