MIT scientists have developed a new data-compression technique to help computers and mobile devices run faster and perform more tasks simultaneously.
According to the scientists, the new process improves performance by reducing how often, and how much, data programs need to fetch from main memory.
Memory in modern computers manages and transfers data in fixed-size chunks, on which traditional compression techniques must operate.
Software, however, doesn’t store its data in fixed-size chunks. Instead it uses objects: data structures that contain various types of data and have variable sizes.
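That mismatch can be sketched in a few lines of Python. The 64-byte line size and the sample object below are illustrative assumptions, not details from the study:

```python
# Hardware moves (and traditionally compresses) data in fixed-size chunks,
# or 'cache lines', while software objects have variable sizes, so an
# object can straddle several lines.
LINE_SIZE = 64  # a common cache-line size in bytes (assumed for illustration)

obj = b'{"name": "sensor-17", "readings": [3, 1, 4, 1, 5, 9, 2, 6]}'
start = 40      # suppose this object happens to begin partway through a line

first_line = start // LINE_SIZE
last_line = (start + len(obj) - 1) // LINE_SIZE
lines_touched = last_line - first_line + 1  # this object straddles 2 lines
```

A line-based compressor sees two unrelated 64-byte chunks here, not one object, which is the boundary the MIT approach removes.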
MIT has now come up with a new approach for compressing objects across the memory hierarchy.
According to the scientists, programmers could benefit from this technique when programming in any modern programming language that stores and manages data in objects, without changing their code.
And because each computer application consumes less memory, it could run faster, helping a device to support more applications within its allotted memory.
Traditional architectures store data in blocks in a hierarchy of progressively larger and slower memories called ‘caches.’
Recently accessed blocks rise to the smaller, faster caches, while older blocks are moved to slower and larger caches, eventually ending back in main memory.
While this organization is flexible, it is costly, the scientists contend: To access memory, each cache needs to search for the address among its contents.
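The promotion-and-eviction behavior described above can be sketched as a toy model. The class and function names, level sizes, and LRU replacement policy are assumptions for illustration, not the hardware's actual design:

```python
from collections import OrderedDict

class CacheLevel:
    """One hierarchy level: a fixed number of block slots with LRU replacement."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # block address -> block data

    def insert(self, addr, data):
        """Store a block; return the least-recently-used block if one is evicted."""
        self.blocks[addr] = data
        self.blocks.move_to_end(addr)
        if len(self.blocks) > self.capacity:
            return self.blocks.popitem(last=False)
        return None

def access(levels, main_memory, addr):
    """Look up `addr` level by level, promote it, and trickle evictions down."""
    # Each level must search its own contents for the address -- the
    # lookup cost the researchers point to.
    data = None
    for level in levels:
        if addr in level.blocks:
            data = level.blocks.pop(addr)
            break
    if data is None:
        data = main_memory[addr]
    # Recently accessed blocks rise to the fastest level; evicted blocks
    # move down, eventually ending back in main memory.
    victim = levels[0].insert(addr, data)
    for lower in levels[1:]:
        if victim is None:
            break
        victim = lower.insert(*victim)
    if victim is not None:
        main_memory[victim[0]] = victim[1]
    return data

# Demo: two small cache levels in front of main memory.
main_memory = {addr: f"block{addr}" for addr in range(16)}
levels = [CacheLevel(2), CacheLevel(4)]
for addr in [3, 5, 7, 9, 11, 13]:
    access(levels, main_memory, addr)
```

After the demo accesses, the two most recent blocks sit in the fastest level while earlier ones have trickled down, mirroring the movement the article describes.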
The MIT team in October detailed a new approach called Hotpads that stores entire objects, tightly packed into hierarchical levels, or ‘pads.’
For their current study, the researchers designed another novel technique, called ‘Zippads,’ that leverages the Hotpads architecture to compress objects.
Objects start out uncompressed at the fastest level; when they’re evicted to slower levels, they’re compressed.
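The compress-on-eviction idea can be sketched with two ‘pads’ that hold whole variable-size objects rather than fixed blocks. The `Pad` class, byte capacities, and use of zlib are assumptions for illustration and stand in for whatever compression the hardware would actually use:

```python
import zlib
from collections import OrderedDict

class Pad:
    """One level ('pad') holding whole variable-size objects, not fixed blocks."""
    def __init__(self, capacity_bytes, compressed):
        self.capacity = capacity_bytes
        self.compressed = compressed  # slower pads store objects compressed
        self.objects = OrderedDict()  # object id -> stored bytes
        self.used = 0

    def put(self, oid, payload):
        """Store an object, compressing it if this is a slower level.
        Returns any (oid, raw_payload) pairs evicted to make room."""
        stored = zlib.compress(payload) if self.compressed else payload
        evicted = []
        while self.used + len(stored) > self.capacity and self.objects:
            old_id, old_data = self.objects.popitem(last=False)
            self.used -= len(old_data)
            raw = zlib.decompress(old_data) if self.compressed else old_data
            evicted.append((old_id, raw))
        self.objects[oid] = stored
        self.used += len(stored)
        return evicted

# Demo: objects live uncompressed in the fast pad and are compressed
# only when evicted to the slower one.
fast = Pad(capacity_bytes=256, compressed=False)
slow = Pad(capacity_bytes=1024, compressed=True)

obj = b"field_a=1;field_b=2;" * 10  # a highly compressible 'object'
for i in range(4):
    for victim_id, raw in fast.put(i, obj):
        slow.put(victim_id, raw)    # compressed on the way down
```

Because compression happens per object rather than per fixed-size line, the compressor sees each object whole, which is the advantage the researchers claim for object-based compression.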
“The motivation was trying to come up with a new memory hierarchy that could do object-based compression, instead of cache-line compression, because that’s how most modern programming languages manage data,” says first author Po-An Tsai, a graduate student from MIT CSAIL.
“All computer systems would benefit from this,” adds co-author and CSAIL professor Daniel Sanchez. “Programs become faster because they stop being bottlenecked by memory bandwidth.”
Image and content: BlickPixel-Pixabay/MIT