MIT researchers claim they have a way to make faster chips

Jonathan Vanian, Gigaom.com

A team of MIT researchers has discovered a possible way to make multicore chips a whole lot faster than they currently are, according to a recently published research paper.

The researchers’ work involves the creation of a scheduling technique called CDCS, which stands for computation and data co-scheduling. The technique distributes both data and computations throughout a chip, and the researchers claim that on a 64-core chip it increased computational speed by 46 percent while cutting power consumption by 36 percent. The boost in speed matters because multicore chips are becoming more prevalent in data centers and supercomputers as a way to increase performance.

The basic premise behind the new scheduling technique is that data needs to sit near the computation that uses it, and the best way to achieve that is with a combination of hardware and software that distributes both the data and the computations throughout the chip more effectively than before.

Current techniques like nonuniform cache access (NUCA), which basically involves storing cached data near the computations that use it, have worked so far, but they don't take into account the placement of the computations themselves.
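To make that limitation concrete, here is a minimal Python sketch of NUCA-style data placement on a hypothetical 8-by-8 mesh of 64 cores, each with its own cache bank. The mesh layout, the Manhattan-distance cost, and the helper names are assumptions for illustration, not details from the paper: a data page is cached in the bank closest to the cores already running its threads, while the thread placement itself is never reconsidered.

    # NUCA-style placement sketch (illustrative; not the paper's design).
    # Hypothetical 8x8 mesh: each of the 64 cores has a local cache bank,
    # and an access costs the Manhattan (hop) distance between core and bank.

    MESH = 8  # 8x8 mesh -> 64 cores, matching the chip size in the article

    def distance(a, b):
        """Hop distance between cores a and b on the mesh."""
        return abs(a % MESH - b % MESH) + abs(a // MESH - b // MESH)

    def nuca_place_data(page, page_accessors, thread_to_core):
        """Cache a data page in the bank nearest the threads that access it.
        The thread-to-core mapping is taken as given and never revisited,
        which is the blind spot the MIT work targets."""
        cores = [thread_to_core[t] for t in page_accessors[page]]
        return min(range(MESH * MESH),
                   key=lambda bank: sum(distance(bank, c) for c in cores))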

The new research touts an algorithm that optimally places the data and the computation together, rather than the data alone. This algorithm allows the researchers to anticipate where the data needs to be located.
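The paper's actual optimization is more involved, but a greedy Python sketch conveys the co-scheduling idea: rather than fixing threads to cores first and then shuffling data, each thread is assigned to the core whose nearby cache banks can hold its data at the lowest total distance, and the data is committed alongside it. The greedy ordering, the per-bank capacity, and the cost function are all assumptions made for illustration, not the CDCS algorithm itself.

    # Greedy co-scheduling sketch (illustrative only; not the CDCS algorithm).
    # Threads and their data pages are placed jointly: a thread goes to the
    # core whose nearby banks can absorb its pages most cheaply.

    MESH = 8                                        # same hypothetical 8x8 mesh

    def hops(a, b):
        """Manhattan distance between cores a and b on the mesh."""
        return abs(a % MESH - b % MESH) + abs(a // MESH - b // MESH)

    def co_schedule(threads, pages_of, bank_capacity=4):
        """Jointly map threads to cores and their data pages to nearby banks."""
        cores = list(range(MESH * MESH))
        bank_free = {c: bank_capacity for c in cores}   # free slots per bank
        used_cores = set()
        thread_to_core, page_to_bank = {}, {}

        def plan(core, pages):
            """Tentatively put pages in the closest banks that still have room."""
            free = dict(bank_free)
            cost, placement = 0, []
            for page in pages:
                bank = min((b for b in cores if free[b] > 0),
                           key=lambda b: hops(core, b))
                free[bank] -= 1
                placement.append((page, bank))
                cost += hops(core, bank)
            return cost, placement

        for t in threads:                           # greedy, one thread at a time
            core = min((c for c in cores if c not in used_cores),
                       key=lambda c: plan(c, pages_of[t])[0])
            used_cores.add(core)
            thread_to_core[t] = core
            for page, bank in plan(core, pages_of[t])[1]:
                bank_free[bank] -= 1                # commit the data placement
                page_to_bank[page] = bank
        return thread_to_core, page_to_bank

For example, co_schedule(["t0", "t1"], {"t0": ["p0", "p1"], "t1": ["p2"]}) would settle t0 and t1 on cores whose surrounding banks absorb p0, p1 and p2 with the fewest hops, deciding thread and data placement in one pass instead of two.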

“Now that the way to improve performance is to add more cores and move to larger-scale parallel systems, we’ve really seen that the key bottleneck is communication and memory accesses,” said MIT professor and author of the paper Daniel Sanchez in a statement. “A large part of what we did in the previous project was to place data close to computation. But what we’ve seen is that how you place that computation has a significant effect on how well you can place data nearby.”

While the CDCS-related hardware occupies 1 percent of the chip’s available space, the researchers believe the performance increase makes that trade-off worth it.