Date: July 28, 2015
Author: Smruti Sarangi
Due to continued technology scaling as predicted by Moore's law, the number of cores per chip is doubling roughly every two years. As a result, many applications that once required large clusters and mainframes are now being run on server processors that provide a unified view of memory and storage. Applications such as micro-blogging, algorithmic trading, and map-reduce based analytics, as well as traditional applications such as web servers and file servers, are being re-architected to run on large multicore servers. With the advent of faster memory and storage technologies, the adoption of servers for such applications has accelerated over the past few years. Along with improvements in hardware, we need improvements in software as well, particularly in the operating system and the hypervisor (virtual machine monitor). The aim of this project is to assess how prepared our current operating systems are to handle the deluge of novel workloads that are expected to run on them, and to identify which operating system features are most beneficial for a given class of workloads.