Clustered Storage: The Unsung Hero of Business Today

By Matt Hurford, systems engineering manager, NetApp ANZ

“Clustered storage” may not be a term making headlines around the world, but over the past four or five years, clustered, or scale-out, storage has become steadily more essential to meeting the unrelenting challenges of data growth.

As daily personal and business data usage has grown to astronomical levels, clustered storage has expanded from primarily technical applications such as engineering and CAD/CAM simulation to common business applications such as Oracle® databases, SAP® business management software, and Microsoft® Exchange. Over the past 12 months, industry analysts have even begun ranking vendors of clustered, or scale-out, storage. And yet clustered storage is still not a ubiquitous part of the data growth conversation.

Maybe that’s because it works.

It is hard to generate headlines without hotly contested debate, and there is little debate that scale-out storage is an ever more critical, even integral, component for organisations that rely on data to drive business results. That pretty much means all organisations these days, doesn’t it?

For those new to the topic, clustered storage is a large, highly adjustable pool of storage that can be expanded or reduced as needed, all while appearing to end users as a single system. Storage can be divided and allocated among different users, who can utilise the same body of storage while remaining securely partitioned from one another. A single large pool also allows data to be moved nondisruptively within the cluster to balance performance, to take a node offline, or simply to perform routine maintenance and upgrades. More than ever before, nondisruptive operations are making the transition from nice-to-have to need-to-have.
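To make the idea concrete, here is a minimal, hypothetical sketch of these concepts in Python. It is not any vendor's implementation or API; the class and method names are invented for illustration. It shows a pool that grows by adding nodes, volumes that are partitioned by tenant, and a volume move that stays invisible to clients because they address the cluster namespace rather than an individual node.

```python
# Hypothetical illustration of scale-out (clustered) storage concepts.
# Not a real storage system or any vendor's API; names are invented.

class Node:
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb
        self.volumes = {}            # (tenant, volume) -> size_gb

    def free_gb(self):
        return self.capacity_gb - sum(self.volumes.values())


class Cluster:
    """A pool of nodes that clients see as one system."""

    def __init__(self):
        self.nodes = {}              # node name -> Node
        self.placement = {}          # (tenant, volume) -> node name

    def add_node(self, node):
        # Scale out: joining a node grows the pool with no client-visible change.
        self.nodes[node.name] = node

    def create_volume(self, tenant, volume, size_gb):
        # Tenants are partitioned from each other: every volume is scoped by tenant.
        node = max(self.nodes.values(), key=Node.free_gb)
        if node.free_gb() < size_gb:
            raise RuntimeError("cluster is out of space; add another node")
        node.volumes[(tenant, volume)] = size_gb
        self.placement[(tenant, volume)] = node.name

    def move_volume(self, tenant, volume, dest_name):
        # "Nondisruptive" move: clients address (tenant, volume) through the
        # cluster namespace, so relocating the data between nodes is invisible.
        src = self.nodes[self.placement[(tenant, volume)]]
        dest = self.nodes[dest_name]
        size = src.volumes.pop((tenant, volume))
        dest.volumes[(tenant, volume)] = size
        self.placement[(tenant, volume)] = dest_name


# Usage: grow the pool, place a tenant volume, then drain a node for maintenance.
cluster = Cluster()
cluster.add_node(Node("node-a", capacity_gb=1000))
cluster.add_node(Node("node-b", capacity_gb=1000))
cluster.create_volume("finance", "ledger", 400)
cluster.move_volume("finance", "ledger", "node-b")  # e.g. before taking node-a offline
```

Real clustered storage systems handle far more, of course: replication during the move, failover, and quality-of-service balancing. The sketch only captures the single-namespace idea that makes those operations nondisruptive to end users.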

As year-over-year data growth reaches truly unprecedented levels, the proliferation of ever-larger datasets is causing IT departments to struggle to keep up, which can cause productivity issues throughout an organisation. Datasets are not simply growing; they are growing rapidly and consuming storage voraciously. Against a backdrop of such torrential data downpours, the disruption involved in simply adding new storage, or even upgrading existing storage, can have a dramatic impact on a company’s success.

Systems taken down or offline even for a few hours can cut employees off from everything from e-mail to sales figures. Half a day without access to critical information can easily cause an organisation to miss weekly or even quarterly business goals. And with more organisations operating globally, there are few, if any, true off-hours anymore; midnight at a home office often falls during critical operating hours in another part of the world. Increasingly, planned downtime can be more disruptive than unplanned outages, simply because it is more common.

As most CIOs and IT decision makers readily concede, dialing back is no longer an option. With voluminous datasets the new normal, companies have no choice but to keep pace with the velocity of today’s information. Without the agility and scalability of clustered storage, businesses and consumers alike can expect greater delays in services, more time and effort spent expanding storage infrastructures, and less access to information when it is needed, all of which will be reflected in a company’s bottom line.

As the data deluge rolls on, many organisations will soon discover that clustered storage can mean the difference between controlling data and being controlled by it. Clustering continues to offer organisations a highly efficient and effective way to manage the new, and intensifying, realities of this era.