
Don’t let datacentres be a waste of space

It is estimated that about 30% of storage within a datacentre is wasted through under-utilisation, caused largely by a failure to enforce routine storage management policies and best practices. Research has found that storage utilisation in datacentres fell from 67% in 2011 to 56% in 2017, equating to an effective cost increase of 11%.

A skills challenge is also affecting storage utilisation. Fewer, less-skilled general IT staff are administering more storage, so critical storage infrastructures are increasingly managed by systems administrators who lack core storage management competencies. This is compounded by the fact that new storage offerings, often the result of product re-engineering, frequently rely on old, inefficient storage management technologies and techniques.

Save on storage costs

Faced with constant pressure to reduce datacentre costs, most IT professionals concentrate on getting the biggest discount or best price when buying new storage. However, it is often simpler and faster to save on storage costs by improving the utilisation of existing storage infrastructures.

Storage utilisation is the percentage of used storage capacity relative to the amount of available or configured capacity – and it has been declining consistently in datacentres. Over the past six years, utilisation has dropped from 67% to an all-time low of 54% in 2016, before improving slightly to 56% in 2017.

For every 10% of unused capacity in a typical 300TB array, the cost of the wasted space is about $12,000 for a hybrid array and $60,000 for a solid-state array, including software, support and maintenance. These costs effectively double if storage utilisation falls to only 60%, and grow in the same steps for every further 10% reduction in capacity utilisation. Purchase costs for storage within integrated systems are close to those of hybrid storage arrays, although they can be higher. Costs for object storage systems, which are often used for analytics workloads, are about the same as those of low-cost all-disk (hard-disk drive) arrays.
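As a rough illustration, the per-10% figures above can be extrapolated linearly to estimate the cost of idle capacity at a given utilisation rate. This is a sketch, not a pricing model: the linear scaling is an assumption, and only the 300TB array size and per-10-percentage-point costs come from the figures quoted here.

```python
# Cost of unused capacity in a typical 300TB array, extrapolated
# linearly from the article's per-10-percentage-point figures (USD).
COST_PER_10PCT_UNUSED = {"hybrid": 12_000, "solid_state": 60_000}

def wasted_cost(utilisation: float, array_type: str) -> float:
    """Estimated cost of idle capacity at a given utilisation rate."""
    unused = 1.0 - utilisation
    return unused / 0.10 * COST_PER_10PCT_UNUSED[array_type]

# At the 2017 average of 56% utilisation, 44% of the array sits idle:
print(round(wasted_cost(0.56, "hybrid")))       # hybrid array, USD
print(round(wasted_cost(0.56, "solid_state")))  # all-flash array, USD
```

On these assumptions, the idle capacity in a single all-flash array represents more than a quarter of a million dollars of wasted spend.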

IT administration and staff costs can also be reduced when storage utilisation rates are increased: cutting the total amount of raw capacity under management requires fewer storage administrators, which in turn raises the productivity of storage administrators and even generalist administrators.

Inefficient storage utilisation

The amount of purchased storage array capacity has increased by 15% a year for the past five years, but this extra storage is not being used efficiently. This additional storage capacity is being administered by fewer staff, many of whom have less storage management expertise.

Certainly, storage array administration has become simpler, and provisioning has become automated because of application programming interfaces (APIs) and software integration between hypervisors and storage arrays. At the same time, more internal server or direct-attached storage is being used in integrated or hyper-converged systems. However, despite the relative ease of storage provisioning, the reduction in specialist storage administrators and lack of storage management practices are already having negative effects on storage utilisation and IT costs.

For example, the ongoing decrease in storage utilisation and increased waste could also be due to server, network and storage administration tasks becoming combined into the responsibility of one “integrated systems administrator”.

Because many organisations no longer have a dedicated storage group or specialty, storage best practices may not be implemented. In worst-case scenarios, the systems administrators may even lack any agreed, documented or enforced storage best practices. Also, there may be no agreed target for storage utilisation.

In some environments, such as those where Docker containers are created and deleted quickly, persistent storage from retired containers may be left behind. Over time, this becomes a problem as the quantity of wasted, unused container storage grows.

Storage or general administrators must manage this leftover storage using container management or tracking software, which can inform the storage system or the storage administrator when the retired container storage can be deleted and reused for other applications or containers. The process for managing containers and the data lifecycle for temporary data – creation, usage and deletion – is the same as that for any applications.

Also, some datacentres may have specific islands of storage for use in big data or analytics applications. The software used by some analytics applications (such as mirroring, used in some distributed file systems) may not be the most efficient from a storage cost and efficiency perspective.

For example, unless configured otherwise, older versions of Hadoop store files with the default replication factor of three, which requires three times as much storage as keeping a single copy.

Most efficient options

This is why administrators should use the most efficient, up-to-date RAID protection algorithms or erasure codes to optimise storage utilisation. Similarly, when using hypervisors, IT leaders and hypervisor administrators need to check periodically that they are using the most efficient storage options.
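To make the difference concrete, here is a minimal sketch comparing the raw capacity consumed by HDFS's default three-way replication against a Reed-Solomon RS(6,3) erasure-coding policy (one of the schemes offered in Hadoop 3.x). The 100TB logical data set is an illustrative assumption, not a figure from the text.

```python
# Raw capacity required to store the same logical data under
# three-way replication versus Reed-Solomon erasure coding.
def raw_capacity_tb(logical_tb: float, overhead: float) -> float:
    """Raw storage needed for a given logical data set."""
    return logical_tb * overhead

REPLICATION_3X = 3.0      # HDFS default: three full copies of every block
RS_6_3 = (6 + 3) / 6      # RS(6,3): 6 data + 3 parity blocks = 1.5x overhead

print(raw_capacity_tb(100, REPLICATION_3X))  # TB of raw storage, replicated
print(raw_capacity_tb(100, RS_6_3))          # TB of raw storage, erasure-coded
```

Halving the raw capacity needed for the same data, as in this comparison, feeds directly into the utilisation and cost figures discussed earlier.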

Additionally, they must review and analyse software-defined storage (SDS) products before purchase, and before using them to replace a system's inbuilt storage services.

Storage utilisation should not, of course, reach 100%, because that would leave no space for expansion. However, there is no reason for it to be as low as 56%. Instead, IT leaders should aim for a storage utilisation rate of about 80%, which has been a best-practice target for decades.

This 80% utilisation rate allows the 20% of free storage space to be used for peaks in demand and short-term growth.
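The gap between the 2017 average and the 80% target can be sketched with the article's own numbers. The 300TB array size is illustrative; the calculation simply asks how much configured capacity becomes avoidable when the same data is held at the target utilisation rate.

```python
# Configured capacity made avoidable by raising utilisation from the
# 2017 average (56%) to the 80% best-practice target.
ARRAY_TB = 300
CURRENT_UTIL = 0.56
TARGET_UTIL = 0.80

used_tb = ARRAY_TB * CURRENT_UTIL       # data actually stored
needed_tb = used_tb / TARGET_UTIL       # capacity required at 80% utilisation
avoidable_tb = ARRAY_TB - needed_tb     # capacity that need not be purchased

print(round(used_tb), round(needed_tb), round(avoidable_tb))
```

On these assumptions, almost a third of the array's configured capacity could be avoided while still keeping the full 20% headroom for peaks and short-term growth.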

However, any calculation of storage utilisation is complicated by the growth in unstructured data used in analytics, and by the amount of internal storage used with hyper-converged infrastructure (HCI) and/or integrated systems, which now make up an increasingly large proportion of storage within datacentres.

IT leaders must therefore prioritise these systems for monitoring and reporting on storage utilisation, because they may lack established best practices or storage management processes. This is especially important in large HCI and data analytics infrastructures, which can range from hundreds of terabytes to petabytes.

This article is based on an excerpt from the Gartner report “Storage utilisation is decreasing, stop wasting money” by Valdis Filks and Santhosh Rao.
