Data Reduction Methods for Virtualized Environments

The data storage field uses compression and deduplication to reduce capacity needs, and the same applies to virtual storage environments, where compression usually works at the file level while deduplication tends to work at the block level.
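The distinction can be sketched in a few lines. This is a minimal illustration, not a real storage stack: file-level compression squeezes one file's byte stream as a whole, while block-level schemes operate on fixed-size chunks independently (the 4 KiB block size here is an assumption for illustration).

```python
import zlib

BLOCK_SIZE = 4096  # hypothetical fixed block size, typical of block storage

def compress_file(data: bytes) -> bytes:
    """File-level compression: the entire file payload is one compressed stream."""
    return zlib.compress(data)

def split_into_blocks(data: bytes) -> list[bytes]:
    """Block-level view: the same payload seen as independent fixed-size blocks,
    which is the granularity block-level deduplication works at."""
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

# Highly redundant data (e.g., archival logs) compresses very well.
payload = b"INFO: nightly backup completed OK\n" * 2000
packed = compress_file(payload)
blocks = split_into_blocks(payload)
```

Running this, `packed` comes out far smaller than `payload`, while `blocks` simply partitions the same bytes for per-block processing.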

Compression is probably best suited to file servers that hold infrequently accessed data, such as archives. Accordingly, in virtual environments some administrators use NTFS file-system compression to reduce the data footprint on the underlying physical storage volumes.

But because NTFS compression consumes a lot of CPU cycles, it has become a legacy feature that is fading out. Since virtual servers run CPU-intensive workloads, relying on a compression algorithm in this scenario is not a prudent approach.

Deduplication can be implemented at the storage level if the storage array supports it. Dedup runs outside the virtual machines, so it can eliminate redundancy that exists across VMs.

For example, virtual machines that run the same operating system have identical system files. Dedup removes this redundancy, reducing the amount of physical storage the virtual machines require.
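A toy model of this cross-VM redundancy elimination can be sketched as follows. This is a simplified illustration of the general technique, not any vendor's implementation: each 4 KiB block is keyed by its content hash, identical blocks are stored once, and each virtual disk keeps only a "recipe" of hashes from which it can be rebuilt. The block size and data layout here are assumptions.

```python
import hashlib

BLOCK_SIZE = 4096  # hypothetical fixed dedup block size

def dedup_store(volumes: list[bytes]):
    """Store each unique block exactly once, keyed by its SHA-256 digest.
    Returns the shared block store and one recipe (list of digests) per volume."""
    store: dict[str, bytes] = {}
    recipes: list[list[str]] = []
    for data in volumes:
        recipe = []
        for off in range(0, len(data), BLOCK_SIZE):
            block = data[off:off + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            store.setdefault(digest, block)  # identical blocks stored once
            recipe.append(digest)
        recipes.append(recipe)
    return store, recipes

def rehydrate(store: dict[str, bytes], recipe: list[str]) -> bytes:
    """Rebuild a volume's full contents from its recipe of block digests."""
    return b"".join(store[d] for d in recipe)

# Two "VM disks" sharing the same 8-block OS image plus one unique block each.
os_image = b"".join(bytes([i]) * BLOCK_SIZE for i in range(8))
vm1 = os_image + b"A" * BLOCK_SIZE
vm2 = os_image + b"B" * BLOCK_SIZE
store, recipes = dedup_store([vm1, vm2])
```

The two disks hold 18 blocks in total, but the store keeps only 10 unique ones: the 8 shared OS blocks once, plus one unique block per VM. Either disk can still be reconstructed losslessly from its recipe.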

Hence, deduplication is currently the data reduction method of choice in virtualized environments, with volume-level compression used less frequently.


