In the world of data storage, high-end performance for critical applications can only be achieved with high IOPS. At the same time, businesses are reluctant to overspend on storage resources such as flash. In this article, we'll discuss how to use caching to accelerate data and get impressive returns at a fraction of the cost.
Technically speaking, caching can be implemented in three ways:
Write-around caching- This type of caching is also known as a "read-only" cache: writes bypass the cache and go directly to disk, and data is copied from disk to cache only after a certain number of read requests occur.
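The write-around behavior described above can be sketched in a few lines of Python. This is a minimal illustration, not a real storage driver; the promotion threshold and the dict-backed "tiers" are assumptions for demonstration.

```python
from collections import defaultdict

PROMOTE_AFTER = 3  # illustrative threshold, not a fixed industry value

class WriteAroundCache:
    """Writes bypass the cache; a block is promoted to the fast tier
    only after it has been read PROMOTE_AFTER times."""

    def __init__(self, backing_store):
        self.backing = backing_store       # slow tier (e.g. HDD), dict-like
        self.cache = {}                    # fast tier (e.g. flash)
        self.read_counts = defaultdict(int)

    def write(self, key, value):
        self.backing[key] = value          # write goes around the cache
        self.cache.pop(key, None)          # invalidate any stale cached copy

    def read(self, key):
        if key in self.cache:              # cache hit: served from flash
            return self.cache[key]
        value = self.backing[key]          # cache miss: read from disk
        self.read_counts[key] += 1
        if self.read_counts[key] >= PROMOTE_AFTER:
            self.cache[key] = value        # promote frequently read block
        return value
```

Because writes never touch the cache, this scheme avoids polluting flash with data that may never be read again; the trade-off is that freshly written data is always served from the slow tier at first.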
Write-through caching- In this technique, data is written to the flash tier and the hard disk tier at the same time. This provides data redundancy, and it also ensures that recently written data is quickly accessible from the fastest tier of storage. The only downside of this technique is that write I/O occurs at the speed of hard disk storage, since the application cannot accept the next transaction until the write operation has completed.
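A write-through cache can be sketched as below. The simulated disk delay and the dict-backed tiers are assumptions for illustration; the key point is that the write call does not return until the slow-tier write has finished.

```python
import time

class WriteThroughCache:
    """Every write lands on both the flash tier and the disk tier before
    the call returns, so the application sees hard-disk write latency."""

    def __init__(self, backing_store, disk_latency=0.0):
        self.backing = backing_store      # slow tier (e.g. HDD), dict-like
        self.cache = {}                   # fast tier (e.g. flash)
        self.disk_latency = disk_latency  # simulated disk write delay (s)

    def write(self, key, value):
        self.cache[key] = value           # fast tier copy
        time.sleep(self.disk_latency)     # simulate the slow disk write
        self.backing[key] = value         # slow tier copy, same operation

    def read(self, key):
        # Recently written data is always in cache; fall back to disk.
        return self.cache.get(key, self.backing.get(key))
```

Both tiers hold the data when `write` returns, which is why write-through is safe against cache failure but cannot hide disk write latency from the application.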
Write-back caching- In this technique, write I/O occurs at the speed of flash, because the cache sends an acknowledgement to the application as soon as the write lands on the flash tier. The cache then copies the data to the hard disk once enough writes have coalesced or queued up. This type of caching allows rapid application I/O in write-heavy environments. The downside is that data loss can occur if the cache resource fails before the data is copied to hard disk media.
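The write-back pattern can be sketched as follows. The flush threshold and dict-backed tiers are illustrative assumptions; the essential behavior is that writes are acknowledged from flash immediately and flushed to disk in batches.

```python
class WriteBackCache:
    """Writes are acknowledged as soon as they hit the flash tier; dirty
    data is flushed to disk only after enough writes have coalesced."""

    def __init__(self, backing_store, flush_threshold=4):
        self.backing = backing_store          # slow tier (e.g. HDD)
        self.cache = {}                       # fast tier (e.g. flash)
        self.dirty = set()                    # keys not yet on disk
        self.flush_threshold = flush_threshold

    def write(self, key, value):
        # Returns immediately: disk speed is off the critical path, but
        # dirty data is lost if the cache fails before the next flush.
        self.cache[key] = value
        self.dirty.add(key)
        if len(self.dirty) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # Coalesced writes go to the slow tier in one batch.
        for key in self.dirty:
            self.backing[key] = self.cache[key]
        self.dirty.clear()

    def read(self, key):
        return self.cache.get(key, self.backing.get(key))
```

Compared with the write-through sketch, only the batching in `write`/`flush` changes, which is exactly where the performance gain and the data-loss window both come from.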
Now, let's find out how caching can accelerate data.
Suppose an enterprise is transferring data between a local system and a virtual drive in the cloud for backup and disaster recovery purposes. One limitation of cloud storage is that data transfer speeds over the internet are not nearly as fast as transfer speeds within a local network. This means it normally takes much longer to read and write data to the cloud than it would to a local storage server, especially since internet connections can be inconsistent, with unpredictable speeds.
This is where gateways offered by companies such as StoneFly can help eliminate these bottlenecks.
Data is quickly written to and cached (temporarily stored) on lightning-fast solid state drives within the StoneFly Gateway before being securely transmitted to a remote iSCSI drive in the cloud. By accelerating the completion of writes, workloads on local workstations and servers can be significantly reduced.
Not only does a cache help accelerate data sent to the cloud, it can also store hot data that is in frequent use. This greatly improves read performance and speeds up access, since the same files no longer need to be repeatedly downloaded from the cloud to local systems, thereby reducing overall demand on bandwidth.
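The hot-data behavior described above is, in essence, an LRU read cache in front of the cloud. The sketch below is a generic illustration, not StoneFly's implementation; `download` stands in for a hypothetical cloud-fetch callable, and the capacity is an arbitrary example value.

```python
from collections import OrderedDict

class HotDataCache:
    """LRU cache for cloud objects: repeated reads of the same file are
    served locally instead of being re-downloaded over the internet."""

    def __init__(self, download, capacity=128):
        self.download = download          # callable: key -> object (cloud fetch)
        self.capacity = capacity          # max objects held locally
        self.entries = OrderedDict()      # insertion/access-ordered store

    def read(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)     # mark as recently used
            return self.entries[key]          # fast path: no download
        value = self.download(key)            # slow path: fetch from cloud
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
        return value
```

Each cache hit is one download that never happens, which is where the bandwidth and latency savings come from.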
For more details about this appliance, call 510.265.1616 or click on StoneFly Gateway Appliances.