
Storage Efficiency with “Awesome” FAST Cache

What is FAST Cache?

In a nutshell, FAST Cache is a collection of Enterprise Flash Drives that sits between the system's DRAM cache and the disks. FAST Cache holds a large percentage of the most frequently used data on high-performance Flash drives. While the response time of DRAM cache is measured in nanoseconds, the response time of FAST Cache is on the order of milliseconds.
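To make the layering concrete, here is a minimal sketch in Python (my own illustration, not EMC code) of the read path a block request could take through a DRAM cache, a flash cache, and the backing disks; the class name, tiers, and promotion policy are assumptions for illustration only, not FLARE internals.

    # Illustrative two-tier cache in front of disk; not FLARE internals.
    class TieredReadPath:
        def __init__(self):
            self.dram_cache = {}    # fastest, smallest tier (nanosecond-class)
            self.flash_cache = {}   # FAST Cache-like flash tier
            self.disk = {}          # backing HDDs (millisecond-class)

        def read(self, block_id):
            # 1. Try the DRAM cache first.
            if block_id in self.dram_cache:
                return self.dram_cache[block_id], "dram"
            # 2. Fall back to the flash cache; promote hot blocks into DRAM.
            if block_id in self.flash_cache:
                data = self.flash_cache[block_id]
                self.dram_cache[block_id] = data
                return data, "flash"
            # 3. Last resort: the disks. Promote into flash so repeat reads are fast.
            data = self.disk.get(block_id)
            self.flash_cache[block_id] = data
            return data, "disk"

    path = TieredReadPath()
    path.disk["block-42"] = b"data"
    print(path.read("block-42"))   # first read is served from disk
    print(path.read("block-42"))   # repeat reads hit the flash cache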

Why FAST Cache?

  • HDD (hard disk drive) performance has not kept up with server demands, leaving applications I/O deficient
  • Higher-capacity disks mean fewer drives are needed to meet the capacity requirement, yet the IOPS requirement keeps increasing (a rough sizing sketch follows this list)
  • FAST Cache solves this problem by extending controller cache with Flash drives, so most reads and writes are served directly from high-performance Flash
  • FAST Cache’s persistent nature keeps data cached even after a power failure
  • Benefits both reads and writes
  • Available from 73 GB to 2 TB of cache depending on the model, starting with FLARE 30
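To put the capacity-versus-IOPS point into numbers, here is a rough back-of-the-envelope sizing sketch in Python; the capacity, IOPS, per-drive figures, and the 80% offload ratio are illustrative assumptions, not measurements from this post.

    # Back-of-the-envelope drive sizing: capacity vs. IOPS.
    # All numbers are illustrative assumptions, not figures from the post.
    capacity_needed_gb = 4000    # application capacity requirement
    iops_needed = 3600           # application IOPS requirement
    drive_capacity_gb = 600      # one 15K RPM HDD
    drive_iops = 180             # rough IOPS a 15K HDD can sustain

    drives_for_capacity = -(-capacity_needed_gb // drive_capacity_gb)  # ceiling division -> 7
    drives_for_iops = -(-iops_needed // drive_iops)                    # ceiling division -> 20

    # Without a flash cache the spindle count is driven by IOPS, not capacity.
    print(drives_for_capacity, drives_for_iops)        # 7 vs. 20

    # If a flash cache absorbs ~80% of the IOPS, the disks only carry the rest.
    drives_with_cache = -(-int(iops_needed * 0.2) // drive_iops)       # -> 4
    print(drives_with_cache)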

We have been testing various database and virtual machine workloads on a CLARiiON storage array loaded with FAST Cache, and must admit that the results are “AWESOME”!! To be honest, we never thought applications would be able to utilize so much cache, since until now we had been using systems with 8 GB and 16 GB of cache. The tests proved us wrong, and here are some results that will blow you away.

Please find below some findings from Unisphere Analyzer on various application and VM loads.

Case Study – 1

Two LUNs were created from a 4+1 disk RAID 5 storage pool hosting virtual machines. Each LUN generates a load of around 1,800 IOPS during the scheduled AV scan, and almost the entire load is handled by FAST Cache.



Of the roughly 3,500 IOPS served by the LUNs, only about 100 IOPS come from the disks, with less than 20% utilization at the RAID group.
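As a quick sanity check of that Analyzer screenshot, the hit-rate arithmetic with the figures quoted above works out as follows (a simple sketch, nothing array-specific):

    # Hit-rate arithmetic for Case Study 1, using the figures quoted above.
    total_lun_iops = 3500    # IOPS served by the LUNs during the AV scan
    disk_iops = 100          # IOPS that actually reached the back-end disks

    cache_hit_ratio = (total_lun_iops - disk_iops) / total_lun_iops
    print(f"{cache_hit_ratio:.1%}")   # ~97.1% of the I/O never touches the spindles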

Case Study – 2: Database Query Operation

[Unisphere Analyzer charts for the database query workload]

What does this mean?

With 80% of IOPS being served by FAST Cache, the number of disks required at the back end comes down drastically, giving the same and at times even better performance with less than half the number of disks. Random I/O and read I/O benefit the most from FAST Cache. Plan for average workloads, and let FAST Cache handle the spikes and bursty loads.
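To illustrate the “plan for average, let the cache absorb the burst” idea, here is a small sketch; the workload numbers, per-drive IOPS, and the 80% hit ratio are assumptions for illustration, not measurements from these case studies.

    # Size the spindles for average load and let FAST Cache absorb the spike.
    # All numbers are illustrative assumptions.
    average_iops = 2000      # steady-state load the disks must sustain
    burst_iops = 6000        # short-lived spike (e.g. a scheduled AV scan)
    drive_iops = 180         # rough IOPS per 15K HDD
    cache_hit_ratio = 0.80   # share of the burst served by FAST Cache

    disks_for_peak_no_cache = -(-burst_iops // drive_iops)   # 34 drives to absorb the spike on spindles alone
    disks_for_average = -(-average_iops // drive_iops)       # 12 drives sized for the steady state

    burst_reaching_disks = burst_iops * (1 - cache_hit_ratio)        # 1200 IOPS leak through during the spike
    assert burst_reaching_disks <= disks_for_average * drive_iops    # the 12 drives still cope

    print(disks_for_peak_no_cache, disks_for_average)   # 34 vs. 12 -> less than half the spindles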

TRY IT TO BELIEVE IT !!!!!!

  1. March 20, 2011 at 9:36 pm

    I always love customer proof points.
    FAST Cache is a gem of a technology which we’re only scratching the surface on.

  2. Jason Burroughs
    March 21, 2011 at 11:42 am

What array model and FAST Cache setup do you have? We have an NS120 with 100 GB in pre-prod and an NS480 with 200 GB in prod that we just upgraded to R30, and we are anxious to see results.

  3. Dan Isaacs
    March 21, 2011 at 7:11 pm

Interesting data. Have you run the tests with a larger data set? Say, 10 or 11 datastores? I’m wondering how that performance would scale once the working set is larger than the cache. Would the benefit of FAST Cache scale, or drop off? And if it drops off, by how much?

How many VMs did you have in each datastore? What were you using to generate the load?
    Thanks! Love the graphs!

  4. March 21, 2011 at 8:32 pm

@Jason: It’s a 480 and 8 disks for FAST Cache (rest is NDA)

@Dan: 20 VMs per datastore, 40 VMs on that storage pool. We have tried with a DB that is 5 times larger than the FAST Cache capacity, and it still gave us amazing performance. What you have to note is not the size of the DB or the VMs but the amount of data that is active. All results are from production workloads.

  5. Xu Ming
    March 27, 2011 at 6:14 pm

Hi Sudrsn, could you tell me where you got these beautiful charts? Are they screenshots from some monitoring system, or did you create them yourself? If the latter, can you tell me how?

    I’m trying to create beautiful charts based on some data extracted from a database.

    Best Regards,
    Xu Ming

    • March 28, 2011 at 6:09 am

It’s a screenshot from Unisphere Analyzer, EMC’s performance analysis tool. You can still create similar graphs in Microsoft Excel easily.

  6. May 10, 2012 at 12:36 pm

    Hi Sudrsn,

    For case one, I have the following questions:

    (1) Are these two LUNs assigned as VMFS datastores to the ESX server, with the VMs then built on them?
    (2) Was the scheduled AV scan running inside the VM guest OS?

    The reason I want to know these factors is that FAST Cache only provides significant performance improvements for non-large-sequential I/O. I think an AV scan should be sequential I/O. But if it runs inside a VM, that might be different, as the load on the LUN would be VM file reads, which may be large-block random.

    • June 29, 2012 at 12:09 pm

      Yes, both your assumptions are right. We have assigned the LUNs to ESX, created VMFS volumes, and our AV runs inside the VMs.


