FAST Cache Configuration Best Practice

EMC’s FAST Cache has been one of the most talked-about technologies in recent times, thanks to the much-needed caching capabilities it brings to the EMC CLARiiON series of arrays. In a nutshell, FAST Cache is a collection of Enterprise Flash Drives that sits between the storage system’s DRAM Cache and the disks. FAST Cache holds a large percentage of the most frequently used data on high-performance Flash drives. While the response time of DRAM Cache is in nanoseconds, the response time of FAST Cache is in the order of milliseconds. In my previous posts, I explained the performance benefits that one could derive from FAST Cache on various workloads such as VDI, databases, etc.

We recently had an interesting discussion on the configuration best practice for the disks that constitute FAST Cache. Remember, FAST Cache is a collection of Enterprise Flash Drives (SSDs) configured in RAID to provide read and write caching capabilities. Scanning through the EMC knowledge base, I came across a Primus article (ID: emc251589) on the configuration best practice for FAST Cache and thought of sharing it with a larger audience, so that it can be useful when you configure FAST Cache in your environment. The Primus is constantly being updated, so I would recommend looking out for updated information by logging on to Powerlink.

Best Practice:

  • The order that the drives are added into FAST Cache is important, because that will dictate which drives are Primary and which are Secondary. The first drive added is the first Primary, the next drive added is its Secondary, the third drive is the Primary for the second mirror and so on.
  • For highest availability, ensure that the Primary and Secondary for each RAID 1 pair are on different buses (i.e. not just different enclosures on the same bus).
  • The FAST Cache drives can sustain very heavy workloads, but if they are all on the same bus, they could completely saturate this bus with I/O. This would especially be an issue if the drives were all on bus 0, because this is used to access the vault drives. Spread the FAST Cache drives over as many buses as possible.
  • Avoid placing the FAST Cache drives in the vault drive DAE (DAE 0_0) on a VNX, if possible, so that they use a different SAS lane than the vault drives. It is acceptable to use bus 0 for some, but preferably not all, of the FAST Cache drives (unless the CLARiiON only has a single backend bus).
  • As these drives are individual RAID 1 groups, the Primary drive in each pair will be more heavily accessed than its Secondary mirror, because read I/O operations only need to access one of these drives.
  • It is easiest to be certain of the order in which these drives are bound by using the following CLI command to bind the FAST Cache (see the example after this list):
    naviseccli cache -fast -create -disks disksList -mode rw -rtype r1
  • The FAST Cache driver has to track every I/O in order to calculate whether a block needs to be promoted to FAST Cache, which adds to the SP CPU utilization. Disabling FAST Cache for LUNs unlikely to need it will reduce this overhead.
  • After a few hours, FAST Cache will be using nearly all of the available drive capacity of its flash drives. For every new block that is added into FAST Cache, another must be removed, and these will be the blocks that are the oldest in terms of the most recent access. If there is not much FAST Cache capacity (in comparison to LUN capacity), blocks that are frequently accessed at certain times of day will have been removed by the time they are accessed again the next day. Restricting the use of FAST Cache to the LUNs, or Pools, which need it the most can increase the probability of a frequently accessed block still being in FAST Cache the next time it is needed.
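As an illustration of the create command referenced above, here is a hedged sketch of binding a four-drive FAST Cache from the CLI. The SP address and the disk IDs (in Bus_Enclosure_Disk form) are placeholders of my own, and authentication switches are omitted; the point is that alternating buses in the disk list makes each RAID 1 Primary and its Secondary land on different buses, per the ordering rule above.

    naviseccli -h <SP_address> cache -fast -create -disks 0_1_6 1_1_6 0_1_7 1_1_7 -mode rw -rtype r1

With this order, 0_1_6 (bus 0) becomes the first Primary and 1_1_6 (bus 1) its Secondary, while 0_1_7 and 1_1_7 form the second mirror, so each RAID 1 pair spans two different buses and no single bus carries all of the FAST Cache I/O.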

Here are some general rules of thumb for using FAST Cache:

  • Do not enable FAST Cache on any reserved / private LUNs, apart from metaLUN components.
  • Do not enable FAST Cache on MirrorView/S Secondary mirror LUNs or on Pools which contain these LUNs.
  • Do think about the type of data on a LUN and consider whether FAST Cache is needed. For example, log files are generally written and read sequentially across the whole LUN, so they would not be good candidates for FAST Cache. Avoiding the use of FAST Cache on unsuitable LUNs reduces the overhead of tracking I/O for promotion to FAST Cache.
  • Do not put all FAST Cache drives on a single bus (apart from on CLARiiONs with only one backend bus per SP, such as the CX4-120).
  • Do locate the Primary and Secondary mirrors for each RAID 1 pair of FAST Cache drives on different buses, to improve Availability. The order the drives are added into FAST Cache is the order in which they are bound, with the first drive being the first Primary; the second drive being the first Secondary; the third drive being the next Primary and so on.
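As a quick follow-up to the rules above, the lines below are a hedged sketch of how you might verify the FAST Cache state and switch FAST Cache off for a Pool that does not need it. The SP address and Pool ID are placeholders, and the exact switches can vary by FLARE/VNX OE release, so please verify them against the CLI reference for your array before use.

    naviseccli -h <SP_address> cache -fast -info
    naviseccli -h <SP_address> storagepool -modify -id 2 -fastcache off

The first command reports the FAST Cache state, mode and member disks; the second disables FAST Cache for Pool 2, which is the kind of scoping suggested for LUNs or Pools that hold sequential data such as log files.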

NOTE / DISCLAIMER: Please do check with the EMC Technical / Support team before making any changes to the FAST Cache configuration in your environment. These are general best practices shared by EMC and might vary depending on your environment.


13 thoughts on “FAST Cache Configuration Best Practice”

  1. What about using Enterprise Flash Drives to create LUNs? Can you mix EFDs with FC drives in the same DAE? Would that cause any performance issue with the different drives?

    • We can definitely mix EFDs and FC disks in the same DAE, but I would recommend distributing the EFDs across as many buses/backend loops as possible. It also depends on how you use the EFDs — is it for FAST? The reason for spreading them across the buses is that the backend bus shouldn’t be saturated, since EFDs can do a lot more I/O than FC disks, and we have to be careful about that.

  2. Sudharsan, you mention not to enable FAST Cache for MirrorView/S LUNs; is this also true for RecoverPoint LUNs (where RecoverPoint writes the logs for replication)?

    • I would not recommend enabling FAST Cache for the RecoverPoint journal/log LUN, considering that the data is mostly written sequentially, and sequential writes are not the right candidate for FAST Cache.

  3. A question.

    I was checking many docs and found that I can have EFD and FC in the same enclosure.
    But at what speed will the enclosure work?

  4. We have 10 SSDs, 50 SAS and 60 NL-SAS drives.
    Should we use just one pool consisting of all these drives for a FAST VP implementation, or should we create multiple pools?

  5. The info posted in this blog is outdated. Per the VNX best practices for performance (dated 12/2013), up to 8 flash drives should be placed in enclosure 0_0, and the SSD drives should be mirrored within the enclosure (if there are more than 8 drives, place them on a separate bus).

  6. Note that for the new VNX (VNX5200, 5400, 5800, 8000) you CANNOT mix FAST Cache and FAST VP disks. There is also no longer any requirement that the FAST Cache disks be placed in Bus 0 Enclosure 0. Check the VNX2 Best Practices document for further info.
