Marvell's DragonFly virtual storage accelerators have been around for close to two years. Back in April 2011, StorageReview had some hands-on time with the DragonFly storage accelerator. In August 2012, at the Flash Memory Summit, Marvell officially announced the general availability of the DragonFly platform with two members: NVRAM and NVCACHE.

Companies such as Fusion-io and Violin Memory are growing fast, thanks to exploding interest in the use of flash memory for enterprise storage. Bandwidth, latency, and IOPS requirements leave PCIe as the interface of choice for this purpose. In the consumer market, we have seen SATA SSDs gain acceptance as a front end for higher-capacity HDDs. In the enterprise market, PCIe SSDs are emerging as a back end for DRAM.

The base offering of Marvell's DragonFly platform is NVRAM, a PCIe card carrying up to 8 GB of ECC DRAM and up to 32 GB of SLC NAND flash. An ultracapacitor guards against sudden power loss by backing up the DRAM contents to the flash. The NVRAM can be used as a data cache for DAS, SAN, and NAS arrays irrespective of the protocol used to access the underlying storage. The use of DRAM allows for ultra-low latency and response times (on the order of tens of microseconds). The NVCACHE product, on the other hand, uses off-the-shelf SATA MLC SSDs to augment the storage capacity of the NVRAM product. The DRAM on the PCIe card serves to address the write IOPS and latency limitations of the SSDs, resulting in performance similar to that of high-end PCIe SSDs.
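As a rough illustration of the NVRAM concept, the sketch below models a DRAM write buffer that is dumped to SLC flash when a power-loss signal arrives and restored on the next power-up. Everything here (class and method names, capacities) is a hypothetical assumption for illustration; Marvell's actual firmware interface is not public.

```python
# Minimal sketch of a capacitor-protected DRAM write buffer.
# Hypothetical names and sizes -- not Marvell's actual interface.

class NVRAMCard:
    def __init__(self, dram_bytes=8 << 30, flash_bytes=32 << 30):
        assert flash_bytes >= dram_bytes   # flash must hold a full DRAM image
        self.dram = {}                     # lba -> data; low-latency write buffer
        self.flash = {}                    # persistent SLC backing for the DRAM image

    def write(self, lba, data):
        # Acknowledge as soon as the data lands in (capacitor-protected) DRAM.
        self.dram[lba] = data

    def read(self, lba):
        return self.dram.get(lba)

    def on_power_loss(self):
        # Ultracap-powered path: persist the DRAM contents to SLC flash.
        self.flash.update(self.dram)

    def on_power_restore(self):
        # Rebuild the DRAM image from flash before servicing I/O again.
        self.dram = dict(self.flash)


card = NVRAMCard()
card.write(0, b"journal-entry-1")
card.on_power_loss()
card.on_power_restore()
assert card.read(0) == b"journal-entry-1"
```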

Today, Marvell is announcing the NVDRIVE, the latest addition to the DragonFly family. Like the NVCACHE, the NVDRIVE is a PCIe SSD storage / caching solution, but instead of relying on external SATA SSDs, Marvell has integrated multiple SanDisk mSATA SSDs on the board. The hardware itself has a PCIe 2.0 x8 interface with 4 GB of DRAM and up to 1.5 TB of usable flash capacity. The unit can operate in multiple modes (NVRAM / SSD / cache). Ultracapacitors provide the power needed to back up the DRAM in case of power loss (the amount of DRAM is essentially dictated by the largest ultracapacitor that can fit on the board). The additional DRAM cache also serves to increase SSD endurance. Marvell provides a turn-key solution with firmware and software support (virtualization-aware block and file object filter drivers for caching on multiple Linux distributions). The software also allows for dynamic cache provisioning.
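To make the endurance argument concrete, here is a minimal sketch of a write-back block filter of the general kind described above: writes are absorbed and coalesced in DRAM, so the endurance-limited MLC SSDs see far fewer program cycles than the host issues. The class, threshold, and data structures are illustrative assumptions, not Marvell's filter-driver API.

```python
# Sketch of a write-back caching filter in front of an SSD tier.
# Illustrative only -- Marvell's actual filter drivers are not public.

class WriteBackFilter:
    def __init__(self, ssd, flush_threshold=1024):
        self.ssd = ssd              # dict standing in for the MLC SSD tier
        self.dirty = {}             # lba -> data; DRAM-resident dirty lines
        self.flush_threshold = flush_threshold

    def write(self, lba, data):
        # Overwrites to a hot LBA are coalesced in DRAM, so the SSD sees
        # one program cycle instead of many -- this is the endurance win.
        self.dirty[lba] = data
        if len(self.dirty) >= self.flush_threshold:
            self.flush()

    def read(self, lba):
        # Serve from DRAM if dirty, else fall through to the SSD tier.
        return self.dirty.get(lba, self.ssd.get(lba))

    def flush(self):
        self.ssd.update(self.dirty)
        self.dirty.clear()


ssd = {}
cache = WriteBackFilter(ssd, flush_threshold=2)
for _ in range(100):
    cache.write(7, b"hot-block")    # 100 logical writes to one block...
cache.write(8, b"cold-block")
cache.flush()
assert ssd[7] == b"hot-block"       # ...but only one flush to the SSD tier
```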

Marvell claims that the NVDRIVE is a price/performance leader, delivering more than 10x the TPM (transactions per minute) per dollar of the leading PCIe SSD caching solution (read: Fusion-io) in the HammerORA TPC-C MySQL 5.1 database benchmark.
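The metric behind that claim is simply benchmark throughput normalized by price. The snippet below shows the arithmetic with made-up placeholder figures; they are not actual benchmark results or street prices.

```python
# "TPM per dollar" is throughput divided by price.
# All numbers below are hypothetical placeholders for illustration.

def tpm_per_dollar(tpm, price_usd):
    return tpm / price_usd

nvdrive   = tpm_per_dollar(tpm=55_000, price_usd=2_000)   # hypothetical
incumbent = tpm_per_dollar(tpm=60_000, price_usd=24_000)  # hypothetical

print(f"ratio: {nvdrive / incumbent:.1f}x")  # 11.0x under these assumptions
```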

Comments

  • Kevin G - Thursday, January 3, 2013 - link

The layout of the NVDRIVE seems kinda odd. Offhand, it looks like it has 8 mSATA boards: 4 hidden in that picture behind a daughter board that carries 4 mSATA boards itself. Considering that there are some connectors that'll stack two high, it makes me wonder why they went with a daughter board for additional mSATA slots.

The ultracapacitor looks like a battery backup from the picture. If that's the case, why bother having it on the PCI-e card when it could be mounted elsewhere in a server chassis? (Several RAID controllers I've seen do this.)

If the server market is going to go with mSATA, why not a back plane on the front of a server chassis with numerous mSATA slots that could be hot swapped? mSATA cards are narrow enough that they could be oriented vertically in the front of a 1U chassis. They're thin enough that 16 could easily be mounted up front with room for two 2.5" HDDs for backup. With 256 GB mSATA cards, that'd be 4 TB of unformatted space, which is a respectable amount for a 1U server.
  • PaulJeff - Thursday, January 3, 2013 - link

I like the idea of an mSATA hot-swap bay; the high-density potential is great.

    That said, "SATA" is not considered "enterprise" in many IT circles. HP calls SATA "Nearline SAS" to not have to say SATA.

Perhaps mSAS is the way to go, but ratifying a new connector standard would be expensive and time consuming.
  • Kevin G - Thursday, January 3, 2013 - link

    The main advantage of SAS as a protocol over SATA is multipathing so that one controller can go down but the data is still accessible via a different controller.

The mSATA form factor uses the same connector as mini-PCIe. That spec defines one PCIe lane, with a provision for a second lane over the connector. Multi-lane support could be accomplished by having each PCIe lane go to a different controller. The PCIe specification does allow this, but I have not heard of any implementation using multiple controllers for a single slot. With SATA Express and NVMe, controllers could also use the mSATA/mini-PCIe form factor and allow for enterprise-class features. I'm kinda surprised that the SATA Express and NVMe working groups didn't converge on a common mSATA/mini-PCIe spec.
  • dilidolo - Thursday, January 3, 2013 - link

SAS's advantage is full duplex vs. half duplex on SATA.
A lot of SAS environments are not multipathed.
