NetApp Buys SolidFire And With It a Solid All-Flash Storage Advantage

In a huge move, NetApp has announced its intention to buy SolidFire. This acquisition bodes well for NetApp: it indicates an understanding of the market they are transitioning into, informed by the mistakes they have made. Having erred in trying to build an all-flash platform themselves, they are now purchasing one of the most sophisticated and scalable flash storage array companies available. SolidFire has been my favorite company in this space because they have done so many things correctly, from the management of the company to the technology they have built into their all-flash arrays. My hope is that NetApp leverages SolidFire's strong flash storage platform and does not worry too much about cannibalizing its existing products.

SolidFire offers an excellent scale-out architecture that is without doubt ahead of the other vendors when it comes to providing cloud storage features for cloud storage providers and enterprise cloud deployments. They have won over a large number of cloud providers, which is not a surprise: they have built in a number of critical scale-out and storage features which I have previously reviewed. SolidFire scale-out arrays scale past 100 nodes to provide a highly available view of storage, with quality-of-service controls and all the usual data reduction suspects (deduplication, compression and thin provisioning) plus replication built into the operating system. In my view, focusing only on storage misses an important point: storage lives within larger ecosystems. SolidFire works with OpenStack, CloudStack, Citrix and VMware frameworks and offers a solid, well-rounded group of storage features with a focus on complete virtualization and cloud solutions. You can look at the post OpenStack Announcement: SolidFire/Dell/Red Hat Unleash SolidFire Agile Infrastructure (Flash-Storage-Based) Cloud Reference Architecture to understand their involvement in cloud solutions.

The SolidFire architecture allows anywhere from four to 100 arrays to be clustered, providing petabytes of highly available flash storage. Couple this with quality of service and all the standard data reduction features and you end up with a really nice flash storage foundation. Their approach also allows unlike arrays with different types of SSDs to be clustered, using either iSCSI or 8/16Gb Fibre Channel. It's worth looking at some of the excellent features of this platform, which include seamlessly upgrading storage and seamlessly scaling out. Here are some nice videos that demonstrate some of their advanced features, followed by a quick sketch of what driving those features through the API looks like:

Scale and Upgrade Storage Seamlessly

Provision, Control and Change Storage Performance

SolidFire Cluster Install and Setup in 5 Minutes

Quality of Service
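
The QoS and provisioning capabilities shown in these videos are also exposed programmatically. As a rough illustration, here is a minimal Python sketch that creates a volume with a quality-of-service envelope through SolidFire's Element JSON-RPC API. The method and field names follow SolidFire's published Element API documentation, but treat the endpoint version and every value (cluster address, credentials, account ID, size, IOPS numbers) as placeholder assumptions to verify against your own Element OS release.

```python
import requests

MVIP = "https://192.0.2.10"   # placeholder cluster management virtual IP
AUTH = ("admin", "password")  # placeholder cluster admin credentials

def element_rpc(method, params=None):
    """POST one JSON-RPC request to the Element API endpoint (version 8.0 assumed)."""
    resp = requests.post(
        f"{MVIP}/json-rpc/8.0",
        json={"method": method, "params": params or {}, "id": 1},
        auth=AUTH,
        verify=False,  # storage management networks commonly use self-signed certificates
    )
    resp.raise_for_status()
    return resp.json()["result"]

# Create a 1 TiB thin-provisioned volume for tenant account 42 with a QoS
# envelope: at least 500 IOPS guaranteed, a sustained cap of 2,000 and short
# bursts up to 4,000.
result = element_rpc("CreateVolume", {
    "name": "tenant42-vol01",
    "accountID": 42,           # placeholder tenant account
    "totalSize": 1 * 1024**4,  # volume size in bytes
    "enable512e": True,        # 512-byte sector emulation for broad hypervisor support
    "qos": {"minIOPS": 500, "maxIOPS": 2000, "burstIOPS": 4000},
})
print("Created volume ID:", result["volumeID"])
```

The same pattern, one HTTPS endpoint and one JSON-RPC method per operation, covers the rest of the management surface, which is what makes the platform easy to fold into cloud automation.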

Why Lego Makes Sense in Toys, Software, Servers and Storage

Update: For my most recent post on the new class of flash arrays that fully support deduplication and a full range of storage features, see Recommendations for All-Flash Storage Arrays; Aiming Beyond Simply IOPS and Low Latency.

Legos are remarkable. You can build almost anything with them; however, that is not their only remarkable aspect. The underlying lesson is that from very small, interoperable parts you can efficiently build very big things. The lego approach has found its way into software practices as well as servers. Good software design emphasizes small generic functions that can be used extensively and flexibly by other functions. On the server front, while very large servers still exist and serve many useful functions, the majority of data centers are building clouds from a large number of small servers. These servers are 1RU or 2RU at most. Yes, we still see the larger 4RU and bigger servers, but they are infrequent and very specific in function. For example, if you check out Hadoop in data centers, you will discover that it runs on these smaller servers in clustered groups. If you check out database-as-a-service, increasingly it runs on an architecture of small servers and small storage units. Software such as Mesos and Chef makes it easier to provision, manage and create clouds populated by hundreds and thousands of small servers. The sweet spot is the two-processor Intel server. Oracle's Exadata architecture is a database poster child for this approach: a popular version of Exadata is composed of 14 storage servers and 8 database servers, and growing from an eighth rack to a full rack happens in a lego-like manner. The reality is that the combination of operating system software, application software and small servers offers architectures that can do very big things. Increasingly, software plays an ever bigger role in everything.

In storage, the lego approach is a big win for enterprises and cloud providers because it offers extreme scale-out and flexibility when combined with software, storage features and fast underlying hardware. Yes, you could get a 3RU or 18RU array architecture, but the win is in building with smaller 1RU arrays that behave elastically, resiliently and flexibly, like legos, to construct large storage architectures and scale-out storage spaces that offer much more than what is possible from a single large storage array. When I say more, I don't necessarily mean IOPS; all these flash storage arrays provide more IOPS than are needed, and arguably hybrid arrays also provide excellent performance. More important, architects can grow their architecture on an as-needed basis. They don't have to buy huge blocks of expensive flash storage if they don't need them.

Let's look at an example of the lego model: SolidFire's all-flash storage nodes. A starter configuration includes five 1RU storage nodes with complete data protection across the storage nodes. The key is that data protection doesn't just happen at the node or array level; it takes place at the cluster level. Because of the operating system software, each node offers features that become more powerful as nodes are aggregated. First, you can add new nodes and aggregate the flash storage into a single view and a single pool. Five, six, seven... ten... one hundred nodes can be pulled together to create large storage pools if desired. This is key: you can add new nodes as demand dictates, resulting in immediate capacity and increased performance. You can add these nodes (or remove them) with no downtime and with minimal performance impact.
This scale-out behavior allows you to go from five to one hundred nodes; SolidFire's petabyte scale-out goes well beyond the 280 TB of some competing systems, all the way to 3.4 PB. Second, you can guarantee performance levels because SolidFire offers quality-of-service features. Third, each node can be upgraded non-disruptively. Fourth, all the usual data reduction suspects are available: thin provisioning, deduplication and compression. Fifth, as in most clouds, automation is key; REST APIs and an advanced user interface allow automation of the storage cloud. Sixth, failure happens, and these nodes provide redundancy so data is not lost. Seventh, real-time replication is offered to cope with potential disaster recovery scenarios. Eighth, these nodes come with snapshot and recovery software. Ninth, complete high availability is provided in a distributed manner. Tenth, these lego nodes offer encryption. Finally, and importantly, you can mix older and newer storage nodes in the same pool. This lego-like model fits what has happened with servers and where enterprise and cloud designs, like new engineered systems, are heading. SolidFire is only one example of a flash storage vendor approaching this correctly; there are others adopting the lego model.
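
To make the lego behavior concrete at the automation layer, here is a minimal sketch, under the same assumptions as the earlier example, of folding newly racked nodes into a running cluster through the Element JSON-RPC API. The ListPendingNodes and AddNodes method names come from SolidFire's published API, but verify them (and the response fields) against your Element OS version; the address and credentials are placeholders.

```python
import requests

MVIP = "https://192.0.2.10"   # placeholder cluster management virtual IP
AUTH = ("admin", "password")  # placeholder cluster admin credentials

def element_rpc(method, params=None):
    """Send one JSON-RPC call to the Element API (version 8.0 assumed)."""
    resp = requests.post(f"{MVIP}/json-rpc/8.0",
                         json={"method": method, "params": params or {}, "id": 1},
                         auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()["result"]

# Nodes that are cabled and configured but not yet part of the cluster
# show up as "pending" nodes.
pending = element_rpc("ListPendingNodes")["pendingNodes"]
pending_ids = [node["pendingNodeID"] for node in pending]

if pending_ids:
    # AddNodes folds the pending nodes into the cluster; capacity and IOPS
    # grow while existing volumes stay online.
    added = element_rpc("AddNodes", {"pendingNodes": pending_ids})
    print("AddNodes result:", added)
else:
    print("No pending nodes to add.")
```

That is the whole "add another lego" workflow: rack the node, point it at the cluster, and one API call grows the pool.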

In the end, most clouds and many enterprises care about up-front costs, floor and rack space consumed, non-disruptive upgrades, power & cooling costs, performance, latency, capacity, data resiliency and high availability, and, importantly, about enterprise features like extreme scale-out storage pools, quality-of-service tunability, deduplication and the ability to add arrays or nodes incrementally without incurring huge costs. The lego approach to storage delivers on this.




 

OpenStack Announcement: SolidFire/Dell/Red Hat Unleash SolidFire Agile Infrastructure (Flash-Storage-Based) Cloud Reference Architecture

Very, very cool announcement today from SolidFire, Dell and Red Hat. Everyone interested in the next-generation datacenter should listen to this excellent keynote from SolidFire's CEO. They are way, way ahead of the rest of the flash storage crowd. Unlike announcements from some vendors whose native operating systems don't even support extreme scale-out and quality-of-service, and who announce their belated participation in the OpenStack Foundation with virtually no substantial meat, SolidFire delivered a strong announcement: a real-world, pre-tested, pre-validated Dell/Red Hat/SolidFire reference architecture for building a flash-based cloud. And they are not new to OpenStack; they have been supporting it for some time. In this talk, two eBay engineers, Subbu Allamaraju (Chief Engineer, Cloud) and John Brogan (Cloud Storage Engineering), discussed with Dave Wright (CEO, SolidFire) the challenges that moved them to look at OpenStack and how eBay is using OpenStack today. It is definitely worthwhile to listen to these knowledgeable eBay engineers provide a meaningful discussion of why OpenStack is important.

Then, Dave Wright discusses the newly created OpenStack cloud reference architecture. You can find the SolidFire Agile Infrastructure (AI) Reference Architecture here.


Dave goes into some detail on the reference architecture for a scale-out cloud. Dell and Red Hat executives also joined him on stage to further discuss the new reference architecture.

Recommended Reading: SolidFire Unveils A Reference Architecture for Large-Scale Corporate VDI Deployments

In a new whitepaper, SolidFire lays out an architectural blueprint for large corporate VDI deployments. This is an excellent read. It covers everything from an architectural overview to real-world use cases, infrastructure details, and both network and storage configurations. It also provides a methodology for leveraging SolidFire's clustering and QoS strengths: adjustable performance and capacity in a scalable, granular way, guaranteed performance levels, simplified VDI administration and a lower per-seat cost of deployment.

 




 

Recommended Viewing: Tuning MongoDB for Next Generation Storage Systems (Video)

The video, Tuning MongoDB for Next Generation (Flash) Storage, is now available.

Storage architecture can have a direct impact on MongoDB performance. Traditional relational databases were designed around legacy SAN devices and required that the storage systems be dedicated to the database; if you wanted more performance, you purchased a larger array. With NoSQL databases, the model has been flipped upside down. These databases are designed from the ground up to be distributed: more hosts equals more performance. By leveraging solid-state drive technology with concepts like storage virtualization, quality of service and horizontal scaling, next-generation storage systems like SolidFire are able to combine the comforts of traditional dedicated storage performance with the simplicity and scalability expected in a MongoDB environment.

In this video, a real-world clustered deployment of MongoDB on SolidFire is discussed. The deployment is a private, densely virtualized cloud serving as an e-commerce back-end. They use the QoS and cluster features of SolidFire and walk through the architecture. Very good video.
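
To make the "more hosts equals more performance" model concrete, here is a hedged PyMongo sketch of a client that spreads work across a replica set whose members each sit on their own flash-backed volume. The host names, replica set name and database/collection names are placeholders of my own, not details from the video.

```python
from pymongo import MongoClient

# Placeholder hosts: three mongod replica-set members, each on its own
# flash-backed volume rather than one shared monolithic array.
client = MongoClient(
    "mongodb://mongo1.example.com:27017,mongo2.example.com:27017,mongo3.example.com:27017",
    replicaSet="rs0",          # placeholder replica set name
    readPreference="nearest",  # spread reads across members instead of only the primary
    w="majority",              # writes acknowledged by a majority of members
)

orders = client.shop.orders   # placeholder database and collection

orders.insert_one({"sku": "ABC-123", "qty": 2})
print(orders.count_documents({"sku": "ABC-123"}))
```

Adding members (or, in a sharded cluster, shards) adds hosts and volumes in parallel, which is exactly the scaling shape a clustered storage back end like SolidFire is built to match.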


 




 

Enforcing Quality-of-Service on VMware VVols with Flash Storage Systems

VMware Virtual Volumes (VVols) is basically about making storage VMware-centric: the notion of the virtual machine disk (VMDK) becoming a first-class citizen in the storage world. LUN-centricity takes a back seat to a focus on the VMDK, allowing you to snapshot, clone and replicate on a per-VM basis. There is a really nice write-up on VVols by Cormac Hogan which describes why Virtual Volumes is a welcome shift in storage virtualization thinking. The video will help those new to VVols understand them.

One wonderful aspect of this is that it allows quality-of-service on a per-VM basis. This ability in effect gives us a true software-defined storage framework. Imagine buying an array that costs upwards of $400k and can serve up 2 million IOPS, yet is unable to manage those IOPS effectively across the VMs: management of those 2 million IOPS happens randomly and without definition. That is what some vendors want you to accept; they hope you don't notice the value of the quality-of-service aspect. These vendors, for the most part, are being left behind by software-defined features that they are incapable of addressing. I think this will show up dramatically in the next Gartner report, where a sea change is likely. It should be mentioned that some of these vendors don't even have in-line dedup and non-disruptive upgrades at this late date. Ignore them. There are vendors, even startups, that have these technologies. For example, Tintri, a very new company, has in-line data reduction features and is working on a host of VMware features, some around VVols. Getting back to QoS and VVols: there are some flash vendors that support quality-of-service and allow you to manage your flash storage performance, and guarantee it, on a per-VM basis. One such company is SolidFire. Remember, when you get to the cloud it is no longer simply about storage capacity; it is also about guaranteeing, and limiting, storage performance.

Other vendors like EMC and Tintri are working on some of these aspects; you can see in the demo what EMC is working on delivering.

 

SolidFire Demos VMware/CloudStack Cluster QoS Configured by CloudStack Plugin

SolidFire has put out an excellent short video showing how the SolidFire flash storage system integrates with the latest version of CloudStack. They show how to use their CloudStack plugin to connect a SolidFire array to two VMware clusters, and how to use quality-of-service in conjunction with them. Here is the technical demo video.

A detailed guide to configuring SolidFire storage in CloudStack gives technical details on how to create volumes, adjust IOPS on those volumes, add volumes to VMware and XenServer, define a volume as primary storage in CloudStack, set up advanced QoS configurations in CloudStack and much more.
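
The workflow in that guide can also be scripted against the CloudStack API. Below is a hedged sketch using the community cs Python client; the createDiskOffering call and its min/max IOPS parameters exist in CloudStack's API for storage QoS, but verify the exact parameter names against your CloudStack release, and treat the endpoint, keys, offering name and numbers as placeholders. With SolidFire primary storage managed by the plugin, volumes created from an offering like this should land on the array with matching QoS settings.

```python
from cs import CloudStack  # community CloudStack API client (pip install cs)

# Placeholder management-server endpoint and API key pair.
api = CloudStack(
    endpoint="http://cloudstack.example.com:8080/client/api",
    key="MY_API_KEY",
    secret="MY_SECRET_KEY",
)

# Define a disk offering that carries storage QoS limits so that every volume
# cut from it is provisioned with 500 guaranteed and 2,000 maximum IOPS.
offering = api.createDiskOffering(
    name="flash-500-2000-iops",
    displaytext="Flash volume, 500 min / 2000 max IOPS",
    disksize=100,   # offering size in GB
    miniops=500,
    maxiops=2000,
)
print(offering)
```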


You can catch part two of this, which covers the reference architecture, by checking out this page.

 
 




 

Citrix CloudPlatform: Flash-Based Citrix Reference Architecture

Virtualization has become the topic du jour among some of the flash array manufacturers. Regretfully, some of them lack the very basics of virtualization storage resource management (limiting or guaranteeing IOPS for a particular tenant). Others cannot cluster their arrays/nodes into one large storage space, let alone add a node/array to the cluster transparently in a live, non-disruptive upgrade. Some, even on a simple non-clustered array configuration, cannot do live upgrades without taking an outage. Others don't have dedup. Some don't have any of these storage features, which can be an issue if you are a storage company and wish to go public. So, many flash storage array vendors are unable to assign IOPS performance thresholds and thus end up running their storage performance randomly and indiscriminately. I ran into this reference architecture, which is apropos of setting up a Citrix CloudPlatform (leveraging Apache CloudStack) architecture on flash-based storage. It should be mentioned that, like very few other flash vendors, SolidFire offers all of the above features. This is a nice write-up.


 




 

Cloud Storage: In Search of the Next Generation Cloud Storage Platform

Today's post is the second one on cloud storage; we look at SolidFire and examine some of the things they have built into their platform. A year ago, there were only a few flash storage vendors. Today it is a crowded field, and last year's leaders are coasting on last year's success and marketing momentum. A new group of flash storage companies has emerged with as much emphasis on software and storage features as on hardware. Everyone has IOPS this year, but not everyone has key storage features like QoS, dedup, etc.

In highly multi-tenant environments such as clouds or highly virtualized architectures, resource management is an extremely important feature, and not just for CPU, network bandwidth and memory, but also for IOPS. There is a lot to learn from virtualization about the next generation of cloud deployments. SolidFire has an interesting read on this.


There are two interesting meta-cloud projects aimed at cloud infrastructure. For those unaware of these projects, their scope is stunning. OpenStack is a project aimed at providing infrastructure as a service (IaaS); it is managed by the OpenStack Foundation, and over 200 companies are part of the project. Within the OpenStack project are a number of inter-related sub-projects aimed at controlling compute, storage and networking resources.

It should be noted that a cornerstone of the project is that OpenStack's APIs be compatible with Amazon's EC2 and S3.

As if that weren't interesting enough, there is another cloud IaaS project, CloudStack, which provides many of the same features and has been around long enough to see significant adoption.

You can see what SolidFire is doing with Citrix's CloudPlatform (which is based on CloudStack) in this reference architecture document.


OpenStack has a lot of backers and the adoption rate is quite high; IBM's endorsement has given the project a big boost. Enterprise flash array vendors are providing reference architectures that show how OpenStack plays with their flash storage arrays. Today we will focus on one such company, SolidFire, which has done a lot of work to make sure that their storage products work with OpenStack and CloudStack, not to mention VMware. Both open-source stacks have large followings, but for today we are looking at what SolidFire is up to. For example, with regard to OpenStack, they have spent considerable energy producing reference architecture documents.

They have also provided a short OpenStack 101 video.


One thing you notice is that SolidFire provides a quality-of-service (QoS) architecture. Anyone who has worked in virtualized environments immediately recognizes the need for resource controls on tenants, and IO is a natural place to have such a control. Some vendors pretend that this feature is unnecessary, but the opposite is true: in a cloud or a highly virtualized environment with many tenants making demands on the IO subsystem, it makes perfectly good sense to have QoS. SolidFire provides an elegant solution that limits 'noisy neighbors' (tenants making extremely high demands on the IO subsystem and affecting the performance of other tenants). One extremely important point is that SolidFire's storage system was not only architected with QoS in mind, but each SolidFire node is self-contained yet functions as a cluster when combined with other nodes, exactly what you would expect for cloud storage.

It gets even better, however. SolidFire's Element OS delivers features that other storage vendors lack. One is deduplication: in virtualized environments this is an important feature, often reducing disk use by 25% to 40%. Without dedup, the enormous number of redundant files wastes a significant portion of available storage. Thin provisioning and real-time compression are also delivered.

In one recent cloud-oriented move, OnApp is working with SolidFire to allow finely tuned billing of IOPS; using OnApp's control panel, you can specify minimum, maximum and burst IOPS.
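
To give a feel for what those minimum, maximum and burst settings look like at the storage layer, here is a minimal Python sketch that adjusts the QoS envelope on an existing volume through SolidFire's Element JSON-RPC API, the kind of call a control panel or billing integration would ultimately drive. The method and field names follow SolidFire's published API, but verify them against your Element OS version; the address, credentials, volume ID and IOPS numbers are placeholders.

```python
import requests

MVIP = "https://192.0.2.10"   # placeholder cluster management virtual IP
AUTH = ("admin", "password")  # placeholder cluster admin credentials

def element_rpc(method, params=None):
    """Send one JSON-RPC call to the Element API (version 8.0 assumed)."""
    resp = requests.post(f"{MVIP}/json-rpc/8.0",
                         json={"method": method, "params": params or {}, "id": 1},
                         auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()["result"]

# Move volume 1234 to a higher tier: guarantee 1,000 IOPS, cap sustained
# IOPS at 5,000 and allow short bursts to 10,000. The change applies live,
# with no migration or downtime.
element_rpc("ModifyVolume", {
    "volumeID": 1234,  # placeholder volume ID
    "qos": {"minIOPS": 1000, "maxIOPS": 5000, "burstIOPS": 10000},
})

# Read the settings back to confirm the new envelope.
volumes = element_rpc("ListActiveVolumes")["volumes"]
qos = next((v["qos"] for v in volumes if v["volumeID"] == 1234), None)
print("New QoS:", qos)
```

Because the minimum is a guarantee and the maximum is a ceiling, a noisy tenant can burst only within its own envelope rather than at its neighbors' expense.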


One notices that, unlike a number of vendors that like to point to their high peak IOPS numbers, SolidFire's aim is quite a bit more sophisticated and centered on what is increasingly the future: cloud deployments. Deliver QoS resource controls for cloud flash storage (which many vendors lack), deliver high-performance flash storage, provide compression, snapshots/cloning, dedup and thin provisioning, and provide hardware that aggregates into useful storage clusters. In the end, unlike some vendors that deliver terabytes of isolated islands of arrays, SolidFire delivers petabytes of cloud-optimized and resource-managed clusters, and that is what the next generation of cloud storage should look like.

 
 

