While some flash array vendors struggle to even get 3D NAND into their arrays, Samsung and Seagate have unleashed next-generation SSDs that simply topple the scale of these devices. Consider that some flash array vendors offer 35 TB of storage capacity in 3 rack units, while Samsung is offering 32 TB in a single 2.5″ SSD, and you get a feel for the magnitude of what is happening in this segment. There are of course other aspects to be considered, but it is a stunning development that offers a number of happy alternatives to storage engineers.
Samsung put their 32 TB SSD into a 2.5″ form factor while Seagate’s is in a 3.5″ form factor.
You can read more on Samsung’s SSD here and here.
You can read more on Seagate’s SSD here.
In a huge move, NetApp has announced its intention to buy SolidFire. This acquisition bodes well for NetApp. It indicates an understanding, informed by past mistakes, of the market they are transitioning into. Having erred in trying to build it themselves, they are now purchasing one of the most sophisticated and scalable flash storage array companies available.

SolidFire has been my favorite company in this space because they have done so many things correctly – from the management of the company to the technology they have built into their all-flash arrays. My hope is that NetApp leverages SolidFire’s strong flash storage platform and does not worry too much about cannibalizing existing products. SolidFire offers an excellent scale-out architecture that is without doubt ahead of all the other vendors when it comes to providing cloud storage features for cloud storage providers and enterprise cloud deployments. They have won over a large number of cloud providers. This is not a surprise – they have built in a number of critical scale-out and storage features which I have previously reviewed. SolidFire scale-out arrays scale past 100 nodes to provide a highly available view of storage, with quality-of-service controls and the usual data services built into the operating system (replication, deduplication, compression, etc.).

In my view, focusing only on storage misses an important point – storage lives within larger ecosystems. SolidFire works with OpenStack, CloudStack, Citrix and VMware frameworks and offers a solid, well-rounded group of storage features with a focus on complete virtualization and cloud solutions. You can look at the post OpenStack Announcement: SolidFire/Dell/Red Hat Unleash SolidFire Agile Infrastructure (Flash-Storage-Based) Cloud Reference Architecture to understand their involvement in cloud solutions.
The SolidFire architecture allows anywhere from four to 100 arrays to be clustered, providing petabytes of highly available flash storage. Couple this with quality of service and all the standard data reduction features and you end up with a really nice flash storage foundation. Their approach also allows unlike arrays, with different types of SSDs, to be clustered. They can make use of either iSCSI or 8/16Gb Fibre Channel. It is worth looking at some of the excellent features of this platform, which include seamlessly upgrading storage and seamlessly scaling out. Here are some nice videos that demonstrate some of their advanced features:
Scale and Upgrade Storage Seamlessly
Provision, Control and Change Storage Performance
SolidFire Cluster Install and Setup in 5 Minutes
Quality of Service
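The quality-of-service controls shown above can also be driven programmatically through SolidFire’s Element JSON-RPC API. As a hedged sketch, here is what building a per-volume QoS change request might look like; the `ModifyVolume` method and `qos` field names follow the publicly documented Element API, but the volume ID and IOPS values are illustrative assumptions:

```python
import json

def build_modify_volume_qos(volume_id, min_iops, max_iops, burst_iops):
    """Build a SolidFire Element API JSON-RPC request body that sets
    per-volume QoS limits (minimum, maximum, and burst IOPS)."""
    return json.dumps({
        "method": "ModifyVolume",
        "params": {
            "volumeID": volume_id,
            "qos": {
                "minIOPS": min_iops,
                "maxIOPS": max_iops,
                "burstIOPS": burst_iops,
            },
        },
        "id": 1,
    })

# You would POST this body to the cluster's management endpoint
# (https://<cluster-mvip>/json-rpc/<version>) with admin credentials.
payload = build_modify_volume_qos(217, 1000, 5000, 8000)
```

The min/max/burst triple is what lets a cluster guarantee a floor of performance to one tenant while capping noisy neighbors, which is the heart of the QoS story in the videos.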
Virtualization is a key aspect of modern computing architectures. Often the choice to go to hardware-level virtualization damages the performance characteristics of our virtual machines. As I have mentioned before – SmartOS zones and Docker offer a better way to go. In this presentation, Bryan Cantrill of Joyent provides a rapid-fire and humorous talk highlighting the history of virtualization and the advantages of running Couchbase containers leveraging Triton, SmartOS and Docker. Also demonstrated is a remarkable display of Triton elasticity – easily creating a number of Couchbase servers on the fly, all within lightweight virtualized containers running across a datacenter. What is offered is a sophisticated, highly scalable, highly performant, elastic solution for a datacenter.
It can also be found here.
It gets better. If you are interested in deploying the Couchbase containers yourself, it is fairly straightforward and you can get the “recipe” from the following blog:
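The blog’s recipe is the authoritative source, but to give a rough idea of the shape of such a deployment, here is a sketch that assembles a `docker run` command for a single Couchbase container. The container name is an assumption; the ports are Couchbase’s usual admin (8091), views (8092), and data (11210) ports:

```python
def couchbase_run_command(name, ports=(8091, 8092, 11210)):
    """Assemble a `docker run` command that starts one detached
    Couchbase container, publishing the usual service ports."""
    port_flags = " ".join("-p %d:%d" % (p, p) for p in ports)
    return "docker run -d --name %s %s couchbase/server" % (name, port_flags)

cmd = couchbase_run_command("cb-node1")
```

After the container starts, the Couchbase admin console is served on port 8091, from which a node can be initialized or joined to a cluster.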
Following on the previous post, today’s post discusses an interesting Couchbase use-case.
Often the questions about a particular technology or product are:
- who is using it successfully?
- how is it being used?
- how scalable is it?
- does it have good performance (usually within a context)?
In Couchbase’s case there is a large volume of customer use-case examples. One example is LinkedIn. In the first presentation there is a discussion of how LinkedIn uses Couchbase:
Within this context, a natural second presentation covers Couchbase Server scalability and performance at LinkedIn:
Increasingly, we are seeing NoSQL databases used across virtually the whole spectrum of use-cases and companies. One highly successful NoSQL implementation comes from Couchbase, and I ran across a presentation which provides a nice introduction to it.
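To make the document model concrete: Couchbase stores JSON documents under keys, and a common convention is a typed key such as `user::42`. A minimal sketch (the key pattern and field names here are illustrative assumptions, and the commented SDK calls follow the python-couchbase 2.x client):

```python
import json

def make_user_doc(user_id, name, email):
    """Build a (key, document) pair using the typed-key convention
    commonly seen in Couchbase data models."""
    key = "user::%s" % user_id
    doc = {"type": "user", "name": name, "email": email}
    return key, doc

key, doc = make_user_doc(42, "Ada", "ada@example.com")
# Against a running cluster you would then do something like:
#   from couchbase.bucket import Bucket
#   cb = Bucket('couchbase://localhost/default')
#   cb.upsert(key, doc)
#   print(cb.get(key).value)
print(json.dumps(doc))
```

The `type` field in each document is what lets views and N1QL queries distinguish users from other document kinds in the same bucket.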
For more Couchbase learning – check out Couchbase’s presentation resources.
The arrival of Apache Solr 5 has brought with it a number of new features. In these talks you will see some of the advantages of using Solr as your search engine.
There is a Solr Meetup on Tuesday, August 11th in downtown Seattle discussing two Solr topics:
How to register for the talk: http://www.meetup.com/Seattle-Solr-Lucene-Meetup/events/223899316/
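For those new to Solr, querying it is just an HTTP GET against a core’s `select` handler. A minimal sketch of building such a query URL; the host and core name (`listings`) are assumptions, while `q`, `rows`, and `wt` are standard Solr query parameters:

```python
import urllib.parse

def solr_select_url(host, core, query, rows=10):
    """Build a Solr select URL asking for JSON results."""
    params = urllib.parse.urlencode({"q": query, "rows": rows, "wt": "json"})
    return "http://%s/solr/%s/select?%s" % (host, core, params)

url = solr_select_url("localhost:8983", "listings", "city:Seattle")
# Fetching this URL (e.g. with urllib.request) returns matching
# documents as JSON under response["response"]["docs"].
```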
This talk goes through the software stack needed to create a real estate portal. At a high level, the presenter covers the initial schedule and how it was met. Clojure, along with a number of software components and middleware, was an integral part of the project.
It is certainly worth viewing the presentation by Mohit Thatte, in which he provides a deep dive into Clojure data structures. The video can be found here:
The slides can be found at slideshare.
The move to the cloud is on. Increasingly, even companies that are mandated to comply with various corporate and national privacy and security standards, such as HIPAA, are looking at ways to extend their company networks to include auto-scaling clouds while still abiding by those standards. With the availability of sophisticated cloud providers such as AWS, Azure, Joyent and others, it is increasingly attractive for companies to figure out ways to burst out of their corporate networks and transparently use these clouds.
In thinking about this, it becomes interesting to figure out the “how-to” of doing it. A number of cloud providers continue to work on stretching on-premises infrastructure into the cloud. We can look at what Microsoft Azure has been doing to understand why companies are looking at merging the on-premises datacenter with the cloud. Microsoft has provided an example of how a company could extend its on-premises datacenter with the Azure cloud: their interactive diagram lets you turn information layers on and off within the datacenter/cloud architecture. You can see the example in the image below.
The diagram above represents an example of how companies can now merge the Azure cloud and their company network securely. You can find more details at the link below.
There are all sorts of challenges but companies like Microsoft are increasingly delivering ways to securely extend corporate networks into auto-scaling clouds.
Another company that allows bursting to the cloud is Cloudian. Their focus on providing an enterprise hybrid cloud allows corporate networks to connect safely with clouds. In Cloudian’s case, their product, HyperStore, combined with the Amazon cloud, allows for a next-generation hybrid IT cloud. The Cloudian/Amazon combination provides a 100 percent S3-compliant hybrid cloud storage platform. Dynamic migration from on-premises physical storage to off-premises cloud storage allows near-infinite capacity scaling while meeting the security and cost requirements of enterprise environments. Service providers who offer multi-SLA storage services also benefit from this hybrid structure. You can read more about it:
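To illustrate the kind of decision such a hybrid platform automates, here is a toy sketch of an age-based tiering policy that selects on-premises objects old enough to migrate to S3-compatible cloud storage. The threshold and object list are made up for illustration; real products apply far richer policies (access frequency, SLA, cost):

```python
import datetime

def select_for_cloud_tier(objects, max_age_days=90, now=None):
    """Return the names of objects whose last-modified time is older
    than the cutoff - candidates for migration to the cloud tier.

    objects: iterable of (name, last_modified_datetime) pairs.
    """
    now = now or datetime.datetime.utcnow()
    cutoff = now - datetime.timedelta(days=max_age_days)
    return [name for name, mtime in objects if mtime < cutoff]

inventory = [
    ("reports/q1.pdf", datetime.datetime(2015, 1, 15)),
    ("reports/q2.pdf", datetime.datetime(2015, 7, 1)),
]
to_migrate = select_for_cloud_tier(inventory, max_age_days=90,
                                   now=datetime.datetime(2015, 8, 1))
```

Because the cloud side is S3-compliant, the migration step itself can reuse standard S3 tooling rather than a proprietary transfer path, which is much of the appeal of this architecture.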
In the next extending-into-the-cloud post, we will look at extending Microsoft SQL Server into the cloud. SQL Server 2016, when it arrives, will encrypt all data by default and is integrated with the R statistical programming language. More interestingly, it allows a database to stretch into the Azure cloud. More on this in the next post. In the post that follows we will also discuss HIPAA cloud providers and whether they can remain relevant in the face of substantial improvements in merging on-premises networks with clouds.
I’ve been heavily invested in learning and working on Solr deployments, and also in learning Chef, these past few months. More on those technologies is coming shortly.
If you are trying to learn Clojure quickly, it is worth reading a good introduction. Here are two quick and useful reads.
There is also a nice Stack Overflow question and answer on learning how to write Clojure web services.