Climbing Over the Walls You Have Built : Extending Your Corporate Network to the Cloud (Part 1)

The move to the cloud is on.  Increasingly, even companies that must comply with various corporate and national privacy and security standards, such as HIPAA, are looking at ways to extend their company networks to include auto-scaling clouds while still abiding by those standards.  With the availability of sophisticated cloud providers such as AWS, Azure, Joyent and others, it is increasingly attractive for companies to figure out ways to leverage these providers, burst out of their corporate networks and use these clouds transparently.

In thinking about this, the question becomes how to actually do it. A number of cloud providers continue to work on stretching on-premises infrastructure into the cloud. We can look at what Microsoft Azure has been doing to see why companies are looking at merging the on-premises datacenter with the cloud: Microsoft has published an example of how a company could extend its on-premises datacenter with the Azure cloud.  In their interactive diagram, shown in the image below, you can turn the information layers of the datacenter/cloud architecture on and off.


The diagram above is an example of how companies can now merge the Azure cloud and their company network securely. You can find more details at the link below.


There are all sorts of challenges, but companies like Microsoft are increasingly delivering ways to securely extend corporate networks into auto-scaling clouds.

Another company that enables bursting to the cloud is Cloudian.  Their focus on providing an enterprise hybrid cloud allows corporate networks to connect safely with clouds. In Cloudian’s case, their product, HyperStore, combined with the Amazon cloud, delivers a next-generation hybrid IT cloud: a 100 percent S3-compliant hybrid cloud storage platform. Dynamic migration from on-premises physical storage to off-premises cloud storage allows near-infinite capacity scaling while meeting the security and cost requirements of enterprise environments. Service providers that offer multi-SLA storage services also benefit from this hybrid structure. You can read more about it at the link below.
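To make the dynamic-migration idea concrete, here is a minimal sketch of an age-based tiering policy. The function name, the 30-day threshold and the object layout are all hypothetical illustrations, not Cloudian's actual API; in HyperStore the cloud tier is reached through its S3-compliant interface.

```python
import time

# Hypothetical tiering policy: objects untouched for more than
# `max_age_days` are candidates to migrate from on-premises
# storage to the S3-compatible cloud tier.
def select_for_migration(objects, max_age_days=30, now=None):
    """objects: dict of object name -> last-access time (epoch seconds)."""
    now = now if now is not None else time.time()
    cutoff = now - max_age_days * 24 * 3600
    return sorted(name for name, last_access in objects.items()
                  if last_access < cutoff)

now = 1_700_000_000
objs = {
    "report.pdf": now - 90 * 24 * 3600,   # cold: last touched 90 days ago
    "scratch.tmp": now - 2 * 24 * 3600,   # hot: last touched 2 days ago
}
print(select_for_migration(objs, max_age_days=30, now=now))
# → ['report.pdf']
```

Only the cold object is selected; the hot one stays on the on-premises tier.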


In the next extending-into-the-cloud post, we will look at extending Microsoft SQL Server into the cloud.  SQL Server 2016, when it arrives, will encrypt all data by default and is integrated with the R statistical programming language. More interestingly, it allows databases to stretch into the Azure cloud.  In the post that follows, we will also discuss HIPAA cloud providers and whether they can remain relevant in the face of substantial improvements in merging on-premises networks with clouds.

Recommended Viewing : The BlueKai Playbook for Scaling to 10 Trillion Transactions a Month

Good talk on delivering a highly scalable solution. Ted Wallace, VP of Data Delivery at BlueKai, discusses how BlueKai scales to 10 trillion data transactions per month.  BlueKai provides data-driven marketing and as a result needs highly scalable solutions. He provides some good details: they use Aerospike to get high database performance, with average read/write response times between 1 and 2 ms, and run six Aerospike clusters of 6 to 10 servers each in three geographically distributed data centers. They use standard Linux hardware with four Intel 800 GB SSDs and 128 GB to 256 GB of RAM in each server. Lots more details in the talk.  Select the image to go to the talk.
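To put 10 trillion transactions a month in perspective, a quick back-of-the-envelope calculation (assuming a 30-day month):

```python
transactions_per_month = 10 * 10**12      # 10 trillion
seconds_per_month = 30 * 24 * 3600        # assume a 30-day month
tps = transactions_per_month / seconds_per_month
print(f"average load: {tps:,.0f} transactions/sec")
# → average load: 3,858,025 transactions/sec
```

Nearly 4 million transactions per second sustained, before accounting for traffic peaks, which is why sub-2 ms response times and horizontally scaled clusters matter here.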


Recommended Reading: Exadata X3 – Measuring Smart Scan Efficiency With AWR

Ran across this on Twitter. If you are interested in data points on Oracle Exadata, there are some nice new ones from Trivadis that are certainly worth reading.  Keep in mind that this covers Exadata X3 (1/8 rack = 2 servers) and that Exadata X4 is now available. The report is in PDF format.




Recommended Viewing : Overcoming Roadblocks to the All-Flash Data Center

Very good talk on Nimbus Data‘s all-flash array.  Definitely worth listening to if you are looking at implementing flash storage in your data center.  Thomas Isakovich, CEO and founder of Nimbus Data, along with George Crump of Storage Switzerland, provided an informative hour covering the considerations of implementing all-flash arrays in the data center. Nimbus Data has doubled its array sales every year for the past three years, and has done so without traditional VC funding.  One interesting aspect to me, having watched the unfortunate Violin Memory IPO and subsequent disappointing earnings, coupled with a stock that went from $9 a share to $2.50 and today sits at $3.41, is that it is refreshing to see a company with a different strategy.  The talk discussed typical roadblocks to implementing all-flash in a data center.  The new Gemini arrays were covered: these are very fast enterprise SSD-based arrays, highly customized to deliver high performance, high throughput and low latency in a small form factor with excellent power and cooling numbers. Gemini can produce over 2 million IOPS and 12 Gbps of throughput. The talk also covered the array software, Halo, which includes deduplication, thin provisioning, cloning, snapshots, encryption and more, as well as use cases for Gemini. Good talk. You can select the image below or go to this link to go to the talk.

I encourage you to go to the Storage Swiss web site, which is full of resources on these topics. Excellent site.


Go to more posts on storage and flash storage at the link below.

Recommended Viewing : Presentation Videos From O’Reilly Velocity 2013 Conference – Web Performance And Operations

The presentations from the O’Reilly Velocity 2013 Conference are available in video format.  If you don’t know what this conference is about :

  • Three days of concentrated focus on key aspects of web performance, operations and mobile performance.
  • Keynotes, tutorials and sessions
  • Experts, visionaries and industry leaders converge along with hundreds of web developers, sys admins and other web professionals all under one roof.

The slides :


In addition – a recent post on immutable servers :



Recommended Reading : Today, IOPS Matter Less Than A Good Architecture and Storage Features

If you have a fire-breathing dragon of a flash array that can deliver millions of IOPS, but you can’t leverage the features you need to increase storage capacity (deduplication, compression and thin provisioning), can’t upgrade the array while it’s running in production, can’t easily replicate the data on it, can’t cluster multiple arrays and don’t have high availability – what do those IOPS serve?  Features that support cloud and enterprise operations within these flash storage arrays are more important than IOPS, and certainly architecture and price are important considerations as well.  In an interesting and excellent article, the author looks at the IOPS competition and further points out that huge benchmark numbers can even be produced with consumer-grade SSDs. I carry this further.  When I asked one vendor, who was touting a 2 million IOPS benchmark they had just finished and were busy trying to convince the world of its value, whether they had used linked clones in this VMware benchmark, they answered ‘no’ as if it was a surprising question, or at least an inconvenient one.  Compression was ‘no’, deduplication ‘no’, and so on. You get the picture.  Today, I understand that this storage vendor still doesn’t have deduplication and some other features found in competing systems. The native storage features of an array’s operating environment can offer huge value; companies like Nimbus Data, SolidFire, Pure Storage, Hitachi and others get this.  All of these companies can produce impressive IOPS benchmarks, and have, but the battle has ceased to be about delivering mega-huge IOPS benchmarks – it’s about how those IOPS can be used in production settings and the storage features around those IOPS.
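One way to see why data-reduction features outweigh raw IOPS is economics: dedup and compression multiply usable capacity, which directly divides the cost per gigabyte. A quick sketch, with a hypothetical $150,000 array price and an illustrative 4:1 reduction ratio (not any vendor's actual figures):

```python
def effective_cost_per_gb(price_usd, raw_tb, reduction_ratio):
    """Data reduction multiplies usable capacity, dividing $/GB."""
    effective_gb = raw_tb * 1000 * reduction_ratio  # decimal TB -> GB
    return price_usd / effective_gb

# Hypothetical numbers: a $150,000 array with 24 TB raw capacity.
baseline = effective_cost_per_gb(150_000, 24, 1.0)  # no data reduction
reduced = effective_cost_per_gb(150_000, 24, 4.0)   # 4:1 dedup + compression
print(f"${baseline:.2f}/GB with no reduction vs ${reduced:.2f}/GB at 4:1")
# → $6.25/GB with no reduction vs $1.56/GB at 4:1
```

The same hardware, the same IOPS, but a fourfold difference in effective cost – which is the battle the article describes.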


More at : Top Thirteen Questions To Ask Your Storage Array Vendor.

[ Photo : Dragon, Shanghai Art Museum.  ]




Recommended : Nimbus Data Aims High and Delivers; Also Releases VDI Benchmark

Fast performance is one aspect, but when you couple it with a suite of data reduction technologies and storage features you get something much more useful and resilient. Some of the features in the latest arrays from Nimbus Data are well thought out and absolutely great from an enterprise and cloud perspective.

Nimbus Data has really arrived. Its new Gemini arrays challenge its competitors in a serious way.  It has hopped over the leading flash array competitor by offering full non-disruptive upgrades coupled with full array redundancy, hot-swap-everything, in-line data reduction in the form of thin provisioning, deduplication and compression, plus replication and NFS and CIFS support.  The amazing thing is that those are just the tip of the iceberg.  A deep-dive video reveals an excellent design and some surprisingly great advancements to flash array technology in general :

It has also demonstrated something that many other leading flash vendors have not been able to do: it leverages eight 16 Gb FC ports in its Gemini arrays. It also offers two hot-swappable controllers. Nimbus has advanced the multi-protocol capability of the product by offering the ability to run 40 Gb Ethernet and InfiniBand at the same time, or alternatively Ethernet and Fibre Channel at the same time, and they have adapters that can run at 10 Gb Ethernet.  The controllers parallelize the I/O across all 24 flash drives.  The modules can be removed from the front, a great design for servicing the flash modules (most excellent – no pulling the array out of the rack, taking the top off and potentially creating servicing dilemmas like some vendors).
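Some quick arithmetic on those numbers; this is a rough sketch, since real per-drive load depends on the workload and how the controllers schedule I/O:

```python
# Aggregate line rate of the Fibre Channel front end.
fc_ports, gb_per_port = 8, 16
aggregate_gbps = fc_ports * gb_per_port           # 128 Gb/s of FC line rate

# If the 2 million IOPS were spread evenly across all 24 flash drives:
array_iops, drives = 2_000_000, 24
per_drive_iops = array_iops / drives
print(f"{aggregate_gbps} Gb/s FC line rate, ~{per_drive_iops:,.0f} IOPS per module")
# → 128 Gb/s FC line rate, ~83,333 IOPS per module
```

Parallelizing across all 24 modules keeps each individual module's load modest, which is part of how the array sustains its headline numbers.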


In a new benchmark, they demonstrated the strength of the new arrays at handling VDI.  The benchmark was run on a Nimbus Gemini dual-controller 2U F400 all-flash array with 24 TB of raw capacity.

Data Point : The single array had 17.6 TB of usable capacity for the test, and featured 24 one-terabyte solid-state disks and a 4 TB cache with write-back caching.  A single Nimbus Gemini F400 can support more than 4,000 simultaneous VDI users at less than $40 per desktop.
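Working through those data points (a rough sketch using decimal units; the dollar figure below is an upper bound implied by the quoted $40/desktop, not a published price):

```python
usable_tb, users, max_cost_per_desktop = 17.6, 4000, 40

per_user_gb = usable_tb * 1000 / users         # usable capacity per desktop
max_array_cost = users * max_cost_per_desktop  # implied upper bound on price

print(f"{per_user_gb:.1f} GB per desktop, array cost under ${max_array_cost:,}")
# → 4.4 GB per desktop, array cost under $160,000
```

A few GB of physical capacity per desktop is only workable because VDI images share so much common data, which again points back to dedup and cloning features rather than raw IOPS.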

You can read the full report :


An important aspect of the new arrays is the focus on a unified array operating system (more on this in a future post) that offers the full range of storage features I have written about in earlier posts.




Recommended Reading : Enabling Database Replication in the Cloud Using Jelastic

Replication is a key technology for any database server. Without it, a failure can mean downtime or significant data loss, which can incur large revenue losses. By replicating data from a master to one or more standby servers, you can guard against that data loss. Three interesting articles show how to set this up in the Jelastic cloud: the first shows how to replicate with MariaDB, the second shows how to enable PostgreSQL replication, and the third shows how to enable MongoDB replication.
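The core idea behind all three setups is the same: the master records writes in an ordered log that standbys replay. Here is a toy log-shipping sketch (all names hypothetical; real systems differ in mechanics – PostgreSQL ships WAL segments, MongoDB replays an oplog):

```python
class Master:
    def __init__(self):
        self.log = []     # ordered write-ahead log of (key, value) entries
        self.data = {}

    def write(self, key, value):
        self.log.append((key, value))  # log first, then apply
        self.data[key] = value

class Standby:
    def __init__(self):
        self.data = {}
        self.applied = 0  # position reached in the master's log

    def catch_up(self, master):
        # Replay any log entries not yet applied, in order.
        for key, value in master.log[self.applied:]:
            self.data[key] = value
        self.applied = len(master.log)

m, s = Master(), Standby()
m.write("balance", 100)
m.write("balance", 80)
s.catch_up(m)
print(s.data)   # standby now mirrors the master
# → {'balance': 80}
```

In real deployments the replay happens continuously over the network rather than on demand, so a standby stays only a bounded distance behind the master and can take over if the master fails.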