
The Emprise 9000: Scale-out SAN Architecture

If you look up Emprise in the Merriam-Webster dictionary you will see that it means “an adventurous or daring enterprise.”  That pretty much describes the Emprise product family’s launch 2 years ago.  We did something that no one else was doing then, and no one else is doing today.  Imagine being able to start from scratch on a storage solution, and I’m not talking about just controller software.  I’m talking about a complete re-engineering/re-architecting of a solution, built with enough resiliency to offer the only zero-cost 5-year hardware warranty in the storage industry.  Not only is it super reliable, it’s ridiculously fast and predictable.  When you can support a “bootstorm” of 600+ virtual desktops (Performance VDI), at a whopping 20 IOPS per bootup, from 3U of space, I would classify that as wicked fast!!!

 

In those 2 years we have not rested on our laurels.  Steve Sicola’s team, headed up by our VP of Technology David “Gus” Gustavsson, has really outdone itself with our latest Emprise product launch.  Not only did we move our entire user interface from “Web Services” to a RESTful API (ISE-Manager, which I’ll blog about later, and our iPhone/iPad app); the team also released our new 2.5” DataPac.  With 20 drives per DataPac and two DataPacs per ISE, that’s 40 2.5” drives in 3U of space, about 19.2TB, and a TON of performance.  They also released our ISE Analyzer, an advanced reporting solution built on our CorteX RESTful API (www.CorteXdeveloper.com) that I’ll blog about soon, and the next release of our Emprise product family, the Emprise 9000.  I swear this team doesn’t sleep!!!
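If you’re curious what the RESTful move means in practice, here is a minimal sketch of pulling ISE status over plain HTTP.  To be clear, the host name, endpoint path, and JSON fields below are hypothetical placeholders I made up for illustration, not the documented CorteX/ISE-Manager API:

```python
# Minimal sketch: querying ISE status over a RESTful interface.
# NOTE: the host name, endpoint path, and JSON fields are hypothetical
# placeholders, NOT the documented CorteX/ISE-Manager API.
import requests

BASE_URL = "https://ise-manager.example.com/api"  # hypothetical host/path

def get_ise_status(ise_id: str) -> dict:
    """Fetch the status of one ISE as JSON over a plain HTTP GET."""
    resp = requests.get(f"{BASE_URL}/ise/{ise_id}", timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(get_ise_status("ise-01"))
```

The point is less the specific calls and more that anything the GUI can do, a script (or an iPhone app) can do over the same interface.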

 

So, the Emprise 9000 is a pretty unique solution in the market.  Today, when you think “scale-out” architecture, the first thing you might think of is NAS.  Hopefully our ISE NAS!!  We hope that moving forward you will also think of our Emprise 9000.  The Emprise 9000’s ability to scale to 12 controllers puts it way above the 8 controllers the 3PAR solution scales to, and above the 2 controllers the rest of the storage world produces (EMC CLARiiON, Compellent, HP EVA, IBM XIV, etc.).  Married with our Intelligent Storage Element (ISE), it truly gives our customers the most robust, scalable solution in the storage market today.

 

Let’s be clear: the Emprise 9000 is not just a controller update.  It combines better, faster controllers, a RESTful API, and our ISE technology to solve the issues of performance-starved applications like virtual desktops, Exchange, OLTP, data warehouses, and virtual servers, as well as various other applications found in datacenters today.  The ability to deliver predictable performance whether the solution is 10% utilized or 97% utilized is a truly unique feature.  Did I mention it comes with our zero-cost hardware maintenance?  24x7x365!!!

So, for those keeping a tally at home, and for those competitors who want a little more information on what the Emprise 9000 can do, here is a quick list (not every feature; see the sketch after the list for how a couple of these might look through the RESTful API):

  • Each controller has dual quad-core Nehalem CPUs!!
  • Scale-out to 12 Controller pairs
  • 8Gb Fibre Channel ports
    • N_Port ID Virtualization (NPIV)
  • 1Gb or 10Gb iSCSI ports (10Gb available later this quarter)
    • You can run both FC and iSCSI in the same solution.
  • Scalable from 1 to 96 ISEs of any size
    • Max capacity of 1.8PB with 96 19.2TB ISEs
  • Support for LUNs greater than 2TB
    • Up to 256TB per LUN
  • Thin Provisioned Volumes
  • Snapshots
    • Read-only snapshots
    • Writeable snapshots as well; think “smart-clone” technology for VDI
  • Heterogeneous Migration
    • Want to migrate off that EMC, HP, 3PAR, or HDS array?  We can do it natively in our storage controllers.
  • Sync/Async native IP or FC replication
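
To make a couple of these concrete, here is a rough sketch of the thin-provisioned “gold image” plus writeable-snapshot pattern behind “smart-clone” style VDI.  As before, the endpoints and payload fields are hypothetical stand-ins for illustration, not the actual Emprise 9000 interface:

```python
# Sketch of the "smart-clone" VDI pattern: one thin-provisioned gold image,
# then a writeable snapshot per desktop that shares unchanged blocks with it.
# NOTE: endpoint paths and payload fields are hypothetical placeholders.
import requests

BASE_URL = "https://emprise9000.example.com/api"  # hypothetical host/path

def create_thin_volume(name: str, size_gb: int) -> str:
    """Create a thin-provisioned volume; capacity is only consumed on write."""
    resp = requests.post(f"{BASE_URL}/volumes",
                         json={"name": name, "sizeGB": size_gb, "thin": True},
                         timeout=10)
    resp.raise_for_status()
    return resp.json()["id"]

def create_writeable_snapshot(volume_id: str, name: str) -> str:
    """Take a writeable snapshot of an existing volume."""
    resp = requests.post(f"{BASE_URL}/volumes/{volume_id}/snapshots",
                         json={"name": name, "writeable": True},
                         timeout=10)
    resp.raise_for_status()
    return resp.json()["id"]

if __name__ == "__main__":
    gold = create_thin_volume("vdi-gold-image", 40)
    # One writeable clone per virtual desktop (think the 600-desktop bootstorm).
    clones = [create_writeable_snapshot(gold, f"desktop-{i:03d}")
              for i in range(600)]
```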

 

So, as you can see, it’s a pretty impressive list!!  And as with all new products, we will be adding new features pretty quickly, so stay tuned to announcements from us around the 9000.  BUT there’s more!!  I just can’t really go into it today 🙂  Stick around a couple of months for some even cooler stuff Gus’s team will be rolling out.  I just got back from a week in Vegas getting fed by a firehose about all the stuff we will be rolling out by the end of the year.  WOW!!  Impressive to say the least!!! 🙂

@StorageTexan <-Follow me on Twitter


The old pain in the DAS


Is it just me, or are others in the field seeing more and more companies being pushed to look at DAS solutions for their application environments?  It strikes me as pretty interesting.  I’m sure it’s mostly positioned around reducing the overall cost of the solution, but I also think it has a lot to do with the predictability of performance you can expect when you don’t have to fight other applications for IOPS.  In fact, Devin Ganger did a great blog post on this very subject in regard to Exchange 2010.  It was a pretty cool read.  I left a comment on his site that pretty much matches my discussion here (some may call it plagiarizing myself 🙂 ), but I’ve had some time to expand on it a little more.  As such, here are my thoughts on my own site 🙂

Let’s take a look back 10 years or so.  DAS solutions at the time signified low cost, low-to-moderate performance, basic RAID-enabled reliability, administration time overhead, downtime for any required modification, and an inability to scale.  Pick any 5 of those 6 and I think most of us can agree.  Not to mention, DAS solutions were missing key features that the application vendors didn’t provide either, like full block copies, replication, deduplication, etc.  Back then we spent a lot of time educating people on the benefits of SAN over DAS.  We touted the ability to put all your storage eggs in one highly reliable, networked, very redundant, easy-to-replicate-to-a-DR-site solution, which could be shared among many servers to gain utilization efficiency.  We talked about cool features like “boot from SAN” and full block snapshots, and about replicating your data around the world; the only real way to do that was via a storage array with those features.

Fast forward to today, and storage array controllers are not only doing RAID and cache protection (which is super important); they are also doing thin provisioning, CDP, replication, dedupe (in some cases), snapshots (full copy and COW or ROW), multi-tier scheduled migration, CIFS, NFS, FC, iSCSI, FCoE, etc.  It’s getting to the point that performance predictability is pretty much going away, and it takes a spreadsheet to understand the licensing that goes along with these features.  The reliability of the code, the mixing of different technologies (1Gb, 2Gb, and 4Gb FC drive bays, SAS connections, SATA connections, JBODs, SBODs, loops), and all the various “plumbing” connectivity options most arrays offer today are not making them any more stable.  2TB drive rebuild times are a great example of adding even more for controllers to handle.  And remember, the fundamental building block of a storage array is data protection.  Rob Peglar over at the Xiotech blog did a really great job of describing “Controller Feature Creep”.  If you haven’t read it, you should.  Also, David Black over at “The Black Liszt” discussed “Mainframes and Storage Blades”; he’s got a really cool “back to the future” discussion.

Today it appears that application and OS/hypervisor vendors have caught up with all the issues we positioned against just a few years ago.  Exchange 2010 is a great example of this; vSphere is another.  Many application vendors now have native deduplication and compression built into their file systems, COW snapshots are a no-brainer, and replication can be done natively by the app, which gives some really unique DR capabilities.  Some applications can even migrate data from Tier 1 to Tier 2 and Tier 3 based not only on a single “last touched” attribute but also on file attributes like content type (.PDF, .MP3), importance, duration, and deletion policy, without caring about the backend storage or what brand it is.  We are seeing major database vendors support controlling all aspects of the volumes on which logs, tablespaces, redo/undo, temporary space, etc. are held.  Just carve up 10TB, assign it to the application, and it will take care of thin provisioning and all sorts of other ‘cool’ features.
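
To make the file-attribute idea concrete, here is an illustrative sketch of what attribute-based tiering rules might look like.  The attribute names and tier labels are invented for illustration and don’t correspond to any particular vendor’s policy engine:

```python
# Illustrative sketch: choosing a tier from several file attributes,
# not just a single "last touched" timestamp. Attribute names and tier
# labels are invented for illustration only.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class FileInfo:
    path: str
    last_touched: datetime
    importance: str          # e.g. "high" or "normal"

def choose_tier(f: FileInfo, now: datetime) -> str:
    if f.importance == "high":
        return "tier1"       # business-critical data stays on Tier 1
    if f.path.lower().endswith((".mp3", ".pdf")):
        return "tier3"       # bulky content types go straight to capacity tier
    if now - f.last_touched > timedelta(days=90):
        return "tier2"       # cold data migrates down
    return "tier1"

print(choose_tier(FileInfo("report.mp3", datetime(2009, 1, 1), "normal"),
                  datetime(2010, 6, 1)))   # -> tier3
```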

At the end of the day, the “pain in the DAS” that we knew and loved to compete against is being replaced with “intelligent DAS” and application-aware storage capabilities.  All this gives end users the ability to make some pretty interesting choices.  They can continue down the typical storage-array-controller route, or they can identify opportunities to leverage the native abilities of the application and the “intelligent DAS” solutions on the market today to vastly lower their total cost of ownership.  The question end users need to ask is, ‘What functionality is already included in the application/operating system I’m running?’ versus ‘What do I need my storage system to provide because my application doesn’t have this feature?’  It’s a win-win for the consumer, as well as a really cool place to be in the industry.  Like I said in my “Cool things Commvault is doing with REST” post, when you couple intelligent DAS and application-aware storage with a RESTful, open-standards interface, it really starts to open up some cool things.  2010 is going to be an exciting year for storage.  Commvault has already started this parade, so now it’s all about “who’s next”.

@StorageTexan <– Click to follow me on Twitter.

PS – I’ve added an e-mail subscription capability to my site (as well as RSS feeds).  In the upper right corner of this site you will see a button to sign up.  Each time I post a new blog, you will get it by e-mail.  Note that you will need to confirm the subscription via the e-mail you receive.