The Time for Storage Blades

I'm writing this blog from the El Dorado Resort in the Riviera Maya, Mexico, while sipping some sort of Frozen Blue Hawaiian cocktail, so my life at the moment doesn't suck. While most people are catching up on the latest Twilight novel or Andre Agassi's biography, I'm flipping through whitepapers and tech docs I've been printing out for the last few weeks. For those of you who do a great job of "unhooking" from work during vacations, I applaud you.

I was reading through a few things, and it's becoming clear to me that the industry is at a crossroads. I liken this to the situation the server vendors went through just 6 or 7 years ago with the rollout of server blade technology. I recall talking with a large independent school district in West Texas back then about connecting the new IBM BladeCenter to our Magnitude 3D solution. The BladeCenter was a new thing, and this customer was looking to reduce rack space and to gain cost predictability when deploying future applications. Since then, blade servers have proliferated to the point that they're typically the default choice when deploying new server hardware. Virtual servers have taken this to another level. It's not uncommon for companies to know exactly how many virtual machines a certain model of blade can support and to buy different types of blades based on those requirements.

I think the storage industry is at this pivot point as well. We can tell you, within a few GBs, exactly how much capacity a solution we quote will deliver, so why is it that we have such a difficult time being that predictable with performance? We still go speechless when customers ask what sort of performance they can expect today, in one year, in three years and in five years. This is going to become MORE important as virtual desktops and other IOPS-hogging applications become more mainstream. I've stopped being surprised when channel partners recite problems with a certain storage vendor's solution that turned out, three months after deployment, to be undersized. Even though it was sized with the vendor's own tools, according to the vendor's best practices, the customer still runs into performance issues as they ramp up use of the solution. What's even more surprising is the storage vendor's near-giddy response when telling them how they can solve this issue. SSD anyone? Can you spare a few tens of thousands of dollars, after you've already blown your budget on the acquisition, to purchase two SSD drives just to solve a performance issue?

Maybe it's time for the storage industry to look at Storage Blades as a means to deliver this capacity/performance predictability. Imagine looking at storage as a building block for an overall storage strategy that is predictable in both capacity and performance, so sizing your application deployments becomes easier and more cost effective. Imagine knowing the dollars per IOP per rack unit over the life of the storage blade. In other words, you can expect X IOPS per rack unit for the 5-year life expectancy of the storage solution. Imagine being able to do this knowing that the predictability holds while running the storage solution at 97% utilization. This begins to put you on par with the capacity planning you've grown accustomed to.
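
To make that concrete, here's a back-of-the-napkin sketch of what "dollars per IOP per rack unit" could look like. Every number below is made up purely for illustration; plug in the figures from your own quotes.

```python
# Back-of-the-napkin math -- all numbers are hypothetical, swap in your own quote.
purchase_price = 60000.0   # 5-year cost of one storage blade ($), made up
rack_units = 3             # rack units the blade consumes
sustained_iops = 20000     # IOPS the vendor commits to for the life of the blade
life_years = 5

iops_per_ru = sustained_iops / rack_units
dollars_per_iop = purchase_price / sustained_iops
dollars_per_iop_per_ru = dollars_per_iop / rack_units

print(f"{iops_per_ru:,.0f} IOPS per rack unit for {life_years} years")
print(f"${dollars_per_iop:.2f} per IOP, or ${dollars_per_iop_per_ru:.2f} per IOP per rack unit")
```

The point isn't the math; it's that the vendor commits to the IOPS number the same way they already commit to the capacity number.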

I believe that Storage Blade architecture has the ability to bring the storage industry to a crossroads. Do we continue piling on features and functions already found in typical applications and OSes, and live with horrible utilization rates and the inability to predict basic performance results? Or do we stop this controller feature creep and hit the big issues at hand: cost, capacity and performance predictability? Storage Blades, like their server brethren, can possibly be the means to this end. If nothing else, for certain applications, Storage Blades just make perfect sense.

Oh well – I just saw someone order a Frozen Mojito that looks really good.  Back to the grind that is my vacation 🙂

@StorageTexan

The COOL things Xiotech is doing with RESTful API

(Video Demo at the bottom)

So, if you remember, a few weeks ago I blogged about Commvault's launch of their RESTful interface connector. Well, one of the cool things to come out of this spring's Storage Networking World in Orlando is Xiotech's release of our CorteX Control Path RESTful API. As far as I know, Xiotech is the only storage vendor pushing this sort of announcement out. For cloud providers, this is the ultimate in flexibility for monitoring and managing storage blades. Just as an example, in the Xiotech booth we are showing our new iPhone/iPad application called ISEview.

This should be in the Apple iTunes store in the next few weeks. Well, as soon as we can get our developer to stop "tweaking/adding" cool things to it. 🙂

Today this app is most useful in showing what can be done when you move to an open, RESTful API. In talking with the developers and marketing, we have some really cool things we want to add to it as we move forward. As you can see above, we can view ISE performance statistics as well as look at MRC (Managed Reliability Controller), DataPac, power supply and cache battery health, along with some log information. I can't wait to see what other things Karl Morgan and Todd Burkey will be adding to it later!!
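
To give a flavor of what "open, RESTful API" means in practice, here's a minimal sketch of how a script might poll a storage element for stats and component health over REST. The base URL, endpoint paths and field names are all hypothetical placeholders for illustration, not the actual CorteX interface.

```python
import requests  # third-party HTTP client (pip install requests)

BASE = "https://ise01.example.local/api"   # hypothetical control-path endpoint
AUTH = ("monitor", "secret")               # hypothetical credentials

def get_json(path):
    """Fetch one resource from the (hypothetical) RESTful control path."""
    resp = requests.get(f"{BASE}/{path}", auth=AUTH, timeout=10)
    resp.raise_for_status()
    return resp.json()

# Pull the same kinds of data ISEview displays: performance plus component health.
perf = get_json("performance")    # e.g. {"read_iops": ..., "write_iops": ...}
parts = get_json("components")    # e.g. MRCs, DataPacs, power supplies, batteries

print("Read IOPS:", perf.get("read_iops"), "Write IOPS:", perf.get("write_iops"))
for part in parts.get("components", []):
    print(f"{part.get('name')}: {part.get('status')}")
```

The takeaway is that anything that can issue an HTTP GET – an iPhone app, a monitoring suite, or a ten-line script – can consume the same interface.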

While the iPhone/iPad app is a "geeky cool thing to do," it again speaks to what a RESTful interface can do. In our SNW booth we are also showing some work that Olocity did in their StorageIM monitoring product.

My understanding is that, using our RESTful API SDK, they were able to add our Emprise 5000 to their monitoring suite in only a few days.

Anyway, over the next few days I hope to catch up on some blogging, and also post a summary of "what I learned at SNW," so stay tuned. BUT, no promises 🙂

@StorageTexan <–Follow me on twitter!!

PS – Thanks to @SFoskett for videotaping a demo of the iPad/iPhone application user interface.

The journey to the Unified Storage Platform

Who would have thought such a simple question – really more of “seeking to understand” as my VP of Sales Mark Glasgow calls it – would kick off such a slew of e-mails, comments and tweets. The other day I asked the following question on Twitter:

            @StorageTexan: question 4 all NFS/NTAP/Celerra Luving ppl. wht is ur opinion on y NFS is superior ovr block device access in #VSphere?

I got a TON of feedback, so much so that I decided to ask the question on my blog as a means to collect the responses and to allow others more than 140 characters to respond 🙂  Out of that came some really interesting responses, including George Crump sharing his views in Information Week.

Let's take a step back. The majority of my storage life has been spent in the block-based access world. For most of my career it was all about Fibre Channel connectivity. Have you heard the saying "If all you sell are hammers, then everything in the world is a nail"? That's sort of where I was 5 or so years ago. FC was my hammer, and all connectivity issues could be resolved with FC!! Then a few years ago iSCSI started to inch its way into the discussion. Xiotech adopted it as another option for connectivity and I got to add another tool to my bag 🙂

Then I asked the above question as a means of self-reflection. Why would someone choose to bypass block-based connectivity in favor of file-based? It just didn't seem logical to me. I even happened to have that particular tool in my bag, so it wasn't as if I was trying to compete against it; I just wanted to see what the big deal was. Today, almost all storage vendors (Xiotech included) offer NFS connectivity. Some utilize gateways, like the EMC Celerra and NetApp V-Series, as does Xiotech. Others offer it natively, like Pillar Data and the NetApp FAS product line.

At first blush I thought it had to be because IP is viewed as less expensive and less complicated than native Fibre Channel. But I think at this point the price argument against FC should be put to bed, thanks in large part to Cisco/Brocade/QLogic driving down the costs. Complexity is also something I think should be, and could be, put to bed. Sit in front of a Cisco Ethernet switch and a Cisco MDS switch: the IOS is the same, and the perceived complexity around FC is really no longer an issue. Now, for the smallest of the SMBs, maybe cost and perceived complexity are enough to choose NFS. I can see that.

Maybe it's because these gateway devices offer something their block-based architecture can't support. That starts to make sense. Maybe it's some sort of feature that drives this decision. In some cases, maybe it's thin provisioning, better-integrated snapshots, single-instance storage/data dedupe or even advanced async replication. Most storage arrays can do these on the surface, but with a gateway device maybe they can do them better, cheaper, faster, etc.? For the SME/SMB I can see this as a reason.

Then again, according to some of the people who responded to my blog and Twitter, maybe it's for performance reasons. Some ability to cache the front-end writes makes the applications/hypervisors/OSes just run quicker. Others suggested that gateway devices make it "stupid simple" to add more VMs, because you can essentially treat an NFS mount as a file/folder and just keep dropping VMDKs (files, essentially) into those folders for easier management. That makes sense as well. I can see that. I mean, if you look at a NAS device, it's essentially a server running an OS with a file system that connects to DAS/JBOD/SBOD/storage arrays on the backend, right? It could be viewed as a caching engine.

Then it dawned on me, it’s not really about one being better than the other, it’s more about choices.  That’s what “Unified Storage” is all about, the ability to add more tools to your bag to help solve your needs.  If you look inside a datacenter today, most companies have internally tiered their applications/servers to some extent.  Not everything is run on the same hardware, software etc.  You pick the right solution, for the right application.  Unified Storage is the ability to choose the right storage connectivity for the various different applications/hypervisors and operating systems.  The line gets really blurred as gateway devices get more advanced and better integrated. 

Either way, everyone seems to be moving more and more to the Unified Storage device.  It should be interesting to see what sort of things come out of Storage Networking World in a few weeks !!

@StorageTexan

The Debate – Why NFS vs Block Access for OS/Applications?

I’m going to see if I can use my blog to generate a discussion. First and foremost, I don’t want this to be a competitive “ours is better than yours”.  So let’s keep it civil!!

Specifically, I'm trying to wrap my head around why someone would choose to go NFS instead of block-based connectivity for things like vSphere. For instance, NetApp supports both NFS and block-based FC connectivity. EMC typically uses CX behind their NS gateway devices. Most storage companies (Xiotech included) offer a NAS solution – either native or gateway.

So, why would I choose to run VMware vSphere (as an example) on NFS when I could just as easily run it on block storage? Is it that the file system used for a particular company's NAS solution offers something their block-based solution can't (e.g., thin provisioning, native dedupe, replication)? Is it used more as some sort of caching mechanism that gives better performance? Or is it more fundamental, simply a connectivity choice (IP vs. FC vs. NFS mount)?

Please feel free to post your responses.

Thanks,

@StorageTexan

VMware Virtual View

Xiotech Virtual View for VMware, Hyper-V and Citrix.

If you are a VMware admin – or a hypervisor admin of any flavor – Xiotech's "Virtual View" is the final piece of the very large server virtualization puzzle you've been working on. In my role I talk to a lot of server virtualization admins, and their biggest heartburn is adding capacity, or a LUN, to an existing server cluster. With Xiotech's Virtual View it's as easy as 1, 2, 3. Virtual View utilizes CorteX (our RESTful API) to communicate – in the case of VMware, with the Virtual Center appliance – to provision the storage to the various servers in the cluster. From a high level, here is how you would do it today.

 I like to refer to the picture below as the “Rinse and Repeat” part of the process.  Particularly the part in the middle that describes the process of going to each node of the server cluster to do various admin tasks.    

VMware Rinse and Repeat process

With Virtual View the steps would look more like the following. Notice it's wizard-driven, with a lot of the steps handled for you. But it also gives you an incredible amount of "knob turning" if you want it.

Virtual View Wizard Steps
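
For the API-minded, here's a rough idea of what a wizard like this is doing on your behalf: create a volume once, then present it to every host in the cluster and rescan. The endpoints and payloads below are hypothetical placeholders for illustration, not the actual CorteX or Virtual Center interfaces.

```python
import requests  # third-party HTTP client; auth omitted for brevity

STORAGE_API = "https://ise01.example.local/api"   # hypothetical storage control path
VC_API = "https://vcenter.example.local/api"      # hypothetical Virtual Center endpoint

def provision_to_cluster(volume, size_gb, cluster):
    """Sketch of the 'wizard' flow: one create, then present + rescan per host."""
    # 1. Create the volume on the storage element.
    requests.post(f"{STORAGE_API}/volumes",
                  json={"name": volume, "size_gb": size_gb},
                  timeout=30).raise_for_status()

    # 2. Ask the hypervisor manager which hosts are in the cluster
    #    (assumes a JSON list of {"name": ...} objects comes back).
    hosts = requests.get(f"{VC_API}/clusters/{cluster}/hosts", timeout=30).json()

    # 3. The "rinse and repeat" part the wizard automates: present the volume
    #    to each host and have that host rescan its storage.
    for host in hosts:
        requests.post(f"{STORAGE_API}/volumes/{volume}/present",
                      json={"host": host["name"]}, timeout=30).raise_for_status()
        requests.post(f"{VC_API}/hosts/{host['name']}/rescan-storage",
                      timeout=30).raise_for_status()

provision_to_cluster("vmfs_datastore_01", 2048, "ProductionCluster")
```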

  

And for those that need to see it to believe it, below is a quick YouTube video Demonstration.     

If you run a VMware-specific cluster (for HA purposes, maybe) of 3 servers or more, then you should be very interested in Virtual View!!!

I'll be adding some future Virtual View-specific blog posts over the next few weeks, so make sure you subscribe to my blog on the right-hand side of this window!!

 If you have any questions, feel free to leave them in the comments section below.    

Thanks,    

@StorageTexan    

PS – here is a quick “commercial for Virtual View”    

10,000 Exchange Users in 3U of space

This is a pretty cool video done by the Technical Marketing team at Xiotech.  10,000 Exchange users in 3U of space!!   No fancy/expensive SSD needed for this !!

In summary:

250 VDI instances or

10,000 Exchange Users or

750 DVD-quality video streams or

25,000 MP3s

In 3U of space.  Now that’s WICKED FAST !!!

As they say in Minny – "that's not too shabby!!"

By the way, if by chance 10,000 is just not enough users for you, don't worry: add a second ISE and DOUBLE IT TO 20,000. Need 30,000? Then add a THIRD ISE. 100,000 users in 10 ISEs, or 30U of rack space. Sniff sniff…I love it !!!!!!!!!!!!

By the way – Check out what others are doing:

Pillar Data = 8,500 Exchange Users with 24GB of Cache !!!  I should say, our ISE comes with 1GB.  It’s not the size that counts, it’s HOW YOU USE IT !! 🙂

One Pillar Axiom 600 with one FC Slammer
- 24GB of cache <—- WOW !!!!!
- 4 SSD Bricks for databases, each with:
  - Two dedicated RAID controllers
  - 13 50GB SSDs <— I'm going to guess that these aren't very cheap.
- 2 SATA Bricks for logs, each with:
  - 13 500GB 7,200 RPM SATA disk drives

Hitachi AMS 2300 = 10,800 users – and a 400+ page report!!! <– I have to say it again, WOW, 400+ pages on this bad boy !!!

- 240 300GB 15K RPM SAS disks <— Ahh ya – TONZ of spindles !!! We had 20(ea) 3.5″ drives to do our testing.
- 16GB of cache and
- 8(ea) 4Gb/s Fibre Channel paths were used for these tests.
- Testing used 8 Sun Fire 4600 M2 servers with 32GB of RAM,
- four dual-core AMD Opteron CPUs,
- 8(ea) Emulex 4Gbit/s Fibre Channel adapters and
- Windows Server 2003 R2 Enterprise x64 with Service Pack 2.

Truth, Lies and Software Licensing

Nothing gets vendors more fired up than having someone call their baby ugly, especially if that baby is the mainstay of their product. Case in point: there was a recent discussion around the importance, or lack thereof, of automated tiered storage (A-T-S). Jon Toigo jumped into the mix last week with some pretty good points. Jon mentioned that he received positioning statements from a few vendors, and Twitter was going crazy with comments from all sorts of people weighing in on the subject. Pete Selin, over on his blog, jumped into it with John Dias of Compellent. So, to say that enough people have voiced their opinion on how, where and when "automated tiered storage" should happen is an understatement. Don't get me wrong, StorageTexan is not immune to piling on, but I'm going to refrain. Well, I'm going to try really hard to refrain 🙂

If we take a step back and look at this as just another feature in a storage array, we then need to ask "what's the value" of that feature? In other words, why does a customer choose to go that route? Most of the time, features like this are positioned to reduce some sort of cost. Thin provisioning is positioned to let you under-purchase storage now and buy the rest later at a reduced cost, the theory being that disk drive prices are always going down. In the case of A-T-S, it's to reduce the cost of "Tier 1" storage. Why spend money storing your stale data on 15k RPM drives when you can move less-accessed data down to 10k, or even 7.5k RPM, spinning "rust" (shout out to Toigo)? At least that's how I'm seeing this positioned in accounts. On paper, both of these features make perfect sense. If all things were equal, it is cheaper to purchase drives down the road (when prices drop), or to purchase 7.5k over 15k disk drives.

The great equalizer is the cost to license these features. Not just the license cost, but the software maintenance that goes along with it as well. It would be easy to debate these features on their cost-savings model if vendors just charged a one-time fee. Unfortunately, in most cases it's a little more difficult. Typically these features are licensed by capacity (a per-TB charge), or by disk drive count, or BOTH 🙂. So when you add more capacity or more spindles, you have to license those drives or that capacity for those features too. You are always chasing a moving target.

When weighing the pros and cons, it's really important that you don't get focused (by the vendor) in only one direction. In the thin provisioning example, the savings from not having to purchase more storage than you need might be outweighed by the fact that the license isn't free, or that you have the potential of getting locked into that vendor for all future storage purchases. In the case of A-T-S, not having to purchase 15k drives is offset by the software licenses needed to support that feature. You might find that running everything on Tier 1 (or even Tier 2) vs. the licensing costs (maintenance included) of automated tiered storage is a wash.
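
To put some (entirely made-up) numbers behind that last point, compare buying all Tier 1 capacity outright against buying cheaper disk plus a hypothetical per-TB A-T-S license and its annual software maintenance:

```python
# All numbers are invented for illustration -- plug in real quotes from your vendors.
capacity_tb = 50

tier1_cost_per_tb = 900.0     # all 15k RPM spindles, no tiering license
tier2_cost_per_tb = 500.0     # mostly 7.5k RPM "rust"
ats_license_per_tb = 200.0    # hypothetical per-TB automated tiering license
ats_maint_rate = 0.18         # hypothetical 18% annual software maintenance
years = 5

all_tier1 = capacity_tb * tier1_cost_per_tb

tiered = (capacity_tb * tier2_cost_per_tb                               # cheaper disk
          + capacity_tb * ats_license_per_tb                            # tiering license
          + capacity_tb * ats_license_per_tb * ats_maint_rate * years)  # maintenance

print(f"All Tier 1, no license:  ${all_tier1:,.0f}")
print(f"Tiered + A-T-S license:  ${tiered:,.0f}")
```

With these made-up numbers the two come out within a couple of percent of each other – which is exactly why you have to ask for the licensing and maintenance detail up front.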

My advice: do your homework. As I've discussed in prior blogs, don't focus on the easy stuff. Cost per RAW TB is not a good way to measure value. Fully understanding each vendor's storage utilization best practices is important as well. Understanding how their features are licensed, and the software maintenance associated with those features, is just as important. When comparing vendors, I always like to ask my prospects to get not just the upfront storage purchase price, but also a quote to add 20% more capacity (spindles, bays, licenses and maintenance). This isn't a perfect way to keep vendors honest, but it does give you a good indication of what sort of cost savings you may, or may not, see in future purchases – lest any vendor lowball the initial purchase and then jack up the costs on the next one!! Xiotech has a whitepaper called "Strategies for Measuring and Optimizing the Value of Your Storage Investments" that goes into some other thoughts on measuring the value of your storage purchase. It's worth a look.

Again, at the end of the day, do your homework and ask a lot of questions. 

 @StorageTexan

The old pain in the DAS

Is it me, or are others in the field seeing more and more companies being pushed to look at DAS solutions for their application environments? It strikes me as pretty interesting. I'm sure it's mostly positioned around reducing the overall cost of the solution, but I also think it has a lot to do with assuring the performance predictability you get when you don't have to fight other applications for IOPS. In fact, Devin Ganger did a great blog post on this very subject in regards to Exchange 2010. It was a pretty cool read. I left a comment on his site that pretty much matches (some may call it plagiarizing myself 🙂 ) my discussion here, but I've had some time to expand on it a little more. As such, here are my thoughts on my own site 🙂

Let's take a look back 10 years or so. DAS solutions at the time signified low cost, low-to-moderate performance, basic RAID-enabled reliability, administration time overhead, downtime for any required modification, and an inability to scale. Pick any 5 of those 6 and I think most of us can agree on it. Not to mention, the DAS solution was missing key features that the application vendors didn't include either, like full block copies, replication, deduplication, etc. Back then we spent a lot of time educating people on the benefits of SAN over DAS. We touted the ability to put all your storage eggs in one highly reliable, networked, very redundant, easy-to-replicate-to-a-DR-site solution, which could be shared amongst many servers to gain utilization efficiency. We talked about cool features like "boot from SAN" as well as full block snapshots and replicating your data around the world, and the only real way to do that was via a storage array with those features.

Fast forward to today, and the storage array controllers are not only doing RAID and cache protection (which is super important), they're also doing thin provisioning, CDP, replication, dedupe (in some cases), snapshots (full copy and COW or ROW), multi-tier scheduled migration, CIFS, NFS, FC, iSCSI, FCoE, etc., etc. It's getting to the point that performance predictability is pretty much going away; not to mention, it takes a spreadsheet to understand the licensing that goes along with these features. Reliability of the code, the mixing of different technologies (1Gb, 2Gb and 4Gb FC drive bays, SAS connections, SATA connections, JBODs, SBODs, loops) and all the various "plumbing" connectivity options most arrays offer today are not making things any more stable. 2TB drive rebuild times are a great example of adding even more for controllers to handle. Not to mention, the fundamental building block of a storage array is data protection. Rob Peglar over at the Xiotech blog did a really great job of describing "Controller Feature Creep". If you haven't read it, you should. Also, David Black over at "The Black Liszt" discussed "Mainframes and Storage Blades" – he's got a really cool "back to the future" discussion.

Today it appears that application and OS/hypervisor vendors have caught up with all the issues we positioned against just years ago. Exchange 2010 is a great example of this. vSphere is another. Many application vendors now have native deduplication and compression built into their file systems, COW snapshots are a no-brainer, and replication can be done natively by the app, which gives some really unique DR capabilities. Not to mention, some applications support the ability to migrate data from Tier 1 to Tier 2 and Tier 3 based not only on a single "last touched" attribute, but also on file attributes like content (.PDF, .MP3), importance, duration, deletion policy and everything else, without caring about the backend storage or what brand it is. We are seeing major database vendors support controlling all aspects of the volumes on which logs, tablespaces, redo/undo, temporary space, etc. are held. Just carve up 10TB and assign it to the application, and it will take care of thin provisioning and all sorts of other 'cool' features.

At the end of the day, the "pain in the DAS" that we knew and loved to compete against is being replaced with "Intelligent DAS" and application-aware storage capabilities. All this gives the end user a unique ability to make some pretty interesting choices. They can continue down the path with the typical storage array controller route, or they can identify opportunities to leverage native abilities in the application and the "Intelligent DAS" solutions on the market today to vastly lower their total cost of ownership. The question the end user needs to ask is, 'What functionality is already included in the application/operating system I'm running?' vs. 'What do I need my storage system to provide because my application doesn't have this feature?' At the end of the day, it's win-win for the consumer, as well as a really cool place to be in the industry. Like I said in "Cool things Commvault is doing with REST," when you couple Intelligent DAS and application-aware storage with a RESTful, open-standards interface, it really starts to open up some cool things. 2010 is going to be an exciting year for storage. Commvault has already started this parade, so now it's all about "who's next".

 @StorageTexan <– Click on me to follow on Twitter. 

 PS – I’ve added an e-mail subscription capability to my site (as well as RSS feeds).  In the upper right corner of this site you will see a button to sign up.  Each time I post a new blog, you will get an e-mail of it.  Also, you will need to confirm the subscription via the e-mail you receive.

Backups, Restores, Power and Cooling, oh my!!

In today's datacenter, no matter how much de-duplication, storage tiering and archiving companies throw at the issue, there still seems to be an explosion of information that has to be backed up, protected, restored and archived. I've stopped being surprised at the answer I get when I ask a customer how their backup window is doing: it's always horrible and out of control, even with advanced data de-duplication. Not to mention, most of the time customers are running out of power, cooling and rack space in their datacenters. All of this becomes a sort of "perfect storm" that has the potential to sink the datacenter into a mess of inefficiencies.

So, as I've discussed in the past, I get to spend a lot of time architecting solutions, and one of the things I spend a lot of time helping design is solutions that eliminate performance bottlenecks. The great news is I feel pretty strongly that we have a solution that is the best in the industry. Imagine if you could eliminate storage as a potential bottleneck as well as reduce your power, cooling and overall carbon footprint with one storage solution? Awesome, right!!! What if I told you that Xiotech has the fastest, best-throughput RAID-protected spinning media solution in the market today (per the Storage Performance Council's SPC-2 benchmark)? What if I also told you that not only is it the fastest, but it's also the most greenest (is that a word?) as well? You would probably tell me I was full of…well, you know. This 3U storage element packs a wallop of performance. If you haven't had a chance, you should check out this recent press announcement. In it we talk about a single Emprise 5000 having the ability to simultaneously power 750 DVD-quality video streams, 25,000 MP3s or 4 studio-class movie editing projects. For those of you who are familiar with these types of performance-hogging applications, you know that in 3U of space, that's pretty cool!! We even mention having the equivalent of operating every movie theater screen in the state of Colorado at the same time from one system. You know, on a side note, after 10 years here at Xiotech – how come I don't have one in my entertainment center yet!!! Brian Reagan – maybe you can make this happen for me 🙂

You probably noticed that I haven’t even touched on how we can reduce the carbon footprint!!  We have a cool little feature that is native in our Emprise 5000 product called “PowerNap”.  Not only is it native, but it’s also FREE!!  PowerNap utilizes industry-standard Wake on LAN (WOL) technology.  This gives the end user an incredible ability to power up and down the Emprise 5000 solution via scripts or cron jobs.   

Here is something I like to take prospects through. Let's say you run a VTL (or backup-to-disk) type solution for backup or archiving. With PowerNap, you can run a simple Perl or PowerShell script, as part of the backup process, to spin up the Emprise 5000. So you kick off your backups at 6pm, and it takes just 60 seconds to bring the Emprise 5000 from 24 watts of power up to its full operating power draw of around 500 watts. Impressive, right!! During the day the unit stays in a low-power state, drawing only 24 watts. NOW THAT'S GREEN!!!! Let's say that during the day you need to do a quick restore. You run your restore process within your backup software solution, and part of the restore process is to run the script to spin up the unit. Once the file has been restored, the backup application can issue another script to "PowerNap" the unit. By the way, we have a great Best Practice Guide. If you are interested in it, follow me on Twitter and send me a message. I'll send it over to you.
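
Since the spin-up side is industry-standard Wake on LAN, the "wake it, back it up, nap it" step really can be a handful of lines. Here's a minimal sketch in Python; the MAC address is a placeholder, and the spin-down command at the end is a hypothetical stand-in for whatever interface PowerNap actually exposes.

```python
import socket

def wake_on_lan(mac: str) -> None:
    """Send a standard WOL 'magic packet': 6 x 0xFF followed by the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, ("255.255.255.255", 9))  # UDP port 9 is the usual WOL target

# Placeholder MAC for the storage element's management port.
wake_on_lan("00:11:22:33:44:55")

# ... kick off the backup or restore job here ...

# Then put the unit back into its 24-watt nap using whatever command or API call
# the vendor provides for that step (hypothetical placeholder below):
# subprocess.run(["powernap_cli", "--sleep", "ise01"])
```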

Did I mention the Emprise 5000 comes with FREE 5 Year Hardware Maintenance and the PowerNap feature is FREE TOO!!!!  In the “me-too” world of Storage array features and functions, it’s things like PowerNap and blazingly fast performance like this that make me happy to go to work each and every day !!

@StorageTexan

What does the Pacer, Yugo and Arbitrated Loop have in common?

You are probably running one of them in your datacenter.

 
George Crump recently blogged over at InfoWorld and asked, "Do we really need Tier 1 storage?" It struck me as an interesting topic, and while I disagreed with his reasons for where he put our solution, I tend to agree that the others mentioned are right where they should be. In his article he specifically mentions some of the reasons both the monolithic array manufacturers and the "modular guys" have "issues," and he zeroed in on performance and scalability. Now, his article was speaking about the front-end controllers, but I think he missed out on pointing to the backend architectures as well. I thought this would make a great blog post 🙂 As you recall, in my "Performance Starved Applications" blog and my "Why running your hotel, like you run your Storage array can put you out of business" blog, I said that if you lined up the various storage vendors next to each other, about the only difference is the logo and the software loaded on the controllers.

   
Did you also know that if you looked behind those solutions you would see a large hub architecture – also known as our dear old friend "Mr. Arbitrated Loop"? This is like running your enterprise-wide Ethernet infrastructure on Ethernet hubs. Can you imagine having to deal with them today? For all the same reasons we dropped Ethernet hubs like a bad habit, you should be doing the same thing with your storage array manufacturer if they are using arbitrated loops in their backend storage. Talk about a huge bottleneck to both capacity and performance at scale!! So what's wrong with Fibre Channel Arbitrated Loop (FCAL) on the backend? Well, for starters, it doesn't scale well at all. Essentially you can only reference 126 components (a disk drive, for example) per loop. Most storage arrays support dual loops, which is why you typically see a lot of 224-drive solutions on the market today, with 112 drives per loop – approaching the limit and creating a very long arbitration time. Now, for those that offer more, it's usually because they are doing more loops (typically by putting more HBAs in their controller heads) on the backend. The more loops on the backend, the more you have to rely on your controllers to manage this added complexity. When you are done reading my blog post, go check out Rob Peglar's blog post on storage controller feature creep called "Jack of All Trades, Master of NONE!" At the end of the day, the limitations of FCAL on the backend are nothing new.

  

About 4 years ago we at Xiotech became tired of dealing with all of these issues. We rolled out a full fabric backend on our Magnitude 3D 3000 (and 4000) solution. We deployed this in a number of accounts. Mostly it was used in our GeoRAID/DCI configuration, where we split our controllers and bays between physical sites up to 10 km apart. Essentially each bay was a loop all to itself, directly plugged into a fabric switch. Fast forward to our Emprise product family and we've completely moved away from FCAL on our backend. We are 100% full, non-blocking, sweet-and-as-pure-as-your-mama's-homemade-apple-pie fabric, with all of the benefits that it offers!!

My opinion (are you scooting toward the front of your chair in anticipation?) is that, unless you just enjoy running things on hubs, I would STRONGLY advise that if you are looking at a new storage array purchase, you make sure the vendor is not using 15-year-old architecture on the backend!! If you are contemplating architecting a private cloud, you should first go read my blog post on "Building resilient, scalable storage clouds" and apply the points I've made there to that endeavor. Also, if you really are trying to make a decision about which solution to pick, I would suggest you check out Roger Kelley (@storage_wonk) over at http://www.storagewonk.com/. He talked about comparing storage arrays "apples to apples" and brought up other great differences. Not to mention, Pete Selin (@pjselin) over at his blog talked about "honesty in the storage biz," which was an interesting take on "apples vs. apples" relative to configurations and pricing. Each of these blog posts will give you a better understanding of how we differentiate ourselves in the market.

  

Thanks,        

  @StorageTexan