Monthly Archives: March 2010

The journey to the Unified Storage Platform

Who would have thought such a simple question – really more of “seeking to understand” as my VP of Sales Mark Glasgow calls it – would kick off such a slew of e-mails, comments and tweets. The other day I asked the following question on Twitter:

            @StorageTexan: question 4 all NFS/NTAP/Celerra Luving ppl. wht is ur opinion on y NFS is superior ovr block device access in #VSphere?

I got a TON of feedback, so much so that I decided to ask the question on my blog as a means to collect the responses and to allow others more than 140 characters to respond 🙂  Out of that came some really interesting responses, including George Crump sharing his views in Information Week.

Let’s take a step back. The majority of my storage life has been spent in the block-based access world.  For most of my career it was all about Fibre Channel connectivity.  Have you heard the saying “If all you sell are hammers, then everything in the world is a nail”?  That’s sort of where I was at 5 or so years ago.  FC was my hammer, and all connectivity issues could be resolved with FC!!  Then a few years ago iSCSI started to inch its way into the discussion.  Xiotech adopted it as another option for connectivity and I got to add another tool to my bag 🙂

Then I asked the above question as a means of self-reflection.  Why would someone choose to bypass block-based connectivity in favor of file-based?  It just didn’t seem logical to me.  I even happened to have that particular tool in my bag, so it wasn’t as if I was trying to compete against it; I just wanted to see what the big deal was.  Today, almost all storage vendors (Xiotech included) offer NFS connectivity.  Some utilize gateways, like the EMC Celerra, the NetApp V-Series as well as Xiotech.  Others offer native support, like Pillar Data and the NetApp FAS product line.

At first blush I thought it has to be because IP is viewed as being less expensive and less complicated than native Fibre Channel. But I think at this point the price argument against FC should be put to bed, thanks in large part to Cisco/Brocade/QLogic driving down the costs.  Complexity is also something I think should be, and could be, put to bed.  Just sit in front of a Cisco Ethernet switch and a Cisco MDS switch; the IOS is the same, so the perceived complexity around FC is really no longer an issue.  Now, for the smallest of the SMBs, maybe cost and perceived complexity are enough to choose NFS.  I can see that.

Maybe it’s because these gateway devices offer something their block-based architecture can’t support.  That starts to make sense.  Maybe it’s some sort of feature that drives this decision.   In some cases, maybe it’s thin provisioning, better-integrated snapshots, single-instance storage/data dedupe, or even advanced async replication.  Most storage arrays can do these on the surface, but maybe with a gateway device they can do them better, cheaper, faster, etc.?  For the SME/SMB I can see this as a reason.

Then again, according to some of the people who responded to my blog and Twitter, maybe it’s for performance reasons.  Some sort of ability to cache the front-end writes makes the applications/hypervisors/OSes just run quicker. Others suggested that gateway devices make it just “stupid simple” to add more VMs, because you can essentially treat an NFS mount as a file/folder and just keep dropping VMDKs (essentially files) into those folders for easier management.  That makes sense as well.  I can see that; I mean, if you look at a NAS device, it’s essentially a server that runs an OS with a file system and connects to DAS/JBOD/SBOD/StorageArray on the backend, right?   It could be viewed as a caching engine.
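To put a picture to that “folder of files” idea, here’s a minimal sketch of what mounting an NFS datastore across a whole cluster looks like from the vSphere API side. It assumes the (present-day) pyVmomi bindings, and the vCenter address, credentials, filer name and export path are all made up. One mount call per host and you’re done: no zoning, no LUN masking, no VMFS formatting, and every VMDK is just a file sitting in that export.

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connect to vCenter (hypothetical address and credentials).
context = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=context)
try:
    content = si.RetrieveContent()

    # Grab every ESX host vCenter knows about (in practice you'd scope this
    # to the one cluster you care about).
    host_view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)

    # Describe the export once...
    nas_spec = vim.host.NasVolume.Specification(
        remoteHost="filer.example.com",      # NAS head / gateway serving the export
        remotePath="/vol/vmware_datastore",  # the NFS export itself
        localPath="vmware_datastore",        # datastore name the hosts will see
        accessMode="readWrite")

    # ...then mount the same export on every host. Each VM's VMDKs end up as
    # plain files inside that one shared folder.
    for host in host_view.view:
        host.configManager.datastoreSystem.CreateNasDatastore(nas_spec)
finally:
    Disconnect(si)
```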

Then it dawned on me: it’s not really about one being better than the other, it’s more about choices.  That’s what “Unified Storage” is all about, the ability to add more tools to your bag to help solve your needs.  If you look inside a datacenter today, most companies have internally tiered their applications/servers to some extent.  Not everything runs on the same hardware, software, etc.  You pick the right solution for the right application.  Unified Storage is the ability to choose the right storage connectivity for the various applications/hypervisors and operating systems.  The line gets really blurred as gateway devices get more advanced and better integrated.

Either way, everyone seems to be moving more and more toward Unified Storage devices.  It should be interesting to see what sort of things come out of Storage Networking World in a few weeks!!

@StorageTexan


The Debate – Why NFS vs Block Access for OS/Applications?

I’m going to see if I can use my blog to generate a discussion. First and foremost, I don’t want this to be a competitive “ours is better than yours”.  So let’s keep it civil!!

Specifically, I’m trying to wrap my head around why someone would choose to go NFS instead of block-based connectivity for things like vSphere.  For instance, NetApp supports NFS and block-based FC connectivity.  EMC typically uses CX behind their NS gateway devices.  Most storage companies (Xiotech included) offer a NAS solution – either native or gateway.

So, why would I choose to run, for example, VMware vSphere on NFS when I could just as easily run it on block storage?  Is it that the file system used for that particular company’s NAS solution offers something that their block-based solution can’t (i.e. thin provisioning, native dedupe, replication, etc.)?  Is it used more as some sort of caching mechanism that gives better performance?  Or is it more fundamental, and it’s really just a connectivity choice (IP vs FC vs NFS mount)?

Please feel free to post your responses.

Thanks,

@StorageTexan

VMware Virtual View


Xiotech Virtual View for VMware, Hyper-V and Citrix.

If you are a VMware admin, or a hypervisor admin generally speaking, Xiotech’s “Virtual View” is the final piece to the very large server virtualization puzzle you’ve been working on.   In my role I talk to a lot of server virtualization admins, and their biggest heartburn is adding capacity, or a LUN, to an existing server cluster.  With Xiotech’s Virtual View it’s as easy as 1, 2, 3.  Virtual View utilizes CorteX (a RESTful API) to communicate, in the case of VMware, with the Virtual Center appliance to provision the storage to the various servers in the cluster.  From a high level, here is how you would do it today.

I like to refer to the picture below as the “Rinse and Repeat” part of the process, particularly the part in the middle that describes going to each node of the server cluster to perform various admin tasks.

VMware Rinse and Repeat process


With Virtual View the steps would look more like the following.  Notice it’s “wizard” driven, with a lot of the steps handled for you.  But it also gives you an incredible amount of “knob turning” if you want it as well.

Virtual View Wizard Steps


And for those that need to see it to believe it, below is a quick YouTube video demonstration.

If you run a VMware-specific cluster (for H.A. purposes maybe) of 3 servers or more, then you should be most interested in Virtual View!!!
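For the API-minded, here’s a very rough sketch of the kind of workflow Virtual View wraps up for you. The CorteX URL and JSON fields below are placeholders (this is not the real CorteX resource model), and the vSphere side uses the pyVmomi bindings purely for illustration, so treat it as pseudocode for the workflow rather than the product itself.

```python
import ssl

import requests
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# 1) Ask the ISE, via a CorteX-style REST call, to carve out and present a new
#    volume to the cluster. Endpoint and payload are hypothetical.
resp = requests.post("https://ise.example.com/cortex/volumes",
                     json={"name": "vsphere_cluster_vol01", "size_gb": 1024},
                     auth=("admin", "secret"), verify=False)
resp.raise_for_status()

# 2) Instead of logging into each host and clicking through the same screens
#    ("rinse and repeat"), walk the cluster once and rescan programmatically.
context = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=context)
try:
    content = si.RetrieveContent()
    host_view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in host_view.view:
        storage = host.configManager.storageSystem
        storage.RescanAllHba()   # pick up the newly presented volume
        storage.RescanVmfs()     # refresh VMFS volumes on the host
    # Formatting the new volume as a VMFS datastore and handing it to the
    # cluster would follow here; Virtual View's wizard rolls all of these
    # steps (and the knob turning) into one pass.
finally:
    Disconnect(si)
```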

I’ll be adding some more Virtual View specific blog posts over the next few weeks, so make sure you subscribe to my blog on the right-hand side of this window!!

 If you have any questions, feel free to leave them in the comments section below.    

Thanks,    

@StorageTexan    

PS – here is a quick “commercial for Virtual View”    

10,000 Exchange Users in 3U of space

 

This is a pretty cool video done by the Technical Marketing team at Xiotech.  10,000 Exchange users in 3U of space!!   No fancy/expensive SSD needed for this !!

In summary:

250 VDI instances or

10,000 Exchange Users or

750 DVD-quality videos or

25,000 MP3s

In 3U of space.  Now that’s WICKED FAST !!!

As they say in Minny – “that’s not too shabby!!”

By the way, if by chance 10,000 is just not enough users for you, don’t worry: add a second ISE and DOUBLE IT TO 20,000.  Need 30,000?  Then add a THIRD ISE.  100,000 users in 10 ISEs, or 30U of rack space.  Sniff sniff…. I love it !!!!!!!!!!!!
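And for anyone who wants to check my math, the scaling really is that linear. Here’s the back-of-the-napkin version, using nothing but the numbers above.

```python
# Density math from the post: each ISE holds 10,000 Exchange users in 3U of
# rack space, and the numbers scale linearly as you stack more ISEs.
USERS_PER_ISE = 10_000
RACK_UNITS_PER_ISE = 3

for ise_count in (1, 2, 3, 10):
    users = ise_count * USERS_PER_ISE
    rack_units = ise_count * RACK_UNITS_PER_ISE
    print(f"{ise_count} ISE -> {users:,} Exchange users in {rack_units}U")
```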

By the way – Check out what others are doing:

Pillar Data = 8,500 Exchange Users with 24GB of Cache !!!  I should say, our ISE comes with 1GB.  It’s not the size that counts, it’s HOW YOU USE IT !! 🙂

One Pillar Axiom 600 with one FC Slammer
24GB of cache <—-  WOW !!!!!
4 SSD Bricks for databases, each with:
    Two dedicated RAID controllers
    13 50GB SSDs <— I’m going to guess that these aren’t very cheap.
2 SATA bricks for Logs, each with:
    13 500GB 7,200 RPM SATA disk drives

Hitachi AMS 2300 = 10,800 users – 400+ Pages PLUS !!! <– I have to say it again, WOW 400+ pages on this bad boy !!!

240 300GB 15K RPM SAS disks, <— Ahh ya  – TONZ of spindles !!!  We had 20(ea) 3.5″ Drives to do our testing. 
16GB of cache and
8(ea) 4Gb/s Fibre Channel paths were used for these tests.
Testing used 8 Sun Fire 4600 M2 servers with 32GB of RAM,
four dual-core AMD Opteron CPUs,
8(ea) Emulex 4Gbit/s Fibre Channel adapters and
Windows Server 2003 R2 Enterprise x64 with Service Pack 2.