What’s the deal with Dot Hill?

What an AWESOME last couple of months it has been at Xiotech.  In the 10 years I’ve been with the company, the amount of attention we’ve been getting lately is incredible!!  Just to rattle off a few things: our ISE launch has been a wild success; our Virtual View VMware/Citrix/Hyper-V plug-in has been fantastic; and we’re seeing a huge uptick in VMware and Citrix prospects and resellers wanting to partner with us to move Virtual Desktop opportunities along.  Then there was our VERY cool iPhone app and the new High Performance ISE-NAS launch we did at Synergy in Las Vegas in April.  I haven’t even mentioned that last week I was at Microsoft TechEd in New Orleans and our booth was just PACKED.  We were talking about our recent ESRP paper covering 10,000 Exchange users in 3U of space.  Seems that Microsoft just LOVES DAS, and we just happen to have a REALLY cool Storage Blade DAS solution!!

Now, one of the cool things about the iPhone/iPad app really isn’t the app itself.  Don’t get me wrong, it’s been an awesome talking point, but the real story is our move to CorteX, our RESTful API.  Take the recent business relationship that Dot Hill announced: I think they can clearly see the advantage of a RESTful API, as well as our scalable Storage Blade architecture, and it opens up some really cool possibilities down the road.  Dot Hill just validates our whole concept around the RESTful API.  Commvault is running with RESTful, and we are constantly talking with business partners and prospects in the “Cloud” space who want to see what they could do with our RESTful API and their widget/feature/function, etc.  It sort of reminds me of the “Wonder Twins” from the old “SuperFriends” days!! Then again, it could be that my 9-year-old and twin 7-year-olds have me feeling nostalgic as I watch them enjoying these old-school shows, and I’m just reaching to find some sort of similarity 🙂

At the end of the day, I really enjoy working for a company that is out-innovating the competition.  Applications are getting smarter, and the perceived value-add of some of the traditional storage architectures is getting long in the tooth.  I truly believe we have a fundamentally different architecture that is on the cusp of blasting off.  The days of monolithic or even “virtual” storage arrays are slowly going away.  Look at the success of the NetApp V-Series and some of the things coming out of EMC.

Funny story: at TechEd I had 2 different storage vendors come up to me to ask about our 5-year hardware warranty.  They kept trying to figure out what we were hiding.  One even commented that they have had to change their strategy when competing with us – they now have to add years 4 and 5 into their quote.  JUST PAINFUL for them!!

Oh well, time to get back to work.  It’s 2 weeks until the end of the quarter!!  Anyone need storage for their VDI project? 🙂 We’ll give you a sweet zero-cost 5-year hardware warranty with 24/7/365, 4-hour response!!!  Not to mention we can put about 500 to 700 virtual desktops in 3U of space!!!

Ahhh ya !!

@StorageTexan

Stalled Virtualization Projects – How Xiotech can help you unstick these deployments

Stalled Virtualization Projects? – How Xiotech can help you UN-STICK these deployments.

Xiotech is in a huge partner-recruitment phase this year.  Things have been going fantastically! However, one problem we are running into is getting some of these larger partners to give us the time of day.  Shocking, I know – who wouldn’t want to carve out 60 minutes of their time to talk to “Ziotech” (I didn’t misspell that – it happens to us ALL THE TIME)?  Once we get our foot in the door, it’s usually 20 minutes of them explaining that they carry EMC, NetApp, Pillar, HDS, Compellent, etc., and that they just can’t take on yet another storage vendor.  What’s really interesting is that we typically tell them we don’t want to replace any of their current storage offerings.  This usually leads to a skeptical look 🙂  I usually tell them that we have a very unique ability to “un-stick” Virtual Desktop opportunities. Let me explain a little further.

It never fails – VDI projects seem to get stalled, or simply get stuck in some rut that the prospect or partner can’t get out of.  A stuck project can come in many shapes and sizes.  Sometimes it’s time and effort; sometimes it’s other projects in the way. But the vast majority of the time it’s Cost/ROI/TCO-type things – not just from a justification point of view, but most of the time from the upfront CAPEX view.  This has been especially true with 1,000+ seat solutions.  Like I said, I just keep hearing more and more about these situations from our partners.  What normally follows is, “Well, the project is on hold due to funding issues.” So how can we differentiate ourselves in this kind of opportunity?  Funny you should ask!!

I typically like to describe our ISE as a solution with a VERY unique ability to do 3 things better than the rest:

#1 – We give you true cost predictability over the life of the project. 

Let’s be honest: if you are about to deploy a 5,000+ desktop VDI solution, you are probably going to roll it out over a 3-year time frame, right?  Even if it’s only 500 desktops, why is it that when we look into these solutions we only see 3 years of maintenance on the most expensive piece of CAPEX, which is storage?  By the time you get all 5,000+ systems up and running, it’ll be time for year-4 and year-5 maintenance on your storage foundation.   If this isn’t your first storage rodeo, then you know that years 4 and 5 can be the most painful in regards to cost. What’s really cool about our solution is the “Lego”-style approach to design: we can tell you the cost of each ISE you want to buy, and since they are 3U “blades,” you can simply add the number you need to meet whatever metric you have in place and know the cost of each one.  As you can see, we do “cost predictability” pretty well.

#2 – We give you performance predictability. 

With our 3U Emprise 5000 we have the very unique ability to predict performance characteristics.  This is very difficult for legacy storage vendors: their array’s IOPS pool can swing 80% plus or minus depending on how full the solution is from a capacity point of view.  We deliver 348 SPC-1 IOPS per SPINDLE at 97% capacity utilization, while most engineers sizing legacy storage arrays assume 120 to 150 IOPS per spindle.  Based on that alone, we can deliver the same performance with half the spindles!!
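To put some numbers behind that claim, here’s a quick back-of-the-napkin sketch.  The 348 figure is the SPC-1 result cited above; the 130 is a hedged midpoint of the 120-150 legacy rule of thumb; and the 20,000-IOPS workload is made up purely for illustration:

```python
# Back-of-the-napkin spindle math: how many drives does each approach
# need to hit a target IOPS number? The 348 figure is the SPC-1 result
# cited above; 130 is a midpoint of the 120-150 legacy rule of thumb;
# the 20,000 IOPS target is purely illustrative.
import math

TARGET_IOPS = 20_000

ISE_IOPS_PER_SPINDLE = 348     # SPC-1 IOPS per spindle at 97% utilization
LEGACY_IOPS_PER_SPINDLE = 130  # typical legacy sizing assumption

ise = math.ceil(TARGET_IOPS / ISE_IOPS_PER_SPINDLE)        # 58 spindles
legacy = math.ceil(TARGET_IOPS / LEGACY_IOPS_PER_SPINDLE)  # 154 spindles

print(f"ISE:    {ise} spindles")
print(f"Legacy: {legacy} spindles ({legacy / ise:.1f}x more)")
```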

#3 – We can give you capacity predictability. 

Because of the linearity of our solution, when you purchase the Emprise 5000 we can tell you exactly how much usable, after-RAID capacity you will have available.  Best-practice usable capacity for our solution is 96% full – that’s where we do all of our performance testing.  Compared with the industry average of anywhere from 60% to 80%, your capacity “mileage” will vary!!

So why should this be important to solution providers and customers?  Back to my VDI comments.  If you are evaluating, or even moving down the path to deploying, VDI, how important is it for you to fully understand your storage costs when trying to design it out?  If I could tell you that you can support 2,000 VDI instances in 3U of space, that this 3U of space can hold 19TB of capacity, and that the solution costs $50,000 (I’m pulling this number out of the…..well, air), that could really be a pivotal point in getting your project off the ground, don’t you think?  Like I said, no one deploys a 5,000-seat solution all at once; you do it over a number of years.  With our Storage Blades, you can do just that: simply purchase one ISE at a time.  With its predictable cost, predictable capacity and, most importantly, predictable performance, you have the luxury of growing your deployment over time without having to worry about a huge upfront CAPEX hit.  Not to mention that a 5-year hardware warranty better aligns with the finance side of the house and their typical 5-year amortization process.  No hidden year-4 and year-5 maintenance costs!!
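Just to show how simple the math gets with that “Lego”-style model, here’s a minimal sizing sketch using the illustrative numbers above (the $50,000 is the made-up figure from the paragraph, not a real quote):

```python
import math

# "Lego"-style VDI sizing sketch. All figures are the illustrative ones
# from the paragraph above (the $50,000 was "pulled out of the air").
DESKTOPS_PER_ISE = 2_000   # VDI instances per 3U storage blade
TB_PER_ISE = 19            # usable capacity per blade
COST_PER_ISE = 50_000      # illustrative price per blade, USD
RU_PER_ISE = 3             # rack units per blade

def size_vdi(desktops: int) -> dict:
    """Blades, cost, capacity and rack space for a given seat count."""
    blades = math.ceil(desktops / DESKTOPS_PER_ISE)
    return {
        "blades": blades,
        "cost_usd": blades * COST_PER_ISE,
        "usable_tb": blades * TB_PER_ISE,
        "rack_units": blades * RU_PER_ISE,
    }

# Phase a 5,000-seat project over three years, one purchase per year:
for year, seats in enumerate([2_000, 2_000, 1_000], start=1):
    print(f"Year {year}: +{seats} desktops -> {size_vdi(seats)}")
```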

So, if you are looking at a VDI project, or you’ve looked at one in the past and just couldn’t justify it, give us a call.  Maybe we can help lower your entry costs and get the project unstuck!!

Thanks,

@StorageTexan

Xiotech Storage Blade – 101

How Xiotech Storage Blades have the potential to change the storage paradigm.

It’s inevitable: whether I’m talking with a value-added reseller (VAR) or a net-new prospect, I’m always asked to explain how our solution is so different from everyone else’s.  I figured this was a great opportunity to address it in a blog post.

Xiotech recently released a whitepaper authored by Jack Fegreus of OpenBench Labs.  His ISE overview was so spot on that I wanted to copy/paste some of the whitepaper here.  I would encourage you to read his full whitepaper as well, which includes his testing results.  I’m pretty sure you will be as impressed as I was.

Before you continue reading, I need you to take a moment to suspend everything you understand about storage architecture, both good and bad.  I would like you to read this post with an open mind, setting aside your biases as much as possible.  If you can do this, it will make a LOT more sense.

**********

<Copied from http://www.infostor.com/index/articles/display/3933581853/articles/infostor/openbench-lab-review/2010/april-2010/a-radical_approach.html>

The heart of ISE—pronounced, “ice”— technology is a multi-drive sealed DataPac with specially matched Seagate Fibre Channel drives. The standard drive firmware used for off-the-shelf commercial disks has been replaced with firmware that provides detailed information about internal disk structures. ISE leverages this detailed disk structure information to access data more precisely and boost I/O performance on the order of 25%. From a bottom line perspective, however, the most powerful technological impact of ISE comes in the form of autonomic self-healing storage that reduces service requirements.

In a traditional storage subsystem, the drives, drive enclosures and the system controllers are all manufactured independently. That scheme leaves controller and drive firmware to handle all of the compatibility issues that must be addressed to ensure device interoperation. Not only does this create significant processing overhead, it reduces the useful knowledge about the components to a lowest common denominator: the standard SCSI control set.

Relieved of the burden of device compatibility issues, ISE tightly integrates the firmware on its Managed Reliability Controllers (MRCs) with the special firmware used exclusively by all of the drives in a DataPac. Over an internal point-to-point switched network, and not a traditional arbitrated loop, MRCs are able to leverage advanced drive telemetry and exploit detailed knowledge about the internal structure of all DataPac components. What’s more, ISE architecture moves I/O processing and cache circuitry into the MRC.
 
A highlight of the integration between MRCs and DataPacs is the striping of data at the level of an individual drive head. Through such precise access to data, ISE technology significantly reduces data exposure on a drive. Only the surfaces of affected heads with allocated space, not an entire drive, will ever need to be rebuilt. What’s more, precise knowledge about underlying components allows an ISE to reduce the rate at which DataPac components fail, repair many component failures in-situ, and minimize the impact of failures that cannot be repaired. The remedial reconditioning that MRCs are able to implement extends to such capabilities as remanufacturing disks through head sparing and depopulation, reformatting low-level track data, and even rewriting servo and data tracks.

ISE technology transforms the notion of “RAID level” into a characteristic of a logical volume that IT administrators assign at the time that the logical volume is created. This eliminates the need for IT administrators to create storage pools for one or more levels of RAID redundancy in order to allocate logical drives. Also gone is the first stumbling block to better resource utilization: There is no need for IT administrators to pre-allocate disk drives for fixed RAID-level storage pools. Within Xiotech’s ISE architecture, DataPacs function as flexible RAID storage pools, from which logical drives are provisioned and assigned a RAID level for data redundancy on an ad hoc basis.

What’s more, the ISE separates the function of the two internal MRCs from that of the two external Fibre Channel ports. The two FC ports balance FC frame traffic to optimize flow of I/O packets on the SAN fabric. Then the MRCs balance I/O requests to maximize I/O throughput for the DataPacs.

In effect, Xiotech’s ISE technology treats a sealed DataPac as a virtual super disk and makes a DataPac the base configurable unit, which slashes operating costs by taking the execution of low-level device-management tasks out of the hands of administrators. This heal-in-place technology also allows ISE-based systems, such as the Emprise 5000, to reach reliability levels that are impossible for standard storage arrays. Most importantly for IT and OEM users of the Emprise 5000 storage, Xiotech is able to provide a five-year warranty that eliminates storage service renewal costs for a five-year lifespan.

******************
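Before moving on, here’s a minimal, purely illustrative sketch of the provisioning model Fegreus describes – RAID level as a per-volume attribute chosen at creation time, with the DataPac acting as one flexible pool rather than pre-carved fixed RAID pools.  The class and field names here are hypothetical, not Xiotech’s actual object model or the CorteX API:

```python
# Illustrative only: RAID level as an attribute of the logical volume,
# assigned ad hoc at creation, with the DataPac acting as one flexible
# pool. Names are hypothetical, not Xiotech's actual object model.
from dataclasses import dataclass, field

@dataclass
class DataPac:
    """A sealed DataPac behaving as a single flexible RAID storage pool."""
    usable_gb: int
    allocated_gb: int = 0
    volumes: list = field(default_factory=list)

    def create_volume(self, name: str, size_gb: int, raid_level: str) -> None:
        # No pre-allocated RAID-5 vs RAID-10 pools carved up front:
        # redundancy is chosen per volume, at creation time.
        if self.allocated_gb + size_gb > self.usable_gb:
            raise ValueError("DataPac is full")
        self.allocated_gb += size_gb
        self.volumes.append({"name": name, "gb": size_gb, "raid": raid_level})

pac = DataPac(usable_gb=19_000)
pac.create_volume("exchange-logs", 500, raid_level="RAID-10")
pac.create_volume("vdi-pool-01", 4_000, raid_level="RAID-5")
print(pac.volumes)
```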

Now, I’m going to keep this same open mind when I say the following: the Emprise 5000 storage blade just makes storage controllers better.  We make one ourselves, and we’ve seen it first-hand – we saw a significant jump in performance once we moved from the typical drive bays and drives that everyone else uses to the ISE.  Not to mention that its native switched-fabric architecture allowed us to scale our Emprise 7000 storage controllers to 1PB of capacity.  What’s really cool (open mind for me) is that we’ve improved performance and reliability for a lot of storage controllers – DataCore, FalconStor, IBM SVC and HDS USP-V – not to mention significant boosts for applications and OSes.

Feel free to close your mind now 🙂

@StorageTexan

The Time for Storage Blades

The time for Storage Blades.

I’m writing this blog from the El Dorado Resort in the Riviera Maya, Mexico, while sipping some sort of frozen Blue Hawaiian cocktail, so my life at the moment doesn’t suck.  While most people are catching up on the latest Twilight novel or Andre Agassi’s biography, I’m flipping through the whitepapers and tech docs I’ve been printing out for the last few weeks.  For those of you who do a great job of “unhooking” from work during vacations, I applaud you.

I was reading through a few things, and it’s becoming clear to me that the industry is at a crossroads.  I liken this to the situation server vendors went through just 6 or 7 years ago with the deployment of blade technology.  I recall talking back then with a large independent school district in West Texas about connecting the new IBM BladeCenter to our Magnitude 3D solution.  The BladeCenter was a new thing, and this customer was looking for a way to reduce rack space as well as gain cost predictability when deploying future applications.  Since then, blade servers have proliferated to the point that they’re typically the platform of choice for deploying new server hardware.  Virtual servers have taken this to another level: it’s not uncommon for companies to know exactly how many virtual machines a certain model of blade can support and to buy different types of blades based on those requirements.

I think the storage industry is at this pivot point as well.  We can accurately tell you, within a few GBs, the capacity of a solution we quote, so why is it that we have such a difficult time being that predictable with performance?  We still go speechless when customers ask what sort of performance predictability they can expect today, in 1 year, in 3 years and in 5 years.  This is going to become MORE important as virtual desktops and other IOPS-hogging applications become more mainstream.  I’ve stopped being surprised when channel partners recite the problems they hit after deploying a certain storage vendor’s solution and finding out 3 months later that it was undersized – even though, according to the vendor’s tools, it was sized to the vendor’s best practices – and the customer still runs into performance issues as they ramp up use of the solution.  What’s even more surprising is the storage vendor’s near-giddy response when telling them how to solve the issue.  SSD, anyone?  Can you spare a few tens of thousands of dollars, after you’ve already blown your budget on the acquisition, to purchase 2 SSD drives just to solve a performance issue?

Maybe it’s time for the storage industry to look at Storage Blades as a means to meet this performance/capacity-predictability paradigm.  Imagine looking at storage as a building block in an overall storage strategy that lets you be predictable in both capacity and performance, so sizing your application deployments becomes easier and more cost-effective. Imagine knowing the dollars per IOPS per rack unit over the life of the storage blade – in other words, knowing you can expect X IOPS per rack unit for the 5-year life expectancy of the storage solution.  Imagine being able to do this knowing that the predictability holds with the storage running 97% utilized.  This begins to put you on par with the capacity planning you’ve grown accustomed to.
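Here’s what that metric could look like on paper – a minimal sketch with hypothetical placeholder numbers (neither the price nor the IOPS figure is a real Xiotech spec):

```python
# "Dollars per IOPS per rack unit" over the life of the blade. Every
# input here is a hypothetical placeholder, not a real price or spec.
BLADE_COST_USD = 50_000   # illustrative acquisition cost
BLADE_IOPS = 7_000        # hypothetical sustained IOPS at 97% utilization
BLADE_RU = 3              # rack units per blade
LIFE_YEARS = 5            # 5-year warranty: no year-4/5 maintenance added

iops_per_ru = BLADE_IOPS / BLADE_RU
dollars_per_iops = BLADE_COST_USD / BLADE_IOPS

# With maintenance renewals, the cost numerator would grow in years 4
# and 5; under a 5-year warranty the acquisition cost is the whole story.
print(f"{iops_per_ru:,.0f} IOPS per rack unit, flat for {LIFE_YEARS} years")
print(f"${dollars_per_iops:.2f} per IOPS over the blade's life")
```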

I believe that the Storage Blade architecture can bring the storage industry to that crossroads.  Do we continue piling on features and functions already found in typical applications and OSes, and live with horrible utilization rates and the inability to predict basic performance results?  Or do we stop this controller feature creep and hit the big issues at hand: cost, capacity and performance predictability?  Storage Blades, like their server brethren, may well be the means to this end.  If nothing more, for certain applications, Storage Blades just make perfect sense.

Oh well – I just saw someone order a Frozen Mojito that looks really good.  Back to the grind that is my vacation 🙂

@StorageTexan

The COOL things Xiotech is doing with RESTful API

The COOL things Xiotech is doing with RESTful API!!

(Video Demo at the bottom)

So, if you remember, a few weeks ago I blogged about Commvault launching their RESTful interface connector.   Well, one of the cool things to come out of this spring’s Storage Networking World in Orlando is Xiotech’s release of our CorteX Control Path RESTful API.  As far as I know, Xiotech is the only storage vendor pushing this sort of announcement out.  For cloud providers, this is the ultimate in flexibility for monitoring and managing storage blades.   Just as an example, in the Xiotech booth we are showing our new iPhone/iPad application called ISEview.

This should be in the Apple iTunes store in the next few weeks.  Well, as soon as we can get our developer to stop “tweaking/adding” cool things to it 🙂

Today this app is most useful for showing what can be done when you move to an open, RESTful API.  In talking with the developers and marketing, we have some really cool things we want to add as we move forward.  As you can see above, we can view ISE performance statistics as well as the health of the MRCs (Managed Reliability Controllers), DataPacs, power supplies and cache batteries, plus some log information.  I can’t wait to see what else Karl Morgan and Todd Burkey will add later!!

While the iPhone/iPad app is a “geeky cool thing to do,” again, it speaks to what RESTful can do.  In our SNW booth we are also showing some work that Olocity did in their StorageIM monitoring product.

My understanding is that, using our RESTful API SDK, they were able to add our Emprise 5000 into their monitoring suite in only a few days.
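For a feel of why that integration can go so quickly, here’s a hedged sketch of what polling a blade over a RESTful control path might look like.  The endpoint paths, field names and credentials are hypothetical placeholders, not the actual CorteX API – consult the SDK for the real resource model:

```python
# Hypothetical sketch of polling a storage blade over a RESTful control
# path. Paths, fields and credentials are placeholders, NOT the real
# CorteX API -- consult the SDK for the actual resource model.
import requests  # third-party: pip install requests

BASE_URL = "https://ise.example.com/api"  # placeholder management address

def ise_snapshot(session: requests.Session) -> dict:
    """Return a flat health/performance snapshot for one blade."""
    perf = session.get(f"{BASE_URL}/performance", timeout=10).json()
    health = session.get(f"{BASE_URL}/health", timeout=10).json()
    return {
        "iops": perf.get("iops"),
        "mb_per_sec": perf.get("throughput_mb"),
        "datapacs": health.get("datapacs"),        # DataPac health
        "mrcs": health.get("mrcs"),                # MRC health
        "cache_battery": health.get("cache_battery"),
    }

if __name__ == "__main__":
    with requests.Session() as s:
        s.auth = ("monitor", "secret")  # placeholder credentials
        print(ise_snapshot(s))
```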

Anyway, over the next few days I hope to catch up on some blogging, and also post a summary of “what I learned at SNW,” so stay tuned.  BUT, no promises 🙂

@StorageTexan <–Follow me on twitter!!

PS – thanks to @SFoskett for videotaping a demo of the iPad / iPhone application user interface.

The journey to the Unified Storage Platform

Who would have thought such a simple question – really more of a “seeking to understand” question, as my VP of Sales Mark Glasgow calls it – would kick off such a slew of e-mails, comments and tweets? The other day I asked the following question on Twitter:

            @StorageTexan: question 4 all NFS/NTAP/Celerra Luving ppl. wht is ur opinion on y NFS is superior ovr block device access in #VSphere?

I got a TON of feedback, so much so that I decided to ask the question on my blog as a means to collect the responses and to allow others more than 140 characters to respond 🙂  Out of that came some really interesting responses, including George Crump sharing his views in Information Week.

Let’s take a step back. The majority of my storage life has been spent with block-based access protocols; for most of my career it was all about Fibre Channel connectivity.  Have you heard the saying, “If all you sell are hammers, then everything in the world is a nail”?  That’s sort of where I was 5 or so years ago: FC was my hammer, and all connectivity issues could be resolved with FC!!  Then a few years ago iSCSI started to inch its way into the discussion.  Xiotech adopted it as another option for connectivity, and I got to add another tool to my bag 🙂

Then I asked the above question as a means of self-reflection.  Why would someone choose to bypass block-based connectivity in favor of file-based?  It just didn’t seem logical to me.  I even happened to have that particular tool in my bag, so it wasn’t as if I was trying to compete against it; I just wanted to see what the big deal was.  Today, almost all storage vendors (Xiotech included) offer NFS connectivity.  Some use gateways, like the EMC Celerra, the NetApp V-Series and Xiotech; others offer native support, like PillarData and the NetApp FAS product line.

At first blush I thought it had to be because IP is viewed as less expensive and less complicated than native Fibre Channel. But I think at this point the price argument against FC should be put to bed, thanks in large part to Cisco, Brocade and QLogic driving down costs.  Complexity is also something I think should be – could be – put to bed: sit in front of a Cisco Ethernet switch and a Cisco MDS switch and the IOS is the same, so the perceived complexity of FC is really no longer an issue.  Now, for the smallest of the SMB, maybe cost and perceived complexity are enough to choose NFS.  I can see that.

Maybe it’s because these gateway devices offer something the underlying block-based architecture can’t support.  That starts to make sense.  Maybe it’s some sort of feature that drives the decision: thin provisioning, better-integrated snapshots, single-instance storage/data dedupe, or even advanced async replication.  Most storage arrays can do these on the surface, but with a gateway device maybe they can do them better, cheaper, faster, etc.  For the SME/SMB I can see this as a reason.

Then again, according to some of the people who responded to my blog and Twitter, maybe it’s for performance reasons – some ability to cache front-end writes that makes the applications/hypervisors/OSes just run quicker. Others suggested that gateway devices make it just “stupid simple” to add more VMs, because you can essentially treat an NFS mount as a file/folder and just keep dropping VMDKs (files, essentially) into those folders for easier management.  That makes sense as well.  I can see that; I mean, if you look at a NAS device, it’s essentially a server running an OS with a file system that connects to DAS/JBOD/SBOD/storage arrays on the back end, right?   It could be viewed as a caching engine.

Then it dawned on me: it’s not really about one being better than the other, it’s about choices.  That’s what “Unified Storage” is all about – the ability to add more tools to your bag to help solve your needs.  If you look inside a datacenter today, most companies have internally tiered their applications and servers to some extent.  Not everything runs on the same hardware, software, etc.; you pick the right solution for the right application.  Unified Storage is the ability to choose the right storage connectivity for your various applications, hypervisors and operating systems.  The line gets really blurred as gateway devices get more advanced and better integrated.

Either way, everyone seems to be moving more and more toward the Unified Storage device.  It should be interesting to see what comes out of Storage Networking World in a few weeks!!

@StorageTexan

The Debate – Why NFS vs Block Access for OS/Applications

The Debate – Why NFS vs Block Access for OS/Applications?

I’m going to see if I can use my blog to generate a discussion. First and foremost, I don’t want this to be a competitive “ours is better than yours” thread, so let’s keep it civil!!

Specifically, I’m trying to wrap my head around why someone would choose NFS over block-based connectivity for things like vSphere.  For instance, NetApp supports both NFS and block-based FC connectivity.  EMC typically uses a CX behind their NS gateway devices.  Most storage companies (Xiotech included) offer a NAS solution, either native or gateway.

So why would I choose to run, for instance, VMware vSphere on NFS when I could just as easily run it on block storage?  Is it that the file system used in a particular company’s NAS solution offers something its block-based solution can’t (e.g., thin provisioning, native dedupe, replication)?  Is it used more as some sort of caching mechanism that gives better performance?  Or is it more fundamental – simply a connectivity choice (IP vs FC vs NFS mount)?

Please feel free to post your responses.

Thanks,

@StorageTexan