Tag Archives: Citrix

Stalled Virtualization Projects? – How Xiotech can help you un-stick these deployments

Xiotech is in a huge partner recruitment phase this year.  Things have been going fantastic!  However, one problem we keep running into is getting some of the larger partners to give us the time of day.  Shocking, I know – who wouldn’t want to carve out 60 minutes of their time to talk to “Ziotech” (I didn’t misspell that – it happens to us ALL THE TIME)?  Once we get our foot in the door, it’s usually 20 minutes of them explaining that they already carry EMC, NetApp, Pillar, HDS, Compellent, etc., and that they just can’t take on yet another storage vendor.  What’s really interesting is that we typically tell them we don’t want to replace any of their current storage offerings.  This usually earns us a skeptical look 🙂  Then I tell them we have a unique ability to “un-stick” virtual desktop opportunities.  Let me explain a little further.

It never fails – VDI projects seem to get stalled, or simply stuck in some rut the prospect or partner can’t get out of.  A stuck project comes in many shapes and sizes.  Sometimes it’s time and effort, sometimes it’s other projects in the way, but the vast majority of the time it’s cost/ROI/TCO concerns – not just from a justification point of view, but from the up-front CAPEX view.  This has been especially true with 1,000+ seat solutions.  Like I said, I just keep hearing about these situations from our partners.  What normally follows is, “well, the project is on hold due to funding issues.”  So how can we differentiate ourselves in this kind of opportunity?  Funny you should ask!!

I typically describe our ISE as a solution with a unique ability to do three things better than the rest.

#1 – We give you true cost predictability over the life of the project. 

Let’s be honest: if you are about to deploy a 5,000+ desktop VDI solution, you are probably going to do it over a three-year time frame, right?  Even if it’s only 500 desktops, why is it that when we look into these solutions we only see three years of maintenance on the most expensive CAPEX item, which is storage?  By the time you get all 5,000+ systems up and running, it’ll be time for year 4 and 5 maintenance on your storage foundation.  If this isn’t your first storage rodeo, you know that years 4 and 5 can be the most painful in terms of cost.  What’s really cool about our solution is the “Lego”-style approach to design.  We can tell you the cost of each ISE you want to buy; since they are 3U “blades,” you simply add the number you need to meet whatever metric you have in place, and you know the cost of each one.  As you can see, we do cost predictability pretty well.
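To put rough numbers on that year-4/5 cliff, here’s a back-of-the-napkin sketch.  The CAPEX figure and the 18% post-warranty maintenance rate are invented for illustration – only the shape of the math matters:

```python
# Hypothetical comparison: storage quoted with only 3 years of maintenance
# vs. a 5-year hardware warranty bundled in. The CAPEX and the 18%/year
# post-warranty maintenance rate are made-up numbers.
capex = 300_000
annual_maint = capex * 18 // 100        # assumed year-4/5 maintenance rate

three_yr_warranty_total = capex + 2 * annual_maint   # pay for years 4 and 5
five_yr_warranty_total = capex                       # years 4-5 already covered

print(three_yr_warranty_total - five_yr_warranty_total)  # the hidden cost
```

Whatever the real percentages are for a given array, the point stands: two unbudgeted maintenance years can be a meaningful slice of the original purchase price.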

#2 – We give you performance predictability. 

With our 3U Emprise 5000 we have the unique ability to predict performance characteristics.  This is very difficult for legacy storage vendors: their array’s IOPS pool can swing plus or minus 80% depending on how full the array is from a capacity point of view.  We deliver 348 SPC-1 IOPS per SPINDLE at 97% capacity utilization.  Keep in mind that most engineers size legacy storage arrays at 120 to 150 IOPS per spindle.  Based on that alone, we can deliver the same performance with half the spindles!!
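A quick sketch of the spindle math, using the per-spindle figures above (the 50,000 IOPS workload target is a made-up example):

```python
import math

def spindles_needed(target_iops: int, iops_per_spindle: int) -> int:
    """Round up: you can't buy a fraction of a disk."""
    return math.ceil(target_iops / iops_per_spindle)

# Figures from the post: 348 SPC-1 IOPS per spindle for ISE, vs. the
# 120-150 IOPS per spindle typically assumed for legacy arrays.
target = 50_000                                  # hypothetical workload
ise_spindles = spindles_needed(target, 348)      # 144 spindles
legacy_spindles = spindles_needed(target, 130)   # 385 spindles (midpoint rate)

print(ise_spindles, legacy_spindles)
```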

#3 – We can give you capacity predictability. 

Because of the linearity of our solution, when you purchase the Emprise 5000 we can tell you exactly how much usable, after-RAID capacity you will have available.  Best-practice usable capacity for our solution is 96% full – that’s where we do all of our performance testing.  Compared with the industry average of anywhere from 60% to 80%, your capacity “mileage” will vary!!
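In other words, usable capacity is just after-RAID capacity times the fill level you can actually run at.  A one-liner makes the gap obvious (the 19TB per-element figure is hypothetical):

```python
def usable_tb(raw_after_raid_tb: float, fill_level: float) -> float:
    """Capacity you can actually fill before best practice says stop."""
    return raw_after_raid_tb * fill_level

raw = 19.0                     # hypothetical after-RAID TB in one 3U element
print(usable_tb(raw, 0.96))    # ISE best practice: ~18.2 TB
print(usable_tb(raw, 0.70))    # midpoint of the 60-80% range: ~13.3 TB
```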

So why should this be important to solution providers and customers?  Back to my VDI comments.  If you are evaluating VDI, or even moving down the path to deploy it, how important is it to fully understand your storage costs when designing it out?  If I could tell you that you can support 2,000 VDI instances in 3U of space, that those 3U can hold 19TB of capacity, and that the solution costs $50,000 (I’m pulling this number out of the…..well, air), that could really be a pivotal point in getting your project off the ground, don’t you think?  Like I said, no one deploys a 5,000-seat solution all at once; you do it over a number of years.  With our storage blades, you can do just that – you simply purchase one ISE at a time.  With its predictable cost, capacity and, most importantly, predictable performance, you have the luxury of growing your deployment over time without a huge up-front CAPEX hit.  Not to mention that a 5-year hardware warranty better aligns with the finance side of the house and its typical 5-year amortization process.  No hidden year 4 and 5 maintenance costs!!
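Using the admittedly made-up numbers above, the per-seat and phased-purchase math looks like this:

```python
# All figures are the post's hypothetical example: one 3U element at
# $50,000 supporting 2,000 VDI seats. With a fixed per-element price,
# both per-seat cost and each year's CAPEX are knowable up front.

def per_seat_cost(cost_per_element: float, seats_per_element: int) -> float:
    return cost_per_element / seats_per_element

def phased_spend(elements_per_year: list[int], cost_per_element: int) -> list[int]:
    """CAPEX per year when you buy elements only as the rollout grows."""
    return [n * cost_per_element for n in elements_per_year]

print(per_seat_cost(50_000, 2_000))       # 25.0 dollars of storage per seat
print(phased_spend([1, 1, 1], 50_000))    # [50000, 50000, 50000] over 3 years
```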

So, if you are looking at a VDI project, or you’ve looked at one in the past and just couldn’t justify it, give us a call.  Maybe we can help lower your entry costs and get this project unstuck!!

Thanks,

@StorageTexan

Cool things Commvault is doing with REST

I’ve always been a HUGE fan of Commvault.  They just rock.  When I was a Systems Engineer back in Austin in the early 2000s, I don’t think there was an account I didn’t take Commvault into to try to solve a customer’s backup issues.  AND WE DIDN’T EVEN SELL COMMVAULT!!!  They had such cool technology, clearly leaps and bounds above everyone else – not to mention some really cool people working for them as well (shout out to Jeanna, Joelle, RobK and of course Mr. Cowgil).

Fast forward a few years, and the release of Simpana along with the addition of native deduplication clearly gave Data Domain and various other deduplication solutions a run for their money.  You would think that would be enough for one company!!  I was pretty excited about their recent press release around adding cloud data storage as a tier option in Simpana.  Dave Raffo over at SearchDataBackup.com did a really nice job of summarizing the announcement.  It’s a clear sign that Commvault is still very much an engineering-driven organization.  Which is just AWESOME!!

I think the biggest nugget I pulled out of the press release is Commvault’s ability to integrate REST natively.  The more I hear about REST’s potential, the more excited I get about some of the endless possibilities it offers.  In this case, it allowed Commvault to easily extend their backup architecture to include third-party cloud solutions like Amazon S3, EMC Atmos and a slew of others.  They didn’t need to build against a proprietary API for each vendor; they just relied on REST’s uniform, open interface to do that for them.
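To illustrate the idea (this is my own sketch, not Commvault’s actual code), a backup engine that speaks only generic object-store verbs never needs a vendor-specific API – any REST-style store that answers PUT/GET/DELETE plugs in.  The in-memory store below stands in for a real HTTP endpoint so the sketch is self-contained:

```python
class InMemoryObjectStore:
    """Stand-in for an HTTP object store so the sketch runs offline.
    A real implementation would issue PUT/GET/DELETE over HTTP."""
    def __init__(self):
        self._objects = {}
    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data
    def get(self, key: str) -> bytes:
        return self._objects[key]
    def delete(self, key: str) -> None:
        del self._objects[key]

def tier_out(store, backup_id: str, chunks: list[bytes]) -> list[str]:
    """Copy backup chunks to whichever REST store was configured.
    Note the function never cares which vendor is behind `store`."""
    keys = []
    for i, chunk in enumerate(chunks):
        key = f"{backup_id}/chunk-{i:04d}"
        store.put(key, chunk)     # an HTTP PUT against the real endpoint
        keys.append(key)
    return keys

cloud = InMemoryObjectStore()     # could be S3- or Atmos-backed; same verbs
keys = tier_out(cloud, "job-42", [b"alpha", b"beta"])
print(keys)  # ['job-42/chunk-0000', 'job-42/chunk-0001']
```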

If you haven’t had a chance, you should check out Brian Reagan’s blog posting that mentions something we are calling CorteX.  Essentially, CorteX is our REST-based ecosystem through which developers can gain access to our Emprise solutions.  This is the next evolutionary step in our ongoing open-architecture capabilities.  As some of you are aware, we’ve been touting our WebServices Software Development Kit for some time.  It has allowed us to do things like VMware Virtual View, which ties directly into Virtual Center to give VMware admins unprecedented abilities, and it let Microsoft developers create a provisioning application called SANMAN that integrates some of their processes directly with our storage.  A RESTful API will take this to a greater level.  Just as Commvault was able to tie directly into public cloud storage providers, CorteX will give developers unprecedented abilities to do really cool things.

I’ve probably said more than I should 🙂 So I’ll leave it with “more to come on CorteX as we get ready to release.”  I’ve probably stolen enough of Brian’s thunder to get an e-mail from him!!  It’s always good to hear from a Sr. VP, right?!

So, keep an eye on Xiotech over the next couple of months, and start paying attention to vendors that support RESTful APIs!!!

Thanks,

@StorageTexan

How to build resilient, scalable storage clouds and turn your IT department into a profit center

If you’ve been living under a rock for the last year, the topic of cloud-based computing might be new to you.  Don’t worry about it at this point; there are CLEARLY more questions than answers on the subject.  I get asked at just about every meeting what my interpretation of “cloud” is.  I normally describe it as an elastic, utility-based environment that, when properly architected, can grow and shrink as resources are provisioned and de-provisioned.  It’s a move away from “silo-based” infrastructure and into a more flexible, scalable, utility-based solution.  From a 30,000-foot view, I think that’s probably the best way to describe it.  Then the conversation usually rolls to “so, how does your solution compare to others” relative to cloud.  Here is what I normally talk about.

First and foremost, we have sold solutions that are constructed just like everyone else’s.  Our Magnitude 3D 4000 product line is built with pretty much the exact same pieces and parts as Compellent, NetApp FAS, EMC Clariion, HP EVA, etc.: Intel-based controller motherboards, QLogic HBAs, and Xyratex or other SBOD drive bays connected via arbitrated loops.  Like I’ve said in prior posts, line each of these up, remove the branding, and you wouldn’t be able to tell the difference – they all use the same commodity parts.  Why is this important?  Because none of those solutions would work well in a cloud-based architecture, for all the reasons I’ve pointed out in my “Performance Starved Application” post as well as my “Cost per TB” post: THEY DON’T SCALE WELL, and they have horrible utilization rates.  If you really want to build a storage cloud, you have to zero in on its most important aspects, or what I like to refer to as “The Fundamentals.”

First, you MUST start with a SOLID foundation.  That foundation must not require a lot of care and feeding, and it must be self-healing.  With traditional storage arrays, you could end up with 100, 200 or even 1,000 spinning disks.  Do you really want to spend the time (or the HUGE maintenance dollars) swapping out and dealing with bad disks?  Look, don’t get me wrong, I get more than a few eye rolls when I bring this up.  At the end of the day, if you’ve never had to restore data because of a failed drive, or any other issue related to failed disks, then this is probably not high on your list of worries.  For that reason, I’ll simply say: why not go with a solution that guarantees you won’t have to touch the disks for 5 years and backs it up with FREE HARDWARE MAINTENANCE (24/7/365, 4-hour response)!!  Talk about putting your money where your mouth is.  From a financial point of view, who cares if you’ve never had to mess with a failed drive – it’s freaking FREE HARDWARE MAINTENANCE for 5 years!!

Secondly, it MUST have industry-leading performance – not just “bench-marketing”-type performance, but real, audited, independent, third-party-validated performance numbers.  The benchmarks from the Storage Performance Council are a great example of third-party validation.  You can’t just slap SSD into an array and say “I have the fastest thing in the world.”  Here’s a great example: if you are designing a Virtual Desktop Infrastructure, then performance should be at the top of your design criteria (boot storms).  Go check out my blog post on the subject, “VDI and why performance matters.”

Finally, you need the glue that holds all of this together from a management and reporting point of view.  WebServices is that glue.  It’s the ubiquitous open-standard tool on which many, many application solutions have been built.  We are the only storage company that builds its management and reporting on WebServices, and we have a complete WSDL to prove it.  No company epitomizes the value of WebServices better than Microsoft: just Google “SANMAN XIOTECH” and you’ll see that the folks out in Redmond have developed their own user interface to our solution (via our WSDL) to enable automated storage provisioning.  HOW AWESOME IS THAT!!  Not to mention, WebServices also gives you the ability to develop “chargeback” options, which can turn the information technology department into a profit center.  We have a GREAT customer reference in Florida that has done this very thing.  They’ve turned their IT department into a profit center and used those funds to refresh just about everything in their datacenter.
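A chargeback model can be as simple as metering each department’s provisioned capacity (the kind of data a WebServices reporting interface exposes) and billing a flat internal rate.  The rate and department names below are made up for illustration:

```python
# Hypothetical internal billing rate, in dollars per GB-month.
RATE_PER_GB_MONTH = 0.25

def monthly_chargeback(usage_gb: dict[str, float]) -> dict[str, float]:
    """Turn per-department provisioned GB into a monthly internal bill."""
    return {dept: round(gb * RATE_PER_GB_MONTH, 2) for dept, gb in usage_gb.items()}

usage = {"Finance": 1200.0, "Engineering": 5400.0}   # metered capacity
print(monthly_chargeback(usage))  # {'Finance': 300.0, 'Engineering': 1350.0}
```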

So those are the fundamentals.  In my opinion, those are the top three things you need to address before you move any further into the design phase.  Once your foundation is set, you can zero in on some of the value-added attributes you would like to offer as a service in the cloud: things like CDP, CAS, deduplication, replication, NAS, etc.

@StorageTexan <– Follow Me on Twitter !!!

VMware Virtual Desktop Infrastructure (VDI) and Why Performance matters

Is storage performance predictability important when building VMware Virtual Desktop (VDI) storage clouds?  This also applies to Citrix and Microsoft Windows Hyper-V virtual desktop systems.

Here is yet another great example of why I just love my job.  Last week at our Xiotech National Sales Meeting we heard from a net-new educational customer out in the western US.  They recently piloted a VDI project with great success.  One of the biggest hurdles they ran into – and I would bet other storage cloud (or VDI-specific) providers run into it as well – is performance predictability.  This predictability is very important.  Too often we see customers focus on the capacity side of the house and forget that performance can be extremely important (VDI boot storm, anyone?).  Rob Peglar wrote a great post called “Performance Still Matters” over at the Xiotech.com blog site.  When you are done reading this blog, head over and check it out 🙂

So, VDI cloud architects should make sure that the solution they design today will meet the requirements of the project over the next 12 months, 24 months and beyond.  To make matters worse, they need to consider what happens if the cloud is only 20% utilized, as well as if/when it becomes wildly successful and utilization climbs to 90% or 95%.  The last thing you want to do is add more spindles ($$$) or turn to expensive SSD ($$$$$$$$$) to solve an issue that should never have happened in the first place.

So, let’s assume you already read my riveting, game-changing piece on “Performance Starved Applications” (PSAs).  VDI is ONE OF THOSE PSAs!!!  Why is this important?  If you are looking at traditional storage arrays (Clariion, EVA, Compellent Storage Center, Xiotech Mag3D, NetApp FAS), it’s important to know that once you get to about 75% utilization, performance drops like my bank account did last week while I was in Vegas.  Like a freaking hammer!!  That’s just HORRIBLE (the utilization and my bank account).  Again, you might ask why that’s important.  Well, I have three kids and a wife who went back to college, so funds are not where they should be…..oh wait (ADD moment), I’m sure you meant horrible about the performance dropping and not my bank account.  So, what does performance predictability really mean?  How important would it be to know that every time you added an intelligent storage element (a 3U Xiotech Emprise 5000) with certain DataPacs, you could support 225 to 250 simultaneous VDI instances (just as an example), boot storms included?  This would give you an incredible ability to zero in on the costs associated with the storage part of your VDI deployment – especially when moving from a pilot program into a full production rollout.  For instance, if you pilot 250 VDI instances but know that you will eventually need to support 1,000, you can start with one Emprise 5000 and grow it to a total of four elements.  Down the road, if you grow beyond 1,000, you fully understand the storage costs associated with that growth, because it is PREDICTABLE.
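The pilot-to-production math above is simple enough to sketch (250 instances per element is the high end of the example figure, matching the one-element pilot and four-element production numbers):

```python
import math

def elements_for_seats(seats: int, seats_per_element: int = 250) -> int:
    """3U storage elements needed, assuming 250 VDI instances per element
    (the high end of the post's 225-250 example range)."""
    return math.ceil(seats / seats_per_element)

# Pilot to production: 250 seats today, 1,000 planned, maybe more later.
for seats in (250, 1000, 2500):
    print(seats, elements_for_seats(seats))
```

Because each element adds a known number of seats at a known price, the growth curve holds no surprises – which is the whole point of predictability.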

What could this mean to your environment?  If you are looking at traditional arrays, be prepared to pay for capacity that you will probably never use without a severe hit to performance.  What could that mean for the average end user?  Their desktop boots slowly, their applications slow down, and your helpdesk phone rings off the hook!!  So performance predictability is crucial when designing scalable VDI solutions, and cost management (financial performance predictability) is every bit as critical.

So if you are looking at VDI or even building a VDI Storage Cloud then performance predictability would be a great foundation on which to build those solutions.  The best storage solution to build your application on is the Xiotech Emprise 5000.

Thanks,

@StorageTexan