
VMware Virtual Desktop Infrastructure (VDI) and Why Performance matters

Is storage performance predictability important when building VMware Virtual Desktop (VDI) storage clouds? This also applies to Citrix and Microsoft Hyper-V virtual desktop systems.

Here is yet another great example of why I just love my job. Last week at our Xiotech National Sales Meeting we heard from a net-new educational customer out in the western US. They recently piloted a VDI project with great success. One of the biggest hurdles they ran into, and I would bet other storage cloud (or VDI-specific) providers run into as well, is performance predictability. That predictability is very important. Too often we see customers focus on the capacity side of the house and forget that performance can be extremely important (VDI boot storm, anyone?). Rob Peglar wrote a great blog post called “Performance Still Matters” over at the Xiotech.com blog site. When you are done reading this blog, head over there and check it out 🙂

So, VDI cloud architects should make sure that the solution they design today will meet the requirements of the project over the next 12 months, 24 months and beyond. To make matters worse, they need to consider what happens if the cloud is only 20% utilized versus what happens if/when it becomes wildly successful and utilization is closer to 90% or 95%. The last thing you want to do is add more spindles ($$$) or turn to expensive SSD ($$$$$$$$$) to solve an issue that never should have happened in the first place.

So, let’s assume you already read my riveting, game-changing piece on “Performance Starved Applications” (PSA). VDI is ONE OF THOSE PSAs!! Why is this important? If you are looking at traditional storage arrays (Clariion, EVA, Compellent Storage Center, Xiotech Mag3D, NetApp FAS), it’s important to know that once you get to about 75% utilization, performance drops like my bank account did last week while I was in Vegas. Like a freaking hammer!! That’s just HORRIBLE (the utilization and my bank account). Again, you might ask why that’s important? Well, I have three kids and a wife who went back to college, so funds are not where they should be at…..oh wait (ADD moment), I’m sure you meant horrible about the performance dropping and not my bank account.

So, what does performance predictability really mean? How important would it be to know that every time you added an intelligent storage element (Xiotech Emprise 5000 – 3U) with certain DataPacs you could support 225 to 250 simultaneous VDI instances (just as an example), including boot storms? That would give you an incredible ability to zero in on the costs associated with the storage part of your VDI deployment. This is especially true when moving from a pilot program into a full production rollout. For instance, if you pilot 250 VDI instances but know you will eventually need to support 1,000, you can start off with one Emprise 5000 and grow to a total of four elements. Down the road, if you grow beyond 1,000, you fully understand the storage costs associated with that growth, because it is PREDICTABLE.
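If you like to see that predictability as numbers, here is a rough back-of-napkin sketch in Python. The 225–250 desktops-per-element figure is just the example range from above (I use the top end), and the per-element price is a made-up placeholder, not a real Xiotech quote.

# Rough sketch of "predictable" VDI storage scaling.
# DESKTOPS_PER_ELEMENT comes from the example range above;
# COST_PER_ELEMENT is a hypothetical placeholder, not a real price.
import math

DESKTOPS_PER_ELEMENT = 250
COST_PER_ELEMENT = 100_000

def elements_needed(desktops: int) -> int:
    """How many 3U storage elements a given desktop count requires."""
    return math.ceil(desktops / DESKTOPS_PER_ELEMENT)

for desktops in (250, 500, 1000, 2000):
    n = elements_needed(desktops)
    print(f"{desktops:>5} desktops -> {n} element(s), "
          f"~${n * COST_PER_ELEMENT:,} of storage, {3 * n}U of rack space")

Run it and the pilot-to-production path is obvious: 250 desktops is one element, 1,000 is four, and every step beyond that is the same known increment of cost, power and rack space.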

What could this mean to your environment? It means that if you are looking at traditional arrays, be prepared to pay for capacity you will probably never use without taking a severe hit to performance. What could that mean for the average end user? Their desktop boots slowly, their applications slow down, and your helpdesk phone rings off the hook!! So, performance predictability is crucial when designing scalable VDI solutions, and cost management (financial performance predictability) is every bit as critical.

So if you are looking at VDI, or even building a VDI storage cloud, performance predictability is a great foundation on which to build those solutions. The best storage solution to build your application on is the Xiotech Emprise 5000.

Thanks,

@StorageTexan

Cost Per TB

Greetings and welcome back to my blog!! I think I’m starting to get the hang of this!! Hopefully you will agree!!

In my blog post titled Performance Starved Applications, we specifically zeroed in on those storage array vendors whose best practice is to keep their systems below a certain utilization rate for performance reasons. If you recall, I mentioned that our Emprise product line, built on ISE technology, does not have these issues. Not only can you use more than the industry-standard 75%, WE ENCOURAGE YOU TO RUN IT AT 95% FULL!!!! Heresy, I know!! We are such rebels over here!!

Let me give you some more food for thought on why this should be important to you. Why should you pay for the 25% of capacity you can’t use? This is only a big deal if you don’t have an unlimited budget :). Too often a prospective storage buyer zeroes in on the upfront cost, which in most cases is really misleading. I like to look at this as a “Cost vs. Value” comparison, if that makes sense. If you’ve ever considered “cost per raw TB” as one of the metrics for the purchase of an array, you’ll want to read further!!

Full disclosure here; I “borrowed” ALL of the following information from Roger Kelley, Principal Storage Architect for the Xiotech Western Region.  He put the “Rock” in Rock Star and I’m not above “borrowing” his ideas!!  His are ALWAYS better than mine anyway!!!  Don’t they say stealing is the perfect form of flattery?  I’m pretty sure that’s how it goes 🙂

Cost vs. Value

If I offered to sell you a 100TB SAN for $1,000, how many would you buy? Would you consider $10.00 per TB a good deal? I mean, COME ON, it’s freaking $10.00 per TB!! What if you bought it, got it to your datacenter, installed it, powered it on, opened up your server and found out that all but 1GB of capacity was used for other things? The upfront cost may be AWESOME, but it’s far from a good value, right? If you can only use 1GB of that 100TB solution, you are actually paying $1,000.00 per GB, or $1,024,000 per TB!!! OUCH.
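If you want to see that gap spelled out, here is a quick sketch using the deliberately absurd numbers from the example above; nothing in it is a real price, it just separates “cost per raw TB” from “cost per usable TB.”

# Cost per raw TB vs. cost per usable TB, using the example above.
system_price = 1_000.0   # dollars
raw_tb       = 100.0     # advertised (raw) capacity
usable_gb    = 1.0       # what you can actually write to in this example

cost_per_raw_tb    = system_price / raw_tb
cost_per_usable_gb = system_price / usable_gb
cost_per_usable_tb = cost_per_usable_gb * 1024   # 1 TB = 1,024 GB here

print(f"Cost per raw TB:    ${cost_per_raw_tb:,.2f}")      # $10.00
print(f"Cost per usable GB: ${cost_per_usable_gb:,.2f}")   # $1,000.00
print(f"Cost per usable TB: ${cost_per_usable_tb:,.2f}")   # $1,024,000.00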

We at Xiotech have referred to this as the industry’s “dirty little secret”. The secret is all about highlighting the upfront cost based on “raw” capacity, which makes the configs look great. As you can see from the example above, that doesn’t make it a good value. Upfront cost per raw TB, in my opinion, is not a good thing to focus on. In my PSA blog we talked about the common industry “best practice” of about 75% usable, right? Probably one of the best discussions of this dirty secret happened over at Chuck Hollis’ blog back in 2008. He basically had a “slap fight” comparing his EMC (CX4 = 70%) with NetApp (FAS = 60%) and HP (EVA = 70%) on their various utilization rates. (STOP, don’t leave yet – finish my blog, comment at the bottom how great this was, and then go to his and reference my blog while you leave him a comment.) The comments are where you want to spend your time; it’s just a classic slap fight. You should bookmark it for future reference. If I can take a moment for some gratuitousness, I just want to do a little SHOUT OUT to Chuck Hollis. I want to be you and Rob Peglar when I grow up!!

Again, I’m not trying to pick on anyone in particular. This could be any one of the following solutions (or all of them): Clariion, EVA, FAS, Compellent Storage Center, Xiotech Magnitude 3D, etc. These solutions are typical and represent a vast number of installed arrays. At the end of the day, it’s important not to get wrapped around the axle on “cost per raw capacity” but to zero in on “cost per usable capacity”. Unless you just like paying more money than you should!!
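To put rough numbers on that, here is a quick sketch of what the same price per raw TB turns into once you apply each vendor’s utilization ceiling from the discussion above. The array price and raw capacity are made-up placeholders; only the usable percentages come from the post.

# Effective cost per usable TB at different "best practice" ceilings.
# Price and raw capacity are hypothetical; percentages are from the post.
system_price = 200_000.0   # hypothetical array price
raw_tb       = 100.0       # hypothetical raw capacity

best_practice = {
    "NetApp FAS (60%)":          0.60,
    "EMC CX4 (70%)":             0.70,
    "HP EVA (70%)":              0.70,
    "Industry 'standard' (75%)": 0.75,
    "ISE run at 95%":            0.95,
}

print(f"Cost per raw TB: ${system_price / raw_tb:,.2f}")
for name, usable in sorted(best_practice.items(), key=lambda kv: kv[1]):
    cost_per_usable_tb = system_price / (raw_tb * usable)
    print(f"{name:<28} -> ${cost_per_usable_tb:,.2f} per usable TB")

Same array, same sticker price; the only thing that changes is how much of it you are actually allowed to fill.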

Thanks,

@StorageTexan

Performance Starved Applications

In my role here at Xiotech I get to spend a lot of time with prospects as well as customers. It’s probably why I love my job so much. My wife seems to think it has more to do with my A.D.D. (Attention Deficit Disorder) and the ability to change topics and discussions on a daily basis than it does anything else. She’s probably right; at least she likes to point out just how right she typically is!! But that’s probably a separate blog posting 🙂 So, back to my role and the blog topic at hand.

Have you heard the phrase “Performance Starved Applications,” or PSA? Hopefully this term is not new to you. PSAs are simply applications that are not performing correctly, due in large part to one of two things: either the vendor failed to size the solution correctly, or, if it was sized properly, the customer failed to understand the caveats once it was deployed. What sort of caveats might those be? In typical SBOD (Switched Bunch of Disks) arrays, as the storage system fills up, the performance drops like a hammer. This isn’t just a one-vendor issue. Have you noticed that a lot of storage vendors use the same parts to build their arrays? For example, place a Clariion, EVA, FAS, Storage Center and Magnitude 3D next to each other. About the only difference is the logo and the software loaded on the controllers. In most cases they are using the exact same drive bays, shuttles and drives. So, they all suffer from the same issue and they all have a similar caveat when it comes to just how full you can load up their system. The industry has settled on about 75%.

So hopefully we all agree: as the storage system fills up, the performance goes down at just about the same rate. To give you an example, a typical enterprise-quality Seagate drive can produce about 300 “typical” IOPS when empty. At 50% full, it drops to about 50% of that performance, or 150 IOPS; at 75% full, it’s closer to 119 IOPS. If we can agree on those numbers, the rest should make sense. Let’s play with some numbers. Say a customer needs 20TB of 15K enterprise drives, and we use those new spiffy 300GB 15K enterprise Seagate drives. Using my “South Texas” math, that’s 20TB / 300GB drives = 67 drives. (Before you say it, yes, I know a 300GB drive is NOT really 300GB, but that’s another blog topic altogether – not to mention all the sparing needed for that number of spindles, RAID overhead, etc.) So if we take 67 drives times 300 IOPS, we get 20,100 IOPS. SWEET. But let’s be realistic: nobody keeps their storage solution empty. So, should we use 50%? That drops the number down to 10,050 IOPS. Not bad, but let’s be really serious here, NO WAY does a customer only use 50% of their capacity, right? Most of my customers use 90% of their capacity, but we’ll settle on the industry-standard 75%, so the IOPS pool would be around 8,000 (in case you were curious about the 90% number, it’s 105 IOPS per drive, so that would be about 7,000). So I put this into a nice graph, because I’m a big fan of pictures!!

[Graph: usable IOPS pool for 67 drives at 0%, 50%, 75% and 90% full]

So, that’s eye opening.

You essentially paid for 20,100 IOPS, but you only get to use about 8,000. Now think of some of the transactional applications you are running, like e-mail and databases, and ask yourself, “How are they running?” It wouldn’t surprise me if you start to remember a user here or there mentioning that at times their email just sits and waits, or that database reports you need take hours to run. Those are what we like to call PSAs. And truthfully, most of the time the storage array is the last thing people think of.
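If you want to recreate the numbers behind that graph yourself, here’s a quick back-of-the-napkin sketch; the per-drive IOPS figures at each fill level are the rough ones quoted above, not vendor-published specs.

# Reproduces the "South Texas math": usable IOPS pool for a 20TB
# requirement built from 300GB 15K drives, at different fill levels.
import math

required_tb = 20
drive_gb    = 300
drive_count = math.ceil(required_tb * 1000 / drive_gb)   # ~67 drives

# Rough per-drive IOPS at each fill level, per the figures quoted above.
iops_per_drive_at_fill = {0.00: 300, 0.50: 150, 0.75: 119, 0.90: 105}

for fill, per_drive in iops_per_drive_at_fill.items():
    pool = drive_count * per_drive
    print(f"{fill:>4.0%} full: {per_drive} IOPS/drive x {drive_count} drives "
          f"= {pool:,} IOPS in the pool")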

So, how can you solve this issue? The typical answer from vendors is “Buy 50% more capacity.” Problem is, you’ve then bought more than you need and will never use it 🙂 Instead, check out one of our Emprise solutions. They are built on our patented (80+ patents) storage system called the Intelligent Storage Element (ISE). The ISE suffers from no such performance degradation. You can check out a couple of YouTube videos here – http://www.youtube.com/XiotechCorporation

So, the next time you think your applications are running slow or your end users complain of performance problems, you might want to check and see just how full your array is. If you would enjoy using 100% of both the performance and the capacity you purchased, then give Xiotech a call!!!