Tag Archives: VDI

What is your Windows 7 strategy?

 

My new favorite question!!  “What is your Windows 7 strategy?”  What appears to be a pretty straightforward question has led to some FANTASTIC discussions around Virtual Desktop opportunities lately.  It’s been my recent experience that most companies have been on Windows XP a few years longer than they would like to be, so Windows 7 is at the forefront of their minds.  Once the question is asked, it usually moves toward a Virtual Desktop (VDI) discussion pretty quickly.  I’ve found that Windows 7 migration is one of the largest catalysts for getting momentum behind a VDI architectural discussion (the others being VDI security, desktop disaster recovery, and “bring your own computer,” or BYOC).  This makes sense, since most of the time you buy a new PC with Windows 7 rather than upgrade an existing desktop to Windows 7.  This is especially true if the existing desktop hardware is at least two years old.

 

The upside to the VDI and Windows 7 discussion is this: it’s about as good a time as any to move to VDI.  You already have to replace the desktop hardware, so you might as well investigate the viability of virtualizing the desktop.  Sometimes it takes a little more than just a Windows 7 strategy to get things moving.  VDI also has some really nice side benefits, like helping with local and remote users and their disaster recovery ramifications.  For instance, for a few years I ran the Systems Engineering group for my region at Xiotech.  At the time I had 8 or 9 engineers, and it never failed that at least once a quarter someone’s laptop hard drive would crash or, better yet, get stolen – it always seemed that the AEs had the most issues 🙂 .  Each time that happened it took that SE or AE two or three days to get back up to speed, and that’s if they had a good backup and we could ship them a replacement laptop quickly.  If not, it was a week.  What a VDI architecture could have delivered was a “desktop/laptop disaster recovery process.”  In this case, they could have simply stopped into a store such as Best Buy, Target or Fry’s Electronics, purchased a laptop, walked into a Starbucks (WiFi), downloaded the VDI client, and away they would go.  Now, if you don’t have remote users, think of the ability to swap out a failed desktop and have the end user up and running in an hour.  It’s HUGE!!!  The whole BYOC (Bring Your Own Computer) concept really can take this to the next level.  Imagine the helpdesk team never having to image a laptop or desktop hard drive ever again.  You give the end users a set dollar amount, require them to pick up 2 or 3 years of 24/7 phone support on the laptop/desktop, and just supply them with remote desktop access.

 

Let’s look at the security enhancements you get with a Virtual Desktop architecture.  If you are a financial trading company, maybe you want to lock down the company’s information as tightly as possible.  If you are a K-12 or university facility, maybe you want to keep students from altering the OS or applications in ways that might affect the next student.  The ability for a new desktop instance to be spawned at each unique login means that if a student loads a virus into the desktop, the next time someone logs in they get a fresh, virus-free image.  What seems to have gotten the most attention in these K-12 institutions is the ability to offer the “computer lab” applications to home users.  Imagine a student logging into a VDI session from anywhere in the world and having the same level of access as if they were on campus – the same look, feel, etc.  Maybe the student has an iPad or a netbook.  They can then access the lab 24/7/365.

So if you are struggling with the concept of VDI and are looking for different ways to justify it, hopefully I’ve left you with a few things to consider.  If nothing more, ask yourself (or the desktop team) the following: “What is our Windows 7 strategy?”

@StorageTexan

The Emprise 9000 – Scale-Out SAN Architecture

If you look up Emprise in the Merriam-Webster dictionary you will see that it means “an adventurous or daring enterprise.”  That pretty much describes the Emprise product family’s launch 2 years ago.  We did something that no one else was doing then, or is doing today.  Imagine being able to start from scratch on a storage solution – and I’m not talking about controller software; I’m talking about completely re-engineering/re-architecting a solution, built with enough resiliency to offer the only zero-cost 5-year hardware warranty in the storage industry.  Not only is it super reliable, but it’s ridiculously fast and predictable.  When you can support 600+ Virtual Desktop (Performance VDI) “bootstorm” instances at a whopping 20 IOPS per boot-up in 3U of space, I would classify that as wicked fast!!!

 

In those 2 years we have not rested on our laurels.  Steve Sicola’s team, headed up by our VP of Technology David “Gus” Gustavsson, has really outdone itself with our latest Emprise product launch.  Not only did we move our entire user interface from “Web Services” to a RESTful API (ISE Manager – I’ll blog about this later – and our iPhone/iPad app), the team also released our new 2.5” DataPac: 20 drives per DataPac, 40 drives in 3U of space, for about 19.2TB of capacity and a TON of performance.  The team also released our ISE Analyzer (an advanced reporting solution built on our CorteX RESTful API – www.CorteXdeveloper.com – I’ll blog about that soon) and the next release of our Emprise product family, the Emprise 9000.  I swear this team doesn’t sleep!!!

 

So, the Emprise 9000 is a pretty unique solution in the market.  Today, when you think “scale-out” architecture the first thing you might think about is NAS – hopefully our ISE-NAS!!  We hope moving forward you will also think of our Emprise 9000.  The Emprise 9000’s ability to scale to 12 controllers puts it way above the 8 controllers the 3PAR solution scales to, and above the two controllers the rest of the storage world offers (EMC Clariion, Compellent, HP EVA, IBM XIV, etc.).  When married with our Intelligent Storage Element (ISE), it truly gives our customers the most robust, scalable solution in the storage market today.

 

Let’s be clear, the Emprise 9000 is not just a controller update.  It’s a combination of better, faster controllers, a RESTful API and our ISE technology, combined to solve performance-starved application issues like Virtual Desktops, Exchange, OLTP, data warehouse and virtual servers, as well as various other types of applications found in datacenters today.  The ability to give predictable performance whether the solution is 10% utilized or 97% utilized is a very unique feature.  Did I mention it comes with our zero-cost hardware maintenance?  24x7x365!!!

So for those keeping a tally at home, and for those competitors that want a little more information on what the Emprise 9000 can do, here is a quick list (this is not all of the features):

  • Each controller has dual quad-core Nehalem CPUs!!
  • Scale-out to 12 Controller pairs
  • 8Gb Fibre Channel ports
    • N-Port Virtualization (NPIV)
  • 1Gb or 10Gb iSCSI ports (10Gb available later this quarter)
    • You can run both FC and iSCSI in the same solution.
  • Scalable from 1 to 96(ea) ISEs of any size
    • Max capacity would be 1.8PB with 96(ea) 19.2TB ISEs
  • Support for greater than 2TB LUNs
    • LUNs up to 256TB in size
  • Thin Provisioned Volumes
  • Snapshots
    • Read-only snapshots
    • Writeable snapshots as well – think “smart-clone” technology for VDI
  • Heterogeneous Migration
    • You want to migrate off that EMC, HP, 3PAR, HDS, etc.? We can do it natively in our storage controllers.
  • Sync/Async native IP or FC replication

 

So, as you can see, it’s a pretty impressive list!!  And as with all new products, we will be adding new features pretty quickly, so stay tuned to announcements from us around the 9000.  BUT there’s more!!  I just can’t really go into it today 🙂  Stick around a couple of months for some even cooler stuff Gus’s team will be rolling out.  I just got back from a week in Vegas getting fed from a firehose about all the stuff we will be rolling out by the end of the year.  WOW!!  Impressive to say the least!!! 🙂

@StorageTexan <-Follow me on Twitter

How to design a Scale-Out NAS Architecture

Scale-out architecture and why it is important when architecting a storage solution.

I had an interesting discussion with an architectural firm the other day.  Most of the discussion was around scaling for the future.  We talked about the linear scalability of the ISE technology, and he pointed out that while that made a ton of sense for his block-access requirements, he was a little concerned about unstructured data, as well as some plans to use NFS for some of his server and desktop virtualization needs.  The last thing he wanted to worry about was changing his architecture in 12 to 24 months due to growth or technology changes.  So we started architecting a solution utilizing our new “scale-out” ISE-NAS solution.

You’ve probably heard a lot about scale-out type architectures.  3PAR sort of led the way with their ability to scale out their storage controllers (at least to eight) against their fixed, backplane-attached back-end disk drives, and it offers up a pretty unique solution (at least in a block storage architecture).  3PAR’s problem is they don’t really have an answer for the same scalability around unstructured data (NAS).  Don’t get me wrong, they list 5 NAS companies on their website – 1 is out of business and the other 4 have either been acquired by their competitors or are straight-up competitors.  This scale-out architecture seems to have caught on in the emerging NAS gateway devices like Symantec FileStore and Isilon.  Clearly FileStore and Isilon take very different approaches to scale-out architecture.  More below.

 

So first things first, let’s describe what a “scale-out” architecture means, at least to me.  When architecting solutions, it’s always important to put a solution together that can grow with the business.  In other words, they know what they need today, and they have an idea what they might need in 12 months, but 24 to 48 months is a complete crap shoot.  They could be 5X the size, or just 2X the size, but the architecture needs to be in place to support either direction.  What is sometimes not discussed is what happens when you run out of front-side processing power, back-end IOPS or usable capacity.  Most storage solutions give you 1 to 2(ea) clustered controllers and a fixed number of disk drives they can scale to, dependent on the specific controller you purchase.  Most front-end NAS solutions only scale to 2 nodes as well.  If you need more processing power, more back-end IOPS or more capacity, you buy a second storage solution, or you spend money to upgrade storage controllers that are not even remotely close to being amortized off the CFO’s books.

If you look at the drawing above, you can clearly see what a scale-out architecture should look like.  You need more front-side processing?  No problem.  You need more back-end IOPS or capacity?  No problem.  They scale independently of each other.  It’s no longer the case that “you love your first <insert storage/NAS solution of choice> and you hate your third, fourth, etc.”  Isilon is probably a great example of that.  They tout their “scale-out” architecture, but it clearly has some caveats.  For example, if you need more processing power, buy another Isilon; you need more capacity, buy another Isilon; you need more back-end IOPS… well, you get the idea 🙂  It’s not a very efficient “scale-out” architecture.  It’s closer to scale-up!!

Let’s also not lose sight of the fact that this is a solution that will need to be in place for about 4 to 5 years, or the amount of time over which your company will amortize it.  The last thing you want to worry about is a controller upgrade, or a net-new purchase because you didn’t size correctly or you under/over-guessed your growth, or even worse, years 4 and 5 hardware maintenance.  This is especially true if the vendor end-of-lifed their product before it was written off the books!!!  Cha-CHING.

 

So this company I was working with fluctuates in headcount depending on what jobs they are working on.  It could go from 50 people to 500 people at a moment’s notice, and while they would LOVE to size for 500, most of the time they are around 50 to 100.  So as I mentioned above, we started architecting a solution that incorporated our ISE-NAS solution, based on Symantec’s FileStore product, which when coupled with our Emprise 5000 (ISE) gives them the perfect scale-out solution.  They can start with 2 nodes and grow to 16 by simply adding NAS engines (x86) to the front end.  If they need more capacity or back-end IOPS, we can scale either tier independently of the rest of the solution (see the sketch below).  Coupled with our predictable performance, we gave them the ultimate ability to size for today and know exactly what they can scale to in the future.
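To make the “scale each tier independently” point concrete, here is a rough back-of-the-envelope sketch in Python.  The users-per-node and IOPS-per-ISE figures are placeholders I made up for illustration; only the 19.2TB-per-ISE number comes from elsewhere in this archive, so treat the output as the shape of the math, not vendor sizing guidance.

```python
import math

# Illustrative sizing constants -- placeholders, not vendor sizing guidance
USERS_PER_NAS_NODE = 250   # concurrent users one FileStore node serves (assumed)
TB_PER_ISE = 19.2          # usable TB per Emprise 5000 ISE (figure from this archive)
IOPS_PER_ISE = 7000        # back-end IOPS per ISE (assumed placeholder)

def size_scale_out(concurrent_users, capacity_tb, required_iops):
    """Size the front end (NAS nodes) and back end (ISEs) independently."""
    nas_nodes = max(2, math.ceil(concurrent_users / USERS_PER_NAS_NODE))  # 2-node minimum cluster
    ise_count = max(math.ceil(capacity_tb / TB_PER_ISE),
                    math.ceil(required_iops / IOPS_PER_ISE))
    return nas_nodes, ise_count

# Quiet period vs. a big job: only the tier that is actually short grows
print(size_scale_out(50,   30,  5000))   # (2, 2) -- baseline
print(size_scale_out(2000, 30,  5000))   # (8, 2) -- add NAS engines only
print(size_scale_out(2000, 150, 5000))   # (8, 8) -- now add ISEs for capacity
```

The point of the sketch is simply that the front-end and back-end answers are computed separately, so growth in one dimension never forces a forklift of the other.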

In the world of “unified storage,” cloud computing and 3-to-5-year project plans, it’s important to consider architecture when designing a solution and to plan for the future.  Scale-out architecture just makes a lot of sense.  BUT – do your homework.  Just because they say “scale-out” doesn’t mean the solutions are the same.  Dual-clustered controllers – or even eight-way – will eventually become the bottleneck, and the last thing you want is to have to do a wholesale swap-out/upgrade of your controller nodes to remove the bottleneck or, worse, have to buy a second (or third) storage solution to manage!!

@StorageTexan

Stalled Virtualization Projects – How Xiotech can help you un-stick these deployments


Xiotech is in a huge partner recruitment phase this year.  Things have been going just fantastic!  However, one problem we are running into is trying to get some of these larger partners to give us the time of day.  Shocking, I know – who wouldn’t want to carve out 60 minutes of their time to talk to “Ziotech” (I didn’t misspell that – it happens to us ALL THE TIME)?  Once we get our foot in the door, it’s usually 20 minutes of them explaining that they carry EMC, NetApp, Pillar, HDS, Compellent, etc.  They always explain that they just can’t take on yet another storage vendor.  What’s really interesting is that we typically tell them we don’t want to replace any of their current storage offerings.  This usually leads to a skeptical look 🙂  I usually tell them that we have a very unique ability to “un-stick” Virtual Desktop opportunities.  Let me explain a little further.

It never fails – VDI projects seem to get stalled, or simply get stuck in some rut that the prospect or partner can’t get out of.  Now, a stuck project can come in many shapes and sizes.  Sometimes it’s time and effort, sometimes it’s other projects in the way, but the vast majority of the time it’s cost/ROI/TCO-type issues – not just from a justification point of view, but most of the time from the upfront CAPEX view.  This has been especially true with 1000+ seat solutions.  Like I said, I just keep hearing more and more about these situations from our partners.  What normally follows is, “well, the project is on hold due to funding issues.”  So how can we differentiate ourselves in this kind of opportunity?  Funny you should ask!!

I typically like to describe our ISE solution as one that has a VERY unique ability to do 3 things better than the rest.

#1 – We give you true cost predictability over the life of the project. 

Let’s be honest, if you are about to deploy a 5000+ desktop VDI solution, you are probably going to do this project over a 3-year time frame, right?  Even if it’s only 500, why is it that as we look into these solutions further we only see 3 years of maintenance on the most expensive CAPEX item, which is storage?  By the time you get all 5000+ systems up and running, it’ll be time for year 4 and 5 maintenance on your storage foundation.  If this isn’t your first storage rodeo, then you know that years 4 and 5 can be the most painful in terms of cost.  Not to mention, what’s really cool about our solution is the “Lego” style approach to design.  We can tell you what the cost is for each ISE you want to buy; since they are 3U “blades,” you simply add the number you need to meet whatever metric you have in place and know the cost of each one.  As you can see, we do “cost predictability” pretty well.

#2 – We give you performance predictability. 

With our 3U Emprise 5000 we have the very unique ability to predict performance characteristics.  This is very difficult with legacy storage vendors: their array’s IOPS pool can swing plus or minus 80% depending on how full, from a capacity point of view, the solution is.  We deliver 348 SPC-1 IOPS per SPINDLE at 97% capacity utilization.  Keep in mind most engineers for legacy storage arrays plan on 120 to 150 IOPS per spindle.  Based on that alone, we can deliver the same performance with half the spindles!!
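Here is what that spindle math looks like as a quick back-of-the-envelope sketch.  The per-spindle figures are the ones quoted above; the 12,000 IOPS target is just a number I picked for illustration – plug in your own workload.

```python
import math

ISE_IOPS_PER_SPINDLE = 348     # SPC-1 IOPS per spindle at 97% capacity used (from this post)
LEGACY_IOPS_PER_SPINDLE = 150  # common planning figure for legacy arrays (post cites 120-150)

def spindles_needed(target_iops, iops_per_spindle):
    """Round up to the whole number of spindles required for a target IOPS load."""
    return math.ceil(target_iops / iops_per_spindle)

target = 12000  # hypothetical workload requirement
print(spindles_needed(target, ISE_IOPS_PER_SPINDLE))     # 35 spindles
print(spindles_needed(target, LEGACY_IOPS_PER_SPINDLE))  # 80 spindles -- well over 2x as many
```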

#3 – We can give you capacity predictability. 

Because of the linearity of our solution, when you purchase the Emprise 5000 we can tell you exactly how much usable, post-RAID capacity you will have available.  Best-practice usable capacity for our solution is 96% full – that’s where we do all of our performance testing.  Compared with the industry average of anywhere from 60% to 80%, your capacity “mileage” will vary!!

So why should this be important to solution providers and customers?  Back to my VDI comments.  If you are in the process of evaluating, or even moving down the path to deploy, VDI, how important is it for you to fully understand your storage costs when trying to design this out?  If I could tell you that you can support 2000 VDI instances in 3U of space, that the 3U of space can hold 19TB of capacity, and that the solution costs $50,000 (I’m pulling this number out of the…..well, air), that could really be a pivotal point in getting your project off the ground, don’t you think?  Like I said, no one deploys a 5000-seat solution all at once; you do this over a number of years.  With our storage blades, you can do just that.  You simply purchase one ISE at a time.  With its predictable cost, capacity and, most importantly, predictable performance, you have the luxury of growing your deployment over time without having to worry about a huge upfront CAPEX hit.  Not to mention, a 5-year hardware warranty better aligns with the finance side of the house and their typical 5-year amortization process.  No hidden year 4 and 5 maintenance costs!!
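To show how that “one ISE at a time” purchasing model plays out, here is a tiny sketch using the admittedly made-up numbers from the example above ($50,000 and 2000 desktops per 3U ISE).  It just phases the buys across a multi-year rollout; swap in your own figures.

```python
import math

# Numbers straight from the (admittedly made-up) example above
COST_PER_ISE = 50000       # dollars per 3U ISE
DESKTOPS_PER_ISE = 2000    # VDI instances per ISE in this example
TB_PER_ISE = 19            # capacity per ISE in this example

def phased_rollout(desktops_per_year):
    """Buy ISEs only as each year's desktop count actually requires them."""
    deployed = 0
    total = 0
    for year, new_desktops in enumerate(desktops_per_year, start=1):
        total += new_desktops
        needed = math.ceil(total / DESKTOPS_PER_ISE)
        bought = needed - deployed
        deployed = needed
        print(f"Year {year}: {total} desktops -> buy {bought} ISE(s), "
              f"spend ${bought * COST_PER_ISE:,}, {deployed * TB_PER_ISE}TB deployed")

# A 5000-seat project rolled out over three years instead of one big CAPEX hit
phased_rollout([1500, 2000, 1500])
```

At those example numbers the storage works out to $25 per desktop, and each CAPEX hit lands in the year those seats actually go live.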

So, if you are looking at a VDI project, or you’ve looked at one in the past and just couldn’t justify it, give us a call.  Maybe we can help lower your entry costs and get the project unstuck!!

Thanks,

@StorageTexan

VMware Virtual View

   

Xiotech Virtual View for VMware, Hyper-V and Citrix.

If you are a VMware admin, or a hypervisor admin generally, Xiotech’s “Virtual View” is the final piece to the very large server virtualization puzzle you’ve been working on.  In my role, I talk to a lot of server virtualization admins, and their biggest heartburn is adding capacity, or a LUN, to an existing server cluster.  With Xiotech’s Virtual View it’s as easy as 1, 2, 3.  Virtual View utilizes CorteX (our RESTful API) to communicate, in the case of VMware, with the Virtual Center appliance to provision the storage to the various servers in the cluster.  From a high level, here is how you would do it today.

I like to refer to the picture below as the “Rinse and Repeat” part of the process – particularly the part in the middle that shows going to each node of the server cluster to do various admin tasks.

VMware Rinse and Repeat process

  

With Virtual View, the steps would look more like the following.  Notice it’s “wizard” driven, with a lot of the steps processed for you, but it also gives you an incredible amount of “knob turning” if you want.

Virtual View Wizard Steps
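For the API-minded, here is a minimal sketch of what a wizard step like that boils down to under the covers: one REST call to provision a volume, then one call per host to present it.  To be clear, the endpoint paths and JSON fields below are hypothetical placeholders I made up for illustration, not the published CorteX or Virtual Center API.

```python
# Hypothetical sketch only: endpoints and payloads are invented placeholders.
import requests

CORTEX = "https://ise-manager.example.com/api"   # hypothetical CorteX endpoint
s = requests.Session()
s.auth = ("admin", "password")                   # placeholder credentials

# 1. Carve a new volume out of the ISE
vol = s.post(f"{CORTEX}/volumes",
             json={"name": "vmware-datastore-02", "size_gb": 2048}).json()

# 2. Present it to every host in the cluster in one pass -- the "rinse and
#    repeat" loop the admin no longer has to do by hand
for host in ("esx01.example.com", "esx02.example.com", "esx03.example.com"):
    s.post(f"{CORTEX}/volumes/{vol['id']}/presentations", json={"host": host})

# 3. Virtual View would then drive Virtual Center to rescan and build the datastore
```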

  

And for those that need to see it to believe it, below is a quick YouTube video demonstration.

If you run a VMware-specific cluster of 3 servers or more (for H.A. purposes, maybe), then you should be very interested in Virtual View!!!

I’ll be adding some future Virtual View-specific blog posts over the next few weeks, so make sure you subscribe to my blog on the right-hand side of this window!!

 If you have any questions, feel free to leave them in the comments section below.    

Thanks,    

@StorageTexan    

PS – here is a quick “commercial for Virtual View”    

10,000 Exchange Users in 3U of space

 

This is a pretty cool video done by the Technical Marketing team at Xiotech.  10,000 Exchange users in 3U of space!!   No fancy/expensive SSD needed for this !!

In summary, in 3U of space:

  • 250 VDI instances, or
  • 10,000 Exchange users, or
  • 750 DVD-quality videos, or
  • 25,000 MP3s

Now that’s WICKED FAST!!!

As they say in Minny – “that’s not too shabby!!”

By the way, if by chance 10,000 users is just not enough for you, don’t worry – add a second ISE and DOUBLE IT TO 20,000.  Need 30,000?  Then add a THIRD ISE.  100,000 users in 10 ISEs, or 30U of rack space.  Sniff sniff….I love it!!!!!!!!!!!!

By the way – Check out what others are doing:

Pillar Data = 8,500 Exchange users with 24GB of cache!!!  I should point out that our ISE comes with 1GB.  It’s not the size that counts, it’s HOW YOU USE IT!! 🙂

  • One Pillar Axiom 600 with one FC Slammer
  • 24GB of cache <—- WOW !!!!!
  • 4 SSD Bricks for databases, each with:
    • Two dedicated RAID controllers
    • 13 50GB SSDs <— I’m going to guess that these aren’t very cheap.
  • 2 SATA Bricks for logs, each with:
    • 13 500GB 7,200 RPM SATA disk drives

Hitachi AMS 2300 = 10,800 users – 400+ pages!!! <– I have to say it again, WOW, 400+ pages on this bad boy!!!

  • 240 300GB 15K RPM SAS disks <— Ahh ya – TONZ of spindles!!!  We had 20(ea) 3.5″ drives to do our testing.
  • 16GB of cache and
  • 8(ea) 4Gb/s Fibre Channel paths were used for these tests.
  • Testing used 8 Sun Fire 4600 M2 servers with 32GB of RAM, four dual-core AMD Opteron CPUs, 8(ea) Emulex 4Gbit/s Fibre Channel adapters and Windows Server 2003 R2 Enterprise x64 with Service Pack 2.

Cool things Commvault is doing with REST


I’ve always been a HUGE fan of Commvault.  They just rock.  When I was a Systems Engineer back in Austin in the early 2000s, I don’t think there was an account that I didn’t take Commvault into to try and solve a customer’s backup issues.  AND WE DIDN’T EVEN SELL COMMVAULT!!!  They had cool technology that was clearly leaps and bounds above everyone else, not to mention some really cool people who worked for them as well (shout out to Jeanna, Joelle, RobK and of course Mr. Cowgil).

Fast forward a few years, and the release of Simpana, as well as the addition of native deduplication, clearly gave Data Domain and various other deduplication solutions a run for their money.  You would think that would be enough for one company!!  I was pretty excited about their recent press release around adding cloud data storage as a tier option in Simpana.  Dave Raffo over at SearchDataBackup.com did a really nice job of summarizing the announcement.  It’s a clear sign that Commvault is still very much an engineering-driven organization.  Which is just AWESOME!!

I think the biggest nugget I pulled out of the press release is Commvault’s native REST integration.  The more I hear about REST’s potential, the more I get excited about some of the endless possibilities it offers.  In this case, it allowed Commvault to easily extend their backup architecture to include 3rd-party cloud solutions like Amazon S3, EMC Atmos and a slew of others.  They didn’t need to build a custom integration for each vendor; they just relied on each provider’s open REST API to do that for them.
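That pattern is easy to picture in a few lines of Python.  This is purely an illustration of the idea – the endpoints below are generic placeholders, not Commvault’s actual integration code or any provider’s real API – but it shows why “add another cloud target” becomes a configuration change rather than a new development effort.

```python
# Illustration of the pattern only: endpoints below are generic placeholders.
import requests

def put_backup_object(base_url, container, name, data, auth):
    """PUT one backup object to an object store exposing a simple REST interface."""
    resp = requests.put(f"{base_url}/{container}/{name}", data=data, auth=auth)
    resp.raise_for_status()
    return resp.status_code

with open("exchange_full_001.bak", "rb") as fh:
    payload = fh.read()

# Switching providers is a new base URL and credentials, not a new vendor API layer
put_backup_object("https://objectstore-a.example.com", "acme-backups",
                  "exchange/full-001", payload, auth=("KEY", "SECRET"))
put_backup_object("https://objectstore-b.example.com/rest", "acme-backups",
                  "exchange/full-001", payload, auth=("uid", "shared-secret"))
```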

If you haven’t had a chance, you should check out Brian Reagan’s blog post that mentions something we are calling CorteX.  Essentially, CorteX is our RESTful ecosystem through which developers can gain access to our Emprise solutions.  This is the next evolutionary step in our ongoing open-architecture capabilities.  As some of you are aware, we’ve been touting our WebServices Software Development Kit for some time.  It’s allowed us to do things like VMware Virtual View, which ties directly into Virtual Center to give VMware admins unprecedented abilities, as well as Microsoft developers creating a provisioning application called SANMAN that integrates some of their processes directly with our storage.  A RESTful API will take this to a greater level.  Just like Commvault was able to tie directly into public cloud storage providers, CorteX will give developers unprecedented abilities to do really cool things.

I’ve probably said more than I should 🙂 so I’ll leave it with “more to come on CorteX as we get ready to release.”  I’ve probably stolen enough of Brian’s thunder to get an e-mail from him!!  It’s always good to hear from a Sr. VP, right!!

So, keep an eye on Xiotech over the next couple of months, and start paying attention to vendors that support RESTful APIs!!!

Thanks,

@StorageTexan

How to build resilient, scalable storage clouds and turn your IT department into a profit center


If you’ve been living under a rock for the last year, the topic of cloud-based computing might be new to you.  Don’t worry about it at this point; there are CLEARLY more questions than answers on the subject.  I get asked at just about every meeting what my interpretation of “cloud” is.  I normally describe it as an elastic, utility-based environment that, when properly architected, can grow and shrink as resources are provisioned and de-provisioned.  It’s a move away from “silo-based” infrastructure and into a more flexible, scalable, utility-based solution.  From a 30,000-foot view, I think that’s probably the best way to describe it.  Then the conversation usually rolls to “so, how does your solution compare to others” relative to cloud.  Here is what I normally talk about.

First and foremost, we have sold solutions that are constructed just like everyone else’s.  Our Magnitude 3D 4000 product line is built with pretty much the exact same pieces and parts as Compellent, NetApp FAS, EMC Clariion, HP EVA, etc.: Intel-based controller motherboards, QLogic HBAs, Xyratex or other SBOD drive bays connected via arbitrated loops.  Like I’ve said in prior posts, just line each of these up, remove the “branding,” and you wouldn’t be able to tell the difference.  They all use the same commodity parts.  Why is this important?  Because none of those solutions would work well in a “cloud-based” architecture.  Why?  Because of all the reasons I’ve pointed out in my “Performance Starved Applications” post, as well as my “Cost per TB” post: THEY DON’T SCALE WELL, and they have horrible utilization rates.  If you really want to build a storage cloud, you have to zero in on the most important aspects of it, or what I like to refer to as “The Fundamentals.”

First, you MUST start with a SOLID foundation.  That foundation must not require a lot of “care and feeding,” and it must be self-healing.  With traditional storage arrays, you could end up with 100, 200 or even 1000 spinning disks.  Do you really want to spend the time (or the HUGE maintenance dollars) swapping out and dealing with bad disks?  Look, don’t get me wrong, I get more than a few eye rolls when I bring this up.  At the end of the day, if you’ve never had to restore data because of a failed drive, or any other issue related to failed disks, then this is probably not something high on your list of worries.  For that reason, I’ll simply say: why not go with a solution that guarantees you won’t have to touch the disks for 5 years and backs it up with FREE HARDWARE MAINTENANCE (24/7/365, 4-hour response)!!  Talk about putting your money where your mouth is.  From a financial point of view, who cares if you’ve never had to mess with a failed drive – it’s freaking FREE HARDWARE MAINTENANCE for 5 years!!

Secondly, it MUST have industry-leading performance.  Not just “bench-marketing” type performance; I mean real audited, independent, third-party validated performance numbers.  The benchmarks from the Storage Performance Council are a great example of third-party validation.  You can’t just slap SSD into an array and say “I have the fastest thing in the world.”  Here is a great example: if you are looking at designing a Virtual Desktop Infrastructure, then performance should be at the top of your design criteria (boot storms).  Go check out my blog post on the subject, “VDI and why performance matters.”

Finally, you need the glue that holds all of this together from a management and reporting point of view.  WebServices is that glue.  It’s the ubiquitous “open standard” tooling on which many, many application solutions have been built.  We are the only company that builds its storage management and reporting on WebServices, and we have a complete WSDL to prove it.  No other company epitomizes the value of WebServices better than Microsoft.  Just Google “SANMAN XIOTECH” and you’ll see that the folks out in Redmond have developed their own user interface to our solution (our WSDL) to enable automated storage provisioning.  HOW AWESOME IS THAT!!  Not to mention, WebServices also gives you the ability to do things like develop “chargeback” options, which turn the information technology department into a profit center.  We have a GREAT customer reference in Florida that has done this very thing.  They’ve turned their IT department into a profit center and have used those funds to refresh just about everything in their datacenter.
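As a toy example of the chargeback idea: once you can pull per-department provisioned capacity programmatically (the kind of data a WebServices/REST interface exposes), billing it back is a few lines of arithmetic.  The departments and the $/GB-per-month rate below are made up purely for illustration.

```python
# Toy chargeback report -- rates and departments are hypothetical examples
RATE_PER_GB_MONTH = 0.18   # hypothetical internal rate, dollars per GB per month

provisioned_gb = {          # hypothetical allocations pulled from the array's API
    "Finance":      4500,
    "Engineering": 12000,
    "HR":           1200,
}

for dept, gb in sorted(provisioned_gb.items()):
    print(f"{dept:12s} {gb:7,d} GB  ->  ${gb * RATE_PER_GB_MONTH:,.2f}/month")

total_gb = sum(provisioned_gb.values())
print(f"{'Total':12s} {total_gb:7,d} GB  ->  ${total_gb * RATE_PER_GB_MONTH:,.2f}/month")
```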

So those are the fundamentals.  In my opinion, those are the top 3 things that you need to address before you move any further into the design phase.  Once your foundation is set, you can zero in on some of the value-added attributes you would like to offer as a service in the cloud – things like CDP, CAS, deduplication, replication, NAS, etc.

@StorageTexan <– Follow Me on Twitter !!!

VMware Virtual Desktop Infrastructure (VDI) and Why Performance matters

Is storage performance predictability important when building VMware Virtual Desktop (VDI) storage clouds?  This also applies to Citrix and Microsoft Hyper-V virtual desktop systems.

Here is yet another great example of why I just love my job.  Last week at our Xiotech national sales meeting we heard from a net-new educational customer out in the western US.  They recently piloted a VDI project with great success.  One of the biggest hurdles they ran into – and I would bet other storage cloud (or VDI-specific) providers do as well – is performance predictability.  This predictability is very important.  Too often we see customers focus on the capacity side of the house and forget that performance can be extremely important (VDI boot storm, anyone?).  Rob Peglar wrote a great blog post called “Performance Still Matters” over at the Xiotech.com blog site.  When you are done reading this blog, head over and check it out 🙂

So, VDI cloud architects should make sure that the solution they design today will meet the requirements of the project over the next 12 months, 24 months and beyond.  To make matters worse, they need to consider what happens if the cloud is 20% utilized or if/when it becomes wildly successful and utilization is closer to 90% to 95%.  The last thing you want to do is have to add more spindles ($$$) or turn to expensive SSD ($$$$$$$$$) to solve an issue that should have never happened in the first place.

So, let’s assume you already read my riveting, game-changing piece on “Performance Starved Applications” (PSAs).  VDI is ONE OF THOSE PSAs!!!  Why is this important?  If you are looking at traditional storage arrays (Clariion, EVA, Compellent Storage Center, Xiotech Mag3D, NetApp FAS), it’s important to know that once you get to about 75% utilization, performance drops like my bank account did last week while I was in Vegas.  Like a freaking hammer!!  That’s just HORRIBLE (the utilization and my bank account).  Again, you might ask why that’s important.  Well, I have three kids and a wife who went back to college, so funds are not where they should be…..oh wait (ADD moment), I’m sure you meant horrible about performance dropping and not my bank account.  So, what does performance predictability really mean?  How important would it be to know that every time you added an intelligent storage element (Xiotech Emprise 5000 – 3U) with certain DataPacs you could support 225 to 250 simultaneous VDI instances (just as an example), including boot storms?  This would give you an incredible ability to zero in on the costs associated with the storage part of your VDI deployment.  This is especially true when moving from a pilot program into a full production roll-out.  For instance, if you pilot 250 VDI instances but you know that you will eventually need support for 1000, you can start off with one Emprise 5000 and grow it to a total of four elements.  Down the road, if you grow beyond 1000, you fully understand the storage costs associated with that growth, because it is PREDICTABLE.
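A quick sketch of that pilot-to-production math, using the 250-desktops-per-element example figure from above (your real number will depend on the DataPacs and the workload):

```python
import math

DESKTOPS_PER_ISE = 250   # the post's example figure, including boot storms

def ises_needed(desktop_count):
    """Whole Emprise 5000 elements required for a given desktop count."""
    return math.ceil(desktop_count / DESKTOPS_PER_ISE)

for desktops in (250, 500, 1000, 2000):   # pilot, then each growth step
    n = ises_needed(desktops)
    print(f"{desktops:5d} desktops -> {n} Emprise 5000 element(s), {n * 3}U of rack space")
```

At 1000 desktops the sketch lands on the same four elements mentioned above, which is the whole point: the growth curve is knowable before you sign the pilot paperwork.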

What could this mean for your environment?  It means that if you are looking at traditional arrays, be prepared to pay for capacity that you will probably never use without a severe hit to performance.  What could that mean for the average end user?  It means their desktop boots slowly, their applications slow down, and your helpdesk phone rings off the hook!!  So performance predictability is crucial when designing scalable VDI solutions, and cost management (financial performance predictability) is every bit as critical.

So if you are looking at VDI, or even building a VDI storage cloud, then performance predictability is a great foundation on which to build those solutions.  The best storage solution to build your application on is the Xiotech Emprise 5000.

Thanks,

@StorageTexan