
VMware Virtual View

   

Xiotech Virtual View for VMware, Hyper-V and Citrix.

If you are a VMware admin, or a hypervisor admin generally speaking, Xiotech's "Virtual View" is the final piece of the very large server virtualization puzzle you've been working on. In my role, I talk to a lot of server virtualization admins, and their biggest heartburn is adding capacity, or a LUN, to an existing server cluster. With Xiotech's Virtual View it's as easy as 1, 2, 3. Virtual View uses CorteX (our RESTful API) to communicate, in the case of VMware, with the Virtual Center appliance and provision storage to the various servers in the cluster. From a high level, here is how you would do it today.

I like to refer to the picture below as the "rinse and repeat" part of the process, particularly the section in the middle that describes going to each node of the server cluster to perform various admin tasks.

VMware Rinse and Repeat process

  

With Virtual View the steps look more like the following. Notice it's "wizard" driven, with a lot of the steps processed for you, but it also gives you an incredible amount of "knob turning" if you want it.

Virtual View Wizard Steps
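Under the covers, a wizard like this is really just orchestrating a series of REST calls against the array and the hypervisor manager. To give you a flavor of that pattern, here is a rough sketch; to be clear, every endpoint path and field name below is invented for illustration, not the actual CorteX or Virtual Center API.

```python
import requests

ARRAY = "https://array.example.local/api"      # hypothetical storage array endpoint
VCENTER = "https://vcenter.example.local/api"  # hypothetical vCenter-facing endpoint
HEADERS = {"Authorization": "Bearer <token>"}  # assume some auth scheme

# 1. Create the volume (one REST call instead of array-side GUI work)
vol = requests.post(f"{ARRAY}/volumes",
                    json={"name": "vmfs_datastore_01", "size_gb": 2048},
                    headers=HEADERS).json()

# 2. Present it to every host in the cluster (no per-host masking by hand)
for host in ["esx01", "esx02", "esx03"]:
    requests.post(f"{ARRAY}/volumes/{vol['id']}/presentations",
                  json={"host": host}, headers=HEADERS)

# 3. Ask the hypervisor manager to rescan and pick up the new datastore
requests.post(f"{VCENTER}/rescan", json={"cluster": "Production"}, headers=HEADERS)
```

The point isn't the specific calls; it's that the whole "rinse and repeat" loop collapses into a handful of scripted (or wizard-driven) requests.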

  

And for those that need to see it to believe it, below is a quick YouTube video Demonstration.     

If you run a VMware-specific cluster (for H.A. purposes, maybe) of 3 servers or more, then you should be very interested in Virtual View!!!

I'll be adding more Virtual View-specific blog posts over the next few weeks, so make sure you subscribe to my blog on the right-hand side of this window!!

 If you have any questions, feel free to leave them in the comments section below.    

Thanks,    

@StorageTexan    

PS – here is a quick "commercial" for Virtual View


The old pain in the DAS


Is it me, or are others in the field seeing more and more companies being pushed to look at DAS solutions for their application environments? It strikes me as pretty interesting. I'm sure it's mostly positioned around reducing the overall cost of the solution, but I also think it has a lot to do with the predictable performance you can expect when you don't have to fight other applications for IOPS. In fact, Devin Ganger did a great blog post on this very subject in regards to Exchange 2010, and it was a pretty cool read. I left a comment on his site that pretty much matches (some may call it plagiarizing myself 🙂 ) my discussion here, but I've had some time to expand on it a little. As such, here are my thoughts on my own site 🙂

Let's take a look back 10 years or so. DAS solutions at the time signified low cost, low-to-moderate performance, basic RAID-enabled reliability, administration overhead, downtime for any required modification, and an inability to scale. Pick any 5 of those 6 and I think most of us can agree. Not to mention the DAS solution was missing key features that the application vendors didn't provide either, like full block copies, replication, deduplication, etc. Back then we spent a lot of time educating people on the benefits of SAN over DAS. We touted the ability to put all your storage eggs in one highly reliable, networked, very redundant, easy-to-replicate-to-a-DR-site solution, which could be shared amongst many servers to gain utilization efficiency. We talked about cool features like "boot from SAN", full block snapshots, and replicating your data around the world, and the only real way to do any of that was via a storage array with those features.

Fast forward to today and the storage array controllers are not only doing RAID and cache protection (which is super important), they're also doing thin provisioning, CDP, replication, dedupe (in some cases), snapshots (full copy, COW or ROW), multi-tier scheduled migration, CIFS, NFS, FC, iSCSI, FCoE, etc., etc. It's getting to the point that performance predictability is pretty much going away, not to mention it takes a spreadsheet to understand the licensing that goes along with these features. Reliability of the code, and the mixing of different technologies (1Gb, 2Gb and 4Gb FC, FC drive bays, SAS connections, SATA connections, JBODs, SBODs, loops) as well as all the various "plumbing" connectivity options most arrays offer today, is not making them any more stable. 2TB drive rebuild times are a great example of adding even more for the controllers to handle. Not to mention, the fundamental building block of a storage array is supposed to be data protection. Rob Peglar over at the Xiotech blog did a really great job of describing "Controller Feature Creep". If you haven't read it, you should. Also, David Black over at "The Black Liszt" discussed "Mainframes and Storage Blades"; he's got a really cool "back to the future" discussion.

Today it appears that application and OS/hypervisor vendors have caught up with all the issues we positioned against just a few years ago. Exchange 2010 is a great example of this; vSphere is another one. Many application vendors now have native deduplication and compression built into their file systems, COW snapshots are a no-brainer, and replication can be done natively by the app, which gives some really unique DR capabilities. Not to mention, some applications support the ability to migrate data from Tier 1 to Tier 2 and Tier 3 based not only on a single "last touched" attribute, but also on file attributes like content (.PDF, .MP3), importance, duration, deletion policy and everything else, without caring about the backend storage or what brand it is. We are seeing major database vendors support controlling all aspects of the volumes on which logs, tablespaces, redo/undo, temporary space, etc. are held. Just carve up 10 TB, assign it to the application, and it will take care of thin provisioning and all sorts of other 'cool' features.
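To make that attribute-based tiering idea concrete, here is a toy sketch of the kind of policy an application or file system can apply entirely on its own, with no array intelligence involved. The tier names, mount points and rules are invented for illustration only.

```python
import time
from pathlib import Path

# Hypothetical tier destinations; in real life these would be different
# volumes, DAS shelves, or storage classes the application knows about.
TIERS = {"tier1": "/mnt/fast", "tier2": "/mnt/bulk", "tier3": "/mnt/archive"}

def pick_tier(path: Path) -> str:
    """Classify a file using attributes the application already has."""
    age_days = (time.time() - path.stat().st_mtime) / 86400
    if path.suffix.lower() in {".mp3", ".iso"}:   # bulky, rarely-changed content
        return "tier3"
    if age_days > 90:                             # untouched for a quarter
        return "tier2"
    return "tier1"                                # everything else stays hot

# Example: report (not move) where each file under /data would land
for f in Path("/data").rglob("*"):
    if f.is_file():
        print(f"{f} -> {pick_tier(f)} ({TIERS[pick_tier(f)]})")
```

A real application would obviously act on the classification rather than print it, but the decision logic lives with the data owner, not the array.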

At the end of the day, the "pain in the DAS" that we knew and loved to compete against is being replaced with "intelligent DAS" and application-aware storage capabilities. All this gives the end user a unique ability to make some pretty interesting choices. They can continue down the typical storage array controller route, or they can identify opportunities to leverage native abilities in the application and the "intelligent DAS" solutions on the market today to vastly lower their total cost of ownership. The question the end user needs to ask is 'What functionality is already included in the application/operating system I'm running?' vs. 'What do I need my storage system to provide because my application doesn't have this feature?' At the end of the day, it's a win-win for the consumer, as well as a really cool place to be in the industry. Like I've said in my "Cool things Commvault is doing with REST" post, when you couple intelligent DAS and application-aware storage with a RESTful, open-standards interface, it really starts to open up some cool things. 2010 is going to be an exciting year for storage. Commvault has already started this parade, so now it's all about "who's next".

 @StorageTexan <– Click on me to follow on Twitter. 

PS – I've added an e-mail subscription capability to my site (as well as RSS feeds). In the upper right corner of this site you will see a button to sign up. Each time I post a new blog entry, you will get an e-mail of it. You will need to confirm the subscription via the e-mail you receive.

Why running a hotel like you run your storage array could put you out of business.

<this post was updated on April 2, 2010>

Recently I wrote about why "cost per raw TB" isn't a very good metric for comparing storage arrays. In fact, my good friend Roger Kelley over at StorageWonk.com wrote a nice blog post specifically about comparing storage arrays "apples to apples". We don't say this as a means to simply ignore some of the features and functions that other vendors offer; it's just our helpful reminder that there is no "free storage lunch".

So let me take you on a different type of journey around "cost per raw TB" and "cost per usable TB" and apply it to something outside of technology. Hopefully this will make sense!!

Let's assume you are in the market for a 100-room hotel. You entertain all sorts of realtors who tell you why their hotel is better than the others. You've decided that you want to spend about $100,000 for a 100-room hotel, which averages out to about $1,000 per room. So, at a high level, all the hotels offer the same cost per room. Let's call this "cost per raw occupancy". It's the easy way to figure out costs, and it looks fair.

You narrow down your list of hotels to three choices. We'll call them hotel C, hotel N and hotel X. Hotel C and hotel N have the same architecture and the same basic building design; essentially they look the same other than the names and colors of the buildings. Hotel X is unique in that it's brand new and created by a group that has been building hotel rooms for 30+ years, with each hotel getting better and better. They are so confident in their building that it comes with 5 years of free building maintenance.

So, you ask the vendors to give you their "best practice, not-to-exceed hotel occupancy rate". Hotel C tells you they have some overhead associated with some of their special features, so their number is about 60 rooms that can be rented out at any given time. The reservation system will let you book an unlimited number of rooms, but once you get over 60, things just stop working well and guests complain. Hotel N says they can do about 70 rooms before they have issues. Hotel X says they have tested at 96-room occupancy without any issues at all.

So, while at a high level hotels C, N and X were all $1,000 a room, after further review hotel C is really about $1,667 a room, hotel N is about $1,429 a room and hotel X is about $1,042 a room. Big difference!! Let's assume each of these vendors could "right size" their hotel to meet your 100-room (usable) request, but the per-room cost stays the same. Hotel C would now cost you roughly $167,000, hotel N roughly $143,000 and hotel X roughly $104,000. So that, my friend, is what I like to call "cost per usable occupancy"!!

Another way to do this is to have hotels C and N right-size down to your budget number based on "cost per usable occupancy". If the $100,000 is what matters most and you understand that you will only get to rent out 60 or 70 rooms from the other hotels, then you could save money with hotel X by purchasing just 60 rooms of hotel X. That would bring hotel X's cost down to $60,000, a nice savings of $40,000!! The net-net is you get 60 usable rooms from any of the 3 hotels, but one offers you a HUGE savings.
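If you like seeing the arithmetic spelled out, here is the whole "cost per usable occupancy" calculation in a few lines of Python. The numbers are the ones from this example; swap in your own array's raw capacity, price and best-practice utilization rate and the same math applies to terabytes.

```python
# Cost per usable unit: the same math works for hotel rooms or terabytes.
raw_units  = 100        # rooms purchased (or raw TB)
unit_price = 1_000      # dollars per raw unit
best_practice = {"Hotel C": 0.60, "Hotel N": 0.70, "Hotel X": 0.96}

for name, utilization in best_practice.items():
    usable = raw_units * utilization
    cost_per_usable = (raw_units * unit_price) / usable
    # To actually get 100 usable units, you must over-buy by 1/utilization
    cost_for_100_usable = 100 / utilization * unit_price
    print(f"{name}: ${cost_per_usable:,.0f} per usable unit, "
          f"${cost_for_100_usable:,.0f} for 100 usable units")
```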

At the end of the day, as the owner of that hotel you want as many rooms rented out as possible. The last thing you want to see is your 100-room hotel only capable of 60% or 70% occupancy.

So, if you are in the market for a 100-room hotel, or a storage array, you might want to spend a little more time figuring out what its best-practice occupancy rate is!! It'll save you money and heartburn in the end.

I'll leave you with this – based on the array you have today, what do you think your occupancy rating would be for your 100-room hotel? Feel free to leave the vendor name out (or not) 🙂

@StorageTexan

Cool things Commvault is doing with REST


I've always been a HUGE fan of Commvault. They just rock. When I was a Systems Engineer back in Austin in the early 2000s, I don't think there was an account I didn't take Commvault into to try to solve a customer's backup issues. AND WE DIDN'T EVEN SELL COMMVAULT!!! They had such cool technology that was clearly leaps and bounds above everyone else's. Not to mention, they had some really cool people who worked for them as well (shout out to Jeanna, Joelle, RobK and of course Mr. Cowgil).

Fast forward a few years, and the release of Simpana as well as the addition of native deduplication clearly gave Data Domain and various other deduplication solutions a run for their money. You would think that would be enough for one company!! I was pretty excited about their recent press release around adding cloud data storage as a tier option in Simpana. Dave Raffo over at SearchDataBackup.com did a really nice job of summarizing the announcement. It's a clear sign that Commvault is still very much an engineering-driven organization. Which is just AWESOME!!

I think the biggest nugget I pulled out of the press release is Commvault's ability to integrate native REST capabilities. The more I hear about REST's potential, the more I get excited about some of the endless possibilities it offers. In this case, it allowed Commvault to easily extend their backup architecture to include 3rd-party cloud solutions like Amazon S3, EMC Atmos and a slew of others. They didn't need to build a custom integration from scratch for each vendor; they just relied on each provider's open, REST-based interface to do the heavy lifting for them.
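As a flavor of why REST makes that kind of integration so approachable, here's a bare-bones sketch of writing and reading an object over plain HTTP. It's illustrative only: the endpoint is a placeholder, and real providers like S3 or Atmos layer their own authentication and headers on top of this same PUT/GET pattern.

```python
import requests

# Placeholder endpoint -- real cloud object stores follow the same REST verbs
# but add provider-specific auth headers and namespaces on top.
BASE = "https://objects.example.com/backup-bucket"

# Store a backup chunk as an object with a simple HTTP PUT
with open("chunk0001.bin", "rb") as f:
    resp = requests.put(f"{BASE}/chunk0001.bin", data=f)
    resp.raise_for_status()

# Read it back later with an HTTP GET -- no vendor-specific SDK required
restored = requests.get(f"{BASE}/chunk0001.bin")
restored.raise_for_status()
open("restored_chunk0001.bin", "wb").write(restored.content)
```

Once your architecture speaks HTTP verbs like that, adding another cloud target is mostly a matter of swapping the endpoint and auth details, which is exactly the kind of leverage the press release is pointing at.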

If you haven't had a chance, you should check out Brian Reagan's blog post that mentions something we are calling CorteX. Essentially, CorteX is our RESTful ecosystem through which developers can gain access to our Emprise solutions. This is the next evolutionary step in our ongoing open-architecture capabilities. As some of you are aware, we've been touting our Web Services Software Development Kit for some time. It's allowed us to do things like VMware Virtual View, which ties directly into Virtual Center to give VMware admins unprecedented abilities, as well as Microsoft developers creating a provisioning application called SANMAN that integrates some of their processes directly with our storage. A RESTful API will take this to an even greater level. Just like Commvault was able to tie directly into public cloud storage providers, CorteX will give developers unprecedented abilities to do really cool things.

I've probably said more than I should 🙂 so I'll leave it with "more to come on CorteX as we get ready to release". I've probably stolen enough of Brian's thunder to get an e-mail from him!! It's always good to hear from a Sr. VP, right!!

So, keep an eye on Xiotech over the next couple of months and start paying attention to vendors that support RESTful APIs!!!

Thanks,

@StorageTexan

How to build resilient, scalable storage clouds and turn your IT department into a profit center


If you've been living under a rock for the last year, the topic of cloud-based computing might be new to you. Don't worry about it at this point; there are CLEARLY more questions than answers on the subject. I get asked at just about every meeting what my interpretation of "cloud" is. I normally describe it as an elastic, utility-based environment that, when properly architected, can grow and shrink as resources are provisioned and de-provisioned. It's a move away from "silo-based" infrastructure and into a more flexible, scalable, utility-based solution. From a 30,000-foot view, I think that's probably the best way to describe it. Then the conversation usually rolls to "so, how does your solution compare to others" relative to cloud. Here is what I normally talk about.

First and foremost, we have sold solutions that are constructed just like everyone else's. Our Magnitude 3D 4000 product line is built with pretty much the exact same pieces and parts as Compellent, NetApp FAS, EMC Clariion, HP EVA, etc.: Intel-based controller motherboards, QLogic HBAs, Xyratex or other SBOD drive bays connected via arbitrated loops. Like I've said in prior posts, just line each of these up, remove the "branding" and you wouldn't be able to tell the difference; they all use the same commodity parts. Why is this important? Because none of those solutions would work well in a "cloud"-based architecture. Why? Because of all the reasons I've pointed out in my "Performance Starved Applications" post, as well as my "Cost per TB" post: THEY DON'T SCALE WELL and they have horrible utilization rates. If you really want to build a storage cloud, you have to zero in on its most important aspects, or what I like to refer to as "the fundamentals".

First, you MUST start with a SOLID foundation. That foundation must not require a lot of "care and feeding", and it must be self-healing. With traditional storage arrays, you could end up with 100, 200 or even 1,000 spinning disks. Do you really want to spend the time (or the HUGE maintenance dollars) swapping out and dealing with bad disks? Look, don't get me wrong, I get more than a few eye rolls when I bring this up. At the end of the day, if you've never had to restore data because of a failed drive, or any other issue related to failed disks, then this is probably not high on your list of worries. For that reason, I'll simply say: why not go with a solution that guarantees you won't have to touch the disks for 5 years and backs it up with FREE HARDWARE MAINTENANCE (24/7/365, 4-hour response)!! Talk about putting your money where your mouth is. From a financial point of view, who cares if you've never had to mess with a failed drive; it's freaking FREE HARDWARE MAINTENANCE for 5 years!!

Secondly, it MUST have industry-leading performance. Not just "bench-marketing" type performance; I mean real audited, independent, third-party, validated performance numbers. The benchmarks from the Storage Performance Council are a great example of that kind of third-party validation. You can't just slap SSD into an array and say "I have the fastest thing in the world". Here is a great example: if you are looking at designing a Virtual Desktop Infrastructure, then performance should be at the top of your design criteria (boot storms). Go check out my blog post on the subject; it's called "VDI and why performance matters".

Finally, you need the glue that holds all of this together from a management and reporting point of view. Web Services is that glue. It's the ubiquitous "open standard" tooling on which many, many application solutions have been built. We are the only company that builds its storage management and reporting on Web Services, and we have a complete WSDL to prove it. No other company epitomizes the value of Web Services better than Microsoft. Just Google "SANMAN XIOTECH" and you'll see that the folks out in Redmond have developed their own user interface to our solution (via our WSDL) to enable automated storage provisioning. HOW AWESOME IS THAT!! Not to mention, Web Services also gives you the ability to do things like develop "chargeback" options, which turn the information technology department into a profit center. We have a GREAT customer reference in Florida that has done this very thing. They've turned their IT department into a profit center and have used those funds to refresh just about everything in their datacenter.
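Chargeback itself is simple arithmetic once your management layer can report per-department consumption. The departments, tiers and rates below are made up, but the shape of the calculation is the point: pull allocated capacity per tier, multiply by a monthly rate, and bill it back.

```python
# Toy chargeback model: allocated GB per department/tier times a monthly rate.
# All names and rates here are invented for illustration.
RATE_PER_GB = {"tier1": 0.90, "tier2": 0.45, "tier3": 0.15}   # $/GB per month

allocations = [
    {"dept": "Finance",     "tier": "tier1", "gb": 2_500},
    {"dept": "Engineering", "tier": "tier2", "gb": 12_000},
    {"dept": "HR",          "tier": "tier3", "gb": 4_000},
]

total = 0.0
for a in allocations:
    monthly = a["gb"] * RATE_PER_GB[a["tier"]]
    total += monthly
    print(f"{a['dept']:<12} {a['tier']}  {a['gb']:>7,} GB  ${monthly:,.2f}/month")

print(f"Total monthly chargeback: ${total:,.2f}")
```

In a real deployment, the allocation data would come from the management/reporting interface (that's where the Web Services glue earns its keep), and the output would feed whatever billing or showback system the business already uses.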

So those are the fundamentals. In my opinion, those are the top 3 things you need to address before you move any further into the design phase. Once your foundation is set, then you can zero in on some of the value-added attributes you would like to offer as a service in the cloud: things like CDP, CAS, deduplication, replication, NAS, etc.

@StorageTexan <– Follow Me on Twitter !!!

VMware Virtual Desktop Infrastructure (VDI) and Why Performance Matters

Is storage performance predictability important when building VMware Virtual Desktop Infrastructure (VDI) storage clouds? This applies to Citrix and Microsoft Hyper-V virtual desktop systems as well.

Here is yet another great example of why I just love my job. Last week at our Xiotech national sales meeting we heard from a net-new educational customer out in the western US. They recently piloted a VDI project with great success. One of the biggest hurdles they ran into, and I would bet other storage cloud (or VDI-specific) providers do as well, is performance predictability. This predictability is very important. Too often we see customers focus on the capacity side of the house and forget that performance can be extremely important (VDI boot storm, anyone?). Rob Peglar wrote a great blog post called "Performance Still Matters" over at the Xiotech.com blog site. When you are done reading this blog, head over and check it out 🙂

So, VDI cloud architects should make sure that the solution they design today will meet the requirements of the project over the next 12 months, 24 months and beyond. To make matters worse, they need to consider what happens if the cloud is only 20% utilized, as well as if/when it becomes wildly successful and utilization is closer to 90% or 95%. The last thing you want to do is have to add more spindles ($$$) or turn to expensive SSD ($$$$$$$$$) to solve an issue that should never have happened in the first place.

So, let's assume you already read my riveting, game-changing piece on "Performance Starved Applications" (PSAs). VDI is ONE OF THOSE PSAs!!! Why is this important? If you are looking at traditional storage arrays (Clariion, EVA, Compellent Storage Center, Xiotech Mag3D, NetApp FAS), it's important to know that once you get to about 75% utilization, performance drops like my bank account did last week while I was in Vegas. Like a freaking hammer!! That's just HORRIBLE (the utilization and my bank account). Again, you might ask why that's important? Well, I have three kids and a wife who went back to college, so funds are not where they should be at… oh wait (ADD moment), I'm sure you meant horrible about performance dropping and not my bank account. So, what does performance predictability really mean? How important would it be to know that every time you added an intelligent storage element (Xiotech Emprise 5000, 3U) with certain DataPacs, you could support 225 to 250 simultaneous VDI instances (just as an example), including boot storms? This would give you an incredible ability to zero in on the costs associated with the storage part of your VDI deployment. That is especially true when moving from a pilot program into a full production rollout. For instance, if you pilot 250 VDI instances but you know you will eventually need to support 1,000, you can start off with one Emprise 5000 and grow to a total of four elements. Down the road, if you grow beyond 1,000, you still fully understand the storage costs associated with that growth, because it is PREDICTABLE.
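The sizing math behind that predictability is straightforward; here's a quick sketch using the example numbers above. Note that 250 desktops per building block is the illustrative figure from this post, not a sizing quote for your environment, and the per-element cost is a placeholder you'd fill in yourself.

```python
import math

DESKTOPS_PER_ELEMENT = 250   # illustrative capacity of one storage building block
COST_PER_ELEMENT = 1         # placeholder -- plug in your actual per-element price

def elements_needed(desktops: int) -> int:
    """Round up: a partial element is still a whole element you have to buy."""
    return math.ceil(desktops / DESKTOPS_PER_ELEMENT)

for target in (250, 1000, 1500, 2500):
    n = elements_needed(target)
    print(f"{target:>5} desktops -> {n} element(s), "
          f"cost = {n} x per-element price ({n * COST_PER_ELEMENT})")
```

That linear, rounded-up relationship is the whole point: growth costs become a multiplication, not a guessing game about when the next utilization cliff hits.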

What could this mean to your environment? It means if you are looking at traditional arrays, be prepared to pay for capacity that you will probably never use without a severe hit to performance. What could that mean for the average end user? Their desktop boots slowly, their applications slow down and your helpdesk phone rings off the hook!! So, performance predictability is crucial when designing scalable VDI solutions, and cost management (financial performance predictability) is every bit as critical.

So if you are looking at VDI, or even building a VDI storage cloud, then performance predictability is a great foundation on which to build that solution, and the Xiotech Emprise 5000 is the storage I'd build it on.

Thanks,

@StorageTexan

My 10 years with Xiotech

WOW!! I'm very proud to say that today is my 10-year anniversary with Xiotech. What an awesome ride this has been. In those 10 years I've had the opportunity to work with some brilliant people. Some of them are still here at Xiotech, others have moved on to other endeavors, but each one has made an impact on me, both professionally and personally.

It seems like it was just yesterday, sitting in the Minneapolis Decathlon Club Hotel (I think it's a water park now 🙂 ), overwhelmed with a sense of "wow, this is soooo cool", as well as thinking "crap, I hope they don't realize I have no idea what I'm doing" 🙂  The week before, I was a Windows and Unix systems admin for a telemarketing company out of Austin with no pre-sales experience. I basically snuck into the position because the CIO of my company wanted to go back into sales and Xiotech snatched him up. The good news for me is they were also looking for a Systems Engineer. The CIO put in a good word for me and BAM, I was hired and flown up to Minny for the national sales meeting!!

I'm sure you've heard the expression "he took to it like a duck takes to water". Well, that duck is me. I took to the pre-sales systems engineering position like a duck takes to water. I LOVED IT, and I tried to do everything in my power to absorb as much knowledge as I could: not just the ins and outs of storage, although I did that too (I still have my SNIA Level 2 certification hanging on my wall!!), but also honing the whole concept of "consultative selling". I'm convinced that once you fully grasp consultative selling, you move very quickly into "rock star" status in a sales environment. Now, for people like me who are far from "rock stars", you do the best you can, and based on the 10 years I've been here, I think I'm getting closer!!

Over those 10 years I've had the chance to wear multiple hats. I've gone from a pre/post-sales engineer doing selling, rack-and-stack and break/fix work, to a Regional Storage Architect and a Regional Systems Engineering Manager, and it's been an AWESOME ride. I can honestly say that I'm just as excited today as I was on my very first day, and the education that has come from being "immersed in storage" these past 10 years has turned this "dyed in the wool" network guy into a "dyed in the wool" storage guy. I can't tell you how happy I am that they took a chance on me all those years ago!!

So, this is my first blog post; my hope is to find the time to do more entries. I have a lot of ideas, but as people with Attention Deficit Disorder can tell you, it's sometimes difficult to stay focused long enough to edit something down and post it without changing it 400 times!! 🙂

Until then, here's to another 10 years!! 🙂