
What is your Windows 7 strategy?

 

My new favorite question !!  “What is your Windows 7 strategy?”   What appears to be a pretty straightforward question has led to some FANTASTIC discussions around Virtual Desktop opportunities lately.  It’s been my recent experience that most companies have been on Windows XP a few years longer than they would like to be, so Windows 7 is at the forefront of their minds.  Once the question is asked, it usually moves toward a Virtual Desktop Infrastructure (VDI) discussion pretty quickly.  I’ve found that Windows 7 migration is one of the largest catalysts for getting momentum behind a VDI architectural discussion (the others being VDI security, desktop disaster recovery, and “bring your own computer,” or BYOC).  This makes sense, since most of the time you buy a new PC with Windows 7 rather than upgrading an existing desktop to Windows 7.  This is especially true if the existing desktop hardware is at least two years old. 

 

The upside to the VDI and Windows 7 discussion is this: there has never been a better time to move to VDI.  You already have to replace the desktop hardware, so you might as well investigate the viability of virtualizing the desktop.  Sometimes it takes a little more than just a Windows 7 strategy to get things moving.  VDI also has some really nice side benefits, like helping with local and remote users and their disaster recovery ramifications.  For instance, for a few years I ran the Systems Engineering group for my region at Xiotech.  At the time I had 8 or 9 engineers, and it never failed: at least once a quarter someone’s laptop hard drive would crash, or better yet, get stolen – it always seemed that the AEs had the most issues 🙂 .  Each time that happened it took that SE or AE two or three days to get back up to speed, and that’s if they had a good backup and we could ship them a replacement laptop quickly.  If not, it was a week.  What a VDI architecture could have delivered was a “desktop/laptop disaster recovery process.”  In this case, they could have simply stopped into a store such as Best Buy, Target, or Fry’s Electronics and purchased a laptop, walked into a Starbucks (WiFi), downloaded the VDI client, and away they would go.  Now, if you don’t have remote users, think of the ability to swap out a failed desktop and have the end user up and running in an hour.  It’s HUGE!!!  The whole BYOC (Bring Your Own Computer) concept really can take this to the next level.  Imagine the helpdesk team never having to image a laptop or desktop hard drive ever again.  You give the end users a set dollar amount, require them to pick up 2 or 3 years of 24/7 phone support on the laptop/desktop, and just supply them with remote desktop access. 

 

Let’s look at the security enhancements you get with a Virtual Desktop architecture.  If you are a financial trading company, maybe you want to lock down the company’s information as tightly as possible.  If you are a K-12 or university institution, maybe you want to keep students from altering the OS or applications in ways that might affect the next student.  The ability for a new desktop instance to be spawned at each unique login means that if a student loads a virus into the desktop, the next time someone logs in, they get a fresh, virus-free image.  What seems to have gotten the most attention in these K-12 institutions is the ability to offer the “computer lab” applications to home users.  Imagine a student logging into a VDI session from anywhere in the world and having the same level of access as if they were on campus – the same look and feel.  Maybe the student has an iPad or a netbook.  They can then access the lab 24/7/365.

So if you are struggling with the concept of VDI and are looking for different ways to justify it, hopefully I’ve left you with a few things to consider.  If nothing more, ask yourself (or the desktop team) the following:  “What is our Windows 7 strategy?”

@StorageTexan


The Emprise 9000 – Scale-out SAN Architecture

If you look up Emprise in the Merriam-Webster dictionary you will see that it means “an adventurous or daring enterprise.”   That pretty much describes the Emprise product family’s launch two years ago.  We did something that no one else was doing then, or is doing today.  Imagine being able to start from scratch on a storage solution, and I’m not talking about controller software.  I’m talking about completely re-engineering/re-architecting a solution, built with enough resiliency to offer the only zero-cost 5-year hardware warranty in the storage industry.  Not only is it super reliable, it’s ridiculously fast and predictable.  When you can support 600+ Virtual Desktop (Performance VDI) “bootstorm” instances at a whopping 20 IOPS per bootup in 3U of space, I would classify that as wicked fast!!!  

 

In those two years we have not rested on our laurels.  Steve Sicola’s team, headed up by our VP of Technology David “Gus” Gustavsson, has really outdone itself with our latest Emprise product launch.  Not only did we move our entire user interface from “Web Services” to a RESTful API (ISE-Manager – I’ll blog about this later – and our iPhone/iPad app), the team also released our new 2.5” DataPac: 20 drives per DataPac, for 40(ea) 2.5” drives in 3U of space, delivering about 19.2TB of capacity and a TON of performance.  The team also released our ISE Analyzer (an advanced reporting solution built on our CorteX RESTful API, www.CorteXdeveloper.com – I’ll blog about that soon) and the next release of our Emprise product family, the Emprise 9000.  I swear this team doesn’t sleep!!!
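Since we’re talking about REST, here’s a minimal sketch of what polling a RESTful management interface could look like from a script, just to show the style of integration a RESTful API opens up.  To be clear: the host name, URL path, credentials and JSON field names below are hypothetical placeholders I made up for illustration; they are not the actual CorteX or ISE-Manager API.

import requests

# Minimal illustration of polling a RESTful storage management interface.
# The host, path, credentials and JSON field names are hypothetical placeholders,
# NOT the real CorteX / ISE-Manager API.
BASE_URL = "https://ise-manager.example.local"

resp = requests.get(f"{BASE_URL}/api/v1/performance",
                    auth=("admin", "password"), timeout=10)
resp.raise_for_status()

for vol in resp.json().get("volumes", []):   # hypothetical response shape
    print(vol.get("name"), vol.get("iops"), vol.get("latency_ms"))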

 

So, the Emprise 9000 is a pretty unique solution in the market.  Today, when you think “scale-out” architecture, the first thing you might think about is NAS.  Hopefully our ISE NAS !!  We hope that moving forward you will also think of our Emprise 9000.  The Emprise 9000’s ability to scale to 12 controllers puts it way above the 8 controllers the 3PAR solution scales to, and above the two controllers the rest of the storage world produces (EMC Clariion, Compellent, HP EVA, IBM XIV, etc.).  When married with our Intelligent Storage Element (ISE), it truly gives our customers the most robust, scalable solution in the storage market today.  

 

Let’s be clear, the Emprise 9000 is not just a controller update.  It’s a combination of better, faster controllers, a RESTful API and our ISE technology, combined to solve performance-starved application issues like Virtual Desktops, Exchange, OLTP, data warehouse, virtual servers, as well as various other types of applications found in datacenters today.  The ability to give predictable performance whether the solution is 10% utilized or 97% utilized is a very unique feature.  Did I mention it comes with our zero-cost hardware maintenance?  24x7x365 !!!

So for those keeping a tally at home, and for those competitors that want a little more information on what the Emprise 9000 can do, here is a quick list (this is not all the features):

  • Each controller has dual quad-core Nehalem CPUs!!
  • Scale-out to 12 Controller pairs
  • 8Gb Fibre Channel ports
    • N-Port Virtualization (NPIV)
  • 1Gb or 10Gb iSCSI ports (10Gb later this quarter)
    • You can run both FC and iSCSI in the same solution.
  • Scalable from 1 to 96(ea) ISEs of any size
    • Max capacity would be 1.8PB with 96(ea) 19.2TB ISEs (see the quick math sketch after this list)
  • Support for greater than 2TB LUNs
    • LUNs up to 256TB in size
  • Thin Provisioned Volumes
  • Snapshots
    • Read-only snapshots
    • Writeable snapshots as well – think “smart-clone” technology for VDI
  • Heterogeneous Migration
    • You want to migrate off that EMC, HP, 3PAR, HDS, etc – we can do it natively in our storage controllers.
  • Sync/Async native IP or FC replication
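If you want to sanity-check the scalability figures in the list above, here’s a quick back-of-the-envelope sketch using only the 19.2TB-per-ISE and 3U-per-ISE numbers already quoted (the ISE counts I loop over are just sample points):

# Quick sanity check on the scalability figures quoted in the list above.
TB_PER_ISE = 19.2   # largest DataPac configuration mentioned above
U_PER_ISE = 3       # rack units per ISE

for ise_count in (1, 12, 48, 96):
    capacity_tb = ise_count * TB_PER_ISE
    print(f"{ise_count:>2} ISE(s): {capacity_tb:7.1f} TB "
          f"(~{capacity_tb / 1024:.2f} PB) in {ise_count * U_PER_ISE}U")

At 96 ISEs that works out to roughly 1,843TB, which is the ~1.8PB maximum listed above.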

 

So, as you can see, it’s a pretty impressive list!!  And as with all new products, we will be adding new features pretty quickly, so stay tuned to announcements from us around the 9000.  BUT there’s more!!  I just can’t really go into it today 🙂  Stick around a couple of months for some even cooler stuff Gus’s team will be rolling out.   I just got back from a week in Vegas getting fed by a firehose about all the stuff we will be rolling out by the end of the year.  WOW !!  Impressive to say the least !!! 🙂

@StorageTexan <– Follow me on Twitter

Stalled Virtualization Projects-How Xiotech can help you unstick these deployments


Xiotech is in a huge partner recruitment phase this year.  Things have been going fantastically! However, one problem we are running into is getting some of these larger partners to give us the time of day.  Shocking, I know – who wouldn’t want to carve out 60 minutes of their time to talk to “Ziotech” (I didn’t misspell that – it happens to us ALL THE TIME)?  Once we get our foot in the door, it’s usually 20 minutes of them explaining that they carry EMC, NetApp, Pillar, HDS, Compellent, etc.  They always explain that they just can’t take on yet another storage vendor.  What’s really interesting is that we typically tell them we don’t want to replace any of their current storage offerings.  This usually leads to a skeptical look 🙂  I usually tell them that we have a very unique ability to “un-stick” Virtual Desktop opportunities. Let me explain a little further.

It never fails – VDI projects seem to get stalled, or simply get stuck in some rut that the prospect or partner can’t get out of.  Now, a stuck project can come in many shapes and sizes.  Sometimes it’s time and effort, sometimes it’s other projects in the way. But the vast majority of the time it’s cost/ROI/TCO type things – not just from a justification point of view, but most of the time from the upfront CAPEX view.  This has been especially true with 1000+ seat solutions.  Like I said, I just keep hearing more and more about these situations from our partners.  What normally follows is, “well, the project is on hold due to funding issues.” So how can we differentiate ourselves in this kind of opportunity?  Funny you should ask!!

I typically like to describe our ISE as a solution that has a VERY unique ability to do three things better than the rest. 

#1 – We give you true cost predictability over the life of the project. 

Let’s be honest, if you are about to deploy a 5000+ desktop VDI solution, you are probably going to do this project over a 3-year time frame, right?  Even if it’s only 500 desktops, why is it that when we look into these solutions we only see 3 years of maintenance on the most expensive CAPEX item, which is storage?  By the time you get all 5000+ systems up and running, it’ll be time for year 4 and 5 maintenance on your storage foundation.   If this isn’t your first storage rodeo, then you know that years 4 and 5 can be the most painful in terms of cost. What’s really cool about our solution is the “Lego” style approach to design.  We can tell you the cost of each ISE you want to buy; since they are 3U “blades,” you can simply add the number you need to meet whatever metric you have in place and know the cost of each one.  As you can see, we do “cost predictability” pretty well.

#2 – We give you performance predictability. 

With our 3U Emprise 5000 we have the very unique ability to predict performance characteristics.  This is very difficult for legacy storage vendors: their array’s IOPS pool can swing 80% plus or minus depending on how full, from a capacity point of view, their solution is.  We deliver 348 SPC-1 IOPS per SPINDLE based on 97% capacity utilization.  Keep in mind most engineers for legacy storage arrays plan on 120 to 150 IOPS per spindle.  So based on that alone, we can deliver the same performance with half the spindles!!    
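To make that concrete, here’s a quick back-of-the-envelope sketch in Python.  The per-spindle numbers come straight from the paragraph above; the 20,000-IOPS workload target is just an illustrative figure I picked:

import math

# Spindle-count comparison using the per-spindle figures quoted above.
# The 20,000 IOPS workload target is illustrative only.
TARGET_IOPS = 20_000
ISE_IOPS_PER_SPINDLE = 348      # SPC-1 IOPS per spindle at 97% capacity utilization
LEGACY_IOPS_PER_SPINDLE = 135   # midpoint of the 120-150 rule-of-thumb range

ise_spindles = math.ceil(TARGET_IOPS / ISE_IOPS_PER_SPINDLE)
legacy_spindles = math.ceil(TARGET_IOPS / LEGACY_IOPS_PER_SPINDLE)

print(f"ISE spindles needed:    {ise_spindles}")     # 58
print(f"Legacy spindles needed: {legacy_spindles}")  # 149 – well over 2x as many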

#3 – We can give you capacity predictability. 

Because of the linearity of our solution, when you purchase the Emprise 5000 we can tell you exactly how much usable, after-RAID capacity you will have available.  Best practice usable capacity for our solution is 96% full – that’s where we do all of our performance testing.  Compared with the industry average of anywhere from 60% to 80%, your capacity “mileage” will vary !!

So why should this be important to solution providers and customers?  Back to my VDI comments.  If you are in the process of evaluating, or even moving down the path to deploy, VDI, how important is it for you to fully understand your storage costs when designing it out?  If I could tell you that you can support 2,000 VDI instances in 3U of space, that the 3U of space can hold 19TB of capacity, and that the solution costs $50,000 (I’m pulling this number out of the…..well, air), that could really be a pivotal point in getting your project off the ground, don’t you think?  Like I said, no one deploys a 5,000-seat solution all at once; you do this over a number of years.  With our storage blades, you can do just that – you simply purchase one ISE at a time.  With its predictable cost, capacity and, most importantly, its predictable performance, you have the luxury of growing your deployment over time without having to worry about a huge upfront CAPEX hit.  Not to mention, a 5-year hardware warranty better aligns with the finance side of the house and their typical 5-year amortization process.  No hidden year 4 and 5 maintenance costs !! 
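Here’s a minimal sketch of what that “one ISE at a time” growth looks like in numbers, reusing the illustrative figures from the paragraph above (2,000 desktops per 3U ISE and the admittedly made-up $50,000 per ISE); the 3-year rollout schedule is also just an example:

import math

# Phased VDI rollout sketch using the illustrative figures from the paragraph above.
DESKTOPS_PER_ISE = 2_000
COST_PER_ISE = 50_000                      # the post's "out of the air" number
rollout_by_year = [1_000, 2_000, 2_000]    # example 5,000-seat project over 3 years

deployed = 0
ises_owned = 0
for year, new_seats in enumerate(rollout_by_year, start=1):
    deployed += new_seats
    ises_needed = math.ceil(deployed / DESKTOPS_PER_ISE)
    buy_now = ises_needed - ises_owned
    ises_owned = ises_needed
    print(f"Year {year}: {deployed:,} seats -> {ises_needed} ISE(s) total, "
          f"buy {buy_now} this year (${buy_now * COST_PER_ISE:,} CAPEX)")

The point isn’t the exact dollars; it’s that the CAPEX shows up in predictable, bite-sized chunks instead of one huge up-front purchase.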

So, if you are looking at a VDI project, or you’ve looked at one in the past and just couldn’t justify it, give us a call.  Maybe we can help lower your entry costs and get the project unstuck !!

Thanks,

@StorageTexan

Xiotech Storage Blade – 101

How Xiotech Storage Blades have the potential to change the storage paradigm.

It’s inevitable: whether I’m talking with a value-added reseller (VAR) or a net-new prospect, I’m always asked to explain how our solution is so different from everyone else’s.  I figured this was a great opportunity to address it in a blog post. 

Xiotech recently released a whitepaper authored by Jack Fegreus of OpenBench Labs.  His ISE overview was so spot on that I wanted to copy/paste some of the whitepaper here.  I would encourage you to read his full whitepaper as well, which includes his testing results.  I’m pretty sure you will be as impressed as I was.

Before you continue reading, I need you to take a moment to suspend everything you understand about storage architecture, both good and bad.  I would like you to read this post with an open mind, setting aside your biases as much as possible.  If you can do this, it will make a LOT more sense.

**********

<Copied from http://www.infostor.com/index/articles/display/3933581853/articles/infostor/openbench-lab-review/2010/april-2010/a-radical_approach.html>

The heart of ISE—pronounced, “ice”— technology is a multi-drive sealed DataPac with specially matched Seagate Fibre Channel drives. The standard drive firmware used for off-the-shelf commercial disks has been replaced with firmware that provides detailed information about internal disk structures. ISE leverages this detailed disk structure information to access data more precisely and boost I/O performance on the order of 25%. From a bottom line perspective, however, the most powerful technological impact of ISE comes in the form of autonomic self-healing storage that reduces service requirements.

In a traditional storage subsystem, the drives, drive enclosures and the system controllers are all manufactured independently. That scheme leaves controller and drive firmware to handle all of the compatibility issues that must be addressed to ensure device interoperation. Not only does this create significant processing overhead, it reduces the useful knowledge about the components to a lowest common denominator: the standard SCSI control set.

Relieved of the burden of device compatibility issues, ISE tightly integrates the firmware on its Managed Reliability Controllers (MRCs) with the special firmware used exclusively by all of the drives in a DataPac. Over an internal point-to-point switched network, and not a traditional arbitrated loop, MRCs are able to leverage advanced drive telemetry and exploit detailed knowledge about the internal structure of all DataPac components. What’s more, ISE architecture moves I/O processing and cache circuitry into the MRC.
 
A highlight of the integration between MRCs and DataPacs is the striping of data at the level of an individual drive head. Through such precise access to data, ISE technology significantly reduces data exposure on a drive. Only the surfaces of affected heads with allocated space, not an entire drive, will ever need to be rebuilt. What’s more, precise knowledge about underlying components allows an ISE to reduce the rate at which DataPac components fail, repair many component failures in-situ, and minimize the impact of failures that cannot be repaired. The remedial reconditioning that MRCs are able to implement extends to such capabilities as remanufacturing disks through head sparing and depopulation, reformatting low-level track data, and even rewriting servo and data tracks.

ISE technology transforms the notion of “RAID level” into a characteristic of a logical volume that IT administrators assign at the time that the logical volume is created. This eliminates the need for IT administrators to create storage pools for one or more levels of RAID redundancy in order to allocate logical drives. Also gone is the first stumbling block to better resource utilization: There is no need for IT administrators to pre-allocate disk drives for fixed RAID-level storage pools. Within Xiotech’s ISE architecture, DataPacs function as flexible RAID storage pools, from which logical drives are provisioned and assigned a RAID level for data redundancy on an ad hoc basis.

What’s more, the ISE separates the function of the two internal MRCs from that of the two external Fibre Channel ports. The two FC ports balance FC frame traffic to optimize flow of I/O packets on the SAN fabric. Then the MRCs balance I/O requests to maximize I/O throughput for the DataPacs.

In effect, Xiotech’s ISE technology treats a sealed DataPac as a virtual super disk and makes a DataPac the base configurable unit, which slashes operating costs by taking the execution of low-level device-management tasks out of the hands of administrators. This heal-in-place technology also allows ISE-based systems, such as the Emprise 5000, to reach reliability levels that are impossible for standard storage arrays. Most importantly for IT and OEM users of the Emprise 5000 storage, Xiotech is able to provide a five-year warranty that eliminates storage service renewal costs for a five-year lifespan.

******************

Now, I’m going to keep this same open mind when I say the following: the Emprise 5000 storage blade just makes storage controllers better.  We make one, and we’ve seen it first-hand.  We saw a significant jump in performance once we moved from the typical drive bays and drives that everyone else uses to the ISE.  Not to mention, with its native switched-fabric architecture, it allowed us to scale our Emprise 7000 storage controllers to 1PB of capacity.  What’s really cool (open mind for me) is that we’ve improved performance and reliability for a lot of storage controllers like DataCore, FalconStor, IBM SVC and HDS USP-V, not to mention significant boosts for applications and OSes as well. 

Feel free to close your mind now 🙂

@StorageTexan

10,000 Exchange Users in 3U of space

 

This is a pretty cool video done by the Technical Marketing team at Xiotech.  10,000 Exchange users in 3U of space!!   No fancy/expensive SSD needed for this !!

In summary:

  • 250 VDI instances, or
  • 10,000 Exchange users, or
  • 750 DVD-quality videos, or
  • 25,000 MP3s

In 3U of space.  Now that’s WICKED FAST !!!

As they say in Minny – “that’s not too shabby !!”

By the way, if by chance 10,000 is just not enough users for you, don’t worry: add a second ISE and DOUBLE IT TO 20,000.  Need 30,000?  Then add a THIRD ISE.  100,000 users in 10 ISEs, or 30U of rack space.  Sniff sniff….I love it !!!!!!!!!!!!
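Here’s the same scaling math as a tiny sketch, using nothing but the 10,000-users-per-ISE and 3U-per-ISE figures above:

import math

# Exchange-user scaling using the 10,000-users-per-3U-ISE figure quoted above.
USERS_PER_ISE = 10_000
U_PER_ISE = 3

for target_users in (10_000, 20_000, 30_000, 100_000):
    ises = math.ceil(target_users / USERS_PER_ISE)
    print(f"{target_users:>7,} users -> {ises:>2} ISE(s), {ises * U_PER_ISE}U of rack space")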

By the way – Check out what others are doing:

Pillar Data = 8,500 Exchange Users with 24GB of Cache !!!  I should say, our ISE comes with 1GB.  It’s not the size that counts, it’s HOW YOU USE IT !! 🙂

  • One Pillar Axiom 600 with one FC Slammer
  • 24GB of cache <—-  WOW !!!!!
  • 4 SSD Bricks for databases, each with:
    • Two dedicated RAID controllers
    • 13 50GB SSDs <— I’m going to guess that these aren’t very cheap.
  • 2 SATA Bricks for logs, each with:
    • 13 500GB 7,200 RPM SATA disk drives

Hitachi AMS 2300 = 10,800 users – a 400+ page document !!! <– I have to say it again, WOW, 400+ pages on this bad boy !!!

  • 240 300GB 15K RPM SAS disks <— Ahh ya – TONZ of spindles !!!  We had 20(ea) 3.5″ drives to do our testing. 
  • 16GB of cache
  • 8(ea) 4Gb/s Fibre Channel paths used for these tests
  • Testing used 8 Sun Fire 4600 M2 servers with 32GB of RAM, four dual-core AMD Opteron CPUs, 8(ea) Emulex 4Gbit/s Fibre Channel adapters and Windows Server 2003 R2 Enterprise x64 with Service Pack 2

The old pain in the DAS


Is it me, or are others in the field also seeing more and more companies being pushed to look at DAS solutions for their application environments?  It strikes me as pretty interesting.  I’m sure it’s mostly positioned around reducing the overall cost of the solution, but I also think it has a lot to do with the performance predictability you get when you don’t have to fight other applications for IOPS.  In fact, Devin Ganger did a great blog post on this very subject with regard to Exchange 2010.  It was a pretty cool read. I left a comment on his site that pretty much matches (some may call it plagiarizing myself 🙂 ) my discussion here, but I’ve had some time to expand on it a little more.  As such, here are my thoughts on my own site 🙂

Let’s take a look back 10 years or so.  DAS solutions at the time signified low cost, low-to-moderate performance, basic RAID-enabled reliability, administration time overhead, downtime for any required modification, as well as an inability to scale.  Pick any 5 of those 6 and I think most of us can agree.  Not to mention, the DAS solution was missing key features that the application vendors didn’t include, like full block copies, replication, deduplication, etc.  Back then we spent a lot of time educating people on the benefits of SAN over DAS.  We touted the ability to put all your storage eggs in one highly reliable, networked, very redundant, easy-to-replicate-to-a-DR-site solution, which could be shared amongst many servers to gain utilization efficiency.  We talked about cool features like “boot from SAN” as well as full block snapshots and replicating your data around the world, and the only real way to do that was via a storage array with those features. 

Fast forward to today, and the storage array controllers are not only doing RAID and cache protection (which is super important), they’re also doing thin provisioning, CDP, replication, dedupe (in some cases), snapshots (full copy and COW or ROW), multi-tier scheduled migration, CIFS, NFS, FC, iSCSI, FCoE, etc.  It’s getting to the point that performance predictability is pretty much going away, not to mention it takes a spreadsheet to understand the licensing that goes along with these features.  Reliability of the code, and the mixing of different technologies (1Gb, 2Gb, 4Gb, FC drive bays, SAS connections, SATA connections, JBODs, SBODs, loops) as well as all the various “plumbing” connectivity options most arrays offer today, is not making it any more stable.  2TB drive rebuild times are a great example of adding even more for controllers to handle.  Not to mention, the fundamental building block of a storage array is data protection.  Rob Peglar over at the Xiotech blog did a really great job of describing “Controller Feature Creep.”  If you haven’t read it, you should.  Also, David Black over at “The Black Liszt” discussed “Mainframes and Storage Blades” – he’s got a really cool “back to the future” discussion. 

Today it appears that application and OS/hypervisor vendors have caught up with all the issues we positioned against just years ago.  Exchange 2010 is a great example of this; vSphere is another one.  Many application vendors now have native deduplication and compression built into their file systems, COW snapshots are a no-brainer, and replication can be done natively by the app, which gives some really unique DR capabilities.  Not to mention, some applications support the ability to migrate data from Tier 1 to Tier 2 and Tier 3 based not only on a single “last touched” attribute, but also on file attributes like content (.PDF, .MP3), importance, duration, deletion policy and everything else, without caring about the backend storage or what brand it is.  We are seeing major database vendors support controlling all aspects of the volumes on which logs, tablespaces, redo/undo, temporary space, etc. are held.  Just carve up 10TB and assign it to the application, and it will take care of thin provisioning and all sorts of other ‘cool’ features.   

At the end of the day, the “pain in the DAS” that we knew and loved to compete against is being replaced with “intelligent DAS” and application-aware storage capabilities.  All this gives the end user a unique ability to make some pretty interesting choices.  They can continue down the path with the typical storage array controller route, or they can identify opportunities that leverage native abilities in the application and the “intelligent DAS” solutions on the market today to vastly lower their total cost of ownership.   The question the end user needs to ask is, ‘What functionality is already included in the application/operating system I’m running?’ vs. ‘What do I need my storage system to provide because my application doesn’t have this feature?’  At the end of the day, it’s a win-win for the consumer, as well as a really cool place to be in the industry.  Like I said in my “Cool things Commvault is doing with REST” post, when you couple intelligent DAS and application-aware storage with a RESTful open-standards interface, it really starts to open up some cool things. 2010 is going to be an exciting year for storage.  Commvault has already started this parade, so now it’s all about “who’s next.” 

 @StorageTexan <– Click on me to follow on Twitter. 

 PS – I’ve added an e-mail subscription capability to my site (as well as RSS feeds).  In the upper right corner of this site you will see a button to sign up.  Each time I post a new blog, you will get an e-mail of it.  Also, you will need to confirm the subscription via the e-mail you receive.