Xiotech Storage Blade – 101

How Xiotech Storage Blades have the potential to change the storage paradigm.

It’s inevitable: whether I’m talking with a value-added reseller (VAR) or a net-new prospect, I’m always asked to explain how our solution is so different from everyone else’s. I figured this was a great opportunity to address that in a blog post.

Xiotech recently released a whitepaper authored by Jack Fegreus of OpenBench Labs. His ISE overview was so spot-on that I wanted to reproduce part of it here. I encourage you to read the full whitepaper as well, which includes his test results. I’m pretty sure you will be as impressed as I was.

Before you continue reading, I need you to take a moment to suspend everything you understand about storage architecture, both good and bad.  I would like you to read this post with an open mind, setting aside your biases as much as possible.  If you can do this, it will make a LOT more sense.

**********

<Copied from http://www.infostor.com/index/articles/display/3933581853/articles/infostor/openbench-lab-review/2010/april-2010/a-radical_approach.html>

The heart of ISE—pronounced, “ice”— technology is a multi-drive sealed DataPac with specially matched Seagate Fibre Channel drives. The standard drive firmware used for off-the-shelf commercial disks has been replaced with firmware that provides detailed information about internal disk structures. ISE leverages this detailed disk structure information to access data more precisely and boost I/O performance on the order of 25%. From a bottom line perspective, however, the most powerful technological impact of ISE comes in the form of autonomic self-healing storage that reduces service requirements.

In a traditional storage subsystem, the drives, drive enclosures and the system controllers are all manufactured independently. That scheme leaves controller and drive firmware to handle all of the compatibility issues that must be addressed to ensure device interoperation. Not only does this create significant processing overhead, it reduces the useful knowledge about the components to a lowest common denominator: the standard SCSI control set.

Relieved of the burden of device compatibility issues, ISE tightly integrates the firmware on its Managed Reliability Controllers (MRCs) with the special firmware used exclusively by all of the drives in a DataPac. Over an internal point-to-point switched network, and not a traditional arbitrated loop, MRCs are able to leverage advanced drive telemetry and exploit detailed knowledge about the internal structure of all DataPac components. What’s more, ISE architecture moves I/O processing and cache circuitry into the MRC.
 
A highlight of the integration between MRCs and DataPacs is the striping of data at the level of an individual drive head. Through such precise access to data, ISE technology significantly reduces data exposure on a drive. Only the surfaces of affected heads with allocated space, not an entire drive, will ever need to be rebuilt. What’s more, precise knowledge about underlying components allows an ISE to reduce the rate at which DataPac components fail, repair many component failures in-situ, and minimize the impact of failures that cannot be repaired. The remedial reconditioning that MRCs are able to implement extends to such capabilities as remanufacturing disks through head sparing and depopulation, reformatting low-level track data, and even rewriting servo and data tracks.

ISE technology transforms the notion of “RAID level” into a characteristic of a logical volume that IT administrators assign at the time that the logical volume is created. This eliminates the need for IT administrators to create storage pools for one or more levels of RAID redundancy in order to allocate logical drives. Also gone is the first stumbling block to better resource utilization: There is no need for IT administrators to pre-allocate disk drives for fixed RAID-level storage pools. Within Xiotech’s ISE architecture, DataPacs function as flexible RAID storage pools, from which logical drives are provisioned and assigned a RAID level for data redundancy on an ad hoc basis.

What’s more, the ISE separates the function of the two internal MRCs from that of the two external Fibre Channel ports. The two FC ports balance FC frame traffic to optimize flow of I/O packets on the SAN fabric. Then the MRCs balance I/O requests to maximize I/O throughput for the DataPacs.

In effect, Xiotech’s ISE technology treats a sealed DataPac as a virtual super disk and makes a DataPac the base configurable unit, which slashes operating costs by taking the execution of low-level device-management tasks out of the hands of administrators. This heal-in-place technology also allows ISE-based systems, such as the Emprise 5000, to reach reliability levels that are impossible for standard storage arrays. Most importantly for IT and OEM users of the Emprise 5000 storage, Xiotech is able to provide a five-year warranty that eliminates storage service renewal costs for a five-year lifespan.

******************
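
To make that provisioning model a little more concrete, here is a minimal Python sketch (purely illustrative, not Xiotech’s actual interface) of treating a DataPac as one flexible pool from which logical volumes are carved, with the RAID level chosen per volume at creation time. The class name, method names, and the assumed 4+1 RAID 5 stripe width are all hypothetical.

    class DataPacPool:
        """One sealed DataPac modeled as a single flexible pool of capacity."""

        def __init__(self, usable_gb):
            self.usable_gb = usable_gb
            self.volumes = {}

        def free_gb(self):
            return self.usable_gb - sum(v["raw_gb"] for v in self.volumes.values())

        def create_volume(self, name, size_gb, raid_level):
            # RAID level is an attribute of the volume, not of a pre-built disk group.
            if raid_level not in ("RAID1", "RAID5"):
                raise ValueError("ISE volumes are RAID 1 or RAID 5 equivalents")
            # RAID 1 mirrors the data; RAID 5 adds parity (a 4+1 stripe is assumed here).
            overhead = 2.0 if raid_level == "RAID1" else 1.25
            raw_needed = size_gb * overhead
            if raw_needed > self.free_gb():
                raise RuntimeError("not enough free capacity in this DataPac")
            self.volumes[name] = {"size_gb": size_gb, "raid": raid_level, "raw_gb": raw_needed}

    pool = DataPacPool(usable_gb=8000)
    pool.create_volume("oracle_redo", 500, "RAID1")    # mirrored volume
    pool.create_volume("vm_datastore", 2000, "RAID5")  # parity-protected volume
    print(round(pool.free_gb()), "GB still unallocated in the pool")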

Now, I’m going to keep this same open mind when I say the following: the Emprise 5000 storage blade just makes storage controllers better.  We make one, and we’ve seen it first-hand.  We saw a significant jump in performance once we moved from the typical drive bays and drives everyone else uses to the ISE.  Not to mention, with its native switch-fabric architecture, it allowed us to scale our Emprise 7000 storage controllers to 1PB of capacity.  What’s really cool (open mind for me) is that we’ve improved performance and reliability for a lot of storage controllers like DataCore, FalconStor, IBM-SVC and HDS USP-V, not to mention significant boosts for applications and operating systems as well.

Feel free to close your mind now 🙂

@StorageTexan

20 responses to “Xiotech Storage Blade – 101”

  1. Thanks. What’s not clear to me at all (even after turning off the competitive part of my brain, which felt like being lobotomized):

    1. Are volumes capable of striping across multiple ISEs – and how many? (and what happens to RAID then)
    2. The exact failure scenarios
    3. What does “striping the data at the level of an individual drive head” mean?

    The posts are severely lacking in detail. There’s a lot of interesting stuff about how you use custom firmware (as do we, BTW) and how low-level you get, but what’s not clear is how failures are handled. I get that the pack is “sealed”, but how many disks are dedicated to sparing, in case a motor gives out? Fixing UREs and surface errors is only part of the answer.

    How are lost writes detected and handled?

    At what point do you need to replace a pack, and what are the ramifications? (everything fails).

    Data resiliency is paramount and if I can’t understand it, either I’m being thick, or customers will indeed have a really hard time understanding it (or both).

    The way I understand it: a volume that’s R5 has bits of it in different drives, kinda like how Compellent (and I think 3Par) do it. Sounds like you don’t make RAID on discrete drives, you RAID chunklets (I do like 3Par’s term). There also seem to be echoes of XIV in the design.

    Am I right so far at all?

    So, no matter how you do it, is it true that if you lose 2 of said chunks on a R5 volume then it’s kaput?

    Ultimately, it’s all about the applications. The architecture is a means to an end. It’s only important inasmuch as it helps or hinders the applications, be that performance, resiliency, recoverability, etc.

    D

  2. The paradigm of the ISE is much different than that of a disc drive or an array controller. It is a level of integration akin to the first creation of the SCSI disc, which absorbed work from the old array controllers that could only handle a small number of spindles because of the complexities of controlling sectors and index data as well as reads and writes. Before the SCSI disc, hosts or servers did the job of data protection with mirroring (or host-based volume shadowing).

    After SCSI came along, RAID was born, as well as every storage feature on the planet, because Windows and Linux lacked the features found in older mainframe-class operating systems from IBM, DEC, Amdahl, Unisys, Hitachi, etc.

    This stopped most investment in hardware at the array level, except to pack in more and more software features along with more and more individual disc drives – drives that most people knew nothing about beyond the API used to talk to them, let alone how they work or how they fail.

    As far as the questions raised:
    1. How is striping achieved across ISEs?
    2. What are the failure scenarios?
    3. What does striping at the head level mean?

    These are the incorrect questions to ask with respect to ISE technology.

    This technology came from a $100MM effort at Seagate Technology to come up with an architecture to control a fixed number of devices – a new storage device that solves the issues of environmentals, failure modes, fault tolerance, performance, and serviceability.

    All the old metrics of the past twenty years do NOT apply here.

    Back in 1979 at Digital, and I’m sure at IBM, the question was postulated: “What if we had a reliable disc?” At that point, assumptions were made whereby discs were then striped at the host level and mirrored for disaster tolerance. We at Xiotech have technology that allows just this.

    In order to achieve greater performance than a single ISE can handle, striping is performed above the ISE. This leads to linear scalability across multiple ISEs. This is performed either by software in a volume manager (Symantec, Oracle ASM, Windows, or Linux, etc.) or by a virtualizer such as a NAS head, SAN Volume Controller, DataCore, FalconStor, or an intelligent switch. All these devices deal with volumes or LUNs.

    All the LUNs created within an ISE are either RAID 1 or RAID 5 equivalents. Most applications only require the performance of a single ISE, because we have proven we get twice the I/O per spindle at the application level even after RAIDing a volume. For those applications that require more performance, striping is performed above the ISE in the manner mentioned. Xiotech has customers running very large databases with over 1 billion transactions per day using this method. These mission-critical applications also mirror the data to another site using the same software or virtualizer that did the striping, covering any issue with data resiliency.
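
    Since the striping lives above the ISE, the address math is the ordinary volume-manager stripe mapping. Here is a minimal Python sketch of that mapping, with the LUN count and stripe unit picked arbitrarily for illustration (this is the generic technique, not Xiotech or any vendor’s code):

        NUM_ISE = 4            # assumed: four ISE LUNs presented to the host volume manager
        STRIPE_KB = 1024       # assumed: 1 MB stripe unit chosen in the volume manager

        def map_logical_offset(offset_kb):
            """Map an offset (in KB) on the striped host volume to (ISE index, offset on that ISE)."""
            stripe_number = offset_kb // STRIPE_KB
            ise_index = stripe_number % NUM_ISE                                   # round-robin across ISEs
            ise_offset = (stripe_number // NUM_ISE) * STRIPE_KB + offset_kb % STRIPE_KB
            return ise_index, ise_offset

        # A large sequential transfer fans out across all four ISEs, which is why
        # aggregate throughput grows roughly linearly as ISEs are added.
        for offset in range(0, 8 * STRIPE_KB, STRIPE_KB):
            print(offset, "->", map_logical_offset(offset))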

    Moreover, ISE is very different from the standpoint of failure scenarios. First and foremost, we PREVENT failures in the first place. We did this at Seagate with full knowledge of how drives operate and how they fail, which is not catastrophic except in the rarest scenarios. This is why we do not need anything past RAID 1 or RAID 5 equivalents with our patented RAGS technology (Redundancy Allocation Grid System).

    If one prevents the failures in the first place, then the discs are actually fault-tolerant devices. This is not understood by most OEMs, because they do not understand how discs run or fail; they never designed them or really looked at what happens when things go wrong. First, drives do NOT lose motors except in the rarest cases. Anyone can put fear, uncertainty and doubt in the minds of customers to push an agenda. Almost all drives never fail hard, and because of that, Xiotech’s technology from Seagate actually places the engineers’ knowledge of how they operate inside the ISE.

    By preventing failures in the first place, then being able to ‘remanufacture’ drives in place, while also being able to work around failed ‘surfaces’ or pieces of ‘surfaces’ beyond a block, the life of a drive is extended immensely. In fact, most drives can be fully read after what other controllers consider a failure, reducing rebuild times by an order of magnitude compared with any normal RAID 5 rebuild. This eliminates the need for RAID 6 at all. While the possibility exists for a full drive failure, we have field data now after 3 years of ISEs in the field, and 5 years of testing in the lab, that shows our failure rate is 1000x less than that of existing arrays, whether from real failures or the ‘No Trouble Found’ failures called out by arrays. We have the data; we monitor 24 hours a day across all ISEs in the field today. This translates to replacement of ISE DataPacs at a rate of 0.1% after five years, not yearly.
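
    As a back-of-envelope illustration of why rebuilding only an affected surface is so much cheaper than a whole-drive rebuild (the drive geometry and allocation figures below are assumptions for the sake of the example, not Xiotech field data):

        DRIVE_GB = 450             # assumed drive capacity
        HEADS = 8                  # assumed heads (one recording surface each)
        ALLOCATED_FRACTION = 0.6   # assumed share of the bad surface that holds allocated data

        whole_drive_rebuild_gb = DRIVE_GB                               # classic RAID 5 rebuild scope
        one_surface_rebuild_gb = DRIVE_GB / HEADS * ALLOCATED_FRACTION  # rebuild only what the bad head touched

        print(f"{one_surface_rebuild_gb:.0f} GB vs {whole_drive_rebuild_gb} GB to rebuild")
        print(f"~{whole_drive_rebuild_gb / one_surface_rebuild_gb:.0f}x less data to reconstruct")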

    We will not disclose what we do specifically, so please feel free to check the US patent office for one of the 80 patents associated with RAGS, Managed Reliability, and the packaging of the DataPacs themselves. Xiotech ISEs do passive and active things to prevent failures, fix apparent failures, and maintain application performance while doing maintenance upon pieces of an ISE; the storage devices represent 80% of the components within the ISE.

    Xiotech does not use SMART, because that comes late in the process of drive failures and drive issues. Rather, we use early indications that anything the drive is trying to do is having trouble getting done. Based on this, we have a patented process to recover the drive after first COPYING data off that drive, versus rebuilding. In fact, we only do rebuilds on data we cannot read, which is very, very rare. Remember, half of our code is intelligent error-recovery code that knows what to do with drives that are less than perfect. Drives very, very rarely fail hard, let alone lose motors.

    The days of the 10k.6 problems at Seagate are gone, and if they come back we are ready; that issue would not even have been seen by customers with ISE technology. Correlated failures are myths in this industry, as any correlations are stopped at the factory. The issue back then was letting too many drives out with a higher percentage of issues and not having any way to get ‘telemetry’ back from the drives to know there was a problem before there was an epidemic. This does not happen with ISE, because our telemetry will alert us to any anomalies across the population of ISEs well before any epidemic occurs, allowing in-line fixes or quick solutions if the failure rate of devices within an ISE ever reaches levels beyond what even an ISE can handle, which is 10x that of other arrays.

    As for lost writes, rare as they are, the ISE takes care of them with background parity and data verification in the situations where the ISE can cover them. In cases where drivers fail to retry operations, or applications lose metadata, no array can cover those events. If a drive returns a positive completion but in actuality loses the write or corrupts the data, the background scans within the ISE will find and repair it, with either normal parity or verification, or with DIF, the ANSI standard for end-to-end checks that prevent data corruption in storage devices.
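
    For readers who want the generic mechanics, here is a toy Python sketch of what a background parity scrub does: recompute parity from the data chunks and compare it with what is stored, so a lost or corrupted write shows up as a mismatch. This is the textbook RAID idea, not a description of ISE internals:

        from functools import reduce

        def xor_parity(chunks):
            """XOR the data chunks together, byte by byte, to get the parity chunk."""
            return bytes(reduce(lambda a, b: a ^ b, group) for group in zip(*chunks))

        def stripe_is_consistent(data_chunks, stored_parity):
            """A mismatch between recomputed and stored parity flags a lost or corrupt write."""
            return xor_parity(data_chunks) == stored_parity

        # Simulate a lost write: chunk 1 was acknowledged but never reached the media.
        data = [bytes([i] * 8) for i in range(4)]
        parity = xor_parity(data)
        data[1] = bytes(8)                                   # stale zeroes instead of the new data
        print("stripe consistent?", stripe_is_consistent(data, parity))   # -> False, repair needed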

    Striping at the head level is a logical term applied to our ability to deal with heads, or larger segments of storage devices, failing and still be able to use the device, even at reduced capacity. This is all because of our patented RAGS data-layout technology, a 5th-generation data layout from a team that has been together for 30 years, from Digital, Compaq, and HP to Seagate and then Xiotech. RAGS allows any device to get smaller, whether by a group of blocks, a head, or a bank, and keep going.

    With respect to spare space or ‘hot spares’, the ISE does hold back some space that is not counted in the capacity available to customers. This space is used in the event of a piece of a device, or a whole device, ever failing. The linear mathematics of RAGS allows this all to occur very quickly, without affecting the customer application while doing so. This is what is meant by “self-healing”, not just traditional RAID rebuild. The amount of capacity held back varies, but can be as much as 20%; however, Xiotech does not charge for this capacity. We only charge for the rated capacity of a given DataPac, which ranges from 2-10TB.
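
    The arithmetic behind that holdback is straightforward; here is a tiny worked example in Python, using an assumed 20% reserve and an assumed 8 TB purchase:

        purchased_tb = 8.0         # capacity the customer pays for
        reserve_fraction = 0.20    # share of raw capacity held back for managed reliability

        shipped_tb = purchased_tb / (1 - reserve_fraction)   # raw capacity actually in the DataPacs
        reserve_tb = shipped_tb - purchased_tb

        print(f"shipped {shipped_tb:.0f} TB, of which {reserve_tb:.0f} TB is un-billed reserve")
        # -> shipped 10 TB, of which 2 TB is un-billed reserve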

    ISE is a completely different paradigm for a storage device. It is neither a disc nor an array. It is a closed-loop control system that is a “Balanced Building Block of Performance, Data Integrity, and Recovery Oriented Storage”. We use telemetry from each and every ISE in the field to close the loop, so that trends in usage, environment, and failure modes can be dealt with before they affect any customer.

    ISE is meant to be used everywhere and to be invisible to the customer, as storage that just runs; hence the 5-year warranty we offer as standard. ISE technology is meant to extend refresh cycles for storage from 3 years to well over 5 years. The theory of the ISE architecture is such that the mean lifetime of an ISE is in the range of 7-13 years. This is NOT an array, nor is it a disc. It is a new device, meant to be a new building block for datacenters.

    Thanks,
    Steve Sicola
    Xiotech Chief Technology Officer

  3. Steve, thanks for the detail. Clearly our industry needs to think less about post-mortem RAID rebuilds and more about predictive failure analysis. But – I am left wondering – why not offer another layer of protection, a la dual parity, if things like motor failures and other catastrophic drive events do still occur? Even if the chance is one in a billion, someone is going to be that unlucky soul, and I surely don’t want it to be me. In my opinion, RAID should still be one component in a holistic approach to storage system resiliency.

    Interestingly, NetApp (I am an employee) has been doing this since 1995 with a feature called Maintenance Center. I describe this in my book “Evolution of the Storage Brain”:

    “…included with Data ONTAP version 7.1 and above, Maintenance Center was designed to improve storage reliability by reducing the number of unnecessary disk replacements due to transient errors.

    Here’s how this process typically works: Once the health management system identifies the disk drive as a potential failure, the disk is no longer automatically “failed.” Instead, it’s deactivated and sent to the Maintenance Center.

    User data is then migrated from the disk onto a spare, through reconstruction or rapid RAID recovery, depending on the type of errors being received. This process occurs without user intervention.

    A defined set of errors and thresholds are used to select disks for maintenance. Once in the Maintenance Center, the disk is tested in the background, without disrupting the other operations of the system. If the transient errors can be repaired, the disk will be returned to the spares pool. If not, the disk is failed.

    In many cases, this type of testing can correct errors that would have previously caused either the drive to be failed or other system interruption.

    The monitored errors and error thresholds evolve with new disk technologies and with diagnoses gathered from drives sent to Maintenance Center.”

    Regards,

    DrDedupe

  4. Hi Larry,

    Thanks for your comment. Maintenance Center is certainly an interesting technique – do the hot spare, then try to perform some elementary actions on the drive, and if those don’t work, just fail it. Some would call this glorified hot sparing; I wouldn’t myself.

    Anyway, the ISE goes far, far beyond those elementary techniques, as Steve mentioned. The economic problem, however, is that you must buy and spin a hot spare (which does not participate in the IOPS pool) to make this work. In ISE, this is not the case; all drives, and all heads of all drives, are active all the time. If ISE does an MR action, only the head(s) involved are acted upon; it is non-optimal to fail and spare out an entire drive if only one head is having transient trouble. RAGS and other (patented) techniques are all about head-level operations.

    Again, thanks for responding to the blog. Cheers.

  5. @Rob Peglar – glorified hot sparing, intelligent hot sparing, predictive failure analysis – Maintenance Center isn’t the term I would have used, but they didn’t ask me 🙂

    Anyway ISE is certainly another step in the right direction – RAID rebuilds won’t make much sense with 10TB drives over the horizon someday.

    Kudos to Xiotech, BTW the original 1998 Magnitude is mentioned with fondness in my book. Groundbreaking product.

  6. Steve, thanks for your detailed response. It is very interesting that you try to stay so close to the disk, and fast data rebuilds (not necessarily whole-disk) will become even more important the larger disks get, as Larry mentioned, but I’m a bit sceptical about a few things:

    1. What do you do if there is a latent physical defect in your sole source of disk drives that is not recoverable with disk drive manufacturing IP? (happened with Seagate drives in the past).

    2. The spare capacity for rebuilds – this seems similar to XIV, only they just do mirroring. How did you decide on the percentage of spare space to leave for rebuilds? What if you were off?

    3. Staying with a single source manufacturer of drives (however big) has potential drawbacks. What if they have a bad batch, or what if some other vendor comes up with a dramatically better drive technology, yet all your IP is tied into Seagate stuff?

    4. I see the indisputable benefit of making the fault domains smaller (individual platters/heads) – but making claims that something like RAID 6 is not necessary because, the way you do things, failures are less likely than with normal R5 – not sure if I agree with that. R6 is an orthogonal method to what you already do to help with reliability. Complementary.

    5. The ISE hasn’t been out for 5 years, right? Is the 5-year warranty and all the reliability claims based on mathematical models? What data is there to prove the overall system is more reliable?

    6. 1 billion transactions per day – OK, how many operations per second on the disk drives at the busiest period? If it’s Oracle, 1 transaction could be multiple IOPS on the drives. A day has many many seconds 🙂

    7. If all that’s needed to make disks more reliable is smarter software, why is all this intelligence not already on the Seagate drives?

    8. Indeed, drive motors seldom fail. What happens with more frequency is that bearings and lubricants develop issues.

    9. Correlated failures can happen due to external events.

    10. Last but not least – if, for whatever reason, you have truly used up your 20% of spare space in the ISE due to whatever problems, and you’re not close to the end of your warranty, what’s the mechanism for regaining the peace-of-mind the original 20% of spare space provided?

    In general I don’t much like answers like “with our technology this is highly unlikely” – a good engineering design has answers for pretty hairy worst-case scenarios.

    The answers may not always be what the customer wants to hear, but at least they exist and show thought has been put in the design.

    Thx

    D

  7. ***Updated answers to reflect a kinder and gentler StorageTexan ****

    Okay – let me see if I can answer most of them !! : )

    1. What do you do if there is a latent physical defect in your sole source of disk drives that is not recoverable with disk drive manufacturing IP? (happened with Seagate drives in the past).

    Sicola brought this up in his response above. I guess if you are really wrapped around the axle, the short answer is we would deal with it just like NetApp did when it was hit with this: we replace them. If we had a DataPac at risk, we would replace it under its 5-year warranty.

    2. The spare capacity for rebuilds – this seems similar to XIV, only they just do mirroring. How did you decide on the percentage of spare space to leave for rebuilds? What if you were off?

    It had a lot to do with math modeling, failure rates, etc., to go with the 5-year warranty. 20% for 3.5” disk drives, based on a 5-year warranty, is all math and statistics. Our first rule of thumb is “thou shalt not remove customer capacity”. We start by not charging customers for their hot spares like others do. In other words, if the customer purchased 8TB worth of capacity, we shipped them 10TB – we do all of our “managed reliability” using the 2TB of space (in this example). At the end of the day, we give the customer a 5-year warranty for a reason. Like any other warranty, if something goes wrong we will take care of it, at our expense. 20%, 30%, 10% – at the end of the day, that’s the space we are using to do our managed reliability features, and that’s space we do not charge the customers for.

    3. Staying with a single source manufacturer of drives (however big) has potential drawbacks. What if they have a bad batch, or what if some other vendor comes up with a dramatically better drive technology, yet all your IP is tied into Seagate stuff?

    There seems to be a lot of sole sourcing in our industry. NetApp/WAFL is a great example. It’s arguably their flagship, and only, product. It is in itself a sole-source solution.

    As far as Xiotech sole sourcing disk drives from Seagate: it’s true, and we make no apologies for it. That’s a business and technology decision on our part. Xiotech has a VERY long history with Seagate, up to and including being 100% owned by them. Not to mention, the ISE was born and created at Seagate. From a drive-manufacturing point of view, Seagate is the clear winner in that regard. By the way, normally I see sole sourcing as an issue from a manufacturing point of view. In other words, Seagate knows we only source their drives, so from a competitive pricing point of view, it’s not a good negotiating position on our part 🙂 Good thing we are “Family” 🙂

    4. I see the indisputable benefit of making the fault domains smaller (individual platters/heads) – but making claims that something like RAID 6 is not necessary because, the way you do things, failures are less likely than with normal R5 – not sure if I agree with that. R6 is an orthogonal method to what you already do to help with reliability. Complementary.

    Go re-read Sicola’s comments. Also, read the Lubbers patent on RAGS. RAID 6 is defined in the patent, and we can implement it if our customers demand it. ISE has been on the market for 23 months now, and so far they have not.

    5. The ISE hasn’t been out for 5 years, right? Is the 5-year warranty and all the reliability claims based on mathematical models? What data is there to prove the overall system is more reliable?

    Mathematical modeling is one part of how we get to 5 years. This is not unlike other products in the market. No one runs their product for three years in the lab before they slap a 3-year warranty on it. At this point, so far, so good. Keep in mind, if something goes wrong, it’s on our dime, which is why we include a free 5-year warranty. Our data to date indicate that we have actually been conservative; i.e. the behavior is even better than we expected initially. If a prospect would like to dive deeper into this, we would be glad to put them under NDA and share our real-world results. I can tell you that after 23 months, we are still happy with our 5-year commitment.

    6. 1 billion transactions per day – OK, how many operations per second on the disk drives at the busiest period? If it’s Oracle, 1 transaction could be multiple IOPS on the drives. A day has many many seconds

    I would just direct you to check out our SPC-1 findings; 348 IOPS per disk speaks for itself.
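
    For what it’s worth, here is the back-of-envelope version of that arithmetic; the I/Os-per-transaction multiplier and the peak-to-average factor are pure assumptions, since the workload wasn’t specified:

        transactions_per_day = 1_000_000_000
        seconds_per_day = 24 * 60 * 60

        avg_tps = transactions_per_day / seconds_per_day   # ~11,574 transactions/s on average
        assumed_ios_per_txn = 5                            # assumption: each transaction drives several disk I/Os
        assumed_peak_factor = 2                            # assumption: busiest period vs the daily average

        required_iops = avg_tps * assumed_ios_per_txn * assumed_peak_factor
        iops_per_drive = 348                               # the published SPC-1 per-disk figure

        print(f"~{required_iops:,.0f} IOPS at peak, roughly {required_iops / iops_per_drive:.0f} drives at 348 IOPS each")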

    7. If all that’s needed to make disks more reliable is smarter software, why is all this intelligence not already on the Seagate drives?

    It’s not all software. If you go back and re-read Sicola’s comments, there are a lot of things that make the ISE what it is. Steve likes to describe the ISE as a satellite that you send up into space. If you build it right, and we think we have, then you shouldn’t have to touch it. You don’t build products like that the same way you do other things. You take the time to design it right, with input from very smart people, and it takes 5+ years to develop. That is what the ISE is all about. The team that designed this has been doing storage for 30+ years. This solution has over 80 patents in it, some of them hardware patents.

    8. Indeed, drive motors seldom fail. What happens with more frequency is that bearings and lubricants develop issues.

    It’s all ball bearings nowadays, isn’t it? 🙂 On a serious note, you are correct, but I’ll point this out again: Seagate hired the team that built this solution. At some point, you have to admit that if anyone could think of all the various reasons a drive fails, and the issues they’ve seen in the field, it would be them. Combine their experience in the drive market with a storage team that’s been doing this for 30 years, and you get the ISE.

    9. Correlated failures can happen due to external events.

    Don’t disagree. Bad things happen all the time. I’ve seen flooded datacenters on the 14th floor of a 30-floor building. You architect solutions to mitigate those sorts of events. A great man once said, “Safety is not the absence of risk.”

    10. Last but not least – if, for whatever reason, you have truly used up your 20% of spare space in the ISE due to whatever problems, and you’re not close to the end of your warranty, what’s the mechanism for regaining the peace-of-mind the original 20% of spare space provided?

    We would replace it. Free of charge, no questions asked. It’s covered under warranty. Car manufacturers have been offering 5-year warranties for a long time. How come other array vendors FREAK OUT when someone else offers this sort of maintenance contract?

    In general I don’t much like answers like “with our technology this is highly unlikely” – a good engineering design has answers for pretty hairy worst-case scenarios.

    No one says we don’t have the answers. If you really want to get your hands around all of this, I encourage you to go read the patents – they explain a LOT. For those prospects that don’t want to read all of that, just let us know and we will put you under NDA, or you can fly out to our Colorado Springs facility and ask the developers all the questions. Again, we have taken on the hairiest of the worst-case scenarios – after all, it was Seagate themselves who developed this technology originally. No one knows the scenarios better.

    The answers may not always be what the customer wants to hear, but at least they exist and show thought has been put in the design.

    Agreed. More thought has been put into the ISE design than has gone into RAID array design in nearly 25 years.

    @StorageTexan

  8. Eric Burnett

    NetApp does not sole source drives. The key remark is “latent defect”. Replacing a bad batch with potentially another bad batch is not a solution, nor is relying upon the manufacturer to ship you a new batch while customers wait…

    Question #2 was valid, which you did not answer.

    Again, NetApp does not sole source drives, nor do most storage array manufacturers that I know of.

    These are all good questions, it’s just that the answers come off as quite defensive.

    Posting a blog and asking readers to “go read the patents” instead of providing insight to some of the questions is a waste of everyone’s time, including your potential prospects…

  9. Hey Eric,
    Thanks for your comment. In hindsight, I should have answered a few of the questions in a different manner. Not to mention, all these questions are coming from our competitors, so forgive me for not wanting to be too forthcoming. I’ll rectify that with your comments, and I’ll edit my response above.

    #2 The spare capacity for rebuilds –
    this seems similar to XIV, only they just do mirroring. How did you decide on the percentage of spare space to leave for rebuilds? What if you were off?

    It had a lot to do with math modeling, failure rates, etc., to go with the 5-year warranty. 20% for 3.5” disk drives, based on a 5-year warranty, is all math and statistics. Our first rule of thumb is “thou shalt not remove customer capacity”. We start by not charging customers for their hot spares like others do. In other words, if the customer purchased 8TB worth of capacity, we shipped them 10TB – we do all of our “managed reliability” using the 2TB of space (in this example). At the end of the day, we give the customer a 5-year warranty for a reason. Like any other warranty, if something goes wrong we will take care of it, at our expense. 20%, 30%, 10% – at the end of the day, that’s the space we are using to do our managed reliability features, and that’s space we do not charge the customers for.

    Sole sourcing of drives:

    My first comment wasn’t about NetApp sole sourcing drives. I was commenting on a competitor that in itself is a sole source company. In a NetApp example, all they sell is WAFL enabled storage. So, if NetApp has a problem, a customer that has a WAFL solution is in the same boat.

    As far as Xiotech sole sourcing disk drives from Seagate: it’s true, and we make no apologies for it. That’s a business and technology decision on our part. Xiotech has a VERY long history with Seagate, up to and including being 100% owned by them. Not to mention, the ISE was born and created at Seagate. From a drive-manufacturing point of view, Seagate is the clear winner in that regard.

    By the way, normally I see sole sourcing as an issue from a manufacturing point of view. In other words, Seagate knows we only source their drives, so from a competitive pricing point of view, it’s not a good negotiating position on our part 🙂 Good thing we are “Family” 🙂

    As far as asking people to read patents.

    It all boils down to intellectual property around how, and why we do what we do. There is only so much I can share in a public forum, not under NDA. If a customer really needed to understand, to the Nth degree how something works, we would put them under NDA and answer their questions.

    From a blog point of view, I feel pretty strongly that I’ve shared a lot of information in this space. I think I’ve answered a lot more questions than others thought, or expected me to. Directing competitors to read our patents is my way of saying, “I’ve given you all the information I’m comfortable giving you in a public space.”

    Hopefully this answers your questions. Thanks again for commenting.

    @StorageTexan

  10. FYI – I edited my response for Dimitris above. It might have been a tad too snarky. It’s now more reflective of a kinder, gentler StorageTexan 🙂

    Thanks
    @StorageTexan

  11. Hi Tommy,

    Am I a competitor? Sure, but most of all I’m a technologist and I am intrigued by any cool new tech. After all, if ISEs are that good, NetApp could use them instead of our current shelves, and your revenue would increase exponentially (we sold over 1 Exabyte of storage last year).

    I’m just trying to understand how stuff works.

    If I can’t (and it seems many other storage people that aren’t competitors can’t either), then potential customers might also have a hard time understanding.

    People don’t like buying what they don’t understand.

    I’m not saying the stuff isn’t good, I’m saying there’s very little info on why it’s good.

    As Eric mentioned above, I highly doubt customers will try and decipher patent documents. They need something reasonably easy to follow. Because, for most of them, storage is just another thing they have to deal with. For us, it’s our job, so it’s OK to pore over highly technical docs.

    Also, saying it’s been OK for 23 months now means nothing; it’s like saying “my house hasn’t caught on fire for 23 months now”. That’s great, but what’s the process for when the house does catch on fire? That’s why NetApp has standardized on RAID-DP plus our other data protection techniques – it’s more reliable than anything we’ve seen so far.

    Pillar has made similar claims in the past about how their RAID 5 is better than RAID 6. When someone starts claiming that 1+1=5 without backing it up with any math, I feel they’re just doing a marketing spiel. NetApp has all the RAID-DP papers (hairy math and all) at netapp.com/library. No need to own a box to read the docs within, no visits to the patent office needed. Our openness is liked by customers.

    If you need to replace a whole ISE, what’s the process? Is it like VMware, where you evacuate the contents of the box to others with free space, then replace the thing? And is that non-disruptive?

    And, to accomplish that, don’t you need enough space as what the ISE held initially?

    And is that taken from the “spare” spare or is it taken from the normal data storage area?

    These are the kinds of questions I have when I’m presented with a “black box” tech.

    I don’t think the questions are that sensitive to need NDA 🙂

    I totally get the benefits of single-sourcing. BTW, calling WAFL single source is not right, since WAFL is just the way of writing to the disk. When it comes to components, we multi-source. We do buy a ton from Seagate but if another vendor comes up with a much better drive tech for similar money, you bet we’re switching to that in a heartbeat after it passes all our tests.

    Regarding not charging for spare capacity: At the end of the day, all the customer cares about is how much their 8TB usable + 5 years of hardware warranty cost them, not whether you shipped them 10TB or not. Whether we do it with dual parity drives + WAFL reserve or you do it with 20% of spare space is immaterial to the end user.

    If we start getting into storage efficiency, you’ll get cranky again and have to edit your response after you calm down 🙂 (BTW I wasn’t offended).

    Regarding the SPC-1 benchmarks: Why don’t you publish one with many ISEs so we all see how it scales? The SPC-1 winners so far are 3Par and IBM (IBM with over 2,000 drives).

    If you can maintain over 340 IOPS per drive no matter what, then you’d need just 1000 drives to beat IBM, which would be like an ostrich feather in your cap 🙂

    A lot of SVC (and V-Series) customers would then be demanding your gear to put behind those intelligent controllers. Which would cause your gear to finally be certified 🙂

    Anyway, I don’t disagree you have put a lot of thought in the building blocks of storage, but a storage system is more than that.

    Maybe you should post a new blog with how you can help protect at the application layer.

    D

  12. Dimitris,
    Thanks again for the comments. I agree, our messaging could be better. The great news is Brian Reagan (our new Sr. VP of Marketing) is fixing that as quickly as possible. In the meantime, I’m trying to answer as many questions as I can on this forum.

    Let’s answer a few more of your questions:

    If you need to replace a whole ISE, what’s the process? Is it like VMware, where you evacuate the contents of the box to others with free space, then replace the thing? And is that non-disruptive?

    First and foremost, this would be covered under the 5-year hardware warranty. Keep in mind, everything about the ISE is redundant – dual controllers, dual power supplies, dual everything – and it has a passive mid-plane, so there isn’t a lot of reason why we would need to swap out a whole solution. BUT if we had to, for some “end of the world” event, then it’s pretty easy. Oh, and we would ship them a net-new ISE to do this swap-out. Again, it’s all covered in our 5-year FREE hardware warranty.

    The process really depends on a couple of things. If the ISE is under the management of a storage controller like our Emprise 7000 or a third-party virtualizer, then we could do an online, non-disruptive “Copy/Swap”, which is something we’ve been doing for 10+ years. It’s very similar to the process of swapping your Xyratex drive bay and drives in the NetApp world (or in our Magnitude 3D line). Just right-click on the volumes that are striped across that particular ISE and start the evacuation of the data. The application has no clue we are doing this on the backend. That’s the beauty of storage virtualization. If it’s not behind a storage controller, then it’s typically behind some sort of logical volume manager – think Oracle ASM or Symantec, etc. Again, this would be the same process any other storage vendor would go through in the event of having to swap out a whole drive bay (drives and all). Relatively easy, and online.

    And, to accomplish that, don’t you need enough space as what the ISE held initially?

    Yes, if we HAD to swap out a whole ISE (DataPacs and all), we would need equivalent space. Again, this would fall under our 5-year hardware warranty (free), and we would drop in the necessary parts to do it.

    And is that (ISE Swap space) taken from the “spare” spare or is it taken from the normal data storage area?

    No – Spare space is just for drive remediation, if it’s a whole ISE then we would need equivalent space.

    These are the kinds of questions I have when I’m presented with a “black box” tech

    I don’t disagree, neither would our sales and partner teams. These are questions we get asked a lot. Hopefully I’ve answered them – at least to the best of my ability !!

    Regarding not charging for spare capacity: At the end of the day, all the customer cares about is how much their 8TB usable + 5 years of hardware warranty cost them, not whether you shipped them 10TB or not. Whether we do it with dual parity drives + WAFL reserve or you do it with 20% of spare space is immaterial to the end user

    Couldn’t agree more; we are simply pointing out that customers don’t have to pay for their hot spares with Xiotech. They get the IOPS power of 10 drives and we only charge them for 8. That’s the key difference – in ISE, all the drives are used for I/O, all the time. A hot spare in a traditional bunch-of-disks design can’t be used for I/O. It makes a big difference when handling workloads. And I couldn’t agree more: usable capacity + 5-year warranty.

    Regarding the SPC-1 benchmarks: Why don’t you publish one with many ISEs so we all see how it scales? The SPC-1 winners so far are 3Par and IBM (IBM with over 2,000 drives

    I believe we will be working on these. What would be cool is for you guys to work with us on getting the V-Series and ISE on the SPC-1 benchmark!! I’m pretty sure we would smoke the rest of them. But the fact remains that ISEs scale linearly, since each one has all the components necessary by itself; adding more ISEs to the test is literally linear. No ISE depends on the presence or absence of any other ISE. In terms of the metric customers really care about – $/IOPS – ISE is the winner.

    A lot of SVC (and V-Series) customers would then be demanding your gear to put behind those intelligent controllers. Which would cause your gear to finally be certified

    IBM-SVC was one of the first 3rd party storage solutions to qualify the ISE. To date, NetApp seems to be the only one holding out; hopefully, not for long, since there is indeed demand for this solution from both end-user customers and existing NetApp resellers, and that demand grows every day. Hitachi USP-V, IBM-SVC, Datacore, FalconStor, and Symantec have all qualified our solution.

    Thanks again for the questions.
    @StorageTexan

  13. As a customer who owns a bit of each company’s technology (NetApp and Xiotech), I’ll throw out a few comments:

    ISE scales linearly by nature, I don’t need an SPC benchmark for that. I have tested the capabilities of a single ISE. I have no arguments that the next ISE will perform the same way. If I stripe across them (and I have) I expect the performance of two ISE…and so on and so forth. Performance bottlenecks will not come from the ISE, they will come from the mechanism I choose to aggregate them with. It’s a block of specific performance and reliability characteristics you can build on. An SPC benchmark with multiple ISE would have to include some sort of technology to aggregate the devices together. The test would focus more on the scalability of the aggregator than the ISE, however it might be interesting to see a proven solution with two of them.

    As far as RAID 6/RAID-DP in a 10-drive DataPac with 20% spare space: why would I need double parity? I guess there is a scenario in which I lose two disks at virtually the same time and each of those disks is completely unusable and unfixable within the box. There is also a risk of a backplane/passive circuit melting. Completely failed drives are extremely rare in an ISE, and Mr. Sicola wasn’t exaggerating when he said that these are 1000x more reliable than existing arrays. **Publishing the statistical data could put the discussion to rest.**

    Why is Netapp last to the party with V-Series support? I am 100% certain there is demand and it has been expressed to Netapp.

    Single sourcing can be a concern, and it’s much bigger than Xiotech. A bad batch of drives or a defective design could be a serious problem. Many vendors I work with choose a particular vendor for a particular speed and size of disk drive, not two vendors for the same speed and size. The same can be said for flash drives, where most array vendors are working with a single flash manufacturer. It’s a problem bigger than Xiotech, and it appears to be a trend. I can’t imagine that NetApp dual sources everything as mentioned. Are there two sources of NAND memory for the PAM modules? Can the Intel CPUs be replaced with AMD in the event a defect is found in Intel’s CPU design? We live in a consolidated world, and this is going to continue to be more and more common. As a customer, I mitigate this and other risks by not single-sourcing my purchases. It is a best practice from a technology and procurement standpoint to utilize multiple vendors for each tier of disk. Following that best practice reduces my risk and gives me a better negotiating position at purchase time.

    Overall, as a customer, ISE represents a refreshing change in the storage industry. Capex is low. Opex is the lowest in the industry. Performance is predictable. Reliability is best of breed. And I can move to another vendor without any significant headache.

  14. I will re-iterate (I got another email today on this since I’ve been asking internally):

    While there have been requests to support Xiotech with NetApp V-Series, the volume was nowhere near enough to warrant the resources devoted to the task.

    It’s a matter of priorities.

    @ zaxstor: Regarding single vs multi-sourcing:

    NetApp has made the shift between CPU vendors multiple times, and, indeed, uses both Intel and AMD at the moment. No, you can’t replace an Intel CPU with an AMD one, but you CAN replace a Seagate drive with something else since they are commodity items that are designed to be interchangeable and to fit a standard.

    NetApp buys drives from various vendors and has to right-size them to fit the lowest common size denominator. This way, if Seagate gets hit with some massive issue again, we can shift very very easily to another supplier and just swap drives.

    Even the manufacturing of the units is done by multiple different plants in different countries, because you just never know what might happen (look at the recent mess with the volcanic ash).

    These are all things that a large company needs to worry about – we get orders that are tens of PB each (single customer), and satisfying those large orders is not easy unless you have parallelization everywhere.

    Aside from the low-level data protection (which Xiotech, admittedly, deals with differently than the rest), there are so many other facets to achieving practical, useful disk arrays:

    – application awareness
    – efficient snapshots
    – thin provisioning
    – multi-protocol
    – replication
    – deduplication

    We all seem so focused in this discussion on the nuts and bolts of it. But what about the bigger picture? The ISE is not the entire array.

    I’m sure that, when it comes to replacing drives (or not), Xiotech is a good fit for places where you may not want to let anyone have access to your datacenter, or you don’t have personnel available. Effectively, you want to leave the boxes alone to fend for themselves for the 5 years.

    However, that doesn’t describe most customers.

    If, cost-wise, it’s similar to get dual-parity and the 5-year warranty with some other vendor, and get a ton of other benefits on top, then the value prop gets diluted.

    I also firmly believe that the reliability math changes as drive capacities change. This is one of the reasons people like dual parity. Vs “normal” RAID5, it’s thousands of times more reliable. And, out of 16 drives you lose 2 to the parity, so it’s not that inefficient.
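
    The textbook version of that math, for anyone curious: a simplified mean-time-to-data-loss comparison that ignores unrecoverable read errors and uses assumed MTBF and rebuild figures, so it only illustrates the shape of the argument, not any vendor’s actual numbers:

        drive_mtbf_h = 1_000_000   # assumed drive MTBF in hours
        rebuild_h = 24             # assumed rebuild window in hours
        n = 16                     # drives in the group

        # Classic approximations: data loss requires a 2nd (or 3rd) failure during a rebuild window.
        mttdl_raid5 = drive_mtbf_h ** 2 / (n * (n - 1) * rebuild_h)
        mttdl_raid6 = drive_mtbf_h ** 3 / (n * (n - 1) * (n - 2) * rebuild_h ** 2)

        print(f"single parity: ~{mttdl_raid5:.2e} h, dual parity: ~{mttdl_raid6:.2e} h")
        print(f"dual parity is ~{mttdl_raid6 / mttdl_raid5:,.0f}x better in this simplified model")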

    But I do like your remanufacturing process, we do some of that as Larry indicated, but the Xiotech process seems a lot deeper. I’m sure that cuts down on quite a few potential replacements (about half the drives typically end up being OK).

    I’d write more but I gotta go sell something to a bank now…

    D

  15. Dimitris,

    It seems like the discussion moved from discussing ISE to selling Netapp. Good Job!

  16. @Zaxstor – thank you for your comments. I very much appreciate it. It’s always great to get a customer’s opinion. I agree with publishing the statistical data – I’ll make sure to bring that up with our Marketing Group.

    Dimitris – as always, your insight and questions/comments are always welcomed and appreciated.

    If anyone else has ISE specific questions or comments, feel free to post them.

    Thanks
    Tommyt

  17. @zaxstor: look here and you’ll see Tommy did the exact same switch at my site 🙂

    http://bit.ly/dmJrcA

    I do have one ISE question: is the RAID within the ISE or done at the external controller? (I think the former based on the RAGS discussion but I could be misunderstanding this).

  18. Well – in all fairness to me on your blog – my comments were 100% geared towards getting NetApp to qualify your V-Series Unified Storage Platform in front of our ISE. In fact, I believe I even said the V-Series is a pretty cool solution !! And I didn’t even talk about our competing NAS solution !! In regards to derailing your conversation – all I was trying to do was enhance your V-Series Unified Storage messaging LOL !!

    • I do have one ISE question: is the RAID within the ISE or done at the external controller? (I think the former based on the RAGS discussion but I could be misunderstanding this).

    Great question – yes RAID is handled internally to each ISE.

  19. You pitched ISE behind the V-Series masterfully, I admit 🙂

    Still, the conversation had nothing to do with unified storage, which was the point of the post.

    But you did get me more interested in your stuff 🙂

    I think the ISE sounds like a reliable building block, in need of a really good controller on top to provide all the cool stuff.

    If we have enough demand we’ll certify it behind NetApp V-Series.

    D

  20. To D re: If we have enough demand we’ll certify it behind NetApp V-Series.

    If you certify it, they will come (with apologies to the movie Field of Dreams)