The most expensive part of an enterprise storage array, which should not be confused with a SAN, is the software. Most of the components in the major vendors' platforms now come out of the same factories, using drives from Seagate and Western Digital, and Intel chipsets where possible. The reason is simple: economies of scale. Vendors can buy parts at a lower price point and use their intellectual property to build the fastest-performing box of disks possible.
Software-defined storage has a lot of buzz around it, and for good reason. The promise of policy-driven storage, controlled through software rather than hardware, is almost irresistible, like a good ice cream on a hot day. SNIA has a fantastic definition of SDS (https://www.snia.org/sds), and there are plenty of articles on the pitfalls and tribulations of SDS, along with the amazing outcomes it has delivered. As with most things, your mileage may vary.
For my home lab, though, I'm not too interested in SDS at the moment; what I'd rather install is virtual versions of existing storage arrays. To this end I'm putting the following in place:
- HPE StoreVirtual
- Data Domain
- HPE StoreOnce
- HPE Nimble
That’s quite a lot of storage, so I’ll be installing each array and then turning it off when not needed, to save on compute resources. I’m sure I’ll add more, and as each one is built up I’ll share a quick post on how I’ve put it all together.
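If the lab runs on vSphere, the power-off-when-idle routine can be scripted rather than done by hand in the client. Below is a minimal sketch using VMware's `govc` CLI; the VM names are assumptions standing in for however the appliances are named in your inventory, and `DRY_RUN=1` (the default here) only prints the commands instead of running them.

```shell
#!/bin/sh
# Sketch: toggle lab storage appliances on/off to free host resources.
# Assumes VMware's govc CLI and GOVC_URL/credentials in the environment.
# VM names below are hypothetical placeholders for the four appliances.
DRY_RUN="${DRY_RUN:-1}"

# Print the command in dry-run mode; otherwise execute it.
run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

lab_stop()  { run govc vm.power -off "$1"; }
lab_start() { run govc vm.power -on "$1"; }

# Power off the appliances that are not needed today:
for vm in StoreVirtual-VSA DataDomain-VE StoreOnce-VSA Nimble-VSA; do
  lab_stop "$vm"
done
```

Set `DRY_RUN=0` once the VM names match your inventory; the same pattern works with `lab_start` for spinning an appliance back up before a test.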