The mass storage world is constantly changing. Every month brings new technologies, new companies, and new products. One thing is certain: there is always more data to store. And it is a virtuous cycle: as storage gets bigger and faster, new applications emerge to take advantage of those capabilities. Big Data in the form of Hadoop clusters and Virtual Desktop Infrastructure are just two modern applications with an insatiable appetite for additional storage capacity and speed.
When organizations decide how to acquire storage, a number of factors come into play. Fundamentally, the applications that depend on the stored data must be able to run nearly all the time. Ideally, the data would be accessible 100% of the time, but a system with that much uptime is very expensive to build. So every organization settles on a design in which the data lives on a fast, dependable storage array, is replicated to another location in some fashion, and the vendors supplying the storage hardware and software offer accessible, effective support in case something goes wrong.
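To make the uptime trade-off concrete, a quick back-of-the-envelope sketch (purely illustrative) shows how much yearly downtime each availability target actually allows:

```python
# Rough yearly downtime implied by an availability target.
# Ignores leap years; this is a back-of-the-envelope figure only.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability: float) -> float:
    """Expected minutes of downtime per year at a given availability fraction."""
    return MINUTES_PER_YEAR * (1.0 - availability)

for label, a in [("99%", 0.99), ("99.9%", 0.999),
                 ("99.99%", 0.9999), ("99.999%", 0.99999)]:
    print(f"{label:>8} uptime -> {downtime_minutes_per_year(a):9.1f} min/year")
```

Each additional "nine" cuts downtime tenfold, which is why the last few nines are where the cost of a storage design climbs steeply.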
There are trade-offs among these decision factors, and that is what creates market opportunities for the various storage vendors. It is a rapidly shifting landscape in which every element keeps improving:
* Storage media get denser - magnetic disk, solid-state disk, and upcoming technologies.
* Data transfer speeds advance in steps, each at its own rate - SAS, Fibre Channel, Ethernet.
* Processors improve - Intel Architecture gains more cores and faster clocks.
* Memory - DRAM gets larger, denser, and faster.
* Software - new features such as deduplication and thin provisioning add efficiency.
* Vendor stability - support capacity, mergers, acquisitions.
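Of those software features, deduplication is the easiest to illustrate. A minimal fixed-block sketch (the function names here are my own for illustration, not any vendor's API): each block is hashed, identical blocks are stored once, and files become lists of references.

```python
import hashlib

def dedupe_blocks(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks, storing each unique block only once.

    Returns (store, recipe): store maps a SHA-256 digest to the block's
    bytes; recipe is the ordered list of digests needed to rebuild data.
    """
    store = {}
    recipe = []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # identical blocks stored once
        recipe.append(digest)
    return store, recipe

def rebuild(store, recipe) -> bytes:
    """Reassemble the original data from the block store and recipe."""
    return b"".join(store[d] for d in recipe)

# Highly repetitive data deduplicates well: this buffer is 8 blocks long,
# but only 2 unique blocks need to be kept on disk.
data = (b"A" * 4096 + b"B" * 4096) * 4
store, recipe = dedupe_blocks(data)
print(len(recipe), len(store))  # 8 blocks referenced, 2 stored
```

Real arrays do this inline or post-process, often with variable-size chunking, but the space savings come from exactly this store-once, reference-many idea.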
When I worked in the semiconductor field, I had the opportunity to work for one of Intel's manufacturer reps in a region where a large share of the world's storage was designed and assembled. I watched the transition from i960 processors to Intel Architecture, and saw that most storage appliances were built around the same fundamental technology as servers.
The fascinating thing about X86 is that some processors and chipsets are slotted into the Intel Embedded roadmap, meaning they will be manufactured and supported for many years. This is very different from the Datacenter roadmap, with its continuous cycle of the latest processors and chipsets produced on the newest lithography in Intel's newest fabs. Most people familiar with servers know the Datacenter products, but fewer are aware of the Embedded processors and chipsets.
For companies that build dedicated storage appliances, which endure multi-year design, testing, and manufacturing cycles, it makes a lot of sense to design around the Intel X86 Embedded roadmap, because the same parts can be manufactured and supported for years at a time. That simplifies component sparing, as well as support, software maintenance, and bug fixes. Even so, every few years storage appliance companies must undertake a major design revision, because the underlying processors and chipsets have to move to the next Embedded generation. That is why forklift upgrades recur every few years in the storage world, and it will stay that way as long as hardware and software are bundled into a dedicated appliance running proprietary operating systems and software.
Servers used to be sold as a bundle of hardware and software too - remember mainframes? I meet customers who are still running AS/400 systems because, decades ago, a custom application was built on that highly dependable platform and they use it to this day. They would rather be running a custom or standard application on a version of Linux, virtualized by VMware, on X86 hardware attached to shared storage, just like all their other applications. But because unbundling from a proprietary appliance is hard, they never reap the performance and reliability advantages of modern computing.
Will the same thing happen in the storage world? Look at the hardware all storage devices are built around and you find a great deal in common: standard interfaces, processors, chipsets, hard drives, chassis. What differs is the software, the support, and the business behind them. Linux and Apache offered an alternative to big-company software and support, and delivered a more reliable, better-performing product than either Solaris or Microsoft could come up with. Will the same thing happen in storage?