The short answer is that for now, no. Many organizations still use traditional spinning media in their data solutions and tend to be slow to change their infrastructure when it seems costly upfront.
The long answer is that when we look just a couple years down the line, it is clear that many enterprise data solutions will require the use of solid-state drives to accommodate the vast quantities of data.
In looking at this problem, it seems backwards that SSDs would be the solution when large data sets need to be handled. After all, HDDs maintain a much lower cost per GB of storage than SSDs.
While that’s true for now, SSDs have been steadily dropping in price for years and show no signs of slowing. More importantly, the business world has undergone a paradigm shift in how these data sets are handled. The problem no longer stems from the cost of the storage itself, but rather from the cost of accessing the data.
To put it simply, the money saved and generated by having fast, efficient access to specific data sets now outweighs the cost of investing in these more costly storage solutions. Data infrastructure executives must now consider the total cost of ownership (TCO) when looking toward the future.
As data grows at an immense rate year over year, the sea of information becomes harder and harder to navigate. It isn’t realistic to comb through hundreds of terabytes of information when you need to find something specific. Fortunately, virtualization has alleviated some of the issues that come with large data sets. By using software to treat the drives as a single machine or a small number of machines, access no longer requires knowing any technical information about the data, nor does it require knowing where the data is physically located.
Even though virtualization removes some barriers to access, it doesn’t necessarily hold up in an HDD-dominant infrastructure. One problem that arises from using HDDs in a virtual setup is that when multiple virtual machines randomly access multiple drives, you can run into the I/O blender effect.
Hard drives have a relatively high latency in the physical time it takes for a read/write arm to access the correct portion of the platter. Because the virtualized data is stored randomly across the group of hard drives wherever space is available, this latency can cause a bottleneck effect when the amount of data increases and the number of virtual machines accessing different parts of different hard drives goes up. Though these latencies seem small when taken individually, they can easily add up to a very real data infrastructure problem.
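A back-of-the-envelope sketch shows how quickly those individually small delays compound in the worst case, when every random request queues behind the others on the same spindle. All latency figures here are hypothetical round numbers for illustration, not measurements of any particular device:

```python
# Hypothetical per-request latencies (seconds); real figures vary by device.
HDD_SEEK_S = 0.010   # ~10 ms average seek + rotational delay for a random read
SSD_READ_S = 0.0001  # ~0.1 ms for a random read on a typical SATA SSD

def total_latency(num_vms, requests_per_vm, per_request_latency):
    """Serialized worst case: every random request waits behind all the
    others contending for the same drive, so latencies simply add up."""
    return num_vms * requests_per_vm * per_request_latency

# 20 VMs each issuing 1,000 random reads against one overloaded drive:
hdd = total_latency(20, 1000, HDD_SEEK_S)
ssd = total_latency(20, 1000, SSD_READ_S)
print(f"HDD worst case: {hdd:.1f} s")   # 200.0 s
print(f"SSD worst case: {ssd:.1f} s")   # 2.0 s
```

Real queues overlap requests rather than fully serializing them, but the hundredfold gap in per-request latency is why the blender effect hits spinning media so much harder.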
As mentioned before, the time it takes to access specific data, not the cost of actually storing it, is now the central issue with large data sets. This is where SSDs really shine, especially for workloads dominated by random reads and writes. The lower latency and greater IOPS of solid-state drives can keep up with the tough demands of enterprise data storage, hopefully eliminating the blender effect just mentioned.
When your organization has many users who require immediate access to data spread across a vast virtual pool, you can’t afford harmful delays. These delays raise the cost of doing business and undoubtedly degrade your quality of service, not to mention that nobody likes to wait when they’re trying to get a piece of information.
One other obvious advantage of integrating SSDs into a large data solution is their much lower heat output. They consume less power than HDDs, and since they have no moving parts, far less heat needs to be dissipated by complex and expensive data center cooling systems. This leads to a lower TCO when you consider just how much money goes into effectively running a data center or any other large enterprise data solution.
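As a rough illustration of how power and cooling feed into TCO, the sketch below compares annual electricity costs for two hypothetical drive arrays. The wattages, cooling overhead, and electricity rate are all assumed figures chosen for the example, not vendor data:

```python
HOURS_PER_YEAR = 24 * 365

def annual_energy_cost(drive_watts, num_drives, cooling_overhead=0.5,
                       usd_per_kwh=0.12):
    """Yearly electricity cost for an array of drives, assuming cooling
    consumes an extra fraction of the drive power (a PUE-style overhead)."""
    total_watts = drive_watts * num_drives * (1 + cooling_overhead)
    return total_watts / 1000 * HOURS_PER_YEAR * usd_per_kwh

# 500-drive array: ~7 W per active HDD vs ~3 W per active SSD (assumed).
hdd_cost = annual_energy_cost(7, 500)
ssd_cost = annual_energy_cost(3, 500)
print(f"HDD array: ${hdd_cost:,.0f}/yr")
print(f"SSD array: ${ssd_cost:,.0f}/yr")
```

The absolute numbers matter less than the shape of the calculation: every watt a drive draws is paid for twice, once at the plug and again at the cooling system, year after year.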
At this point it should be noted that there are numerous ways to handle enterprise data situations and different organizations will require different solutions. It depends greatly on whether their workloads are more write-intensive, read-intensive, or mixed-use. It should go without saying that an enterprise data solution should be tailored to fit the needs of the organization, but you would be surprised how readily people will purchase the latest and greatest technology without considering what they really need.
One issue that keeps many enterprise data consumers from switching to solid-state drives is the difficulty of recovering data from them or selectively erasing it. This is one of the chief concerns of the Data Recovery/Erase Special Interest Group that Gillware is part of.
To be brief, it is much more difficult and costly to recover data from a solid-state drive than from a hard drive. If important data is lost to any number of unforeseeable events, organizations can expect either heavy recovery costs or complete data loss.
It is likewise more difficult to selectively erase data on solid-state drives than on hard drives. This is an especially important topic when it comes to confidential information, such as government, military, or financial records. If any confidential information stored on an SSD is compromised, the only way to prove through an audit trail that it is gone is to fully erase and decommission the device, which is neither practical nor cost-effective.
The Data Recovery/Erase SIG has been working alongside SSD consumers and manufacturers to address some of these issues. There are clearly real performance and economic advantages to switching to SSDs for enterprise storage, but until these issues are resolved, some industries simply can’t risk making the switch.
There are plenty of other ways that SSDs are affecting HDD usage in enterprise data solutions, but these few should adequately capture where the enterprise data industry is heading over the next few years, as well as the concerns that come along with it. As SSD costs drop and big data volumes grow, we will likely see widespread adoption of solid-state drives in the enterprise storage market. This prediction assumes the DR/E SIG accomplishes its goals over the coming years, something we are very confident it will.
One final thought: even if your organization does not require a sophisticated, virtualized solution to handle its data needs, you should always consider whether your data is adequately handled by the system you currently use. It doesn’t take a large organization to create an inefficient, inadequate data solution. When it comes to your data, being proactive is always better than being reactive.