Operational Savings of vSAN: Building a Business Case for vSAN
   
Building a Business Case for vSAN
Building a business case for a change of storage platform strategy starts with selecting the correct storage infrastructure for the tenants' applications. Understanding which technology works with those applications, meets the consumers' availability and performance requirements, and satisfies the data storage needs that tenants have today and will have going forward is key to the decision-making process.
The challenge for service providers is to see beyond the short term, and beyond what is new and trending in the storage market, and to arrive at a strategy that delivers a resilient, high-performance storage infrastructure for their tenants over the long term, at a cost acceptable to both provider and consumer. This requires an objective approach: a clear assessment of the components that make up a storage infrastructure's cost, and a clear-headed evaluation of the alternative ways in which storage can be delivered to consumers, providing the functionality the market requires while also optimizing both CapEx and OpEx.
Several questions should therefore be considered as part of building the business case. These might include:
Which technology enhances application performance?
Which technology provides greater data availability?
Which technology can be deployed, configured, and managed quickly and effectively using available, on-staff skills?
Which technology provides greater, if not optimal, storage capacity efficiency?
Which technology enables storage flexibility; that is, the ability to add capacity or performance in the future without affecting the applications?
To these questions, service provider architects and decision makers also add another: which technology fits with available budget?
Ultimately, hardware costs and software licenses, plus leasing costs and a number of other factors, all contribute to the acquisition expense of storage infrastructure, which service providers typically seek to spread over the increasingly lengthy lifecycle of the hardware and software license agreements. Notably, most storage hardware ships with a three-year warranty and maintenance package, and extending that agreement when it expires often costs as much as purchasing a new array.
In addition, as outlined previously, hardware and software acquisition typically accounts for only a small share of the estimated annual cost of owning storage infrastructure. According to leading analysts, the cost to acquire (CapEx) is overshadowed by the cost to own, operate, and manage (OpEx), as shown in the following figure.
Figure 6. Typical Storage Cost Factors
Source: Multiple storage cost of ownership studies from Gartner, Forrester, and Clipper Group.
 
The figure above illustrates how the storage budget is distributed, while the following figure looks at OpEx over the expected lifecycle of the solution. As it shows, OpEx amounts to approximately triple the initial CapEx investment over the four-year period, so ignoring OpEx when evaluating a storage platform is a costly mistake.
Figure 7. OpEx − Over a Four-Year Period
Source: Estimated using Gartner CP Profiles Database and IT Key Metrics Data 2014 Report.
 
As these figures make clear, a storage system's total cost of ownership includes far more than the obvious initial outlays on hardware, software, and disk capacity. Taking a holistic view of the factors affecting the service provider's budget over the storage infrastructure's lifecycle enables a clear assessment of the TCO, and ultimately a more informed purchasing decision. These factors include, but are not limited to, the following:
Buying, implementing, running, cooling, and expanding the storage system
Managing, integrating, and testing the storage system
Confirming storage system reliability and availability
Downtime incurred through routine maintenance or unexpected equipment failure
Vendor lock-in associated with storage systems that use proprietary components
Software licensing at the time of purchase and upon each capacity increase
As shown in the preceding two figures, Gartner and other analysts suggest that management and operational costs are the real drivers behind the total cost of ownership of storage infrastructure. As shown in the following figure, storage infrastructure TCO is calculated by combining CapEx and OpEx and dividing the result by the product's life expectancy.
From a simplified standpoint, the total annual cost of ownership of storage can be calculated with the following formula.
Figure 8. Simplified Annual Total Cost of Ownership
 
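The simplified formula above can be sketched in code. The figures below are hypothetical placeholders, chosen so that total OpEx is roughly triple the initial CapEx, in line with the four-year estimates cited earlier; real inputs would come from the provider's own procurement and operational data.

```python
def annual_tco(capex, annual_opex, lifecycle_years):
    """Simplified annual TCO: (CapEx + total lifecycle OpEx) / life expectancy."""
    total_opex = annual_opex * lifecycle_years
    return (capex + total_opex) / lifecycle_years

# Hypothetical example: a $200,000 array with $150,000/year in operational
# costs over a four-year lifecycle. Total OpEx ($600,000) is roughly triple
# the initial CapEx, consistent with the analyst estimates cited above.
cost = annual_tco(capex=200_000, annual_opex=150_000, lifecycle_years=4)
print(f"Annual TCO: ${cost:,.0f}")  # Annual TCO: $200,000
```

Note that in this simple model, the OpEx-to-CapEx ratio determines how little the hardware purchase price matters: at a 3:1 ratio, CapEx is only a quarter of the annual cost of ownership.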
In many cases, it is the mixed, heterogeneous nature of storage infrastructure that contributes to, and increases, the difficulty of managing a storage platform in a unified way. For many service providers, however, these challenges alone do not justify the cost of replacing existing systems with a homogeneous suite of hardware. Heterogeneous infrastructure has emerged in most cases as service providers have tried to offer choice within their data centers, or as a result of continually adopting a new, best-of-breed technology each year to attract forward-thinking consumers. The situation has been compounded by storage hardware vendors that have failed to diversify their product offerings enough to meet the varying storage needs of different workload types, data, or storage delivery protocols. Even vendors that do offer a full range of storage hardware solutions typically do not provide common management tools across those solutions, especially where some products arrived through the vendor's acquisitions.
Two other factors to take into account are capacity allocation efficiency and capacity utilization efficiency, both of which can have a significant impact on the bottom line of storage ownership. Allocation efficiency measures how much of the purchased capacity is actually allocated to workloads, rather than left stranded. Utilization efficiency refers to the placement of the right data on the right storage, based on factors such as frequency of use, consumer performance expectations, or service-level-driven availability requirements.
In the past, with a traditional shared storage model, storage administrators tended to deploy capacity sub-optimally, often purchasing Tier 1 storage to host customer data for applications that did not require its expensive, high-performance attributes.
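As a rough illustration of how such misplacement hits the bottom line, the sketch below estimates the overspend when data that belongs on a cheaper tier is hosted on Tier 1. The per-GB prices and tier names are assumptions invented for this example, not figures from any vendor or from this document.

```python
# Assumed per-GB acquisition prices, for illustration only; actual tier
# pricing varies widely by vendor and contract.
TIER_COST_PER_GB = {"tier1": 2.50, "tier2": 0.50}

def placement_overspend(data_gb, actual_tier, appropriate_tier):
    """Extra spend from hosting data on a pricier tier than it requires."""
    delta = TIER_COST_PER_GB[actual_tier] - TIER_COST_PER_GB[appropriate_tier]
    return data_gb * delta

# Example: 50 TB of infrequently accessed data placed on Tier 1
# instead of the Tier 2 capacity it actually needs.
extra = placement_overspend(50_000, "tier1", "tier2")
print(f"Overspend: ${extra:,.0f}")  # Overspend: $100,000
```

Multiplied across many tenants and refresh cycles, this kind of utilization inefficiency is one of the hidden OpEx drivers the preceding figures describe.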
However, the movement to flatten storage infrastructure, seen in applications such as Hadoop and in many hyper-converged, software-based storage models, including vSAN, is eliminating many of the benefits of tiered storage altogether, along with the negative effects of poor capacity allocation and utilization efficiency. Tiered storage, when employed correctly, was meant to place data on performance-, capacity-, or cost-appropriate storage and thereby reduce the overall cost of storage ownership. A flattened storage model simplifies this approach in both acquisition cost and operational management. So do these benefits outweigh the value of tiered storage, and consign the tiered model to the history books?