Yu Chen, Pratik Gupta, Xiang Han, Lucerna Huayanay Velasco, Alex Kim, and Felipe Caro
In 2009, Facebook designed one of the world’s most energy-efficient data centers, one that could handle unprecedented scale at the lowest possible cost. In 2011, Facebook shared the design with the public and launched the Open Compute Project (OCP) to create a movement in the hardware space that would bring about the same kind of creativity and collaboration seen in open-source software. Hyperscale service providers such as Google, Amazon, and Facebook design and build their own data centers, unlike most companies, which outsource their data center requirements. Through the Open Compute Project, all of these companies now have access to the world’s latest data center designs. The movement has also affected the supplier side of the industry, since suppliers gain access to new designs that help them manufacture more efficient products.
Since 2011, OCP has received support from service providers such as financial services firms and from manufacturers like HP, Dell, and Cisco, and it has earned broad awareness in the tech community.
Exhibit 1: Timeline of the creation of the Open Compute Project (OCP)
Beyond influencing the different stakeholders in the data center supply chain, OCP has also added value to that chain through infomediation, disintermediation, and aggregation. We will explore each in turn.
Infomediation
Through the OCP platform, top tech companies share the R&D behind their designs and accelerate data center improvement worldwide. The number of shared designs lets companies find compatibility between different products and speeds up technological advancement. Furthermore, OCP gives both large and small companies access to top suppliers.
Disintermediation
Because OCP provides companies with detailed designs, OEMs become less relevant in the supply chain. This helps smaller companies avoid lock-in with an OEM and, through user-driven supply and modular design, reduces the bullwhip effect. OCP shifts the push/pull boundary of the data server supply chain toward pull (make-to-order).
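The claim that a pull, make-to-order boundary dampens the bullwhip effect can be illustrated with a toy simulation. The demand parameters, forecast window, and order-up-to policy below are illustrative assumptions, not details from the case:

```python
import random
import statistics

random.seed(7)
PERIODS, WINDOW, LEAD = 400, 5, 4

# Hypothetical i.i.d. end-customer demand: mean 100 servers, std dev 10.
demand = [max(0.0, random.gauss(100, 10)) for _ in range(PERIODS)]

def upstream_orders(pull):
    """Orders the upstream supplier sees each period.

    pull -> make-to-order: orders mirror end demand one-for-one.
    push -> a retailer forecasts with a moving average and adjusts an
            order-up-to level, which amplifies variability upstream
            (the bullwhip effect)."""
    if pull:
        return demand[WINDOW + 1:]
    orders = []
    for t in range(WINDOW + 1, PERIODS):
        f_now = sum(demand[t - WINDOW:t]) / WINDOW
        f_prev = sum(demand[t - WINDOW - 1:t - 1]) / WINDOW
        # Order-up-to policy: replace observed demand plus a
        # lead-time-scaled correction for the change in the forecast.
        orders.append(max(0.0, demand[t] + LEAD * (f_now - f_prev)))
    return orders

pull_sd = statistics.stdev(upstream_orders(True))
push_sd = statistics.stdev(upstream_orders(False))
print(f"order std dev: pull={pull_sd:.1f}  push={push_sd:.1f}")
```

Under the pull regime the supplier's order stream has roughly the variability of end demand itself, while the forecast-driven push regime inflates it noticeably, which is the amplification OCP's made-to-order shift is meant to avoid.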
Aggregation
Once an OCP solution is widely adopted, both demand and supply for data servers can be aggregated, for two reasons: (1) pooling data server orders from different companies is feasible if they all use a single OCP design; (2) the modular, standardized, and inter-compatible design of data centers postpones customization, which further supports aggregating orders.
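The benefit behind point (1) is the classic risk-pooling effect: safety buffers grow with the square root of the number of independent demand streams, not linearly. A minimal sketch with hypothetical numbers (the demand mean, standard deviation, and service level are illustrative, not from the case):

```python
import math

# Hypothetical: n buyers with independent monthly server demand,
# each with mean mu and std dev sigma, buffering at z-score z.
n, mu, sigma, z = 9, 100, 30, 1.65

separate = n * z * sigma           # each buyer holds its own safety stock
pooled = z * sigma * math.sqrt(n)  # one aggregated order stream on a shared design

print(f"separate buffers: {separate:.1f} units")
print(f"pooled buffer:    {pooled:.1f} units")
```

With nine independent buyers on a single OCP design, the pooled buffer is sqrt(9) = 3 times smaller than the sum of the separate ones, which is the scale economy that makes order aggregation attractive.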
Exhibit 2: OCP’s value added to data server supply chain
Although OCP brings many benefits to the supply chain, it still faces major obstacles to mass adoption. Five years after OCP's founding, its actual footprint in data centers remains relatively small, and the server market is still heavily dominated by the major OEM players.
Exhibit 3: Data Center market share and main competitors
[Source: Gartner]
The challenges that OCP faces with each of the supply chain’s stakeholders are the following:
- For OCP itself, fragmented product specifications and a slow certification process are the two major issues. Most OCP members share their own customized designs, swelling the number of designs on the platform; this makes it hard for companies, especially small and medium-sized businesses, to adapt the designs to their own needs. Meanwhile, before the designs can be deployed in OpenStack and proprietary clouds, OCP hardware must be certified by major software companies such as VMware, and this certification process is very slow.
- For end users, especially small and medium-sized businesses, there is an obvious gap in the value chain. End users cannot realistically adopt an OCP solution unless they can handle component procurement, manufacturing, testing, and maintenance themselves.
- There are still too few systems integrators (SIs) working on OCP solutions. On the one hand, traditional SIs/OEMs lack the incentive because OCP hardware would cannibalize their existing product portfolios. On the other hand, new SIs that grew out of contract manufacturers excel at manufacturing but lack the capabilities to provide integrated solutions and services such as maintenance.
- ODMs and EMS providers are not used to serving fragmented small and medium-sized businesses with diverse requirements and low volumes. They normally prioritize hyperscale companies, so lead times for smaller customers may be longer, making the goals of low cost and high efficiency hard to achieve.
So what can Facebook and the sponsors of the Open Compute Project do to lower the barriers to adoption? First, identifying existing hardware product lines where standardization and compatibility intersect is key if end customers are to realize the benefits of modular design. It is therefore crucial that the primary sponsors of OCP work closely with server hardware partners and systems integrators such as Hewlett Packard Enterprise (HPE) and Quanta Cloud Technology to ensure that product designs are as standardized and modular as possible. HPE's existing line of blade servers, built on a standardized infrastructure and management platform with a streamlined, modular hardware interface, is a prime example of a product where the value propositions of OCP and systems integrators intersect.
In addition, ODMs and EMS providers must pool demand from different orders to achieve the economies of scale needed to incentivize large-scale participation in OCP. This can only happen, of course, if three practices are put in place by different parties within the value chain:
- A legitimate and efficient certification process by major cloud software vendors for OCP hardware: this is crucial for end-customer adoption, especially as it adds value in terms of product support, lifetime, and ease of use.
- A differentiated value proposition for ODM/EMS’ customer segments, and corresponding supply chains. For large enterprise-level customers, an efficient supply chain must be created to leverage scale. For smaller-sized customers, responsiveness is key to drive end customer adoption and retention.
- A flexible supply chain capable of mass customization for systems integrators as they move up the value chain to generate revenue through software and services. With hardware increasingly commoditized (as it already is with PCs, printers, and microprocessor chips), systems integrators must preempt future cannibalization of their existing product portfolios by developing the ability to add differentiated value for end customers through software and management services. Indeed, prominent systems integrators are already headed in this direction: HPE's hardware offerings are almost always bundled with virtualization, networking, unified API, and management services to maximize the dimensions of product differentiation.
All of the pieces in the value chain are there to make a successful mainstream transition to the 21st century IT supply chain. The challenge, ultimately, is aligning the incentives of a diverse collection of suppliers and buyers, big and small.