Some believe the cloud is something you bolt on top of a static operating system. At Microsoft, we fundamentally disagree with this view. The issues of cloud computing are all classic operating system issues. In a recent blog post, the President of Microsoft's Server and Tools Business, Satya Nadella, stated:
“At the most basic level, any operating system has two 'jobs': it needs to manage the underlying hardware, and it needs to provide a platform for applications. The fundamental role of an operating system has not changed, but the scale at which servers are deployed and the type of applications now available or in development are changing massively.”
Microsoft’s vision of a CloudOS is to execute these missions in the context of a data center instead of a single server. The Datacenter Abstraction Layer (DAL) is our work with the industry to provide a common management abstraction for all the resources of a data center, making cloud computing simple to adopt and deploy. The DAL is not specific to one operating system; it benefits UNIX cloud-computing efforts every bit as much as Windows.
The DAL uses the existing DMTF standards-based management stack to manage all the resources of a data center: physical servers, storage devices, networking devices, hypervisors, operating systems, application frameworks, services, and applications. The DAL is different from other cloud efforts in that it uses and builds on proven management stack technologies instead of inventing new ones.
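To make this concrete, the same DMTF CIM classes can be queried through the standard Windows PowerShell CIM cmdlets (available since PowerShell 3.0). A minimal sketch; the class names below come from the DMTF/WMI schema, and the output depends on the machine:

```powershell
# Enumerate CIM classes published in the standard namespace
Get-CimClass -Namespace root/cimv2 -ClassName CIM_* |
    Select-Object -First 5

# Query a DMTF-standard class; any CIM-conformant implementation,
# Windows or otherwise, can serve the same class
Get-CimInstance -ClassName CIM_OperatingSystem |
    Select-Object Caption, Version, LastBootUpTime
```

Because the class definitions are standardized, the consuming tools do not need to know anything vendor-specific about the implementation behind them.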
The figure below shows the relationship between devices, the DAL, and the applications and services that use the DAL:
Microsoft believes in the power of cloud computing. Cloud computing drives productivity, which drives business value, which drives profits. Though this sounds simple, the path to the cloud has been much more complicated. Data centers were already complex, and moving to cloud computing meant more of everything, moving faster, with more riding on things “just working.” Virtualization increased the need for automation, but proprietary APIs and GUIs work against automation.
Setting up single machines used to be complex, but consider how Plug and Play transformed that experience. Plug and Play has produced a world where you can go into any computer store, purchase a device without much thought, plug it into a computer, and the device “just works.” The system discovers the device, finds the appropriate software to manage it, and makes it available for use—it’s simple, safe, and easy. The mission of the DAL is to deliver a data center Plug and Play experience. In order to work effectively at cloud scales, we needed to dramatically reduce the complexity of data centers. The DAL accomplishes that by creating an ecosystem where everything can be managed from Windows or Linux by using a single standards-based management stack.
Vendors like the DAL because it does a great job answering the key vendor question:
What allows me to maximize my profits by delivering the best customer value at minimal costs while differentiating from competitors?
The data center Plug and Play experience of products with the Microsoft Windows logo minimizes the risk and effort to purchase, deploy, and operate those products. As a result, IT budget can move from planning, evaluation, deployment, debugging, and systems integration to purchasing new products. We believe in the virtuous cycle of IT and that this increase in IT time to value will drive business productivity, which will drive profits and increase IT budgets.
Unlike other approaches, DAL standards are optimized for interoperability and differentiation. The DMTF CIM model makes it easy for vendors to extend the standard models to provide proprietary functions while still specifying a required level of interoperability. A well-defined set of conventions allows vendors to implement extensions so that Windows PowerShell can generate high-level, task-oriented cmdlets. This means that every single Windows client or server computer (starting with Windows 7 and Windows Server 2008) will be able to take advantage of a vendor’s differentiating functions without the vendor having to write or deliver any Windows code. When a new version of Windows is released, the devices will be manageable from day one without additional vendor effort required.
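The cmdlet-generation conventions mentioned above are based on PowerShell's cmdlets-over-objects (CDXML) mechanism, in which a small XML file maps a CIM class to task-oriented cmdlets. The sketch below uses a hypothetical vendor namespace and class (`root/contoso/Contoso_Disk`) purely for illustration:

```xml
<!-- Hypothetical contoso.cdxml: maps a vendor CIM class to cmdlets -->
<PowerShellMetadata xmlns="http://schemas.microsoft.com/cmdlets-over-objects/2009/11">
  <Class ClassName="root/contoso/Contoso_Disk">
    <Version>1.0.0.0</Version>
    <DefaultNoun>ContosoDisk</DefaultNoun>
    <InstanceCmdlets>
      <!-- Generates a Get-ContosoDisk cmdlet from the CIM class -->
      <GetCmdletParameters />
    </InstanceCmdlets>
  </Class>
</PowerShellMetadata>
```

Running `Import-Module .\contoso.cdxml` on any machine with PowerShell 3.0 or later would surface a `Get-ContosoDisk` cmdlet, without the vendor shipping any Windows binaries.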
The CIM standard used by the DAL is sophisticated and flexible enough to use as a management model for all devices. Although these DMTF standards have been around for years, they have been a challenge to implement, and existing implementations have been too large for mobile and embedded devices. To address these challenges, Microsoft has built a highly portable, small-footprint, high-performance CIM Object Manager called Open Management Infrastructure (OMI) that is designed specifically to implement the DMTF standards. Microsoft worked with The Open Group to make the source code for OMI available to everyone under an Apache 2 license. OMI is written to be easy to implement on Linux and UNIX systems.
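In practice, a Windows administrator could manage a Linux host running OMI through the same CIM cmdlets used for Windows, over the WS-Management protocol. A sketch under assumptions: the host name, port, credentials, and the `root/omi`/`OMI_Identify` namespace and class are illustrative and depend on how OMI is installed and configured:

```powershell
# Hypothetical Linux host "linux01" running OMI with WS-Man over HTTPS
$options = New-CimSessionOption -UseSsl -SkipCACheck -SkipCNCheck
$cred    = Get-Credential   # prompts for the Linux account to use

$session = New-CimSession -ComputerName linux01 -Port 5986 `
    -Authentication Basic -Credential $cred -SessionOption $options

# Query a class served by the OMI CIM Object Manager
Get-CimInstance -CimSession $session -Namespace root/omi -ClassName OMI_Identify
```

The point of the example is that nothing on the client side is Linux-specific: the same session, protocol, and cmdlets work against any CIM server that implements the standards.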
Partners that adopt OMI get the following:
Though the DAL will take multiple releases to provide a Plug and Play experience for every data center device, the first phase of the DAL has shipped with Windows 8, Windows Server 2012, and System Center 2012. These products use a DMTF standards-based management stack to provide multi-resource management of physical servers, Storage Area Networks (SANs), hypervisors, and Windows and Linux operating systems.
In the first wave of DAL deliverables, standards-based management became the primary mechanism for managing Windows, with DCOM retained for backward compatibility. This wave delivers the largest increase in WMI provider coverage to date, and Windows can now be fully managed from any machine using the WSMAN protocol. Windows delivers simplified APIs that dramatically reduce the skill and effort required to write a management provider or management agent. Windows PowerShell uses these new APIs to let IT pros script automated solutions using high-level, task-oriented abstractions. The new SMI-S service in Windows allows SANs to integrate with Windows merely by implementing the specified standards: the service discovers these devices on the network, enumerates their resources, and makes them available through the same WMI, Windows PowerShell, GUI tools, and System Center products used to manage Windows storage resources. The System Center products use CIM and the WSMAN protocol to manage remote physical servers and Linux operating systems.
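The WSMAN-first stack with DCOM fallback described above might be exercised from PowerShell as follows; the machine names are placeholders, and the DCOM session assumes a down-level machine that does not yet speak WS-Man:

```powershell
# Manage a remote Windows server over WS-Man (the new default transport)
$ws = New-CimSession -ComputerName server01
Get-CimInstance -CimSession $ws -ClassName Win32_Service -Filter "State='Running'"

# DCOM remains available for backward compatibility with older machines
$dcomOptions = New-CimSessionOption -Protocol Dcom
$legacy = New-CimSession -ComputerName server02 -SessionOption $dcomOptions
Get-CimInstance -CimSession $legacy -ClassName Win32_OperatingSystem
```

Either way, the scripts consume the same CIM classes; only the transport underneath the session differs.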
Today’s System Center products manage networking devices using plug-ins developed and delivered by networking vendors. Microsoft is working with these networking vendors to bring networking devices into the DAL by defining and implementing CIM schemas that will allow Windows to manage those devices directly, using standards-based management to deliver a true Plug and Play experience. Customers will be able to check whether a networking device carries a Windows logo; if it does, they can purchase and deploy it knowing it will “just work.”