Infrastructure

Project Autobuild


LinkedIn’s data center infrastructure has grown at a massive scale. Starting with one server setup, one cabinet at a time, we’re now powering on hundreds of servers at a time and seeing them come online automatically. You could call it “Zero Touch Provisioning” for systems. This evolution presented us with many challenges, however. This post will describe how our Systems Platform team solved several of the problems they faced during this transformation by using a combination of innovations from the open source community and the team’s own technical talent.

IPMI

At LinkedIn, we use the IPMI (Intelligent Platform Management Interface) functionality of the BMC (Baseboard Management Controller), also known as the CIMC (which is how we will refer to it in this post), on our physical servers. We use IPMI to:

  • Toggle BIOS parameters on servers;
  • Manage power/boot-order before OS installation;
  • Monitor System Event Logs (SEL) during the lifetime of the servers;
  • Remotely connect servers over console.

Setting up the CIMC is therefore an important first step toward installing the operating system on a server remotely.
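
For context, all of these IPMI operations can be driven from the widely used ipmitool CLI. The minimal sketch below wraps ipmitool in Python for illustration only; the host name, credentials, and wrapper function are hypothetical and are not LinkedIn's actual tooling.

```python
import subprocess

def ipmi(cimc_host, user, password, *args):
    """Run one ipmitool command against a server's CIMC over the network (IPMI-over-LAN)."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", cimc_host,
           "-U", user, "-P", password, *args]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Hypothetical CIMC address and credentials, used only for illustration.
cimc, user, pw = "cimc01.example.com", "admin", "secret"

ipmi(cimc, user, pw, "chassis", "bootdev", "pxe")    # request PXE on the next boot
ipmi(cimc, user, pw, "chassis", "power", "cycle")    # power-cycle the server
print(ipmi(cimc, user, pw, "sel", "list"))           # dump the System Event Log (SEL)
# "ipmitool ... sol activate" would attach an interactive serial-over-LAN console.
```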

Manually building server infrastructure at LinkedIn

At LinkedIn, we were building and imaging servers by following a set process after servers were racked, cabled, and powered on by data center technicians:

  1. Read the MAC address registered on the switch port by the CIMC interface.
  2. Get the server name and IP address from the internal inventory.
  3. Generate the DHCP configuration with the MAC, hostname, and IP, and bring the CIMC online (see the sketch after this list).
  4. Configure the CIMC with a static IP and push the other requisite configuration remotely.
  5. Read the physical MAC address of the server from the CIMC and generate the kickstart/jumpstart configuration.
  6. Set the boot order to PXE and reboot using IPMI, after which the OS is installed on the server.
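
As an illustration of Step 3, here is a minimal sketch that renders an ISC dhcpd host entry from the MAC address, hostname, and IP gathered in Steps 1-2. The function name, hostname, addresses, and file path are hypothetical and stand in for our internal tooling.

```python
def dhcp_host_entry(hostname, mac, ip):
    """Render an ISC dhcpd 'host' block that pins the CIMC's MAC to a fixed IP."""
    return (
        f"host {hostname} {{\n"
        f"    hardware ethernet {mac};\n"
        f"    fixed-address {ip};\n"
        f"}}\n"
    )

# Hypothetical values read from the switch port and the internal inventory.
entry = dhcp_host_entry("app1234.example.com", "aa:bb:cc:dd:ee:ff", "10.0.12.34")
with open("/etc/dhcp/hosts.d/app1234.conf", "w") as fh:
    fh.write(entry)
# A dhcpd reload would follow, so the CIMC picks up its address on its next DHCP request.
```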

We had developed individual scripts to work through Steps 1-6, but the whole process was still manual, and we wanted to automate the entire server build and imaging process described above. Initially, to automate kickstart generation and OS imaging, we used Cobbler, an open source Linux installation server suite aimed at automating Linux operating system imaging.

With Cobbler, we could automate only Steps 5-6. We found it very challenging to extend Cobbler’s functionality to cover Steps 1-4, and in particular the CIMC configuration in Steps 3-4. In addition, although the Solaris operating system was initially considered out of scope, a few infrastructure services still needed it, so we had to account for it in our process.

At this stage, there were two challenges left for us to solve: automated CIMC configuration, and Solaris OS provisioning from a Linux platform (Cobbler can only run on Linux). We came to the realization that Cobbler could not meet all of our requirements and that we had to look beyond it for our solution.

Autobuild with Foreman, or Zero Touch Provisioning for systems

The effort needed to make Cobbler work for our situation was already proving challenging, so we began to consider an alternate plan.

The Systems Platform team was already working on a server imaging automation process called “Autobuild” and had tried Cobbler as one of its components. During Autobuild’s development, we stumbled upon Foreman, an open source lifecycle management tool for physical and virtual servers. Foreman works in a clustered environment with a central Foreman-Server and multiple Foreman-Proxy servers, and it has built-in support for Solaris OS provisioning from a Linux platform. It can also boot new servers into a minimal operating system image (named DiscoveryOS); once booted, DiscoveryOS can run custom scripts to perform whatever tasks those scripts define.
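
As a rough illustration of working with Foreman programmatically, the sketch below lists discovered hosts through Foreman's REST API. The URL and credentials are placeholders, and the /api/v2/discovered_hosts endpoint assumes the Foreman discovery plugin is installed; adjust for your Foreman version.

```python
import requests  # third-party HTTP client

FOREMAN_URL = "https://foreman.example.com"   # placeholder Foreman-Server
AUTH = ("api_user", "api_password")           # placeholder credentials

# List servers that have booted DiscoveryOS and checked in with the Foreman-Server.
resp = requests.get(f"{FOREMAN_URL}/api/v2/discovered_hosts", auth=AUTH)
resp.raise_for_status()

for host in resp.json().get("results", []):
    print(host.get("name"), host.get("ip"), host.get("mac"))
```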

The final architecture of Autobuild, with Foreman as a component, is described further below.

Autobuild flow chart

Here’s a brief outline of the new server discovery, CIMC configuration, and automatic OS installation process:      

  1. Power on Discovery - Servers boot up with a random IP address from a per-cabinet pool defined in the DHCP range on the Foreman-Proxy server.
  2. Discovery - DiscoveryOS is loaded on the server via the Foreman-Proxy, and custom scripts inside DiscoveryOS configure the CIMC’s network, so we no longer need to configure the CIMC manually. The discovered server is also reported to the Foreman-Server, which triggers custom hooks on the Foreman-Server to add that server to Autobuild for complete CIMC configuration (SPConfig).
  3. A hook running on the Foreman-Server adds the discovered server to Autobuild through a Web API entrypoint (discover), so that complete CIMC configuration can be performed on it (a sketch of such a hook appears after this list).
  4. The router component consumes the request for the discovered server. The router’s sole job is to determine the location of the primary component and forward the request to the primary’s database through an API layer.
  5. The workflow daemon picks up the request from the primary’s database and pushes it to the spd_request queue.
  6. The spd daemon picks up the CIMC configuration request from the spd_request queue and executes the CIMC configuration (SPConfig) on the discovered server. The results of the CIMC configuration are sent back to the primary’s database via the spd_request queue and the workflow daemon.
  7. At this stage, the server is ready to be imaged with an operating system.
  8. Once the server is named in the internal inventory, the dnsd daemon (the daemon process dealing with DNS) adds the server’s DNS name to the internal DNS inventory.
  9. After DNS is complete, a provision call is sent to Foreman by the osd daemon.
  10. The ipmid daemon (the daemon process dealing with remote IPMI commands) sets PXE boot and reboots the server so that the OS gets installed.
  11. After the OS is installed, CFEngine calls a token URL in Foreman to mark the server build as complete.
  12. On the Foreman-Server, another hook is called that communicates the build completion for the server back to Autobuild.
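
The hook in Steps 2-3 can be pictured as a small script on the Foreman-Server that forwards the discovered host to Autobuild's discover entrypoint. The sketch below assumes a foreman_hooks-style invocation (event and host name as arguments, the host's JSON on standard input); the Autobuild URL and payload fields are placeholders rather than the real internal API.

```python
#!/usr/bin/env python3
"""Hypothetical Foreman hook: forward a newly discovered host to Autobuild."""
import json
import sys

import requests  # third-party HTTP client

AUTOBUILD_DISCOVER_URL = "https://autobuild.example.com/api/discover"  # placeholder

def main():
    # Assumed foreman_hooks convention: event and object name as arguments,
    # the host object serialized as JSON on standard input.
    event, host_name = sys.argv[1], sys.argv[2]
    host = json.load(sys.stdin)

    payload = {
        "name": host_name,
        "mac": host.get("mac"),
        "ip": host.get("ip"),
        "event": event,
    }
    # Hand the discovered server to Autobuild so the full CIMC configuration
    # (SPConfig) can be scheduled for it.
    requests.post(AUTOBUILD_DISCOVER_URL, json=payload, timeout=30).raise_for_status()

if __name__ == "__main__":
    main()
```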

At the end of the day, what do we get with this system?

Implementing this new system at LinkedIn has given us a complex but automated way to bring up server infrastructure, and it has had several benefits. First and foremost, it has saved us innumerable hours of manual work: we can handle large growth more quickly, and time that would otherwise be spent configuring and building servers is freed up. Because Foreman has out-of-the-box support for Solaris provisioning, we can now maintain a single provisioning platform for both Linux and Solaris. Finally, it has given us the ability to build dense compute servers without needing to maintain separate cabling for the CIMC.

Ability to build infrastructure with dense compute servers

LinkedIn decided to use dense compute servers for its newest data center in Hillsboro, Oregon. Dense compute has given us the ability to double the number of servers per cabinet and to take full advantage of the power available in the data center. But as the number of servers per cabinet grew, cabling for the cabinet also became more complex. We had previously used separate cabling for the server and for the CIMC interface, which meant connecting two network cables and two power cables per server. That would have meant 320 cables running down each cabinet of 80 servers.

LinkedIn’s infrastructure team proposed using the CIMC interface's shared network mode, which utilizes NC-SI (Network Controller Sideband Interface). NC-SI opens an internal physical link to the CIMC hardware via the server's network interface, which allowed us to use only one network cable per server. But in order to configure the CIMC, we now needed access to the server via its main network interface, and without a configured CIMC, we could not connect to the server. Foreman's DiscoveryOS solved this problem: when DiscoveryOS is booted on the server, custom scripts running inside it configure the CIMC network and make the CIMC reachable remotely through the server's main network interface.
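
To make this concrete, the sketch below shows the kind of script that could run inside DiscoveryOS to configure the CIMC's network in-band with ipmitool. The LAN channel number and addresses are placeholders; the real values depend on the hardware and the per-cabinet IP plan.

```python
import subprocess

def ipmi_local(*args):
    """Run ipmitool against the local CIMC over the in-band system interface."""
    subprocess.run(["ipmitool", *args], check=True)

CHANNEL = "1"  # placeholder LAN channel; varies by hardware

ipmi_local("lan", "set", CHANNEL, "ipsrc", "static")
ipmi_local("lan", "set", CHANNEL, "ipaddr", "10.0.12.134")
ipmi_local("lan", "set", CHANNEL, "netmask", "255.255.255.0")
ipmi_local("lan", "set", CHANNEL, "defgw", "ipaddr", "10.0.12.1")
# After this, the CIMC answers on the shared (NC-SI) link and can be managed
# remotely without a dedicated management cable.
```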

Acknowledgements

The members of the Project Autobuild team are Phirince Philip, Tyler Longwell, Nitin Sonawane, William Orr, Pradeep Sanders, and Milind Talekar.