
Programmable chips turn Azure into a supercomputer

Tags: USA

Published by nherting
Posted on 2016-09-27


Microsoft is embarking on a major upgrade of its Azure systems. New hardware the company is installing in its 34 datacenters around the world still contains the mix of processors, RAM, storage, and networking hardware that you'll find in any cloud system, but to these Microsoft is adding something new: field-programmable gate arrays (FPGAs), highly configurable chips that can be rewired using software to provide hardware-accelerated implementations of software algorithms.

The company first investigated using FPGAs to accelerate the Bing search engine. In "Project Catapult," Microsoft added off-the-shelf FPGAs on PCIe cards from Altera (now owned by Intel) to some Bing servers and programmed those FPGAs to perform parts of the Bing ranking algorithm in hardware. The result was a 40-fold speed-up compared to a software implementation running on a regular CPU.
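
To make the division of labor concrete, here is a minimal, purely illustrative Python sketch of the offload pattern: the host keeps a software scoring path and hands batches of feature vectors to an accelerator wrapper. Every name in it (score_in_software, FpgaRanker) is hypothetical, not Microsoft's code; in the real pipeline the ranking stages run as dedicated logic on the FPGA itself.

    # Illustrative sketch of the offload pattern only; all names are hypothetical.
    def score_in_software(features):
        # Stand-in ranking function: a weighted sum over a document's feature
        # vector, computed entirely on the CPU.
        weights = [0.4, 0.3, 0.2, 0.1]
        return sum(w * f for w, f in zip(weights, features))

    class FpgaRanker:
        """Hypothetical host-side wrapper for a PCIe-attached FPGA pipeline."""
        def score_batch(self, feature_batch):
            # In the real system the feature vectors would be shipped over PCIe
            # and scored by dedicated logic; here we reuse the software model
            # so the sketch runs anywhere.
            return [score_in_software(f) for f in feature_batch]

    docs = [[1.0, 0.5, 0.2, 0.9], [0.3, 0.8, 0.1, 0.4]]
    print(FpgaRanker().score_batch(docs))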

A common next step after achieving success with an FPGA is to then create an application specific integrated circuit (ASIC) to make a dedicated, hardcoded equivalent to the FPGA. This is what Microsoft did with the Holographic Processing Unit in its HoloLens headset, for example, because the ASIC has greatly reduced power consumption and size. But the Bing team stuck with FPGAs because their algorithms change dozens of times a year. An ASIC would take many months to produce, meaning that by the time it arrived, it would already be obsolete.

With this pilot program a success, the company then looked at ways to use FPGAs more widely, across not just Bing but Azure and Office 365 as well. Azure's acute problem wasn't search engine algorithms; Azure CTO Mark Russinovich told us it was network traffic.

Azure runs a vast number of virtual machines on a substantial number of physical servers, using a close relative of Microsoft's Hyper-V hypervisor to manage the virtualization of the physical hardware. Virtual machines have one or more virtual network adaptors through which they send and receive network traffic. The hypervisor handles the physical network adaptors that are actually connected to Azure's networking infrastructure. Moving traffic to and from the virtual adaptors, while simultaneously applying rules to handle load-balancing, traffic routing, and so on, incurs a CPU cost.

This cost is negligible at low data rates, but when pushing multiple gigabits of traffic per second to a VM, Microsoft was finding that the CPU burden was considerable. Entire processor cores had to be dedicated to this network workload, meaning that they couldn't be used to run VMs.
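
A rough back-of-envelope calculation shows why. The numbers below are assumptions chosen only to illustrate the shape of the problem; the packet size, per-packet rule cost, and clock speed are not Microsoft's figures.

    # Illustrative estimate of the CPU cost of software virtual switching.
    line_rate_gbps = 25          # traffic pushed to a single VM (assumed)
    avg_packet_bytes = 500       # assumed average packet size
    cycles_per_packet = 1000     # assumed cost of applying vSwitch rules
    core_clock_hz = 2.4e9        # a typical server core

    packets_per_sec = line_rate_gbps * 1e9 / 8 / avg_packet_bytes
    cores_consumed = packets_per_sec * cycles_per_packet / core_clock_hz
    print(f"{packets_per_sec:,.0f} packets/s -> ~{cores_consumed:.1f} cores")
    # ~6,250,000 packets/s -> ~2.6 cores, before the VM does any real work.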

The Bing team didn't have this network use case, so the way they used FPGAs wasn't immediately suitable. The solution that Microsoft came up with to handle both Bing and Azure was to add a second connection to the FPGAs. In addition to their PCIe connection to each hardware server, the FPGA boards were also plumbed directly into the Azure network, enabling them to send and receive network traffic without needing to pass it through the host system's network interface.

That PCIe interface is shared with the virtual machines directly, giving the virtual machines the power to send and receive network traffic without having to bounce it through the host first. The result? Azure virtual machines can push 25 gigabits per second of network traffic, with a latency of 100 microseconds. That's a tenfold latency improvement and can be done without demanding any host CPU resources.

Technically, something similar could be achieved with a suitable network card; higher-end network cards can similarly be shared and directly assigned to virtual machines, allowing the host to be bypassed. But this process is usually subject to limitations (for example, a given card may only be assignable to 4 VMs simultaneously), and it doesn't offer the same flexibility. When programming the FPGAs for Azure's networking, Microsoft can build the load balancing and other rules directly into the FPGA. With a shared network card, these rules would still have to be handled on the host processor within a device driver.

This flexibility means that for Azure, just as with Bing before it, FPGAs are a better solution than ASICs.

This solution of network-connected FPGAs works as well for Azure as it does for Bing, and Microsoft is now rolling it out to its datacenters. Bing can use the FPGAs for workloads such as ranking, feature extraction from photos, and machine learning. Azure can use them for accelerated networking.

Currently, the FPGA networking is in preview; once it's widespread enough across Microsoft's datacenters, it will become standard, with the long-term goal being to use FPGA networking for every Azure virtual machine.

Networking is the first FPGA-accelerated workload in Azure, but it's not going to be the only one. In principle, Microsoft could offer a menu of FPGA-accelerated algorithms (pattern matching, machine learning, and certain kinds of large-scale number crunching would all be good candidates) that virtual machine users could opt into, and, longer term, custom programming of the FPGAs could be an option.
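
As a thought experiment only, such a menu might surface to tenants as something like the following. This is not an actual Azure API; every name here is invented for illustration.

    # Hypothetical opt-in menu of FPGA-accelerated functions; not an Azure API.
    from dataclasses import dataclass

    OFFERED = {"pattern_matching", "ml_inference", "dense_linear_algebra"}

    @dataclass
    class VmRequest:
        size: str
        accelerators: frozenset

    def validate(req: VmRequest) -> VmRequest:
        # Reject any accelerator the platform's FPGA images don't offer.
        unknown = req.accelerators - OFFERED
        if unknown:
            raise ValueError(f"unsupported accelerators: {sorted(unknown)}")
        return req

    req = validate(VmRequest("Standard_D4", frozenset({"ml_inference"})))
    print(req)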

Microsoft gave a demonstration at its Ignite conference this week of just what this power could be used for. Distinguished engineer Doug Burger, who had led the Project Catapult work, demonstrated a machine translation of all 3 billion words of English Wikipedia on thousands of FPGAs simultaneously, crunching through the entire set of data in just a tenth of a second. The total processing power of all those FPGAs together was estimated at about 1 exa-operation (one billion billion, or 10^18, operations) per second, working on an artificial intelligence-style workload.
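
Taking the demo's own numbers at face value, a quick sanity check shows what that rate implies per word. Only the two inputs come from the article; the per-word figure is simply their ratio, not a measured value.

    # Arithmetic on the Ignite demo figures quoted above.
    words = 3e9                  # English Wikipedia word count, as stated
    seconds = 0.1                # time to translate it all, as stated
    total_ops_per_sec = 1e18     # ~1 exa-operation per second, as estimated

    words_per_sec = words / seconds
    ops_per_word = total_ops_per_sec / words_per_sec
    print(f"{words_per_sec:.1e} words/s, ~{ops_per_word:.1e} ops per word")
    # -> 3.0e+10 words/s and roughly 3e+07 operations per translated word.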

While this is not a primary target for the company, the scale of cloud compute power, combined with the increasingly high-performance interconnects that FPGAs and other technology enable, means that cloud-scale systems are in time likely to become competitive with, and eventually surpass, the supercomputers used for high-performance computing workloads. With this kind of processing power on tap, it won't be too long before Azure becomes a collection of supercomputers available to anyone to rent by the hour.


Sources:
http://arstechnica.com/information-technology/2016/09/programmable-chips-turning-azure-into-a-supercomputing-powerhouse/



