2017 U of Texas, Austin IAP Cloud Workshop - Agenda, Abstracts and Bios

Platinum Sponsors: Cavium, Samsung, and Western Digital

Gates-Dell Complex, Room GDC 6.302 - Friday, November 10, 2017

8:00-8:30AM Badge Pick-up – Coffee/Tea and Breakfast Food/Snacks
8:25-8:30AM  Welcome – Prof. Derek Chiou and Prof. Simon Peter, UT
8:30-9:00AM Prof. Derek Chiou, UT and Microsoft, "Soft Logic in a Data Center"
9:00-9:20AM Kumaran Siva, Intel, "FPGA-Ready Cloud Workloads"
9:20-9:40AM Prof. Vijay Chidambaram, UT, "PebblesDB: Building Key-Value Stores using Fragmented Log-Structured Merge Trees"
9:40-10:00AM  Dr. Cyril Guyot, Western Digital Research, "Low-latency software architecture for distributed shared memory"
10:00-10:30AM Prof. Mattan Erez, UT, "Quality of Service Techniques for Mixed-Latency Workloads"
10:30-11:00AM Lightning Round of Student Posters
11:00-12:30PM Lunch and Cloud Poster Viewing
12:30-1:00PM Dr. Dan Stanzione, TACC, "HPC vs. The Cloud -- In the End There Can Be Only One?"
1:00-1:30PM Dr. Rick Kessler, Cavium, "ThunderX2 in HPC"
1:30-2:00PM Prof. Lizy Kurian John, UT, "Computing In-Situ and In-Transit"
2:00-2:30PM Dr. Pankaj Mehra, AwarenaaS, "Data Centric Computer Architecture"
2:30-3:00PM Break - Refreshments and Poster Viewing
3:00-3:30PM Prof. Simon Peter, UT, "Cross Media File Storage with Strata"
3:30-3:50PM Dr. Srinath Setty, Microsoft, "VOLT project: Trustworthy distributed ledgers by leveraging an untrusted service provider"
3:50-4:10PM Prof. Hakim Weatherspoon, Cornell, "SuperCloud: Opportunities and Challenges"
4:10-4:30PM Dr. Rishi Sinha, Facebook, "Networking at Facebook Through the Layers"
4:30-4:50PM Prof. Emmett Witchel, UT, "Securing Untrusted Computation on Secret Data"
4:50-5:30PM Reception - Refreshments and Poster Awards


Abstracts and Bios

Vijay Chidambaram, UT, "PebblesDB: Building Key-Value Stores using Fragmented Log-Structured Merge Trees"
Abstract: Key-value stores such as LevelDB and RocksDB offer excellent write throughput, but suffer high write amplification. The write amplification problem is due to the Log-Structured Merge Trees data structure that underlies these key-value stores. To remedy this problem, this talk presents a novel data structure inspired by Skip Lists, termed Fragmented Log-Structured Merge Trees (FLSM). FLSM introduces the notion of guards to organize logs, and avoids rewriting data in the same level. We build PebblesDB, a high-performance key-value store, by modifying HyperLevelDB to use the FLSM data structure. We evaluate PebblesDB using micro-benchmarks and show that for write-intensive workloads, PebblesDB reduces write amplification by 2.4-3× compared to RocksDB, while increasing write throughput by 6.7×. We modify two widely-used NoSQL stores, MongoDB and HyperDex, to use PebblesDB as their underlying storage engine. Evaluating these applications using the YCSB benchmark shows that throughput is increased by 18-105% when using PebblesDB (compared to their default storage engines) while write IO is decreased by 35-55%.
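As a rough intuition for the guard idea, here is a toy Python sketch: each level is partitioned by guards, new data is appended to a guard as a fresh fragment, and compaction merges one parent guard's fragments, splits the result by the child level's guards, and appends each piece without rewriting the child's existing data. The guard values, fragment format, and two-level setup are illustrative assumptions, not PebblesDB's actual implementation.

    # Toy sketch of FLSM-style guards; illustrative only.
    import bisect

    class Level:
        def __init__(self, guards):
            self.guards = sorted(guards)                   # partition points of the key space
            self.fragments = {g: [] for g in self.guards}  # guard -> list of sorted runs

        def guard_for(self, key):
            i = bisect.bisect_right(self.guards, key) - 1
            return self.guards[max(i, 0)]

        def append(self, guard, run):
            # New data lands as a fresh fragment; fragments already in the
            # guard are never rewritten, unlike classic LSM compaction.
            self.fragments[guard].append(sorted(run))

    def compact(parent, child, guard):
        # Merge one parent guard's fragments, split by the child's guards,
        # and append each piece to the child without rewriting child data.
        merged = sorted(k for run in parent.fragments[guard] for k in run)
        parent.fragments[guard] = []
        pieces = {}
        for key in merged:
            pieces.setdefault(child.guard_for(key), []).append(key)
        for g, piece in pieces.items():
            child.append(g, piece)

    l0 = Level(guards=[0])         # one guard covering everything
    l1 = Level(guards=[0, 50])     # finer partition at the next level
    l0.append(0, [7, 93, 41])
    l0.append(0, [12, 88])
    compact(l0, l1, 0)
    print(l1.fragments)            # {0: [[7, 12, 41]], 50: [[88, 93]]}

Because fragments within a guard may overlap, reads consult all of a guard's fragments; that is the trade the abstract quantifies: lower write amplification for a modest read-side cost.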

Bio: Vijay Chidambaram is an Assistant Professor in the Computer Science Department at UT Austin. His research focus is on building storage systems that achieve both high performance and strong reliability. Specifically, he has contributed new reliability techniques for (local and distributed) storage systems, and built frameworks for finding reliability bugs in applications. His work has resulted in patent applications by VMware, Samsung, and Microsoft. His research has won the SIGOPS Dennis M. Ritchie Dissertation Award in 2016, a Best Paper Award at FAST 2017, and a Best Poster Award at APSys 2017. He was awarded the Microsoft Research Fellowship in 2014, and the University of Wisconsin-Madison Alumni Scholarship in 2009.


Derek Chiou, UT and Microsoft, "Soft Logic in a Data Center"
Abstract: Soft logic, such as that found in Field Programmable Gate Arrays (FPGAs), is supposedly significantly less efficient than hard logic. However, Microsoft decided to deploy FPGAs in every new Azure and Bing data center server. This talk will describe some of the reasons why Microsoft made that decision and how researchers can experiment with such platforms.


Bio: Derek Chiou is a Partner Architect at Microsoft, where he leads the Azure Cloud Silicon team working on FPGAs and ASICs for data center applications and infrastructure, and a researcher in the Electrical and Computer Engineering Department at The University of Texas at Austin. Until 2016, he was an associate professor at UT. His research areas are FPGA acceleration, high performance computer simulation, rapid system design, computer architecture, parallel computing, Internet router architecture, and network processors. Before going to UT, Dr. Chiou was a system architect and led the performance modeling team at Avici Systems, a manufacturer of terabit core routers. Dr. Chiou received his Ph.D., S.M., and S.B. degrees in Electrical Engineering and Computer Science from MIT.


Mattan Erez, UT, "Quality of Service Techniques for Mixed-Latency Workloads"

Abstract: Quality of service (QoS) is a critical requirement for many client and server workloads. In particular, many workloads are a mix of latency-sensitive (soft real-time) and latency-insensitive (throughput) tasks. I will discuss techniques my group has been researching for balancing the conflicting goals of minimizing missed task completion targets while maximizing task throughput and execution efficiency. I will focus on Dirigent, a mechanism that accurately manages the performance of latency-sensitive applications even when resources are heavily contended. This is achieved through simple application profiling, accurate performance prediction, and hardware throttling (per-core frequency scaling and shared cache partitioning). By eliminating the performance variation due to resource contention, this framework substantially improves the QoS of latency-sensitive applications with minimal loss in overall system throughput. I will also discuss memory-system aspects of QoS for latency-sensitive tasks, along with mechanisms and thoughts on mitigating the issues.
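To make the profile-predict-throttle loop concrete, here is a minimal sketch in the spirit of Dirigent: run a latency-sensitive task just fast enough to meet its completion target, freeing capacity for co-located throughput tasks. The frequency steps, numbers, and the linear performance model are illustrative assumptions, not Dirigent's actual predictor.

    # Minimal sketch of profile-based throttling; illustrative only.
    def pick_frequency(work_left, time_left, freqs=(1.2, 1.8, 2.4, 3.0)):
        """Return the slowest frequency (GHz) that still meets the deadline.

        work_left: remaining work in cycles (from profiling)
        time_left: time to the task's completion target (seconds)
        Assumes throughput scales linearly with frequency -- a crude
        model; a real predictor must account for memory stalls.
        """
        for f in freqs:                      # try slowest first
            if work_left / (f * 1e9) <= time_left:
                return f                     # headroom goes to throughput tasks
        return freqs[-1]                     # deadline at risk: run flat out

    # Example: 4e9 cycles of work left, 2.5 s to the target.
    # 1.8 GHz finishes in ~2.2 s, so there is no need to run at 3.0 GHz.
    print(pick_frequency(4e9, 2.5))          # -> 1.8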


Bio: Mattan Erez is an Associate Professor and a Temple Foundation Faculty Fellow in the Department of Electrical and Computer Engineering at the University of Texas at Austin. His research focuses on improving the performance, efficiency, and scalability of computing systems through advances in memory systems, hardware architecture, software systems, and programming models. The vision is to increase cooperation across system layers and develop flexible and adaptive mechanisms for proportional resource usage. Mattan received a BSc in Electrical Engineering and a BA in Physics from the Technion - Israel Institute of Technology, and his MS and PhD in Electrical Engineering from Stanford University. He was awarded a Presidential Early Career Award for Scientists and Engineers from President Obama, and received an Early Career Research Award from the Department of Energy and an NSF CAREER Award.


Cyril Guyot, Western Digital Research, "Low-latency software architecture for distributed shared memory"
Abstract: The low access latency of emerging NVM devices requires rethinking the design of the management software traditionally used in distributed environments. In combination with high-performance networks, the expected latency for accessing remote storage drops below 2us, migrating the critical-path overhead towards the software management layer. To reduce the software impact on performance and increase overall system scalability, we depart from the traditional client-server architecture and instead explore the design space based on one-sided (RDMA) operations. This presentation describes our distributed storage management layer and the performance tradeoffs incurred by our design decisions with respect to consistency model and data placement.
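A back-of-the-envelope latency budget illustrates why the design space shifts toward one-sided operations. Every number below is an assumption for illustration only, not a measurement from this work.

    # Illustrative latency budget; all numbers are assumptions.
    NETWORK_RTT_US = 1.0   # assumed one-sided RDMA round trip
    NVM_READ_US    = 0.5   # assumed device read latency

    # Client-server: the remote CPU sits on the critical path.
    server_sw_us = 3.0     # assumed request handling: dispatch, locks, copies
    client_server = NETWORK_RTT_US + NVM_READ_US + server_sw_us

    # One-sided: the NIC reads remote memory directly (RDMA READ),
    # so no remote software runs on the critical path.
    one_sided = NETWORK_RTT_US + NVM_READ_US

    print(f"client-server: {client_server:.1f} us "
          f"({server_sw_us / client_server:.0%} spent in software)")
    print(f"one-sided:     {one_sided:.1f} us")

With microsecond-scale devices and networks, even a few microseconds of server-side software dominates the total, which is the motivation the abstract gives for dropping the client-server model.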

Bio: Cyril's interests span distributed systems, machine learning, information theory and cryptography. In Western Digital's Research organization, he has been leading the Software Solutions and Algorithms team in developing novel algorithms and implementations for storage systems and persistent memory architectures. 


Prof. Lizy Kurian John, UT, "Computing In-Situ and In-Transit"
Abstract: Moving data consumes time and energy. Computation can potentially be made more efficient by computing where the data is located (computing in situ); however, there are several challenges in determining which pieces of data should stay where, and what kinds of compute can be integrated with various memory and cache layers. Some data movement is still essential, and computation can also be accelerated by computing while data is being moved (computing in transit). This talk presents (i) various in-situ computing capabilities that will comprise both processing in or near main memory and computing within on-chip caches and memory close to the cores; and (ii) novel in-transit compute capabilities that cut down on, and in many cases completely eliminate, unnecessary round-trip data transfers by transparently processing data as it is transferred between main memory and local compute cores or accelerators, or across the cache hierarchies.
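A toy contrast makes the in-transit idea concrete: instead of moving all the data and then computing, fold the data as it streams past, so no buffered copy or return trip is needed. This is purely illustrative; the talk's proposals operate in hardware at memory and cache interfaces, not in software.

    # Toy illustration of in-transit computation; illustrative only.
    def stream_from_memory(n):
        # Stand-in for data flowing toward a core or accelerator.
        for i in range(n):
            yield i

    # Move-then-compute: materialize everything locally, then reduce.
    buf = list(stream_from_memory(1_000_000))   # full copy "moved" first
    total_after_move = sum(buf)

    # In-transit: reduce each element as it streams by; nothing is buffered.
    total_in_transit = sum(stream_from_memory(1_000_000))

    assert total_after_move == total_in_transit
    print(total_in_transit)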


Bio: Lizy Kurian John is B. N. Gafford Professor in the Electrical and Computer Engineering at UT Austin. She received her Ph. D in Computer Engineering from the Pennsylvania State University. Her research interests include workload characterization, performance evaluation, architectures with emerging memory technologies such as die-stacked DRAM, and high performance processor architectures for emerging workloads. She is recipient of NSF CAREER award, UT Austin Engineering Foundation Faculty Award,  Halliburton, Brown and Root Engineering Foundation Young Faculty Award  2001, University of Texas Alumni Association (Texas Exes) Teaching Award 2004, The Pennsylvania State University Outstanding Engineering Alumnus 2011, etc.  She has coauthored a book on Digital Systems Design using VHDL (Cengage Publishers, 2007, 2017),  a book on Digital Systems Design using Verilog (Cengage Publishers, 2014)  and has edited 4 books including a book on Computer Performance Evaluation and Benchmarking.  She holds 10 US patents and is a Fellow of IEEE.


Rick Kessler, Cavium, "ThunderX2 in HPC"
Abstract:
We present ThunderX2, Cavium's second-generation ARM server processor, focusing on its performance characteristics on HPC applications.


Bio: Richard E. Kessler is Chief Technical Officer and a 15-year veteran at Cavium. Richard leads the hardware architecture team for server and embedded computing at Cavium, and is a principal architect of the Cavium OCTEON and ThunderX processor chips.

Prior to joining Cavium, Richard was a chip architect in the Digital/Compaq Alpha Group, and a supercomputer architect at Cray Research. Dr. Kessler received a Ph.D. in Computer Architecture from the University of Wisconsin-Madison, and has been awarded more than 50 patents.


Pankaj Mehra, AwarenaaS, "Data Centric Computer Architecture"
Abstract:
We examine the world of infrastructure (bits, cores, and fabrics) through the lens of data. The talk begins with a survey of data sources, data varieties, and their growth trends. We review the lifecycle of data in order to understand the processes by which data is turned into insightful information. The body of the talk takes a data-centric view of the world and derives a memory-centric computer architecture, in which the primacy of data is reflected in the engineering of infrastructure. We will see that bits, cores and fabrics line up very differently in a memory-centric architecture than in traditional architectures, and even basic operating systems concepts such as virtual memory have a new design.


The talk concludes with a survey of long-term workload and device architecture trends, especially machine learning and memristive devices, and how those trends align with a move to data centricity.

Bio:
Pankaj Mehra is Founder & CEO of AwarenaaS, a context-aware computing startup. He has over 20 years of technical experience in architecting and optimizing scalable, intelligent information systems and services, and in designing new memory and storage technologies for persistence and acceleration. Prior to successive acquisitions by SanDisk and Western Digital, Pankaj was SVP and Chief Technology Officer at Fusion-io, where he was named a Top 50 CTO by ExecRank. He has also worked at Hewlett Packard, Compaq, and Tandem, and held academic, research, and visiting positions at NASA Ames, IIT Delhi CS&E, IBM TJ Watson Research Center, and UC Santa Cruz. He founded IntelliFabric, Inc. (2001), HP Labs Russia (2006), and Whodini, Inc. (2010), and was a contributing author to the InfiniBand 1.0 specification. Pankaj is lead author of Load Balancing: An Automated Learning Approach (Wiley, 1995), lead editor of Artificial Neural Networks: Concepts & Theory (IEEE, 1992), and co-author of Storage, Data, and Information Systems (HP, 2008). He has held TPC-C and Terabyte Sort performance records, and his work has been recognized with awards from NASA and Sandia National Labs, among others. Pankaj was appointed Distinguished Technologist at Hewlett-Packard in 2004, and Senior Fellow at SanDisk in 2014. He has served on the editorial boards of the IEEE Transactions on Computers journal and Internet Computing magazine. Pankaj received his Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign.


Simon Peter, UT, "Cross Media File Storage with Strata"
Abstract: Current hardware and application storage trends put immense pressure on the operating system's storage subsystem. On the hardware side, the market for storage devices has diversified to a multi-layer storage topology spanning multiple orders of magnitude in cost and performance. Above the file system, applications increasingly need to process small, random IO on vast data sets with low latency, high throughput, and simple crash consistency. File systems designed for a single storage layer cannot support all of these demands together.

In this talk, I characterize these hardware and software trends and then present Strata, a cross-media file system that leverages the strengths of one storage medium to compensate for the weaknesses of another. In doing so, Strata provides performance, capacity, and a simple, synchronous IO model all at once, while having a simpler design than that of file systems constrained by a single storage device. At its heart, Strata uses a log-structured approach with a novel split of responsibilities among user mode, kernel, and storage layers that separates the concerns of scalable, high-performance persistence from storage layer management. Strata achieves up to 2.8x better throughput than a block-based cache provided by Linux's logical volume manager.
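The shape of a cross-media, log-structured write path can be sketched in a few lines: writes persist synchronously to a small fast log (think NVM), and a later "digest" step reorganizes them into a slower, larger layer (think SSD/HDD). The class names, the two-layer split, and the digest trigger below are illustrative assumptions, not Strata's actual interfaces.

    # Minimal cross-media log sketch in the spirit of Strata; illustrative only.
    class CrossMediaStore:
        def __init__(self, log_capacity=4):
            self.log = []          # fast layer: append-only operation log
            self.cold = {}         # slow layer: digested key -> value
            self.log_capacity = log_capacity

        def write(self, key, value):
            # Synchronous IO model: appending to the fast log is enough
            # to make the write durable, so write() returns immediately.
            self.log.append((key, value))
            if len(self.log) >= self.log_capacity:
                self.digest()

        def digest(self):
            # Strata does this asynchronously in the kernel; sequentially
            # draining the log batches and amortizes slow-layer writes.
            for key, value in self.log:
                self.cold[key] = value
            self.log.clear()

        def read(self, key):
            # Newest data first: scan the log back-to-front, then fall
            # back to the digested layer.
            for k, v in reversed(self.log):
                if k == key:
                    return v
            return self.cold.get(key)

    store = CrossMediaStore()
    store.write("a", 1); store.write("b", 2); store.write("a", 3)
    print(store.read("a"))   # -> 3, served from the fast log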

Bio:
Simon is an assistant professor at the University of Texas at Austin, where he conducts research in operating systems and networks. He received a Ph.D. in Computer Science from ETH Zurich in 2012 and an MSc in Computer Science from the Carl von Ossietzky University of Oldenburg, Germany, in 2006. Before joining UT Austin in 2016, he was a research associate at the University of Washington from 2012 to 2016.

Simon received two Jay Lepreau best paper awards (2014 and 2016) and the Madrona prize (2014). He has conducted further award-winning systems research at various locations, including MSR SVC and Cambridge, Intel Labs, and UC Riverside.


Srinath Setty, Microsoft, "VOLT project: Trustworthy distributed ledgers by leveraging an untrusted service provider"
Abstract:
This talk will introduce VOLT, a new system in which an autonomous participating member—without trusting anyone else—can efficiently share its internal state with its peer members and then construct an append-only history of valid operations on the shared state of all participants. VOLT achieves strong security through a new protocol that composes a simple verifier (a stateful process at each member) and an untrusted ledger—a powerful abstraction that encodes an append-only log of entries created via a fully untrusted mechanism. We  instantiate the untrusted ledger using a highly-available cloud service built atop a fault-tolerant cloud storage service. An experimental evaluation with a prototype implementation of our protocol demonstrates that VOLT scales to realistic workloads.
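The verifier/untrusted-ledger split can be illustrated with a small sketch: the ledger is an append-only log kept by an untrusted service, and each member runs a stateful verifier that checks every new entry extends the history it has already accepted. The hash-chain encoding below is an illustrative assumption, not VOLT's actual protocol.

    # Sketch of a verifier over an untrusted append-only ledger; illustrative only.
    import hashlib

    def entry_hash(prev_hash, payload):
        return hashlib.sha256(prev_hash + payload).hexdigest().encode()

    class UntrustedLedger:
        """Append-only log; in VOLT, a fault-tolerant cloud storage service."""
        def __init__(self):
            self.entries = []
        def append(self, payload):
            prev = self.entries[-1][0] if self.entries else b"genesis"
            self.entries.append((entry_hash(prev, payload), payload))

    class Verifier:
        """Stateful process at each member: trusts only its own state."""
        def __init__(self):
            self.last_hash = b"genesis"
        def check(self, ledger_entry):
            h, payload = ledger_entry
            if h != entry_hash(self.last_hash, payload):
                raise ValueError("ledger rewrote or reordered history")
            self.last_hash = h     # advance only past verified entries
            return payload

    ledger = UntrustedLedger()
    ledger.append(b"op1"); ledger.append(b"op2")
    v = Verifier()
    for e in ledger.entries:
        print(v.check(e))          # b'op1', then b'op2'

Because the verifier remembers only the hash of the last entry it accepted, a ledger that later rewrites or reorders history breaks the chain and is caught by every member, even though the ledger itself is fully untrusted.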

Bio: 
Srinath Setty is a Researcher at Microsoft Research, which he joined in December 2014. He works on building systems with rigorous security and privacy properties, in the context of the cloud, datacenters, etc. His current project focuses on building high-throughput distributed ledgers with powerful security properties for enterprise use cases. He is also working on other topics including unobservable communication, formally verified secure systems, and verifiable databases. He received his PhD in computer science from UT Austin, where he worked on the Pepper project, which ignited a thriving research area that refines deep theoretical constructs such as argument protocols and probabilistically checkable proofs (PCPs) and builds systems from them.


Rishi Sinha, Facebook, "Networking at Facebook Through the Layers"
Abstract: Underlying Facebook's servers and data centers is a vast and dense network that handles huge volumes of traffic at peak every day. It includes load balancers, switches and routers, links within data centers, and long-distance links between continents. In this talk, I provide an overview of this network infrastructure from the physical to the application layer. I describe the network topology within our data centers, how user traffic is load-balanced across these data centers, and the traffic patterns that arise within them. I show how traffic patterns drive network design decisions. I also present Facebook's contributions to designs and standards in network devices, which have been heavily influenced by our desire to manage network devices the way we manage servers.

Bio: 
Rishi Sinha is an engineer on the Capacity Engineering team at Facebook, where he works on driving the network towards greater efficiency, characterizing the users and patterns of inter-machine traffic, and developing forecasts of network traffic at all levels. Prior to Facebook, Rishi was a software engineer at Brocade Communications, where he developed the software solution for detecting flow-control bottlenecks on Brocade's Fibre Channel switches. Prior to that, he supported live streaming operations at Akamai Technologies. He received a PhD in Computer Science from the University of Southern California.


Kumaran Siva, Intel, "FPGA-Ready Cloud Workloads"
Abstract:
This talk presents a view of cloud workloads that can be accelerated by FPGA technology and explains why FPGAs make sense for them. Workloads considered include securing East-West traffic, autonomous network acceleration, storage stack acceleration, and machine learning.

Bio: 
Kumaran Siva manages the Datacenter Business Group at Intel PSG and is responsible for strategic marketing for Intel's FPGA business in the datacenter. Prior to Intel, Kumaran worked at Broadcom, ARM, and PMC-Sierra, where he held various roles in the marketing and engineering of processor and networking product lines. He has an undergraduate degree in Electrical Engineering from the University of Waterloo and a master's degree from Harvard University.


Dan Stanzione, TACC, "HPC vs. The Cloud -- In the End There Can Be Only One?"

Abstract: The question is often asked, “When will High Performance Computing (HPC) move to the cloud?” Implicit in this question is the assertion that HPC and the Cloud are competing technologies solving the same problems – or the same technology just operated differently. This talk will cover both the HPC and the cloud technologies available at the Texas Advanced Computing Center (TACC), and discuss where they can be used synergistically rather than one vs. the other – and if they can coexist.  


Bio:
Dr. Stanzione has been the Executive Director of the Texas Advanced Computing Center (TACC) at The University of Texas at Austin since July 2014, previously serving as Deputy Director. He is the principal investigator (PI) for a National Science Foundation (NSF) grant to deploy and support Stampede2, a large-scale supercomputer which will have over twice the system performance of TACC's original Stampede system. Stanzione is also the PI of TACC's Wrangler system, a supercomputer for data-focused applications. For six years he was co-director of CyVerse, a large-scale NSF life sciences cyberinfrastructure. Stanzione was also a co-principal investigator for TACC's Ranger and Lonestar supercomputers, large-scale NSF systems previously deployed at UT Austin. Stanzione received his bachelor's degree in electrical engineering and his master's degree and doctorate in computer engineering from Clemson University.


Hakim Weatherspoon, Cornell, "SuperCloud: Opportunities and Challenges"

Abstract: Supercloud is a cloud architecture that enables application migration as a service across different availability zones or cloud providers. The Supercloud provides interfaces to allocate, migrate, and terminate resources such as virtual machines and storage, and presents a homogeneous network to tie these resources together. The Supercloud can span all major public cloud providers, such as Amazon EC2, Microsoft Azure, Google Compute Engine, and Rackspace, as well as private clouds. Supercloud users have the freedom to relocate VMs to many data centers across the world, irrespective of owner and without having to implement complex re-configuration and state re-synchronization in their applications. Using the Supercloud, an application can easily offload from an overloaded data center to another one with a different infrastructure.


Bio:
Hakim Weatherspoon is an associate professor in the Department of Computer Science at Cornell University. His research interests cover various aspects of fault-tolerance, reliability, security, and performance of large Internet-scale systems such as cloud computing and distributed systems. Professor Weatherspoon is an Alfred P. Sloan Fellow. He is also the recipient of an NSF CAREER Award, the DARPA Computer Science Study Panel award, an IBM Faculty Award, the NetApp Faculty Fellowship, and the Future Internet Architecture award from the National Science Foundation (NSF). He received his PhD from the University of California, Berkeley, in the area of secure and fault-tolerant distributed wide-area storage systems (e.g., Antiquity and OceanStore), and received his B.S. in Computer Engineering from the University of Washington.


Emmett Witchel, UT, "Securing Untrusted Computation on Secret Data"

Abstract: Users of modern data-processing services such as tax preparation or genomic screening are forced to trust them with data that the users wish to keep secret. Our system, Ryoan, protects secret data while it is processed by services that the data owner does not trust. Ryoan provides a distributed sandbox, leveraging hardware enclaves (e.g., Intel's Software Guard Extensions (SGX)) to protect sandbox instances from potentially malicious computing platforms.

We also discuss using Ryoan to provide machine-learning-as-a-service without users having to disclose their training data to the machine learning service provider.


Bio: Emmett Witchel is a professor in computer science at The University of Texas at Austin.  He received his doctorate from MIT in 2004.  He and his group are interested in operating systems, security, and concurrency.