News
Microarchitecture Workshop on Wednesday, February 20, 2019
The Red Hat Collaboratory at Boston University will be holding a Microarchitecture Workshop (with a focus on security) on February 20, 2019, from 10:00 AM to 3:30 PM at the Hariri Institute for Computing, Boston University. The event will convene faculty, graduate students, and industry participants working in the microarchitecture area to share research and ideas, as well as open doors to future collaborations. Learn more about this event at the Microarchitecture Workshop page.
UKL, a Linux-based Unikernel with a Community-based Approach Created via the Boston University Red Hat Collaboratory

- Unikernels have been around for many years, but they tend to take either a “clean slate” or a “fork” approach, disconnecting their codebases from the broader community of developers who help with maintenance and from pre-existing users, who are unlikely to take the leap of porting their applications to something new.
- UKL takes a community-based approach, by focusing on producing something that could be upstreamed and by working to keep any changes to the codebases involved extremely minimal.
Colloquium: Software-Configured Compute Environments
Red Hat Collaboratory at Boston University Colloquium
Ulrich Drepper
Engineer, Office of the CTO, Red Hat
Software-Configured Compute Environments
Abstract
Hardware and software environments are designed as a compromise between many different requirements. This sacrifices performance, among other things, even as the demand for compute keeps increasing. Specialists can certainly build more optimized systems; the challenge is to automate this. Researching such systems requires hardware specialists to create reconfigurable processors, compiler writers to deduce the best architecture from source code and generate configurations for hardware and OS, and improved OSes to run that code efficiently, all while preserving API and ideally ABI compatibility. First steps are already under way in the BU/Red Hat collaboration: the OpenShell project, which uses FPGAs as a suitable platform, and the Linux-based unikernel project, which optimizes the runtime environment.
Bio
Ulrich Drepper
Agenda
- 11:30 AM – 12:00 PM: Pizza & Networking
- 12:00 – 1:00 PM: Talk and Discussion
Questions?
Contact the Collaboratory with any questions you may have about this event.
Recording of Event
This talk was held as scheduled. A recording can be accessed here. Slides can be accessed here.
Colloquium: Chameleon: New Capabilities for Experimental Computer Science
Red Hat Collaboratory at Boston University Colloquium
Kate Keahey
Senior Fellow, Computation Institute, University of Chicago and Computer Scientist, Mathematics and Computer Science Division, Argonne National Laboratory
Chameleon: New Capabilities for Experimental Computer Science
Abstract
Computer Science experimental testbeds allow investigators to explore a broad range of state-of-the-art hardware options, assess the scalability of their systems, and provide conditions with deep reconfigurability and isolation so that one user does not impact the experiments of another. An experimental testbed is also in a unique position to provide methods facilitating experiment analysis and, crucially, to improve the repeatability and reproducibility of experiments, both from the perspective of the original experimenter and of those building on or extending their results. Providing these capabilities at least partially within a commodity framework improves the sustainability of systems experiments and thus makes them available to a broader range of experimenters. Chameleon is a large-scale, deeply reconfigurable testbed built specifically to support the features described above. It currently consists of almost 20,000 cores and over 5PB of total disk space hosted at the University of Chicago and TACC, and leverages a 100 Gbps connection between the sites. The hardware includes a large-scale homogeneous partition to support large-scale experiments, as well as a diversity of configurations and architectures including InfiniBand, GPUs, FPGAs, storage hierarchies with a mix of HDDs, SSDs, NVRAM, and high memory, as well as alternative architectures such as ARM and Atom processors. To support systems experiments, Chameleon provides a configuration system giving users full control of the software stack, including root privileges, kernel customization, and console access. To date, Chameleon has supported 2,700+ users working on 350+ projects. This talk will describe the evolution of the testbed as well as current work towards broadening the range of supported experiments.
In particular, I will discuss recently deployed hardware (as well as short-term plans) and new networking capabilities allowing experimenters to deploy their own switch controllers and experiment with Software Defined Networking (SDN). I will also describe new capabilities targeted at improving experiment management, monitoring, and analysis, as well as tying together testbed features to improve experiment repeatability. Finally, I will outline our plans for packaging the Chameleon infrastructure to allow others to reproduce its configuration easily, thereby making the process of configuring a CS testbed more sustainable.
Bio
Kate Keahey
Agenda
- 11:30 AM – 12:00 PM: Pizza & Networking
- 12:00 – 1:00 PM: Talk and Discussion
Questions?
Contact the Collaboratory with any questions you may have about this event.
Tech Talk: Why consider a career as an SRE?
Red Hat Collaboratory at Boston University Tech Talk
Mike Saparov
Senior Director of Engineering, Red Hat Service Reliability Team
Why consider a career as an SRE?
Abstract
What do SREs actually do? And why do they get paid so much? :) The Site Reliability Engineering (SRE) team was originally created at Google to manage its massive infrastructure. According to Google’s SRE page, SRE is what you get when you treat operations as if it’s a software problem. So why is every organization – from Red Hat to AWS to Bank of America and TicketMaster – suddenly interested in SREs and SRE culture? Hint: it may be related to the technological shift brought by containers and Kubernetes.
About the Speaker
Mike Saparov is a Senior Director of SRE at Red Hat. He joined Red Hat as part of CoreOS, where, as VP of Engineering, he oversaw development of CoreOS Linux, etcd, and Tectonic, the first immutable Kubernetes distro.
Prior to that Mike was an engineering executive at a number of Silicon Valley companies, founded his own startup, wrote compilers, and did research on High Performance Computing at Stanford.
Questions?
Contact the Collaboratory with any questions you may have about this event.
Colloquium: Networking as a First-Class Cloud Resource
Red Hat Collaboratory at Boston University Colloquium
Rodrigo Fonseca
Associate Professor, Computer Science Department, Brown University
Networking as a First-Class Cloud Resource
Abstract
Tenants in a cloud can specify, and are generally charged by, resources such as CPU, storage, and memory. There are dozens of different bundles of these resources tenants can choose from, and many different pricing schemes, including spot markets for leftover resources. This is not the case for networking, however. Most of the time, networking is treated as basic infrastructure, and tenants, apart from connectivity, have very little to choose from in terms of network properties such as priorities, bandwidth, or deadlines for flows. In this talk I look into why that is, and whether networking could be treated as a first-class resource. The networking community has developed plenty of mechanisms for different networking properties, and programmable network elements enable much more fine-grained control and allocation of network resources. We argue that there may be a catch-22: tenants can’t specify what they want, and providers, not seeing different needs, don’t provide different services or charge differently for them. I will discuss a prototype we have designed with the Mass Open Cloud project, which provides a much more expressive interface between tenants and the cloud for networking resources, improving efficiency, fostering innovation, and even allowing for a marketplace for networking resources.
Bio
Rodrigo Fonseca
Agenda
- 11:30 AM – 12:00 PM: Pizza & Networking
- 12:00 – 1:00 PM: Talk and Discussion
Questions?
Contact the Collaboratory with any questions you may have about this event.
Recording of Event
This talk was held as scheduled. A recording can be accessed here. Slides can be accessed here.
Colloquium: The Future of Enterprise Application Development in the Cloud
Red Hat Collaboratory at Boston University Colloquium
Mark Little
Red Hat, Vice President of Engineering and JBoss Middleware CTO
The Future of Enterprise Application Development in the Cloud
Abstract
Since the dawn of the cloud, developers have been inundated with a range of different recommended architectural approaches, such as Web Services, REST, or microservices, as well as just as many different frameworks or stacks, including AWS, Java EE, Spring Boot, and now Eclipse MicroProfile. Throw in the explosion of programming languages, such as Golang and Swift, and it’s no wonder a developer today could be forgiven for being confused about where to start. In this session we will look at how cloud and large-scale distributed systems problems are influencing the research behind, and the evolution of, application development stacks and frameworks. We will describe some of the fundamental research challenges in areas including reactive programming, serverless, fault tolerance, and multi-tenancy.
Bio
Mark Little
Agenda
- 11:30 AM – 12:00 PM: Pizza & Networking
- 12:00 – 1:00 PM: Talk and Discussion
Questions?
Contact the Collaboratory with any questions you may have about this event.
Recording of Event
This talk was held as scheduled. A recording can be accessed here. Slides can be accessed here.
Boston University ranked among Most Innovative Universities in US News & World Report

Colloquium: Towards Tail Latency-Aware Caching in Large Web Services
Red Hat Collaboratory at Boston University Colloquium
Daniel S. Berger
2018 Mark Stehlik Postdoctoral Fellow in the Computer Science Department at Carnegie Mellon University
Towards Tail Latency-Aware Caching in Large Web Services
Abstract
Tail latency is of great importance in user-facing web services. However, achieving low tail latency is challenging, because typical user requests result in multiple queries to a variety of complex backends (databases, recommender systems, ad systems, etc.), and a request is not complete until all of its queries have completed. In this talk we present our findings for several large web services at Microsoft. We analyze production request structures and find that requests vary greatly in the backends they access and in the number of queries made to each backend. Furthermore, we find that backend query latencies vary by more than two orders of magnitude across backends and vary widely over time, resulting in high request tail latencies. This talk proposes a novel solution for maintaining low request tail latency: repurpose existing caches to mitigate the effects of backend latency variability. Our solution, RobinHood, dynamically reallocates cache resources from the cache-rich (backends which don’t affect request latency) to the cache-poor (backends which do affect request latency). We evaluate RobinHood with production traces on a 50-server cluster with 20 different backend systems. We find that, in the presence of load spikes, RobinHood meets a 150ms SLO 99.7% of the time, whereas the next best policy meets it only 70% of the time. The team working on this project includes Benjamin Berg (CMU), Timothy Zhu (Penn State), Mor Harchol-Balter (CMU), and Siddhartha Sen (MSR). The work will appear at USENIX OSDI 2018.
Bio
Daniel S. Berger is the 2018 Mark Stehlik Postdoctoral Fellow in the Computer Science Department at Carnegie Mellon University. His research interests intersect systems, mathematical modeling, and performance testing. Daniel’s research explores how caching can be used to reduce tail latency in large web services and CDNs. Daniel received his Ph.D. (2018) from the University of Kaiserslautern, Germany, and has spent extended visits at CMU (2015-2017), Warwick University (2014), T-Labs Berlin (2013), ETH Zurich (2012), and the University of Waterloo (2011). Previously, Daniel worked as a data scientist at the German Cancer Research Center (2008-2010) and as a project scientist at CMU (2017-2018).
Agenda
- 11:30 AM – 12:00 PM: Pizza & Networking
- 12:00 – 1:00 PM: Talk and Discussion
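To make the core idea from the abstract above concrete, here is a toy sketch of RobinHood-style cache reallocation. All names, numbers, and the data layout are illustrative assumptions for this sketch, not taken from the actual RobinHood paper or implementation:

```python
# Toy sketch of RobinHood-style cache reallocation: repeatedly take
# cache space from "cache-rich" backends (those rarely responsible for
# a request's slowest query) and give it to "cache-poor" backends
# (those driving request tail latency).

def reallocate(caches, blame, step=1):
    """caches: backend -> cache size (illustrative units).
    blame: backend -> share of requests whose slowest query hit it."""
    rich = min(blame, key=blame.get)   # least latency-critical backend
    poor = max(blame, key=blame.get)   # most latency-critical backend
    if caches[rich] >= step:
        caches[rich] -= step           # shrink the cache-rich backend
        caches[poor] += step           # grow the cache-poor backend
    return caches

caches = {"db": 10, "ads": 10, "recs": 10}
blame = {"db": 0.7, "ads": 0.1, "recs": 0.2}
for _ in range(5):                     # a few reallocation rounds
    caches = reallocate(caches, blame)
print(caches)                          # → {'db': 15, 'ads': 5, 'recs': 10}
```

In the real system the blame signal is recomputed continuously from observed query latencies, so allocations track shifting load rather than a fixed snapshot as in this sketch.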
Questions?
Contact the Collaboratory with any questions you may have about this event.
Recording of Event
This talk was held as scheduled. A recording can be accessed here. Slides can be accessed here.
Intern Presentation Series: Container Verification Pipeline, Dataverse/Solr, and Design/Marketing
Every Friday for the past month, Red Hat interns in the Boston office have come together to present their work. Below is an event recap for the following presentations: Container Verification Pipeline, Dataverse/Solr, and Design/Marketing.
Container Verification Pipeline
The first presentation was given by Lance Galletti and covered his work on the Container Verification Pipeline (CVP). This summer, Lance has been developing a tool to automate container testing and a dashboard to view container performance metrics. The purpose of the project is to empower developers with the data and resources they need to make informed decisions throughout the development process.
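The pipeline's central pattern — run a battery of automated checks against each container and surface the results for a dashboard — might look roughly like this sketch. The check names and metadata fields are illustrative assumptions, not taken from the actual CVP:

```python
# Simplified sketch of a container verification step: apply each
# check to a container's metadata and collect pass/fail results
# that a metrics dashboard could later display.
# All check names and metadata fields are hypothetical.

def has_required_labels(meta):
    # Require a minimal set of identifying labels on the image.
    return {"name", "version", "maintainer"} <= meta.get("labels", {}).keys()

def runs_as_non_root(meta):
    # Flag containers configured to run as root.
    return meta.get("user") not in (None, "", "root", "0")

CHECKS = {
    "required-labels": has_required_labels,
    "non-root-user": runs_as_non_root,
}

def verify(meta):
    """Run every check; return a check-name -> bool result map."""
    return {name: check(meta) for name, check in CHECKS.items()}

report = verify({
    "labels": {"name": "demo", "version": "1.0", "maintainer": "lance"},
    "user": "1001",
})
print(report)  # → {'required-labels': True, 'non-root-user': True}
```

A real pipeline would pull this metadata from the image registry and run the checks on every build, feeding the results into the dashboard.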
Dataverse / Solr
The second presentation was given by Charles Thao and Tommy Monson on Dataverse and Solr. Dataverse is an open-source platform to publish, cite, and archive research data. It is a powerful tool for data storage and research, as it supports multiple types of data, users, and workflows. This summer, Charles and Tommy have been working on improving Dataverse’s security and redesigning the Dataverse installation for containers. The duo has also been working on scaling Solr, an open-source enterprise search platform.
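For context, Solr exposes search over a simple HTTP API. A Dataverse-style full-text query might be built like this sketch (the host, core name `collection1`, and defaults are illustrative assumptions; `q`, `rows`, and `wt` are standard Solr query parameters):

```python
from urllib.parse import urlencode

# Sketch of constructing a Solr search request URL, roughly as a
# Dataverse-style application might issue it over HTTP.
def solr_query_url(base, core, text, rows=10):
    params = {
        "q": text,      # full-text query string
        "rows": rows,   # number of results per page
        "wt": "json",   # response format
    }
    return f"{base}/solr/{core}/select?{urlencode(params)}"

url = solr_query_url("http://localhost:8983", "collection1", "climate data")
print(url)  # → http://localhost:8983/solr/collection1/select?q=climate+data&rows=10&wt=json
```

Scaling work like the interns' typically involves sharding and replicating the index so that queries like this one fan out across multiple Solr nodes.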