News

UKL, a Linux-based Unikernel with a Community-based Approach Created via the Boston University Red Hat Collaboratory

By Mairin Duffy, November 27th, 2018, in Linux Unikernels, News
Over at the now + Next Red Hat blog, Ph.D. researcher Ali Raza details a Collaboratory effort he worked on this summer as an intern: building UKL, a Linux-based unikernel. Ali was advised by Dr. Orran Krieger at BU as well as by Red Hatters Richard W.M. Jones and Ulrich Drepper (who recently gave a Collaboratory Colloquium talk), with consultation from James Cadden and Tommy Unger of the Boston University Elastic Building Block Runtime (EbbRT) team. What sets UKL apart from other unikernel projects?
  • Unikernels have been around for many years, but tend to take either a “clean slate” or a “fork” approach, disconnecting their codebases from the broader community of developers who help with maintenance, and from pre-existing users who are unlikely to take the leap of porting their applications to something new.
  • UKL takes a community-based approach by focusing on producing something that could be upstreamed and by working to keep any changes to the codebases involved extremely minimal.
The initial version of UKL changes just one line of the Linux codebase, with minor changes made to glibc by Ulrich Drepper. With such minimal changes, compatibility with existing applications that run on Linux can be maintained, and the potential for upstream adoption is much greater. Read more about this exciting Collaboratory project, including the roadmap ahead, in Ali’s blog post, “UKL: A Unikernel Based on Linux.” You can also check out our Linux Unikernels Collaboratory project page.

Colloquium: Software-Configured Compute Environments

Red Hat Collaboratory at Boston University Colloquium

Ulrich Drepper

Engineer, Office of the CTO, Red Hat

Software-Configured Compute Environments

Abstract

Hardware and software environments are designed as a compromise among many different requirements. This sacrifices performance, among other qualities, even as the demand for compute keeps increasing. Specialists can certainly build more highly optimized systems; the challenge is to automate this. Researching such systems requires hardware specialists to create reconfigurable processors, compiler writers to deduce the best architecture from source code and generate configurations for the hardware and OS, and improved OSes to run that code efficiently, all while preserving API and, ideally, ABI compatibility. First steps toward this are already under way in the BU/Red Hat collaboration: the OpenShell project, which uses FPGAs as a suitable platform, and the Linux-based unikernel project, which optimizes the runtime environment.

Bio

Ulrich Drepper rejoined Red Hat in 2017, after a seven-year hiatus during which he worked for Goldman Sachs. He is part of the Office of the CTO and concentrates on technologies for machine learning and high-performance computing. At Goldman Sachs he worked in the technology division, most recently as a member of the data science research group. His previous stint at Red Hat lasted 14 years; his last position there was as a member of the Office of the CTO, collecting and disseminating information relevant to the Red Hat Enterprise Linux product. His main interests are low-level technologies such as machine and processor architectures, programming languages, compilers, and high-performance and low-latency computing. In addition, he is interested in using statistics and machine learning for performance analysis of programs and for the security of application and OS environments. He worked on several revisions of the POSIX standard and was an invited expert for both the C and C++ standards committees. Ulrich received his Diploma in Informatics from the University of Karlsruhe, Germany.

Agenda

  • 11:30 AM – 12:00 PM: Pizza & Networking
  • 12:00 – 1:00 PM: Talk and Discussion

Questions?

Contact the Collaboratory with any questions you may have about this event.

Colloquium: Chameleon: New Capabilities for Experimental Computer Science

Red Hat Collaboratory at Boston University Colloquium

Kate Keahey

Senior Fellow, Computation Institute, University of Chicago and Computer Scientist, Mathematics and Computer Science Division, Argonne National Laboratory

Chameleon: New Capabilities for Experimental Computer Science

Abstract

Computer Science experimental testbeds allow investigators to explore a broad range of state-of-the-art hardware options, assess the scalability of their systems, and provide conditions that allow deep reconfigurability and isolation so that one user does not impact the experiments of another. An experimental testbed is also in a unique position to provide methods that facilitate experiment analysis and, crucially, improve the repeatability and reproducibility of experiments, both from the perspective of the original experimenter and of those building on or extending their results. Providing these capabilities at least partially within a commodity framework improves the sustainability of systems experiments and thus makes them available to a broader range of experimenters. Chameleon is a large-scale, deeply reconfigurable testbed built specifically to support the features described above. It currently consists of almost 20,000 cores and over 5 PB of total disk space hosted at the University of Chicago and TACC, and leverages a 100 Gbps connection between the sites. The hardware includes a large-scale homogeneous partition to support large-scale experiments, as well as a diversity of configurations and architectures including InfiniBand, GPUs, FPGAs, storage hierarchies with a mix of HDDs, SSDs, NVRAM, and high memory, as well as non-x86 architectures such as ARMs and Atoms. To support systems experiments, Chameleon provides a configuration system giving users full control of the software stack, including root privileges, kernel customization, and console access. To date, Chameleon has supported 2,700+ users working on 350+ projects. This talk will describe the evolution of the testbed as well as current work toward broadening the range of supported experiments.
In particular, I will discuss recently deployed hardware (as well as short-term plans) and new networking capabilities that allow experimenters to deploy their own switch controllers and experiment with Software-Defined Networking (SDN). I will also describe new capabilities targeted at improving experiment management, monitoring, and analysis, as well as tying together testbed features to improve experiment repeatability. Finally, I will outline our plans for packaging the Chameleon infrastructure so that others can easily reproduce its configuration, thereby making the process of configuring a CS testbed more sustainable.

Bio

Kate Keahey is one of the pioneers of infrastructure cloud computing. She created the Nimbus project, recognized as the first open source Infrastructure-as-a-Service implementation, and continues to work on research aligning cloud computing concepts with the needs of scientific datacenters and applications. To facilitate such research for the community at large, Kate leads the Chameleon project, providing a deeply reconfigurable, large-scale, and open experimental platform for Computer Science research. To foster the recognition of contributions to science made by software projects, Kate co-founded and serves as co-Editor-in-Chief of the SoftwareX journal, a new format designed to publish software contributions. Kate is a Scientist at Argonne National Laboratory and a Senior Fellow at the Computation Institute at the University of Chicago.

Agenda

  • 11:30 AM – 12:00 PM: Pizza & Networking
  • 12:00 – 1:00 PM: Talk and Discussion

Questions?

Contact the Collaboratory with any questions you may have about this event.

Tech Talk: Why consider a career as an SRE?

By Mairin Duffy, October 23rd, 2018, in Events, News, Tech Talks, Upcoming

Red Hat Collaboratory at Boston University Tech Talk

Mike Saparov

Senior Director of Engineering, Red Hat Service Reliability Team

Why consider a career as an SRE?

Abstract

What do SREs actually do? And why do they get paid so much? :) Site Reliability Engineering (SRE) was originally created at Google to manage its massive infrastructure. According to Google’s SRE page, SRE is what you get when you treat operations as if it’s a software problem. So why is every organization, from Red Hat to AWS to Bank of America and Ticketmaster, suddenly interested in SREs and SRE culture? Hint: it may be related to the technological shift brought by containers and Kubernetes.

About the Speaker

Mike Saparov is a Senior Director of SRE at Red Hat. He joined Red Hat as part of CoreOS, where, as VP of Engineering, he oversaw development of CoreOS Linux, etcd, and Tectonic, the first immutable Kubernetes distribution.

Prior to that, Mike was an engineering executive at a number of Silicon Valley companies, founded his own startup, wrote compilers, and did research on high-performance computing at Stanford.

Questions?

Contact the Collaboratory with any questions you may have about this event.

Colloquium: Networking as a First-Class Cloud Resource

By Mairin Duffy, September 18th, 2018, in Colloquium Series, Events, News, Upcoming

Red Hat Collaboratory at Boston University Colloquium

Rodrigo Fonseca

Associate Professor, Computer Science Department, Brown University

Networking as a First-Class Cloud Resource

Abstract

Tenants in a cloud can specify, and are generally charged by, resources such as CPU, storage, and memory. There are dozens of bundles of these resources tenants can choose from, and many different pricing schemes, including spot markets for leftover resources. This is not the case for networking, however. Most of the time, networking is treated as basic infrastructure, and tenants, apart from connectivity, have very little to choose from in terms of network properties such as priorities, bandwidth, or deadlines for flows. In this talk I look into why that is, and whether networking could be treated as a first-class resource. The networking community has developed plenty of mechanisms for different networking properties, and programmable network elements enable much more fine-grained control and allocation of network resources. We argue that there may be a catch-22: tenants can’t specify what they want, and providers, not seeing different needs, don’t offer different services or charge differently for them. I will discuss a prototype we have designed with the Mass Open Cloud project, which provides a much more expressive interface between tenants and the cloud for networking resources, improving efficiency, fostering innovation, and even allowing for a marketplace for networking resources.
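To make the idea concrete, the kind of tenant-facing interface the abstract argues for might look something like the sketch below. This is purely illustrative: the class and field names are invented here for the sake of the example and are not the Mass Open Cloud prototype’s actual API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FlowRequest:
    """A hypothetical tenant-facing network-resource request.

    Illustrates how network properties could be specified (and billed)
    the same way tenants today specify vCPUs or memory.
    """
    src: str                        # tenant endpoint, e.g. a VM or service tier
    dst: str
    min_bandwidth_mbps: int         # guaranteed floor, like a CPU reservation
    priority: int                   # relative weight among the tenant's flows
    deadline_ms: Optional[int] = None  # complete-by target for deadline-driven flows

# A tenant could then ask the provider for specific network behavior
# instead of getting undifferentiated "connectivity":
req = FlowRequest(src="web-tier", dst="db-tier",
                  min_bandwidth_mbps=500, priority=2)
```

With an interface like this, a provider could price bandwidth floors or deadlines differently, which is exactly the marketplace for networking resources the talk describes.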

Bio

Rodrigo Fonseca is an associate professor in Brown University’s Computer Science Department. He holds a PhD from UC Berkeley, and an MSc and a BSc from UFMG. Prior to Brown, he was a visiting researcher at Yahoo! Research. He is broadly interested in networking, distributed systems, and operating systems, and is the recipient of an NSF CAREER award and a 2015 SOSP Best Paper Award. His research seeks better ways to build, operate, and diagnose distributed systems, including large-scale internet systems, cloud computing, and mobile computing.

Agenda

  • 11:30 AM – 12:00 PM: Pizza & Networking
  • 12:00 – 1:00 PM: Talk and Discussion

Questions?

Contact the Collaboratory with any questions you may have about this event.

Colloquium: The Future of Enterprise Application Development in the Cloud

Red Hat Collaboratory at Boston University Colloquium

Mark Little

Red Hat, Vice President of Engineering and JBoss Middleware CTO

The Future of Enterprise Application Development in the Cloud

Abstract

Since the dawn of the cloud, developers have been inundated with a range of recommended architectural approaches such as Web Services, REST, or microservices, as well as just as many frameworks and stacks, including AWS, Java EE, Spring Boot, and now Eclipse MicroProfile. Throw in the explosion of programming languages, such as Golang and Swift, and it’s no wonder a developer today could be forgiven for being confused about the right place to start. In this session we will look at how cloud and large-scale distributed systems problems are influencing the research behind, and the evolution of, application development stacks and frameworks. We will describe some of the fundamental research challenges in areas including reactive programming, serverless, fault tolerance, and multi-tenancy.

Bio

Mark Little leads the technical direction, research, and development for Red Hat JBoss Middleware. Prior to taking over this role in 2008, Mark served as the SOA technical development manager and director of standards. Additionally, Mark was a chief architect and co-founder at Arjuna Technologies, a spin-off from HP, where he was a Distinguished Engineer. He has worked on reliable distributed systems since the mid-80s and holds a PhD in fault-tolerant distributed systems, replication, and transactions. Mark is also a professor at Newcastle University and Lyon University.

Agenda

  • 11:30 AM – 12:00 PM: Pizza & Networking
  • 12:00 – 1:00 PM: Talk and Discussion

Questions?

Contact the Collaboratory with any questions you may have about this event.

Recording of Event

This talk was held as scheduled. A recording can be accessed here. Slides can be accessed here.

Boston University ranked among Most Innovative Universities in US News & World Report

By Mairin Duffy, September 11th, 2018, in News
Jeff Costello (ENG’17) (from left), Jason Yung (ENG’17), and Dong Hoon Kim (ENG’19) at the Engineering Product Innovation Center (EPIC). Photo by Jackie Ricciardi courtesy BU Today
BU Today posted a story on the new US News & World Report education rankings, which place Boston University at #28 on the “Most Innovative Schools” list in the US. The Red Hat Collaboratory was cited as one of the programs in BU’s innovation engine that led to this ranking. BU was one of only 36 universities, from the full list of 301 schools, to make the Most Innovative Schools listing. Read the full story at BU Today: “US News Lists BU among Most Innovative Universities.”

Colloquium: Towards Tail Latency-Aware Caching in Large Web Services

Red Hat Collaboratory at Boston University Colloquium

Daniel S. Berger

2018 Mark Stehlik Postdoctoral Fellow in the Computer Science Department at Carnegie Mellon University

Towards Tail Latency-Aware Caching in Large Web Services

Abstract

Tail latency is of great importance in user-facing web services. However, achieving low tail latency is challenging, because typical user requests result in multiple queries to a variety of complex backends (databases, recommender systems, ad systems, etc.), and a request is not complete until all of its queries have completed. In this talk we present our findings for the case of several large web services at Microsoft. We analyze production system request structures and find that requests vary greatly in the backends that they access and in the number of queries made to each backend. Furthermore, we find that backend query latencies vary by more than two orders of magnitude across backends and vary widely over time, resulting in high request tail latencies. This talk proposes a novel solution for maintaining low request tail latency: repurposing existing caches to mitigate the effects of backend latency variability. Our solution, RobinHood, dynamically reallocates cache resources from the cache-rich (backends which don’t affect request latency) to the cache-poor (backends which do affect request latency). We evaluate RobinHood with production traces on a 50-server cluster with 20 different backend systems. We find that, in the presence of load spikes, RobinHood meets a 150ms SLO 99.7% of the time, whereas the next best policy meets this SLO only 70% of the time. The team working on this project includes Benjamin Berg (CMU), Timothy Zhu (Penn State), Mor Harchol-Balter (CMU), and Siddhartha Sen (MSR). The work will appear at USENIX OSDI 2018.
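The cache-rich/cache-poor reallocation idea can be pictured as a simple control loop: periodically tax every backend a small fraction of its cache, then hand the pooled capacity to the backends that most often hold up tail requests. The sketch below is a simplified illustration only, not the actual RobinHood implementation (the OSDI paper’s controller tracks which backend produced the slowest query of each slow request over a sliding window); the function name, tax rate, and data shapes here are hypothetical.

```python
def robinhood_round(cache_mb, blocking_count, tax_rate=0.01):
    """One reallocation round of a RobinHood-style controller (illustrative).

    cache_mb:       dict mapping backend -> current cache allocation (MB)
    blocking_count: dict mapping backend -> how often that backend produced
                    the slowest query in a slow (tail) request
    """
    # Tax every backend a small, fixed fraction of its cache.
    pool = 0.0
    for b in cache_mb:
        tax = cache_mb[b] * tax_rate
        cache_mb[b] -= tax
        pool += tax
    total = sum(blocking_count.values())
    if total == 0:
        # No tail pressure observed: return the tax evenly.
        for b in cache_mb:
            cache_mb[b] += pool / len(cache_mb)
        return cache_mb
    # Redistribute the pooled capacity to the "cache-poor": backends
    # that most often delay the completion of tail requests.
    for b in cache_mb:
        cache_mb[b] += pool * blocking_count.get(b, 0) / total
    return cache_mb

# Example: the database backend dominates tail latency, so it gains
# cache at the expense of the ad backend; total capacity is conserved.
alloc = robinhood_round({"db": 100.0, "ads": 100.0}, {"db": 9, "ads": 1})
```

Because each round moves only a small fraction of capacity, the controller adapts gradually as the latency-critical backend shifts over time, which matches the paper’s observation that query latencies vary widely across backends and over time.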

Bio

Daniel S. Berger is the 2018 Mark Stehlik Postdoctoral Fellow in the Computer Science Department at Carnegie Mellon University. His research interests intersect systems, mathematical modeling, and performance testing. Daniel’s research explores how caching can be used to reduce tail latency in large web services and CDNs. Daniel received his Ph.D. (2018) from the University of Kaiserslautern, Germany, and has spent extended visits at CMU (2015-2017), Warwick University (2014), T-Labs Berlin (2013), ETH Zurich (2012), and the University of Waterloo (2011). Previously, Daniel worked as a data scientist at the German Cancer Research Center (2008-2010) and as a project scientist at CMU (2017-2018).

Agenda

  • 11:30 AM – 12:00 PM: Pizza & Networking
  • 12:00 – 1:00 PM: Talk and Discussion

Questions?

Contact the Collaboratory with any questions you may have about this event.

Recording of Event

This talk was held as scheduled. A recording can be accessed here. Slides can be accessed here.

Intern Presentation Series: Container Verification Pipeline, Dataverse/Solr, and Design/Marketing

Every Friday for the past month, Red Hat interns in the Boston office have come together to present their work. Below is a recap of the following presentations: Container Verification Pipeline, Dataverse/Solr, and Design/Marketing.

Container Verification Pipeline

The first presentation was given by Lance Galletti and covered his work on the Container Verification Pipeline (CVP). This summer, Lance has been developing a tool to automate container testing and a dashboard to view container performance metrics. The purpose of the project is to empower developers with the data and resources they need to make informed decisions throughout the development process.
Lance Galletti

Dataverse / Solr

The second presentation was given by Charles Thao and Tommy Monson on Dataverse and Solr. Dataverse is an open-source platform to publish, cite, and archive research data. It is a powerful tool for data storage and research, as it supports multiple types of data, users, and workflows. This summer, Charles and Tommy have been working on improving Dataverse’s security and redesigning the Dataverse installation for containers. The duo has also been working on scaling Solr, an open-source enterprise search platform.
Charles Thao

Design/Marketing

The last presentation was given by Fiona Whittington and Grace Colbert, who work on a variety of projects for Red Hat’s global Executive Briefing Center (EBC). Specifically, the duo is responsible for delivering graphic design and marketing support for internal and external projects within the EBC. Currently, Grace and Fiona are finishing a semester-long project to develop an interactive web application for the office that will be demoed at the Grace Hopper Celebration in September. View more information about internship opportunities at the Collaboratory.

From Intern to Employee: A Feature Interview with Urvashi Mohnani, Associate Software Engineer at Red Hat

Urvashi Mohnani, Associate Software Engineer at Red Hat, is a regular contributor to the open source community and Red Hat technology. As someone who entered college with no computer science experience, Urvashi never envisioned herself as a programmer. “I joined Red Hat about a year ago as an intern. I graduated with a Bachelor’s in Electrical and Computer Engineering and found my passion in software by the end of my senior year,” she said. “I didn’t have much industry experience in software engineering when I started at Red Hat, but my mentor and supervisor, Dan Walsh, was really helpful in getting me up to speed with Red Hat container technologies like CRI-O and OpenShift.”
Photo by Fiona Whittington
When it came to transitioning into a full-time position, Urvashi chose Red Hat because of the open culture and strong community. “Open source has helped me feel confident and supported because I have a community to collaborate with. I’m never stuck wondering about questions like: Can I tell them about this? Can I do that? It’s all out in the open.” During her time at Red Hat, Urvashi has been involved in contributing to the ChRIS project, an opportunity made possible by the Red Hat Boston University Collaboratory.
Photo by Fiona Whittington
“The ChRIS project is an initiative to cut down the time radiologists spend staring at images every day, so that patients can receive instantaneous results,” she said. “Open source is great because you can share and get help from the community. It’s nice not to be restricted to only a few developers on a team.” When she isn’t contributing directly to Collaboratory projects, Urvashi works with the OpenShift Runtimes team to improve products like Podman, Buildah, CRI-O, and Skopeo. “These are basically products for image building, running containers, and container debugging,” she explained. “The goal is to make our products the default.”
Photo by Fiona Whittington
In the future, Urvashi hopes to get her MBA and continue developing her career at Red Hat, her main motivation being to give back to underprivileged children in her home country of Ghana. If you want to learn more about internship and career opportunities at Red Hat, visit their job portal starting in August.