On April 6, 2018, the Collaboratory hosted a roundtable session on bare metal provisioning. The focus of this roundtable was twofold:
- To create a roadmap for upstreaming the secure elastic bare metal provisioning capabilities of HIL, M2 (formerly BMI), and Bolted. These projects are used in and developed by the MOC, and they provide important features for operating clouds that offer bare metal to untrusted tenants. The goal of upstreaming is to integrate them into upstream projects that Red Hat supports, or into other projects, to secure a long-term home for them.
- To define research areas around elastic bare metal provisioning that the BU/RH Collaboratory should consider funding in the coming year.
The participants at this roundtable included representatives from the following projects:
What follows are notes from the roundtable discussions.
Ansible Networking / HIL merge
This seems like an obvious move and we should proceed with it ASAP. Three interesting points were raised by the Ansible Networking team during the roundtable:
- It appears that the work to make HIL use Ansible Networking (using Ansible Runner, etc.) will be very similar to the work to make an ML2 driver for Ansible Networking, and we should therefore collaborate closely on that effort.
- Ansible Networking lacks a thin, stateful API for operators who want to use it for ongoing management of their networks. HIL could become that API and might well gain a lot of upstream uptake if we position it correctly and make some noise about it.
- Once HIL uses Ansible Networking, most of the development challenges for HIL disappear; it becomes a narrow, maintainable API.
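The "thin, stateful API" point above can be sketched as follows. This is a hypothetical illustration only; neither HIL nor Ansible Networking necessarily exposes this interface. The idea is that the API tracks desired attachments over time, which stateless playbooks cannot do, and renders each change as a task description that a driver could hand to Ansible Networking (e.g. via Ansible Runner).

```python
class NetworkState:
    """Tracks which node NICs are attached to which networks."""

    def __init__(self):
        self._attachments = {}  # (node, nic) -> network

    def connect(self, node, nic, network):
        """Record an attachment and return the task a driver would run."""
        self._attachments[(node, nic)] = network
        return {"task": "attach", "host": node, "interface": nic,
                "network": network}

    def detach(self, node, nic):
        """Forget an attachment and return the corresponding teardown task."""
        network = self._attachments.pop((node, nic))
        return {"task": "detach", "host": node, "interface": nic,
                "network": network}

    def attachments(self):
        """Expose current state, the part one-shot playbook runs lack."""
        return dict(self._attachments)
```

The operator talks only to this small surface; everything vendor-specific stays behind the Ansible Networking modules.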
We need to determine next steps on how to work together on this.
Keylime and attestation
We believe Keylime could fill a very useful niche upstream, especially in that the only project that rivals it — Intel’s OpenCIT — appears to be open-source in name only and probably not truly open to outside contributors. Therefore, as a group we should devote some effort to packaging Keylime, building a CI infrastructure around it, and generally making it reasonable to use in a production environment. We should also:
- Consider whether it should find a home with an existing upstream project like Katello (part of Satellite) or FreeIPA (Red Hat identity management project)
- Do the work necessary to integrate it with the Fedora early boot components
- Examine — as a research project? — how to integrate attestation via Keylime with the Ironic state machine
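The last bullet, integrating attestation with the Ironic state machine, could look roughly like this. The sketch is hypothetical: the state names only loosely mirror Ironic's (available, deploying, active), the "attesting" state is invented for illustration, and in a real integration the result would come from a Keylime verifier callback rather than a boolean flag.

```python
def next_state(state, attested=False):
    """Advance one step through a simplified provisioning lifecycle."""
    transitions = {
        "available": "deploying",  # operator requested a deploy
        "deploying": "attesting",  # image is on disk; verify before use
    }
    if state == "attesting":
        # Gate the final transition on the attestation result.
        return "active" if attested else "attest failed"
    return transitions.get(state, state)
```

The point of the research question is exactly where such a gate belongs in Ironic's real, much richer state machine, and how failures should be surfaced.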
We discussed the general undesirability of making Ironic into a CMDB and of having a single (even if HA) Ironic instance manage an entire data center. A good solution would be to work on federating Ironic, so that a large installation with a single CMDB could still use federated Ironic instances to manage provisioning for its various pieces.
This idea needs to be fleshed out further but the Ironic folks in attendance were enthusiastic about it.
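A minimal sketch of the federation idea, with all names and endpoints invented for illustration: one CMDB owns the inventory, and each provisioning request is routed to whichever Ironic instance manages that slice of the data center.

```python
class FederatedProvisioner:
    """Route provisioning requests using a central CMDB."""

    def __init__(self, cmdb, ironic_endpoints):
        self._cmdb = cmdb                  # node -> segment, e.g. "row-3"
        self._ironics = ironic_endpoints   # segment -> Ironic API endpoint

    def endpoint_for(self, node):
        """Look up which Ironic instance should handle this node."""
        return self._ironics[self._cmdb[node]]
```

The open questions are the interesting part: how segments are defined, how nodes move between them, and how much state each federated Ironic keeps versus the CMDB.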
FLOCX and blockchain/smart contracts
This seems like a particularly wild idea, but a couple of folks raised it: given a central CMDB of hardware resources, a blockchain-based implementation of the FLOCX pricing model, where donating hardware or resources earns credits for usage over time. Could this be a research project going forward?
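The credit mechanic behind that idea can be sketched without any blockchain machinery. This is a hypothetical illustration of a FLOCX-style model, not anything the project has specified: a real design might record these entries via smart contracts, while a plain append-only list stands in for the ledger here.

```python
class CreditLedger:
    """Append-only ledger: donations earn credits, usage spends them."""

    def __init__(self):
        self._entries = []  # append-only, like blocks on a chain

    def donate(self, owner, node_hours, rate=1.0):
        """Credit an owner for hardware contributed to the pool."""
        self._entries.append((owner, node_hours * rate))

    def consume(self, owner, node_hours, rate=1.0):
        """Spend credits to use resources; refuse to overdraw."""
        cost = node_hours * rate
        if self.balance(owner) < cost:
            raise ValueError("insufficient credits")
        self._entries.append((owner, -cost))

    def balance(self, owner):
        return sum(amount for who, amount in self._entries if who == owner)
```

The research questions are in the parts elided here: how rates are set, how the ledger is trusted across institutions, and whether a blockchain actually buys anything over a trusted central CMDB.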
A major project, independent of the rest, is the need for an inventory system that everything can use.
M2/BMI and Foreman
M2/BMI and Ironic are focused on somewhat different use cases. M2/BMI targets pets (network-booted instances with persistent state) and lets users deploy their own instances; Ironic targets cattle and a single-datacenter deployment.
As it turns out, Foreman has the need for a lightweight boot-from-volume management tool, including possible integration with attestation steps. We should investigate whether M2 could find a long-term home in or near the Foreman project, and what would need to happen to M2 and Foreman to make that possible. This could provide a long-term stateful app (“pet”) model that would be valuable both to the Red Hat scalability lab and to other projects doing true CI/CD and scalability tests.
HIL and Ironic
Another key project is to add a HIL driver to Ironic — this could be handy for a case where a full undercloud (with e.g. Neutron and Nova) is not present and Ironic is used standalone.
CI/CD for OpenStack
Finally, we would like to carve out CI/CD environments for OpenStack from the NECI Lab moving forward. We need to talk about how to make that possible, and equitable for NECI.
Questions? Feedback? Want to get involved?
You are most welcome to contact us with any questions or feedback about this event, or for help getting started working on these projects.