Since RDO and Red Hat OpenStack were announced at the Spring 2013 OpenStack Summit, they have arguably become two of the most popular ways to install OpenStack. Both are built on the puppet-openstack modules, and they are just a sampling of the OpenStack installers based on Puppet.

While the modules were created here at Puppet Labs, earlier this year we released them to the community as a set of projects in Stackforge. The vibrant community of contributors who keep making the modules more awesome continues to grow.

However, you may not know that the modules are useful beyond installation. They can be used for complete lifecycle management of OpenStack: initial installation, scaling out a cluster, managing resources, and providing a framework for migrating with confidence to new releases.

Installation

If you’ve ever deployed OpenStack from source, you’re familiar with the multitude of options that every project offers. The nova.conf file has hundreds of options across dozens of deployment targets. It can be difficult to know which settings are required, which have reasonable defaults, and what combinations of settings will conflict with each other.

The puppet-openstack modules provide an API for reliably configuring an OpenStack deployment, and help to identify essential settings and manage their complex interdependencies. For example, the init.pp file for Nova collects all the required parameters for setting up Nova, with reasonable defaults in place. Keystone authentication and database configuration are separated out, giving you the tools to make sure that both Nova and the services it depends on are configured properly and consistently. The selection and configuration of hypervisor, networking model, queueing backend, and available APIs are broken out into manageable pieces, giving you complete and reliable control over the configuration of your deployment. Instead of trying to tease out the meaning and usefulness of hundreds of configuration options, you can leverage the expert knowledge of the module developers who have captured that information in a reusable and maintainable way.
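
To make that concrete, here is a minimal sketch of what configuring Nova through the modules might look like. The class names come from the puppet-nova module, but the specific parameters and values shown are illustrative placeholders and may differ between the Grizzly and Havana releases:

    # Declare Nova through the puppet-nova classes instead of hand-editing
    # nova.conf. Hostnames and passwords here are placeholders.
    class { '::nova':
      sql_connection     => 'mysql://nova:secret@192.168.0.10/nova',
      rabbit_host        => '192.168.0.10',
      rabbit_password    => 'secret',
      glance_api_servers => '192.168.0.10:9292',
    }

    # The API service and its Keystone credentials are configured separately.
    class { '::nova::api':
      enabled        => true,
      admin_password => 'secret',
      auth_host      => '192.168.0.10',
    }

    # The compute service (and hypervisor selection) is likewise broken out.
    class { '::nova::compute':
      enabled => true,
    }

Each class owns a coherent slice of nova.conf, so the interdependencies described above are handled by the modules rather than by hand.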

For an example of how you can do a basic multi-node installation of OpenStack, take a look at the puppetlabs-grizzly and puppetlabs-havana (currently under development) modules. Based on a four-node model of controller, networking, storage, and compute nodes, they build out the system by capturing deployment architecture decisions such as the networking model, storage backend, and messaging queue. The modules also collect system-specific details such as the network CIDR, passwords, and controller IP address in a single user-provided Hiera database.
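
As a rough sketch, classifying those four nodes might look something like the following site.pp. The role class names here are illustrative assumptions rather than the exact classes shipped in puppetlabs-grizzly, so check the module documentation for the real entry points:

    # Hypothetical node classification for the four-node model.
    node 'control.example.com'   { include ::grizzly::role::controller }
    node 'network.example.com'   { include ::grizzly::role::network }
    node 'storage.example.com'   { include ::grizzly::role::storage }
    node 'compute01.example.com' { include ::grizzly::role::compute }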

Management

For management of your OpenStack cluster, you get two benefits. First, the Grizzly and Havana modules allow you to define specific roles for nodes, gathering node-specific information from Facter and keeping cluster-specific information within the modules. In combination with OpenStack’s built-in discovery and scaling capabilities, this allows you to scale out your cloud just by assigning roles to new hardware as needed.

If your private cloud services catch on and VMs become oversubscribed, adding new compute nodes is as simple as racking a new node and assigning the compute role to it. It’s fast, easy, and scalable. The same goes for storage: once you’ve added storage hardware to your system, you just need to assign it the storage role to bring it all online.
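
One way to make that rack-it-and-assign-the-role workflow automatic is to classify nodes by hostname pattern. This is an assumption about how you might organize your site.pp, not something the modules require, and the role class names remain illustrative:

    # Any newly racked host matching the pattern picks up its role on the
    # first Puppet run; Facter supplies the node-specific details.
    node /^compute\d+\.example\.com$/ { include ::grizzly::role::compute }
    node /^storage\d+\.example\.com$/ { include ::grizzly::role::storage }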

The second benefit is that the puppet-openstack modules give you the ability to manage OpenStack-specific resources with a large collection of types and providers. Examples include users, tenants, and roles for Keystone, routers for Neutron, and cells for Nova. These resources, typically managed through command line calls or by interacting with the Horizon interface, can instead be created, updated, and removed directly within Puppet manifests.

Essential resources in your OpenStack cluster are documented and defined declaratively, rather than in an ad hoc, interactive way. This approach is less error-prone and easier to update, and it gives you continuous documentation of what’s happening in your system. For an example of this in practice, take a look at the user and tenant resources in puppetlabs-havana and an example of a user maintained in a Hiera database.
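
For instance, here is a minimal sketch of managing a tenant, a user, and a role assignment with the keystone_tenant, keystone_user, and keystone_user_role types from puppet-keystone. The names, password, and parameter details are example values only, and in the puppetlabs-havana module they would typically be fed from Hiera:

    # Declaratively managed Keystone resources; example values only.
    keystone_tenant { 'engineering':
      ensure      => present,
      enabled     => true,
      description => 'Engineering tenant',
    }

    keystone_user { 'jdoe':
      ensure   => present,
      enabled  => true,
      tenant   => 'engineering',
      password => 'changeme',
      email    => 'jdoe@example.com',
    }

    keystone_user_role { 'jdoe@engineering':
      ensure => present,
      roles  => ['Member'],
    }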

Migration

OpenStack has a very rapid development cycle, with new major releases arriving every six months. While this momentum is fantastic for bringing new features and projects to the system, an unfortunate side effect is that migration between major versions becomes a hard problem. The puppet-openstack modules don’t directly offer a migration path yet, but they can make your migration easier. Here’s how:

  • To start with, your current deployment is documented within your manifests. That means you can reuse the information captured in those manifests to configure your new installation.
  • Your updated manifests are supported by a stable API. The development team makes API changes only when it’s absolutely necessary to retain parity with the current OpenStack release, and makes a point of clearly calling out deprecated and unsupported features in the documentation. As much as possible, we try to abstract away major architecture changes and leave a stable configuration API.
  • Puppet makes it easy to bring up a test cluster alongside an existing OpenStack cluster. You can perform a database migration against a backup of your working system, using the database migration tools provided by the OpenStack core teams, then gradually transition from the old release to the new one by swapping old node descriptions for new ones, as sketched below.
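
As a sketch of that last step, a gradual cutover can be expressed simply by which role classes each node includes. The role class names below are illustrative assumptions, not the exact classes in the modules:

    # compute01 is still managed by the Grizzly roles; compute02 has been
    # reinstalled and migrated to the Havana equivalents.
    node 'compute01.example.com' { include ::grizzly::role::compute }
    node 'compute02.example.com' { include ::havana::role::compute }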

A comparison of the puppetlabs-grizzly and puppetlabs-havana modules themselves shows you what an upgrade looks like. You can see the similarities between the configuration variables, as well as the essential differences and upgrades. It’s entirely possible to run two full OpenStack deployments from different releases, all captured by the same configuration management system.

So go ahead, take a look at the puppet-openstack modules, and the Grizzly and Havana deployments based on them. You can also get involved in the development over at Stackforge and Launchpad. Engage directly with the community on our mailing list and on IRC at #puppet-openstack. We look forward to seeing you!

About the Author

Chris Hoge is the Senior Software Engineer for OpenStack at Puppet Labs. Previously, he managed a research OpenStack cluster at the University of Oregon. When his head isn’t up in the cloud, he enjoys long-distance running, salsa dancing, and throwing a ball for his endlessly energetic Border Collie.
