15 Automating Experiments
This chapter focuses on automating profiles and experiments. By automating
your experiments and building your profiles in a more structured fashion,
you can be a more conscientious Powder user.
15.1 Portal API
Powder provides a remote API for interacting with experiments: getting status, creating, modifying, connecting, capturing disk images, and terminating.
We provide a Python client (interactive tool and library): see https://gitlab.flux.utah.edu/stoller/portal-tools. Its README file provides comprehensive instructions on usage, and the repository also contains an API description.
The Portal API is under active development, so please let us know (see the Getting Help section) if an existing feature you need is not yet exposed in the API.
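To get started, you might simply clone the client and read its README (ordinary git usage; the client’s own commands and credential setup are documented in the repository):
git clone https://gitlab.flux.utah.edu/stoller/portal-tools.git
cd portal-tools
less README.md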
15.2 Runtime Configuration: Ansible
If, for example,
you want to reuse building blocks from other profiles, or
you want to share your configuration as a reusable artifact,
then you should consider using a configuration management tool to configure the resources in your experiments. (You might further consider using a workflow engine such as StackStorm, or a test automation framework such as Robot Framework, as discussed in the following sections.)
To assist with configuration automation, we provide platform-level
integration with Ansible,
a widely-used and accessible configuration management tool. Ansible
provides a built-in standard library of configuration modules for nearly any
common UNIX-like task, and many more are available to integrate with other
systems and software. If you haven’t used a configuration management tool
before, adopting one may seem like extra work, but in addition to a standard
library of configuration helpers, Ansible tasks have common error-handling
support and only make and apply changes when necessary, which keeps repeated
configuration runs idempotent. To help you use Ansible in your experiments,
we provide:
Ansible extensions to geni-lib that allow you to declare Ansible roles, collections, and playbooks; bind Ansible roles to nodes in your profile; and map profile parameters to Ansible variables to override default values.
a small bootstrapping repository, designed to be included in a profile as a git submodule, that includes basic scripts to bootstrap Ansible on one of your experiment’s nodes, and automatically apply Ansible configuration as described in your profile to those nodes.
an emulab-common Ansible collection, which includes plugins, playbooks, and common roles that expose Powder profile and runtime configuration as Ansible facts and provide playbooks for common configuration tasks. This collection’s playbooks can also parse the experiment manifest (description of allocated resources, effectively an annotated RSpec), extracting the Ansible roles, playbooks, and overrides; and automatically generate (and run) shell scripts that themselves install the roles and collections defined in the profile and run playbooks to configure nodes.
15.2.1 Ansible extensions to geni-lib
These extensions add several abstractions to your profile’s geni-lib script:
Roles group related tasks, template files, vars, etc. into a well-known directory structure. Roles are typically bound to nodes in an inventory, and often include simple playbooks that execute the role’s tasks on associated nodes.
Playbooks list configuration tasks. A playbook is typically associated with a role to pull in per-role tasks, but it can also function as a standalone task list.
RoleBindings bind previously-declared Roles to nodes.
Overrides bind profile parameters to Role variables. Variables are the typical way to configure the details of how a role configures a node. The emulab-common Ansible collection contains code to automatically generate per-role and per-node override value files, so that when you create an experiment, parameter values can be automatically mapped to Ansible configuration.
(We attempt to support the most common Ansible usage patterns with this set of profile extensions, but there are many ways to perform a given configuration with Ansible, so we do not support every possible pattern.)
15.2.2 Bootstrapping Ansible in Experiments
Our small Ansible bootstrapping repository can be included as a git submodule in a git-based profile. It provides a script that can be used as your experiment’s startup command. This script installs Ansible (either from Linux distribution packages or into a Python virtualenv via pip; you can specify a particular version); extracts the Ansible abstractions from your experiment’s manifest; autogenerates shell wrappers (which we call "entrypoints") that run the necessary Ansible playbooks; and runs the wrappers. You can customize this script’s behavior via environment variables, and it is easy to fork and modify if necessary.
To add this as a submodule to an existing or new profile’s git repository, simply run the following command, likely in your repository’s top-level directory:
git submodule add https://gitlab.flux.utah.edu/emulab/ansible/emulab-ansible-bootstrap.git
then add and commit the new submodule (an example sequence follows). Next, follow the instructions in the shim’s README.md to run the shim at experiment runtime. To summarize: you will choose a single node in your experiment to act as the "head" or "manager" node. Use the emulab-ansible-bootstrap/head.sh script as the startup script on the "head" node, and the emulab-ansible-bootstrap/client.sh script on the managed client nodes.
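For instance, a routine git sequence to record the submodule in your repository is:
git add .gitmodules emulab-ansible-bootstrap
git commit -m "Add the emulab-ansible-bootstrap shim as a submodule"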
You can configure the shim to install Ansible in different ways (e.g., using system packages, or choosing a specific version), but the default is to install Ansible==4.0.0 (see the declaration of ANSIBLE_VERSION) in a Python virtualenv, located on your head node in /local/setup/venv/default. If you ever need to run ansible-playbook or any of the other scripts manually, you would run (in the bash shell):
. /local/setup/venv/default/bin/activate
ansible -m ping localhost
deactivate
If your profile uses the geni-lib Ansible extensions to bind Roles and Playbooks to nodes (and/or Overrides to profile parameters), the shim generates an Ansible inventory for you in /local/setup/ansible/inventory.ini. This inventory contains an all group that lists all nodes in your experiment, as well as per-role groups containing the nodes that were bound to Roles. Any profile parameters that were associated with Overrides in your profile are written into /local/setup/ansible/vars; host- and group-specific variables, if any, are written into per-host and per-group files in the /local/setup/ansible/host_vars and /local/setup/ansible/group_vars directories.
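Once the shim has run, you can sanity-check the generated inventory from the head node with an ad-hoc Ansible ping (a minimal sketch; it assumes the default virtualenv location described above and that the shim has already set up access to the client nodes):
. /local/setup/venv/default/bin/activate
ansible -i /local/setup/ansible/inventory.ini all -m ping
deactivate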
Finally, the shim generates shell script wrappers (/local/setup/ansible/entrypoints/*.sh) that run each Ansible playbook defined in your profile, and a top-level driver script (/local/setup/ansible/run-automation.sh) that runs them in the order listed in the profile. The per-playbook driver scripts ("entrypoints") simply enter the proper Python virtualenv and run the playbook via ansible-playbook with the proper become and overrides arguments, as defined in your profile.
(The top-level automation driver script will run on your "head" node automatically, unless you define EMULAB_ANSIBLE_NOAUTO=1 in the profile startup command that runs head.sh from the shim.)
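If you set EMULAB_ANSIBLE_NOAUTO=1, or simply want to re-apply the configuration later, you can run the generated scripts yourself on the head node (whether you need sudo depends on what your playbooks do):
# run all playbooks in the order listed in the profile
/local/setup/ansible/run-automation.sh
# or list the per-playbook wrappers and run one individually
ls /local/setup/ansible/entrypoints/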
15.2.3 Automating Ansible Configuration in Experiments
The emulab-common Ansible collection provides plugins and a small library of useful roles that help bridge the gap between a profile, a Powder experiment’s physical and virtual resources, and Ansible roles and playbooks. Its key modules include:
emulab.common.gather_manifest_facts: obtains the geni-lib XML manifest for the experiment in which Ansible is running, and parses and converts it into a dictionary named geni_manifest within the global ansible_facts dictionary.
emulab.common.gather_emulab_facts: contextualizes the logical names of resources you provided in your profile within the physical and virtual resources in your experiment, and provides mappings from one to the other. It places these values into the global ansible_facts dictionary, and each key inserted is prefixed with emulab_ (e.g., "emulab_controlif": "eth0").
emulab.common.generate_emulab_automation: generates shell wrapper scripts that run the Ansible playbooks specified in your profile (a driver script that runs all playbooks in series, /local/setup/ansible/run-automation.sh, and per-playbook scripts in /local/setup/ansible/entrypoints). You can re-run this module manually to regenerate the automation files if they have been modified.
When you call one or both of the first two modules, they add more information to the global ansible_facts dictionary, and you can use these facts in playbooks, task files, and templates. For instance, suppose you want to add a task to run iperf in server mode, but only on a particular experiment network. Assuming your node is named node-0 and is a member of a LAN named lan-0, you can find the IP address to listen on by referencing the variable ansible_facts["emulab_topomap"]["nodes"]["node-0"]["lans"]["lan-0"]. If you need the network mask for the lan-0 network, you can find it via ansible_facts["emulab_topomap"]["lans"]["lan-0"]["mask"]. If you need node-0’s FQDN, you can find it in ansible_facts["emulab_fqdn"].
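As a rough sketch, such a task could be driven from the shell on the head node as follows (this assumes the emulab.common collection is installed, that the generated inventory names hosts by their profile node names, and that iperf is already installed on node-0; node-0 and lan-0 are just the example names used above):
. /local/setup/venv/default/bin/activate
cat > /tmp/iperf-server.yml <<'EOF'
- hosts: node-0            # example node name from the text above
  tasks:
    # populate the emulab_* facts described above
    - emulab.common.gather_emulab_facts:
    # bind the iperf server to node-0's address on the experiment LAN lan-0
    - name: Start iperf in server (daemon) mode on lan-0
      ansible.builtin.command: >
        iperf -s -D
        -B {{ ansible_facts["emulab_topomap"]["nodes"]["node-0"]["lans"]["lan-0"] }}
EOF
ansible-playbook -i /local/setup/ansible/inventory.ini /tmp/iperf-server.yml
deactivate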
Next, the collection provides several playbooks, each associated with a role. These include basic emulab.common.* roles that build on the emulab and manifest facts described above (for example, to look up a node’s IP address and device on a particular LAN), as well as roles for common configuration tasks such as ssl, nfs, docker, lvm, and rootfs.
15.2.4 Example: workflow-manager profile
Our workflow-manager profile (source code here) provides custom roles that install StackStorm and KiwiTCMS; these roles also use roles from the emulab-common collection. Specifically, the profile’s geni-lib script:
declares two roles, stackstorm and kiwitcms, each defined in the repository’s ansible/ subdirectory and associated with a playbook there, and
declares a number of parameters, and binds some to Ansible overrides.
The stackstorm and kiwitcms roles in this profile demonstrate a traditional use of Ansible.
15.2.5 Example: Kubernetes profile
We provide a fully Ansible-based version of our standard Kubernetes profile (source code here). All configuration in this profile is performed via Ansible playbooks, roles, and tasks, once the bootstrapping shim has ensured that Ansible is present on the designated experiment node. Like the standard Kubernetes profile, this profile uses the Kubespray installer at its core.