Service APIs

Base Service Class

class enoslib.service.service.Service

Bases: object

A service is a reusable piece of software.

backup()
deploy()
destroy()
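
A service bundles these three operations behind a common interface. A minimal sketch of a custom service (hypothetical, for illustration only):

from enoslib.service.service import Service

class MyService(Service):
    """A hypothetical service illustrating the interface."""

    def __init__(self, nodes):
        self.nodes = nodes

    def deploy(self):
        # install and start the software on self.nodes
        pass

    def backup(self):
        # retrieve any artifacts produced on self.nodes
        pass

    def destroy(self):
        # stop the software and clean up self.nodes
        pass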

Docker

Docker Service Class

class enoslib.service.docker.docker.Docker(*, agent=None, registry=None, registry_opts=None, bind_volumes='/tmp/docker/volumes')

Bases: enoslib.service.service.Service

Deploy docker agents on the nodes and optionally a registry cache.

This assumes a debian/ubuntu base environment and aims at providing a quick way to deploy docker, and optionally a registry, on your nodes.

Examples

# Use an internal registry on the first agent
docker = Docker(agent=roles["agent"])

# Use an internal registry on the specified host
docker = Docker(agent=roles["agent"],
                registry=roles["registry"])

# Use an external registry
docker = Docker(agent=roles["compute"] + roles["control"],
                registry_opts = {"type": "external",
                                 "ip": "192.168.42.1",
                                 "port": 4000})
from enoslib.service import Docker
from enoslib.api import discover_networks
from enoslib.infra.enos_vagrant.provider import Enos_vagrant
from enoslib.infra.enos_vagrant.configuration import Configuration

import logging
import os

logging.basicConfig(level=logging.INFO)

conf = Configuration()\
       .add_machine(roles=["control"],
                    flavour="tiny",
                    number=1)\
       .add_machine(roles=["compute"],
                    flavour="tiny",
                    number=1)\
       .add_network(roles=["mynetwork"],
                    cidr="192.168.42.0/24")\
       .finalize()

# claim the resources
provider = Enos_vagrant(conf)
roles, networks = provider.init()

# generate an inventory compatible with ansible
roles = discover_networks(roles, networks)

docker = Docker(registry=roles["control"], agent=roles["compute"])
docker.deploy()
docker.backup()
docker.destroy()

# destroy the boxes
provider.destroy()
Parameters:
  • agent (list) – list of enoslib.Host where the docker agent will be installed
  • registry (list) – list of enoslib.Host where the docker registry will be installed.
  • registry_opts (dict) – registry options. The dictionary must comply with the schema.
  • bind_volumes (str) – if set, the default volume directory (/var/lib/docker/volumes) will be bind-mounted in this directory. The rationale is that on Grid’5000 there isn’t much disk space in /var/lib by default. Set it to False to disable the bind mount and keep the default location.
SCHEMA = {'oneOf': [{'type': 'object', 'properties': {'type': {'const': 'external'}, 'ip': {'type': 'string'}, 'port': {'type': 'number'}}, 'additionalProperties': False, 'required': ['type', 'ip', 'port']}, {'type': 'object', 'properties': {'type': {'const': 'internal'}}, 'additionalProperties': False, 'required': ['type']}, {'type': 'object', 'properties': {'type': {'const': 'none'}}, 'additionalProperties': False, 'required': ['type']}]}
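
For reference, here are registry_opts values matching each branch of the schema above (a sketch: the shapes mirror the schema, the address and port are placeholders):

# external registry reachable at a known address
registry_opts = {"type": "external", "ip": "192.168.42.1", "port": 4000}

# internal registry deployed by the service itself
registry_opts = {"type": "internal"}

# no registry cache at all
registry_opts = {"type": "none"}
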
backup()

Backup docker.

Not implemented yet: it is unclear what should be backed up here.

deploy()

Deploy docker and optionally a docker registry cache.

destroy()

Destroy docker.

Not implemented yet: it is unclear what should be destroyed here.

Dstat (monitoring)

Dstat Service Class

class enoslib.service.dstat.dstat.Dstat(*, nodes: List[enoslib.host.Host], options: str = '', remote_working_dir: str = '/builds/dstat', priors: List[enoslib.api.play_on] = [<enoslib.api.play_on object>, <enoslib.api.play_on object>])

Bases: enoslib.service.service.Service

Deploy dstat on all hosts.

This assumes a debian/ubuntu base environment and aims at providing a quick way to deploy a simple monitoring stack based on dstat on your nodes. It’s opinionated out of the box but allows for some convenient customizations.

dstat metrics are dumped into a csv file by default (-o option) and retrieved when backing up.

Parameters:
  • nodes – the nodes to install dstat on
  • options – options to pass to dstat.
  • priors – priors to apply

Examples
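
A minimal sketch (assuming roles has been obtained from a provider as in the other examples on this page, and that Dstat is importable from enoslib.service like the other services; the -m option and the backup directory name are illustrative):

from enoslib.service import Dstat

m = Dstat(nodes=roles["compute"], options="-m")
m.deploy()
# ... run the experiment while dstat samples the nodes ...
m.destroy()
m.backup(backup_dir="demo_dstat")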

backup(backup_dir: Optional[str] = None)

Backup the dstat monitoring stack.

This fetches all the remote dstat csv files under the backup_dir.

Parameters:backup_dir (str) – path of the backup directory to use.
deploy()

Deploy the dstat monitoring stack.

destroy()

Destroy the dstat monitoring stack.

This kills the dstat processes on the nodes. Metric files survive a destroy.

Locust (Load generation)

Locust Service Class

class enoslib.service.locust.locust.Locust(master: Optional[List[enoslib.host.Host]] = None, agents: Optional[List[enoslib.host.Host]] = None, network: Optional[str] = None, remote_working_dir: str = '/builds/locust', priors: List[enoslib.api.play_on] = [<enoslib.api.play_on object>, <enoslib.api.play_on object>])

Bases: enoslib.service.service.Service

Deploy a distributed Locust (see locust.io)

This aims at deploying a distributed locust for load testing. Locust can be deployed either with its web interface or headless.

Please note that this module assumes that discover_networks has been run beforehand.

Parameters:
  • master – list of enoslib.Host where the master will be installed
  • agents – list of enoslib.Host where the slaves will be installed
  • network – network role on which master, agents and targeted hosts are deployed
  • remote_working_dir – path to a remote location that will be used as working directory

Examples

from enoslib.api import discover_networks
from enoslib.infra.enos_vagrant.provider import Enos_vagrant
from enoslib.infra.enos_vagrant.configuration import Configuration
from enoslib.service import Locust

provider_conf = {
    "backend": "virtualbox",
    "resources": {
        "machines": [{
            "roles": ["master"],
            "flavour": "tiny",
            "number": 1,
        },{
            "roles": ["agent"],
            "flavour": "tiny",
            "number": 1,
        }],
        "networks": [{"roles": ["r1"], "cidr": "172.16.42.0/16"}]
    }
}

conf = Configuration.from_dictionnary(provider_conf)
provider = Enos_vagrant(conf)
roles, networks = provider.init()

roles = discover_networks(roles, networks)

l = Locust(master=roles["master"],
           agents=roles["agent"],
           network="r1")

l.deploy()
l.run_with_ui('expe')
ui_address = roles["master"][0].extra["r1_ip"]
print("LOCUST : The Locust UI is available at http://%s:8089" % ui_address)
deploy()

Install Locust on the master and agent hosts.

destroy()

Stop locust.

run_headless(expe_dir: str, locustfile: str = 'locustfile.py', nb_clients: int = 1, hatch_rate: int = 1, run_time: str = '60s', density: int = 1, environment: Optional[Dict[KT, VT]] = None)

Run locust headless (see https://docs.locust.io/en/stable/running-locust-without-web-ui.html)

Parameters:
  • expe_dir – path (relative or absolute) to the experiment directory
  • locustfile – path (relative or absolute) to the main locustfile
  • nb_clients – total number of clients to spawn
  • hatch_rate – number of clients to spawn per second
  • run_time – duration of the experiment, e.g. 300s, 20m, 3h, 1h30m, etc.
  • density – number of locust slaves to run per agent node
  • environment – environment to pass to the execution
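
A sketch of a headless run, reusing the l instance from the example above (the experiment directory and locustfile are assumed to exist locally):

l.run_headless("expe",
               locustfile="locustfile.py",
               nb_clients=100,
               hatch_rate=5,
               run_time="120s")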
run_with_ui(expe_dir: str, locustfile: str = 'locustfile.py', density: int = 1, environment: Optional[Dict[KT, VT]] = None)

Run locust with its web user interface.

Parameters:
  • expe_dir – path (relative or absolute) to the experiment directory
  • locustfile – path (relative or absolute) to the main locustfile
  • density – number of locust slaves to run per agent node
  • environment – environment to pass to the execution

Monitoring

Monitoring Service Class

class enoslib.service.monitoring.monitoring.Monitoring(*, collector: List[enoslib.host.Host] = None, agent: List[enoslib.host.Host] = None, ui: List[enoslib.host.Host] = None, network: List[enoslib.host.Host] = None, agent_conf: Optional[str] = None, remote_working_dir: str = '/builds/monitoring', collector_env: Optional[Dict[KT, VT]] = None, agent_env: Optional[Dict[KT, VT]] = None, ui_env: Optional[Dict[KT, VT]] = None, priors: List[enoslib.api.play_on] = [<enoslib.api.play_on object>, <enoslib.api.play_on object>, <enoslib.api.play_on object>])

Bases: enoslib.service.service.Service

Deploy a TIG stack: Telegraf, InfluxDB, Grafana.

This assumes a debian/ubuntu base environment and aims at providing a quick way to deploy a monitoring stack on your nodes. It’s opinionated out of the box but allows for some convenient customizations.

Parameters:
  • collector – list of enoslib.Host where the collector will be installed
  • agent – list of enoslib.Host where the agent will be installed
  • ui – list of enoslib.Host where the UI will be installed
  • network – network role to use for the monitoring traffic. Agents will use this network to send their metrics to the collector. If none is given, the agents will use the address attribute of the collector’s enoslib.Host (currently the first one)
  • agent_conf – path to an alternative configuration file
  • collector_env – environment variables to pass in the collector process environment
  • agent_env – environment variables to pass in the agent process environment
  • ui_env – environment variables to pass in the ui process environment
  • priors – priors to apply

Examples

from enoslib.api import discover_networks
from enoslib.infra.enos_g5k.provider import G5k
from enoslib.infra.enos_g5k.configuration import (Configuration,
                                                  NetworkConfiguration)
from enoslib.service import Monitoring

import logging

logging.basicConfig(level=logging.INFO)

# claim the resources
conf = Configuration.from_settings(job_type="allow_classic_ssh",
                                   job_name="test-non-deploy")
network = NetworkConfiguration(id="n1",
                               type="prod",
                               roles=["my_network"],
                               site="rennes")
conf.add_network_conf(network)\
    .add_machine(roles=["control"],
                 cluster="paravance",
                 nodes=1,
                 primary_network=network)\
    .add_machine(roles=["compute"],
                 cluster="paravance",
                 nodes=1,
                 primary_network=network)\
    .finalize()

provider = G5k(conf)
roles, networks = provider.init()

roles = discover_networks(roles, networks)

m = Monitoring(collector=roles["control"], agent=roles["compute"], ui=roles["control"])
m.deploy()

ui_address = roles["control"][0].extra["my_network_ip"]
print("The UI is available at http://%s:3000" % ui_address)
print("user=admin, password=admin")

m.backup()
m.destroy()

# destroy the boxes
provider.destroy()
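
The example above lets the agents reach the collector through its first address. To pin the monitoring traffic to a given network role instead, one could pass network (a sketch reusing the roles and the "my_network" role defined above, and assuming network takes the role name as the parameter description suggests):

m = Monitoring(collector=roles["control"],
               agent=roles["compute"],
               ui=roles["control"],
               network="my_network")
m.deploy()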
backup(backup_dir: Optional[str] = None)

Backup the monitoring stack.

Parameters:backup_dir (str) – path of the backup directory to use.
deploy()

Deploy the monitoring stack

destroy()

Destroy the monitoring stack.

This destroys all the containers and the associated volumes.

Network Emulation (Netem)

Netem Service Class

class enoslib.service.netem.netem.Netem(network_constraints, *, roles=None, extra_vars=None)

Bases: enoslib.service.service.Service

SCHEMA = {'additionnalProperties': False, 'constraint': {'additionnalProperties': False, 'properties': {'delay': {'type': 'string'}, 'dst': {'type': 'string'}, 'loss': {'type': 'number'}, 'rate': {'type': 'string'}, 'src': {'type': 'string'}}, 'required': ['src', 'dst'], 'type': 'object'}, 'properties': {'constraints': {'items': {'$ref': '#/constraint'}, 'type': 'array'}, 'default_delay': {'type': 'string'}, 'default_loss': {'type': 'number'}, 'default_rate': {'type': 'string'}, 'enable': {'type': 'boolean'}, 'except': {'items': {'type': 'string'}, 'type': 'array'}, 'groups': {'items': {'type': 'string'}, 'type': 'array'}}, 'required': ['default_delay', 'default_rate'], 'type': 'object'}
backup()

Not implemented yet: it is unclear what should be backed up here.

deploy()

Emulate network links.

Read network_constraints and apply tc rules on all the nodes. Constraints are applied between groups of machines. These groups are described in the network_constraints variable and must be found in the inventory file. The network constraints support delay, rate and loss.

Parameters:
  • network_constraints (dict) – network constraints to apply
  • roles (dict) – role->hosts mapping as returned by enoslib.infra.provider.Provider.init()
  • inventory_path (string) – path to an inventory
  • extra_vars (dict) – extra_vars to pass to ansible

Examples

  • Using defaults

The following will apply the network constraints between every pair of groups. For instance, constraints will be applied to the communication between “n1” and “n3” but not between “n1” and “n2”. Note that using the defaults leads to symmetric constraints.

roles = {
    "grp1": ["n1", "n2"],
    "grp2": ["n3", "n4"],
    "grp3": ["n3", "n4"],
}

tc = {
    "enable": True,
    "default_delay": "20ms",
    "default_rate": "1gbit",
}
netem = Netem(tc, roles=roles)
netem.deploy()

If you want to control more precisely which groups are taken into account, you can use the except or groups keys:

tc = {
    "enable": True,
    "default_delay": "20ms",
    "default_rate": "1gbit",
    "except": "grp3"
}
netem = Netem(tc, roles=roles)
netem.deploy()

Alternatively, you can explicitly list the groups to take into account:

tc = {
    "enable": True,
    "default_delay": "20ms",
    "default_rate": "1gbit",
    "groups": ["grp1", "grp2"]
}
netem = Netem(tc, roles=roles)
netem.deploy()
  • Using src and dst

The following will enforce a symmetric constraint between grp1 and grp2.

tc = {
    "enable": True,
    "default_delay": "20ms",
    "default_rate": "1gbit",
    "constraints": [{
        "src": "grp1"
        "dst": "grp2"
        "delay": "10ms"
        "symetric": True
    }]
}
netem = Netem(tc, roles=roles)
netem.deploy()

Examples

from enoslib.service import Netem
from enoslib.api import discover_networks
from enoslib.infra.enos_vagrant.provider import Enos_vagrant
from enoslib.infra.enos_vagrant.configuration import Configuration

import logging
import os

logging.basicConfig(level=logging.INFO)

conf = Configuration()\
       .add_machine(roles=["control"],
                    flavour="tiny",
                    number=1)\
       .add_machine(roles=["compute"],
                    flavour="tiny",
                    number=1)\
       .add_network(roles=["mynetwork"],
                    cidr="192.168.42.0/24")\
       .finalize()

# claim the resources
provider = Enos_vagrant(conf)
roles, networks = provider.init()

# generate an inventory compatible with ansible
roles = discover_networks(roles, networks)

tc = {
    "enable": True,
    "default_delay": "20ms",
    "default_rate": "1gbit",
}

netem = Netem(tc, roles=roles)
netem.deploy()
netem.validate()
netem.backup()
netem.destroy()

# destroy the boxes
provider.destroy()
destroy()

Reset the network constraints (latency, bandwidth, …).

Removes any filters that have been applied to shape the traffic.

validate(*, output_dir=None)

Validate the network parameters (latency, bandwidth …)

Performs flent and ping tests to validate the constraints set by enoslib.service.netem.Netem.deploy(). Reports are available in the tmp directory used by EnOSlib.

Parameters:
  • roles (dict) – role->hosts mapping as returned by enoslib.infra.provider.Provider.init()
  • inventory_path (str) – path to an inventory
  • output_dir (str) – directory where validation files will be stored. Default to enoslib.constants.TMP_DIRNAME.
enoslib.service.netem.netem.expand_groups(grp)

Expand group names.

Parameters:grp (string) – group names to expand
Returns:list of groups

Examples

  • grp[1-3] will be expanded to [grp1, grp2, grp3]
  • grp1 will be expanded to [grp1]
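
In code (a sketch of the expansions described above; the exact return type is assumed to be a list of strings):

from enoslib.service.netem.netem import expand_groups

print(expand_groups("grp[1-3]"))  # -> ['grp1', 'grp2', 'grp3']
print(expand_groups("grp1"))      # -> ['grp1']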

Skydive

Skydive Service Class

class enoslib.service.skydive.skydive.Skydive(*, analyzers=None, agents=None, networks=None, extra_vars=None, priors=[<enoslib.api.play_on object>, <enoslib.api.play_on object>, <enoslib.api.play_on object>])

Bases: enoslib.service.service.Service

Deploy Skydive (see http://skydive.network/).

This assumes a debian/ubuntu base environment and aims at providing a quick way to deploy a skydive stack on your nodes. It’s opinionated out of the box but allows for some convenient customizations.

It is based on the Ansible playbooks found in https://github.com/skydive-project/skydive

Parameters:
  • analyzers (list) – list of enoslib.Host where the skydive analyzers will be installed
  • agents (list) – list of enoslib.Host where the agent will be installed
  • networks (list) – list of networks as returned by a provider. This is used to visually identify them in the interface.
  • extra_vars (dict) – extra variables to pass to the deployment
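
extra_vars are forwarded to the underlying Ansible playbooks. As an illustration (a sketch: skydive_release is assumed to be a variable of the upstream skydive playbooks, and its exact name and accepted values should be checked there):

s = Skydive(analyzers=roles["control"],
            agents=roles["compute"] + roles["control"],
            extra_vars={"skydive_release": "latest"})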

Examples

from enoslib.service import Skydive
from enoslib.api import discover_networks
from enoslib.infra.enos_vagrant.provider import Enos_vagrant
from enoslib.infra.enos_vagrant.configuration import Configuration

import logging


logging.basicConfig(level=logging.INFO)

conf = Configuration()\
       .add_machine(roles=["control"],
                    flavour="tiny",
                    number=1)\
       .add_machine(roles=["compute"],
                    flavour="tiny",
                    number=1)\
       .add_network(roles=["mynetwork"],
                    cidr="192.168.42.0/24")\
       .finalize()

# claim the resources
provider = Enos_vagrant(conf)
roles, networks = provider.init()

# generate an inventory compatible with ansible
roles = discover_networks(roles, networks)

s = Skydive(analyzers=roles["control"],
            agents=roles["compute"] + roles["control"])
s.deploy()

ui_address = roles["control"][0].extra["mynetwork_ip"]
print("The UI is available at http://%s:8082" % ui_address)

s.backup()
s.destroy()

# destroy the boxes
provider.destroy()
build_fabric()
deploy()