Service APIs

Base Service Class

Classes

Service A service is a reusable piece of software.
class enoslib.service.service.Service

Bases: object

A service is a reusable piece of software.

Methods

backup() (abstract) Backup the service.
deploy() (abstract) Deploy the service.
destroy() (abstract) Destroy the service.
backup()

(abstract) Backup the service.

deploy()

(abstract) Deploy the service.

destroy()

(abstract) Destroy the service.
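Concrete services subclass Service and implement these three methods. A minimal sketch of a hypothetical no-op service (the class and its messages are illustrative, not part of EnOSlib):

from enoslib.service.service import Service

class EchoService(Service):
    """A hypothetical service illustrating the interface."""

    def deploy(self):
        # install and start the software on the nodes
        print("deploying")

    def destroy(self):
        # tear everything down
        print("destroying")

    def backup(self):
        # retrieve any artifacts produced by the service
        print("backing up")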

Conda & Dask

Conda & Dask Service Class

Classes

Conda(*, nodes) Manage Conda on your nodes.
Dask(scheduler, worker, env_file) Initialize a Dask cluster on the nodes.
conda_play_on(env_name, **kwargs) Run Ansible modules in the context of a Conda environment.

Functions

conda_run_command(command, env_name, **kwargs) Run a single shell command in the context of a Conda environment.
class enoslib.service.conda.conda.Conda(*, nodes: List[enoslib.host.Host])

Bases: enoslib.service.service.Service

Manage Conda on your nodes.

This installs the latest version of Miniconda on the nodes. Optionally, it can also prepare an environment.

Parameters: nodes – the list of nodes to install conda on.

Methods

backup() Not implemented.
deploy(env_file, env_name, packages) Deploy a conda environment.
destroy() Not implemented.

Examples

from enoslib.infra.enos_g5k.provider import G5k
from enoslib.infra.enos_g5k.configuration import (Configuration,
                                                  NetworkConfiguration)
from enoslib.service.conda import Conda, conda_run_command, conda_play_on
import logging
import os
import time

logging.basicConfig(level=logging.DEBUG)

# claim the resources
conf = Configuration.from_settings(job_type="allow_classic_ssh",
                                   job_name="conda")
network = NetworkConfiguration(id="n1",
                               type="prod",
                               roles=["my_network"],
                               site="rennes")
conf.add_network_conf(network)\
    .add_machine(roles=["control"],
                 cluster="parapluie",
                 nodes=2,
                 primary_network=network)\
    .finalize()

provider = G5k(conf)
roles, networks = provider.init()

# let's provision a new env
m = Conda(nodes=roles["control"])
m.deploy(env_name="plop", packages=["dask"])

# make use of this new environment
r = conda_run_command("conda env export", "plop", roles=roles)
print(r)

# make use of an existing environment (somewhere in ~/miniconda3 most probably)
# this is practical because the env can be created on a shared filesystem
# and used on all the nodes
r = conda_run_command("conda env export", "spark", roles=roles, run_as="msimonin")
print(r)

# run this in the new (local to the node) environment
with conda_play_on("plop", roles=roles) as p:
    p.shell("conda env export > /tmp/plop.env")
    p.fetch(src="/tmp/plop.env", dest="/tmp/plop.env")

# run this in a shared environment
with conda_play_on("spark", roles=roles, run_as="msimonin") as p:
    # launch a script that requires spark n'co
    p.shell("conda env export > /tmp/spark.env")
    p.fetch(src="/tmp/spark.env", dest="/tmp/spark.env")
backup()

Not implemented.

deploy(env_file: Optional[str] = None, env_name: str = '', packages: Optional[List[str]] = None)

Deploy a conda environment.

Parameters:
  • env_file – create an environment based on this file. If specified, the following arguments are ignored.
  • env_name – name of the environment to create (if env_file is absent).
  • packages – list of packages to install in the environment named env_name.
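Both invocation styles, sketched from the parameters above (the file name environment.yml is an assumption):

m = Conda(nodes=roles["control"])

# create the environment from a file; env_name and packages are then ignored
m.deploy(env_file="environment.yml")

# or create a named environment with the given packages
m.deploy(env_name="plop", packages=["dask"])
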
destroy()

Not implemented.

class enoslib.service.conda.conda.Dask(scheduler: enoslib.host.Host, worker: List[enoslib.host.Host], env_file: Optional[str] = None)

Bases: enoslib.service.service.Service

Initialize a Dask cluster on the nodes.

Parameters:
  • scheduler – the scheduler host
  • worker – the workers Hosts
  • env_file – conda environment file with your specific dependencies. Dask should be present in this environment.

Methods

deploy() (abstract) Deploy the service.
destroy() (abstract) Destroy the service.

Examples

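A hypothetical sketch based on the constructor signature above (role names and the environment file are assumptions, and deploy/destroy are assumed to follow the Service interface):

# scheduler is a single Host, worker a list of Hosts
dask = Dask(scheduler=roles["scheduler"][0],
            worker=roles["worker"],
            env_file="environment.yml")
dask.deploy()
# ... run Dask computations against the scheduler node ...
dask.destroy()
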
deploy()

(abstract) Deploy the service.

destroy()

(abstract) Destroy the service.

class enoslib.service.conda.conda.conda_play_on(env_name: str, **kwargs)

Bases: enoslib.api.play_on

Run Ansible modules in the context of a Conda environment.

enoslib.service.conda.conda.conda_run_command(command: str, env_name: str, **kwargs)

Run a single shell command in the context of a Conda environment.

Wrapper around enoslib.api.run_command() that is conda aware.

Parameters:
  • command – The command to run
  • env_name – An existing env_name in which the command will be run
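A compact call mirroring the larger Conda example above (the command is illustrative):

# run a command inside the existing "plop" environment on all hosts in roles
result = conda_run_command("python --version", "plop", roles=roles)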

Docker

Docker Service Class

Classes

Docker(*[, agent, registry, registry_opts, …]) Deploy docker agents on the nodes and a registry cache (optional)
class enoslib.service.docker.docker.Docker(*, agent=None, registry=None, registry_opts=None, bind_var_docker=None)

Bases: enoslib.service.service.Service

Deploy docker agents on the nodes and a registry cache (optional)

This assumes a debian/ubuntu base environment and aims at producing a quick way to deploy docker and optionally a registry on your nodes.

Attributes

SCHEMA JSON schema for the registry_opts dictionary (shown below).

Methods

backup() (Not implemented) Backup docker.
deploy() Deploy docker and optionally a docker registry cache.
destroy() (Not implemented) Destroy docker.

Examples

# Use an internal registry on the first agent
docker = Docker(agent=roles["agent"])

# Use an internal registry on the specified host
docker = Docker(agent=roles["agent"],
                registry=roles["registry"])

# Use an external registry
docker = Docker(agent=roles["compute"] + roles["control"],
                registry_opts = {"type": "external",
                                 "ip": "192.168.42.1",
                                 "port": 4000})
from enoslib.service import Docker
from enoslib.api import discover_networks
from enoslib.infra.enos_vagrant.provider import Enos_vagrant
from enoslib.infra.enos_vagrant.configuration import Configuration

import logging
import os

logging.basicConfig(level=logging.INFO)

conf = Configuration()\
       .add_machine(roles=["control"],
                    flavour="tiny",
                    number=1)\
       .add_machine(roles=["compute"],
                    flavour="tiny",
                    number=1)\
       .add_network(roles=["mynetwork"],
                    cidr="192.168.42.0/24")\
       .finalize()

# claim the resources
provider = Enos_vagrant(conf)
roles, networks = provider.init()

# generate an inventory compatible with ansible
roles = discover_networks(roles, networks)

docker = Docker(registry=roles["control"], agent=roles["compute"])
docker.deploy()
docker.backup()
docker.destroy()

# destroy the boxes
provider.destroy()
Parameters:
  • agent (list) – list of enoslib.Host where the docker agent will be installed
  • registry (list) – list of enoslib.Host where the docker registry will be installed.
  • registry_opts (dict) – registry options. The dictionary must comply with the schema.
  • bind_var_docker (str) – If set, the default docker state directory (/var/lib/docker/) will be bind-mounted in this directory. The rationale is that on Grid’5000 there isn’t much disk space in /var/lib by default. Set it to False to disable the fallback to the default location.
SCHEMA = {'oneOf': [{'type': 'object', 'properties': {'type': {'const': 'external'}, 'ip': {'type': 'string'}, 'port': {'type': 'number'}}, 'additionalProperties': False, 'required': ['type', 'ip', 'port']}, {'type': 'object', 'properties': {'type': {'const': 'internal'}}, 'additionalProperties': False, 'required': ['type']}, {'type': 'object', 'properties': {'type': {'const': 'none'}}, 'additionalProperties': False, 'required': ['type']}]}
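For reference, here are registry_opts values satisfying each branch of the schema above:

# external registry: type, ip and port are all required
registry_opts = {"type": "external", "ip": "192.168.42.1", "port": 4000}

# internal registry, deployed by the service itself
registry_opts = {"type": "internal"}

# no registry cache
registry_opts = {"type": "none"}
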
backup()

(Not implemented) Backup docker.

Feel free to share your ideas.

deploy()

Deploy docker and optionally a docker registry cache.

destroy()

(Not implemented) Destroy docker

Feel free to share your ideas.

Dstat (monitoring)

Dstat Service Class

Classes

Dstat(*, nodes, options, remote_working_dir, …) Deploy dstat on all hosts.
class enoslib.service.dstat.dstat.Dstat(*, nodes: List[enoslib.host.Host], options: str = '', remote_working_dir: str = '/builds/dstat', priors: List[enoslib.api.play_on] = [<enoslib.api.play_on object>], extra_vars: Dict[KT, VT] = None)

Bases: enoslib.service.service.Service

Deploy dstat on all hosts.

This assumes a debian/ubuntu based environment and aims at producing a quick way to deploy a simple monitoring stack based on dstat on your nodes. It’s opinionated out of the box but allows for some convenient customizations.

dstat metrics are dumped into a csv file by default (-o option) and retrieved when backing up.

Parameters:
  • nodes – the nodes to install dstat on
  • options – options to pass to dstat.
  • priors – priors to apply
  • extra_vars – extra vars to pass to Ansible

Methods

backup(backup_dir) Backup the dstat monitoring stack.
deploy() Deploy the dstat monitoring stack.
destroy() Destroy the dstat monitoring stack.

Examples

from enoslib.infra.enos_g5k.provider import G5k
from enoslib.infra.enos_g5k.configuration import (Configuration,
                                                  NetworkConfiguration)
from enoslib.service import Dstat

import logging
import os
import time

logging.basicConfig(level=logging.INFO)

# path to the inventory
inventory = os.path.join(os.getcwd(), "hosts")

# claim the resources
conf = Configuration.from_settings(job_type="allow_classic_ssh",
                                   job_name="test-non-deploy")
network = NetworkConfiguration(id="n1",
                               type="prod",
                               roles=["my_network"],
                               site="rennes")
conf.add_network_conf(network)\
    .add_machine(roles=["control"],
                 cluster="paravance",
                 nodes=2,
                 primary_network=network)\
    .finalize()

provider = G5k(conf)
roles, networks = provider.init()

m = Dstat(nodes=roles["control"])

m.deploy()

time.sleep(10)
m.destroy()
m.backup()


# destroy the boxes
# provider.destroy()
backup(backup_dir: Optional[str] = None)

Backup the dstat monitoring stack.

This fetches all the remote dstat csv files under the backup_dir.

Parameters: backup_dir (str) – path of the backup directory to use.
deploy()

Deploy the dstat monitoring stack.

destroy()

Destroy the dstat monitoring stack.

This kills the dstat processes on the nodes. Metric files survive a destroy.

Locust (Load generation)

Locust Service Class

Classes

Locust(master, agents, network, …) Deploy a distributed Locust (see locust.io)
class enoslib.service.locust.locust.Locust(master: Optional[List[enoslib.host.Host]] = None, agents: Optional[List[enoslib.host.Host]] = None, network: Optional[str] = None, remote_working_dir: str = '/builds/locust', priors: List[enoslib.api.play_on] = [<enoslib.api.play_on object>], extra_vars: Dict[KT, VT] = None)

Bases: enoslib.service.service.Service

Deploy a distributed Locust (see locust.io)

This aims at deploying a distributed locust for load testing. Locust can be deployed either with its web interface or headless.

Please note that this module assumes that discover_networks has been run beforehand.

Parameters:
  • master – list of enoslib.Host where the master will be installed
  • agents – list of enoslib.Host where the slaves will be installed
  • network – network role on which master, agents and targeted hosts are deployed
  • remote_working_dir – path to a remote location that will be used as working directory

Methods

deploy() Install Locust on master and agent hosts
destroy() Stop locust.
run_headless(expe_dir, locustfile, …) Run locust headless
run_with_ui(expe_dir, locustfile, density, …) Run locust with its web user interface.

Examples

from enoslib.api import discover_networks
from enoslib.infra.enos_vagrant.provider import Enos_vagrant
from enoslib.infra.enos_vagrant.configuration import Configuration
from enoslib.service import Locust

provider_conf = {
    "backend": "virtualbox",
    "resources": {
        "machines": [{
            "roles": ["master"],
            "flavour": "tiny",
            "number": 1,
        },{
            "roles": ["agent"],
            "flavour": "tiny",
            "number": 1,
        }],
        "networks": [{"roles": ["r1"], "cidr": "172.16.42.0/16"}]
    }
}

conf = Configuration.from_dictionnary(provider_conf)
provider = Enos_vagrant(conf)
roles, networks = provider.init()

roles = discover_networks(roles, networks)

l = Locust(master=roles["master"],
           agents=roles["agent"],
           network="r1")

l.deploy()
l.run_with_ui('expe')
ui_address = roles["master"][0].extra["r1_ip"]
print("LOCUST : The Locust UI is available at http://%s:8089" % ui_address)
deploy()

Install Locust on master and agent hosts

destroy()

Stop locust.

run_headless(expe_dir: str, locustfile: str = 'locustfile.py', nb_clients: int = 1, hatch_rate: int = 1, run_time: str = '60s', density: int = 1, environment: Optional[Dict[KT, VT]] = None)

Run locust headless (see https://docs.locust.io/en/stable/running-locust-without-web-ui.html)

Parameters:
  • expe_dir – path (relative or absolute) to the experiment directory
  • locustfile – path (relative or absolute) to the main locustfile
  • nb_clients – total number of clients to spawn
  • hatch_rate – number of clients to spawn per second
  • run_time – duration of the experiment, e.g. 300s, 20m, 3h, 1h30m, etc.
  • density – number of locust slaves to run per agent node
  • environment – environment to pass to the execution
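A sketch of a headless run, reusing the l object from the example above (all values are illustrative):

# 5-minute headless run: 100 clients spawned at 5 clients per second
l.run_headless("expe",
               locustfile="locustfile.py",
               nb_clients=100,
               hatch_rate=5,
               run_time="5m")
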
run_with_ui(expe_dir: str, locustfile: str = 'locustfile.py', density: int = 1, environment: Optional[Dict[KT, VT]] = None)

Run locust with its web user interface.

Parameters:
  • expe_dir – path (relative or absolute) to the experiment directory
  • locustfile – path (relative or absolute) to the main locustfile
  • density – number of locust slaves to run per agent node
  • environment – environment to pass to the execution

Monitoring

Monitoring Service Class

Classes

Monitoring(*, collector, agent, ui, network, …) Deploy a TIG stack: Telegraf, InfluxDB, Grafana.
class enoslib.service.monitoring.monitoring.Monitoring(*, collector: List[enoslib.host.Host] = None, agent: List[enoslib.host.Host] = None, ui: List[enoslib.host.Host] = None, network: List[enoslib.host.Host] = None, agent_conf: Optional[str] = None, remote_working_dir: str = '/builds/monitoring', collector_env: Optional[Dict[KT, VT]] = None, agent_env: Optional[Dict[KT, VT]] = None, ui_env: Optional[Dict[KT, VT]] = None, priors: List[enoslib.api.play_on] = [<enoslib.api.play_on object>, <enoslib.api.play_on object>], extra_vars: Dict[KT, VT] = None)

Bases: enoslib.service.service.Service

Deploy a TIG stack: Telegraf, InfluxDB, Grafana.

This assumes a debian/ubuntu base environment and aims at producing a quick way to deploy a monitoring stack on your nodes. It’s opinionated out of the box but allows for some convenient customizations.

Parameters:
  • collector – list of enoslib.Host where the collector will be installed
  • agent – list of enoslib.Host where the agent will be installed
  • ui – list of enoslib.Host where the UI will be installed
  • network – network role to use for the monitoring traffic. Agents will use this network to send their metrics to the collector. If none is given, the agents will use the address attribute of the collector’s enoslib.Host (currently the first one)
  • agent_conf – path to an alternative configuration file
  • collector_env – environment variables to pass in the collector process environment
  • agent_env – environment variables to pass in the agent process environment
  • ui_env – environment variables to pass in the ui process environment
  • priors – priors to apply
  • extra_vars – extra variables to pass to Ansible
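For instance, to force the monitoring traffic on a given network role (a sketch; it assumes network accepts a role name, as in the other services, and reuses the my_network role defined in the example below):

m = Monitoring(collector=roles["control"],
               agent=roles["compute"],
               ui=roles["control"],
               network="my_network")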

Methods

backup(backup_dir) Backup the monitoring stack.
deploy() Deploy the monitoring stack
destroy() Destroy the monitoring stack.

Examples

from enoslib.api import discover_networks
from enoslib.infra.enos_g5k.provider import G5k
from enoslib.infra.enos_g5k.configuration import (Configuration,
                                                  NetworkConfiguration)
from enoslib.service import Monitoring

import logging

logging.basicConfig(level=logging.INFO)

# claim the resources
conf = Configuration.from_settings(job_type="allow_classic_ssh",
                                   job_name="test-non-deploy")
network = NetworkConfiguration(id="n1",
                               type="prod",
                               roles=["my_network"],
                               site="rennes")
conf.add_network_conf(network)\
    .add_machine(roles=["control"],
                 cluster="paravance",
                 nodes=1,
                 primary_network=network)\
    .add_machine(roles=["compute"],
                 cluster="paravance",
                 nodes=1,
                 primary_network=network)\
    .finalize()

provider = G5k(conf)
roles, networks = provider.init()

roles = discover_networks(roles, networks)

m = Monitoring(collector=roles["control"], agent=roles["compute"], ui=roles["control"])
m.deploy()

ui_address = roles["control"][0].extra["my_network_ip"]
print("The UI is available at http://%s:3000" % ui_address)
print("user=admin, password=admin")

m.backup()
m.destroy()

# destroy the boxes
provider.destroy()
backup(backup_dir: Optional[str] = None)

Backup the monitoring stack.

Parameters: backup_dir (str) – path of the backup directory to use.
deploy()

Deploy the monitoring stack

destroy()

Destroy the monitoring stack.

This destroys all the containers and associated volumes.

Network Emulation (Netem & SimpleNetem)

Netem & SimpleNetem Service Class

To enforce network constraints, EnOSlib provides two different services: the Netem service and the SimpleNetem service. The former tends to be used when heterogeneous constraints are required between your hosts, while the latter can be used to set homogeneous constraints between your hosts.
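
A quick contrast, assuming roles and hosts come from a provider (all values are illustrative; both classes are detailed below):

# heterogeneous: constraints come from a network_constraints description (Netem)
netem = Netem({"default_delay": "20ms",
               "default_rate": "1gbit",
               "groups": ["grp1", "grp2"]}, roles=roles)

# homogeneous: one option string applied to every host (SimpleNetem)
netem = SimpleNetem("delay 10ms", "my_network", hosts=roles["city"])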

Netem and SimpleNetem Class

Classes

Netem(network_constraints, *, roles, …) Set heterogeneous constraints between your hosts.
SimpleNetem(options, network, *, hosts[, …]) Set homogeneous network constraints between your hosts.

Functions

expand_groups(grp) Expand group names.
class enoslib.service.netem.netem.Netem(network_constraints: Dict[Any, Any], *, roles: MutableMapping[str, List[enoslib.host.Host]] = None, extra_vars: Dict[Any, Any] = None, chunk_size: int = 100)

Bases: enoslib.service.service.Service

Set heterogeneous constraints between your hosts.

It allows setting up complex network topologies. For a much simpler way of applying constraints, see enoslib.service.netem.SimpleNetem

Parameters:
  • network_constraints – the description of the wanted constraints (see the schema below)
  • roles – the enoslib roles to consider
  • extra_vars – extra variables to pass to Ansible when running (e.g. callback options)
  • chunk_size – for large deployments, the commands to apply can become too long. They can be split into chunks of size chunk_size.

Methods

backup() (Not Implemented) Backup.
deploy() Enforce network links emulation.
destroy() Reset the network constraints (latency, bandwidth, …)
is_valid(network_constraints) Validate the network_constraints (syntax only).
validate(*[, output_dir]) Validate the network parameters (latency, bandwidth, …)

Network constraint schema:

SCHEMA = {
    "description": "Network constraint description",
    "type": "object",
    "properties": {
        "default_delay": {
            "type": "string",
            "description": "default delay to apply on all groups (e.g. 10ms)",
        },
        "default_rate": {
            "type": "string",
            "description": "default rate to apply on all groups (e.g. 1gbit)",
        },
        "default_loss": {
            "type": "number",
            "description": "default loss (percen) to apply on all groups (e.g. 0.1)",
        },
        "except": {
            "type": "array",
            "items": {"type": "string"},
            "description": "Exclude this groups",
        },
        "groups": {
            "type": "array",
            "items": {"type": "string"},
            "description": "Include only this group",
        },
        "constraints": {"type": "array", "items": {"$ref": "#/constraint"}},
    },
    "required": ["default_delay", "default_rate"],
    "oneOf": [{"required": ["groups"]}, {"required": ["except"]}],
    "additionnalProperties": False,
    "constraint": {
        "type": "object",
        "description": {"Override constraints between specific groups"},
        "properties": {
            "src": {"type": "string", "description": "Source group"},
            "dst": {"type": "string", "description": "Destination group"},
            "delay": {"type": "string", "description": "Delay to apply"},
            "rate": {"type": "string", "description": "Rate to apply"},
            "loss": {"type": "number", "description": "Loss to apply (percentage)"},
            "network": {
                "type": "string",
                "description": "Network role to consider (default to all).",
            },
        },
        "additionnalProperties": False,
        "required": ["src", "dst"],
    },
}

Examples

  • Using defaults

The following will apply the network constraints between every group. For instance, the constraints will be applied to communications between “n1” and “n3” but not between “n1” and “n2” (which belong to the same group). Note that using defaults leads to symmetric constraints.

roles = {
    "grp1": ["n1", "n2"],
    "grp2": ["n3", "n4"],
    "grp3": ["n3", "n4"],
}

tc = {
    "default_delay": "20ms",
    "default_rate": "1gbit",
    "groups": ["grp1", "grp2", "grp3"]
}
netem = Netem(tc, roles=roles)
netem.deploy()

Conversely, you can use except to exclude some groups.

tc = {
    "default_delay": "20ms",
    "default_rate": "1gbit",
    "except": ["grp3"]
}
netem = Netem(tc, roles=roles)
netem.deploy()

except: [] is a way to apply the constraints between all groups.

  • Using src and dst

The following will enforce a symmetric constraint between grp1 and grp2.

tc = {
    "default_delay": "20ms",
    "default_rate": "1gbit",
    "groups": ["grp1", "grp2"],
    "constraints": [{
        "src": "grp1",
        "dst": "grp2",
        "delay": "10ms",
        "symetric": True
    }]
}
netem = Netem(tc, roles=roles)
netem.deploy()

Examples

from enoslib.service import Netem
from enoslib.api import discover_networks
from enoslib.infra.enos_vagrant.provider import Enos_vagrant
from enoslib.infra.enos_vagrant.configuration import Configuration

import logging
import os

logging.basicConfig(level=logging.INFO)

conf = Configuration()\
       .add_machine(roles=["control"],
                    flavour="tiny",
                    number=1)\
       .add_machine(roles=["compute"],
                    flavour="tiny",
                    number=1)\
       .add_network(roles=["mynetwork"],
                    cidr="192.168.42.0/24")\
       .finalize()

# claim the resources
provider = Enos_vagrant(conf)
roles, networks = provider.init()

# generate an inventory compatible with ansible
roles = discover_networks(roles, networks)

tc = {
    "enable": True,
    "default_delay": "20ms",
    "default_rate": "1gbit",
    "groups": ["control", "compute"]
}

netem = Netem(tc, roles=roles)
netem.deploy()
netem.validate()
netem.backup()
netem.destroy()

# destroy the boxes
provider.destroy()

Examples

from enoslib.api import discover_networks
from enoslib.infra.enos_g5k.provider import G5k
from enoslib.infra.enos_g5k.configuration import Configuration, NetworkConfiguration
from enoslib.service import Netem

import logging
import os

logging.basicConfig(level=logging.DEBUG)


prod_network = NetworkConfiguration(
    id="n1",
    type="prod",
    roles=["my_network"],
    site="rennes"
)
conf = (
    Configuration.from_settings(job_name="test", job_type="allow_classic_ssh")
    .add_network_conf(prod_network)
    .add_machine(
        roles=["paris"],
        cluster="parapluie",
        nodes=1,
        primary_network=prod_network
    )
    .add_machine(
        roles=["berlin"],
        cluster="parapluie",
        nodes=1,
        primary_network=prod_network
    )
    .add_machine(
        roles=["londres"],
        cluster="parapluie",
        nodes=1,
        primary_network=prod_network
    )
    .finalize()
)
provider = G5k(conf)
roles, networks = provider.init()
roles = discover_networks(roles, networks)

# Building the network constraints
emulation_conf = {
    "default_delay": "20ms",
    "default_rate": "1gbit",
    "except": [],
    "constraints": [{
        "src": "paris",
        "dst": "londres",
        "symetric": True,
        "delay": "10ms"
    }]
}

logging.info(emulation_conf)

netem = Netem(emulation_conf, roles=roles)
netem.deploy()
netem.validate()
netem.destroy()
backup()

(Not Implemented) Backup.

Feel free to share your ideas.

deploy()

Enforce network links emulation.

destroy()

Reset the network constraints (latency, bandwidth, …)

Removes any filters that have been applied to shape the traffic.

classmethod is_valid(network_constraints)

Validate the network_constraints (syntax only).

validate(*, output_dir=None)

Validate the network parameters (latency, bandwidth, …)

Performs ping tests to validate the constraints set by enoslib.service.netem.Netem.deploy(). Reports are available in the tmp directory used by EnOSlib.

Parameters:
  • output_dir (str) – directory where validation files will be stored. Defaults to enoslib.constants.TMP_DIRNAME.
class enoslib.service.netem.netem.SimpleNetem(options: str, network: str, *, hosts: List[enoslib.host.Host] = None, extra_vars=None)

Bases: enoslib.service.service.Service

Set homogeneous network constraints between your hosts.

Note that the network constraints are set on all the nodes for outgoing packets only.

Parameters:
  • options – the netem options to apply to outgoing packets (e.g. "delay 10ms")
  • network – the network role on which the constraints will be applied
  • hosts – the list of enoslib.Host to consider
  • extra_vars – extra variables to pass to Ansible

Methods

backup() (abstract) Backup the service.
deploy() Apply the constraints on all the hosts.
destroy() (abstract) Destroy the service.
validate([output_dir])

Example

from enoslib.api import discover_networks
from enoslib.infra.enos_g5k.provider import G5k
from enoslib.infra.enos_g5k.configuration import Configuration, NetworkConfiguration
from enoslib.service import SimpleNetem

import logging
import os

logging.basicConfig(level=logging.DEBUG)


prod_network = NetworkConfiguration(
    id="n1",
    type="prod",
    roles=["my_network"],
    site="rennes"
)
conf = (
    Configuration.from_settings(job_type="allow_classic_ssh")
    .add_network_conf(prod_network)
    .add_machine(
        roles=["city", "paris"],
        cluster="parapluie",
        nodes=1,
        primary_network=prod_network
    )
    .add_machine(
        roles=["city", "berlin"],
        cluster="parapluie",
        nodes=1,
        primary_network=prod_network
    )
    .finalize()
)
provider = G5k(conf)
roles, networks = provider.init()
roles = discover_networks(roles, networks)

netem = SimpleNetem("delay 10ms", "my_network", hosts=roles["city"])
netem.deploy()
netem.validate()
netem.destroy()
backup()

(abstract) Backup the service.

deploy()

Apply the constraints on all the hosts.

destroy()

(abstract) Destroy the service.

validate(output_dir=None)
enoslib.service.netem.netem.expand_groups(grp)

Expand group names.

Parameters: grp (string) – group names to expand
Returns: list of groups

Examples

  • grp[1-3] will be expanded to [grp1, grp2, grp3]
  • grp1 will be expanded to [grp1]
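Equivalently, in code (a short sketch):

from enoslib.service.netem.netem import expand_groups

expand_groups("grp[1-3]")  # -> ["grp1", "grp2", "grp3"]
expand_groups("grp1")      # -> ["grp1"]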

Skydive

Skydive Service Class

Classes

Skydive(*, analyzers, agents, networks, …) Deploy Skydive (see http://skydive.network/).
class enoslib.service.skydive.skydive.Skydive(*, analyzers: List[enoslib.host.Host] = None, agents: List[enoslib.host.Host] = None, networks: List[enoslib.host.Host] = None, priors: List[enoslib.api.play_on] = [<enoslib.api.play_on object>, <enoslib.api.play_on object>], extra_vars: Dict[KT, VT] = None)

Bases: enoslib.service.service.Service

Deploy Skydive (see http://skydive.network/).

This assumes a debian/ubuntu base environment and aims at producing a quick way to deploy a skydive stack on your nodes. It’s opinionated out of the box but allows for some convenient customizations.

It is based on the Ansible playbooks found in https://github.com/skydive-project/skydive

Parameters:
  • analyzers – list of enoslib.Host where the skydive analyzers will be installed
  • agents – list of enoslib.Host where the agent will be installed
  • networks – list of networks as returned by a provider. This is used to visualize them in the interface
  • priors – priors to apply before deploying this service
  • extra_vars (dict) – extra variables to pass to the deployment

Methods

build_fabric()
deploy() Deploy Skydive service.

Examples

from enoslib.service import Skydive
from enoslib.api import discover_networks
from enoslib.infra.enos_vagrant.provider import Enos_vagrant
from enoslib.infra.enos_vagrant.configuration import Configuration

import logging


logging.basicConfig(level=logging.INFO)

conf = Configuration()\
       .add_machine(roles=["control"],
                    flavour="tiny",
                    number=1)\
       .add_machine(roles=["compute"],
                    flavour="tiny",
                    number=1)\
       .add_network(roles=["mynetwork"],
                    cidr="192.168.42.0/24")\
       .finalize()

# claim the resources
provider = Enos_vagrant(conf)
roles, networks = provider.init()

# generate an inventory compatible with ansible
roles = discover_networks(roles, networks)

s = Skydive(analyzers=roles["control"],
            agents=roles["compute"] + roles["control"])
s.deploy()

ui_address = roles["control"][0].extra["mynetwork_ip"]
print("The UI is available at http://%s:8082" % ui_address)

s.backup()
s.destroy()

# destroy the boxes
provider.destroy()
build_fabric()
deploy()

Deploy Skydive service.