Locust Service#

Locust Service Class#

Classes:

Locust(master[, workers, networks, ...])

Deploy a distributed Locust (see locust.io)

class enoslib.service.locust.locust.Locust(master: Host, workers: Iterable[Host] | None = None, networks: Iterable[Network] | None = None, worker_density: int = 1, local_expe_dir: str = '.', locustfile: str = 'locustfile.py', users: int = 1, spawn_rate: int = 1, run_time: int = 60, environment: Dict | None = None, remote_working_dir: str = '/builds/locust', backup_dir: Path | None = None, priors: List[actions] | None = None, extra_vars: Dict | None = None)#

Deploy a distributed Locust (see locust.io)

This service deploys a distributed Locust for load testing. It has two modes of operation:

  • Web UI based:

lets the user interact graphically with the benchmark (see run_ui())

  • Headless:

ideal when load testing is part of a batch script (see run_headless())

By default, calling deploy() deploys Locust in headless mode, which is what you want in the general case.

Please note that this module assumes that sync_info() has been run beforehand to allow advanced network filtering for worker/master communication.

Parameters:
  • master – Host where the master will be installed

  • workers – list of Host where the workers will be installed

  • worker_density – number of workers to start per node (at most one per CPU core seems reasonable)

  • networks – network role on which master, agents and targeted hosts are deployed

  • local_expe_dir – path to local directory containing all your Locust code. This will be copied on the remote machines.

  • locustfile – main Locust entry point file (usually locustfile.py)

  • users – number of users to spawn

  • spawn_rate – rate of spawning: number of new users per second.

  • run_time – duration of the benchmark, in seconds

  • environment – environment variables to make available to the remote Locust script

  • backup_dir – local directory where the backup will be performed
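To make the spawning parameters concrete, here is a small self-contained sketch (the helper names are ours for illustration, not part of EnOSlib) of how users, spawn_rate, and run_time interact: with spawn_rate new users per second, full load is reached after ceil(users / spawn_rate) seconds, and the remainder of run_time runs at full load.

```python
import math


def ramp_up_seconds(users: int, spawn_rate: int) -> int:
    # With spawn_rate new users per second, reaching `users`
    # takes ceil(users / spawn_rate) seconds.
    return math.ceil(users / spawn_rate)


def seconds_at_full_load(users: int, spawn_rate: int, run_time: int) -> int:
    # Time left at full load once ramp-up is done (never negative).
    return max(0, run_time - ramp_up_seconds(users, spawn_rate))


print(ramp_up_seconds(100, 10))           # → 10
print(seconds_at_full_load(100, 10, 60))  # → 50
```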

Examples

import logging

import enoslib as en

en.init_logging(level=logging.INFO)
en.check()

provider_conf = {
    "backend": "libvirt",
    "resources": {
        "machines": [
            {
                "roles": ["master"],
                "flavour": "tiny",
                "number": 1,
            },
            {
                "roles": ["agent"],
                "flavour": "tiny",
                "number": 1,
            },
        ],
        "networks": [{"roles": ["r1"], "cidr": "172.16.42.0/16"}],
    },
}

conf = en.VagrantConf.from_dictionary(provider_conf)
provider = en.Vagrant(conf)
roles, networks = provider.init()

roles = en.sync_info(roles, networks)

locust = en.Locust(
    master=roles["master"][0],
    workers=roles["agent"],
    networks=networks["r1"],
    local_expe_dir="expe",
    run_time=100,
)

locust.deploy()
locust.backup()

With the following expe/locustfile.py:

import time

from locust import User, between, events, task


class QuickstartUser(User):
    wait_time = between(1, 2.5)

    @task
    def sleep1(self):
        # faking a 1 second request
        time.sleep(1)
        events.request.fire(
            request_type="noopclient",
            name="sleep1",
            response_time=1,
            response_length=0,
            response=None,
            context=None,
            exception=None,
        )
backup(backup_dir: Path | None = None)#

Backup the locust files.

We backup the remote working dir of the master.

deploy()#

Install and run locust on the nodes in headless mode.

destroy()#

Stop locust.

run_headless()#

Run Locust headless.

see https://docs.locust.io/en/stable/running-without-web-ui.html
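As a rough sketch of what headless mode amounts to (an assumption about the mapping, not EnOSlib's actual implementation), the service parameters correspond to Locust's documented command-line flags:

```python
def headless_args(locustfile: str, users: int, spawn_rate: int, run_time: int) -> list:
    # These flags are Locust's own documented CLI options; how EnOSlib
    # actually assembles and runs the command is an internal detail.
    return [
        "locust", "-f", locustfile,
        "--headless",
        "--users", str(users),
        "--spawn-rate", str(spawn_rate),
        "--run-time", f"{run_time}s",
    ]


print(" ".join(headless_args("locustfile.py", 1, 1, 60)))
# → locust -f locustfile.py --headless --users 1 --spawn-rate 1 --run-time 60s
```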

run_ui()#

Run locust with its web user interface.

Beware: this will start a new Locust cluster.