Provider::Distem

This tutorial leverages the Distem provider, which creates containers for you on Grid’5000.

Note

More details at: http://distem.gforge.inria.fr/

Hint

For a complete schema reference see Distem Schema

Installation

On Grid’5000, you can install EnOSlib in a virtualenv:

$ virtualenv -p python3 venv
$ source venv/bin/activate
$ pip install -U pip

$ pip install enoslib

Configuration

Since python-grid5000 is used behind the scenes, the configuration is read from a configuration file located in your home directory. It can be created with the following:

echo '
username: MYLOGIN
password: MYPASSWORD
' > ~/.python-grid5000.yaml

chmod 600 ~/.python-grid5000.yaml

With the above, you can also access the Grid’5000 API from your local machine.
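Equivalently, the configuration file can be created from Python. A minimal sketch (replace the placeholder credentials with your own):

```python
import os
import stat

# Path read by python-grid5000 (and hence EnOSlib) for credentials.
path = os.path.expanduser("~/.python-grid5000.yaml")

with open(path, "w") as f:
    f.write("username: MYLOGIN\npassword: MYPASSWORD\n")

# Credentials must not be world-readable: restrict to owner only (0600),
# like the chmod 600 above.
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
```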

External access

If you want to control your experiment from outside Grid’5000 (e.g. from your local machine), refer to the following. You can skip this section if you work from inside Grid’5000.

SSH external access

  • Solution 1: use the Grid’5000 VPN
  • Solution 2: configure your ~/.ssh/config properly:
Host *.grid5000.fr
ProxyCommand ssh -A <login>@194.254.60.33 -W "$(basename %h):%p"
User <login>
ForwardAgent yes
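To see what this entry does: for any host matching *.grid5000.fr, ssh runs the ProxyCommand with %h replaced by the target host and %p by the target port, tunnelling the connection through the access gateway. A toy illustration of that token expansion (expand_proxy_command is a hypothetical helper for demonstration, not part of ssh):

```python
def expand_proxy_command(template: str, host: str, port: int) -> str:
    # ssh substitutes %h (target host) and %p (target port) before
    # executing the ProxyCommand to reach the gateway.
    return template.replace("%h", host).replace("%p", str(port))

template = 'ssh -A login@194.254.60.33 -W "$(basename %h):%p"'
print(expand_proxy_command(template, "paravance-42.rennes.grid5000.fr", 22))
# → ssh -A login@194.254.60.33 -W "$(basename paravance-42.rennes.grid5000.fr):22"
```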

Accessing HTTP services inside Grid’5000

If you control your experiment from outside Grid’5000 (e.g. from your local machine), you may also need to reach HTTP services running inside. For instance, the Distem provider starts a web server to handle client requests. To access it from your local machine, you can either:

  • Solution 1 (general): use the Grid’5000 VPN

  • Solution 2 (HTTP traffic only): create a socks tunnel from your local machine to Grid’5000
    # on one shell
    ssh -ND 2100 access.grid5000.fr
    
    # on another shell
    export https_proxy="socks5h://localhost:2100"
    export http_proxy="socks5h://localhost:2100"
    
    # Note that browsers can work with proxy socks
    chromium-browser --proxy-server="socks5://127.0.0.1:2100" &
    
  • Solution 3 (ad hoc): create a port forwarding tunnel

    # on one shell
    ssh -NL 3000:paravance-42.rennes.grid5000.fr:3000 access.grid5000.fr
    
    # Now all traffic that goes on localhost:3000 is forwarded to paravance-42.rennes.grid5000.fr:3000
    
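With the environment variables from Solution 2 exported, HTTP clients such as the standard library or requests (which needs the requests[socks] extra for SOCKS support) pick up the tunnel automatically. A quick check from Python (port 2100 matches the ssh -ND example above):

```python
import os
import urllib.request

# Mirror the exports from the shell snippet above.
os.environ["http_proxy"] = "socks5h://localhost:2100"
os.environ["https_proxy"] = "socks5h://localhost:2100"

# Standard-library clients (and requests) read these variables.
proxies = urllib.request.getproxies()
print(proxies["http"])
```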

Basic example

We’ll imagine a system that requires one server machine and one client machine. We express this using the Distem provider:

from enoslib.api import play_on, discover_networks
from enoslib.infra.enos_distem.provider import Distem
from enoslib.infra.enos_distem.configuration import Configuration

import logging
import os


FORCE = False
logging.basicConfig(level=logging.DEBUG)


# claim the resources
conf = (
    Configuration
    .from_settings(
        job_name="wip-distem",
        force_deploy=FORCE,
        image="file:///home/msimonin/public/distem-stretch.tgz"
    )
    .add_machine(
        roles=["server"],
        cluster="paravance",
        number=1,
        flavour="large"
    )
    .add_machine(
        roles=["client"],
        cluster="paravance",
        number=1,
        flavour="large"
    )
    .finalize()
)

provider = Distem(conf)

roles, networks = provider.init()

print(roles)
print(networks)
gateway = networks[0]['gateway']
print("Gateway : %s" % gateway)

roles = discover_networks(roles, networks)

with play_on(roles=roles, gather_facts=False) as p:
    # We first need internet connectivity
    # Netmask for a subnet in g5k is a /14 netmask
    p.shell("ifconfig if0 $(hostname -I) netmask 255.252.0.0")
    p.shell("route add default gw %s dev if0" % gateway)


# Experimentation logic starts here
with play_on(roles=roles) as p:
    # flent requires python3, so we default python to python3
    p.apt_repository(repo="deb http://deb.debian.org/debian stretch main contrib non-free",
                     state="present")
    p.apt(name=["flent", "netperf", "python3-setuptools"],
          state="present")

with play_on(pattern_hosts="server", roles=roles) as p:
    p.shell("nohup netserver &")

with play_on(pattern_hosts="client", roles=roles) as p:
    p.shell("flent rrul -p all_scaled "
            + "-l 60 "
            + "-H {{ hostvars[groups['server'][0]].inventory_hostname }} "
            + "-t 'bufferbloat test' "
            + "-o result.png")
    p.fetch(src="result.png",
            dest="result")

Note

The image passed to the configuration above can be built with LXC, for instance:
lxc-create -n myimg -t download --  --dist debian --release stretch --arch amd64
mount -o bind /dev /var/lib/lxc/myimg/rootfs/dev
chroot /var/lib/lxc/myimg/rootfs
rm /etc/resolv.conf
echo "nameserver 9.9.9.9" > /etc/resolv.conf
# distem requirements: sshd
apt install openssh-server
# enoslib requirements: python
apt install -y python3
update-alternatives --install /usr/bin/python python /usr/bin/python3 1
# your configuration goes here
exit
umount /var/lib/lxc/myimg/rootfs/dev
cd /var/lib/lxc/myimg/rootfs
tar -czvf ../distem-stretch.tgz .

EnOSlib bootstraps the Distem server and agents on your nodes and starts the containers for you. In particular: