In this article, Aris Tzermias demonstrates how you can deploy a RIPE Atlas software probe isolated from your private network using a dedicated PPP connection with your Internet Service Provider.
In February 2020, the RIPE NCC announced the availability of software probes to improve RIPE Atlas coverage. The idea is that, instead of installing a network device (the probe) in your network, you install the probe's software package on a machine connected to your network. The RIPE NCC provides packages for CentOS and Debian distributions, and there is even a Docker image developed by the community. I used the latter option to host another probe on my home OrangePi.
One might reasonably wonder, though: "Hey, do you trust that this device won't intercept your local network traffic, or tamper with your other devices in any way?" As stated in the FAQ, the probe executes measurement commands against the public internet only. Still, for the security-conscious, it is advisable to install the probe outside of one's local network (e.g., on a DMZ network).
I had followed this recommendation with my hardware probe before, but how am I going to achieve such isolation in Docker? To make matters worse, I am using the CPE provided by my ISP, which has very limited capabilities when it comes to configuring new networks. One possible solution is to reject any traffic from my probe's network to internal networks via iptables. However, this would require additional steps every time the RIPE Atlas probe is redeployed. Ideally, my goal is to deploy the software probe out of the box using a Docker Compose manifest, without any additional commands.
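For illustration, that iptables approach would look roughly like the sketch below, hooked into Docker's DOCKER-USER chain. The probe subnet (172.31.0.0/24) is a made-up example, and these rules are exactly the kind of extra, easily forgotten step I wanted to avoid:

# Hypothetical sketch of the rejected iptables approach:
# drop traffic from the probe's Docker network towards RFC 1918 ranges
iptables -I DOCKER-USER -s 172.31.0.0/24 -d 10.0.0.0/8 -j DROP
iptables -I DOCKER-USER -s 172.31.0.0/24 -d 172.16.0.0/12 -j DROP
iptables -I DOCKER-USER -s 172.31.0.0/24 -d 192.168.0.0/16 -j DROP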
So what if the network traffic of our software probe container could be proxied to another service, which in turn tunnels it to the public internet? In this post we describe how to achieve this by spinning up a dedicated Point-to-Point Protocol (PPP) connection with our ISP in Docker and using it as a "gateway" for our software probe container.
Design
The overall design is depicted in the figure below:
Our goal is to initiate a new PPPoE connection from a container inside an isolated network. That way, our RIPE Atlas probe can only perform its measurements via the PPP connection, without interfering with adjacent containers on the same host.
PPP Client
There are two prerequisites for establishing the PPP connection correctly. First of all, you need to ensure that the CPE (modem/router) of your ISP has "PPPoE Passthrough" enabled. Some ISPs have this setting enabled by default. Next, you need to retrieve your ISP username and password. Some ISPs (like mine) hand over the username and password on paper along with the CPE equipment. In other cases, you might have to fetch a configuration backup from your CPE and extract the credentials from it. YMMV. With those in hand, we can proceed with configuring the PPP client.
The required PPP configuration can easily be generated using the pppoeconf tool. It is an ncurses-based tool that helps you populate /etc/ppp/peers/dsl-provider and /etc/ppp/chap-secrets with the correct values. The next step is to dockerise the PPP client. I created a small Docker image (tzermias/ppp) that simply invokes the PPP client. Provided that our configuration is in a file named "dsl-provider" under the /etc/ppp/peers directory, we can invoke the container as follows:
$ docker run --name ppp -d --privileged \
--device=/dev/ppp \
-v /etc/ppp:/etc/ppp \
tzermias/ppp call dsl-provider
The PPP client requires access to the /dev/ppp device, so we need to expose it accordingly. Furthermore, the ppp container needs to use the macvlan network driver, so that it can speak PPPoE directly over the physical interface. A dedicated macvlan network can be created using the command below.
$ docker network create -d macvlan \
--subnet=172.16.86.0/24 \
--gateway=172.16.86.1 \
-o parent=eth0 ppp
The ppp container can be attached to this network by adding the --network ppp flag to the docker run command above.
If everything went well, the logs of the ppp container will indicate that a call was performed successfully and that a new IPv4 address was assigned to the ppp0 interface of our container.
$ docker logs ppp
Plugin rp-pppoe.so loaded.
PPP session is 394
Connected to d8:67:d9:48:XX:XX via interface eth0
Using interface ppp0
Connect: ppp0 <--> eth0
PAP authentication succeeded
peer from calling number D8:67:D9:48:XX:XX authorized
local LL address fe80::dead:beef:dead:beed
remote LL address fe80::beef:beef:dead:dead
replacing old default route to eth0 [172.16.86.1]
local IP address XX.XX.XX.XX
remote IP address YY.YY.YY.YY
The PPP client will also create a resolv.conf file under /etc/ppp, containing DNS nameservers obtained from our peer. For this, you need to have the usepeerdns option in your /etc/ppp/peers/dsl-provider file.
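For reference, a pppoeconf-generated /etc/ppp/peers/dsl-provider looks roughly like the sketch below. The exact option set varies per distribution and ISP, and the username is a placeholder; the matching password lives in /etc/ppp/chap-secrets:

# /etc/ppp/peers/dsl-provider (sketch -- your pppoeconf output may differ)
# Speak PPPoE over the container's parent interface
plugin rp-pppoe.so eth0
# Placeholder ISP login; the password is looked up in chap-secrets
user "isp-username"
noipdefault
defaultroute
replacedefaultroute
usepeerdns
hide-password
noauth
persist

# /etc/ppp/chap-secrets (sketch)
# client          server  secret
"isp-username"    *       "isp-password"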
Container Networking
Now comes the interesting part. How can we ensure that any network traffic from our ripe-atlas container passes only through the ppp container we started earlier? The answer is to share that container's network stack. Apart from the usual bridge, host or none options that one can pass to the --network setting, there is a special option named container, which allows you to use the network stack of another container. According to the docker run reference:
With the network set to container a container will share the network stack of another container. The other container’s name must be provided in the format of --network container:<name|id>.
...
Example running a Redis container with Redis binding to localhost then running the redis-cli command and connecting to the Redis server over the localhost interface.
With the ppp container running, we can easily validate that our setup works just by spinning up a test container as follows and performing some tests (e.g., a traceroute). Don't forget to mount the /etc/ppp/resolv.conf file in this container in order to have proper DNS resolution.
$ docker run -it --network container:ppp \
-v /etc/ppp/resolv.conf:/etc/resolv.conf:ro \
busybox:latest traceroute google.com
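It is also worth confirming that the default route inside the shared network namespace points at ppp0 rather than at the macvlan gateway; a quick check (not part of the original write-up) is:

$ docker run --rm --network container:ppp busybox:latest ip route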
Now we are ready to spin up our ripe-atlas probe!
RIPE Atlas in Docker
I used the ripe-atlas image by jamesits for this purpose. Probe installation and registration are straightforward and well-documented. On first run, the probe generates an SSH keypair and saves the public key under /var/atlas-probe/etc/probe-key.pub. Simply create a new directory where any relevant ripe-atlas files will be stored and mount it into the container.
$ mkdir -p atlas-probe
$ docker run --network "container:ppp" \
--detach --restart=always --log-opt max-size=10m \
--cpus=1 --memory=64m --memory-reservation=64m \
--cap-add=SYS_ADMIN --cap-add=NET_RAW --cap-add=CHOWN \
--mount type=tmpfs,destination=/var/atlasdata,tmpfs-size=64M \
-v ${PWD}/atlas-probe/etc:/var/atlas-probe/etc \
-v ${PWD}/atlas-probe/status:/var/atlas-probe/status \
-v /etc/ppp/resolv.conf:/etc/resolv.conf:ro \
-e RXTXRPT=yes --name ripe-atlas \
jamesits/ripe-atlas:latest
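Once the container has started and generated its keypair, the public key can be read straight from the bind-mounted directory on the host (you may need root, depending on the ownership the container sets on these files):

$ cat atlas-probe/etc/probe-key.pub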
After uploading the probe's public key to the RIPE Atlas web UI, the probe is ready to accept new measurements.
IPv6 Support
For those who wish the probe to run IPv6 measurements, IPv6 support can be enabled by performing the following modifications.
First, you need to enable IPv6 support in the Docker daemon. Ensure that your /etc/docker/daemon.json contains the following:
{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64"
}
Note that 2001:db8:1::/64 belongs to the IPv6 documentation prefix and is only used here as an example; on your own network, use a prefix delegated by your ISP or a ULA prefix you generated. Then, restart the Docker daemon so that it picks up these settings:
$ systemctl restart docker
The next step is to make sure that our ppp container has IPv6 support enabled. For this, we need to start the container with the net.ipv6.conf.all.disable_ipv6=0 sysctl set.
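In terms of the docker run command from earlier, this boils down to one extra flag (the same flag appears again in the consolidated commands and the Compose file below):

$ docker run --name ppp -d --privileged \
--device=/dev/ppp \
--sysctl net.ipv6.conf.all.disable_ipv6=0 \
--network ppp \
-v /etc/ppp:/etc/ppp \
tzermias/ppp call dsl-provider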
And the last step is to enable IPv6 support when performing the PPP call. This can be achieved by adding the following two options to our /etc/ppp/peers/dsl-provider config:
+ipv6
ipv6cp-use-persistent
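After recreating the ppp container with these options, a quick sanity check (again, not part of the original setup) is to look for an inet6 address on ppp0 from a container that shares its network stack:

$ docker run --rm --network container:ppp busybox:latest ip addr show dev ppp0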
Putting It Together
To sum up, one should execute the following commands to have the probe and the PPP client up and running:
# Create network
docker network create -d macvlan --subnet=172.16.86.0/24 \
--gateway=172.16.86.1 -o parent=eth0 ppp
# Create ppp container
docker run --name ppp --network ppp --restart unless-stopped \
-d --privileged --device=/dev/ppp \
--sysctl net.ipv6.conf.all.disable_ipv6=0 \
-v /etc/ppp:/etc/ppp \
tzermias/ppp call dsl-provider
# Create directory for atlas-probe config
mkdir -p atlas-probe
# Create ripe-atlas container
docker run --network "container:ppp" \
--detach --restart=always --log-opt max-size=10m \
--cpus=1 --memory=64m --memory-reservation=64m \
--cap-add=SYS_ADMIN --cap-add=NET_RAW --cap-add=CHOWN \
--mount type=tmpfs,destination=/var/atlasdata,tmpfs-size=64M \
-v ${PWD}/atlas-probe/etc:/var/atlas-probe/etc \
-v ${PWD}/atlas-probe/status:/var/atlas-probe/status \
-v /etc/ppp/resolv.conf:/etc/resolv.conf:ro \
-e RXTXRPT=yes --name ripe-atlas \
jamesits/ripe-atlas:latest
I also created the following docker-compose.yaml file to spin up the network and the containers to avoid typing those commands again and again.
---
version: '2.1'
networks:
  ppp:
    driver: macvlan
    driver_opts:
      parent: eth0
    ipam:
      config:
        - subnet: 172.16.86.0/24
          gateway: 172.16.86.1
services:
  ppp:
    container_name: ppp
    image: tzermias/ppp:latest
    restart: always
    privileged: true
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=0
    volumes:
      - /etc/ppp:/etc/ppp
    devices:
      - /dev/ppp:/dev/ppp
    networks:
      - ppp
    command:
      - "call"
      - "dsl-provider"
  ripe-atlas:
    container_name: ripe-atlas
    image: jamesits/ripe-atlas:latest
    restart: unless-stopped
    volumes:
      - ${PWD}/atlas-probe/etc:/var/atlas-probe/etc
      - ${PWD}/atlas-probe/status:/var/atlas-probe/status
      - /etc/ppp/resolv.conf:/etc/resolv.conf:ro
    tmpfs:
      - /var/atlasdata:size=64M
    environment:
      - RXTXRPT=yes
    cap_add:
      - SYS_ADMIN
      - NET_RAW
      - CHOWN
    network_mode: "service:ppp"
    depends_on:
      - ppp
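With this file in place, the whole stack (macvlan network, PPP client and probe) comes up with the usual Compose workflow; only the probe's configuration directory needs to exist beforehand:

$ mkdir -p atlas-probe
$ docker-compose up -d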
For those who wish to use this probe for their measurements, just use probe #1001487.
Software probes can increase the reach of the RIPE Atlas platform into more networks and geographic locations. Of the 260 probes currently deployed in Greece, a large portion are clearly concentrated in the Athens metro area. Wouldn't it be nice to have Internet measurements in less densely populated areas, like the islands? Furthermore, the option to share one container's network stack with another can be leveraged in other applications, such as proxying traffic through a VPN tunnel. If you have a Raspberry Pi sitting idle on your network, I'd suggest you give this option a try!
This post was originally published on blog.tzermias.com.