Diving into the technical infrastructure of La Contre-Voie
Tech part 1: Feedback on self-hosting our servers
In this article, we take you behind the scenes to discover the hardware we use to bring you our new digital services.
This special article is therefore written in technical jargon that may be incomprehensible to the average reader… but you can find our other articles on our digital support program or our new service offering if that’s not the kind of content you’re interested in.
At the launch of our roadmap, we pledged to develop our digital autonomy with small-scale self-hosted servers and to launch our digital support program, which meant widening our technical arsenal. Over the course of 2024, we installed our first self-hosted servers and configured our first services on them.
Initially, this article was intended to present our servers, our SSO and the software we use, but as the article became far too long, we decided to split it into three.
This first article will therefore deal with the self-hosting of our servers.
At the end of 2023, we only had three… but today, our infrastructure is made up of eight servers distributed as follows:
All these servers communicate via a private WireGuard network, with the exception of sarus, our supervision machine.
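As an illustration, a WireGuard mesh like this is usually defined by one small configuration file per node; here is a minimal sketch with placeholder keys, names and addresses (not our actual configuration):

```ini
# /etc/wireguard/wg0.conf on one node — hypothetical example
[Interface]
Address = 10.8.0.1/24         # this node's address on the private network
PrivateKey = <node-private-key>
ListenPort = 51820

# one [Peer] section per other server, e.g. balearica
[Peer]
PublicKey = <peer-public-key>
AllowedIPs = 10.8.0.2/32      # only that peer's private address is routed here
Endpoint = peer.example.org:51820
PersistentKeepalive = 25      # keep NAT mappings alive
```

The interface is then brought up with `wg-quick up wg0`, and each server can reach the others over their private 10.8.0.x addresses.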
🔗Virtual servers
We have four small VPSs at PulseHeberg:
- balearica and antigone are our historical servers, commissioned in 2022; balearica (8 GB RAM, 50 GB SSD) hosts a large part of our services and antigone (2 GB RAM, 250 GB HDD) serves as a storage machine.
- griza, commissioned in early 2024, hosts our mail service, and stanley is a new server hosting a secondary mail service. Both have 2 GB RAM and 250 GB HDD, like antigone.
- Initially, all mail was hosted on balearica, but we decided to dedicate an entire VPS to mail for reasons of storage space and stability.
Then, we also have a virtual server graciously donated by the association Picasoft since 2022 (if you’re reading this, thank you!), named sarus, which serves as our internal and external supervision machine.
As these are VPS servers, we don’t have much to say about their physical hosting conditions. The rest of this section will therefore focus on the servers we host ourselves.
🔗Self-hosted servers
Finally, we can tell you about our little self-hosting project.
The Bêta, a non-profit community center in Angoulême that we introduced in our previous article, has been hosting two of our servers since early 2024:
- pavonina, an Odroid microcomputer (model H3), 64-bit architecture, 64 GB RAM and 2 TB SSD storage, hosts all our à la carte services (including four Nextclouds) without a hitch, and consumes an average of ten watts.
- monacha, an old recovery laptop with 4 GB RAM and 128 GB SSD, serves as our CI/CD and test machine. It consumes an average of 15 watts.
And just a few weeks ago, the Maison des Peuples et de la Paix (MPP), a second Angoulême-based community center and our head office, began hosting a server:
- demoiselle, another Odroid microcomputer (model HC4), ARM architecture, 4 GB RAM and 128 GB SSD, 8 watts average power consumption, will soon serve as our storage machine.
We were fortunate to receive a generous donation from 12b of the Distrilab at the end of 2023: the two Odroid computers that now run the next generation of our infrastructure, as our servers pavonina and demoiselle. (12b, if you’re reading this, please know that without your contribution, it would have taken us another year. Thanks to you!)
🔗Other hardware features
The network: the Beta is equipped with a Freebox (on optical fiber) that was already there before we arrived, and to which our machines are connected. The MPP, on the other hand, has an Aquilenet fiber connection.
Electricity: we have purchased UPSs to keep our servers and network equipment powered through outages and micro-cuts. These are simple domestic UPSs from Eaton, model 3S, with a capacity of 550 VA, controllable via USB.
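For reference, USB-controllable UPSs like these are commonly monitored with Network UPS Tools (NUT); a minimal sketch of the driver declaration, with an illustrative name (we don’t claim this is our exact setup):

```ini
# /etc/nut/ups.conf — hypothetical example
[eaton3s]
    driver = usbhid-ups    # standard NUT driver for most USB HID UPSs, including Eaton's
    port = auto
    desc = "Eaton 3S 550 VA"
```

Once the driver and `upsd` are running, `upsc eaton3s` reports battery charge, load and status for a supervision tool to poll, and `upsmon` can shut machines down cleanly when the battery runs low.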
Fire safety: we’ve had to address our hosts’ concerns about the risk of our machines or batteries catching fire… Given the electricity consumed (less than 200 watts in total, and less than 50 watts per device), this low-voltage equipment isn’t much more dangerous than a battery-powered clock radio. But taking a few extra precautions is easy, so we took the initiative to address these concerns:
- by investing in two smoke detectors, one for the Beta and one for the MPP, neatly placed next to the batteries;
- by purchasing a USB thermometer (Elitech brand) to measure room temperature in real time (we still have to connect it to our supervision tool…).
🔗Security of the premises
The community centers that host us often welcome a large public; sometimes more than a hundred people during lively evenings. It goes without saying that we can’t just set up a server in a busy place, and that our electrical outlets and Internet access must never be reachable by the public.
To secure our installation, we have established criteria based on SecNumCloud (page 28, chapter 11), which defines three types of zones:
- public zones, accessible to anyone;
- private zones, which may correspond to the establishment’s administrative offices, where the public may not enter unaccompanied;
- sensitive zones, which are intended to house servers and must not be in direct contact with a public zone.
Well, that’s a very rough summary, as the standard is very wordy and demanding. We obviously don’t have the means to comply with it in its entirety, but it’s a useful resource for establishing the basic criteria for a secure infrastructure. For example, we haven’t yet installed a biometric identification system for access to our servers, and it’s not going to happen any time soon…
Thus, at both the Beta and the MPP, our servers are located in a locked cabinet accessible only from a private area (administrative offices to which the public is forbidden access). The Internet box and UPS are both stored in the cabinet with the servers.
In short: we do our best, we take hardware security seriously with the small means we have, even if it’s not quite up to the highest standards of datacenter server hosting.
🔗Security of hosted data
Even if we take great care to secure physical access to our servers, what happens in the event of a successful intrusion by a malicious person (or worse, a search warrant)?
We have a simple remedy, albeit a rather uncommon one on servers: full-disk encryption. To allow remote unlocking, we installed an SSH server in the boot partition of our machines. The only drawback: if a server reboots for any reason (power failure, freeze…), remote human intervention is required to decrypt the disks and start the services.
For x86 machines, we followed the TeDomum tutorial; for our ARM machine running on Armbian, we roughly followed this tutorial.
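On Debian-based systems, the approach in those tutorials generally boils down to putting the Dropbear SSH server into the initramfs; here is a sketch of the usual steps, assuming Debian’s package and file names (paths vary between releases, and our exact setup may differ):

```shell
# Install a small SSH server into the initramfs (Debian/Ubuntu)
apt install dropbear-initramfs

# Authorize an admin key for the pre-boot environment
# (on older releases the file lives in /etc/dropbear-initramfs/ instead)
echo 'ssh-ed25519 AAAA... admin@workstation' >> /etc/dropbear/initramfs/authorized_keys

# Rebuild the initramfs so Dropbear and the key are embedded
update-initramfs -u

# After a reboot, connect to the initramfs and unlock the encrypted root:
ssh root@server.example.org cryptroot-unlock
```

Once the passphrase is accepted, the boot process continues normally and the regular system SSH server takes over.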
In this way, if someone manages to get into our host’s private offices, open the locked cabinet and retrieve our hard disks, they won’t be able to do anything with them without knowing the decryption password. In the case of a search, this will require the police to obtain the agreement of a judge and to call on our technical team.
🔗Contractual commitments
And finally, as an essential last step in our partnership with these two community centers, we have each signed a hosting agreement.
This agreement, drawn up by us, formalizes the mutual commitments on our part, and on the part of the structures hosting us.
In broad terms, here are the commitments of the Beta and the MPP:
- Lend us a small, enclosed space with electricity and Internet access;
- Do not disconnect the machines without good reason (emergency, force majeure, risk of any kind to people or property, etc.);
- Do not intervene on machines or hard disks without our authorization;
- Allow us to intervene occasionally for maintenance purposes or in the event of machine breakdown;
- Allow us access to the Internet box parameters for configuration and supervision purposes;
- Ensure the security of the premises;
- In the event of termination of the agreement, give us two weeks to move out, except in cases of force majeure.
And here are our commitments:
- To set up our experimental digital support program with the Beta and the MPP in its entirety;
- To provide these structures with a level of service corresponding to their technical needs (disk space, machine capacity, etc.), within reasonable limits and in proportion to the hosting service provided;
- To not exceed 200 W of electricity consumption, not including the Internet box;
- To assume on behalf of the Beta and the MPP any legal responsibility that may be imputed to them concerning hosted data, with the exception of data put online by these structures themselves;
- To take care of the premises.
As you will have gathered, this agreement is a non-monetary exchange of services, which has enabled us to beta-test our digital support program and to host our servers free of charge.
We don’t pay for electricity or premises, we can intervene on our servers at any time if need be, and in exchange, we support the Beta and the MPP at our expense and maintain their digital services for free.
🔗Future enhancements
For the time being, we’re considering several developments for our servers, to come by next year or later, depending on our priorities.
Increasing our storage capacity. We’ve been using low-capacity SSDs to start with, but if we need to store more data, we’ll need to increase the capacity of these drives. We’d also like to create data redundancy at hard disk level (with RAID 10, for example), in addition to the redundancy and backups we already perform at software level.
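For the RAID 10 option, Linux software RAID via mdadm would be a natural fit; a sketch with four hypothetical disks (device names are placeholders, not our hardware):

```shell
# Create a RAID 10 array striped across two mirrored pairs
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Watch the initial sync and check array health
cat /proc/mdstat
mdadm --detail /dev/md0
```

RAID 10 halves usable capacity but survives the loss of one disk per mirrored pair; it complements, rather than replaces, the software-level redundancy and backups mentioned above.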
Shut down some VPSes. Now that we have our self-hosted servers, we can transfer certain services to them. In particular, we plan to terminate the hosting of balearica and antigone, renting a smaller VPS to host the services we don’t want to transfer (mail, link shortener, etc.). This should also reduce our technical costs.
Host new servers. We’re going to add at least one more server to handle encoding for PeerTube videos, and then we’ll take it from there. It’s conceivable that we’ll separate the computing and storage servers to make maintenance easier, and invest in decent storage servers for our needs (a microcomputer might do the trick).
But before that, we’ll need to consolidate our infrastructure, including software and supervision (temperature, consumption measurements, alerts…).
And we’re done introducing our hardware! In the next article, we’ll talk about our unified authentication solution, or SSO.
And don’t forget: our actions depend directly on your support. If you appreciate this feedback, would like to see our association exist in the future, and have the means to do so, you can make a donation!
See you soon!