OpenSlides on your own server: My installation on the “Cube”

OpenSlides is a powerful platform for digital meetings, voting and structured decision-making processes. Many organisations use external hosting services, but for my way of working one thing was clear: I want OpenSlides on my own server, completely under my control, with clear and transparent documentation.

In the following, I describe step by step how I set up OpenSlides 4 on my home server, the Cube. The server runs Ubuntu, Docker and docker-compose and is reachable via the domain openslides.terruhn.it. The entire stack is cleanly separated, maintainable and quick to update.

What do online elections mean for our democracy?

Before we get started, I would like to clarify one key aspect. Traditional elections are organised either anonymously or by name, and there are good reasons for both forms. When we as a people elect our government, we do so anonymously. This anonymity protects against influence by third parties, because in retrospect no one can be sure whether forced or bought voting behaviour was actually implemented. That is why we go behind the privacy screen individually. This protects voters and is an essential basis for free and legitimate elections.

At the same time, there is a paper trail, i.e. the option to recount ballot papers if necessary in the event of discrepancies. To ensure that no allocation to individual persons is possible, the poll workers only check the ballot papers for validity. Only valid votes that cannot be traced back to individual voters remain.

This is precisely where online voting systems reach their structural limits. Anonymous online elections cannot be implemented with the same traceability and verifiability, as there are no physical ballot papers that could be recounted independently.

Online, reliable and transparent elections can therefore be realised primarily by roll call. These are well suited to certain decision-making processes, but are not suitable for all use cases, such as a general election. For tools such as OpenSlides, this means that the area of application should be clearly defined and deliberately chosen.

Bruce Schneier has written extensively on this topic and I have been following him for decades. He is my benchmark when it comes to data security in the broadest sense.

Lessons learnt

There is a good reason why I am now rebuilding OpenSlides on the Cube for a third time. The first installation was essentially an exposed host, with the aim of making the service accessible from the outside at all. The second step covered IPv6 accessibility, DNS, DynDNS, FritzBox, SSL and integration behind Caddy, i.e. clean technical access from the outside.

With this third round, the focus shifts once again: now it is no longer primarily about accessibility, but about operational stability, maintainability and an architecture that can still be clearly read and reliably maintained even after months of other projects.

The new structure is therefore not so much another attempt as a deliberate maturing of the set-up: away from an evolved history and towards a comprehensible, reproducible and sustainable form of operation. The aim of this third round is to bring together what has been learnt so far and incorporate it into the architecture from the outset.¹

Target image and architecture

For the new setup on the Cube, I am pursuing a clearly separated target image. Caddy remains the only public entry point and handles TLS termination as well as the connection to my existing external world of FritzBox, DNS and DynDNS.

OpenSlides itself runs behind this as a freshly created, internally connected instance in the Docker stack and no longer has its own public certificate or accessibility logic. In future, all user data will live under /data/terruhn.it/openslides/, deliberately outside the root filesystem, with separate areas for the application, persistent PostgreSQL data, backups and operational help files.

Backups are conceived as a combination of persistent data storage, additional database dumps and the already established Restic setup, while error and update messages are sent via the existing mail dispatch with msmtp. OpenSlides follows the official setup path as closely as possible so that the installation remains traceable, reproducible and maintainable in the long term.
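As a sketch of how the additional database dumps could be scheduled, a cron entry along these lines would do; the container, database and user names here are assumptions, and the real backup setup is documented separately:

```shell
# /etc/cron.d/openslides-dump -- sketch; container, db and user names are assumptions
30 2 * * * root docker exec openslides_postgres_1 pg_dump -U openslides openslides | gzip > /data/terruhn.it/openslides/backup/db-$(date +\%F).sql.gz
```

The gzipped dump then lands in the backup area, where the existing Restic setup can pick it up.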

This means that the Cube is no longer a customised structure, but a form of operation in which responsibilities, data paths and maintenance routes are clearly separated from the outset.

Preparation

For the rebuild, I initially just created the new target structure under /data/terruhn.it/openslides/ and fetched the current² openslides management binary from GitHub into the new app directory.

The clean basis for the fresh installation is now ready: separate directories for the application, PostgreSQL data, backups, log files and runbook as well as the tool with which the new OpenSlides instance is officially created.

mkdir -p /data/terruhn.it/openslides/{app,postgres,backup,log,runbook,archive}
chown -R root:root /data/terruhn.it/openslides
chmod 755 /data/terruhn.it/openslides
find /data/terruhn.it/openslides -maxdepth 1 -type d | sort
cd /data/terruhn.it/openslides/app
wget https://github.com/OpenSlides/openslides-manage-service/releases/download/latest/openslides
chmod 755 openslides
ls -l openslides

Home Improvement

Firstly, the new workspace was prepared under /data/terruhn.it/openslides/app.

mkdir -p /data/terruhn.it/openslides/{app,archive,backup,log,postgres,runbook}
mkdir -p /data/terruhn.it/openslides/app/secrets
chmod 700 /data/terruhn.it/openslides/app/secrets

The structure under /data/terruhn.it/openslides clearly separates the current application status, the permanent user data and the project-related operating files.

/data/terruhn.it/openslides/
├── app/        -> running compose/configuration files, .env, secrets
│   ├── config.yml
│   ├── docker-compose.yml
│   ├── .env
│   └── secrets/
├── archive/    -> archived old states, exported intermediate states or discarded files
├── backup/     -> local backups relating to the OpenSlides installation
├── log/        -> project-related log files or collected diagnostic outputs
├── postgres/   -> persistent PostgreSQL data from OpenSlides
└── runbook/    -> accompanying operational documentation and work notes

Here is the thinking behind each directory:

/data/terruhn.it/openslides/

This is the common root directory for the OpenSlides installation on the Cube.

There are typically two different types of things:

  • the work and configuration area
  • the persistent data

/data/terruhn.it/openslides/app

This is the application directory.

The files with which you control the installation are located here:

  • config.yml
  • docker-compose.yml
  • .env
  • secrets/

So this is the place for everything that describes and starts the stack.

In day-to-day operation, this is the directory you work from:

cd /data/terruhn.it/openslides/app
docker-compose ps
docker-compose up -d
vi docker-compose.yml

/data/terruhn.it/openslides/app/secrets

This is where the local secret files for the stack are located, for example:

  • auth_token_key
  • auth_cookie_key
  • manage_auth_password
  • internal_auth_password
  • postgres_password
  • superadmin

These files belong to the operation, not to the actual application data.

They are under app because they are referenced directly by the compose definition.
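In the generated compose file, that reference typically takes this shape (a sketch, not the generated file verbatim):

```yaml
# sketch: top-level secrets mapping local files into the stack
secrets:
  postgres_password:
    file: ./secrets/postgres_password
  auth_token_key:
    file: ./secrets/auth_token_key
```

Because the paths are relative, the secrets have to live next to the compose file, hence under app/.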

/data/terruhn.it/openslides/postgres

This is the persistent data area of the PostgreSQL database.

This is where the actual OpenSlides database is located, i.e. the database files themselves.

Separation is important:

  • app/ describes the stack
  • postgres/ contains the user data

This is helpful for backups and restores because you can see immediately:

  • To reconstruct the setup, you need app/
  • For the actual OpenSlides data you need postgres/

Why this separation makes sense

It makes three things clear:

1. Configuration and data are not mixed

You do not have to search between Compose files and database blocks.

2. Backups become more understandable

You can consciously distinguish between:

  • Operating files
  • User data

3. Rebuilding becomes easier

If you want to recreate the stack, the data directory remains separate from it.

Rebuild OpenSlides

As Docker Compose by default derives its container and network names from the directory of the compose file, starting from /data/terruhn.it/openslides/app would lead to generic names with the prefix app_. To obtain descriptive object names from the outset, the project name was therefore pinned in the .env with COMPOSE_PROJECT_NAME=openslides. This results in comprehensible names such as openslides_proxy_1, openslides_postgres_1 or the openslides_uplink network.

cd /data/terruhn.it/openslides/app
cat .env
COMPOSE_PROJECT_NAME=openslides
OPENSLIDES_EMAIL_HOST_PASSWORD=SMTP_MAIL_PASSWORD_HERE

I use the OpenSlides binary to create the default configuration in config.yml.

cd /data/terruhn.it/openslides/app
./openslides config-create-default .

I then adapted the config.yml to the target image of the Cube:

  • a pinned release version instead of latest
  • no local HTTPS within OpenSlides
  • no ACME handling of its own
  • mail dispatch via SMTP
  • binding the OpenSlides proxy only to 127.0.0.1:8000

Among other things, this section of config.yml was essential:

host: 127.0.0.1
port: 8000
enableLocalHTTPS: false
enableAutoHTTPS: false
defaults:
  containerRegistry: ghcr.io/openslides/openslides
  tag: 4.2.29

OpenSlides generates its own docker-compose.yml on this basis:

./openslides config --config config.yml .

In the next step, I further customised the generated docker-compose.yml to my specifications:

Firstly, the PostgreSQL data path.
Instead of the named volume provided by the generator

volumes:
  - postgres-data:/var/lib/postgresql/data

I set a bind mount onto the data filesystem:

volumes:
  - /data/terruhn.it/openslides/postgres:/var/lib/postgresql/data

Secondly, the network names.
The networks, which the generator names only functionally, were given descriptive names:

email:
  name: openslides_email
  internal: false
frontend:
  name: openslides_frontend
  internal: true
data:
  name: openslides_data
  internal: true

Thirdly, the uplink network as an external coupling network to Caddy.
From the network managed internally by the stack

uplink:
  internal: false

it became:

uplink:
  external: true
  name: openslides_uplink

Fourthly, removing the named volume at the bottom of the compose file that is no longer required.
So this block was deleted:

volumes:
  postgres-data:

Here is a patch, without guarantee :-)

cd /data/terruhn.it/openslides/app

python3 - <<'PY'
from pathlib import Path

p = Path("docker-compose.yml")
s = p.read_text()

s = s.replace(
    "      - postgres-data:/var/lib/postgresql/data",
    "      - /data/terruhn.it/openslides/postgres:/var/lib/postgresql/data"
)

s = s.replace(
    "  uplink:\n    internal: false",
    "  uplink:\n    external: true\n    name: openslides_uplink"
)
s = s.replace(
    "  email:\n    internal: false",
    "  email:\n    name: openslides_email\n    internal: false"
)
s = s.replace(
    "  frontend:\n    internal: true",
    "  frontend:\n    name: openslides_frontend\n    internal: true"
)
s = s.replace(
    "  data:\n    internal: true",
    "  data:\n    name: openslides_data\n    internal: true"
)

s = s.replace(
    "\nvolumes:\n  postgres-data:\n",
    "\n"
)

p.write_text(s)
PY
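The str.replace calls above fail silently if the generated file ever drifts from the expected wording. A defensive variant, shown here against a tiny stand-in string rather than the real compose file, could verify each substitution before writing anything back:

```python
def replace_exact(text: str, old: str, new: str) -> str:
    """Replace old with new, but fail loudly unless old occurs exactly once."""
    count = text.count(old)
    if count != 1:
        raise ValueError(f"expected exactly one occurrence of {old!r}, found {count}")
    return text.replace(old, new)

# tiny stand-in for the generated compose file, just to show the behaviour
sample = "networks:\n  uplink:\n    internal: false\n"
patched = replace_exact(
    sample,
    "  uplink:\n    internal: false",
    "  uplink:\n    external: true\n    name: openslides_uplink",
)
print("openslides_uplink" in patched)  # -> True
```

A mistyped or already-applied pattern then raises immediately instead of leaving the compose file half-patched.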

Mailing

Mail handling via SMTP was a separate point. The parameters required for this sit in the OpenSlides configuration of the backendAction service: sender address, server, user name, port and the corresponding password from the .env. This way, the dispatch route remains consistent with the rest of the system, without having to set up a separate mail substructure for OpenSlides.

services:
  backendAction:
    environment:
      DEFAULT_FROM_EMAIL: openslides@terruhn.it
      EMAIL_HOST: YOUR_MAIL_SERVER_HERE
      EMAIL_HOST_PASSWORD: ${OPENSLIDES_EMAIL_HOST_PASSWORD}
      EMAIL_HOST_USER: YOUR_MAIL_USERNAME_HERE
      EMAIL_PORT: 465
      EMAIL_CONNECTION_SECURITY: SSL/TLS

Conclusion

The required secrets were then generated and stored in the working directory. Together with the .env for mail dispatch, this results in an installation whose sensitive values are stored locally and traceably in one place.

umask 077
printf %s "$(openssl rand -base64 48 | tr -d '\n')" > secrets/auth_token_key
printf %s "$(openssl rand -base64 48 | tr -d '\n')" > secrets/auth_cookie_key
printf %s "$(openssl rand -base64 32 | tr -d '\n')" > secrets/manage_auth_password
printf %s "$(openssl rand -base64 32 | tr -d '\n')" > secrets/internal_auth_password
printf %s "$(openssl rand -base64 32 | tr -d '\n')" > secrets/postgres_password
printf %s "$(openssl rand -base64 24 | tr -d '\n')" > secrets/superadmin
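If you want to convince yourself that the umask really takes effect, here is a self-contained check in a throwaway directory; nothing in it touches the real secrets/:

```shell
# self-contained check that umask 077 yields owner-only (mode 600) secret files
tmp=$(mktemp -d)
( umask 077; printf %s "dummy-secret" > "$tmp/superadmin" )
stat -c %a "$tmp/superadmin"   # -> 600
rm -rf "$tmp"
```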

The stack was then started and the internal services initialised. At this point, access from outside was not yet routed to the stack, but answered by a maintenance response in Caddy, so that the domain reacts meaningfully during the setup.

docker-compose up -d
docker-compose ps
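The maintenance response mentioned above could look like this in the Caddyfile (a sketch; the text and the status code are my own choices):

```
openslides.terruhn.it {
    respond "OpenSlides is being rebuilt, please check back later." 503
}
```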

The Caddy entry for openslides.terruhn.it was only switched over to the new local proxy once the OpenSlides stack was fully running internally. Public access thus ends at the Cube, while OpenSlides itself remains internal.

openslides.terruhn.it {
    reverse_proxy openslides_proxy_1:8000
}
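For openslides_proxy_1:8000 to be resolvable from Caddy at all, the Caddy container must itself be attached to the external openslides_uplink network. In Caddy's own compose file, that could look like this (a sketch; the service name is an assumption):

```yaml
services:
  caddy:
    networks:
      - openslides_uplink
networks:
  openslides_uplink:
    external: true
```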

A fresh OpenSlides stack is now running on the Cube: with a pinned version, data storage under /data, a clean reverse proxy in front of the service, integrated mail dispatch via the existing system path, descriptive object names and an architecture that can be maintained far more clearly than the previous intermediate versions.

Maintenance and care

(coming in April 2026)³

Backup and restore

(coming in April 2026)⁴

Performance forecast and possible bottlenecks

The performance of the installation results from several factors. OpenSlides generates only a moderate load on the Cube itself, as the services mainly exchange lightweight JSON data. Most of the computation takes place in the participants’ browsers, while the server mainly answers short structured queries. PostgreSQL processes well-indexed data sets, and Redis serves many answers directly from memory. This keeps the resource requirements on the Cube limited.

This forms the basis for a cautious assessment:

  • Up to approx. 200 participants
    Operation is expected to be stable. The JSON updates are distributed reliably and the containers retain reserves for typical processes such as voting and applications.
  • Up to approx. 400 participants
    The installation will probably remain usable, but will react more sensitively to simultaneous actions by large groups. A targeted load test is recommended for this range at a later date.

Many potential bottlenecks do not arise in the server, but in the participants’ environment:

  • End devices
    Browser-based applications shift a large part of the load to the users’ devices. Weaker or heavily loaded hardware can delay the rendering of JSON data.
  • WLAN infrastructure on site
    In rooms with many connected devices, the local WLAN limits the speed more than the server. Live updates may arrive more slowly as a result.
  • Upload bandwidth of the connection
    Parallel transmission to all participants uses the same upload route. JSON packets are small, but the sum of all updates remains a relevant factor.
  • Peak loads
    Simultaneous opening of large agenda items or the start of a vote generate higher loads in the short term, especially in the presenter and database service.
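The upload factor can be made concrete with a rough calculation; every number below is an assumption for illustration, not a measurement of this installation:

```python
# back-of-envelope estimate of the upload bottleneck (all numbers are assumptions)
participants = 400
update_kb = 5          # assumed size of one JSON update per client, in KB
upload_mbit = 10       # assumed upload bandwidth of the connection, in Mbit/s

total_mbit = participants * update_kb * 8 / 1000   # KB -> kbit -> Mbit
seconds = total_mbit / upload_mbit
print(f"{total_mbit:.0f} Mbit per broadcast, about {seconds:.1f} s at {upload_mbit} Mbit/s")
```

Even small JSON updates, multiplied by hundreds of participants, can thus occupy a home upload for a second or more per broadcast.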

Legal categorisation for the operation of my OpenSlides 4 instance

A legally binding expert opinion for OpenSlides was issued by GOB Legal Rechtsanwaltsgesellschaft mbH on 25 March 2021. It confirms that OpenSlides 3 is suitable for secret electronic voting and elections, subject to defined technical and organisational requirements.

In a letter dated 3 June 2024, Intevation GmbH clarifies that this expert opinion also applies unchanged to OpenSlides 4.

This means that all the technical requirements, security mechanisms and voting procedures described in the report also apply to version 4, which I operate on my server.

At the same time, the report makes it very clear what the overriding conditions are for legally compliant operation. These points do not concern the software itself, but the organisational and infrastructural environment. For my own installation on the Cube, this results in the following assessment.

What my OpenSlides 4 instance fulfils technically

OpenSlides 4 already fulfils the following requirements due to its architecture and my actual installation:

  • TLS-secured communication (in my setup terminated at Caddy rather than via OpenSlides’ own Auto-HTTPS)
  • Separation of voting authorisation and voting content
  • Anonymised storage of non-nominal votes
  • Prevention of multiple votes through transactions
  • Token-based voting without inference to the person
  • No transmission of a “receipt” to participants
  • JSON-based data exchange without storing sensitive metadata in the system

These points correspond to the requirements of the expert opinion on the technical system and are fully in line with the functionality of OpenSlides 4.

What my instance does not fulfil and why this is legally relevant

The report is based on hosting by Intevation GmbH, i.e. by an external, contractually bound service provider in certified data centres (ISO/IEC 27001).

My installation differs from this in several respects:

1. No external operator

I operate the instance myself, not as an external service provider.

→ The expert opinion assumes that the operator is not part of the voting organisation and is bound by a qualified confidentiality clause.
Naturally, I do not fulfil this requirement.

2. No professional data centre

The Cube is located in a private infrastructure, not in a certified data centre.

→ The general conditions assumed in the expert opinion are missing, such as

  • Access control
  • Redundancy and backup data centre
  • Emergency planning
  • Documented IT processes

3. No formal obligation of third parties

Administrators are bound to secrecy in the expert opinion by their employment contract.

→ This organisational safeguard does not exist for a private home server.

4. Statutory basis

Whether an organisation allows secret electronic voting is determined by its statutes. This applies regardless of the technology used.

→ This must be checked on a case-by-case basis, as it is outside the technical setup.

Conclusion for the use of my instance

OpenSlides 4 technically fulfils all the requirements described in the report.
However, my installation does not meet all of the organisational requirements that the expert opinion mentions as a prerequisite for legally binding secret ballots.

That means:

  • The instance is well suited for internal work processes, informal coordination and uncontroversial decisions.
  • For officially secret elections with a risk of contestation, the following would additionally be necessary:
    • hosting in a professional infrastructure
    • operation by an external service provider bound to confidentiality
    • documented security processes
    • clear, statutory permission from the organisation

This makes it clear where OpenSlides 4 is legally secure and where a private installation – even if operated correctly from a technical point of view – does not fulfil the framework conditions required by the expert opinion.

  1. My many years of experience in the secure operation of critical infrastructure can now also contribute to this: For over 23 years, I was partially responsible for important infrastructure components of a large German company. It is precisely this focus on stability, traceability and reliable operation that now characterises the rebuilding of this OpenSlides installation. ↩︎
  2. 4.2.29 ↩︎
  3. Easter holidays for now:-) ↩︎
  4. see above ↩︎

Last Updated on March 29th, 2026 by Rene Terruhn