Operos 0.2.0 Installation Documentation


Operos minimum requirements

A minimum useful install of Operos requires 3 machines:

  • One controller node
  • Two workers

You can use bare-metal hosts or VirtualBox VMs. The nodes must meet the following requirements.

Controller node

The controller needs at least:

  • 2 CPUs,
  • 2GB of RAM,
  • 50GB or more of disk space, and
  • two network interfaces.

Worker nodes

Worker nodes require at least:

  • A single CPU
  • 2GB of RAM
  • One network interface
  • At least one block storage device of 20GB or more; workers may have as many additional disks, of any size, as you want



On the controller, one network interface must be able to reach the internet. By default, the controller will NAT/forward traffic from the worker nodes over this interface so that they can pull Docker images. This is the interface that users will use to access the Kubernetes API and the Operos User Interface. Time synchronization via NTP is also done over this interface.

The controller’s second interface is connected to the cluster-facing network. Worker nodes should be configured to PXE boot from this network.

If you are testing Operos using VirtualBox, the controller’s first interface should be a NAT interface, and the second should be a Host-only adapter. Ensure that there is no DHCP server running on the second network.
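
If you prefer to script the VirtualBox setup, the sketch below shows one way to create this topology from the command line. The VM name "operos-controller" and the host-only interface name vboxnet0 are examples only; adjust them to your environment.

# Create a host-only interface for the cluster-facing network (usually vboxnet0)
VBoxManage hostonlyif create

# Make sure VirtualBox's built-in DHCP server is not active on that network
# (this command simply fails if no DHCP server was configured)
VBoxManage dhcpserver remove --ifname vboxnet0

# NIC1 = NAT (public/internet side), NIC2 = host-only (cluster side)
VBoxManage modifyvm "operos-controller" --nic1 nat
VBoxManage modifyvm "operos-controller" --nic2 hostonly --hostonlyadapter2 vboxnet0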

The worker’s interface should be connected to the controller’s private network.

If you are installing Operos using VirtualBox, ensure that the workers’ network interface is using the same Host-only adapter as the controller’s private interface. Also ensure that the worker is configured to boot only from the network via PXE. This setting is in: <worker virtual machine name> -> Settings -> System -> Boot order.
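
The equivalent worker settings can also be applied from the command line; again, the VM name below is an example only:

# Single NIC on the same host-only network as the controller's private interface
VBoxManage modifyvm "operos-worker1" --nic1 hostonly --hostonlyadapter1 vboxnet0

# Boot only from the network via PXE
VBoxManage modifyvm "operos-worker1" --boot1 net --boot2 none --boot3 none --boot4 none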

Controller Installation

Operos only needs to be installed once, on the controller. Workers will then boot over the network from the controller and no further installation is necessary.

To begin installation:

  1. Retrieve the ISO image.

  2. Prepare the installation medium. You have several options:

    • Copy the ISO image to a USB key. You can copy the image to the USB key using the dd command on Unix, Linux, BSD, or Apple Mac OS X machines.

      You will need to determine which device is your USB memory stick once it is inserted in the machine (see the sketch after this list of options).

      Warning: selecting the wrong device could result in data loss or rendering your machine inoperable. Be careful.

      dd if=operos-0.2.<VERSION> of=/dev/<YOUR_USB_DEVICE_NAME>

      Once you have copied the image to the USB stick, insert it into the controller and configure the server to boot from the USB stick.

    • Burn the ISO to a DVD.

    • If you are installing Operos in a virtual machine using VirtualBox, select the Operos ISO filename as the contents of the virtual CDROM drive.

    • You can also use the remote media feature of the server you are designating as the controller to mount the ISO.
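
    For the USB option above, a typical session on a Linux machine might look like the following sketch. The device name is a placeholder; double-check it with lsblk (or your platform’s equivalent) before writing anything.

      # List whole disks; the USB stick shows "usb" in the TRAN column
      lsblk -d -o NAME,SIZE,MODEL,TRAN

      # Write the image, then flush buffers before removing the stick
      dd if=operos-0.2.<VERSION> of=/dev/<YOUR_USB_DEVICE_NAME>
      sync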

  3. Once the server has booted from the installation media (USB key, DVD, or virtual CD-ROM), you will be presented with a boot menu. Select the Operos Installer option.

    Boot screen

    The installer will then boot and you should see the Operos installer window.

    Select Install; you will be prompted to agree to the Operos End User License Agreement.

    Once you have accepted the EULA, Operos will probe for network information. This process can take up to 15 seconds.

Identify the cluster being created

The next screen will prompt you for information about the cluster you are installing. The controller uses this information to issue certificates for the workers that will join the cluster later.

Cluster Identity

Select the Controller Private Network

The next screen will ask you to specify the network topology by indicating which network is the private controller interface. This interface should not have any other DHCP servers running on it, as the Operos controller will act as the DHCP server for this local area network. The installer UI will highlight interfaces with an existing DHCP server and warn you if you attempt to use one of them as the controller’s private network.

Private Network

Select and configure the controller Public Interface

The next screen will ask you which interface is the public controller interface. This interface is used to reach the internet and for users to access the Kubernetes API and the Operos UI.

Public Network

Once you have selected the public interface, you can either allow the controller to receive its network assignment from a DHCP server on that network, if one is available, or configure the address statically.

Public Network Configuration

Configure Private Network

The next screen allows you to specify the parameters of the controller’s Private Network. For most installations the defaults are sensible and should not need tuning.

WARNING: You should exercise extreme caution when selecting the controller network interface. Any machine configured to PXE boot on this network will become part of the Operos cluster. As part of the process of configuring a node to be part of the cluster ALL DATA ON THE MACHINE’S DISK WILL BE WIPED. Pax Automa is not responsible for any resulting data loss.

Cluster Network Configuration

The first option specifies the subnet that the controller will use to allocate IP addresses to worker nodes, specified in CIDR notation.

The second option is a (usually larger) IP address allocation, also specified in CIDR notation, that controls the IP space used by Kubernetes pods running on worker nodes. Pods will use these addresses to communicate with each other.

The third option is the service subnet to be used by Kubernetes.

The last option sets the domain space.
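
As a purely illustrative example (these values are not necessarily the installer’s defaults, and the on-screen labels may differ), a small cluster could be configured along these lines:

Node subnet:     10.10.0.0/24     # addresses the controller assigns to worker nodes
Pod subnet:      10.32.0.0/16     # address space for Kubernetes pods
Service subnet:  10.64.0.0/16     # Kubernetes service (ClusterIP) addresses
Domain:          cluster.local    # DNS domain used within the cluster

As a rule, the three subnets should not overlap with one another or with networks that the controller’s public interface needs to reach.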

Configure the cluster disk

The installer will now ask you to pick one of the disks in the machine to be used as the controller’s permanent storage.

Warning: the installation process will erase the contents of this disk. Ensure that the disk does not contain important information.

Controller Disk Configuration

Select disk allocation policy

The next screen asks you to set the percentage of the workers’ disks that should be allocated as ephemeral system storage space.


If this policy is set at the default of 50%, half of all block storage in each worker will be used for ephemeral storage. This storage holds Docker container images, the runtime overlay file systems of Docker containers, and system logs. This data will not persist: if the machine fails, the data contained here will be lost. Further, if the disk configuration of the system changes (storage is added or removed), this space will be reinitialized on reboot and its contents will be lost.

The remaining 50% of each block device is added to the shared pool of storage available within the Ceph cluster.

If you need to store data permanently, create a PersistentVolumeClaim and reference it in your Pod definition. Persistent volumes in Operos are backed by Ceph, meaning that all blocks will be replicated to two nodes under normal operating conditions.

Cluster storage policy
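
As a minimal sketch of the persistent storage mentioned above (the claim name and size are arbitrary examples, and the cluster’s default storage class is assumed), a PersistentVolumeClaim can be created from any machine holding the Kubernetes credentials described later in this document:

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data            # arbitrary example name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi              # size of the Ceph-backed volume to request
EOF

The claim can then be referenced from a Pod definition through a persistentVolumeClaim volume.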

Set the root password

This password will be used to log in to the controller user interface. It will also be needed to log in as the root user on the controller and worker nodes, either over SSH or via the console.

Root password

Confirm the details of your installation

The final screen will give you the opportunity to review the installation information. Once you select “next”, the installation will begin.


Installation complete

If the installation was successful, you will be prompted to reboot the controller. Remember to remove the Operos installer media (USB key/DVD) so that the machine boots into Operos following the reboot. If you forget, select the boot existing OS option from the installer boot menu to boot into your installed controller.

Post reboot installation

Following the reboot, the controller needs to perform some post-installation tasks. The status screen displayed on the console of the machine can be used to observe this process. If you want to log into the machine, press Alt-Left arrow to get a console prompt, then enter “root” as the username and the password you set during the install. If the post-installation tasks fail, or if in general operation you encounter an issue that you wish to report to Pax Automa, pressing Alt-D on this console on any node will send diagnostics to Pax Automa.

Once the console reports that the Kubernetes API is available and everything is green, you can proceed to boot workers.

Controller Status

Adding workers to the cluster

Adding workers to the cluster requires only that they are connected to the same local area network as the controller’s private interface and that they are configured to network boot. Once powered on and booting over the network, they will configure themselves.

A warning about VirtualBox workers: VirtualBox contains a bug that prevents the virtual machine from booting from the network if the virtual machine is rebooted. You should explicitly power off the virtual machine and restart it if the worker fails to PXE boot.

Next steps

Now that your Operos cluster is installed you can begin to use it!

The first step is to log in to the Operos UI. The UI is available at the address of the controller’s public interface, whether received via DHCP or statically configured during the install process. If you don’t recall the address, it is displayed on the console of the controller.

Getting Kubernetes credentials

Once you have logged into the Operos UI using the username ‘root’ and the password you chose for the root user during install, click on the Users panel in the Operos sidebar and select “Download credentials”. Your browser should begin downloading a tar.gz file containing a kubeconfig file, along with a TLS certificate and private key, that you can then use to interact with Kubernetes.
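
The exact layout of the downloaded archive is not described here, so the file names below are placeholders; once extracted, though, the kubeconfig file can be used with kubectl along these lines:

tar xzf <DOWNLOADED_CREDENTIALS>.tar.gz
export KUBECONFIG=$PWD/<PATH_TO_KUBECONFIG_FILE>
kubectl get nodes    # should list the cluster's nodes once workers have joined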