Codavel Performance Service

Bolina On-Premises - Installation Guide

This guide contains all the instructions needed to deploy the server infrastructure required to run the Codavel Performance Service, along with a brief overview of the system architecture and behavior.

Infrastructure Overview

The Codavel Performance Service improves content delivery for mobile apps, providing robustness against latency and packet loss, regardless of the user’s network, device, location, or time.


Regarding its architecture, the service is composed of multiple entities, each with its own responsibility:

  • Bolina SDK: an SDK that can be integrated into any Android or iOS application and enables access to the Codavel Performance Service.


  • Management: Automatically orchestrates the number of running Bolina Core instances, based on the overall system usage, and propagates the address and availability of each instance to the Service Discovery component.


  • Bolina Core: Handles all Bolina traffic sent from the Bolina SDK and translates it to the original protocol. It then fetches the resource from the Original Content Server, as a regular HTTP request, before sending the data back to the Bolina SDK through the Bolina protocol. It also periodically propagates its availability to the Management component; the availability can be in one of the following three stages:
    • Available: The instance is below its capacity and is ready to accept new clients.
    • Warning: The instance is reaching its maximum capacity and should not accept new clients, in order to stabilize its usage and avoid entering the Critical stage.
    • Critical: The instance is above its capacity and overall performance could be affected. It should stop receiving traffic until it drops back at least to the Warning stage.


  • Service Discovery: Holds the address and availability of each running Bolina Core instance. This data can be sent to a public HTTP endpoint at each cluster update.


  • Shield (Coming soon): Balances the incoming traffic across all the Bolina Core instances, redirecting each new client to the most suitable instance based on availability, while blocking all other non-authorized traffic from entering the infrastructure.


  • Cache (Coming soon): Caches the HTTP responses fetched by Bolina Core for a certain amount of time, in order to reduce the traffic to and usage of the Origin, while also reducing the latency between Bolina Core and the HTTP content.
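The three availability stages that Bolina Core reports to Management can be pictured as a simple threshold function over instance load. The sketch below is purely illustrative; the function name and the 70%/90% cut-offs are assumptions, not Bolina’s actual values.

```shell
# Illustrative mapping from instance load (percentage of capacity)
# to the availability stage reported to Management.
# The 70/90 thresholds are assumed for this sketch only.
stage_for_load() {
  load=$1
  if [ "$load" -lt 70 ]; then
    echo "Available"   # below capacity, ready for new clients
  elif [ "$load" -lt 90 ]; then
    echo "Warning"     # close to capacity, stop accepting new clients
  else
    echo "Critical"    # above capacity, stop receiving traffic
  fi
}

stage_for_load 50   # Available
stage_for_load 95   # Critical
```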

Configure App

Before deploying all the necessary components to enable the Codavel Performance Service, an App must be configured in Codavel’s private area. In case you have already configured your App, you may skip this section. Otherwise, please follow these steps:


1. Log in to Codavel’s private area

a. If you do not have access, please contact our sales team; they will give you access to the console.

2. Add a new app

a. Go to Home and click NEW APP

b. Fill in the fields, namely


      1. Give the app a descriptive name
      2. Add the package name (Android) and/or bundle identifier (iOS) of your app. If it does not match your app, the traffic will be discarded.
      3. Choose the on-premises App Type to use Bolina on your own infrastructure
      4. Click the CREATE APP button

3. Wait for the deployment to be ready. The creation status should say Deployment ready, and you should receive an email notification shortly.

4. Click the app info button (?) to see the deployment ID and the deployment secret

5. Take note of both the deployment ID and secret; you will need them later.

Deploy Server Components
To deploy the server components, you will need the following:



  • A pre-configured Kubernetes cluster with at least 3 worker nodes;
  • Kubectl, to set up the deployment;
  • A kubeconfig file, to allow access to the cluster during the deployment;
  • A Git client, to download our installation project;
  • Docker, to locally run one container during the installation process.
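Before proceeding, a quick pre-flight check like the one below can confirm that the required command-line tools are on the PATH. This snippet is a convenience sketch, not part of the installation project.

```shell
# Report whether each required CLI tool is available on the PATH.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: MISSING"
  fi
}

for tool in kubectl git docker; do
  check_tool "$tool"
done
```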


In case you are unable to fulfill any of these requirements, please contact our sales team for custom support.



The following instructions will deploy all the necessary components to enable the Codavel Performance Service, and must be followed in this order:


1. Download installation project


In order to access all the configuration files necessary for the deployment, run the following commands on a machine with access to the Kubernetes cluster through kubectl.


git clone

cd kubernetes-deployment


2. Management


The management component of Bolina is composed of two different services:


Control: Receives information from all the running Bolina Core instances and aggregates that data into storage. It is based on Consul and consists of a cluster of three Consul instances, each deployed as a replica of a Kubernetes StatefulSet and running on a different Kubernetes node. Every time a Bolina Core instance changes its availability, or the number of running instances changes, it propagates the information to the Scaling service and to the Service Discovery component, if enabled.


Scaling: Watches the Control service for updates and, based on the number of running Bolina Core instances and their availability, scales the number of instances when necessary to ensure that the percentage of available instances stays above a certain threshold.
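The scaling rule can be sketched as follows: scale up whenever the fraction of Available instances falls below the threshold. The function below, its name, and the 50% example threshold are illustrative assumptions, not the actual Scaling service code.

```shell
# Sketch: decide the desired number of Bolina Core replicas.
# Scales up by one when the percentage of Available instances
# drops below the configured threshold (all values are assumptions).
desired_replicas() {
  total=$1; available=$2; threshold_pct=$3
  if [ $((available * 100)) -lt $((total * threshold_pct)) ]; then
    echo $((total + 1))   # too few available instances: add one
  else
    echo "$total"         # enough available capacity: keep as is
  fi
}

desired_replicas 4 1 50   # -> 5 (25% available, below the 50% threshold)
desired_replicas 4 3 50   # -> 4 (75% available)
```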


To give the other components access to Consul’s StatefulSet, we deploy a Kubernetes service that redirects traffic on the following ports to one running Consul node:

  • 8300 (TCP): Consul’s RPC address
  • 8500 (TCP): Consul’s HTTP API
  • 8600 (TCP and UDP): Consul’s DNS server
  • 8301 (TCP and UDP): Consul’s Serf LAN port
  • 8302 (TCP and UDP): Consul’s Serf WAN port
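For reference, such a Service could look like the fragment below. This is only an illustrative sketch with assumed names and labels; the actual manifest used in the deployment is management/consul/consul-service.yaml.

```yaml
# Illustrative sketch only; names, labels, and structure are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: consul
spec:
  selector:
    app: consul
  ports:
    - { name: rpc,          port: 8300, protocol: TCP }
    - { name: http-api,     port: 8500, protocol: TCP }
    - { name: dns-tcp,      port: 8600, protocol: TCP }
    - { name: dns-udp,      port: 8600, protocol: UDP }
    - { name: serf-lan-tcp, port: 8301, protocol: TCP }
    - { name: serf-lan-udp, port: 8301, protocol: UDP }
    - { name: serf-wan-tcp, port: 8302, protocol: TCP }
    - { name: serf-wan-udp, port: 8302, protocol: UDP }
```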


To deploy the Bolina Management component, follow these steps:

  • Generate and store a Consul Gossip Encryption Key, a shared key that will be used to encrypt the communication between Consul agents
export CONSUL_ENCRYPTION_KEY=$(sudo docker run --rm -i --entrypoint=/bin/sh consul:latest -c "consul keygen")

kubectl create secret generic consul --from-literal="consul-encryption-key=${CONSUL_ENCRYPTION_KEY}"
  • Store the necessary Consul config files, which will be loaded into each Consul node
kubectl create configmap consul-config-server --from-file=management/consul/config/server.json
kubectl create configmap consul-config-client --from-file=management/consul/config/client.json
  • Apply Consul’s service account and cluster access roles
kubectl apply -f management/consul/consul-service-account.yaml

kubectl apply -f management/consul/consul-cluster-roles.yaml
  • Start the service, which will redirect traffic on the previously stated ports to one of the three Consul pods
kubectl create -f management/consul/consul-service.yaml
  • Deploy the Control StatefulSet, which will start three Consul pods
kubectl create -f management/consul/consul-statefulset.yaml
  • In order to orchestrate the number of running Bolina core instances, the Scaling service needs a kube-config file that grants access to the Kubernetes cluster. This can be done by creating a config-map with that file (replace <path of kube config> with the complete path of the kube-config file, which must be named “config”. A common location is .kube/config inside your home folder):
kubectl create configmap kube-config --from-file=<path of kube config>


3. Bolina Core


The Bolina Core is deployed as a kubernetes deployment, where the number of replicas is controlled by the Scaling component, based on the system usage.

Each instance of Bolina core is represented by a Kubernetes pod, and must have internet access and be directly accessible (on ports 9001/TCP and 9002/UDP) through Bolina Shield (if enabled) or by the Bolina SDK. To achieve this, each pod runs on a different Kubernetes node, using hostPort to expose the following ports on the node that hosts the containers:


  • 9001 (TCP): Bolina’s TCP traffic
  • 9002 (UDP): Bolina’s UDP traffic
  • 8300 (TCP): Consul’s RPC address
  • 8500 (TCP): Consul’s HTTP API
  • 8600 (TCP and UDP): Consul’s DNS server
  • 8301 (TCP and UDP): Consul’s Serf LAN port
  • 8302 (TCP and UDP): Consul’s Serf WAN port
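As an illustration of the hostPort mechanism for the two Bolina ports, the pod spec might resemble the fragment below. The container name and structure shown are assumptions; the actual manifest is bolina-core/bolina/bolina-deployment.yaml.

```yaml
# Illustrative fragment only; the real spec lives in bolina-deployment.yaml.
containers:
  - name: bolina-core
    ports:
      - { containerPort: 9001, hostPort: 9001, protocol: TCP }  # Bolina TCP traffic
      - { containerPort: 9002, hostPort: 9002, protocol: UDP }  # Bolina UDP traffic
```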


The steps to create the deployment for the Bolina Core are the following:


  • Store the default Bolina configuration files, to be loaded by the Bolina container


kubectl create configmap bolina-config --from-file=bolina-core/bolina/config/bolina_config.template.json


  • [optional] Copy your server certificates to the appropriate place (using the full paths of your full chain and private key, respectively). By default, the server will use self-signed certificates, which are not advised for production environments; we strongly recommend replacing them with your own certificates using the following command:


cp bolina-core/bolina/config/certs/


  • Store the server certificates in a kubernetes configmap
kubectl create configmap bolina-certs --from-file=bolina-core/bolina/config/certs/


  • Store your Deployment ID and Secret (see the Configure App section for instructions on how to obtain this information), to be loaded by the Bolina container


kubectl create secret generic bolina --from-literal="bolina-deployment-id=<deployment id>" --from-literal="bolina-deployment-secret=<deployment secret>"
  • [optional] By default, each Bolina core instance will advertise the Kubernetes node IP address, which is only accessible from inside the Kubernetes cluster. If you did not enable the Bolina Shield component and wish to access the service from the outside, edit line 76 of the file bolina-core/bolina/bolina-deployment.yaml and replace the value $NODE_IP in the expression (…)export PUBLIC_IP=$NODE_IP(…) with the desired expression/command to fetch the machine’s public IP.

Example if using AWS, fetching the public IP from the EC2 instance metadata endpoint:

export PUBLIC_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)


  • Start the Bolina core Deployment. This will create only the configuration for the deployment, as the number of replicas starts at 0; this value is updated afterwards by the Scaling component


kubectl create -f bolina-core/bolina/bolina-deployment.yaml


  • Start the scaling deployment, which will launch one pod and start orchestrating the number of Bolina core instances


kubectl create -f management/scaling/watcher.yaml


4. [optional] Service Discovery


The information regarding the available Bolina instances can be obtained by using our Service Discovery component. This entity parses the information obtained from our Management component and propagates it to a configurable HTTP endpoint each time there is an update on the number of running instances or their availability.


If you wish to enable this component, follow these instructions:


  • Define the HTTP endpoint to where the information should be sent (replace <https://your_endpoint_url:port/endpoint> with the actual endpoint):


kubectl create secret generic bolina-service-discovery --from-literal="service-discovery-endpoint=<https://your_endpoint_url:port/endpoint>"


  • Start the Service Discovery, which will launch one pod that communicates with the defined endpoint and propagates the information


kubectl create -f service-discovery/control-plane.yaml


For each update, the configured endpoint will receive a POST message containing a JSON body composed of an array of Bolina endpoints. Each entry represents an available connection to a Bolina core endpoint and holds the following information:


  • IP Address (String): ["Address"]["IP"]
  • TCP Port (int): ["Address"]["Port"]["TCP"]
  • UDP Port (int): ["Address"]["Port"]["UDP"]
  • Server ID (String: a random identifier, unique for each Bolina core instance): ["Metadata"]["ServerID"]
  • Status (String: information regarding the instance’s availability): ["Metadata"]["Status"]
    • "passing": The instance can accept new clients
    • "warning": The instance should only receive data from current clients
    • "critical": The instance is not available


Below is an example for two Bolina endpoints, each available at its IP address, listening for TCP on port 9001 and for UDP on port 9002, with the IDs -805390934 and 202130133, respectively:


[
  {
    "Address": {
      "IP": "",
      "Port": {
        "TCP": 9001,
        "UDP": 9002
      }
    },
    "Metadata": {
      "ServerID": "-805390934",
      "Status": "passing"
    }
  },
  {
    "Address": {
      "IP": "",
      "Port": {
        "TCP": 9001,
        "UDP": 9002
      }
    },
    "Metadata": {
      "ServerID": "202130133",
      "Status": "passing"
    }
  }
]
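A consumer of these updates only needs to parse this array. As a minimal, hypothetical sketch (the helper name and the abbreviated payload below are illustrative, and a real consumer would use a proper JSON parser), the number of endpoints reporting "passing" can be counted like this:

```shell
# Hypothetical consumer sketch: count endpoints whose Status is "passing"
# using only grep/wc. The sample payload is abbreviated, not a real update.
payload='[{"Metadata": {"Status": "passing"}}, {"Metadata": {"Status": "critical"}}]'

count_passing() {
  printf '%s' "$1" | grep -o '"Status": *"passing"' | wc -l
}

count_passing "$payload"   # -> 1
```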