Deploying The Livepatch Server Snap On Public Clouds

This section details how to launch the Livepatch server snap on public clouds, in single or multi-unit deployments, using an auto-scaling solution from the cloud provider.

Prerequisites

Before deploying the server snap, you will need the following:

  • One or more VMs running Ubuntu 22.04 LTS or later, each with at least 2 CPU cores and 4 GB of RAM.
  • A PostgreSQL DSN for an instance with a database ready to be initialized.
  • A secure vault for storing secrets (for example, AWS Secrets Manager accessed via an IAM role, or Azure Key Vault).
  • A storage solution: S3, Ceph, or a PostgreSQL connection string.
  • An Ubuntu Pro token from ubuntu.com/pro/dashboard.
  • Optional: the CVE service deployed and ready for connections.

Note: This setup is incompatible with Ubuntu Core.

Sensitive data, such as connection strings and the Pro token, should be stored in a vault that we will access later during installation.

Livepatch server installation with cloud-init

You can easily create VMs and initialize Livepatch with cloud-init: create a cloud-config.yaml file based on the following template, which includes the setup steps to install the Livepatch server:

#cloud-config
package_update: true
package_upgrade: true
packages:
  - snapd
  - jq

write_files:
  - path: /etc/livepatch/livepatch.env
    permissions: "0600"
    owner: root:root
    content: |
      export URLTEMPLATE="http://<address>/v1/patches/{filename}"
  - path: /etc/livepatch/database_url
    permissions: "0600"
    owner: root:root
  - path: /etc/livepatch/pro_token
    permissions: "0600"
    owner: root:root
  - path: /etc/livepatch/livepatch_admin_user
    permissions: "0600"
    owner: root:root
  # Storage option
  ...

runcmd:
  - |
    set -e
    snap wait system seed.loaded
    snap install aws-cli --classic
    . /etc/livepatch/livepatch.env        
    # setup PostgreSQL connection  
    ...    
    # set pro token
    ...    
    # setup storage
    ...  
    # setup user
    ...
  - |
    set -e
    # re-source the environment file; each runcmd entry runs in a separate shell
    . /etc/livepatch/livepatch.env
    snap install canonical-livepatch-server
    snap refresh --hold canonical-livepatch-server
    canonical-livepatch-server.schema-tool "$(cat /etc/livepatch/database_url)"
    
    snap set canonical-livepatch-server \
    token="$(cat /etc/livepatch/pro_token)" \
    lp.database.connection-string="$(cat /etc/livepatch/database_url)" \
    lp.server.url-template="$URLTEMPLATE" \
    lp.auth.basic.enabled=true \
    lp.patch-sync.enabled=true \
    lp.patch-sync.interval=12h \
    lp.auth.basic.users="$(cat /etc/livepatch/livepatch_admin_user)" \
    lp.server.server-address=0.0.0.0:80

    # optional, if you have the CVE snap deployed
    snap set canonical-livepatch-server \
    lp.cve-lookup.enabled=true \
    lp.cve-sync.enabled=true \
    lp.cve-sync.source-url="http(s)://<host>:port"
  - |
    rm /etc/livepatch/*


final_message: The system is up, up to date, and Livepatch server is active after $UPTIME seconds

This template config performs the following steps:

  • Waits until snapd is ready.
  • Installs the canonical-livepatch-server snap and holds automatic refreshes while it is configured.
  • Initializes the provided database with the latest schema (if the database schema is up to date, this operation won’t perform updates).
  • Points the Livepatch server to the configured database.
  • Enables the server with an Ubuntu Pro token.
  • Sets up an admin user.
  • Configures the server to listen for all traffic on port 80.
  • Starts the server with the provided configuration.
  • Cleans up temporary files used in configuration.

This cloud-init module expects to find required parameters (secrets, user strings, the database DSN) in root-only files located under /etc/livepatch/ at boot time. The specifics of retrieving the required secrets are unique to the cloud environment. We recommend using the cloud’s vault provider to securely access the secrets and write them to the required locations. The files are deleted during the last step of the cloud-init setup.
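
For example, on Azure the same pattern can be implemented by fetching the secrets from Azure Key Vault at the start of the first runcmd step. This is a minimal sketch, assuming the Azure CLI is installed and the VM has a managed identity with read access to the vault; the vault and secret names are placeholders:

# hypothetical example: populate the root-only files from Azure Key Vault
az login --identity
az keyvault secret show --vault-name <vault name> --name <database url secret> \
    --query value -o tsv > /etc/livepatch/database_url
az keyvault secret show --vault-name <vault name> --name <pro token secret> \
    --query value -o tsv > /etc/livepatch/pro_token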

In the config, you may also want to set up basic authentication and add a user for the admin tool with these commands:

snap set canonical-livepatch-server lp.auth.basic.enabled=true
snap set canonical-livepatch-server lp.auth.basic.users="$(cat /etc/livepatch/livepatch_admin_user)"

Here, the user and hashed password, in the form username:hashedpassword, are fetched from a secure vault and written to the root-only file /etc/livepatch/livepatch_admin_user.
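
One way to generate such a value is with the htpasswd tool from the apache2-utils package, assuming your Livepatch server accepts bcrypt password hashes (check the server documentation for the expected hash format):

sudo apt install apache2-utils
htpasswd -nbB admin '<password>'
# prints a line like admin:$2y$05$..., which you can store in your vault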

Make sure this file is accessible by root only by adding an entry to the write_files section:

  - path: /etc/livepatch/livepatch_admin_user
    permissions: "0600"
    owner: root:root

Note: Livepatch server requires the Pro token, database connection string, and storage connection string to operate. Do not put these values in the configuration file as plain text; use your cloud provider’s secrets manager.
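
For example, with AWS Secrets Manager the values can be created ahead of time with commands like the following (the secret names are illustrative, not required by Livepatch):

aws secretsmanager create-secret --name livepatch/pro-token \
    --secret-string "<your Ubuntu Pro token>"
aws secretsmanager create-secret --name livepatch/admin-user \
    --secret-string "username:hashedpassword"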

Additionally, you can set up periodic patch syncs from the hosted Livepatch server with:

snap set canonical-livepatch-server lp.patch-sync.enabled=true
snap set canonical-livepatch-server lp.patch-sync.interval=12h

All configuration can be done with a single snap set command by passing all configuration values in one invocation, as done in the template cloud-init module.
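
Once set, you can verify the effective configuration with snap get (note that this prints secret values, so only run it in a trusted session):

sudo snap get canonical-livepatch-server -d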

With all the configuration written in cloud-config.yaml, you can easily create VM instances by passing in the config, and they will come up with the Livepatch server installed and ready to receive traffic.
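
For example, with the AWS CLI a single test instance can be launched with the config passed as user data (a sketch; the AMI ID, key pair, and security group are placeholders):

aws ec2 run-instances \
    --image-id <ubuntu 22.04 AMI id> \
    --instance-type t3.medium \
    --key-name <key pair> \
    --security-group-ids <security group id> \
    --user-data file://cloud-config.yaml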

Automatically Scaling Deployments With A Cloud Provider

Many cloud providers offer an automatic scaling solution for applications. This section will cover AWS.

Deploying and Autoscaling on AWS

Required Resources

To set up Livepatch server on AWS using an S3 bucket for patch storage, you will need:

  • An RDS instance with PostgreSQL 12 or 14 using password authentication.
    • The DSN string with the user and password should be stored in AWS Secrets Manager.
  • A launch template with an instance type of at least t3.medium (2 vCPU, 4 GB RAM).
  • An Ubuntu Pro token stored in AWS Secrets Manager.
  • An S3 bucket with the s3:GetObject policy (the bucket must be public so clients can download patches, but you can restrict the policy to a range of IP addresses).
  • An IAM role with the AWSSecretsManagerClientReadOnlyAccess and AmazonS3ReadOnlyAccess policies attached.
  • An IAM user with Allow effects for the s3:PutObject, s3:ListBucket, and s3:GetObject actions on the S3 bucket (see the example policy after this list).
  • A secret entry for the IAM user with the key-value pairs S3AccessKey (for the S3 access key) and SecretAccessKey (for the S3 secret key).
  • A load balancer set to internet-facing.
  • An auto-scaling group using the launch template.
  • A security group to allow all internet access for HTTP and HTTPS.
  • A security group that only allows traffic from within AWS (for the server instances and PostgreSQL).
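
A minimal sketch of the policy for that IAM user, assuming a hypothetical bucket named livepatch-patches (replace with your bucket name); it can be attached as an inline policy for the user:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::livepatch-patches/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::livepatch-patches"
    }
  ]
}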

Creating The Deployment

With AWS, you can set up an auto-scaling group and a load balancer with an EC2 launch template configured to install and set up a Livepatch server instance.

To create a launch template, log in to the AWS console and navigate to the Launch Templates section in the EC2 overview page.

Select an instance type that has sufficient CPU and memory capacity. The minimum instance type for the Livepatch server to run efficiently is t3.medium, with 2 vCPU and 4 GB of memory. For the storage options, the default volume size of 8 GB will be sufficient.

For the network settings, select a security group that only allows network traffic from within AWS. We will set up a load balancer with internet access and redirect to our server instances later. For debugging purposes, you can create an SSH Key Pair to connect to the instance.

In the Advanced details section, set the IAM instance profile to the profile with the AWSSecretsManagerClientReadOnlyAccess and AmazonS3ReadOnlyAccess policies. In the same section, scroll down to the user data field. This is where you can upload or copy and paste the cloud-init module. The following example shows the cloud-init template from earlier configured to run on AWS using an S3 bucket as patch storage.

#cloud-config
package_update: true
package_upgrade: true
packages:
  - snapd
  - jq

write_files:
  - path: /etc/livepatch/livepatch.env
    permissions: '0600'
    owner: root:root
    content: |
      # database variables 
      export DB_ENDPOINT=<db endpoint>
      export PORT=5432
      export DB_SECRET_ID=<db secret id>

      # pro token variables
      export TOKEN_SECRET_ID=<token secret id>
      export ADMIN_USER_SECRET_ID=<admin user secret id>
      
      # S3 variables
      export S3_SECRET_ID=<s3 secret id>
      export S3BUCKET=<bucket name>
      export S3ENDPOINT=<bucket endpoint>
      export S3REGION=<region> 
      export URLTEMPLATE=https://<bucket name>.s3-<region>.amazonaws.com/{filename}
  - path: /etc/livepatch/database_url
    permissions: "0600"
    owner: root:root
  - path: /etc/livepatch/pro_token
    permissions: "0600"
    owner: root:root
  - path: /etc/livepatch/livepatch_admin_user
    permissions: "0600"
    owner: root:root
  - path: /etc/livepatch/s3
    permissions: "0600"
    owner: root:root

runcmd:
  - |
    set -e
    snap wait system seed.loaded
    snap install aws-cli --classic
    . /etc/livepatch/livepatch.env
    # setup PostgreSQL connection  
    aws secretsmanager get-secret-value \
     --secret-id $DB_SECRET_ID \
     --query SecretString \
     --output text \
     --region <region> | \
    jq -r \
     --arg DB_ENDPOINT "$DB_ENDPOINT" \
     --arg PORT "$PORT" \
     '"postgresql://\(.username):\(.password | @uri)@\($DB_ENDPOINT):\($PORT)"' > /etc/livepatch/database_url
    
    # set pro token
    aws secretsmanager get-secret-value \
    --secret-id $TOKEN_SECRET_ID \
    --query SecretString \
    --output text \
    --region <region> > /etc/livepatch/pro_token
    
    # setup S3 storage
    aws secretsmanager get-secret-value \
        --secret-id $S3_SECRET_ID \
        --query SecretString \
        --region <region> \
        --output text > /etc/livepatch/s3

    # setup user
    aws secretsmanager get-secret-value \
        --secret-id $ADMIN_USER_SECRET_ID \
        --query SecretString \
        --region <region> \
        --output text > /etc/livepatch/livepatch_admin_user

  - |
    set -e
    # re-source the environment file; each runcmd entry runs in a separate shell
    . /etc/livepatch/livepatch.env
    snap install canonical-livepatch-server
    snap refresh --hold canonical-livepatch-server
    canonical-livepatch-server.schema-tool "$(cat /etc/livepatch/database_url)"
  
    
    snap set canonical-livepatch-server \
    token="$(cat /etc/livepatch/pro_token)" \
    lp.database.connection-string="$(cat /etc/livepatch/database_url)" \
    lp.patch-storage.type=s3 \
    lp.server.url-template="$URLTEMPLATE" \
    lp.patch-storage.s3-bucket="$S3BUCKET" \
    lp.patch-storage.s3-endpoint="$S3ENDPOINT" \
    lp.patch-storage.s3-region="$S3REGION" \
    lp.patch-storage.s3-secure=true \
    lp.patch-storage.s3-access-key="$(cat /etc/livepatch/s3 | jq -r '.S3AccessKey' )" \
    lp.patch-storage.s3-secret-key="$(cat /etc/livepatch/s3 | jq -r '.SecretAccessKey' )" \
    lp.auth.basic.enabled=true \
    lp.patch-sync.enabled=true \
    lp.patch-sync.interval=12h \
    lp.auth.basic.users="$(cat /etc/livepatch/livepatch_admin_user)" \
    lp.server.server-address=0.0.0.0:80
  - |
    rm /etc/livepatch/*

final_message: The system is up, up to date, and Livepatch server is active after $UPTIME seconds

Replace the placeholder values in angle brackets with your relevant resource information. For this template, the S3 and database secrets are assumed to be stored as JSON objects, while the Ubuntu Pro token and admin user string are stored as plain text.
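
Based on the jq expressions in the template, the database and S3 secrets are expected to be JSON objects shaped roughly as follows (the values are placeholders):

# database secret
{"username": "<db user>", "password": "<db password>"}

# S3 secret
{"S3AccessKey": "<access key id>", "SecretAccessKey": "<secret access key>"}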

Once you have created a launch template, test the deployment by going to Launch Instances and then Launch Instance from Template. On the launch page, make sure to select the correct template version if you have multiple versions. The instance will take a few minutes to create and to configure the Livepatch server. Once the instance is ready, you can check whether the deployment is complete by running:

sudo snap logs canonical-livepatch-server

If there are no error logs, then the server has successfully initialized. If there are errors, check the status of cloud-init with:

cloud-init status --long

If the status shows error, then something went wrong during the cloud-init procedure. You can view the command output logs at /var/log/cloud-init-output.log.
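
For example, to see the most recent output of the setup commands:

sudo tail -n 50 /var/log/cloud-init-output.log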

With the server launch template ready, navigate to the Auto Scaling groups section and select Create Auto Scaling group. Give it a name and select the launch template created in the previous step.

Next, select the region and availability zones you want instances to be created in. Give the auto-scaling group a security group that allows traffic from within AWS (for the server instances) as well as all HTTP and HTTPS traffic, so a load balancer can route incoming requests. The default VPC will suffice for a multi-unit deployment.

If you already have a load balancer set up, you can link the auto-scaling group to it. If not, select Attach to a new load balancer. Set the default routing (forward to) option to Create a target group, which will set the load balancer to forward traffic to all instances created by the auto scaling group.

On the next page, configure the group size and scaling options, and optionally an automatic scaling policy and maintenance policy.

Once the auto-scaling group is created, it will begin to provision Livepatch server instances based on the given launch template. You can then log in with the admin tool (assuming you defined an admin user in the cloud-init config) by setting the endpoint URL to the public URL of the load balancer. Make sure the security settings for the load balancer allow external traffic so you can log in and perform administrative duties with the admin tool.
