This page provides instructions on how to configure the Helm chart to install Hoop in any cloud provider.

Quick Start

This is the standard installation to evaluate Hoop. We recommend it for Proof of Concepts or testing environments.
With this installation, clients establish connections without TLS, which makes traffic subject to interception. Make sure to deploy it over a secure network and use it only with non-production resources.

Deploy the Gateway

VERSION=$(curl -s https://releases.hoop.dev/release/latest.txt)
helm upgrade --install hoop oci://ghcr.io/hoophq/helm-charts/hoop-chart --version $VERSION \
  --namespace hoopdev --create-namespace \
  --set postgres.enabled=true \
  --set defaultAgent.enabled=true \
  --set dataMasking.enabled=true \
  --set dataMasking.analyzer.replicas=2 \
  --set "config.POSTGRES_DB_URI=postgres://root:default-pwd@hoopgateway-pg/postgres?sslmode=disable" \
  --set config.API_URL=http://localhost:8009

Access it

  1. Forward the hoopgateway service ports to your local machine to access the WebApp:
kubectl port-forward service/hoopgateway 8009:8009 8010:8010 -n hoopdev
  2. Visit the WebApp at http://127.0.0.1:8009/login
The default installation method installs a Postgres database with host-mounted storage. If the node is decommissioned, all data will be lost. For more durable setups, use a Persistent Volume by providing the option below:
  • --set postgres.storageClassName=<your-storage-class>
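For example, a more durable quick-start install might look like the following. This is a sketch: gp2 is an example storage class for AWS/EKS; substitute one that exists in your cluster.

```shell
# Install with Postgres data on a Persistent Volume instead of a host mount
VERSION=$(curl -s https://releases.hoop.dev/release/latest.txt)
helm upgrade --install hoop oci://ghcr.io/hoophq/helm-charts/hoop-chart --version $VERSION \
  --namespace hoopdev --create-namespace \
  --set postgres.enabled=true \
  --set postgres.storageClassName=gp2
```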

Helm Install

To install the latest version in a new namespace (example: hoopdev), issue the command below:
VERSION=$(curl -s https://releases.hoop.dev/release/latest.txt)
helm upgrade --install hoop \
  oci://ghcr.io/hoophq/helm-charts/hoop-chart --version $VERSION \
  -f values.yaml \
  --namespace hoopdev

Overwriting or passing new attributes

It is possible to add new attributes or overwrite attributes from a base values.yaml file. In the example below, a default agent is deployed as a sidecar container and a specific version of the gateway is used.
helm upgrade --install hoop \
  oci://ghcr.io/hoophq/helm-charts/hoop-chart --version $VERSION \
  -f values.yaml \
  --set defaultAgent.enabled=true \
  --set image.gw.tag=1.45.0

Database Configuration

Hoop uses Postgres as the backend storage for all data in the system. It uses the private schema to create the system tables. The commands below create a database and a user with privileges to access the database and the required schemas.
CREATE DATABASE hoopdb;
CREATE USER hoopuser WITH ENCRYPTED PASSWORD 'my-secure-password';
-- switch to the created database
\c hoopdb
CREATE SCHEMA IF NOT EXISTS private;
GRANT ALL PRIVILEGES ON DATABASE hoopdb TO hoopuser;
GRANT ALL PRIVILEGES ON SCHEMA public to hoopuser;
GRANT ALL PRIVILEGES ON SCHEMA private to hoopuser;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO hoopuser;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA private TO hoopuser;
If the password contains special characters, make sure to URL-encode it properly when setting the connection string.
Use these values to assemble the configuration for POSTGRES_DB_URI:
  • POSTGRES_DB_URI=postgres://hoopuser:<passwd>@<db-host>:5432/hoopdb
Make sure to include the ?sslmode=disable option in the Postgres connection string if your database setup doesn't support TLS.
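As an illustration of the URL-encoding note above, a password containing special characters can be encoded with Python's standard library before being placed in the connection string (python3 and the sample password are assumptions; jq's @uri filter works as well):

```shell
# URL-encode a password with special characters for use in POSTGRES_DB_URI
ENCODED=$(python3 -c "import urllib.parse,sys; print(urllib.parse.quote(sys.argv[1], safe=''))" 'p@ss:w/rd')
echo "postgres://hoopuser:${ENCODED}@db-host:5432/hoopdb"
# → postgres://hoopuser:p%40ss%3Aw%2Frd@db-host:5432/hoopdb
```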

Agent Deployment

The agent Helm chart is configured through its values.yaml. The full set of default attributes is shown below:
# base configuration
config:
  HOOP_KEY: '<agent-key-dsn>'
  LOG_ENCODING: 'json' # json|console
  LOG_LEVEL: 'info' # debug|info|warn|error
  LOG_GRPC: '0' # 0|1|2

# image default configuration
image:
  repository: hoophq/hoopdev
  pullPolicy: Always
  tag: latest

# define extra secret configuration to load as environment variables
extraSecret: {}

# -- Deployment strategy
deploymentStrategy:
  type: Recreate

# -- CPU/Memory resource requests/limits
resources: {}
#   limits:
#     cpu: 1024m
#     memory: 1Gi
#   requests:
#     cpu: 1024m
#     memory: 1Gi

# -- Node labels for pod assignment
nodeSelector: {}

# -- Toleration labels for pod assignment
tolerations: []

# -- Affinity settings for pod assignment
affinity: {}

Helm

Make sure you have helm installed on your machine. Check the Helm installation page.
VERSION=$(curl -s https://releases.hoop.dev/release/latest.txt)
helm upgrade --install hoopagent \
	oci://ghcr.io/hoophq/helm-charts/hoopagent-chart --version $VERSION \
	--set "config.HOOP_KEY=<AUTH-KEY>"

Using Helm Manifests

VERSION=$(curl -s https://releases.hoop.dev/release/latest.txt)
helm template hoopagent \
  oci://ghcr.io/hoophq/helm-charts/hoopagent-chart --version $VERSION \
  --set 'config.HOOP_KEY=<AUTH-KEY>' \
  --set 'image.tag=1.36.16' \
  --set 'extraSecret=AWS_REGION=us-east-1'
Starting from version 1.21.9, there is only one way to configure the agent key, which is by using the config.HOOP_KEY configuration. This requires creating a key in a DSN format in the API. To use legacy options, use the Helm chart version 1.21.4.

Standalone Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hoopagent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hoopagent
  template:
    metadata:
      labels:
        app: hoopagent
    spec:
      containers:
      - name: hoopagent
        image: hoophq/hoopdev
        env:
        - name: HOOP_KEY
          value: '<AUTH-KEY>'

Sidecar Container

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: myapp
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: myapp
    template:
      metadata:
        labels:
          app: myapp
      spec:
        containers:
        - name: myapp
          image: myapp
          ports:
          - containerPort: 8000
            name: http
            protocol: TCP
        - name: hoopagent
          image: hoophq/hoopdev
          env:
          - name: HOOP_KEY
            value: '<AUTH-KEY>'

Gateway Chart Configuration

Check the environment variables section for more information about the configuration of the section config. Example:
config:
  POSTGRES_DB_URI: 'postgres://user:pwd@host:port/db'
  (...)

TLS Configuration

Starting with version 1.45+, we are transitioning to expose native protocols directly on the Gateway, eliminating the need to forward ports locally through the Hoop Command Line. This new approach requires the Gateway to terminate TLS connections, ensuring secure protocol negotiation and protecting data in transit.
  • To deploy the gateway with a valid certificate, make sure to define the environment variables TLS_KEY and TLS_CERT.
The certificate file may contain the Root and Intermediate CAs as well. Make sure to include them in the proper order; the example below shows how the certificate bundle must be assembled:
<SERVER-CERT>
<INTERMEDIATE-CA>
<ROOT_CA>
config:
  TLS_KEY: 'base64://<pem-encoded-private-key>'
  TLS_CERT: 'base64://<pem-encoded-full-certificate>'
Example of how to encode the files (use the output of each command to set the attributes above):
echo "base64://$(base64 < /tmp/tls/server.key)"
echo "base64://$(base64 < /tmp/tls/server.crt)"
Note: GNU coreutils base64 wraps output at 76 columns by default; pass -w0 to keep the value on a single line.
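If you prefer not to handle intermediate output, the encoded values can also be passed inline when installing the chart. This is a sketch: the file paths and release name are illustrative, and base64 -w0 (GNU coreutils) is used to avoid line wrapping.

```shell
# Pass the base64-encoded key and certificate directly to the chart
VERSION=$(curl -s https://releases.hoop.dev/release/latest.txt)
helm upgrade --install hoop \
  oci://ghcr.io/hoophq/helm-charts/hoop-chart --version $VERSION \
  --set "config.TLS_KEY=base64://$(base64 -w0 < /tmp/tls/server.key)" \
  --set "config.TLS_CERT=base64://$(base64 -w0 < /tmp/tls/server.crt)"
```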

Authentication

Local Authentication manages users and passwords locally and signs JWT access tokens for users.
config:
  POSTGRES_DB_URI: 'postgres://<user>:<pwd>@<db-host>:<port>/<dbname>'
  API_URL: 'https://hoopdev.yourdomain.tld'

Default Database

The chart allows deploying a Postgres database as part of the installation.
# -- Enable PostgreSQL
postgres:
  # defaults to host-mounted storage when enabled
  enabled: false

  # set a storage class name to use a Persistent Volume Claim
  storageClassName: null

  # -- Size of PVC
  size: 10Gi
  # annotations: {}
It creates a default Service resource named hoopgateway-pg. This service name can be used in the POSTGRES_DB_URI configuration.

Persistence

We recommend using SSDs for large deployments; they speed up I/O when handling many concurrent requests. The following example shows how to enable a 50GB persistent volume on AWS/EKS.
persistence:
  # -- Use persistent volume for write ahead log sessions
  enabled: true
  storageClassName: gp2

  # -- Size of persistent volume claim
  size: 50Gi

Ingress Configuration

This section covers the ingress configuration. The gateway requires exposing the ports HTTP/8009 and HTTP2/8010. The ingress configuration defines two separate resources for these ports, with annotations that depend on the ingress controller in use.
AWS Load Balancer Controller is a controller to help manage Elastic Load Balancers for a Kubernetes cluster.

# HTTP/8009 - API / WebApp
ingressApi:
  enabled: true
  # the public DNS name
  host: 'hoopgateway.yourdomain.tld'
  # the ingress class, in this case alb
  ingressClassName: 'alb'
  annotations:
    # uses the ACM service to use a valid public certificate issued by AWS
    alb.ingress.kubernetes.io/certificate-arn: 'arn:aws:acm:...'
    # the group name allows reusing the same lb for both protocols (HTTP/gRPC)
    alb.ingress.kubernetes.io/group.name: 'hoopdev'
    alb.ingress.kubernetes.io/healthcheck-path: '/'
    alb.ingress.kubernetes.io/healthcheck-protocol: 'HTTP'
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/scheme: 'internet-facing'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/target-type: 'ip'

# HTTP/8010 - gRPC Service
ingressGrpc:
  enabled: true
  # the public DNS name
  host: 'hoopdev.yourdomain.tld'
  # the ingress class, in this case alb
  ingressClassName: 'alb'
  annotations:
    # configures the type of the protocol
    alb.ingress.kubernetes.io/backend-protocol-version: 'GRPC'
    # the certificate could be reused for the same protocol
    alb.ingress.kubernetes.io/certificate-arn: 'arn:aws:acm:...'
    # the group name allows reusing the same lb for both protocols (HTTP/gRPC)
    alb.ingress.kubernetes.io/group.name: 'hoopdev'
    alb.ingress.kubernetes.io/healthcheck-path: '/'
    alb.ingress.kubernetes.io/healthcheck-protocol: 'HTTP'
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 8443}]'
    alb.ingress.kubernetes.io/scheme: 'internet-facing'
    alb.ingress.kubernetes.io/target-type: 'ip'

Proxy Protocol Services

Starting from version 1.45+, the protocols are served directly on the Gateway. These ports must be exposed to your network via a Network Load Balancer. Configuration Example:
proxyService:
  enabled: true
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
  ports:
  - name: api
    port: 443
    targetPort: 8009
  - name: grpcs
    port: 8443
    targetPort: 8010
  - name: pgproxy
    port: 15432
    targetPort: 15432
  - name: sshproxy
    port: 12222
    targetPort: 12222
  - name: rdpproxy
    port: 13389
    targetPort: 13389
This setup requires configuring TLS directly on the gateway.
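For illustration, once the Network Load Balancer is in place, clients can reach the native protocol ports directly with standard tooling. The hostname below is a placeholder for your NLB DNS name, and the usernames/database are assumptions:

```shell
# Connect to the Postgres proxy exposed on port 15432
psql -h hoopgateway.yourdomain.tld -p 15432 -U hoopuser mydb

# Open an SSH session through the SSH proxy on port 12222
ssh -p 12222 user@hoopgateway.yourdomain.tld
```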

Computing Resources

The Helm chart defaults to 1 vCPU and 1GB of memory, which is suitable for evaluation purposes only. For production setups, we recommend allocating at least 4 vCPUs and 8GB of memory to the gateway process.
resources:
  gw:
    limits:
      cpu: 4096m
      memory: 8Gi
    requests:
      cpu: 4096m
      memory: 8Gi

Image Configuration

By default, the latest version of all images is used. If you want to use a specific image or pin the versions, use the image attribute section.
image:
  gw:
    repository: hoophq/hoop
    pullPolicy: Always
    tag: latest

Default Agent Sidecar

Adding this section will deploy a default agent as a sidecar container.
defaultAgent:
  enabled: true
  imageRepository: 'hoophq/hoopdev'
  imageTag: latest
  imagePullPolicy: Always
  grpcHost: 127.0.0.1:8009
The grpcHost option configures the host the agent connects to when starting. If the gateway has TLS configured (TLS_CA env set), the host must match the certificate SAN.

Data Masking Configuration

To enable the Data Masking feature, configure the dataMasking section in your values.yaml file. It deploys Microsoft Presidio in the same namespace as the Hoop Gateway.
dataMasking:
  enabled: true
  # https://github.com/microsoft/presidio/releases
  version: latest
  # best-effort | strict
  mode: best-effort

  analyzer:
    replicas: 2
    resources:
      requests:
        cpu: 2048m
        memory: 1024Mi
      limits:
        cpu: 2500m
        memory: 2048Mi
When the dataMasking attribute is enabled, it takes control over the following configurations:
  • DLP_MODE
  • DLP_PROVIDER
  • MSPRESIDIO_ANALYZER_URL
  • MSPRESIDIO_ANONYMIZER_URL
  • GOOGLE_APPLICATION_CREDENTIALS_JSON
If you need more control over the deployment, we recommend using a standalone Helm chart of Presidio. See more details below in the Presidio Deployment section.
This attribute is available starting from version 1.37.16+ of the Helm chart.

Node Selector

This configuration describes a pod that has a node selector, disktype: ssd. This means that the pod will get scheduled on a node that has a disktype=ssd label. See this documentation for more information.
# -- Node labels for pod assignment
nodeSelector:
  disktype: ssd

Tolerations

See this article explaining how to configure tolerations
# -- Toleration labels for pod assignment
tolerations:
- effect: NoExecute
  key: spot
  value: "true"
- effect: NoSchedule
  key: spot
  value: "true"

Node Affinity

See this article explaining how to configure affinity and anti-affinity rules
# -- Affinity settings for pod assignment
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values:
          - antarctica-east1
          - antarctica-west1
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 1
      preference:
        matchExpressions:
        - key: another-node-label-key
          operator: In
          values:
          - another-node-label-value

Presidio Deployment

The Data Masking feature uses Microsoft Presidio. We provide a Helm chart that gives more control over the deployment.
helm upgrade --install presidio \
  oci://ghcr.io/hoophq/helm-charts/presidio-chart --version v0.1.0 \
  -f values.yaml
We recommend starting with at least 2 replicas to handle concurrent requests more efficiently:
analyzer:
  replicas: 2

  resources:
    requests:
      cpu: 2048m
      memory: 1024Mi
    limits:
      cpu: 2500m
      memory: 2048Mi
This guarantees at least four concurrent requests when performing model inference. If you want the workload to be burstable, decrease the CPU requests to 1024m. For maximum performance, ensure CPU requests are always set to 2 vCPUs and enable the Horizontal Pod Autoscaler.
Configure these services in the Gateway with the following environment variables:
DLP_PROVIDER=mspresidio
MSPRESIDIO_ANALYZER_URL=http://presidio-envoy-lb:3010
MSPRESIDIO_ANONYMIZER_URL=http://presidio-envoy-lb:3010
For more information about Presidio Deployment, see this section.

Generating Manifests

If you prefer using manifests over Helm, we recommend this approach: it lets you track modifications to the chart whenever a new version is released, and you can diff your versioned files to identify what has changed.
VERSION=$(curl -s https://releases.hoop.dev/release/latest.txt)
helm template hoop \
  oci://ghcr.io/hoophq/helm-charts/hoop-chart --version $VERSION \
  -f values.yaml
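The render-and-diff workflow described above could be sketched as follows. The manifests/hoop.yaml path and the use of git are assumptions about how you version your files:

```shell
# Render the chart to a versioned file, inspect what changed, then apply
VERSION=$(curl -s https://releases.hoop.dev/release/latest.txt)
helm template hoop \
  oci://ghcr.io/hoophq/helm-charts/hoop-chart --version $VERSION \
  -f values.yaml > manifests/hoop.yaml

git diff manifests/hoop.yaml        # review changes introduced by the new chart version
kubectl apply -f manifests/hoop.yaml
```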