Red Hat OpenShift deployment guide#

Starburst Enterprise platform (SEP) is available as an operator on OpenShift.

Prerequisites#

The following are required to deploy SEP on OpenShift:

  • Access to an OpenShift cluster with correctly sized nodes, IAM credentials, and sufficient Elastic IPs.

  • A previously installed and configured Kubernetes environment, including access to kubectl.

  • An editor suitable for editing YAML files.

  • Your SEP license file.

  • The latest OpenShift Container Platform (OCP) client for your platform, as described in the OpenShift documentation, with the oc executable copied into your path (usually /usr/local/bin).
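
After installing the oc client, you can confirm that the executable is on your path; the exact output varies by release:

$ oc version --client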

Getting up and running#

Before you get started installing SEP, we suggest you read our reference documentation.

Installation#

You can install SEP on OpenShift using one of the following methods:

  • Starburst’s Kubernetes deployment. This preferred method allows you to deploy any supported version of SEP.

  • Starburst’s certified operator in the OpenShift Operator Hub. Note that the supported SEP version may be several versions behind the latest release. We strongly recommend using the Kubernetes deployment instead.

Starburst’s Kubernetes deployment#

We strongly recommend deploying SEP on OpenShift using Kubernetes. Starburst’s Kubernetes deployment on OpenShift follows the same steps as with any other Kubernetes service. More information on Starburst’s Kubernetes deployment is available in our Kubernetes reference documentation.
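
For example, registering Starburst's chart repository with Helm works the same way as on any other Kubernetes cluster. The following sketch assumes you have credentials for Starburst's Harbor chart repository:

$ helm repo add --username <registry username> --password <registry password> \
    starburstdata https://harbor.starburstdata.net/chartrepo/starburstdata
$ helm repo update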

Starburst Enterprise Helm operator deployment#

In OpenShift, the Helm operator utilizes SEP Helm charts to deploy SEP and the Hive Metastore Service as separate custom resources.

Before you deploy SEP, you must install the operator on your OpenShift cluster. There are two ways to install the operator:

  • The starburst-enterprise-helm-operator Helm chart

  • The OpenShift OperatorHub

Note

Starburst recommends installing the operator using the starburst-enterprise-helm-operator Helm chart since it allows you to specify your desired SEP version.

Using the Helm chart#

To install the operator with the starburst-enterprise-helm-operator Helm chart, execute the following helm upgrade command with the --install flag. Specify your desired SEP version using the --version argument, and pass registry access credentials from the registry-access.yaml file using the --values argument.

$ helm upgrade my-sep-operator starburstdata/starburst-enterprise-helm-operator \
    --install \
    --version 462.0.0 \
    --values ./registry-access.yaml
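
The registry-access.yaml file referenced above holds your registry credentials. As a minimal sketch, assuming the registryCredentials values used by Starburst's Helm charts and placeholder credentials:

registryCredentials:
  enabled: true
  registry: harbor.starburstdata.net/starburstdata
  username: <registry username>
  password: <registry password>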

Using the OpenShift OperatorHub#

Use the following steps to install the operator with the OpenShift OperatorHub:

  1. Log in to the OCP web console using your administrator credentials for Red Hat OCP.

  2. In the left-hand menu, click Project > Create project, and provide a meaningful name for the project, such as starburst-enterprise.

  3. Click Create to create the project. Creating a separate project for your SEP deployment makes it easier to distinguish between each of the SEP resources in your OpenShift cluster.

  4. In the left-hand menu, click Operators > OperatorHub.

  5. At the top of the screen, expand the Project drop-down, and select the name of the project you just created.

  6. In the OperatorHub search field, enter starburst.

  7. Click on the Starburst Enterprise Helm operator tile, then click Install.

  8. On the Create Operator Subscription page, in the Installation mode section, choose A specific namespace on the cluster.

  9. In the Installed namespace field, select the Starburst project you just created.

  10. Leave all other options as default, and click Subscribe to finish the installation.
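
You can verify that the operator installed successfully from a terminal; this assumes the starburst-enterprise project name suggested earlier:

$ oc get csv -n starburst-enterprise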

Now that the Starburst Enterprise Helm operator is installed in your OCP cluster, you must add a license file.

License configuration#

The Starburst Enterprise Helm operator is not configured with a license file by default. Use the following steps to add a license to your SEP cluster; a CLI alternative follows the steps:

  1. In the OCP web console, go to Workloads > Secrets, then click Create.

  2. Expand the dropdown menu, and select Key-value secret.

  3. In the Secret name field, enter starburstdata.

  4. In the Key field, enter starburstdata.license.

  5. In the Value field, click Browse, then select your Starburst Enterprise license file from your local machine.

  6. Click Create to create the secret.
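
Alternatively, you can create an equivalent secret from a terminal with oc; this sketch assumes your license file is saved locally as starburstdata.license:

$ oc create secret generic starburstdata \
    --from-file=starburstdata.license=./starburstdata.license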

Once you have created the secret, you must add the license file property to the operator’s ClusterServiceVersion (CSV). Use the following steps to add this property:

  1. In the OCP web console, go to Installed Operators.

  2. Click Starburst Enterprise Helm operator.

  3. Click the YAML tab.

  4. Add the license file property starburstPlatformLicense: starburstdata at the first indentation level under the spec field for the Starburst Enterprise resource. The following example shows the proper format:

    spec:
      starburstPlatformLicense: starburstdata
    
  5. Click Save, then Reload.

  6. Make any additional configuration changes required for your deployment.

Resource installation#

After you have completed all the configurations for your deployment of SEP, you can install each resource:

  1. In the OCP web console, click Installed Operators > Starburst Enterprise Helm operator.

  2. Click Create instance for the resource you want to install, and provide a meaningful name for this instance of the resource, such as starburst-enterprise-production. Click Create.

  3. Go to Workloads > Pods to track the installation status of the resource. You can also track it from the CLI, as shown below.
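
For example, to watch pod status from a terminal, assuming the starburst-enterprise project:

$ oc get pods -n starburst-enterprise -w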

To access your SEP cluster in OpenShift, you must create a route that points to the appropriate Kubernetes service. To create a route, go to the OCP web console, and select Networking > Routes. Select starburst in the Service field, and choose the Target port that matches your cluster’s configuration. You can also create the route from the CLI, as shown below.
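
As a minimal sketch, assuming the default starburst service name and a single exposed port:

$ oc expose service starburst
$ oc get route starburst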

If you need Ranger in your cluster, you can install it using Starburst’s Helm chart. To configure Ranger, read our Configuring Starburst Enterprise with Ranger in Kubernetes documentation.

Next steps#

Your cluster is now operational! You can connect to it with your client tools and start querying your data sources.

Follow these steps to quickly test your deployed cluster:

  1. Create a route to the default ‘starburst’ service. If you changed the name in the expose section, use the new name.

  2. Run the following command using the CLI with the configured route:

trino --server <URL from route> --catalog tpch

  3. Run SHOW SCHEMAS; in the CLI to see a list of schemas available to query, with names such as tiny, sf1, sf100, and others.
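
Against the tpch catalog, the output looks similar to the following:

trino> SHOW SCHEMAS;
       Schema
--------------------
 information_schema
 sf1
 sf100
 sf1000
 sf10000
 sf100000
 sf300
 sf3000
 sf30000
 tiny
(10 rows)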

We’ve created an operations guide to get you started with common first steps in cluster operations.

It includes advice on starting with a small initial configuration, which you can build on with our cluster sizing and performance video training.

Troubleshooting#

SEP is powerful, enterprise-grade software with many moving parts. If you find you need help troubleshooting, the following FAQ covers some common questions.

FAQ#

Q: Once it’s deployed, how do I access my cluster?

A: You can use the CLI on a terminal or the Web UI to access your cluster. For example:

  • Trino CLI command: ./trino --server example-starburst-enterprise.apps.demo.rht-sbu.io --catalog hive

  • Web UI URL: http://example-starburst-enterprise.apps.demo.rht-sbu.io

  • Many other client applications can be connected and used to run queries, create dashboards, and more.

Q: I need to make administrative changes that require a shell prompt. How do I get a command line shell prompt in a container within my cluster?

A: On OCP, you get a shell prompt in a pod. You need the name of the pod you want to work from, so first log in to your cluster. For example:

oc login -u kubeadmin -p XXXXX-XXXXX-XXXXX-XXXX https://api.demo.rht-sbu.io:6443

Get the list of running pods:

❯ oc get pod -o wide
NAME                                                        READY   STATUS    RESTARTS   AGE   IP            NODE                                         NOMINATED NODE   READINESS GATES
hive-XXXXXXXXX-lhj7l                                        1/1     Running   0          27m   10.131.2.XX   ip-10-0-139-XXX.us-west-2.compute.internal   <none>           <none>
starburst-enterprise-coordinator-example-XXXXXXXXX-4bzrv    1/1     Running   0          27m   10.129.2.XX   ip-10-0-153-XXX.us-west-2.compute.internal   <none>           <none>
starburst-enterprise-operator-7c4ff6dd8f-2xxrr              1/1     Running   0          41m   10.131.2.XX   ip-10-0-139-XXX.us-west-2.compute.internal   <none>           <none>
starburst-enterprise-worker-example-XXXXXXXXX-522j8         1/1     Running   0          27m   10.131.2.XX   ip-10-0-139-XXX.us-west-2.compute.internal   <none>           <none>
starburst-enterprise-worker-example-XXXXXXXXX-kwxhr         1/1     Running   0          27m   10.130.2.XX   ip-10-0-162-XXX.us-west-2.compute.internal   <none>           <none>
starburst-enterprise-worker-example-XXXXXXXXX-phlqq         1/1     Running   0          27m   10.129.2.XX   ip-10-0-153-XXX.us-west-2.compute.internal   <none>           <none>

The pod name is the first value in a record. Use the pod name to open a shell:

❯ oc rsh starburst-enterprise-coordinator-example-XXXXXXXXX-4bzrv

A shell prompt appears; its exact form depends on the shell in the container image. For example:

sh-4.4$

Q: Is there a way to get a shell prompt through the OCP web console?

A: Yes. Log in to your OCP web console and navigate to Workloads > Pods. Select the pod you want a terminal for, and click the Terminal tab.

Q: I’ve added a new data source. How do I update the configuration to recognize it?

A: Follow the making configuration changes section to edit your YAML configuration. Find additionalCatalogs, and add an entry for your new data source. For example, to add a PostgreSQL data source called mydatabase:

    additionalCatalogs:
      mydatabase: |
        connector.name=postgresql
        connection-url=jdbc:postgresql://172.30.XX.64:5432/pgbench
        connection-user=pgbench
        connection-password=postgres123

Once your changes are complete, click Save and then Reload to deploy your changes. Note that this restarts the coordinator and all workers on the cluster, and might take a little while.
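
Once the pods have restarted, you can confirm that the new catalog is available; this assumes the route created earlier:

$ trino --server <URL from route> --execute "SHOW CATALOGS;"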