Snowflake catalogs #

You can use a Snowflake catalog to configure access to Snowflake from Starburst Galaxy.

Create a catalog #

To create a Snowflake catalog, select Catalogs in the main navigation and click Create catalog. On the Select a data source screen, click Snowflake.

Select a data source

Select a cloud provider #

The Cloud provider configuration is necessary so that Starburst Galaxy can correctly match catalogs and clusters.

The data source configured in a catalog and the cluster that uses it must operate in the same cloud provider and region for performance and cost reasons.

Select cloud provider

If you select the Google Cloud icon, a pop-up note reminds you that Snowflake can only be used with Google Cloud as a read-only catalog. No write operations can be performed.

Define catalog name and description #

The Name of the catalog is visible in the Query editor and other clients. It is used to identify the catalog when writing SQL or showing the catalog and its nested schemas and tables in client applications.

The name is displayed in the Query editor, and when running a SHOW CATALOGS command. It is used to fully qualify the name of any table in SQL queries following the catalogname.schemaname.tablename syntax. For example, you can run the following query in the sample cluster without first setting the catalog or schema context.

SELECT * FROM tpch.sf1.nation;

The Description is a short, optional paragraph that provides further details about the catalog. It appears in the Starburst Galaxy user interface and can help other users determine what data can be accessed with the catalog.

Enter catalog name and description

Configure the connection #

The connection to the database requires username and password authentication, as well as details to connect to the database server, typically a hostname or IP address and a port. The following sections detail the setup for the supported cloud providers.

Snowflake configuration #

The user credentials must belong to a user that has been granted permissions to use the Snowflake role with the required access rights.

To configure the connection to your Snowflake cluster, you must provide the following details:

  • Snowflake account identifier: specify your Snowflake account identifier, where the full account name may include additional segments that identify the region and cloud platform where your account is hosted. See account identifiers to learn more about Snowflake identifier formats.

  • Username: enter a Snowflake username.

  • Password: specify the password for this username.

  • Database name: specify the database you want to connect to.

  • Warehouse name: specify the warehouse you want to connect to.

  • Snowflake role: select a role with the required access rights.

    Snowflake connection dialog
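After you save the catalog, you can verify connectivity from the Query editor with a simple metadata query. The catalog name mysnowflake is a placeholder; substitute the catalog name you chose:

SHOW SCHEMAS FROM mysnowflake;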

Test the connection #

Once you have configured the connection details, click Test connection to confirm data access is working. If the test is successful, you can save the catalog.

If the test fails, look over your entries in the configuration fields, correct any errors, and try again. If the test continues to fail, Galaxy provides diagnostic information that you can use to fix the data source configuration in the cloud provider system.

Test connection

Connect catalog #

Click Connect catalog, and proceed to set permissions where you can grant access to certain roles.

Set permissions #

This optional step allows you to configure read-only access or full read and write access to the catalog.

Click Skip to go straight to adding the catalog to a cluster. If you skip this step, you can grant read-only access or full read and write access to your catalog for any role later. Skipping this step leaves only administrative roles, and the role that is current while creating the catalog, with access to all schemas and tables within the catalog.

Setting read-only permissions grants the specified roles read-only access to the catalog. As a result, users have read-only access to all contained schemas, tables, and views.

Setting read/write permissions grants the specified roles full read and write access to the catalog. As a result, users have full read and write access to all contained schemas, tables, and views.

Use the following steps to assign read and write or read-only access to roles:

  • In the Role-level permissions section, expand the menu in the Roles with read and write access field.
  • From the list, select one or more roles to grant read and write access to.
  • Expand the menu in the Roles with read access field.
  • Select one or more roles from the list to grant read access to.
  • Click Save access controls.

      Set permissions for read and write screenshot

Add to cluster #

You can add your catalog to a cluster later by editing a cluster. Click Skip to proceed to the catalogs page.

Use the following steps to add your catalog to an existing cluster or create a new cluster in the same cloud region:

  • In the Add to cluster section, expand the menu in the Select cluster field.
  • Select one or more existing clusters from the drop-down menu.
  • Click Create a new cluster to create a new cluster in the same region, and add it to the cluster selection menu.
  • Click Add to cluster to view your new catalog’s configuration.

      Add to cluster

The Pending changes to clusters dialog appears when you try to add a catalog to a running cluster.

  • In the Pending changes to cluster dialog, click Return to catalogs to edit the catalog or create a new catalog.
  • Click Go to clusters to confirm the addition of the catalog to the running cluster.
  • On the Clusters page, click the Update icon beside the running cluster to add the catalog.

      pending changes to cluster dialog

SQL support #

The catalog provides read access and write access to data and metadata in the Snowflake database.

The following section provides details of the SQL support available with Snowflake catalogs.
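For example, assuming a Snowflake catalog named mysnowflake with a schema sales and tables orders and orders_archive (all hypothetical names), both read and write statements are supported on cloud providers that allow writes:

SELECT * FROM mysnowflake.sales.orders;

INSERT INTO mysnowflake.sales.orders_archive SELECT * FROM mysnowflake.sales.orders;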

Data management details #

If a WHERE clause is specified, the DELETE operation only works if the predicate in the clause can be fully pushed down to the data source.
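For example, assuming a catalog named mysnowflake and a table sales.orders with a status column (hypothetical names), a simple comparison against a constant can typically be pushed down to Snowflake, so the following DELETE succeeds:

DELETE FROM mysnowflake.sales.orders WHERE status = 'CANCELED';

A DELETE whose WHERE clause cannot be fully pushed down to Snowflake, for example one whose predicate relies on an expression the data source cannot evaluate, fails with an error.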