Starburst PostgreSQL connector#
The Starburst PostgreSQL connector is an extended version of the PostgreSQL connector, with identical configuration and usage. The following improvements are included:
The connector supports all of the SQL statements listed in the PostgreSQL connector documentation.
ALTER TABLE EXECUTE#
The Starburst enhanced connector supports the following commands for use with ALTER TABLE EXECUTE:
The collect_statistics command is used with Managed statistics to collect statistics for a table and its columns.

The following statement collects statistics for the example_table table and all of its columns:
ALTER TABLE example_table EXECUTE collect_statistics;
Collecting statistics for all columns in a table may be unnecessarily
performance-intensive, especially for wide tables. To only collect statistics
for a subset of columns, you can include the
columns parameter with an
array of column names. For example:
ALTER TABLE example_table EXECUTE collect_statistics(columns => ARRAY['customer','line_item']);
The connector includes a number of performance improvements, detailed in the following sections.
Dynamic filtering#

Dynamic filtering is enabled by default. It causes the connector to wait for dynamic filtering to complete before starting a JDBC query.
You can disable dynamic filtering by setting the dynamic-filtering.enabled property in your catalog configuration file to false.
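For example, a minimal catalog entry disabling it (assuming the standard Trino property name):

```text
dynamic-filtering.enabled=false
```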
By default, table scans on the connector are delayed up to 20 seconds until dynamic filters are collected from the build side of joins. Using a large timeout can potentially result in more detailed dynamic filters. However, it can also increase latency for some queries.
You can configure the
dynamic-filtering.wait-timeout property in your
catalog properties file:
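For example, a sketch of a catalog properties entry; the 1m value is illustrative:

```text
dynamic-filtering.wait-timeout=1m
```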
You can use the
dynamic_filtering_wait_timeout catalog session property in a specific session:
SET SESSION example.dynamic_filtering_wait_timeout = 1s;
The maximum size of the dynamic filter predicate that is pushed down to the connector during a table scan for a column is configured using the domain-compaction-threshold property in the catalog properties file. You can use the domain_compaction_threshold catalog session property in a specific session:

SET SESSION example.domain_compaction_threshold = 10;

By default, domain-compaction-threshold is set to 32. When the dynamic predicate for a column exceeds this threshold, it is compacted into a single range predicate.
For example, if the dynamic filter collected for a date column
dt on the
fact table selects more than 32 days, the filtering condition is simplified from
dt IN ('2020-01-10', '2020-01-12',..., '2020-05-30') to
dt BETWEEN '2020-01-10' AND '2020-05-30'. Using a large threshold can result in increased table scan overhead due to a large IN list getting pushed down to the data source.
Metrics about dynamic filtering are reported in a JMX table for each catalog. They include the total number of dynamic filters, the number of completed dynamic filters, the number of available dynamic filters, and the time spent waiting for dynamic filters.
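As a sketch, you can query these metrics through the JMX catalog, assuming the JMX connector is enabled and the catalog is named example; the MBean name below follows the pattern used by JDBC-based Trino connectors and should be verified against your deployment:

```sql
SELECT * FROM jmx.current."io.trino.plugin.jdbc:name=example,type=dynamicfilteringstats";
```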
Starburst Cached Views#
The connector supports table scan redirection to improve performance and reduce load on the data source.
JDBC connection pooling#
When JDBC connection pooling is enabled, each node creates and maintains a connection pool instead of opening and closing separate connections to the data source. Each connection is available to connect to the data source and retrieve data. After completion of an operation, the connection is returned to the pool and can be reused. This improves performance by a small amount, reduces the load on any required authentication system used for establishing the connection, and helps avoid running into connection limits on data sources.
JDBC connection pooling is disabled by default. You can enable JDBC connection
pooling by setting the
connection-pool.enabled property to
true in your
catalog configuration file:
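A sketch of the catalog entry:

```text
connection-pool.enabled=true
```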
The following catalog configuration properties can be used to tune connection pooling:

connection-pool.enabled: Enable connection pooling for the catalog.
connection-pool.max-size: The maximum number of idle and active connections in the pool.
connection-pool.max-connection-lifetime: The maximum lifetime of a connection. When a connection reaches this lifetime it is removed, regardless of how recently it has been active.
connection-pool.pool-cache-max-size: The maximum size of the JDBC data source cache.
connection-pool.pool-cache-ttl: The expiration time of a cached data source when it is no longer accessed.
Managed statistics#

The connector supports Managed statistics, allowing SEP to collect and store table and column statistics that can then be used for performance optimizations in query planning.

Statistics must be collected manually using the built-in collect_statistics command; see collect_statistics for details and examples.
The connector includes a number of security-related features, detailed in the following sections.
User impersonation#

The PostgreSQL connector supports user impersonation. User impersonation can be enabled in the catalog properties file:
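A sketch of the catalog entry, assuming the Starburst property name postgresql.impersonation.enabled:

```text
postgresql.impersonation.enabled=true
```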
User impersonation in the PostgreSQL connector is based on SET ROLE. For more details, see the PostgreSQL SET ROLE documentation.
Kerberos authentication#

The connector supports Kerberos authentication using either a keytab or credential cache.
To configure Kerberos authentication with a keytab, add the following catalog configuration properties to the catalog properties file:
postgresql.authentication.type=KERBEROS
kerberos.client.principal=firstname.lastname@example.org
kerberos.client.keytab=etc/kerberos/example.keytab
kerberos.config=etc/kerberos/krb5.conf
With this configuration the user firstname.lastname@example.org, defined in the kerberos.client.principal property, is used to connect to the database, and the related Kerberos service ticket is located in the etc/kerberos/example.keytab file.
To configure Kerberos authentication with a credential cache, add the following catalog configuration properties to the catalog properties file:
postgresql.authentication.type=KERBEROS
kerberos.client.principal=firstname.lastname@example.org
kerberos.client.credential-cache.location=etc/kerberos/example.cache
kerberos.config=etc/kerberos/krb5.conf
In these configurations the user firstname.lastname@example.org, as defined in the kerberos.client.principal property, connects to the database. The related Kerberos service ticket is located in the etc/kerberos/example.keytab file, or the cached credentials in the etc/kerberos/example.cache file.
Kerberos credential pass-through#
The PostgreSQL connector can be configured to pass through Kerberos credentials, received by SEP, to the PostgreSQL database.
Configure Kerberos and SEP, following the instructions in Kerberos credential pass-through.
Then configure the connector to pass through the credentials from the server to the database in your catalog properties file and ensure the Kerberos client configuration properties are in place on all nodes.
postgresql.authentication.type=KERBEROS_PASS_THROUGH
http.authentication.krb5.config=/etc/krb5.conf
http-server.authentication.krb5.service-name=exampleServiceName
http-server.authentication.krb5.keytab=/path/to/Keytab/File
Now any database access via SEP is subject to the data access restrictions and permissions of the user supplied via Kerberos.
Password credential pass-through#
The connector supports password credential pass-through. To enable it, edit the catalog properties file to include the authentication type:
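A sketch of the catalog entry, assuming the Starburst authentication type value PASSWORD_PASS_THROUGH:

```text
postgresql.authentication.type=PASSWORD_PASS_THROUGH
```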
For more information about configurations and limitations, see Password credential pass-through.
AWS IAM authentication#
When the PostgreSQL database is deployed as an AWS RDS instance, the connector can use IAM authentication. This enhancement allows you to manage access control from SEP with IAM policies.
To enable IAM authentication, add the following configuration properties to the catalog configuration file:
postgresql.authentication.type=AWS
connection-user=<RDS username>
aws.region-name=<AWS region>
aws.token-expiration-timeout=10m
You can also configure the connector to assume a specific IAM role for authentication before creating the access token, in order to apply policies specific to SEP. Alongside this role, you must include an (informal) external identifier of a user to assume this role.
To apply an IAM role to the connector, add the following configuration properties:
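A sketch of the additional entries, assuming the property names aws.iam-role and aws.external-id; verify both against your SEP version:

```text
aws.iam-role=<IAM role ARN>
aws.external-id=<external ID>
```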
This table describes the configuration properties for IAM authentication:

connection-user: The database account used to access the RDS database instance.
aws.region-name: The name of the AWS region in which the RDS instance is deployed.
aws.iam-role: (Optional) Set an IAM role to assume for authentication before creating the access token. If set, aws.external-id must also be configured.
aws.external-id: (Optional) The informal identifier of the user who assumes the IAM role set in aws.iam-role.
aws.token-expiration-timeout: The amount of time to keep the generated RDS access tokens for each user before they are regenerated. The maximum value is 15 minutes. Defaults to 10m.
aws.access-key: The access key of the principal to authenticate with for the token generator service. Used for fixed authentication; setting this property disables automatic authentication.
aws.secret-key: The secret key of the principal to authenticate with for the token generator service. Used for fixed authentication; setting this property disables automatic authentication.
aws.session-token: (Optional) A session token for temporary credentials, such as credentials obtained from SSO. Used for fixed authentication; setting this property disables automatic authentication.
By default the connector attempts to automatically obtain its authentication credentials from the environment. The default credential provider chain attempts to obtain credentials from the following sources, in order:
Java system properties: the aws.accessKeyId and aws.secretKey system properties.
Web identity token: credentials from the environment or container.
Credential profiles file: a profiles file at the default location (~/.aws/credentials) shared by all AWS SDKs and the AWS CLI.
EC2 service credentials: credentials delivered through the Amazon EC2 container service, assuming the security manager has permission to access the value of the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI environment variable.
Instance profile credentials: credentials delivered through the Amazon EC2 metadata service.
If the SEP cluster is running on an EC2 instance, these credentials most likely come from the metadata service.
Alternatively, you can set fixed credentials for authentication. This option disables the container’s automatic attempt to locate credentials. To use fixed credentials for authentication, set the following configuration properties:
aws.access-key=<access_key>
aws.secret-key=<secret_key>
# (Optional) You can use temporary credentials, for example from SSO
aws.session-token=<session_token>