Starburst Synapse connector#

The Synapse connector allows you to query an external Azure Synapse SQL pool in Starburst Enterprise platform (SEP).

Requirements#

To connect to an Azure Synapse SQL pool, you need:

Configuration#

The connector can query a single Synapse SQL pool. Create a catalog properties file that specifies the Synapse connector by setting the connector.name property to synapse.

For example, to access a Synapse SQL pool as example, create the file etc/catalog/example.properties. Replace the connection properties as appropriate for your setup:

connector.name=synapse
connection-url=jdbc:sqlserver://<serverName>:<portNumber>;database=<SQLpoolName>
connection-user=sepuser
connection-password=secret1v3

The connection-url parameter uses the SQL Server JDBC driver connection string syntax, from which the database parameter is inherited. This basic configuration works for both dedicated and serverless pools, as described in the following sections.

For dedicated SQL pools#

For this connection option:

  • The serverName component of the JDBC URL is shown in the Azure UI as the SQL endpoint, in the following format: yourserver.sql.azuresynapse.net.

  • For the database parameter, specify the name of the SQL pool that you create with DDL such as CREATE SCHEMA synapse_pool.pool1.

Using the default port 1433, this results in a JDBC string like the following:

jdbc:sqlserver://yourserver.sql.azuresynapse.net:1433;database=pool1

For serverless SQL pools#

For this connection option:

  • The serverName component of the JDBC URL is found in the Azure UI as the SQL on-demand endpoint, in the following format: yourserver-ondemand.sql.azuresynapse.net.

  • For the database parameter, specify the fixed term master.

This results in a JDBC string like the following:

jdbc:sqlserver://yourserver-ondemand.sql.azuresynapse.net:1433;database=master

General configuration properties#

The following table describes general catalog configuration properties for the connector:

| Property name | Description |
| --- | --- |
| case-insensitive-name-matching | Support case insensitive schema and table names. Defaults to false. |
| case-insensitive-name-matching.cache-ttl | Duration for which case insensitive schema and table names are cached. Defaults to 1m. |
| case-insensitive-name-matching.config-file | Path to a name mapping configuration file in JSON format that allows Trino to disambiguate between schemas and tables with similar names in different cases. Defaults to null. |
| case-insensitive-name-matching.config-file.refresh-period | Frequency with which Trino checks the name matching configuration file for changes. Defaults to 0s (refresh disabled). |
| metadata.cache-ttl | Duration for which metadata, including table and column statistics, is cached. Defaults to 0s (caching disabled). |
| metadata.cache-missing | Cache the fact that metadata, including table and column statistics, is not available. Defaults to false. |
| metadata.schemas.cache-ttl | Duration for which schema metadata is cached. Defaults to the value of metadata.cache-ttl. |
| metadata.tables.cache-ttl | Duration for which table metadata is cached. Defaults to the value of metadata.cache-ttl. |
| metadata.statistics.cache-ttl | Duration for which table statistics are cached. Defaults to the value of metadata.cache-ttl. |
| metadata.cache-maximum-size | Maximum number of objects stored in the metadata cache. Defaults to 10000. |
| write.batch-size | Maximum number of statements in a batched execution. Do not change this setting from the default; non-default values may negatively impact performance. Defaults to 1000. |
| dynamic-filtering.enabled | Push down dynamic filters into JDBC queries. Defaults to true. |
| dynamic-filtering.wait-timeout | Maximum duration for which Trino waits for dynamic filters to be collected from the build side of joins before starting a JDBC query. A large timeout can potentially result in more detailed dynamic filters, but can also increase latency for some queries. Defaults to 20s. |
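
For example, to enable case insensitive name matching and cache metadata for ten minutes, you could add the following to the catalog properties file. The values shown are illustrative, not recommendations:

case-insensitive-name-matching=true
metadata.cache-ttl=10m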

Type mapping#

Because SEP and Synapse each support types that the other does not, this connector modifies some types when reading or writing data.

Synapse to SEP type mapping#

The following read type mapping applies when data is read from existing tables in Synapse, or inserted into existing tables in Synapse from SEP.

Synapse to SEP read type mapping#

| Synapse type | SEP type | Notes |
| --- | --- | --- |
| BIT | BOOLEAN | |
| BIGINT | BIGINT | |
| SMALLINT | SMALLINT | |
| TINYINT | TINYINT | |
| INT | INTEGER | |
| DECIMAL(p, s), NUMERIC(p, s) | DECIMAL(p, s) | For p <= 38 |
| DOUBLE PRECISION | DOUBLE | |
| FLOAT | DOUBLE | |
| BINARY | VARBINARY | |
| CHAR(n) | CHAR(n) | |
| VARCHAR(n) | VARCHAR(n) | |
| NCHAR(n) | CHAR(n) | |
| NVARCHAR(n) | VARCHAR(n) | |
| DATE | DATE | |
| DATETIME2(n) | TIMESTAMP(n) | |
| TIME | TIME | |

No other types are supported.
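
As a minimal illustration of the read mapping, assume a Synapse table example_schema.orders with an NVARCHAR(100) column and a DATETIME2(3) column; the catalog, schema, and table names here are hypothetical. Describing the table through the catalog shows the mapped SEP types varchar(100) and timestamp(3):

DESCRIBE example.example_schema.orders;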

SEP to Synapse type mapping#

The following write type mapping applies when tables are created in Synapse from SEP.

SEP to Synapse write type mapping#

| SEP type | Synapse type | Notes |
| --- | --- | --- |
| BOOLEAN | BIT | |
| BIGINT | BIGINT | |
| INTEGER | INT | |
| SMALLINT | SMALLINT | |
| DOUBLE | DOUBLE PRECISION | |
| CHAR(n <= 4000) | NCHAR(n) | |
| VARCHAR(n <= 4000) | NVARCHAR(n) | |
| VARBINARY | VARBINARY(8000) | |
| DATE | DATE | |
| TIME | TIME | |
| TIMESTAMP(n <= 7) without time zone | DATETIME2(n) | |
| TIMESTAMP(n > 7) without time zone | DATETIME2(7) | |
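
For example, creating the following table through the catalog produces NVARCHAR(100) and DATETIME2(3) columns on the Synapse side, per the mapping above. The catalog, schema, and table names are illustrative:

CREATE TABLE example.example_schema.events (
    name VARCHAR(100),
    created_at TIMESTAMP(3)
);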

Type mapping configuration properties#

The following properties can be used to configure how data types from the connected data source are mapped to Trino data types and how the metadata is cached in Trino.

| Property name | Description | Default value |
| --- | --- | --- |
| unsupported-type-handling | Configure how unsupported column data types are handled: IGNORE, the column is not accessible; CONVERT_TO_VARCHAR, the column is converted to unbounded VARCHAR. The respective catalog session property is unsupported_type_handling. | IGNORE |
| jdbc-types-mapped-to-varchar | Allow forced mapping of a comma-separated list of data types to convert to unbounded VARCHAR. | |
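
For example, to expose columns with unsupported types as unbounded VARCHAR instead of hiding them, add the following to the catalog properties file:

unsupported-type-handling=CONVERT_TO_VARCHAR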

SQL support#

The connector provides read and write access to data and metadata in the Synapse SQL pool. In addition to the globally available and read operation statements, the connector supports the following features:

Warning

Transact-SQL syntax and associated features are not supported.

SQL DELETE#

If a WHERE clause is specified, the DELETE operation only works if the predicate in the clause can be fully pushed down to the data source.
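
For example, the following statement works because the predicate on the numeric orderkey column can be fully pushed down; the table and column names are illustrative:

DELETE FROM orders WHERE orderkey = 123;

A DELETE whose WHERE clause cannot be fully pushed down fails.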

Creating tables#

Synapse-native table structure and table distribution options for CREATE TABLE are not supported.

In addition, Synapse cannot create tables in serverless pools; this restriction also applies to SEP catalogs that use the Synapse connector.

ALTER TABLE RENAME TO#

The connector does not support renaming tables across multiple schemas. For example, the following statement is supported:

ALTER TABLE example.schema_one.table_one RENAME TO example.schema_one.table_two

The following statement attempts to rename a table across schemas, and therefore is not supported:

ALTER TABLE example.schema_one.table_one RENAME TO example.schema_two.table_two

ALTER TABLE EXECUTE#

The connector supports the following commands for use with ALTER TABLE EXECUTE:

collect_statistics#

The collect_statistics command is used with Managed statistics to collect statistics for a table and its columns.

The following statement collects statistics for the example_table table and all of its columns:

ALTER TABLE example_table EXECUTE collect_statistics;

Collecting statistics for all columns in a table may be unnecessarily performance-intensive, especially for wide tables. To only collect statistics for a subset of columns, you can include the columns parameter with an array of column names. For example:

ALTER TABLE example_table
    EXECUTE collect_statistics(columns => ARRAY['customer','line_item']);

Performance#

The connector includes a number of performance-enhancing features, detailed in the following sections.

Table statistics#

The Synapse SQL connector can use table and column statistics for cost-based optimizations, to improve query processing performance based on the actual data in the data source.

Table statistics are enabled by default in Synapse SQL.

The connector can retrieve and use information stored in single-column statistics. Synapse can automatically create column statistics for certain columns. If column statistics were not created automatically for a certain column, you can create them by executing the following statement in Synapse SQL:

CREATE STATISTICS my_statistics_name ON table_schema.table_name (column_name);

Synapse SQL routinely updates the statistics as long as the AUTO_CREATE_STATISTICS option is set to ON in Synapse SQL. In some cases, you may want to force a statistics update, such as after defining new column statistics or after changing data in the table. You can do so by executing the following statement in Synapse SQL:

UPDATE STATISTICS table_schema.table_name(stat_name);

Managed statistics#

The connector supports Managed statistics, allowing SEP to collect and store its own table and column statistics that can then be used for performance optimizations in query planning.

Statistics must be collected manually using the built-in collect_statistics command; see collect_statistics for details and examples.
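
After collection, you can inspect the stored statistics with the standard SHOW STATS statement; the table name is illustrative:

SHOW STATS FOR example_table;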

Dynamic filtering#

Dynamic filtering is enabled by default. It causes the connector to wait for dynamic filtering to complete before starting a JDBC query.

You can disable dynamic filtering by setting the dynamic-filtering.enabled property in your catalog configuration file to false.
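
For example, add the following line to the catalog properties file:

dynamic-filtering.enabled=false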

Wait timeout#

By default, table scans on the connector are delayed up to 20 seconds until dynamic filters are collected from the build side of joins. Using a large timeout can potentially result in more detailed dynamic filters. However, it can also increase latency for some queries.

You can configure the dynamic-filtering.wait-timeout property in your catalog properties file:

dynamic-filtering.wait-timeout=1m

You can use the dynamic_filtering_wait_timeout catalog session property in a specific session:

SET SESSION example.dynamic_filtering_wait_timeout = 1s;

Compaction#

The maximum size of the dynamic filter predicate that is pushed down to the connector during a table scan for a column is configured using the domain-compaction-threshold property in the catalog properties file:

domain-compaction-threshold=100

You can use the domain_compaction_threshold catalog session property:

SET SESSION example.domain_compaction_threshold = 10;

By default, domain-compaction-threshold is set to 32. When the dynamic predicate for a column exceeds this threshold, it is compacted into a single range predicate.

For example, if the dynamic filter collected for a date column dt on the fact table selects more than 32 days, the filtering condition is simplified from dt IN ('2020-01-10', '2020-01-12',..., '2020-05-30') to dt BETWEEN '2020-01-10' AND '2020-05-30'. Using a large threshold can result in increased table scan overhead due to a large IN list getting pushed down to the data source.

Metrics#

Metrics about dynamic filtering are reported in a JMX table for each catalog:

jmx.current."io.trino.plugin.jdbc:name=example,type=dynamicfilteringstats"

Metrics include information about the total number of dynamic filters, the number of completed dynamic filters, the number of available dynamic filters and the time spent waiting for dynamic filters.
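
For example, with the built-in jmx catalog enabled and a catalog named example, you can query these metrics directly:

SELECT * FROM jmx.current."io.trino.plugin.jdbc:name=example,type=dynamicfilteringstats";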

Pushdown#

The connector supports pushdown for a number of operations:

Aggregate pushdown for the following functions:

Cost-based join pushdown#

The connector supports cost-based Join pushdown to make intelligent decisions about whether to push down a join operation to the data source.

When cost-based join pushdown is enabled, the connector only pushes down join operations if the available Table statistics suggest that doing so improves performance. Note that if no table statistics are available, join operation pushdown does not occur to avoid a potential decrease in query performance.

The following table describes catalog configuration properties for join pushdown:

| Property name | Description | Default value |
| --- | --- | --- |
| join-pushdown.enabled | Enable join pushdown. The equivalent catalog session property is join_pushdown_enabled. | true |
| join-pushdown.strategy | Strategy used to evaluate whether join operations are pushed down. Set to AUTOMATIC to enable cost-based join pushdown, or EAGER to push down joins whenever possible. Note that EAGER can push down joins even when table statistics are unavailable, which may result in degraded query performance. Because of this, EAGER is only recommended for testing and troubleshooting purposes. | AUTOMATIC |
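
For example, to disable join pushdown for the current session only, use the catalog session property from the table above, assuming a catalog named example:

SET SESSION example.join_pushdown_enabled = false;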

Predicate pushdown support#

The connector does not support pushdown of any predicates on columns with textual types like CHAR or VARCHAR. This ensures correctness of results since the data source may compare strings case-insensitively.

In the following example, the predicate is not pushed down for either query since name is a column of type VARCHAR:

SELECT * FROM nation WHERE name > 'CANADA';
SELECT * FROM nation WHERE name = 'CANADA';
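
By contrast, predicates on non-textual columns remain eligible for pushdown. For example, the following predicate on the numeric nationkey column can be pushed down:

SELECT * FROM nation WHERE nationkey = 3;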

Starburst Cached Views#

The connector supports table scan redirection to improve performance and reduce load on the data source.

Security#

The connector includes a number of security-related features, detailed in the following sections.

Active Directory password authentication#

The connector supports Active Directory password authentication. To enable it, edit the catalog properties file to include the authentication type, and specify the user name and password of an Active Directory user associated with the Synapse instance:

connection-user=active-directory-user
connection-password=active-directory-user-password
synapse.authentication.type=ACTIVE_DIRECTORY_PASSWORD

User impersonation#

The Synapse SQL connector supports user impersonation.

User impersonation can be enabled in the catalog file:

synapse.impersonation.enabled=true

User impersonation in the Synapse connector is based on EXECUTE AS USER.

Password credential pass-through#

The connector supports password credential pass-through. To enable it, edit the catalog properties file to include the authentication type:

synapse.authentication.type=PASSWORD_PASS_THROUGH

Use the ACTIVE_DIRECTORY_PASSWORD_PASS_THROUGH authentication type to enable password pass-through for Active Directory password authentication:

synapse.authentication.type=ACTIVE_DIRECTORY_PASSWORD_PASS_THROUGH

For more information about configurations and limitations, see Password credential pass-through.

OAuth 2.0 token pass-through#

The Synapse connector supports OAuth 2.0 token pass-through.

Set the authentication type and OAuth 2.0 scope in the coordinator’s config properties file:

http-server.authentication.type=DELEGATED-OAUTH2
http-server.authentication.oauth2.scopes=<EXISTING_SCOPES>,session:role:TEST_ROLE

The session:role prefix determines the role assigned to the user after successful authentication.

Additionally, enable OAUTH2_PASS_THROUGH in the catalog properties file using the Synapse connector:

synapse.authentication.type=OAUTH2_PASS_THROUGH