Starburst SAP HANA connector#

The SAP HANA connector allows querying and creating tables in an external database. Connectors let Starburst Enterprise platform (SEP) join data provided by different databases, like SAP HANA and Hive, or different database instances.

Requirements#

To connect to SAP HANA, you need:

  • SAP HANA version 2.0 or higher.

  • Network access from the coordinator and workers to the SAP HANA server. Port 30015 is the default port for instance 00.

  • A valid Starburst Enterprise license.

  • SAP HANA JDBC driver, acquired from SAP.

Configuration#

Before configuring a catalog with the SAP HANA connector, install the JDBC driver on your SEP nodes:

  1. Add the SAP HANA JDBC driver JAR file to the SEP plugin/sap-hana directory on all nodes.

  2. Restart SEP on every node.

Create the example catalog with a catalog properties file in etc/catalog named example.properties (replace example with your database name or some other descriptive name of the catalog) with the following contents:

connector.name=sap_hana
connection-url=jdbc:sap://Hostname:Port/?optionalparameters
connection-user=USERNAME
connection-password=PASSWORD

Refer to the SAP HANA documentation for more information about the format and parameters of the JDBC URL supported by the SAP HANA JDBC driver.
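
For example, a connection URL for a single SAP HANA host might look like the following; the host name is a placeholder, and encrypt is one of the optional driver parameters:

connection-url=jdbc:sap://hana.example.com:30015/?encrypt=true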

General configuration properties#

The following table describes general catalog configuration properties for the connector:

| Property name | Description |
| ------------- | ----------- |
| case-insensitive-name-matching | Support case-insensitive schema and table names. Defaults to false. |
| case-insensitive-name-matching.cache-ttl | Duration for which case-insensitive schema and table names are cached. Defaults to 1m. |
| case-insensitive-name-matching.config-file | Path to a name mapping configuration file in JSON format that allows Trino to disambiguate between schemas and tables with similar names in different cases. Defaults to null. |
| case-insensitive-name-matching.config-file.refresh-period | Frequency with which Trino checks the name matching configuration file for changes. The duration value defaults to 0s (refresh disabled). |
| metadata.cache-ttl | Duration for which metadata, including table and column statistics, is cached. Defaults to 0s (caching disabled). |
| metadata.cache-missing | Cache the fact that metadata, including table and column statistics, is not available. Defaults to false. |
| metadata.schemas.cache-ttl | Duration for which schema metadata is cached. Defaults to the value of metadata.cache-ttl. |
| metadata.tables.cache-ttl | Duration for which table metadata is cached. Defaults to the value of metadata.cache-ttl. |
| metadata.statistics.cache-ttl | Duration for which table statistics are cached. Defaults to the value of metadata.cache-ttl. |
| metadata.cache-maximum-size | Maximum number of objects stored in the metadata cache. Defaults to 10000. |
| write.batch-size | Maximum number of statements in a batched execution. Do not change this setting from the default; non-default values may negatively impact performance. Defaults to 1000. |
| dynamic-filtering.enabled | Push down dynamic filters into JDBC queries. Defaults to true. |
| dynamic-filtering.wait-timeout | Maximum duration for which Trino waits for dynamic filters to be collected from the build side of joins before starting a JDBC query. Using a large timeout can potentially result in more detailed dynamic filters, but it can also increase latency for some queries. Defaults to 20s. |
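
For example, to cache metadata for ten minutes and also cache lookups of missing metadata, a catalog properties file might include the following; the values are illustrative only:

metadata.cache-ttl=10m
metadata.cache-missing=true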

Type mapping#

Because Trino and SAP HANA each support types that the other does not, this connector modifies some types when reading or writing data. Data types may not map the same way in both directions between Trino and the data source. Refer to the following sections for type mapping in each direction.

SAP HANA to Trino type mapping#

The connector maps SAP HANA types to the corresponding Trino types according to the following table:

| SAP HANA type | Trino type | Notes |
| ------------- | ---------- | ----- |
| BOOLEAN | BOOLEAN | |
| TINYINT | TINYINT | |
| SMALLINT | SMALLINT | |
| INTEGER | INTEGER | |
| BIGINT | BIGINT | |
| REAL | REAL | |
| DOUBLE | DOUBLE | |
| FLOAT(p) | REAL for p <= 24, DOUBLE otherwise | |
| DECIMAL(p, s) | DECIMAL(p, s) | |
| DECIMAL | DOUBLE | SAP HANA’s DECIMAL with precision and scale not specified represents a floating-point decimal number |
| SMALLDECIMAL | DOUBLE | SAP HANA’s SMALLDECIMAL represents a floating-point decimal number |
| NCHAR | CHAR | |
| VARCHAR(n) | VARCHAR(n) | |
| NVARCHAR(n) | VARCHAR(n) | |
| ALPHANUM(n) | VARCHAR(n) | |
| SHORTTEXT(n) | VARCHAR(n) | |
| CLOB | VARCHAR (unbounded) | |
| NCLOB | VARCHAR (unbounded) | |
| TEXT | VARCHAR (unbounded) | |
| BINTEXT | VARCHAR (unbounded) | |
| VARBINARY(n) | VARBINARY | |
| BLOB | VARBINARY | |
| DATE | DATE | |
| TIME | TIME(0) | |
| SECONDDATE | TIMESTAMP(0) | |
| TIMESTAMP | TIMESTAMP(7) | |

No other types are supported.

Trino to SAP HANA type mapping#

The connector maps Trino types to the corresponding SAP HANA types according to the following table:

| Trino type | SAP HANA type | Notes |
| ---------- | ------------- | ----- |
| BOOLEAN | BOOLEAN | |
| TINYINT | TINYINT | |
| SMALLINT | SMALLINT | |
| INTEGER | INTEGER | |
| BIGINT | BIGINT | |
| REAL | REAL | |
| DOUBLE | DOUBLE | |
| DECIMAL(p, s) | DECIMAL(p, s) | |
| CHAR | CHAR or NCLOB | |
| VARCHAR | NVARCHAR or CLOB | |
| VARBINARY | BLOB | |
| DATE | DATE | |
| TIME(p) | TIME | |
| TIMESTAMP(p) | SECONDDATE for p = 0, TIMESTAMP otherwise | |

No other types are supported.

Type mapping configuration properties#

The following properties can be used to configure how data types from the connected data source are mapped to Trino data types and how the metadata is cached in Trino.

| Property name | Description | Default value |
| ------------- | ----------- | ------------- |
| unsupported-type-handling | Configure how unsupported column data types are handled: IGNORE, the column is not accessible; CONVERT_TO_VARCHAR, the column is converted to unbounded VARCHAR. The respective catalog session property is unsupported_type_handling. | IGNORE |
| jdbc-types-mapped-to-varchar | Allow forced mapping of comma-separated lists of data types to convert to unbounded VARCHAR. | |
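
For example, to expose columns with otherwise unsupported types as unbounded VARCHAR instead of hiding them, add the following to the catalog properties file:

unsupported-type-handling=CONVERT_TO_VARCHAR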

SQL support#

The connector provides read and write access to data and metadata in SAP HANA. In addition to the globally available and read operation statements, the connector supports the features described in the following sections.

Views#

The connector can read data from views, including SAP HANA calculation views.

Table functions#

The connector provides specific table functions to access SAP HANA.

query(VARCHAR) -> table#

The query function allows you to query the underlying database directly. It requires syntax native to the data source, because the full query is pushed down and processed in the data source. This can be useful for accessing native features or for improving query performance in situations where running a query natively may be faster.

The query table function is available in the system schema of any catalog that uses the SAP HANA connector, such as example. The following example passes myQuery to the data source. myQuery has to be a valid query for the data source, and is required to return a table as a result:

SELECT
  *
FROM
  TABLE(
    example.system.query(
      query => 'myQuery'
    )
  );
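
As a concrete sketch, the following query runs a statement with SAP HANA-specific syntax directly in the database; the schema, table, and column names are placeholders:

SELECT
  *
FROM
  TABLE(
    example.system.query(
      query => 'SELECT TO_DECIMAL(price, 10, 2) AS price FROM sales.orders'
    )
  );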

Performance#

The connector includes a number of performance improvements, detailed in the following sections.

Parallelism#

The connector is able to read data from SAP HANA using multiple parallel connections for partitioned tables.

SAP HANA parallelism configuration properties#

| Property name | Description | Default |
| ------------- | ----------- | ------- |
| sap-hana.parallelism-type | Determines the parallelism method. Possible values are NO_PARALLELISM (single JDBC connection) and PARTITIONS (separate connection for each partition). The corresponding catalog session property is parallelism_type. | NO_PARALLELISM |
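
For example, to open a separate connection for each partition of a partitioned table, set the following in the catalog properties file:

sap-hana.parallelism-type=PARTITIONS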

Table statistics#

The SAP HANA connector can use table and column statistics for cost-based optimizations, to improve query processing performance based on the actual data in the data source.

The statistics are collected by SAP HANA and retrieved by the connector.

You have to use the CREATE STATISTICS command in SAP HANA to initiate creation and ongoing collection and update of the relevant statistics. You can find more information about statistics collection in the SAP HANA documentation.

The connector and SEP support the statistic types HISTOGRAM, SIMPLE, and TOPK.
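
As a sketch, the following statement, run in SAP HANA itself rather than in SEP, creates SIMPLE statistics for one column; the statistics, schema, table, and column names are placeholders, and the exact syntax is described in the SAP HANA documentation:

CREATE STATISTICS orders_custkey_simple ON example_schema.orders (custkey) TYPE SIMPLE;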

Note

The collection in SAP HANA can take considerable time and depends on the data size. You can use the MERGE DELTA command to affect availability of the statistics.

Pushdown#

The connector supports pushdown for a number of operations, including aggregate pushdown for a set of common aggregation functions. For some of these aggregate functions, pushdown is supported only for DOUBLE type columns.

Cost-based join pushdown#

The connector supports cost-based join pushdown to make intelligent decisions about whether to push down a join operation to the data source.

When cost-based join pushdown is enabled, the connector only pushes down join operations if the available table statistics suggest that doing so improves performance. Note that if no table statistics are available, join operation pushdown does not occur, to avoid a potential decrease in query performance.

The following table describes catalog configuration properties for join pushdown:

| Property name | Description | Default value |
| ------------- | ----------- | ------------- |
| join-pushdown.enabled | Enable join pushdown. The equivalent catalog session property is join_pushdown_enabled. | true |
| join-pushdown.strategy | Strategy used to evaluate whether join operations are pushed down. Set to AUTOMATIC to enable cost-based join pushdown, or EAGER to push down joins whenever possible. Note that EAGER can push down joins even when table statistics are unavailable, which may result in degraded query performance; because of this, EAGER is only recommended for testing and troubleshooting purposes. | AUTOMATIC |
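
For example, to push down joins whenever possible during testing or troubleshooting, set the strategy in the catalog properties file:

join-pushdown.strategy=EAGER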

Dynamic filtering#

Dynamic filtering is enabled by default. It causes the connector to wait for dynamic filtering to complete before starting a JDBC query.

You can disable dynamic filtering by setting the dynamic-filtering.enabled property in your catalog configuration file to false.
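
For example:

dynamic-filtering.enabled=false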

Wait timeout#

By default, table scans on the connector are delayed up to 20 seconds until dynamic filters are collected from the build side of joins. Using a large timeout can potentially result in more detailed dynamic filters. However, it can also increase latency for some queries.

You can configure the dynamic-filtering.wait-timeout property in your catalog properties file:

dynamic-filtering.wait-timeout=1m

You can use the dynamic_filtering_wait_timeout catalog session property in a specific session:

SET SESSION example.dynamic_filtering_wait_timeout = 1s;

Compaction#

The maximum size of the dynamic filter predicate that is pushed down to the connector during a table scan for a column is configured using the domain-compaction-threshold property in the catalog properties file:

domain-compaction-threshold=100

You can use the domain_compaction_threshold catalog session property:

SET SESSION example.domain_compaction_threshold = 10;

By default, domain-compaction-threshold is set to 32. When the dynamic predicate for a column exceeds this threshold, it is compacted into a single range predicate.

For example, if the dynamic filter collected for a date column dt on the fact table selects more than 32 days, the filtering condition is simplified from dt IN ('2020-01-10', '2020-01-12',..., '2020-05-30') to dt BETWEEN '2020-01-10' AND '2020-05-30'. Using a large threshold can result in increased table scan overhead due to a large IN list getting pushed down to the data source.

Metrics#

Metrics about dynamic filtering are reported in a JMX table for each catalog:

jmx.current."io.trino.plugin.jdbc:name=example,type=dynamicfilteringstats"

Metrics include information about the total number of dynamic filters, the number of completed dynamic filters, the number of available dynamic filters, and the time spent waiting for dynamic filters.
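
For example, assuming the JMX connector is configured as a catalog named jmx, the following query returns the dynamic filtering metrics for the example catalog:

SELECT
  *
FROM
  jmx.current."io.trino.plugin.jdbc:name=example,type=dynamicfilteringstats";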

JDBC connection pooling#

When JDBC connection pooling is enabled, each node creates and maintains a connection pool instead of opening and closing separate connections to the data source. Each connection is available to connect to the data source and retrieve data. After completion of an operation, the connection is returned to the pool and can be reused. This improves performance by a small amount, reduces the load on any required authentication system used for establishing the connection, and helps avoid running into connection limits on data sources.

JDBC connection pooling is disabled by default. You can enable JDBC connection pooling by setting the connection-pool.enabled property to true in your catalog configuration file:

connection-pool.enabled=true

The following catalog configuration properties can be used to tune connection pooling:

JDBC connection pooling catalog configuration properties#

| Property name | Description | Default value |
| ------------- | ----------- | ------------- |
| connection-pool.enabled | Enable connection pooling for the catalog. | false |
| connection-pool.max-size | The maximum number of idle and active connections in the pool. | 10 |
| connection-pool.max-connection-lifetime | The maximum lifetime of a connection. When a connection reaches this lifetime, it is removed, regardless of how recently it has been active. | 30m |
| connection-pool.pool-cache-max-size | The maximum size of the JDBC data source cache. | 1000 |
| connection-pool.pool-cache-ttl | The expiration time of a cached data source when it is no longer accessed. | 30m |
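
For example, the following catalog configuration enables pooling and raises the maximum pool size; the values are illustrative only:

connection-pool.enabled=true
connection-pool.max-size=20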

Starburst Cached Views#

The connector supports table scan redirection to improve performance and reduce load on the data source.

Security#

The connector includes a number of security-related features, detailed in the following sections.

Password credential pass-through#

The connector supports password credential pass-through. To enable it, edit the catalog properties file to include the authentication type:

sap-hana.authentication.type=PASSWORD_PASS_THROUGH

For more information about configurations and limitations, see Password credential pass-through.