Hive connector#
Overview#
The Hive connector allows querying data stored in an Apache Hive data warehouse. Hive is a combination of three components:
Data files in varying formats, that are typically stored in the Hadoop Distributed File System (HDFS) or in object storage systems such as Amazon S3.
Metadata about how the data files are mapped to schemas and tables. This metadata is stored in a database, such as MySQL, and is accessed via the Hive metastore service.
A query language called HiveQL. This query language is executed on a distributed computing framework such as MapReduce or Tez.
Trino only uses the first two components: the data and the metadata. It does not use HiveQL or any part of Hive’s execution environment.
Requirements#
The Hive connector requires a Hive metastore service (HMS), or a compatible implementation of the Hive metastore, such as AWS Glue Data Catalog.
Apache Hadoop 2.x and 3.x are supported, along with derivative distributions, including Cloudera CDH 5 and Hortonworks Data Platform (HDP).
Many distributed storage systems including HDFS, Amazon S3 or S3-compatible systems, Google Cloud Storage, and Azure Storage can be queried with the Hive connector.
The coordinator and all workers must have network access to the Hive metastore and the storage system. Hive metastore access with the Thrift protocol defaults to using port 9083.
Supported file types#
The following file types are supported for the Hive connector:
ORC
Parquet
Avro
RCText (RCFile using ColumnarSerDe)
RCBinary (RCFile using LazyBinaryColumnarSerDe)
SequenceFile
JSON (using org.apache.hive.hcatalog.data.JsonSerDe)
CSV (using org.apache.hadoop.hive.serde2.OpenCSVSerde)
TextFile
Metastore configuration for Avro#
In order to enable first-class support for Avro tables when using
Hive 3.x, you need to add the following property definition to the Hive metastore
configuration file hive-site.xml
(and restart the metastore service):
<property>
<!-- https://community.hortonworks.com/content/supportkb/247055/errorjavalangunsupportedoperationexception-storage.html -->
<name>metastore.storage.schema.reader.impl</name>
<value>org.apache.hadoop.hive.metastore.SerDeStorageSchemaReader</value>
</property>
Supported table types#
Transactional and ACID tables#
When connecting to a Hive metastore version 3.x, the Hive connector supports reading from and writing to insert-only and ACID tables, with full support for partitioning and bucketing. Row-level DELETE is supported for ACID tables, as well as SQL UPDATE. UPDATE of partition key columns and bucket columns is not supported.
ACID tables created with Hive Streaming Ingest are not supported.
Materialized views#
The Hive connector supports reading from Hive materialized views. In Trino, these views are presented as regular, read-only tables.
Hive views#
Hive views are defined in HiveQL and stored in the Hive Metastore Service. They are analyzed to allow read access to the data.
The Hive connector includes support for reading Hive views with three different modes.
Disabled
Legacy
Experimental
You can configure the behavior in your catalog properties file.
Disabled
The default behavior is to ignore Hive views. This means that your business logic and data encoded in the views is not available in Trino.
Legacy
A very simple implementation to execute Hive views, and therefore allow read access to the data in Trino, can be enabled with hive.translate-hive-views=true and hive.legacy-hive-view-translation=true.
For temporary usage of the legacy behavior for a specific catalog, you can set the legacy_hive_view_translation catalog session property to true.
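For example, assuming your Hive catalog is named example, the legacy translation can be enabled for the current session as follows; the catalog name is only an illustration:
SET SESSION example.legacy_hive_view_translation = true;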
This legacy behavior interprets any HiveQL query that defines a view as if it is written in SQL. It does not do any translation, but instead relies on the fact that HiveQL is very similar to SQL.
This works for very simple Hive views, but can lead to problems for more complex queries. For example, if a HiveQL function has an identical signature but different behaviors to the SQL version, the returned results may differ. In more extreme cases the queries might fail, or not even be able to be parsed and executed.
Experimental
The new behavior is better engineered, and has the potential to become a lot more powerful than the legacy implementation. It can analyze, process, and rewrite Hive views and contained expressions and statements.
It is considered an experimental feature and continues to change with each release. However, it is already suitable for many use cases, and usage is encouraged.
You can enable the experimental behavior with
hive.translate-hive-views=true
.
Keep in mind that numerous features are not yet implemented when experimenting with this feature. The following is an incomplete list of missing functionality:
HiveQL current_date, current_timestamp, and others
Hive function calls including translate(), window functions, and others
Common table expressions and simple case expressions
Honor timestamp precision setting
Support all Hive data types and correct mapping to Trino types
Ability to process custom UDFs
Configuration#
Create etc/catalog/hive.properties
with the following contents
to mount the hive
connector as the hive
catalog,
replacing example.net:9083
with the correct host and port
for your Hive metastore Thrift service:
connector.name=hive
hive.metastore.uri=thrift://example.net:9083
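After adding the catalog properties file and restarting Trino, you can verify that the catalog is reachable with a simple query; the schemas returned depend on your metastore:
SHOW SCHEMAS FROM hive;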
Multiple Hive clusters#
You can have as many catalogs as you need, so if you have additional
Hive clusters, simply add another properties file to etc/catalog
with a different name, making sure it ends in .properties
. For
example, if you name the property file sales.properties
, Trino
creates a catalog named sales
using the configured connector.
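For example, a sales.properties file pointing at a second metastore could look like the following sketch; the hostname is a placeholder for your environment:
connector.name=hive
hive.metastore.uri=thrift://sales-metastore.example.net:9083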
HDFS configuration#
For basic setups, Trino configures the HDFS client automatically and
does not require any configuration files. In some cases, such as when using
federated HDFS or NameNode high availability, it is necessary to specify
additional HDFS client options in order to access your HDFS cluster. To do so,
add the hive.config.resources
property to reference your HDFS config files:
hive.config.resources=/etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml
Only specify additional configuration files if necessary for your setup. We recommend reducing the configuration files to have the minimum set of required properties, as additional properties may cause problems.
The configuration files must exist on all Trino nodes. If you are referencing existing Hadoop config files, make sure to copy them to any Trino nodes that are not running Hadoop.
HDFS username and permissions#
Before running any CREATE TABLE
or CREATE TABLE AS
statements
for Hive tables in Trino, you need to check that the user Trino is
using to access HDFS has access to the Hive warehouse directory. The Hive
warehouse directory is specified by the configuration variable
hive.metastore.warehouse.dir
in hive-site.xml
, and the default
value is /user/hive/warehouse
.
When not using Kerberos with HDFS, Trino accesses HDFS using the
OS user of the Trino process. For example, if Trino is running as
nobody
, it accesses HDFS as nobody
. You can override this
username by setting the HADOOP_USER_NAME
system property in the
Trino JVM config, replacing hdfs_user
with the
appropriate username:
-DHADOOP_USER_NAME=hdfs_user
The hive
user generally works, since Hive is often started with
the hive
user and this user has access to the Hive warehouse.
Whenever you change the user Trino is using to access HDFS, remove
/tmp/presto-*
on HDFS, as the new user may not have access to
the existing temporary directories.
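Assuming the hdfs command line client is configured for your cluster, the temporary directories can be removed with a command such as:
hdfs dfs -rm -r /tmp/presto-*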
Accessing Hadoop clusters protected with Kerberos authentication#
Kerberos authentication is supported for both HDFS and the Hive metastore. However, Kerberos authentication by ticket cache is not yet supported.
The properties that apply to Hive connector security are listed in the Hive Configuration Properties table. Please see the Hive connector security configuration section for a more detailed discussion of the security options in the Hive connector.
Hive configuration properties#
Property Name | Description | Default
---|---|---
hive.config.resources | An optional comma-separated list of HDFS configuration files. These files must exist on the machines running Trino. Only specify this if absolutely necessary to access HDFS. |
hive.recursive-directories | Enable reading data from subdirectories of table or partition locations. If disabled, subdirectories are ignored. | false
hive.ignore-absent-partitions | Ignore partitions when the file system location does not exist rather than failing the query. This skips data that may be expected to be part of the table. | false
hive.storage-format | The default file format used when creating new tables. | ORC
hive.compression-codec | The compression codec to use when writing files. Possible values are NONE, SNAPPY, LZ4, ZSTD, and GZIP. | GZIP
hive.force-local-scheduling | Force splits to be scheduled on the same node as the Hadoop DataNode process serving the split data. This is useful for installations where Trino is collocated with every DataNode. | false
hive.respect-table-format | Should new partitions be written using the existing table format or the default Trino format? | true
hive.immutable-partitions | Can new data be inserted into existing partitions? | false
hive.insert-existing-partitions-behavior | What happens when data is inserted into an existing partition? Possible values are APPEND, OVERWRITE, and ERROR. | APPEND
hive.create-empty-bucket-files | Should empty files be created for buckets that have no data? | false
hive.max-partitions-per-writers | Maximum number of partitions per writer. | 100
hive.max-partitions-per-scan | Maximum number of partitions for a single table scan. | 100,000
hive.hdfs.authentication.type | HDFS authentication type. Possible values are NONE and KERBEROS. | NONE
hive.hdfs.impersonation.enabled | Enable HDFS end user impersonation. | false
hive.hdfs.trino.principal | The Kerberos principal that Trino uses when connecting to HDFS. |
hive.hdfs.trino.keytab | HDFS client keytab location. |
hive.dfs.replication | Hadoop file system replication factor. |
hive.security | See the Hive connector security configuration section. | legacy
security.config-file | Path of config file to use when hive.security=file. |
hive.non-managed-table-writes-enabled | Enable writes to non-managed (external) Hive tables. | false
hive.non-managed-table-creates-enabled | Enable creating non-managed (external) Hive tables. | true
hive.collect-column-statistics-on-write | Enables automatic column level statistics collection on write. See Table Statistics for details. | true
hive.s3select-pushdown.enabled | Enable query pushdown to AWS S3 Select service. | false
hive.s3select-pushdown.max-connections | Maximum number of simultaneously open connections to S3 for S3 Select pushdown. | 500
hive.file-status-cache-tables | Cache directory listing for specific tables. |
hive.file-status-cache-size | Maximum total number of cached file status entries. | 1,000,000
hive.file-status-cache-expire-time | How long a cached directory listing should be considered valid. | 1m
hive.rcfile.time-zone | Adjusts binary encoded timestamp values to a specific time zone. For Hive 3.1+, this should be set to UTC. | JVM default
hive.timestamp-precision | Specifies the precision to use for Hive columns of type timestamp. | MILLISECONDS
hive.temporary-staging-directory-enabled | Controls whether the temporary staging directory configured at hive.temporary-staging-directory-path is used for write operations. | true
hive.temporary-staging-directory-path | Controls the location of the temporary staging directory that is used for write operations. The ${USER} placeholder can be used to use a different location for each user. | /tmp/presto-${USER}
hive.translate-hive-views | Enable translation for Hive views. | false
hive.legacy-hive-view-translation | Use the legacy algorithm to translate Hive views. You can use the legacy_hive_view_translation catalog session property for temporary, catalog-specific use. | false
hive.parallel-partitioned-bucketed-writes | Improve parallelism of partitioned and bucketed table writes. When disabled, the number of writing threads is limited to the number of buckets. | true
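As an illustration, a catalog file that overrides a few of these defaults might look like the following sketch; the chosen values are examples only:
connector.name=hive
hive.metastore.uri=thrift://example.net:9083
hive.storage-format=PARQUET
hive.compression-codec=SNAPPY
hive.non-managed-table-writes-enabled=true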
ORC format configuration properties#
The following properties are used to configure the read and write operations with ORC files performed by the Hive connector.
Property Name | Description | Default
---|---|---
hive.orc.time-zone | Sets the default time zone for legacy ORC files that did not declare a time zone. | JVM default
hive.orc.use-column-names | Access ORC columns by name. By default, columns in ORC files are accessed by their ordinal position in the Hive table definition. The equivalent catalog session property is orc_use_column_names. | false
Parquet format configuration properties#
The following properties are used to configure the read and write operations with Parquet files performed by the Hive connector.
Property Name | Description | Default
---|---|---
hive.parquet.time-zone | Adjusts timestamp values to a specific time zone. For Hive 3.1+, set this to UTC. | JVM default
hive.parquet.use-column-names | Access Parquet columns by name by default. Set this property to false to access columns by their ordinal position in the Hive table definition. The equivalent catalog session property is parquet_use_column_names. | true
Metastore configuration properties#
The required Hive metastore can be configured with a number of properties. Specific properties can be used to further configure the Thrift or Glue metastore.
Property Name | Description | Default
---|---|---
hive.metastore | The type of Hive metastore to use. Trino currently supports the default Hive Thrift metastore (thrift) and the AWS Glue Catalog (glue) as metadata sources. | thrift
hive.metastore-cache-ttl | Duration how long cached metastore data should be considered valid. | 0s
hive.metastore-cache-maximum-size | Maximum number of metastore data objects in the Hive metastore cache. | 10000
hive.metastore-refresh-interval | Asynchronously refresh cached metastore data after access if it is older than this but is not yet expired, allowing subsequent accesses to see fresh data. |
hive.metastore-refresh-max-threads | Maximum threads used to refresh cached metastore data. | 10
hive.metastore-timeout | Timeout for Hive metastore requests. | 10s
hive.hide-delta-lake-tables | Controls whether to hide Delta Lake tables in table listings. Currently applies only when using the AWS Glue metastore. | false
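For example, metastore caching could be adjusted in the catalog file with settings such as the following; the durations are illustrative:
hive.metastore-cache-ttl=20m
hive.metastore-cache-maximum-size=10000
hive.metastore-timeout=1m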
Thrift metastore configuration properties#
Property Name | Description | Default
---|---|---
hive.metastore.uri | The URI(s) of the Hive metastore to connect to using the Thrift protocol. If multiple URIs are provided, the first URI is used by default, and the rest of the URIs are fallback metastores. This property is required. |
hive.metastore.username | The username Trino uses to access the Hive metastore. |
hive.metastore.authentication.type | Hive metastore authentication type. Possible values are NONE and KERBEROS. | NONE
hive.metastore.thrift.impersonation.enabled | Enable Hive metastore end user impersonation. | false
hive.metastore.thrift.delegation-token.cache-ttl | Time to live delegation token cache for metastore. | 1h
hive.metastore.thrift.delegation-token.cache-maximum-size | Delegation token cache maximum size. | 1,000
hive.metastore.thrift.client.ssl.enabled | Use SSL when connecting to metastore. | false
hive.metastore.thrift.client.ssl.key | Path to private key and client certificate (key store). |
hive.metastore.thrift.client.ssl.key-password | Password for the private key. |
hive.metastore.thrift.client.ssl.trust-certificate | Path to the server certificate chain (trust store). Required when SSL is enabled. |
hive.metastore.thrift.client.ssl.trust-certificate-password | Password for the trust store. |
hive.metastore.service.principal | The Kerberos principal of the Hive metastore service. |
hive.metastore.client.principal | The Kerberos principal that Trino uses when connecting to the Hive metastore service. |
hive.metastore.client.keytab | Hive metastore client keytab location. |
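As a sketch, a catalog connecting to a Kerberos-secured Thrift metastore might include properties like the following; the principals and keytab path are placeholders:
hive.metastore.uri=thrift://example.net:9083
hive.metastore.authentication.type=KERBEROS
hive.metastore.service.principal=hive/_HOST@EXAMPLE.COM
hive.metastore.client.principal=trino@EXAMPLE.COM
hive.metastore.client.keytab=/etc/trino/hive.keytab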
AWS Glue catalog configuration properties#
In order to use a Glue catalog, ensure to configure the metastore with
hive.metastore=glue
and provide further details with the following
properties:
Property Name | Description
---|---
hive.metastore.glue.region | AWS region of the Glue Catalog. This is required when not running in EC2, or when the catalog is in a different region.
hive.metastore.glue.endpoint-url | Glue API endpoint URL (optional).
hive.metastore.glue.pin-client-to-current-region | Pin Glue requests to the same region as the EC2 instance where Trino is running, defaults to false.
hive.metastore.glue.max-connections | Maximum number of concurrent connections to Glue.
hive.metastore.glue.max-error-retries | Maximum number of error retries for the Glue client.
hive.metastore.glue.default-warehouse-dir | Default warehouse directory for schemas created without an explicit location property.
hive.metastore.glue.aws-credentials-provider | Fully qualified name of the Java class to use for obtaining AWS credentials. Can be used to supply a custom credentials provider.
hive.metastore.glue.aws-access-key | AWS access key to use to connect to the Glue Catalog. If specified along with hive.metastore.glue.aws-secret-key, this parameter takes precedence over hive.metastore.glue.iam-role.
hive.metastore.glue.aws-secret-key | AWS secret key to use to connect to the Glue Catalog. If specified along with hive.metastore.glue.aws-access-key, this parameter takes precedence over hive.metastore.glue.iam-role.
hive.metastore.glue.catalogid | The ID of the Glue Catalog in which the metadata database resides.
hive.metastore.glue.iam-role | ARN of an IAM role to assume when connecting to the Glue Catalog.
hive.metastore.glue.external-id | External ID for the IAM role trust policy when connecting to the Glue Catalog.
hive.metastore.glue.partitions-segments | Number of segments for partitioned Glue tables.
hive.metastore.glue.get-partition-threads | Number of threads for parallel partition fetches from Glue.
hive.metastore.glue.read-statistics-threads | Number of threads for parallel statistic fetches from Glue.
hive.metastore.glue.write-statistics-threads | Number of threads for parallel statistic writes to Glue.
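For example, a minimal Glue-backed catalog might look like the following sketch; the region value is an example:
connector.name=hive
hive.metastore=glue
hive.metastore.glue.region=us-east-1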
Google Cloud Storage configuration#
The Hive connector can access data stored in GCS, using the gs://
URI prefix.
Please refer to the Hive connector GCS tutorial for step-by-step instructions.
GCS configuration properties#
Property Name | Description
---|---
hive.gcs.json-key-file-path | JSON key file used to authenticate with Google Cloud Storage.
hive.gcs.use-access-token | Use client-provided OAuth token to access Google Cloud Storage. This is mutually exclusive with a global JSON key file.
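As a sketch, a catalog configured to read GCS data with a service account key might include the following; the key file path is a placeholder:
connector.name=hive
hive.metastore.uri=thrift://example.net:9083
hive.gcs.json-key-file-path=/etc/trino/gcs-key.json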
Performance tuning configuration properties#
The following table describes performance tuning properties for the Hive connector.
Warning
Performance tuning configuration properties are considered expert-level features. Altering these properties from their default values is likely to cause instability and performance degradation.
Property name | Description | Default value
---|---|---
hive.max-outstanding-splits | The target number of buffered splits for each table scan in a query, before the scheduler tries to pause. | 1000
hive.max-splits-per-second | The maximum number of splits generated per second per table scan. This can be used to reduce the load on the storage system. By default, there is no limit, which results in Trino maximizing the parallelization of data access. |
hive.max-initial-splits | For each table scan, the coordinator first assigns file sections of up to hive.max-initial-split-size. After hive.max-initial-splits have been assigned, hive.max-split-size is used for the remaining splits. | 200
hive.max-initial-split-size | The size of a single file section assigned to a worker until hive.max-initial-splits have been assigned. | 32 MB
hive.max-split-size | The largest size of a single file section assigned to a worker. Smaller splits result in more parallelism and thus can decrease latency, but also have more overhead and increase load on the system. | 64 MB
Table statistics#
The Hive connector supports collecting and managing table statistics to improve query processing performance.
When writing data, the Hive connector always collects basic statistics (numFiles, numRows, rawDataSize, totalSize) and by default will also collect column level statistics:
Column Type | Collectible Statistics
---|---
TINYINT | number of nulls, number of distinct values, min/max values
SMALLINT | number of nulls, number of distinct values, min/max values
INTEGER | number of nulls, number of distinct values, min/max values
BIGINT | number of nulls, number of distinct values, min/max values
DOUBLE | number of nulls, number of distinct values, min/max values
REAL | number of nulls, number of distinct values, min/max values
DECIMAL | number of nulls, number of distinct values, min/max values
DATE | number of nulls, number of distinct values, min/max values
TIMESTAMP | number of nulls, number of distinct values, min/max values
VARCHAR | number of nulls, number of distinct values
CHAR | number of nulls, number of distinct values
VARBINARY | number of nulls
BOOLEAN | number of nulls, number of true/false values
Updating table and partition statistics#
If your queries are complex and include joining large data sets, running ANALYZE on tables/partitions may improve query performance by collecting statistical information about the data.
When analyzing a partitioned table, the partitions to analyze can be specified
via the optional partitions
property, which is an array containing
the values of the partition keys in the order they are declared in the table schema:
ANALYZE table_name WITH (
partitions = ARRAY[
ARRAY['p1_value1', 'p1_value2'],
ARRAY['p2_value1', 'p2_value2']])
This query will collect statistics for two partitions with keys
p1_value1, p1_value2
and p2_value1, p2_value2
.
On wide tables, collecting statistics for all columns can be expensive and can have a detrimental effect on query planning. It is also typically unnecessary: statistics are only useful for specific columns, such as join keys, predicates, and grouping keys. You can specify a subset of columns to be analyzed via the optional columns property:
ANALYZE table_name WITH (
partitions = ARRAY[ARRAY['p2_value1', 'p2_value2']],
columns = ARRAY['col_1', 'col_2'])
This query collects statistics for columns col_1
and col_2
for the partition
with keys p2_value1, p2_value2
.
Note that if statistics were previously collected for all columns, they need to be dropped before re-analyzing just a subset:
CALL system.drop_stats('schema_name', 'table_name')
You can also drop statistics for selected partitions only:
CALL system.drop_stats(
schema_name => 'schema',
table_name => 'table',
partition_values => ARRAY[ARRAY['p2_value1', 'p2_value2']])
Dynamic filtering#
The Hive connector supports the dynamic filtering optimization. Dynamic partition pruning is supported for partitioned tables stored in any file format for broadcast as well as partitioned joins. Dynamic bucket pruning is supported for bucketed tables stored in any file format for broadcast joins only.
For tables stored in ORC or Parquet file format, dynamic filters are also pushed into local table scan on worker nodes for broadcast joins. Dynamic filter predicates pushed into the ORC and Parquet readers are used to perform stripe or row-group pruning and save on disk I/O. Sorting the data within ORC or Parquet files by the columns used in join criteria significantly improves the effectiveness of stripe or row-group pruning. This is because grouping similar data within the same stripe or row-group greatly improves the selectivity of the min/max indexes maintained at stripe or row-group level.
Delaying execution for dynamic filters#
It can often be beneficial to wait for the collection of dynamic filters before starting a table scan. This extra wait time can potentially result in significant overall savings in query and CPU time, if dynamic filtering is able to reduce the amount of scanned data.
For the Hive connector, a table scan can be delayed for a configured amount of time, to wait for the collection of dynamic filters, by using the hive.dynamic-filtering-probe-blocking-timeout configuration property in the catalog file or the <hive-catalog>.dynamic_filtering_probe_blocking_timeout catalog session property.
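For example, to wait up to five minutes for dynamic filters before starting a table scan, the following could be added to the catalog file; the timeout value is illustrative:
hive.dynamic-filtering-probe-blocking-timeout=5m
The equivalent session property for a catalog named hive can be set with SET SESSION hive.dynamic_filtering_probe_blocking_timeout = '5m'.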
Schema evolution#
Hive allows the partitions in a table to have a different schema than the table. This occurs when the column types of a table are changed after partitions already exist (that use the original column types). The Hive connector supports this by allowing the same conversions as Hive:
varchar to and from tinyint, smallint, integer and bigint
real to double
Widening conversions for integers, such as tinyint to smallint
Any conversion failure results in null, which is the same behavior as Hive. For example, converting the string 'foo' to a number, or converting the string '1234' to a tinyint (which has a maximum value of 127).
Avro schema evolution#
Trino supports querying and manipulating Hive tables with the Avro storage format, which has the schema set based on an Avro schema file/literal. Trino is also capable of creating tables by inferring the schema from a valid Avro schema file located locally, or remotely in HDFS or on a web server.
To specify that the Avro schema should be used for interpreting a table's data, use the avro_schema_url table property.
The schema can be placed remotely in
HDFS (e.g. avro_schema_url = 'hdfs://user/avro/schema/avro_data.avsc'
),
S3 (e.g. avro_schema_url = 's3n:///schema_bucket/schema/avro_data.avsc'
),
a web server (e.g. avro_schema_url = 'http://example.org/schema/avro_data.avsc'
)
as well as local file system. This URL, where the schema is located, must be accessible from the
Hive metastore and Trino coordinator/worker nodes.
The table created in Trino using avro_schema_url
behaves the same way as a Hive table with avro.schema.url
or avro.schema.literal
set.
Example:
CREATE TABLE hive.avro.avro_data (
id bigint
)
WITH (
format = 'AVRO',
avro_schema_url = '/usr/local/avro_data.avsc'
)
The columns listed in the DDL (id in the above example) are ignored if avro_schema_url is specified.
The table schema matches the schema in the Avro schema file. Before any read operation, the Avro schema is
accessed so the query result reflects any changes in schema. Thus Trino takes advantage of Avro’s backward compatibility abilities.
If the schema of the table changes in the Avro schema file, the new schema can still be used to read old data. Newly added/renamed fields must have a default value in the Avro schema file.
The schema evolution behavior is as follows:
Column added in new schema: Data created with an older schema produces a default value when the table is using the new schema.
Column removed in new schema: Data created with an older schema no longer outputs the data from the column that was removed.
Column is renamed in the new schema: This is equivalent to removing the column and adding a new one, and data created with an older schema produces a default value when the table is using the new schema.
Changing type of column in the new schema: If the type coercion is supported by Avro or the Hive connector, then the conversion happens. An error is thrown for incompatible types.
Limitations#
The following operations are not supported when avro_schema_url is set:
CREATE TABLE AS is not supported.
Using partitioning (partitioned_by) or bucketing (bucketed_by) columns is not supported in CREATE TABLE.
ALTER TABLE commands modifying columns are not supported.
Procedures#
system.create_empty_partition(schema_name, table_name, partition_columns, partition_values)
Create an empty partition in the specified table.
system.sync_partition_metadata(schema_name, table_name, mode, case_sensitive)
Check and update partitions list in metastore. There are three modes available:
ADD: add any partitions that exist on the file system, but not in the metastore.
DROP: drop any partitions that exist in the metastore, but not on the file system.
FULL: perform both ADD and DROP.
The case_sensitive argument is optional. The default value is true for compatibility with Hive's MSCK REPAIR TABLE behavior, which expects the partition column names in file system paths to use lowercase (e.g. col_x=SomeValue). Partitions on the file system not conforming to this convention are ignored, unless the argument is set to false.
system.drop_stats(schema_name, table_name, partition_values)
Drops statistics for a subset of partitions or the entire table. The partitions are specified as an array whose elements are arrays of partition values (similar to the partition_values argument in create_empty_partition). If the partition_values argument is omitted, stats are dropped for the entire table.
system.register_partition(schema_name, table_name, partition_columns, partition_values, location)
Registers existing location as a new partition in the metastore for the specified table.
When the location argument is omitted, the partition location is constructed using partition_columns and partition_values.
For security reasons, the procedure is enabled only when hive.allow-register-partition-procedure is set to true.
system.unregister_partition(schema_name, table_name, partition_columns, partition_values)
Unregisters the given, existing partition in the metastore for the specified table. The partition data is not deleted.
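For example, the system.sync_partition_metadata procedure described above can be invoked with named arguments, using the web.page_views table from the examples below, to register partitions that exist on the file system but not in the metastore:
CALL system.sync_partition_metadata(
    schema_name => 'web',
    table_name => 'page_views',
    mode => 'ADD');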
Special columns#
In addition to the defined columns, the Hive connector automatically exposes metadata in a number of hidden columns in each table. You can use these columns in your SQL statements like any other column, e.g., they can be selected directly or used in conditional statements.
$bucket: Bucket number for this row
$path: Full file system path name of the file for this row
$file_modified_time: Date and time of the last modification of the file for this row
$file_size: Size of the file for this row
$partition: Partition name for this row
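For example, using the page_views table from the examples below, the hidden columns can be selected alongside regular columns; note that their names must be quoted:
SELECT "$path", "$partition", user_id
FROM hive.web.page_views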
Special tables#
Table properties#
The raw Hive table properties are available as a hidden table, containing a
separate column per table property, with a single row containing the property
values. The properties table name is the same as the table name with
$properties
appended.
You can inspect the property names and values with a simple query:
SELECT * FROM hive.web."page_views$properties";
Examples#
The Hive connector supports querying and manipulating Hive tables and schemas (databases). While some uncommon operations need to be performed using Hive directly, most operations can be performed using Trino.
Create a new Hive schema named web
that stores tables in an
S3 bucket named my-bucket
:
CREATE SCHEMA hive.web
WITH (location = 's3://my-bucket/')
Create a new Hive table named page_views
in the web
schema
that is stored using the ORC file format, partitioned by date and
country, and bucketed by user into 50
buckets. Note that Hive
requires the partition columns to be the last columns in the table:
CREATE TABLE hive.web.page_views (
view_time timestamp,
user_id bigint,
page_url varchar,
ds date,
country varchar
)
WITH (
format = 'ORC',
partitioned_by = ARRAY['ds', 'country'],
bucketed_by = ARRAY['user_id'],
bucket_count = 50
)
Drop a partition from the page_views
table:
DELETE FROM hive.web.page_views
WHERE ds = DATE '2016-08-09'
AND country = 'US'
Add an empty partition to the page_views
table:
CALL system.create_empty_partition(
schema_name => 'web',
table_name => 'page_views',
partition_columns => ARRAY['ds', 'country'],
partition_values => ARRAY['2016-08-09', 'US']);
Drop stats for a partition of the page_views
table:
CALL system.drop_stats(
schema_name => 'web',
table_name => 'page_views',
partition_values => ARRAY['2016-08-09', 'US']);
Query the page_views
table:
SELECT * FROM hive.web.page_views
List the partitions of the page_views
table:
SELECT * FROM hive.web."page_views$partitions"
Create an external Hive table named request_logs
that points at
existing data in S3:
CREATE TABLE hive.web.request_logs (
request_time timestamp,
url varchar,
ip varchar,
user_agent varchar
)
WITH (
format = 'TEXTFILE',
external_location = 's3://my-bucket/data/logs/'
)
Collect statistics for the request_logs
table:
ANALYZE hive.web.request_logs;
The examples shown here should work on Google Cloud Storage after replacing s3://
with gs://
.
Cleaning up#
Drop the external table request_logs
. This only drops the metadata
for the table. The referenced data directory is not deleted:
DROP TABLE hive.web.request_logs
Drop a schema:
DROP SCHEMA hive.web
Hive connector limitations#
ALTER SCHEMA usage fails, since the Hive metastore does not support renaming schemas.
DELETE applied to non-transactional tables is only supported if the table is partitioned and the WHERE clause matches entire partitions. Transactional Hive tables with ORC format support “row-by-row” deletion, in which the WHERE clause may match arbitrary sets of rows.
UPDATE is only supported for transactional Hive tables with format ORC. UPDATE of partition or bucket columns is not supported.
Hive 3 related limitations#
For security reasons, the sys system catalog is not accessible.
Hive’s timestamp with local zone data type is not supported. It is possible to read from a table with a column of this type, but the column data is not accessible. Writing to such a table is not supported.
Due to Hive issues HIVE-21002 and HIVE-22167, Trino does not correctly read timestamp values from Parquet, RCBinary, or Avro file formats created by Hive 3.1 or later. When reading from these file formats, Trino returns different results than Hive.
CREATE TABLE AS can be used to create transactional tables in ORC format like this:
CREATE TABLE <name>
WITH (
    format = 'ORC',
    transactional = true
)
AS <query>
Trino does not support gathering table statistics for Hive transactional tables. You need to use Hive to gather table statistics with ANALYZE TABLE COMPUTE STATISTICS after table creation.