Command Reference
Create new server objects.
A foreign server object represents a named external data source, along with the configuration information required to access it. Server objects are referenced during foreign table creation to indicate the backing data source for the created table.
Foreign Server Options for File Data Wrappers (delimited_file, parquet_file, and regex_parsed_file)
Foreign Server Options for ODBC Data Wrapper
ODBC HeavyConnect is currently in beta.
Examples
Create a local server that uses the delimited_file data wrapper and the file path /home/my_user/data/csv/:
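A minimal sketch of the corresponding statement, using the my_csv_server name referenced in later examples:

```sql
-- Sketch: local delimited_file server pointing at a directory of CSV files.
CREATE SERVER my_csv_server FOREIGN DATA WRAPPER delimited_file
WITH (storage_type = 'LOCAL_FILE', base_path = '/home/my_user/data/csv/');
```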
Create an AWS server that uses the delimited_file data wrapper with an AWS S3 bucket "fsi-s3-bucket" in the us-west-1 AWS region:
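A sketch of the statement; the server name my_s3_csv_server is illustrative:

```sql
-- Sketch: delimited_file server backed by an S3 bucket in us-west-1.
CREATE SERVER my_s3_csv_server FOREIGN DATA WRAPPER delimited_file
WITH (storage_type = 'AWS_S3', s3_bucket = 'fsi-s3-bucket', aws_region = 'us-west-1');
```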
Create an ODBC server using a data source name (DSN):
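A sketch, assuming a DSN named postgres_dsn is defined in odbc.ini (both the server name and the DSN are placeholders):

```sql
-- Sketch: ODBC server that resolves connection details from a DSN.
CREATE SERVER my_odbc_server FOREIGN DATA WRAPPER odbc
WITH (data_source_name = 'postgres_dsn');
```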
Create an ODBC server using a connection string:
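A sketch with an illustrative connection string; the driver, host, database, and port values are placeholders:

```sql
-- Sketch: ODBC server configured entirely through a connection string.
CREATE SERVER my_odbc_server_2 FOREIGN DATA WRAPPER odbc
WITH (connection_string = 'Driver=PostgreSQL;Database=my_db;Servername=db-host;Port=5432');
```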
To prevent collisions with available HeavyDB default servers, server names with default*, system*, and internal* (case insensitive) prefixes are not allowed.
Update server objects.
Allows users to update properties of existing server objects.
Update the my_csv_server object's base path from “/home/my_user/data/csv/” to “/home/my_user/data/new_csv/”:
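For example, assuming the SET option syntax:

```sql
-- Sketch: point my_csv_server at the new directory.
ALTER SERVER my_csv_server SET (base_path = '/home/my_user/data/new_csv/');
```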
Change the my_csv_server object's name to my_new_csv_server:
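A sketch of the rename:

```sql
-- Sketch: rename the server object.
ALTER SERVER my_csv_server RENAME TO my_new_csv_server;
```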
Delete an existing server object.
Delete the my_csv_server server object:
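A sketch (assuming the DROP SERVER form):

```sql
-- Sketch: drop the server object.
DROP SERVER my_csv_server;
```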
View information about existing server objects.
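A sketch (assuming the SHOW SERVERS form):

```sql
-- Sketch: list existing server objects and their options.
SHOW SERVERS;
```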
Create a foreign table.
A foreign table acts as a proxy to an external data source whose access details are provided by a server object. A foreign table provides an interface for querying data from an external data source, much like querying a local table.
Foreign Table Options for File Data Wrappers (delimited_file, parquet_file, and regex_parsed_file)
Additional Foreign Table Options for the delimited_file Data Wrapper
All delimited-file parser options described in https://docs.heavy.ai/loading-and-exporting-data/command-line/load-data#csv-tsv-import are also supported.
Additional Foreign Table Options for the regex_parsed_file Data Wrapper
Foreign Table Options for ODBC Data Wrapper
ODBC HeavyConnect is currently in beta.
Create a foreign table that accesses a local CSV file at /home/my_user/data/csv/my_file.csv using the default server object.
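A sketch of the table definition; the table name and the columns (t TEXT, i INTEGER, d DOUBLE) are placeholders that should match the file's layout:

```sql
-- Sketch: foreign table over a local CSV file via the default delimited server.
CREATE FOREIGN TABLE my_csv_table (t TEXT, i INTEGER, d DOUBLE)
SERVER default_local_delimited
WITH (file_path = '/home/my_user/data/csv/my_file.csv');
```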
Create a foreign table that accesses a local CSV file at path /home/my_user/data/csv/my_other_file.csv using the above my_csv_server object.
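A sketch using the custom server; because my_csv_server already supplies the base path, only a relative file_path is needed (columns are again placeholders):

```sql
-- Sketch: foreign table that relies on my_csv_server's base_path.
CREATE FOREIGN TABLE my_other_csv_table (t TEXT, i INTEGER, d DOUBLE)
SERVER my_csv_server
WITH (file_path = 'my_other_file.csv');
```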
The default "default_local_delimited" server object can also be used to accomplish the above. However, if you use a custom server object, you do not have to specify the full path when creating foreign tables.
Create a foreign table that accesses a remote Parquet file named my_parquet_file, referenced by my_parquet_server object, for which content could change every 2 days.
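A sketch with scheduled-refresh options; the column list and the start timestamp are placeholders, and refresh_interval = '2D' expresses the two-day cadence:

```sql
-- Sketch: Parquet-backed foreign table refreshed on a two-day schedule.
CREATE FOREIGN TABLE my_parquet_table (t TEXT, i INTEGER, d DOUBLE)
SERVER my_parquet_server
WITH (
  file_path = 'my_parquet_file',
  refresh_timing_type = 'SCHEDULED',
  refresh_start_date_time = '2025-01-01T00:00:00Z',
  refresh_interval = '2D',
  refresh_update_type = 'ALL'
);
```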
Use the CREATE USER MAPPING command to set access credentials when using S3. S3_Select requires set credentials even when using public buckets.
Update properties of existing foreign tables.
The following table options can be altered:
ALLOW_FILE_ROLL_OFF
BUFFER_SIZE
REFRESH_INTERVAL
REFRESH_START_DATE_TIME
REFRESH_TIMING_TYPE
REFRESH_UPDATE_TYPE
SQL_ORDER_BY
SQL_SELECT
Allows users to update properties of existing foreign tables.
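Sketches of the forms implied by the parameters below (table, column, and option values are illustrative):

```sql
-- Sketch: rename a foreign table, rename a column, and change a refresh option.
ALTER FOREIGN TABLE my_csv_table RENAME TO my_renamed_csv_table;
ALTER FOREIGN TABLE my_renamed_csv_table RENAME COLUMN t TO label;
ALTER FOREIGN TABLE my_renamed_csv_table SET (refresh_update_type = 'APPEND');
```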
Delete a foreign table.
Allows users to delete an existing foreign table.
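A sketch (assuming the DROP FOREIGN TABLE form):

```sql
-- Sketch: remove the foreign table proxy; the external data itself is not touched.
DROP FOREIGN TABLE my_csv_table;
```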
Refresh a foreign table.
Allows users to do an on-demand refresh of one or more foreign tables.
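A sketch, assuming the REFRESH FOREIGN TABLES form with the EVICT option described below:

```sql
-- Sketch: refresh two foreign tables and evict previously cached data.
REFRESH FOREIGN TABLES my_csv_table, my_parquet_table WITH (evict = 'true');
```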
A user mapping is an object that contains access credentials to an external datasource/foreign server. User mappings provide secure handling of user credentials when access to restricted external datasources or foreign servers is required. User mapping DDL commands can only be executed by super users or owners of referenced foreign servers. The following DDL commands are supported.
Create new user mappings.
Allows users to create new user mappings. See related AWS documentation for information about AWS credentials.
AWS credentials that are configured on the server, such as IAM roles, can be used in place of user mappings. Enable this option with the allow-s3-server-privileges server configuration.
ODBC User Mapping Options
ODBC HeavyConnect is currently in beta.
Examples
Set up credentials so that users can query a foreign table (see related documentation for more details) that gets data from the previously created, AWS S3-backed my_parquet_server:
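A sketch of a mapping created for all users (PUBLIC); the key values are placeholders and should never be real committed credentials:

```sql
-- Sketch: attach S3 credentials to my_parquet_server for all users (PUBLIC).
CREATE USER MAPPING FOR PUBLIC SERVER my_parquet_server
WITH (s3_access_key = '<aws_access_key_id>', s3_secret_key = '<aws_secret_access_key>');
```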
Delete existing user mappings.
Allows users to delete existing user mappings.
Examples
Delete the user mapping that was set up for my_parquet_server:
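A sketch (assuming the DROP USER MAPPING form):

```sql
-- Sketch: remove the PUBLIC user mapping from my_parquet_server.
DROP USER MAPPING FOR PUBLIC SERVER my_parquet_server;
```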
CREATE SERVER parameters

Parameter | Usage |
---|---|
server_name | The name of the foreign server to be created. |
foreign_data_wrapper_name | The foreign data wrapper to use. Available data wrappers are: delimited_file, parquet_file, regex_parsed_file, and odbc (beta). |
Foreign server options for file data wrappers (delimited_file, parquet_file, regex_parsed_file)

Parameter | Usage |
---|---|
STORAGE_TYPE | Available types are LOCAL_FILE and AWS_S3. |
BASE_PATH | Base file path for the data files or directory. |
S3_BUCKET | S3 bucket containing the files. Only applies to the AWS_S3 storage type. |
AWS_REGION | AWS region where files/objects are stored. Only applies to the AWS_S3 storage type. |
S3_ENDPOINT | Custom S3 endpoint, which allows data access from S3-compatible services, such as Google Cloud Storage. Similar in function to the COPY FROM command. |
Foreign server options for the ODBC data wrapper

Parameter | Usage |
---|---|
DATA_SOURCE_NAME | Data source name (DSN) configured in the odbc.ini file. Only one of data_source_name or connection_string is required. |
CONNECTION_STRING | A set of semicolon-separated key=value pairs that define the connection parameters for an RDBMS. For example, Driver=DriverName;Database=DatabaseName;Servername=HostName;Port=1234. Only one of data_source_name or connection_string is required. |
ALTER SERVER parameters

Parameter | Usage |
---|---|
server_name | The name of the foreign server to be updated. |
new_server_name | The new name that should be assigned to the foreign server. |
option | Clause specific to the option to be updated. |
new_owner | Username of the new foreign server owner. |
foreign_data_wrapper_name | The name of the foreign data wrapper that should be associated with the server object. |
DROP SERVER parameters

Parameter | Usage |
---|---|
server_name | The name of the foreign server to be deleted. |
SHOW SERVERS output

Result | Definition |
---|---|
server_name | Server name used in the CREATE/ALTER SERVER command. |
owner | User who created the server object, or who has been assigned as the owner of the server object via an ALTER SERVER command. |
data_wrapper | Name of the data wrapper associated with the server object. |
options | JSON string representation of the options used in the CREATE/ALTER SERVER command. |
created_at | Timestamp when the server object was created. |
CREATE FOREIGN TABLE parameters

Parameter | Usage |
---|---|
table_name | The name of the foreign table to be created. |
column_name | The name of the column to be created. |
data_type | Data type of the column to be created. |
NOT NULL | Specifies that NULL is not allowed for this column. |
ENCODING | Specifies the encoding of the data type. |
SERVER {custom_server_name \| default_local_delimited \| default_local_parquet \| default_local_regex_parsed} | The server name used in the CREATE SERVER command. |
Foreign table options for file data wrappers (delimited_file, parquet_file, regex_parsed_file)

Parameter | Usage |
---|---|
ALLOW_FILE_ROLL_OFF | Allows the oldest files (based on file path sort order) in a directory of files to be rolled off when a refresh occurs. Only applies when REFRESH_UPDATE_TYPE is set to APPEND. |
REFRESH_UPDATE_TYPE={APPEND \| ALL} | Specifies what portion of the data is refreshed. APPEND specifies that new data is added to existing data; ALL specifies that updated data can exist anywhere within the data set. |
REFRESH_TIMING_TYPE={MANUAL \| SCHEDULED} | Identifies whether data refresh activities are manual or automated. |
REFRESH_START_DATE_TIME 'YYYY-MM-DDTHH:MM:SSZ' | Identifies the time at which to start data refreshes. Used with REFRESH_TIMING_TYPE='SCHEDULED'. Time is specified in UTC in ISO 8601 format and cannot be in the past. |
REFRESH_INTERVAL interval_count{D \| M} | Used with REFRESH_TIMING_TYPE='SCHEDULED'. Available periods: D(ays) and M(onths). To specify 5 days, use '5D'. |
FILE_PATH | Name of or path to the file that serves as the data source for the foreign table. This file path is appended to the BASE_PATH specified in the foreign server. |
REGEX_PATH_FILTER | |
FILE_SORT_ORDER_BY | |
FILE_SORT_REGEX | |
S3_ACCESS_TYPE | Identifies the S3 CSV access type. Supported values are S3_Direct and S3_Select; this value is not required when using Parquet data formats. S3_Direct uses HeavyConnect's standard CSV data wrapper and HeavyDB-supported data types. S3_Select uses the S3 Select capability, which can improve performance, but does not support GEO and ARRAY types, or TEXT types with embedded row delimiters such as newlines; if your data contains these types, use S3_Direct. When using S3_Select with public buckets, CREATE USER MAPPING FOR PUBLIC SERVER is required to set credentials; it is not required when using public buckets with S3_Direct. |
Additional foreign table options for the regex_parsed_file data wrapper

Parameter | Usage |
---|---|
LINE_REGEX | Regular expression that matches a line in the given file. Each capturing group in the regular expression should correspond to a column in the foreign table definition; capturing groups are expected to match the columns in the number and order specified in the foreign table definition. |
LINE_START_REGEX | Regular expression pattern that should match the start of a new entry in the given file. This option is useful when entries span multiple lines, for example, when a message contains line breaks. If this option is specified, lines whose beginnings do not match this regular expression are appended to the previous line. This parameter is optional. |
Foreign table options for the ODBC data wrapper

Parameter | Usage |
---|---|
SQL_SELECT | SQL statement used to issue a query to an RDBMS database. |
SQL_ORDER_BY | Comma-separated list of column names that provide a unique ordering for the result set returned by the SQL statement set in the SQL_SELECT option. |
ALTER FOREIGN TABLE parameters

Parameter | Usage |
---|---|
table_name | The name of the foreign table to be updated. |
new_table_name | The new name of the foreign table. |
column_name | The name of the column to be updated. |
new_column_name | The new name of the column. |
DROP FOREIGN TABLE parameters

Parameter | Usage |
---|---|
table_name | The name of the foreign table to be deleted. |
REFRESH FOREIGN TABLES parameters

Parameter | Usage |
---|---|
EVICT | Indicates whether to evict all cached foreign table data while refreshing (TRUE) or not (FALSE). |
table_name | The name of the foreign table(s) to be refreshed. |
CREATE USER MAPPING parameters and S3 options

Parameter | Usage |
---|---|
server_name | Name of the server the user mapping applies to. |
s3_access_key | AWS access key ID for accessing S3 objects (see related documentation for more details). Provided as an option; only applies to AWS S3-based servers (that is, delimited_file, parquet_file, or regex_parsed_file data wrappers with a storage_type of AWS_S3). |
s3_secret_key | AWS secret access key for accessing S3 objects (see related documentation for more details). Provided as an option; only applies to AWS S3-based servers (that is, delimited_file, parquet_file, or regex_parsed_file data wrappers with a storage_type of AWS_S3). |
s3_session_token | AWS session token generated for accessing S3 objects using temporary credentials; see AWS documentation for more information. Only applies to AWS S3-based servers (for example, delimited_file, parquet_file, or regex_parsed_file data wrappers with a storage_type of AWS_S3). |
ODBC user mapping options

Parameter | Usage |
---|---|
username | Username on the RDBMS. This option only applies to user mappings that reference an odbc foreign server that uses data_source_name. |
password | Password credential for the RDBMS. This option only applies to user mappings that reference an odbc foreign server that uses data_source_name. |
credential_string | A set of semicolon-separated key=value pairs that define the access credential parameters for an RDBMS. For example, Username=username;Password=password. This option only applies to user mappings that reference an odbc foreign server that uses connection_string. |
DROP USER MAPPING parameters

Parameter | Usage |
---|---|
server_name | Name of the server the user mapping applies to. |