HEAVY.AI Docs
v8.1.0

ODBC Data Wrapper Reference

ODBC HeavyConnect and import are currently beta features.

This topic provides setup and configuration details for ODBC HeavyConnect and import, describes ODBC data wrapper assumptions and the supported mappings of HeavyDB column types to ODBC column types, and provides RDMS-specific notes.

ODBC Driver Installation

To HeavyConnect to or import from an external RDMS using the ODBC data wrapper, make sure that an appropriate ODBC driver is installed for the RDMS you want to access. RDMS vendors' websites usually provide ODBC driver download and configuration instructions; in some cases, ODBC drivers are maintained by third-party vendors that provide these instructions instead.

ODBC Configuration Files

Two key configuration files need to be set up when using ODBC HeavyConnect or import: odbcinst.ini and odbc.ini.

The odbcinst.ini file specifies the list of installed drivers, along with the required configuration for each driver, such as the path to the driver binary. The following example is an odbcinst.ini configuration file for an instance with installed drivers for PostgreSQL, Redshift, and Snowflake:

[ODBC Drivers]
PostgreSQL=Installed
Redshift=Installed
Snowflake=Installed

[PostgreSQL]
Description=PostgreSQL ODBC driver
Driver=/usr/lib/x86_64-linux-gnu/odbc/psqlodbca.so

[Redshift]
Description=Redshift ODBC driver
Driver=/opt/amazon/redshiftodbc/lib/64/libamazonredshiftodbc64.so

[Snowflake]
Description=Snowflake ODBC Driver
Driver=/usr/lib/snowflake/odbc/lib/libSnowflake.so

Use an intuitive name for each driver, such as the official name of the RDMS, so that users can easily determine the driver names to use in connection strings.
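As an illustration, the driver registry above can be sanity-checked with Python's standard configparser module. The file contents are copied from the example, and the consistency check (every driver listed under [ODBC Drivers] has its own section with a Driver= path) is a suggested convention for catching misconfiguration, not a HEAVY.AI requirement:

```python
import configparser

# A trimmed copy of the odbcinst.ini example from this page.
ODBCINST_INI = """\
[ODBC Drivers]
PostgreSQL=Installed
Snowflake=Installed

[PostgreSQL]
Description=PostgreSQL ODBC driver
Driver=/usr/lib/x86_64-linux-gnu/odbc/psqlodbca.so

[Snowflake]
Description=Snowflake ODBC Driver
Driver=/usr/lib/snowflake/odbc/lib/libSnowflake.so
"""

config = configparser.ConfigParser()
config.optionxform = str  # preserve the case of driver names and option keys
config.read_string(ODBCINST_INI)

# Every driver listed under [ODBC Drivers] should have its own section
# with a Driver= path; a missing section is a common cause of errors.
missing = [name for name in config["ODBC Drivers"]
           if not config.has_option(name, "Driver")]
print(missing)  # an empty list means the file is self-consistent
```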

The odbc.ini file contains configuration for database-specific connections. Each connection configuration is identified by a data source name (DSN). The following example is an odbc.ini file with configuration for the default postgres database on a local instance of PostgreSQL running on port 5432:

[ODBC Data Sources]
default_postgres_db=default_postgres_db

[default_postgres_db]
Description=Local default PostgreSQL database
Driver=PostgreSQL
Database=postgres
Servername=localhost
Port=5432

Place odbcinst.ini and odbc.ini in the /etc/ directory, or in the home directory of the user account under which the server process is started.
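As an alternative to the fixed locations above, unixODBC also honors environment variables that point at the configuration files; this is standard unixODBC behavior rather than anything HEAVY.AI-specific, and the paths below are illustrative:

```shell
# ODBCSYSINI names the directory containing odbcinst.ini (and a system odbc.ini);
# ODBCINI names the full path of a specific odbc.ini file.
export ODBCSYSINI=/opt/heavyai/odbc
export ODBCINI=/opt/heavyai/odbc/odbc.ini
```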

The PostgreSQL Unicode ODBC driver is currently not supported.

Setting Up ODBC Connectors for Heavy Immerse

In certain cases, an odbcinst.ini configuration file that omits the [ODBC Drivers] section is automatically generated when ODBC drivers (such as the PostgreSQL driver) are installed. This section of the configuration file is required for the ODBC-related connectors to appear in Heavy Immerse.

In addition, the driver names specified in odbcinst.ini must match the RDMS name for connectors to appear in Immerse. Specifically, the driver name should be PostgreSQL for PostgreSQL, Redshift for AWS Redshift, and Snowflake for Snowflake.

Working with HEAVY.AI Docker Images

For Docker deployments of HEAVY.AI, ODBC driver installations can be managed in a Dockerfile that uses the HEAVY.AI Docker image as a base. For example, you can create a Docker image with the PostgreSQL, Redshift, and Snowflake ODBC drivers installed by using a Dockerfile with the following content.

Always consult the license and terms and conditions of the ODBC drivers you install. The examples here are provided for illustrative purposes only.

FROM heavyai/heavyai-ee-cuda

# Install the PostgreSQL ODBC driver.
# The Snowflake ODBC driver also depends on unixodbc.
RUN apt-get update && apt-get install -y odbc-postgresql unixodbc

# Install the Redshift ODBC driver.
RUN wget https://s3.amazonaws.com/redshift-downloads/drivers/odbc/1.4.52.1000/AmazonRedshiftODBC-64-bit-1.4.52.1000-1.x86_64.deb \
    && dpkg -i ./AmazonRedshiftODBC-64-bit-1.4.52.1000-1.x86_64.deb \
    && rm ./AmazonRedshiftODBC-64-bit-1.4.52.1000-1.x86_64.deb

# Install the Snowflake ODBC driver.
RUN wget https://sfc-repo.snowflakecomputing.com/odbc/linux/latest/snowflake-odbc-2.25.2.x86_64.deb \
    && dpkg -i ./snowflake-odbc-2.25.2.x86_64.deb \
    && rm ./snowflake-odbc-2.25.2.x86_64.deb

You can then build a new image with installed ODBC drivers using the Dockerfile.

The odbcinst.ini and odbc.ini configuration files should be bound to /etc/odbcinst.ini and /etc/odbc.ini, respectively, when running the container.
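For example, with Docker Compose the bind mounts might look like the following fragment (the service name, image tag, and host paths are illustrative):

```yaml
services:
  heavydb:
    image: my-heavyai-odbc-image  # image built from a Dockerfile like the one above
    volumes:
      - /opt/heavyai/odbc/odbcinst.ini:/etc/odbcinst.ini
      - /opt/heavyai/odbc/odbc.ini:/etc/odbc.ini
```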

ODBC Data Wrapper Assumptions

  • The remote database management system supports the MIN, MAX, and LENGTH SQL functions. These functions are used internally for optimizations and validations.

  • The combination of the query specified in the foreign table’s SQL_SELECT option and the SQL_ORDER_BY option is guaranteed to return a result set in the same order every time the query is executed.

  • Query results for geo columns are always returned as WKT strings. This means that geo columns must either be stored as text columns containing WKT strings, be cast to text in the SQL_SELECT query (for example, using a function like ST_AsText), or be projected as WKT strings by default by the remote database management system.
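As a hypothetical illustration of the geo assumption above, a remote geometry column can be projected as WKT in the SQL_SELECT query. The server, table, and column names here are invented, and the exact DDL is covered in the HeavyConnect Command Reference:

```sql
CREATE FOREIGN TABLE remote_sites (
  site_id BIGINT,
  boundary POLYGON
)
SERVER my_odbc_server
WITH (
  SQL_SELECT = 'SELECT site_id, ST_AsText(boundary) AS boundary FROM sites',
  SQL_ORDER_BY = 'site_id'
);
```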

HeavyDB to ODBC Data Type Mapping

Numeric and Boolean Types

| HeavyDB (rows) \ ODBC (columns) | SQL_BIGINT | SQL_INTEGER | SQL_SMALLINT | SQL_TINYINT | SQL_DOUBLE | SQL_FLOAT | SQL_REAL | SQL_DECIMAL | SQL_NUMERIC | SQL_BIT |
|---|---|---|---|---|---|---|---|---|---|---|
| BIGINT | Yes (unsigned is not coercible) | Yes (if unsigned) | No | No | No | No | No | Coercible | No | No |
| BIGINT ENCODING FIXED (32) / INTEGER | Coercible | Yes | Yes (if unsigned) | No | No | No | No | Coercible | No | No |
| BIGINT ENCODING FIXED (16) / INTEGER ENCODING FIXED (16) / SMALLINT | Coercible | Coercible | Yes | Yes (if unsigned) | No | No | No | Coercible | No | No |
| BIGINT ENCODING FIXED (8) / INTEGER ENCODING FIXED (8) / SMALLINT ENCODING FIXED (8) / TINYINT | Coercible | Coercible | Coercible | Yes | No | No | No | Coercible | No | No |
| BOOLEAN | No | No | No | No | No | No | No | No | No | Yes [1] |
| DECIMAL (Precision, Scale) | No | No | No | No | No | No | No | Yes (if precision and scale are within HeavyDB's range) | Yes (if precision and scale are within HeavyDB's range) | No |
| DOUBLE | No | No | No | No | Yes | Coercible | No | No | No | No |
| FLOAT | No | No | No | No | Coercible | Coercible | Yes | No | No | No |

[1] PostgreSQL requires setting the option BoolsAsChar=false to represent boolean columns as SQL_BIT.

Date and Time Types

| HeavyDB (rows) \ ODBC (columns) | SQL_TYPE_DATE | SQL_TYPE_TIMESTAMP | SQL_TYPE_TIME | SQL_TYPE_UTCDATETIME | SQL_TYPE_UTCTIME |
|---|---|---|---|---|---|
| DATE / DATE ENCODING DAYS (32) | Coercible | No | No | No | No |
| DATE ENCODING DAYS (16) | Coercible | No | No | No | No |
| TIME | No | No | Yes | No | No |
| TIME ENCODING FIXED (32) | No | No | Yes | No | No |
| TIMESTAMP (0) | No | Coercible | No | No | No |
| TIMESTAMP (3) | No | Coercible | No | No | No |
| TIMESTAMP (6) | No | Coercible | No | No | No |
| TIMESTAMP (9) | No | Coercible | No | No | No |
| TIMESTAMP ENCODING FIXED (32) | No | Coercible | No | No | No |

Geo and String Types

| HeavyDB (rows) \ ODBC (columns) | SQL_CHAR | SQL_VARCHAR | SQL_LONGVARCHAR | SQL_WCHAR | SQL_WVARCHAR | SQL_WLONGVARCHAR |
|---|---|---|---|---|---|---|
| TEXT ENCODING DICT | Yes | Yes | Yes | Yes | Yes | Yes |
| TEXT ENCODING DICT (16) | Yes | Yes | Yes | Yes | Yes | Yes |
| TEXT ENCODING DICT (8) | Yes | Yes | Yes | Yes | Yes | Yes |
| TEXT ENCODING NONE | Yes | Yes | Yes | Yes | Yes | Yes |
| LINESTRING | No | Yes | Yes | No | No | No |
| MULTILINESTRING | No | Yes | Yes | No | No | No |
| MULTIPOLYGON | No | Yes | Yes | No | No | No |
| POINT | No | Yes | Yes | No | No | No |
| MULTIPOINT | No | Yes | Yes | No | No | No |
| POLYGON | No | Yes | Yes | No | No | No |

Array Type

Array data types are currently not supported.

RDMS-Specific Notes

PostgreSQL

  • Accessing BOOLEAN data in PostgreSQL requires the BoolsAsChar=false option to be added to the connection string or DSN configuration.
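For example, in a DSN-based setup the option can be added to the odbc.ini entry; the DSN name and connection details here are illustrative:

```ini
[my_postgres_dsn]
Description=PostgreSQL database with BoolsAsChar disabled
Driver=PostgreSQL
Database=postgres
Servername=localhost
Port=5432
BoolsAsChar=false
```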

Google BigQuery

  • The BigQuery ODBC connector uses the OAuth 2.0 protocol for authentication; setup steps are provided in the official installation guide.

    • In all authentication methods, the Principal member (for example, a Service Account or User Account) must have at least a roles/bigquery.dataViewer permission grant on the GCP project.


Last updated 1 year ago

See https://docs.microsoft.com/en-us/sql/odbc/reference/appendixes/sql-data-types for more details about ODBC data types, and consult the specific RDMS ODBC driver documentation for details about how the RDMS data types map to the ODBC data types.

The official PostgreSQL ODBC driver can be downloaded from https://odbc.postgresql.org/.

The official BigQuery ODBC data connector installation and configuration guide can be downloaded from https://cloud.google.com/bigquery/docs/reference/odbc-jdbc-drivers.

Using a Service Account is the recommended method for authentication. Steps are provided in "Using a Google Service Account" in the installation guide.

If you are using a User Account for authentication, some steps may be missing from the installation guide in the "Using a Google User Account" section. Include the following steps: