HEAVY.AI Docs
v7.2.4

Known Issues and Limitations

The following are known issues, limitations, and changes to default behavior in HEAVY.AI.

HeavyDB

HeavyDB supports only Web Mercator projection

Because HeavyDB supports Web Mercator projection only, applications that use coordinates other than Web Mercator may not render data accurately on their maps.
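
If your source geometry is stored as WGS84 longitude/latitude (SRID 4326), one option is to project it to Web Mercator at query time with ST_Transform. A minimal sketch, assuming a hypothetical trips table with a pickup_point geo column:

-- Hypothetical table/column names. HeavyDB's ST_Transform converts
-- SRID 4326 (WGS84 lon/lat) geometries to SRID 900913 (Web Mercator).
SELECT ST_Transform(pickup_point, 900913) AS pickup_mercator
FROM trips;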

Variable-length types are not supported when performing columnar conversion

Whenever the result of a query is used as input to another query (for example, CREATE TABLE AS SELECT or any multi-step query), HEAVY.AI performs a columnar conversion to change the intermediate result into the columnar format that all HEAVY.AI queries expect as input. Variable-length types, including all geometry targets, are not supported when performing columnar conversion.
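
For example, a CREATE TABLE AS SELECT whose projection includes a geometry column triggers this conversion on the intermediate result. A hypothetical illustration (table and column names are assumptions):

-- The inner SELECT feeds CREATE TABLE AS SELECT, so its result must be
-- converted to columnar format. A variable-length target such as a
-- GEOMETRY(POINT) column is not supported by that conversion.
CREATE TABLE trips_subset AS
SELECT trip_id, pickup_point
FROM trips
WHERE trip_distance > 10;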

Do not set BLOSC_* environment variables

HEAVY.AI uses a compression library called BLOSC that reads operating system environment variables and changes its behavior based on their values. Do not set any of the environment variables listed below.

BLOSC_CLEVEL
BLOSC_SHUFFLE
BLOSC_TYPESIZE
BLOSC_COMPRESSOR
BLOSC_NTHREADS
BLOSC_BLOCKSIZE
BLOSC_NOLOCK
BLOSC_SPLITMODE

OPTIMIZE TABLE with VACUUM increases metadata

In HEAVY.AI Core version 4.5.0, the with (vacuum = 'true') option increases the size of your table metadata. Do not use the vacuum option.
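
The form to avoid looks like this (my_table is a placeholder):

-- Avoid in HEAVY.AI Core 4.5.0: vacuuming while optimizing grows metadata.
OPTIMIZE TABLE my_table WITH (vacuum = 'true');

-- Optimizing without the vacuum option does not have this effect.
OPTIMIZE TABLE my_table;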

ALTER TABLE ADD COLUMN does not work with geo column type

ALTER TABLE ADD COLUMN for a geo column type in MapD 4.1 only partially adds the column, and any queries on that column result in a system failure. This issue was fixed in MapD version 4.1.1. If you encounter it, contact HEAVY.AI support at support@heavyai.com.

Possible integer overflow on SELECT COUNT(*) for tables with more than 2^32 rows

To prevent negative counts from being returned, set bigint-count = true in heavyai.conf.
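
In heavyai.conf, the setting is a single line:

bigint-count = true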

UPDATE limitations

  • HEAVY.AI does not currently support UPDATE from a subquery; a possible workaround is sketched after this list. For example, the following will not work:

    UPDATE tempDataView SET marks = ( SELECT marks FROM tempData b WHERE tempDataView.Name = b.Name )
  • UPDATE is not currently supported on variable-length data types.
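
One possible workaround for the subquery limitation is to materialize the corrected rows with CREATE TABLE AS SELECT instead of updating in place. A sketch reusing the tables from the example above (the swap-and-drop steps are an assumption, not a prescribed procedure):

-- Build a new table containing the updated values via a JOIN,
-- then swap it into place. The column list is illustrative only.
CREATE TABLE tempDataView_new AS
SELECT v.Name, b.marks
FROM tempDataView v
JOIN tempData b ON v.Name = b.Name;

DROP TABLE tempDataView;
ALTER TABLE tempDataView_new RENAME TO tempDataView;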

CUDA error on NVIDIA DGX systems

On NVIDIA DGX systems, you might get the following error:

2021-05-14T15:56:16.571249 E 40446 0 DBHandler.cpp:403 Unable to instantiate CudaMgr, falling back to CPU-only mode. CUDA Error (999): unknown error

This error occurs if the NVIDIA Fabric Manager is not installed. To resolve the issue, install the Fabric Manager on the system.

HEAVY.AI Rendering Engine

Potential deadlock when handling table modify statements while a render is in flight

In OmniSci 5.1.2, a deadlock can result if a render_vega request is executed at the same time as a table modification request (DROP/TRUNCATE/RENAME/APPEND TABLE) and the same table is referenced in both requests.

This is scheduled to be fixed in a future release. Until that time, HEAVY.AI recommends that you avoid executing a table modification request at the same time as a render_vega request against the same table.

Sizing points by meters at large zoom levels introduces error

When the new convert_meters_to_pixel_width and convert_meters_to_pixel_height extension functions are evaluated for accuracy against circular polygons created with ST_Buffer in other packages, the extension functions introduce some error at large zoom levels.

The resulting point/symbol sized by meters is an approximation; it does not represent the exact area on the globe. The error in the approximation increases as you get closer to the poles in a Mercator-projected view: a circle defined in meters should become egg-shaped, whereas the current symbol remains elliptical.

Workaround: If your clients are going to use these extension functions, HEAVY.AI recommends the legacysymbol Vega mark type if the size in meters is large and zooming in close is useful for your analysis.

Heavy Immerse

Parameters and geo-join failure

In Release 5.6.0, parameters do not work with geo joins.

High-precision timestamp limitation

You can import higher-precision timestamps (TIMESTAMP(3), TIMESTAMP(6), or TIMESTAMP(9) for millisecond, microsecond, or nanosecond precision, rather than the default of seconds) via the data manager, but you cannot use them as part of the actual queries or filters for a chart (as opposed to displaying them as results). For example, you cannot use a high-precision timestamp as the time dimension for a combo chart.
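
For reference, a high-precision timestamp column is declared like this (table and column names are placeholders):

-- TIMESTAMP(3), TIMESTAMP(6), and TIMESTAMP(9) store milli-, micro-, and
-- nanosecond precision. The values display in chart results but cannot
-- drive a chart's time dimension or filters in Immerse.
CREATE TABLE sensor_events (
  device_id INTEGER,
  recorded_at TIMESTAMP(9)
);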

Dashboard sharing limitations

  • In MapD 4.0, dashboards can be shared only as read-only. Users with whom a dashboard has been shared cannot currently edit it.

  • For security reasons, dashboard sharing does not automatically grant permissions on underlying tables and views. For now, a superuser or administrator must perform a one-time setup to configure a group of users or a role with permissions on the underlying objects.

  • Dashboard sharing does not currently work in HEAVY.AI Cloud because each HEAVY.AI Cloud user currently has a dedicated HEAVY.AI instance. This limitation will be addressed in a future release.

Dashboards loaded by ID instead of name

If your OmniSci instance is set up to autoload specific dashboards on login by specifying the name in servers.json, you need to update the entry to use the dashboard ID instead. If you do not, your dashboards will not autoload. Find the dashboard ID by running `\dash` from omnisql, and then update the servers.json entry accordingly:

[
   {
     "database": "omnisci",
     "master": "true",
     "username": "user",
     "password": "HyperInteractive",
     "url": "http://webserver.com:9092",
     "loadDashboard": "740",
     "GTM": "GTM-MDD8888"
   }
]

Immerse backward compatibility

MapD Immerse versions 3.4.0 and higher work only with MapD Core versions 3.4.0 and higher.

Unexpected update queries for Line and Histogram charts

Any update to a Line or Histogram chart also starts an unnecessary update query for the associated range chart.

Unexpected render of complex data fields exported to CSV format

Binned columns and date extracts appear as JSON strings when exported to CSV format.

Sorting by non-grouped column in a table chart might not work properly in a distributed configuration

Old share links no longer load and render immediately

Old share links (for example, https://www.mapd.com/demos/ships/#/link/mapd/228ae04e) no longer load and render all charts immediately. You must resize the browser or otherwise cause the page to re-render to see all charts.
