Getting Started with Vega

The complete source code is located at the end of the tutorial, in the Source Code section.

This tutorial uses the same visualization as Vega at a Glance but elaborates on the runtime environment and implementation steps. The Vega usage pattern described here applies to all Vega implementations. Subsequent tutorials differ only in describing more advanced Vega features.

This visualization maps a continuous, quantitative input domain to a continuous output range. Again, the visualization shows tweets in the EMEA region, from a tweets data table.

Backend rendering using Vega involves the following steps:

  1. Create the Vega specification.
  2. Connect to the backend.
  3. Make the render request and handle the response.

Step 1 - Create the Vega Specification

A Vega JSON specification has the following general structure:

const exampleVega = {
  width: <numeric>,
  height: <numeric>,
  data: [ ... ],
  scales: [ ... ],
  marks: [ ... ]
};

You can create the Vega specification statically, as shown in this tutorial, or programmatically. See the Poly Map with Backend Rendering charting example for a programmatic implementation; its source code is in example3.html and example3.js.
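As a rough illustration of the programmatic approach, a small helper along the following lines could assemble the same kind of specification from a table name and view bounds. This is only a sketch; buildVegaSpec and its arguments are illustrative and not part of the HEAVY.AI API (example3.js shows the approach actually used in the charting example).

// Sketch only: build a point-rendering Vega specification programmatically.
function buildVegaSpec(table, xDomain, yDomain, width, height) {
  return {
    width: width,
    height: height,
    data: [
      { name: "pts", sql: "SELECT goog_x as x, goog_y as y, " + table + ".rowid FROM " + table }
    ],
    scales: [
      { name: "x", type: "linear", domain: xDomain, range: "width" },
      { name: "y", type: "linear", domain: yDomain, range: "height" }
    ],
    marks: [
      {
        type: "points",
        from: { data: "pts" },
        properties: {
          x: { scale: "x", field: "x" },
          y: { scale: "y", field: "y" },
          fillColor: "blue",
          size: { value: 3 }
        }
      }
    ]
  };
}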

Specify the Visualization Area

The width and height properties define the width and height of your visualization area, in pixels:

const exampleVega = {
  width: 384,
  height: 564,
  data: [ ... ],
  scales: [ ... ],
  marks: [ ... ]
};

Specify the Data Source

This example uses the following SQL statement to get the tweets data:

data: [
    {
        "name": "tweets",
        "sql": "SELECT goog_x as x, goog_y as y, tweets_nov_feb.rowid FROM tweets_nov_feb"
    }
]

The input data are the latitude and longitude coordinates of tweets from the tweets_nov_feb data table. The coordinates are labeled x and y for use in the marks property, which references the data using the tweets name.
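Because the sql property is a standard SELECT statement, you can also filter or shape the data at the source. The variant below is illustrative only; the lang column is hypothetical and not required by this tutorial.

data: [
    {
        "name": "tweets",
        "sql": "SELECT goog_x as x, goog_y as y, tweets_nov_feb.rowid FROM tweets_nov_feb WHERE lang = 'en'"
    }
]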

Specify the Graphical Properties of the Rendered Data Item

The marks property specifies the graphical attributes of how each data item is rendered:

marks: [
    {
        type: "points",
        from: {
            data: "tweets"
        },
        properties: {
            x: {
                scale: "x",
                field: "x"
            },
            y: {
                scale: "y",
                field: "y"
            },
            "fillColor": "blue",
            size: {
                value: 3
            }
        }
    }
]

In this example, each data item from the tweets data table is rendered as a point. The points marks type includes position, fill color, and size attributes. The marks property specifies how to visually encode points according to these attributes. Points in this example are three pixels in diameter and colored blue.

Points are scaled to the visualization area using the scales property.

Specify How Input Data are Scaled to the Visualization Area

The following scales specification maps marks to the visualization area.

scales: [
    {
        name: "x",
        type: "linear",
        domain: [
            -3650484.1235206556,
            7413325.514451755
        ],
        range: "width"
    },
    {
        name: "y",
        type: "linear",
        domain: [
            -5778161.9183506705,
            10471808.487466192
        ],
        range: "height"
    }
]

Both x and y scales specify a linear mapping of the continuous, quantitative input domain to a continuous output range. In this example, input data values are transformed to predefined width and height range values.
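To make the linear mapping concrete, the x scale interpolates a data value between the domain endpoints and the pixel range. The arithmetic below is purely illustrative, not connector or server code, and it assumes the "width" range resolves to [0, 384] for this visualization area:

// Illustrative only: how a linear scale maps a domain value to a pixel coordinate.
function linearScale(value, domain, range) {
  var t = (value - domain[0]) / (domain[1] - domain[0]);  // normalized position within the domain
  return range[0] + t * (range[1] - range[0]);            // interpolated position within the range
}

// A tweet at goog_x = 0 lands about a third of the way across the 384-pixel-wide image (~127 px).
var xPixel = linearScale(0, [-3650484.1235206556, 7413325.514451755], [0, 384]);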

Later tutorials show how to specify data transformation using discrete domain-to-range mapping.

Step 2 - Connect to the Backend

Use the renderVega() API to communicate with the backend. The connector is layered on Apache Thrift for cross-language client communication with the server.

Follow these steps to instantiate the connector and to connect to the backend:

  1. Include browser-connector.js, located at https://github.com/omnisci/mapd-connector/tree/master/dist, to load the MapD connector and Thrift interface APIs:

     <script src="<localJSdir>/browser-connector.js"></script>

  2. Instantiate the MapdCon() connector and set the server name, protocol information, and your authentication credentials, as described in the connect() API:

     var vegaOptions = {}
     var connector = new MapdCon()
       .protocol("http")
       .host("my.host.com")
       .port("6273")
       .dbName("omnisci")
       .user("omnisci")
       .password("HyperInteractive")

     Property    Description
     dbName      OmniSci database name.
     host        OmniSci web server name.
     password    OmniSci user password.
     port        OmniSci web server port.
     protocol    Communication protocol: http or https.
     user        OmniSci user name.

  3. Finally, call the MapD connector API function connect() to initiate a connect request, passing a callback function with an (error, success) signature as the parameter. For example:

     .connect(function(error, con) { ... });

The connect() function generates client and session IDs for this connection instance, which are unique for each instance and are used in subsequent API calls for the session.

On a successful connection, the callback function is called. The callback function in this example calls the renderVega() function.
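The demo code in this tutorial assumes the connection succeeds. In practice you may want to check the error argument before using the session; the sketch below shows one way to do that, where handleRenderResult stands in for your own render callback.

.connect(function(error, con) {
  if (error) {
    // The connection failed; con is not usable.
    console.log(error.message);
    return;
  }
  // con carries the client and session IDs used by subsequent API calls.
  con.renderVega(1, JSON.stringify(exampleVega), vegaOptions, handleRenderResult);
});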

Step 3 - Make the Render Request and Handle the Response

The MapD connector API function renderVega() sends the Vega JSON to the backend:

.connect(function(error, con) {
  con.renderVega(1, JSON.stringify(exampleVega), vegaOptions, function(error, result) {
    if (error)
      console.log(error.message);
    else {
      var blobUrl = `data:image/png;base64,${result.image}`
      var body = document.querySelector('body')
      var vegaImg = new Image()
      vegaImg.src = blobUrl
      body.append(vegaImg)
    }
  });
});

renderVega() has the following parameters:

Parameter   Type       Required   Description
widgetid    number     X          Calling widget ID.
vega        string     X          Vega JSON object, as described in Step 1 - Create the Vega Specification.
options     object                Render query options. compressionLevel: PNG compression level, from 1 (low, fast) to 10 (high, slow). Default = 3.
callback    function              Callback function with an (error, success) signature.

Return         Description
Base64 image   PNG image rendered on the server.

The backend returns the rendered Base64 image in result.image, which you can display in the browser window using a data URI.
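For example, to trade render time for a smaller PNG you can set the compressionLevel option described above. This is a minimal sketch that assumes the options object is passed through to the render request exactly as in the demo code:

var vegaOptions = { compressionLevel: 6 };  // higher = smaller PNG, slower render; default is 3

con.renderVega(1, JSON.stringify(exampleVega), vegaOptions, function(error, result) {
  if (error) {
    console.log(error.message);
  } else {
    var vegaImg = new Image();
    vegaImg.src = `data:image/png;base64,${result.image}`;  // Base64 PNG returned by the server
    document.querySelector('body').append(vegaImg);
  }
});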

Source Code

Getting Started Directory Structure

 index.html
 /js
   browser-connector.js
   vegaspec.js
   vegademo.js

HTML

Getting Started index.html

<!DOCTYPE html>
<html lang="en">
  <head>
    <title>OmniSci</title>
    <meta charset="UTF-8">
  </head>
  <body>
    <script src="js/browser-connector.js"></script>
    <script src="js/vegaspec.js"></script>
    <script src="js/vegademo.js"></script>

    <script>
    document.addEventListener('DOMContentLoaded', init, false);
    </script>
  </body>
</html>

JavaScript

Getting Started vegademo.js

function init() {
  var vegaOptions = {}
  var connector = new MapdCon()
    .protocol("http")
    .host("my.host.com")
    .port("6273")
    .dbName("omnisci")
    .user("omnisci")
    .password("changeme")
    .connect(function(error, con) {
      con.renderVega(1, JSON.stringify(exampleVega), vegaOptions, function(error, result) {
        if (error) {
          console.log(error.message);
        }
        else {
          var blobUrl = `data:image/png;base64,${result.image}`
          var body = document.querySelector('body')
          var vegaImg = new Image()
          vegaImg.src = blobUrl
          body.append(vegaImg)
        }
      });
    });
}

Getting Started vegaspec.js

const exampleVega = {
  "width": 384,
  "height": 564,
  "data": [
    {
      "name": "tweets",
      "sql": "SELECT goog_x as x, goog_y as y, tweets_data_table.rowid FROM tweets_data_table"
    }
  ],
  "scales": [
    {
      "name": "x",
      "type": "linear",
      "domain": [
        -3650484.1235206556,
        7413325.514451755
      ],
      "range": "width"
    },
    {
      "name": "y",
      "type": "linear",
      "domain": [
        -5778161.9183506705,
        10471808.487466192
      ],
      "range": "height"
    }
  ],
  "marks": [
    {
      "type": "points",
      "from": {
        "data": "tweets"
      },
      "properties": {
        "x": {
          "scale": "x",
          "field": "x"
        },
        "y": {
          "scale": "y",
          "field": "y"
        },
        "fillColor": "blue",
        "size": {
          "value": 3
        }
      }
    }
  ]
};
