HEAVY.AI Installation using Docker on Ubuntu

Follow these steps to install HEAVY.AI as a Docker container on a machine running on CPU only or with supported NVIDIA GPU cards, using Ubuntu as the host OS.

Preparation

Prepare your host by installing Docker and, if needed for your configuration, the NVIDIA drivers and NVIDIA container runtime.

Install Docker

Remove any existing Docker installation and, if on GPU, the legacy NVIDIA Docker runtime.

sudo apt-get purge nvidia-docker
for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt-get remove $pkg; done

Add Docker's official GPG key using curl and ca-certificates.

sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

Add the Docker repository to your Apt sources.

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Update your package index.
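For example:

```shell
sudo apt-get update
```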

Install Docker, the command line interface, and the container runtime.
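A sketch using the standard Docker package names (these may change between Docker releases; check the Docker documentation for the current list):

```shell
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```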

Run the following usermod command so that Docker commands do not require sudo privileges. Log out and log back in for the change to take effect. (Recommended)
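```shell
sudo usermod -aG docker $USER
```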

Verify your Docker installation.
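For example, by checking the installed version and running the standard hello-world test image:

```shell
docker --version
docker run hello-world
```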

For more information on Docker installation, see the Docker Installation Guide.

Install NVIDIA Drivers and NVIDIA Container Runtime ᴳᴾᵁ ᴼᴾᵀᴵᴼᴺ

Install NVIDIA Drivers

Install the NVIDIA driver and CUDA Toolkit by following the instructions in Install NVIDIA Drivers and Vulkan on Ubuntu.

Install NVIDIA Docker Runtime

Use curl to add NVIDIA's GPG key:
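A sketch using the legacy nvidia-container-runtime repository key (note that apt-key is deprecated on recent Ubuntu releases, and NVIDIA now recommends the newer NVIDIA Container Toolkit packages; check NVIDIA's current documentation):

```shell
curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | sudo apt-key add -
```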

Update your sources list:
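For example, using the repository list published for your Ubuntu release:

```shell
# Derive the distribution string, e.g. ubuntu22.04
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-container-runtime/$distribution/nvidia-container-runtime.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-runtime.list
```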

Update apt-get and install nvidia-container-runtime:
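```shell
sudo apt-get update
sudo apt-get install nvidia-container-runtime
```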

Edit /etc/docker/daemon.json to add the following, and save the changes:
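A typical daemon.json for this setup registers the NVIDIA runtime; making it the default runtime (as shown) is a common convenience for GPU containers, though not strictly required:

```json
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "default-runtime": "nvidia"
}
```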

Restart the Docker daemon:
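```shell
sudo systemctl restart docker
```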

Check NVIDIA Drivers

Verify that Docker and the NVIDIA runtime work together.
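For example, by running nvidia-smi in a throwaway CUDA container (the image tag here is only an example; pick one compatible with your installed driver):

```shell
sudo docker run --rm --runtime=nvidia nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi
```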

If everything is working, you should see the standard nvidia-smi output listing the GPUs visible in your container.

HEAVY.AI Installation

Create a directory to store data and configuration files.
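For example (the path /var/lib/heavyai is an assumed conventional location; adjust it to your environment):

```shell
sudo mkdir -p /var/lib/heavyai
```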

Then create a minimal configuration file for the Docker installation.
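A sketch of a minimal heavy.conf written with a heredoc; the ports and paths shown follow HEAVY.AI defaults, but verify them against your edition's documentation:

```shell
sudo tee /var/lib/heavyai/heavy.conf > /dev/null <<'EOF'
port = 6274
http-port = 6278
data = "/var/lib/heavyai/storage"
null-div-by-zero = true

[web]
port = 6273
frontend = "/opt/heavyai/frontend"
EOF
```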

Optional: Download HEAVY.AI from Release Website

The next section downloads and installs an image from DockerHub. If you prefer not to pull from DockerHub, you can instead download and prepare a specific image by following the instructions in this section. To download a specific version, visit one of the following websites, choose the version that you wish to install, right-click it, and select "Copy URL".

Enterprise/Free Editions: https://releases.heavy.ai/ee/tar/

Open Source Editions: https://releases.heavy.ai/os/tar/

Use files ending in -render-docker.tar.gz to install the GPU edition and files ending in -cpu-docker.tar.gz to install the CPU editions.

Then, on the server where you wish to install HEAVY.AI, run the following command (replacing $DOWNLOAD_URL with the URL from your clipboard).

wget $DOWNLOAD_URL

Await successful download and run ls | grep heavy to see the filename of the package you just downloaded. Copy the filename to your clipboard, and then run the next command replacing $DOWNLOADED_FILENAME with the contents of your clipboard.

docker load < $DOWNLOADED_FILENAME

The command will return a Docker image name. Replace heavyai/heavyai-(...):latest with the image you just loaded.

Download HEAVY.AI from DockerHub and Start HEAVY.AI in Docker.

Select the tab depending on the Edition (Enterprise, Free, or Open Source) and execution Device (GPU or CPU) you are going to use.

Replace ":latest" with ":vX.Y.Z" to pull a specific version (e.g., heavyai-ee-cuda:v8.0.1).
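For example, a sketch of starting the Enterprise Edition GPU image (the volume path and port range assume the directory and configuration created above; for CPU editions, drop --gpus all and use the corresponding -cpu image):

```shell
docker run -d --gpus all \
  -v /var/lib/heavyai:/var/lib/heavyai \
  -p 6273-6278:6273-6278 \
  --name heavyai \
  heavyai/heavyai-ee-cuda:latest
```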

Check that the container is up and running with the docker ps command:
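```shell
docker ps
```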

You should see an output similar to the following.

See also the note regarding the CUDA JIT Cache in Optimizing Performance.

Configure Firewall ᴼᴾᵀᴵᴼᴺᴬᴸ

If a firewall is not already installed and you want to harden your system, install the ufw package.

To use Heavy Immerse or other third-party tools, you must prepare your host machine to accept incoming HTTP(S) connections. Configure your firewall for external access.
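A sketch of a ufw configuration opening the default HEAVY.AI ports (6273 for Heavy Immerse, 6274 for HeavyDB; adjust if you changed them in heavy.conf):

```shell
sudo apt install ufw
sudo ufw allow ssh
sudo ufw allow 6273:6278/tcp
sudo ufw enable
```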

Most cloud providers use a different mechanism for firewall configuration. The commands above might not run in cloud deployments.

For more information, see https://help.ubuntu.com/lts/serverguide/firewall.html.

Licensing HEAVY.AI ᵉᵉ⁻ᶠʳᵉᵉ ᵒⁿˡʸ

If you are on Enterprise or Free Edition, you must validate your HEAVY.AI instance with your license key. Skip this section if you are on Open Source Edition.²

  1. Copy your license key of Enterprise or Free Edition from the registration email message. If you don't have a license and you want to evaluate HEAVY.AI in an enterprise environment, contact your Sales Representative or register for your 30-day trial of Enterprise Edition here. If you need a Free License you can get one here.

  2. Connect to Heavy Immerse using a web browser to your host on port 6273. For example, http://heavyai.mycompany.com:6273.

  3. When prompted, paste your license key in the text box and click Apply.

  4. Log into Heavy Immerse by entering the default username (admin) and password (HyperInteractive), and then click Connect.

Command-Line Access

You can access the command line in the Docker image to perform configuration and run HEAVY.AI utilities.

You need to know the container-id to access the command line. Use the command below to list the running containers.

You see output similar to the following.

Once you have your container ID, in the example 9e01e520c30c, you can access the command line using the Docker exec command. For example, here is the command to start a Bash session in the Docker instance listed above. The -it switch makes the session interactive.
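Using the example container ID above:

```shell
docker exec -it 9e01e520c30c bash
```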

You can end the Bash session with the exit command.

Final Checks

To verify that everything is working, load some sample data, perform a heavysql query, and generate a Scatter Plot or a Bubble Chart using Heavy Immerse.¹

Load Sample Data and Run a Simple Query

HEAVY.AI ships with two sample datasets of airline flight information collected in 2008, and a census of New York City trees. To install sample data, run the following command.
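A sketch, assuming the insert_sample_data script sits in the image's working directory as in standard HEAVY.AI images:

```shell
docker exec -it <container-id> ./insert_sample_data
```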

Where <container-id> is the container in which HEAVY.AI is running.

When prompted, choose whether to insert dataset 1 (7,000,000 rows), dataset 2 (10,000 rows), or dataset 3 (683,000 rows). The examples below use dataset 2.

Connect to HeavyDB by entering the following command (you will be prompted for a password; the default password is HyperInteractive):
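A sketch, assuming heavysql is on the image's working path and the default admin user; verify the binary location for your edition:

```shell
docker exec -it <container-id> bin/heavysql -u admin -p HyperInteractive
```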

Enter a SQL query such as the following:
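For example, a simple aggregation against the flights_2008_10k sample table (column names assumed from the sample dataset):

```sql
SELECT origin_city AS "Origin", dest_city AS "Destination", AVG(airtime) AS "Average Airtime"
FROM flights_2008_10k
WHERE distance < 175
GROUP BY origin_city, dest_city;
```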

The results should be similar to the results below.

Create a Dashboard Using Heavy Immerse ᵉᵉ⁻ᶠʳᵉᵉ ᵒⁿˡʸ ¹

After installing Enterprise or Free Edition, check that Heavy Immerse is running as intended.

  1. Connect to Heavy Immerse using a web browser connected to your host machine on port 6273. For example, http://heavyai.mycompany.com:6273.

  2. Log into Heavy Immerse by entering the default username (admin) and password (HyperInteractive), and then click Connect.

Create a new dashboard and a Scatter Plot to verify that backend rendering is working.

  1. Click New Dashboard.

  2. Click Add Chart.

  3. Click SCATTER.

  4. Click Add Data Source.

  5. Choose the flights_2008_10k table as the data source.

  6. Click X Axis +Add Measure.

  7. Choose depdelay.

  8. Click Y Axis +Add Measure.

  9. Choose arrdelay.

  10. Click Size +Add Measure.

  11. Choose airtime.

  12. Click Color +Add Measure.

  13. Choose dest_state.

The resulting chart shows, unsurprisingly, that there is a correlation between departure delay and arrival delay.

GPU-rendered scatter plot

¹ In the OS Edition, Heavy Immerse Service is unavailable.

² The OS Edition does not require a license key.
