In this section, you will find recipes to install the HEAVY.AI platform and NVIDIA drivers using a package manager such as apt, or from a tarball.
This is an end-to-end recipe for installing HEAVY.AI on an Ubuntu 18.04/20.04 machine using CPU and GPU devices.
The order of these instructions is significant. To avoid problems, install each component in the order presented.
These instructions assume the following:
You are installing on a “clean” Ubuntu 18.04/20.04 host machine with only the operating system installed.
Your HEAVY.AI host only runs the daemons and services required to support HEAVY.AI.
Your HEAVY.AI host is connected to the Internet.
Prepare your Ubuntu machine by updating your system, creating the HEAVY.AI user (named heavyai), installing kernel headers, installing CUDA drivers, and optionally enabling the firewall.
1. Update the entire system:
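A minimal way to do this, assuming sudo privileges:

```bash
# Refresh package lists and upgrade all installed packages
sudo apt update
sudo apt upgrade -y
```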
2. Install the utilities needed to create HEAVY.AI repositories and download archives:
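For example, curl covers the download steps used later in this guide (the exact utility set is an assumption; add any other tools you prefer):

```bash
sudo apt install -y curl
```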
3. Install the headless JDK and the utility apt-transport-https:
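A typical invocation (default-jre-headless is one headless Java package choice on Ubuntu; substitute another JDK if your environment requires it):

```bash
sudo apt install -y default-jre-headless apt-transport-https
```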
4. Reboot to activate the latest kernel:
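For example:

```bash
sudo reboot
```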
Create a group called heavyai and a user named heavyai, who will be the owner of the HEAVY.AI software and data on the filesystem.
1. Create the group, user, and home directory using the useradd command with the --user-group and --create-home switches:
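For example:

```bash
# Creates the heavyai group and user along with a home directory
sudo useradd --user-group --create-home heavyai
```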
2. Set a password for the user:
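For example:

```bash
sudo passwd heavyai
```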
3. Log in with the newly created user:
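For example:

```bash
sudo su - heavyai
```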
Install HEAVY.AI using APT or a tarball.
Installation using the APT package manager is recommended for those who want a more automated install and upgrade procedure.
If your system uses NVIDIA GPUs but the drivers are not installed, install them now. See Install NVIDIA Drivers and Vulkan on Ubuntu for details.
Download and add a GPG key to APT.
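A sketch of this step; the key URL is an assumption, so confirm the current location on the HEAVY.AI downloads page:

```bash
curl -s https://releases.heavy.ai/GPG-KEY-heavyai | sudo apt-key add -
```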
Add an APT source depending on the edition (Enterprise, Free, or Open Source) and execution device (GPU or CPU) you are going to use.
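An illustrative example for the Enterprise Edition on GPU; the repository path is an assumption and differs by edition and device, so use the source line published for your license:

```bash
echo "deb https://releases.heavy.ai/ee/apt/ stable cuda" | sudo tee /etc/apt/sources.list.d/heavyai.list
```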
Use apt to install the latest version of HEAVY.AI.
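Assuming the package is named heavyai in the repository you added:

```bash
sudo apt update
sudo apt install -y heavyai
```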
If you need to install a specific version of HEAVY.AI, because you are upgrading from OmniSci or for other reasons, run the following command:
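A sketch using a placeholder version string; list the available versions first and substitute the one you need:

```bash
apt list -a heavyai
sudo apt install heavyai=<version>
```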
First create the installation directory.
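For example, using /opt/heavyai as the installation path (the path is an assumption; any directory owned by the heavyai user works):

```bash
sudo mkdir -p /opt/heavyai
sudo chown -R heavyai:heavyai /opt/heavyai
```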
Download the archive and install the software. A different archive is downloaded depending on the Edition (Enterprise, Free, or Open Source) and the device used for runtime (GPU or CPU).
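A sketch with a placeholder URL; substitute the download link for your edition and device:

```bash
wget <heavyai-archive-url>
# Archive layout may differ; adjust --strip-components as needed
tar -xzf heavyai-*.tar.gz -C /opt/heavyai --strip-components=1
```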
Follow these steps to prepare your HEAVY.AI environment.
For convenience, you can update .bashrc with these environment variables.
Although this step is optional, you will find references to the HEAVYAI_BASE and HEAVYAI_PATH variables. These variables contain, respectively, the path where configuration, license, and data files are stored and the path where the software is installed. Setting them is strongly recommended.
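For example, appended to ~/.bashrc (the values below are the defaults used in this guide; adjust them if you chose different paths):

```bash
export HEAVYAI_PATH=/opt/heavyai
export HEAVYAI_BASE=/var/lib/heavyai
export PATH=$HEAVYAI_PATH/bin:$PATH
```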
Run the systemd installer to create the heavyai services and a minimal config file, and to initialize the data storage.
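A sketch of this step; the installer script name is an assumption, so check the contents of the systemd directory in your installation:

```bash
cd $HEAVYAI_PATH/systemd
sudo ./install_heavy_systemd.sh
```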
Accept the default values provided or make changes as needed.
The script creates a data directory in $HEAVYAI_BASE/storage (default /var/lib/heavyai/storage) with the directories catalogs, data, export, and log. The import directory is created the first time you insert data. If you are a HEAVY.AI administrator, the log directory is of particular interest.
Start and use HeavyDB and Heavy Immerse.
Heavy Immerse is not available in the Open Source Edition, so the systemctl command using heavy_web_server has no effect.
Enable the automatic startup of the service at reboot and start the HEAVY.AI services.
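For example (heavydb is assumed to be the database service name; heavy_web_server applies to Enterprise and Free Editions only):

```bash
sudo systemctl enable heavydb heavy_web_server
sudo systemctl start heavydb heavy_web_server
```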
If a firewall is not already installed and you want to harden your system, install the ufw package.
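For example:

```bash
sudo apt install -y ufw
```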
To use Heavy Immerse or other third-party tools, you must prepare your host machine to accept incoming HTTP(S) connections. Configure your firewall for external access.
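A minimal sketch that opens SSH and the Immerse port used in this guide (add 443 if you terminate HTTPS on this host):

```bash
sudo ufw allow ssh
sudo ufw allow 6273/tcp
sudo ufw enable
```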
Most cloud providers use a different mechanism for firewall configuration. The commands above might not run in cloud deployments.
For more information, see https://help.ubuntu.com/lts/serverguide/firewall.html.
If you are using Enterprise or Free Edition, you need to validate your HEAVY.AI instance with your license key.
Skip this section if you are on Open Source Edition.
Copy your Enterprise or Free Edition license key from the registration email message. If you do not have a license and you want to evaluate HEAVY.AI in an unlimited enterprise environment, contact your Sales Representative or register for your 30-day trial of Enterprise Edition here. If you need a Free license, you can get one here.
Connect to Heavy Immerse using a web browser connected to your host machine on port 6273. For example, http://heavyai.mycompany.com:6273.
When prompted, paste your license key in the text box and click Apply.
Log into Heavy Immerse by entering the default username (admin) and password (HyperInteractive), and then click Connect.
To verify that everything is working, load some sample data, perform a heavysql query, and generate a Pointmap using Heavy Immerse.
HEAVY.AI ships with two sample datasets: airline flight information collected in 2008, and a census of New York City trees. To install the sample data, run the following command.
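A sketch of this step; the loader script name is an assumption, so check your installation directory:

```bash
cd $HEAVYAI_PATH
sudo ./insert_sample_data
```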
Connect to HeavyDB by entering the following command in a terminal on the host machine (default password is HyperInteractive):
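For example:

```bash
$HEAVYAI_PATH/bin/heavysql -p HyperInteractive
```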
Enter a SQL query such as the following:
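An illustrative query against the sample flights table; the column names are assumptions based on the sample dataset:

```sql
SELECT origin_city AS "Origin",
       dest_city AS "Destination",
       AVG(airtime) AS "Average Airtime"
FROM flights_2008_10k
WHERE distance < 175
GROUP BY origin_city, dest_city;
```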
The results should be similar to the results below.
After installing Enterprise or Free Edition, check if Heavy Immerse is running as intended.
Connect to Heavy Immerse using a web browser connected to your host machine on port 6273. For example, http://heavyai.mycompany.com:6273.
Log into Heavy Immerse by entering the default username (admin) and password (HyperInteractive), and then click Connect.
Create a new dashboard and a Scatter Plot to verify that backend rendering is working.
Click New Dashboard.
Click Add Chart.
Click SCATTER.
Click Add Data Source.
Choose the flights_2008_10k table as the data source.
Click X Axis +Add Measure.
Choose depdelay.
Click Y Axis +Add Measure.
Choose arrdelay.
Click Size +Add Measure.
Choose airtime.
Click Color +Add Measure.
Choose dest_state.
The resulting chart shows, unsurprisingly, that there is a correlation between departure delay and arrival delay.
Create a new dashboard and a Bubble chart to verify that Heavy Immerse is working.
Click New Dashboard.
Click Add Chart.
Click Bubble.
Click Select Data Source.
Choose the flights_2008_10k table as the data source.
Click Add Dimension.
Choose carrier_name.
Click Add Measure.
Choose depdelay.
Click Add Measure.
Choose arrdelay.
Click Add Measure.
Choose #Records.
The resulting chart shows, unsurprisingly, that average departure delay is also correlated with average arrival delay, and that there are notable differences between carriers.
Upgrade the system and the kernel, then reboot the machine if needed.
Install kernel headers and development packages.
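For example:

```bash
# Headers matching the running kernel, plus basic build tools
sudo apt install -y linux-headers-$(uname -r) build-essential
```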
Install the extra packages.
The rendering engine of HEAVY.AI (present in Enterprise Editions) requires a Vulkan-enabled driver and the Vulkan library. Without these components, the database itself may not be able to start.
Install the Vulkan library and its dependencies using apt.
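For example (libvulkan1 is the Ubuntu package providing the Vulkan loader):

```bash
sudo apt install -y libvulkan1
```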
For more information about troubleshooting Vulkan, see the Vulkan Renderer section.
Installing NVIDIA drivers with support for the CUDA platform is required to run GPU-enabled versions of HEAVY.AI.
You can install NVIDIA drivers in multiple ways; three available options are outlined below. If you would prefer not to decide, we recommend Option 1.
Option 1: Install NVIDIA drivers with CUDA toolkit from NVIDIA Website
Option 2: Install NVIDIA drivers via .run file using the NVIDIA Website
Option 3: Install NVIDIA drivers using APT package manager
Keep a record of the installation method used, because upgrading NVIDIA drivers requires using the same method for successful results.
CUDA is a parallel computing platform and application programming interface (API) model. It uses a CUDA-enabled graphics processing unit (GPU) for general-purpose processing. The CUDA platform provides direct access to the GPU virtual instruction set and parallel computation elements. For more information on CUDA unrelated to installing HEAVY.AI, see https://developer.nvidia.com/cuda-zone.
The CUDA Toolkit from NVIDIA provides everything you need to develop GPU-accelerated applications. The CUDA Toolkit includes GPU-accelerated libraries, a compiler, development tools, and the CUDA runtime. The CUDA Toolkit is not required to run HEAVY.AI, but you must install it if you use advanced features like C++ User-Defined Functions or User-Defined Table Functions to extend the database capabilities.
Open https://developer.nvidia.com/cuda-toolkit-archive and select the desired CUDA Toolkit version to install.
The minimum CUDA version supported by HEAVY.AI is 11.4. We recommend using a release that has been available for at least two months.
In the "Target Platform" section, follow these steps:
For "Operating System" select Linux
For Architecture" select x86_64
For "Distribution" select Ubuntu
For "Version" select the version of your operating system (18.04 or 20.04)
For "Installer Type" choose deb (network) **
One by one, run the presented commands in the Installer Instructions section on your server.
** You may optionally use any of the "Installer Type" options available.
If you choose to use the .run file option, prior to running the installer you will need to manually install build-essential using apt and change the permissions of the downloaded .run file to allow execution.
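A sketch of these preparation steps; the .run file name is a placeholder:

```bash
sudo apt install -y build-essential
chmod +x cuda_<version>_linux.run
```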
Install the CUDA package for your platform and operating system according to the instructions on the NVIDIA website (https://www.nvidia.com/download/index.aspx).
If you don't know the exact GPU model in your system, run this command:
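One common way to identify the installed NVIDIA GPU, assuming the pciutils package is present:

```bash
lspci | grep -i nvidia
```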
You'll get output showing the Product Type, Series, and Model.
In this example, the Product Type is Tesla, the Series is T (as in Turing), and the Model is T4.
Select the Product Type that matches the output of the command.
Select the correct Product Series and Product Type for your installation.
In the Operating System dropdown list, select Linux 64-bit.
In the CUDA Toolkit dropdown list, click a supported version (11.4 or higher).
Click Search.
On the resulting page, verify the download information and click Download.
On the subsequent page, if you agree to the terms, right click on "Agree and Download" and select "Copy Link Address". You may also manually download and transfer to your server, skipping the next step.
On your server, type wget and paste the URL you copied in the previous step. Press Enter to download.
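For example, with a placeholder for the copied URL:

```bash
wget <driver-download-url>
```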
Check that the driver version you are downloading meets the HEAVY.AI minimum requirements.
Install the tools needed for installation.
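For example (build-essential is a typical prerequisite for the NVIDIA .run installer; exact requirements may vary):

```bash
sudo apt install -y build-essential
```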
Change the permissions of the downloaded .run file to allow execution, and run the installation.
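A sketch with a placeholder file name; use the .run file you downloaded:

```bash
chmod +x NVIDIA-Linux-x86_64-<version>.run
sudo ./NVIDIA-Linux-x86_64-<version>.run
```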
Install a specific version of the driver for your GPU by installing the NVIDIA repository and using the apt package manager.
Be careful when choosing the driver version to install. Ensure that your GPU model is supported and that it meets the HEAVY.AI minimum requirements.
Run the following command to get a list of the available driver versions:
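One way to list the NVIDIA driver packages available from the configured repositories:

```bash
apt-cache search --names-only '^nvidia-driver-'
```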
Install the driver version needed with apt:
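For example, with a placeholder version number taken from the list above:

```bash
sudo apt install -y nvidia-driver-<version>
```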
Reboot your system to ensure the new version of the driver is loaded.
Run nvidia-smi to verify that your drivers are installed correctly and recognize the GPUs in your environment. Depending on your environment, you should see something like this to confirm that your NVIDIA GPUs and drivers are present.
If you see an error like the following, the NVIDIA drivers are probably installed incorrectly:
Review the installation instructions, specifically checking for completion of install prerequisites, and correct any errors.
The rendering engine of HEAVY.AI requires a Vulkan-enabled driver and the Vulkan library. Without these components, the database itself cannot start unless the back-end renderer is disabled.
Install the Vulkan library and its dependencies using apt.
For more information about troubleshooting Vulkan, see the Vulkan Renderer section.
You must install the CUDA Toolkit and Clang if you use advanced features like C++ User-Defined Functions or User-Defined Table Functions to extend the database capabilities.
If you installed NVIDIA drivers using Option 1 above, the CUDA toolkit is already installed; you may proceed to the verification step below.
Install the NVIDIA public repository GPG key.
Add the repository.
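One way to accomplish both of the preceding steps, based on NVIDIA's network-repository instructions (the URL is an assumption for Ubuntu 20.04 x86_64; substitute ubuntu1804 for 18.04):

```bash
# The cuda-keyring package registers both the NVIDIA signing key and the repository
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt update
```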
List the available CUDA Toolkit versions.
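For example:

```bash
apt-cache search --names-only '^cuda-toolkit-'
```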
Install the CUDA Toolkit using apt.
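For example, with a placeholder version; choose a release that is 11.4 or newer:

```bash
sudo apt install -y cuda-toolkit-<version>
```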
Check that everything is working and the toolkit has been installed.
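For example (nvcc may not be on your PATH by default, so call it from the CUDA install directory):

```bash
/usr/local/cuda/bin/nvcc --version
```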
You must install Clang if you use advanced features like C++ User-Defined Functions or User-Defined Table Functions to extend the database capabilities. Install Clang and LLVM dependencies using apt.
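For example (your HEAVY.AI release may require specific Clang/LLVM versions; check the release notes):

```bash
sudo apt install -y clang llvm
```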
Check that the software is installed and in the execution path.
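For example:

```bash
which clang && clang --version
```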
For more information, see C++ User-Defined Functions.