In this section, you will find recipes to install the HEAVY.AI platform and NVIDIA drivers using a package manager such as yum, or from a tarball.
Install the Extra Packages for Enterprise Linux (EPEL) repository and other packages before installing NVIDIA drivers.
For CentOS, use yum to install the epel-release package.
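For example:
sudo yum install epel-release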
Use the following install command for RHEL.
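A sketch for RHEL 7, where the EPEL release package is typically installed directly from the Fedora Project repository:
sudo yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm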
RHEL-based distributions require Dynamic Kernel Module Support (DKMS) to build the GPU driver kernel modules. For more information, see https://fedoraproject.org/wiki/EPEL. Upgrade the kernel and restart the machine.
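For example:
sudo yum update kernel
sudo reboot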
Install kernel headers and development packages:
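A typical form of the command, assuming the packages match the running kernel:
sudo yum install kernel-devel kernel-headers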
If installing kernel headers does not work correctly, follow these steps instead:
Identify the Linux kernel you are using by issuing the uname -r command.
Use the name of the kernel (3.10.0-862.11.6.el7.x86_64 in the following code example) to install kernel headers and development packages:
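A sketch of the command; the package names follow the kernel-devel-&lt;release&gt; and kernel-headers-&lt;release&gt; pattern, so you can also substitute $(uname -r) for the explicit version string:
sudo yum install kernel-devel-3.10.0-862.11.6.el7.x86_64 kernel-headers-3.10.0-862.11.6.el7.x86_64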
Install the dependencies and extra packages:
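The exact list depends on your system; a commonly used set for building and installing GPU drivers (an assumption, adjust as needed) is:
sudo yum install gcc make dkms pciutils wget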
CUDA is a parallel computing platform and application programming interface (API) model. It uses a CUDA-enabled graphics processing unit (GPU) for general-purpose processing. The CUDA platform provides direct access to the GPU virtual instruction set and parallel computation elements. For more information on CUDA unrelated to installing HEAVY.AI, see https://developer.nvidia.com/cuda-zone. You can install drivers in multiple ways. This section provides installation information using the NVIDIA website or using yum.
Although using the NVIDIA website is more time consuming and less automated, you are assured that the driver is certified for your GPU. Use this method if you are not sure which driver to install. If you prefer a more automated method and are confident that the driver is certified, you can use the package-manager method.
Install the CUDA package for your platform and operating system according to the instructions on the NVIDIA website (https://developer.nvidia.com/cuda-downloads).
If you do not know the GPU model installed on your system, run this command:
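One common way to list NVIDIA devices is lspci (the grep pattern is illustrative):
lspci | grep -i nvidia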
The output shows the product type, series, and model. In this example, the product type is Tesla, the series is T (as in Turing), and the model is T4.
Select the product type shown after running the command above.
Select the correct product series and model for your installation.
In the Operating System dropdown list, select Linux 64-bit.
In the CUDA Toolkit dropdown list, click a supported version (11.4 or higher).
Click Search.
On the resulting page, verify the download information and click Download.
Check that the driver version you are downloading meets the HEAVY.AI minimum requirements.
Move the downloaded file to the server, change the permissions, and run the installation.
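For example, assuming the downloaded installer is named NVIDIA-Linux-x86_64-&lt;version&gt;.run (the exact file name varies by driver version):
chmod +x NVIDIA-Linux-x86_64-*.run
sudo ./NVIDIA-Linux-x86_64-*.run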
You might receive the following error during installation:
ERROR: The Nouveau kernel driver is currently in use by your system. This driver is incompatible with the NVIDIA driver, and must be disabled before proceeding. Please consult the NVIDIA driver README and your Linux distribution's documentation for details on how to correctly disable the Nouveau kernel driver.
If you receive this error, blacklist the Nouveau driver by editing the /etc/modprobe.d/blacklist-nouveau.conf file, adding the following lines at the end:
blacklist nouveau
blacklist lbm-nouveau
options nouveau modeset=0
alias nouveau off
alias lbm-nouveau off
Install a specific version of the driver for your GPU by installing the NVIDIA repository and using the yum package manager.
When installing the driver, ensure that your GPU model is supported and meets the HEAVY.AI minimum requirements.
Add the NVIDIA network repository to your system.
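One way to add the repository, assuming a CentOS/RHEL 7 x86_64 system and the yum-config-manager utility (from yum-utils):
sudo yum-config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel7/x86_64/cuda-rhel7.repo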
List the drivers available for download.
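For example, querying the repository added in the previous step (the package name pattern may vary between repository versions):
yum list available "nvidia-driver*"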
Install the driver version needed with yum.
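A sketch, assuming the nvidia-driver-latest-dkms package name used by the NVIDIA repository; substitute the branch or version you need:
sudo yum install nvidia-driver-latest-dkms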
Reboot your system to ensure that the new version of the driver is loaded.
Run nvidia-smi to verify that your drivers are installed correctly and recognize the GPUs in your environment. Depending on your environment, you should see something like this to confirm that your NVIDIA GPUs and drivers are present:
If you see an error like the following, the NVIDIA drivers are probably installed incorrectly:
Review the Install NVIDIA Drivers section and correct any errors.
To work correctly, the back-end renderer requires a Vulkan-enabled driver and the Vulkan library. Without these components, the database cannot start unless the back-end renderer is disabled.
Install the Vulkan library and its dependencies using yum on both CentOS and RHEL.
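A sketch, assuming the vulkan package name used on CentOS/RHEL 7:
sudo yum install vulkan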
If installing on RHEL, you must obtain and install the vulkan-filesystem package manually. Perform these additional steps:
Download the rpm file
Install the rpm file
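A sketch of the two steps, with a placeholder URL and file name; the actual rpm location depends on the mirror and version you choose:
wget &lt;url-to&gt;/vulkan-filesystem-&lt;version&gt;.noarch.rpm
sudo yum localinstall vulkan-filesystem-&lt;version&gt;.noarch.rpm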
You might see a warning similar to the following:
Ignore it for now; you can verify the NVIDIA driver installation as described in the driver verification step above.
For more information about troubleshooting Vulkan, see the Vulkan Renderer section.
You must install the CUDA Toolkit if you use advanced features like C++ User-Defined Functions or User-Defined Table Functions to extend the database capabilities.
1. Add the NVIDIA network repository to your system:
2. List the available CUDA Toolkit versions:
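For example (package naming in NVIDIA's repository follows the cuda-toolkit-&lt;major&gt;-&lt;minor&gt; convention):
yum list available "cuda-toolkit-*"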
3. Install the CUDA Toolkit using yum:
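A sketch, assuming CUDA 11.4; pick the version listed in the previous step:
sudo yum install cuda-toolkit-11-4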
4. Check that everything is working correctly:
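One simple check is to query the compiler version; the toolkit installs under /usr/local/cuda-&lt;version&gt;, usually with a /usr/local/cuda symlink:
/usr/local/cuda/bin/nvcc --version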
This is an end-to-end recipe for installing HEAVY.AI on a CentOS/RHEL 7 machine using CPU and GPU devices.
The order of these instructions is significant. To avoid problems, install each component in the order presented.
These instructions assume the following:
You are installing on a “clean” CentOS/RHEL 7 host machine with only the operating system installed.
Your HEAVY.AI host only runs the daemons and services required to support HEAVY.AI.
Your HEAVY.AI host is connected to the Internet.
Prepare your CentOS/RHEL machine by updating your system and optionally enabling or configuring a firewall.
Update the entire system and reboot the system if needed.
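For example:
sudo yum update -y
sudo reboot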
Install the utilities needed to create HEAVY.AI repositories and download archives.
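The exact utilities vary by edition; a typical set (an assumption, adjust as needed) is:
sudo yum install yum-utils wget curl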
Open a terminal on the host machine.
Install the headless JDK using the following command:
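A sketch, assuming the OpenJDK 8 headless package available on CentOS/RHEL 7:
sudo yum install java-1.8.0-openjdk-headless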
Create a group called heavyai and a user named heavyai, who will own HEAVY.AI software and data on the file system.
You can create the group, user, and home directory using the useradd command with the --user-group and --create-home switches:
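For example:
sudo useradd --user-group --create-home heavyai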
Set a password for the user:
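For example:
sudo passwd heavyai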
Log in with the newly created user:
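For example, using su:
sudo su - heavyai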
Install HEAVY.AI using yum or a tarball.
The installation using the yum package manager is recommended to those who want a more automated install and upgrade procedure.
If your system includes NVIDIA GPUs, but the drivers are not installed, install them now.
Create a yum repository depending on the edition (Enterprise, Free, or Open Source) and execution device (GPU or CPU) you are going to use.
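A sketch of a repository definition; the baseurl shown is a placeholder, and the actual URL depends on your edition (Enterprise, Free, or Open Source) and device (GPU or CPU), so take it from your HEAVY.AI registration or documentation:
sudo tee /etc/yum.repos.d/heavyai.repo <<EOF
[heavyai]
name=heavyai
baseurl=&lt;heavyai-repository-url-for-your-edition-and-device&gt;
enabled=1
gpgcheck=1
EOF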
Add the GPG-key to the newly added repository.
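A sketch, using a placeholder location for the key:
sudo rpm --import &lt;url-or-path-to-heavyai-gpg-key&gt;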
Use yum to install the latest version of HEAVY.AI.
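Assuming the package is named heavyai (adjust if your repository uses a different package name):
sudo yum install heavyai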
If you need to install a specific version of HEAVY.AI, because you are upgrading from OmniSci or for other reasons, run the following command:
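A sketch with a hypothetical version number; replace 6.0.0 with the release you need:
sudo yum install heavyai-6.0.0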
First create the installation directory.
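For example, assuming /opt/heavyai as the installation directory, owned by the heavyai user:
sudo mkdir -p /opt/heavyai
sudo chown -R heavyai:heavyai /opt/heavyai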
Download the archive and install the latest version of the software. A different archive is downloaded depending on the edition (Enterprise, Free, or Open Source) and the device used for runtime.
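A sketch with a placeholder archive URL; the real URL depends on the edition and device:
wget &lt;url-to&gt;/heavyai-latest-Linux-x86_64.tar.gz
tar -xzf heavyai-latest-Linux-x86_64.tar.gz -C /opt/heavyai --strip-components=1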
Follow these steps to prepare your HEAVY.AI environment.
For your convenience, you can update .bashrc with these environment variables.
Although this step is optional, you will find references to the HEAVYAI_BASE and HEAVYAI_PATH variables. These variables contain, respectively, the path where configuration, license, and data files are stored and the path where the software is installed. Setting them is strongly recommended.
Run the systemd installer to initialize the HEAVY.AI services and the database storage.
Accept the default values provided or make changes as needed.
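A sketch, assuming the installer script shipped in the systemd directory of the installation (the script name can differ between releases):
cd $HEAVYAI_PATH/systemd
sudo ./install_heavy_systemd.sh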
The script creates a data directory in $HEAVYAI_BASE/storage (typically /var/lib/heavyai) with the directories catalogs, data, export, and log. The import directory is created when you insert data the first time. If you are a HeavyDB administrator, the log directory is of particular interest.
Note that Heavy Immerse is not available in the OSS Edition, so if you are running the OSS Edition, the systemctl command using heavy_web_server has no effect.
Enable the automatic startup of the service at reboot and start the HEAVY.AI services.
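A sketch, assuming the heavydb and heavy_web_server service names (heavy_web_server applies only to editions that include Heavy Immerse):
sudo systemctl enable heavydb heavy_web_server
sudo systemctl start heavydb heavy_web_server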
If a firewall is not already installed and you want to harden your system, install and start firewalld.
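For example:
sudo yum install firewalld
sudo systemctl enable --now firewalld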
To use Heavy Immerse or other third-party tools, you must prepare your host machine to accept incoming HTTP(S) connections. Configure your firewall for external access:
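A sketch that opens the default Heavy Immerse port (6273); add any other ports your deployment exposes:
sudo firewall-cmd --permanent --add-port=6273/tcp
sudo firewall-cmd --reload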
Most cloud providers use a different mechanism for firewall configuration. The commands above might not run in cloud deployments.
Open a terminal window.
Enter cd ~/ to go to your home directory.
Open .bashrc in a text editor. For example, vi .bashrc.
Edit the .bashrc file. Add the following export commands under “User specific aliases and functions.”
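A sketch of the exports, assuming the installation and storage paths used earlier in this recipe (/opt/heavyai and /var/lib/heavyai); adjust them to match your system:
export HEAVYAI_PATH=/opt/heavyai
export HEAVYAI_BASE=/var/lib/heavyai
export PATH=$HEAVYAI_PATH/bin:$PATH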
Save the .bashrc file. For example, in vi, enter [esc] :x!
Open a new terminal window to use your changes.
Connect to Heavy Immerse using a web browser connected to your host machine on port 6273. For example, http://heavyai.mycompany.com:6273.
When prompted, paste your license key in the text box and click Apply.
Log into Heavy Immerse by entering the default username (admin) and password (HyperInteractive), and then click Connect.
The $HEAVYAI_BASE directory must be dedicated to HEAVYAI; do not set it to a directory shared by other packages.
HEAVY.AI ships with two sample datasets: airline flight information collected in 2008, and a census of New York City trees. To install the sample data, run the following command.
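A sketch, assuming the insert_sample_data script shipped in the installation directory (the script name can vary by release):
cd $HEAVYAI_PATH
sudo ./insert_sample_data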
Connect to HeavyDB by entering the following command in a terminal on the host machine (the default password is HyperInteractive):
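For example, using the heavysql client from the installation directory:
$HEAVYAI_PATH/bin/heavysql -p HyperInteractive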
Enter a SQL query such as the following:
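A sketch of a query against the flights_2008_10k sample table, using columns referenced later in this recipe (carrier_name, depdelay, arrdelay):
SELECT carrier_name,
       AVG(depdelay) AS avg_departure_delay,
       AVG(arrdelay) AS avg_arrival_delay,
       COUNT(*) AS num_flights
FROM flights_2008_10k
GROUP BY carrier_name
ORDER BY avg_arrival_delay DESC
LIMIT 10;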
The results should be similar to the results below.
After installing Enterprise or Free Edition, check if Heavy Immerse is running as intended.
Connect to Heavy Immerse using a web browser connected to your host machine on port 6273. For example, http://heavyai.mycompany.com:6273.
Log into Heavy Immerse by entering the default username (admin) and password (HyperInteractive), and then click Connect.
Create a new dashboard and a Scatter Plot to verify that backend rendering is working.
Click New Dashboard.
Click Add Chart.
Click SCATTER.
Click Add Data Source.
Choose the flights_2008_10k table as the data source.
Click X Axis +Add Measure.
Choose depdelay.
Click Y Axis +Add Measure.
Choose arrdelay.
Click Size +Add Measure.
Choose airtime.
Click Color +Add Measure.
Choose dest_state.
The resulting chart shows, unsurprisingly, that there is a correlation between departure delay and arrival delay.
Create a new dashboard and a Bubble chart to verify that Heavy Immerse is working.
Click New Dashboard.
Click Add Chart.
Click Bubble.
Click Select Data Source.
Choose the flights_2008_10k table as the data source.
Click Add Dimension.
Choose carrier_name.
Click Add Measure.
Choose depdelay.
Click Add Measure.
Choose arrdelay.
Click Add Measure.
Choose #Records.
The resulting chart shows, unsurprisingly, that the average departure delay is also correlated with the average arrival delay, while there are noticeable differences between carriers.
Follow these instructions to install a headless JDK and configure an environment variable with a path to the library. The “headless” Java Development Kit does not provide support for keyboard, mouse, or display systems. It has fewer dependencies and is best suited for a server host.
Start and use HeavyDB and Heavy Immerse.
If you are on Enterprise or Free Edition, you need to validate your HEAVY.AI instance with your license key. You can skip this section if you are using Open Source Edition.
Copy your license key from the registration email message. If you have not received your license key, contact your Sales Representative or register for your 30-day trial.
To verify that everything is working, load some sample data, perform a heavysql query, and generate a Pointmap using Heavy Immerse.