Install the Extra Packages for Enterprise Linux (EPEL) repository and other packages before installing NVIDIA drivers. RHEL-based distributions require Dynamic Kernel Module Support (DKMS), provided by EPEL, to build the GPU driver kernel modules. For more information, see https://fedoraproject.org/wiki/EPEL. Then upgrade the kernel and restart the machine.
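The exact commands depend on your distribution; a minimal sketch for Rocky Linux 8 (on RHEL, you may instead need to install the epel-release RPM from the Fedora project) might look like this:
sudo dnf install -y epel-release   # EPEL repository, which provides dkms
sudo dnf install -y dkms
sudo dnf update -y kernel          # upgrade the kernel
sudo reboot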
Install kernel headers and development packages:
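A typical form of this step, assuming packages matching the running kernel are available in the enabled repositories:
sudo dnf install -y kernel-devel kernel-headers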
If installing kernel headers does not work correctly, follow these steps instead:
Identify the Linux kernel you are using by issuing the uname -r command.
Use the name of the kernel (4.18.0-553.el8_10.x86_64 in the following code example) to install kernel headers and development packages:
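For example, with the kernel version above (substitute the output of uname -r on your system):
sudo dnf install -y kernel-devel-4.18.0-553.el8_10.x86_64 kernel-headers-4.18.0-553.el8_10.x86_64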
Install the dependencies and extra packages:
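The exact package list varies by release; a representative set of build and runtime dependencies for the NVIDIA driver (an assumption, not an official list) is:
sudo dnf install -y gcc make acpid pkgconfig libglvnd-devel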
CUDA is a parallel computing platform and application programming interface (API) model. It uses a CUDA-enabled graphics processing unit (GPU) for general-purpose processing. The CUDA platform provides direct access to the GPU virtual instruction set and parallel computation elements. For more information on CUDA unrelated to installing HEAVY.AI, see https://developer.nvidia.com/cuda-zone. You can install drivers in multiple ways. This section provides installation information using the NVIDIA website or using dnf.
Although using the NVIDIA website is more time-consuming and less automated, you are assured that the driver is certified for your GPU. Use this method if you are not sure which driver to install. If you prefer a more automated method and are confident that the driver is certified, you can use the DNF package manager method.
Install the CUDA package for your platform and operating system according to the instructions on the NVIDIA website (https://developer.nvidia.com/cuda-downloads).
If you do not know the GPU model installed on your system, run this command:
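For example, lspci can report the installed NVIDIA devices (assuming the pciutils package is installed):
lspci | grep -i nvidia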
The output shows the product type, series, and model. In this example, the product type is Tesla, the series is T (for Turing), and the model is T4.
Select the product type shown after running the command above.
Select the correct product series and model for your installation.
In the Operating System dropdown list, select Linux 64-bit.
In the CUDA Toolkit dropdown list, click a supported version (11.4 or higher).
Click Search.
On the resulting page, verify the download information and click Download.
Check that the driver version you download meets the HEAVY.AI minimum requirements.
Move the downloaded file to the server, change the permissions, and run the installation.
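For example, assuming the downloaded runfile is named NVIDIA-Linux-x86_64-<version>.run (the actual file name depends on the driver version you selected):
chmod +x NVIDIA-Linux-x86_64-<version>.run
sudo ./NVIDIA-Linux-x86_64-<version>.run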
You might receive the following error during installation:
ERROR: The Nouveau kernel driver is currently in use by your system. This driver is incompatible with the NVIDIA driver, and must be disabled before proceeding. Please consult the NVIDIA driver README and your Linux distribution's documentation for details on how to correctly disable the Nouveau kernel driver.
If you receive this error, blacklist the Nouveau driver by editing the /etc/modprobe.d/blacklist-nouveau.conf file and adding the following lines at the end:
blacklist nouveau
blacklist lbm-nouveau
options nouveau modeset=0
alias nouveau off
alias lbm-nouveau off
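After editing the file, the change usually takes effect only once the initramfs is rebuilt and the system is rebooted; a common way to do this (an additional step, not part of the original instructions) is:
sudo dracut --force
sudo reboot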
Install a specific version of the driver for your GPU by installing the NVIDIA repository and using the DNF package manager. When installing the driver, ensure your GPU model is supported and meets the HEAVY.AI minimum requirements.
Add the NVIDIA network repository to your system.
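For Rocky Linux / RHEL 8 this is typically done with dnf config-manager and NVIDIA's CUDA repository (adjust rhel8 to rhel9 if you are on a 9.x release):
sudo dnf config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel8/x86_64/cuda-rhel8.repo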
Install the driver version needed with dnf. For HEAVY.AI 8.0, the minimum driver version is 535.
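With the NVIDIA repository added, a module stream can be used to pin the driver branch; for example, assuming the DKMS variant of the 535 branch is what you want:
sudo dnf module install -y nvidia-driver:535-dkms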
To load the installed driver, run the sudo modprobe nvidia or nvidia-smi command. In the case of a driver upgrade, reboot your system with sudo reboot to ensure that the new version of the driver is loaded.
Run nvidia-smi to verify that your drivers are installed correctly and recognize the GPUs in your environment. Depending on your environment, you should see output confirming the presence of your NVIDIA GPUs and drivers. This verification step ensures that your system can identify and use the GPUs as intended.
If you encounter an error similar to the following, the NVIDIA drivers are likely installed incorrectly:
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Please ensure that the latest NVIDIA driver is installed and running.
Please review the Install NVIDIA Drivers section and correct any errors.
The back-end renderer requires a Vulkan-enabled driver and the Vulkan library to work correctly. Without these components, the database cannot start unless the back-end renderer is disabled.
To ensure the Vulkan library and its dependencies are installed, use DNF.
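The package name can differ between releases; on Rocky Linux / RHEL 8 the loader is commonly packaged as vulkan-loader (an assumption to verify on your system):
sudo dnf install -y vulkan-loader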
For more information about troubleshooting Vulkan, see the Vulkan Renderer section.
You must install the CUDA Toolkit if you use advanced features like C++ User-Defined Functions or User-Defined Table Functions to extend the database capabilities.
1. Add the NVIDIA network repository to your system.
2. List the available CUDA Toolkit versions using dnf list.
3. Install the chosen CUDA Toolkit version using DNF.
4. Check that everything is working correctly.
Example commands for these steps are shown below.
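A minimal sketch of these steps, assuming the NVIDIA cuda-rhel8 repository and a CUDA 12.x toolkit package; take the exact package name and version from the dnf list output, and adjust the nvcc path to the version you installed:
sudo dnf config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel8/x86_64/cuda-rhel8.repo
sudo dnf list --showduplicates 'cuda-toolkit*'   # list available toolkit versions
sudo dnf install -y cuda-toolkit-12-2            # install the chosen version (example)
/usr/local/cuda-12.2/bin/nvcc --version          # verify the toolkit installation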
In this section, you will find a recipe to install the HEAVY.AI platform on Red Hat and derivatives such as Rocky Linux.
This is an end-to-end recipe for installing HEAVY.AI on a Red Hat Enterprise Linux 8.x machine using CPU and GPU devices.
The order of these instructions is significant. To avoid problems, install each component in the order presented.
The same instructions can be used to install on Rocky Linux / RHEL 9, with some minor modifications.
These instructions assume the following:
You are installing on a "clean" Rocky Linux / RHEL 8 host machine with only the operating system installed.
Your HEAVY.AI host only runs the daemons and services required to support HEAVY.AI.
Your HEAVY.AI host is connected to the Internet.
Prepare your machine by updating your system and optionally enabling or configuring a firewall.
Update the entire system and reboot the system if needed.
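For example:
sudo dnf update -y
sudo reboot        # if a new kernel or core packages were installed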
Install the utilities needed to create HEAVY.AI repositories and download installation binaries.
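The required utilities are minimal; a reasonable assumption is curl (or wget) for downloads and dnf-plugins-core for managing repositories:
sudo dnf install -y curl dnf-plugins-core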
Follow these instructions to install a headless JDK and configure an environment variable with a path to the library. The “headless” Java Development Kit does not provide support for keyboard, mouse, or display systems. It has fewer dependencies and is best suited for a server host. For more information, see https://openjdk.java.net.
Open a terminal on the host machine.
Install the headless JDK using the following command:
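A sketch of this step, assuming the OpenJDK 8 headless package satisfies the requirement (a newer headless OpenJDK may also work):
sudo dnf install -y java-1.8.0-openjdk-headless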
Create a group called heavyai and a user named heavyai, who will own HEAVY.AI software and data on the file system.
You can create the group, user, and home directory using the useradd command with the --user-group and --create-home switches:
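For example:
sudo useradd --user-group --create-home heavyai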
Set a password for the user using the passwd command.
Log in with the newly created user.
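For example:
sudo passwd heavyai
sudo su - heavyai      # or log in again as the heavyai user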
There are two ways to install the HEAVY.AI software:
DNF Installation: DNF searches the configured repositories for the desired package, then installs it along with its dependencies. This is a convenient and efficient way to manage software installations on your system.
Tarball Installation: Installing via a tarball involves downloading a compressed archive (tarball) from the software's official source, extracting its contents, and following the installation instructions provided by the software developers. This method allows for manual installation and customization of the software.
Using the DNF package manager for installation is highly recommended due to its ability to handle dependencies and streamline the installation process, making it a preferred choice for many users.
If your system includes NVIDIA GPUs but the drivers are not installed, it is advisable to install them before proceeding with the suite installation.
See Install NVIDIA Drivers and Vulkan on Rocky Linux and RHEL for details.
Create a DNF repository depending on the edition (Enterprise, Free, or Open Source) and execution device (GPU or CPU) you will use.
Add the GPG key for the newly added repository.
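A sketch of these two steps. The repository file below is only a template: the baseurl and gpgkey values are placeholders, and the correct URLs for your edition (Enterprise, Free, or Open Source) and device (GPU or CPU) come from the HEAVY.AI documentation or your registration email.
sudo tee /etc/yum.repos.d/heavyai.repo <<EOF
[heavyai]
name=heavyai
baseurl=<edition-and-device-specific-repository-URL>
enabled=1
gpgcheck=1
gpgkey=<HEAVY.AI-GPG-key-URL>
EOF
With gpgcheck enabled, the key referenced by gpgkey is imported the first time the repository is used.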
Use DNF to install the latest version of HEAVY.AI.
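Assuming the repository configured above and the package name heavyai (the name used later in this guide):
sudo dnf install -y heavyai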
You can use the DNF package manager to list the available packages when you need to install a specific version of HEAVY.AI, for example during a multistep upgrade or for any other reason.
sudo dnf --showduplicates list heavyai
Select the version needed from the list (for example, 7.0.0) and install it using the following command:
sudo dnf install heavyai-7.0.0_20230501_be4f51b048-1.x86_64
Let's begin by creating the installation directory.
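A sketch, assuming /opt/heavyai as the installation directory and the heavyai user created earlier as its owner:
sudo mkdir -p /opt/heavyai
sudo chown heavyai:heavyai /opt/heavyai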
Download the archive and install the latest version of the software. The appropriate archive is downloaded based on the edition (Enterprise, Free, or Open Source) and the device used for runtime.
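The download URL below is a placeholder; the actual archive URL for your edition and device comes from HEAVY.AI. A sketch of the download-and-extract step:
cd /opt/heavyai
wget <archive-URL-for-your-edition-and-device> -O heavyai-latest.tar.gz
tar -xzf heavyai-latest.tar.gz --strip-components=1   # assumes a single top-level directory in the archive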
Follow these steps to configure your HEAVY.AI environment.
For your convenience, you can update .bashrc with these environment variables
Although this step is optional, you will find references to the HEAVYAI_BASE and HEAVYAI_PATH variables. These variables contain the paths where configuration, license, and data files are stored and the location of the software installation. It is strongly recommended that you set them up.
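A sketch of the .bashrc additions, assuming the software is installed in /opt/heavyai and storage under /var/lib/heavyai (the typical location mentioned below):
export HEAVYAI_PATH=/opt/heavyai        # software installation
export HEAVYAI_BASE=/var/lib/heavyai    # configuration, license, and data files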
Run the initialization script, located in the systemd folder of the installation, to set up the HEAVY.AI services and database storage.
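Assuming the script name install_heavy_systemd.sh (check the systemd folder of your installation for the exact name):
cd $HEAVYAI_PATH/systemd
./install_heavy_systemd.sh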
Accept the default values provided or make changes as needed.
This step will take a few minutes if you are installing a CUDA-enabled version of the software because the shaders must be compiled.
The script creates a data directory in $HEAVYAI_BASE/storage (typically /var/lib/heavyai) with the directories catalogs, data, and log, which contain the metadata, the data of the database tables, and the log files from Immerse's web server and the database.
The log folder is particularly important for database administrators. It contains data about the system's health, performance, and user activities.
The first step to activate the system is starting HeavyDB and the Web Server service that Heavy Immerse needs.
Heavy Immerse is not available in the OS Edition.
Start the HEAVY.AI services and enable them to start automatically at reboot.
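A sketch, assuming the systemd unit names heavydb and heavy_web_server created by the initialization script (verify the exact names with systemctl list-unit-files):
sudo systemctl enable --now heavydb
sudo systemctl enable --now heavy_web_server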
If a firewall is not already installed and you want to harden your system, install and start firewalld.
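For example:
sudo dnf install -y firewalld
sudo systemctl enable --now firewalld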
To use Heavy Immerse or other third-party tools, you must prepare your host machine to accept incoming HTTP(S) connections. Configure your firewall for external access:
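A sketch that opens the Heavy Immerse port used later in this guide (6273); add any other ports your deployment exposes:
sudo firewall-cmd --permanent --add-port=6273/tcp
sudo firewall-cmd --reload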
Most cloud providers use a different mechanism for firewall configuration. The commands above might not run in cloud deployments.
For more information, see https://fedoraproject.org/wiki/Firewalld?rd=FirewallD.
If you are on Enterprise or Free Edition, you need to validate your HEAVY.AI instance with your license key. You can skip this section if you are using Open Source Edition.
Copy your license key from the registration email message. If you have not received your license key, contact your Sales Representative or register for your 30-day trial here.
Connect to Heavy Immerse using a web browser connected to your host machine on port 6273. For example, http://heavyai.mycompany.com:6273.
When prompted, paste your license key in the text box and click Apply.
Log into Heavy Immerse by entering the default username (admin) and password (HyperInteractive), and then click Connect.
The $HEAVYAI_BASE directory must be dedicated to HEAVY.AI; do not set it to a directory shared by other packages.
To verify that everything is working, load some sample data, perform a heavysql query, and create charts using Heavy Immerse.
HEAVY.AI ships with two sample datasets: airline flight information collected in 2008, and a census of New York City trees. To install the sample data, run the following command.
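Assuming the insert_sample_data script shipped in the installation directory (the script name and location may differ between releases):
cd $HEAVYAI_PATH
./insert_sample_data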
Connect to HeavyDB by entering the following command in a terminal on the host machine (default password is HyperInteractive):
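For example, assuming the default database name heavyai and the admin user:
$HEAVYAI_PATH/bin/heavysql heavyai -u admin -p HyperInteractive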
Enter a SQL query such as the following:
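An illustrative query against the flights_2008_10k sample table, using only columns referenced elsewhere in this guide (carrier_name, depdelay, arrdelay):
SELECT carrier_name,
       AVG(depdelay) AS avg_departure_delay,
       AVG(arrdelay) AS avg_arrival_delay,
       COUNT(*) AS num_flights
FROM flights_2008_10k
GROUP BY carrier_name
ORDER BY num_flights DESC
LIMIT 10;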
The results should be similar to the results below.
After installing Enterprise or Free Edition, check if Heavy Immerse is running as intended.
Connect to Heavy Immerse using a web browser connected to your host machine on port 6273. For example, http://heavyai.mycompany.com:6273.
Log into Heavy Immerse by entering the default username (admin) and password (HyperInteractive), and then click Connect.
Create a new dashboard and a Scatter Plot to verify that backend rendering is working.
Click New Dashboard.
Click Add Chart.
Click SCATTER.
Click Add Data Source.
Choose the flights_2008_10k table as the data source.
Click X Axis +Add Measure.
Choose depdelay.
Click Y Axis +Add Measure.
Choose arrdelay.
Click Size +Add Measure.
Choose airtime.
Click Color +Add Measure.
Choose dest_state.
The resulting chart clearly shows a direct correlation between departure delay and arrival delay. This insight can help identify areas for improvement and strategies to minimize delays and improve overall efficiency.
Create a new dashboard and a Bubble chart to verify that Heavy Immerse is working.
Click New Dashboard.
Click Add Chart.
Click Bubble.
Click Select Data Source.
Choose the flights_2008_10k table as the data source.
Click Add Dimension.
Choose carrier_name.
Click Add Measure.
Choose depdelay.
Click Add Measure.
Choose arrdelay.
Click Add Measure.
Choose #Records.
The resulting chart shows, unsurprisingly, that the average departure delay is also correlated with the average arrival delay, while there are notable differences between carriers.