Turn the watchdog off (--enable-watchdog=0). That allows the query to run in stages on the GPU. HEAVY.AI orchestrates the transfer of data through layers of abstraction and onto the GPU for execution. See Configuration Parameters.

You can also enable --allow-cpu-retry. If a query does not fit in GPU memory, it falls back and executes on the CPU. See Configuration Parameters.

If a rendering query fails, increase the --render-mem-bytes configuration flag. Try setting it to 1000000000. If that does not work, go to 2000000000.
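If you prefer to set these in the heavyai.conf file rather than on the command line, the equivalent entries might look like the following (a sketch; confirm option names and defaults in Configuration Parameters):

```
enable-watchdog = false
allow-cpu-retry = true
render-mem-bytes = 1000000000
```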
Use the heavysql command-line client, which you can find at $HEAVYAI_PATH/bin/heavysql. Start it with bin/heavysql -p HyperInteractive, where HyperInteractive is the default password.
While heavysql is running, use one of the following methods to see where your query is running:

- Prepend the EXPLAIN command to a SELECT statement to see a representation of the code that will run on the CPU or GPU. The first line is important; it shows either IR for the GPU or IR for the CPU. This is the most direct method; see the sketch after this list.
- Check the logs, stored in your HEAVY.AI data directory (by default, /var/lib/heavyai/storage), in a directory named log.
- The \memory_summary command shows how much memory is in use on the CPU and on each GPU. HEAVY.AI manages memory itself, so you will see separate columns for in use (actual memory being used) and allocated (memory assigned to heavydb, but not necessarily in use yet). Data is loaded lazily from disk, which means that you must first perform a query before the data is moved to CPU and GPU. Even then, HEAVY.AI moves only the data and columns on which you are running your queries.
- Time your queries in heavysql using \timing. Run each query a few times so that just-in-time compilation on the first run does not skew the results, then compare GPU mode (\gpu) and CPU mode (\cpu). Again, run your queries a few times.
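A sketch of the EXPLAIN and timing checks through the pymapd (now heavyai) Python client; the connection defaults, database name, and the table mytable are assumptions, and you can run the same statements directly in heavysql:

```python
import time
from pymapd import connect

# Default credentials for a local install; dbname may be
# "omnisci" on older releases.
con = connect(user="admin", password="HyperInteractive",
              host="localhost", dbname="heavyai")

# EXPLAIN: the first line of the output reads either
# "IR for the GPU" or "IR for the CPU".
plan = list(con.execute("EXPLAIN SELECT COUNT(*) FROM mytable"))
print(plan[0][0].splitlines()[0])

# Timing: run the query several times so the first run's
# just-in-time compilation does not skew the measurement.
for _ in range(3):
    start = time.perf_counter()
    list(con.execute("SELECT COUNT(*) FROM mytable"))
    print(f"{time.perf_counter() - start:.3f} s")
```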
Use the nvidia-smi command to see the GPU IDs of the GTX 760s. Most likely, the GPUs are grouped together by type. Update the heavydb config file as follows:

- If the GTX 760 GPUs are 0,1, configure heavydb with the option start-gpu=2 to use the remaining two TITAN GPUs.
- If the GTX 760 GPUs are 2,3, add the option num-gpus=2 to the config file.
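For example, the corresponding heavyai.conf entry might look like this (a sketch; verify the device order with nvidia-smi first):

```
# GTX 760s are devices 0 and 1: start at device 2 (the TITANs)
start-gpu = 2

# ...or, if the GTX 760s are devices 2 and 3, use only the first two:
# num-gpus = 2
```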
To disable the watchdog, set the --enable-watchdog switch to false on startup.

If a dashboard fails to render with an exception such as:

Exception: OutOfGpuMemoryError: Cuda error code=801 CUDA_ERROR_UNKNOWN/CUDA_ERROR_NOT_SUPPORTED, possibly not enough gpu memory available for the requested buffer size of 1000000000 bytes.

add the following line to your heavyai.conf file:

res-gpu-mem = 262144000

This configures res-gpu-mem to use 250 MB of memory. Then, restart HeavyDB to use the new configuration:

systemctl restart heavyai_server

If the exception recurs, increase res-gpu-mem by 250 MB each time until the dashboard renders correctly.
After you define a distributed cluster in the cluster.conf file, check the aggregator log file for 'Cluster file specified running as aggregator with config <>' and the leaf log file for 'String servers file specified running as dbleaf with config at << path to config >>'. If those lines are not present in the appropriate log files, there is likely an issue with the heavyai.conf server configuration for either the aggregator or the leaves.
If you run into open-file limits, note that recommended installations run HeavyDB under systemd, which avoids this issue. If you need to set the limit manually, try setting your nofile limit to 50,000 as a starting point.
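One common place to set the limit manually is /etc/security/limits.conf; a sketch, assuming the server runs as a heavyai user:

```
heavyai  soft  nofile  50000
heavyai  hard  nofile  50000
```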
If pyarrow fails when converting a large data set, set the read_csv parameter chunksize to a number of rows that totals less than 2 GB in size. This chunked approach allows pyarrow to convert the data sets without error. For example:
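A minimal sketch using pandas with the pymapd (now heavyai) client; the file name, table name, and connection details are placeholders:

```python
import pandas as pd
from pymapd import connect

con = connect(user="admin", password="HyperInteractive",
              host="localhost", dbname="heavyai")

# Read the CSV in 1,000,000-row chunks so that no single chunk
# exceeds the 2 GB conversion limit; adjust to your row width.
for chunk in pd.read_csv("myfile.csv", chunksize=1_000_000):
    con.load_table("mytable", chunk)
```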
To rebuild a table with an optimized schema:

1. Export the data from MYTABLE.
2. Create a new table named TEMPTABLE.
3. Import the data from MYFILE.
4. Verify the contents of TEMPTABLE.
5. Drop the original table and rename TEMPTABLE to MYTABLE on your HEAVY.AI instance.
You can use the heavysql \o command to output an optimized CREATE TABLE statement, based on the size of the actual data stored in your table. See heavysql.
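A sketch of steps 1-5 through the pymapd client; the paths, names, and TEMPTABLE schema are placeholders, and in practice you would take the schema from \o:

```python
from pymapd import connect

con = connect(user="admin", password="HyperInteractive",
              host="localhost", dbname="heavyai")

# 1. Export the existing table to a CSV file on the server.
con.execute("COPY (SELECT * FROM MYTABLE) TO '/tmp/MYFILE.csv'")

# 2. Create the replacement table, e.g. with the DDL suggested by \o.
con.execute("CREATE TABLE TEMPTABLE (id INTEGER, name TEXT ENCODING DICT(32))")

# 3. Import the exported data into the new table.
con.execute("COPY TEMPTABLE FROM '/tmp/MYFILE.csv'")

# 4. Verify the contents of TEMPTABLE, then 5. swap the tables.
con.execute("DROP TABLE MYTABLE")
con.execute("ALTER TABLE TEMPTABLE RENAME TO MYTABLE")
```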
Use db-query-list to provide a path to a file that contains SELECT queries you want performed at start-up. The first line of the file specifies the user and database, in the form USER [super-user-name] [database-name]. To warm the cache, use a WHERE condition that does not have any other filters. For example, in an employee database, a query to cache salary might be the following.
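A sketch of such a query file (the user, database, table, and column names are assumptions):

```
USER admin heavyai
SELECT COUNT(*) FROM employees WHERE salary > 0;
```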
The max_rows setting defines the maximum number of rows allowed in a table. When you reach the limit, the oldest fragment is removed. This can be helpful when executing operations that insert and retrieve records based on insertion order. The default value for max_rows is 2^62.
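A sketch of setting max_rows at table creation, again via the pymapd client (the table name and columns are placeholders):

```python
from pymapd import connect

con = connect(user="admin", password="HyperInteractive",
              host="localhost", dbname="heavyai")

# Keep at most ~10 million rows; once the limit is reached, the
# oldest fragment is dropped to make room for new inserts.
con.execute("CREATE TABLE events (ts TIMESTAMP, value DOUBLE) "
            "WITH (max_rows = 10000000)")
```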