Creates a table named <table>, specifying <columns> and table properties.

Supported datatypes:

BIGINT: Minimum value: -9,223,372,036,854,775,807; maximum value: 9,223,372,036,854,775,807.
BOOLEAN: TRUE values: 'true', '1', 't'. FALSE values: 'false', '0', 'f'. Text values are not case-sensitive.
DATE*: Same as DATE ENCODING DAYS(32).
DATE ENCODING DAYS(32): Range in years: +/-5,883,517 around epoch; maximum date January 1, 5885487 (approximately). Minimum value: -2,147,483,648; maximum value: 2,147,483,647. Supported formats when using COPY FROM: mm/dd/yyyy, dd-mmm-yy, yyyy-mm-dd, dd/mmm/yyyy.
DATE ENCODING DAYS(16): Range in days: -32,768 to 32,767. Range in years: +/-90 around epoch, April 14, 1880 to September 9, 2059. Minimum value: -2,831,155,200; maximum value: 2,831,068,800. Supported formats when using COPY FROM: mm/dd/yyyy, dd-mmm-yy, yyyy-mm-dd, dd/mmm/yyyy.
DATE ENCODING FIXED(32): Same as DATE ENCODING DAYS(16). Deprecated.
DATE ENCODING FIXED(16): Same as DATE ENCODING DAYS(16). Deprecated.
DECIMAL: Declared as DECIMAL(precision,scale). Storage: precision up to 4: 2 bytes; 5 to 9: 4 bytes; 10 to 18 (maximum): 8 bytes.
DOUBLE: Minimum value: -1.79 x 10^308; maximum value: 1.79 x 10^308.
FLOAT: Minimum value: -3.4 x 10^38; maximum value: 3.4 x 10^38.
INTEGER: Minimum value: -2,147,483,647; maximum value: 2,147,483,647.
SMALLINT: Minimum value: -32,767; maximum value: 32,767.
TEXT ENCODING DICT: Dictionary-encoded text (see the encoding notes below).
TEXT ENCODING NONE: None-encoded text (see the encoding notes below).
TIME: Minimum value: 00:00:00; maximum value: 23:59:59.
TIMESTAMP: Range: -30610224000 (1/1/1000 00:00:00.000) through 29379542399 (12/31/2900 23:59:59.999). Values can also be entered as strings in YYYY-MM-DD HH:MM:SS or YYYY-MM-DDTHH:MM:SS format (the T is dropped when the field is populated).
TINYINT: Minimum value: -127; maximum value: 127.

* You can create only 4-byte DATE columns (the default) and 2-byte DATE columns (see DATE ENCODING FIXED(16)).
Create a table named tweets and specify the columns, including type, in the table. You can also assign a default value to a column, such as the value San Francisco for column city; example statements for both follow the sharded example below. The following statement creates a sharded table in which column id has a default value and serves as the shard key:

CREATE TABLE tbl (id INTEGER NOT NULL DEFAULT 0, name TEXT, shard key (id)) with (shard_count = 2);
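The original statements behind the tweets and default-value examples are not shown here; the sketches below give one plausible form. All column names and types, and the table name visits, are illustrative assumptions rather than the original definitions.

CREATE TABLE tweets (tweet_id BIGINT, tweet_time TIMESTAMP, sender_name TEXT ENCODING DICT, tweet_text TEXT ENCODING DICT);

CREATE TABLE visits (id INTEGER, city TEXT DEFAULT 'San Francisco');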
Array values use the syntax ARRAY[A, B, C, ..., N]. The syntax {A, B, C, ... N} is not supported.

Text encodings (an example declaration follows this list):

DICT: Dictionary encoding (default for TEXT columns). Limit of 2 billion unique string values.
FIXED(bits): Fixed-length encoding using the specified number of bits.
NONE: No encoding, for TEXT columns. No dictionary is created. Aggregate operations are not possible on this column type.
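As an illustration of the two text encodings (the table and column names here are assumed for the example), the following statement declares one dictionary-encoded and one none-encoded text column; aggregate operations work only on the dictionary-encoded column:

CREATE TABLE notes (author TEXT ENCODING DICT, body TEXT ENCODING NONE);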
Table properties are set in a WITH clause. Supported options include the following; a combined example follows this list.

fragment_size: Number of rows per fragment.
max_rollback_epochs: Limit the number of epochs a table can be rolled back to. The following example creates the table test_table and sets the maximum epoch rollback number to 50:
CREATE TABLE test_table(a int) WITH (MAX_ROLLBACK_EPOCHS = 50);
max_rows: Set the maximum number of rows allowed in the table. When the max_rows limit is reached, the oldest fragment is removed. When populating a table from a file, make sure that your row count is below the max_rows setting. If you attempt to load more rows at one time than the max_rows setting defines, the records up to the max_rows limit are removed, leaving only the additional rows. Default: 2^62. In a distributed system, the maximum number of rows is calculated as max_rows * leaf_count. In a sharded distributed system, the maximum number of rows is calculated as max_rows * shard_count.
page_size: Page size, in bytes.
partitions: Partitioning scheme. SHARDED: Partition table using sharding. REPLICATED: Partition table using replication.
shard_count: Number of shards to create.
sort_column: Column on which to sort.

Sharding requires both a shard key and a shard_count. Set shard_count to the number of GPUs you eventually want to distribute the data table across, so that the data is shard_count-aligned.
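These options combine in a single WITH clause in the same way as the max_rollback_epochs and shard_count examples above. A minimal sketch; the table, columns, and option values are illustrative only:

CREATE TABLE events (device_id INTEGER, ts TIMESTAMP) WITH (FRAGMENT_SIZE = 32000000, MAX_ROWS = 100000000);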
CREATE TABLE AS SELECT supports the following WITH options:

fragment_size: Number of rows per fragment.
max_chunk_size: Maximum chunk size, in bytes.
max_rows: Maximum number of rows allowed in the table, as described for CREATE TABLE above. When the max_rows limit is reached, the oldest fragment is removed; when populating a table from a file, make sure that your row count is below the max_rows setting, or the records up to the max_rows limit are removed, leaving only the additional rows. Default: 2^62.
page_size: Page size, in bytes.
partitions: Partitioning scheme, SHARDED or REPLICATED, as described for CREATE TABLE above.
use_shared_dictionaries: Controls whether the new table shares string dictionaries with its source table (default: true), which increases the speed of table creation. For example:
CREATE TABLE new_table AS SELECT * FROM old_table WITH (USE_SHARED_DICTIONARIES='false');
vacuum: Formats the table to more efficiently handle DELETE requests. The only parameter available is delayed: rather than immediately removing deleted rows, vacuum marks items to be deleted, and they are removed at an optimal time. A usage sketch follows this list.
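A sketch of the delayed vacuum option applied to CREATE TABLE AS SELECT, reusing the table names from the example above and assuming the option value is passed as a quoted string like the other WITH options:

CREATE TABLE new_table AS SELECT * FROM old_table WITH (VACUUM = 'delayed');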
Create a table named newTable. Populate the table with all information from the table oldTable, effectively creating a duplicate of the original table.
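A statement of this form produces that result:

CREATE TABLE newTable AS SELECT * FROM oldTable;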
Create a table named trousers. Populate it with data from the columns name, waist, and inseam from the table wardrobe.
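A statement of this form produces that table:

CREATE TABLE trousers AS SELECT name, waist, inseam FROM wardrobe;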
Create a table named cosmos. Populate it with data from the columns star and planet from the table universe, where planet has the class M.

Create a table that assigns the default value 15 for column b. Sketches of both statements follow.
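In the sketches below, the column holding the planet class is assumed to be named class, column a is assumed for illustration, and the second statement assumes the default for column b is assigned at table creation:

CREATE TABLE cosmos AS SELECT star, planet FROM universe WHERE class = 'M';

CREATE TABLE t (a INTEGER, b INTEGER DEFAULT 15);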
. The syntax {A, B, C, ... N}
is not supported.<table>
DUMP TABLE archives table <table> to file <filepath>, optionally compressed with <compression_program>. The DUMP command is not supported on distributed configurations.
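A sketch of the command; the table name, path, and the choice of gzip as the compression program are illustrative assumptions:

DUMP TABLE tweets TO '/opt/archive/tweetsBackup.gz' WITH (COMPRESSION='gzip');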
RESTORE TABLE restores table <table> from the file at <filepath>. If you specified a compression program when you used the DUMP TABLE command, you must specify the same compression method during RESTORE. The RESTORE command is not supported on distributed configurations.
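A matching sketch of the restore, using the same assumed path and compression program as the DUMP sketch above:

RESTORE TABLE tweets FROM '/opt/archive/tweetsBackup.gz' WITH (COMPRESSION='gzip');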
Use the TRUNCATE TABLE statement to remove all rows from a table without deleting the table structure. This is typically faster than removing the rows with DELETE statements.
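For example, assuming the tweets table from earlier:

TRUNCATE TABLE tweets;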
If enable-auto-metadata-update is not set, HeavyDB does not narrow metadata during an update or delete; metadata is only widened to cover a new range.
If VALIDATE detects epoch-related issues, it returns a report describing them. VALIDATE CLUSTER also detects and reports issues related to table epochs: it reports when epochs are negative or when table epochs across leaf nodes or shards are inconsistent, and returns a report of any issues it finds. You can run VALIDATE CLUSTER with the WITH(REPAIR_TYPE) argument. (REPAIR_TYPE='NONE') is the same as running the command with no argument. (REPAIR_TYPE='REMOVE') removes any leaf objects that have issues; a sketch of this form follows. On a distributed setup, the VALIDATE CLUSTER report lists the epoch-related issues it finds.
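Based on the WITH(REPAIR_TYPE) syntax described above, the repair form of the command looks like this:

VALIDATE CLUSTER WITH (REPAIR_TYPE = 'REMOVE');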