Configuration Reference

Kinetica configuration consists of several user-modifiable system parameters present in /opt/gpudb/core/etc/gpudb.conf that are used to set up and tune the system.

Network

Configuration Parameter Default Value Description
head_ip_address 127.0.0.1 Head HTTP server IP address. Set to the publicly accessible IP address of the first process, rank0.
head_port 9191 Head HTTP server port to use for head_ip_address.
use_https false Set to true to use HTTPS; if true then https_key_file and https_cert_file must be provided
https_key_file  

Files containing the SSL private key and the SSL certificate, used when use_https is true. If required, a self-signed certificate (expires after 10 years) can be generated via the command:

openssl req -newkey rsa:2048 -new -nodes -x509 -days 3650 -keyout key.pem -out cert.pem
https_cert_file  
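
Example enabling HTTPS with a key and certificate generated as above (the file locations are illustrative, not defaults):

use_https = true
https_key_file = /opt/gpudb/core/etc/key.pem
https_cert_file = /opt/gpudb/core/etc/cert.pem
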
http_allow_origin * Value to return via Access-Control-Allow-Origin HTTP header (for Cross-Origin Resource Sharing). Set to empty to not return the header and disallow CORS.
enable_httpd_proxy false

Start an HTTP server as a proxy to handle LDAP and/or Kerberos authentication. Each host will run an HTTP server and access to each rank is available through http://host:8082/gpudb-1, where port 8082 is defined by httpd_proxy_port.

Note

The httpd proxy's external endpoints are not affected by the use_https parameter above. If you wish to enable HTTPS on the proxy, you must edit /opt/gpudb/httpd/conf/httpd.conf and set up HTTPS per the Apache httpd documentation at https://httpd.apache.org/docs/2.2/

httpd_proxy_port 8082 TCP port that the httpd auth proxy server will listen on if enable_httpd_proxy is true.
httpd_proxy_use_https false Set to true if the httpd auth proxy server is configured to use HTTPS.
rank0_ip_address ${gaia.rank0.host} Internal use IP address of the head HTTP server, rank0. Set to either a second internal network accessible by all ranks or to ${gaia.head_ip_address}.
trigger_port 9001 Trigger ZMQ publisher server port (-1 to disable), uses the head_ip_address interface.
set_monitor_port 9002 Set monitor ZMQ publisher server port (-1 to disable), uses the head_ip_address interface.
set_monitor_proxy_port 9003

Set monitor ZMQ publisher internal proxy server port (-1 to disable), uses the head_ip_address interface.

Important

Disabling this port effectively prevents worker nodes from publishing set monitor notifications when multi-head ingest is enabled (see enable_worker_http_servers).

enable_reveal true Enable Reveal runtime
global_manager_port_one 5552 Internal communication ports
global_manager_pub_port 5553 Host manager synchronization port
host_manager_http_port 9300 HTTP port for web portal of the host manager
enable_worker_http_servers false Enable worker HTTP servers; each process runs its own server for multi-head ingest.
rank1.worker_http_server_port 9192 Optionally, specify the worker HTTP server ports. The default is to use (head_port + rank #) for each worker process, where the rank number ranges from 1 to the number of ranks listed in the rank<#>.host parameters below.
rank2.worker_http_server_port 9193
rank0.public_url  

Optionally, specify a public URL for each worker HTTP server that clients should use to connect for multi-head operations.

Note

If specified for any ranks, a public URL must be specified for all ranks.

rank1.public_url  
rank2.public_url  
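
Example specifying public URLs for all ranks, as required by the note above (hostnames and ports are illustrative):

rank0.public_url = http://db-head.example.com:9191
rank1.public_url = http://db-worker-1.example.com:9192
rank2.public_url = http://db-worker-2.example.com:9193
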
rank0.host 127.0.0.1 Specify the hosts to run each rank worker process in the cluster. For a single machine system, use 127.0.0.1, but if using two or more machines, a hostname or IP address must be specified for each rank that is accessible from the other ranks. See also head_ip_address and rank0_ip_address.
rank1.host 127.0.0.1
rank2.host 127.0.0.1
rank0.communicator_port 6555 Specify the TCP ports each rank will use to communicate with the others. If the port for any rank<#> is not specified the port will be assigned to rank0.communicator_port + rank #.
rank1.communicator_port 6556
rank2.communicator_port 6557
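
Example rank<#>.host assignments for a two-machine cluster (IP addresses are illustrative); the communicator ports above can be left at their defaults:

rank0.host = 10.0.0.10
rank1.host = 10.0.0.10
rank2.host = 10.0.0.11
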
compress_network_data false Enables compression of inter-node network data transfers.

Security

Configuration Parameter Default Value Description
require_authentication false Require authentication.
enable_authorization false Enable authorization checks.
min_password_length 0 Minimum password length.
enable_external_authentication false

Enable external (LDAP, Kerberos, etc.) authentication. User IDs of externally-authenticated users must be passed in via the REMOTE_USER HTTP header from the authentication proxy. May be used in conjunction with the enable_httpd_proxy setting above for an integrated external authentication solution.

Important

DO NOT ENABLE unless external access to GPUdb ports has been blocked via firewall AND the authentication proxy is configured to block REMOTE_USER HTTP headers passed in from clients.

auto_create_external_users false Automatically create accounts for externally-authenticated users. If enable_external_authentication is false, this setting has no effect. Note that accounts are not automatically deleted if users are removed from the external authentication provider and will be orphaned.
auto_grant_external_roles false

Automatically add roles passed in via the KINETICA_ROLES HTTP header to externally-authenticated users. Specified roles that do not exist are ignored. If enable_external_authentication is false, this setting has no effect.

Important

DO NOT ENABLE unless the authentication proxy is configured to block KINETICA_ROLES HTTP headers passed in from clients.

auto_revoke_external_roles   Comma-separated list of roles to revoke from externally-authenticated users prior to granting roles passed in via the KINETICA_ROLES HTTP header, or * to revoke all roles. Preceding a role name with an ! overrides the revocation (e.g. *,!foo revokes all roles except foo). Leave blank to disable. If either enable_external_authentication or auto_grant_external_roles is false, this setting has no effect.
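
Example of enabling external authentication behind an authentication proxy, assuming the proxy blocks client-supplied REMOTE_USER and KINETICA_ROLES headers as cautioned above (the role name in the revocation list is illustrative):

require_authentication = true
enable_external_authentication = true
auto_create_external_users = true
auto_grant_external_roles = true
auto_revoke_external_roles = *,!admin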

Auditing

This section controls the request auditor, which will audit all requests received by the server in full or in part based on the settings below. The output location of the audited requests is controlled via settings in the Auditing section of gpudb_logger.conf.

Configuration Parameter Default Value Description
enable_audit false Controls whether request auditing is enabled. If set to true, the following information is audited for every request: Job ID, URI, User, and Client Address. The settings below control whether additional information about each request is also audited. If set to false, all auditing is disabled.
audit_headers false Controls whether HTTP headers are audited for each request. If enable_audit is false this setting has no effect.
audit_body false

Controls whether the body of each request is audited (in JSON format). If enable_audit is false this setting has no effect.

Note

For requests that insert data records, this setting does not control the auditing of the records being inserted, only the rest of the request body; see audit_data below to control this.

audit_data false

Controls whether records being inserted are audited (in JSON format) for requests that insert data records. If either enable_audit or audit_body is false, this setting has no effect.

Note

Enabling this setting during bulk ingestion of data will rapidly produce very large audit logs and may cause disk space exhaustion; use with caution.

lock_audit false Controls whether the above audit settings can be altered at runtime via the /alter/system/properties endpoint. In a secure environment where auditing is required at all times, this should be set to true to lock the settings to what is set in this file.
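
Example enabling request auditing with headers and bodies, but not inserted records, and locking the settings against runtime changes (an illustrative combination, not a recommendation):

enable_audit = true
audit_headers = true
audit_body = true
audit_data = false
lock_audit = true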

Licensing

Configuration Parameter Default Value Description
license_key   The license key to authorize running.

Processes and Threads

Configuration Parameter Default Value Description
min_http_threads 8 Set the minimum number of web server threads to spawn.
max_http_threads 512 Set the maximum number of web server threads to spawn.
sm_omp_threads 2 Set the number of parallel jobs to create for multi-child set calculations. Use -1 to use the max number of threads (not recommended).
kernel_omp_threads 4 Set the number of parallel calculation threads to use for data processing. Use -1 to use the max number of threads (not recommended).
max_tbb_threads_per_rank -1 Set the maximum number of threads (both workers and masters) to be passed to TBB on initialization. Generally speaking, max_tbb_threads_per_rank - 1 TBB workers will be created. Use -1 for no limit.
toms_per_rank 1 Set the number of TOMs (data container shards) per rank
tps_per_tom 4 Set the number of TaskProcessors (CPU data processors) per TOM.
tcs_per_tom 16 Set the number of TaskCalculators (GPU data processors) per TOM.
subtask_concurrency_limit 4 Maximum number of simultaneous threads allocated to a given request, on each rank. Note that thread allocation may also be limited by resource group limits and/or system load.

Hardware

Configuration Parameter Default Value Description
rank0.gpu 0

Specify the GPU to use for all calculations on the HTTP server node, rank0.

Note

The rank0 GPU may be shared with another rank.

rank1.taskcalc_gpu  

Set the GPU device for each worker rank to use. If no GPUs are specified, each rank will round-robin the available GPUs per host system. Add rank<#>.taskcalc_gpu entries as needed for the worker ranks, where # ranges from 1 to the highest rank # among the rank<#>.host parameters.

Example setting the GPUs to use for ranks 1 and 2:

rank1.taskcalc_gpu = 0
rank2.taskcalc_gpu = 1
rank2.taskcalc_gpu  
rank0.numa_node  

Set the head HTTP rank0 numa node(s). If left empty, there will be no thread affinity or preferred memory node. The node list may be either a single node number or a range; e.g., 1-5,7,10.

If there will be many simultaneous users, specify as many nodes as possible that won't overlap the rank1 to rankN worker numa nodes that the GPUs are on.

If there will be few simultaneous users and WMS speed is important, choose the numa node the rank0.gpu is on.

rank1.base_numa_node  

Set each worker rank's preferred base numa node for CPU affinity and memory allocation. The rank<#>.base_numa_node is the node or nodes that non-data intensive threads will run in. These nodes do not have to be the same numa nodes that the GPU specified by the corresponding rank<#>.taskcalc_gpu is on for best performance, though they should be relatively near to their rank<#>.data_numa_node.

There will be no CPU thread affinity or preferred node for memory allocation if not specified or left empty.

The node list may be a single node number or a range; e.g., 1-5,7,10.

rank2.base_numa_node  
rank1.data_numa_node  

Set each worker rank's preferred data numa node for CPU affinity and memory allocation. The rank<#>.data_numa_node is the node or nodes that data intensive threads will run in and should be set to the same numa node that the GPU specified by the corresponding rank<#>.taskcalc_gpu is on for best performance.

If the rank<#>.taskcalc_gpu is specified the rank<#>.data_numa_node will be automatically set to the node the GPU is attached to, otherwise there will be no CPU thread affinity or preferred node for memory allocation if not specified or left empty.

The node list may be a single node number or a range; e.g., 1-5,7,10.

rank2.data_numa_node  

General

Configuration Parameter Default Value Description
protected_sets MASTER,_MASTER,_DATASOURCE Tables and collections with these names will not be deleted (comma separated).
default_ttl 20 Time-to-live in minutes of non-protected tables before they are automatically deleted from the database.
disable_clear_all false Disallow the /clear/table request to clear all tables.
pinned_memory_pool_size 250000000 Size in bytes of the pinned memory pool per-rank process to speed up copying data to the GPU. Set to 0 to disable.
concurrent_kernel_execution false Enable (if true) multiple kernels to run concurrently on the same GPU
max_concurrent_kernels 4 Maximum number of kernels that can be running at the same time on a given GPU. Set to 0 for no limit. Only takes effect if concurrent_kernel_execution is true
force_host_filter_execution 0 If true then all filter execution will be host-only (i.e. CPU). This can be useful for high-concurrency situations and when PCIe bandwidth is a limiting factor.
max_get_records_size 20000 Maximum number of records that data retrieval requests such as /get/records and /aggregate/groupby will return per request.
request_timeout 20 Timeout (in minutes) for filter-type requests
on_startup_script  

Set an optional executable command that will be run once when Kinetica is ready for client requests. This can be used to perform any initialization logic that needs to be run before clients connect. It will be run as the gpudb user, so you must ensure that any required permissions are set on the file to allow it to be executed. If the command cannot be executed or returns a non-zero error code, then Kinetica will be stopped. Output from the startup script will be logged to /opt/gpudb/core/logs/gpudb-on-start.log (and its dated relatives). The gpudb_env.sh script is run directly before the command, so the path will be set to include the supplied Python runtime.

Example:

on_startup_script = /home/gpudb/on-start.sh param1 param2 ...
enable_predicate_equi_join false Enable predicate-equi-join filter plan type
timeout_startup_subsystem 60 Timeout (in seconds) to wait for each database subsystem to startup. Subsystems include the Query Planner, Graph, Stats, & HTTP servers, as well as external text-search ranks.
timeout_shutdown_subsystem 20 Timeout (in seconds) to wait for each database subsystem to exit gracefully before it is force-killed.
timeout_shutdown_rank 300 Timeout (in seconds) to wait for a rank to exit gracefully before it is force-killed. Machines with slow disk drives may require longer times and data may be lost if a drive is not responsive.

Visualization

Configuration Parameter Default Value Description
point_render_threshold 100000 Threshold number of points (per-TOM) at which point rendering switches to fast mode.
symbology_render_threshold 10000 Threshold for the number of points (per-TOM) after which symbology rendering falls back to regular rendering
max_heatmap_size 3072 Maximum heatmap size (in pixels) that can be generated. This reserves max_heatmap_size^2 * 8 bytes of GPU memory at rank0.
symbol_resolution 100 The image width/height (in pixels) of svg symbols cached in the OpenGL symbol cache.
symbol_texture_size 4000 The width/height (in pixels) of an OpenGL texture which caches symbol images for OpenGL rendering.
enable_opengl_renderer true If true, enable hardware-accelerated OpenGL renderer; if false, use the software-based Cairo renderer.
rendering_precision_threshold 30 Single-precision coordinates are used for normal rendering, but depending on the precision of the geometry data and the use case, double-precision processing may be required at high zoom levels. Double-precision rendering is used starting at the zoom level specified by this parameter, which corresponds to a zoom level of the TMS or Google map service.
enable_lod_rendering true Enable level-of-details rendering for fast interaction with large WKT polygon data. Only available for the OpenGL renderer (when enable_opengl_renderer is true).
enable_vectortile_service false If true, enable Vector Tile Service (VTS) to support client-side visualization of geospatial data. Enabling this option increases memory usage on ingestion, but the maximum usage is bounded by max_memory_for_shape_processing.
min_vectortile_zoomlevel 1 Input geometries are pre-processed upon ingestion for faster vector tile generation. This parameter determines the zoomlevel from which the vector tile pre-processing starts. A vector tile request for a lower zoomlevel than this parameter takes additional time because the vector tile needs to be generated on the fly.
max_vectortile_zoomlevel 8 Input geometries are pre-processed upon ingestion for faster vector tile generation. This parameter determines the zoomlevel at which the vector tile pre-processing stops. A vector tile request for a higher zoomlevel than this parameter takes additional time because the vector tile needs to be generated on the fly.
vectortile_map_tiler google The name of the map tiler used for the Vector Tile Service. The google and tms map tilers are currently supported. This parameter should match the map tiler of the clients' vector tile renderer.
lod_data_extent -180 -90 180 90

Longitude and latitude ranges of geospatial data for which level-of-details representations are being generated. The parameter order is:

<min_longitude> <min_latitude> <max_longitude> <max_latitude>

The default values span the entire world, but level-of-details rendering becomes more efficient when the precise extent of the geospatial data is specified.
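
For example, an illustrative extent roughly covering the continental United States:

lod_data_extent = -125 24 -66 50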

lod_subregion_num 12 6 The number of subregions in the horizontal and vertical geospatial data extent. The default values of 12 6 divide the world into subregions of 30 degrees (longitude) x 30 degrees (latitude).
lod_subregion_resolution 512 512 A base image resolution (width and height in pixels) at which a subregion would be rendered in a global view spanning the whole dataset. Based on this resolution, level-of-details representations are generated for the polygons located in the subregion.
max_lod_level 8 The maximum number of levels in the level-of-details rendering. As the number increases, level-of-details rendering becomes effective at higher zoom levels, but it may increase memory usage for storing level-of-details representations.
lod_preprocessing_level 5

The extent to which shape data are pre-processed for level-of-details rendering during data insert/load or processed on the fly at rendering time. This is a trade-off between speed and memory. The higher the value, the faster level-of-details rendering is, but the more memory is used for storing processed shape data.

The maximum level is 10 (most shape data are pre-processed) and the minimum level is 0.

max_memory_for_shape_processing 8192

The upper bound of system memory usage (in megabytes) for shape processing to generate level-of-details representations on each rank. This parameter overrides lod_preprocessing_level; i.e., stops shape pre-processing, when the memory usage reaches this upper bound.

If the value is set to 0, shape processing is performed without memory usage restriction.

Tomcat

Configuration Parameter Default Value Description
enable_tomcat true Enable Tomcat/GAdmin

Persistence

Configuration Parameter Default Value Description
persist_directory /opt/gpudb/persist Specify a base directory to store persistence data files.
sms_directory ${gaia.persist_directory} Base directory to store hashed strings.
text_index_directory ${gaia.persist_directory} Base directory to store the text search index.
temp_directory /tmp Directory for GPUdb to use to store temporary files. Must be a fully qualified path with at least 100 MB of free space and execute permission.
persist_sync false Whether or not to use synchronous persistence file writing. If false, files will be written asynchronously.
synchronous_compression false Synchronous compression: compress vectors on set compression.
persist_sync_time 5

Interval, in minutes, at which persistence files will be force-synced, if out of sync.

Note

Files are always opportunistically saved; this simply enforces a maximum time a file can be out of date. Set to a very high number to disable.

load_vectors_on_start always

Startup data-loading scheme:

  • always : load all the data into memory before accepting requests
  • lazy : load the necessary data to start, but load the remainder lazily
  • on_demand : only load data as requests use it
indexdb_toc_size 1000000 Table of contents size for IndexedDb object file store
indexdb_max_open_files 128 Maximum number of open files for IndexedDb object file store
sms_max_open_files 128 Maximum number of open files (per-TOM) for the SMS (string) store
chunk_size 8000000 Chunk size (0 disables chunking)
execution_mode device

Determines whether to execute kernels on host (CPU) or device (GPU). Possible values are:

  • default : engine decides
  • host : execute only host
  • device : execute only device
  • <rows> : execute on the host if chunked column contains the given number of rows or fewer; otherwise, execute on device.
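
For example, an illustrative row-count threshold that runs kernels on the host for chunked columns of 10000 rows or fewer and on the device otherwise:

execution_mode = 10000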

Statistics

Configuration Parameter Default Value Description
enable_stats_server true Run a statistics server to collect information about Kinetica and the machines it runs on.
stats_server_ip_address ${gaia.rank0_ip_address} Statistics server IP address (run on the head node); the default port is 2003.
stats_server_port 2003
stats_server_namespace gpudb Statistics server namespace - should be a machine identifier

Procs

Configuration Parameter Default Value Description
enable_procs false Enable procs (UDFs)
proc_directory /opt/gpudb/procs Directory where proc files are stored at runtime. Must be a fully qualified path with execute permission. If not specified, temp_directory will be used.
proc_data_directory /opt/gpudb/procs Directory where data transferred to and from procs is written. Must be a fully qualified path with sufficient free space for required volume of data. If not specified, temp_directory will be used.

Graph Server

Configuration Parameter Default Value Description
enable_graph_server false Enable the graph server
graph_server_ip_address ${gaia.rank0_ip_address} Specify where the graph server should be run; defaults to the head node.
graph_server_push_port 8099 Port used for requests from the database server to the graph server
graph_server_pull_port 8100 Port used for responses from the graph server to the database server
graph_server_timeout 1200 Number of seconds the graph client will wait for a response from the graph server
graph_gpu_list   List of GPU devices to be used by the graph server. The server would ideally be run on a different node with dedicated GPU(s).
graph_max_memory 0 Maximum memory that can be used by the graph server; set to 0 to disable memory restriction

KiFS

Configuration Parameter Default Value Description
enable_kifs false

Enable access to the KiFS file system from within procs (UDFs). This will create a filesystem collection and an internal kifs user in Kinetica and mount the file system at the mount point specified below. By default, the file system is accessible to the gpudb_proc user; to make it accessible to other users, you must ensure that user_allow_other is enabled in /etc/fuse.conf and add the other users to the gpudb_proc group.

For instance:

sudo usermod -a -G gpudb_proc <user>
kifs_mount_point /opt/gpudb/kifs Parent directory of the mount point for the KiFS file system. Must be a fully qualified path. The actual mount point will be a subdirectory mount below this directory. Note that this folder must have read, write and execute permissions for the gpudb user and the gpudb_proc group, and it cannot be a path on an NFS.

HA

At the moment, this section simply provides information about whether and where HA is running, so that the gpudb startup script can properly create proxy-pass entries for httpd when both HA and httpd are enabled.

Configuration Parameter Default Value Description
enable_ha false Enable HA.
ha_use_https false Set to true if HA is configured to use HTTPS.
ha_ring_head_nodes   Specify the IP addresses or hostnames of the head nodes in the HA ring
rank0.ha_host   Optionally, specify the IP address or hostname of the HA server for each rank. By default, HA servers are assumed to be on the same nodes as their corresponding ranks.
rank1.ha_host  
rank2.ha_host  
rank0.ha_port  

Optionally, specify the port number of the HA server for each rank. By default, the port number for rank0 is 9191, and the port numbers for the other ranks are (head_port + rank #).

Note

If HA is enabled, the head_port and/or rank<#>.worker_http_server_port settings may need to be changed to prevent conflicts.

rank1.ha_port  
rank2.ha_port  

Machine Learning (ML)

Configuration Parameter Default Value Description
enable_ml false Enable the ML server.

Alerts

Configuration Parameter Default Value Description
enable_alerts true Enable the alerting system.
alert_exe   Executable to run when an alert condition occurs. This executable will only be run on rank0 and does not need to be present on other nodes.
alert_host_status true Trigger an alert whenever the status of a host or rank changes.
alert_host_status_filter fatal_init_error Optionally, filter host alerts for a comma-delimited list of statuses. If a filter is empty, every host status change will trigger an alert.
alert_rank_status true Trigger an alert whenever the status of a rank changes.
alert_rank_status_filter fatal_init_error, not_responding, terminated Optionally, filter rank alerts for a comma-delimited list of statuses. If a filter is empty, every rank status change will trigger an alert.
alert_rank_cuda_error true Trigger an alert if a CUDA error occurs on a rank.
alert_rank_fallback_allocator true

Trigger alerts when the fallback allocator is employed; e.g., host memory is allocated because GPU allocation fails.

Note

To prevent a flooding of alerts, if a fallback allocator is triggered in bursts, not every use will generate an alert.

alert_error_messages true Trigger generic error message alerts, in cases of various significant runtime errors.
alert_memory_percentage 20, 10, 5, 1 Trigger an alert if available memory on any given node falls to or below a certain threshold, either absolute (number of bytes) or percentage of total memory. For multiple thresholds, use a comma-delimited list of values.
alert_memory_absolute  
alert_disk_percentage 20, 10, 5, 1 Trigger an alert if available disk space on any given node falls to or below a certain threshold, either absolute (number of bytes) or percentage of total disk space. For multiple thresholds, use a comma-delimited list of values.
alert_disk_absolute  
alert_max_stored_alerts 100 The maximum number of triggered alerts guaranteed to be stored at any given time. When this number of alerts is exceeded, older alerts may be discarded to stay within the limit.
chunk_cache_enabled false Whether or not to enable chunk caching
chunk_cache_size 100000000 The maximum number of bytes in the chunk cache
trace_directory /tmp Directory where the trace event and summary files are stored. Must be a fully qualified path with sufficient free space for required volume of data.
trace_event_buffer_size 1000000 The maximum number of trace events to be collected

SQL Engine

Configuration Parameter Default Value Description
sql.enable_planner true Enable Query Planner
sql.planner.address ipc://${gaia.temp_directory}/gpudb-query-engine-0

The network URI for the query planner to start. The URI can be either TCP or IPC. A TCP address indicates a remote query planner, which may run on another host; an IPC address is for a local query planner.

Example for remote or TCP servers:

sql.planner.address  = tcp://127.0.0.1:9293
sql.planner.address  = tcp://HOST_IP:9293

Example for local IPC servers:

sql.planner.address  = ipc:///tmp/query-engine-0
sql.planner.remote_debug_port 0

Remote debugger port used for the query planner. Setting the port to 0 disables remote debugging.

Note

The recommended port is 5005.

sql.planner.max_memory 4096 The maximum memory, in megabytes, for the query planner to use.
sql.planner.timeout 120 Query planner timeout in seconds
sql.planner.workers 16 Maximum number of query planner threads
sql.results.caching true Enable query results caching
sql.results.cache_ttl 120 TTL of the query cache results table
sql.force_binary_joins false Perform joins between only 2 tables at a time; default is all tables involved in the operation at once
sql.force_binary_set_ops false Perform unions/intersections/exceptions between only 2 tables at a time; default is all tables involved in the operation at once
sql.plan_cache_size 4000 The maximum number of entries in the SQL plan cache. The default is 4000 entries, but the configurable range is 1 - 1000000. Plan caching will be disabled if the value is set outside of that range.
sql.rule_based_optimization true Enable rule-based query rewrites
sql.cost_based_optimization false Enable the cost-based optimizer
sql.distributed_joins true Enable distributed joins
sql.distributed_operations true Enable distributed operations
sql.parallel_execution true Enable parallel query evaluation
sql.max_parallel_steps 4 Max parallel steps
sql.temp_collection __SQL_TEMP Name of collection that will be used to store result tables generated as part of query execution
sql.default_schema   Name of default collection for user tables
sql.paging_table_ttl 20 TTL of the paging results table
sql.max_view_nesting_levels 16 Maximum allowed view nesting levels. Valid range: 1 - 64.

Tiered Storage

Defines system resources using a set of containers (tiers) in which data can be stored, either on a temporary (memory) or permanent (disk) basis. Tiers are defined in terms of their maximum capacity and watermark thresholds. The general format for defining tiers is:

tier.<tier_type>.<config_level>.<parameter>

where tier_type is one of the five basic types of tiers:

  • vram : GPU memory
  • ram : Main memory
  • disk : Disk cache
  • persist : Permanent storage
  • cold : Extended long-term storage

Each tier can be configured on a global or per-rank basis, indicated by the config_level:

  • default : global, applies to all ranks
  • rank<#> : local, applies only to the specified rank, overriding any global default

If a field is not specified at the rank<#> level, the specified default value applies. If neither is specified, the global system defaults will take effect, which vary by tier.

The parameters are also tier-specific and will be listed in their respective sections, though every tier, except Cold Storage, will have the following:

  • limit : [ -1, 1 .. N ] (bytes)
  • high_watermark : [ 1 .. 100 ] (percent)
  • low_watermark : [ 1 .. 100 ] (percent)

Note

To disable watermark-based eviction, set the low_watermark and high_watermark values to 100. Watermark-based eviction is also ignored if the tier limit is set to -1 (no limit).
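
For example, a global RAM Tier limit with a per-rank override for rank2 (byte values are illustrative):

tier.ram.default.limit = 100000000000
tier.ram.rank2.limit = 50000000000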

Global Tier Parameters

Configuration Parameter Default Value Description
tier.global.concurrent_wait_timeout 120 Timeout in seconds for subsequent requests to wait on a locked resource

VRAM Tier

The VRAM Tier is composed of the memory available in one or multiple GPUs per host machine.

A default memory limit and eviction thresholds can be set for CUDA-enabled devices across all ranks, while one or more ranks may be configured to override those defaults.

The general format for VRAM settings:

tier.vram.[default|rank<#>].all_gpus.<parameter>
Configuration Parameter Default Value Description
tier.vram.default.all_gpus.limit -1

Valid parameter names include:

  • limit : The maximum VRAM (bytes) per rank that can be allocated on GPU(s) across all resource groups. Default is -1, signifying to reserve 95% of the available GPU memory at startup.
  • high_watermark : VRAM percentage used eviction threshold. Once memory usage exceeds this value, evictions from this tier will be scheduled in the background and continue until the low_watermark percentage usage is reached. Default is 90, signifying a 90% memory usage threshold.
  • low_watermark : VRAM percentage used recovery threshold. Once memory usage exceeds the high_watermark, evictions will continue until memory usage falls below this recovery threshold. Default is 50, signifying a 50% memory usage threshold.
tier.vram.default.all_gpus.high_watermark 90
tier.vram.default.all_gpus.low_watermark 50

RAM Tier

The RAM Tier represents the RAM available for data storage per rank.

The RAM Tier is NOT used for small, non-data objects or variables that are allocated and deallocated for program flow control or used to store metadata or other similar information; these continue to use either the stack or the regular runtime memory allocator. This tier should be sized on each machine such that there is sufficient RAM left over to handle this overhead, as well as the needs of other processes running on the same machine.

A default memory limit and eviction thresholds can be set across all ranks, while one or more ranks may be configured to override those defaults.

The general format for RAM settings:

tier.ram.[default|rank<#>].<parameter>
Configuration Parameter Default Value Description
tier.ram.default.limit -1

Valid parameter names include:

  • limit : The maximum RAM (bytes) per rank that can be allocated across all resource groups. Default is -1, signifying no limit and ignore watermark settings.
  • high_watermark : RAM percentage used eviction threshold. Once memory usage exceeds this value, evictions from this tier will be scheduled in the background and continue until the low_watermark percentage usage is reached. Default is 90, signifying a 90% memory usage threshold.
  • low_watermark : RAM percentage used recovery threshold. Once memory usage exceeds the high_watermark, evictions will continue until memory usage falls below this recovery threshold. Default is 50, signifying a 50% memory usage threshold.
tier.ram.default.high_watermark 90
tier.ram.default.low_watermark 50

Disk Tier

Disk Tiers are used as temporary swap space for data that doesn't fit in RAM or VRAM. The disk should be as fast or faster than the Persist Tier storage since this tier is used as an intermediary cache between the RAM and Persist Tiers. Multiple Disk Tiers can be defined on different disks with different capacities and performance parameters. No Disk Tiers are required, but they can improve performance when the RAM Tier is at capacity.

A default storage limit and eviction thresholds can be set across all ranks for a given Disk Tier, while one or more ranks within a Disk Tier may be configured to override those defaults.

The general format for Disk settings:

tier.disk<#>.[default|rank<#>].<parameter>

Multiple Disk Tiers may be defined such as disk, disk0, disk1, ... etc. to support different tiering strategies that use any one of the Disk Tiers. A tier strategy can have, at most, one Disk Tier. Create multiple tier strategies to use more than one Disk Tier, one per strategy. See tier_strategy parameter for usage.

Configuration Parameter Default Value Description
tier.disk0.default.path /opt/gpudb/diskcache

Valid parameter names include:

  • path : A base directory to use as a swap space for this tier.
  • limit : The maximum disk usage (bytes) per rank for this tier across all resource groups. Default is -1, signifying no limit and ignore watermark settings.
  • high_watermark : Disk percentage used eviction threshold. Once disk usage exceeds this value, evictions from this tier will be scheduled in the background and continue until the low_watermark percentage usage is reached. Default is 90, signifying a 90% disk usage threshold.
  • low_watermark : Disk percentage used recovery threshold. Once disk usage exceeds the high_watermark, evictions will continue until disk usage falls below this recovery threshold. Default is 50, signifying a 50% disk usage threshold.
  • store_persistent_objects : If true, allow the disk cache to store copies of data even if they are already stored in a persistent tier (persist/cold).
tier.disk0.default.limit -1
tier.disk0.default.high_watermark 90
tier.disk0.default.low_watermark 50
tier.disk0.default.store_persistent_objects false
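
Example defining a second Disk Tier, disk1, on a separate device for use by a different tier strategy (the path and limit are illustrative):

tier.disk1.default.path = /mnt/ssd0/gpudb/diskcache
tier.disk1.default.limit = 500000000000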

Persist Tier

The Persist Tier is a single pseudo-tier that contains data in persistent form that survives between restarts. Although it also is a disk-based tier, its behavior is different from Disk Tiers: data for persistent objects is always present in the Persist Tier (or Cold Storage Tier, if configured), but may not be up-to-date at any given time.

A default storage limit and eviction thresholds can be set across all ranks, while one or more ranks may be configured to override those defaults.

The general format for Persist settings:

tier.persist.[default|rank<#>].<parameter>

Warning

In general, limits on the Persist Tier should only be set if one or more Cold Storage Tiers are configured. Without a supporting Cold Storage Tier to evict objects in the Persist Tier to, operations requiring space in the Persist Tier will fail when the limit is reached.

Configuration Parameter Default Value Description
tier.persist.default.path ${gaia.persist_directory}

Valid parameter names include:

  • path : Base directory to store column and object vectors.
  • limit : The maximum disk usage (bytes) per rank for this tier across all resource groups. Default is -1, signifying no limit and ignore watermark settings.
  • high_watermark : Disk percentage used eviction threshold. Once disk usage exceeds this value, evictions from this tier to cold storage (if configured) will be scheduled in the background and continue until the low_watermark percentage usage is reached. Default is 90, signifying a 90% disk usage threshold.
  • low_watermark : Disk percentage used recovery threshold. Once disk usage exceeds the high_watermark, evictions will continue until disk usage falls below this recovery threshold. Default is 50, signifying a 50% disk usage threshold.
tier.persist.default.limit -1
tier.persist.default.high_watermark 90
tier.persist.default.low_watermark 50

Cold Storage Tier

Cold Storage Tiers can be used to extend the storage capacity of the Persist Tier. Assign a tier strategy with cold storage to objects that will be infrequently accessed since they will be moved as needed from the Persist Tier. The Cold Storage Tier is typically a much larger capacity physical disk or a cloud-based storage system which may not be as performant as the Persist Tier storage.

A default storage limit and eviction thresholds can be set across all ranks for a given Cold Storage Tier, while one or more ranks within a Cold Storage Tier may be configured to override those defaults.

Note

If an object needs to be pulled out of cold storage during a query, it may need to use the local persist directory as temporary swap space. This may trigger an eviction of other persisted items to cold storage due to the low disk space condition defined by the watermark settings for the Persist Tier.

The general format for Cold Storage settings:

tier.cold<#>.[default|rank<#>].<parameter>

Multiple Cold Storage Tiers may be defined such as cold, cold0, cold1, ... etc. to support different tiering strategies that use any one of the Cold Storage Tiers. A tier strategy can have, at most, one Cold Storage Tier. Create multiple tier strategies to use more than one Cold Storage Tier, one per strategy. See tier_strategy parameter for usage.

Configuration Parameter Default Value Description
tier.cold0.default.type disk

Valid parameter names include:

  • type : The storage provider type. Currently supports disk (local/network storage), hdfs (Hadoop distributed filesystem), and s3 (Amazon S3 bucket)
  • base_path : A base path based on the provider type for this tier.
  • wait_timeout : Timeout in seconds for reading from or writing to this storage provider.
  • connection_timeout : Timeout in seconds for connecting to this storage provider.

HDFS-specific parameter names include:

  • hdfs_uri : The host IP address & port for the hadoop distributed file system. For example: hdfs://localhost:8020
  • hdfs_principal : The effective principal name to use when connecting to the hadoop cluster.
  • hdfs_use_kerberos : Set to true to enable Kerberos authentication to an HDFS storage server. The credentials of the principal are in the file specified by the hdfs_kerberos_keytab parameter. Note that Kerberos's kinit command will be run when the database is started.
  • hdfs_kerberos_keytab : The Kerberos keytab file used to authenticate the gpudb Kerberos principal.

AmazonS3-specific parameter names include:

  • s3_bucket_name
  • s3_region (optional)
  • s3_endpoint (optional)
  • s3_aws_access_key_id
  • s3_aws_secret_access_key
tier.cold0.default.base_path /mnt/gpudb/cold_storage
tier.cold0.default.wait_timeout 1
tier.cold0.default.connection_timeout 1
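
Example of an S3-backed Cold Storage Tier, assuming the S3-specific parameters follow the same tier.cold<#>.<config_level>.<parameter> form (the bucket, region, and base path are illustrative placeholders; the access key parameters would also be set):

tier.cold0.default.type = s3
tier.cold0.default.base_path = kinetica-cold
tier.cold0.default.s3_bucket_name = example-kinetica-cold
tier.cold0.default.s3_region = us-east-1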

Tier Strategy

Configuration Parameter Default Value Description
tier_strategy.default VRAM 1, RAM 5, PERSIST 5

Default strategy to apply to tables or columns when one was not provided during table creation. This strategy is also applied to a resource group that does not specify one at time of creation.

The strategy is formed by chaining together the tier types and their respective eviction priorities. Any given tier may appear no more than once in the chain, and the priority must be in the range 1 - 10, where 1 is the lowest evictable priority (first to be evicted) and 9 is the highest evictable priority (last to be evicted). A priority of 10 indicates that an object is unevictable.

Each tier's priority is in relation to the priority of other objects in the same tier; e.g., RAM 9, DISK2 1 indicates that an object will be the highest evictable priority among objects in the RAM Tier (last evicted), but that it will be the lowest priority among objects in the Disk Tier named disk2 (first evicted). Note that since an object can only have one Disk Tier instance in its strategy, the corresponding priority will only apply in relation to other objects in Disk Tier instance disk2.

See the Tiered Storage section for more information about tier type names.

Format:

<tier1> <priority>, <tier2> <priority>, <tier3> <priority>, ...

Examples using a Disk Tier named disk2 and a Cold Storage Tier cold0:

vram 3, ram 5, disk2 3, persist 10
vram 3, ram 5, disk2 3, persist 6, cold0 10
tier_strategy.predicate_evaluation_interval 60 Predicate evaluation interval (in minutes); indicates the interval at which the tier strategy predicates are evaluated.

Default Resource Group

Resource groups are used to enforce simultaneous memory, disk and thread usage limits for all users within a given group. Users not assigned to a specific resource group will be placed within this default group.

Configuration Parameter Default Value Description
resource_group.default.schedule_priority 50 The scheduling priority for this group's operations, 1 - 100, where 1 is the lowest priority and 100 is the highest
resource_group.default.max_tier_priority 10 The maximum eviction priority for tiered objects, 1 - 10. This supersedes any priorities set by user-provided tier strategies.
resource_group.default.max_cpu_concurrency -1 Maximum number of concurrent operations; -1 for no limit
resource_group.default.vram_limit -1 The maximum memory (bytes) this group can use at any given time in the VRAM tier; -1 for no limit
resource_group.default.ram_limit -1 The maximum memory (bytes) this group can use at any given time in the RAM tier; -1 for no limit
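
For example, to constrain the default group (values are illustrative; -1 leaves a limit disabled):

resource_group.default.max_cpu_concurrency = 8
resource_group.default.ram_limit = 50000000000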