tf_raster_graph_shortest_slope_weighted_path

Aggregates point data into x/y bins of a given size in meters to form a dense spatial grid, computing the aggregate specified by agg_type across all points in each bin as the output value for the bin. A Gaussian average is then taken over the neighboring bins, with the neighborhood radius in bins specified by neighborhood_fill_radius; if fill_only_nulls is true, only null-valued bins are filled in.

The graph shortest path is then computed between an origin point on the grid, specified by origin_x and origin_y, and a destination point, specified by destination_x and destination_y. Traversal between a bin and each of its neighbors is weighted by the computed slope between them raised to the power given by slope_weighted_exponent. A maximum traversable slope can be set with slope_pct_max: no traversal is allowed between bins whose absolute computed slope exceeds that percentage.
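One way to read this weighting (the exact internal cost function is not spelled out on this page, so treat the following as an illustrative sketch): the cost of stepping from a bin a to a neighboring bin b is

$$
w(a, b) = \lvert \mathrm{slope}(a, b) \rvert^{k}, \qquad k = \mathtt{slope\_weighted\_exponent},
$$

with the step disallowed entirely whenever the absolute slope exceeds slope_pct_max, and the shortest path minimizing the sum of these weights from origin to destination.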

SELECT * FROM TABLE(
    tf_raster_graph_shortest_slope_weighted_path(
        raster => CURSOR(
            SELECT x, y, z FROM table
        ),
        agg_type => <'AVG'|'COUNT'|'SUM'|'MIN'|'MAX'>,
        bin_dim => <meters>,
        geographic_coords => <true/false>,
        neighborhood_fill_radius => <num bins>,
        fill_only_nulls => <true/false>,
        origin_x => <origin x coordinate>,
        origin_y => <origin y coordinate>,
        destination_x => <destination x coordinate>,
        destination_y => <destination y coordinate>,
        slope_weighted_exponent => <exponent>,
        slope_pct_max => <max pct slope>
    )
)

Input Arguments

x (Column<FLOAT | DOUBLE>): Input x-coordinate column or expression of the data to be rasterized.

y (Column<FLOAT | DOUBLE>, must be the same type as x): Input y-coordinate column or expression of the data to be rasterized.

z (Column<FLOAT | DOUBLE>): Input z-coordinate column or expression of the data to be rasterized.

agg_type (TEXT ENCODING NONE): The aggregate used to compute the output z-column. Must be one of 'AVG', 'COUNT', 'SUM', 'MIN', or 'MAX'.

bin_dim (DOUBLE): The width and height of each x/y bin. If geographic_coords is true, the input x/y units are translated to meters according to a local coordinate transform appropriate for the x/y bounds of the data.

geographic_coords (BOOLEAN): If true, specifies that the input x/y coordinates are in lon/lat degrees. The function then computes a mapping of degrees to meters based on the center coordinate between x_min/x_max and y_min/y_max.

neighborhood_fill_radius (BIGINT): The radius in bins over which the Gaussian blur/filter is computed, such that each output bin is the average value of all bins within neighborhood_fill_radius bins.

fill_only_nulls (BOOLEAN): Specifies that the Gaussian blur should be used only to provide output values for null output bins (i.e., bins that contained no data points, or contained only data points with null z-values).

origin_x (DOUBLE): The x-coordinate of the starting point for the graph traversal, in input (not bin) units.

origin_y (DOUBLE): The y-coordinate of the starting point for the graph traversal, in input (not bin) units.

destination_x (DOUBLE): The x-coordinate of the destination point for the graph traversal, in input (not bin) units.

destination_y (DOUBLE): The y-coordinate of the destination point for the graph traversal, in input (not bin) units.

slope_weighted_exponent (DOUBLE): The slope between neighboring raster cells is raised to this power to form the traversal weight. A value of 1 uses the raw slopes between neighboring cells; values greater than 1 more heavily penalize paths that traverse steep slopes.

slope_pct_max (DOUBLE): The maximum absolute slope (measured as a percentage) between neighboring raster cells that will be considered for traversal. A neighboring cell with an absolute slope greater than this amount is not considered in the shortest slope-weighted path traversal.
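Because the traversal endpoints are given in input (not bin) units, origin_x/origin_y and destination_x/destination_y are expected to fall within the x/y extent of the raster data. A quick bounds check before invoking the function can save a failed run; a minimal sketch, assuming a hypothetical input table my_raster_table with x, y, and z columns:

-- Inspect the raster extent so that origin and destination
-- coordinates can be chosen inside it
SELECT
  MIN(x) AS x_min, MAX(x) AS x_max,
  MIN(y) AS y_min, MAX(y) AS y_max
FROM my_raster_table;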

Output Columns

The function returns one row per step along the computed path, with x and y columns giving the coordinates of each traversed bin (in input units) and a path_step column giving the ordinal position of the step along the path, as used in the example below.

/* Compute the shortest slope-weighted path from the plains of Nepal to the peak
of Mt. Everest, using a 30m Copernicus Digital Elevation Model (DEM) raster of
the area around Mt. Everest as input */

create table mt_everest_climb as
select
  path_step,
  st_setsrid(st_point(x, y), 4326) as path_pt
from
  table(
    tf_raster_graph_shortest_slope_weighted_path(
      raster => cursor(
        select
          st_x(raster_point),
          st_y(raster_point),
          z
        from
          copernicus_30m_mt_everest
      ),
      agg_type => 'AVG',
      bin_dim => 30,
      geographic_coords => TRUE,
      neighborhood_fill_radius => 1,
      fill_only_nulls => FALSE,
      origin_x => 86.01,
      origin_y => 27.01,
      destination_x => 86.9250,
      destination_y => 27.9881,
      slope_weighted_exponent => 4,
      slope_pct_max => 50
    )
  );
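The per-step points can also be strung back together to estimate the length of the computed route. The sketch below assumes the mt_everest_climb table created above; it uses LAG to pair each step with its predecessor, and casts the SRID 4326 points to GEOGRAPHY so that ST_Distance returns meters (geospatial and window-function support can vary by HeavyDB version, so treat this as a sketch rather than a guaranteed recipe):

-- Approximate route length in meters: pair each path step with its
-- predecessor, then sum the great-circle distances between them.
SELECT
  SUM(ST_Distance(
        CAST(ST_SetSRID(ST_Point(prev_lon, prev_lat), 4326) AS GEOGRAPHY),
        CAST(ST_SetSRID(ST_Point(lon, lat), 4326) AS GEOGRAPHY))) AS path_length_m
FROM (
  SELECT
    ST_X(path_pt) AS lon,
    ST_Y(path_pt) AS lat,
    LAG(ST_X(path_pt), 1) OVER (ORDER BY path_step) AS prev_lon,
    LAG(ST_Y(path_pt), 1) OVER (ORDER BY path_step) AS prev_lat
  FROM mt_everest_climb
) t
WHERE prev_lon IS NOT NULL;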

Result of the example query above, showing the shortest slope-weighted path between the Nepali plains and the peak of Mt. Everest. The path closely mirrors the actual climbing route used.