Extreme Analytics Made Possible on a GPU Database

GPUs crunch large volumes of data faster and more efficiently than CPUs because they work in parallel, instead of in sequence. GPUs accelerate location-based and in-memory analytics, machine learning, and AI.

GPU Databases - What, Why, & How

How are GPU Databases Different?

With thousands of processing cores available on a single card, it is possible to perform operations in parallel, using brute force to tackle complex analytics workloads that traditional databases struggle with. Aggregations, sorts, and grouping operations are workload intensive for a CPU, but can work effectively in parallel on a GPU. NVIDIA’s CUDA API made it possible to use these GPU cards for high-performance computing on standard hardware.

Taking advantage of GPUs for such operations requires ground-up development of the database. GPUs require specific programming, and operations need to be processed differently to take maximum advantage of the GPU's threading model. A modern analytics database holds data in a columnar store, optimized for feeding the GPU's compute capability as fast as possible.

Aggregations, sorts, and grouping operations are workload intensive for a CPU, but can be effectively parallelized on a GPU.
GPUs provide so much raw compute power, you don't need to worry as much about indexing, partitioning or downsampling!
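As a rough illustration of why this works, the sketch below uses CuPy (a Python GPU array library, not Kinetica's internals) to run a sum, a percentile, and a sort over a single column entirely on the GPU; each operation is parallelized across thousands of cores.

```python
# Illustrative sketch only -- not Kinetica's implementation. Requires an
# NVIDIA GPU with CUDA and the cupy package installed.
import numpy as np
import cupy as cp

# A single "column" of 100 million float measurements, built on the CPU.
values_cpu = np.random.rand(100_000_000).astype(np.float32)

# Copy the column into GPU memory.
values_gpu = cp.asarray(values_cpu)

# Aggregations and sorts run in parallel across thousands of GPU cores.
total = cp.sum(values_gpu)           # parallel reduction
p99 = cp.percentile(values_gpu, 99)  # parallel aggregate
ordered = cp.sort(values_gpu)        # parallel sort

print(float(total), float(p99), float(ordered[-1]))
```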

Extreme Performance with Affordable Hardware

GPUs offer orders of magnitude more compute power, so data structures can be simpler and heavy indexing is unnecessary. This means the GPU DB does not have as much work to do for each update. Complex queries on the new data can be performed immediately.

CPU-bound in-memory systems require complex data structures to speed up queries. This works at a low node count, but as the data under management grows, keeping those data structures up to date becomes increasingly expensive, widening the latency between when data is written and when it can be read.

With a GPU database you can see complex queries returned in milliseconds even as the dataset grows and more nodes are added. More and more customers are trying to solve challenges where leading analytical databases can’t keep up with the rate of data ingest.

GPU Databases Change the way you Work

GPU-accelerated databases have reduced the need for specialized data architects to restructure and optimize the data for query. Their power also makes it easier to work with extreme datasets such as cyber data, IoT data, web data, and business transaction data, where patterns and insights need to be recognized quickly and time relevancy matters more than ever.

The variety of data sources and the uniqueness of individual data points (high cardinality) in today's extreme data flows pose particular challenges for analytical systems. While many databases provide analytics across applications at scale, and others provide fast data access at scale (NoSQL), all of them struggle when complex querying on time-sensitive data is also required.

The variety and high cardinality of today's extreme data flows pose particular challenges for traditional analytical systems.

GPUs vs CPUs

A CPU consists of a few cores optimized for sequential serial processing, while a GPU has a massively parallel architecture consisting of thousands of smaller, more efficient cores designed to handle multiple tasks simultaneously.
[Illustration] CPU: 64 cores, serial compute. GPU: 5,100+ cores, parallel compute. 81,920 cores: unparalleled compute.
Watch Mythbusters explain how the GPU works.
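To make the threading model concrete, here is a minimal sketch using Numba's CUDA JIT (an assumption about tooling, used purely for illustration; Kinetica itself is built directly on CUDA). Each array element gets its own lightweight GPU thread, where a CPU would work through the elements largely in sequence.

```python
# Sketch of the GPU threading model using Numba's CUDA JIT.
# Assumes an NVIDIA GPU, CUDA drivers, and the numba package.
import numpy as np
from numba import cuda

@cuda.jit
def scale(values, factor, out):
    i = cuda.grid(1)          # this thread's global index
    if i < values.size:       # guard the final, partially filled block
        out[i] = values[i] * factor

n = 10_000_000
values = np.arange(n, dtype=np.float32)
out = np.zeros_like(values)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block

# Ten million threads are scheduled across the GPU's cores in parallel;
# a CPU would process the same work a handful of elements at a time.
scale[blocks, threads_per_block](values, 2.0, out)
print(out[:5])
```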

Features to Look for When Choosing a GPU Database for Accelerated Analytics

SQL Support

SQL is the lingua franca of relational databases and enables business users and analysts to easily access and analyze data. Look for robust SQL functionality including CRUD operations, advanced constructs (JOINs, UNIONs, GROUP-BYs, etc.) and subquery capabilities.
Kinetica SQL Query Support »
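For illustration, the query below (over a hypothetical orders/customers schema) combines a JOIN, a GROUP BY, and a subquery in the style a GPU database should support; here it is simply held in a Python string, ready to submit.

```python
# Hypothetical schema (orders, customers); the point is the SQL constructs,
# not any Kinetica-specific syntax.
QUERY = """
SELECT c.region,
       COUNT(*)      AS order_count,
       SUM(o.amount) AS total_amount
FROM   orders o
JOIN   customers c ON c.customer_id = o.customer_id
WHERE  o.amount > (SELECT AVG(amount) FROM orders)  -- subquery
GROUP  BY c.region
ORDER  BY total_amount DESC
"""
```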

APIs

Business analysts like the simplicity of SQL, but developers and data scientists typically prefer a RESTful API for richer programmatic access to the database. Access Kinetica through a REST API or SQL. Language-specific bindings are available for Java, Python, C++, JavaScript and Node.js. Kinetica APIs and connectors are open source and available on GitHub.
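As a hedged sketch, the snippet below connects with the open-source Python binding (the gpudb package); the connection arguments and the execute_sql call are assumptions to verify against the examples in the GitHub repository.

```python
# Sketch only: confirm class and method names against Kinetica's Python API
# repository on GitHub. Host, port, and table name are placeholders.
import gpudb

db = gpudb.GPUdb(host="http://localhost:9191")  # placeholder endpoint

# The same SQL a business analyst would write can be submitted from code.
response = db.execute_sql("SELECT COUNT(*) AS n FROM my_table")
print(response)
```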

System RAM or VRAM?

To feed the high-speed processing available to the GPU, all GPU databases store data in memory, either in system RAM or in VRAM on the GPU itself. VRAM is extremely fast, but expensive and limited in capacity. Kinetica utilizes both system RAM and VRAM. For terabytes of data, system RAM allows the database to scale to much larger volumes.
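The trade-off can be sketched with CuPy (again an illustration, not Kinetica's memory manager): the bulk of a column stays in plentiful system RAM, and only the slice being computed on is staged into scarce, very fast VRAM.

```python
# Sketch of RAM-vs-VRAM tiering with NumPy/CuPy; Kinetica's memory manager
# handles this placement automatically.
import numpy as np
import cupy as cp

# Large working set kept in system RAM (cheap and plentiful): ~800 MB.
column = np.random.rand(200_000_000).astype(np.float32)

# Stage a slice into VRAM (fast but scarce) only while computing on it.
chunk = cp.asarray(column[:50_000_000])
result = float(cp.mean(chunk))

# Release the VRAM once the hot computation is done.
del chunk
cp.get_default_memory_pool().free_all_blocks()
print(result)
```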

Data Persistence

Storing data in-memory is fast, but it can be lost if the power goes out or something causes the database to crash. Kinetica can also persist data to disk. Many other GPU databases lack persistence, which means that if your GPU DB crashes, your data is gone.

Scale-out Architecture

Scaling up by adding more GPU cards, more memory, and premium hardware will only take you so far. Look for a GPU database that can distribute the data and workloads across multiple machines on affordable hardware. Pick a GPU DB solution that intelligently manages partitioning rows (sharding), and fully distributes data processing across nodes.
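A toy sketch of the sharding idea (not Kinetica's actual placement logic): each row is routed to a node by hashing its shard key, so inserts and queries can be spread across the cluster.

```python
# Toy hash-sharding sketch; a real GPU database manages shard placement,
# rebalancing, and replication for you.
import hashlib

NODES = ["node-0", "node-1", "node-2", "node-3"]

def node_for(shard_key: str) -> str:
    """Route a row to a node by hashing its shard key."""
    digest = hashlib.md5(shard_key.encode("utf-8")).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

rows = [{"customer_id": f"c{i}", "amount": i * 1.5} for i in range(8)]
for row in rows:
    print(node_for(row["customer_id"]), row)
```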

High Availability

Built-in high availability (HA) and automated replication ensure continued operation even if a component fails (fault tolerance), and make systems simpler to maintain. Kinetica offers Active/Active HA and automatic replication between clusters. This eliminates the single point of failure in a given cluster and provides reliable and fast recovery from a crash.

What can you do with a GPU Database?

Geospatial Analysis

The GPU is ideal for processing data positioned in space and time. Geospatial indexes can become very complex, as systems typically partition data into separate buckets for different resolutions of the earth. A GPU can compute massive amounts of this geospatial data directly, with ease. A GPU DB also opens the door to powerful, high-performance visualization capabilities.
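As a small illustration of that brute-force approach (CuPy again, not Kinetica's geospatial engine), the sketch below filters tens of millions of points against a bounding box in parallel, with no spatial index at all.

```python
# Brute-force bounding-box filter on the GPU with CuPy -- the kind of full
# scan a GPU database can afford instead of maintaining spatial indexes.
import cupy as cp

n = 50_000_000
lon = cp.random.uniform(-180, 180, n, dtype=cp.float32)
lat = cp.random.uniform(-90, 90, n, dtype=cp.float32)

# Bounding box roughly covering the continental United States.
mask = (lon > -125) & (lon < -66) & (lat > 24) & (lat < 50)

hits = int(cp.count_nonzero(mask))
print(f"{hits} of {n} points fall inside the box")
```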

Kinetica also comes with a native visualization pipeline capable of leveraging the GPU to quickly render vector-based map visualizations on the fly.

GPU Acceleration Enables Location-Based Analytics at Scale »

Animation showing 4 billion Twitter posts color-coded by year. Even through the web, it takes seconds to filter the data by arbitrary geometries and interact with individual points.

Try it out yourself
In-database processing in Kinetica makes it possible for data scientists to develop models on the same database platform as business users.

Machine Learning and Predictive Analytics

As questions asked of the data become more complex, data scientists look to more advanced tools for algorithms and modeling. A user-defined function (UDF) framework enables custom algorithms and code to run on data directly within the database. For workloads that are already computationally intensive, particularly machine learning, in-database analytics on a GPU-accelerated database opens up opportunities that were previously unimaginable.

UDFs make it possible to run custom computation as well as data processing within the database. This provides a highly flexible means of performing complex, advanced analytics at scale via grid computing, linking the idle processing power of every computer in the network to take on a single job, divided into multiple tasks. BI and AI workloads can run together on the same GPU-accelerated database platform. User-defined functions can be written in C++, Java, or Python. Kinetica also ships with bundled TensorFlow for machine learning and deep learning use cases. Customers such as GlaxoSmithKline, ScotiaBank, and one of the world’s largest retailers leverage the UDF framework for advanced predictive analytics use cases.
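Conceptually, an in-database function receives the slice of table data held by each worker and hands back new columns. The sketch below shows only that idea; it is not Kinetica's actual UDF proc API, and the column names are hypothetical.

```python
# Conceptual sketch of an in-database UDF: each node invokes the function
# against the table shards it owns. This is NOT the actual Kinetica proc API.
import numpy as np

def score_udf(input_columns: dict) -> dict:
    """Toy 'model': flag amounts more than two standard deviations above the mean."""
    amounts = np.asarray(input_columns["amount"], dtype=np.float64)
    threshold = amounts.mean() + 2 * amounts.std()
    return {
        "txn_id": input_columns["txn_id"],
        "is_anomaly": (amounts > threshold).astype(np.int8),
    }

# A worker's local slice of the table (hypothetical column names).
local_slice = {"txn_id": list(range(6)),
               "amount": [10.0, 12.0, 9.5, 11.0, 500.0, 10.5]}
print(score_udf(local_slice))
```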

Speed Layer for Your Data Lake

What if you could hydrate billions of rows per minute from a data lake, as needed, into a powerful, flexible high-performance analytics speed layer?

Hadoop and cloud-based object stores (such as S3 and ADLS) have become the de facto stores for the vast volumes of data generated by modern business. Query frameworks such as Hive, Impala, and Presto have made it easier to query this data using SQL, but these tools typically suffer from a shortage of computing cycles, move slowly, and are better suited to batch workloads. They do not work well for use cases where interactive ‘what-if’ analysis is required.

A fully distributed GPU database spreads ingest across multiple nodes and reduces reliance on indexing, making it simple and fast to load terabytes of data from data lakes. In a recent implementation of Kinetica’s GPU database with the new Spark Connector, a 10-node Kinetica cluster was able to ingest 4 billion ORC records per minute (14 columns per record) from a Kerberized Hadoop cluster.
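That pattern looks roughly like the PySpark sketch below; the connector's format string and option names are placeholders to verify against the Kinetica Spark Connector documentation.

```python
# PySpark sketch: bulk-load ORC files from a data lake into Kinetica through
# the Spark Connector. The format string and option names are placeholders;
# check the connector documentation for the exact values.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lake-to-kinetica").getOrCreate()

# Read ORC data straight out of the data lake (HDFS, S3, or ADLS path).
df = spark.read.orc("hdfs:///datalake/transactions/")

# Multi-head ingest: each Spark executor streams rows to the cluster in parallel.
(df.write
   .format("com.kinetica.spark")                          # assumed package name
   .option("database.url", "http://kinetica-head:9191")   # placeholder option
   .option("table.name", "transactions")                  # placeholder option
   .mode("append")
   .save())
```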

Kinetica as a Speed Layer for Your Data Lake »

Kinetica Spark Connector with Multi-Head Ingest
FREE eBook

Introduction to GPUs for Data Analytics

Advances and Applications for Accelerated Computing