Machine Learning Warehouse
...Powered by GPUs

Kinetica is a distributed, in-memory GPU database. Its CUDA-optimized user-defined functions (UDF) API makes it simpler to prepare data, then train and deploy predictive analytics and machine learning models.

How Does a GPU Database Fit in Your Machine Learning Stack? »

Recent Webinar: Introducing The AI Database: A Prerequisite to Operationalizing Machine and Deep Learning. Watch Now »

Accelerate the Machine Learning Pipeline

Bring the model to the data, not the data to the model

Work with Operational Data

With Kinetica, custom algorithms and machine learning models can run directly in-database – on data as it arrives. There is no need to export data to a separate data-science environment.
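As a plain-Python illustration of that in-database pattern (a hypothetical sketch, not Kinetica's actual UDF API), scoring logic is applied to records where they live, so no rows leave the database:

```python
# Hypothetical sketch of in-database scoring: the model function runs
# where the data lives, so nothing is exported to a separate
# data-science environment. All names here are illustrative.

def score(record):
    """A toy 'model': flag a transaction as risky from two features."""
    return 1.0 if record["amount"] > 900 and record["velocity"] > 3 else 0.0

def score_in_place(table):
    """Apply the model to each row where it sits,
    attaching the prediction as a new column."""
    for row in table:
        row["risk_score"] = score(row)
    return table

arriving = [
    {"amount": 1200.0, "velocity": 5},
    {"amount": 40.0, "velocity": 1},
]
scored = score_in_place(arriving)
print([r["risk_score"] for r in scored])  # → [1.0, 0.0]
```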


Prepare Data and Train Models Faster

Kinetica is an advanced analytics database powered by GPUs. Abundant compute means queries do not depend on intricate indexing or down-sampling: data can be prepared interactively and models can be trained faster.

Deploy your Models

Once models have been trained and tested, they can be deployed for use with live data from within Kinetica. UDFs can be called by systems interacting with the database, making it simpler to expose AI models from within BI tools.
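The deploy-then-invoke pattern described above can be sketched as a simple registry: a model is published once under a name, and any client (a BI tool, for example) executes it by name against live rows. The registry and function names below are hypothetical, not Kinetica's UDF interface:

```python
# Hypothetical sketch of deploy-then-invoke: a trained model is registered
# once under a stable name, and callers (e.g. a BI tool) execute it by
# name on live data. This mimics, but is not, Kinetica's actual API.

PROC_REGISTRY = {}

def deploy_proc(name, fn):
    """Register a tested model under a stable name."""
    PROC_REGISTRY[name] = fn

def execute_proc(name, rows):
    """Invoke a deployed model on live rows, as a BI dashboard might."""
    return PROC_REGISTRY[name](rows)

# A trained-and-tested "model": mean of a numeric column.
deploy_proc("avg_spend", lambda rows: sum(r["spend"] for r in rows) / len(rows))

live_data = [{"spend": 10.0}, {"spend": 30.0}]
print(execute_proc("avg_spend", live_data))  # → 20.0
```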

Do More with Less

Distribute processing of compute-intensive ML workloads across a cluster of nodes. Vector and matrix processing delivers results quickly, even with large datasets.
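The data-parallel idea behind this can be sketched on a single machine, with thread workers standing in for cluster nodes (a simplified analogy, not Kinetica's distribution layer): the dataset is split into shards, each worker computes a partial result, and the partials are combined.

```python
# Single-machine analogy for distributing a compute-intensive workload:
# split the data into chunks, process each chunk on a separate worker
# (standing in for a cluster node), then combine the partial results.
from concurrent.futures import ThreadPoolExecutor

def partial_dot(chunk):
    """Partial dot product over one shard of two vectors."""
    return sum(a * b for a, b in chunk)

def distributed_dot(x, y, workers=4):
    pairs = list(zip(x, y))
    size = max(1, len(pairs) // workers)
    chunks = [pairs[i:i + size] for i in range(0, len(pairs), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_dot, chunks))  # combine partials

print(distributed_dot(list(range(1000)), [2.0] * 1000))  # → 999000.0
```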

How it Works

Kinetica provides an extensible and highly flexible framework for connecting distributed data with custom code and machine learning libraries.

User-defined functions (UDFs) enable GPU-accelerated data science logic to power advanced business analytics on a single database platform. UDFs have direct access to CUDA APIs and can take full advantage of Kinetica's distributed architecture. Because Kinetica is designed from the ground up to utilize the GPU, users have an advanced set of tools for distributed computation.

UDFs receive filtered data, perform arbitrary computations, and then save their output to a separate table. The brute-force parallel compute power of the GPU delivers fast responses, which makes it highly valuable for interactive analytics and experimentation.
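That flow (filtered input in, arbitrary computation, results out to a separate table) can be sketched in plain Python; the function and table names are illustrative only:

```python
# Illustrative sketch of the UDF data flow: the function receives
# already-filtered rows, performs an arbitrary computation, and emits
# results destined for a separate output table. Pure-Python stand-in,
# not Kinetica's UDF API.

def my_udf(input_rows):
    """Arbitrary computation: per-group aggregation over filtered rows."""
    totals = {}
    for row in input_rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["sales"]
    # Emit results as rows for a separate output table.
    return [{"region": k, "total_sales": v} for k, v in sorted(totals.items())]

# Rows the database has already filtered (e.g. WHERE year = 2017).
filtered = [
    {"region": "east", "sales": 100.0},
    {"region": "west", "sales": 50.0},
    {"region": "east", "sales": 25.0},
]
output_table = my_udf(filtered)
print(output_table)
# → [{'region': 'east', 'total_sales': 125.0}, {'region': 'west', 'total_sales': 50.0}]
```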

GPUs are also particularly well suited for the types of vector and matrix operations found in machine learning and deep learning systems.

Advanced In-Database Analytics on the GPU: Why it Matters »

Kinetica UDFs

Democratize Data Science

Deploy and test data science models on the same database platform used for business analytics. There is no need to export data to specialized high-performance computing (HPC) systems staffed by data scientists. With in-database processing on Kinetica, BI and AI workloads can run together on the same GPU-accelerated platform.

Converged AI and BI

Business users can be empowered to do more sophisticated analysis without resorting to code. Data science teams can develop and test gold-standard simulations and algorithms while making them directly available on the systems used by end users. In addition to query, reporting, and analytics, users could call a Monte Carlo simulation, or another custom algorithm, straight from their BI dashboard.

Bringing AI to BI: Observations from the Field »

Kinetica v6 – Now Bundled with TensorFlow

With Kinetica, businesses can now manage complex, compute-intensive TensorFlow workloads as part of a comprehensive database solution. The combination of Kinetica and TensorFlow offers a unified solution for data preparation, model training, and model deployment into production.
Learn More »

Related Resources

GPU Data Analytics eBook
FREE eBook

Introduction to GPUs for Data Analytics

Advances and Applications for Accelerated Computing

Take the Kinetica Challenge!

Sometimes benchmarks and marketing copy can sound too good to be true. The best way to appreciate the possibilities that GPU acceleration brings to large-scale analytics is to try it with your own data, your own schemas and your own queries.

Contact us, and we'll set you up with a trial environment so you can experience it for yourself.