How to Explain GPUs to Your CEO

It’s no wonder we are in the midst of a major IoT (Internet of Things) boom when you consider that IoT-enabled products can solve business problems and predict failures far faster than ever imagined. But all this potential is just theory if you don’t have enough processing power to analyze all of your data. Beyond IoT, we are in the era of “fast data.” Enormous value can be derived from fast data, but only if it can be processed as it streams in. Organizations will need to change, adapt, or move on from their traditional analytics database technologies, and GPU databases offer a major advantage to those early adopters looking to capitalize on today’s fast data.

GPUs are powering the world’s most advanced applications. In fact, Kinetica was designed from the ground up to leverage the parallel processing power of GPUs, opening new doors in advanced analytics, geospatial analytics, and machine learning applications.

In our recent webinar, Explaining GPUs to Your CEO: The Power of Productization, Dan Woods, CTO and Editor of CITO Research, along with Kinetica Co-Founder, President, and Chief Strategy Officer Amit Vij, explained how GPUs and GPU databases can handle the quantity, speed, and diversity of today’s fast data.

So how do you justify the investment in GPUs to leadership, or even your CEO? Here are answers to some common questions that may help you explain GPUs and GPU databases and why they’re critical for data-driven organizations moving forward.

What are GPUs and why should I care about them? The CPU has often been considered the “brains” of the PC, but the GPU (graphics processing unit) has long since outgrown its original role as a basic graphics controller. GPUs are designed around thousands of small, efficient cores that excel at applying the same instruction to many data elements in parallel, exactly the kind of compute-intensive work that large data sets demand. That design makes GPUs a far more cost-effective way to address the compute performance bottleneck: thanks to massively parallel architectures, with some GPUs containing nearly 6,000 cores versus the 16 to 32 cores found in today’s most powerful CPUs, they can process data up to 100 times faster than configurations containing CPUs alone.
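To make that parallelism concrete, here is a minimal CUDA sketch of adding two large arrays, where each GPU thread handles exactly one element; the kernel and variable names are illustrative and not taken from any Kinetica code.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread adds one pair of elements: the same instruction,
// executed across thousands of cores at once.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;                 // one million elements
    size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);          // unified memory keeps the sketch short
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %.1f\n", c[0]);         // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

On a CPU, the equivalent loop would step through the million elements largely one at a time; on a GPU, thousands of these additions are in flight at once.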

GPUs can usually crunch large volumes of data far more efficiently and quickly than CPUs—which process data sequentially and which are better suited to serial tasks such as branching and file operations. The advanced capabilities of GPUs make them ideal for accelerating computational workloads in areas such as IoT, location-based analytics, advanced in-database analytics, and machine learning. Kinetica’s GPU-accelerated database leverages GPUs to do parallel processing, which enables brute-force compute at the time of query with billions of objects. In tests, GPU-accelerated databases return results for advanced analytical queries on billions of rows of data in well under a second. All of this is done by leveraging commodity hardware as a scale-out architecture.
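As a rough picture of what “brute-force compute at the time of query” looks like at the hardware level, the hedged sketch below scans a numeric column in parallel and sums only the rows that match a filter; the column name, predicate, and data are invented for illustration, and a real GPU database such as Kinetica exposes this through SQL rather than hand-written kernels.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Brute-force filtered aggregation: every thread inspects one row,
// and matching values are accumulated with an atomic add.
__global__ void sumWhereGreater(const float *amounts, int n,
                                float threshold, float *result) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n && amounts[i] > threshold) {
        atomicAdd(result, amounts[i]);
    }
}

int main() {
    const int n = 1 << 22;                     // ~4 million rows
    float *amounts, *result;
    cudaMallocManaged(&amounts, n * sizeof(float));
    cudaMallocManaged(&result, sizeof(float));
    for (int i = 0; i < n; ++i) amounts[i] = (float)(i % 100);
    *result = 0.0f;

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    // Roughly the shape of: SELECT SUM(amount) FROM t WHERE amount > 90
    sumWhereGreater<<<blocks, threads>>>(amounts, n, 90.0f, result);
    cudaDeviceSynchronize();

    printf("sum = %.0f\n", *result);
    cudaFree(amounts); cudaFree(result);
    return 0;
}
```

The point is not the kernel itself but the pattern: instead of maintaining indexes and pre-aggregations, the GPU simply scans everything fast enough to answer at query time.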

Why are GPUs getting so much press these days? It turns out that AI and machine learning algorithms benefit particularly from GPUs, because their core computations are operations on vectors and matrices of numbers that can be processed in parallel.
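To see why matrix-heavy workloads map so naturally onto GPUs, the simplified sketch below multiplies a matrix by a vector with one thread per output row; real ML frameworks rely on heavily tuned libraries such as cuBLAS and cuDNN rather than naive kernels like this, so treat it only as an illustration of the principle.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One thread per output row: each thread computes one dot product,
// so all rows of the matrix are processed at the same time.
__global__ void matVec(const float *M, const float *x, float *y,
                       int rows, int cols) {
    int r = blockIdx.x * blockDim.x + threadIdx.x;
    if (r < rows) {
        float acc = 0.0f;
        for (int c = 0; c < cols; ++c) {
            acc += M[r * cols + c] * x[c];
        }
        y[r] = acc;
    }
}

int main() {
    const int rows = 4096, cols = 1024;
    float *M, *x, *y;
    cudaMallocManaged(&M, rows * cols * sizeof(float));
    cudaMallocManaged(&x, cols * sizeof(float));
    cudaMallocManaged(&y, rows * sizeof(float));
    for (int i = 0; i < rows * cols; ++i) M[i] = 0.001f;
    for (int i = 0; i < cols; ++i) x[i] = 1.0f;

    int threads = 256;
    int blocks = (rows + threads - 1) / threads;
    matVec<<<blocks, threads>>>(M, x, y, rows, cols);
    cudaDeviceSynchronize();

    printf("y[0] = %.3f\n", y[0]);   // expect ~1.024
    cudaFree(M); cudaFree(x); cudaFree(y);
    return 0;
}
```

Each of the 4,096 output rows is an independent dot product, which is why adding more cores translates almost directly into more throughput for this kind of math.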

The availability of both big data and computing power has enabled incredible advancements in deep learning, which powers speech recognition, medical diagnosis, autonomous cars, and many other use cases. In fact, many of the early victories in AI, such as AlphaGo and the ImageNet competitions, were achieved with algorithms running on GPUs. A GPU-accelerated database also makes it possible to ingest, analyze, and render results on a single platform, so you don’t need to move data among different layers or technologies to get the desired results.

Press outlets such as SiliconANGLE have recognized and written about the GPU’s prominent role in increasing computer performance:

“Kinetica’s database harnesses GPUs to speed up applications that lend themselves well to parallel execution. GPU coprocessors are seen as one way to continue to ramp up computer performance as the once-exponential gains of Moore’s Law begin to wind down.”

Does this mean I no longer need a CPU-based architecture? No. Think of GPUs not as a replacement for CPUs, but as the way forward for you to tackle the new workloads that modern business demands—for example, if you need to perform high-intensity processing on high-velocity big data feeds to gain actionable real-time intelligence. GPUs are much faster and offer new opportunities for analytics, BI, and AI that are not possible with CPUs.

Can I use GPUs to accelerate our data analysis? Yes! In most companies working with big data, the focus has been on batch processing because compute has been bound by CPUs. GPU-based systems allow big data analytics and AI insights to be available in time to matter for real-time business processes. You can combine CPU and GPU technology to analyze large streams of data and get answers and insights much more quickly. Instead of looking at reports that were generated the night before, you’ll be able to see what’s happening in the moment. The workloads best suited to acceleration right now use SQL to narrow large amounts of data down to the subsets that are interesting, then apply machine learning to those subsets and score the resulting models.
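As a hedged sketch of that last step, scoring a model over a subset of rows, the example below applies a tiny logistic-regression model to each row in parallel; the feature layout, weights, and sizes are made up, and in practice this kind of scoring would typically be expressed through the database’s own in-database analytics or user-defined functions rather than standalone CUDA.

```cuda
#include <cstdio>
#include <math.h>
#include <cuda_runtime.h>

#define FEATURES 4   // illustrative: four features per row

// One thread scores one row: dot product with the model weights,
// then a sigmoid to turn the result into a probability.
__global__ void scoreRows(const float *rows, const float *weights,
                          float bias, float *scores, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float z = bias;
        for (int f = 0; f < FEATURES; ++f) {
            z += rows[i * FEATURES + f] * weights[f];
        }
        scores[i] = 1.0f / (1.0f + expf(-z));
    }
}

int main() {
    const int n = 1 << 20;   // rows already filtered down by SQL
    float *rows, *weights, *scores;
    cudaMallocManaged(&rows, n * FEATURES * sizeof(float));
    cudaMallocManaged(&weights, FEATURES * sizeof(float));
    cudaMallocManaged(&scores, n * sizeof(float));
    for (int i = 0; i < n * FEATURES; ++i) rows[i] = 0.5f;
    for (int f = 0; f < FEATURES; ++f) weights[f] = 0.25f;

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    scoreRows<<<blocks, threads>>>(rows, weights, 0.0f, scores, n);
    cudaDeviceSynchronize();

    printf("score[0] = %.3f\n", scores[0]);   // sigmoid(0.5) ~ 0.622
    cudaFree(rows); cudaFree(weights); cudaFree(scores);
    return 0;
}
```

Because every row is scored independently, a million rows can be scored in a single kernel launch.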

What can I do to encourage our organization to move forward with GPUs? The most important thing you can do is to provide leadership, encouragement, and budget for identifying the GPU use cases that will deliver the biggest leaps forward within your organization. Research your options and choose a solution that meets all of your analytical needs, scales as needed, and is purpose-built to take full advantage of the GPU. It’s also important to understand where your use of data and analytics is running into bottlenecks: make a list of all the areas where you can’t get answers fast enough and prioritize it. Start with a trial version or pilot project to gain familiarity with the technology, then move on to experiments and POCs to see the full value of GPUs.

What types of things are possible with GPU-accelerated databases? BI acceleration, location-based analytics, machine learning, and real-time IoT analytics are just a few examples.

For general parallel workloads, what is the optimal ratio of GPUs to CPUs? If you are doing OLAP database processing, we recommend roughly a 1:2 CPU-to-GPU ratio, but if your general workload is machine learning, deep learning, or geovisualization, the more GPUs the better, as GPUs are the ideal hardware platform for those algorithms.

How do I know if I should use GPUs for a specific application? GPUs are ideal for workloads dominated by floating-point arithmetic, vector processing, and SIMD-style operations, for computations over massive matrices, and for ingesting streaming data and retraining your models as new data arrives.
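That last item, retraining a model as new data streams in, can be pictured as a repeated mini-batch update. The hedged sketch below accumulates a gradient for a tiny linear model over one arriving batch and nudges the weights accordingly; all names, sizes, and the learning rate are invented for illustration, and production systems would use an established training framework instead.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

#define FEATURES 4   // illustrative feature count

// Each thread handles one incoming record: it computes the model's
// error on that record and accumulates the gradient atomically.
__global__ void accumulateGradient(const float *batch, const float *targets,
                                   const float *weights, float *grad, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float pred = 0.0f;
        for (int f = 0; f < FEATURES; ++f) {
            pred += batch[i * FEATURES + f] * weights[f];
        }
        float err = pred - targets[i];
        for (int f = 0; f < FEATURES; ++f) {
            atomicAdd(&grad[f], err * batch[i * FEATURES + f]);
        }
    }
}

// Apply the averaged gradient: one tiny kernel launch per batch.
__global__ void applyGradient(float *weights, const float *grad,
                              float lr, int n) {
    int f = threadIdx.x;
    if (f < FEATURES) {
        weights[f] -= lr * grad[f] / n;
    }
}

int main() {
    const int batchSize = 1 << 16;
    float *batch, *targets, *weights, *grad;
    cudaMallocManaged(&batch, batchSize * FEATURES * sizeof(float));
    cudaMallocManaged(&targets, batchSize * sizeof(float));
    cudaMallocManaged(&weights, FEATURES * sizeof(float));
    cudaMallocManaged(&grad, FEATURES * sizeof(float));

    for (int i = 0; i < batchSize * FEATURES; ++i) batch[i] = 1.0f;
    for (int i = 0; i < batchSize; ++i) targets[i] = 2.0f;
    for (int f = 0; f < FEATURES; ++f) { weights[f] = 0.0f; grad[f] = 0.0f; }

    // One "streaming" step: in practice this runs repeatedly as batches arrive.
    int threads = 256, blocks = (batchSize + threads - 1) / threads;
    accumulateGradient<<<blocks, threads>>>(batch, targets, weights, grad, batchSize);
    applyGradient<<<1, FEATURES>>>(weights, grad, 0.1f, batchSize);
    cudaDeviceSynchronize();

    printf("weights[0] = %.3f\n", weights[0]);   // expect 0.200 after one update
    cudaFree(batch); cudaFree(targets); cudaFree(weights); cudaFree(grad);
    return 0;
}
```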

How much faster is a GPU database compared to a standard in-memory database? The Kinetica database provides roughly a 50 to 80x performance boost on 1/10th the hardware.

Watch the webinar to learn more about how GPUs and the software that leverages them can help you get the most out of your data and augment your existing data supply chains. And be sure to read Dan Woods’ Forbes article, Explaining GPUs To Your CEO: The Power Of Productization, which takes an even deeper dive into how to sell GPUs to your CEO.
