Importing Data

By now, you should have installed the product and had a quick overview of the tools you can use to manage it and query data. In this tutorial, we’ll show you some of the ways to load data. The datasets loaded here will be used in later tutorials.

You’ll learn how to use three import methods:

  • Drag and Drop for quickly importing small sets of data
  • Advanced Import with the Kinetica Input/Output (KIO) Tool for batch importing large sets of data with more control over the process
  • Streaming ingest with the Active Analytics Workbench (AAW) for ingesting continuously updating data to feed views and applications

Create Target Table

Before importing, we’ll create the target table for the Drag and Drop import, adding Kinetica-specific attributes to the columns that save space and improve performance.

To create the table using the SQL tool in GAdmin:

  1. Navigate to GAdmin (http://<kinetica-host>:8080/)
  2. Click Query > SQL.
  3. Enter the following CREATE TABLE statement into the SQL Statements text area.

    CREATE TABLE nyct2010
    (
        gid        BIGINT       NOT NULL,
        geom       GEOMETRY     NOT NULL,
        CTLabel    DOUBLE       NOT NULL,
        BoroCode   BIGINT       NOT NULL,
        BoroName   VARCHAR(256) NOT NULL,
        CT2010     BIGINT       NOT NULL,
        BoroCT2010 BIGINT       NOT NULL,
        CDEligibil VARCHAR(64)  NOT NULL,
        NTACode    VARCHAR(64)  NOT NULL,
        NTAName    VARCHAR(64)  NOT NULL,
        PUMA       BIGINT       NOT NULL,
        Shape_Leng DOUBLE       NOT NULL,
        Shape_Area DOUBLE       NOT NULL
    );
    
  4. Click Run SQL to create the table.

Drag and Drop

Kinetica allows you to import data by dragging and dropping CSV, ORC, Apache Parquet, or Zip files (containing Shapefiles) into GAdmin. Drag-and-drop importing attempts to use the file’s first record as a header, and the file’s name is used as the table’s name in Kinetica. The Drag-and-Drop documentation covers additional details and limitations.

We’re going to import, via drag-and-drop, the nyct2010.csv data file, which contains the NYC Neighborhood Tabulation Areas (NTA) dataset: geospatial boundaries for neighborhoods in New York. The file can be downloaded using the following link: nyct2010.csv

Download the above CSV file and save it locally to your disk (if you are using Linux, we suggest saving it to /tmp, where it can be read by Kinetica). Now let’s import the data:

  1. Navigate to GAdmin (http://<kinetica-host>:8080/)
  2. Click Data > Import.
  3. Drag the nyct2010.csv file from a file explorer window into the drop area of GAdmin. You can also click Choose File in GAdmin to select the nyct2010.csv file from your disk.

The data will begin importing. Once the import completes, click View Table to view the table, or click Data > Table to see it listed alongside the other tables and collections in the database.
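
To verify the import, you can run a quick check from the same Query > SQL tool used earlier. The query below is only an illustrative sketch; the counts it returns depend on the copy of nyct2010.csv you downloaded.

    -- Count the imported records per borough
    SELECT BoroName, COUNT(*) AS record_count
    FROM nyct2010
    GROUP BY BoroName;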

Advanced Import with KIO

Kinetica also allows you to ingest data from a variety of sources, including Sybase IQ, Oracle, PostgreSQL, and AWS S3, using the KIO tool. While KIO has a command-line interface, this tutorial uses the GAdmin KIO user interface, also known as Advanced Import. Advanced Import is well suited to importing larger files and offers more control over the incoming schema. We’re going to load a historical NYC taxi dataset from an Apache Parquet file in a public AWS S3 bucket. You can read more about the full public dataset, which contains taxi trip information, at nyc.gov.

To import the NYC taxi dataset using Advanced Import, first select the source and destination:

  1. Navigate to GAdmin (http://<kinetica-host>:8080/)
  2. Click Data > Import.
  3. Click Advanced Import.
  4. In the Source section:

    1. Select AWS S3 for the Datasource.
    2. Select Parquet for the Format.
    3. Input the following bucket File Path for the NYC taxi dataset: /kinstart/taxi_data.parquet
  5. In the Target section, input taxi_data for the Table name. Leave the Collection blank, and leave the Batch Size and Spark Options at their default values.

Next, configure the columns of the target table that will be created:

  1. Click Configure Columns. The data will be analyzed, and the projected column names, types, and sizes will be displayed.
  2. Click the Subtype drop-down associated with the vendor_id column, and choose char4.
  3. Click the Subtype drop-down associated with the store_and_fwd_flag column, and choose char1.
  4. Click the Subtype drop-down associated with the payment_type column, and choose char16.

Lastly, click Transfer Dataset.

The Transfer Status window appears, showing the status of the import. Once the import completes, click View Table to view the table.
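
As with the drag-and-drop import, you can sanity-check the new table from Query > SQL in GAdmin. The queries below are only a sketch: vendor_id and payment_type are columns configured above, while the rest of the schema depends on what Advanced Import derives from the Parquet file.

    -- Check how many historical trips were loaded
    SELECT COUNT(*) AS total_trips FROM taxi_data;

    -- Preview a few records
    SELECT vendor_id, payment_type
    FROM taxi_data
    LIMIT 5;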

Streaming Ingest with AAW

Kinetica can ingest streaming data via Kafka using the Active Analytics Workbench (AAW). We’re going to begin streaming in a current-day NYC taxi dataset, which updates continuously with taxi cab transactions, to supplement the historical NYC taxi dataset you just loaded from a Parquet file. A public Kafka broker is available to serve this data.

Before we can create the streaming ingest, we need to create a Kafka credential in AAW.

  1. Navigate to AAW (http://<kinetica-host>:8070/)
  2. Click Security > Credentials.
  3. Click Add New Credential.
  4. Select Kafka from the Credential Type drop-down menu.
  5. Input Quickstart Kafka Broker for the Name.
  6. For the Description, input Public Kafka broker for the Kinetica Quick Start.
  7. Input quickstart.kinetica.com:9092 for the Connection String.
  8. Input nyctaxi for the Topic.
  9. Click Create.

Now that a Kafka credential has been created, we can use it to begin streaming in data.

  1. From AAW (http://<kinetica-host>:8070/), click Data > Ingests.
  2. Click + Add New Ingest > New Streaming.
  3. Input NYC Taxi Streaming Ingest for the ingest Name.
  4. Input Streaming ingest of NYC Taxi transactional data from public Kafka broker into Kinetica for the Description and click Next.
  5. Click Search next to Credentials and select the Quickstart Kafka Broker you created in the previous steps. Then, click Select.
  6. Under Destination, input taxi_data for the Table name and click Next.
  7. Review the Summary and click Create.
  8. Once on the Ingest Details page, click Start to begin the data ingestion.
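
Once the ingest is started, records stream into the same taxi_data table that holds the historical load, so its row count should grow over time. Below is a minimal check you can re-run from Query > SQL in GAdmin; how quickly the count climbs depends on the message rate of the public broker.

    -- Re-run this while the ingest is running; the count should keep increasing
    SELECT COUNT(*) AS total_trips FROM taxi_data;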

Additional Ingestion Methods

The three methods discussed above are among the most common ways to get your data into Kinetica. The list below covers additional ingestion methods and their use cases.

  • Use Multi-head Ingest to insert sharded data directly into the nodes of your cluster, bypassing the head node and enabling the best ingest speeds.
  • Use the Spark Connector to enable quick ingestion of large data sets and to stream data out of Kinetica.
