
Announcing Apache Pinot 1.0™

· 13 min read
Hubert Dulay, Mayank Shrivastava, Neha Pawar

What Makes a “1.0 Release?”#

Apache Pinot has continuously evolved since the project’s inception within LinkedIn in 2013. Back then it was developed at a single company with a single use case in mind: to power “who viewed my profile?” Over the ensuing decade the Apache Pinot community expanded to be embraced by many other organizations, and those organizations have expanded its capabilities to address new use cases. Apache Pinot in 2023 is continuously evolving to address emerging needs in the real-time analytics community. Let’s look at how much innovation has gone into Apache Pinot over the years:

  • Upserts — data-in-motion tends to stay in motion, and one of the cornerstone capabilities of Apache Pinot is upsert support, which handles record mutations in real time.
  • Query-time Native JOINs — it was important to get these right, so that they are performant and scalable at high QPS. We discuss this in more detail below.
  • Pluggable architecture — a broad user base requires the ability to extend the database with new customizable index types, routing strategies, and storage options.
  • Handling Semi-Structured/Unstructured Data — Pinot can easily index JSON and text data types at scale.
  • Improving ANSI SQL Compliance — to that end, we’ve added better NULL handling, window functions, and, as stated above, native JOINs.

With all of these features and capabilities, Apache Pinot moves farther and farther from mere database status, and becomes more of a complete platform that can tackle entire new classes of use cases that were beyond its capabilities in earlier days.

First let’s look at what Apache Pinot 1.0 itself is delivering. The first foundational pillar of what makes something worthy of a “1.0” release is software quality. Over the past year, since September 2022, engineers across the Apache Pinot community have closed over 300 issues to provide new features, optimize performance, expand test coverage, and squash bugs.

Features are also a key thing that makes a new release worthy of “1.0” status. The most critical part of the 1.0 release is undoubtedly the Multi-Stage Query Engine, which permits Apache Pinot users to do performant and scalable query-time JOINs.

The original engine works very well for simpler filter-and-aggregate queries, but the broker could become a bottleneck for more complex queries. The new engine resolves this by introducing intermediary compute stages on the query servers, and brings Apache Pinot closer to full ANSI SQL semantics. While this query engine has been available within Apache Pinot since release 0.11.0, with the release of Apache Pinot 1.0 this feature is functionally complete.

(While you can read more below, check out the accompanying blog by Apache Pinot PMC Neha Pawar about using query-time JOINs here).

This post is a summary of the high points, but you can find a full list of everything included in the release notes. And if you’d like a video treatment of many of the main features in 1.0, including some helpful animations, watch here:

Otherwise, let’s have a look at some of the highlighted changes:

  • Join Support - Part of the Multi-Stage Query Engine
  • Improved Upserts - Deletion and Compaction Support
  • Encode User-Specified Compressed Log Processor (CLP) During Ingestion
  • NULL Support
  • Pluggable Index Types [Index Service Provider Interface (SPI)]
  • Improved Pinot-Spark Integration - Spark3 Compatibility

Join Support#

Apache Pinot 1.0 introduces native query-time JOIN support, equipping Pinot to handle a broad spectrum of JOIN scenarios, from user-facing analytics all the way up to ad hoc analytics. Underpinning this innovation is the multi-stage query engine, introduced a year ago, which efficiently manages complex analytical queries, including JOIN operations. This engine alleviates computational burdens by offloading tasks from brokers to a dedicated intermediate compute stage. Additionally, a new planner supporting full SQL semantics enhances Pinot's analytical capabilities.
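For illustration, here is the shape of a query-time JOIN you can now run directly against Pinot (the orders and customers tables and their columns are hypothetical):

SELECT c.region, SUM(o.amount) AS total_amount
FROM orders o
JOIN customers c ON o.customer_id = c.customer_id
GROUP BY c.region
ORDER BY total_amount DESC

Roughly speaking, the leaf stages scan and filter each table locally, rows are shuffled by the join key to an intermediate stage that performs the join, and a final stage aggregates the result.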

JOIN optimization strategies play a pivotal role in Apache Pinot 1.0. These include predicate push-down to individual tables, indexing and pruning to reduce scanning and speed up query processing, smart data layout considerations to minimize data shuffling, and query hints for fine-tuning JOIN operations. With support for all JOIN types and three JOIN algorithms (broadcast join, shuffle distributed hash join, and lookup join), Apache Pinot delivers versatility and scalability. By significantly reducing query latency and simplifying architecture, Apache Pinot 1.0 is a game-changer for real-time OLAP systems.

For more detailed information on JOINs, please visit this blog post.

Discover How Uber is Using Joins in Apache Pinot

For a real-world use case, Uber is already using the new join capabilities of Apache Pinot at scale in production. You can watch this video to learn more.

Upsert Improvements#

Support for upserts is one of the key capabilities Apache Pinot offers that differentiates it from other real-time analytics databases. It is a vital feature when real-time streaming data is prone to frequent updates. While upserts have been available in Apache Pinot since 0.6.0, with 1.0 they include two major new enhancements: segment compaction and delete support for upsert tables.

Segment Compaction for Upsert Tables#

Pinot’s Upsert tables store all versions of a record ingested into immutable segments on disk. Older records unnecessarily consume valuable storage space when they’re no longer used in query results. Pinot’s Segment Compaction reclaims this valuable storage space by introducing a periodic process that replaces completed segments with compacted segments which only contain the latest version of the records.

"task": {  "taskTypeConfigsMap": {    "UpsertCompactionTask": {      "schedule": "0 */5 * ? * *",      "bufferTimePeriod": "7d",      "invalidRecordsThresholdPercent": "30",      "invalidRecordsThresholdCount": "100000"    }  }}

In the example above, bufferTimePeriod is set to “7d”, which means that any segment that was completed over 7 days ago may be eligible for compaction. However, if you want to ensure that segments are compacted without any additional delay, this config can be set to “0d”.

invalidRecordsThresholdPercent is an optional limit on the number of older records allowed in the completed segment, expressed as a percentage of the total number of records in the segment (i.e., old records / total records). In the example, this property is set to “30”, which means that if more than 30% of the records in the completed segment are old, then the segment may be selected for compaction.

invalidRecordsThresholdCount is a similar limit, but allows you to express the threshold as a record count. In the example above, this property is set to “100000”, which means that if the segment contains more than 100K old records then it may be selected for compaction.

Read more about the design of this feature.

DELETE Support for Upsert Tables#

Apache Pinot upsert tables now support deleting records. Supporting delete with upsert avoids the need for the user to explicitly filter out invalid records in the query. SELECT * FROM table WHERE deleted_column != true becomes as simple as SELECT * FROM table. Pinot will only return the latest non-deleted records from the table. This feature opens up support for ingesting Change Data Capture (CDC) data, such as Debezium streams, where the changes from a (typically mutable) source will contain DELETE events.

Delete is implemented as a soft delete in Apache Pinot, with a dedicated boolean column that serves as a delete marker for the record. Pinot automatically filters out records that are marked in this column. For more details, please see the documentation.
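As a sketch, enabling deletes on an upsert table comes down to pointing the table’s upsertConfig at the delete-marker column (the column name below is illustrative):

"upsertConfig": {
  "mode": "FULL",
  "deleteRecordColumn": "deleted_column"
}

Pinot then treats any record whose deleted_column value is true as deleted and filters it out of query results automatically.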

NULL Value Support#

This feature enables Postgres-compatible NULL semantics in Apache Pinot queries. NULL semantics are important for full SQL compatibility, which many BI applications like Tableau rely upon when invoking queries to render dashboards. Previously, Pinot could not represent NULL; the workaround was to use special values like Integer.MIN_VALUE. Now Pinot 1.0 has full support for representing NULL values. By adding NULL support, Pinot 1.0 has increased the Tableau certification pass rate by 90%.
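Note that NULL handling is opt-in. As a minimal sketch, you enable it on the table and then per query via a query option (this assumes the table config has nullHandlingEnabled set to true under tableIndexConfig):

SET enableNullHandling=true;
SELECT sum(col2) FROM mytable;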

Here are some examples of how NULLs will work in Pinot 1.0.

Aggregations#

Given the table below, aggregating columns with NULL values behaves as follows.

col1  col2
1     NULL
2     NULL
3     1

Since col1 does not contain NULL values, all the values are included in the aggregation.

select sum(col1)   -- returns 6
select count(col1) -- returns 3

In the select statement below, the NULL values in col2 are not included in the aggregation.

select sum(col2)   -- returns 1
select count(col2) -- returns 1

Group By#

Pinot now supports grouping by NULL. In the example below, we group by col1, which contains a NULL value.

col1
a
NULL
b
a

The following select statement outputs this result:

select col1, count(*) from table group by col1

col1  count(*)
a     2
b     1
NULL  1

Sorting#

Pinot now allows you to specify the location of NULL values when sorting records. The default is to act as though NULLs are larger than non-NULLs.

Given this list of values, sorting them will result in the following.

values: 1, 2, 3, NULL

Example 1:

NULL values sort BEFORE all non-NULL values.

SQL:

select col from table order by col NULLS FIRST

RESULT: NULL, 1, 2, 3

Example 2:

NULL values sort AFTER all non-NULL values.

SQL:

select col from table order by col ASC NULLS LAST

RESULT: 1, 2, 3, NULL

Example 3:

Default behavior is NULLS LAST.

SQL:

select col from table order by col

RESULT: 1, 2, 3, NULL

Index Pluggability#

Today, Pinot supports multiple index types, like forward index, bloom filter, and range index. Before Pinot 1.0, index types were all statically defined, which means that in order to create a new index type, you’d need to rebuild Pinot from source. Ideally that shouldn’t be the case.

To speed up development, the Index Service Provider Interface (SPI), or index-spi, reduces friction by making it possible to add new index types to Pinot at runtime. Third-party indexes can now be added by including an external JAR in the classpath and adding some configuration, opening up Pinot indexing to lower-friction innovation from the community.

For now, SPI-accessible indexes are limited to single field index types. Features like the star-tree index or other multi-column approaches are not yet supported.
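With index-spi, an index declaration becomes just another entry in the table config’s fieldConfigList. A minimal sketch (the column name and index type here are illustrative):

"fieldConfigList": [
  {
    "name": "playerName",
    "indexes": {
      "inverted": {}
    }
  }
]

A third-party index JAR on the classpath can register its own key under indexes in the same way.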

Apache Pinot Spark 3 Connector and Passing Pinot Options#

Apache Spark users can now take advantage of Pinot’s ability to provide high scalability, low latency, and high concurrency within the context of a Spark 3 cluster using the Apache Pinot Spark 3 Connector.

This connector supports Apache Spark (2.x and 3.x) as a processor to create and push segment files to the database and can read realtime, offline, and hybrid tables from Pinot.

Now you can merge your streaming and batch datasets together in Spark to provide a full view of real-time and historical data for your machine learning algorithms and feature stores.

Performance Features#

  • Distributed, parallel scan
  • Streaming reads using gRPC (optional)
  • Column and filter push down to optimize performance
  • Support for Pinot’s Query Options that include: maxExecutionThreads, enableNullHandling, skipUpsert, etc.

Usability Features#

  • SQL support instead of PQL
  • Overlap between realtime and offline segments is queried exactly once for hybrid tables
  • Schema discovery - If schema is not specified, the connector reads the table schema from the Pinot controller, and then converts to the Spark schema.

Here is an example that reads a Pinot table. By setting the format to “pinot”, Spark will automatically load the Pinot connector and read the “airlineStats” table. The queryOptions property allows you to provide Pinot Query Options.

val data = spark.read
  .format("pinot")
  .option("table", "airlineStats")
  .option("tableType", "offline")
  .option("queryOptions", "enableNullHandling=true,maxExecutionThreads=1")
  .load()
data.createOrReplaceTempView("airlineStats")
spark.sql("SELECT * FROM airlineStats WHERE DEST = 'SFO'").show(100)

Petabyte-Scale Log Storage and Search in Pinot with CLP#

Compressed Log Processor (CLP) is a tool capable of losslessly compressing text logs and searching them in their compressed state. It achieves a better compression ratio than general purpose compressors alone, while retaining the ability to search the compressed log events without incurring the performance penalty of fully decompressing them. Part of CLP’s algorithm was deployed within Uber to compress unstructured Spark logs, as they are generated, achieving an unprecedented compression of 169×.

Log events are generated as JSON objects with user-defined schemas, meaning each event may have different keys. Such user-defined schemas make these events challenging to store in a table with a fixed schema. With Log Storage and Search in Pinot with CLP, users can:

  • Store their log events losslessly (without dropping fields)
  • Store their logs with some amount of compression
  • Query their logs efficiently

The CLP ingestion pipeline can be used on log events from a stream, such as JSON log events ingested from Kafka. The plugin takes two inputs: a JSON record and a list of fields to encode with CLP.

The fields to encode can be configured as shown:

{
    ...
    "tableIndexConfig": {
        ...
        "streamConfigs": {
            ...
            "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.inputformat.clplog.CLPLogMessageDecoder",
            "stream.kafka.decoder.prop.fieldsForClpEncoding": "<field-name-1>,<field-name-2>"
        }
    }
}

<field-name-1>,<field-name-2> is a comma-separated list of the fields you wish to encode with CLP.

You can read the design document for more details into why and how this feature was implemented.

Summary#

Apache Pinot’s evolution is expressly due to the humans behind the code, and in reaching 1.0 release status it is proper and fitting to give credit to the open source project’s key committers. Since its early days, over three hundred contributors have produced more than 1.3 million source lines of code (SLOC).


The introduction of Apache Pinot 1.0 represents an exceptional stride forward in real-time online analytical processing (OLAP) capabilities, marking a watershed moment in the evolution of real-time analytics systems. This release redefines the limits of what can be achieved in the realm of instant data analysis, presenting a game-changing solution for organizations seeking high throughput and low latency in their OLAP queries. If you would like to get started with Apache Pinot 1.0, you can check out the documentation, and download it now.

Resources#

If you want to try out Apache Pinot, the following resources will help you get started:

Download page: https://pinot.apache.org/download/

Getting started: https://docs.pinot.apache.org/getting-started

Join our Slack channel: https://communityinviter.com/apps/apache-pinot/apache-pinot

See our upcoming events: https://www.meetup.com/apache-pinot

Follow us on social media: https://twitter.com/ApachePinot

Segment Compaction for Upsert Enabled Tables in Apache Pinot

· 4 min read
Robert Zych
Software Engineer

I’m happy to share that my 1st feature contribution to the Apache Pinot project (Segment compaction for upsert enabled real-time tables) was merged recently! In this post, I will briefly discuss the problem segment compaction addresses, how to configure it, and what it looks like in action. If you’re unfamiliar with Pinot’s Upsert features, I recommend reviewing Full Upserts in Pinot to get started and Stream Ingestion with Upsert for more information.

Context and Configuration#

As Pinot’s Upsert stores all versions of the record ingested into immutable segments on disk, older records unnecessarily consume valuable storage space when they’re no longer used in query results. Pinot’s Segment Compaction reclaims this valuable storage space by introducing a periodic process that replaces the completed segments with compacted segments which only contain the latest version of the records. I recommend reviewing the Minion documentation if you’re unfamiliar with Pinot’s ability to run periodic processes.

With task scheduling enabled and an available Minion, you can configure segment compaction by adding the following to your table’s config.

"task": {  "taskTypeConfigsMap": {    "UpsertCompactionTask": {      "schedule": "0 */5 * ? * *",      "bufferTimePeriod": "7d",      "invalidRecordsThresholdPercent": "30",      "invalidRecordsThresholdCount": "100000"    }  }}

All the configs above (excluding schedule) determine which completed segments are selected for compaction.

bufferTimePeriod is the amount of time that must have elapsed since the segment finished consuming. In the example above, this has been set to “7d”, which means that any segment that was completed over 7 days ago may be eligible for compaction. However, if you want to ensure that segments are compacted without any additional delay, this config can be set to “0d”.

invalidRecordsThresholdPercent is a limit on the number of older records allowed in the completed segment, expressed as a percentage of the total number of records in the segment (i.e., old records / total records). In the example above, this has been set to “30”, which means that if more than 30% of the records in the completed segment are old, then the segment may be selected for compaction. As segment compaction is an expensive operation, it is not recommended to set this config (or invalidRecordsThresholdCount) too low, since very low thresholds make nearly every segment eligible. This config is optional provided invalidRecordsThresholdCount has been set, and the two can be used in conjunction.

invalidRecordsThresholdCount is a similar limit to invalidRecordsThresholdPercent, but allows you to express the threshold as a record count. In the example above, this has been set to “100000”, which means that if the segment contains more than 100K old records then it may be selected for compaction.

Example Use Case#

I’ve created a data set that includes 24M records. The data set contains 240K unique keys that have each been duplicated 100 times.


After ingesting the data set there are 6 segments (5 completed segments + 1 consuming segment) with a total estimated size of 22.8MB. Submitting the query “set skipUpsert=true; select count(*) from transcript_upsert” before compaction produces the following query result.


After the compaction tasks are complete, the Minion Task Manager UI reports the following.


Segment compaction generates a task for each segment to be compacted. 5 tasks were generated in this case because 90% of the records (3.6–4.5M records) are old in all 5 of the completed segments, therefore exceeding the configured thresholds. If a completed segment only contains old records, it is deleted immediately and a task isn’t generated to compact it.


Submitting the query again, we now see that the count matches the set of 240K unique keys.


Once compaction has completed and the original segments have been replaced with their compacted counterparts we see that the total number of segments remained the same, but the total estimated size dropped to only 2.77MB! Since compaction can yield very small segments, one improvement would be to merge smaller segments into larger ones as this would improve query latency.

Conclusion#

In this brief overview of Segment Compaction I covered the problem it addresses, how you can configure it, and demonstrated its ability to reclaim storage space. I’d like to thank Ankit Sultana, Seunghyun Lee, and especially Jackie Jiang for their feedback and support throughout the design and development stages. If you have any questions or feedback, I’m available on the Apache Pinot Slack.

Star-Tree Index in Apache Pinot - Part 3 - Understanding the Impact in Real Customer Scenarios

· 8 min read

In part 1 of this blog series, we looked at how a star-tree index brought down standalone query latency on a sizable dataset of ~633M records from 1,513 ms to 4 ms — nearly 380x faster!

In part 2 of this blog series, we imitated a real production scenario by firing hundreds of concurrent queries using JMeter and showcased how using a star-tree index helped achieve a >95% drop in p90/p95/p99 latencies and a 126x increase in throughput.

In this part, we will cover some real customer stories that have seen 95% to 99% improvements in query performance using the star-tree index.

AdTech Use Case#

This was for a leading AdTech platform and a somewhat typical use case; users of the platform (advertisers, publishers, and influencers) wanted to see fresh metrics on how their activities (such as online content, ad, and email campaigns) were performing in real-time so they could tweak things as needed. The application team wanted to provide a rich analytical interface to these users so that not only can they see the current performance but also do custom slicing and dicing of data over a period of time. For example, compare their current campaign performance to one they ran two weeks back, do cohort analysis, and so on.

Why was the existing system not working?#

Their existing tech stack was a mix of OSS and custom-built in-house code, which was both operationally difficult to manage and costly to maintain. Yet more importantly, it wasn’t able to meet the basic throughput and latency requirements required by the platform to sustain user growth as well as provide richer analytic capabilities in the product.

The Problem and Challenges?#

When the StarTree Sales Engineering team was engaged, the requirements were very clear:

  • Throughput: Support 50+ QPS during the POC and 200+ for production
  • Latency: P95 latency of 2s, including a query that needed aggregation of ~2 billion rows
  • Scalability: Ability to scale efficiently with future growth in QPS in a non-linear manner

The biggest challenge was the size of data — 20+ TB and growing — and on top of that, a complex aggregation query driving the summary view for users so they can drill further in to get more details. 

This particular query needed to aggregate close to 2 billion records at read time and would be fired for every active user interacting with the platform (so high concurrent QPS). In this case, despite applying all relevant indexes available in their existing system, out-of-the-box query performance was still in the 6-8 second range, which is expected given that the bulk of the work for the query happens in the aggregation phase, not during the filtering phase (which is where indexing helps).

In other OLAP systems they explored, the only option available to handle this use case was ingestion-time rollups — in other words, changing the data to a coarser granularity. However, this means losing access to raw data and also potentially re-bootstrapping if new use cases come down the road that need raw data access.

This is exactly the type of scenario that the star-tree index, unique to Apache Pinot, is designed to address: handling large aggregation queries at scale that need sub-second performance. The best part is you can apply it anytime without any need to reprocess the data or plan any system downtime. (A segment reload to apply table config changes runs as a background task in Apache Pinot.) In this specific case, the same query latencies with the star-tree index applied went down to 15 ms. This meant that with the same infrastructure footprint, StarTree was able to support ~70 QPS (Queries Per Second) vs. <1 QPS for this most complex query, while still keeping the raw data intact.
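For reference, a star-tree index is declared in the table config. A rough sketch of such a config (the dimension and metric names here are invented, not the customer’s actual schema):

"starTreeIndexConfigs": [
  {
    "dimensionsSplitOrder": ["campaignId", "country", "deviceType"],
    "skipStarNodeCreationForDimensions": [],
    "functionColumnPairs": ["SUM__impressions", "SUM__clicks", "COUNT__*"],
    "maxLeafRecords": 10000
  }
]

Because the pre-aggregations are materialized when segments are (re)loaded, adding this config and triggering a segment reload is all it takes to apply it, as noted above.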

Data Size and Infra Footprint for the Pilot: #

  • Total # of records: ~2 Trillion
  • Data Size: ~20 TB
  • Capacity: 72 vCPUs across 9 Pinot servers (8 vCPU, 64GB per node). 

Impact Summary:#

  • 99.76% reduction in latency vs. no Star-Tree Index (6.3 seconds to 15 ms)
  • 99.99999% reduction in amount of data scanned/aggregated per query (>1.8B docs to <2,400)

Visualization of the impact of star-tree index for an AdTech use case with Apache Pinot

CyberSecurity Use Case:#

A cybersecurity company provides its customers with an AI-powered real-time threat detection platform, allowing them to analyze network flow logs in real time with a sophisticated reporting/analytical UI. The initial landing page inside the customer portal is a summary view of everything the platform monitors in the user's environment, with the capability to drill down into the specifics of each item — for example, filtering requests by a specific application or IP address.

Why was the existing system not working?#

Their existing tech stack was a mix of Athena/Presto, which couldn’t meet the throughput and latency requirements with growing data volume across their customers. Additionally, operational overhead around managing some of these systems in-house led to increased cost.

The Problem and Challenges?#

Some of the key requirements that StarTree Cloud cluster had to meet:

  • Throughput: Up to 200 QPS (200 projected by end of year)
  • Latency: <1 second P99
  • High ingestion rate: 300k events/sec
  • ROI: Provide better cost efficiencies

Similar to use case #1, the customer wanted to retain data at the lowest granularity (so no ingestion roll-ups), and given the time column granularity, they faced a similar challenge running the complex aggregation query that powers the summary view. Additionally, the requirement to achieve double-digit throughput (QPS) for the POC with the most efficient compute footprint made it quite challenging.

Given the overhead of complex aggregations, efficient filtering (indexes) wasn’t enough — in this case, with 3 × 4-core/32GB nodes, the query took more than 15 seconds. We added a star-tree index to the table config and did a segment reload, and the results were phenomenal — query latency was reduced to 10 ms.

Data Size and Infra Footprint for the Pilot: #

  • Total # of records: ~8 Billion
  • Data Size: 500+ GB
  • Capacity: 12 vCPUs across 3 Pinot servers (4-core/32GB) 

Impact Summary:#

  • 99.94% reduction in query latency (achieving 100 QPS for the same query with no extra hardware)
  • 99.9998% reduction in data scanned/aggregated per query
  • Happy Customer 😃

Visualization of the impact of star-tree index for a Cybersecurity use case with Apache Pinot

Multiplayer Game Leaderboard Use Case#

A global leader in the interactive entertainment field has an A/B testing / experimentation use case that tracks players’ activities to measure player engagement with new features being rolled out.

The Problem and Challenges?#

Some of the key requirements that StarTree Cloud cluster had to meet:

  • Throughput: 200 QPS
  • Latencies: <1 second P99
  • Ingestion rate: 50K events/sec

Given the overhead of complex aggregations, efficient filtering (indexes) wasn’t enough — in this case, with 1 × 4-core/32GB node, the query took 163 milliseconds. After switching to a star-tree index, query latency was reduced to 7 ms (a reduction of 95.7%).

Data Size and Infra Footprint for the Pilot: #

  • Total # of records: ~34 Million
  • Data Size: 500+ GB
  • Capacity: 4 vCPUs - 1 Pinot server (4-cores / 32 GB) 

Impact Summary:#

  • 95.70% improvement in query performance as a result of a 99.9962% reduction in the number of documents and entries scanned.

Visualization of the impact of star-tree index for a Gaming use case with Apache Pinot

Quick Recap: Star-Tree Index Performance Improvements#

Recap Table of the Impact that star-tree index had on three real-world customers using Apache Pinot™

  • 99.99% reduction in data scanned/aggregated per query
  • 95 to 99% improvement in query performance

Disk IO is the most expensive operation in query processing, and the star-tree index reduces it significantly. Instead of scanning raw documents from disk and computing aggregates on the fly, the star-tree index scans pre-aggregated documents for the combination of dimensions specified in the query.

In part 1 of the series, we saw that the star-tree index reduced disk reads by 99.999%, from 584 million entries (in the case of an inverted index) to 2,045. Query latency came down 99.67%, from 1,513 ms to 4 ms! This, in itself, was a HUGE benefit.

In addition to the drastic improvement in query latency, memory and CPU usage decreased significantly, freeing up resources for taking on more concurrent workloads. The cumulative effect was the 126x increase in QPS on this small 4 vCPU Pinot server, as we saw in part 2 of this series.

And finally, in this part 3 of the blog series, we covered three real production use cases that have seen 95% to 99% improvement in query performance using Star-Tree Index.

Intrigued by What You’ve Read?#

The next step is to load your data into an open-source Apache Pinot cluster or, if you prefer, a fully-managed database-as-a-service (DBaaS). Sign up today for a StarTree Cloud account, free for 30 days. If you have more questions, sign up for the StarTree Community Slack.


Real-Time Mastodon Usage with Apache Kafka, Apache Pinot, and Streamlit

· 7 min read
Mark Needham
Developer Advocate

I recently came across a fascinating blog post written by Simon Aubury that shows how to analyze user activity, server popularity, and language usage on Mastodon, a decentralized social networking platform that has become quite popular in the last six months. 

The Existing Solution: Kafka Connect, Parquet, Seaborn and DuckDB #

To start, Simon wrote a listener to collect the messages, which he then published into Apache Kafka®. He then wrote a Kafka Connect configuration that consumes messages from Kafka and flushes them after every 1,000 messages into Apache Parquet files stored in an Amazon S3 bucket. 

Finally, he queried those Parquet files using DuckDB and created some charts using the Seaborn library, as reflected in the architecture diagram below:

Flowchart of data collection to data processing

Fig: Data Collection Architecture

The awesome visualizations that Simon created make me wonder whether we can change what happens downstream of Kafka to make our queries even more real-time. Let’s find out!

Going Real-Time with Apache Pinot™#

Now Apache Pinot comes into the picture. Instead of using Kafka Connect to batch Mastodon toots into groups of 1,000 messages to generate Parquet files, we can stream the data immediately and directly, toot-by-toot into Pinot and then build a real-time dashboard using Streamlit:

Data collection in Mastodon, followed by processing in Apache Kafka, Apache Pinot, and Streamlit

Setup#

To follow along, first clone my fork of Simon’s GitHub repository:

git clone git@github.com:mneedham/mastodon-stream.git
cd mastodon-stream

Then launch all of the components using Docker Compose:

docker-compose up

Pinot Schema and Table#

Similar to what Simon did with DuckDB, we’ll ingest the Mastodon events into a table. Pinot tables have a schema that’s defined in a schema file. 

To come up with a schema file, we need to know the structure of the ingested events. For example:

{  "m_id": 110146691030544274,  "created_at": 1680705124,  "created_at_str": "2023 04 05 15:32:04",  "app": "",  "url": "https://mastodon.social/@Xingcat/110146690810165414",  "base_url": "https://techhub.social",  "language": "en",  "favourites": 0,  "username": "Xingcat",  "bot": false,  "tags": 0,  "characters": 196,  "words": 36,  "mastodon_text": "Another, “I don’t know what this is yet,” paintings. Many, many layers that look like distressed metal or some sort of rock crosscut. Liking it so far, need to figure out what it’ll wind up being."}

Mapping these fields directly to columns is easiest and will result in a schema file that looks like this:

{  "schemaName":"mastodon",  "dimensionFieldSpecs":[    {"name":"m_id","dataType":"LONG"},    {"name":"created_at_str","dataType":"STRING"},    {"name":"app","dataType":"STRING"},    {"name":"url","dataType":"STRING"},    {"name":"base_url","dataType":"STRING"},    {"name":"language","dataType":"STRING"},    {"name":"username","dataType":"STRING"},    {"name":"bot","dataType":"BOOLEAN"},        {"name":"mastodon_text","dataType":"STRING"}  ],  "metricFieldSpecs":[    {"name":"favourites","dataType":"INT"},    {"name":"words","dataType":"INT"},    {"name":"characters","dataType":"INT"},    {"name":"tags","dataType":"INT"}  ],  "dateTimeFieldSpecs":[    {      "name":"created_at",      "dataType":"LONG",      "format":"1:MILLISECONDS:EPOCH",      "granularity":"1:MILLISECONDS"    }  ]}

Next up: our table config, shown below:

{    "tableName": "mastodon",    "tableType": "REALTIME",    "segmentsConfig": {      "timeColumnName": "created_at",      "timeType": "MILLISECONDS",      "schemaName": "mastodon",      "replicasPerPartition": "1"    },    "tenants": {},    "tableIndexConfig": {      "loadMode": "MMAP",      "streamConfigs": {        "streamType": "kafka",        "stream.kafka.consumer.type": "lowLevel",        "stream.kafka.topic.name": "mastodon-topic",        "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.inputformat.avro.confluent.KafkaConfluentSchemaRegistryAvroMessageDecoder",        "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",        "stream.kafka.decoder.prop.format": "AVRO",        "stream.kafka.decoder.prop.schema.registry.rest.url": "http://schema-registry:8081",        "stream.kafka.decoder.prop.schema.registry.schema.name": "mastodon-topic-value",        "stream.kafka.broker.list": "broker:9093",        "stream.kafka.consumer.prop.auto.offset.reset": "smallest"      }    },    "metadata": {      "customConfigs": {}    },    "routing": {      "instanceSelectorType": "strictReplicaGroup"    }}

The following configs represent the most important ones for ingesting Apache Avro™ messages into Pinot:

"stream.kafka.decoder.class.name": "org.apache.pinot.plugin.inputformat.avro.confluent.KafkaConfluentSchemaRegistryAvroMessageDecoder","stream.kafka.decoder.prop.format": "AVRO","stream.kafka.decoder.prop.schema.registry.rest.url": "http://schema-registry:8081","stream.kafka.decoder.prop.schema.registry.schema.name": "mastodon-topic-value",

The KafkaConfluentSchemaRegistryAvroMessageDecoder decoder calls the Schema Registry with the schema name to get back the schema that it will use to decode messages.

We can create the Pinot table by running the following command:

docker run \
  --network mastodon \
  -v $PWD/pinot:/config \
  apachepinot/pinot:0.12.0-arm64 AddTable \
  -schemaFile /config/schema.json \
  -tableConfigFile /config/table.json \
  -controllerHost "pinot-controller" \
  -exec

We can then navigate to the table page of the Pinot UI: 

http://localhost:9000/#/tenants/table/mastodon_REALTIME

Here, we’ll see the following:

Apache Pinot table config and schema

Ingest Data into Kafka#

Now, we need to start ingesting data into Kafka. Simon created a script that accomplishes this for us, so we just need to indicate which Mastodon servers to query.

python mastodonlisten.py --baseURL https://data-folks.masto.host \
  --public --enableKafka --quiet
python mastodonlisten.py --baseURL https://fosstodon.org/ \
  --public --enableKafka --quiet
python mastodonlisten.py --baseURL https://mstdn.social/ \
  --public --enableKafka --quiet

We can then check the ingestion of messages with the kcat command line tool:

kcat -C -b localhost:9092 -t mastodon-topic \  -s value=avro -r http://localhost:8081 -e

Query Pinot#

Now, let’s go to the Pinot UI to see what data we’ve got to play with:

http://localhost:9000

We’ll see the following preview of the data in the mastodon table:

SQL Editor, query response stats, and query result in Apache Pinot

We can then write a query to find the number of messages posted in the last minute:

select count(*) as "Num toots",
       count(distinct(username)) as "Num users",
       count(distinct(url)) as "Num urls"
from mastodon
where created_at*1000 > ago('PT1M')
order by 1 DESC;

Query results for toots, users, and urls

We can also query Pinot via the Python client, which we can install by running the following:

pip install pinotdb

Once we’ve done that, let’s open the Python REPL and run the following code:

from pinotdb import connect
import pandas as pd

conn = connect(host='localhost', port=8099, path='/query/sql', scheme='http')
curs = conn.cursor()

query = """
select count(*) as "Num toots",
       count(distinct(username)) as "Num users",
       count(distinct(url)) as "Num urls"
from mastodon
where created_at*1000 > ago('PT1M')
order by 1 DESC;
"""
curs.execute(query)
df = pd.DataFrame(curs, columns=[item[0] for item in curs.description])

This produces the resulting DataFrame:

   Num toots  Num users  Num urls
0        552        173       192

Streamlit#

Next, we’ll create a Streamlit dashboard to package up these queries. We’ll visualize the results using Plotly, which you can install using:

pip install streamlit plotly

I’ve created a Streamlit app in the file app.py, which you can find in the GitHub repository. Let’s have a look at the kinds of visualizations that we can generate. 

First, we’ll create metrics to show the number of toots, users, and URLs in the last n minutes. n will be configurable from the app as shown in the screenshot below:

Chart of real-time Mastodon usage

From the screenshot, we can identify mastodon.cloud as the most active server, though it produces only 1,800 messages in 10 minutes or three messages per second. The values in green indicate the change in values compared to the previous 10 minutes.
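As a rough sketch of how such a metric panel can be wired up (the widget labels and default window below are illustrative, not the exact contents of app.py):

import pandas as pd
import streamlit as st
from pinotdb import connect

st.header("Real-Time Mastodon Usage")
minutes = st.slider("Window (minutes)", 1, 60, 10)  # the configurable n

conn = connect(host="localhost", port=8099, path="/query/sql", scheme="http")
curs = conn.cursor()
# Same aggregation as before, parameterized by the chosen window
curs.execute(f"""
  select count(*) as "Num toots",
         count(distinct(username)) as "Num users",
         count(distinct(url)) as "Num urls"
  from mastodon
  where created_at*1000 > ago('PT{minutes}M')
""")
df = pd.DataFrame(curs, columns=[item[0] for item in curs.description])

col1, col2, col3 = st.columns(3)
col1.metric("Toots", int(df["Num toots"][0]))
col2.metric("Users", int(df["Num users"][0]))
col3.metric("URLs", int(df["Num urls"][0]))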

We can also create a chart showing the number of messages per minute for the last 10 minutes:

Time of day Mastodon usage

Based on this chart, we can see that we’re creating anywhere from 200–900 messages per minute. Part of the reason lies in the fact that the Mastodon servers sometimes disconnect our listener, and at the moment, I have to manually reconnect.

Finally, we can look at the toot length by language:

Chart of toot length by language usage

We see much bigger ranges here than Simon saw in his analysis. He saw a maximum length of 200 characters, whereas we see some messages of up to 4,200 characters. 

Summary#

We hope you enjoyed following along as we explored this fun use case for real-time analytics. As you can see, even though we’re pulling the data from many of the popular Mastodon servers, it’s still not all that much data!

Give the code a try and let us know how it goes. If you have any questions, feel free to join us on Slack, where we’ll gladly do our best to help you out.

How to Ingest Streaming Data from Kafka to Apache Pinot™

· 9 min read
Barkha Herman
Developer Advocate

We previously walked through getting started with Apache Pinot™ using batch data, and now we will learn how to ingest streaming data using Apache Kafka® topics. 

As the story goes, Apache Pinot was created at LinkedIn to provide a platform that could ingest a high number of incoming events (via Apache Kafka) and provide “fresh” (sub-second) analytics to a large number (20+ million) of users with sub-second latency. So, really, consuming events is part of the reason why Pinot was created.

The obligatory “What is Apache Pinot and StarTree?” section#

Pinot is a real-time, distributed, open source, and free-to-use OLAP datastore, purpose-built to provide ultra low-latency analytics at extremely high throughput.

How does StarTree come in? StarTree offers a fully managed version of the Apache Pinot real-time analytics system, plus other tools around it that you can try for free. The system includes StarTree Dataset Manager and StarTree ThirdEye: a UI-based data ingestion tool and a real-time anomaly detection and root cause analysis tool, respectively.

How to install Kafka alongside Pinot #

Prerequisite#

Complete the steps outlined in the introduction to Apache Pinot

Step 1: Install Kafka on your Pinot Docker image#

Make sure you have completed the first article in the series.

We will be installing Apache Kafka onto our already existing Pinot docker image. To start the Docker image, run the following command:

docker run -it --entrypoint /bin/bash -p 9000:9000 apachepinot/pinot:0.12.0

PowerShell 7.3.4 docker run Apache Pinot

We want to override the ENTRYPOINT and run a Bash shell within the Docker image. If you already have a container running, you can skip this step. I tend to tear down containers after use, so in my case, I created a brand new container.

Now, start each of the components one at a time like we did in the previous session:

bin/pinot-admin.sh StartZookeeper &

bin/pinot-admin.sh StartController &

bin/pinot-admin.sh StartBroker &

bin/pinot-admin.sh StartServer &

Run each of the commands one at a time. The & allows you to continue using the same Bash shell session. If you like, you can create different shells for each service:

  1. Get the container ID by running docker ps
  2. Run docker exec -it DOCKER_CONTAINER_ID bash where DOCKER_CONTAINER_ID is the ID received from step 1.
  3. Run the pinot-admin.sh command to start the desired service

It should look like this:

Docker with container ID, Image, Command, and Created

You can now browse to http://localhost:9000/#/zookeeper to see the running cluster:

Empty Zookeeper Browser

Step 2: Install Kafka on the Docker container#

Next, let's install Kafka. We will be installing Kafka on the existing docker container. For this step, download the TAR file, extract the contents, and start Kafka.

Apache Kafka is an open source software platform that provides a unified, high-throughput, low-latency platform for handling real-time data feeds.

Use the following command to download Kafka:

cd ..
curl https://downloads.apache.org/kafka/3.4.0/kafka_2.12-3.4.0.tgz --output kafka.tgz

It should look like this:

Code with Apache Pinot speed results

Note that we’ve changed the directory to keep the Kafka folder separate from the Pinot folder.

Now, let’s expand the downloaded TAR file, rename the folder for convenience, and delete the downloaded file:

tar -xvf kafka.tgz
mv kafka_2.12-3.4.0 kafka
rm -rf kafka.tgz

It should look like this:

Code with Apache Kafka

Code with kafka version

Now, Kafka and Pinot reside locally on our Docker container with Pinot up and running. Let’s run the Kafka service. Kafka will use the existing ZooKeeper for configuration management.

Use the following command to run Kafka:

cd kafka
./bin/kafka-server-start.sh config/server.properties

It should look like this:

Code with cd kafka

To verify that Kafka is running, let’s look at our ZooKeeper configs by browsing to http://localhost:9000/#/zookeeper:

Zookeeper Browser

You may have to refresh the page, and you will find many more configuration items than expected. These are Kafka configurations.

Step 3: Ingest data into Kafka#

In this step, we will ingest data into Kafka. We will be using Wikipedia events since they are easily accessible. We will use a node script to ingest the Wikipedia events, then add them to a Kafka Topic.

Let’s first create some folders like this:

cd /opt

mkdir realtime

cd realtime

mkdir events

It should look like this:

Code with realtime

You may have to start a new PowerShell window and connect to Docker for this. Now, let’s install Node.js and any dependencies we might need for the event consumption script:

curl -fsSL https://deb.nodesource.com/setup_14.x | bash -
apt install nodejs

Node.js takes a few minutes to install. Next, we will create a script to consume the events called wikievents.js. Cut and paste the following code to this file:

var EventSource = require("eventsource");
var fs = require("fs");
var path = require("path");
const { Kafka } = require("kafkajs");

var url = "https://stream.wikimedia.org/v2/stream/recentchange";

const kafka = new Kafka({
  clientId: "wikievents",
  brokers: ["localhost:9092"],
});

const producer = kafka.producer();

async function start() {
  await producer.connect();
  startEvents();
}

function startEvents() {
  console.log(`Connecting to EventStreams at ${url}`);
  var eventSource = new EventSource(url);

  eventSource.onopen = function () {
    console.log("--- Opened connection.");
  };

  eventSource.onerror = function (event) {
    console.error("--- Encountered error", event);
  };

  eventSource.onmessage = async function (event) {
    const data = JSON.parse(event.data);
    const eventPath = path.join(__dirname, "./events", data.wiki);
    fs.existsSync(eventPath) || fs.mkdirSync(eventPath);
    fs.writeFileSync(path.join(eventPath, data.meta.id + ".json"), event.data);
    await producer.send({
      topic: "wikipedia-events",
      messages: [
        {
          key: data.meta.id,
          value: event.data,
        },
      ],
    });
  };
}

start();

You can use vi to create the file and save it. You can also use Docker Desktop to edit the file.

To install the two modules referenced in the file above, kafkajs and eventsource, run the following command:

npm i eventsource kafkajs

Let’s run the program. This will result in the download of many files, so I recommend running the program for just a few minutes. You can stop the run by using Ctrl-C.

node wikievents.js

Use Ctrl-C to stop the program. Navigate to the events folder to see some new folders created with the various language events downloaded from Wikipedia.

Wikievents node in code

Navigate to the enwiki folder and review some of the downloaded JSON files.

Code with realtime wikievents

At http://localhost:9000/#/zookeeper, you can find the Kafka topic by locating the ZooKeeper config and expanding config > topics. You may have to refresh your browser.

Zookeeper browser in Apache Pinot topics

Here, you should see the wikipedia-events topic that we created using the Node.js script. So far, so good.

Step 4: Connect Kafka to Pinot#

With Kafka installed and configured to receive events, we can connect it to Pinot. 

To create a real-time table in Pinot that can consume the Kafka topic, create a schema and a table config. The schema configuration is very much like the schema that we created for our batch example. You can use vi to create a file named realtime.schema.json and cut and paste the content below.

Here’s the JSON for the wikievents schema:

{ "schemaName": "wikievents", "dimensionFieldSpecs": [ { "name": "id", "dataType": "STRING" }, { "name": "wiki", "dataType": "STRING" }, { "name": "user", "dataType": "STRING" }, { "name": "title", "dataType": "STRING" }, { "name": "comment", "dataType": "STRING" }, { "name": "stream", "dataType": "STRING" }, { "name": "domain", "dataType": "STRING" }, { "name": "topic", "dataType": "STRING" }, { "name": "type", "dataType": "STRING" }, { "name": "metaJson", "dataType": "STRING" } ], "dateTimeFieldSpecs": [ { "name": "timestamp", "dataType": "LONG", "format": "1:MILLISECONDS:EPOCH", "granularity": "1:MILLISECONDS" } ]}

Creating the table config file is where the magic happens. Use vi (or your favorite editor) to create realtime.tableconfig.json and cut and paste the following content:

{ "tableName": "wikievents_REALTIME", "tableType": "REALTIME", "segmentsConfig": { "timeColumnName": "timestamp", "schemaName": "wikievents", "replication": "1", "replicasPerPartition": "1" }, "tenants": { "broker": "DefaultTenant", "server": "DefaultTenant", "tagOverrideConfig": {} }, "tableIndexConfig": { "invertedIndexColumns": [], "rangeIndexColumns": [], "autoGeneratedInvertedIndex": false, "createInvertedIndexDuringSegmentGeneration": false, "sortedColumn": [], "bloomFilterColumns": [], "loadMode": "MMAP", "streamConfigs": { "streamType": "kafka", "stream.kafka.topic.name": "wikipedia-events", "stream.kafka.broker.list": "localhost:9092", "stream.kafka.consumer.type": "lowlevel", "stream.kafka.consumer.prop.auto.offset.reset": "smallest", "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory", "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder", "realtime.segment.flush.threshold.rows": "0", "realtime.segment.flush.threshold.time": "24h", "realtime.segment.flush.segment.size": "100M" }, "noDictionaryColumns": [], "onHeapDictionaryColumns": [], "varLengthDictionaryColumns": [], "enableDefaultStarTree": false, "enableDynamicStarTreeCreation": false, "aggregateMetrics": false, "nullHandlingEnabled": false }, "metadata": {}, "quota": {}, "routing": {}, "query": {}, "ingestionConfig": { "transformConfigs": [ { "columnName": "metaJson", "transformFunction": "JSONFORMAT(meta)" }, { "columnName": "id", "transformFunction": "JSONPATH(metaJson, '$.id')" }, { "columnName": "stream", "transformFunction": "JSONPATH(metaJson, '$.stream')" }, { "columnName": "domain", "transformFunction": "JSONPATH(metaJson, '$.domain')" }, { "columnName": "topic", "transformFunction": "JSONPATH(metaJson, '$.topic')" } ] }, "isDimTable": false}

Notice the section called streamConfigs, where we define the source as a Kafka stream, located at localhost:9092, and consume the topic wikipedia-events. That’s all it takes to consume a Kafka Topic into Pinot.

Don’t believe me? Give it a try!

Create the table by running the following command:

/opt/pinot/bin/pinot-admin.sh AddTable -schemaFile /opt/realtime/realtime.schema.json -tableConfigFile /opt/realtime/realtime.tableconfig.json -exec

Now, browse to the following location http://localhost:9000/#/tables, and you should see the newly created table. However, where’s the real-time data, you say?

Run the node wikievents.js command, then query the newly created wikievents table to see the totalDocs increase in real time:

Apache Pinot query console
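For example, a simple aggregation you might run in the query console (the columns come from the schema we defined above):

select domain, count(*) as events
from wikievents
group by domain
order by events desc
limit 10

Each time you re-run it while the listener is running, the counts grow as Pinot consumes new events from the topic.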

To avoid running out of space on your computer, make sure to stop the wikievents.js script when you’re done :-D

Conclusion#

Congratulations! Using only the table config, we simultaneously consumed Kafka topics directly into Pinot tables and queried events. We also transformed JSON to map to the Pinot table. In the transformConfigs portion of the Pinot table config file, we consumed the nested block meta into a field called metaJson. In the subsequent steps, we referenced the metaJson field with jsonPath to extract fields such as id, stream, domain, and topic. 

Not only does Pinot support easy ingestion from Kafka topics, but it also provides a robust way to transform JSON to OLAP tables. 

In summary, we have:

  • Installed and run Kafka
  • Consumed events from Wikipedia into Kafka
  • Created a real-time table schema and a table in Pinot
  • Streamed events from Wikipedia into Pinot tables via Kafka topics
  • Run multiple queries
  • Performed JSON transformations

In some upcoming blog posts, we will explore more advanced topics, such as indexes and transformations, not to mention real-time anomaly detection with ThirdEye.

In the meantime, run more queries, load more data, and don’t forget to join the community Slack for support if you get stuck or would like to request a topic for me to write about—you know where to find us!

Change Data Capture with Apache Pinot - How Does It Work?

· 10 min read
Hubert Dulay
Developer Advocate

Change Data Capture (CDC) is the process of capturing and communicating changes made to records in a data store, including INSERT, UPDATE, and DELETE transactions.

CDC implementations vary across different types of transactional databases, whether SQL or NoSQL. However, the means to ingest and analyze that data in Apache Pinot™ will generally remain the same.

As your applications interact with their data stores, they automatically log the transaction in a construct called a write-ahead log (WAL) in real time. In fact, each transaction reflects an event that has been recorded, naturally giving the WAL event streaming properties. This approach is typically used by relational OLTP databases like PostgreSQL. 

NOTE: NoSQL databases also have the ability to perform CDC but may use other mechanisms than a WAL. CDC for NoSQL databases is outside the scope of this post.

The WAL is an append-only, immutable stream of events designed to replicate its data to another instance of the data store for high availability in disaster recovery scenarios (see diagram below). Transactions occurring on the left data store (primary) get replicated to the data store on the right (secondary). Applications connect to the primary data store, and its data is replicated to the secondary data store. If the primary data store goes down, the application switches to the secondary data store.

Primary data store transactions being replicated to a secondary data store

The following diagram shows an example of a WAL in a data store. New transactions get appended to the end of the WAL. The old transactions are on the left, and the newer transactions are on the right.

WAL in a data store with new transactions appended to the end of the WAL

Change data capture enables you to listen to this WAL by capturing these transactions and sending them downstream for processing. The data processing occurs in a different system, where we can view the latest version of each record in other applications. Because of the real-time nature of the data, applications subscribing to the stream of transactions receive transaction events in real time.

Pre-Image, Post-Image, or Diffs?#

An important consideration for CDC is what specific elements of change it captures. Not all CDC implementations are the same. Some provide only the post-image — the complete state to which the record changes after an update. Some only provide the diffs (or deltas) — the specific changes made to the record at the time of the update, not the complete current state of the record. And others can provide the pre-image as well — what the state of the record was before the changes were applied.

Different transactional databases may only provide one or two of these elements. Usually, a database will provide the complete post-image or the diffs (or deltas) to the record. In other cases, a CDC implementation might provide all three data elements — pre-image, post-image, and diffs. It is very important to understand which CDC data elements your transactional database provides, because this limits the kind of analytics you can perform.
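As an illustration (the record and field names are invented), the same UPDATE to a user's email can surface in three different shapes:

pre-image:  {"user_id": 1004, "email": "anne@old.example"}
post-image: {"user_id": 1004, "email": "anne@new.example"}
diff:       {"user_id": 1004, "email": {"old": "anne@old.example", "new": "anne@new.example"}}

Note that with diffs alone, a downstream system cannot reconstruct the full current row unless it already holds the prior state.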

How to Capture Change Data with Debezium#

Capturing change events requires specific knowledge of the database in which the changes are occurring, and there are many transactional databases. Debezium, an open source project, provides a set of connectors that can subscribe to WALs in many different data stores, such as PostgreSQL, SQL Server, and MongoDB. Its implementation builds on the Kafka Connect framework, an open source framework that enables integrations with Apache Kafka®. Two types of connectors exist: source and sink. Debezium connectors are source-only connectors.

Kafka connectors must run in a Kafka Connect cluster, a highly available and distributed system for running connectors. Kafka connectors cannot run on their own and require a server. The Debezium project provides a Debezium server that can also run Debezium connectors capable of writing to other event streaming platforms besides Kafka, for instance, Amazon Kinesis. The diagram below shows a Debezium connector reading the WAL and writing to a Debezium server. The Debezium server can then write to either Kafka or Kinesis.

Diagram showing a Debezium connector reading the WAL and writing to a Debezium server
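For concreteness, a Debezium source connector is registered with Kafka Connect as a JSON config. A minimal sketch for PostgreSQL (the hostnames, credentials, and table names below are placeholders):

{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "postgres",
    "database.port": "5432",
    "database.user": "postgres",
    "database.password": "postgres",
    "database.dbname": "inventory",
    "topic.prefix": "dbserver1",
    "table.include.list": "public.customers"
  }
}

POSTing this to the Kafka Connect REST API starts the connector, which then emits change events to Kafka topics named after the prefix and table.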

Debezium Data Format#

For details on the Debezium format, check out the tutorial. Below, you’ll find an example of a transaction event encoded in JSON coming from the Debezium connector.

{  "schema": {...},  "payload": {    "before": {        "user_id": 1004,      "first_name": "Anne",      "last_name": "Kretchmar",      "email": "annek@noanswer.org"    },    "after": {        "user_id": 1004,      "first_name": "Anne Marie",      "last_name": "Kretchmar",      "email": "annek@noanswer.org"    },    "source": {        "name": "2.2.0.Final",      "name": "dbserver1",      "server_id": 223344,      "ts_sec": 1486501486,      "gtid": null,      "file": "mysql-bin.000003",      "pos": 364,      "row": 0,      "snapshot": null,      "thread": 3,      "db": "inventory",      "table": "customers"    },    "op": "u",      "ts_ms": 1486501486308    }}

A few elements to note:

  • The schema element never changes and defines the schema of the payload

  • The payload element holds three different elements:

    • before: shows the state of the record before it was changed; if this is null, then you can assume that the transaction is an INSERT
    • after: shows the state of the record after the record was changed; if this is null, then you can assume that the transaction is a DELETE
    • source: constitutes metadata that describes the source of the data
  • The op element identifies the type of operation performed

    • Values:

      • c for CREATE (or INSERT)
      • r for READ (in the case of a snapshot)
      • u for UPDATE
      • d for DELETE
  • The ts_ms element refers to the timestamp in milliseconds of when the transaction occurred

The r value of the op element tells you that the record originated from a snapshot of the entire table in the data store. When the Debezium connector first starts, you may encounter existing records; you can configure the connector to first take a snapshot of the entire table and send those records as events downstream to their eventual destination. This affects how records are treated in the destination, which in our case is Apache Pinot.

In Apache Pinot, we will have to create a schema that corresponds to the Debezium format, and it could be defined in a number of ways. I chose to bring the columns in the after field to the top level so users can access the latest values for any customer, and I kept the op field at the top level as well. Since there are no metrics, the metricFieldSpecs section of the schema is an empty array. I also preserved the after and before fields; notice they are of type STRING. In Apache Pinot, you can assign a JSON index to any field containing multi-level JSON data, and Pinot will index all the values in the JSON payload so that any query referencing data in those JSON fields will be fast. This allows users to see previous values of the record in cases where the operation was a change. Lastly, I have a date time field to indicate when the last change was made.

{
  "schemaName": "customers",
  "dimensionFieldSpecs": [
    {"name": "user_id", "dataType": "STRING"},
    {"name": "first_name", "dataType": "STRING"},
    {"name": "last_name", "dataType": "STRING"},
    {"name": "email", "dataType": "STRING"},
    {"name": "op", "dataType": "STRING"},
    {"name": "before", "dataType": "STRING"},
    {"name": "after", "dataType": "STRING"},
    {"name": "source", "dataType": "STRING"}
  ],
  "metricFieldSpecs": [],
  "dateTimeFieldSpecs": [
    {
      "name": "ts_ms",
      "dataType": "LONG",
      "format": "1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd'T'HH:mm:ss.SSS'Z'",
      "granularity": "1:MILLISECONDS"
    }
  ],
  "primaryKeyColumns": ["user_id"]
}

Your schema may differ depending on your use case; you don't need all of the fields I preserved. If you only want the latest version of each record, simply keep the columns that matter to you.

Materialized Views#

When looking up your record in Pinot, you only need to provide a WHERE clause with the primary key. Pinot will only return one record—the latest version of the record, not the history of the record—as a true materialized view should. Otherwise, you would have to provide more logic in the SQL statement that selects for the latest record. This adds latency to the query and may make downstream aggregations less accurate. Pinot provides a materialized view by implementing upsert for real-time tables with a primary key.
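For example, with the customers schema defined earlier, the lookup is just a primary-key filter; here is a minimal sketch using this post's column names:

SELECT user_id, first_name, last_name, email, ts_ms
FROM customers
WHERE user_id = '1004'

Pinot returns a single row: the latest version of customer 1004.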

Upsert in Apache Pinot#

Unlike most other real-time OLAP datastores, Pinot offers native support for upsert during real-time ingestion. Upsert logic says: "If the record exists, update it; otherwise, insert it."

With upsert capabilities, retrieving dimensional data is as simple as a SELECT on the record's primary key. Without upsert, you will need to find the latest version of a record by comparing timestamps, which leaves room for error.
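For contrast, here is a sketch of that error-prone workaround without upsert, which has to rank versions by timestamp (again using this post's column names):

SELECT user_id, first_name, last_name, email
FROM customers
WHERE user_id = '1004'
ORDER BY ts_ms DESC
LIMIT 1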

This JSON document shows a schema snippet in Pinot that contains a primaryKeyColumns property. By applying this property, Pinot automatically enables the upsert feature. Upsert is completely transparent to the sender, so no special client-side programming is required.

{    "primaryKeyColumns": ["user_id"]}

You can further configure upsert with one of two modes: FULL or PARTIAL.

A FULL upsert means that a new record will replace the older record completely if they share the same primary key.

PARTIAL only allows updates to specific columns and employs additional strategies.
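For PARTIAL mode, you also declare per-column strategies, as sketched below (the OVERWRITE strategy name follows the Pinot upsert documentation; the column choice is illustrative, and columns not listed fall back to the default strategy):

"upsertConfig": {
  "mode": "PARTIAL",
  "partialUpsertStrategies": {
    "email": "OVERWRITE"
  }
},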

Table describing the strategy and descriptions of stream ingestion with upsert

Source: Stream Ingestion with Upsert

Here is a sample snippet of a table configuration containing the property that configures the upsert strategy:

"upsertConfig": { "mode": "FULL" },

Upsert simplifies client queries in an extremely powerful way. More importantly, upsert assures the accuracy of any aggregations applied to updated columns, which proves especially important when the analytics lead to critical decisions. 

Summary#

Change data capture is the best way to capture changes in a database. Other options require comparing snapshots or applying complex modified-timestamp logic, and they only emulate real time; change data capture is the only genuinely real-time event streaming solution.

Debezium provides many other CDC connectors that you can find in their documentation. If you do not have a Kafka Connect cluster or do not use Kafka at all, you can use the Debezium server to run the CDC connectors and write to an alternative streaming system, such as Amazon Kinesis, Pub/Sub from Google Cloud, Apache® Pulsar™, Azure Event Hubs, and RabbitMQ.

Lastly, Apache Pinot enables upsert for any client sinking into it, which means the client does not need to implement upsert logic. Any client can generate a materialized view in Pinot. This makes the resulting table faster to query and provides more accurate analytics.

To try Pinot in the cloud, visit startree.ai for a free trial.

Apache Pinot Tutorial for Getting Started - A Step-by-Step Guide

· 8 min read
Barkha Herman
Developer Advocate

How do you get started with Apache Pinot™? Good question! To save you the hassle of trying to tackle this on your own, here’s a handy guide that overviews all of the components that make up Pinot and how to set Pinot up.

The Obligatory What is Apache Pinot and StarTree Section#

Pinot is an open source, free-to-use, real-time, and distributed OLAP datastore, purpose built to provide ultra low-latency analytics at extremely high throughput.

StarTree offers a fully managed version of the Apache Pinot real-time analytics system and other tools around it, such as a real-time anomaly detection and root cause analysis tool, which you can try for free.

What do you need to run Apache Pinot?#

The Docker image that we will use runs multiple services. To accommodate this, we recommend at a minimum the following resources in order to run the sample:

  • CPUs: four or more
  • Memory: 8 GB or more
  • Swap: 2 GB or more
  • Disk space: 10 GB or more

Note: When importing custom data or event streaming, you may need more resources. Additionally, note that if not set, Docker will use resources from the host environment as needed and available.

Step-by-step installation of Apache Pinot#

For this intro tutorial, we will use Docker. Alternatively, you can run Pinot locally if you wish. 

The instructions use a Windows 11 computer, but they will work on Macs as well. Also note that I am using VS Code with the Docker extension installed.

Step 1: #

Make sure you have Docker installed on your machine.

Docker is a set of platform as a service (PaaS) products that use OS-level virtualization to deliver software in packages called containers.

Step 2:#

Now, let’s download the Docker image. On a Windows machine, start a new PowerShell command window. Note that PowerShell is not the same as the older Windows PowerShell, as shown below.

Download Docker image on Windows with PowerShell command window

Use the following command to get (pull) the image we are looking for:

docker pull apachepinot/pinot:0.12.0

You can also download the latest version like so:

docker pull apachepinot/pinot:latest

Here, apachepinot is the name of the organization on Docker Hub, pinot is the name of the image, and :latest or :0.12.0 is the image tag. Note that we will be using the 0.12.0 version for this blog post.

Docker Hub is the world’s largest repository of container images.

You can verify the image was downloaded or pulled by running the following command:

docker images

It should show you the image like so:

Docker images command

Step 3:#

Let’s run a container using the Docker image that we downloaded:

docker run -it --entrypoint /bin/bash -p 9000:9000 apachepinot/pinot:0.12.0

Running a container with downloaded Docker image

The docker run command runs the image. The -p 9000:9000 option maps container port 9000 to local machine port 9000, which makes the Pinot UI (served on port 9000 by default) accessible from localhost. We use --entrypoint to override the default entrypoint and replace it with Bash; we want to override the default behavior so that we can start each component one at a time. The final parameter, apachepinot/pinot:0.12.0, is the Docker image we pulled above.

After running the command, we’ll find ourselves in the Docker container instance running Bash shell. We can use ls to list the contents of the Docker container as shown above.

If you’re using VS Code, with the Docker extension installed, you can click on the Docker extension and see our container and its content:

VS Code Docker extension open to see container and content

Click on the Docker icon in the left menu, then click on apachepinot/pinot:0.12.0. It should take a few seconds to connect to the running container. Now you can navigate to the files and see what we have under the opt folder.

Step 4:#

Let’s run the components that are essential to running a Pinot cluster. Change directory to the bin folder and list the contents like so:

Running components, directory changed to bin folder and contents listed

In order to start the Pinot cluster, we will need to run the following essential components:

  • Apache ZooKeeper™
  • Controller
  • Broker
  • Server

Start ZooKeeper using the following command:

./pinot-admin.sh StartZookeeper &

pinot-admin.sh is a shell script for starting the various components. The & at the end allows us to continue using the Bash shell. ZooKeeper manages the configuration of the Pinot cluster and needs to be started first.

We can start the remaining components using these commands one at a time:

./pinot-admin.sh StartController &
./pinot-admin.sh StartBroker &
./pinot-admin.sh StartServer &

The controller controls the cluster health and coordinates with ZooKeeper for configuration and status changes. The broker is responsible for query distribution and result collation, sometimes called Scatter-Gather. Servers manage individual table segments and perform the actual read/writes. To get a better understanding of each component, read this intro to Apache Pinot.

At this time, we should have a running Pinot cluster. We can verify via the Pinot Data Explorer by browsing to localhost:9000. You should see something like this:

Pinot data explorer

What just happened?

Let’s dive in.

We have started the four essential components of Pinot, however, you will note that there is not yet any data in our fresh new instance.

Before we create a table and load data, notice the four navigation menus on the left-hand side. You can look at the cluster status, run queries, inspect ZooKeeper, or launch the Swagger endpoints for the REST API that Pinot supports.

On the cluster page, we notice that we have the essentials deployed: controller, broker, and server. Currently, no tables exist, and no minions—dispatchable components used for task management—are running. Notice also that multi-tenancy support is available in the cluster manager.

Step 5:#

Now that we have our Apache Pinot cluster ready, let’s load some data. Of course, before we do that, we have to create a schema. 

Let’s navigate to the folder:

cd /opt/pinot/examples/batch/baseballStats

You will notice that there are the following files listed here:

baseballStats_offline_table_config.json baseballStats_schema.json ingestionJobSpec.yaml sparkIngestionJobSpec.yaml rawdata

From the names, we can see that there is a schema file, a table config file, an ingestion job, and Apache Spark™ ingestion job files as well as a raw data folder.

The schema file contains both metric and dimension field specs, like so (abbreviated):

{
  "metricFieldSpecs": [
    {"dataType": "INT", "name": "playerStint"},
    {"dataType": "INT", "name": "baseOnBalls"},
    ....
  ],
  "dimensionFieldSpecs": [
    {"dataType": "STRING", "name": "playerID"},
    ....
    {"dataType": "STRING", "name": "playerName"}
  ],
  "schemaName": "baseballStats"
}

To create a schema and table for the baseball stats file, run the following command from the /opt/pinot/bin folder:

./pinot-admin.sh AddTable -schemaFile /opt/pinot/examples/batch/baseballStats/baseballStats_schema.json -tableConfigFile /opt/pinot/examples/batch/baseballStats/baseballStats_offline_table_config.json -exec

You should now see the schema and table created:

Apache Pinot tables created

Next, we’ll want to load some data into the table that we created. We have some sample data in the folder rawdata that we can use to load. We will need a YAML file to perform the actual ingestion job and can use the following command to import data:

./pinot-admin.sh LaunchDataIngestionJob -jobSpecFile /opt/pinot/examples/batch/baseballStats/ingestionJobSpec.yaml

If you run into trouble on this step like I did, edit the ingestionJobSpec.yaml file using Docker Desktop to change the inputDirURI from a relative to an absolute path. Then rerun the above command.

Editing the .yaml file with Docker Desktop

You should now be able to see the table has been populated like so:

Apache Pinot table populated

Now, let’s run some queries. From localhost:9000, select the Query Console in the left-hand menu. Then type in some of these queries:

select * from baseballStats limit 10

select sum(runs), playerName from baseballStats group by playerName order by sum(runs) desc

You should see results like so:

Apache Pinot query console

And there you have it!

What’s under the hood?#

If you’re curious to go a step further and see what the segments look like and what the actual data on disk looks like, keep reading! In the Tables section of localhost:9000, you can scroll down to find a segment:

Apache Pinot data on disk segment

Clicking on this gives the specifics of the segment:

Segment specifics in Pinot UI

Pinot allows you to easily inspect your segments and tables in one easy-to-use UI. You can find what’s where and keep an eye on size, location, number of documents, etc.

Conclusion#

Congratulations!

Together, we’ve:

  • Installed and ran Apache Pinot components
  • Created a table schema and a table
  • Loaded data in a table
  • Ran a few queries
  • Explored the Pinot UI

In my next article, we’ll consume event streaming data using Apache Pinot and Apache Kafka®.

In the meantime, run more queries, load more data, and don’t forget to join the Community Slack for support if you get stuck!

StarTree Indexes in Apache Pinot Part-1 - Understanding the Impact on Query Performance

· 7 min read
Sandeep Dabade
Solutions engineer

Star-tree is a specialized index in Apache Pinot™. This index dynamically builds a tree structure to maintain aggregates for a group of dimensions. With a star-tree index, query latency becomes a function of just a tree traversal, with computational complexity of O(log n).

This comprehensive blog explains in depth how the star-tree Index differs from traditional materialized views (MVs). In particular, read the section Star-Tree Index: Pinot’s intelligent materialized view. Particularly this one key passage:

Star-Tree Index: Pinot’s Intelligent Materialized View: 

The star-tree index provides an intelligent way to build materialized views within Pinot. Traditional MVs work by fully materializing the computation for each source record that matches the specified predicates. Although useful, this can result in non-trivial storage overhead. On the other hand, the star-tree index allows us to partially materialize the computations and provide the ability to tune the space-time tradeoff by providing a configurable threshold between pre-aggregation and data scans.

In this three-part blog series, we will compare and contrast query performance of a star-tree index with an inverted index, something that most of the OLAP databases end up using for such queries.  

In this first part, we will showcase how a star-tree index brought down standalone query latency on a sizable dataset of ~633M records from 1,513ms to 4ms! — nearly 380x faster.

1. The Dataset:#

We used New York City Taxi Data for this comparison. Original source: here. Below are the high level details about this dataset. 

Schema:#

The dataset has 8 dimension fields and 11 metric columns.

2. Query Pattern#

The query pattern involved slicing and dicing the data (GROUP BY) across various dimensions (date, month, and year), aggregating different metrics (total trips, distance, and passenger count), and filtering (WHERE) by a time range that could be as wide as one year.

Note: A key thing to note is that a single star-tree index covers a wide range of OLAP queries that comprise the dimensions, metrics and aggregate functions specified in it. 

Star-Tree Index Config:#

To support the various query patterns specified above, we defined the following star-tree index.

"starTreeIndexConfigs": [
  {
    "dimensionsSplitOrder": [
      "dropoff_date_str",
      "dropoff_month",
      "dropoff_year"
    ],
    "skipStarNodeCreationForDimensions": [],
    "functionColumnPairs": [
      "COUNT__*",
      "SUM__passenger_count",
      "SUM__total_amount",
      "SUM__trip_distance",
      "AVG__passenger_count",
      "AVG__total_amount",
      "AVG__trip_distance",
      "MIN__passenger_count",
      "MIN__total_amount",
      "MIN__trip_distance",
      "MAX__passenger_count",
      "MAX__total_amount",
      "MAX__trip_distance"
    ],
    "maxLeafRecords": 10000
  }
]

This one star-tree index can get us insights to questions such as:

  • How many trips were completed in a given day, month or year? 
  • How many passengers traveled in a given day, month or year? 
  • What is the daily / monthly / annual average trip revenue? 
  • What is the daily / monthly / annual average trip revenue, trip duration and distance traveled? 
  • What is the daily / monthly / annual breakdown of total number of trips, total distance traveled and total revenue generated in 2015?
  • And many more…

We will use one such variant query for this illustration:

  • What is the total number of trips, total distance traveled and total revenue generated by day in 2015?

3. Infrastructure:#

We used a very small infrastructure footprint for this comparison test.

4. Query Results and Stats#

Iteration #1: w/o any Apache Pinot optimizations:#

First, we ran the query without any optimizations offered in Apache Pinot. 

-- Iteration #1: w/o optimizations > 120s
SELECT
    toDateTime(tpep_dropoff_datetime/1000, 'yyyy-MM-dd') "Date",
    count(*) "Total # of Trips",
    sum(trip_distance) "Total distance traveled",
    sum(passenger_count) "Total # of Passengers",
    sum(total_amount) "Total Revenue"
FROM nyc_taxi_demo
WHERE "Date" BETWEEN '2015-01-01' AND '2015-12-31'
GROUP BY "Date"
ORDER BY "Date" ASC
limit 1000

This was a wide time range query (365 days). It required scanning across ~146M out of ~633M documents. In addition, it involved performing an expensive ToDateTime transformation on the tpep_dropoff_datetime entry in each of those ~146M documents during query time. 

Result: The query took 131,425 milliseconds (~131.4s; ~2m 11s) to complete. 

Iteration #2: w/ Inverted Index #

In this iteration, we used a derived column - dropoff_date_str - which performed the ToDateTime transformation for every record during ingestion time. Since the cardinality of this derived column was much lower (granularity was at Day level instead of milliseconds), this enabled us to use an inverted index on this column.

-- Iteration #2: w/ Ingestion Time Transformation
SELECT
    dropoff_date_str "Date",
    count(*) "Total # of Trips",
    sum(trip_distance) "Total distance traveled",
    sum(passenger_count) "Total # of Passengers",
    sum(total_amount) "Total Revenue"
FROM nyc_taxi_demo
WHERE "Date" BETWEEN '2015-01-01' AND '2015-12-31'
GROUP BY "Date"
ORDER BY "Date" ASC
limit 1000
option(useStarTree=false, timeoutMs=20000)

Result: The query completed in 1,513 milliseconds (~1.5 s). Going from ~131 s to ~1.5 s was a big improvement. However, results still took more than a second, which is a relatively long time for an OLAP database, especially one facing multiple concurrent queries.

Iteration #3: w/ Star-Tree Index: #

In this iteration, we ran the same query with star-tree index enabled. 

-- Iteration #3: w/ Ingestion Time Transformation + StarTree Index
SELECT
    dropoff_date_str "Date",
    count(*) "Total # of Trips",
    sum(trip_distance) "Total distance traveled",
    sum(passenger_count) "Total # of Passengers",
    sum(total_amount) "Total Revenue"
FROM nyc_taxi_demo
WHERE "Date" BETWEEN '2015-01-01' AND '2015-12-31'
GROUP BY "Date"
ORDER BY "Date" ASC
limit 1000
option(useStarTree=true)

Result: The query completed in 4 milliseconds! Notice in particular that the numDocsScanned came down from ~146M to 409! 

Comparison:#

Let’s take a closer look at the query response stats across all three iterations to understand the “how” part of this magic of indexing in Apache Pinot. 

  1. The dataset has 633,694,594 records (documents) spread across 130 segments. 

  2. Query Stats: 

    1. w/o any index optimizations (Iteration #1), the query scanned ALL 633,694,594 records (check numEntriesScannedInFilter) during processing. Also, numEntriesScannedPostFilter was 584,147,312 (numDocsScanned = ~146M). All 130 segments were processed, which was very inefficient.
    2. w/ Inverted Index (Iteration #2), numEntriesScannedInFilter was 0; numEntriesScannedPostFilter was 584,147,312 (numDocsScanned = ~146M) which meant that the query selectivity was low (the query had to scan a lot of records during post filter phase; about 92% of overall records). This is an indication of when a star-tree index could help.
    3. w/ Star-tree Index (Iteration #3), numEntriesScannedInFilter was 0; numEntriesScannedPostFilter was only 2,045 (numDocsScanned = 409). The star-tree index helped improve query performance tremendously by providing high query selectivity.

5. Impact Summary:#

  1. 356,968x improvement (or 99.999% drop) in num docs scanned from ~146M to 409.
  2. 378.5x improvement (~99.736% drop) in query latency from 1,513 ms to 4 ms.

Key Benefits of the Star-Tree Index:#

  • User controllable: Tune space vs. time overhead

  • Flexible: create any number of indexes. The right index is chosen based on the query structure.

  • Transparent: Unlike traditional MVs, users don’t need to know about the existence of a star-tree index. The same query will be accelerated with a star-tree index in place.

  • Dynamic: Very easy to generate a new index at any point of time.

  • Disk IO is the most expensive operation in query processing. Latency is linear in the number of disk reads a query has to perform, and the star-tree index brings the number of disk reads down dramatically.

    • In this example, star-tree Index reduced the disk reads by 99.999% from ~584 Million entries (~146 Million documents or records) in case of an inverted index to 2,045 entries (409 documents or records). Query latency came down from 1,513 ms to 4 ms! 

In part 2 of this series, we will perform throughput tests to measure the impact of star-tree index under high load.

Geospatial Indexing in Apache Pinot

· 9 min read
Mark Needham
Mark Needham

Watch the video

It’s been over 18 months since geospatial indexes were added to Apache Pinot™, giving you the ability to retrieve data based on geographic location—a common requirement in many analytics use cases. Using geospatial queries in combination with time series queries in Pinot, you can perform complex spatiotemporal analysis, such as analyzing changes in weather patterns over time or tracking the movement of objects, vehicles, or people. Pinot's support for geospatial data indexing means fast and efficient querying of large-scale, location-based datasets distributed across multiple nodes.

In that time, more indexing functionality has been added, so I wanted to take an opportunity to have a look at the current state of things.

What is geospatial indexing?#

Geospatial indexing is a technique used in database management systems to store and retrieve spatial data based on its geographic location. It involves creating an index that allows for efficient querying of location-based data, such as latitude and longitude coordinates or geographical shapes.

Geospatial indexing organizes spatial data in such a way that enables fast and accurate retrieval of data based on its proximity to a specific location or geographic region. This indexing can be used to answer queries such as "What are the restaurants with a 30-minute delivery window to my current location?" or "What are the boundaries of this specific city or state?"

Geospatial indexing is commonly used in geographic information systems (GIS), mapping applications, and location-based services such as ride-sharing apps, social media platforms, and navigation tools. It plays a crucial role in spatial data analysis, spatial data visualization, and decision-making processes.

How do geospatial indexes work in Apache Pinot?#

We can index points using H3, an open source library that originated at Uber. This library provides hexagon-based hierarchical gridding. Indexing a point means that the point is translated to a geoId, which corresponds to a hexagon. Its neighbors in H3 can be approximated by a ring of hexagons. Direct neighbors have a distance of 1, their neighbors are at a distance of 2, and so on.

For example, if the central hexagon covers the Westminster area of central London, neighbors at distance 1 are colored blue, those at distance 2 are in green, and those at distance 3 are in red.

Geospatial Indexing In Apache Pinot
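To make this concrete, here is a minimal sketch using the H3 Python bindings (assuming the h3-py package with its v3 API; the coordinates are the Westminster point used in the queries later in this post):

import h3  # assumes the h3-py package, v3 API

# Translate a (lat, lon) point into its resolution-7 hexagon (geoId)
cell = h3.geo_to_h3(51.499507, -0.13624, 7)

# All hexagons within two rings of it: 1 centre + 6 at distance 1 + 12 at distance 2
neighbors = h3.k_ring(cell, 2)
print(cell, len(neighbors))  # 19 cells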

Let’s learn how to use geospatial indexing with help from a dataset that captures the latest location of trains moving around the UK. We’re streaming this data into a trains topic in Apache Kafka®. Here’s one message from this stream:

kcat -C -b localhost:9092 -t trains -c1 | jq

{
  "trainCompany": "CrossCountry",
  "atocCode": "XC",
  "lat": 50.692726,
  "lon": -3.5040767,
  "ts": "2023-03-09 10:57:11.1678359431",
  "trainId": "202303096771054"
}

We’re going to ingest this data into Pinot, so let’s create a schema:

{
  "schemaName": "trains",
  "dimensionFieldSpecs": [
    {"name": "trainCompany", "dataType": "STRING"},
    {"name": "trainId", "dataType": "STRING"},
    {"name": "atocCode", "dataType": "STRING"},
    {"name": "point", "dataType": "BYTES"}
  ],
  "dateTimeFieldSpecs": [
    {
      "name": "ts",
      "dataType": "TIMESTAMP",
      "format": "1:MILLISECONDS:EPOCH",
      "granularity": "1:MILLISECONDS"
    }
  ]
}

The point column will store a point object that represents the current location of a train. We’ll see how that column gets populated from our table config, as shown below:

{
  "tableName": "trains",
  "tableType": "REALTIME",
  "segmentsConfig": {
    "timeColumnName": "ts",
    "schemaName": "trains",
    "replication": "1",
    "replicasPerPartition": "1"
  },
  "fieldConfigList": [
    {
      "name": "point",
      "encodingType": "RAW",
      "indexType": "H3",
      "properties": {"resolutions": "7"}
    }
  ],
  "tableIndexConfig": {
    "loadMode": "MMAP",
    "noDictionaryColumns": ["point"],
    "streamConfigs": {
      "streamType": "kafka",
      "stream.kafka.topic.name": "trains",
      "stream.kafka.broker.list": "kafka-geospatial:9093",
      "stream.kafka.consumer.type": "lowlevel",
      "stream.kafka.consumer.prop.auto.offset.reset": "smallest",
      "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
      "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder"
    }
  },
  "ingestionConfig": {
    "transformConfigs": [
      {
        "columnName": "point",
        "transformFunction": "STPoint(lon, lat, 1)"
      }
    ]
  },
  "tenants": {},
  "metadata": {}
}

The point column is populated by the following function under transformConfigs:

STPoint(lon, lat, 1)

In earlier versions of Pinot, you’d need to ensure that the schema included lat and lon columns, but that no longer applies. 

We define the geospatial index on the point column under fieldConfigList. We can configure what H3 calls resolutions, which determines how many hexagons cover the globe and how large each one is. A resolution of 7 means the globe is covered by roughly 98.8 million hexagons, each covering an average area of about 5.16 km². We also need to add the geospatial column to tableIndexConfig.noDictionaryColumns.

We can go ahead and create that table using the AddTable command and Pinot will automatically start ingesting data from Kafka.

When is the geospatial index used?#

The geospatial index is used when a WHERE clause in a query calls the StDistance, StWithin, or StContains functions.

ST_Distance

Let’s say we want to find all the trains within a 10 km radius of Westminster. We could write a query to answer this question using the STDistance function. The query might look like this:

select ts, trainId, atocCode, trainCompany, ST_AsText(point),
       STDistance(
         point,
         toSphericalGeography(ST_GeomFromText('POINT (-0.13624 51.499507)'))
       ) AS distance
from trains
WHERE distance < 10000
ORDER BY distance, ts DESC
limit 10

The results from running the query are shown below:

Sample Geospatial Indexing In Apache Pinot Query Result

Let’s now go into a bit more detail about what happens when we run the query.

The 10 km radius covers the area inside the white circle on the diagram below:

Geospatial Indexing In Apache Pinot Circle

Pinot’s query planner will first translate the distance of 10 km into a number of rings, in this case, two. It will then find all the hexagons located two rings away from the white one. Some of these hexagons will fit completely inside the white circle, and some will overlap with the circle.

If a hexagon fully fits, then we can get all the records inside this hexagon and return them. For those that partially fit, we’ll need to apply the distance predicate before working out which records to return.

ST_Within/ST_Contains

Let’s say that rather than specifying a distance, we instead want to draw a polygon and find the trains that fit inside that polygon. We could use either the ST_Within or ST_Contains functions to answer this question.

The query might look like this:

select ts, trainId, atocCode, trainCompany, ST_AsText(point)
from trains
WHERE StWithin(
      point,
      toSphericalGeography(ST_GeomFromText('POLYGON((
        -0.1296371966600418 51.508053828550544,
        -0.1538461446762085 51.497007194317064,
        -0.13032652437686923 51.488276935884414,
        -0.10458670556545259 51.497003019756846,
        -0.10864421725273131 51.50817152245844,
        -0.1296371966600418 51.508053828550544))'))) = 1
ORDER BY ts DESC
limit 10

The results from running the query are shown below:

Sample Geospatial Indexing In Apache Pinot Query Result

If we change the query to show trains outside of a central London polygon, we’d see the following results:

Sample Geospatial Indexing In Apache Pinot Query Result

So what’s actually happening when we run this query? 

The polygon covers the area inside the white shape as shown below:

Geospatial Indexing In Apache Pinot Polygon

Pinot’s query planner will first find all the coordinates on the exterior of the polygon. It will then find the hexagons that fit within that geofence. Those hexagons get added to the potential cells list. 

The query planner then takes each of those hexagons and checks whether they fit completely inside the original polygon. If they do, then they get added to the fully contained cells list. If we have any cells in both lists, we remove them from the potential cells list.

Next, we find the records for the fully contained cells list and those for the potential cells list. 

If we are finding records that fit inside the polygon, we return those in the fully contained list and apply the STWithin/StContains predicate to work out which records to return from the potential list.

If we are finding records outside the polygon, we will create a new fully contained list, which will actually contain the records that are outside the polygon. This list contains all of the records in the database except the ones in the potential list and those in the initial fully contained list. 

This one was a bit tricky for me to get my head around, so let’s just quickly go through an example. Imagine that we store 10 records in our database and our potential and fully contained lists hold the following values:

potential = [0,1,2,3]
fullyContained = [4,5,6]

First, compute newFullyContained to find all the records not in potential:

newFullyContained = [4,5,6,7,8,9]

Then we can remove the values in fullyContained, which gives us:

newFullyContained = [7,8,9]

We will return all the records in newFullyContained and apply the STWithin or StContains predicate to work out which records to return from the potential list.
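Here is the same set logic as a minimal Python sketch (hypothetical record IDs, not Pinot internals):

all_records = set(range(10))    # the 10 records in the database
potential = {0, 1, 2, 3}        # records in hexagons that only partially overlap the polygon
fully_contained = {4, 5, 6}     # records in hexagons fully inside the polygon

# Records outside the polygon: everything except the potential list...
new_fully_contained = all_records - potential   # {4, 5, 6, 7, 8, 9}
# ...minus the records fully inside the polygon.
new_fully_contained -= fully_contained          # {7, 8, 9}
print(sorted(new_fully_contained))              # [7, 8, 9]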

How do you know the index usage?#

We can write queries that use STDistance, STWithin, and STContains without a geospatial index, but if we've defined one, we'll want the peace of mind of knowing it's actually being used.

You can check by prefixing a query with EXPLAIN PLAN FOR, which will return a list of the operators in the query plan. 

If our query uses STDistance, we should expect to see the FILTER_H3_INDEX operator. If it uses STWithin or STContains, we should expect to see the INCLUSION_FILTER_H3_INDEX operator.

See this example query plan:

Apache Pinot Geospatial Indexing Query Plan

The StarTree Developer Hub contains a geospatial indexing guide that goes through this in more detail.

Summary#

I hope you found this blog post useful and now understand how geospatial indexes work and when to use them in Apache Pinot.

Give them a try, and let us know how you get on! If you want to use, or are already using geospatial queries in Apache Pinot, we’d love to hear how — feel free to contact us and tell us more! To help get you started, sign up for a free trial of fully managed Apache Pinot. And if you run into any technical questions, feel free to find me on the StarTree Community Slack.

Apache Pinot™ 0.12 - Consumer Record Lag

· 5 min read
Mark Needham
Mark Needham

Watch the video

The Apache Pinot community recently released version 0.12.0, which has lots of goodies for you to play with. I’ve been exploring and writing about those features in a series of blog posts.

This post will explore a new API endpoint that lets you check how much Pinot is lagging when ingesting from Apache Kafka.

Why do we need this?#

A common question in the Pinot community is how to work out the consumption status of real-time tables. 

This was a tricky one to answer, but Pinot 0.12 sees the addition of a new API that lets us see exactly what’s going on.

Worked Example#

Let’s have a look at how it works with help from a worked example. 

First, we’re going to create a Kafka topic with 5 partitions:

docker exec -it kafka-lag-blog kafka-topics.sh \
  --bootstrap-server localhost:9092 \
  --partitions 5 \
  --topic events \
  --create

We’re going to populate this topic with data from a data generator, which is shown below:

import datetime, uuid, random, json, click, time

@click.command()
@click.option('--sleep', default=0.0, help='Sleep between each message')
def generate(sleep):
    while True:
        ts = datetime.datetime.now().strftime("%Y-%m-%dT%H:%M:%S.%fZ")
        id = str(uuid.uuid4())
        count = random.randint(0, 1000)
        print(json.dumps({"tsString": ts, "uuid": id, "count": count}))
        time.sleep(sleep)

if __name__ == '__main__':
    generate()

We can see an example of the messages generated by this script by running the following:

python datagen.py --sleep 0.01 2>/dev/null | head -n3 | jq -c

You should see something like this:

{"tsString":"2023-03-17T12:10:03.854680Z","uuid":"f3b7b5d3-b352-4cfb-a5e3-527f2c663143","count":690}
{"tsString":"2023-03-17T12:10:03.864815Z","uuid":"eac57622-4b58-4456-bb38-96d1ef5a1ed5","count":522}
{"tsString":"2023-03-17T12:10:03.875723Z","uuid":"65926a80-208a-408b-90d0-36cf74c8923a","count":154}

So far, so good. Let’s now ingest this data into Kafka:

python datagen.py --sleep 0.01 2>/dev/null |
jq -cr --arg sep ø '[.uuid, tostring] | join($sep)' |
kcat -P -b localhost:9092 -t events -Kø

Next we’re going to create a Pinot schema and table. First, the schema config:

{
  "schemaName": "events",
  "dimensionFieldSpecs": [{"name": "uuid", "dataType": "STRING"}],
  "metricFieldSpecs": [{"name": "count", "dataType": "INT"}],
  "dateTimeFieldSpecs": [
    {
      "name": "ts",
      "dataType": "TIMESTAMP",
      "format": "1:MILLISECONDS:EPOCH",
      "granularity": "1:MILLISECONDS"
    }
  ]
}

And now, the table config:

{
  "tableName": "events",
  "tableType": "REALTIME",
  "segmentsConfig": {
    "timeColumnName": "ts",
    "schemaName": "events",
    "replication": "1",
    "replicasPerPartition": "1"
  },
  "tableIndexConfig": {
    "loadMode": "MMAP",
    "streamConfigs": {
      "streamType": "kafka",
      "stream.kafka.topic.name": "events",
      "stream.kafka.broker.list": "kafka-lag-blog:9093",
      "stream.kafka.consumer.type": "lowlevel",
      "stream.kafka.consumer.prop.auto.offset.reset": "smallest",
      "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
      "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder",
      "realtime.segment.flush.threshold.rows": "10000000"
    }
  },
  "ingestionConfig": {
    "transformConfigs": [
      {
        "columnName": "ts",
        "transformFunction": "FromDateTime(tsString, 'YYYY-MM-dd''T''HH:mm:ss.SSSSSS''Z''')"
      }
    ]
  },
  "tenants": {},
  "metadata": {}
}

We can create both the table and schema using the AddTable command:

docker run \
  --network lag_blog \
  -v $PWD/config:/config \
  apachepinot/pinot:0.12.0-arm64 AddTable \
  -schemaFile /config/schema.json \
  -tableConfigFile /config/table.json \
  -controllerHost "pinot-controller-lag-blog" \
  -exec

Now let’s call the /consumingSegmentsInfo endpoint to see what’s going on:

curl "http://localhost:9000/tables/events/consumingSegmentsInfo" 2>/dev/null | jq

The output of calling this end point is shown below:

{
  "_segmentToConsumingInfoMap": {
    "events__0__0__20230317T1133Z": [
      {
        "serverName": "Server_172.29.0.4_8098",
        "consumerState": "CONSUMING",
        "lastConsumedTimestamp": 1679052823350,
        "partitionToOffsetMap": {
          "0": "969"
        },
        "partitionOffsetInfo": {
          "currentOffsetsMap": {
            "0": "969"
          },
          "latestUpstreamOffsetMap": {
            "0": "969"
          },
          "recordsLagMap": {
            "0": "0"
          },
          "availabilityLagMsMap": {
            "0": "26"
          }
        }
      }
    ],
    ...
  }
}

If we look under partitionOffsetInfo, we can see what’s going on:

  • currentOffsetsMap is Pinot’s current offset
  • latestUpstreamOffsetMap is Kafka’s offset
  • recordsLagMap is the record lag
  • availabilityLagMsMap is the time lag

This output is a bit unwieldy, so let’s create a bash function to tidy up the output into something that’s easier to consume:

function consuming_info() {
  curl "http://localhost:9000/tables/events/consumingSegmentsInfo" 2>/dev/null |
  jq -rc '[._segmentToConsumingInfoMap | keys[] as $k | (.[$k] | .[] | {
    segment: $k,
    kafka: (.partitionOffsetInfo.latestUpstreamOffsetMap | keys[] as $k | (.[$k])),
    pinot: (.partitionOffsetInfo.currentOffsetsMap | keys[] as $k | (.[$k])),
    recordLag: (.partitionOffsetInfo.recordsLagMap | keys[] as $k | (.[$k])),
    timeLagMs: (.partitionOffsetInfo.availabilityLagMsMap | keys[] as $k | (.[$k]))
  })] | (.[0] | keys_unsorted | @tsv), (.[] | map(.) | @tsv)' |
  column -t
  printf "\n"
}

Let’s call the function:

consuming_info

We’ll see the following output:

Consumer record lag output

Now let’s put it in a script and call the watch command so that it will be refreshed every couple of seconds:

#!/bin/bash

function consuming_info() {
  curl "http://localhost:9000/tables/events/consumingSegmentsInfo" 2>/dev/null |
  jq -rc '[._segmentToConsumingInfoMap | keys[] as $k | (.[$k] | .[] | {
    segment: $k,
    kafka: (.partitionOffsetInfo.latestUpstreamOffsetMap | keys[] as $k | (.[$k])),
    pinot: (.partitionOffsetInfo.currentOffsetsMap | keys[] as $k | (.[$k])),
    recordLag: (.partitionOffsetInfo.recordsLagMap | keys[] as $k | (.[$k])),
    timeLagMs: (.partitionOffsetInfo.availabilityLagMsMap | keys[] as $k | (.[$k]))
  })] | (.[0] | keys_unsorted | @tsv), (.[] | map(.) | @tsv)' |
  column -t
  printf "\n"
}

export -f consuming_info
watch bash -c consuming_info

Give permissions to run it as a script:

chmod u+x watch_consuming_info.sh

And finally, run it:

./watch_consuming_info.sh

This will print out a new table every two seconds. Let’s now make things more interesting by removing the sleep from our ingestion command:

python datagen.py 2>/dev/null |
jq -cr --arg sep ø '[.uuid, tostring] | join($sep)' |
kcat -P -b localhost:9092 -t events -Kø

And now if we look at the watch output:

Apache Pinot Consumer Record Lag

We get some transitory lag, but it generally goes away by the next time the command is run. 

Summary#

I love this feature, and it solves a problem I’ve struggled with when using my datasets. I hope you’ll find it just as useful.

Give it a try, and let us know how you get on. If you have any questions about this feature, feel free to join us on Slack, where we’ll be happy to help you out.

Apache Pinot™ 0.12 - Configurable Time Boundary

· 4 min read
Mark Needham
Mark Needham

Watch the video

The Apache Pinot community recently released version 0.12.0, which has lots of goodies for you to play with. This is the first in a series of blog posts showing off some of the new features in this release.

This post will explore the ability to configure the time boundary when working with hybrid tables.

What is a hybrid table?#

A hybrid table is the term used to describe a situation where we have an offline and a real-time table with the same name. The offline table stores historical data, while the real-time table continuously ingests data from a streaming data platform.

How do you query a hybrid table?#

When you write a query against a hybrid table, the Pinot query engine needs to work out which records to read from the offline table and which to read from the real-time table.

It does this by computing the time boundary, determined by looking at the maximum end time of segments in the offline table and the segment ingestion frequency specified for the offline table.

timeBoundary = <Maximum end time of offline segments> - <Ingestion Frequency>

The ingestion frequency can either be 1 hour or 1 day, so one of these values will be used.

When a query for a hybrid table is received by a Pinot Broker, the broker sends a time boundary annotated version of the query to the offline and real-time tables. Any records from or before the time boundary are read from the offline table; anything greater than the boundary comes from the real-time table.

Apache Pinot computing the time boundary

For example, if we executed the following query:

SELECT count(*)
FROM events

The broker would send the following query to the offline table:

SELECT count(*)
FROM events_OFFLINE
WHERE timeColumn <= $timeBoundary

And the following query to the real-time table:

SELECT count(*)
FROM events_REALTIME
WHERE timeColumn > $timeBoundary

The results of the two queries are merged by the broker before being returned to the client.

So, what’s the problem?#

If we have some overlap in the data in our offline and real-time tables, this approach works well, but if we have no overlap, we will end up with unexpected results.

For example, let’s say that the most recent timestamp in the events offline table is 2023-01-09T18:41:17, our ingestion frequency is 1 hour, and the real-time table has data starting from 2023-01-09T18:41:18.

This will result in a boundary time of 2023-01-09T17:41:17, which means that any records with timestamps between 17:41 and 18:41 will be excluded from query results.

And the solution?#

The 0.12 release sees the addition of the tables/{tableName}/timeBoundary API, which lets us set the time boundary to the maximum end time of all offline segments.

curl -X POST \
  "http://localhost:9000/tables/{tableName}/timeBoundary" \
  -H "accept: application/json"

In this case, that will result in a new boundary time of 2023-01-09T18:41:17, which is exactly what we need.

We’ll then be able to query the events table and have it read the offline table to get all records on or before 2023-01-09T18:41:17 and the real-time table for everything else.

Neat, anything else I should know?#

Something to keep in mind when updating the time boundary is that it’s a one-off operation. It won’t be automatically updated if you add a new, more recent segment to the offline table.

In this scenario, you need to call the tables/{tableName}/timeBoundary API again.

And if you want to revert to the previous behavior where the time boundary is computed by subtracting the ingestion frequency from the latest end time, you can do that too:

curl -X DELETE \
  "http://localhost:9000/tables/{tableName}/timeBoundary" \
  -H "accept: application/json"

Summary#

I love this feature, and it solves a problem I’ve struggled with when using my datasets. I hope you’ll find it just as useful.

Give it a try, and let us know how you get on. If you have any questions about this feature, feel free to join us on Slack, where we’ll be happy to help you out.

Apache Pinot™ 0.11 - Deduplication on Real-Time Tables

· 8 min read
Mark Needham
Mark Needham

Last fall, the Apache Pinot community released version 0.11.0, which has lots of goodies for you to play with.

In this post, we’re going to learn about the deduplication feature for real-time tables.

Why do we need deduplication on real-time tables?#

This feature was built to deal with duplicate data in the streaming platform. 

Users have previously used the upsert feature to de-duplicate data, but this has the following limitations:

  • It forces us to keep redundant records that we don’t want to keep, which increases overall storage costs.
  • We can’t yet use the StarTree index with upserts, so the speed benefits we get from using that indexing technique are lost.

How does dedup differ from upserts?#

Both upserts and dedup keep track of multiple documents that have the same primary key. They differ as follows:

  • Upserts are used when we want to get the latest copy of a document for a given primary key. It’s likely that some or all of the other fields will be different. Pinot stores all documents it receives, but when querying it will only return the latest document for each primary key.
  • Dedup is used when we know that multiple documents with the same primary key are identical. Only the first event received for a given primary key is stored in Pinot—any future events with the same primary key are thrown away.

Let’s see how to use this functionality with help from a worked example.

Setting up Apache Kafka and Apache Pinot#

We’re going to spin up Kafka and Pinot using the following Docker Compose config:

version: "3"
services:
  zookeeper:
    image: zookeeper:3.8.0
    hostname: zookeeper
    container_name: zookeeper-dedup-blog
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    networks:
      - dedup_blog
  kafka:
    image: wurstmeister/kafka:latest
    restart: unless-stopped
    container_name: "kafka-dedup-blog"
    ports:
      - "9092:9092"
    expose:
      - "9093"
    depends_on:
      - zookeeper
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper-dedup-blog:2181/kafka
      KAFKA_BROKER_ID: 0
      KAFKA_ADVERTISED_HOST_NAME: kafka-dedup-blog
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-dedup-blog:9093,OUTSIDE://localhost:9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9093,OUTSIDE://0.0.0.0:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,OUTSIDE:PLAINTEXT
    networks:
      - dedup_blog
  pinot-controller:
    image: apachepinot/pinot:0.11.0-arm64
    command: "QuickStart -type EMPTY"
    container_name: "pinot-controller-dedup-blog"
    volumes:
      - ./config:/config
    restart: unless-stopped
    ports:
      - "9000:9000"
    networks:
      - dedup_blog
networks:
  dedup_blog:
    name: dedup_blog

We can spin up our infrastructure using the following command:

docker-compose up

Data Generation#

Let’s imagine that we want to ingest events generated by the following Python script:

import datetime
import uuid
import random
import json

while True:
    ts = datetime.datetime.now().strftime("%Y-%m-%dT%H:%M:%S.%fZ")
    id = str(uuid.uuid4())
    count = random.randint(0, 1000)
    print(
        json.dumps({"tsString": ts, "uuid": id[:3], "count": count})
    )

We can view the data generated by this script by pasting the above code into a file called datagen.py and then running the following command:

python datagen.py 2>/dev/null | head -n3 | jq

We’ll see the following output:

{
  "tsString": "2023-01-03T10:59:17.355081Z",
  "uuid": "f94",
  "count": 541
}
{
  "tsString": "2023-01-03T10:59:17.355125Z",
  "uuid": "057",
  "count": 96
}
{
  "tsString": "2023-01-03T10:59:17.355141Z",
  "uuid": "d7b",
  "count": 288
}

If we generate only 25,000 events, we’ll get some duplicates, which we can see by running the following command:

python datagen.py 2>/dev/null  | jq -r '.uuid' | head -n25000 | uniq -cd

The results of running that command are shown below:

2 3a2
2 a04
2 433
2 291
2 d73

We’re going to pipe this data into a Kafka stream called events, like this:

python datagen.py 2>/dev/null |
jq -cr --arg sep 😊 '[.uuid, tostring] | join($sep)' |
kcat -P -b localhost:9092 -t events -K😊

The construction of the key/value structure comes from Robin Moffatt’s excellent blog post. Since that blog post was written, kcat has added support for multi-byte separators, which is why we can use a smiley face to separate our key and value.

Pinot Schema/Table Config#

Next, we’re going to create a Pinot table and schema with the same name. Let’s first define a schema:

{
  "schemaName": "events",
  "dimensionFieldSpecs": [{"name": "uuid", "dataType": "STRING"}],
  "metricFieldSpecs": [{"name": "count", "dataType": "INT"}],
  "dateTimeFieldSpecs": [
    {
      "name": "ts",
      "dataType": "TIMESTAMP",
      "format": "1:MILLISECONDS:EPOCH",
      "granularity": "1:MILLISECONDS"
    }
  ]
}

Note that the timestamp field is called ts and not tsString, as it is in the Kafka stream. We’re going to transform the DateTime string value held in that field into a proper timestamp using a transformation function. 

Our table config is described below:

{
  "tableName": "events",
  "tableType": "REALTIME",
  "segmentsConfig": {
    "timeColumnName": "ts",
    "schemaName": "events",
    "replication": "1",
    "replicasPerPartition": "1"
  },
  "tableIndexConfig": {
    "loadMode": "MMAP",
    "streamConfigs": {
      "streamType": "kafka",
      "stream.kafka.topic.name": "events",
      "stream.kafka.broker.list": "kafka-dedup-blog:9093",
      "stream.kafka.consumer.type": "lowlevel",
      "stream.kafka.consumer.prop.auto.offset.reset": "smallest",
      "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
      "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder"
    }
  },
  "ingestionConfig": {
    "transformConfigs": [
      {
        "columnName": "ts",
        "transformFunction": "FromDateTime(tsString, 'YYYY-MM-dd''T''HH:mm:ss.SSSSSS''Z''')"
      }
    ]
  },
  "tenants": {},
  "metadata": {}
}

Let’s create the table using the following command:

docker run \
  --network dedup_blog \
  -v $PWD/pinot/config:/config \
  apachepinot/pinot:0.11.0-arm64 AddTable \
    -schemaFile /config/schema.json \
    -tableConfigFile /config/table.json \
    -controllerHost "pinot-controller-dedup-blog" \
    -exec

Now we can navigate to http://localhost:9000 and run a query that will return a count of the number of each uuid:

select uuid, count(*)
from events
group by uuid
order by count(*)
limit 10

The results of this query are shown below:

Sample Apache Pinot real-time query response stats including duplicates

We can see loads of duplicates! 

Now let’s add a table and schema that uses the de-duplicate feature, starting with the schema:

{
  "schemaName": "events_dedup",
  "primaryKeyColumns": ["uuid"],
  "dimensionFieldSpecs": [{"name": "uuid", "dataType": "STRING"}],
  "metricFieldSpecs": [{"name": "count", "dataType": "INT"}],
  "dateTimeFieldSpecs": [
    {
      "name": "ts",
      "dataType": "TIMESTAMP",
      "format": "1:MILLISECONDS:EPOCH",
      "granularity": "1:MILLISECONDS"
    }
  ]
}

The main difference between this schema and the events schema is that we need to specify a primary key. This key can be any number of fields, but in this case, we’re only using the uuid field.

Next, the table config:

{
  "tableName": "events_dedup",
  "tableType": "REALTIME",
  "segmentsConfig": {
    "timeColumnName": "ts",
    "schemaName": "events_dedup",
    "replication": "1",
    "replicasPerPartition": "1"
  },
  "tableIndexConfig": {
    "loadMode": "MMAP",
    "streamConfigs": {
      "streamType": "kafka",
      "stream.kafka.topic.name": "events",
      "stream.kafka.broker.list": "kafka-dedup-blog:9093",
      "stream.kafka.consumer.type": "lowlevel",
      "stream.kafka.consumer.prop.auto.offset.reset": "smallest",
      "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
      "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder"
    }
  },
  "routing": {"instanceSelectorType": "strictReplicaGroup"},
  "dedupConfig": {"dedupEnabled": true, "hashFunction": "NONE"},
  "ingestionConfig": {
    "transformConfigs": [
      {
        "columnName": "ts",
        "transformFunction": "FromDateTime(tsString, 'YYYY-MM-dd''T''HH:mm:ss.SSSSSS''Z''')"
      }
    ]
  },
  "tenants": {},
  "metadata": {}
}

The changes to notice here are:

  • "dedupConfig": {"dedupEnabled": true, "hashFunction": "NONE"} - This enables the feature and indicates that we won’t use a hash function on our primary key.
  • "routing": {"instanceSelectorType": "strictReplicaGroup"} - This makes sure that all segments of the same partition are served from the same server to ensure data consistency across the segments. 
We can create the table using the same AddTable command, this time pointing at the dedup config files:

docker run \
  --network dedup_blog \
  -v $PWD/pinot/config:/config \
  apachepinot/pinot:0.11.0-arm64 AddTable \
    -schemaFile /config/schema-dedup.json \
    -tableConfigFile /config/table-dedup.json \
    -controllerHost "pinot-controller-dedup-blog" \
    -exec

Now let's re-run our grouping query, this time against events_dedup:

select uuid, count(*)
from events_dedup
group by uuid
order by count(*)
limit 10

Sample Apache Pinot real-time query response stats deduplicated

We have every combination of hex values (16^3=4096) and no duplicates! Pinot’s de-duplication feature has done its job.

How does it work? #

When we’re not using the deduplication feature, events are ingested from our streaming platform into Pinot, as shown in the diagram below:

Events ingested from a streaming platform into Apache Pinot without using the deduplication feature

When de-dup is enabled, we have to check whether records can be ingested, as shown in the diagram below:

Events ingested from a streaming platform into Apache Pinot using the deduplication feature

De-dup works out whether a primary key has already been ingested by using an in-memory map of (primary key -> corresponding segment reference).
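Conceptually, the check works something like the following minimal Python sketch. This is an illustration of the idea only, not Pinot’s actual implementation (which is in Java and operates per partition):

# Hypothetical sketch of the dedup check performed when a record arrives.
dedup_map = {}  # primary key -> segment reference

def should_ingest(primary_key, segment_ref):
    """Return True if this primary key hasn't been seen before."""
    if primary_key in dedup_map:
        return False  # duplicate primary key, so the record is dropped
    dedup_map[primary_key] = segment_ref
    return True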

We need to take the size of this map into account when using the feature; otherwise, we’ll end up using all the available memory on the Pinot Server. Below are some tips for using this feature:

  • Try to use a simple primary key type and avoid composite keys. If you don’t have a simple primary key, consider using one of the available hash functions to reduce the space taken up, as shown in the sketch after this list.
  • Create more partitions in the streaming platform than you might otherwise create. The number of partitions determines the partition numbers of the Pinot table. The more partitions you have in the streaming platform, the more Pinot servers you can distribute the Pinot table to, and the more horizontally scalable the table will be.
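For example, here is a hedged sketch of what the second tip might look like in the table config, assuming MD5 is one of the supported hash function values alongside NONE:

"dedupConfig": {"dedupEnabled": true, "hashFunction": "MD5"}

With a hash function set, a hash of the primary key is stored in the map rather than the raw key, trading a little CPU for a smaller memory footprint.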

Summary#

This feature makes it easier to ensure that we don’t end up with duplicate data in our Apache Pinot estate. 

So give it a try and let us know how you get on. If you have any questions about this feature, feel free to join us on Slack, where we’ll be happy to help you out.

And if you’re interested in how this feature was implemented, you can look at the pull request on GitHub.

Apache Pinot™ 0.11 - Pausing Real-Time Ingestion

· 7 min read
Mark Needham
Mark Needham

Watch the video

The Apache Pinot community recently released version 0.11.0, which has lots of goodies for you to play with.

In this post, we will learn about a feature that lets you pause and resume real-time data ingestion. Sajjad Moradi has also written a blog post about this feature, so you can treat this post as a complement to that one.

How does real-time ingestion work?#

Before we get into this feature, let’s first recap how real-time ingestion works.

This only applies to tables that have the REALTIME type. These tables ingest data that comes in from a streaming platform (e.g., Kafka). 

Pinot servers ingest rows into consuming segments that reside in volatile memory. 

Once a segment reaches the segment threshold, it will be persisted to disk as a completed segment, and a new consuming segment will be created. This new segment takes over the ingestion of new events from the streaming platform.
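For reference, the segment threshold is controlled through properties in the table’s streamConfigs. Below is a hedged sketch with illustrative values; the property names follow Pinot’s stream config conventions, and the right values depend on your workload:

"streamConfigs": {
  "realtime.segment.flush.threshold.rows": "1000000",
  "realtime.segment.flush.threshold.time": "6h"
}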

The diagram below shows what things might look like when we’re ingesting data from a Kafka topic that has 3 partitions:

Apache pinot 0.11 Real Time Data Ingestion

A table has one consuming segment per partition but would have many completed segments.

Why do we need to pause and resume ingestion?#

There are many reasons why you might want to pause and resume ingestion of a stream. Some of the common ones are described below:

  • There’s a problem with the underlying stream, and we need to restart the server, reset offsets, or recreate a topic.
  • We want to ingest data from different streams into the same table.
  • We made a mistake in our ingestion config in Pinot, and it’s now throwing exceptions and isn’t able to ingest any more data.

The 0.11 release adds the following REST API endpoints:

  • /tables/{tableName}/pauseConsumption
  • /tables/{tableName}/resumeConsumption

As the names suggest, these endpoints can be used to pause and resume streaming ingestion for a specific table. This release also adds the /tables/{tableName}/pauseStatus endpoint, which returns the pause status for a table.

Let’s see how to use this functionality with help from a worked example.

Data Generation#

Let’s imagine that we want to ingest events generated by the following Python script:

import datetime
import uuid
import random
import json

while True:
    ts = datetime.datetime.now().strftime("%Y-%m-%dT%H:%M:%S.%fZ")
    id = str(uuid.uuid4())
    count = random.randint(0, 1000)
    print(json.dumps({"tsString": ts, "uuid": id, "count": count}))

We can view the data generated by this script by pasting the above code into a file called datagen.py and then running the following command:

python datagen.py 2>/dev/null | head -n3 | jq

We’ll see the following output:

{
  "tsString": "2022-11-23T12:08:44.127481Z",
  "uuid": "e1c58795-a009-4e21-ae76-cdd66e090797",
  "count": 203
}
{
  "tsString": "2022-11-23T12:08:44.127531Z",
  "uuid": "4eedce04-d995-4e99-82ab-6f836b35c580",
  "count": 216
}
{
  "tsString": "2022-11-23T12:08:44.127550Z",
  "uuid": "6d72411b-55f5-4f9f-84e4-7c8c5c4581ff",
  "count": 721
}

We’re going to pipe this data into a Kafka stream called ‘events’ like this:

python datagen.py | kcat -P -b localhost:9092 -t events

We’re not setting a key for these messages in Kafka for simplicity’s sake, but Robin Moffatt has an excellent blog post that explains how to do it.

Pinot Schema/Table Config#

We want to ingest this data into a Pinot table with the same name. Let’s first define a schema:


{
  "schemaName": "events",
  "dimensionFieldSpecs": [{"name": "uuid", "dataType": "STRING"}],
  "metricFieldSpecs": [{"name": "count", "dataType": "INT"}],
  "dateTimeFieldSpecs": [
    {
      "name": "ts",
      "dataType": "TIMESTAMP",
      "format": "1:MILLISECONDS:EPOCH",
      "granularity": "1:MILLISECONDS"
    }
  ]
}

Note that the timestamp field is called ts and not tsString, as it is in the Kafka stream. We will transform the DateTime string value held in that field into a proper timestamp using a transformation function. 

Our table config is described below:

{
  "tableName": "events",
  "tableType": "REALTIME",
  "segmentsConfig": {
    "timeColumnName": "ts",
    "schemaName": "events",
    "replication": "1",
    "replicasPerPartition": "1"
  },
  "tableIndexConfig": {
    "loadMode": "MMAP",
    "streamConfigs": {
      "streamType": "kafka",
      "stream.kafka.topic.name": "events",
      "stream.kafka.broker.list": "kafka-pause-resume:9093",
      "stream.kafka.consumer.type": "lowlevel",
      "stream.kafka.consumer.prop.auto.offset.reset": "smallest",
      "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
      "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder"
    }
  },
  "ingestionConfig": {
    "transformConfigs": [
      {
        "columnName": "ts",
        "transformFunction": "FromDateTime(tsString, 'YYYY-MM-dd''T''HH:mm:ss.SS''Z''')"
      }
    ]
  },
  "tenants": {},
  "metadata": {}
}

Our transformation has a subtle error. The second parameter passed to the FromDateTime function describes the format of the DateTime string, which we defined as:

YYYY-MM-dd''T''HH:mm:ss.SS''Z''

But tsString has values in the following format:

2022-11-23T12:08:44.127550Z

i.e., we don’t have enough S values - there should be six rather than two. 

If we create the table using the following command:

docker run \
  --network pause-resume \
  -v $PWD/pinot/config:/config \
  apachepinot/pinot:0.11.0-arm64 AddTable \
  -schemaFile /config/schema.json \
  -tableConfigFile /config/table.json \
  -controllerHost pinot-controller-pause-resume \
  -exec

Pinot will immediately start trying to ingest data from Kafka, and it will throw a lot of exceptions that look like this:

java.lang.RuntimeException: Caught exception while executing function: fromDateTime(tsString,'YYYY-MM-dd'T'HH:mm:ss.SS'Z'')
Caused by: java.lang.IllegalStateException: Caught exception while invoking method: public static long org.apache.pinot.common.function.scalar.DateTimeFunctions.fromDateTime(java.lang.String,java.lang.String) with arguments: [2022-11-23T11:12:34.682504Z, YYYY-MM-dd'T'HH:mm:ss.SS'Z']

At this point, we’d usually be stuck and would need to fix the transformation function and then restart the Pinot server. With the pause/resume feature, we can fix this problem without resorting to such drastic measures. 

The Pause/Resume Flow#

Instead, we can follow these steps:

  • Pause ingestion for the table
  • Fix the transformation function
  • Resume ingestion
  • Profit $$$

We can pause ingestion by running the following command:

curl -X POST \
  "http://localhost:9000/tables/events/pauseConsumption" \
  -H "accept: application/json"

The response should be something like this:

{
  "pauseFlag": true,
  "consumingSegments": [
    "events__0__0__20221123T1106Z"
  ],
  "description": "Pause flag is set. Consuming segments are being committed. Use /pauseStatus endpoint in a few moments to check if all consuming segments have been committed."
}

Let’s follow the response’s advice and check the consuming segments status:

curl -X GET \
  "http://localhost:9000/tables/events/pauseStatus" \
  -H "accept: application/json"

We’ll see the following response:

{
  "pauseFlag": true,
  "consumingSegments": []
}

So far, so good. Now we need to fix the table. We have a config, table-fixed.json, that contains a working transformation config. These are the lines of interest:

{
  "ingestionConfig": {
    "transformConfigs": [
      {
        "columnName": "ts",
        "transformFunction": "FromDateTime(tsString, 'YYYY-MM-dd''T''HH:mm:ss.SSSSSS''Z''')"
      }
    ]
  }
}

We now have six S values rather than two, which should sort out our ingestion.

Update the table config:

curl -X PUT "http://localhost:9000/tables/events" \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -d @pinot/config/table-fixed.json

And then resume ingestion. You can pass in the query string parameter consumeFrom, which takes a value of smallest or largest. We’ll pass in smallest since no data has been consumed yet:

curl -X POST \
  "http://localhost:9000/tables/events/resumeConsumption?consumeFrom=smallest" \
  -H "accept: application/json"

The response will be like this:

{
  "pauseFlag": false,
  "consumingSegments": [],
  "description": "Pause flag is cleared. Consuming segments are being created. Use /pauseStatus endpoint in a few moments to double check."
}

Again, let’s check the consuming segments status:

curl -X GET \
  "http://localhost:9000/tables/events/pauseStatus" \
  -H "accept: application/json"

This time we will see some consuming segments:

{
  "pauseFlag": false,
  "consumingSegments": [
    "events__0__22__20221123T1124Z"
  ]
}

Navigate to http://localhost:9000/#/query and click on the events table. You should see the following:

Sample events table containing records

We have records! We can also run our data generator again, and more events will be ingested.

Summary#

This feature makes real-time data ingestion a bit more forgiving when things go wrong, which has got to be a good thing in my book.

When you look at the name of this feature, it can seem a bit esoteric and perhaps not something that you’d want to use, but I think you’ll find it to be extremely useful.

So give it a try and let us know how you get on. If you have any questions about this feature, feel free to join us on Slack, where we’ll be happy to help you out.

Apache Pinot™ 0.11 - Timestamp Indexes

· 8 min read
Mark Needham
Mark Needham

Watch the video

The recent Apache Pinot™ 0.11.0 release has lots of goodies for you to play with. This is the third in a series of blog posts showing off some of the new features in this release.

Pinot introduced the TIMESTAMP data type in the 0.8 release, which stores the time in millisecond epoch long format internally. The community feedback has been that the queries they’re running against timestamp columns don’t need this low-level granularity. 

Instead, users write queries that use the datetrunc function to filter at a coarser grain of functionality. Unfortunately, this approach results in scanning data and time value conversion work that takes a long time at large data volumes.

The timestamp index solves that problem! In this blog post, we’ll use it to get an almost 4x query speed improvement on a relatively small dataset of only 7m rows.

Time in milliseconds with and without timestamp indexes bar chart

Spinning up Pinot#

We’re going to be using the Pinot Docker container, but first, we’re going to create a network, as we’ll need that later on:

docker network create timestamp_blog

We’re going to spin up the empty QuickStart in a container named pinot-timestamp-blog:

docker run \
  -p 8000:8000 \
  -p 9000:9000 \
  --name pinot-timestamp-blog \
  --network timestamp_blog \
  apachepinot/pinot:0.11.0 \
  QuickStart -type EMPTY

Or if you’re on a Mac M1, change the image tag to use the arm64 suffix, like this:

docker run \
  -p 8000:8000 \
  -p 9000:9000 \
  --network timestamp_blog \
  --name pinot-timestamp-blog \
  apachepinot/pinot:0.11.0-arm64 \
  QuickStart -type EMPTY

Once that’s up and running, we’ll be able to access the Pinot Data Explorer at http://localhost:9000, but at the moment, we don’t have any data to play with.

Importing Chicago Crime Dataset#

The Chicago Crime dataset is a small to medium-sized dataset with 7 million records representing reported crimes in the City of Chicago from 2001 until today.

It contains details of the type of crime, where it was committed, whether an arrest was recorded, which beat it occurred on, and more.

Each of the crimes has an associated timestamp, which makes it a good dataset to demonstrate timestamp indexes.

You can find the code used in this blog post in the Analyzing Chicago Crimes recipe of the Pinot Recipes GitHub repository. From here on, I’m assuming that you’ve downloaded this repository and are in the recipes/analyzing-chicago-crimes directory.

We’re going to create a schema and table named crimes by running the following command:  

docker run \
  --network timestamp_blog \
  -v $PWD/config:/config \
  apachepinot/pinot:0.11.0-arm64 AddTable \
  -schemaFile /config/schema.json \
  -tableConfigFile /config/table.json \
  -controllerHost pinot-timestamp-blog \
  -exec

We should see the following output: 

2022/11/03 13:07:57.169 INFO [AddTableCommand] [main] {"unrecognizedProperties":{},"status":"TableConfigs crimes successfully added"}

A screenshot of the schema is shown below:

Chicago crime dataset table schema

We won’t go through the table config and schema files in this blog post because we did that in the last post, but you can find them in the config directory on GitHub. 

Now, let’s import the dataset. 

docker run \
  --network timestamp_blog \
  -v $PWD/config:/config \
  -v $PWD/data:/data \
  apachepinot/pinot:0.11.0-arm64 LaunchDataIngestionJob \
  -jobSpecFile /config/job-spec.yml \
  -values controllerHost=pinot-timestamp-blog

It will take a few minutes to load, but once that command has finished, we’re ready to query the crimes table.

Querying crimes by date#

The following query finds the number of crimes that happened after 16th January 2017, grouped by week of the year, with the most crime-filled weeks shown first:

select datetrunc('WEEK', DateEpoch) as tsWeek, count(*)
from crimes
WHERE tsWeek > fromDateTime('2017-01-16', 'yyyy-MM-dd')
group by tsWeek
order by count(*) DESC
limit 10

If we run that query, we’ll see the following results:

Chicago crime dataset query result

And, if we look above the query result, there’s metadata about the query, including the time that it took to run.

Chicago crime dataset metadata about the query, including the time that it took to run

The query took 141 ms to execute, so that’s our baseline.

Adding the timestamp index#

We could add a timestamp index directly to this table and then compare query performance, but to make it easier to do comparisons, we’re going to create an identical table with the timestamp index applied. 

The full table config is available in the config/table-index.json file, and the main change is that we’ve added the following section to add a timestamp index on the DateEpoch column:

"fieldConfigList": [  {    "name": "DateEpoch",    "encodingType": "DICTIONARY",    "indexTypes": ["TIMESTAMP"],    "timestampConfig": {      "granularities": [        "DAY",        "WEEK",        "MONTH"      ]    }  }],

encodingType will always be ‘DICTIONARY’ and indexTypes must contain ‘TIMESTAMP’. We should specify granularities based on our query patterns.

As a rule of thumb, work out which values you most commonly pass as the first argument to the datetrunc function in your queries and include those values.

The full list of valid granularities is: millisecond, second, minute, hour, day, week, month, quarter, and year.

Our new table is called crimes_indexed, and we’re also going to create a new schema with all the same columns called crimes_indexed, as Pinot requires the table and schema names to match.

We can create the schema and table by running the following command:

docker run \
  --network timestamp_blog \
  -v $PWD/config:/config \
  apachepinot/pinot:0.11.0-arm64 AddTable \
  -schemaFile /config/schema-index.json \
  -tableConfigFile /config/table-index.json \
  -controllerHost pinot-timestamp-blog \
  -exec

We’ll populate that table by copying the segment that we created earlier for the crimes table. 

docker run \
  --network timestamp_blog \
  -v $PWD/config:/config \
  -v $PWD/data:/data \
  apachepinot/pinot:0.11.0-arm64 LaunchDataIngestionJob \
  -jobSpecFile /config/job-spec-download.yml \
  -values controllerHost=pinot-timestamp-blog

If you’re curious how that job spec works, I wrote a blog post explaining it in a bit more detail.

Once the Pinot Server has downloaded this segment, it will apply the timestamp index to the DateEpoch column. 

For the curious, we can see this happening in the log files by connecting to the Pinot container and running the following grep command:

docker exec -iti pinot-timestamp-blog \
  grep -rni -A10 "Successfully downloaded segment:.*crimes_indexed_OFFLINE.*" \
  logs/pinot-all.log

We’ll see something like the following (tidied up for brevity):

[BaseTableDataManager] Successfully downloaded segment: crimes_OFFLINE_0 of table: crimes_indexed_OFFLINE to index dir: /tmp/1667490598253/quickstart/PinotServerDataDir0/crimes_indexed_OFFLINE/crimes_OFFLINE_0
[V3DefaultColumnHandler] Starting default column action: ADD_DATE_TIME on column: $DateEpoch$DAY
[SegmentDictionaryCreator] Created dictionary for LONG column: $DateEpoch$DAY with cardinality: 7969, range: 978307200000 to 1666742400000
[V3DefaultColumnHandler] Starting default column action: ADD_DATE_TIME on column: $DateEpoch$WEEK
[SegmentDictionaryCreator] Created dictionary for LONG column: $DateEpoch$WEEK with cardinality: 1139, range: 978307200000 to 1666569600000
[V3DefaultColumnHandler] Starting default column action: ADD_DATE_TIME on column: $DateEpoch$MONTH
[SegmentDictionaryCreator] Created dictionary for LONG column: $DateEpoch$MONTH with cardinality: 262, range: 978307200000 to 1664582400000
[RangeIndexHandler] Creating new range index for segment: crimes_OFFLINE_0, column: $DateEpoch$DAY
[RangeIndexHandler] Created range index for segment: crimes_OFFLINE_0, column: $DateEpoch$DAY
[RangeIndexHandler] Creating new range index for segment: crimes_OFFLINE_0, column: $DateEpoch$WEEK
[RangeIndexHandler] Created range index for segment: crimes_OFFLINE_0, column: $DateEpoch$WEEK

What does a timestamp index do?#

So, the timestamp index has now been created, but what does it actually do?

When we add a timestamp index on a column, Pinot creates a derived column for each granularity and adds a range index for each new column.

In our case, that means we’ll have these extra columns: $DateEpoch$DAY, $DateEpoch$WEEK, and $DateEpoch$MONTH. 

We can check if the extra columns and indexes have been added by navigating to the segment page and typing $DateEpoch$ in the search box. You should see the following:

Apache Pinot timestamp index on a column

These columns will be assigned the following values:

  • $DateEpoch$DAY = dateTrunc('DAY', DateEpoch)
  • $DateEpoch$WEEK = dateTrunc('WEEK', DateEpoch)
  • $DateEpoch$MONTH = dateTrunc('MONTH', DateEpoch)

Pinot will also rewrite any queries that use the dateTrunc function with DAY, WEEK, or MONTH and the DateEpoch field to use those new columns.

This means that this query:

select datetrunc('WEEK', DateEpoch) as tsWeek, count(*)
from crimes_indexed
GROUP BY tsWeek
limit 10

Would be rewritten as:

select $DateEpoch$WEEK as tsWeek, count(*)
from crimes_indexed
GROUP BY tsWeek
limit 10

And our query:

select datetrunc('WEEK', DateEpoch) as tsWeek, count(*)
from crimes
WHERE tsWeek > fromDateTime('2017-01-16', 'yyyy-MM-dd')
group by tsWeek
order by count(*) DESC
limit 10

Would be rewritten as:

select $DateEpoch$WEEK as tsWeek, count(*)
from crimes
WHERE tsWeek > fromDateTime('2017-01-16', 'yyyy-MM-dd')
group by tsWeek
order by count(*) DESC
limit 10

Re-running the query#

Let’s now run our initial query against the crimes_indexed table. We’ll get exactly the same results as before, but let’s take a look at the query stats:

Chicago crime dataset updated query stats

This time the query takes 36 milliseconds rather than 141 milliseconds. That’s an almost 4x improvement, thanks to the timestamp index.

Summary#

Hopefully, you’ll agree that timestamp indexes are pretty cool, and achieving an almost 4x query improvement without much work is always welcome!

If you’re using timestamps in your Pinot tables, be sure to try out this index and let us know how it goes on the StarTree Community Slack. We’re always happy to help out with any questions or problems you encounter.

Apache Pinot™ 0.11 - Inserts from SQL

· 4 min read
Mark Needham
Mark Needham

The Apache Pinot community recently released version 0.11.0, which has lots of goodies for you to play with. This is the second in a series of blog posts showing off some of the new features in this release.

In this post, we’re going to explore the INSERT INTO clause, which makes ingesting batch data into Pinot as easy as writing a SQL query.

Batch importing: The Job Specification#

The power of this new clause is only fully appreciated if we look at what we had to do before it existed. 

In the Batch Import JSON from Amazon S3 into Apache Pinot | StarTree Recipes video (and accompanying developer guide), we showed how to ingest data into Pinot from an S3 bucket.

The contents of that bucket are shown in the screenshot below:

Sample data ingested into Apache Pinot from a S3 bucket

Let’s quickly recap the steps that we had to do to import those files into Pinot. We have a table called events, which has the following schema:

Events schema table

We first created a job specification file, which contains a description of our import job. The job file is shown below:

executionFrameworkSpec:
  name: 'standalone'
  segmentGenerationJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner'
  segmentTarPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentTarPushJobRunner'
  segmentUriPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentUriPushJobRunner'
jobType: SegmentCreationAndTarPush
inputDirURI: 's3://marks-st-cloud-bucket/events/'
includeFileNamePattern: 'glob:**/*.json'
outputDirURI: '/data'
overwriteOutput: true
pinotFSSpecs:
  - scheme: s3
    className: org.apache.pinot.plugin.filesystem.S3PinotFS
    configs:
      region: 'eu-west-2'
  - scheme: file
    className: org.apache.pinot.spi.filesystem.LocalPinotFS
recordReaderSpec:
  dataFormat: 'json'
  className: 'org.apache.pinot.plugin.inputformat.json.JSONRecordReader'
tableSpec:
  tableName: 'events'
pinotClusterSpecs:
  - controllerURI: 'http://${PINOT_CONTROLLER}:9000'

At a high level, this file describes a batch import job that will ingest files from the S3 bucket at s3://marks-st-cloud-bucket/events/ where the files match the glob:**/*.json pattern.

We can import the data by running the following command from the terminal:

docker run \
  --network ingest-json-files-s3 \
  -v $PWD/config:/config \
  -e AWS_ACCESS_KEY_ID=AKIARCOCT6DWLUB7F77Z \
  -e AWS_SECRET_ACCESS_KEY=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 \
  apachepinot/pinot:0.11.0 LaunchDataIngestionJob \
  -jobSpecFile /config/job-spec.yml \
  -values PINOT_CONTROLLER=pinot-controller

And don’t worry, those credentials have already been deleted; I find it easier to understand what values go where if we use real values. 

Once we’ve run this command, if we go to the Pinot UI at http://localhost:9000 and click through to the events table from the Query Console menu, we’ll see that the records have been imported, as shown in the screenshot below:

Sample imported records shown in the Apache Pinot Query Console menu

This approach works, and we may still prefer to use it when we need fine-grained control over the ingestion parameters, but it is a bit heavyweight for your everyday data import!

Batch Importing with SQL#

Now let’s do the same thing in SQL.

There are some prerequisites to using the SQL approach, so let’s go through those now, so you don’t end up with a bunch of exceptions when you try this out! 

First of all, you must have a Minion in the Pinot cluster, as this is the component that will do the data import.

You’ll also need to include the following in your table config:

"task": {  "taskTypeConfigsMap": { "SegmentGenerationAndPushTask": {} }}

As long as you’ve done those two things, we’re ready to write our import query! A query that imports JSON files from my S3 bucket is shown below:

INSERT INTO events
FROM FILE 's3://marks-st-cloud-bucket/events/'
OPTION(
  taskName=events-task,
  includeFileNamePattern=glob:**/*.json,
  input.fs.className=org.apache.pinot.plugin.filesystem.S3PinotFS,
  input.fs.prop.accessKey=AKIARCOCT6DWLUB7F77Z,
  input.fs.prop.secretKey=gfz71RX+Tj4udve43YePCBqMsIeN1PvHXrVFyxJS,
  input.fs.prop.region=eu-west-2
);

If we run this query, we’ll see the following output:

Sample events_OFFLINE query result

We can check on the state of the ingestion job via the Swagger REST API. If we navigate to http://localhost:9000/help#/Task/getTaskState, paste Task_SegmentGenerationAndPushTask_events-task as our task name, and then click Execute, we’ll see the following:

Checking the state of an ingestion job screen

If we see the state COMPLETED, this means the data has been ingested, which we can check by going back to the Query console and clicking on the events table.
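If you’d rather stay in the terminal, the same check can be made against the controller’s task API, which backs that Swagger page. This is a sketch assuming the default controller port and the task name used above:

curl -X GET \
  "http://localhost:9000/tasks/task/Task_SegmentGenerationAndPushTask_events-task/state" \
  -H "accept: application/json"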

Summary#

I have to say that batch ingestion of data into Apache Pinot has always felt a bit clunky, but with this new clause, it’s super easy, and it’s gonna save us all a bunch of time.

Also, anything that means I’m not writing YAML files has got to be a good thing!

So give it a try and let us know how you get on. If you have any questions about this feature, feel free to join us on Slack, where we’ll be happy to help you out.

Apache Pinot™ 0.11 - How do I see my indexes?

· 4 min read
Mark Needham
Mark Needham

We recently released Pinot 0.11.0, which has lots of goodies for you to play with. This is the first in a series of blog posts showing off some of the new features in this release.

A common question from the community is: how can you work out which indexes are currently defined on a Pinot table? This information has always been available via the REST API, but sometimes you simply want to see it on the UI and not have to parse your way through a bunch of JSON. Let's see how it works!

Spinning up Pinot#

We’re going to spin up the Batch QuickStart in Docker using the following command:

docker run \
  -p 8000:8000 \
  -p 9000:9000 \
  apachepinot/pinot:0.11.0 \
  QuickStart -type BATCH

Or if you’re on a Mac M1, change the image tag to use the arm64 suffix, like this:

docker run \
  -p 8000:8000 \
  -p 9000:9000 \
  apachepinot/pinot:0.11.0-arm64 \
  QuickStart -type BATCH

Once that’s up and running, navigate to http://localhost:9000/#/ and click on Tables. Under the tables section click on airlineStats_OFFLINE. You should see a page that looks like this:

airlineStats_OFFLINE page

Click on Edit Table. This will show a window with the config for this table.

Window with configuration for airlineStats_OFFLINE table

Indexing Config#

We’re interested in the tableIndexConfig and fieldConfigList sections. These sections are responsible for defining indexes, which are applied to a table on a per segment basis. 

  • tableIndexConfig is responsible for inverted, JSON, range, Geospatial, and StarTree indexes.
  • fieldConfigList is responsible for timestamp and text indexes.

tableIndexConfig is defined below:

"tableIndexConfig": {  "rangeIndexVersion": 2,  "autoGeneratedInvertedIndex": false,  "createInvertedIndexDuringSegmentGeneration": false,  "loadMode": "MMAP",  "enableDefaultStarTree": false,  "enableDynamicStarTreeCreation": false,  "aggregateMetrics": false,  "nullHandlingEnabled": false,  "optimizeDictionaryForMetrics": false,  "noDictionarySizeRatioThreshold": 0},

From reading this config we learn that no indexes have been explicitly defined.

Now for fieldConfigList, which is defined below:

"fieldConfigList": [  {    "name": "ts",    "encodingType": "DICTIONARY",    "indexType": "TIMESTAMP",    "indexTypes": [      "TIMESTAMP"    ],    "timestampConfig": {      "granularities": [        "DAY",        "WEEK",        "MONTH"      ]    }  }],

From reading this config we learn that a timestamp index is being applied to the ts column. It is applied at DAY, WEEK, and MONTH granularities, which means that the derived columns $ts$DAY, $ts$WEEK, and $ts$MONTH will be created for the segments in this table.
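To see why this matters, a hypothetical query like the one below would be rewritten by Pinot to read from the derived $ts$WEEK column instead of computing datetrunc over every row:

select datetrunc('WEEK', ts) as tsWeek, count(*)
from airlineStats
group by tsWeek
limit 10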

Viewing Indexes#

Now, close the table config modal, and under the segments section, open airlineStats_OFFLINE_16071_16071_0 and airlineStats_OFFLINE_16073_16073_0 in new tabs.

If you look at one of those segments, you’ll see the following grid that lists columns/field names against the indexes defined on those fields.

Segment grid that lists columns/field names against the indexes defined on those fields

All the fields on display persist their values using the dictionary/forward index format. Still, we can also see that the Quarter column is sorted and has an inverted index, neither of which we explicitly defined.

This is because Pinot will automatically create sorted and inverted indexes for columns whose data is sorted when the segment is created. 

So the data for the Quarter column was sorted, and hence it has a sorted index.

I’ve written a couple of blog posts explaining how sorted indexes work on offline and real-time tables.

Adding an Index#

Next, let’s see what happens if we add an explicit index. We’re going to add an inverted index to the FlightNum column. Go to Edit Table config again and update tableIndexConfig to have the following value:

Inverted index addition
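In other words, the tableIndexConfig gains an entry along these lines (a sketch of just the relevant property; the rest of the config stays as before):

"tableIndexConfig": {
  ...
  "invertedIndexColumns": ["FlightNum"]
}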

If you go back to the page for segment airlineStats_OFFLINE_16073_16073_0, notice that it does not have an inverted index for this field.

page for segment airlineStats_OFFLINE_16073_16073_0 without an inverted index

This is because indexes are applied on a per-segment basis. If we want the inverted index on the FlightNum column in this segment, we can click Reload Segment on this page, or we can go back to the table page and click Reload All Segments.

If we do that, all the segments in the airlineStats_OFFLINE table will eventually have an inverted index on FlightNum.

Summary#

As I mentioned in the introduction, information about the indexes on each segment has always been available via the REST API, but this feature democratizes that information. 

If you have any questions about this feature, feel free to join us on Slack, where we’ll be happy to help you out.

GapFill Function For Time-Series Datasets In Pinot

· 9 min read
Weixiang Sun, Lakshmanan Velusamy
Weixiang Sun, Lakshmanan Velusamy

Many real-world datasets are time-series in nature, tracking the value or state changes of entities over time. The values may be polled and recorded at constant time intervals or at random irregular intervals or only when the value/state changes. There are many real-world use cases of time series data. Here are some specific examples:

  • Telemetry from sensors monitoring the status of industrial equipment.
  • Real-time vehicle data such as speed, braking, and acceleration, to produce the driver's risk score trend.
  • Server performance metrics such as CPU, I/O, memory, and network usage over time.
  • An automated system tracking the status of a store or items in an online marketplace.

In this post, let us use an IoT dataset that tracks the occupancy status of individual parking slots in a parking garage using automated sensors. The recorded data points might be sparse, or events could be missing due to network and other device issues in the IoT environment. The following figure demonstrates entities emitting values at irregular intervals as the value changes. Polling and recording the values of all entities regularly at a finer granularity would consume more resources, take up more space on disk and during processing, and incur higher costs. But analytics applications operating on these datasets might query for values at a finer granularity than the recording interval (e.g., a dashboard showing the total number of occupied parking slots at 15-minute granularity over the past week, even when the sensors don’t record status that frequently).

Entities emitting data over time at irregular intervals

It is important for Pinot to provide the on-the-fly interpolation (filling the missing data) functionality to better handle time-series data.

Starting with the 0.11.0 release, we introduced new query syntax, the gapfilling functions, to interpolate data and perform powerful aggregations and data processing over time-series data.

We will discuss the query syntax with an example and then the internal architecture.

Processing time series data in Pinot#

Let us use the following sample data set tracking the status of parking lots in the parking space to understand this feature in detail.

Sample Dataset:#

Sample parking lot dataset

parking_data table

Use case: We want to find out the total number of parking lots that are occupied over a period of time, which would be a common use case for a company that manages parking spaces.

Let us take a 30-minute time bucket as an example:

Sample parking lot dataset with 30 minute time bucket

In the 30-minute aggregation results table above, we can see a lot of missing data, as many lots didn’t have anything recorded in those 30-minute windows. To calculate the number of occupied parking lots per time bucket, we need to gap-fill the missing data for each of these 30-minute windows.

Interpolating missing data#

There are multiple ways to infer and fill the missing values. In the current version, we introduce the following methods, which are more common:

  • FILL_PREVIOUS_VALUE can be used to fill time buckets missing values for entities with the last observed value. If no previous observed value can be found, the default value is used as an alternative.
  • FILL_DEFAULT_VALUE can be used to fill time buckets missing values for entities with the default value depending on the data type.

More advanced gapfilling strategies such as using the next observed value, the value from the previous day or past week, or the value computed using a subquery shall be introduced in the future.
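Switching strategies is just a matter of changing the FILL clause in the query shown later in this post; e.g., a hedged one-line variant that falls back to the data type’s default value instead of the last observed one:

FILL(status, 'FILL_DEFAULT_VALUE')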

Gapfill Query with a Use Case:#

Let us write a query to get the total number of occupied parking lots every 30 minutes over time on the parking lot dataset discussed above.

Query Syntax:#

SELECT time_col, SUM(status) AS occupied_slots_count
FROM (
    SELECT GAPFILL(time_col, '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS',
                   '2021-10-01 09:00:00.000', '2021-10-01 12:00:00.000', '30:MINUTES',
                   FILL(status, 'FILL_PREVIOUS_VALUE'), TIMESERIESON(lot_id)),
           lot_id, status
    FROM (
        SELECT DATETIMECONVERT(event_time, '1:MILLISECONDS:EPOCH',
                   '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '30:MINUTES') AS time_col,
               lot_id, lastWithTime(is_occupied, event_time, 'INT') AS status
        FROM parking_data
        WHERE event_time >= 1633078800000 AND event_time <= 1633089600000
        GROUP BY 1, 2
        ORDER BY 1
        LIMIT 100)
    LIMIT 100)
GROUP BY 1
LIMIT 100

This query suggests three main steps:

  1. The raw data will be aggregated;
  2. The aggregated data will be gapfilled;
  3. The gapfilled data will be aggregated.

We make one assumption that the raw data is sorted by timestamp. The Gapfill and Post-Gapfill Aggregation will not sort the data.

Query components:#

The following concepts were added to interpolate and handle time-series data.

  • LastWithTime(dataColumn, timeColumn, 'dataType') - To get the last value of dataColumn where the timeColumn is used to define the time of dataColumn. This is useful to pick the latest value when there are multiple values found within a time bucket. Please see https://docs.pinot.apache.org/users/user-guide-query/supported-aggregations for more details.
  • Fill(column, FILL_TYPE) - To fill the missing data of the column with the FILL_TYPE.
  • TimeSeriesOn - To specify the columns to uniquely identify entities whose data will be interpolated.
  • Gapfill - Specify the time range, the time bucket size, how to fill the missing data, and entity definition.

Query Workflow#

The innermost SQL will convert the raw event table to the following table.

Sample parking lot query workflow innermost SQL

The second-most-nested SQL will gapfill the returned data as below:

Sample parking lot query workflow second most SQL

The outermost query will aggregate the gapfilled data as follows:

Sample parking lot query workflow outermost SQL

Other Supported Query Scenarios:#

The above example demonstrates the support to aggregate before and post gapfilling. Pre and/or post aggregations can be skipped if they are not needed. The gapfilling query syntax is flexible to support the following use cases:

  • Select/Gapfill - Gapfill the missing data for the time bucket. Just the raw events are fetched, gapfilled, and returned. No aggregation is needed.
  • Aggregate/Gapfill - If there are multiple entries within the time bucket we can pick a representative value by applying an aggregate function. Then the missing data for the time buckets will be gap filled.
  • Gapfill/Aggregate - Gapfill the data and perform some form of aggregation on the interpolated data.

For detailed query syntax and how it works, please refer to the documentation here: https://docs.pinot.apache.org/users/user-guide-query/gap-fill-functions.

How does it work?#

Let us use the sample query given above as an example to understand what's going on behind the scenes and how Pinot executes the gapfill queries.

Request Flow#

Here is the list of steps in executing the query at a high level:

  1. Pinot Broker receives the gapfill query. It will strip off the gapfill part and send out the stripped SQL query to the pinot server.
  2. The pinot server will process the query as a normal query and return the result back to the pinot broker.
  3. The pinot broker will run the DataTableReducer to merge the results from pinot servers. The result will be sent to GapfillProcessor.
  4. The GapfillProcessor will gapfill the received result and apply the filter against the gap-filled result.
  5. Post-Gapfill aggregation and filtering will be applied to the result from the last step.

There are two gapfill-specific steps:

  1. When the Pinot Broker receives the gapfill SQL query, it will strip out the gapfill-related information and send the stripped SQL query to the Pinot Servers.
  2. GapfillProcessor will process the result from BrokerReducerService. The gapfill logic will be applied to the reduced result.

Gapfill steps

Here is the stripped version of the sql query sent to servers for the query shared above:

SELECT DATETIMECONVERT(event_time, '1:MILLISECONDS:EPOCH',
           '1:MILLISECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss.SSS', '30:MINUTES') AS time_col,
       lot_id, lastWithTime(is_occupied, event_time, 'INT') AS status
FROM parking_data
WHERE event_time >= 1633078800000 AND event_time <= 1633089600000
GROUP BY 1, 2
ORDER BY 1
LIMIT 100

Execution Plan#

The sample execution plan for this query is as shown in the figure below:

Sample query execution plan

Time and Space complexity:#

Let us say there are M entities, R rows returned from servers, and N time buckets. The data is gapfilled time bucket by time bucket to limit the broker memory usage to O(M + N + R). When the data is gapfilled for a time bucket, it will be aggregated and stored in the final result (which has N slots). The previous values for each of the M entities are maintained in memory and carried forward as the gapfilling is performed in sequence. The time complexity is O(M * N) where M is the number of entities and N is the number of time buckets.

Challenges#

Sample server challenges graph

As the time-series datasets are enormous and partitioned, it's hard to get answers to the following questions:

  • How many different entities exist within the query time frame? In the temporal partition scheme demonstrated above, a server/partition may not know the answer.
  • What’s the previously observed value for each entity, especially for the first data points in a time bucket whose previous time buckets don’t exist on the same server?

For the scenario shown in the figure above, Server2 may not know about the circle entity, as there are no events for the circle in Server2. It would also not know the last observed value for the square entity from the beginning of the time bucket until the first observed value timestamp within the partition.

The Future Work#

When doing the gapfill for one or a few entities, there might not be too much data. But when we deal with a large dataset that has multiple entities queried over a long date range without any filtering, this gets tricky. Since gapfill happens at the pinot broker, it will become very slow and the broker will become a bottleneck. The raw data transferred from servers to brokers would be enormous. Data explodes when interpolated. Parallelism is limited as the single broker instance is handling the query.

The next step of the gapfill project is to remove the pinot broker as a bottleneck. The gapfill logic will be pushed down to the servers and be running where the data live. This will reduce the data transmission and increase the parallelism and performance of gapfill.

Announcing Apache Pinot 0.10

· 5 min read
Apache Pinot Engineering Team
Apache Pinot Engineering Team

We are excited to announce the release this week of Apache Pinot 0.10. Apache Pinot is a real-time distributed datastore designed to answer OLAP queries with high throughput and low latency.

This release is cut from commit fd9c58a11ed16d27109baefcee138eea30132ad3. You can find a full list of everything included in the release notes.

Let’s have a look at some of the changes, with the help of the batch QuickStart configuration.

Query Plans#

Amrish Lal implemented the EXPLAIN PLAN clause, which returns the execution plan that will be chosen by the Pinot Query Engine. This lets us see what the query is likely to do without actually having to run it.

EXPLAIN PLAN FOR
SELECT *
FROM baseballStats
WHERE league = 'NL'

If we run this query, we'll see the following results:

Operator | Operator_Id | Parent_Id
BROKER_REDUCE(limit:10) | 0 | -1
COMBINE_SELECT | 1 | 0
SELECT(selectList:AtBatting, G_old, baseOnBalls, caughtStealing, doules, groundedIntoDoublePlays, hits, hitsByPitch, homeRuns, intentionalWalks, league, numberOfGames, numberOfGamesAsBatter, playerID, playerName, playerStint, runs, runsBattedIn, sacrificeFlies, sacrificeHits, stolenBases, strikeouts, teamID, tripples, yearID) | 2 | 1
TRANSFORM_PASSTHROUGH(AtBatting, G_old, baseOnBalls, caughtStealing, doules, groundedIntoDoublePlays, hits, hitsByPitch, homeRuns, intentionalWalks, league, numberOfGames, numberOfGamesAsBatter, playerID, playerName, playerStint, runs, runsBattedIn, sacrificeFlies, sacrificeHits, stolenBases, strikeouts, teamID, tripples, yearID) | 3 | 2
PROJECT(homeRuns, playerStint, groundedIntoDoublePlays, numberOfGames, AtBatting, stolenBases, tripples, hitsByPitch, teamID, numberOfGamesAsBatter, strikeouts, sacrificeFlies, caughtStealing, baseOnBalls, playerName, doules, league, yearID, hits, runsBattedIn, G_old, sacrificeHits, intentionalWalks, runs, playerID) | 4 | 3
FILTER_FULL_SCAN(operator:EQ,predicate:league = 'NL') | 5 | 4

FILTER Clauses for Aggregates#

Atri Sharma added the filter clause for aggregates. This feature makes it possible to write queries like this:

SELECT SUM(homeRuns) FILTER(WHERE league = 'NL') AS nlHomeRuns,
       SUM(homeRuns) FILTER(WHERE league = 'AL') AS alHomeRuns
FROM baseballStats

If we run this query, we'll see the following output:

nlHomeRuns | alHomeRuns
135486 | 135990

greatest and least#

Richard Startin added the greatest and least functions:

SELECT playerID,
       least(5.0, max(homeRuns)) AS homeRuns,
       greatest(5.0, max(hits)) AS hits
FROM baseballStats
WHERE league = 'NL' AND teamID = 'SFN'
GROUP BY playerID
LIMIT 5

If we run this query, we'll see the following output:

playerID | homeRuns | hits
ramirju01 | 0 | 5
milneed01 | 4 | 54
testani01 | 0 | 5
shawbo01 | 0 | 8
vogelry01 | 0 | 12

DistinctCountSmartHLL#

Xiaotian (Jackie) Jiang added the DistinctCountSmartHLL aggregation function, which automatically converts the Set to HyperLogLog if the set size grows too big to protect the servers from running out of memory:

SELECT DISTINCTCOUNTSMARTHLL(homeRuns, 'hllLog2m=8;hllConversionThreshold=10')
FROM baseballStats

If we run this query, we'll see the following output:

distinctcountsmarthll(homeRuns)
66

UI updates#

There were also a bunch of updates to the Pinot Data Explorer, by Sanket Shah and Johan Adami.

The display of reported size and estimated size is now in a human readable format:

Human readable sizes

Fixes for the following issues:

  • Error messages weren't showing on the UI when an invalid operation is attempted:

A backwards incompatible attempted schema change

  • Query console goes blank on syntax error.
  • Query console cannot show query result when multiple columns have the same name.
  • Adding extra fields after SELECT * would throw a NullPointerException.
  • Some queries were returning -- instead of 0.
  • Query console couldn't show the query result if multiple columns had the same name.
  • Pinot Dashboard tenant view showing the incorrect number of servers and brokers.

RealTimeToOffline Task#

Xiaotian (Jackie) Jiang made some fixes to the RealTimeToOffline job to handle time gaps and proceed to the next time window when no segment matches the current one.

Empty QuickStart#

Kenny Bastani added an empty QuickStart command, which lets you quickly spin up an empty Pinot cluster:

docker run \
  -p 8000:8000 \
  -p 9000:9000 \
  apachepinot/pinot:0.10.0 QuickStart \
  -type empty

You can then ingest your own dataset without needing to worry about spinning up each of the Pinot components individually.

Data Ingestion#

  • Richard Startin fixed some issues with real-time ingestion where consumption of messages would stop if a bad batch of messages was consumed from Kafka.

  • Mohemmad Zaid Khan added the BoundedColumnValue partition function, which partitions segments based on column values.

  • Xiaobing Li added the fixed name segment generator, which can be used when you want to replace a specific existing segment.

Other changes#

  • Richard Startin set LZ4 compression as the default for all metrics fields.
  • Mark Needham added the ST_Within geospatial function.
  • Rong Rong fixed a bug where query stats wouldn't show if there was an error processing the query (e.g. if the query timed out).
  • Prashant Pandey fixed the query engine to handle extra columns added to a SELECT * statement.
  • Richard Startin added support for forward indexes on JSON columns.
  • Rong Rong added the GRPC broker request handler so that data can be streamed back from the server to the broker when processing queries.
  • deemoliu made it possible to add a default strategy when using the partial upsert feature.
  • Jeff Moszuti added support for the TIMESTAMP data type in the configuration recommendation engine.

Dependency updates#

The following dependencies were updated:

  • async-http-client because the library moved to a different organization.
  • RoaringBitmap to 0.9.25
  • JsonPath to 2.7.0
  • Kafka to 2.8.1
  • Prometheus to 0.16.1

Resources#

If you want to try out Apache Pinot, the getting started guides at https://docs.pinot.apache.org/ will help you do just that.

Text analytics on LinkedIn Talent Insights using Apache Pinot

· One min read
LinkedIn
LinkedIn Engineering Team

LinkedIn Talent Insights (LTI) is a platform that helps organizations understand the external labor market and their internal workforce, and enables the long-term success of their employees. Users of LTI have the flexibility to construct searches using the various facets of the LinkedIn Economic Graph (skills, titles, location, company, etc.).

Read More at https://engineering.linkedin.com/blog/2021/text-analytics-on-linkedin-talent-insights-using-apache-pinot

Text analytics on LinkedIn Talent Insights using Apache Pinot

Introduction to Geospatial Queries in Apache Pinot

· One min read
Kenny Bastani
Kenny Bastani

Geospatial data has been widely used across the industry, spanning multiple verticals, such as ride-sharing and delivery, transportation infrastructure, defense and intel, public health. Deriving insights from timely and accurate geospatial data could enable mission-critical use cases in the organizations and fuel a vibrant marketplace across the industry. In the design document for this new Pinot feature, we discuss the challenges of analyzing geospatial at scale and propose the geospatial support in Pinot.

Read More at https://medium.com/apache-pinot-developer-blog/introduction-to-geospatial-queries-in-apache-pinot-b63e2362e2a9

Introduction to Geospatial Queries in Apache Pinot

Automating Merchant Live Monitoring with Real-Time Analytics - Charon

· One min read
Uber
Uber Data Team

At Uber, live monitoring and automation of Ops is critical to preserve marketplace health, maintain reliability, and gain efficiency in markets. By virtue of the word “live”, this monitoring needs to show what is happening now, with prompt access to fresh data, and the ability to recommend appropriate actions based on that data. Uber’s data platform provides the self-serve tools which empower the Ops teams to build their own live monitoring tools, and support their regional teams by building rich solutions.

For this project, the requirement was to provide merchant level monitoring and handle the edge cases which remain unaddressed by the sophisticated internal marketplace management tools. We used a variety of Uber’s real-time data platform components to build a tool called Charon to reduce impact of poor marketplace reliability on the merchants.

Read More at https://eng.uber.com/charon/

Operating Apache Pinot at Uber Scale

Solving for the cardinality of set intersection at scale with Pinot and Theta Sketches

· One min read
LinkedIn
LinkedIn Engineering Team

The Lambda architecture has become a popular architectural style that promises both speed and accuracy in data processing by using a hybrid approach of both batch processing and stream processing methods.

Read More at https://engineering.linkedin.com/blog/2021/pinot-and-theta-sketches

From Lambda to Lambda-less Lessons learned

Introduction to Upserts in Apache Pinot

· One min read
Kenny Bastani
Kenny Bastani

Since the 0.6.0 release of Apache Pinot, a new feature was made available for stream ingestion that allows you to upsert events from an immutable log. Typically, upsert is a term used to describe inserting a record into a database if it does not already exist or update it if it does exist. In Apache Pinot’s case, upsert isn’t precisely the same concept, and I wanted to write this blog post to explain why it’s exciting and how you can start using it.

Read More at https://medium.com/apache-pinot-developer-blog/introduction-to-upserts-in-apache-pinot-987c12149d93

Introduction to Upserts in Apache Pinot

Real-time Analytics with Presto and Apache Pinot

· One min read
PinotDev
Pinot Editorial Team

In this world, most analytics products either focus on ad-hoc analytics, which requires query flexibility without guaranteed latency, or low latency analytics with limited query capability. In this blog, we will explore how to get the best of both worlds using Apache Pinot and Presto.

Read Part 1 at https://www.startree.ai/blogs/real-time-analytics-with-presto-and-apache-pinot-part-i/

Read Part 2 at https://www.startree.ai/blogs/real-time-analytics-with-presto-and-apache-pinot-part-ii/

Real-time Analytics with Presto and Apache Pinot

Change Data Analysis with Debezium and Apache Pinot

· One min read
Kenny Bastani
Kenny Bastani

In this blog post, we’re going to explore an exciting new world of real-time analytics based on combining the popular CDC tool, Debezium, with the real-time OLAP datastore, Apache Pinot.

Read More at https://medium.com/apache-pinot-developer-blog/change-data-analysis-with-debezium-and-apache-pinot-b4093dc178a7

Change Data Analysis with Debezium and Apache Pinot

Operating Apache Pinot at Uber Scale

· One min read
Uber
Uber Data Team

Uber has a complex marketplace consisting of riders, drivers, eaters, restaurants and so on. Operating that marketplace at a global scale requires real-time intelligence and decision making. For instance, identifying delayed Uber Eats orders or abandoned carts helps to enable our community operations team to take corrective action. Having a real-time dashboard of different events such as consumer demand, driver availability, or trips happening in a city is crucial for day-to-day operation, incident triaging, and financial intelligence.

Read More at https://eng.uber.com/operating-apache-pinot/

Operating Apache Pinot at Uber Scale

Deep Analysis of Russian Twitter Trolls

· One min read
Kenny Bastani
Kenny Bastani

The history behind Russian disinformation is a dense and continuously evolving subject. The world’s best research hasn’t seemed to hit the mainstream yet, which made this an excellent opportunity to see if I could use some open source tooling to surface new analytical evidence.

In this blog post, I’ll show you how to use Apache Pinot and Superset to analyze 3 million tweets by the Internet Research Agency (IRA) open-sourced by FiveThirtyEight.

Read More at https://towardsdatascience.com/a-deep-analysis-of-russian-trolls-with-apache-pinot-and-superset-590c8c4d1843

Deep Analysis of Russian Twitter Trolls

Leverage Plugins to Ingest Parquet Files from S3 in Pinot

· One min read
PinotDev
Pinot Editorial Team

One of the primary advantages of using Pinot is its pluggable architecture. The plugins make it easy to add support for any third-party system which can be an execution framework, a filesystem, or input format.

In this tutorial, we will use three such plugins to easily ingest data and push it to our Pinot cluster. The plugins we will be using are -

  • pinot-batch-ingestion-spark
  • pinot-s3
  • pinot-parquet

Read more at https://medium.com/apache-pinot-developer-blog/leverage-plugins-to-ingest-parquet-files-from-s3-in-pinot-decb12e4d09d

Leverage Plugins to Ingest Parquet Files from S3 in Pinot

Monitoring Apache Pinot with JMX, Prometheus and Grafana

· One min read
PinotDev
Pinot Editorial Team

I may be kicking open doors here, but a simple question has always helped me start from somewhere. When it comes to investigating degraded user experience caused by latency, can I observe high resource usage on all or some nodes of the system?

Read more at https://medium.com/apache-pinot-developer-blog/monitoring-apache-pinot-99034050c1a5

Monitoring Apache Pinot with JMX, Prometheus and Grafana

Achieving 99th percentile latency SLA using Apache Pinot

· One min read
PinotDev
Pinot Editorial Team

In this article, we talk about how users can build critical site-facing analytical applications requiring high throughput and strict p99th query latency SLA using Apache Pinot.

Read more at https://medium.com/apache-pinot-developer-blog/achieving-99th-percentile-latency-sla-using-apache-pinot-2ba4ce1d9eff

Achieving 99th percentile latency SLA using Apache Pinot

Utilize UDFs to Supercharge Queries in Apache Pinot

· One min read
PinotDev
Pinot Editorial Team

Apache Pinot is a realtime distributed OLAP datastore that can answer hundreds of thousands of queries with millisecond latencies. You can head over to https://pinot.apache.org/ to get started with Apache Pinot.

While using any database, we can come across a scenario where a function required for the query is not supported out of the box. At such times, we have to resort to raising a pull request for a new function or finding a tedious workaround.

Scalar Functions that allow users to write and add their functions as a plugin.

Read more at https://medium.com/apache-pinot-developer-blog/utilize-udfs-to-supercharge-queries-in-apache-pinot-e488a0f164f1
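
As a minimal sketch of what such a plugin can look like, the class below registers a scalar function via Pinot's `@ScalarFunction` annotation from the pinot-spi module; the package and function names are illustrative. Once the packaged jar is on Pinot's classpath, the method becomes callable by name in SQL.

```java
package com.example.pinot.udf;

import org.apache.pinot.spi.annotations.ScalarFunction;

public class ExampleFunctions {

  // Callable in queries as, e.g.:
  //   SELECT truncateTimestamp(tsMillis, 3600000) FROM myTable
  @ScalarFunction
  public static long truncateTimestamp(long tsMillis, long bucketMillis) {
    // Round a millisecond timestamp down to the nearest bucket boundary.
    return (tsMillis / bucketMillis) * bucketMillis;
  }
}
```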

Building a culture around metrics and anomaly detection

· One min read
Kenny Bastani

Anomaly detection is a very broad term. Usually it means checking whether things are running as usual, from your business metrics down to the lowest levels of how your systems are running. Anomaly detection is an entire process; it’s not just a tool you get out of the box that measures time series data. Similar to DevOps, anomaly detection is a culture of different roles engaging in a process that combines tooling with human analysis.

Read More at https://medium.com/apache-pinot-developer-blog/building-a-culture-around-metrics-and-anomaly-detection-da740960fcc2

Moving developers up the stack with Apache Pinot

· One min read
Kenny Bastani

Once upon a time, an internet company named LinkedIn faced the challenge of having petabytes of connected data with no way to analyze it in real time. As the problem was the first of its kind, there was only one solution: the company put together a talented team of engineers and tasked them with building the right tool for the job. Today, that tool goes by the name of Apache Pinot.

Read More at https://medium.com/apache-pinot-developer-blog/moving-developers-up-the-stack-with-apache-pinot-29d36717a3f4

Monitoring business performance data with ThirdEye smart alerts

· One min read
LinkedIn Engineering Team

Learn how ThirdEye smart alerts and automated dashboards helped the LinkedIn Premium business operations team monitor key metrics, such as new free trial signups, for the timely detection of outliers in business performance data.

Read More at https://engineering.linkedin.com/blog/2020/monitoring-business-performance-data-with-thirdeye-smart-alerts

Using Apache Pinot and Kafka to Analyze GitHub Events

· One min read
Kenny Bastani

In this blog post, we’ll show you how Pinot and Kafka can be used together to ingest, query, and visualize event streams sourced from the public GitHub API. For the step-by-step instructions, please visit our documentation, which will guide you through the specifics of running this example in your development environment.

Read More at https://medium.com/apache-pinot-developer-blog/using-apache-pinot-and-kafka-to-analyze-github-events-93cdcb57d5f7
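
The heart of such a setup is the `streamConfigs` block of a real-time table config that points Pinot at the Kafka topic. The sketch below shows the general shape; the topic name and broker address are illustrative, and the full table config would be POSTed to the Pinot controller to create the table.

```java
public class GithubEventsStreamConfig {
  public static void main(String[] args) {
    // streamConfigs fragment of a Pinot REALTIME table config; the topic
    // and broker below are illustrative placeholders.
    String streamConfigs = """
        "streamConfigs": {
          "streamType": "kafka",
          "stream.kafka.topic.name": "github-events",
          "stream.kafka.broker.list": "localhost:9092",
          "stream.kafka.consumer.type": "lowlevel",
          "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
          "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder"
        }
        """;
    System.out.println(streamConfigs);
  }
}
```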

Engineering SQL Support on Apache Pinot at Uber

· One min read
Uber Data Team

Uber leverages real-time analytics on aggregate data to improve the user experience across our products, from fighting fraudulent behavior on Uber Eats to forecasting demand on our platform.

To support these use cases, we built a solution that linked Presto, a query engine that supports full ANSI SQL, with Pinot, a real-time OLAP (online analytical processing) datastore. This combined solution allows users to write ad-hoc SQL queries, empowering teams to unlock significant analysis capabilities.

Read More at https://eng.uber.com/engineering-sql-support-on-apache-pinot/
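
With a Presto-Pinot connector configured, querying Pinot with full ANSI SQL looks like querying any other Presto catalog. The sketch below uses the Presto JDBC driver; the coordinator address, the `pinot` catalog name, and the `trips` table are assumptions for illustration.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PrestoPinotQuery {
  public static void main(String[] args) throws Exception {
    // Presto coordinator address and "pinot" catalog name are assumptions;
    // the catalog is defined by the Presto-Pinot connector configuration.
    String url = "jdbc:presto://localhost:8080/pinot/default";
    try (Connection conn = DriverManager.getConnection(url, "analyst", null);
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery(
             "SELECT city, COUNT(*) FROM trips GROUP BY city")) {
      while (rs.next()) {
        System.out.println(rs.getString(1) + ": " + rs.getLong(2));
      }
    }
  }
}
```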

Auto-tuning Pinot real-time consumption

· One min read
LinkedIn
LinkedIn Engineering Team

This post focuses on auto-tuning Pinot, a scalable distributed columnar OLAP data store developed at LinkedIn that delivers real-time analytics for site-facing use cases such as LinkedIn’s Who viewed my profile, Talent insights, and more.

Read More at https://engineering.linkedin.com/blog/2020/bridging-batch-and-stream-processing

Introducing ThirdEye - LinkedIn’s Business-Wide Monitoring Platform

· One min read
LinkedIn Engineering Team

ThirdEye is a comprehensive platform for real-time monitoring of metrics that covers a wide variety of use cases. LinkedIn relies on ThirdEye to monitor site performance, track member growth, understand adoption of new features, flag sustained attempts to circumvent system security, and much more.

Read More at https://engineering.linkedin.com/blog/2019/01/introducing-thirdeye--linkedins-business-wide-monitoring-platfor

Engineering Restaurant Manager - UberEATS Analytics Dashboard

· One min read
Uber Data Team

At Uber, we use data analytics to architect more magical user experiences across our products. Whenever possible, we harness these data engineering capabilities to empower our partners to better serve their customers. For instance, in late 2016, the UberEATS engineering team built a comprehensive analytics dashboard that provides restaurant partners with additional insights about the health of their business.

Read More at https://eng.uber.com/restaurant-manager/

A Brief History of Scaling LinkedIn

· One min read
LinkedIn Engineering Team

LinkedIn started in 2003 with the goal of connecting you to your network for better job opportunities. It had only 2,700 members in its first week. Fast forward many years, and LinkedIn’s product portfolio, member base, and server load have grown tremendously.

Today, LinkedIn operates globally with more than 350 million members. We serve tens of thousands of web pages every second of every day. We’ve hit our mobile moment, where mobile accounts for more than 50 percent of all global traffic. All of those requests fetch data from our backend systems, which in turn handle millions of queries per second.

Read More at https://engineering.linkedin.com/architecture/brief-history-scaling-linkedin
