Real-Time Analytics for Users and AI Agents

Apache Pinot™ is an open-source distributed OLAP database for user-facing and agent-facing real-time analytics. From interactive dashboards to LLM-powered decision engines, Pinot delivers sub-second queries on fresh data at petabyte scale.

Best for

Pinot powers both human-facing products and autonomous AI systems with the same real-time engine.

Built for User-Facing Apps

Serve interactive analytics to millions of end users with sub-second query latency at any scale.

Embedded Analytics

Ship interactive dashboards inside your product — 100K+ concurrent users querying live data with sub-second latency.

Customer Dashboards

Let customers explore their own data in real time with multitenant isolation, no pre-aggregation required.

Metrics APIs

Expose low-latency analytics endpoints that power leaderboards, usage meters, and real-time reporting.

Built for AI Agents

Sub-second SQL over fresh data makes Pinot a natural backend for LLM-powered systems that reason on live state.

RAG on Fresh Events

Retrieve real-time context from streaming data so LLMs ground answers in facts, not stale snapshots.

Decision Engines

Feed live signals — ad bids, fraud scores, pricing — to models that act in milliseconds, not minutes.

Agentic Observability

Let autonomous agents query system metrics and logs in real time to self-diagnose and self-heal.

Trusted by engineering teams at leading companies

LinkedIn
Uber
Stripe
Walmart
Visa
NVIDIA
Goldman Sachs
Slack
Target
DoorDash

What is Apache Pinot?

Originally developed at LinkedIn, Apache Pinot is an open-source distributed OLAP database for user-facing and agent-facing real-time analytics, delivering sub-second queries on fresh data at very high concurrency.

With its distributed architecture and columnar storage, Apache Pinot empowers businesses to gain valuable insights from real-time data — powering both user-facing and agent-facing applications.

Learn More

Features

Sub-Second Queries

Filter and aggregate petabyte data sets with P90 latencies in the tens of milliseconds — fast enough to return live results interactively in the UI.

High Concurrency

With applications and AI agents querying Pinot directly, a single cluster can serve hundreds of thousands of queries per second.

Real-Time Streaming Ingestion

Ingest from Apache Kafka, Apache Pulsar, and AWS Kinesis in real time. Batch ingest from Hadoop, Spark, AWS S3, and more. Combine batch and streaming sources into a single table for querying.
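A minimal sketch of how a realtime table subscribes to a Kafka topic via `streamConfigs` in the Pinot table config. The key names follow Pinot's table-config format; the table name, topic, and broker address are illustrative, not from a real deployment:

```json
{
  "tableName": "events_REALTIME",
  "tableType": "REALTIME",
  "segmentsConfig": {
    "timeColumnName": "ts",
    "schemaName": "events",
    "replication": "1"
  },
  "tableIndexConfig": {
    "streamConfigs": {
      "streamType": "kafka",
      "stream.kafka.topic.name": "events",
      "stream.kafka.broker.list": "localhost:9092",
      "stream.kafka.consumer.type": "lowlevel",
      "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder"
    }
  },
  "tenants": {}
}
```

A production config typically adds the consumer factory class and offset/flush settings; see the ingestion docs for the full set of `stream.kafka.*` properties.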

Upserts

Ingest the same record many times, but see only the latest value at query time. Upserts are built-in and production-tested since version 0.6.
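Enabling upserts is a table-config flag on a realtime table, paired with a `primaryKeyColumns` declaration in the table's schema. A hedged sketch (the mode shown keeps the full latest row per key; table and column names are illustrative):

```json
{
  "upsertConfig": {
    "mode": "FULL"
  }
}
```

The schema for that table would declare the key, e.g. `"primaryKeyColumns": ["order_id"]`, so that queries see only the most recent row per `order_id`.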

Versatile Joins

Perform arbitrary fact/dimension and fact/fact joins on petabyte data sets.

Rich Indexing Options

Choose from pluggable indexes including timestamp, inverted, StarTree, Bloom filter, range, text, JSON, and geospatial options.
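Indexes are declared per column in `tableIndexConfig`. A hedged sketch using real config keys but illustrative column names:

```json
{
  "tableIndexConfig": {
    "invertedIndexColumns": ["country"],
    "rangeIndexColumns": ["ts"],
    "bloomFilterColumns": ["user_id"],
    "jsonIndexColumns": ["payload"]
  }
}
```

Pinot picks the best applicable index per query predicate, so adding an index changes performance without changing your SQL.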

Built for Scale

Pinot is horizontally scalable and fault-tolerant, adaptable to workloads across the storage and throughput spectrum.

SQL Query Interface

Query Pinot with standard SQL through the built-in query editor or the REST API.
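Pinot brokers accept SQL over HTTP: a POST to `/query/sql` with a JSON body of the form `{"sql": "..."}`. The helper below only builds that payload (the table and column names are illustrative, not from a real deployment):

```python
import json

def build_pinot_query(sql: str) -> str:
    # The broker's /query/sql endpoint expects a JSON object
    # with the statement under the "sql" key.
    return json.dumps({"sql": sql})

payload = build_pinot_query(
    "SELECT country, COUNT(*) AS cnt FROM events "
    "GROUP BY country ORDER BY cnt DESC LIMIT 10"
)
print(payload)
```

Send the payload with any HTTP client, e.g. `curl -X POST http://localhost:8099/query/sql -d "$payload"` against a broker on its default port 8099.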

Built-in Multitenancy

Manage and secure data in isolated logical namespaces for cloud-friendly resource management. Learn more about multitenancy.

Last verified against Apache Pinot 1.4.0 on 2025-09-30

Run Pinot locally in 60 seconds

$ docker run -p 9000:9000 apachepinot.docker.scarf.sh/apachepinot/pinot:1.4.0 QuickStart -type hybrid

Then open localhost:9000 to explore the query console. Full getting started guide →

LinkedIn

Apache Pinot powers over 50 user-facing applications at LinkedIn, serving 250,000+ queries per second with millisecond latency across hundreds of billions of records.

250K+ QPS across 50+ user-facing applications

- LinkedIn Engineering

Stripe

Pinot enables us to execute sub-second, petabyte-scale aggregation queries over fresh financial events. During Black Friday-Cyber Monday, Pinot helped us track over $18.6B in transaction volume across 300M+ transactions with P99 latency of 70ms.

200K QPS at P99 latency of 70ms across 3PB

- Peter Bakkum, Stripe

Uber

Uber relies on Apache Pinot for 100+ real-time analytics use cases across the marketplace. Our Neutrino service alone serves 500+ million Pinot queries daily, powering everything from ride tracking to catalog search over 10 billion+ row tables.

500M+ queries served daily via Neutrino

- Uber Engineering

Cisco Webex

Apache Pinot replaced Elasticsearch for our real-time observability, delivering 5x to 150x better query performance. We shrank our cluster by 500+ nodes while handling 100+ TB of telemetry data per day with sub-second latency.

500+ nodes eliminated vs. Elasticsearch

- Cisco Webex Engineering

DoorDash

We migrated our metrics and alerting platform to Apache Pinot, reducing query latency from 30-second timeouts down to under 100ms. Pinot now powers real-time analytics across 500+ dimensions for our risk and ads platforms.

<100ms latency, down from 30s timeouts

- DoorDash Engineering

Walmart

Every order on walmart.com flows through Apache Pinot. We ingest 14 million events per minute from Kafka with under 900ms lag, enabling real-time order monitoring and dramatically reducing our Mean Time to Detect and Recover.

14M events/min ingested with <900ms lag

- Walmart Global Tech

Razorpay

Apache Pinot transformed our payment monitoring from 15-20 minute batch delays to under 1 second data freshness. At peak, we ingest 1 million events per second while tracking 60 billion transactions per year across our platform.

1M events/sec at peak, 60B transactions/year

- Razorpay Engineering

See Company Stories

Built for Performance

Production-proven at the world's largest internet companies

P99 < 100ms

Query Latency

Sub-100ms latencies for analytical queries at scale

Stripe case study

200,000+ QPS

Throughput

Queries per second in production deployments

LinkedIn case study

< 1 second

Data Freshness

End-to-end latency from Kafka to queryable

Walmart engineering post

1M+ events/sec

Ingest Rate

Data ingestion throughput per second

Razorpay engineering post

Based on production deployments at LinkedIn, Stripe, Uber, and other Pinot users. Your results will vary based on hardware, schema design, and query complexity.


Join our Community

Pinot Blog

Share your knowledge on the Apache Pinot YouTube channel!

The Apache Pinot OSS YouTube channel is a dedicated video hub for all things Pinot. It brings together meetup talks, tutorials, and real-world use cases in one place, making it easier for the community to learn and share.

Share your video