Interview with pganalyze

Find out more about our sponsor pganalyze

Any views or opinions represented or expressed in this interview belong solely to the interviewee and do not necessarily represent those of the PostgreSQL Conference Europe 2025 organization, PostgreSQL Europe, or the wider PostgreSQL community, unless explicitly stated.

What is your PostgreSQL-centered product?

We develop pganalyze, a Postgres-centric performance monitoring and tuning product, available in the cloud and on-premises.

pganalyze helps you identify workload bottlenecks, such as queries that run with suboptimal query plans, tables that are missing indexes, or Postgres-specific problems such as a misconfigured autovacuum.

Our product is built on customized graphs and interfaces that enable DBAs, developers, and anyone else interested in tuning the database to quickly grasp what needs attention. Instead of relying on off-the-shelf tools, we've developed purpose-built advisors (specialized algorithms, not LLMs or "AI") for tuning recommendations that encode Postgres expertise into a product. Our most recent launch is the pganalyze Query Advisor, which identifies anti-patterns in query plans and provides query rewrite recommendations to fix them.

Which of your company's contributions to the PostgreSQL Project (code/community/conference/sponsorship) are you most proud of?

One of the key components of pganalyze that we've contributed to the wider PostgreSQL project is the pg_query query parsing library, which is fully open-source. pg_query lets you parse a query with the Postgres parser outside of the server source code, for example in client applications. It has been used widely in tools that support Postgres, such as for monitoring, schema migration, and more. We have also recently added support to pg_query for deparsing parse trees back into SQL, as well as pretty-printing the emitted SQL.
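
To give a flavor of what that looks like in practice, here is a minimal sketch using the Go bindings (pg_query_go); the module version in the import path is an assumption and may differ from what you have installed:

```go
package main

import (
	"fmt"
	"log"

	// The module version in the import path may differ from your setup.
	pg_query "github.com/pganalyze/pg_query_go/v5"
)

func main() {
	sql := "SELECT id, name FROM users WHERE email = 'alice@example.com'"

	// Parse the query with the real Postgres parser, without a running server.
	tree, err := pg_query.Parse(sql)
	if err != nil {
		log.Fatal(err)
	}

	// The result is a protobuf parse tree; ParseToJSON offers a JSON view instead.
	jsonTree, err := pg_query.ParseToJSON(sql)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(jsonTree)

	// Deparse turns the parse tree back into SQL text.
	deparsed, err := pg_query.Deparse(tree)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(deparsed)
}
```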

We're also contributing code directly to upstream Postgres, such as the Plan ID improvements in Postgres 18, proposed improvements to reduce the overhead of EXPLAIN (BUFFERS), and different ways of lowering the overhead of capturing timing information.

Another way we contribute to the community is by documenting how Postgres works, such as through our recent blog post on Asynchronous I/O, our new "Postgres in Production" video series (hosted by Ryan Booz), or our previous video series "5mins of Postgres".

Which PostgreSQL extension do you benefit from most, and why?

We are big fans of the pg_stat_statements extension, which we use at pganalyze to capture query activity over time, and identify when a new query goes off the rails.

We also rely on auto_explain to capture plan outliers, and retrieve them from the logs. We combine auto_explain plans with the query statistics using a query fingerprint (similar to the "queryid" in Postgres), and process them in the background to find anti-patterns and bad query plans.
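
To make the fingerprinting idea more concrete, here is a minimal sketch (not our actual pipeline) that uses pg_query's Normalize and Fingerprint functions; the example queries are made up, and the module version in the import path is again an assumption. Two queries that differ only in their constant values produce the same fingerprint, which is what lets plan samples from the logs be grouped with the statistics for the same normalized query:

```go
package main

import (
	"fmt"
	"log"

	// The module version in the import path may differ from your setup.
	pg_query "github.com/pganalyze/pg_query_go/v5"
)

func main() {
	// Two query texts as they might appear in auto_explain log output,
	// identical except for their constant values.
	q1 := "SELECT * FROM orders WHERE customer_id = 42 AND status = 'open'"
	q2 := "SELECT * FROM orders WHERE customer_id = 7 AND status = 'shipped'"

	// Normalize replaces constants with placeholders, similar to how
	// pg_stat_statements displays query text.
	normalized, err := pg_query.Normalize(q1)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(normalized) // SELECT * FROM orders WHERE customer_id = $1 AND status = $2

	// Fingerprints ignore constant values, so both queries map to the same group.
	f1, err := pg_query.Fingerprint(q1)
	if err != nil {
		log.Fatal(err)
	}
	f2, err := pg_query.Fingerprint(q2)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(f1 == f2) // true
}
```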

Will anyone from your team be presenting at PostgreSQL Conference Europe? What is the talk about and why does this topic matter?

Lukas Fittl from the pganalyze team will present "Tracking plan shapes over time with Plan IDs and the new pg_stat_plans" on Friday morning. The talk introduces our new pg_stat_plans extension, which is similar in spirit to the pg_stat_plans extension from many years ago (created by 2ndQuadrant in 2012), but re-invented on new infrastructure.

Lukas will discuss why tracking plan statistics is important, how we've designed a mechanism that is low-overhead compared to alternative extensions, and the Postgres 18 improvements that enable the new extension, specifically Plan ID support as well as pluggable cumulative statistics.

Which feature is missing in PostgreSQL?

In our corner of the ecosystem (monitoring/tuning), what comes up often are shortcomings of the Postgres planner. Specifically, the ways to influence query plans are quite limited in certain cases (e.g. join mis-estimates), and the options that do exist are either frowned upon (pg_hint_plan) or limited to certain cloud platforms (AWS Aurora's Query Plan Management).

There are some efforts underway to improve this, e.g. the "plan shape" work by Robert Haas and similar threads on -hackers, which will hopefully enable a new era of extensions and tools that can help fix bad query plans. Ideally this eventually leads to Postgres being smarter about bad plans, either stopping a bad execution while it's occurring (e.g. turning a Nested Loop Join into a different join type if assumptions don't hold true), or informing future executions with learnings from past mis-estimates.

What can PostgreSQL Conference Europe attendees look forward to at your booth? Any activities, swag, prizes, or quizzes?

Visit the pganalyze booth to pick up one of our new pganalyze t-shirts (featuring an illustration created by an actual human, not AI) or a pganalyze astronaut sticker, and to learn more about our Postgres performance workshops. We're also happy to talk about Postgres 18, and engineers from our team will be on hand to answer questions about pganalyze, the new pg_stat_plans extension, or other topics.

Why does your company attend PostgreSQL Conference Europe?

We're excited to participate in the world's largest Postgres community conference, to contribute to the conversation around how we all can make Postgres better, and to share with those interested what pganalyze can do to make your Postgres database more performant. We're also looking forward to talking about Postgres 18 and what it brings to the table. See you in Riga!


Join Us For PostgreSQL Conference Europe 2025

October 21–24, 2025

Radisson Blu Latvija, Riga, Latvia