
Monarch: Google’s Planet-Scale In-Memory Time Series Database

249 points | 16 hours ago | micahlerner.com
kasey_junk 13 hours ago

A huge difference between monarch and other tsdb that isn’t outlined in this overview, is that a storage primitive for schema values is a histogram. Most (maybe all besides Circonus) tsdb try to create histograms at query time using counter primitives.

All of those query time histogram aggregations are making pretty subtle trade offs that make analysis fraught.

hn_go_brrrrr 12 hours ago

In my experience, Monarch storing histograms and being unable to rebucket on the fly is a big problem. A percentile line on a histogram will be incredibly misleading, because it's trying to figure out what the p50 of a bunch of buckets is. You'll see monitoring artifacts like large jumps and artificial plateaus as a result of how requests fall into buckets. The bucketer on the default RPC latency metric might not be well tuned for your service. I've seen countless experienced oncallers tripped up by this, because "my graphs are lying to me" is not their first thought.
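
A minimal sketch of that artifact, using made-up bucket bounds and counts and plain linear interpolation inside fixed buckets (not any particular system's estimator):

    # Estimating a percentile from coarse latency buckets (all numbers made up).
    # The estimate can only move by interpolating inside whichever bucket the
    # target falls into, so it jumps between buckets and plateaus inside them.
    bounds = [0, 10, 100, 1000, 10000]  # ms; bucket i covers [bounds[i], bounds[i+1])

    def estimate_quantile(counts, q):
        total = sum(counts)
        target = q * total
        seen = 0
        for i, c in enumerate(counts):
            if c and seen + c >= target:
                lo, hi = bounds[i], bounds[i + 1]
                return lo + (target - seen) / c * (hi - lo)
            seen += c
        return float(bounds[-1])

    # Nearly all requests really take 40-60ms; the wide 10-100ms bucket still
    # produces a plausible-looking p50:
    print(estimate_quantile([0, 1000, 0, 0], 0.50))   # 55.0
    # 15 requests that really took ~150ms land in the 100-1000ms bucket, and
    # the reported p99 leaps to 400ms:
    print(estimate_quantile([0, 985, 15, 0], 0.99))   # 400.0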

heinrichhartman 11 hours ago

Circonus Histograms solve that by using a universal bucketing scheme. Details are explained in this paper: https://arxiv.org/abs/2001.06561

Disclaimer: I am a co-author.
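
Roughly the idea, if I read the paper right (this toy is not the exact circllhist encoding): buckets are keyed by the first two significant decimal digits plus the power of ten, so relative error is bounded at every magnitude with no per-metric configuration.

    import math

    def toy_bucket(value):
        """Toy log-linear bucket key: (two significant decimal digits, exponent).
        Edge cases such as exact powers of ten are glossed over here."""
        if value == 0:
            return (0, 0)
        sign = 1 if value > 0 else -1
        v = abs(value)
        exp = math.floor(math.log10(v))
        mantissa = int(v / 10 ** (exp - 1))  # 10..99
        return (sign * mantissa, exp)

    # Bucket width is at most ~10% of the value, at every scale:
    print(toy_bucket(4200))       # (42, 3) -> the [4200, 4300) bucket
    print(toy_bucket(1_234_567))  # (12, 6) -> the [1.2e6, 1.3e6) bucket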

mherdeg 8 hours ago

Wow, this is a fantastic solution to some questions I've had rattling around in my head for years about the optimal bucket choices to minimize error given a particular set of buckets.

Do I read it right that circllhist has a pretty big number of bins and is not configurable (except that they're sparse, so they may be small on disk)?

I've found myself using high-cardinality Prometheus metrics where I can only afford 10-15 distinct histogram buckets. So I end up

(1) plugging in my live system data from normal operations and from outage periods into various numeric algorithms that propose optimal bucket boundaries. These algorithms tell me that I could get great accuracy if I chose thousands of buckets, which, thanks for rubbing it in about my space problems :(. Then I write some more code to collapse those into 15 buckets while minimizing error at various places (like p50, p95, p99, p999 under normal operations and under irregular operations).

(2) making sure I have an explicit bucket boundary at any target that represents a business objective (if my service promises no more than 1% of requests will take >2500ms, setting a bucket boundary at 2500ms gives me perfectly precise info about whether p99 falls above/below 2500ms) - a quick sketch of this follows after point (3)

(3) forgetting to tune this and leaving a bunch of bad defaults in place which often lead to people saying "well, our graph shows a big spike up to 10000ms but that's just because we forgot to tune our histogram bucket boundaries before the outage, actually we have to refer to logs to see the timeouts at 50 sec"
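
A quick sketch of point (2) above, with a made-up bucket layout: because there is a boundary exactly at the 2500ms objective, the fraction of requests over the objective is exact no matter how coarse the other buckets are.

    # Made-up bucket layout with an explicit boundary at the 2500ms objective.
    bounds = [0, 50, 100, 250, 500, 1000, 2500, 10000]  # ms; upper edges after 0
    counts = [500, 300, 120, 40, 20, 15, 5]             # requests per bucket

    over_slo = sum(c for c, hi in zip(counts, bounds[1:]) if hi > 2500)
    print(f"{100 * over_slo / sum(counts):.2f}% of requests exceeded 2500ms")  # 0.50%
    # No interpolation involved: whether the 1% objective is met is exact, even
    # though p99's numeric value within a bucket is still an estimate.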

joeblubaugh 5 hours ago

I’ve used these log-linear histograms in a few pieces of code. There is some configurability in the abstract - you could choose a different logarithm base.

In practice none of the implementations seem to provide that. Within each set of buckets for a given log base you have reasonable precision at that magnitude. If your metric is oscillating around 1e6 you shouldn’t care much about the variance at 1e2, and with this scheme you don’t have to tune anything to provide for that.

NHQ 5 hours ago

Is it lossy to store data this way?

jrockway 12 hours ago

I definitely remember a lot of time spent tweaking histogram buckets for performance vs. accuracy. The default bucketing algorithm at the time was powers of 4 or something very unusual like that.

shadowgovt 11 hours ago

It's because powers of four were great for the original application: statistics on high-traffic services where the primary thing the user was interested in was deviations from the norm, and with a high-traffic system the signal for what the norm is would be very strong.

I tried applying it to a service with much lower traffic and found the bucketing to be extremely fussy.

kasey_junk 11 hours ago

My personal opinion is that they should have done a log-linear histogram, which solves the problems you mention (with other trade-offs), but to me the big news was making the db flexible enough to have that data type.

Leaving the world of a single numeric type for each datum will influence the next generation of open-source metrics databases.

gttalbot 7 hours ago

Yeah in theory people could do their own custom bucketing functions. Would be worth researching log-linear for that certainly.

gttalbot 8 hours ago

Yeah it was a tough tradeoff for the default case, because the team didn't want to use too much memory in everyone's binary since the RPC metrics were on by default. This is easily changeable by the user if necessary, though.

sujayakar 10 hours ago

I've been pretty happy with datadog's distribution type [1] that uses their own approximate histogram data structure [2]. I haven't evaluated their error bounds deeply in production yet, but I haven't had to tune any bucketing. The linked paper [3] claims a fixed percentage of relative error per percentile.

[1] https://docs.datadoghq.com/metrics/distributions/

[2] https://www.datadoghq.com/blog/engineering/computing-accurat...

[3] https://arxiv.org/pdf/1908.10693.pdf

jeffbee 9 hours ago

That is a very different tradeoff, though. A DDSketch is absolutely gigantic compared to a power-of-four binned distribution that could be implemented as a vector of integers. A practical DDSketch will be 5KiB+. And when they say DDSketch merges are "fast" they are comparing to other sketches that take microseconds or more to merge, not to CDF vectors that can be merged literally in nanoseconds.

spullara 12 hours ago

Wavefront also has histogram ingestion (I wrote the original implementation, I'm sure it is much better now). Hugely important if you ask me but honestly I don't think that many customers use it.

buro9 8 hours ago

Prometheus is adding sparse histograms. There are a couple of online talks about it already, but one of the maintainers, Ganesh, is giving a talk on it at KubeCon next week if anyone is attending and curious about it.

teraflop 12 hours ago

Is it really that different from, say, the way Prometheus supports histogram-based quantiles? https://prometheus.io/docs/practices/histograms/

Granted, it looks like Monarch supports a more cleanly-defined schema for distributions, whereas Prometheus just relies on you to define the buckets yourself and follow the convention of using a "le" label to expose them. But the underlying representation (an empirical CDF) seems to be the same, and so the accuracy tradeoffs should also be the same.

spullara 12 hours ago

Much different. When you are reporting histograms you can combine them and see the true p50 or whatever across all the individual systems reporting the metric.
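
A sketch of what that buys you, assuming every reporting task uses the same bucket layout (made-up counts): merging is element-wise addition of counts, and the fleet-wide percentile is then read off the merged distribution rather than averaged across per-instance percentiles, which would be wrong whenever instances see different volumes or latency shapes.

    # Two tasks report the same bucket layout; merging is element-wise addition.
    task_a = [120, 40, 8, 2]   # made-up per-bucket counts
    task_b = [300, 10, 1, 0]

    merged = [a + b for a, b in zip(task_a, task_b)]
    print(merged)  # [420, 50, 9, 2] -> any percentile is computed over this, fleet-wide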

nvarsj 10 hours ago

Can you elaborate a bit? You can do the same in Prometheus by summing the bucket counts. Not sure what you mean by “true p50” either. With buckets it’s always an approximation based on the bucket widths.

spullara 9 hours ago

Ah, I misunderstood what you meant. If you are reporting static buckets I get how that is better than what folks typically do but how do you know the buckets a priori? Others back their histograms with things like https://github.com/tdunning/t-digest. It is pretty powerful as the buckets are dynamic based on the data and histograms can be added together.

teraflop 9 hours ago

That is also possible in Prometheus, which is why I made the comparison.

gttalbot 11 hours ago

Yes. This. Also, displaying histograms in heatmap format can allow you to intuit the behavior of layered distributed systems, caches, etc. Relatedly, exemplars allowed tying related data to histogram buckets. For example, RPC traces could be tied to the latency bucket & time at which they complete, giving a natural means to tie metrics monitoring and tracing, so you can "go to the trace with the problem". This is described in the paper as well.

NHQ 5 hours ago

Is this lossy?

hintymad 2 hours ago

I don't quite get the benefit of the pull model by default either. A pull model by default means that it's not easy for a library to publish its metrics. For instance, every god damn application is expected to implement a `/metrics` endpoint so that a freaking agent can scrape the application's metrics into Prometheus. With Monarch, any library or application can simply publish metrics to Monarch's API. Similarly at Netflix, publishing to its Atlas system is totally transparent to library authors, with the help of their metrics library.

Sometimes I feel many open source systems do not give a shit about productivity.

pm90 14 hours ago

A lot of Google projects seem to rely on other Google projects. In this case Monarch relies on Spanner.

I guess it's nice to publish at least the conceptual design so that others can implement it for the "rest of the world" case. Working with OSS can be painful, slow and time consuming, so this seems like a reasonable middle ground (although selfishly I do wish all of this was source-available).

praptak 14 hours ago

Spanner may be hard to set up even with source code available. It relies on atomic clocks for reliable ordering of events.

wbl 7 hours ago

Atomic clocks aren't that exotic, and a GPS-disciplined ovenized quartz oscillator will do just fine outside of a disruption. The hard part is getting the right sampling semantics, which requires end-to-end error analysis.

joshuamorton 14 hours ago

I don't think there's any Spanner necessity, and IIRC Monarch existed pre-Spanner.

gttalbot 11 hours ago

Correct. Spanner is used to hold configuration state, but is not in the serving path.

yegle 14 hours ago

Google Cloud Monitoring's time series database is backed by Monarch.

The query language is MQL, which closely resembles the internal Python-based query language: https://cloud.google.com/monitoring/mql

sleepydog 13 hours ago

MQL is an improvement over the internal language, IMO. There are some missing features around literal tables, but otherwise the language is more consistent and flexible.

8040 14 hours ago

I broke this once several years ago. I even use the incident number in my random usernames to see if a Googler recognizes it.

zoover2020 12 hours ago

This is also why I love HN. So niche!

voldacar 3 hours ago

Could someone elaborate on this

Xorlev 5 hours ago

I was oncall for that incident. Good times.

foota 9 hours ago

omg

ajstiles 14 hours ago

Wow - that was a doozy.

orf 13 hours ago

How did you break it?

ikiris 11 hours ago

IIRC they didn't not break it.

ikiris 11 hours ago

ahahahahahaha

cientifico 2 hours ago

Off-topic: could the site owner allow zooming in, so we can see the content of the pictures?

candiddevmike 14 hours ago

Interesting that Google replaced a pull based metric system similar to Prometheus with a push based system... I thought one of the selling points of Prometheus and the pull based dance was how scalable it was?

lokar 14 hours ago

It's sort of a pull/push hybrid. The client connects to the collection system and is told how often to send each metric (or group of them) back over that same connection. You configure per target/metric collection policy centrally.
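
Very roughly, and with entirely made-up names (this is not Monarch's actual interface), the flow described above looks something like:

    import time

    # Hypothetical sketch of the hybrid flow: the client dials the collector,
    # receives a centrally configured schedule, then streams measurements back
    # over the same connection. None of these names are real Monarch APIs.
    def serve_metrics(conn, read_metric):
        conn.send({"exports": ["rpc_latency", "queue_depth"]})
        schedule = conn.recv()  # e.g. {"rpc_latency": 10.0, "queue_depth": 60.0}
        next_due = {name: 0.0 for name in schedule}
        while True:
            now = time.time()
            for name, period_s in schedule.items():
                if now >= next_due[name]:
                    conn.send({"metric": name, "value": read_metric(name), "ts": now})
                    next_due[name] = now + period_s
            time.sleep(min(schedule.values()) / 10)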

Jedd 9 hours ago

So, much like a Zabbix agent, with both active (push) & passive (pull) capabilities.

We're diving into OTEL, and the registration / discovery challenges don't seem to have any kind of best-practice consensus out there. We're looking at NodeRED (telegraf agent can query from same at startup) but it brings its own challenges.

I haven't read the full paper, but do you know if the push model was revisited mostly for auto-registration / discovery, or performance bottlenecks at the server, or some other concern?

Typically for us, once we've got the hard part - an entity registered - we're happy with pull only. A no-response from a prod end-point is an automatic critical. I guess at their scale there's more nuance around one or more agents being non-responsive.

EDIT: Oh, there's not much in the paper on the subject, as it happens. And yes, it's vanilla discovery woes.

"Push-based data collection improves system robustness while simplifying system architecture. Early versions of Monarch discovered monitored entities and “pulled” monitoring data by querying the monitored entity.

"This required setting up discovery services and proxies, complicating system architecture and negatively impacting overall scalability. Push-based collection, where entities simply send their data to Monarch, eliminates these dependencies."

gttalbot 8 hours ago

See my comment below, on the challenges of pull based collection on Monarch. There were many. I can answer questions, if that's helpful.

gttalbot 8 hours ago

Also, I gave this talk a couple of years ago, though I'm not sure how deeply I talked about collection models.

https://youtu.be/2mw12B7W7RI

lokar 9 hours ago

Push can help a bit, but you still have to know which endpoints you expect to hear from (if you want to detect they are missing).

gttalbot 7 hours ago

If you can put these into the metrics system as metrics themselves, then join against them, that works. (IIRC handling all the various things that could be happening in a data center like drains, planned & unplanned downtime, etc., it's more complex than a simple join.)
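
A toy version of that join (all names made up; as noted, real fleets also have to model drains and planned downtime): the set of entities that should be reporting is itself published as data, and "missing" is the difference.

    # Made-up example: expected endpoints come from intended state, reported
    # endpoints come from what the metrics system actually heard from recently.
    expected = {"job-1/task-0", "job-1/task-1", "job-1/task-2"}
    reported = {"job-1/task-0", "job-1/task-2"}

    missing = expected - reported
    print(missing)  # {'job-1/task-1'} -> candidate for a "target down" alert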

dilyevsky 13 hours ago

It was originally push but I think they went back to a sort of scheduled pull mode after a few years. There was a very in-depth review doc written about this internally which maybe will get published some day.

atdt 12 hours ago

What's the go/ link?

dilyevsky 11 hours ago

Can’t remember - just search on moma /s

gttalbot 11 hours ago

Pull collection eventually became a real scaling bottleneck for Monarch.

The way the "pull" collection worked was that there was an external process-discovery mechanism, which the leaf used to connect to the entities it was monitoring, the leaf backend processes would connect to the monitored entities to an endpoint that the collection library would listen on, and those entities collection libraries would stream the metric measurements according to the schedules that the leaves sent.

Several problems.

First, the leaf-side data structures and TCP connections become very expensive. If that leaf process is connecting to many many many thousands of monitored entities, TCP buffers aren't free, keep-alives aren't free, and a host of other data structures. Eventually this became an...interesting...fraction of the CPU and RAM on these leaf processes.

Second, this implies a service discovery mechanism so that the leaves can find the entities to monitor. This was a combination of code in Monarch and an external discovery service. This was a constant source of headaches and outages, as the appearance and disappearance of entities is really spiky and unpredictable. Any burp in operation of the discovery service could cause a monitoring outage as well. Relatedly, the technical "powers that be" decided that the particular discovery service, of which Monarch was the largest user, wasn't really something that was suitable for the infrastructure at scale. This decision was made largely independently of Monarch, but required Monarch to move off.

Third, Monarch does replication, up to three ways. In the pull-based system, it wasn't possible to guarantee that the measurement that each replica sees is the same measurement with the same microsecond timestamp. This was a huge data quality issue that made the distributed queries much harder to make correct and performant. Also, the clients had to pay both in persistent TCP connections on their side and in RAM, state machines, etc., for this replication, as a connection would be made from each backend leaf process holding a replica for a given client.

Fourth, persistent TCP connections and load balancers don't really play well together.

Fifth, not everyone wants to accept incoming connections in their binary.

Sixth, if the leaf process doesn't need to know the collection policies for all the clients, those policies don't have to be distributed and updated to all of them. At scale this matters for both machine resources and reliability. This can be made a separate service, pushed to the "edge", etc.

Switching from a persistent connection to the clients pushing measurements in distinct RPCs as they were recorded eventually solved all of these problems. It was a very intricate transition that took a long time. A lot of people worked very hard on this, and should be very proud of their work. I hope some of them jump into the discussion! (At the very least they'll add things I missed/didn't remember... ;^)

ignoramous 44 minutes ago

Thanks.

What are some problems (or peculiarities that otherwise didn't exist) with the push based setup?

At another BigCloud, pull/push made for tasty design discussions as well, given the absurd scale of it all.

The general consensus was: the smaller fleet always pulls from its downstream; push only if downstream and upstream both have similar scaling characteristics.

Jedd 7 hours ago

Thanks George, and apologies for missing this comment on my first scan through this page. Your Youtube talk is lined up for viewing later today.

We're using prom + cortex/mimir. With ~30-60k hosts + at least that figure again for other endpoints (k8s, snmp, etc), so we can get away with semi-manual sharding (os, geo, env, etc). We're happy with 1m polling, which is still maybe 50 packets per query, but no persistent conns held open to agents.

I'm guessing your TCP issues were exacerbated by a much higher polling-frequency requirement? You come back to persistent connections a lot, so this sounds like a bespoke agent, and/or the problem was not (mostly) a connection establish/tear-down performance issue?

The external discovery service - I assume an in-house system, now long disappeared and not well described publicly? ;) We're looking at NodeRED to fill that gap, so it also becomes a critical component, but the absence only bites at agent restart. We're pondering wrapping some code around the agents to be smarter about dealing with a non-responsive config service. (During a major incident we have to assume a lot of things will be absent and/or restarting.)

As for the concerns around incoming conns to their apps: it sounds like those same teams you were dealing with ended up having to instrument their code with something from you anyway -- was it the DoS risk they were concerned about?

gttalbot 7 hours ago

It was more that they would rather send Monarch an RPC than be connected to. Not everyone wants e.g. an HTTP server in their process. For example maybe they are security sensitive, or have a limited memory envelope, or other reasons.

Thaxll 6 hours ago

What issue did the pull model originally solve? Historically the push model existed first, so what was the reason to move to a pull-based solution?

Too 2 hours ago

https://prometheus.io/docs/introduction/faq/#why-do-you-pull... lists a few reasons and also ends with a note that it probably doesn't matter in the end. Personally, for smaller deployments, I like it because it gives you an easy overview of what should be running; otherwise you need to maintain this list elsewhere anyway, though today with all the auto-scaling around, the concept of "up" is getting more fuzzy.

On top of that there is also less risk that a herd of misbehaving clients DoSes the monitoring system, usually at the very moments when you need such a system the most. This of course wouldn't be a problem with a more scalable solution that separates ingestion from querying, like Monarch.

jeffbee 14 hours ago

Prometheus itself has no scalability at all. Without distributed evaluation they have a brick wall.

gttalbot 10 hours ago

This. Any new large query or aggregation in the Borgmon/Prometheus model requires re-solving federation, and continuing maintenance of runtime configuration. That might technically be scalable in that you could do it but you have to maintain it, and pay the labor cost. It's not practical over a certain size or system complexity. It's also friction. You can only do the queries you can afford to set up.

That's why Google spent all that money to build Monarch. At the end of the day Monarch is vastly cheaper in person time and resources than manually-configured Borgmon/Prometheus. And there is much less friction in trying new queries, etc.

dilyevsky 13 hours ago

You can set up distributed eval similar to how it was done in Borgmon, but you gotta do it manually (or maybe write an operator to automate it). One of Monarch's core ideas is to do that behind the scenes for you.

jeffbee 12 hours ago

Prometheus' own docs say that distributed evaluation is "deemed infeasible".

buro9 14 hours ago

That's what Mimir solves

deepsun 10 hours ago

How does it compare to VictoriaMetrics?

buro9 9 hours ago

100% Prometheus compatible, proven to scale to 1B active series.

It's not about comparisons; every tool has its own place and feature set that may be right for you depending on what you're doing. But if you've reached the end of the road with Prometheus due to scale, and you need massive scale and perfect compatibility... then Mimir stands out.

preseinger 11 hours ago

Prometheus is highly scalable?? What are you talking about??

dijit 11 hours ago

It is not.

It basically does the opposite of what every scalable system does.

To get HA you double your number of pollers.

To scale your queries you aggregate them into other prometheii.

If this is scalability: everything is scalable.

halfmatthalfcat 13 hours ago

Can you elaborate? I’ve run Prometheus at some scale and it’s performed fine.

lokar 13 hours ago

You pretty quickly exceed what one instance can handle for memory, CPU, or both. At that point you don't have any really good options to scale while maintaining a flat namespace (you need to partition).

codethief 11 hours ago

The first time I heard about Monarch was in discussions about the hilarious "I just want to serve 5 terabytes" video[0].

[0]: https://m.youtube.com/watch?v=3t6L-FlfeaI

nickstinemates 14 hours ago

too small for me, I was looking more for the scale of the universe.

yayr 14 hours ago

in case this can be deployed single-handed it might be useful on a spaceship... would need some relativistic time accounting though.

klysm 14 hours ago

I don’t really grasp why this is a useful spot in the trade off space from a quick skim. Seems risky.

dijit 14 hours ago

There’s a good talk on Monarch https://youtu.be/2mw12B7W7RI

Why it exists is laid out quite plainly.

The pain of it is that we’re all jumping on Prometheus (Borgmon) without considering why Monarch exists. Monarch doesn’t have a good equivalent outside of Google.

Maybe some weird mix of TimescaleDB backed by CockroachDB with a Prometheus push gateway.

AlphaSite 13 hours ago

Wavefront is based on FoundationDB which I’ve always found pretty cool.

[1] https://news.ycombinator.com/item?id=16879392

Disclaimer: I work at vmware on an unrelated thing.

bbkane 8 hours ago

They should open source it like they did Kubernetes. Otherwise the world will (continue to) converge on the Prometheus model and Google will be left with this weird system that may be technically better but is unfamiliar to incoming engineers

gttalbot 8 hours ago

The key trade-off is low dependency, with everything up to and including alert notification delivery pushed down to the region. For alerting queries to happen, there is little infrastructure Monarch depends upon to keep running, beyond networking and being scheduled on the machines.

If you think about Bigtable, a key observation that the Monarch team made very early on is that, if you can support good materialized views (implemented as periodic standing queries) written back to the memtable, and the memtable can hold the whole data set needed to drive alerting, this can work even if much of Google's infrastructure is having problems. It also allows Monarch to monitor systems like Bigtable, Colossus, etc., as it doesn't use them as serving dependencies for alerting or recent dashboard data.
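
A cartoon of that idea (nothing here is Monarch's actual API): the standing query runs on a timer against the in-memory table and writes its aggregate back as just another series, so alert evaluation only ever reads local memory.

    import time
    from collections import defaultdict

    # Cartoon only: raw points and materialized views live in the same
    # in-memory table, and alerting reads only the pre-aggregated series.
    memtable = defaultdict(list)  # series name -> list of (timestamp, value)

    def standing_query(period_s=10.0):
        """Periodically materialize an error rate back into the memtable."""
        while True:
            now = time.time()
            # Treat each raw point as one error event in this cartoon.
            recent = [v for ts, v in memtable["rpc_errors"] if ts > now - period_s]
            memtable["rpc_errors:rate_10s"].append((now, len(recent) / period_s))
            time.sleep(period_s)

    def alert_firing(threshold=5.0):
        series = memtable["rpc_errors:rate_10s"]
        return bool(series) and series[-1][1] > threshold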

It's a question of optimizing for graceful degradation in the presence of failure of the infrastructure around the monitoring system. The times the system will experience its heaviest and most unpredictable load will be when everyone's trying to figure out why their service isn't working.

sydthrowaway 12 hours ago

Stop overhyping software with buzzwords

gttalbot 7 hours ago

What? Planet scale? Well. You can literally issue a query that fans out to every continent on Earth, and returns the result right to your dashboard. Not exaggerating. ;^) (OK maybe not Antarctica but I'm not sure...)