
SlateDB – An embedded database built on object storage

113 points · 17 hours ago · slatedb.io
drodgers 6 hours ago

It looks like writes are buffered in an in-memory write-ahead log before being written to object storage, which means that if the writer box dies, you lose acknowledged writes.

I've built something similar for low-cost storage of infrequently accessed data, but it uses our DBMS (MySQL) for the WAL (+ cache of hot reads), so you get proper durability guarantees.

The other cool trick is to use Bε-trees (a relatively recent innovation from Microsoft Research) for the object-storage compaction to minimise the number of write operations needed when flushing the WAL.
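
For anyone curious what that buys you, here's a very simplified sketch of the idea (illustrative only, all names are made up, and it's not tied to any real implementation): each internal node carries a buffer of pending writes, and the buffer is only pushed down to its children in one batch once it fills up, so a flush amortises many keys over far fewer write operations.

```rust
use std::collections::BTreeMap;

/// Simplified sketch of the Bε-tree write path: inserts land in a node's
/// buffer and are only pushed down to children in batches, so each flush
/// amortises many keys over one (object-storage) write per child.
struct Node {
    buffer: BTreeMap<String, String>, // pending writes not yet pushed down
    pivots: Vec<String>,              // routing keys; len == children.len() - 1
    children: Vec<Node>,              // empty for leaf nodes
}

impl Node {
    const BUFFER_CAP: usize = 64;

    fn insert(&mut self, key: String, value: String) {
        self.buffer.insert(key, value);
        // Leaf handling (splits etc.) is elided; only internal nodes flush here.
        if !self.children.is_empty() && self.buffer.len() >= Self::BUFFER_CAP {
            self.flush_buffer();
        }
    }

    /// Push every buffered write down to the child that owns its key range.
    /// In a real implementation this is the moment a child page/SST gets
    /// rewritten, so batching here directly reduces write operations.
    fn flush_buffer(&mut self) {
        let pending = std::mem::take(&mut self.buffer);
        for (key, value) in pending {
            let idx = self.pivots.iter().filter(|p| p.as_str() <= key.as_str()).count();
            self.children[idx].insert(key, value);
        }
    }
}

fn main() {
    // Leaf-only root: writes simply accumulate in the buffer.
    let mut root = Node { buffer: BTreeMap::new(), pivots: vec![], children: vec![] };
    root.insert("a".into(), "1".into());
}
```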

quadrature 2 hours ago

You can choose your durability guarantee. If you opt for synchronous writes, the client blocks until the write is acknowledged.

https://docs.rs/slatedb/latest/slatedb/config/struct.WriteOp...
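
For illustration, here's a rough sketch of what a durable write looks like from the caller's side. The names here (Db::open, put_with_options, an await_durable field on WriteOptions, the object_store types) are from memory and may not match the current crate exactly, so treat the linked docs as the source of truth.

```rust
use std::sync::Arc;

use object_store::{memory::InMemory, path::Path};
use slatedb::config::WriteOptions;
use slatedb::db::Db;

#[tokio::main]
async fn main() {
    // In-memory object store for illustration; in practice this would be S3/GCS/ABS.
    let object_store = Arc::new(InMemory::new());
    let db = Db::open(Path::from("demo"), object_store)
        .await
        .expect("failed to open db");

    // await_durable = true: block until the write has actually been flushed to
    // object storage, instead of returning once it sits in the in-memory WAL.
    let opts = WriteOptions {
        await_durable: true,
        ..Default::default()
    };
    db.put_with_options(b"hello", b"world", &opts).await;

    db.close().await.expect("failed to close db");
}
```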

nmca 11 hours ago

> Object storage is an amazing technology. It provides highly-durable, highly-scalable, highly-available storage at a great cost.

I don’t know if this was intended to be funny, but there is a little ambiguity in the expression “great cost”; typically, “at great cost” means very expensive.

Very cool and useful shim otherwise :)

anon291 16 hours ago

This seems to be a key-value store built atop object storage. Which is to say, it seems completely redundant. Not sure if there's some feature I'm missing, but all six of the features mentioned on the front page are things you'd have if you used the key-value store directly (actually, you get more, because then you get multiple writers).

I was excited at first and thought this was SQL atop S3 et al. I've jerryrigged a solution to this using SQLite with a customized VFS backend, and would suggest that as an alternative to this particular project. You get the benefit of ACID transactions across multiple tables and a distributed backend.

aseipp 13 hours ago

People want object storage as the backend because in practice it means that you can decouple compute and storage entirely, it has no requirement to provision space up front, and robust object storage systems with (de facto) standardized APIs like S3's are widely available for all kinds of deployments and from many providers, in many forms. In other words: it works with what people already have and want.

Essentially every standalone or embedded key-value storage solution treats the KV store and its operation like a database, from what I can tell -- which is sensible because that's what they are! But people use object stores exactly because they don't operate like traditional databases.

Now there are problems with object stores (they are very coarse-grained and have high per-object overhead, necessitating some design that reconciles the round hole and the square peg) -- but this is just the reality of what people are working with. If there is some other key-value store server/implementation you know of, one that performs and offers APIs like an actual database (e.g. multi-writer, range scans, atomic writes) but with unlimited storage, no provisioning, and 10+ widespread implementations across every major compute and cloud provider -- I'm interested in what that project is.

necubi 15 hours ago

This is a low-level embedded db that would be used by sql databases/query engines/streaming engines/etc rather than something that's likely to make sense for you to use as an application developer. It sits in a similar space to RocksDB and LevelDB.

You generally can't use object storage directly for this stuff; if you have a high volume of writes, it's incredibly slow (and expensive) to write them individually to s3. Similarly, on the read side you want to be able to cache data on local disk & memory to reduce query latency and cost.
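
To put rough numbers on "expensive": S3 PUT requests cost about $0.005 per 1,000 on the standard tier, so writing every record as its own request at, say, 1,000 writes/second works out to roughly 2.6 billion PUTs a month, which is on the order of $13,000/month in request fees alone, before storage or egress. Batching writes into SSTs divides that by the batch size, which is essentially what an LSM on object storage is doing.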

iudqnolq 16 hours ago

Using an s3 object per key would be too expensive for many use cases.

The website is a bit fancy but the readme seems to pretty straightforwardly explain why you might want this. It seems to me like a nice little (13k loc) project that doesn't fit my needs but might come in handy for someone else?

vineyardmike 14 hours ago

> I was excited at first and thought this was SQL atop S3 et al.

You can check out Neon.tech, which makes an open-source Postgres-on-S3, and DuckDB, which makes an embedded DB with transaction support that can operate over S3.

abound 16 hours ago

If you want SQLite backed by S3, maybe something like SQLite in :memory: mode with Litestream would work?

Edit: actually not sure if you can use :memory: mode since Litestream uses the WAL (IIRC), so maybe a ramfs instead

candiddevmike 3 hours ago

In my experience, SQLite on S3 is ridiculously slow. The round trip for writes is horrendous, so you end up doing batch saves, but you need a WAL, which has the same problem as the main DB file.

anon291 16 hours ago

There are many solutions. In my particular example, I was using SQLite via WebAssembly and resorting to HTTP's fetch API for a read-only solution.

jitl 14 hours ago

From the docs https://slatedb.io/docs/introduction/

> NOTE

> Snapshot isolation and transactions are planned but not yet implemented.

quadrature 2 hours ago

Might have been older docs. They now say that transactions are supported

> Snapshot isolation: SlateDB supports snapshot isolation, which allows readers and writers to see a consistent view of the database.

> Transactions: Transactional writes are supported.

remon 6 hours ago

I've read the introduction and descriptions twice now and I still don't understand what this adds to the proceedings. It appears to be an extremely thin abstraction over object storage solutions rather than the actual DB that the name and their docs imply.

shenli3514 6 hours ago

Went through the document: https://slatedb.io/docs/introduction/#use-cases

I can't understand why they are targeting the following use cases with this architecture:

* Stream processing
* Serverless functions
* Durable execution
* Workflow orchestration
* Durable caches
* Data lakes

hantusk 9 hours ago

Since writes to object storage are going to be slow anyway, why not double down on read-optimized B-trees rather than write-optimized LSMs?

chipdart 8 hours ago

I think slow writes are not a major concern, as most databases already use some fast log-type data structure to persist writes, and then merge/save these logs to a higher-capacity and slower medium on specific events.
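
A generic sketch of that pattern (not SlateDB's actual code; all names here are made up): writes go to an append-only log plus an in-memory sorted buffer, and the buffer is written out to the slower medium as one sorted batch once it crosses a threshold.

```rust
use std::collections::BTreeMap;

/// Generic "log now, flush later" write buffer.
struct WriteBuffer {
    log: Vec<(Vec<u8>, Vec<u8>)>,        // stand-in for an append-only WAL
    memtable: BTreeMap<Vec<u8>, Vec<u8>>, // in-memory sorted view of recent writes
    flush_threshold: usize,
}

impl WriteBuffer {
    fn put(&mut self, key: &[u8], value: &[u8]) {
        // 1. Append to the log (fast, sequential).
        self.log.push((key.to_vec(), value.to_vec()));
        // 2. Update the in-memory sorted view.
        self.memtable.insert(key.to_vec(), value.to_vec());
        // 3. Flush one big sorted batch to slow storage once the buffer is full.
        if self.memtable.len() >= self.flush_threshold {
            self.flush();
        }
    }

    fn flush(&mut self) {
        let batch: Vec<_> = std::mem::take(&mut self.memtable).into_iter().collect();
        // A real engine would write `batch` to object storage as one sorted
        // file (an SST) and then truncate the log.
        println!("flushing {} sorted entries as one object", batch.len());
        self.log.clear();
    }
}

fn main() {
    let mut buf = WriteBuffer { log: vec![], memtable: BTreeMap::new(), flush_threshold: 4 };
    for i in 0..10u8 {
        buf.put(&[i], b"value");
    }
}
```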

yawnxyz 15 hours ago

Is this an easier way to do the "store Parquet on S3 > stream to DuckDB" pattern that's popping up more and more?

kosmozaut 7 hours ago

Do you know of any resources/examples about the setup you mean? It sounds interesting, but from a quick search I didn't find anything straightforward.

vineyardmike 14 hours ago

> MemTables are flushed periodically to object storage as a string-sorted table (SST). The flush interval is configurable.

Looks like it has a pretty similar structure under the hood, but DuckDB would get you more powerful queries.

FYI duckdb directly supports writes (and transactions) so you don’t necessarily even need the separate store step.

jitl 14 hours ago

This is more targeted at OLTP-style workloads with mutable data and potentially multiple writers.

demarq 7 hours ago

Embed cloud

Sounds like they just cancel each other out. Not sure what advantage embedding will yield here

tgdn 8 hours ago

"It doesn't currently ship with any language bindings"

Rust is needed to use SlateDB at the moment

epolanski 15 hours ago

Not a DB guy, just asking: what does "embedded" database mean?

I'm confused here, because Google says it's a db bundled with the application, but that's not really what I get from the landing page.

What problem does it solve?

leetrout 15 hours ago

Embedded means it runs in your application process, not as a standalone server/service.
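
To make that concrete with a toy example (everything below is made up for illustration, it's not SlateDB's API): an embedded store is just a library you call in-process, so a read or write is an ordinary function call rather than a network round trip to a separate server.

```rust
use std::collections::BTreeMap;

/// Toy in-process key-value store: the "database" lives and dies with the
/// application, like SQLite, RocksDB, or LevelDB do.
struct EmbeddedKv {
    data: BTreeMap<String, String>,
}

impl EmbeddedKv {
    fn new() -> Self {
        EmbeddedKv { data: BTreeMap::new() }
    }

    fn put(&mut self, key: &str, value: &str) {
        self.data.insert(key.to_string(), value.to_string());
    }

    fn get(&self, key: &str) -> Option<&String> {
        self.data.get(key)
    }
}

fn main() {
    // Contrast with connecting to a Postgres or MySQL server over the network:
    // here there is no separate process to deploy or operate.
    let mut db = EmbeddedKv::new();
    db.put("user:1", "alice");
    assert_eq!(db.get("user:1").map(String::as_str), Some("alice"));
}
```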

goodpoint 8 hours ago

Despite the name this is not a database.

mtndew4brkfst 4 hours ago

What definition/criteria do you feel it does not satisfy?

loxias 14 hours ago

Can I please, please, please, have C++ or at least C bindings? :) Or the desired way to call Rust from another runtime? I don't know any Rust.

jitl 13 hours ago

Rust is just another programming language that’s quite similar to C++. The main difference is there are like 4 types for String (some are references and some are owned), and methods for a struct go in an `impl StructName` block after the struct definition instead of inside it.

I don’t really know rust either but I’m currently writing some bindings to expose Rust libraries to NodeJS and not having too much trouble.

For Rust -> C++, I googled one time and found this tool, which Mozilla seems to use to call Rust from C++ in their web browser; maybe it would “just work”: https://github.com/mozilla/cbindgen?tab=readme-ov-file
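
In case it helps, this is roughly what the Rust side of such a binding looks like; cbindgen's job is to read `extern "C"` declarations like these and emit a matching C header. The function and type names below are made up for illustration, not an existing SlateDB C API.

```rust
// Build this as a cdylib/staticlib crate and run cbindgen over it to get a header.

/// A trivial C-ABI function: callable from C or C++ once linked.
#[no_mangle]
pub extern "C" fn slate_add(a: i32, b: i32) -> i32 {
    a + b
}

/// An opaque handle the C++ side holds onto; real bindings would wrap the
/// actual Db type and expose open/put/get/close functions the same way.
pub struct Handle {
    counter: u64,
}

#[no_mangle]
pub extern "C" fn handle_new() -> *mut Handle {
    Box::into_raw(Box::new(Handle { counter: 0 }))
}

#[no_mangle]
pub extern "C" fn handle_free(ptr: *mut Handle) {
    if !ptr.is_null() {
        // Safety: ptr must have come from handle_new and not been freed already.
        unsafe { drop(Box::from_raw(ptr)) };
    }
}
```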

sebastianconcpt 13 hours ago

Although the borrowing rules will make it feel like quite a different language from the others.