> Most relational database management systems do not support nested records, so tables are in first normal form by default. In particular, SQL does not have any facilities for creating or exploiting nested tables. [0]
No? It says that SQL99 allows non-atomic types, and SQL16 allows JSON. That doesn’t mean that 1NF is dead, or even that JSON is allowed in 1NF, only that the standard (which RDBMS providers may choose to implement in part or whole) allows for their existence.
Atomicity of values has been debated for a long time. I’ve come around to the idea that flat arrays can be included in a 1NF table, because they don’t imply any additional structure to the schema. The problem with JSON is that it supports arbitrary K:V pairs as well as nesting, and so can introduce a schema within a schema, which is prone to referential integrity violations (not to mention generally poor performance in RDBMS).
Embedding an entire DB is of course beyond the pale, and my comment was an attempt at wit.
Ok, hear me out: what if we make something that takes a postgres database dir, tars it together and encodes it as a binary blob in SQLite?
We could have SQLite within postgres within sqlite within postgres! Is it practical or even slightly useful? Of course not - but it's SQL databases all the way down. Not that this is a good thing in itself.
Take it one step further: the table-oriented database(tm). Embed ClickHouse, MongoDB, Redis and PostgreSQL to ensure you have more flexibility than anyone can utilize efficiently. The one database to rule them all.
What are the use cases for this? I can't imagine designing a database schema to use this in a typical product. Is it intended for hybrid applications to back up local user data directly with their account info?
The most interesting one for me is if you're running a SaaS product like Notion where your users create custom applications that manage their own small schema-based data tables.
Letting users create full custom PostgreSQL tables can get complex - do you want to manage tens of thousands of weird custom tables in a PostgreSQL schema somewhere?
I'd much rather manage tens of thousands of rows in a table where one of the columns is a BLOB with a little SQLite database in it.
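A rough sketch of what that could look like - the table and the tenant DDL here are made up, but it assumes the empty_sqlite()/execute_sqlite() helpers that come up elsewhere in this thread:

    CREATE TABLE user_apps (
        user_id BIGINT NOT NULL,
        app_db  SQLITE DEFAULT empty_sqlite()
    );

    -- each user's custom tables live inside their own row
    UPDATE user_apps
    SET app_db = execute_sqlite(app_db,
        'CREATE TABLE tasks (title TEXT, done INTEGER)')
    WHERE user_id = 42;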
> Letting users create full custom PostgreSQL tables can get complex - do you want to manage tens of thousands of weird custom tables in a PostgreSQL schema somewhere?
Yea, I'd be fine with that - postgres has the concept of databases and schemas within those databases. If you really want to build a product like that I'd suggest starting with per-tenant schemas that leverage table inheritance as appropriate. The permissions would be pretty easy to manage.
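Roughly this shape (the tenant and role names are just placeholders):

    CREATE SCHEMA tenant_42;
    CREATE TABLE tenant_42.users () INHERITS (public.users);

    CREATE ROLE tenant_42_role;
    GRANT USAGE ON SCHEMA tenant_42 TO tenant_42_role;
    GRANT SELECT, INSERT, UPDATE, DELETE
        ON ALL TABLES IN SCHEMA tenant_42 TO tenant_42_role;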
Though in a lot of the cases where I've actually seen this done, every client ends up with a dedicated server (or container - whatever tech you use to do it, something completely isolated from other instances) because user version management ends up being a huge issue. When you're building something that custom, it's highly likely that version migrations need to be done with client oversight to ensure everything actually works.
I have yet to find an actual real world case where the inner-platform effect is the right solution. Usually when tools like that are selected, the software ends up being so generic and flexible that it's useless. Custom application/BI environment development relies on really judiciously telling users they can't have most features - with the hard part being figuring out which features are necessary and which ones you can cut to reduce bloat.
Notion has 100 million users, managing schema-per-tenant at our scale sounds like a complexity nightmare. We have 480+ identical schemas across 100+ Postgres hosts, and that already takes a lot of brainpower & engineering time to manage T_T
> managing schema-per-tenant at our scale sounds like a complexity nightmare.
The per-tenant schema could be the tenant's responsibility. Most non-technical users can handle the idea of tables & columns, assuming you leverage UI/UX patterns they are already familiar with.
As long as we never add new features, never need to change how we map UI <-> Postgres DDL, and our users never make any mistakes when they change their tables, it could work without being a complexity nightmare
Why not use JSONB for this kind of thing? Store the schema in one table - one row per client, or perhaps one row per table per client - and then store the data in another table, segregated by customer and table type, with the row data stored in a JSONB field using that table's schema.
I normally don't like using JSONB when I could have a rigorous schema, but this sort of application seems reasonable.
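Something like this is what I have in mind (names purely illustrative; validating row_data against schema_def would happen in the application):

    CREATE TABLE client_table_defs (
        client_id  BIGINT NOT NULL,
        table_name TEXT   NOT NULL,
        schema_def JSONB  NOT NULL,  -- the client's column names/types
        PRIMARY KEY (client_id, table_name)
    );

    CREATE TABLE client_rows (
        client_id  BIGINT NOT NULL,
        table_name TEXT   NOT NULL,
        row_data   JSONB  NOT NULL,  -- one user-defined row, shaped by schema_def
        FOREIGN KEY (client_id, table_name)
            REFERENCES client_table_defs (client_id, table_name)
    );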
You manage a fleet of devices that need to get operating parameters regularly, but the parameters are complicated and SQLite is a great mechanism for sending them.
So at the backend you have a postgres database that contains the device details etc as well as the operating parameters for that device.
You can update the operating parameters as part of a postgres transaction so either all the BLOBs are updated, or none.
Using /tmp on the postgres cluster (server) is a bit of a hack; it would be nicer to have memory-based SQLite blobs.
In terms of security, you get Postgres row level security, so each SQLite value is protected in the same way as the rest of the row.
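A minimal sketch of the all-or-nothing update, reusing the empty_sqlite()/execute_sqlite() helpers from elsewhere in the thread (the devices table and parameter names are invented):

    BEGIN;
    -- every device in the fleet gets the new parameters, or none do
    UPDATE devices
    SET params = execute_sqlite(params,
        'UPDATE operating_params SET max_rpm = 4500 WHERE profile = ''default''')
    WHERE fleet_id = 7;
    COMMIT;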
The top line of the README says: "Embed an SQLite database in your PostgreSQL table. AKA multitenancy has been solved."
But I'm still having trouble trying to grok the intricacies of it. In a sense, I guess it has well isolated individual SQLite DBs and you'd have to go out of your way to join over them. With that said, does PostgreSQL manage and pool all the writes correctly? So you don't need to worry about SQLite concurrency issues?
    CREATE TABLE tenants (
        id BIGINT NOT NULL,
        database SQLITE DEFAULT execute_sqlite(
            empty_sqlite(),
            'CREATE TABLE users (etc.)'
            -- ...and all the other tables for each tenant
        )
    );
then they don't need to make joins between sqlite dbs.
Your other concerns are very real. Those sqlite dbs could become very large.
I prefer the use case depicted in another reply: preparing sqlite dbs before shipping them to their own devices. Or maybe receiving them and performing analysis, perhaps after importing them into overall psql tables. Or similar scenarios in which the whole db is read or written at once. Anyway, once we have a tool we start using it.
> then they don't need to make joins between sqlite dbs.
The extension could also provide custom index access methods (considering that SQLite only has a handful of column types in the first place). That would allow you to incorporate the keys in the index heaps, as opposed to table heaps - boom, you get bitmap index scans for joins, i.e. GIN but with a bit more redundancy.
I’m thinking maybe you’d like to use litefs for multi-tenant dbs close to the tenant. But perhaps you’ll want a centralized billing/reports database under postgres as well?
So, instead of saving the org's client SQLite db to cloud storage, you save it to the centralized db column. Litefs probably doesn't support it yet, but it wouldn't be too hard to add.
I think SQLite columns for SQLite would be superior to SQLite's JSON columns, whose operators are a whole 'nother query language you need to learn and seem comparatively limited.
Agreed, the JSON search queries in Postgres are esoteric, to say the least.
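For example, the jsonpath style of query takes some getting used to (table and key names here are invented):

    -- orders whose JSON payload has any line item priced over 100
    SELECT * FROM orders
    WHERE data @? '$.items[*] ? (@.price > 100)';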
But after spending some time with a mixed-schema table at even modest scale, I’m wondering how often a better design could have cut the whole problem off.
The /tmp file mechanism sounds like a bit of a hack - is that definitely necessary?
It may be possible to create a SQLite in-memory database instead and then load the binary blob data into it using the backup API or some kind of trick with VACUUM INTO.
I think the right approach would be to store the sqlite database as a varlena type that can be TOASTed and then "expanded" using the Expanded Datum API so that it's a live open database connection for the life of the transaction:
I had a project that stored a tremendous amount of spatial data. There were "sessions" of spatially-tagged time-series data that would be individually processed (think generating a map layer from time-series data). There were also reasons to perform higher level aggregations that did not dive into the time series data. The data density was high enough that it was impractical to build spatial indices over the entire dataset. Even using space-filling curves as multidimensional B-trees would require so many lookups that queries were impractically slow.
One POC I tried (and then rejected as an abomination) was to store each session's time-series data inside a SQLite database with SpatiaLite extensions enabled. Then store each session's metadata, including spatial extent, in a Postgres database. The SQLite files were tossed in S3 and referenced from Postgres. I guess I could have inserted them directly into a BLOB column inside Postgres.
I was _extremely disappointed_ not to see this meme when I clicked on the link. Will not consider using this extension until Xzibit is prominently featured.
The string between the dollar signs can only be closed by another set of dollar quotes with the same string between them. So it allows you to do quotes within quotes within quotes if necessary.
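For example, you can wrap a SQLite statement that itself contains quotes without any escaping on the Postgres side (reusing the empty_sqlite()/execute_sqlite() functions from upthread):

    SELECT execute_sqlite(
        empty_sqlite(),
        $sqlite$CREATE TABLE notes (body TEXT DEFAULT 'it''s a note')$sqlite$
    );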
I’m trying to think through when I’d reach for this over jsonb… I guess the fact that there’s an enforced schema? And that you could do aggregations on your SQLite db? Or maybe if you wanted to send the whole db to a client??
I love using the database as the source of truth for data consistency, and constraining your data to only be allowed in your database as long as it's in a valid state.
It's easy enough to replicate those constraints to the client if you want the client to do ahead of time validation, but your source of truth lives in the database...
If you’re using Postgres, multi-tenancy has been solved with row level security. It’s super easy to add a tenant id column to every table and a policy that only allows connections to see data from one tenant.
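A minimal version of that setup - the app.tenant_id setting name is just an example:

    ALTER TABLE orders ENABLE ROW LEVEL SECURITY;

    CREATE POLICY tenant_isolation ON orders
        USING (tenant_id = current_setting('app.tenant_id')::BIGINT);

    -- each connection declares its tenant before querying
    SET app.tenant_id = '42';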
RLS is very useful and can solve multi tenancy and other problems, but it is complicated and can add a significant per row cost to queries if your policies get complicated.
The common path of comparing some constant like the role name to some column in the table is fine, and it's fast enough as the policy checker already has the row in hand when it does the check, but the natural tendency for people to want to abstract their policies into a function like has_permission() will blow up fast.
The best approach I've seen is from pyramation's launchql [1], which precomputes policies into a bitstring and then masks that against a query-constant bitstring of required permissions. Flexible policy definitions compiled into the row as bits, so the check is as fast as possible.
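Very roughly the idea - this is just an illustration of masking precomputed bits, not launchql's actual schema:

    CREATE TABLE documents (
        id              BIGINT PRIMARY KEY,
        permission_bits BIT(8) NOT NULL  -- precomputed when the row is written
    );

    ALTER TABLE documents ENABLE ROW LEVEL SECURITY;

    -- bit 3 stands for "read"; the mask is a query constant
    CREATE POLICY doc_read ON documents FOR SELECT
        USING ((permission_bits & B'00000100') = B'00000100');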
Sure, if you start using it for more than just multitenancy you can get into performance trouble or other complexities. I haven’t felt tempted to put anything beyond the tenant-level isolation in it yet, though, and it’s served us very well.
Yeah really depends on the volume of the data and how sensitive the workload is to a few milliseconds. For a lot of business use cases, it's totally worth it to maintain just one database.
What, no operators? I want indexes on these columns, and some weird and wonderful operator syntax for doing cross-database joins between multiple DATABASE columns! =)
I like the simplicity of SQLite's "a file is all you need" approach so much, that I started to converge all my projects to SQLite. So far, I have not come across any roadblocks.
Can anyone think of a use case where PostgreSQL is better suited than SQLite?
The biggest one is redundancy. Architecting with read replicas is much easier with Postgres than SQLite because of its server model.
SQLite on the server is a fantastic starter database. Dead simple to set up, highly performant, and scales way higher (vertically) than anyone gives it credit for.
But there certainly is a point where you'll have to scale out instead of up, and while there are some great solutions for that (rqlite, litefs, dqlite, marmot), it's not inherent to SQLite's design.
Sometimes you have applications that should not be able to access an entire database. There are other various scaling reasons, and PG extensions that can be helpful. But I agree that for small to medium sized projects, SQLite is highly underrated.
When your application scales beyond one machine that needs access to the same database, PostgreSQL becomes an obviously better choice than SQLite. Until that point, SQLite is a fine, and honestly underrated choice.
Should the concept of "machines" really be a concern of the DB layer?
SQLite already allows multiple connections, so putting it on a server and adding a program that talks a network protocol and proxies the queries to the DB sounds more logical to me?
High performance software is written acknowledging the reality that it will run on hardware. Databases tend to be a class of software that is hyper-focused on performance.
Writing a networked application that uses SQLite as a database is perfectly reasonable. You're just making the decision to lift the layer of abstraction that is concerned with machines from the DB to your application, which may or may not be a reasonable thing to do.
Certainly if you need a network-attached database and aren't creating your own home brew network-attached database (the so-called API server), Postgres is a pretty good choice.
With the expanded datum API you can also work with subscriptable array types to only expand elements lazily as needed. It might already work if you try it, but support for it might be hardwired only to nested stock arrays - something to look into.
We should have a response team standing by, ready to dump thousands of tons of concrete onto that legitimate use case. A gigantic cement sarcophagus that may not solve the problem, but our descendants thousands of years from now may be better prepared to do what we can't and destroy it. The "someone" will just have to be a tragic casualty, as we won't be able to save him or her without risking the contagion spreading.
The current implementation is writing out the DB to `/tmp` then reading the resulting file back and writing it to the column.
So on the bright side updating 1k rows takes the same amount of time as updating one row. On the other hand every write is a full table write (actually two).
I don't think there is a way to do this efficiently with the current API, as PostgreSQL is MVCC so it needs to write out each version separately (unless it has some sort of support for partial string sharing, which I don't think it does). Maybe a better version of this would write every page of the SQLite DB as a separate row, so that you only need to update the changed pages.
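Something along these lines, purely as an illustration (not what the extension does today):

    CREATE TABLE sqlite_pages (
        db_id   BIGINT  NOT NULL,
        page_no INTEGER NOT NULL,
        page    BYTEA   NOT NULL,  -- one fixed-size SQLite page
        PRIMARY KEY (db_id, page_no)
    );

    -- an update then rewrites only the pages that changed,
    -- instead of the whole database blob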
“Not with that attitude.”
– frectonz
[0]: https://en.wikipedia.org/wiki/First_normal_form