Apart from the well-known TOAST functionality for storing large columns, PostgreSQL also contains a lesser-known large object facility.
An in-depth explanation can be found in the documentation, so suffice it to say that large objects are stored in the pg_largeobject and pg_largeobject_metadata system catalogs.
They can be referenced by their oid, and PostgreSQL provides a rather comprehensive set of manipulation functions, including importing from and exporting to files, random access, and so on. That being said, they also come with a significant list of caveats, some of which are described in more detail below.
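As a rough illustration, the sketch below uses psycopg2's client-side large object wrapper (connection.lobject()); the connection string and payload are placeholders and error handling is omitted.

```python
# Minimal sketch of working with large objects via psycopg2's
# client-side wrapper; connection details are placeholders.
import psycopg2

conn = psycopg2.connect("dbname=test")  # hypothetical DSN

# Create a new large object and remember its oid.
lo = conn.lobject(0, "wb")
lo.write(b"some binary payload")
oid = lo.oid
lo.close()

# Re-open it by oid later; seek/read provide random access.
lo = conn.lobject(oid, "rb")
lo.seek(5)
print(lo.read())

# Large objects are not removed automatically when the referencing row
# disappears, so unlink explicitly when done.
lo.unlink()
conn.commit()
conn.close()
```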
Encrypting a file in gpg without importing the key
The title of this post is a bit of a misnomer because it is always necessary to import a gpg key before it can be used to sign, encrypt, or otherwise process anything. However, we can create a temporary GPG home directory, use it for the encryption process, and discard it afterwards. The example below is written in Python but can easily be ported to other languages.
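The full example is in the post itself; a minimal sketch of the idea, assuming gpg is on the PATH and using a throwaway --homedir (the helper and its parameters are made up), might look like this:

```python
import subprocess
import tempfile

def encrypt_file(public_key_path: str, recipient: str,
                 infile: str, outfile: str) -> None:
    """Hypothetical helper: import the key into a temporary GPG home
    directory, encrypt for `recipient`, then discard the directory."""
    with tempfile.TemporaryDirectory() as gnupg_home:
        base = ["gpg", "--homedir", gnupg_home, "--batch", "--yes"]
        # Import the recipient's public key into the temporary keyring.
        subprocess.run(base + ["--import", public_key_path], check=True)
        # Encrypt; --trust-model always avoids trust prompts for the
        # freshly imported key.
        subprocess.run(
            base + ["--trust-model", "always", "--recipient", recipient,
                    "--output", outfile, "--encrypt", infile],
            check=True,
        )
    # The temporary home directory (and the imported key) is gone here.

# encrypt_file("alice.pub.asc", "alice@example.org", "report.txt", "report.txt.gpg")
```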
A simple task queue in TypeScript
The task queue implementation below provides a simple, drop-in solution for ensuring that dynamically generated asynchronous tasks are executed sequentially. A typical use case would be update requests that need to be executed in the order they were created. It works by chaining the submitted promises and assumes that all promises are eventually fulfilled or rejected.
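The post contains the full implementation; a minimal sketch of the chaining idea (the class and function names here are illustrative, not the post's code) could look like this:

```typescript
// Minimal sketch: submitted tasks are chained onto a single promise so
// they run strictly in submission order (all names are illustrative).
class TaskQueue {
  private tail: Promise<unknown> = Promise.resolve();

  submit<T>(task: () => Promise<T>): Promise<T> {
    // Start the task only once the previous one has settled.
    const result = this.tail.then(() => task());
    // A rejected task must not break the chain, so the stored tail
    // swallows the error; the caller still sees the original rejection.
    this.tail = result.catch(() => undefined);
    return result;
  }
}

// Usage: update requests are applied in the order they were submitted.
declare function saveUpdate(id: number): Promise<void>; // placeholder
const queue = new TaskQueue();
queue.submit(() => saveUpdate(1));
queue.submit(() => saveUpdate(2));
```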
Exhaustive when-expressions in Kotlin
By default, Kotlin only requires when-expressions to be exhaustive if the result is assigned to a variable. When the result is not used, no exhaustiveness checks are performed.
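A small illustration of the difference, with made-up types (note that recent Kotlin compilers have since tightened the statement case into an error for sealed hierarchies as well):

```kotlin
sealed class Shape
class Circle : Shape()
class Square : Shape()

fun area(shape: Shape): Double =
    // Expression position: the result is used, so the compiler rejects
    // this unless every Shape subtype (or an else branch) is covered.
    when (shape) {
        is Circle -> 1.0
        is Square -> 2.0
    }

fun log(shape: Shape) {
    // Statement position: the result is discarded, so the missing
    // Square branch historically produced at most a warning
    // (newer compilers reject this too).
    when (shape) {
        is Circle -> println("circle")
    }
}
```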
Spring transactions in GraphQL Java
The example below integrates GraphQL Java with Spring transaction management to ensure that each query/mutation is executed in a single transaction (similar to request-scoped transactions for RESTful APIs), so that the different loaders see a consistent view of the data. This is done by providing a custom ExecutionStrategy that starts a transaction before executing the query/mutation and commits it afterwards. For this to work properly, all loaders of a single query must be executed sequentially on the same thread.
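A rough sketch of such a strategy, using Spring's TransactionTemplate (an illustration of the approach, not necessarily the post's exact implementation):

```java
import java.util.concurrent.CompletableFuture;

import org.springframework.transaction.support.TransactionTemplate;

import graphql.ExecutionResult;
import graphql.execution.AsyncExecutionStrategy;
import graphql.execution.ExecutionContext;
import graphql.execution.ExecutionStrategyParameters;

// Illustrative sketch: runs the whole query/mutation inside one
// Spring-managed transaction. Only valid if all loaders complete
// synchronously on the calling thread.
public class TransactionalExecutionStrategy extends AsyncExecutionStrategy {

    private final TransactionTemplate transactionTemplate;

    public TransactionalExecutionStrategy(TransactionTemplate transactionTemplate) {
        this.transactionTemplate = transactionTemplate;
    }

    @Override
    public CompletableFuture<ExecutionResult> execute(ExecutionContext executionContext,
                                                      ExecutionStrategyParameters parameters) {
        // The TransactionTemplate begins the transaction, runs the query
        // and commits (or rolls back on an exception). join() is safe
        // only because the loaders are expected to finish on this thread.
        ExecutionResult result = transactionTemplate.execute(status ->
                super.execute(executionContext, parameters).join());
        return CompletableFuture.completedFuture(result);
    }
}
```

The strategy would then be registered when building the GraphQL instance, for example via GraphQL.newGraphQL(schema).queryExecutionStrategy(...) and mutationExecutionStrategy(...).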