Show HN: Unregistry – “docker push” directly to servers without a registry

620 points | 1 day ago | github.com

I got tired of the push-to-registry/pull-from-registry dance every time I needed to deploy a Docker image.

In certain cases, using a full-fledged external (or even local) registry is annoying overhead. And if you think about it, a form of registry is already present on every Docker-enabled host: Docker's own image storage.

So I built Unregistry [1], which exposes Docker's (containerd) image storage through a standard registry API. It adds a `docker pussh` command that pushes images directly to remote Docker daemons over SSH. It transfers only the missing layers, making it fast and efficient.

  docker pussh myapp:latest user@server
Under the hood, it starts a temporary unregistry container on the remote host, pushes to it through an SSH tunnel, and cleans up when done.
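By hand, that flow would look roughly like this (a sketch only; the container name, image, and ports here are illustrative, not the actual script):

  # start a throwaway registry on the server, bound to localhost only
  ssh user@server 'docker run -d --name unregistry -p 127.0.0.1:5000:5000 <unregistry-image>'
  # forward a local port to it over SSH
  ssh -fN -L 55000:localhost:5000 user@server
  # push through the tunnel; only layers missing on the server are uploaded
  docker tag myapp:latest localhost:55000/myapp:latest
  docker push localhost:55000/myapp:latest
  # clean up the temporary container
  ssh user@server 'docker rm -f unregistry'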

I built it as a byproduct while working on Uncloud [2], a tool for deploying containers across a network of Docker hosts, and figured it'd be useful as a standalone project.

Would love to hear your thoughts and use cases!

[1]: https://github.com/psviderski/unregistry

[2]: https://github.com/psviderski/uncloud

shykes49 minutes ago

Docker creator here. I love this. In my opinion the ideal design would have been:

1. No distinction between docker engine and docker registry. Just a single server that can store, transfer and run containers as needed. It would have been a much more robust building block, and would have avoided the regrettable drift between how the engine & registry store images.

2. push-to-cluster deployment. Every production cluster should have a distributed image store, and pushing images to this store should be what triggers a deployment. The current status quo - push image to registry; configure cluster; individual nodes of the cluster pull from registry - is brittle and inefficient. I advocated for a better design, but the inertia was already too great, and the early Kubernetes community was hostile to any idea coming from Docker.

richardc323 4 hours ago

I naively sent the Docker developers a PR[1] to add this functionality into mainline Docker back in 2015. I was rapidly redirected into helping out in other areas - not having to use a registry undermined their business model too much I guess.

[1]: https://github.com/richardcrichardc/docker2docker

nine_k1 day ago

Nice. And the `pussh` command definitely deserves the distinction of one of the most elegant puns: easy to remember, self-explanatory, and just one letter away from its sister standard command.

gchamonlive22 hours ago

It's fine, but it wouldn't hurt to have a more formal alias like `docker push-over-ssh`.

EDIT: why I think this is important: in automations that are developed collaboratively, "pussh" could be read as a typo by someone unfamiliar with the feature and cause unnecessary confusion, whereas "push-over-ssh" is clearly deliberate. Think of them maybe as short-hand vs. full flags.

psviderski20 hours ago

That's a valid concern. You can very easily give it whatever name you like. Docker looks for `docker-COMMAND` executables in the ~/.docker/cli-plugins directory, making COMMAND available as a `docker` subcommand.

Rename the file to whatever you like, e.g. to get `docker pushoverssh`:

  mv ~/.docker/cli-plugins/docker-pussh ~/.docker/cli-plugins/docker-pushoverssh
Note that Docker doesn't allow dashes in plugin commands.

whalesalad 7 hours ago

can easily see an engineer spotting pussh in a ci/cd workflow or something and thinking "this is a mistake" and changing it.

EricRiese23 hours ago

> The extra 's' is for 'sssh'

> What's that extra 's' for?

> That's a typo

someothherguyy22 hours ago

and prone to collision!

nine_k21 hours ago

Indeed so! Because it's art, not engineering. The engineering approach would require a recognizably distinct command, eliminating the possibility of such a pun.

rollcat9 hours ago

I used to have an alias em=mg, because mg(1) is a small Emacs, so "em" seemed like a fun name for a command.

Until one day I made that typo.

bobbiechen 8 hours ago [+1]

pinoy420 8 hours ago

[dead]

alisonatwork22 hours ago

This is a cool idea that seems like it would integrate well with systems already using push deploy tooling like Ansible. It also seems like it would work as a good hotfix deployment mechanism at companies where the Docker registry doesn't have 24/7 support.

Does it integrate cleanly with OCI tooling like buildah etc., or do you need a full-blown Docker install on both ends? I haven't dug deeply into this yet because it's related to some upcoming work, but it seems like bootstrapping a mini registry on the remote server is the missing piece for skopeo to be able to work for this kind of setup.

psviderski21 hours ago

You need containerd on the remote end (Docker and Kubernetes use containerd) and anything that speaks the registry API (OCI Distribution Spec: https://github.com/opencontainers/distribution-spec) on the client. Unregistry reuses the official Docker registry code for the API layer, so it looks and feels like https://hub.docker.com/_/registry

You can use skopeo, crane, regclient, BuildKit, anything that speaks the OCI registry protocol on the client, although you will need to manually run unregistry on the remote host to use them. The `docker pussh` command just automates the workflow using the local Docker.

Just check it out, it's a bash script: https://github.com/psviderski/unregistry/blob/main/docker-pu...

You can hack your own way pretty easily.
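For example, a rough skopeo sketch, assuming you've already started unregistry on the server and it's listening on port 5000 (host and flags illustrative):

  # copy straight from the local Docker daemon into the remote unregistry (plain HTTP)
  skopeo copy --dest-tls-verify=false \
    docker-daemon:myapp:latest \
    docker://server.example.com:5000/myapp:latest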

dirkc10 hours ago

I agree! For a bunch of services I manage I build the image locally, save it and then use ansible to upload the archive and restore the image. This usually takes a lot longer than I want it to!

0x457 21 hours ago

It needs a Docker daemon on both ends. This is just a clever way to share layers between two daemons via ssh.

metadat23 hours ago

This should have always been a thing! Brilliant.

Docker registries have their place, but they are overall over-engineered and antithetical to the hacker mentality.

password4321 19 hours ago

As a VC-funded company Docker had to make money somehow.

ezekg4 hours ago

I think the complexity lies in the dance required to push blobs to the registry. I've built an OCI-compliant pull-only registry before and it wasn't that complicated.

dreis_sw8 hours ago

I recommend using GitHub's registry, ghcr.io, with GitHub Actions.

I invested just 20 minutes to set up a .yaml workflow that builds and pushes an image to my private registry on ghcr.io, and 5 minutes to allow my server to pull images from it.

It's a very practical setup.

revicon9 hours ago

Is this different from using a remote docker context?

My workflow in my homelab is to create a remote docker context like this...

(from my local development machine)

> docker context create mylinuxserver --docker "host=ssh://revicon@192.168.50.70"

Then I can do...

> docker context use mylinuxserver

> docker compose build

> docker compose up -d

And all the images contained in my docker-compose.yml file are built, deployed and running in my remote linux server.

No fuss, no registry, no extra applications needed.

Way simpler than using docker swarm, Kubernetes or whatever. Maybe I'm missing something that @psviderski is doing that I don't get with my method.

matt_kantor9 hours ago

Assuming I understand your workflow, one difference is that unregistry works with already-built images. They aren't built on the remote host, just pushed there. This means you can be confident that the image on your server is exactly the same as the one you tested locally, and also will typically be much faster (assuming well-structured Dockerfiles with small layers, etc).

pbh101 8 hours ago

This is probably an anti-feature in most contexts.

akovaski7 hours ago

The ability to push a verified artifact is an anti-feature in most contexts? How so?

tontony5 hours ago

Totally valid approach if that works for you, the docker context feature is indeed nice.

But if we're talking about hosts that run production-like workloads, using them to perform potentially cpu-/io-intensive build processes might be undesirable. A dedicated build host and context can help mitigate this, but then you again face the challenge of transferring the built images to the production machine, and that's where the unregistry approach should help.

lxe24 hours ago

Ooh, this made me discover uncloud. Sounds like exactly what I was looking for. I wanted something like dokku but beefier for a side-project server setup.

vhodges23 hours ago

There is also https://skateco.github.io/ which (at quick glance) seems similar

byrnedo5 hours ago

Skate author here: please try it out! I haven’t gotten round to diving deep into uncloud yet, but I think maybe the two projects differ in that skate has no control plane; the cli is the control plane.

I built skate out of that exact desire: to have a dokku-like experience that was multi-host and used a standard deployment configuration syntax (k8s manifests).

https://skateco.github.io/docs/getting-started/

benwaffle2 hours ago

Looks like uncloud has no control plane, just a CLI: https://github.com/psviderski/uncloud#-features

psviderski20 hours ago

I'm glad the idea of uncloud resonated with you. Feel free to join our Discord if you have questions or need help

nodesocket24 hours ago

A recommendation for Portainer if you haven't used or considered it. I'm running two EC2 instances on AWS using Portainer community edition and the Portainer agent, and it works really well. The stack feature (which is just docker compose) is also super nice. One EC2 instance, running the Portainer agent, runs Caddy in a container which acts as the load balancer and reverse proxy.

lxe8 hours ago

I'm actually running portainer for my homelab setup hosting things like octoprint and omada controller etc.

modeless22 hours ago

It's very silly that Docker didn't work this way to start with. Thank you, it looks cool!

TheRoque21 hours ago

You can already achieve the same thing by making your image into an archive, pushing it to your server, and then running it from the archive on your server.

Saving as archive looks like this: `docker save -o my-app.tar my-app:latest`

And loading it looks like this: `docker load -i /path/to/my-app.tar`

Using a tool like ansible, you can easily automate what "Unregistry" is doing. According to the github repo, save/load has the drawback of transferring the whole image over the network, which can indeed be an issue. And managing images instead of archive files seems more convenient.

nine_k21 hours ago

If you have an image with 100MB worth of bottom layers and only change the tiny top layer, unregistry will only send the top layer, while save/load would send the whole 100MB+.

Hence the value.

isoprophlex19 hours ago

yeah i deal with horrible, bloated python machine learning shit; >1 GB images are nothing. this is excellent, and i never knew how much i needed this tool until now.

throwaway290 18 hours ago

Docker also has export/import commands. Those only export the container's current filesystem, without the layer history.

authorfly15 hours ago

Good advice, and beware the difference between docker export (which will fail if you lack enough storage, since it saves volumes) and docker save. Running the wrong command might knock your only running docker server into an unrecoverable state...

amne17 hours ago

Takes a look at pipeline that builds image in gitlab, pushes to artifactory, triggers deployment that pulls from artifactory and pushes to AWS ECR, then updates deployment template in EKS which pulls from ECR to node and boots pod container.

I need this in my life.

maccard16 hours ago

My last project's pipeline spent more time pulling and pushing containers than it did actually building the app. And all of that was dwarfed by the health-check waiting period, even though we knew within a second of startup whether we were actually healthy or not.

scott113341 23 hours ago

Neat project and approach! I got fed up with expensive registries and ended up self-hosting Zot [1], but this seems way easier for some use cases. Does anyone else wish there was an easy-to-configure, cheap & usage-based, private registry service?

[1]: https://zotregistry.dev

stroebs19 hours ago

Your SSL certificate for zothub.io has expired in case you weren’t aware.

matt_kantor10 hours ago

Functionality-wise this is a lot like docker-pushmi-pullyu[1] (which I wrote), except docker-pushmi-pullyu is a single relatively-simple shell script, and uses the official registry image[2] rather than a custom server implementation.

@psviderski I'm curious why you implemented your own registry for this, was it just to keep the image as small as possible?

[1]: https://github.com/mkantor/docker-pushmi-pullyu

[2]: https://hub.docker.com/_/registry

matt_kantor9 hours ago

After taking a closer look it seems the main conceptual difference between unregistry/docker-pussh and docker-pushmi-pullyu is that the former runs the temporary registry on the remote host, while the latter runs it locally. Although in both cases this is not something users should typically have to care about.

westurner7 hours ago

Do docker-pussh or docker-pushmi-pullyu verify container image signatures and attestations?

From "About Docker Content Trust (DCT)" https://docs.docker.com/engine/security/trust/ :

> Image consumers can enable DCT to ensure that images they use were signed. If a consumer enables DCT, they can only pull, run, or build with trusted images.

  export DOCKER_CONTENT_TRUST=1

cosign > verifying containers > verify attestation: https://docs.sigstore.dev/cosign/verifying/verify/#verify-at...

/? difference between docker content trust dct and cosign: https://www.google.com/search?q=difference+between+docker+co...

fellatio22 hours ago

Neat idea. This probably has the disadvantage of coupling deployment to a service. For example, how do you scale up or do red/green deployments (you'd need the thing that does this to be aware of the push)?

Edit: that thing exists it is uncloud. Just found out!

That said it's a tradeoff. If you are small, have one Hetzner VM and are happy with simplicity (and don't mind building images locally) it is great.

psviderski20 hours ago

For sure, it's always a tradeoff and it's great to have options so you can choose the best tool for every job.

actinium226 24 hours ago

This is excellent. I've been doing the save/load and it works fine for me, but I like the idea that this only transfers missing layers.

FWIW I've been saving then using mscp to transfer the file. It basically does multiple scp connections to speed it up and it works great.

larsnystrom18 hours ago

Nice to only have to push the layers that changed. For me it's been enough to just do "docker save my-image | ssh host 'docker load'", but I don't push images very often, so pushing all layers every time is fine.

bradly1 day ago

As a long ago fan of chef-solo, this is really cool.

Currently, I need to use a docker registry for my Kamal deployments. Are you familiar with it, and would this remove the 3rd-party dependency?

psviderski20 hours ago

Yep, I'm familiar with Kamal and it actually inspired me to build Uncloud using similar principles but with more cluster-like capabilities.

I built Unregistry for Uncloud, but I believe Kamal could also benefit from using it.

christiangenco9 hours ago

I think it'd be a perfect fit. We'll see what happens: https://github.com/basecamp/kamal/issues/1588

layoric22 hours ago

I'm so glad there are tools like this swinging back to self-hosted solutions, especially leveraging SSH tooling. Well done and thanks for sharing; I will definitely be giving it a spin.

MotiBanana19 hours ago

I've been using ttl.sh for a long time, but only for public, temporary code. This is a really cool idea!

psviderski17 hours ago

Wow ttl.sh is a really neat idea, thank you for sharing!

koakuma-chan1 day ago

This is really cool. Do you support or plan to support docker compose?

psviderski1 day ago

Thank you! Can you please clarify what kind of support you mean for docker compose?

fardo1 day ago

I assume that he means "rather than pushing up each individual container for a project, it could take something like a compose file over a list of underlying containers, and push them all up to the endpoint."

koakuma-chan1 day ago

Yes, pushing all containers one by one would not be very convenient.

baobun 24 hours ago [+1]

jillesvangurp 18 hours ago

Right now, I use ssh to trigger a docker compose restart that pulls all the latest images on some of my servers (we have a few dedicated hosting/on-premise setups). That then needs to reach out to our registry to pull images. So it's this weird mix of push and pull that ends up needing a central registry.

What would be nicer instead is some variation of docker compose pussh that pushes the latest versions of local images to the remote host based on the remote docker-compose.yml file. The alternative would be docker pusshing the affected containers one by one and then triggering a docker compose restart. Automating that would be useful and probably not that hard.
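Something like this wrapper would probably get most of the way there (a sketch; `docker compose config --images` lists the images the compose file references, and the remote path is illustrative):

  # pussh every image referenced by the compose file, then restart remotely
  for img in $(docker compose config --images); do
    docker pussh "$img" user@server
  done
  ssh user@server 'cd /srv/app && docker compose up -d'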

felbane12 hours ago

I've built a setup that orchestrates updates for any number of remotes without needing a permanently hosted registry. I have a container build VM at HQ that also runs a registry container pointed at the local image store. Updates involve connecting to remote hosts over SSH, establishing a reverse tunnel, and triggering the remote hosts to pull from the "localhost" registry (over the tunnel to my buildserver registry).

The connection back to HQ only lasts as long as necessary to pull the layers, tagging works as expected, etc etc. It's like having an on-demand hosted registry and requires no additional cruft on the remotes. I've been migrating to Podman and this process works flawlessly there too, fwiw.
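In ssh terms, the core of that flow is something like this (a minimal sketch, assuming the build VM's registry listens on local port 5000):

  # expose the HQ registry to the remote host as localhost:5000 and pull over the tunnel;
  # localhost registries are allowed over plain HTTP by default, so no TLS config is needed
  ssh -R 5000:localhost:5000 remote-host 'docker pull localhost:5000/myapp:latest'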

esafak24 hours ago

You can do these image acrobatics with the dagger shell too, but I don't have enough experience with it to give you the incantation: https://docs.dagger.io/features/shell/

throwaway314155 24 hours ago

I assume you can do these "image acrobatics" in any shell.

esafak5 hours ago

The dagger shell is built for devops, and can pipe first class dagger objects like services and containers to enable things like

  github.com/dagger/dagger/modules/wolfi@v0.16.2 |
  container |
  with-exec ls /etc/ |
  stdout
What's interesting here is that the first line demonstrates invocation of a remote module (building a Wolfi Linux container), of which there is an ecosystem: https://daggerverse.dev/

iw7tdb2kqo9 17 hours ago

I think it will be a good fit for me. Currently our 3GB docker image takes a lot of time to push to the GitHub package registry from GitHub Actions and to pull from EC2.

mountainriver21 hours ago

I’ve wanted unregistry for a long time, thanks so much for the awesome work!

psviderski20 hours ago

Me too, you're welcome! Please create an issue on GitHub if you find any bugs.

nothrabannosir1 day ago

What's the difference between this and skopeo? Is it the ssh support? I'm not super familiar with skopeo, forgive my ignorance.

https://github.com/containers/skopeo

jlcummings3 hours ago

Skopeo lets you work with remote registries and local images without a docker/podman/etc daemon.

We use it to 'clone' across deployment environments and across providers, outside of the build pipeline, as an ad-hoc job.

yibers1 day ago

"skopeo" seems to related to managing registeries, very different from this.

NewJazz24 hours ago

Skopeo manages images, copies them and stuff.

yjftsjthsd-h24 hours ago

What is the container for / what does this do that `docker save some:img | ssh wherever docker load` doesn't? More efficient handling of layers or something?

psviderski23 hours ago

Yeah exactly, which is crucial for large images when you change only the last few layers.

The unregistry container provides a standard registry API you can pull images from as well. This could be useful in a cluster environment where you upload an image over ssh to one node and then pull it from there to other nodes.

This is what I'm planning to implement for Uncloud. Unregistry is so lightweight that we can embed it in every machine daemon. This will allow machines in the cluster to pull images from each other.
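For example, something like this from another node, assuming unregistry is left running on node1 at port 5000 (host and port illustrative):

  # non-localhost plain-HTTP registries must be listed under "insecure-registries"
  # in /etc/docker/daemon.json on the pulling host
  docker pull node1.internal:5000/myapp:latest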

pragmatick18 hours ago

Relatively early on the page it says:

"docker save | ssh | docker load transfers the entire image, even if 90% already exists on the server"

dzonga1 day ago

this is nice, hopefully DHH and the folks working on Kamal adopt this.

the whole reason I didn't end up using Kamal was the 'need a docker registry' thing, when I can easily push a Dockerfile / compose file to my VPS, build the image there, and restart to deploy via a make command

psviderski20 hours ago

I don't see a reason not to adopt this in Kamal. I'm also building Uncloud, which took a lot of inspiration from Kamal; please check it out. I will integrate unregistry into uncloud soon to make the build/deploy process a breeze.

rudasn24 hours ago

Build the image on the deployment server? Why not build somewhere else once and save time during deployments?

I'm most familiar with on-prem deployments and quickly realised that it's much faster to build once, push to registry (eg github) and docker compose pull during deployments.

christiangenco9 hours ago

I think the idea with unregistry is that you're still building somewhere else once, but then instead of pushing everything to a registry, you push your unique layers directly to each server you're deploying to.

hoppp13 hours ago

Oh this is great, it's a problem I also have.

rcarmo14 hours ago

I think this is great and have long wondered why it wasn't an out-of-the-box feature in Docker itself.

remram24 hours ago

Does it start an unregistry container on the remote/receiving end or the local/sending end? I think it runs remotely. I wonder if you could go the other way instead?

psviderski17 hours ago

It starts an unregistry container on the remote side. I wonder, what's the use case you have in mind for doing it the other way around?

selcuka23 hours ago

You mean ssh'ing into the remote server, then pulling the image from local? That would require your local host to be accessible from the remote host, or setting up some kind of ssh tunneling.

mdaniel22 hours ago

`ssh -R` and `ssh -L` are amazing, and I just learned that -L and -R both support unix sockets on either end and also unix socket to tcp socket https://manpages.ubuntu.com/manpages/noble/man1/ssh.1.html#:...

I would presume it's something akin to $(ssh -L /var/run/docker.sock:/tmp/d.sock sh -c 'docker -H unix:///tmp/d.sock save | docker load') type deal

matt_kantor9 hours ago

This is what docker-pushmi-pullyu[1] does, using `ssh -R` as suggested by a sibling comment.

[1]: https://github.com/mkantor/docker-pushmi-pullyu

armx40 1 day ago

How about using docker context. I use that a lot and works nicely.

Snawoot1 day ago

How do docker contexts help with the transfer of images between hosts?

dobremeno14 hours ago

I assume OP meant something like this, building the image on the remote host directly using a docker context (which is different from a build context)

  docker context create my-awesome-remote-context --docker "host=ssh://user@remote-host"

  docker --context my-awesome-remote-context build . -t my-image:latest
This way you end up with `my-image:latest` on the remote host too. It has the advantage of not transferring the entire image but only transferring the build context. It builds the actual image on the remote host.

revicon 9 hours ago

This is exactly what I do, make a context pointing to the remote host, use docker compose build / up to launch it on the remote system.

cultureulterior20 hours ago

This is super slick. I really wish there was something that did the same but using the torrent protocol, so all your servers shared it.

psviderski17 hours ago

Not a torrent protocol, but p2p: check out https://github.com/spegel-org/spegel, it's super cool.

I took inspiration from spegel but built a more focused solution to make a registry out of a Docker/containerd daemon. A lot of other cool stuff and workflows can be built on top of it.

spwa4 13 hours ago

THANK you. Can you do the same for kubernetes somehow?

tontony5 hours ago

A few thoughts/ideas on using this in Kubernetes are discussed in this issue: https://github.com/psviderski/unregistry/issues/4. Generally it should be possible with the same idea, but with some tweaking.

Also have a look at https://spegel.dev/, it's basically a daemonset running in your k8s cluster that implements a (mirror) registry using locally cached images and peer-to-peer communication.

alibarber15 hours ago

This is timely for me!

I personally run a small instance with Hetzner that has K3s running. I'm quite familiar with K8s from my day job, so it's nice, when I want to do a personal project, to be able to just use similar tools.

I have a Macbook and, for some reason, I really dislike the idea of running docker (or podman, etc) on it. Now of course I could have GitHub Actions building the project and pushing it to a registry, then pull that to the server, but it's another step between code and server that I wanted to avoid.

Fortunately, it's trivial to sync the code to a pod over kubectl and have podman build it there, but the registry (the step from pod to cluster) was the missing piece, and it infuriated me that even with save/load, so much was going to be duplicated on what is effectively the same VM. I'll need to give this a try, and it's inspired me to create some dev automation and share it.

Of course, this is all overkill for hobby apps, but it's a hobby and I can do it the way I like, and it's nice to see others also coming up with interesting approaches.

dboreham10 hours ago

I like the idea, but I'd want this functionality "unbundled".

Being able to run a registry server over the local containerd image store is great.

The details of how some other machine's containerd gets images from that registry are, to me, a separate concern. docker pull will work just fine provided it is given a suitable registry url and credentials. There are many ways to provide the necessary network connectivity and credentials sharing, so I don't want that aspect to be baked in.

Very slick though.

peyloride18 hours ago

This is awesome, thanks!

victorbjorklund16 hours ago

Sweet. I've been wanting this for a long time.

s1mplicissimus1 day ago

very cool. now let's integrate this such that we can do `docker/podman push localimage:localtag ssh://hostname:port/remoteimage:remotetag` without extra software installed :)

brirec19 hours ago

I was informed that Podman at least has a `podman image scp` function for doing just this...

czhu12 20 hours ago

Does this work with Kubernetes image pulls?

psviderski19 hours ago

I guess you're asking about the registry part (not the 'pussh' command). It exposes the containerd image store as a standard registry API, so you can use any tools that work with a regular registry to pull/push images.

You should be able to run unregistry as a standalone service on one of the nodes. Kubernetes uses containerd for storing images on nodes, so unregistry will expose the node's images as a registry. Then you should be able to run k8s deployments using the 'unregistry.NAMESPACE:5000/image-name:tag' image, and kubelets on other nodes will pull the image from unregistry.

You may want to take a look at https://spegel.dev/ which works similarly but was created specifically for Kubernetes.

jokethrowaway21 hours ago

Very nice! I used to run a private registry on the same server to achieve this - then I moved to building the image on the server itself.

Both approaches are inferior to yours because of the load on the server (one way or another).

Personally, I feel like we need to go one step further and just build locally, merge all layers, ship a tar of the entire (micro) distro + app and run it with lxc. Get rid of docker entirely.

My images are tiny; the extra complexity is unwarranted.

Then again, I'm not a 1000-person company with 1GB docker images.

isaacvando23 hours ago

Love it!

bflesch17 hours ago

this is useful. thanks for sharing

quantadev21 hours ago

I always just use "docker save" to generate a TAR file, copy the TAR file to the server, and then run "docker load" on the server to load the image on the target machine.
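i.e. something like (paths illustrative):

  docker save myapp:latest -o myapp.tar
  scp myapp.tar server:/tmp/myapp.tar
  ssh server 'docker load -i /tmp/myapp.tar'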

jdsleppy13 hours ago

I've been very happy doing this:

  DOCKER_HOST="ssh://user@remotehost" docker-compose up -d

It works with plain docker, too. Another user is getting at the same idea when they mention docker contexts, which is just a different way to set the variable.

Did you know about this approach? In the snippet above, the image will be built on the remote machine and then run. The context (files) are sent over the wire as needed. Subsequent runs will use the remote machine's docker cache. It's slightly different than your approach of building locally, but much simpler.
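For reference, the same idea with a named context, as other commenters mention (names illustrative):

  docker context create prod --docker "host=ssh://user@remotehost"
  docker --context prod compose up -d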

kosolam12 hours ago

This approach is akin to the prod server pulling an image from a registry. The OP's method is push-based.

jdsleppy8 hours ago

No, in my example the docker-compose.yml would exist alongside your application's source code and you can use the `build` directive https://docs.docker.com/reference/compose-file/services/#bui... to instruct the remote host (Hetzner VPS, or whatever else) to build the image. That image does not go to an external registry, but is used internal to that remote host.

For 3rd party images like `postgres`, etc., then yes it will pull those from DockerHub or the registry you configure.

But in this method you push the source code, not a finished docker image, to the server.

quantadev7 hours ago

Seems like it makes more sense to build on the build machine, and then just copy images out to PROD servers. Having source code on PROD servers is generally considered bad practice.

jdsleppy 3 hours ago [+1]

politelemon 18 hours ago

Considering the nature of servers, security boundaries and hardening,

> Linux via Homebrew

Please don't encourage this on Linux. It happens to offer a Linux setup as an afterthought but behaves like a pigeon on a chessboard rather than a package manager.

djfivyvusn16 hours ago

Brew is such a cute little package manager. Updating its repo every time you install something. Randomly self-updating like a virus.

v5v3 14 hours ago

That made me laugh lol

yrro15 hours ago

Well put, but it's a shame this comment is the first thing I read, rather than comments about the tool itself!

cyberax13 hours ago

We're using it to distribute internal tools to macOS and Linux developers. It excels at this.

Are there any good alternatives?

carlhjerpe13 hours ago

100% Nix. It works on every distro, macOS, and WSL2, and won't pollute your system (it'll create /nix and patch your bashrc on installation, and everything from there on goes into /nix).

cyberax12 hours ago

Downside: it's Nix.

I tried it, but I have not been able to easily replicate our Homebrew env. We have a private repo with pre-compiled binaries, and a simple Homebrew formula that downloads the utilities and installs them. Compiling the binaries requires quite a few tools (C++, sigh).

I got stuck at the point where I needed to use a private repo in Nix.

lloeki 12 hours ago [+1]

jlhawn 1 day ago

A quick and dirty version:

    docker -H host1 image save IMAGE | docker -H host2 image load
note: this isn't efficient at all (no compression or layer caching)!

alisonatwork 22 hours ago

On podman this is built in as the native command podman-image-scp [0], which perhaps could be more efficient with SSH compression.

[0] https://docs.podman.io/en/stable/markdown/podman-image-scp.1...

psviderski19 hours ago

Ah neat, I didn't know that podman has 'image scp'. Thank you for sharing. Do you think it was more straightforward to implement this in podman because you can easily access its images and metadata as files on the file system, without having to coordinate with any daemon?

Docker and containerd also store their images using a specific file system layout and a boltdb for metadata, but I was afraid to access them directly. The owners and coordinators are still Docker/containerd, so proper locking should be handled through them. As a result, we're limited by the API that the docker/containerd daemons provide.

For example, Docker daemon API doesn't provide a way to get or upload a particular image layer. That's why unregistry uses the containerd image store, not the classic Docker image store.

travisgriggs20 hours ago

So with Podman this exists already, but for docker it has to be created by the community.

I am a bystander to these technologies. I've built and debugged the rare image, and I use Docker Desktop on my Mac to isolate db images.

When I see things like these, I'm always curious why docker, which seems so much more bureaucratic/convoluted, prevails over podman. I totally admit this is a naive impression.

djfivyvusn15 hours ago

Something that took me 20 years to learn: Never underestimate the value of a slick gui.

password4321 19 hours ago

> why docker, which seems so much more beaurecratic/convoluted, prevails over podman

First mover advantage and ongoing VC-funded marketing/DevRel

selcuka23 hours ago

That method is actually mentioned in their README:

> Save/Load - `docker save | ssh | docker load` transfers the entire image, even if 90% already exists on the server

rgrau24 hours ago

I use a variant with ssh and some compression:

    docker save $image | bzip2 | ssh "$host" 'bunzip2 | docker load'

selcuka 23 hours ago

If you are happy with bzip2-level compression, you could also use `ssh -C` to enable automatic gzip compression.
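i.e.:

  docker save "$image" | ssh -C "$host" 'docker load'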

kristel100 12 hours ago

[dead]

ke1424 8 hours ago

[dead]

kewldev87 23 hours ago

[dead]

Aaargh20318 11 hours ago

I simply use "docker save <imagename>:<version> | ssh <remoteserver> docker load"

ajd555 11 hours ago

This is great! I wonder how well it works in case of disaster recovery, though. Perhaps it's not intended for production environments with strict SLAs and uptime requirements, but if you have 20 servers in a cluster that you're migrating to another region or even another cloud provider, pulling from a registry seems like the safest and most scalable approach.