One of my favorite GitOps tricks is adding a post-receive hook with the contents:
#!/bin/bash
echo -e "\e[1;31mUpdating worktree and fetching remotes\e[m"
git --git-dir="$GIT_DIR" --work-tree="$GIT_DIR/.." reset --hard
git --git-dir="$GIT_DIR" fetch origin master
# post-receive gets one "<old-sha> <new-sha> <refname>" line per pushed ref on stdin
while read -r oldref newref refname; do
    echo -e "\e[1;32mPushed ${refname##refs/heads/}\t${oldref::7} -> ${newref::7}\e[m"
done
echo -e "\e[1;31mRestarting service\e[m"
# Run whatever command is needed to restart the service
So that I can Heroku-style `git push` to my server (an ssh remote named "deploy") in order to deploy code!

Glad to see something like this, if only to point out that even without containers and Kubernetes, a proper deployment tool is a lot more complex than whatever shell script or Makefile you cooked up on your own. Error handling, timeouts, deployment state management, rollback logic, authentication, and a pause feature are the irreducible complexity of a robust deployment method.
Ehh, I think the capistrano model of deployments is a pretty good 'bare minimum' of features without tooling. Basically, just use a rolling symlink of timestamped deploys:
releases/
release-2024-01-01
release-2024-01-02
release-2024-01-07
current -> releases/release-2024-01-07
Make a new one on each deploy, and only update the symlink on success. Rollback is changing the symlink. Pause is just cancelling the scp. Auth is just Linux accounts. It's a simple model.

Highly recommend ansible-pull [1] as well for this job. Not because Ansible is amazing or anything, but because every use-case you can think of likely already has a module.
[1] https://docs.ansible.com/ansible/latest/cli/ansible-pull.htm...
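The rolling-symlink model described a couple of comments up can be sketched in a few lines of shell (paths hypothetical; a real deploy would scp/rsync the build into the new release directory first):

```shell
set -e
base=$(mktemp -d)   # stand-in for the app root on the target host
release="$base/releases/release-$(date +%Y-%m-%d-%H%M%S)"
mkdir -p "$release"
# ... copy the new build into "$release" here (scp/rsync) ...
# Flip the symlink only once the copy succeeded:
ln -sfn "$release" "$base/current"
# Rollback is just pointing "current" back at an older release, e.g.:
# ln -sfn "$base/releases/release-2024-01-02" "$base/current"
readlink "$base/current"
```

Because the flip is a single `ln -sfn`, a half-finished upload never becomes "current".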
I've been using ansible-pull for like 3-5(?) years now, and it's been pretty awesome. We use it to do bare-metal device management.
I built something similar, a github action for doing gitops with docker compose (or swarm): https://github.com/FarisZR/docker-compose-gitops-action
For other stuff I just use another version of the action to deploy files using Tailscale SSH: https://github.com/FarisZR/tailscale-ssh-deploy
If you use nixos there's also https://github.com/nlewo/comin
This is interesting - maybe it's time I use flakes...
We have an issue to support non flake deployments: https://github.com/nlewo/comin/issues/30
If you want a basic version of this you can just use the built-in
system.autoUpgrade = {
  enable = true;
  flake = "github:you/dotfiles";
};
Is this named from the overweight cat meme?
No, it was initially developed to manage a "COMmunity INfrastructure", and it sounds like the word "coming". I didn't know this meme, but it's a nice coincidence, because this infrastructure is actually a kind of "CHATONS" [1] ("kitten" in French)! Thanks for the reference, which could be useful for a future logo!
Maybe this should be GitOps without Docker? Which is something that’s pretty attractive to me for personal projects.
It's not clear to me what Kubernetes has to do with this project at all. GitOps isn't really synonymous with Kubernetes anyway, so this comes across as potentially throwing the word Kubernetes in for recognition. Mentioning Kubernetes and conflating GitOps with it leaves me more confused about what this actually solves for me.
Seems like the ArgoCD of non-containerized deployments. Which, ArgoCD has something to do with Kubernetes. So I guess I can sort of link in my head the thought process in branding here. But that requires me to know about ArgoCD and similar continuous deployment tools and link that in between GitOpper, GitOps, and Kubernetes somewhere. A bit too much mental gymnastics for the target audience: someone that doesn't use Kubernetes anyway.
In my opinion, consider changing your branding to something like "GitOps-style continuous deployments to non-containerized environments." A bit less showy, but at least that's more decipherable.
The term is pretty synonymous with Kubernetes since it was coined in a Kubernetes context: https://web.archive.org/web/20200104172859/https://www.weave...
A Kubernetes tool coined the term but it’s always been about using git to store your manifests for your infrastructure, with some method of synchronization between the current state and what is in the git repo. With that in mind, it makes no sense to call them synonymous, GitOps transcends the concept of a particular tool or ecosystem.
It definitely correlates with Kubernetes pretty well. I read the title and already had a good sense of what the project is about and found the title accurate after reading more.
I disagree on the basis that managing kubernetes deployments isn’t necessarily done in a GitOps fashion. You have to elect to. You might manage your deployments with manifests that you don’t store in git at all, with no automation to synchronize your git repository state to cluster state. You might opt to manage your cluster in a ClickOps fashion with one of the many GUI tools available for doing so. Kubernetes can be managed in a GitOps fashion, but you have to elect to do so. And in that way I don’t see how anybody could say they correlate.
I mean "synonymous" in the sense of:
> having the same connotations, implications, or reference
> to runners, Boston is synonymous with marathon
So to people that manage deployments to kubernetes in a GitOps way, kubernetes is synonymous with GitOps?
I disagree with that logic. To people that manage deployments to kubernetes in a ClickOps way, I suppose kubernetes is synonymous with ClickOps.
Regardless of the term used, GitOps has been a thing longer than Kubernetes has existed.
I found this odd too. It reminded me of a book I read ages ago, sci-fi fantasy where there is a species that is very long-lived but they have a finite memory. Maybe if your memory is a year or two long then GitOps would seem to mean k8s deployments automatically out of git, rather than all the other things.
(I think it was by C.S. Friedman, but I don't remember which one)
Perhaps you are thinking of the My Little Pony fanfiction "Alicorn Time" by author AlicornPriest. https://www.fimfiction.net/story/298141/alicorn-time
If I was I must have read it more than a few hours ago because I don't remember it :)
Curious when I might use this over GH Actions or GL Runners?
At least three situations that come to mind without even trying to think at all(*), after just skimming the readme:
- When you don't use GitHub or GitLab.
- When you use GitHub or GitLab, but you don't have write permissions to use the repository (e.g. because you don't own it).
- When you just want to deploy a VM image or a container (either one instance, or multiple instances), and be able to shut them down, but don't want to bother updating the repository pipeline references every time. Maybe the image/container is actually a FreeBSD jail created from a ZFS snapshot, with a repository hosted in a NAS.
Note: I won't use this project so I'm not trying to justify its existence or usefulness. For all I know, it's just someone's pet project that the author only intends to use themselves, but someone else submitted to HN.
(* EDIT: On retrospect this actually comes off as negative, but I meant this as in "my reasons might not even be all that good" due to me not being that interested in the first place, and not as a judgement or anything like that. Leaving it as-is for transparency.)
Is this for when you don't require availability?
You may or may not be joking, but gitops has been the cause of outages for us in quite a few instances. I have been unable to stress what a terrible idea arbitrarily pulling and applying from git is. I am unable to stress it because that's the overwhelming way that many k8s setups are done, so clearly they cannot be wrong.
Your main or release branches should not have untested or unreviewed changes. If they do, they are the cause.
Outages after deploying are then just the effect.
I'm unsure what the alternative is. Even without gitops people do the same hand labour with the commandline, so why is GitOps specifically bad?
When I started out with Kubernetes, GitOps hadn't really gained any momentum. We just used normal CI/CD pipeline tooling. For our own apps, the pipeline simply built the Docker image, pushed it, and then ran kubectl apply. No manual labour, no magic.
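That kind of pre-GitOps pipeline step is short enough to sketch. A dry-run version (image name and manifest path are hypothetical; `RUN=echo` just prints the commands instead of executing them, and would be dropped in a real CI job):

```shell
# Build, push, apply: the whole "deployment tool" in three commands.
RUN=echo
IMAGE="registry.example.com/myapp:${GIT_SHA:-abc1234}"
$RUN docker build -t "$IMAGE" .
$RUN docker push "$IMAGE"
$RUN kubectl apply -f k8s/deployment.yaml
```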
For third party stuff (like helm charts for ELK or whatever) it was the same but with helm cli/kubectl, without building the image.
I don't know why really, but GitOps has never seemed that nice to me. It's perhaps kinda useful for third-party stuff. But for your own applications, where you need to actually build the Docker image and then either manually bump the tag or have some Rube Goldberg machinery committing to your repos, it just seems annoying.
Wanna see the state / source of truth? Use kubectl (or some other tool), we have this whole cluster just for keeping track of the state. Wanna see how/why/by who something was deployed? Look at the CI/CD tooling history.
What's really a True Scotsman? But I wouldn't really say so, no. I think [0] represents what I would consider a GitOps push-based approach; the way we did it differed in two main ways:
- (1) We didn't have any "environment repository". The manifest files were in the same repository as the application code
- (2) Perhaps more importantly: The manifest files did not _exactly_ represent what was deployed. We had a template-variable in the Deployment yaml file, where the Github action substituted the tag that had just been built. To see which version was deployed you either had to look in the cluster, or the Github Action logs.
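The substitution step in (2) might have looked roughly like this (placeholder name and paths are hypothetical):

```shell
set -e
tmp=$(mktemp -d)
# A Deployment manifest with a placeholder tag:
cat > "$tmp/deployment.yaml" <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:__IMAGE_TAG__
EOF
TAG="abc1234"   # in CI this would be the freshly built image tag
sed "s/__IMAGE_TAG__/$TAG/" "$tmp/deployment.yaml" > "$tmp/rendered.yaml"
grep "image:" "$tmp/rendered.yaml"
# then: kubectl apply -f "$tmp/rendered.yaml"
```

Only the rendered file ever reaches the cluster, which is exactly why the repo alone doesn't tell you what's deployed.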
If you use something like ArgoCD (and maybe also Argo Rollouts) you can do the diffing from Git automatically, but either put in a manual validation step where you have a chance to review the diff, or implement some gradual rollout strategy. Also, it's probably wise to use a branch/tagging strategy and not read from HEAD.
Bottom line is: GitOps means the source of truth is Git and automation makes sure to avoid drifts. You still have to have a rollout strategy and schedule that makes sense for your usecase.
> Bottom line is: GitOps means the source of truth is Git and automation makes sure to avoid drifts. You still have to have a rollout strategy and schedule that makes sense for your usecase.
THIS.
Could you elaborate on this? Why do you think it's a bad idea?
I mean making changes in prod without testing will do that, Gitops or not.
I used https://github.com/kubernetes/git-sync to sync the CoreDNS config and zones so I can have GitOps-style DNS (CoreDNS can watch for both config and zone changes and reload them dynamically).
Is this barebones puppet/salt/Ansible?
Correct me if I'm wrong:
- GitOps is a fancy word recently created by Gitlab or Github to sound cooler
- It means storing your code / services in git and deploying on push
It all seems so weird. We've had tools like Puppet since the ice ages, which can, after you push to git, reconfigure and deploy whatever you described in your git, across your whole fleet of machines.

Am I missing something?
You aren't missing anything. It's just marketing so GitHub/Gitlab stay in the main conversation when DevOps comes up.
You're half wrong. The term was invented by Weaveworks, a company which no longer exists, but its main product, FluxCD, and the GitOps toolkit it is based on, live on as a CNCF project. There never was and still is not any requirement that you use GitHub or GitLab as your Git server to use this or any other GitOps product I'm aware of.
I guess you're more like 75% wrong, because the second statement is also half wrong. Depending on the product you're using, it can work via webhook if the Git server supports that, but predominantly GitOps tooling relies upon polling, so you won't necessarily get a deployment immediately upon a git push. You'll get it whenever the next poll happens.
Also, FluxCD was created specifically for Kubernetes, which is why a product announcement like this is worded the way it is. It worked by storing Kubernetes custom resource manifests in a Git repo, typically for Helm charts or Kustomize "kustomization" definitions. Roughly the entire point of this was bringing Kubernetes conventions up to par with what was already common for deployment with configuration management tooling that relied upon server-stored configuration as code. I don't think Weaveworks or anyone else involved was under the impression they were the first to ever have this idea. But I also don't believe (but admittedly don't know) that it was particularly easy in 2017 to use Puppet to manage application deployments in Kubernetes. FluxCD also runs in Kubernetes itself, so you don't need any external infrastructure to do this.
Maybe this makes it less weird? Multi-container orchestration nearly a decade ago was a fairly immature ecosystem, so they adopted ideas from configuration management of server fleets. Not all change in the world is greenfield innovation that comes absolutely out of nowhere.
So how did people manage Kubernetes before GitOps? Automating "kubectl apply" in a different way at each company?
Also, it's useless to this discussion, but polling was also a standard thing in Puppet/Chef/SaltStack. The main motivation was to make deployments more gradual, so in case of untested code only some subset of servers is affected.
I see, I guess I only saw GitOps in GitLab and assumed it was another half-baked feature.
To be specific GitOps covers the Ops/DevOps around git. So MRs / RBAC for git all the way through CI to CD. I've seen some stretch it to cover VDI equivalents as well.
Basically, large orgs ditched all their ops people a while ago to focus on integrated teams, and now that it's marketable to separate it out again, a new set of terms has come round to help orgs pretend they didn't make mistakes.
The term DevSecOps really grinds my gears, gitops I'm more ok with, but still...
Interesting hack! I'll be using it from now on. Do you have any tips for when a machine is behind a NAT? Specifically, I want a service to automatically pick up changes from Git whenever their origin is pushed, without using any fancy tools. I prefer a simple, "Taco Bell programming" approach.
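A minimal polling loop fits the "Taco Bell" request and works behind NAT because the machine only makes outbound connections. Below is a self-contained demo with throwaway local repos; the cron-able part is the last stanza, and the restart command is a stand-in (requires git >= 2.28 for `init -b`):

```shell
set -e
# Self-contained demo: a bare "origin", a dev clone that pushes,
# and a deploy checkout that polls, resets, and "restarts" on change.
tmp=$(mktemp -d)
git init -q --bare -b master "$tmp/origin.git"

git init -q -b master "$tmp/dev"
cd "$tmp/dev"
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "v1"
git remote add origin "$tmp/origin.git"
git push -q origin master

git clone -q "$tmp/origin.git" "$tmp/deploy"

# Someone pushes a new commit:
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "v2"
git push -q origin master

# The poll step (run this from cron or a systemd timer on the server):
cd "$tmp/deploy"
git fetch -q origin
if [ "$(git rev-parse HEAD)" != "$(git rev-parse origin/master)" ]; then
    git reset -q --hard origin/master
    deployed=yes
    echo "redeployed"   # stand-in for: systemctl restart myapp
fi
```

Since the poll originates inside the NAT, nothing needs to be reachable from the outside.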
I typically tunnel SSH through Cloudflare tunnels for this! Requires a bit of client-side config in the ~/.ssh/config file, but once you do that you can very easily SSH through a NAT!
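A sketch of the client-side `~/.ssh/config` piece, assuming `cloudflared` is installed locally and the hostname below (hypothetical) is mapped to a tunnel:

```
Host myserver.example.com
    # Tunnel raw SSH through the Cloudflare edge instead of connecting directly
    ProxyCommand cloudflared access ssh --hostname %h
    User deploy
```

With that in place, `git push` to an ssh remote on the NATed box works like any other remote.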
I recently set up Dokku for that kind of workflow, and I'm super happy with my new setup, which makes it very easy to throw up quick static pages or just push a Rails app and have it basically just work Heroku-style (it's using Heroku's buildpacks).