Decentralized Personal Computing
I have a few different computers. Those machines have different capabilities, form factors, portability, hardware, and software, and yet I still want to do roughly the same work on all of them. Most of the work I do is with text (reading, writing, editing, programming), but sometimes I want to work on one specific machine because I need a dedicated graphics card, or because I need a machine that fits in my pocket. The solution is to have my work sync automatically between my machines.
This is not a unique problem to have: some common solutions include Microsoft Windows’ Active Directory or doing all your work through cloud-based offerings like Google’s Drive or Microsoft’s SharePoint. There are, however, a few issues with these solutions. They are almost by definition insecure, since they entail sending my data to a third party’s computer[1]. Services like Proton Drive or Koofr at least do the bare minimum of encrypting your files so that only you can access them, but they still keep you reliant on someone else’s infrastructure, and you have to be able to connect to their services in order to access your data.
A better workflow would be to work entirely offline and then sync those changes semi-periodically whenever I am online again. I do not want to have to think about whether a machine is connected to the internet, or what work I have (or have not) done on it earlier. I simply want to pick up (or sit down at) a computer and keep going with whatever I was doing before, all without relying on a centralized host. This of course requires every machine to have its own copy of the data, and for those machines to sync changes to that data between themselves.
This syncing functionality can be offered through many different services or systems. I personally use Syncthing, but you could just as well pair rsync with cron jobs to sync your files securely over ssh. Syncthing is cross-platform (so I can use it on Android, GNU/Linux, and theoretically Microsoft Windows) and doesn’t rely on port forwarding (so it doesn’t matter what network I am connected to).
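If you go the rsync route, the core of it is a one-way mirror run on a schedule. A minimal sketch, assuming ssh access to a peer (the host name and paths here are placeholders):

```sh
# Push local changes to the peer "nile" over ssh.
# -a preserves permissions and timestamps, -z compresses in transit.
# Note that rsync is one-way: schedule a matching pull on the peer,
# and be very careful with --delete in any bidirectional setup.
rsync -az ~/work/ nile:work/
```

Paired with cron, a crontab entry like `*/15 * * * * rsync -az ~/work/ nile:work/` would mirror the directory every fifteen minutes. Syncthing handles all of this (including bidirectional conflicts) for you, which is largely why I prefer it.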
If you want to use rsync and sync to your phone, I would pair something like Tailscale with Termux to run your cron jobs. That way you get a single, stable IP address to hardcode into your rsync scripts, and you can move across networks and use your phone’s cellular connection. I actually use Tailscale myself (even though it is not free software) to speed up discovery between devices.
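The Termux side of that might look roughly like this (a sketch; package and service names are as Termux ships them at the time of writing, and the Tailscale address is a placeholder):

```sh
# Inside Termux: install rsync, ssh, and a cron daemon.
pkg install rsync openssh cronie termux-services
sv-enable crond   # start crond through termux-services

# Then add a crontab entry (crontab -e) that pulls from a
# desktop's Tailscale address; 100.64.0.1 is a placeholder.
# 0 * * * * rsync -az 100.64.0.1:work/ ~/work/
```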
Tailscale also allows me to ssh quickly between machines to transfer files that aren’t in my general work-related folders. This is usually stuff that allows me to perform my work (like dotfiles or other configuration). It pairs wonderfully with GNU Emacs’ TRAMP, since machines ssh-ing over Tailscale don’t need any further authentication (they have already authenticated by being on the tailnet), and it also means I don’t have to expose port 22 to the open internet.
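In practice a bare hostname is all that is needed, and the same names work from TRAMP (a sketch; “danube” is one of the machines listed below):

```sh
# With Tailscale SSH enabled, the tailnet handles authentication,
# so a bare hostname is enough: no keys, no passwords.
ssh danube

# One-off transfer of a configuration file:
scp danube:.bashrc ./bashrc-from-danube

# From Emacs, TRAMP reaches the same machine as a file path:
#   C-x C-f /ssh:danube:~/.emacs.d/init.el
```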
I have five different machines that each keep a complete copy of all of my critical files:
- Parana
- This is my server. It does not hold any centralized role in my network, but is configured to serve content to the open web and maintain high uptime. Runs Debian stable.
- Nile
- My desktop computer. Like Parana, it is located at my home. This is a higher-powered device meant for user-facing interaction. Runs Arch Linux.
- Yukon
- A larger, higher-performance laptop that acts as a portable workstation when I am away from home for longer periods of time. Runs Debian testing.
- Danube
- A small, durable, and low-power[2] laptop that I can easily toss into my bag. This is probably one of my most used devices because it has a nice keyboard and long battery life. Runs GNU Guix.
- Yangtze
- My phone. In this context it mostly acts as a music player and a way for me to access my grocery list. Runs LineageOS.
One of the biggest advantages of this approach is that not only is the general state of the files kept in sync, but five different machines located in different places all hold copies of my data. This means I can maintain the integrity of my data even if I lose my phone (1/5), my laptop is stolen (1/5), or my home burns down while I am away (usually 3/5). I can also wipe my computers (something I do semi-regularly) without worrying about losing anything of importance on them. My desktop and server both have high uptime, so there is always some node on the network that is mirroring changes.
It is, however, inevitable that conflicts occur when syncing files between machines that are not always online. Thankfully, there is a tried-and-tested program that fits this offline workflow across a distributed set of machines: git.
Git is very useful not just as a collaborative development tool, but also for single users. Syncthing has an integrated way of dealing with conflicts by retaining copies of conflicting files and renaming them. This is good, but it is still an annoying hassle to deal with. Using git is a lot nicer (partly because I can use Magit to resolve problems), and git also allows me to see the history of the edits that I have made. Most of my git repositories are not kept on any external server like GitHub’s or Codeberg’s, but instead exist only in this pseudo-LAN/VPN that my devices all interact through.
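Hosting such a forge-less repository might look roughly like this (a sketch; the repository path is an example, and Tailscale’s MagicDNS or an ~/.ssh/config entry is assumed to resolve the host name):

```sh
# On an always-on peer (here "parana"), create a bare repository:
ssh parana 'mkdir -p ~/repos && git init --bare ~/repos/notes.git'

# On every other machine, add that peer as a remote over the tailnet:
git remote add parana parana:repos/notes.git
git push -u parana main   # or master, whichever your default branch is

# Day to day: commit offline, then push and pull whenever the
# peer is reachable. No external forge involved.
```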
All my machines (yes, even my phone) also possess their own copies of my configuration files for GNU Emacs, and so the way that I interact with my computers is always tailor-made for my preferences and workflows. Because of Emacs I work almost exclusively with files in plaintext (usually formatted using orgdown). Working with plaintext of course makes using git very straightforward, but I still use git for non-plaintext tasks.
Git is harder to use in non-plaintext contexts, but in my experience most binary files do not change often enough to cause many conflicts. That might differ for you, however, and you might require some other custom solution.
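One small mitigation, sketched here with example patterns, is a .gitattributes file that marks opaque formats as binary, so git never attempts a textual merge on them and a conflict becomes a simple ours-versus-theirs choice:

```sh
# .gitattributes: the built-in "binary" attribute disables text
# diffing and merging for the matched patterns (examples only).
cat > .gitattributes <<'EOF'
*.png binary
*.pdf binary
*.ods binary
EOF
git add .gitattributes
```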
The end result of this system is that each machine becomes an old-school terminal that interacts with a broader computing system, albeit one that requires neither a centralized server nor a network connection to operate! While all my machines possess complete records of the data and are fully empowered to make arbitrary changes (I am the only user, after all), no single computer encompasses the whole of the computing environment. I can always discard one element of the network for another[3].
This approach not only maintains your own sovereignty over your data: it costs nothing, and it allows you to make use of your preëxisting storage as a continuous backup solution. It doesn’t stop you from accidentally wiping a given file from all of your machines, of course, so adding some sort of air-gapped backup is also necessary.
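That air gap can be as simple as occasionally mirroring to an external drive that otherwise stays unplugged (a sketch; the mount point and paths are placeholders):

```sh
# Mirror critical files to a dated directory on an external
# drive, then flush and detach it. /mnt/backup is a placeholder.
rsync -a ~/work/ /mnt/backup/work-$(date +%F)/
sync && umount /mnt/backup
```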
The modern-day computer is increasingly less personal. Smartphones today act more like network terminals than real computing platforms (even though they are many times more powerful than old supercomputers). Building a decentralized system in this fashion is one way of maintaining a computer that is still intimately personal while also offering the advantages of network storage and automatic backups. It allows you to make your own choice about what software to use and how to use that software, and is therefore free in the true essence of free software, and of the Enlightenment idea of freedom it is based on. ❦
Footnotes:
[1] Also known as “the cloud”. Sending your data to another party requires trusting them never to get cracked and never to do anything malicious with your data themselves.
[2] When I say low-power I mean it: it only has 4 GB of RAM and 128 GB of storage. I could probably get a machine of similar performance second-hand for free.
[3] Moving between machines that are all connected to the network is almost seamless and instantaneous, but even setting up a new computer (after purchasing it or wiping the disk) takes only perhaps an hour. After that, using the machine is once again as simple as using any other perfectly tailor-made computing device that I own.
