Hello! I'm back again (after a long period away...) and I want to talk about March. This last month was a quiet one because of the sheer amount of testing required to have a baseline for a system I want. I'm talking about Nextcloud, a system I've had interest in for the last few years, but couldn't make it stick. Would this be the month where I finally get it going? Let's talk about that first. I have other things to mention afterwards.
Nextcloud, in short, is a self-hosted, cloud-like solution that makes integrating your digital life a little easier. It's akin to Google, has a ton of configuration options, and its goal is to help people secure their digital lives. I'm all about that. Sounds great, where do I start?
Nextcloud can be installed on virtually any device, but you should consider running it on something that won't be doing much else. Consider a stand-alone system on your network with proper backup configuration and disk-level encryption via LUKS for real security. Get a decent CPU, decent storage, a decent amount of RAM, and you're set. Anything from the last five years of hardware should be fine.
Nextcloud is a PHP codebase and requires something like the Apache web server, or FastCGI, to serve PHP. If this sounds complicated, we can go a layer higher and skip installing PHP manually: we can use Docker instead and create a containerized setup that is easily reproducible. Docker isn't very sexy in 2024, but it's useful and lets me avoid NixOS, a system I don't really want to use. I use Guix OS, but there's no packaging for PHP code in Guix (yet).
So long as you can do something like:
$ apt-get install docker.io docker-compose
on a Linux computer, you can get Nextcloud running. Substitute apt-get with whatever your system's package manager is.
Nextcloud, within Docker, ships by default with the Apache web server, which isn't terribly good and is at best a legacy approach. The other variant uses FastCGI and is called nextcloud:fpm. The choice is yours, but I think FPM has some advantages over Apache: Apache greatly simplifies your setup, but it severely limits Nextcloud's processing potential.
Nextcloud needs a few things: a database, a memory store, a cron job executed periodically, a reverse proxy to route traffic accordingly, and, if you're feeling adventurous, a Cloudflare Tunnel to proxy traffic from the internet to your local setup. You can keep your Nextcloud entirely private on your network, or you can expose it to the net and have it globally available anywhere you go.
The components I'll be using here are comparable to what Nextcloud has as an example over here. We're going to be using MariaDB, Redis and Nginx, as that's what's available and I don't want to spend time learning how to configure Caddy or figuring out why PostgreSQL doesn't work properly.
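As a rough sketch, the supporting database and memory-store services can look something like this in a compose file (the service names, volume name, and passwords are placeholders, not values from my actual setup):

```yaml
services:
  db:
    image: mariadb
    # flags recommended in Nextcloud's own compose examples
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    environment:
      - MYSQL_ROOT_PASSWORD=changeme   # placeholder, pick your own
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_PASSWORD=changeme        # placeholder, pick your own
    volumes:
      - db:/var/lib/mysql

  redis:
    image: redis:alpine

volumes:
  db:
```

Nextcloud then gets pointed at these via its MYSQL_* and REDIS_HOST environment variables.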
In order for Nextcloud to function correctly behind a proxy front-end like Cloudflare, it needs a known, fixed IP address that the proxied traffic will be coming from. For that to happen, we need to create a network within Docker so we can manually assign IPs to all the containers.
docker network create mynet --subnet=172.16.0.0/24
Docker usually assigns IPs automatically, so they're effectively scrambled and managed by Docker instead of the user. However, for Nextcloud's TRUSTED_PROXIES variable to work, you feed it a proxy IP address that it expects forever. If the IPs are reshuffled on the next run, Nextcloud won't accept traffic from the proxy anymore. This is a one-time step: once you're set up, the environment variable does not overwrite the value written during the initial Docker run. You would have to wipe the install each and every time.
To side-step this, we can pin a fixed IP address to each service so they can communicate with each other reliably. This is an all-or-nothing step, unfortunately, so get ready to stub out a lot of network addresses, otherwise the containers will be separated on whatever Docker networks they reside on.
services:
  cloudflare:
    image: cloudflared-tunnel
    # other tunnel details
    networks:
      mynet:
        ipv4_address: 172.16.0.20

  nextcloud:
    image: nextcloud:fpm-alpine
    networks:
      mynet:
        ipv4_address: 172.16.0.10
    environment:
      - TRUSTED_PROXIES=172.16.0.20

networks:
  mynet:
    external: true   # created earlier with docker network create
On first install and setup, Nextcloud will recognize this IP address as a trusted proxy, and it should never change on each subsequent run of the stack. Assign an IP to each service, and you should be good to go. Yes, you have to assign an IP to the cron script.
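For reference, the cron job can run as its own container under services: in the same compose file. This is a sketch assuming the official Nextcloud image, which ships a /cron.sh entrypoint for exactly this; the volume name and IP here are placeholders:

```yaml
  cron:
    image: nextcloud:fpm-alpine
    entrypoint: /cron.sh            # cron entrypoint shipped in the official image
    volumes:
      - nextcloud:/var/www/html     # must share the app volume with the main container
    networks:
      mynet:
        ipv4_address: 172.16.0.11   # yes, this one gets a fixed IP too
```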
After a lot of tweaking and configuration, I had a setup that let me connect to the Nextcloud container over a Cloudflare Tunnel. In the Cloudflare Tunnel dashboard, you have to point the tunnel at the IP address you gave Nextcloud.
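If you manage the tunnel from a config file rather than the dashboard, the same routing can be sketched in cloudflared's config.yml. The tunnel name, hostname, and IP below are placeholder values, not my actual setup:

```yaml
# Sketch only: substitute your own tunnel name, hostname, and container IP.
tunnel: example-tunnel
credentials-file: /etc/cloudflared/example-tunnel.json
ingress:
  - hostname: cloud.example.com
    service: http://172.16.0.10   # the fixed IP assigned in the Docker network
  - service: http_status:404      # catch-all rule that cloudflared requires
```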
As part of talking about Nextcloud, I want to go over its different aspects and what made me decide not to use it. Yes, that's right.
Within a group context, file sharing on Nextcloud is pretty easy, and I would argue it's Nextcloud's strongest point. It functions as expected and doesn't fall short anywhere. The main Nextcloud app on smartphones works, but that's about all it can do. The main issue is that the app does not provide two-way synchronization between the server and the mobile device, meaning if you use multiple devices, they will not all stay synchronized and sharing won't function as you expect.
So, not terribly excited about it.
My group tried using this for a good week, but there's an issue: notifications won't work if you're using the F-Droid version. The native Play Store version uses Google Cloud Messaging tied to the developer's private keys; the F-Droid build does not use GCM, and as such has no facility to push notifications from your server.
You would have to use external tools to poll for notifications, but after a ton of testing, it simply wouldn't work for us. This gap in functionality makes it a non-starter if we wanted it to replace our usual smartphone chat app. So, sort of disappointing.
Functions as expected; these are basically the same thing, just in different form factors.
This is pretty useful, and I liked it, as my bookmarks are completely scattered all over the place. This would've been a nice thing I could see myself using. The mobile app, however, is less so: posting new bookmarks from the mobile app currently does not work. I expect it's because the app is so sparsely maintained that the fix was never prioritized, or no one else has looked into it.
The whole thing that started me down this spiral was originally wanting to share calendars with my partner somehow. Across two different mobile platforms, I found no good solution, and I still don't quite understand CalDAV. Out of the box, CalDAV never seemed to work properly with my own phone, so how am I supposed to make it work for my partner on a completely different platform? I don't want a solution that adds additional steps, so this seems less than ideal for a zero-cost integration.
The overall performance of my setup, in its current form, ranges from great to less than ideal. Without cron, background jobs wouldn't process while users weren't using Nextcloud, and would back up until you used it the next day. With cron, background jobs worked, but there was still a noticeable layer of sluggishness at play. Small images sometimes wouldn't load, uploads to chat would fail, and page load times varied wildly.
I have no reason to think it's a hardware limitation; this is probably a code limitation, or worse, the network. Cloudflare Tunnel caps requests on free plans at 100 megabytes, but I could chalk the performance up to the number of hops each request has to pass through. A request goes from Cloudflare through an Nginx reverse proxy before it reaches the Nextcloud service, and Nextcloud can itself make several kinds of requests depending on the function being used. Or it can run background jobs, if you aren't using cron.
Regardless, it got too slow over time, and became unusable for us as a team. It wasn't organic, to say the least. Nothing in my workflow felt like "yes, I should put this on Nextcloud" or "Nextcloud would be really good for this task." It never got to that point for us, so I had to make the decision to drop Nextcloud.
Is it fun to waste a lengthy period of time testing a software kit you won't actually use? No, not really. I tried Nextcloud out in the past, kind of liked it, but it didn't stick then. I tried it again now, didn't stick, maybe I'll revisit in another year or so.
There was a lot to learn about Docker in the process, and I feel a bit better about my understanding of it now, which is an added plus. But saying goodbye to Nextcloud for another undetermined period of time doesn't feel as good. I quite like Nextcloud through the web client, but the mobile app support still isn't quite there.
Now, onto other things, for I have talked way too much about Nextcloud now.
After a few months trial of Hyprland, I think it's time for me to hang up my gloves and throw in the towel. Where am I going? KDE Plasma.
I have been using Hyprland since January, thinking it was a much better Sway, which in reality is an alternative to i3wm. The Wayland tiling window manager rabbit hole has, I think, finally reached the end of its journey for me. I'll go into my reasons why.
Hyprland, when it's good, is good. When it's bad, however, it's downright annoying. Like many wlroots-based compositors, things work until they don't. Things feel fine until you start using it and realize you have weird edge cases that aren't supported.
Hyprland promised a pretty decent i3-like experience, and it delivered for a time. Things worked great, but I noticed its average system resource usage was pretty damn high. Memory usage was always notably higher than most other things I tried, and when apps start dipping into swap, that's when I take note. Hyprland went into swap quite a bit for me.
Hyprland's ultra customization features are nice, but not utterly game changing. I stopped caring about desktop customization a long time ago as my monitors will never not have windows and programs sprawled about all over the place. I just need a thing to display multiple applications in the fastest and best means possible, not user customization.
Without further ado, here's my list of grievances with Hyprland.
- I couldn't get gparted to open without some finicky hacks, but Steam works fine, so where's the line drawn, exactly?
- diff-ing my config against the newest default config file was not something I wanted to do, and I'm sure many people also don't appreciate having to go back and edit their cool configurations now either. End users would only know about this if they're plugged into the GitHub information feed, which I am not.

One of these things on its own might not annoy me, but in totality, they annoy me enough that I'm constantly on the lookout for greener pastures. Maybe I'm the person who's hard to please? Probably. I really enjoyed i3, but I have to understand that Wayland introduces a lot of new incompatibilities for applications, so there's going to be a difficult period of figuring out which Wayland software really just "works" best.
Plasma is something I'm not all too familiar with. My only recent real-world experience with it is actually from the Steam Deck, and part of why I'm going with Plasma is exactly that: to build better familiarity with the Steam Deck itself, so I'm not utterly clueless when I go into desktop mode.
My friends and I have been using the Steam Deck more with the help of docks, and we agree that the Linux experience on the Steam Deck, once you get it set up, is actually nice. I already knew that from prior Linux background, but many people are surprised to see how easy things are with a Steam Deck in desktop mode. Just given Firefox, people will be blown away by what it can do.
Beyond that, here's why else, from my brief time with Plasma.
As Plasma 6 just came out, I heard a lot of good things about it, and each time I use it, I don't feel bothered in the slightest that it isn't a tiling window manager. There are certain conveniences a tiling WM can offer, but as I enter the realm of multiple monitors, I find less reason to need a tiling WM, especially as my screen real estate is as large as it could be.
I think there are some glitches to be expected, but I will try to make Plasma stick for as long as I can. I would have leaned back more towards Gnome, but I still can't split applications on vertical monitors out of the box. Maybe one day.
After all that time spent on a Nextcloud setup I won't be using (yet), it's time to redirect my efforts into something that's been untouched for a long time - audio programming. It's time I got back into the messy bits of PureData.
I like working with audio, but I don't spend enough time with it on the side. I plan on getting re-acquainted with PureData, polishing up my code, and turning it into a sort of "standard library", as pd-extended is no longer supported and I have to de-couple my old programs from it.
I have a bunch of MIDI devices I'd love to get back to experimenting with to create something really high-quality if possible. That's my goal, and I hope each week over the next four weeks I can talk about what makes PureData so much fun.
Right before the weekend, a vulnerability was exposed in the compression library xz, the basis for lzma compression. Read more here.
The underlying problem is how this happened, and it happened through what some people might call social engineering. A suspicious committer found his way into the xz project and began a series of "trust" commits over the course of years. Trust commits are analogous to the "trust trades" performed in the game RuneScape by malicious players taking advantage of others. Scamming is so common in RuneScape that it has extensive documentation.
A trust trade starts with the malicious actor trying to get the unsuspecting person to believe that the malicious actor has no ill intent. Trust trades take all shapes and forms, in order to build rapport, but the idea is to hand out small amounts of wealth in order to eventually trick the victim into trading a large amount of wealth later, after being programmed to suspect no ill intent at all.
This xz committer trust-traded his way along for two years, committing seemingly innocent code into the codebase, before eventually landing his malicious patch in early March. It went unnoticed for almost the entire month, and was only picked up thanks to one guy doing micro-benchmarking on his system. By landing innocent-looking, working commits in the project, the bad actor tricked his way to the top and got exactly what he wanted: malicious code at the top of the distribution stream.
Canonical and Red Hat were the major targets, with unlikelier targets like the smaller Arch Linux caught up as well. Maintainers of the large Linux distributions began work to undo the patch in their repositories and sent out alerts. This was a bad scenario. Like, really bad.
I'm sure psychologists researching open source technology would love to use this in future textbooks about trust relationships between decentralized projects. The reality is, there is no standardized trust process across these seemingly important projects. In a swamp of co-existing projects, for one project to successfully incorporate another, it depends on that project's "releases". Releases are published by a project's maintainers, who approve commits from contributors into the core code base.
This was a successful attack: an unknown actor of unknown origin gained trust within a small but important project in order to undermine the security of most modern computer systems world-wide. It's something most maintainers would fail to catch early, and unfortunately I'm sure many more will unsuspectingly grant permissions on important projects to people hiding their true intent.
Fortunately, someone caught this early, and other project managers and maintainers were able to rectify it fast, take note, and alert their users. This weekend should have been a relaxing holiday break for many, but it turned into a stressful hell for those who care about computer integrity. Even I performed package rollbacks against xz.
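For anyone wanting to run the same check, the compromised releases were xz 5.6.0 and 5.6.1, tracked as CVE-2024-3094. A quick way to see what you have installed (the rollback line is a Debian-style sketch; the package name and version pin are assumptions, so check your own distribution's advisory):

```shell
# Print the installed xz/liblzma version; 5.6.0 and 5.6.1 were the
# backdoored releases (CVE-2024-3094).
xz --version

# Rolling back might look something like this on Debian-style systems
# (assumed package name and pin -- verify against your distro's advisory):
# sudo apt-get install --allow-downgrades xz-utils=5.4.*
```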
It sucks, and unfortunately we just have to own up to the fact that our open source world is slightly fractured. Open source is highly underpaid, fueled by the labor of people looking either to do good or to pad out their resumes with experience; the latter outnumber the former. Our important cryptography libraries are not as secure as we might like to believe.
Either way, if we want to move forward, I think we need to build better relationships in critical projects. It's not every day that the xz library gets targeted by bad actors, but the clear lack of oversight in who gets maintainership of random projects can have echoing effects on our overall security across the globe.
Definitely a weird and unfortunate event, but hopefully we all took a nice lesson from the event and can improve moving forward.
That's all I have for now. Thanks for reading and see you soon!