How my February progress is going, and what's in store for March
Welcome back to my regularly scheduled generic update post. It’s now March 1st, so it’s time to talk about what’s going on with me, and what’s the next area of focus for March.
Last month I set out to create a new static site generator that I call “Builder”, and it’s not quite there yet. There are still a number of features to be implemented before it’s battle-ready. That, and I haven’t spent much time with it. Why? See the above image: I’ve been playing Titan Quest a lot.
{{ image_embed(url="/blog/february-update/laura_croft.png" desc="This is my character, Laura Croft, not to be confused with Lara Croft from the Tomb Raider game series. Nope, different character entirely.") }}
Why am I playing a game that’s 18 years old? Because I realized how much fun this game actually is, and it’s a great companion game for the Steam Deck when I’m in a more relaxed mode. There are achievements, and there’s a compelling amount of challenge to the game itself, more than I typically find in most games in my library.
Titan Quest is largely a Diablo clone that seeks to set itself apart from the masses. In Titan Quest, you don’t pick one class; you pick two different classes, combining their strengths and weaknesses to create your ultimate Greek mythology hero. My character above combines the art of Hunting with the art of Warfare, becoming the hybrid class Slayer. She started out as a longbow-based character firing projectiles from a distance and exploding waves of enemies, but now she uses a spear and shield combination.
Titan Quest has three difficulties across six massive landscapes: Greece, Egypt, China, the Underworld, the Norselands, and Atlantis. There is a lot of content here to dig into. Each area is massive, and can easily take a few hours to really get through; I think it took me over six hours to complete Atlantis alone. The game is incredibly long, and I’m not on the final difficulty just yet. To reach it, you have to beat the previous difficulty’s final chapter in the Norselands, which I’ve yet to do on the middle difficulty.
The thing that really drove me back to Titan Quest was a sour experience with a game that released last week, called Last Epoch. It’s another Diablo derivative, which I picked up a day after release, and it was basically non-functional. It required online play, and you couldn’t play offline then transfer your character back online, because that would introduce a “cheating” risk for the online leaderboards.
The game hardly worked, and I tried for hours to get it running in co-op, so I ended up refunding it. While I don’t think there was anything wrong with the game design itself, it really drove home the point that a game like Last Epoch doesn’t do anything more special than something like Titan Quest, which I already own, and can play offline and online with friends as I see fit.
I think it’d be fun to take Titan Quest to the endgame to see what it’s like there. I don’t expect much; I sort of expect more of the same, but I think there are some challenges left to face, which is why I’m still excited about it. It’s also a break from what I’ve been doing in Oldschool RuneScape on my road to maxing my account.
{{ image_embed(url="/blog/february-update/hitting_a_rock.png" desc="I’ve been hitting this rock for a few weeks now….") }}
Anyway, with gaming out of the way, let’s get back to it.
Builder is still coming along, but slowly. I’ve been caught up with work since a co-worker is on vacation, so it’s been hectic at best. In the few moments I do have to sit down with it, I would rather just finish up Titan Quest. Sometimes you need a nice break from coding all day, and that’s been a cause of the delay.
There are a few hurdles left to solve. I don’t think they’re impossible, so once I have the mental energy for it, I can spend more time working through them.
I’ve been trying to apply as many functional programming principles as I can where possible, so my algorithm for Builder is in the ballpark of the following (in pseudocode):
```
for section in findAllSections "/content"
    for page in findAllPages section
        publicPage = renderPage page
        copyAssets page publicPage
```
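As a rough, runnable sketch of that loop in Python (the `/content` layout, the function names, and the trivial `render_page` are my own assumptions here, not Builder’s actual API):

```python
from pathlib import Path

def find_all_sections(content_root: Path):
    # Assumption: each top-level directory under /content is a section.
    return [p for p in content_root.iterdir() if p.is_dir()]

def find_all_pages(section: Path):
    # Assumption: a page is any directory under a section holding an index.md.
    return sorted(section.rglob("index.md"))

def render_page(index_md: Path) -> str:
    # Stand-in for the real template engine: just read the source.
    return index_md.read_text()

def build(content_root: Path) -> dict:
    """Render every page exactly once, section by section."""
    rendered = {}
    for section in find_all_sections(content_root):
        for page in find_all_pages(section):
            rendered[str(page)] = render_page(page)
            # copyAssets(page, public_page) would go here
    return rendered
```

Because it streams one page at a time, nothing but the current page’s contents needs to be in memory.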
This, however, does not fully take tags into account. If I want to publish a tag-based system, I need to calculate every page that uses a tag, then update references on that tag feed to each and every page mentioning that tag. I don’t want to change this algorithm, because off the top of my head it’s about the most memory-efficient implementation, better than trying to load everything at once.
I think if I kept a dictionary of tags and their references, that might be more suitable.
```
tags = {}
for section in findAllSections "/content"
    for page in findAllPages section
        publicPage = renderPage page
        copyAssets page publicPage
        for tag in page.tags
            tags.update tag, page
```
In this, `tags.update` is a little obscure, but the idea is to create a mapping between tags and a list of pages. I can’t say I’m a huge fan of this, because it means the `tags` dictionary will hold a reference to each `page` object until the run finishes, which sort of goes against my idea of being memory-efficient. If I keep page objects relatively lightweight, then maybe this isn’t as big a deal as I’m making it out to be.
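A minimal Python sketch of that mapping, using a hypothetical lightweight page record that carries only a path and a tag list:

```python
from collections import defaultdict

class Page:
    """Hypothetical lightweight page record: a path plus parsed tags,
    not the rendered HTML."""
    def __init__(self, path, tags):
        self.path = path
        self.tags = tags

def collect_tags(pages):
    # Build the tag -> [pages] mapping described above.
    tag_index = defaultdict(list)
    for page in pages:
        for tag in page.tags:
            tag_index[tag].append(page)
    return tag_index
```

As long as `Page` stays this small, holding every page in the index costs very little.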
What I mean by lightweight is that I don’t want the actual text contents of each `index.md` to be stored inside a page structure. During the `renderPage` function, it should read the contents of `index.md` and send them to the template engine. But tag information is stored inside the `index.md`, so we can’t gather tag data until after the contents are parsed.
In my head, the tag system will need to keep references to pages that use the tag, which is just a regular pointer, but the page object itself needs to parse file contents in order to extract its own list of tags. What I’m more afraid of is the danger scenario of collecting a whole bunch of pages that each have very long HTML blobs associated with them.
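One way to square that circle is to parse the front matter once, keep only the metadata, and drop the body. A sketch, assuming a toy front-matter format where the first line is `tags: a, b` (not a real Builder format, just an illustration):

```python
from pathlib import Path

class Page:
    """Holds only the path and tags; the body text is dropped after parsing."""
    def __init__(self, path, tags):
        self.path = path
        self.tags = tags

def load_page(index_md: Path) -> Page:
    # Read once to extract metadata...
    text = index_md.read_text()
    first, _, _body = text.partition("\n")
    tags = []
    if first.startswith("tags:"):
        tags = [t.strip() for t in first[len("tags:"):].split(",") if t.strip()]
    # ...then discard _body, keeping the record small. renderPage would
    # re-read (or stream) the file later when it actually renders.
    return Page(index_md, tags)
```

The trade-off is reading each file twice (once for metadata, once for rendering) in exchange for never keeping a page’s HTML blob alive in a long-lived collection.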
I don’t think this algorithm is suitable, and I’ve been mulling over the design for days. There are going to be templates outside of each individual section that may need to access certain pages, and each page should only be calculated once. Having to re-calculate a page would be a total failure. I think the algorithm should instead look something like this:
```
sections = findAllSections "/content"
pages = findAllPages sections
tags = findAllTags pages
map renderPage pages
; everything else
```
If all the pages are in a global list, then for template purposes, it shouldn’t be impossible to access a certain section’s pages by filtering the global list. This puts more overhead on memory, but every page is rendered exactly once and stays accessible within the main engine as a resource that’s easy to fetch. Tags and section data are a `filter` call away from having everything we need.
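In Python terms, with the same hypothetical page records, those lookups really are just one filter each:

```python
class Page:
    """Hypothetical page record with a section name and tag list."""
    def __init__(self, path, section, tags):
        self.path = path
        self.section = section
        self.tags = tags

def pages_in_section(pages, section):
    # Filter the global list down to one section's pages.
    return [p for p in pages if p.section == section]

def pages_with_tag(pages, tag):
    # Filter the global list down to one tag's pages.
    return [p for p in pages if tag in p.tags]
```

Both are linear scans over the global list, which is perfectly fine at a personal blog’s scale.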
My concerns all seem tied to memory, and that’s something I take seriously, but I think this might be a premature optimization I’m attempting instead of solving my actual problem. Memory is “free” and largely available; I shouldn’t worry so much about it. Once I have a few thousand posts, then maybe I need to worry. Zola (written in Rust) seems to get away with a render-page-once approach, so how come I can’t?
The goal for March that I’m setting out for myself is to become a fully-knowledgeable Nextcloud user. I have had some discussions, and there are at least three people in need of better collaboration tooling, so Nextcloud is going to be my next target.
This really … shouldn’t be a time-consuming project, by any means. In fact, I can write up a Nextcloud `docker-compose` file in probably a few minutes. Except it took me a few hours, in reality.
```yaml
version: '3'

volumes:
  nextcloud:
  db:

services:
  cloudflared:
    container_name: cloudflared-tunnel
    image: cloudflare/cloudflared:latest
    restart: unless-stopped
    command: tunnel --no-autoupdate run
    environment:
      - TUNNEL_TOKEN=imatokenlol

  db:
    image: mariadb:10.6
    command: --transaction-isolation=READ-COMMITTED --log-bin=binlog --binlog-format=ROW
    restart: always
    volumes:
      - db:/var/lib/mysql
    environment:
      - buncha_credentials_here=...

  app:
    container_name: nextcloud
    image: nextcloud:latest
    restart: unless-stopped
    ports:
      - 8080:80
    links:
      - db
    volumes:
      - nextcloud:/var/www/html
    environment:
      - more_credentials_here=...
    depends_on:
      - db
      - cloudflared
```
This is the base-level Apache version of Nextcloud. The `nextcloud:fpm` image is supposedly “faster”, but for the hardware and scale I’m currently running at, I don’t suspect the web server to be the real bottleneck (currently).
Nextcloud has some hang-ups that I’d love to solve, and I feel homebrewing servers is going to become a passion project of mine over the next year. Right now I have an Alienware Alpha as my home server, and frankly, it’s getting old and probably won’t have the CPU capacity to handle things like total file encryption for all users. That’s a privacy pain-point, since I can (technically) view all of my users’ files at any time. I won’t be doing that, but I would like to remove that capability entirely at some point.
The nice thing with this setup is that it runs in parallel with `cloudflared`, the service that tunnels applications to the internet, à la Tailscale. The last time I used this, I had to download the application itself, but with it being parameterized through Docker, it’s one less thing I have to maintain outside of a Docker compose file. That’s one advantage.
There were a number of bugs I ran into setting this up, which made debugging quite annoying. Since Docker puts containers into their own network groups and can dynamically change IPs on the virtual network, I had to bind containers to names to make it easier to configure `cloudflared` on the Cloudflare dashboard. Specifically, naming the Nextcloud container allows me to point to `nextcloud:80`.
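For what it’s worth, the dashboard setting boils down to the same thing as a file-based `cloudflared` ingress rule. If I were running a locally-managed tunnel instead, the equivalent part of its `config.yml` would look roughly like this (the hostname here is a placeholder, not my real domain):

```yaml
# Sketch of a locally-managed cloudflared tunnel's ingress rules
ingress:
  - hostname: cloud.example.com
    service: http://nextcloud:80   # resolved via the compose container name
  - service: http_status:404      # required catch-all rule
```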
Next I had to modify the Nextcloud trusted domains, which wasn’t fun, because that’s been bugged and unaddressed for a strangely long time. I put my real domain first, and that seems to have fixed it, I guess? I don’t want to have to modify files inside a Docker container, so I hope this works for a long time.
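If I understand the official image correctly, one way to avoid touching files inside the container at all is the `NEXTCLOUD_TRUSTED_DOMAINS` environment variable it supports, which takes a space-separated list and is applied when the instance is first installed. In the compose file it would look something like this (placeholder domain):

```yaml
  app:
    environment:
      # Space-separated list; the official image applies this on first install.
      - NEXTCLOUD_TRUSTED_DOMAINS=cloud.example.com
```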
I’ll have some naysayers asking why I didn’t do this with Nix or Guix instead. Frankly, I found this easier than trying to encapsulate it all in a Nix flake, and Guix cannot do Nextcloud, to my knowledge. I don’t really have a problem with using Docker, as I don’t think it’s horribly bad, at least not yet. A few lines of configuration for a fully-working Nextcloud stack is not a terrible trade-off.
After about a week of testing, I’d love to put this into production, and by that I mean handing it off to family/friends. There are some things I need to learn and prepare for first, like hardening the server against abuse (`fail2ban`, etc). Those are questions I’ll be on the lookout for over the next month as I try to make this the core of my software organization.
Since Nextcloud isn’t that technically involved, I’d love to get some time in on the remaining Builder changes and get a demo site working here. I have a lot of testing and tweaks to make, and I hope I can nail it down by April.
Until next time, thanks for reading!