Compare commits

39 Commits
drafts ... main

Author SHA1 Message Date
1b83a58138 change how pipenv is installed to hopefully fix the fucking gitea workflow
All checks were successful
/ Connect to deployment host, update, and redeploy docs website. (push) Successful in 30s
/ Build static site, docker image, upload artifact... (push) Successful in 3m6s
2025-11-16 07:42:23 -08:00
7d6afe6c13 bump python to 3.12
Some checks failed
/ Build static site, docker image, upload artifact... (push) Failing after 2m35s
/ Connect to deployment host, update, and redeploy docs website. (push) Has been skipped
2025-11-16 07:35:30 -08:00
23618e3f24 this time?
Some checks failed
/ Build static site, docker image, upload artifact... (push) Failing after 2m18s
/ Connect to deployment host, update, and redeploy docs website. (push) Has been skipped
2025-11-13 14:29:57 -08:00
e1614d148a fix issue with ci for realsies this time
Some checks failed
/ Connect to deployment host, update, and redeploy docs website. (push) Has been skipped
/ Build static site, docker image, upload artifact... (push) Failing after 2m26s
2025-11-13 14:22:26 -08:00
d19a07073c fix issue with ci 2025-11-13 14:21:40 -08:00
f8594af9ba Add entry on the fhs spec
Some checks failed
/ Connect to deployment host, update, and redeploy docs website. (push) Has been skipped
/ Build static site, docker image, upload artifact... (push) Failing after 2m52s
2025-11-13 13:59:55 -08:00
22346caeda Push blog entry for today.
Some checks failed
/ Connect to deployment host, update, and redeploy docs website. (push) Has been skipped
/ Build static site, docker image, upload artifact... (push) Failing after 1m16s
2025-09-10 19:42:12 -07:00
772bf54b85 Update workflow to allow pipfile updates to trigger action.
All checks were successful
/ Build static site, docker image, upload artifact... (push) Successful in 9m9s
/ Connect to deployment host, update, and redeploy docs website. (push) Successful in 20s
2025-09-10 17:11:06 -07:00
31d1bc080a bump blag version 2025-09-10 17:08:56 -07:00
eccdee05bf Fix jamie's name
All checks were successful
/ Build static site, docker image, upload artifact... (push) Successful in 4m59s
/ Connect to deployment host, update, and redeploy docs website. (push) Successful in 17s
2025-05-01 23:21:33 -07:00
37f24a6239 bump blag in pipfile 2024-10-26 03:48:34 -07:00
a030e42d19 update workflow triggers, add a script to create a blog entry.
All checks were successful
/ Build static site, docker image, upload artifact... (push) Successful in 6m36s
/ Connect to deployment host, update, and redeploy docs website. (push) Successful in 21s
2024-10-26 03:11:18 -07:00
e398c00fd3 Update footer copyright notice
All checks were successful
/ Build static site, docker image, upload artifact... (push) Successful in 3m58s
/ Connect to deployment host, update, and redeploy docs website. (push) Successful in 35s
2024-09-27 14:27:22 -07:00
78b76df473 Remove tables of contents since they don't work, fix base template.
Some checks failed
/ Build static site, docker image, upload artifact... (push) Successful in 3m28s
/ Connect to deployment host, update, and redeploy docs website. (push) Has been cancelled
2024-09-27 14:23:34 -07:00
08380ee33d Add new entry to blog
All checks were successful
/ Build static site, docker image, upload artifact... (push) Successful in 6m49s
/ Connect to deployment host, update, and redeploy docs website. (push) Successful in 45s
- Revisit entry from may 12th
2024-09-27 14:02:43 -07:00
66a91927fc Add toc to old blog 2024-09-27 14:02:12 -07:00
b357ab14b4 Add table of contents to old blogs 2024-09-27 13:56:48 -07:00
a93700b334 Add copyright notice. 2024-09-27 13:56:10 -07:00
67310f0f7a fix bad dag
All checks were successful
/ Build static site, docker image, upload artifact... (push) Successful in 3m44s
/ Connect to deployment host, update, and redeploy docs website. (push) Successful in 22s
2024-09-26 06:28:10 -07:00
ccb0ddff24 update workflow
Some checks failed
/ Connect to deployment host, update, and redeploy docs website. (push) Has been skipped
/ Build static site, docker image, upload artifact... (push) Failing after 2m57s
2024-09-26 06:19:31 -07:00
d3dadb5170 Configuration maangement blog entry. 2024-09-26 06:18:49 -07:00
fc6b7f0217 Change to arm architecture
All checks were successful
/ Build static site, docker image, upload artifact... (push) Successful in 4m5s
/ Connect to deployment host, update, and redeploy docs website. (push) Successful in 1m57s
2024-09-21 19:22:43 -07:00
87244e2545 blog update 20240906
Some checks failed
/ Connect to deployment host, update, and redeploy docs website. (push) Blocked by required conditions
/ Build static site, docker image, upload artifact... (push) Has been cancelled
2024-09-06 03:37:13 -07:00
aa54a8f2cd Force ci
All checks were successful
/ Build static site, docker image, upload artifact... (push) Successful in 1m0s
/ Connect to deployment host, update, and redeploy docs website. (push) Successful in 22s
2024-09-06 02:12:11 -07:00
7f08d3b380 Update to rootless nginx. 2024-09-06 02:11:20 -07:00
17b7098bcc Finalize changes to gitea workflow. 2024-09-06 00:20:21 -07:00
70b474da99 FIX WORKFLOW FINALLY?
All checks were successful
/ Build static site, docker image, upload artifact... (push) Successful in 1m2s
/ Connect to deployment host, update, and redeploy docs website. (push) Successful in 20s
2024-09-06 00:02:48 -07:00
8d3eb79baf Fix workflow variable issue
Some checks failed
/ Build static site, docker image, upload artifact... (push) Successful in 1m50s
/ Connect to deployment host, update, and redeploy docs website. (push) Failing after 22s
2024-09-05 23:54:39 -07:00
41339fd8d2 FIX PIPFLE FUCK ME
Some checks failed
/ Build static site, docker image, upload artifact... (push) Successful in 1m10s
/ Connect to deployment host, update, and redeploy docs website. (push) Failing after 21s
2024-09-05 23:49:38 -07:00
3def5b025a fix pipfile
Some checks failed
/ Connect to deployment host, update, and redeploy docs website. (push) Has been skipped
/ Build static site, docker image, upload artifact... (push) Failing after 41s
2024-09-05 23:47:52 -07:00
631e14720c fix pipfile
Some checks failed
/ Connect to deployment host, update, and redeploy docs website. (push) Has been skipped
/ Build static site, docker image, upload artifact... (push) Failing after 39s
2024-09-05 23:46:15 -07:00
08b0c2fc09 Fix actions maybe?
Some checks failed
/ Connect to deployment host, update, and redeploy docs website. (push) Has been skipped
/ Build static site, docker image, upload artifact... (push) Failing after 1m46s
2024-09-05 22:07:44 -07:00
ef1756df5e Update blag version, improve workflow for new deployment host. 2024-09-05 21:46:21 -07:00
39ccd1c97f new blog entry
Some checks failed
/ Build static site, docker image, upload artifact... (push) Failing after 4s
/ Connect to deployment host, update, and redeploy docs website. (push) Has been skipped
2024-05-12 22:22:33 -07:00
63a6ef1152 Fix spelling on last blog entry 2024-01-31 19:05:56 -08:00
fbaeb7f56d Add new rice recipe to blog. 2024-01-31 18:46:14 -08:00
75ee536bd3 New blog post. 2024-01-19 05:59:21 -08:00
e4aabe4188 Update blog 2024-01-18 12:14:20 -08:00
584eb7dec8 New blogpost 2024-01-18 12:12:27 -08:00
21 changed files with 1191 additions and 74 deletions

View File

@@ -1,5 +1,5 @@
worker_processes 4;
pid /run/nginx.pid;
pid /tmp/nginx.pid;
error_log /dev/stderr info;

View File

@@ -2,7 +2,7 @@
nodaemon=true
logfile=/dev/null
logfile_maxbytes=0
pidfile=/run/supervisord.pid
pidfile=/tmp/supervisord.pid
[program:nginx]

View File

@@ -1,9 +1,12 @@
on:
push:
paths:
- "content/**"
- "static/**"
- "templates/**"
# paths:
# # - "content/**"
# # - "static/**"
# # - "templates/**"
# # - ".conf/**"
# # - ".gitea/**"
# # - ".pipfile"
branches:
- "main"
@@ -19,31 +22,30 @@ jobs:
run: echo "::set-output name=date::$(date +'%Y%m%d%H%M%S')"
-
name: Checkout the git repo...
uses: actions/checkout@v3
uses: https://github.com/actions/checkout@v3
-
name: Set up docker buildx...
uses: docker/setup-buildx-action@v3
uses: https://github.com/docker/setup-buildx-action@v3
-
name: Login to gitea registry
uses: docker/login-action@v3
uses: https://github.com/docker/login-action@v3
with:
registry: gitea.raer.me
username: ${{ secrets.PRODUCTION_REGISTRY_USERNAME }}
password: ${{ secrets.PRODUCTION_REGISTRY_TOKEN }}
username: ${{ secrets.REGISTRY_USERNAME }}
password: ${{ secrets.REGISTRY_TOKEN }}
-
name: Install required system packages...
run: |
export DEBIAN_FRONTEND=noninteractive
apt update
apt upgrade -y
apt install -y curl tar p7zip-full python3.11 pip pipx
apt install -y curl tar p7zip-full python3.12 pip pipenv
-
name: Install pipenv, build blog...
env:
PIPENV_USER: ${{ secrets.PRODUCTION_REGISTRY_USERNAME }}
PIPENV_PASS: ${{ secrets.PRODUCTION_REGISTRY_TOKEN }}
PIPENV_USER: ${{ secrets.REGISTRY_USERNAME }}
PIPENV_PASS: ${{ secrets.REGISTRY_TOKEN }}
run: |
pip install pipenv
pipenv install
pipenv run blag build
-
@@ -51,37 +53,30 @@ jobs:
run: 7z a -mx=9 ./artifact.7z build
-
name: Upload artifact...
uses: actions/upload-artifact@v3
uses: https://github.com/actions/upload-artifact@v3
with:
name: artifact_${{ steps.date.outputs.date }}
path: ./artifact.7z
retention-days: 7
-
name: Build and push docker image to gitea package store
uses: docker/build-push-action@v5
uses: https://github.com/docker/build-push-action@v5
with:
context: .
push: true
platforms: linux/amd64
platforms: linux/arm64
tags: gitea.raer.me/${{ gitea.repository }}:${{ gitea.ref_name }}
job2:
needs: job1
name: Connect to deployment host, update, and redeploy docs website.
runs-on: ubuntu-latest
steps:
-
name: Install required system packages...
run: |
export DEBIAN_FRONTEND=noninteractive
apt update
apt upgrade -y
apt install -y iputils-ping
-
name: Configure SSH...
env:
SSH_USER: ${{ secrets.PRODUCTION_SSH_USER }}
SSH_KEY: ${{ secrets.PRODUCTION_SSH_KEY }}
SSH_HOST: ${{ secrets.PRODUCTION_SSH_HOST }}
SSH_USER: ${{ secrets.DEPLOYMENT_USER }}
SSH_KEY: ${{ secrets.DEPLOYMENT_KEY }}
SSH_HOST: ${{ secrets.DEPLOYMENT_HOST }}
run: |
mkdir -p ~/.ssh/
echo "$SSH_KEY" > ~/.ssh/staging.key
@@ -95,38 +90,5 @@ jobs:
END
cat ~/.ssh/config
-
name: Test SSH Host...
env:
SSH_HOST: ${{ secrets.PRODUCTION_SSH_HOST }}
run: |
ping -c 3 $SSH_HOST
ssh staging 'ls'
-
name: Safety check (ensure dirs exist and repo has been cloned)...
run: |
echo "Adding ci dir if it doesn't exist..."
ssh staging 'bash -c "[ -d ci ] || mkdir ci"'
echo "Cloning git repo if it isn't already cloned..."
ssh staging 'cd ci; bash -c "[ -d ${{ gitea.event.repository.name }} ] || git clone https://${{ secrets.PRODUCTION_API_TOKEN }}@gitea.raer.me/${{ gitea.repository }}.git"'
-
name: Deploy testing script on remote...
run: |
ssh staging '\
cd ci/${{ gitea.event.repository.name }}; \
git remote remove origin; \
git remote add origin https://${{ secrets.PRODUCTION_API_TOKEN }}@gitea.raer.me/${{ gitea.repository }}.git; \
git checkout ${{ gitea.ref_name }}; \
git reset --hard HEAD; \
git pull origin ${{ gitea.ref_name }}; \
git remote remove origin;'
-
name: Pull new image and redeploy...
run: |
ssh staging '\
echo "${{ secrets.PRODUCTION_REGISTRY_TOKEN }}" | docker login --password-stdin --username ${{ secrets.PRODUCTION_REGISTRY_USERNAME }} gitea.raer.me; \
docker stop blog.raer.me-prod; \
docker rm blog.raer.me-prod; \
docker pull gitea.raer.me/${{ gitea.repository }}:${{ gitea.ref_name }}; \
docker run -d --name blog.raer.me-prod -p ${{ secrets.PRODUCTION_DEPLOYMENT_HOST }}:4020:80 gitea.raer.me/${{ gitea.repository }}:${{ gitea.ref_name }}; \
docker logout gitea.raer.me;'
name: Run deploy script.
run: ssh staging

View File

@@ -4,12 +4,17 @@
## Used by automation. Can be built manually for testing.
##
####
FROM alpine:3.17
FROM alpine:3.20
RUN apk add nginx supervisor
RUN mkdir -p /var/www
RUN rm -rf /etc/nginx
COPY build /var/www/build
COPY .conf/nginx /etc/nginx
COPY .conf/supervisor/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
RUN chown -R nobody /var/www
RUN chown -R nobody /etc/nginx
RUN chown -R nobody /var/www/build
RUN chown -R nobody /etc/supervisor/conf.d/
USER nobody
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
EXPOSE 80

View File

@@ -9,9 +9,10 @@ verify_ssl = true
name = "gitea"
[packages]
blag = {version = "==2.3.0", index = "gitea"}
blag = {version = "==2.4.2", index = "gitea"}
pymdown-extensions = {version = "==10.9", index = "pypi"}
[dev-packages]
[requires]
python_version = "3.11"
python_version = "3.12"

View File

@@ -2,7 +2,7 @@ Title: First blog post built with blag
Description: Because every new blog needs a new post.
Date: 2024-01-17 10:28
Tags: personal, gitops, devops, technical
Edited: 2024-01-18 00:18
Edited: 2024-09-27 13:50
# A new post for a new blog
@@ -14,13 +14,13 @@ Previously, <https://blog.raer.me/> was an html-only website. The pages were cre
That's terribly inconvenient. To boot, the thing wasn't version managed and it was deployed entirely manually directly to a folder on my reverse proxy server ([see more...](../../../2023/06/28/automating-some-things.md)) Yikes! None of this was ideal at all!
## Fixing that old mess
# Fixing that old mess
So building the blog with html manually was a pain in the ass. But doing something like an MVC framework or a CMS for a simple blog seemed like too much hassle as well. I don't want a WYSIWYG, or something that's browser-based. I hate dealing with browser frontends. After all, a blog is mostly - if not *entirely* - text-based. Why should I have to deal with the overhead of a server-side scripted website? I just want to write my blog in markdown - like I do with all my documentation already. Then I could even keep it in a git repo, backed up to my private gitea instance.
The answer to all of that, is `static site generation`. Turns out, there are plenty of other people out there who have looked at available tools, thought something similar to me, then built their own new tool that can take markdown, then generate a whole-ass website with it. Simple, and clean. You write content, maybe tweak some CSS/HTML templates, then the generator handles all the dirty work. No more searching for dozens of instances of a link when I change something in the navbar. That navbar is now a single template file that's reused by the generator.
## Static Site Generation
# Static Site Generation
This all sounds very complicated, yes? Well, sure. But really, it's not.
@@ -187,6 +187,8 @@ jobs:
```
# Conclusion
Bear in mind, this all required a bit of setup and learning to self-host. But, when the hosts & runners are all set up and running properly, with the above workflow, updating this blog is a simple matter of committing to a git repo then pushing it to my remote. The runners handle everything else.
Ain't gitops grand?

View File

@@ -0,0 +1,58 @@
title: Baja-style tacos
description: Some seafood tacos.
tags: cooking, recipes
date: 2024-01-19 05:59
# Baja Tacos
## Ingredients
- Street taco size tortillas
- Mayo
- Sour Cream
### Fresh
- 12x limes (some for juicing, some for serving)
- shredded coleslaw mix (or cabbage for shredding)
### Spices
- chili powder
- kosher salt
- black pepper
- garlic powder
- cumin
### Protein
- 1lb large prawns, shrimp, or fish filets
## Directions
### Prep
- Butterfly the protein.
- Mix spices in order most to least: Chili powder, cumin, garlic powder, salt, black pepper
- Preheat and oil a large skillet
- Halve some limes. Cut some halves into quarters, save some for juicing.
### Slaw
Toss shredded slaw veggie mix with the juice of 1-2 lime halves, season lightly with salt and pepper
### Sauce
Mix 60:40 sour cream to mayo. Make about 2 cups worth. Add a generous pinch of salt and a few generous shakes of coarse black pepper and garlic powder. Mix in the juice of 2-4 lime halves. Sauce should be tangy with a hint of garlic.
### Rub
Rub the protein liberally with the spice mixture. Cook in hot oil until tender. (Butterflied shrimp/fish cook very fast this way)
## Assemble
1. Taco shell
2. Dollop and smear of sauce
3. Generous helping of slaw
4. 2x prawns/shrimp OR 1x fish filet
5. Drizzle with sauce, serve with lime slice

View File

@@ -0,0 +1,78 @@
title: Cheesecake
description: My birthday is soon, here's a cheesecake recipe.
tags: baking, cooking, cheesecake, recipes
date: 2024-01-18 12:05
edited: 2024-09-27 13:52
# New York style cheesecake
Shamelessly stolen from [martha stewart dot com](https://www.marthastewart.com/865202/new-york-style-cheesecake). Thanks `Lucinda Scala Quinn` for a great recipe!
Converted to markdown and posted here to preserve for my own purposes.
## Ingredients
### For the Crust
- 4 ounces graham crackers, broken into pieces
- ¼ teaspoon coarse salt
- ⅓ cup sugar
- 4 tablespoons unsalted butter, melted
### For the Filling
- 2 ½ pounds cream cheese (five 8-ounce packages), room temperature
- 4 ounces unsalted butter, room temperature
- 8 ounces sour cream, room temperature
- 1 ¾ cups granulated sugar
- 5 large eggs, plus 2 egg yolks
- Zest of 1 lemon
- 1 teaspoon vanilla extract
## Directions
1. **Preheat oven and prepare pan:**
- Preheat oven to 375°F with rack in the lower third of the oven.
- Butter bottom and sides of a 9-inch springform pan.
- Line sides of the pan with 4-inch-high strips of parchment and butter parchment.
2. **Combine graham crackers and sugar for crust:**
- In a food processor, pulse graham crackers with salt and sugar to fine crumbs.
- Add butter and pulse until fully incorporated.
3. **Bake and cool crust:**
- Press evenly into the bottom of the prepared springform pan and bake until the crust is golden brown and set (15 minutes).
- Remove from oven and transfer to a wire rack to cool for 10 minutes.
- Use the bottom of a measuring cup or the flat side of a drinking glass to press the crumbs into a compact layer.
> Other cookies, like chocolate disks, gingersnaps, or Biscoff wafers, can be used instead of graham crackers.
4. **Mix cheesecake filling:**
- In a large stand mixer fitted with the paddle attachment, beat cream cheese, butter, and sour cream with sugar until light and smooth.
- Add eggs, yolks, zest, and vanilla:
- Beat in eggs one at a time until fully incorporated.
- Beat in remaining egg yolks, zest, and vanilla extract.
5. **Line pan with foil and parchment:**
- Crisscross two long pieces of foil and place a piece of parchment on top.
6. **Wrap exterior of pan in foil:**
- Place the springform in the center of the foil and wrap the foil tightly around the bottom and sides of the pan.
> Lining the pan with foil helps keep water from seeping into the cheesecake, which causes the crust to become soggy.
7. **Place pan in water bath; transfer to oven and bake:**
- Transfer to a roasting pan, pour filling into the springform pan, and smooth the top.
- Pour boiling water into the roasting pan to come halfway up the sides of the springform pan and carefully transfer to the oven.
- Bake for 1 hour until the top of the cheesecake is golden brown, edges are set, and the center jiggles slightly.
8. **Remove from water bath and foil; chill:**
- Lift cheesecake from the water bath, remove foil and parchment from outside of springform, and chill cheesecake in the refrigerator for at least 8 hours.
9. **Slice the cheesecake and serve:**
- To serve, remove the side of the springform pan and parchment strips.
- Cut the cheesecake with a long, thin-bladed knife.
### How to Slice Cheesecake
For perfect slices every time, run a long thin-bladed knife under hot tap water, wiping it clean between cuts.

View File

@@ -2,6 +2,7 @@ title: Increasing complexity
description: A small issue snowballs because I want independence
tags: technical, gitops, devops
date: 2024-01-18 01:09
edited: 2024-09-27 13:52
# Increasing Complexity
@@ -88,11 +89,11 @@ This isn't my first rodeo with setting these things up, though. I've got a simil
Okay. Enough talk, already! Lets do this!
## Actually doing the thing
# Actually doing the thing
So all that stuff I was talking about before was more-or-less me brainstorming what I needed to do. Here's some reporting back from me doing the stuff.
### Getting my fork on my package repo
## Getting my fork on my package repo
Turns out, the makefile is fine. Super easy. Just gotta hit it with a `make` command and it's primo. So what I did was I made the mirrors org and moved my blag mirror over there. Then I forked it to my personal gitea account. Then I cloned the fork, and made a `v2.3.0` branch because it was on `v2.2.x`. I updated the version in the source. Then I added the dependency for the new package `pymdown-extensions` in the appropriate files. Then I modified the `markdown.py` file to include the `footnotes` and `pymdownx.tilde` (strikethrough) extensions. Then I ran the makefile, which did its magic and made the stuff. Then I simply ran twine to upload to my personal gitea package repo. Done. Version 2.3.0 is on my repo.

View File

@@ -0,0 +1,37 @@
title: Mediterranean-style rice
description: A very tasty rice and veggies recipe.
tags: cooking
date: 2024-01-31 18:45
# Mediterranean style rice
This recipe is inspired by a rice dish served by a mediterranean restaurant I used to go to in my hometown.
## Ingredients
- 2 cups cooked basmati rice
- 1x medium to large sweet onion, diced
- 2x roma tomatoes, cut into 1cm thick slices and then strips.
- 1x green bell pepper, diced
- 2-3 cloves minced garlic
- 1 cup pickled banana peppers, diced.
- 0.5-1.0 cups banana pepper brine.
### Spices
- kosher salt
- black pepper
- turmeric
- oregano
- dill
- nutmeg
### Cooking fat
- olive oil
## Cooking instructions
In a hot cast iron skillet with near-smoking olive oil, sweat the onions and bell peppers until all are soft and starting to brown/darken. Add minced garlic and stir into the still-sizzling onions/peppers until fragrant. Add a generous pinch of kosher salt and coarse black pepper, a tablespoon or two of oregano, a teaspoon or two of dill, a teaspoon of turmeric, and half a teaspoon of nutmeg. Add the roma tomatoes and banana peppers and a glug of olive oil and mix until the tomatoes are soft. Add the brine and bring to a boil.
Serve heaping spoonful of vegetables over rice, with a splash of banana pepper brine on top.

View File

@@ -0,0 +1,36 @@
title: Using passwords in script, securely!
description: Keeping passwords inside of scripts safe from prying eyes.
tags: security, scripting, unix, linux
date: 2024-05-12 21:35
# Storing passwords in plaintext (sorta)
I came across an issue recently wherein I wanted to automate a backup process that requires three different passwords. I had just discovered [borg backup](https://borgbackup.readthedocs.io/en/stable/) and wanted to use it in place of the periodic `rsync -azvh --delete` that I was doing. The rsync method would just sync my home folder to a USB SSD and one of my two fileservers. This worked, but didn't have the deduplication or archiving benefits of borg. It also required me to mount my fileservers via nfs, which is another manual step in the backup process.
Borg backup works by copying data in blocks. It's much smarter than rsync, and you can encrypt the backups on the fly. For remote backups, they recommend using ssh. They allow you to put the encryption passphrase in an environment variable for automation. I wanted to use borg to back up to three different locations at the push of a button, without storing the backup encryption passphrases in plaintext or entering them every time I run a backup.
Through some trial and error, I settled upon writing my script - passphrases and all - then writing *another* script that encrypts it with my gpg key, and sticks it into *yet another* script that will first decrypt the encrypted script then pipe it directly to bash. That looks like this:
```bash
#!/bin/bash
## This will stop the script if there's no script.sh file in the root dir.
set -e
mv script.sh script.sh
GPG_ID='ENTER_YOUR_GPG_ID'
cat script.sh | gpg --encrypt --armor -r $GPG_ID | base64 --wrap 0 > script.gpg.b64
printf "#!/bin/bash\n\nSCRIPT=\"$(cat script.gpg.b64)\"\n\necho \$SCRIPT | base64 -d | gpg --decrypt --quiet | bash\n\n" > script.obf
chmod +x script.obf
rm script.gpg.b64
```
Now I can take *any* script with passwords in it, and obfuscate it behind a gpg passphrase! How neat!

View File

@@ -0,0 +1,30 @@
title: Some changes have occurred
tags: servers, server layout, gitops, devops
date: 2024-09-06 03:27
edited: 2024-09-27 13:53
# Some changes have occurred
Server layout has undergone some changes, most notably:
- the OS on my pi
- how i do gitops
- how the deployment works
# Pi os
I needed docker on my pi, so I abandoned FreeBSD. It was a good run and taught me a lot about unix. But implementing a custom FreeBSD server is just... not my thing anymore. Docker is so much easier for versioning. And if I want to compile from scratch? I have that option, too, with docker.
The pi now runs openSUSE Leap 15.6.
# How I do gitops
I've more or less solidified how I do gitops. When I need to version control files on a remote server, I make a local git repo with those files that also contains a script which is used to deploy any of said files on the remote server. This is achieved over SSH. A bare git repo is initialized on the remote server, and added as a remote in the gitops repo. Then, that remote is pushed to in a way that it is always synced perfectly with the local copy. Then, a script in the git repo can SSH into the remote, clone the repo from the local copy, and do stuff with the files.
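As a rough illustration, the pattern boils down to something like this (a minimal sketch; the host alias, repo name, and paths are placeholders, not my actual layout):

```bash
# One-time setup: a bare repo on the remote host to push into.
ssh deployhost 'git init --bare /var/gitlocal/configs.git'

# In the local gitops repo: add the remote and force-push so the remote
# bare repo always mirrors the local branch exactly.
git remote add deployhost deployhost:/var/gitlocal/configs.git
git push --force deployhost HEAD

# Deploy: a script SSHes in, clones from the host-local bare repo,
# and does whatever it needs to with the files.
ssh deployhost 'rm -rf /tmp/configs && git clone /var/gitlocal/configs.git /tmp/configs && /tmp/configs/deploy.sh'
```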
# How the deployment server works.
Before, I used rootless docker and usernames to sort of namespace things in an ineffective way. I was also using gitea actions configs to do things on the deployment server with an ssh key that had unlimited access to the user account. This provided a false sense of security.
Now, I'm just running a single rootful docker instance. I'm mindful of network segregation, ensuring no unsafe directories are given to containers, and I'm also not allowing any privileged containers.
I'm also doing CI a different way. An SSH keypair is made for each CI repository on gitea. The private key is stored as a secret in the repo's actions settings, and a script is written and pushed to the deployment server, with the SSH public key restricted so that it can only invoke that script. This ensures that no rogue activity happens, essentially locking each SSH key to a specific deployment script.
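Concretely, that key-to-script lockdown is just OpenSSH's forced-command mechanism. A minimal sketch (key name, script path, and options here are illustrative, not my exact setup):

```bash
# Generate a dedicated keypair for one repo's CI; the private key goes
# into that repo's actions secrets.
ssh-keygen -t ed25519 -f blog-ci -N '' -C 'blog-ci'

# On the deployment host, the matching public key gets a forced command
# in ~/.ssh/authorized_keys, so a login with this key can only ever run
# the one deployment script:
#   command="/home/deploy/ci/redeploy-blog.sh",no-port-forwarding,no-pty ssh-ed25519 AAAA... blog-ci
```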

View File

@@ -0,0 +1,36 @@
title: Managing configs on my homeservers.
description: Managing configuration files - some lessons I've learned over the years.
tags: post, git, gitops, devops, cicd, tech, scripting
date: 2024-09-26 05:21
edited: 2024-09-27 13:53
# Managing homeserver configs
I've run my home services a number of different ways over the years. I've split things between multiple virtual machines, I've set up a 'bare metal' kubernetes cluster distributed between multiple VMs and hardware devices on my home network. I've used FreeBSD and its Jails to run things I compiled from scratch in an effort to lower attack surface. I ran (and run) VMs and containers on proxmox, truenas core and truenas scale. Each method brings its pros & cons, security tradeoffs, and configuration complexity. Though I've practiced more complex enterprise-level user & permission management (ldap/active directory) techniques, I've settled on "good enough" security practices for my uses/needs (I don't have multiple people accessing things over ssh, for example, so I do the unthinkable and - gasp - ssh directly into root with an ed25519 keypair to administer servers). No SSH ports are exposed directly to the internet anyway - well, except for gitea. But that's also protected with keypairs.
Similarly, I seek to reduce complexity of my configuration management. I like to do as much work as possible, in my text editor of choice (that's VScode. I know, I know. I use the microsoft text editor. Controversial opinion: it's good. Shoot me, emacs and vim nerds). That means using things like webuis to enter configs is out the window (looking at you, truenas scale kubernetes bullshit). Doing things in the text editor means I'm using git to version manage. I also like to use a combination of custom shell scripts and gitea actions config files to automate workflows. Any commands I run frequently get stuck into a shell script, no matter how mundane. I spent a long time manually deploying configs for docker - I know how that tool works. Hell, I know how *all* my tools work. I want to spend less time entering `ssh host "docker compose down;docker compose up -d;"` and more time doing a `./scripts/docker-down-up`. I don't want to enter an ever-changing esoteric webui for some haphazard k3s deployment to look for/edit a hacky series of docker-compose configs rearranged into different parts of said webui. That stuff just annoys me when I have to change things.
> __Speaking of k3s/k8s - Fuck that noise entirely in a home environment. Unless you're doing it to learn, I recommend staying away from kubernetes. Its just docker with extra steps and its far more trouble than its worth for the home - in my very strong opinion.__
# So how do I do things?
Well, as I alluded to earlier - I work in my text editor, out of git repositories. All of my services are deployed with docker - it's just... easier, this way. I've run services so many different ways over the years and docker is simply the easiest to deal with. I can grab premade containers. Or I can make my own, push them to my gitea deployment, and pull them for use later. It's great. And it's distro agnostic. Sure there are some security issues associated with it. But there are also well documented methods to nullify them. I can also use docker volumes to store everything in `/opt/{container_name}` which is super handy when it comes time to archive/back up the host since all I need to do to grab any important data is back up `/opt`!
Most things get pushed directly to my gitea server. If there are actions that need to be run (such as building and pushing docker containers or other packages), I write a gitea actions config to handle that - it's for all intents and purposes exactly the same as github actions. Which is nice. It simply uses a privileged docker container to spin up other docker containers to do stuff that I would normally do by hand or with a script called `build` or `deploy`.
Though, there are some things that have to be managed manually. One of them is the repo for all the config files for services run on the deployment host. The other is the repo for my nginx reverse proxy - because if I use gitea to deploy that docker container, it will... turn off the reverse proxy. Which is a link between gitea and the act runner. So... Yeah... can't do that. Cus it causes issues.
These manually managed repos are pushed directly to a bare repo on the deployment host. Then, a script is run that SSHs into the host and runs some commands.
In the case of the main config repo, there also resides an ssh authorized_keys file, some scripts in a folder called `ci`, docker configs, and a big folder of scripts to deploy the thing, manually run actions on the docker configs, and more. The authorized_keys file and `ci` folder allow me to use gitea actions to deploy docker images on the host. I generate an ssh keypair, I store the private key as a secret in each individual repo that deploys to the host, then I put the public key in the authorized_keys file with a command that points to a script in `ci` that pulls and redeploys docker images.
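For the curious, one of those `ci` scripts is conceptually just a pull-and-restart wrapper. A sketch under assumed names (the image reference, container name, and port mapping are placeholders, not the real scripts):

```bash
#!/usr/bin/env bash
# Hypothetical ci/redeploy-blog.sh: pull the freshly pushed image and
# replace the running container with it.
set -euo pipefail

IMAGE="gitea.raer.me/OWNER/blog:main"   # placeholder image reference
NAME="blog-prod"                        # placeholder container name

docker pull "$IMAGE"
docker stop "$NAME" || true             # tolerate a container that isn't running
docker rm "$NAME"   || true
docker run -d --name "$NAME" -p 4020:80 "$IMAGE"
```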
# Conclusion
This is more or less some rambling about how I manage configs in git. I hope any amount of this made sense.

View File

@@ -0,0 +1,98 @@
title: Using passwords in script, revisited!
description: Another look at my entry from 12th May, 2024 where I explore a method for obfuscating scripts to keep borg passphrases secure while automating my backup process.
tags: security, scripting, unix, linux
date: 2024-09-27 12:11
# Revisiting an old topic
[In this blog entry dated 12th May, 2024](https://blog.raer.me/2024/05/01/20240512.html) I discuss a method where I'm able to keep passphrases stored inside of a bash script, while still being able to execute the bash script. It's been a few months and I've improved the process for obfuscating/deobfuscating scripts, since I'm now using this method as part of my process in writing/editing backup scripts. Thus I'd like to revisit the topic, since rereading the previous blogpost leaves me a bit unsatisfied.
# The process
You start by writing a script, and don't forget to include your sensitive data. I'll include an example below of one of my backup scripts:
```bash
#!/usr/bin/env bash
NAME=pi_$(date +'%Y%m%d%H%M%S')
DIR="/home/freyja/.mnt/pi"
tmux new-session -d -s borg-pi-archive "sshfs pi:/opt $DIR; \
export BORG_PASSPHRASE=yesthisistotallymyrealpassphraseyougotmeididanoopsie; \
borg create --stats --progress --compression lz4 /home/freyja/Documents/.borg/pi::$NAME $DIR; \
export BORG_PASSPHRASE=''; \
rclone sync -P -v /home/freyja/Documents/.borg/pi proton:/borg/pi; \
umount $DIR; \
"
```
## Obfuscation
Obviously, saving this as plaintext would be insecure. It has a whole encryption passphrase in there! But remembering all that and typing it in every time isn't ideal either. That's where the `obfuscate` script comes in. That looks a 'lil somethin' like this:
```bash
#!/bin/bash
##
# obfuscate script. This will encrypt any script with a gpg id,
# then stick it into the body of a variable in a new script that can
# decrypt the variable and pipe the original script directly to bash.
#
##
if [ -z "$1" ]; then
read -p "Please enter the filename: " filename
else
filename="$1"
fi
GPG_ID='my-gpg-identity'
printf "#!/bin/bash\n\nSCRIPT=\"$(cat $filename | gpg --encrypt --armor -r $GPG_ID | base64 --wrap 0)\"\n\necho \$SCRIPT | base64 -d | gpg --decrypt --quiet | bash\n\n" > $filename.obf
chmod +x $filename.obf
```
Calling that with `obfuscate script` will spit out a file called `script.obf` that is just as easy to execute as the script you obfuscated. In fact, it is literally that exact script! Except it's been encrypted with your supplied `GPG_ID`, turned into base64, then stuck inside a variable called `SCRIPT` inside of the `script.obf` file - along with some other stuff that allows `script.obf` to decrypt the data stored in the `SCRIPT` variable then pipe it directly into bash.
The obfuscated script will look a bit like this:
```bash
#!/bin/bash
SCRIPT="LS0tLS1CRUdJTiBQR1AgTUVTU0FHRS0tLS0tCgpoUUlNQXhCRUVPVU9vQ1JEQVEvL1EvM29FOHBoeUlQSFN6bVFjSWtpUzFDR1pCQU1oWFY3RU9ld2ZaSVhpcDZOClZHM1lDMGQ5QjVwRjZ6emFQVUZmVkJPQzZSV2NPR1pHQU91QW83bXVSSmZNVU5YVWpYbDRrVFRIRnFpdE8ydFIKVHQ3d0lyVm5YMnJjVUNHR0Iva29lSDdsVG9xeG00TEtiZUkvSis2VVpCejNMSlBEQ2hkcXhtQURkbjlIRCtBMwpJT2pMNW5mbVZoRzRRUXBIa01KVFZyOUVoVXNsanJzNXRJNWNUMmg5NEZyaHBnbHE1OVhIOEJCOTJKQ25jSVo0Cis2eW1hZVdSRXBLOGFXRkF6aFVEZUd6bDdhV1VzMUs4NXRraUszdnFwRkYxUDZLdjZQb3dJOU9RRERHNmtLL08KaXlaRFNtTTNIVUJOQ3lEK29SKzBRRGJneG1TTGdUdkdkbW9xUUxCcm9ESVNLMkRiRWNBWkVqR3RwM25ienhwNwo2T1d6cDZLUzhGZURvTGZtMzdBdnpreTBMQ2ZTUGlZQm1QcFVEZzg1NlJOMytSQWlPeTgydkU0TFd4Mm14N3FZCmVCN1dQS3FNbTlhZDNnTVJUVEpUUGhzSUZrTWc4bEtEeW5uenhnUGFyQStFcnlyMUhnZVBqaThFRUZMVk44cmIKYlVtUE1BL3d6UVVFRFNyaXBBTE5WQ3lXOFF0eHh5QmtYWW9ERldTc2w5RXRCNDRCWHk4amozbHlKcXV6TzFZZgpTTjJmZThRdW9FMk1JTHM1MmRxbjBvdlFBZUN6RVF3Z29udC9hcXpzRUZ4b29tSFRPa1dVT1BUeGVoTXFhRjZKCjlKSnk2aUs0MEMrUHFTbVNVaktQMmtBZDRRQ3YzYXlabU8zaTZyRHZIM25hN0g1OHFLRlkwb2NoRWI4Qzh6blUKY0FFSkFoQk56bm5zRmNrTlVpOUhEQ05hcG1ubkxzb0xycDNWVE9yMFhZQUpyYUFZYStZZDh2RHJsYUNaeFNxTQpOYW1kUFZkc3J4K2JPTTdkbk1VbXFldm1GRzVPUm5nSUF0KytxdkNzaGNmRk42V01qcUpyYStBdWZRMFRldUZzClZZNHRVY3JzRU90REFrWHUxWDEzZVh3PQo9WENRdgotLS0tLUVORCBQR1AgTUVTU0FHRS0tLS0tCg=="
echo $SCRIPT | base64 -d | gpg --decrypt --quiet | bash
```
## Deobfuscation
As you can see, the encrypted script is stored as base64. To run it, that script is decoded, decrypted, then piped into bash. So long as the machine you're on has the private key for the gpg key you used to encrypt the script, it will simply ask you for your gpg passphrase before proceeding! How neat! But what if we want to edit the script? Seems like a bit of a pain to get your original script back, right? Well... to a degree. If you wanted to, you could lop off that pipe and `bash` command at the end of the last script, replace it with `>> script.deobfuscated`, and you'd have your original script back. But, that's also kind of annoying to do every time. So I just made a `deobfuscate` script that'll do the job for you. It's just like the `obfuscate` script but in reverse. It takes a file as input, looks for a variable called `SCRIPT`, then decodes, decrypts, and dumps it into a file appended with `.deobfuscated`. Here's the deobfuscate script:
```bash
#!/bin/bash
if [ -z "$1" ]; then
read -p "Please enter the filename: " SCRIPT_FILE
else
SCRIPT_FILE="$1"
fi
if [ ! -f "$SCRIPT_FILE" ]; then
echo "$SCRIPT_FILE not found!"
exit 1
fi
cat $SCRIPT_FILE | xargs | awk -F 'SCRIPT=' '{print $2}' | awk -F ' ' '{print $1}' | base64 -d | gpg --decrypt --quiet > $SCRIPT_FILE.deobfuscated
if [ $? -eq 0 ]; then
echo "$SCRIPT_FILE has been deobfuscated."
else
echo "Failed to decrypt the script. Ensure you have the correct GPG key and it's available."
exit 1
fi
```
# Conclusion
That's about it, I don't have anything else. This section is simply a formality. I hope that if you're reading this, you find it interesting. <3

View File

@@ -0,0 +1,39 @@
title: An annoying issue with gitea
description: messing around with gitea stuff gets me into a wild goose chase.
tags: devops, gitea, sysadmin
date: 2025-09-10 19:28
# An annoying issue with gitea
Had this really frustrating issue today with gitea that I ultimately discovered was caused by a stale gpg lock file residing in a place I didn't expect.
It started out with me creating some orgs to better manage my personal forks. I have a few repositories that are forks of things from github, but not reflected as such in gitea. I started by making an org for github.com, migrating the repos there, renaming and archiving my copies, forking the repos to my gitea user, then pushing my local copy back to gitea. This allows me to maintain a local mirror of the github copy, and link to it as a fork in gitea. Pure semantics, honestly.
However there was this big issue when trying to view commit graphs on anything I've forked: I would ultimately get a timeout error & no page. Some prodding revealed gitea throwing this error multiple times before timeout:
```
2025/09/11 02:10:12 ...mmit_verification.go:229:ParseCommitWithSignature() [E] Error getting default signing key: ******** unable to get default signing key: ********, gpg: Note: database_open 134217901 waiting for lock (held by 1502) ...
gpg: Note: database_open 134217901 waiting for lock (held by 1502) ...
gpg: Note: database_open 134217901 waiting for lock (held by 1502) ...
gpg: Note: database_open 134217901 waiting for lock (held by 1502) ...
gpg: Note: database_open 134217901 waiting for lock (held by 1502) ...
gpg: WARNING: nothing exported
gpg: key export failed: Operation timed out
, exec(68c22f7a-13:gpg -a --export) failed: exit status 2(<nil>) stdout: stderr: gpg: Note: database_open 134217901 waiting for lock (held by 1502) ...
gpg: Note: database_open 134217901 waiting for lock (held by 1502) ...
gpg: Note: database_open 134217901 waiting for lock (held by 1502) ...
gpg: Note: database_open 134217901 waiting for lock (held by 1502) ...
gpg: Note: database_open 134217901 waiting for lock (held by 1502) ...
gpg: WARNING: nothing exported
gpg: key export failed: Operation timed out
```
Some cursory web searches showed that this wasn't something other people have encountered. Then a bit of reading about the error revealed that this was caused by a stale lock file. Simple enough, I thought. So I navigate to my gitea instance's data directory. I delete `data/git/.gnupg/public-keys.d/pubring.db.lock` only to STILL have the issue. WTF???
So after some more searching and using chatgpt... I find from some github issue that there's a second `.gnupg` dir!!! Seems that gitea uses gnupg in two different contexts requiring two gnupghome directories. Neat. Cool. Awesome.
So finally, I delete `data/gitea/data/home/.gnupg/public-keys.d/pubring.db.lock` and BADDA BING BADDA BOOM! BOBS UR AUNTIE! Problem solved!
Sigh. What a waste of the last hour or so of my life. 'least the issue is fixed and I can view commit graphs again! Somehow, this also fixed a load time issue when loading mirror repos. It should also fix a long standing issue I had with merging pull requests on gitea's webui!
- Freyja (2025-09-10 19:41)

View File

@@ -0,0 +1,256 @@
title: The FHS spec
description: Notes taken by me on the Unix Filesystem Hierarchy Standard.
tags: unix, linux, notes, general
date: 2025-11-13 13:56
# FHS
These are my FHS spec notes from 2024-06-26, uploaded here (mostly so my gf can read it, hi Jamie! :D)
The Filesystem Hierarchy Standard is a unix standard defining the minimal required directory hierarchy for a functional, portable, distributable filesystem. Distributable refers to the ability to store certain directories on other devices, mounted to the root directory. These can include /var, /etc, /home, /usr, /boot...
- bin+
- boot
- dev
- etc
- lib+
- media
- mnt
- opt
- run
- sbin+
- srv
- tmp
- usr*
- var*
(* indicates complex directories)
(+ indicates a directory that is usually a symbolic link for one inside /usr)
## Table of Contents <!-- omit in toc -->
- [FHS](#fhs)
- [Prologue](#prologue)
- [(/usr)/bin](#usrbin)
- [/boot](#boot)
- [/dev](#dev)
- [/etc](#etc)
- [/home (Optional, apparently. lol)](#home-optional-apparently-lol)
- [User mounted stuff](#user-mounted-stuff)
- [/home summary](#home-summary)
- [(/usr)/lib](#usrlib)
- [/media](#media)
- [/mnt](#mnt)
- [/opt](#opt)
- [/root (optional)](#root-optional)
- [/run](#run)
- [(/usr)/sbin](#usrsbin)
- [/srv](#srv)
- [/tmp](#tmp)
- [/usr](#usr)
- [/var](#var)
- [Backup strategies](#backup-strategies)
- [Example backup strategy](#example-backup-strategy)
- [Configuration management strategies](#configuration-management-strategies)
- [When do I do things in /opt???](#when-do-i-do-things-in-opt)
## Prologue
I have a bug up my ass about studying the hierarchical directory system used on Linux and other unix-like computer systems. I think it's a good idea to study things which we interact with and take for granted, in detail. This is something I implement as sort of a general philosophy of life lately. Thus, it has crept into my ongoing studies into computer science. Hence these notes on the FHS and how I can better conform to this standard so as to improve the fluidity of my workflow managing multiple home servers.
## (/usr)/bin
[Reference](https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch03s04.html)
Binary files. The basic commands installed on your system. These are things like `rm`, `sudo`, `cp`, `mv` and more. This is usually a symbolic link to /usr/bin.
__Rule of thumb:__ though it is standard for /bin to exist as a symbolic link, when doing a shebang to something in this dir I like to use the *real* directory `/usr/bin`.
Historically, this has been *separate from /usr/bin*. However, I think it makes more sense in this day-and-age to conform to a variation of the FHS where /bin is a symlink, since it allows /usr to be *unified system resources*.
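In practice that just means pointing shebangs at the real path, for example (a trivial sketch):

```bash
#!/usr/bin/bash
# Shebang pointing at the real /usr/bin path rather than the /bin symlink.
echo "hello from a script resolved via /usr/bin"
```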
## /boot
[Reference](https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch03s05.html)
This is the boot partition. The bootloader and its various config files reside here, as well as the kernel.
## /dev
[Reference](https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch03s06.html)
`/dev` is where device files reside. These are special files which refer to hardware (or virtual devices) made accessible to the system via drivers.
## /etc
[Reference](https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch03s07.html)
Configuration files for the host system. `/etc/opt` should contain config files for apps installed to `/opt`
## /home (Optional, apparently. lol)
FHS doesn't officially standardize *shit* within a user home directory. They specify this, which I find useful:
- "To find a user's home directory, use a library function such as getpwent, getpwent_r of fgetpwent rather than relying on /etc/passwd because user information may be stored remotely using systems such as NIS." [source](https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch03s08.html)
Anyway, here are some standard conventions which I see employed in user home directories - and some ideas I have about how users' home directories *should be dealt with*.
- In systems with DEs, you'll see capitalized directories called `Documents, Downloads, Music, Pictures, Videos`. These hold... what they say they hold. I like to use them as-is. KDE likes to include something called `Templates` which I've never used, or seen used, so I just delete it.
- `~/bin/` for user-installed binaries or scripts. This is usually defined on the user's PATH variable so they may stick scripts in here for quick execution from their account, without having to stick them in `/usr/`.
- I have a tendency to keep my scripts inside of `~/Documents/git/freyjagp/scripts/` because that is a git repo, and I keep my git repos there. Nonetheless, many tools that a user installs may install binaries/scripts in here. Or into one of the directories we'll describe below.
- `~/.{TOOL}/` - Many tools will create a hidden directory inside of the home directory for storing config data local to the tool's user. I'm going to think of this as a sort of personal workbench, as opposed to `/usr/bin/` or the like, which are like a community workshop.
- `~/.config/` *many* user applications will store their configuration files here. Anything that's intended to be run by the user, not as a system-critical program, likes to store configs either here, or `~/.local/share/`.
- `~/.local/` is similar to `/usr/local` in structure? In practice, I find it to be an inconsistently used place to store stuff that would otherwise be chucked in `~/.config/` or `~/bin/` or `~/.{TOOL}/` directories. It was - I suppose, at some point - supposed to unify how things are done in a `/home/user/` directory but... *sigh*.
- Other misc. files, such as users' rc files for shells, or shell histories, or somesuch, are also stored in here.
### User mounted stuff
Stuff that's mounted for the user automatically is usually done in `/run/user/...` and having read the standard, this makes sense. It's a runtime mount managed by software since the `/run` dir *should be* cleared on shutdown like `/tmp`.
Anything else that's mounted for the user's usage (and not for the system to have access to) should be mounted to `~/.mnt`. Else, stuff mounted for use by the system (variable or otherwise) should be mounted at `/var/nfs/...` or similar - not `/srv` since `srv` should be for things that are *publicly accessible*. Static files, usually.
### /home summary
Basically, `/home/user` stores anything a system user would want to keep backed up should shit hit the fan. It might be *convenient* to maintain backups of the whole system. But what *really matters* is the `/home` dir (to a basic user, at least).
## (/usr)/lib
[Reference](https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch03s09.html)
Shared libraries and kernel modules. Stuff used by the OS and kernel to make code work properly, to compile stuff written in C/C++, etc.
## /media
[Reference](https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch03s10.html)
Media on removable devices. Most commonly USB (though, many linux distributions use software that will mount removable media to /run/media/{USER}). Not really used, though, in my opinion.
## /mnt
Temporary mountpoints. So using this for a quick `mount -t nfs ...` isn't a bad idea. But for something in `/etc/fstab` I should actually be mounting to something like `/srv`, `/home/{USER}`, or `/var`. This isn't so much a *universal mountpoint for things that aren't system mounts* but moreso a place for a user or administrator to mount devices in a pinch.
## /opt
Applications/software packages. This blurb explains it better than I understand it at the time of writing:
```text
This directory is reserved for all the software and add-on packages that are not part of the default installation. For example, StarOffice, Kylix, Netscape Communicator and WordPerfect packages are normally found here. To comply with the FSSTND, all third party applications should be installed in this directory. Any package to be installed here must locate its static files (ie. extra fonts, clipart, database files) in a separate /opt/'package' or /opt/'provider' directory tree (similar to the way in which Windows will install new software to its own directory tree C:\Windows\Program Files\"Program Name"), where 'package' is a name that describes the software package and 'provider' is the provider's LANANA registered name.
Although most distributions neglect to create the directories /opt/bin, /opt/doc, /opt/include, /opt/info, /opt/lib, and /opt/man they are reserved for local system administrator use. Packages may provide "front-end" files intended to be placed in (by linking or copying) these reserved directories by the system administrator, but must function normally in the absence of these reserved directories. Programs to be invoked by users are located in the directory /opt/'package'/bin. If the package includes UNIX manual pages, they are located in /opt/'package'/man and the same substructure as /usr/share/man must be used. Package files that are variable must be installed in /var/opt. Host-specific configuration files are installed in /etc/opt.
Under no circumstances are other package files to exist outside the /opt, /var/opt, and /etc/opt hierarchies except for those package files that must reside in specific locations within the filesystem tree in order to function properly. For example, device lock files in /var/lock and devices in /dev. Distributions may install software in /opt, but must not modify or delete software installed by the local system administrator without the assent of the local system administrator.
The use of /opt for add-on software is a well-established practice in the UNIX community. The System V Application Binary Interface [AT&T 1990], based on the System V Interface Definition (Third Edition) and the Intel Binary Compatibility Standard v. 2 (iBCS2) provides for an /opt structure very similar to the one defined here.
Generally, all data required to support a package on a system must be present within /opt/'package', including files intended to be copied into /etc/opt/'package' and /var/opt/'package' as well as reserved directories in /opt. The minor restrictions on distributions using /opt are necessary because conflicts are possible between distribution installed and locally installed software, especially in the case of fixed pathnames found in some binary software.
The structure of the directories below /opt/'provider' is left up to the packager of the software, though it is recommended that packages are installed in /opt/'provider'/'package' and follow a similar structure to the guidelines for /opt/package. A valid reason for diverging from this structure is for support packages which may have files installed in /opt/ 'provider'/lib or /opt/'provider'/bin.
```
[source^](https://askubuntu.com/questions/982589/why-should-i-be-installing-my-applications-in-the-opt-location)
After reading the blurb, I understand that opt functions this way:
It can look like this:
- /opt
- /bin
- /doc
- /include
- /info
- /lib
- /man
These are reserved for system admin usage. They should store (if they are used) *copies or symlinks* to stuff inside of `/opt/{PACKAGE}/[bin,doc,include,info,lib,man]`. That's a little complex so I'll just stick to `/opt/{PACKAGE}/...`
`/var/opt` and `/etc/opt` can also serve variable and config files respectively, for applications stored in opt (as opposed to /opt/{PACKAGE}/[etc,var])
In general, though, anything for an application installed to `/opt` to work should be stored inside `/opt/{PACKAGE}/...`
Another valid option is `/opt/{PROVIDER}/{PACKAGE}` but I'll prolly ignore that one.
## /root (optional)
The root user's folder.
## /run
["This directory contains system information data describing the system since it was booted. Files under this directory must be cleared (removed or truncated as appropriate) at the beginning of the boot process."](https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch03s15.html)
This is where things like sockets, or ["data relevant to running processes"](https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch03s02.html) are kept.
__`/run` should not be writable for unprivileged users; it is a major security problem if any user can write in this directory. User-specific subdirectories should be writable only by each directory's owner.__
## (/usr)/sbin
[Utilities used for system administration (and other root-only commands) are stored in /sbin, /usr/sbin, and /usr/local/sbin. /sbin contains binaries essential for booting, restoring, recovering, and/or repairing the system in addition to the binaries in /bin. [18] Programs executed after /usr is known to be mounted (when there are no problems) are generally placed into /usr/sbin. Locally-installed system administration programs should be placed into /usr/local/sbin. [19]](https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch03s16.html)
System binaries. Stuff used by the root user to do administrative and/or system things.
## /srv
I think static files should be served from here. On my nginx server, instead of doing `/var/www/bjongbeuf.com/{NFS-MOUNT}` or `/mnt/httpserve/bjongbeuf.com/{NFS-MOUNT}` I should just be doing `/srv/bjongbeuf.com/{NFS}`. I suppose that this shouldn't even necessarily be limited to nfs. This should just have *anything* that's being *served to the public* (which in this day-and-age means files served over http). In my opinion, most linux distros do this wrong and it, quite frankly, makes sense for things that are being served to the public to be here. Sure they *can be* variable. But /var is better for shit like logs, in my opinion.
Funny enough, the standard agrees with me (I hadn't actually read it at the time of writing the above paragraph). See [here](https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch03s16.html).
There's no standard for how things should be kept here. So I'll probably just do it as I see fit until I settle upon something. For now, I'm thinking something like `/srv/{domain-name}/` since I mostly only serve static files over http anyway; it wouldn't really make sense to put them somewhere like `/srv/http/...`.
## /tmp
Temporary files baby. Anything that you don't mind being lost or cleaned up at some point after you're done with it. Some things also put sockets here for some reason.
## /usr
[Reference](https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch04.html)
This boi is complex, but I think I can boil it down to a couple of simple things. We've already gone over some of the contents of `/usr` with `/bin` and `/sbin` and `/lib`. `/usr` is also meant to hold other directories for system purposes such as `include` and `share` and `libexec`. These aren't terribly important to know off the top of your head. Only that stuff in here that's *not* in the `/usr/local` is intended to be *installed and managed by the system*. If you're manually sticking something in `/usr/bin` it should be to update an existing package. Otherwise, `/usr/local/bin` should be used, as `/usr/local` is reserved for administration by the system administrator.
## /var
Var is kind of... where we chuck things that change with the system as time goes on, that must be stored, that aren't config files, or static files, or what-have-you. Logs, spools, mailboxes, and the like. Officially, you aren't supposed to do what I do - making apps store things here. But I'm going to do it anyway, because I need to for configuration management.
Anyway, the things of note in here are `/var/log` and `/var/mail`. And for my purposes, `/var/gitlocal`.
## Backup strategies
A simple backup strategy employed by - I'd wager - most users (at least by myself) is to simply `rsync` (or something else, in my case I use `borg`) the entire `/home/user` directory. This works if all you care about on the system is your userdata. What if the rest of the system isn't quite so expendable? What if you're serving html content from `/srv/example.com` that isn't on an nfs share? What if you've installed binaries from external sources (or compiled them yourself)?
Well, we want to consider backing up other folders, then.
There's not going to be a one-size-fits-all solution here but I'll try to generalize...
### Example backup strategy
A basic strategy would consider backing up all of:
- `/boot`
- `/home`
- `/srv`
- `/var`
- `/root`
- `/etc`
- `/opt`
- `/usr/local`
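Something like the following is roughly what I have in mind with `borg` - the repository location, passphrase handling, and retention numbers are placeholders, not a recommendation:

```bash
# Back up the directories listed above into a (hypothetical) remote borg repo.
borg create --stats --compression zstd \
    backupuser@backuphost:/path/to/repo::'{hostname}-{now}' \
    /boot /home /srv /var /root /etc /opt /usr/local

# Prune old archives so the repository doesn't grow without bound.
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 \
    backupuser@backuphost:/path/to/repo
```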
A more efficient strategy might consider *what exactly we want to keep from each of these, and why*. It might also discard some directories entirely, depending on whether we use them.
(I kinda thought I was cooking here but I wasn't lol).
## Configuration management strategies
Git, git, git, git, git. Git is god here. But *how*?
I'm going to attempt to, finally, devise a configuration management strategy that incorporates git, is scriptable, and is compliant with the FHS.
Configs are stored in `/etc` and `/usr/local/etc`. On any system I manage, this will be so - EXCEPT for things that run as docker containers; those will use `/opt`, and I'll explain why later.
`/etc` and `/usr/local/etc` should each be initialized as git repositories when the system is first set up. Then we create *bare git repositories* at `/var/gitlocal/etc.git` and `/var/gitlocal/usr_local_etc.git`. Optionally, we create (or import) a GPG key for the root account to use for signing git commits. Commits are pushed to, at a minimum, the bare repos mentioned earlier. Any backup strategy employed on a system using this configuration management strategy must include at *least* `/var/gitlocal`, so that configuration files are both version controlled and backed up.
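A rough sketch of that bootstrap, run as root - branch names and whether you sign commits are up to you; the only assumptions here are the paths already named above:

```bash
# Turn the live config directories into git repositories.
git -C /etc init
git -C /usr/local/etc init

# Create the bare repositories that act as local "remotes".
mkdir -p /var/gitlocal
git init --bare /var/gitlocal/etc.git
git init --bare /var/gitlocal/usr_local_etc.git

# Wire /etc up and make the first commit (repeat for /usr/local/etc).
git -C /etc remote add gitlocal /var/gitlocal/etc.git
git -C /etc add -A
git -C /etc commit -m "Initial import of /etc"   # add -S here if signing with the root gpg key
git -C /etc push -u gitlocal HEAD                # pushes whatever the default branch is called
```

From then on, a config change costs one `git -C /etc add -A && git -C /etc commit && git -C /etc push`, and the backup job only has to care about `/var/gitlocal`.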
## When do I do things in /opt???
The only case I see myself using `/opt` is when I'm deploying docker containers. Binaries installed to the system but compiled by the sysadmin (such as how I set up nginx) should be installed/configured to `/usr/local/bin` and `/usr/local/etc` respectively. It might make sense to install them to `/opt` at first glance... but I see `/opt` as more of a place to keep application binaries and data in one location, should you want to do that. In my use case, I like to avoid dealing with PVE volumes for any sort of configuration or permanent data stored by a docker container. Sometimes I want to poke around in that data, and a PVE is simply too god-damned inconvenient to deal with. They have their use cases, but *not* as persistent data stores in my environments. Therefore, `/opt` is the perfect candidate for my use case. It has a loosely defined structure that allows me to do things like throwing a valheim server into a directory called `/opt/valheim` with a docker-compose.yml and its own `./etc` and `./var` dirs (see the sketch at the end of this section).
Note that there are optional structures defined for `/opt` that include symlinking things to `/opt/bin` and using `/etc/opt` and `/var/opt`; however, I see these as obsolete and unnecessary. A refined standard would, in my opinion, drop those options and stick to using `/opt` as a self-contained place, like I'm doing here.
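For illustration, the kind of layout I mean - the image name and the container-side paths are placeholders that depend entirely on whatever image you actually run:

```bash
# Everything for one service lives under a single directory in /opt.
mkdir -p /opt/valheim/{etc,var}

# Bind-mount ./etc and ./var instead of using named volumes, so the
# data stays right there on the filesystem where I can poke at it.
cat > /opt/valheim/docker-compose.yml <<'EOF'
services:
  valheim:
    image: example/valheim-server    # placeholder image name
    restart: unless-stopped
    volumes:
      - ./etc:/config                # container paths depend on the image
      - ./var:/data
EOF
```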

content/copyright.md Normal file

@@ -0,0 +1,425 @@
title: copyright notice
description: copyright notice
# Copyright notice for content hosted on blog.raer.me
Copyright © 2024 Freyja R. L. Odinthrir
Code written inside code blocks on this website is published under the [BSD 3-clause license](#bsd-3-clause-license). This license applies *ONLY* to code published in code blocks on this website.
All other content on this website - all original copy, images, stuff that isn't code, etc. - is distributed under and protected by the [Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International license](#creative-commons-by-nc-nd-40).
Full text of both licenses will be provided below.
## BSD 3-clause license
Copyright © 2024 Freyja R. L. Odinthrir
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
## Creative Commons BY-NC-ND 4.0
Copyright © 2024 Freyja R. L. Odinthrir
Attribution-NonCommercial-NoDerivatives 4.0 International
=======================================================================
Creative Commons Corporation ("Creative Commons") is not a law firm and
does not provide legal services or legal advice. Distribution of
Creative Commons public licenses does not create a lawyer-client or
other relationship. Creative Commons makes its licenses and related
information available on an "as-is" basis. Creative Commons gives no
warranties regarding its licenses, any material licensed under their
terms and conditions, or any related information. Creative Commons
disclaims all liability for damages resulting from their use to the
fullest extent possible.
Using Creative Commons Public Licenses
Creative Commons public licenses provide a standard set of terms and
conditions that creators and other rights holders may use to share
original works of authorship and other material subject to copyright
and certain other rights specified in the public license below. The
following considerations are for informational purposes only, are not
exhaustive, and do not form part of our licenses.
Considerations for licensors: Our public licenses are
intended for use by those authorized to give the public
permission to use material in ways otherwise restricted by
copyright and certain other rights. Our licenses are
irrevocable. Licensors should read and understand the terms
and conditions of the license they choose before applying it.
Licensors should also secure all rights necessary before
applying our licenses so that the public can reuse the
material as expected. Licensors should clearly mark any
material not subject to the license. This includes other CC-
licensed material, or material used under an exception or
limitation to copyright. More considerations for licensors:
wiki.creativecommons.org/Considerations_for_licensors
Considerations for the public: By using one of our public
licenses, a licensor grants the public permission to use the
licensed material under specified terms and conditions. If
the licensor's permission is not necessary for any reason--for
example, because of any applicable exception or limitation to
copyright--then that use is not regulated by the license. Our
licenses grant only permissions under copyright and certain
other rights that a licensor has authority to grant. Use of
the licensed material may still be restricted for other
reasons, including because others have copyright or other
rights in the material. A licensor may make special requests,
such as asking that all changes be marked or described.
Although not required by our licenses, you are encouraged to
respect those requests where reasonable. More considerations
for the public:
wiki.creativecommons.org/Considerations_for_licensees
=======================================================================
Creative Commons Attribution-NonCommercial-NoDerivatives 4.0
International Public License
By exercising the Licensed Rights (defined below), You accept and agree
to be bound by the terms and conditions of this Creative Commons
Attribution-NonCommercial-NoDerivatives 4.0 International Public
License ("Public License"). To the extent this Public License may be
interpreted as a contract, You are granted the Licensed Rights in
consideration of Your acceptance of these terms and conditions, and the
Licensor grants You such rights in consideration of benefits the
Licensor receives from making the Licensed Material available under
these terms and conditions.
Section 1 -- Definitions.
a. Adapted Material means material subject to Copyright and Similar
Rights that is derived from or based upon the Licensed Material
and in which the Licensed Material is translated, altered,
arranged, transformed, or otherwise modified in a manner requiring
permission under the Copyright and Similar Rights held by the
Licensor. For purposes of this Public License, where the Licensed
Material is a musical work, performance, or sound recording,
Adapted Material is always produced where the Licensed Material is
synched in timed relation with a moving image.
b. Copyright and Similar Rights means copyright and/or similar rights
closely related to copyright including, without limitation,
performance, broadcast, sound recording, and Sui Generis Database
Rights, without regard to how the rights are labeled or
categorized. For purposes of this Public License, the rights
specified in Section 2(b)(1)-(2) are not Copyright and Similar
Rights.
c. Effective Technological Measures means those measures that, in the
absence of proper authority, may not be circumvented under laws
fulfilling obligations under Article 11 of the WIPO Copyright
Treaty adopted on December 20, 1996, and/or similar international
agreements.
d. Exceptions and Limitations means fair use, fair dealing, and/or
any other exception or limitation to Copyright and Similar Rights
that applies to Your use of the Licensed Material.
e. Licensed Material means the artistic or literary work, database,
or other material to which the Licensor applied this Public
License.
f. Licensed Rights means the rights granted to You subject to the
terms and conditions of this Public License, which are limited to
all Copyright and Similar Rights that apply to Your use of the
Licensed Material and that the Licensor has authority to license.
g. Licensor means the individual(s) or entity(ies) granting rights
under this Public License.
h. NonCommercial means not primarily intended for or directed towards
commercial advantage or monetary compensation. For purposes of
this Public License, the exchange of the Licensed Material for
other material subject to Copyright and Similar Rights by digital
file-sharing or similar means is NonCommercial provided there is
no payment of monetary compensation in connection with the
exchange.
i. Share means to provide material to the public by any means or
process that requires permission under the Licensed Rights, such
as reproduction, public display, public performance, distribution,
dissemination, communication, or importation, and to make material
available to the public including in ways that members of the
public may access the material from a place and at a time
individually chosen by them.
j. Sui Generis Database Rights means rights other than copyright
resulting from Directive 96/9/EC of the European Parliament and of
the Council of 11 March 1996 on the legal protection of databases,
as amended and/or succeeded, as well as other essentially
equivalent rights anywhere in the world.
k. You means the individual or entity exercising the Licensed Rights
under this Public License. Your has a corresponding meaning.
Section 2 -- Scope.
a. License grant.
1. Subject to the terms and conditions of this Public License,
the Licensor hereby grants You a worldwide, royalty-free,
non-sublicensable, non-exclusive, irrevocable license to
exercise the Licensed Rights in the Licensed Material to:
a. reproduce and Share the Licensed Material, in whole or
in part, for NonCommercial purposes only; and
b. produce and reproduce, but not Share, Adapted Material
for NonCommercial purposes only.
2. Exceptions and Limitations. For the avoidance of doubt, where
Exceptions and Limitations apply to Your use, this Public
License does not apply, and You do not need to comply with
its terms and conditions.
3. Term. The term of this Public License is specified in Section
6(a).
4. Media and formats; technical modifications allowed. The
Licensor authorizes You to exercise the Licensed Rights in
all media and formats whether now known or hereafter created,
and to make technical modifications necessary to do so. The
Licensor waives and/or agrees not to assert any right or
authority to forbid You from making technical modifications
necessary to exercise the Licensed Rights, including
technical modifications necessary to circumvent Effective
Technological Measures. For purposes of this Public License,
simply making modifications authorized by this Section 2(a)
(4) never produces Adapted Material.
5. Downstream recipients.
a. Offer from the Licensor -- Licensed Material. Every
recipient of the Licensed Material automatically
receives an offer from the Licensor to exercise the
Licensed Rights under the terms and conditions of this
Public License.
b. No downstream restrictions. You may not offer or impose
any additional or different terms or conditions on, or
apply any Effective Technological Measures to, the
Licensed Material if doing so restricts exercise of the
Licensed Rights by any recipient of the Licensed
Material.
6. No endorsement. Nothing in this Public License constitutes or
may be construed as permission to assert or imply that You
are, or that Your use of the Licensed Material is, connected
with, or sponsored, endorsed, or granted official status by,
the Licensor or others designated to receive attribution as
provided in Section 3(a)(1)(A)(i).
b. Other rights.
1. Moral rights, such as the right of integrity, are not
licensed under this Public License, nor are publicity,
privacy, and/or other similar personality rights; however, to
the extent possible, the Licensor waives and/or agrees not to
assert any such rights held by the Licensor to the limited
extent necessary to allow You to exercise the Licensed
Rights, but not otherwise.
2. Patent and trademark rights are not licensed under this
Public License.
3. To the extent possible, the Licensor waives any right to
collect royalties from You for the exercise of the Licensed
Rights, whether directly or through a collecting society
under any voluntary or waivable statutory or compulsory
licensing scheme. In all other cases the Licensor expressly
reserves any right to collect such royalties, including when
the Licensed Material is used other than for NonCommercial
purposes.
Section 3 -- License Conditions.
Your exercise of the Licensed Rights is expressly made subject to the
following conditions.
a. Attribution.
1. If You Share the Licensed Material, You must:
a. retain the following if it is supplied by the Licensor
with the Licensed Material:
i. identification of the creator(s) of the Licensed
Material and any others designated to receive
attribution, in any reasonable manner requested by
the Licensor (including by pseudonym if
designated);
ii. a copyright notice;
iii. a notice that refers to this Public License;
iv. a notice that refers to the disclaimer of
warranties;
v. a URI or hyperlink to the Licensed Material to the
extent reasonably practicable;
b. indicate if You modified the Licensed Material and
retain an indication of any previous modifications; and
c. indicate the Licensed Material is licensed under this
Public License, and include the text of, or the URI or
hyperlink to, this Public License.
For the avoidance of doubt, You do not have permission under
this Public License to Share Adapted Material.
2. You may satisfy the conditions in Section 3(a)(1) in any
reasonable manner based on the medium, means, and context in
which You Share the Licensed Material. For example, it may be
reasonable to satisfy the conditions by providing a URI or
hyperlink to a resource that includes the required
information.
3. If requested by the Licensor, You must remove any of the
information required by Section 3(a)(1)(A) to the extent
reasonably practicable.
Section 4 -- Sui Generis Database Rights.
Where the Licensed Rights include Sui Generis Database Rights that
apply to Your use of the Licensed Material:
a. for the avoidance of doubt, Section 2(a)(1) grants You the right
to extract, reuse, reproduce, and Share all or a substantial
portion of the contents of the database for NonCommercial purposes
only and provided You do not Share Adapted Material;
b. if You include all or a substantial portion of the database
contents in a database in which You have Sui Generis Database
Rights, then the database in which You have Sui Generis Database
Rights (but not its individual contents) is Adapted Material; and
c. You must comply with the conditions in Section 3(a) if You Share
all or a substantial portion of the contents of the database.
For the avoidance of doubt, this Section 4 supplements and does not
replace Your obligations under this Public License where the Licensed
Rights include other Copyright and Similar Rights.
Section 5 -- Disclaimer of Warranties and Limitation of Liability.
a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE
EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS
AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF
ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,
IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,
WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR
PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,
ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT
KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT
ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.
b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE
TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,
NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,
INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,
COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR
USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN
ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR
DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR
IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.
c. The disclaimer of warranties and limitation of liability provided
above shall be interpreted in a manner that, to the extent
possible, most closely approximates an absolute disclaimer and
waiver of all liability.
Section 6 -- Term and Termination.
a. This Public License applies for the term of the Copyright and
Similar Rights licensed here. However, if You fail to comply with
this Public License, then Your rights under this Public License
terminate automatically.
b. Where Your right to use the Licensed Material has terminated under
Section 6(a), it reinstates:
1. automatically as of the date the violation is cured, provided
it is cured within 30 days of Your discovery of the
violation; or
2. upon express reinstatement by the Licensor.
For the avoidance of doubt, this Section 6(b) does not affect any
right the Licensor may have to seek remedies for Your violations
of this Public License.
c. For the avoidance of doubt, the Licensor may also offer the
Licensed Material under separate terms or conditions or stop
distributing the Licensed Material at any time; however, doing so
will not terminate this Public License.
d. Sections 1, 5, 6, 7, and 8 survive termination of this Public
License.
Section 7 -- Other Terms and Conditions.
a. The Licensor shall not be bound by any additional or different
terms or conditions communicated by You unless expressly agreed.
b. Any arrangements, understandings, or agreements regarding the
Licensed Material not stated herein are separate from and
independent of the terms and conditions of this Public License.
Section 8 -- Interpretation.
a. For the avoidance of doubt, this Public License does not, and
shall not be interpreted to, reduce, limit, restrict, or impose
conditions on any use of the Licensed Material that could lawfully
be made without permission under this Public License.
b. To the extent possible, if any provision of this Public License is
deemed unenforceable, it shall be automatically reformed to the
minimum extent necessary to make it enforceable. If the provision
cannot be reformed, it shall be severed from this Public License
without affecting the enforceability of the remaining terms and
conditions.
c. No term or condition of this Public License will be waived and no
failure to comply consented to unless expressly agreed to by the
Licensor.
d. Nothing in this Public License constitutes or may be interpreted
as a limitation upon, or waiver of, any privileges and immunities
that apply to the Licensor or You, including from the legal
processes of any jurisdiction or authority.
=======================================================================
Creative Commons is not a party to its public
licenses. Notwithstanding, Creative Commons may elect to apply one of
its public licenses to material it publishes and in those instances
will be considered the “Licensor.” The text of the Creative Commons
public licenses is dedicated to the public domain under the CC0 Public
Domain Dedication. Except for the limited purpose of indicating that
material is shared under a Creative Commons public license or as
otherwise permitted by the Creative Commons policies published at
creativecommons.org/policies, Creative Commons does not authorize the
use of the trademark "Creative Commons" or any other trademark or logo
of Creative Commons without its prior written consent including,
without limitation, in connection with any unauthorized modifications
to any of its public licenses or any other arrangements,
understandings, or agreements concerning use of licensed material. For
the avoidance of doubt, this paragraph does not form part of the
public licenses.
Creative Commons may be contacted at creativecommons.org.


@@ -1,6 +1,6 @@
title: My lover
description: Hello love
-# To my lover, Jaimie
+# To my lover, Jamie
I love you very much, babydoll. <3 you bring me so much joy and happiness when we're together.

entry Executable file

@@ -0,0 +1,50 @@
#!/usr/bin/env python3
import os
from datetime import datetime
# Get the current date and time
now = datetime.now()
year = now.strftime("%Y")
month = now.strftime("%m")
day = now.strftime("%d")
time_str = now.strftime("%H:%M")
time_str_sec = now.strftime("%H%M%S")
# Ordinal suffix for the day of the month (note: currently unused below).
# Days 11-13 take "th" rather than "st"/"nd"/"rd".
if day in ("11", "12", "13"): suffix = "th"
elif day.endswith("1"): suffix = "st"
elif day.endswith("2"): suffix = "nd"
elif day.endswith("3"): suffix = "rd"
else: suffix = "th"
# Create the directory structure
directory = os.path.join("content", year, month, day)
os.makedirs(directory, exist_ok=True)
# Define the filename
filename = f"blog-entry-{time_str_sec}.md"
file_path = os.path.join(directory, filename)
print(file_path)
HEADER = f'''title:
description:
tags:
date: {year}-{month}-{day} {time_str}
# Blog Entry
'''
# Create the Markdown file and write the header
with open(file_path, 'a+') as file:
    # The with-block closes the file automatically; no explicit close() needed.
    file.write(HEADER)
print(f"Blog entry created: {file_path}")


@@ -20,7 +20,7 @@
{%- endfor %}
{% endif %}
</p>
-<p>published on {{ date.date() }} </p>
+<p>published on {{ date.date() }} at {{ date.time() }}</p>
<p>{%- if edited %} edited on {{ edited }}{% endif -%}</p>
</aside>


@@ -23,6 +23,7 @@
<h2>{{ site.description }}</h2>
<ul>
<li><h2><a href="/">Home</a></h2></li>
+<li><h2><a href="/copyright.html">Copyright Notice</a></h2></li>
<li><h2><a href="/love.html">A love note</a></h2></li>
<li><h2><a href="/tags/">Tags</a></h2></li>
<li><h2><a href="/archive.html">Archive</a></h2></li>
@@ -38,6 +39,8 @@
</main>
<footer>
<a href="/copyright.html">Copyleft 🄯 2024 Freyja R. L. Odinthrir "All Wrongs Reversed"</a>
<br>
Subscribe to the <a href="/atom.xml">atom feed</a>.
<br>
<!-- Contact me via