Initial commit: Claude project knowledge documents

Infrastructure, media, photos, and networking project knowledge
files used as reference context for Claude Code sessions.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Commit `6d3269f54c` by Jozsef Gitta, 2026-04-09 13:28:41 -05:00
5 changed files with 576 additions and 0 deletions

{
"permissions": {
"allow": [
"mcp__truenas__list_snapshots",
"mcp__truenas__list_nfs_exports",
"mcp__truenas__get_dataset"
]
}
}

# Homelab Infrastructure — Project Knowledge
## Host Machine
- **ThinkStation-P710** — Primary Proxmox host
- IP: `192.168.88.25` (jg-hud)
- Specs: Dual Xeon E5 v4, 110GB ECC RAM, 16 cores
- OS: Proxmox VE
- SSH: `root@192.168.88.25` (key: SHA256:i4whMVfdf+SxGttDKhL9PEltWv2ABPQs7BRUunCfcf8)
## Local Workstation
- **ThinkStation-P710** (separate machine from the Proxmox host above)
- IP: `192.168.88.41`
- Specs: Dual Xeon E5 v4
- OS: Linux Mint
- Also runs Frigate NVR directly
## Network
- **MikroTik router**: `192.168.88.1`
- Main LAN: `192.168.88.0/24`
- IoT VLAN: `192.168.2.0/24`
- Static WAN IP: `184.170.161.177`
- **Synology RT6600ax**: WiFi AP only
- **DNS**: Split-DNS via MikroTik static entries + Pi-hole forwarding local domains
- **Cloudflare**:
- Zone ID: `3511a9ac47fa469b24a5c0f411063da4`
- DNS API token: `gEgJiSdJKSLhnCwQGXRiRWgz5WhTmkZQk0H89X8p`
- DNS Edit token: `cfat_RhlfvleyZGst8k6rTVx9Ti7x3b8NuE3uOxExka3i9d128e8c`
- All jgitta.com subdomains point to WAN IP
## Proxmox VMs & Containers
| ID | Name | IP | Specs | Role |
|---|---|---|---|---|
| VM102 | jellyfin | 192.168.88.10 | 2c/12GB | Jellyfin + *arr stack + qBittorrent/ProtonVPN |
| VM103 | next | 192.168.88.62 | 2c/8GB | Nextcloud + OnlyOffice |
| VM104 | Windows11-NVR | — | 8c/11GB | Windows VM |
| VM105 | openclaw-debian | — | 4c/4GB | — |
| VM106 | haos | 192.168.88.39 | 2c/8GB | Home Assistant OS |
| VM107 | pbs | 192.168.88.60 | 4c/8GB | Proxmox Backup Server (nightly full backups) |
| VM112 | siklos/docker-server | 192.168.88.27 | 4c/12GB | Main Docker host |
| VM113 | photos | 192.168.88.32 | 4c/16GB | PhotoPrism + Immich (dedicated photo VM) |
| VM113 | zorin-os | — | 2c/4GB | — |
| CT200 | gitea | 192.168.88.200 | 1c/512MB | Gitea git server (port 3000) |
| CT201 | openclaw | 192.168.88.29 | — | AI gateway (Gemini + Telegram bot, port 18789) |
| CT202 | caddy-proxy | 192.168.88.110 | 2c/4GB | Caddy reverse proxy |
| VM111 | debian-1 | — | — | Stopped/unused |
## Storage Pools (on jg-hud)
| Pool | Size | Type |
|---|---|---|
| nvme-thin | 4.3TB | LVM |
| SSD-1.7T | 1.7TB | — |
| big-11t | 10.8TB | — |
| HD4T-1 | 3.6TB | — |
| HD4T-2 | 3.6TB | — |
| nextcloud-fast | 1.8TB | ZFS |
| pbs | 39TB | PBS |
- **TrueNAS**: `192.168.88.24` — 25.5TB (TrueNAS-NFS pool)
- **Photos NFS**: `/mnt/big-11t/photos` on Proxmox host, NFS-exported to siklos and ThinkStation
## SSH Access Pattern
- Direct to Proxmox host: `root@192.168.88.25`
- To Siklos (VM112): `ssh -o StrictHostKeyChecking=no jgitta@192.168.88.27` (from jg-hud)
- To other VMs: use VM103 or VM107 as jump hosts
- Public key: `ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMRHmoQ63d1qi5yjYoFm8FgnBwUo5uNyRCPChW25DmjF root@jg-hud`
- **Note**: Reach Siklos via Proxmox host shell SSH — QEMU agent times out on long commands
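The jump pattern above can be captured in a local `~/.ssh/config` so each hop does not need flags spelled out every time. A sketch, assuming these host aliases (the aliases are illustrative; IPs, users, and the jump host come from this document):

```
# Direct: Proxmox host
Host jg-hud
    HostName 192.168.88.25
    User root

# Siklos, reached through the Proxmox host
Host siklos
    HostName 192.168.88.27
    User jgitta
    ProxyJump jg-hud
    StrictHostKeyChecking no
```

With this in place, `ssh siklos` performs the two-hop connection in one command.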
## Caddy Reverse Proxy (CT202 — 192.168.88.110)
- Version: v2.11.2 with Cloudflare DNS plugin
- Config: `/etc/caddy/Caddyfile` imports `/etc/caddy/snippets.caddy` and `/etc/caddy/sites/*.caddy`
- TLS: DNS challenge via Cloudflare
- Snippets: `headers_base`, `internal_only`, `web_secure`, `web_media`, `stream_secure`, `proxy_timeouts`, `pve_proxy`
### sites/infrastructure.caddy
| Subdomain | Backend | Notes |
|---|---|---|
| proxmox.jgitta.com | 192.168.88.25:8006 | internal_only |
| pbs.jgitta.com | 192.168.88.60:8007 | internal_only |
| mesh.jgitta.com | 192.168.88.27:444 | MeshCentral |
| pihole.jgitta.com | 192.168.88.27:8080 | internal_only |
| homarr.jgitta.com | 192.168.88.27:7575 | — |
| apache.jgitta.com | 192.168.88.27:8383 | Guacamole |
| notes.jgitta.com | 192.168.88.27:3010 | Karakeep |
| links.jgitta.com | 192.168.88.27:3015 | Linkwarden |
| gitea.jgitta.com | 192.168.88.200:3000 | — |
| status.jgitta.com | 192.168.88.27:3001 | Uptime Kuma, internal_only |
| grafana.jgitta.com | 192.168.88.27:3020 | internal_only |
| glances.jgitta.com | 192.168.88.27:61208 | internal_only |
| dashboard.jgitta.com | 192.168.88.27:8096 | custom dashboard |
| blue.jgitta.com | 192.168.88.47:75 | — |
| claw.jgitta.com | 192.168.88.29:18789 | internal_only |
| office.jgitta.com | 192.168.88.27:8880 | OnlyOffice |
| jgitta.com / www | 192.168.88.27:8095 | WordPress |
### sites/media.caddy
| Subdomain | Backend | Notes |
|---|---|---|
| jellyfin.jgitta.com | 192.168.88.10:8096 | stream_secure |
| sonarr.jgitta.com | 192.168.88.10:8989 | internal_only |
| radarr.jgitta.com | 192.168.88.10:7878 | internal_only |
| prowlarr.jgitta.com | 192.168.88.10:9696 | internal_only |
| ha.jgitta.com | 192.168.88.39:8123 | Home Assistant |
| cameras.jgitta.com | 192.168.88.41:8971 | Frigate, stream_secure |
| next.jgitta.com | 192.168.88.62:80 | Nextcloud |
| collabora.jgitta.com | 192.168.88.62:9980 | Collabora (may be replaced by OnlyOffice) |
| hub.jgitta.com | 192.168.88.65 | Hubitat |
| pictures.jgitta.com | 192.168.88.32:2283 | Immich (VM113) |
| photos.jgitta.com | 192.168.88.32:2342 | PhotoPrism (VM113) |
### When adding new subdomains
Always handle DNS (Cloudflare) and Caddy changes together as a single operation.
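A sketch of what handling both halves together looks like, using a hypothetical subdomain `example.jgitta.com` and a hypothetical backend port: create the Cloudflare A record (plus the MikroTik static DNS entry pointing at 192.168.88.110 for internal resolution), then drop a matching site block into `/etc/caddy/sites/`:

```
example.jgitta.com {
	import web_secure
	reverse_proxy 192.168.88.27:9999 {
		import proxy_timeouts
	}
}
```

Finish with `systemctl reload caddy` on CT202, then verify both internal and external resolution.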
## Jellyfin (VM102 — 192.168.88.10)
- URL: `https://jellyfin.jgitta.com`
- Port: 8096
- API Token: `b861664663924b68858a67b91f6d1188`
- Admin User ID: `12659173-DBBE-43E5-B116-DE70B63A470D`
- DB: `/var/lib/jellyfin/data/jellyfin.db` (SQLite)
- Virtual folders config: `/var/lib/jellyfin/root/default/<LibraryName>/options.xml`
- LiveTV config: `/etc/jellyfin/livetv.xml`
- HDHomeRun tuner: `http://192.168.88.22` (device ID: 10B00DE1)
### Media Libraries
| Library | CollectionFolder ID | Path |
|---|---|---|
| Shows | A656B907-EB3A-7353-2E40-E44B968D0225 | `/mnt/media/tv` |
| Movies | F137A2DD-21BB-C1B9-9AA5-C0F6BF02A805 | `/mnt/media/movies` |
| Recordings | 79A2726D-3C50-E769-A8AF-1E4184E4FCCF | `/mnt/media/recordings` |
Media mount: `/mnt/media` (5.8TB, ~55% used) — subdirs: `movies/`, `tv/`, `recordings/`, `torrents/`
### Critical: Library Path Management
**NEVER edit `options.xml` directly to change library paths.** Jellyfin ignores changes made directly to the file while running, and on restart it overwrites the file from its internal state.
Always use the API to add/remove paths:
```bash
# Add a path to a library
curl -X POST "http://localhost:8096/Library/VirtualFolders/Paths" \
-H "Authorization: MediaBrowser Token=\"<token>\"" \
-H "Content-Type: application/json" \
-d '{"Name":"Shows","PathInfo":{"Path":"/mnt/media/tv"}}'
# Remove a path from a library
curl -X DELETE "http://localhost:8096/Library/VirtualFolders/Paths?name=Shows&path=%2Fmnt%2Fmedia%2Ftv&refreshLibrary=false" \
-H "Authorization: MediaBrowser Token=\"<token>\""
```
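The `path` query parameter in the DELETE call must be percent-encoded (`/mnt/media/tv` becomes `%2Fmnt%2Fmedia%2Ftv`). A small helper for building that value, assuming `python3` is present on the VM (the helper name is illustrative):

```bash
# Percent-encode a filesystem path for the VirtualFolders API query string
encode_path() {
  python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' "$1"
}

encode_path /mnt/media/tv   # prints %2Fmnt%2Fmedia%2Ftv
```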
### Root Cause History: TV Shows Not Appearing (April 2026)
The Recordings library was accidentally configured with path `/mnt/media` (the entire media mount) instead of `/mnt/media/recordings`. This caused:
- Recordings library to scan and "own" all TV shows and movies
- Shows library to have no path configured (empty PathInfos) — never scanned `/mnt/media/tv`
- All items getting `TopParentId` pointing to the `/mnt/media` Folder (not Shows CollectionFolder)
- Jellyfin auto-creating a `Recordings2` library whenever LiveTV checked `/mnt/media/recordings` didn't match Recordings' path `/mnt/media`
Fix: Use API to remove `/mnt/media` from Recordings, add `/mnt/media/recordings` to Recordings, add `/mnt/media/tv` to Shows, delete Recordings2, wipe BaseItems table (preserving UserData), restart + scan.
### DB Wipe Command (preserves UserData/watch history)
```sql
DELETE FROM AncestorIds; DELETE FROM BaseItemImageInfos;
DELETE FROM BaseItemMetadataFields; DELETE FROM BaseItemProviders;
DELETE FROM BaseItemTrailerTypes; DELETE FROM Chapters;
DELETE FROM MediaStreamInfos; DELETE FROM AttachmentStreamInfos;
DELETE FROM TrickplayInfos; DELETE FROM KeyframeData;
DELETE FROM PeopleBaseItemMap; DELETE FROM Peoples;
DELETE FROM MediaSegments; DELETE FROM ItemValuesMap;
DELETE FROM ItemValues; DELETE FROM BaseItems; VACUUM;
```
Run as: `sudo sqlite3 /var/lib/jellyfin/data/jellyfin.db "..."`
(Must stop Jellyfin first: `sudo systemctl stop jellyfin`)
### Recordings2 Auto-Creation
If a `Recordings2` library appears, it means LiveTV's recording path in `/etc/jellyfin/livetv.xml` doesn't match the Recordings library path. Fix via API (remove/add paths) rather than editing XML files.
## Dotfiles
- Gitea: `http://192.168.88.200:3000/jgitta/dotfiles.git`
- Token (read/write): `56ac53def371df34d8f9c4b5580b28d4f62ab1ab`
- Includes: micro config, bash aliases, `update-dotfiles` alias
- Preferred editor: `micro` (never nano)
## Backups
- PBS (VM107, 192.168.88.60): nightly full backups of all VMs
- TrueNAS (192.168.88.24): bulk storage backups
## Key Notes
- KSM enabled with ksmtuned on Proxmox host
- Watchtower runs on Siklos but Pi-hole is excluded from auto-updates
- When restoring VMs: restore all disks to HD4T-2 first, then `qm disk move` OS disk to nvme-thin
- `delete_vm` MCP tool removes all disks — always confirm before running
## DNS Resolution Flow (Split-DNS)
All `*.jgitta.com` subdomains resolve internally to Caddy (192.168.88.110) via MikroTik static DNS entries — **not** through Cloudflare/public DNS. If a subdomain doesn't resolve internally, check MikroTik static DNS first, then Caddy config.
- **Internal**: MikroTik static entries → 192.168.88.110 (Caddy)
- **External**: Cloudflare A records → 184.170.161.177 (WAN) → NAT → Caddy
- **Other names**: Pi-hole (192.168.88.27:53) handles upstream forwarding
- **`.homenet` names**: MikroTik static DNS only (e.g., `siklos.homenet`, `jellyfin.homenet`)
- **IoT DNS**: Force-redirected to Pi-hole via MikroTik DSTNAT rules
**Firewall gotcha**: `vlan20-IoT` is dynamically added to the WAN interface list. NAT rules using `in-interface-list=WAN` will match IoT traffic. WAN-facing rules must use `in-interface=ether1` instead.
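In RouterOS terms, the two points above might look like the following (a sketch: the forwarded port, rule order, and comments are illustrative, not exported from the router):

```
# Force IoT DNS to Pi-hole via DSTNAT
/ip firewall nat add chain=dstnat in-interface=vlan20-IoT protocol=udp dst-port=53 \
    action=dst-nat to-addresses=192.168.88.27 to-ports=53

# WAN-facing rule bound to the physical interface, not the WAN interface list
/ip firewall nat add chain=dstnat in-interface=ether1 protocol=tcp dst-port=443 \
    action=dst-nat to-addresses=192.168.88.110 to-ports=443
```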
## Service Architecture — Native vs Docker
**VM102 (jellyfin, 192.168.88.10)** — native systemd for core services:
- `jellyfin.service` — native systemd
- `qbittorrent-nox.service` — native systemd
- `me.proton.vpn.split_tunneling.service` — native systemd
- Radarr, Sonarr, Prowlarr, FlareSolverr — **Docker containers**
**VM103 (next, 192.168.88.62)** — Nextcloud native Apache:
- Nextcloud 33.0.0 — native Apache (`/etc/apache2/sites-enabled/nextcloud.conf`)
- Web root: `/var/www/nextcloud/`
- Config: `/var/www/nextcloud/config/config.php`
- Data dir: `/mnt/nextcloud-data` (local 2TB disk `/dev/sdb1`, ~65% used)
- DB: MySQL on localhost, db `nextcloud`, user `nextcloud`
- Tiered storage mounts:
- Warm: `192.168.88.25:/nvme5-1tb/nextcloud-data` → `/mnt/warm-storage` (900GB NFS)
- Cold: `192.168.88.24:/mnt/pool1/cold-storage` → `/mnt/cold-storage` (58TB NFS from TrueNAS)
- Media: `192.168.88.25:/mnt/big-11t/media` → `/mnt/media`
**VM112 / Siklos (192.168.88.27)** — all services Docker; compose files at `/srv/docker/<service>/` (RAM reduced to 12GB after photo services migrated out)
**VM113 / photos (192.168.88.32)** — dedicated photo services VM, Docker:
- PhotoPrism — port 2342, compose `/srv/docker/photoprism/docker-compose.yml`
- Storage: `/srv/docker/photoprism/storage_data/` (14GB)
- DB: MariaDB 11 at `/srv/docker/photoprism/db_data/`
- Immich — port 2283, compose `/srv/docker/immich/docker-compose.yml`
- Library: `/srv/docker/immich/library/` (22GB)
- DB: PostgreSQL at `/srv/docker/immich/postgres/`
- NFS: `/mnt/photos` from `192.168.88.25:/mnt/big-11t/photos`
- Node exporter: port 9100
**ThinkStation workstation (192.168.88.41)** — Linux Mint 22.3 (separate machine from Proxmox host at .25):
- Frigate NVR — Docker, image `ghcr.io/blakeblackshear/frigate:stable-tensorrt`
- Compose: `/opt/frigate/docker-compose.yml`
- Ports: 8971 (web UI), 5000, 8554-8555 (RTSP), 8555/udp
- Open-WebUI — Docker (no external ports)
- Disks: OS `/dev/sda2` (234GB, 73%), `/dev/sdb2` 14TB HD at `/mnt/14TB-HD` (72%), `/dev/nvme0n1p1` 469GB at `/mnt/INTEL-SSD`, `/dev/sdc1` 1.8TB at `/mnt/StorFly-SSD`
- NFS mount: `/mnt/photos` from `192.168.88.25:/mnt/big-11t/photos`
## VM Disk → Storage Pool Mapping
| VM | Disk | Pool | Size | Notes |
|---|---|---|---|---|
| VM102 jellyfin | scsi0 (OS) | big-11t | 32GB | |
| VM102 jellyfin | scsi1 (media) | big-11t | 6TB | `/mnt/media` inside VM |
| VM103 next | scsi0 (OS) | nvme-thin | 32GB | |
| VM103 next | scsi1 (data) | nvme-thin | 2TB | local nextcloud data disk |
| VM106 haos | scsi0 | HD4T-2 | 32GB | |
| VM107 pbs | scsi0 | SSD-1.7T | 32GB | |
| VM112 siklos | scsi0 | nvme-thin | 250GB | |
**Storage pool usage** (as of April 2026):
| Pool | Type | Total | Used% |
|---|---|---|---|
| nvme-thin | lvmthin | 4.4TB | 39% |
| big-11t | dir | 10.8TB | 59% |
| SSD-1.7T | dir | 1.7TB | 70% |
| HD4T-1 | dir | 3.6TB | 70% |
| HD4T-2 | dir | 3.6TB | 0.2% |
| pbs | PBS | 64TB | 23% |
| nextcloud-fast | zfspool | — | INACTIVE |
| nvme4-2tb | zfspool | 1.8TB | 0% |
| nvme5-1tb | zfspool | 900GB | 0% |
## NFS Export Map
**From jg-hud (192.168.88.25)**:
| Export | Clients | Mounted by |
|---|---|---|
| `/mnt/big-11t/media` | 192.168.88.0/24 | VM102 (`/mnt/media`), VM103 (`/mnt/media`) |
| `/mnt/big-11t/photos` | 192.168.88.0/24 | VM112 (`/mnt/photos`), ThinkStation (`/mnt/photos`) |
| `/nvme5-1tb/nextcloud-data` | 192.168.88.62 only | VM103 (`/mnt/warm-storage`) |
**From TrueNAS (192.168.88.24)**:
| Export | Mounted by |
|---|---|
| `/mnt/pool1/cold-storage` | VM103 (`/mnt/cold-storage`, 58TB) |
| PBS dataset | VM107 PBS (`/mnt/pbsdataset/pbs-store`) |
## Backup Schedule & Retention
**Job 1 — PBS (enabled)**: Mon/Wed/Fri at 21:00
- All VMs/CTs **except VM107** (PBS itself)
- Storage: `pbs` (PBS datastore `pbs-store` on TrueNAS NFS at `/mnt/pbsdataset/pbs-store`)
- Retention: keep-daily=7, keep-weekly=4, keep-monthly=3
- Email notification: `jgitta@jgitta.com`
**Job 2 — TrueNAS NFS (disabled)**: Sun/Tue/Thu/Sat at 00:00
- All VMs, storage: `Proxmox-TrueNAS-NFS`, keep-last=5
## TrueNAS (192.168.88.24)
- TrueNAS SCALE 24.10.2.4
- **pool1**: RAIDZ2, 8 disks, ~63.7TB used / ~104.4TB total, ONLINE/healthy
- Hosts PBS backup store (NFS-exported to VM107)
- Hosts cold-storage NFS export (58TB, mounted on VM103)
- API: use `curl -sk https://192.168.88.24/api/v2.0/<endpoint> -u "root:<password>"`
- MCP connection is broken — use curl API directly
- Disks with SMART warnings: sda/9JH31S2T (40 unreadable, 24 ATA errors), sdh/9JG47UTT (13 ATA errors)
## Log Locations
| Service | Location | Command |
|---|---|---|
| Jellyfin | `/var/log/jellyfin/` | `tail -f /var/log/jellyfin/jellyfin$(date +%Y%m%d).log` on VM102 |
| FFmpeg transcode | `/var/log/jellyfin/FFmpeg.Transcode-*.log` | VM102 |
| Caddy | systemd journal | `journalctl -u caddy -f` on CT202 |
| Pi-hole | FTL DB / journal | `/etc/pihole/pihole-FTL.db`, `journalctl -u pihole-FTL` on Siklos |
| Docker services (Siklos) | docker logs | `docker compose logs -f` in `/srv/docker/<service>/` |
| Proxmox | `/var/log/pve/` | on jg-hud (192.168.88.25) |
| qBittorrent | systemd journal | `journalctl -u qbittorrent-nox -f` on VM102 |
| ProtonVPN | systemd journal | `journalctl -u me.proton.vpn.split_tunneling -f` on VM102 |
## Uptime Kuma Monitors (Siklos, port 3001)
HTTP checks: Caddy Health, Gitea, Glances, Grafana, Homarr, Home Assistant, Immich, Jellyfin, Karakeep, Linkwarden, MeshCentral, Nextcloud, PBS, Pi-hole web, Prometheus
Port checks: MikroTik SSH (:22), Pi-hole DNS (:53), Proxmox SSH (:22), Siklos SSH (:22)
DNS check: Pi-hole DNS resolution (google.com via 192.168.88.27:53)
Metric check: Node Exporter on ThinkStation (192.168.88.41:9100)
**Not monitored**: VM102 SSH, qBittorrent, ProtonVPN status, Radarr/Sonarr/Prowlarr health, TrueNAS, Frigate

# Siklos Docker Services — Project Knowledge
## Host
- **VM112 / siklos / docker-server**
- IP: `192.168.88.27`
- Specs: 4c/12GB RAM (reduced from 16GB after photo services migrated to VM113)
- SSH: `jgitta@192.168.88.27` (via Proxmox host jump)
- Docker compose files: `/srv/docker/<service>/docker-compose.yml`
- Note: `vm.swappiness=10` set in `/etc/sysctl.conf` (April 2026)
## Running Containers (as of April 2026)
| Container | Image | Port(s) | Compose Path |
|---|---|---|---|
| pihole | pihole/pihole:latest | 53, 8080 | /srv/docker/pihole/ |
| onlyoffice | onlyoffice/documentserver | 8880 | /srv/docker/media/ |
| homarr | homarr:latest | 7575 | /srv/docker/homarr/ |
| uptime-kuma | uptime-kuma:2 | 3001 | /srv/docker/uptime-kuma/ |
| grafana | grafana:latest | 3020 | /srv/docker/monitoring/ |
| prometheus | prom/prometheus:latest | 9090 | /srv/docker/monitoring/ |
| node-exporter | prom/node-exporter:latest | 9100 | /srv/docker/monitoring/ |
| cadvisor | cadvisor:latest | 8090 | /srv/docker/monitoring/ |
| graphite-exporter | prom/graphite-exporter:latest | 9108-9109 | /srv/docker/monitoring/ |
| glances | nicolargo/glances:latest | 61208 | /srv/docker/glances/ |
| meshcentral | typhonragewind/meshcentral:latest | 444 | /srv/docker/meshcentral/ |
| guacamole | jwetzell/guacamole | 8383 | /srv/docker/guacamole/ |
| karakeep-web-1 | karakeep:release | 3010 | /srv/docker/karakeep/ |
| karakeep-meilisearch-1 | meilisearch:v1.13.3 | 7700 (internal) | /srv/docker/karakeep/ |
| karakeep-chrome-1 | alpine-chrome:124 | — | /srv/docker/karakeep/ |
| linkwarden-linkwarden-1 | linkwarden:latest | 3015 | /srv/docker/linkwarden/ |
| linkwarden-postgres-1 | postgres:16-alpine | 5432 | /srv/docker/linkwarden/ |
| wordpress | wordpress:php8.3-apache | 8095 | /srv/docker/wordpress/ |
| wordpress-db | mariadb:10.11 | 3306 (internal) | /srv/docker/wordpress/ |
| dashy | lissy93/dashy:latest | 8081 | /srv/docker/dashy/ |
| dashboard | dashboard-dashboard | 8096 | /srv/docker/dashboard/ |
| grav | linuxserver/grav:latest | 8585 | /srv/docker/grav/ |
| watchtower | containrrr/watchtower | — | /srv/docker/watchtower/ |
## Migrated Services
- **PhotoPrism** and **Immich** were migrated to VM113 (photos, 192.168.88.32) in April 2026
- See `/home/jgitta/Documents/Claude/Projects/Photos/photos.md` for current details
## Pi-hole
- Port: 53 (DNS), 8080 (web UI)
- URL: `https://pihole.jgitta.com`
- Docker network: `pihole_default`
- Docker IP: `172.28.0.2` (used by Uptime Kuma DNS monitor)
- Config: `listeningMode = "ALL"` in pihole.toml (required for Docker)
- FTL DB: `/etc/pihole/pihole-FTL.db`
- Rate limit: 300 concurrent queries
- Excluded from Watchtower auto-updates
- Pi-hole v6
## Monitoring Stack
- Compose: `/srv/docker/monitoring/`
- Grafana: port 3020 (`grafana.jgitta.com`), datasource UID: `cffiqslf48feod`
- Prometheus: port 9090
- Node Exporter on: siklos (.27), proxmox (.25), nextcloud (.62), jellyfin (.10), pbs (.60), caddy (.110), thinkstation (.41) — all port 9100
- Grafana alert folder "Homelab Alerts":
- High RAM >90% for 5min
- Swap >50% for 5min
- CPU >90% for 10min
- Disk >85% for 5min
- Node Down 2min
- Alert annotations: `{{ $labels.instance }}` and `{{ $values.B }}%`
- Alerts use three-step reduce+threshold pipeline (not classic conditions)
- Telegram: bot token `8758434542:AAEW6omM7twyInsb2INuy6mD1w2EWXHqmzE`, chat `8260387200`, repeat every 4h
- Uptime Kuma: port 3001 (`status.jgitta.com`), joined to `pihole_default` network
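The node-exporter fleet above could be covered by a single static scrape job in Prometheus; a sketch (the job name and interval are assumptions, the targets are from this document):

```yaml
scrape_configs:
  - job_name: node
    scrape_interval: 30s
    static_configs:
      - targets:
          - 192.168.88.27:9100   # siklos
          - 192.168.88.25:9100   # proxmox (jg-hud)
          - 192.168.88.62:9100   # nextcloud
          - 192.168.88.10:9100   # jellyfin
          - 192.168.88.60:9100   # pbs
          - 192.168.88.110:9100  # caddy
          - 192.168.88.41:9100   # thinkstation
```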
## OnlyOffice
- URL: `https://office.jgitta.com`
- Port: 8880 (all interfaces)
- Compose: `/srv/docker/media/docker-compose.yml`
- Replaces Collabora (`richdocuments` app is disabled in Nextcloud; `onlyoffice` app is enabled)
- JWT secret (must match Nextcloud config): `4f2b0c719af2de99befacfec9ca5e8373cbdeb76`
- Nextcloud `occ` settings (set on VM103/next):
- `DocumentServerUrl` = `https://office.jgitta.com/`
- `DocumentServerInternalUrl` = `http://192.168.88.27:8880/`
- `StorageUrl` = `https://next.jgitta.com/`
- `jwt_secret` = (matches container `local.json` above)
- `jwt_header` = `Authorization`
- To reconfigure after container recreation: re-run `occ config:app:set onlyoffice jwt_secret --value="<secret from local.json>"`
- Container JWT secret location: `/etc/onlyoffice/documentserver/local.json` → `.services.CoAuthoring.secret.inbox.string`
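After a container recreation, all five settings above can be reapplied in one pass. A non-runnable command sketch (the `www-data` user and Nextcloud path are assumptions, and the secret placeholder must be filled from `local.json`):

```bash
cd /var/www/nextcloud
sudo -u www-data php occ config:app:set onlyoffice DocumentServerUrl --value="https://office.jgitta.com/"
sudo -u www-data php occ config:app:set onlyoffice DocumentServerInternalUrl --value="http://192.168.88.27:8880/"
sudo -u www-data php occ config:app:set onlyoffice StorageUrl --value="https://next.jgitta.com/"
sudo -u www-data php occ config:app:set onlyoffice jwt_secret --value="<secret from local.json>"
sudo -u www-data php occ config:app:set onlyoffice jwt_header --value="Authorization"
```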
## Key Notes
- Watchtower excludes Pi-hole from auto-updates
- NFS mount `/mnt/photos` was removed from Siklos `/etc/fstab` after PhotoPrism/Immich migration
- OnlyOffice replaced Collabora (lighter RAM usage)
- RAM reduced from 16GB → 12GB (April 2026, live via Proxmox balloon driver, no reboot)
- `vm.swappiness=10` set to reduce swap pressure after photo services migrated out

# Media Stack — Project Knowledge
## Host: VM102 / jellyfin
- IP: `192.168.88.10`
- Specs: 2c/12GB RAM
- SSH: `root@192.168.88.10` (via Proxmox host jump)
- Media disk: 6000GB at `/mnt/media/`
- Single `/data` mount structure for hardlink support
## Services on VM102
| Service | Port | URL |
|---|---|---|
| Jellyfin | 8096 | jellyfin.jgitta.com |
| qBittorrent-nox | — | — |
| Radarr | 7878 | radarr.jgitta.com (internal only) |
| Sonarr | 8989 | sonarr.jgitta.com (internal only) |
| Prowlarr | 9696 | prowlarr.jgitta.com (internal only) |
| FlareSolverr | 8191 | — |
## Jellyfin
- Port: 8096
- GPU: NVIDIA (VFIO passthrough from Proxmox host — GPU is NOT on host, it's passed to VM102)
- Hardware transcoding: NVIDIA NVENC enabled
- Live TV: HDHomeRun FLEX QUATRO
- DeviceID: `10B00DE1`
- IP: `192.168.88.22`
- DeviceAuth-based XMLTV guide (refreshed every 20h via cron)
- Script: `/home/jgitta/scripts/refresh-hdhomerun-guide.sh`
- DVR license key: unassigned (SiliconDust ticket raised)
- DNS fix service: `fix-jellyfin-dns.service` (fixes DNS after ProtonVPN kill switch on boot)
- Sets nameserver to `192.168.88.1`
- Removes kill switch default route
## ProtonVPN
- Interface: `proton0` / `pvpnksintrf1` (kill switch interface)
- Kill switch enabled
- qBittorrent bound to proton0
- Port forwarding: port `54514`, renews via NAT-PMP every 60s
- Cron sync script: runs every 5 minutes (`*/5`)
- Split tunneling service: `me.proton.vpn.split_tunneling.service`
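The 5-minute sync would be registered as a cron entry along these lines (the script name and path are hypothetical, not read from the VM):

```
# crontab on VM102: push ProtonVPN's current forwarded port into qBittorrent
*/5 * * * * /usr/local/bin/sync-vpn-port.sh
```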
## qBittorrent-nox
- Bound to ProtonVPN interface
- systemd hard cap: `MemoryMax=2G`
- Downloads to `/mnt/media/downloads/`
## *arr Stack
- All services in Docker on VM102
- Quality profiles: HD-1080p (815GB cap)
- Prowlarr application URLs must point to `192.168.88.10`, not a stale or incorrect IP
- IndexerStatus/DownloadClientStatus cleared from SQLite DBs if stale
## Photos (on VM113 / photos — 192.168.88.32)
- PhotoPrism: `https://photos.jgitta.com` (port 2342)
- Immich: `https://pictures.jgitta.com` (port 2283)
- Source: `/mnt/photos/joe` and `/mnt/photos/cynthia` (NFS from big-11t, mounted at `/mnt/photos`)
- Migrated from Siklos (VM112) to dedicated VM113: April 2026
## Home Automation
- **Home Assistant** (VM106): `192.168.88.39`, `ha.jgitta.com`
- Integrated with Hubitat via HACS
- Grafana/Telegram monitoring
- **Hubitat**: `192.168.88.65`, `hub.jgitta.com`
- Primary Z-Wave/Zigbee hub
- Maker API: app ID `146`, token `8fb12954-f427-4972-99e2-f9c893d4420c`
- Zooz S505D Matter dimmer "Island Light": static DHCP at `192.168.88.15`
- **Frigate NVR** (ThinkStation, `192.168.88.41`): `cameras.jgitta.com`
- Auth disabled, Caddy proxy headers: `Remote-User: guest_viewer`, `Remote-Role: viewer`
- Reolink cameras: `192.168.88.100-105`
- Front driveway zone: car detection → Alexa announcements
- Version: 0.17
## SMTP
- Fastmail: `jgitta@jgitta.com`

Photos/photos.md
# Photos — Project Knowledge
## Storage
- **NFS source**: `/mnt/big-11t/photos` on Proxmox host (jg-hud, 192.168.88.25)
- NFS exported to:
- Photos/VM113 (`192.168.88.32`) — mounted as `/mnt/photos`
- ThinkStation (`192.168.88.41`) — mounted as `/mnt/photos`
- Systemd mount units on both hosts with `network-online.target` dependency (fixes boot-time NFS failures)
- Folder structure:
- `/mnt/photos/joe` — Joe's photos (organized by year: 1972, 1997, 1998...)
- `/mnt/photos/cynthia` — Cynthia's photos (organized by named folders)
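The boot-ordering fix above can be sketched as a systemd mount unit (the unit name must match the mount point; the options shown are typical defaults, not copied from the hosts):

```
# /etc/systemd/system/mnt-photos.mount (hypothetical transcription)
[Unit]
Description=Photos NFS from jg-hud
After=network-online.target
Wants=network-online.target

[Mount]
What=192.168.88.25:/mnt/big-11t/photos
Where=/mnt/photos
Type=nfs
Options=defaults,_netdev

[Install]
WantedBy=multi-user.target
```

Enable with `systemctl enable --now mnt-photos.mount`.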
## PhotoPrism
- URL: `https://photos.jgitta.com`
- Host: **VM113 / photos** (192.168.88.32), port 2342
- Compose: `/srv/docker/photoprism/docker-compose.yml`
- Admin credentials: `admin` / `Jogiocsi1211+`
- Database: MariaDB 11 (container: photoprism-db)
- DB: `photoprism`, user: `photoprism`, pass: `photoprism_db_pass`, root: `photoprism_root_pass`
- Volumes (all bind mounts):
- `/mnt/photos/joe` → `/photoprism/originals/joe`
- `/mnt/photos/cynthia` → `/photoprism/originals/cynthia`
- Storage: `/srv/docker/photoprism/storage_data` → `/photoprism/storage`
- DB data: `/srv/docker/photoprism/db_data` → `/var/lib/mysql`
- Mode: **Shared library, single login** (Option B)
- Browse Joe vs Cynthia separately via **Library → Folders** in the UI
- Timeline/Browse view shows all photos mixed by date (normal behaviour)
- READONLY: `false` — deletions remove actual files from disk (rely on PBS backups)
- Face detection: enabled (TensorFlow) — slows initial indexing significantly
- Migrated from Siklos (VM112) to VM113: April 2026
- Site URL in config: `https://photos.jgitta.com/`
## Immich
- URL: `https://pictures.jgitta.com`
- Host: **VM113 / photos** (192.168.88.32), port 2283
- Compose: `/srv/docker/immich/`
- External libraries:
- `/mnt/photos/joe`
- `/mnt/photos/cynthia`
- Database: PostgreSQL (pgvecto-rs) — data at `/srv/docker/immich/postgres/`
- Library data: `/srv/docker/immich/library/`
- Machine learning: enabled (separate container)
- Migrated from Siklos (VM112) to VM113: April 2026
## digiKam
- Installed on ThinkStation (192.168.88.41) as AppImage
- Used for photo editing and organisation
- Collections configured under Settings → Configure digiKam → Collections
- Fingerprints generated via Tools → Maintenance → Finger Prints
- Accesses photos over NFS at `/mnt/photos`
- Scanning over NFS is slower than local — disable system sleep for large scans
## Duplicate Detection
- `fdupes` — command line, byte-level duplicate detection
- `-f` flag: omits the first file in each duplicate set (list the master directory first so its files are kept)
- `-r` recursive, `-d -N` auto-delete
- `dupeGuru` — GUI alternative (`sudo apt install dupeguru`)
- Picture Mode for visually similar image matching
- Safer review-before-delete workflow
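When neither tool is installed, byte-level duplicate detection can be approximated with coreutils alone; a minimal sketch that hashes every file and prints only checksums occurring more than once:

```bash
# List files with identical content: hash, sort, keep duplicated digests.
# uniq -w32 compares only the 32-char md5 digest; -D prints every duplicate line.
find_dupes() {
  find "$1" -type f -exec md5sum {} + | sort | uniq -w32 -D
}

# Demo on a throwaway directory
dir=$(mktemp -d)
echo "same"  > "$dir/a.jpg"
echo "same"  > "$dir/b.jpg"
echo "other" > "$dir/c.jpg"
find_dupes "$dir"   # lists a.jpg and b.jpg together
rm -rf "$dir"
```

Unlike `fdupes -d -N`, this only reports; deletion stays a manual decision.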
## Caddy Entry (on CT202)
File: `/etc/caddy/sites/media.caddy`
```
photos.jgitta.com {
import web_secure
reverse_proxy 192.168.88.32:2342 {
import proxy_timeouts
}
}
pictures.jgitta.com {
import web_secure
reverse_proxy 192.168.88.32:2283 {
import proxy_timeouts
}
}
```
## Notes
- PhotoPrism initial indexing with face detection can take several hours for large collections
- After indexing, Joe and Cynthia folders visible under Library → Folders in PhotoPrism UI
- Cloudflare DNS A record for `photos.jgitta.com` → `184.170.161.177` (proxied)