- Primary Drawbacks & Warnings (Reiterated & Expanded):
- Prerequisites
- Step-by-Step Instructions
This guide provides comprehensive, step-by-step instructions for configuring a single USB flash drive (or potentially an external USB hard drive) to perform two distinct functions simultaneously:

- **Booting the Sbnb Linux Operating System:** The drive is prepared with a standard UEFI-compatible structure, specifically an EFI System Partition (ESP) containing the Sbnb EFI bootloader (`sbnb.efi`) and the necessary configuration files. This allows the server's firmware to locate and start the Sbnb boot process. The `sbnb.efi` file itself is typically a Unified Kernel Image (UKI), bundling the Linux kernel, initramfs, and kernel command line into a single executable file.
- **Providing Simple Persistent Storage:** A separate partition on the same physical USB drive is formatted with a standard Linux filesystem (`ext4` in this guide). This partition is automatically mounted at `/mnt/sbnb-data` within the running Sbnb Linux system via a custom boot script (`sbnb-cmds.sh`). It provides a space where data (container volumes, application data, logs, user files) can persist across reboots of the otherwise ephemeral, RAM-based Sbnb OS.
**Why `ext4` instead of LVM:** Initial analysis suggested LVM might be suitable, but further review of the default Sbnb Linux build configuration indicates the necessary `lvm2` user-space tools are likely missing from the base runtime environment. Without these tools, managing LVM volumes during boot via standard scripts is infeasible unless you create a custom Sbnb build that includes the `lvm2` package. This revised guide therefore uses a standard `ext4` filesystem partition, relying only on basic tools expected to be present in Sbnb.
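The claim about missing tools can be checked directly on a booted Sbnb system. A small probe (the tool names below are the usual `lvm2` binaries; this is a sketch, not part of the official Sbnb tooling) might look like:

```shell
# Probe for LVM userspace tools on the booted Sbnb system.
# If these report "missing", LVM cannot be managed from boot scripts.
for tool in lvm pvcreate vgcreate lvcreate; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "present: $tool"
  else
    echo "missing: $tool"
  fi
done
```

If all four report "missing", the `ext4`-only approach in this guide is your practical option without a custom build.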
**Contrasting with the Standard Sbnb Workflow:** It's crucial to understand that this guide describes a highly non-standard setup. The intended Sbnb workflow prioritizes resilience, performance, and statelessness:

- Boot the minimal Sbnb OS from a simple USB drive or the network.
- Use automation (Ansible) or manual scripts (`sbnb-configure-storage.sh`) post-boot to configure LVM on internal server drives.
- Run workloads on this fast, reliable internal storage.

This guide's method trades those benefits for single-drive convenience under specific constraints.
*** EXTREME CAUTION: IRREVERSIBLE DATA DESTRUCTION IMMINENT! ***
This procedure involves low-level disk operations (partitioning, formatting) that will completely and PERMANENTLY ERASE ALL DATA currently residing on the USB drive you select. There is NO UNDO function. Data recovery after accidental formatting is often impossible.
The most critical risk is selecting the wrong target device. Mistakenly choosing your computer's internal hard drive (e.g., `/dev/sda`, `/dev/nvme0n1`) instead of the intended USB drive (e.g., `/dev/sdb`, `/dev/sdc`) WILL RESULT IN CATASTROPHIC AND LIKELY IRRECOVERABLE LOSS OF YOUR OPERATING SYSTEM, APPLICATIONS, AND PERSONAL FILES. You MUST verify the target device name multiple times using different commands (such as `lsblk`, `fdisk`, and `parted`) and cross-reference with expected drive sizes and models before executing any partitioning or formatting commands. Proceed with extreme vigilance, double-checking each step, entirely at your own risk!
## Primary Drawbacks & Warnings (Reiterated & Expanded)
- Highly Non-Standard & Complex: Deviates significantly from Sbnb’s design. Setup is intricate, runtime behavior depends on precise script execution and timing. Future Sbnb updates might break this.
- Severe Performance Penalty: USB storage is inherently slow (latency, throughput, IOPS) compared to internal NVMe/SATA drives. Disk I/O to `/mnt/sbnb-data` will be a major bottleneck.
- Drastically Reduced Lifespan & Reliability: USB flash drives wear out quickly under persistent write load due to limited write cycles, write amplification, and lack of TRIM support. They are unsuitable for write-intensive workloads or high-reliability needs. Expect eventual failure and data loss without robust backups.
- Potential Instability & Boot Issues: Relies on correct partition detection, udev node creation, filesystem integrity, and `sbnb-cmds.sh` execution timing. Failures can leave persistent storage unavailable.
## When Might This Be Considered? (Limited Scenarios with Full Risk Acceptance)
- Temporary Testing/Experimentation ONLY: Brief evaluations on hardware lacking internal drives.
- Specific, Very Low-Intensity, Read-Mostly Use Cases: Infrequent writes, performance irrelevant (e.g., static config kiosk).
- Absolute Hardware Constraints: Sealed systems where internal drives are impossible, and risks are fully accepted.
Even in these limited scenarios, regular, automated, and verified backups are non-negotiable.
## Prerequisites
- A Suitable USB Flash Drive:
- Capacity: At least ~1GB for the ESP plus your desired data size (32GB+ recommended).
- Quality & Speed: Reputable brand, USB 3.0+ advised for marginal speed benefit. Endurance matters more than peak speed.
- A Working Linux System (Preparation Environment):
- Necessity: Required for partitioning/formatting the target USB safely. openSUSE Tumbleweed assumed.
- Live Environment Benefit: Using a Live USB/CD (e.g., openSUSE Tumbleweed Live) is highly recommended as it provides a non-destructive environment.
- Sbnb Linux Boot File (`sbnb.efi`):
  - Method 1 (Easier): Run the official Sbnb install script on a temporary USB drive, then copy `/EFI/BOOT/BOOTX64.EFI` from its ESP.
  - Method 2 (Advanced): Build Sbnb from source and find `sbnb.efi` in `output/images/`.
- Root/Sudo Privileges: Needed on the openSUSE prep system for disk commands.
- Internet Connection: May be needed for `zypper`.
## Step-by-Step Instructions
(Reminder: TRIPLE-CHECK your target device name, e.g., `/dev/sdX`, before every destructive command!)
### Phase 1: Prepare the Linux Environment (openSUSE Tumbleweed)
- Boot into openSUSE: Start your preparation environment.
- Install Necessary Tools: Open a terminal. `zypper refresh` updates the package lists; `zypper install` installs the tools.
  ```bash
  sudo zypper refresh
  sudo zypper install -y parted lvm2 dosfstools e2fsprogs
  ```
- Identify Target USB Drive: CRITICAL SAFETY STEP! Unplug all other USB storage.
  - Insert the target USB drive.
  - Use multiple commands and compare SIZE and MODEL. Check `dmesg | tail` after plugging in for kernel messages like `sd 2:0:0:0: [sdc] Attached SCSI removable disk`.
    ```bash
    lsblk -d -o NAME,SIZE,MODEL,VENDOR,TYPE | grep 'disk'
    sudo fdisk -l | grep '^Disk /dev/'
    sudo parted -l | grep '^Disk /dev/'
    # Example: If consistently identified as /dev/sdc, use /dev/sdc below.
    ```
  - Visually confirm with YaST Partitioner (`sudo yast2 partitioner`) or GParted (`sudo zypper install -y gparted && sudo gparted`) if preferred. Look for the drive matching the expected size and vendor/model.
  - Assume `/dev/sdX` is your verified target drive. Replace it carefully in every command below!
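As an extra guard before any destructive step, you can refuse to proceed unless the chosen device is actually USB-attached. This is a sketch (the helper name is illustrative); it relies on `lsblk`'s `TRAN` column, which reports the transport type of the base device:

```shell
#!/bin/sh
# Refuse to continue unless the device is attached via USB.
# lsblk -ndo TRAN prints the transport (usb, sata, nvme, ...) for a base device.
confirm_usb() {
  dev="$1"
  tran=$(lsblk -ndo TRAN "$dev" 2>/dev/null | tr -d ' ')
  if [ "$tran" = "usb" ]; then
    echo "OK: $dev is USB-attached"
    return 0
  fi
  echo "REFUSING: $dev transport is '${tran:-unknown}', not usb" >&2
  return 1
}
# Example: confirm_usb /dev/sdX || exit 1
```

An internal SATA or NVMe disk will report `sata` or `nvme` here, so a slip of the finger is caught before `parted` ever runs.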
### Phase 2: Partition the USB Drive
(Warning: The following `parted` commands are DESTRUCTIVE to `/dev/sdX`. Double-check the device name!)
#!/bin/bash
# --- Configuration ---
# Exit immediately if a command exits with a non-zero status.
# Treat unset variables as an error when substituting.
# Pipelines return the exit status of the last command to exit non-zero.
set -euo pipefail
# --- Variables ---
# EFI System Partition (ESP) Label (CRITICAL - must match bootloader config)
ESP_LABEL="sbnb"
# Data Partition Label (Recommended for identification)
DATA_LABEL="SBNB_DATA"
# ESP end offset (adjust if needed). The partition starts at 1MiB, so an end of
# 1025MiB yields a 1GiB ESP, which is usually sufficient.
ESP_SIZE="1025MiB"
# List of required commands for the script to function
REQUIRED_CMDS=(
"parted" "mkfs.vfat" "mkfs.ext4" "wipefs" "findmnt" "lsblk"
"blkid" "fsck.vfat" "e2fsck" "sync" "id" "grep" "read"
"sleep" "xargs" "umount" "partprobe" "realpath"
)
# --- Functions ---
# Function to check for required commands
check_dependencies() {
echo "--- Checking for required commands ---"
local missing_cmds=()
for cmd in "${REQUIRED_CMDS[@]}"; do
if ! command -v "$cmd" &> /dev/null; then
missing_cmds+=("$cmd")
fi
done
if [ ${#missing_cmds[@]} -ne 0 ]; then
echo "ERROR: The following required commands are not found:" >&2
printf " - %s\n" "${missing_cmds[@]}" >&2
echo "Please install them and try again." >&2
exit 1
fi
echo "All required commands found."
}
# Function to get the base block device for a given path (handles partitions, links, etc.)
get_base_device() {
local path="$1"
local resolved_path
resolved_path=$(realpath "$path") || { echo "ERROR: Cannot resolve path '$path'" >&2; return 1; }
# lsblk -no pkname gets the parent kernel name (base device)
lsblk -no pkname "$resolved_path" || { echo "ERROR: Cannot find base device for '$resolved_path' using lsblk." >&2; return 1; }
}
# --- Script Start ---
echo "-----------------------------------------------------"
echo "--- USB Drive Partitioning and Formatting Script ---"
echo "--- (Version 2 - Enhanced Safety) ---"
echo "-----------------------------------------------------"
echo ""
echo "WARNING: This script is DESTRUCTIVE and will ERASE"
echo " ALL DATA on the target device."
echo ""
# --- Check for Root Privileges ---
if [ "$(id -u)" -ne 0 ]; then
echo "ERROR: This script must be run as root (e.g., using sudo)." >&2
exit 1
fi
# --- Check Dependencies ---
check_dependencies
# --- Check for Device Argument ---
if [ -z "${1:-}" ]; then
echo "Usage: $0 /dev/sdX"
echo "ERROR: Please provide the target block device (e.g., /dev/sda, /dev/sdb)." >&2
echo ""
echo "Available block devices (excluding ROM, loop, and RAM devices):"
lsblk -d -o NAME,SIZE,TYPE,MODEL | grep -vE 'rom|loop|ram'
exit 1
fi
DEVICE="$1"
# --- Validate Device ---
if [ ! -b "$DEVICE" ]; then
echo "ERROR: '$DEVICE' is not a valid block device." >&2
exit 1
fi
# --- CRITICAL SAFETY CHECK: Prevent targeting the root filesystem device ---
echo "--- Performing safety checks ---"
ROOT_DEV_PATH=$(findmnt -n -o SOURCE /)
ROOT_BASE_DEV_NAME=$(get_base_device "$ROOT_DEV_PATH") || exit 1 # Exit if function fails
TARGET_BASE_DEV_NAME=$(get_base_device "$DEVICE") || exit 1
# Construct full device paths for comparison
ROOT_BASE_DEV="/dev/${ROOT_BASE_DEV_NAME}"
TARGET_BASE_DEV="/dev/${TARGET_BASE_DEV_NAME}" # Assumes the input $DEVICE is the base device
if [ "$TARGET_BASE_DEV" == "$ROOT_BASE_DEV" ]; then
echo "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!" >&2
echo "FATAL ERROR: Target device '$DEVICE' appears to be the same" >&2
echo " device ('$ROOT_BASE_DEV') as the running root" >&2
echo " filesystem ('$ROOT_DEV_PATH')." >&2
echo " Aborting to prevent data loss." >&2
echo "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!" >&2
exit 1
fi
echo "Safety check passed: Target device '$DEVICE' is not the root filesystem device ('$ROOT_BASE_DEV')."
# Check if the device looks like an SD card reader often used for the OS drive
if [[ "$DEVICE" == /dev/mmcblk* ]]; then
echo "WARNING: '$DEVICE' looks like an SD card (e.g., /dev/mmcblk0)."
echo " Double-check this is not your primary OS drive!"
fi
# --- Confirmation ---
echo ""
echo "Target Device: $DEVICE"
echo "Partitions to be created:"
echo " 1: EFI System Partition (ESP), FAT32, Label: '$ESP_LABEL', Size: $ESP_SIZE, Flags: boot, esp"
echo " 2: Linux Data Partition, ext4, Label: '$DATA_LABEL', Size: Remaining space"
echo ""
read -p "ARE YOU ABSOLUTELY SURE you want to erase '$DEVICE' and proceed? (yes/NO): " CONFIRMATION
CONFIRMATION=${CONFIRMATION:-NO} # Default to NO if user just presses Enter
if [[ "$CONFIRMATION" != "yes" ]]; then
echo "Operation cancelled by user."
exit 0
fi
echo ""
echo "--- Proceeding with operations on $DEVICE ---"
# --- Phase 2: Partition the USB Drive ---
# 1. Unmount Existing Partitions
echo ""
echo "--- Unmounting any existing partitions on ${DEVICE}* ---"
# Enumerate mount points of the device's partitions via lsblk and unmount them safely.
# (Note: findmnt's --source option does not expand globs, so lsblk is used to list partitions.)
# Also try to unmount the base device itself in case it's loop-mounted etc.
lsblk -nrpo MOUNTPOINT "$DEVICE" | grep -v '^$' | xargs --no-run-if-empty umount -v -l || echo "Info: No partitions were mounted or umount failed (might be okay)."
umount "$DEVICE" &>/dev/null || true # Attempt to unmount base device, ignore errors
sleep 1 # Give time for umount to settle
lsblk "$DEVICE"
# 2. Wipe Existing Signatures (Recommended)
echo ""
echo "--- Wiping filesystem/partition signatures from $DEVICE ---"
wipefs --all --force "$DEVICE"
sync # Flush kernel buffers to disk to ensure changes are physically written
# 3. Create New GPT Partition Table
echo ""
echo "--- Creating new GPT partition table on $DEVICE ---"
parted "$DEVICE" --script -- mklabel gpt
sync # Flush kernel buffers to disk
# 4. Create EFI System Partition (ESP)
echo ""
echo "--- Creating ESP partition (1) on $DEVICE ---"
parted "$DEVICE" --script -- mkpart "${ESP_LABEL}" fat32 1MiB "${ESP_SIZE}"
parted "$DEVICE" --script -- set 1 boot on
parted "$DEVICE" --script -- set 1 esp on
sync # Flush kernel buffers to disk
# 5. Create Linux Data Partition
echo ""
echo "--- Creating Linux data partition (2) on $DEVICE ---"
# Use the end of the ESP as the start for the data partition
parted "$DEVICE" --script -- mkpart "${DATA_LABEL}" ext4 "${ESP_SIZE}" 100%
sync # Flush kernel buffers to disk
echo "Waiting briefly for kernel to recognize new partitions..."
sleep 2
# Define partition variables (assuming standard naming, e.g., /dev/sda1, /dev/sda2)
# Adding 'p' for NVMe devices (e.g., /dev/nvme0n1p1) - check if base device name contains 'nvme'
if [[ "$DEVICE" == *nvme* ]]; then
PART_PREFIX="p"
else
PART_PREFIX=""
fi
ESP_PARTITION="${DEVICE}${PART_PREFIX}1"
DATA_PARTITION="${DEVICE}${PART_PREFIX}2"
# Check if partition devices exist, retry with partprobe if needed
echo "--- Checking for partition device nodes (${ESP_PARTITION}, ${DATA_PARTITION}) ---"
PARTITIONS_FOUND=false
for i in {1..5}; do
if [ -b "$ESP_PARTITION" ] && [ -b "$DATA_PARTITION" ]; then
echo "Partition nodes found."
PARTITIONS_FOUND=true
break
fi
echo "Partition nodes not yet found. Retrying probe (Attempt $i/5)..."
partprobe "$DEVICE" || echo "Warning: partprobe command failed, continuing check..."
sleep 1
done
if [ "$PARTITIONS_FOUND" = false ]; then
echo "ERROR: Partition devices ($ESP_PARTITION, $DATA_PARTITION) not found after partitioning and retries." >&2
echo " Please check manually ('lsblk $DEVICE', 'parted $DEVICE print')." >&2
lsblk "$DEVICE"
exit 1
fi
# 6. Verify Partitioning
echo ""
echo "--- Verifying partitions on $DEVICE ---"
parted "$DEVICE" --script -- print
echo ""
echo "--- Block device view: ---"
lsblk -o NAME,SIZE,TYPE,FSTYPE,PARTLABEL,MOUNTPOINT,PARTFLAGS "$DEVICE"
echo "----------------------------"
echo "Expected: ${ESP_PARTITION} (~${ESP_SIZE}), Type EFI System, Flags: boot, esp"
echo "Expected: ${DATA_PARTITION} (Remaining size), Type Linux filesystem"
echo "----------------------------"
sleep 2 # Pause for user to review
# --- Phase 3: Format Filesystems ---
# 1. Format EFI Partition
echo ""
echo "--- Formatting ESP partition (${ESP_PARTITION}) as FAT32 with label '${ESP_LABEL}' ---"
mkfs.vfat -F 32 -n "${ESP_LABEL}" "${ESP_PARTITION}"
sync # Flush kernel buffers to disk
# Check filesystem integrity
echo "--- Checking ESP filesystem (fsck.vfat) ---"
FSCK_VFAT_EXIT_CODE=0
fsck.vfat -a "${ESP_PARTITION}" || FSCK_VFAT_EXIT_CODE=$? # Run fsck, capture exit code on failure
if [ $FSCK_VFAT_EXIT_CODE -eq 0 ]; then
echo "ESP filesystem check passed (or no check performed)."
elif [ $FSCK_VFAT_EXIT_CODE -eq 1 ]; then
# Exit code 1 usually means errors were found AND corrected.
echo "WARNING: fsck.vfat found and corrected errors on ESP partition (${ESP_PARTITION}). Check output above."
else
# Exit codes > 1 typically indicate uncorrected errors.
echo "ERROR: fsck.vfat reported uncorrectable errors (Exit Code: $FSCK_VFAT_EXIT_CODE) on ESP partition (${ESP_PARTITION})." >&2
echo " Cannot proceed safely. Please investigate manually." >&2
exit 1
fi
# Verify label using blkid
echo "--- Verifying ESP label ---"
if blkid -s LABEL -o value "${ESP_PARTITION}" | grep -q "^${ESP_LABEL}$"; then
echo "ESP Label '${ESP_LABEL}' verified successfully on ${ESP_PARTITION}."
else
echo "ERROR: Failed to verify ESP Label '${ESP_LABEL}' on ${ESP_PARTITION}." >&2
blkid "${ESP_PARTITION}" # Show full blkid output for debugging
exit 1
fi
# 2. Format Data Partition
echo ""
echo "--- Formatting Data partition (${DATA_PARTITION}) as ext4 with label '${DATA_LABEL}' ---"
mkfs.ext4 -m 0 -L "${DATA_LABEL}" "${DATA_PARTITION}"
sync # Flush kernel buffers to disk
# Check the new ext4 filesystem integrity
echo "--- Checking Data partition filesystem (e2fsck) ---"
# -f forces check even if clean, -y assumes yes to all prompts (use with caution)
E2FSCK_EXIT_CODE=0
e2fsck -f -y "${DATA_PARTITION}" || E2FSCK_EXIT_CODE=$? # Capture exit code on failure
if [ $E2FSCK_EXIT_CODE -eq 0 ]; then
echo "Data partition filesystem check passed."
elif [ $E2FSCK_EXIT_CODE -eq 1 ]; then
# Exit code 1 means errors were corrected.
echo "WARNING: e2fsck found and corrected errors on Data partition (${DATA_PARTITION}). Check output above."
else
# Exit codes > 1 indicate uncorrected errors.
echo "ERROR: e2fsck reported uncorrectable errors (Exit Code: $E2FSCK_EXIT_CODE) on Data partition (${DATA_PARTITION})." >&2
echo " Cannot proceed safely. Please investigate manually." >&2
exit 1
fi
# Verify the label using blkid
echo "--- Verifying Data partition label ---"
if blkid -s LABEL -o value "${DATA_PARTITION}" | grep -q "^${DATA_LABEL}$"; then
echo "Data Label '${DATA_LABEL}' verified successfully on ${DATA_PARTITION}."
else
echo "ERROR: Failed to verify Data Label '${DATA_LABEL}' on ${DATA_PARTITION}." >&2
blkid "${DATA_PARTITION}" # Show full blkid output for debugging
exit 1
fi
echo ""
echo "-----------------------------------------------------"
echo "--- Script finished successfully! ---"
echo "Device: $DEVICE"
echo "Partitions created and formatted:"
lsblk -o NAME,SIZE,TYPE,FSTYPE,LABEL,PARTLABEL,MOUNTPOINT "$DEVICE"
echo "-----------------------------------------------------"
exit 0
### Phase 4: Install Sbnb Boot Files and Configuration
1. **Mount EFI Partition:** Access the ESP filesystem.
   ```bash
   echo "--- Mounting ESP partition ---"
   sudo mkdir -p /mnt/sbnb-mount
   sudo mount /dev/sdX1 /mnt/sbnb-mount
   ```
2. **Create EFI Boot Directory:** Standard UEFI fallback path.
   ```bash
   echo "--- Creating EFI boot directories ---"
   sudo mkdir -p /mnt/sbnb-mount/EFI/BOOT
   ```
3. **Copy Sbnb EFI Boot File:** Place the bootloader (`sbnb.efi` as `BOOTX64.EFI`).
   ```bash
   echo "--- Copying Sbnb EFI boot file ---"
   sudo cp sbnb.efi /mnt/sbnb-mount/EFI/BOOT/BOOTX64.EFI
   ```
4. **(Recommended) Create Sbnb Configuration File:** Place `sbnb-tskey.txt` in the ESP root (`/mnt/sbnb-mount/`). The `boot-sbnb.sh` script reads this to configure Tailscale.
   ```bash
   echo "--- Creating Sbnb configuration file (sbnb-tskey.txt) ---"
   echo "tskey-auth-..." | sudo tee /mnt/sbnb-mount/sbnb-tskey.txt > /dev/null
   ```
5. **(Crucial) Handle Data Partition Mounting via `sbnb-cmds.sh`:**
    * **Context & Goal:** Sbnb boots -> systemd -> `sbnb.service` -> `boot-sbnb.sh` -> mounts the ESP at `/mnt/sbnb` -> executes `/mnt/sbnb/sbnb-cmds.sh`. This script mounts the data partition (labeled `SBNB_DATA`) at `/mnt/sbnb-data`.
    * **Device Detection Timing:** There is a potential race: the kernel/udev might not have created the `/dev/disk/by-label/SBNB_DATA` symlink or the `/dev/sdX2` node by the time the script runs. The wait loop below mitigates this.
    * **Default Script Conflict Uncertainty:** The `boot-sbnb.sh` excerpt doesn't show conflicting actions at its execution point. However, other early boot mechanisms could exist in Sbnb. If `/mnt/sbnb-data` behaves unexpectedly, investigate potential early boot scripts/services related to storage in your Sbnb version (advanced).
    * **`sbnb-cmds.sh` Script (Wait Loop & Logging):** Create this in the ESP root (`/mnt/sbnb-mount/` during prep).

```bash
#!/bin/sh
# Custom sbnb-cmds.sh for USB Persistent Partition setup (No LVM)
# Mounts the partition labeled DATA_LABEL at MOUNT_POINT after waiting for the device.
# For debugging, uncomment 'set -x' to trace command execution.
# set -x

# Log messages consistently to the kernel buffer (dmesg) and console (tty)
log_msg() {
    echo "sbnb-cmds.sh: $1" | tee /dev/kmsg
}

log_msg "--- Running Custom USB Partition Mount Script ---"

# --- Configuration ---
MOUNT_POINT="/mnt/sbnb-data"   # Target directory for persistent data
DATA_LABEL="SBNB_DATA"         # Filesystem label of the data partition (MUST match mkfs.ext4 -L)
# Alternative: use a UUID for potentially more stable identification if labels change/conflict.
# Get the UUID with 'sudo blkid /dev/sdX2' on the prep machine, then set:
# DATA_UUID="YOUR-UUID-HERE"
MAX_WAIT_SECONDS=15            # Max time (seconds) to wait for the device node/label
WAIT_INTERVAL=1                # Check frequency (seconds)
MOUNT_OPTS="defaults,noatime,nodiratime"  # noatime/nodiratime reduce writes on flash
# --- End Configuration ---

DATA_DEVICE=""  # Will hold the found device path

# --- Wait Loop for Device ---
# Attempts to find the device by label (or UUID, if configured).
# Waits because device node creation by kernel/udev might be delayed.
elapsed_wait=0
log_msg "Waiting up to ${MAX_WAIT_SECONDS}s for device (Label: ${DATA_LABEL:-N/A})..."
while [ -z "$DATA_DEVICE" ] && [ $elapsed_wait -lt $MAX_WAIT_SECONDS ]; do
    # Check for the label symlink first (usually fastest if udev has run)
    label_path="/dev/disk/by-label/${DATA_LABEL}"
    if [ -e "$label_path" ]; then
        # readlink -f resolves the symlink to the actual device path (e.g., /dev/sdb2)
        DATA_DEVICE=$(readlink -f "$label_path")
        log_msg "Found device via label symlink: $label_path -> $DATA_DEVICE"
        break
    fi

    # Fallback: use blkid to scan for the label (can be slower)
    blkid_device=$(blkid -L "${DATA_LABEL}" 2>/dev/null)
    if [ -n "$blkid_device" ]; then
        DATA_DEVICE="$blkid_device"
        log_msg "Found device via blkid label lookup: $DATA_DEVICE"
        break
    fi

    # (Add similar checks here using DATA_UUID if using a UUID instead of a label)

    # Device not found yet; wait before the next check
    sleep $WAIT_INTERVAL
    elapsed_wait=$((elapsed_wait + WAIT_INTERVAL))
done

# --- Mount Logic ---
# Proceed only if a device path was successfully determined
if [ -n "$DATA_DEVICE" ] && [ -e "$DATA_DEVICE" ]; then
    log_msg "Data partition device resolved to ${DATA_DEVICE} after ${elapsed_wait}s."
    # Check if the target directory is already a mount point
    if ! mountpoint -q "$MOUNT_POINT"; then
        log_msg "Attempting to mount $DATA_DEVICE at $MOUNT_POINT with options: $MOUNT_OPTS..."
        # Ensure the target directory exists
        mkdir -p "$MOUNT_POINT"
        if mount -o "$MOUNT_OPTS" "$DATA_DEVICE" "$MOUNT_POINT"; then
            log_msg "Successfully mounted persistent partition at $MOUNT_POINT."
        else
            mount_exit_code=$?
            log_msg "ERROR: Failed to mount $DATA_DEVICE at $MOUNT_POINT (exit code: $mount_exit_code). Check filesystem type/integrity (run fsck?). See dmesg for details." >&2
        fi
    else
        # Mount point exists; verify it is the correct device
        log_msg "$MOUNT_POINT is already a mount point. Checking device..."
        if grep -qs "$DATA_DEVICE $MOUNT_POINT" /proc/mounts; then
            log_msg "Persistent partition already correctly mounted at $MOUNT_POINT."
        else
            mounted_dev=$(grep -s "$MOUNT_POINT" /proc/mounts | awk '{print $1}')
            log_msg "ERROR: $MOUNT_POINT is already mounted, but by '$mounted_dev' NOT '$DATA_DEVICE'! Check system configuration." >&2
        fi
    fi
else
    # Device wasn't found within the timeout
    log_msg "ERROR: Data partition device (Label: ${DATA_LABEL:-N/A}) not found after waiting ${MAX_WAIT_SECONDS}s. Cannot mount persistent storage." >&2
fi

# Install the Docker daemon config template from the persistent partition (if present)
mkdir -p /etc/docker
cp /mnt/sbnb-data/docker/docker-daemon.json.template /etc/docker/daemon.json

log_msg "--- Finished Custom USB Partition Mount Script ---"

# Exit 0 ensures the rest of the Sbnb boot sequence continues
exit 0
```
* Place script content into `/mnt/sbnb-mount/sbnb-cmds.sh`.
* Make executable: `sudo chmod +x /mnt/sbnb-mount/sbnb-cmds.sh`.
6. **Unmount the EFI Partition:**
```bash
echo "--- Unmounting ESP partition ---"
# Ensure buffers are flushed before unmounting
sync
sudo umount /mnt/sbnb-mount
```
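Before running step 6's `umount`, a quick sanity check of the ESP layout can catch a missed step. The helper below is illustrative (the function name and checked paths are this guide's conventions, not Sbnb-mandated):

```shell
#!/bin/sh
# Illustrative sanity check: confirm the ESP contains the expected boot files.
check_esp_layout() {
  esp="$1"
  rc=0
  for f in EFI/BOOT/BOOTX64.EFI sbnb-cmds.sh; do
    if [ -f "$esp/$f" ]; then
      echo "ok: $f"
    else
      echo "MISSING: $f" >&2
      rc=1
    fi
  done
  # The boot script should be executable; warn if not.
  [ -x "$esp/sbnb-cmds.sh" ] || echo "WARNING: sbnb-cmds.sh is not executable" >&2
  # Basic shell syntax check of the boot script (parses only, does not execute).
  [ -f "$esp/sbnb-cmds.sh" ] && sh -n "$esp/sbnb-cmds.sh" && echo "ok: sbnb-cmds.sh syntax"
  return $rc
}
# Example: check_esp_layout /mnt/sbnb-mount
```

The `sh -n` parse check is worth the few milliseconds: a stray quote in `sbnb-cmds.sh` otherwise only surfaces at boot on the headless server.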
### Phase 4.5: Backing Up Data (CRITICAL!)
* **Why Essential:** High risk of USB drive failure. Backups are mandatory.
* **Strategy:** Automate regular backups of `/mnt/sbnb-data`.
* **File Data Backup (`rsync`):** Ensure the backup destination (NAS, cloud, another server) has sufficient free space.
```bash
# Example: From Sbnb to backup-server (requires ssh key auth)
rsync -avz --delete --progress --human-readable /mnt/sbnb-data/ user@backup-server:/path/to/backups/sbnb-usb-data/
```
* **Frequency:** Daily recommended for active data.
* **Automation:** Use cron/systemd timers or remote triggers.
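  One possible shape for a root crontab entry driving the daily rsync (the schedule, host, paths, and log file are placeholders, not Sbnb defaults):

  ```shell
  # m h dom mon dow  command -- run the backup daily at 03:15, append output to a log
  15 3 * * * rsync -az --delete /mnt/sbnb-data/ user@backup-server:/path/to/backups/sbnb-usb-data/ >> /var/log/sbnb-backup.log 2>&1
  ```

  Note that on the RAM-based Sbnb OS a crontab does not itself persist across reboots; installing it would need to happen from `sbnb-cmds.sh` or the persistent partition.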
* **Testing Restores:** Vital! Don't assume backups work.
* **Conceptual Restore:** Boot Linux Live env -> Mount backup source -> Mount target USB data partition (new/reformatted) to `/mnt/restore` -> `sudo rsync -av --progress /path/to/backup/sbnb-usb-data/ /mnt/restore/` -> Verify restored files (count, size, checksums, spot checks).
* **Verification:** Use tools like `diff -r`, `md5sum`, or `sha256sum` to compare restored files against originals or known good copies.
* *Untested backups provide a false sense of security.*
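The checksum comparison suggested above can be wrapped in a small helper (function name and paths are illustrative) that builds and diffs sha256 manifests of the two trees:

```shell
#!/bin/sh
# Sketch: verify a restored tree against the original by comparing sha256 manifests.
verify_restore() {
  src="$1"
  restore="$2"
  # Build sorted "hash  path" manifests for both trees.
  ( cd "$src" && find . -type f -exec sha256sum {} + | sort -k 2 ) > /tmp/manifest.src || return 2
  ( cd "$restore" && find . -type f -exec sha256sum {} + | sort -k 2 ) > /tmp/manifest.restore || return 2
  if diff -u /tmp/manifest.src /tmp/manifest.restore; then
    echo "Restore verified: checksums match."
  else
    echo "MISMATCH: restored data differs from source." >&2
    return 1
  fi
}
# Example: verify_restore /path/to/backup/sbnb-usb-data /mnt/restore
```

Unlike `diff -r`, the manifest approach leaves artifacts you can archive alongside the backup for later audits.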
### Phase 5: Boot and Verify
1. **Safely Eject:** Eject USB from prep system.
2. **Configure Server BIOS/UEFI:** Enter setup (DEL, F2, F10, F12, etc.). Ensure UEFI Mode ON, CSM/Legacy OFF, Secure Boot OFF. Set "UEFI: USB..." as first boot device. Save & Exit.
3. **Boot Sbnb Linux.**
4. **Verify Operation:**
* Monitor Boot: Watch console for `sbnb-cmds.sh` logs, errors.
* SSH into Sbnb.
* Check Mounts:
```bash
lsblk -o NAME,SIZE,TYPE,FSTYPE,LABEL,MOUNTPOINT # Look for mount at /mnt/sbnb-data
df -hT | grep -E 'Filesystem|/mnt/sbnb-data' # Check usage/type
mount | grep /mnt/sbnb-data # Check mount options (rw, noatime)
findmnt /mnt/sbnb-data # Another way to check mount info
```
* Test Persistence:
```bash
# After SSHing in:
TIMESTAMP=$(date)
echo "Sbnb USB Persistence test - $TIMESTAMP" | sudo tee /mnt/sbnb-data/persistence_test.txt > /dev/null
sync && echo "Synced data to disk."
echo "File created. Content:" && sudo cat /mnt/sbnb-data/persistence_test.txt
echo "Rebooting server now..." && sudo reboot
# --- Wait for reboot and reconnect via SSH ---
echo "Checking for file after reboot..."
if [ -f /mnt/sbnb-data/persistence_test.txt ]; then
echo "SUCCESS: File found. Content:" && sudo cat /mnt/sbnb-data/persistence_test.txt
sudo rm /mnt/sbnb-data/persistence_test.txt # Clean up
else
echo "FAILURE: File NOT FOUND after reboot! Persistence failed."
fi
```
## Troubleshooting
* **Doesn't Boot / No Bootable Device:**
* Re-verify BIOS settings (UEFI, Secure Boot OFF, Boot Order).
* Re-verify USB Prep: Partitions (`parted print`), ESP flags (`boot`,`esp`), ESP filesystem label (`blkid /dev/sdX1` -> `LABEL="sbnb"`), EFI file path (`/EFI/BOOT/BOOTX64.EFI`).
* Try different USB ports (check if port provides sufficient power). Test drive health on prep machine (`fsck`, `badblocks -nvs /dev/sdX`). Recreate drive meticulously.
* **Data Partition Not Mounted / `/mnt/sbnb-data` Empty:**
* Check boot logs (`journalctl -b`, console) for `sbnb-cmds.sh` errors ("Device... not found", "Failed to mount"). Check `dmesg` for USB errors (`dmesg | grep -iE 'usb|sdX'`) or filesystem errors (`dmesg | grep -i ext4`).
* SSH in:
* Verify partition & label: `sudo blkid`, `ls -l /dev/disk/by-label/`. Is `SBNB_DATA` present? Does it point to the correct device?
* If label wrong/missing: Re-label from prep env (`sudo e2label /dev/sdX2 SBNB_DATA`).
* If device/label exists, try manual mount: `sudo mkdir -p /mnt/sbnb-data && sudo mount /dev/disk/by-label/SBNB_DATA /mnt/sbnb-data`. Check `dmesg` for errors (e.g., `mount: wrong fs type, bad option, bad superblock`). If manual mount works, debug `sbnb-cmds.sh` (add `set -x`, check paths, loop duration, check script permissions `ls -l /mnt/sbnb/sbnb-cmds.sh`).
* Run filesystem check (unmounted): `sudo e2fsck -f /dev/disk/by-label/SBNB_DATA`.
* Check kernel modules: `lsmod | grep ext4`. Is the module loaded? Check `dmesg` for errors loading filesystem modules.
* **Poor Performance / Drive Failure:**
* **Performance:** Inherent limitation.
* **Lifespan/Failure:** Monitor `dmesg` for I/O errors. Restore from verified backups upon failure. This setup will wear out consumer flash drives with persistent writes.
## CODE

`sbnb-cmds.sh`
#!/bin/sh
# Sbnb Custom Commands Script
# Mounts persistent data partition at /mnt/sbnb-data, configures Docker data-root on it,
# restores backup, restarts Docker, updates dev env script atomically, enables backup units.
# Exit immediately if a command exits with a non-zero status. Crucial for boot scripts.
# Use pipefail to ensure pipeline failure is detected.
# NOTE: 'set -o pipefail' is not strictly POSIX sh; it requires a shell that supports
# it (e.g., bash or a recent busybox ash). Drop '-o pipefail' if /bin/sh rejects it.
set -e -o pipefail
# --- Script Start Logging ---
echo "[sbnb-cmds.sh] Starting custom boot commands..." > /dev/kmsg
# --- Mount Persistent Data Partition ---
DATA_LABEL="SBNB_DATA"
DATA_DEVICE_SYMLINK="/dev/disk/by-label/${DATA_LABEL}"
# --- NEW Mount Point ---
DATA_MOUNT_POINT="/mnt/sbnb-data"
MAX_WAIT_SECONDS=15
WAIT_INTERVAL=1
elapsed_time=0
echo "[sbnb-cmds.sh] Waiting up to ${MAX_WAIT_SECONDS}s for data device (Label: ${DATA_LABEL})..." > /dev/kmsg
# Wait for the device symlink to appear
while [ ! -e "${DATA_DEVICE_SYMLINK}" ]; do
if [ ${elapsed_time} -ge ${MAX_WAIT_SECONDS} ]; then
echo "[sbnb-cmds.sh] ERROR: Timeout waiting for device ${DATA_DEVICE_SYMLINK}. Persistent data cannot be mounted." > /dev/kmsg
exit 1 # Exit: cannot proceed without data partition
fi
sleep ${WAIT_INTERVAL}
elapsed_time=$((elapsed_time + WAIT_INTERVAL))
done
# Resolve the actual device node using readlink -f for canonical path
DATA_DEVICE=$(readlink -f "${DATA_DEVICE_SYMLINK}")
echo "[sbnb-cmds.sh] Data partition device resolved to ${DATA_DEVICE} after ${elapsed_time}s." > /dev/kmsg
# Create the mount point (mkdir -p also creates any missing parent directories)
mkdir -p "${DATA_MOUNT_POINT}"
echo "[sbnb-cmds.sh] Attempting to mount ${DATA_DEVICE} at ${DATA_MOUNT_POINT}..." > /dev/kmsg
# Mount read-write. noatime/nodiratime improve performance by reducing metadata writes.
if mount -o rw,noatime,nodiratime "${DATA_DEVICE}" "${DATA_MOUNT_POINT}"; then
echo "[sbnb-cmds.sh] Successfully mounted persistent partition at ${DATA_MOUNT_POINT}." > /dev/kmsg
else
echo "[sbnb-cmds.sh] ERROR: Failed to mount ${DATA_DEVICE} at ${DATA_MOUNT_POINT}!" > /dev/kmsg
exit 1 # Exit: cannot proceed without data partition mounted
fi
# --- Configure Docker data-root and Restore Data ---
echo "[sbnb-cmds.sh] Configuring Docker data-root and checking restore..." > /dev/kmsg
# Define the NEW location for Docker's root data directory on the persistent partition
# --- Path updated relative to new mount point ---
DOCKER_DATA_ROOT_PERSISTENT="${DATA_MOUNT_POINT}/docker-root"
# Define the standard location for Docker's configuration file
DOCKER_CONFIG_DIR="/etc/docker"
DOCKER_CONFIG_FILE="${DOCKER_CONFIG_DIR}/daemon.json"
# Define backup location
# --- Path updated relative to new mount point ---
BACKUP_DIR="${DATA_MOUNT_POINT}/backups/docker" # Path on the mounted data partition
LATEST_LINK="${BACKUP_DIR}/docker_latest.tar.gz"
# 1. Ensure the new Docker data-root directory exists on the persistent partition
echo "[sbnb-cmds.sh] Ensuring Docker data directory exists: ${DOCKER_DATA_ROOT_PERSISTENT}" > /dev/kmsg
mkdir -p "${DOCKER_DATA_ROOT_PERSISTENT}"
if [ $? -ne 0 ]; then
echo "[sbnb-cmds.sh] ERROR: Failed to create persistent Docker data directory ${DOCKER_DATA_ROOT_PERSISTENT}!" > /dev/kmsg
exit 1
fi
# 2. Create/Update Docker daemon configuration to use the new data-root
echo "[sbnb-cmds.sh] Configuring Docker daemon (${DOCKER_CONFIG_FILE}) to use data-root: ${DOCKER_DATA_ROOT_PERSISTENT}" > /dev/kmsg
mkdir -p "${DOCKER_CONFIG_DIR}"
# Create a minimal daemon.json setting the data-root.
# WARNING: This command OVERWRITES any existing ${DOCKER_CONFIG_FILE}.
# If you have other custom daemon settings (log drivers, storage opts, mirrors, etc.),
# they will be LOST. Backup the existing file first if unsure (e.g., cp ${DOCKER_CONFIG_FILE} ${DOCKER_CONFIG_FILE}.bak).
# For merging settings, consider using 'jq' if available in this environment.
printf '{\n "data-root": "%s"\n}\n' "${DOCKER_DATA_ROOT_PERSISTENT}" > "${DOCKER_CONFIG_FILE}"
if [ $? -ne 0 ]; then
echo "[sbnb-cmds.sh] ERROR: Failed to write Docker config file ${DOCKER_CONFIG_FILE}!" > /dev/kmsg
exit 1
fi
echo "[sbnb-cmds.sh] Docker daemon configuration updated." > /dev/kmsg
# 3. Restore Docker data INTO the NEW persistent location (if backup exists)
echo "[sbnb-cmds.sh] Checking for Docker backup..." > /dev/kmsg
# Check if the target directory is empty before attempting restore.
# This prevents accidentally overwriting existing data if restore is run multiple times without cleanup.
if [ -L "${LATEST_LINK}" ] && [ -z "$(ls -A "${DOCKER_DATA_ROOT_PERSISTENT}")" ]; then
# Attempt to resolve the absolute path of the file the symlink points to
ACTUAL_BACKUP_FILE=$(readlink -f "${LATEST_LINK}")
# Check if readlink succeeded (exit status 0) AND the target file actually exists
if [ $? -eq 0 ] && [ -f "${ACTUAL_BACKUP_FILE}" ]; then
echo "[sbnb-cmds.sh] Found latest backup: ${ACTUAL_BACKUP_FILE}. Restoring to ${DOCKER_DATA_ROOT_PERSISTENT}..." > /dev/kmsg
# Extract archive directly into the new persistent data-root directory.
# IMPORTANT: Assumes the backup archive contains the contents *of* the docker data dir
# (e.g., image/, volumes/, containers/) directly at the top level.
# If the archive contains a leading directory (e.g., 'docker/' or './'), adjust the -C path or use --strip-components=1 with tar.
# Verify archive structure first if unsure, e.g., using: tar -tf "${ACTUAL_BACKUP_FILE}" | head -n 5
if tar -xzf "${ACTUAL_BACKUP_FILE}" -C "${DOCKER_DATA_ROOT_PERSISTENT}"; then
echo "[sbnb-cmds.sh] Docker data restored successfully to persistent storage." > /dev/kmsg
else
echo "[sbnb-cmds.sh] ERROR: Failed to extract Docker data from ${ACTUAL_BACKUP_FILE} to ${DOCKER_DATA_ROOT_PERSISTENT}! Docker might start fresh or with inconsistent data." > /dev/kmsg
# Clean up potentially corrupted/partial restore attempt
rm -rf "${DOCKER_DATA_ROOT_PERSISTENT:?}/"*
# Allowing Docker to start fresh might be safer.
fi
else
echo "[sbnb-cmds.sh] WARNING: Latest backup link '${LATEST_LINK}' exists but is broken or points to non-existent file. Skipping restore." > /dev/kmsg
fi
elif [ -L "${LATEST_LINK}" ]; then
echo "[sbnb-cmds.sh] Docker data directory ${DOCKER_DATA_ROOT_PERSISTENT} is not empty. Skipping restore to avoid overwrite." > /dev/kmsg
else
echo "[sbnb-cmds.sh] No latest Docker backup link found (${LATEST_LINK}). Docker will use/create data in ${DOCKER_DATA_ROOT_PERSISTENT}." > /dev/kmsg
fi
echo "[sbnb-cmds.sh] Docker data-root configuration and restore check finished." > /dev/kmsg
# --- Restart Docker Service ---
# Explicitly reload daemon config and restart Docker to apply data-root change.
# Reload ensures systemd is aware of the potentially changed daemon.json before restarting.
echo "[sbnb-cmds.sh] Reloading systemd daemon and restarting Docker service..." > /dev/kmsg
if systemctl daemon-reload && systemctl restart docker; then
echo "[sbnb-cmds.sh] Docker service restarted successfully." > /dev/kmsg
else
# --- FIX: Exit on Docker restart failure ---
echo "[sbnb-cmds.sh] ERROR: Failed to reload systemd or restart Docker service! Halting script." > /dev/kmsg
# Exit because subsequent steps might depend on Docker running correctly.
# Remove 'exit 1' below and restore comment if continuing on failure is the desired behavior.
exit 1
fi
# --- Update sbnb-dev-env.sh Script (Atomic Method) ---
# This script uses a named volume backed by a path on the persistent storage.
TARGET_DEV_ENV_SCRIPT="/usr/sbin/sbnb-dev-env.sh"
TARGET_DIR=$(dirname "${TARGET_DEV_ENV_SCRIPT}")
TMP_SCRIPT="" # Initialize temporary script variable
# Setup trap to automatically clean up the temporary file ${TMP_SCRIPT} if the script exits
# prematurely (e.g., due to an error [EXIT] or signals [HUP, INT, QUIT, TERM]).
trap 'if [ -n "${TMP_SCRIPT}" ] && [ -f "${TMP_SCRIPT}" ]; then rm -f "${TMP_SCRIPT}"; echo "[sbnb-cmds.sh] Cleaned up temporary file ${TMP_SCRIPT}" > /dev/kmsg; fi' EXIT HUP INT QUIT TERM
echo "[sbnb-cmds.sh] Attempting atomic update of ${TARGET_DEV_ENV_SCRIPT}..." > /dev/kmsg
# NOTE: This section assumes ${TARGET_DIR} is writable in the current boot environment.
# Ensure the target directory exists
if [ ! -d "${TARGET_DIR}" ]; then
echo "[sbnb-cmds.sh] ERROR: Target directory ${TARGET_DIR} does not exist. Cannot update script." > /dev/kmsg
exit 1
fi
# Create a temporary file securely in the target directory
# Using mktemp is preferred for security and avoiding collisions.
TMP_SCRIPT=$(mktemp "${TARGET_DIR}/sbnb-dev-env.sh.XXXXXX")
if [ -z "${TMP_SCRIPT}" ] || [ ! -f "${TMP_SCRIPT}" ]; then
echo "[sbnb-cmds.sh] ERROR: Failed to create temporary file in ${TARGET_DIR}." > /dev/kmsg
exit 1
fi
echo "[sbnb-cmds.sh] Created temporary file: ${TMP_SCRIPT}" > /dev/kmsg
# Write the new script content to the temporary file using a quoted here document.
# Quoting 'EOF' prevents shell expansion of variables ($VAR) inside the here document.
cat <<'EOF' > "${TMP_SCRIPT}"
#!/bin/sh
# Exit immediately if a command exits with a non-zero status.
# Treat unset variables as an error when substituting.
# Print commands and their arguments as they are executed.
# The return value of a pipeline is the status of the last command to exit with a non-zero status,
# or zero if no command exited with a non-zero status.
set -euxo pipefail
# --- Configuration ---
# Use Ubuntu 24.04 LTS as the base image.
IMAGE="ubuntu:24.04"
# Name for the development environment container.
NAME="sbnb-dev-env"
# Name for the Docker named volume for persistent data FOR THIS CONTAINER.
DATA_VOLUME_NAME="sbnb-dev-data"
# --- Specify the host path where THIS specific named volume's data should be stored ---
# --- Path updated relative to new mount point /mnt/sbnb-data ---
DATA_VOLUME_HOST_PATH="/mnt/sbnb-data/docker-volumes/${DATA_VOLUME_NAME}"
# Target directory inside the container for persistent data.
DATA_CONTAINER_DIR="/data"
# Initialization script path on the host (relative to the mounted '/' -> '/host').
INIT_SCRIPT_HOST_PATH="/usr/sbin/_sbnb-dev-env-container.sh"
# --- Check if container is already running ---
# Check if container is already running using Docker's name filter.
# -q outputs only IDs, --filter selects by name (anchored ^/name$), grep -q . checks if any output exists.
echo "Checking for existing container: ${NAME}..."
if docker ps -q --filter "name=^/${NAME}$" | grep -q .; then
echo "Attaching to existing container: ${NAME}"
# Execute tmux, creating a new session named 'sbnb-dev-env' if it doesn't exist,
# or attaching to it if it does.
docker exec -it "${NAME}" tmux new-session -A -s sbnb-dev-env
exit 0 # Exit successfully after attaching
fi
# --- Prerequisites Check ---
# --- Ensure the specific host directory FOR THIS VOLUME exists ---
# When binding a named volume to a specific host path using the 'local' driver options below,
# the target host path MUST exist beforehand.
echo "Checking if host path for volume data exists: ${DATA_VOLUME_HOST_PATH}"
# Create the specific directory for this volume if it doesn't exist
mkdir -p "${DATA_VOLUME_HOST_PATH}"
if [ $? -ne 0 ]; then
echo "Error: Failed to create host path for volume data: ${DATA_VOLUME_HOST_PATH}" >&2
exit 1
fi
echo "Host path for volume data found/created: ${DATA_VOLUME_HOST_PATH}"
# ---> IMPORTANT PERMISSIONS NOTE <---
# Ensure this host directory (${DATA_VOLUME_HOST_PATH}) has the correct permissions
# (e.g., ownership/group/mode) for the user/process running inside the container
# that needs to write to ${DATA_CONTAINER_DIR}. Docker does NOT automatically manage
# permissions on pre-existing host paths used this way. Incorrect host path permissions
# are a common cause of 'Permission denied' errors inside the container for this setup.
# Example: sudo chown $(id -u):$(id -g) ${DATA_VOLUME_HOST_PATH} (Run this manually if needed)
# Ensure the Docker named volume exists, create if not, binding it to the specified host path.
echo "Checking if Docker volume exists: ${DATA_VOLUME_NAME}"
if ! docker volume inspect "${DATA_VOLUME_NAME}" > /dev/null 2>&1; then
echo "Volume ${DATA_VOLUME_NAME} not found. Creating and binding to ${DATA_VOLUME_HOST_PATH}..."
# Create the volume using the 'local' driver with options to bind it to a specific host path.
docker volume create \
--driver local \
--opt type=none \
--opt "device=${DATA_VOLUME_HOST_PATH}" \
--opt o=bind \
"${DATA_VOLUME_NAME}"
echo "Volume ${DATA_VOLUME_NAME} created and bound to host path."
else
echo "Volume ${DATA_VOLUME_NAME} already exists."
# NOTE (Edge Case): This check assumes the existing volume is correctly bound to ${DATA_VOLUME_HOST_PATH}.
fi
# Ensure the initialization script exists on the host
# This check remains crucial as the script is still accessed via the host mount.
echo "Checking if initialization script exists: ${INIT_SCRIPT_HOST_PATH}"
if [ ! -f "${INIT_SCRIPT_HOST_PATH}" ]; then
echo "Error: Initialization script not found on host: ${INIT_SCRIPT_HOST_PATH}" >&2
exit 1 # Exit if script doesn't exist
fi
echo "Initialization script found."
# --- Create and run a new dev container ---
echo "Creating new development container: ${NAME} with image ${IMAGE}"
# Note: Docker's main data-root is now configured via daemon.json (by sbnb-cmds.sh)
# to use persistent storage. This volume mount provides specific persistent storage for this container.
docker run \
-it \
-d \
--privileged \
-v /root:/root \
-v /dev:/dev \
-v /:/host \
-v /var/run/docker.sock:/var/run/docker.sock \
-v "${DATA_VOLUME_NAME}:${DATA_CONTAINER_DIR}" \
--net=host \
--name "${NAME}" \
--rm \
--pull=always \
--ulimit nofile=262144:262144 \
"${IMAGE}" \
sleep infinity
# End of docker run command options. IMAGE and COMMAND follow.
# --- Execute initialization script inside the container ---
echo "Executing initialization script inside the container (${INIT_SCRIPT_HOST_PATH})..."
# CRITICAL: The successful setup and usability of the container environment heavily depend
# on the correct execution and content of the script located at /host${INIT_SCRIPT_HOST_PATH}.
# Ensure that script performs all necessary setup steps required within the container.
docker exec -it "${NAME}" bash "/host${INIT_SCRIPT_HOST_PATH}" # Note the path combines /host + script path
# --- Attach to the container's tmux session ---
echo "Attaching to the new container's tmux session..."
docker exec -it "${NAME}" tmux new-session -A -s sbnb-dev-env
echo "-----------------------------------------------------"
echo "Container ${NAME} is running. Initialization script executed. Attached via tmux."
echo "Specific data for this container is persisted in Docker named volume: ${DATA_VOLUME_NAME}"
echo "(Volume data is stored on the host at: ${DATA_VOLUME_HOST_PATH})"
echo "Docker main data-root is configured via /etc/docker/daemon.json."
echo "Remember to configure applications inside the container to use '${DATA_CONTAINER_DIR}' for storage."
echo "Ensure host path '${DATA_VOLUME_HOST_PATH}' has correct permissions for container processes."
echo "-----------------------------------------------------"
EOF
# Check if cat succeeded (redundant check if set -e is active, but harmless)
if [ $? -ne 0 ]; then
echo "[sbnb-cmds.sh] ERROR: Failed to write content to temporary file ${TMP_SCRIPT}." > /dev/kmsg
# Trap will handle cleanup
exit 1
fi
# Set execute permissions on the temporary file
echo "[sbnb-cmds.sh] Setting execute permissions on ${TMP_SCRIPT}..." > /dev/kmsg
if ! chmod +x "${TMP_SCRIPT}"; then
echo "[sbnb-cmds.sh] ERROR: Failed to set execute permissions on temporary file ${TMP_SCRIPT}." > /dev/kmsg
# Trap will handle cleanup
exit 1 # Exit: script must be executable
fi
# Atomically replace the target script with the temporary file
# mv is atomic when moving files within the same filesystem.
echo "[sbnb-cmds.sh] Atomically replacing ${TARGET_DEV_ENV_SCRIPT} with ${TMP_SCRIPT}..." > /dev/kmsg
if ! mv "${TMP_SCRIPT}" "${TARGET_DEV_ENV_SCRIPT}"; then
echo "[sbnb-cmds.sh] ERROR: Failed to move temporary file ${TMP_SCRIPT} to ${TARGET_DEV_ENV_SCRIPT}." > /dev/kmsg
# Trap will handle cleanup of TMP_SCRIPT if it still exists
exit 1
fi
# If mv succeeds, the temporary file no longer exists under its old name.
# Clear TMP_SCRIPT variable so the trap doesn't try to remove the (now moved) file.
TMP_SCRIPT=""
echo "[sbnb-cmds.sh] Successfully updated ${TARGET_DEV_ENV_SCRIPT}." > /dev/kmsg
echo "[sbnb-cmds.sh] Update of ${TARGET_DEV_ENV_SCRIPT} finished." > /dev/kmsg
# --- Enable Systemd Units for Backup/Purge ---
# --- Path updated relative to new mount point ---
SYSTEMD_SOURCE_DIR="/mnt/sbnb/systemd" # Units stored on data partition
echo "[sbnb-cmds.sh] Enabling custom systemd units for Docker backup/purge (Source: ${SYSTEMD_SOURCE_DIR})..." > /dev/kmsg
SYSTEMD_TARGET_DIR="/etc/systemd/system"
TIMERS_WANTS_DIR="${SYSTEMD_TARGET_DIR}/timers.target.wants"
# Ensure systemd directories exist in the ephemeral overlay filesystem
mkdir -p "${SYSTEMD_TARGET_DIR}"
mkdir -p "${TIMERS_WANTS_DIR}"
# Check if source directory with unit files exists on persistent storage
if [ -d "${SYSTEMD_SOURCE_DIR}" ]; then
# Symlink the unit files from persistent storage to the ephemeral systemd directory.
# Use -f to force overwrite if links already exist.
ln -sf "${SYSTEMD_SOURCE_DIR}/docker-backup.service" "${SYSTEMD_TARGET_DIR}/"
ln -sf "${SYSTEMD_SOURCE_DIR}/docker-shutdown-backup.service" "${SYSTEMD_TARGET_DIR}/"
ln -sf "${SYSTEMD_SOURCE_DIR}/docker-purge.service" "${SYSTEMD_TARGET_DIR}/"
ln -sf "${SYSTEMD_SOURCE_DIR}/docker-backup.timer" "${SYSTEMD_TARGET_DIR}/" # Link base timer unit too
ln -sf "${SYSTEMD_SOURCE_DIR}/docker-purge.timer" "${SYSTEMD_TARGET_DIR}/" # Link base timer unit too
# Link timer units into timers.target.wants to ensure they are started by systemd
ln -sf "${SYSTEMD_SOURCE_DIR}/docker-backup.timer" "${TIMERS_WANTS_DIR}/"
ln -sf "${SYSTEMD_SOURCE_DIR}/docker-purge.timer" "${TIMERS_WANTS_DIR}/"
# --- FIX: Removed redundant systemd daemon-reload ---
# Reloading systemd configuration is necessary after linking new unit files or changing configs like daemon.json.
# The reload before 'docker restart' already handled the daemon.json change awareness.
# While reloading again here is safe, it's likely redundant unless the custom units have complex dependencies resolved by the reload.
# systemctl daemon-reload
# Explicitly enable the units. This creates necessary symlinks for boot/shutdown targets.
systemctl enable docker-backup.timer docker-purge.timer docker-shutdown-backup.service
echo "[sbnb-cmds.sh] Systemd units for backup linked and enabled." > /dev/kmsg
else
echo "[sbnb-cmds.sh] WARNING: Systemd source directory ${SYSTEMD_SOURCE_DIR} not found. Cannot enable backup units." > /dev/kmsg
fi
# --- Script Finish Logging ---
echo "[sbnb-cmds.sh] Finished custom boot commands." > /dev/kmsg
# Clear trap on successful exit to prevent it from running unnecessarily.
trap - EXIT HUP INT QUIT TERM
exit 0" data-download-link data-download-label="Download Shell">
#!/bin/sh
# Sbnb Custom Commands Script
# Mounts persistent data partition at /mnt/sbnb-data, configures Docker data-root on it,
# restores backup, restarts Docker, updates dev env script atomically, enables backup units.
# Exit immediately if a command exits with a non-zero status. Crucial for boot scripts.
# Use pipefail so a failure anywhere in a pipeline is detected. Note: pipefail is
# not required by POSIX sh; it works under bash and BusyBox ash, but not dash.
set -e -o pipefail
# --- Script Start Logging ---
echo "[sbnb-cmds.sh] Starting custom boot commands..." > /dev/kmsg
# --- Mount Persistent Data Partition ---
DATA_LABEL="SBNB_DATA"
DATA_DEVICE_SYMLINK="/dev/disk/by-label/${DATA_LABEL}"
# --- NEW Mount Point ---
DATA_MOUNT_POINT="/mnt/sbnb-data"
MAX_WAIT_SECONDS=15
WAIT_INTERVAL=1
elapsed_time=0
echo "[sbnb-cmds.sh] Waiting up to ${MAX_WAIT_SECONDS}s for data device (Label: ${DATA_LABEL})..." > /dev/kmsg
# Wait for the device symlink to appear
while [ ! -e "${DATA_DEVICE_SYMLINK}" ]; do
if [ ${elapsed_time} -ge ${MAX_WAIT_SECONDS} ]; then
echo "[sbnb-cmds.sh] ERROR: Timeout waiting for device ${DATA_DEVICE_SYMLINK}. Persistent data cannot be mounted." > /dev/kmsg
exit 1 # Exit: cannot proceed without data partition
fi
sleep ${WAIT_INTERVAL}
elapsed_time=$((elapsed_time + WAIT_INTERVAL))
done
# Resolve the actual device node using readlink -f for canonical path
DATA_DEVICE=$(readlink -f "${DATA_DEVICE_SYMLINK}")
echo "[sbnb-cmds.sh] Data partition device resolved to ${DATA_DEVICE} after ${elapsed_time}s." > /dev/kmsg
# Create the mount point directory; mkdir -p also creates any missing parent
# directories and succeeds silently if the directory already exists.
mkdir -p "${DATA_MOUNT_POINT}"
echo "[sbnb-cmds.sh] Attempting to mount ${DATA_DEVICE} at ${DATA_MOUNT_POINT}..." > /dev/kmsg
# Mount read-write. noatime/nodiratime improve performance by reducing metadata writes.
if mount -o rw,noatime,nodiratime "${DATA_DEVICE}" "${DATA_MOUNT_POINT}"; then
echo "[sbnb-cmds.sh] Successfully mounted persistent partition at ${DATA_MOUNT_POINT}." > /dev/kmsg
else
echo "[sbnb-cmds.sh] ERROR: Failed to mount ${DATA_DEVICE} at ${DATA_MOUNT_POINT}!" > /dev/kmsg
exit 1 # Exit: cannot proceed without data partition mounted
fi
# --- Configure Docker data-root and Restore Data ---
echo "[sbnb-cmds.sh] Configuring Docker data-root and checking restore..." > /dev/kmsg
# Define the NEW location for Docker's root data directory on the persistent partition
# --- Path updated relative to new mount point ---
DOCKER_DATA_ROOT_PERSISTENT="${DATA_MOUNT_POINT}/docker-root"
# Define the standard location for Docker's configuration file
DOCKER_CONFIG_DIR="/etc/docker"
DOCKER_CONFIG_FILE="${DOCKER_CONFIG_DIR}/daemon.json"
# Define backup location
# --- Path updated relative to new mount point ---
BACKUP_DIR="${DATA_MOUNT_POINT}/backups/docker" # Path on the mounted data partition
LATEST_LINK="${BACKUP_DIR}/docker_latest.tar.gz"
# 1. Ensure the new Docker data-root directory exists on the persistent partition
echo "[sbnb-cmds.sh] Ensuring Docker data directory exists: ${DOCKER_DATA_ROOT_PERSISTENT}" > /dev/kmsg
# Use an explicit 'if !' test: under 'set -e', a bare mkdir failure would abort
# the script before a separate $? check could ever run.
if ! mkdir -p "${DOCKER_DATA_ROOT_PERSISTENT}"; then
echo "[sbnb-cmds.sh] ERROR: Failed to create persistent Docker data directory ${DOCKER_DATA_ROOT_PERSISTENT}!" > /dev/kmsg
exit 1
fi
# 2. Create/Update Docker daemon configuration to use the new data-root
echo "[sbnb-cmds.sh] Configuring Docker daemon (${DOCKER_CONFIG_FILE}) to use data-root: ${DOCKER_DATA_ROOT_PERSISTENT}" > /dev/kmsg
mkdir -p "${DOCKER_CONFIG_DIR}"
# Create a minimal daemon.json setting the data-root.
# WARNING: This command OVERWRITES any existing ${DOCKER_CONFIG_FILE}.
# If you have other custom daemon settings (log drivers, storage opts, mirrors, etc.),
# they will be LOST. Backup the existing file first if unsure (e.g., cp ${DOCKER_CONFIG_FILE} ${DOCKER_CONFIG_FILE}.bak).
# For merging settings, consider using 'jq' if available in this environment.
if ! printf '{\n "data-root": "%s"\n}\n' "${DOCKER_DATA_ROOT_PERSISTENT}" > "${DOCKER_CONFIG_FILE}"; then
echo "[sbnb-cmds.sh] ERROR: Failed to write Docker config file ${DOCKER_CONFIG_FILE}!" > /dev/kmsg
exit 1
fi
echo "[sbnb-cmds.sh] Docker daemon configuration updated." > /dev/kmsg
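# Example (manual, hypothetical): to merge data-root into an existing daemon.json
# instead of overwriting it, a jq one-liner such as the following could be used,
# assuming jq is available (it is NOT guaranteed to be in the base Sbnb runtime):
#   jq --arg root "${DOCKER_DATA_ROOT_PERSISTENT}" '. + {"data-root": $root}' \
#     "${DOCKER_CONFIG_FILE}" > "${DOCKER_CONFIG_FILE}.tmp" \
#     && mv "${DOCKER_CONFIG_FILE}.tmp" "${DOCKER_CONFIG_FILE}"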
# 3. Restore Docker data INTO the NEW persistent location (if backup exists)
echo "[sbnb-cmds.sh] Checking for Docker backup..." > /dev/kmsg
# Check if the target directory is empty before attempting restore.
# This prevents accidentally overwriting existing data if restore is run multiple times without cleanup.
if [ -L "${LATEST_LINK}" ] && [ -z "$(ls -A "${DOCKER_DATA_ROOT_PERSISTENT}")" ]; then
# Resolve the symlink and verify both that resolution succeeded and that the
# target file actually exists. (Testing $? after a separate assignment would be
# unreachable on failure while 'set -e' is active.)
if ACTUAL_BACKUP_FILE=$(readlink -f "${LATEST_LINK}") && [ -f "${ACTUAL_BACKUP_FILE}" ]; then
echo "[sbnb-cmds.sh] Found latest backup: ${ACTUAL_BACKUP_FILE}. Restoring to ${DOCKER_DATA_ROOT_PERSISTENT}..." > /dev/kmsg
# Extract archive directly into the new persistent data-root directory.
# IMPORTANT: Assumes the backup archive contains the contents *of* the docker data dir
# (e.g., image/, volumes/, containers/) directly at the top level.
# If the archive contains a leading directory (e.g., 'docker/' or './'), adjust the -C path or use --strip-components=1 with tar.
# Verify archive structure first if unsure, e.g., using: tar -tf "${ACTUAL_BACKUP_FILE}" | head -n 5
if tar -xzf "${ACTUAL_BACKUP_FILE}" -C "${DOCKER_DATA_ROOT_PERSISTENT}"; then
echo "[sbnb-cmds.sh] Docker data restored successfully to persistent storage." > /dev/kmsg
else
echo "[sbnb-cmds.sh] ERROR: Failed to extract Docker data from ${ACTUAL_BACKUP_FILE} to ${DOCKER_DATA_ROOT_PERSISTENT}! Docker might start fresh or with inconsistent data." > /dev/kmsg
# Clean up potentially corrupted/partial restore attempt
rm -rf "${DOCKER_DATA_ROOT_PERSISTENT:?}/"*
# Allowing Docker to start fresh might be safer.
fi
else
echo "[sbnb-cmds.sh] WARNING: Latest backup link '${LATEST_LINK}' exists but is broken or points to non-existent file. Skipping restore." > /dev/kmsg
fi
elif [ -L "${LATEST_LINK}" ]; then
echo "[sbnb-cmds.sh] Docker data directory ${DOCKER_DATA_ROOT_PERSISTENT} is not empty. Skipping restore to avoid overwrite." > /dev/kmsg
else
echo "[sbnb-cmds.sh] No latest Docker backup link found (${LATEST_LINK}). Docker will use/create data in ${DOCKER_DATA_ROOT_PERSISTENT}." > /dev/kmsg
fi
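# For reference (run manually or from a backup job - NOT at boot): an archive
# compatible with the restore above keeps the data-root *contents* at the
# archive's top level, which 'tar -C <dir> .' produces. A sketch:
#   systemctl stop docker   # quiesce the daemon for a consistent snapshot
#   mkdir -p "${BACKUP_DIR}"
#   tar -czf "${BACKUP_DIR}/docker_backup.tar.gz" -C "${DOCKER_DATA_ROOT_PERSISTENT}" .
#   ln -sf "${BACKUP_DIR}/docker_backup.tar.gz" "${LATEST_LINK}"
#   systemctl start docker
# (The filename 'docker_backup.tar.gz' is illustrative; the actual backup units
# likely use timestamped names.)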
echo "[sbnb-cmds.sh] Docker data-root configuration and restore check finished." > /dev/kmsg
# --- Restart Docker Service ---
# Explicitly reload daemon config and restart Docker to apply data-root change.
# Reload ensures systemd is aware of the potentially changed daemon.json before restarting.
echo "[sbnb-cmds.sh] Reloading systemd daemon and restarting Docker service..." > /dev/kmsg
if systemctl daemon-reload && systemctl restart docker; then
echo "[sbnb-cmds.sh] Docker service restarted successfully." > /dev/kmsg
else
# --- FIX: Exit on Docker restart failure ---
echo "[sbnb-cmds.sh] ERROR: Failed to reload systemd or restart Docker service! Halting script." > /dev/kmsg
# Exit because subsequent steps depend on Docker running correctly.
# (Replace 'exit 1' with a logged warning if continuing on failure is preferred.)
exit 1
fi
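# Optional manual check: confirm the daemon picked up the new data-root, e.g.
#   docker info --format '{{ .DockerRootDir }}'
# which should print the value of ${DOCKER_DATA_ROOT_PERSISTENT}.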
# --- Update sbnb-dev-env.sh Script (Atomic Method) ---
# This script uses a named volume backed by a path on the persistent storage.
TARGET_DEV_ENV_SCRIPT="/usr/sbin/sbnb-dev-env.sh"
TARGET_DIR=$(dirname "${TARGET_DEV_ENV_SCRIPT}")
TMP_SCRIPT="" # Initialize temporary script variable
# Setup trap to automatically clean up the temporary file ${TMP_SCRIPT} if the script exits
# prematurely (e.g., due to an error [EXIT] or signals [HUP, INT, QUIT, TERM]).
trap 'if [ -n "${TMP_SCRIPT}" ] && [ -f "${TMP_SCRIPT}" ]; then rm -f "${TMP_SCRIPT}"; echo "[sbnb-cmds.sh] Cleaned up temporary file ${TMP_SCRIPT}" > /dev/kmsg; fi' EXIT HUP INT QUIT TERM
echo "[sbnb-cmds.sh] Attempting atomic update of ${TARGET_DEV_ENV_SCRIPT}..." > /dev/kmsg
# NOTE: This section assumes ${TARGET_DIR} is writable in the current boot environment.
# Ensure the target directory exists
if [ ! -d "${TARGET_DIR}" ]; then
echo "[sbnb-cmds.sh] ERROR: Target directory ${TARGET_DIR} does not exist. Cannot update script." > /dev/kmsg
exit 1
fi
# Create a temporary file securely in the target directory
# Using mktemp is preferred for security and avoiding collisions.
TMP_SCRIPT=$(mktemp "${TARGET_DIR}/sbnb-dev-env.sh.XXXXXX")
if [ -z "${TMP_SCRIPT}" ] || [ ! -f "${TMP_SCRIPT}" ]; then
echo "[sbnb-cmds.sh] ERROR: Failed to create temporary file in ${TARGET_DIR}." > /dev/kmsg
exit 1
fi
echo "[sbnb-cmds.sh] Created temporary file: ${TMP_SCRIPT}" > /dev/kmsg
# Write the new script content to the temporary file using a quoted here document.
# Quoting 'EOF' prevents shell expansion of variables ($VAR) inside the here document.
cat <<'EOF' > "${TMP_SCRIPT}"
#!/bin/sh
# -e: exit immediately on error; -u: treat unset variables as errors;
# -x: print commands as they are executed; -o pipefail: a pipeline fails if
# any command in it fails. Note: pipefail is not POSIX; this relies on a shell
# that supports it (bash or BusyBox ash - it will not work under dash).
set -euxo pipefail
# --- Configuration ---
# Use Ubuntu 24.04 LTS as the base image.
IMAGE="ubuntu:24.04"
# Name for the development environment container.
NAME="sbnb-dev-env"
# Name for the Docker named volume for persistent data FOR THIS CONTAINER.
DATA_VOLUME_NAME="sbnb-dev-data"
# --- Specify the host path where THIS specific named volume's data should be stored ---
# --- Path updated relative to new mount point /mnt/sbnb-data ---
DATA_VOLUME_HOST_PATH="/mnt/sbnb-data/docker-volumes/${DATA_VOLUME_NAME}"
# Target directory inside the container for persistent data.
DATA_CONTAINER_DIR="/data"
# Initialization script path on the host (relative to the mounted '/' -> '/host').
INIT_SCRIPT_HOST_PATH="/usr/sbin/_sbnb-dev-env-container.sh"
# --- Check if container is already running ---
# Check if container is already running using Docker's name filter.
# -q prints only container IDs; --filter name= takes an anchored regex (container
# names are matched without a leading slash); grep -q . succeeds if any ID was printed.
echo "Checking for existing container: ${NAME}..."
if docker ps -q --filter "name=^${NAME}$" | grep -q .; then
echo "Attaching to existing container: ${NAME}"
# Execute tmux, creating a new session named 'sbnb-dev-env' if it doesn't exist,
# or attaching to it if it does.
docker exec -it "${NAME}" tmux new-session -A -s sbnb-dev-env
exit 0 # Exit successfully after attaching
fi
# --- Prerequisites Check ---
# --- Ensure the specific host directory FOR THIS VOLUME exists ---
# When binding a named volume to a specific host path using the 'local' driver options below,
# the target host path MUST exist beforehand.
echo "Checking if host path for volume data exists: ${DATA_VOLUME_HOST_PATH}"
# Create the specific directory for this volume if it doesn't exist.
# ('if !' keeps the error message reachable under 'set -e'.)
if ! mkdir -p "${DATA_VOLUME_HOST_PATH}"; then
echo "Error: Failed to create host path for volume data: ${DATA_VOLUME_HOST_PATH}" >&2
exit 1
fi
echo "Host path for volume data found/created: ${DATA_VOLUME_HOST_PATH}"
# ---> IMPORTANT PERMISSIONS NOTE <---
# Ensure this host directory (${DATA_VOLUME_HOST_PATH}) has the correct permissions
# (e.g., ownership/group/mode) for the user/process running inside the container
# that needs to write to ${DATA_CONTAINER_DIR}. Docker does NOT automatically manage
# permissions on pre-existing host paths used this way. Incorrect host path permissions
# are a common cause of 'Permission denied' errors inside the container for this setup.
# Example: sudo chown $(id -u):$(id -g) ${DATA_VOLUME_HOST_PATH} (Run this manually if needed)
# Ensure the Docker named volume exists, create if not, binding it to the specified host path.
echo "Checking if Docker volume exists: ${DATA_VOLUME_NAME}"
if ! docker volume inspect "${DATA_VOLUME_NAME}" > /dev/null 2>&1; then
echo "Volume ${DATA_VOLUME_NAME} not found. Creating and binding to ${DATA_VOLUME_HOST_PATH}..."
# Create the volume using the 'local' driver with options to bind it to a specific host path.
docker volume create \
--driver local \
--opt type=none \
--opt "device=${DATA_VOLUME_HOST_PATH}" \
--opt o=bind \
"${DATA_VOLUME_NAME}"
echo "Volume ${DATA_VOLUME_NAME} created and bound to host path."
else
echo "Volume ${DATA_VOLUME_NAME} already exists."
# NOTE (Edge Case): This check assumes the existing volume is correctly bound to ${DATA_VOLUME_HOST_PATH}.
fi
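# Manual check for the edge case above: confirm where an existing volume is
# actually bound, e.g.
#   docker volume inspect -f '{{ .Options.device }}' "${DATA_VOLUME_NAME}"
# which should print ${DATA_VOLUME_HOST_PATH}; if it does not, remove the volume
# ('docker volume rm') and rerun this script to recreate the binding.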
# Ensure the initialization script exists on the host
# This check remains crucial as the script is still accessed via the host mount.
echo "Checking if initialization script exists: ${INIT_SCRIPT_HOST_PATH}"
if [ ! -f "${INIT_SCRIPT_HOST_PATH}" ]; then
echo "Error: Initialization script not found on host: ${INIT_SCRIPT_HOST_PATH}" >&2
exit 1 # Exit if script doesn't exist
fi
echo "Initialization script found."
# --- Create and run a new dev container ---
echo "Creating new development container: ${NAME} with image ${IMAGE}"
# Note: Docker's main data-root is now configured via daemon.json (by sbnb-cmds.sh)
# to use persistent storage. This volume mount provides specific persistent storage for this container.
docker run \
-it \
-d \
--privileged \
-v /root:/root \
-v /dev:/dev \
-v /:/host \
-v /var/run/docker.sock:/var/run/docker.sock \
-v "${DATA_VOLUME_NAME}:${DATA_CONTAINER_DIR}" \
--net=host \
--name "${NAME}" \
--rm \
--pull=always \
--ulimit nofile=262144:262144 \
"${IMAGE}" \
sleep infinity
# --- Execute initialization script inside the container ---
echo "Executing initialization script inside the container (${INIT_SCRIPT_HOST_PATH})..."
# CRITICAL: The successful setup and usability of the container environment heavily depend
# on the correct execution and content of the script located at /host${INIT_SCRIPT_HOST_PATH}.
# Ensure that script performs all necessary setup steps required within the container.
docker exec -it "${NAME}" bash "/host${INIT_SCRIPT_HOST_PATH}" # Note the path combines /host + script path
# --- Attach to the container's tmux session ---
echo "Attaching to the new container's tmux session..."
docker exec -it "${NAME}" tmux new-session -A -s sbnb-dev-env
echo "-----------------------------------------------------"
echo "Container ${NAME} is running. Initialization script executed. Attached via tmux."
echo "Specific data for this container is persisted in Docker named volume: ${DATA_VOLUME_NAME}"
echo "(Volume data is stored on the host at: ${DATA_VOLUME_HOST_PATH})"
echo "Docker main data-root is configured via /etc/docker/daemon.json."
echo "Remember to configure applications inside the container to use '${DATA_CONTAINER_DIR}' for storage."
echo "Ensure host path '${DATA_VOLUME_HOST_PATH}' has correct permissions for container processes."
echo "-----------------------------------------------------"
EOF
# Check if cat succeeded (redundant check if set -e is active, but harmless)
if [ $? -ne 0 ]; then
echo "[sbnb-cmds.sh] ERROR: Failed to write content to temporary file ${TMP_SCRIPT}." > /dev/kmsg
# Trap will handle cleanup
exit 1
fi
# Set execute permissions on the temporary file
echo "[sbnb-cmds.sh] Setting execute permissions on ${TMP_SCRIPT}..." > /dev/kmsg
if ! chmod +x "${TMP_SCRIPT}"; then
echo "[sbnb-cmds.sh] ERROR: Failed to set execute permissions on temporary file ${TMP_SCRIPT}." > /dev/kmsg
# Trap will handle cleanup
exit 1 # Exit: script must be executable
fi
# Atomically replace the target script with the temporary file
# mv is atomic when moving files within the same filesystem.
echo "[sbnb-cmds.sh] Atomically replacing ${TARGET_DEV_ENV_SCRIPT} with ${TMP_SCRIPT}..." > /dev/kmsg
if ! mv "${TMP_SCRIPT}" "${TARGET_DEV_ENV_SCRIPT}"; then
echo "[sbnb-cmds.sh] ERROR: Failed to move temporary file ${TMP_SCRIPT} to ${TARGET_DEV_ENV_SCRIPT}." > /dev/kmsg
# Trap will handle cleanup of TMP_SCRIPT if it still exists
exit 1
fi
# If mv succeeds, the temporary file no longer exists under its old name.
# Clear TMP_SCRIPT variable so the trap doesn't try to remove the (now moved) file.
TMP_SCRIPT=""
echo "[sbnb-cmds.sh] Successfully updated ${TARGET_DEV_ENV_SCRIPT}." > /dev/kmsg
# --- Enable Systemd Units for Backup/Purge ---
# --- Path on the persistent data partition (under the new mount point) ---
SYSTEMD_SOURCE_DIR="${DATA_MOUNT_POINT}/systemd" # Unit files stored on the data partition
echo "[sbnb-cmds.sh] Enabling custom systemd units for Docker backup/purge (Source: ${SYSTEMD_SOURCE_DIR})..." > /dev/kmsg
SYSTEMD_TARGET_DIR="/etc/systemd/system"
TIMERS_WANTS_DIR="${SYSTEMD_TARGET_DIR}/timers.target.wants"
# Ensure systemd directories exist in the ephemeral overlay filesystem
mkdir -p "${SYSTEMD_TARGET_DIR}"
mkdir -p "${TIMERS_WANTS_DIR}"
# Check whether the source directory with unit files exists on persistent storage
if [ -d "${SYSTEMD_SOURCE_DIR}" ]; then
    # Symlink the unit files from persistent storage into the ephemeral
    # systemd directory. Use -f to force overwrite if links already exist.
    ln -sf "${SYSTEMD_SOURCE_DIR}/docker-backup.service" "${SYSTEMD_TARGET_DIR}/"
    ln -sf "${SYSTEMD_SOURCE_DIR}/docker-shutdown-backup.service" "${SYSTEMD_TARGET_DIR}/"
    ln -sf "${SYSTEMD_SOURCE_DIR}/docker-purge.service" "${SYSTEMD_TARGET_DIR}/"
    ln -sf "${SYSTEMD_SOURCE_DIR}/docker-backup.timer" "${SYSTEMD_TARGET_DIR}/" # Link base timer unit too
    ln -sf "${SYSTEMD_SOURCE_DIR}/docker-purge.timer" "${SYSTEMD_TARGET_DIR}/"  # Link base timer unit too
    # Link timer units into timers.target.wants so systemd starts them
    ln -sf "${SYSTEMD_SOURCE_DIR}/docker-backup.timer" "${TIMERS_WANTS_DIR}/"
    ln -sf "${SYSTEMD_SOURCE_DIR}/docker-purge.timer" "${TIMERS_WANTS_DIR}/"
    # A second 'systemctl daemon-reload' is intentionally omitted here: the
    # reload performed before 'docker restart' already made systemd aware of
    # the changed configuration, so reloading again would be safe but redundant.
    # systemctl daemon-reload
    # Explicitly enable the units. This creates the symlinks needed for the
    # boot/shutdown targets.
    systemctl enable docker-backup.timer docker-purge.timer docker-shutdown-backup.service
    echo "[sbnb-cmds.sh] Systemd units for backup linked and enabled." > /dev/kmsg
else
    echo "[sbnb-cmds.sh] WARNING: Systemd source directory ${SYSTEMD_SOURCE_DIR} not found. Cannot enable backup units." > /dev/kmsg
fi
# --- Script Finish Logging ---
echo "[sbnb-cmds.sh] Finished custom boot commands." > /dev/kmsg
# Clear trap on successful exit to prevent it from running unnecessarily.
trap - EXIT HUP INT QUIT TERM
exit 0
</code>
</section>
sbnb-tskey.txt
tskey-auth-…
scripts\backup-docker.sh
```bash
#!/bin/sh
# File: /mnt/sbnb-data/scripts/backup-docker.sh
# Script to stop Docker, create a backup, and restart Docker.
# Assumes tools like tar, gzip, systemctl, ln, mv, date, sleep, mkdir, nice are available.
set -e # Exit on error

BACKUP_DIR="/mnt/sbnb-data/backups/docker"
DOCKER_DATA_DIR="/var/lib/docker"
TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
BACKUP_FILE="${BACKUP_DIR}/docker_backup_${TIMESTAMP}.tar.gz"
LATEST_LINK="${BACKUP_DIR}/docker_latest.tar.gz" # Symlink to the latest backup

echo "[backup-docker.sh] Starting Docker backup process..." > /dev/kmsg

# Ensure the backup directory exists. (With 'set -e' active, a separate '$?'
# check after mkdir would be dead code; test the command directly instead.)
if ! mkdir -p "${BACKUP_DIR}"; then
    echo "[backup-docker.sh] ERROR: Failed to create backup directory ${BACKUP_DIR}!" > /dev/kmsg
    exit 1
fi

# Stop Docker gracefully
echo "[backup-docker.sh] Stopping Docker service..." > /dev/kmsg
if systemctl is-active --quiet docker.service; then
    if ! systemctl stop docker.service; then
        echo "[backup-docker.sh] WARNING: Failed to stop Docker service gracefully. Proceeding with backup cautiously." > /dev/kmsg
    else
        # Small delay to ensure Docker processes have terminated
        sleep 5
    fi
else
    echo "[backup-docker.sh] Docker service already stopped." > /dev/kmsg
fi

# Create the compressed backup
echo "[backup-docker.sh] Creating backup archive: ${BACKUP_FILE}" > /dev/kmsg
if [ -d "${DOCKER_DATA_DIR}" ]; then
    # Create an archive containing paths relative to /var/lib (i.e. 'docker/...').
    # 'nice' lowers CPU priority (remove 'nice -n 19' if the command is missing);
    # ionice was removed as it is unavailable in this environment.
    if nice -n 19 tar -czf "${BACKUP_FILE}" -C /var/lib docker; then
        echo "[backup-docker.sh] Backup created successfully: ${BACKUP_FILE}" > /dev/kmsg
        # Update the 'latest' symlink atomically. Tested directly so a failure
        # reaches the warning branch instead of aborting via 'set -e'.
        if ln -sfT "${BACKUP_FILE}" "${LATEST_LINK}.tmp" && mv -Tf "${LATEST_LINK}.tmp" "${LATEST_LINK}"; then
            echo "[backup-docker.sh] Updated latest backup link to point to ${BACKUP_FILE}" > /dev/kmsg
        else
            echo "[backup-docker.sh] WARNING: Failed to update latest backup link." > /dev/kmsg
        fi
    else
        echo "[backup-docker.sh] ERROR: tar command failed! Backup not created." > /dev/kmsg
        # Fall through to restart Docker even though the backup failed
    fi
else
    echo "[backup-docker.sh] WARNING: Docker data directory ${DOCKER_DATA_DIR} not found. Skipping backup." > /dev/kmsg
fi

# Restart Docker
echo "[backup-docker.sh] Starting Docker service..." > /dev/kmsg
if ! systemctl start docker.service; then
    echo "[backup-docker.sh] WARNING: Failed to start Docker service after backup attempt." > /dev/kmsg
fi

echo "[backup-docker.sh] Docker backup process finished." > /dev/kmsg
exit 0
```
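The `ln -sfT` plus `mv -Tf` pair used above is the standard trick for updating a symlink atomically: the new link is built under a temporary name, then renamed over the final name, so a reader of the link never observes a missing or half-updated entry. A minimal sketch on throwaway files (all paths here are hypothetical stand-ins, GNU coreutils assumed):

```sh
# Atomic 'latest' symlink update, demonstrated on a temp directory.
DEMO=$(mktemp -d)
: > "${DEMO}/docker_backup_20240101_000000.tar.gz"   # stand-in archive
LATEST="${DEMO}/docker_latest.tar.gz"
ln -sfT "${DEMO}/docker_backup_20240101_000000.tar.gz" "${LATEST}.tmp"
mv -Tf "${LATEST}.tmp" "${LATEST}"                   # rename() over the target
TARGET=$(readlink "${LATEST}")
echo "${TARGET}"
rm -rf "${DEMO}"
```

A plain `ln -sf` to the final name would briefly remove and recreate the link; the rename-based variant swaps it in one step.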
scripts\purge-docker-backups.sh
```bash
#!/bin/sh
# File: /mnt/sbnb-data/scripts/purge-docker-backups.sh
# Script to remove old Docker backups, keeping the last N.
# Assumes tools like find, sort, head, cut, xargs, rm, wc, mkdir, echo are available.
set -e # Exit on error

BACKUP_DIR="/mnt/sbnb-data/backups/docker"
KEEP_COUNT=3 # Number of backups to keep (adjust as needed)

echo "[purge-docker-backups.sh] Purging old Docker backups in ${BACKUP_DIR}, keeping ${KEEP_COUNT}..." > /dev/kmsg

# Ensure the backup directory exists. (Tested directly; with 'set -e' a
# separate '$?' check would never run.)
if ! mkdir -p "${BACKUP_DIR}"; then
    echo "[purge-docker-backups.sh] ERROR: Failed to ensure backup directory ${BACKUP_DIR} exists!" > /dev/kmsg
    exit 1
fi

# Count existing backups (only files matching the pattern).
# 'find ... -print | wc -l' is safer than parsing ls output.
backup_count=$(find "${BACKUP_DIR}" -maxdepth 1 -name 'docker_backup_*.tar.gz' -type f -print | wc -l)

if [ "${backup_count}" -gt "${KEEP_COUNT}" ]; then
    # List backup files by modification time (oldest first), compute how
    # many to delete, and delete them.
    to_delete_count=$(( backup_count - KEEP_COUNT ))
    echo "[purge-docker-backups.sh] Found ${backup_count} backups. Deleting ${to_delete_count} oldest ones." > /dev/kmsg
    # Use NUL separators (-print0 style via '\0', sort/head/cut -z, xargs -0)
    # for safety with filenames containing special characters. The pipeline is
    # tested directly so a failure logs a warning instead of aborting the
    # script via 'set -e' -- purge failure is not critical to system operation.
    if find "${BACKUP_DIR}" -maxdepth 1 -name 'docker_backup_*.tar.gz' -type f -printf '%T@ %p\0' | \
        sort -zn | \
        head -zn "${to_delete_count}" | \
        cut -z -d' ' -f2- | \
        xargs -0 -r rm -v -- ; then # -r: skip rm entirely if head outputs nothing
        echo "[purge-docker-backups.sh] Purge completed." > /dev/kmsg
    else
        echo "[purge-docker-backups.sh] WARNING: Purge command finished with errors (check rm output above)." > /dev/kmsg
    fi
else
    echo "[purge-docker-backups.sh] ${backup_count} backups found, which is less than or equal to ${KEEP_COUNT}. No backups purged." > /dev/kmsg
fi
exit 0
```
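The retention pipeline can be sanity-checked without touching real backups by running it against empty throwaway files (GNU `find`/`sort`/`head`/`cut` assumed, exactly as in the script; the directory and dates below are made up for the demo):

```sh
# Create 5 fake backups with distinct mtimes, purge down to KEEP_COUNT=3.
DEMO=$(mktemp -d)
KEEP_COUNT=3
i=1
while [ "$i" -le 5 ]; do
    f="${DEMO}/docker_backup_2024010${i}_000000.tar.gz"
    : > "$f"
    touch -d "2024-01-0${i}" "$f"   # oldest first: Jan 1 .. Jan 5
    i=$((i + 1))
done
count=$(find "${DEMO}" -maxdepth 1 -name 'docker_backup_*.tar.gz' -type f | wc -l)
to_delete=$(( count - KEEP_COUNT ))
# Same pipeline as purge-docker-backups.sh: sort by mtime, take the oldest,
# strip the timestamp column, remove them.
find "${DEMO}" -maxdepth 1 -name 'docker_backup_*.tar.gz' -type f -printf '%T@ %p\0' \
    | sort -zn | head -zn "${to_delete}" | cut -z -d' ' -f2- | xargs -0 -r rm --
REMAINING=$(find "${DEMO}" -maxdepth 1 -name 'docker_backup_*.tar.gz' -type f | wc -l)
echo "${REMAINING}"
rm -rf "${DEMO}"
```

After the run, only the three newest fake archives remain, which mirrors what the timer-driven purge does on the data partition.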
systemd\docker-backup.service
# File: /mnt/sbnb-data/systemd/docker-backup.service
# Service unit to run the backup script
[Unit]
Description=Backup Docker Data to Persistent Storage
# Note: systemd escapes '-' inside path components, so the mount unit for
# /mnt/sbnb-data is named mnt-sbnb\x2ddata.mount (see systemd-escape -p).
Requires=mnt-sbnb\x2ddata.mount docker.service
After=mnt-sbnb\x2ddata.mount docker.service

[Service]
Type=oneshot
# Run the backup script stored on the persistent data partition
ExecStart=/mnt/sbnb-data/scripts/backup-docker.sh
systemd\docker-backup.timer
# File: /mnt/sbnb-data/systemd/docker-backup.timer
# Timer unit to trigger the backup service daily at 5 AM
[Unit]
Description=Daily Docker Backup Timer
[Timer]
# Run daily at 5 AM system time
OnCalendar=*-*-* 05:00:00
AccuracySec=1h
# Run once on boot if a scheduled run was missed due to downtime.
# (Unit files do not allow trailing comments on directive lines.)
Persistent=true
[Install]
WantedBy=timers.target
systemd\docker-purge.service
# File: /mnt/sbnb-data/systemd/docker-purge.service
# Service unit to run the purge script
[Unit]
Description=Purge Old Docker Backups
# mnt-sbnb\x2ddata.mount is the escaped unit name for /mnt/sbnb-data
Requires=mnt-sbnb\x2ddata.mount
After=mnt-sbnb\x2ddata.mount

[Service]
Type=oneshot
ExecStart=/mnt/sbnb-data/scripts/purge-docker-backups.sh
systemd\docker-purge.timer
# File: /mnt/sbnb-data/systemd/docker-purge.timer
# Timer unit to trigger the purge service daily (e.g., at 6 AM)
[Unit]
Description=Daily Docker Backup Purge Timer
[Timer]
OnCalendar=*-*-* 06:00:00
AccuracySec=1h
Persistent=true
[Install]
WantedBy=timers.target
systemd\docker-shutdown-backup.service
# File: /mnt/sbnb-data/systemd/docker-shutdown-backup.service
# Service unit to attempt a backup on clean shutdown
[Unit]
Description=Backup Docker Data on Shutdown (Best Effort)
# Run late in shutdown; requires persistent storage & docker (to stop it).
# mnt-sbnb\x2ddata.mount is the escaped unit name for /mnt/sbnb-data.
DefaultDependencies=no
Requires=mnt-sbnb\x2ddata.mount docker.service
After=mnt-sbnb\x2ddata.mount docker.service
Before=shutdown.target reboot.target halt.target kexec.target umount.target final.target

[Service]
Type=oneshot
# RemainAfterExit keeps the unit "active" after start so that ExecStop= runs
# at shutdown. (Unit files do not allow trailing comments on directive lines.)
RemainAfterExit=true
# Run the backup script when the service is stopped during shutdown
ExecStop=/mnt/sbnb-data/scripts/backup-docker.sh
[Install]
WantedBy=shutdown.target reboot.target halt.target kexec.target
#data
- Project: Custom Sbnb Linux setup using a single USB drive for boot and persistent data storage.
- Hardware Context: x86-64 machine, single USB drive partitioned into ESP (FAT32) and Data (Ext4).
- Software Context: Sbnb Linux (Buildroot-based, Systemd init, ephemeral overlayfs root), Docker.
- Initial Issue: Boot stall after `systemd[1]: Started Journal Service.`
- Initial Issue Root Cause: Mismatch between the `sbnb.service` dependency (`Wants=dev-disk-by-partlabel-sbnb.device`) and the USB ESP partition setup (missing partition label `sbnb`).
- Initial Issue Solution: Set partition label `sbnb` on the ESP partition (`/dev/sdx1`) using `parted name 1 sbnb`.
- Second Issue: Requirement for Docker data persistence (`/var/lib/docker`) on the data partition (`/mnt/sbnb-data`).
- Chosen Strategy: Keep Docker data ephemeral (`/var/lib/docker` in the RAM overlay) and use a backup/restore mechanism with persistent storage (`/mnt/sbnb-data`).
- Implementation Files (Persistent Storage - `/mnt/sbnb-data`):
  - `/mnt/sbnb-data/scripts/backup-docker.sh` (executable shell script)
  - `/mnt/sbnb-data/scripts/purge-docker-backups.sh` (executable shell script)
  - `/mnt/sbnb-data/systemd/docker-backup.service` (systemd unit)
  - `/mnt/sbnb-data/systemd/docker-backup.timer` (systemd unit)
  - `/mnt/sbnb-data/systemd/docker-shutdown-backup.service` (systemd unit)
  - `/mnt/sbnb-data/systemd/docker-purge.service` (systemd unit)
  - `/mnt/sbnb-data/systemd/docker-purge.timer` (systemd unit)
  - `/mnt/sbnb-data/backups/docker/` (directory for backup archives)
- Implementation File (ESP - `/mnt/sbnb`):
  - `/mnt/sbnb/sbnb-cmds.sh` (executable shell script, modified for restore and unit enabling)
- Key Script Logic (`sbnb-cmds.sh`): Mounts the data partition, restores the latest backup from the data partition into the ephemeral `/var/lib/docker`, creates symlinks for the systemd units from the data partition into the ephemeral `/etc/systemd/system`, reloads systemd, and enables the units.
- Key Script Logic (`backup-docker.sh`): Stops Docker, creates a compressed tar archive of `/var/lib/docker` in the backup directory on the data partition, updates the `docker_latest` symlink, and restarts Docker. Runs with lowered CPU priority (`nice`); I/O priority adjustment (`ionice`) was removed due to unavailability.
- Key Script Logic (`purge-docker-backups.sh`): Finds backups matching the naming pattern, keeps the latest N (default 3), and deletes older ones.
- Systemd Units: Define services that run the scripts and timers that trigger them periodically (`OnCalendar`) or on shutdown (`ExecStop=` in the shutdown service).
- Verification: Use `systemctl list-timers`, `systemctl status`, `journalctl -u`, `journalctl -b | grep ...`, and practical tests (manual trigger, check files after reboot).
#documentation
**Project: Sbnb Single-USB Boot and Persistent Docker Data via Backup/Restore**
1. Background and Goal:
- System: Sbnb Linux (minimalist, RAM-based overlayfs root, systemd init).
- Objective: Configure Sbnb to boot from a single USB drive while also using a separate partition on the same drive for persistent data storage, specifically for Docker containers and volumes.
- Deviation: This setup deviates from the standard Sbnb practice of using internal server storage (LVM) configured post-boot via automation (e.g., Ansible).
- Reference Tutorial: Initial attempts followed concepts from the “Single USB for Sbnb Boot and Persistent Storage” tutorial.
2. Initial Boot Problem and Resolution:
- Symptom: System boot stalled indefinitely after the log message `systemd[1]: Started Journal Service.`
- Analysis: Debugging with `journalctl` and the Sbnb source code (`sbnb.service`) revealed a dependency conflict. The `sbnb.service` unit explicitly required (`Wants=`) the device `dev-disk-by-partlabel-sbnb`. However, the USB preparation steps (both from the tutorial and Sbnb's `create_raw.sh`) only set the filesystem label (`mkfs.vfat -n sbnb`) on the ESP partition, not the required partition label. Systemd could not find the device via its partition label and waited indefinitely.
- Solution: Boot into a rescue/live environment, identify the USB device (e.g., `/dev/sdx`), and set the partition label on the first (ESP) partition using `sudo parted /dev/sdx name 1 sbnb`.
- Result: After applying the partition label, the system booted successfully past the previous stall point.
3. Docker Data Persistence Strategy (Backup/Restore):
- Problem: Docker's default data directory (`/var/lib/docker`) resides within Sbnb's ephemeral RAM overlay filesystem and is lost on reboot. Directly pointing Docker's `data-root` at persistent storage via `/etc/docker/daemon.json` failed because `/etc` is also ephemeral.
- Chosen Strategy: Keep Docker running with its data in the ephemeral `/var/lib/docker` directory and implement a backup/restore mechanism using the persistent data partition (`/mnt/sbnb-data`).
  - Restore: On each boot, the `sbnb-cmds.sh` script restores the most recent backup from `/mnt/sbnb-data/backups/docker/` into `/var/lib/docker` before systemd starts the Docker service.
  - Backup: Systemd timers trigger a service periodically (e.g., daily) and on clean shutdown (best-effort) to run a script (`backup-docker.sh`) that stops Docker, creates a compressed archive (`.tar.gz`) of `/var/lib/docker` in `/mnt/sbnb-data/backups/docker/`, updates a `docker_latest.tar.gz` symlink, and restarts Docker.
  - Purge: A separate systemd timer triggers a script (`purge-docker-backups.sh`) that deletes old backups, retaining a configured number of recent ones.
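The backup/restore round trip above can be sketched with plain `tar` on temporary directories (the paths here are hypothetical stand-ins for `/var/lib/docker` and the backup directory):

```sh
# Archive a 'docker' tree relative to its parent, delete it (simulating the
# ephemeral overlay being lost on reboot), then restore it from the archive.
WORK=$(mktemp -d)
mkdir -p "${WORK}/var/lib/docker" "${WORK}/backups"
echo "container-state" > "${WORK}/var/lib/docker/state.txt"
# Backup (same shape as backup-docker.sh: paths stored relative to var/lib)
tar -czf "${WORK}/backups/docker_latest.tar.gz" -C "${WORK}/var/lib" docker
rm -rf "${WORK}/var/lib/docker"                  # "reboot" wipes the overlay
# Restore (what sbnb-cmds.sh does on boot, before Docker starts)
tar -xzf "${WORK}/backups/docker_latest.tar.gz" -C "${WORK}/var/lib"
RESTORED=$(cat "${WORK}/var/lib/docker/state.txt")
echo "${RESTORED}"
rm -rf "${WORK}"
```

Archiving with `-C /var/lib docker` rather than the absolute path is what lets the restore side extract into `/var/lib` and recreate `docker/` in place.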
4. Implementation Details:
- Persistent File Locations (on the data partition, mounted at `/mnt/sbnb-data`):
  - Backup Archives: `/mnt/sbnb-data/backups/docker/docker_backup_YYYYMMDD_HHMMSS.tar.gz`
  - Latest Backup Symlink: `/mnt/sbnb-data/backups/docker/docker_latest.tar.gz`
  - Scripts: `/mnt/sbnb-data/scripts/backup-docker.sh`, `/mnt/sbnb-data/scripts/purge-docker-backups.sh` (must be executable).
  - Systemd Units: `/mnt/sbnb-data/systemd/` containing `docker-backup.service`, `docker-backup.timer`, `docker-shutdown-backup.service`, `docker-purge.service`, `docker-purge.timer`.
- Boot Script (on the ESP partition, mounted at `/mnt/sbnb`):
  - `/mnt/sbnb/sbnb-cmds.sh`: Contains logic executed early in boot by `boot-sbnb.sh` (which is run by `sbnb.service`). See artifact `sbnb_cmds_backup_restore` for the full script content. Key actions:
    - Waits for and mounts the data partition (`/dev/disk/by-label/SBNB_DATA`) at `/mnt/sbnb-data`.
    - Checks for `${BACKUP_DIR}/docker_latest.tar.gz`. If valid, removes the existing `/var/lib/docker/*` and extracts the backup archive into `/var/lib`. Handles missing or broken links gracefully.
    - Creates symlinks from the persistent systemd units in `/mnt/sbnb-data/systemd/` into the ephemeral `/etc/systemd/system/` directory and the appropriate `.wants` directories.
    - Runs `systemctl daemon-reload`.
    - Runs `systemctl enable` for the timers and the shutdown service.
- Backup Script (`backup-docker.sh`): See artifact `sbnb_docker_backup_scripts_v2`. Key actions: stops Docker, runs `nice -n 19 tar -czf ... -C /var/lib docker`, updates the symlink, and starts Docker. `ionice` was removed due to unavailability.
- Purge Script (`purge-docker-backups.sh`): See artifact `sbnb_docker_backup_scripts_v2`. Key actions: finds backups by pattern, counts them, and deletes the oldest if the count exceeds `KEEP_COUNT` (default 3).
- Systemd Units: See artifact `sbnb_docker_backup_units_v2`. Define the services that execute the scripts and the timers (daily `OnCalendar=` schedules) that trigger the backup and purge services. The shutdown service uses `ExecStop=` and `DefaultDependencies=no` to run during shutdown.
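The "waits for and mounts" step is essentially a bounded poll loop, since USB enumeration may finish after `sbnb-cmds.sh` starts. A sketch of the pattern, demonstrated here with an ordinary file standing in for `/dev/disk/by-label/SBNB_DATA` so it can run anywhere (the real script would follow the loop with a `mount` of the device):

```sh
# Poll for a path to appear, giving up after a fixed number of retries.
WAIT_TARGET=$(mktemp -u)             # a path that does not exist yet
( sleep 1; : > "${WAIT_TARGET}" ) &  # the "device node" appears after a delay
tries=0
while [ ! -e "${WAIT_TARGET}" ] && [ "${tries}" -lt 10 ]; do
    sleep 1
    tries=$((tries + 1))
done
wait                                 # reap the background helper
if [ -e "${WAIT_TARGET}" ]; then FOUND=yes; else FOUND=no; fi
echo "${FOUND}"
rm -f "${WAIT_TARGET}"
```

Bounding the retries matters: an unconditional wait on a device that never appears would reproduce exactly the kind of indefinite boot stall described in section 2.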
5. Verification:
- Boot: Check `journalctl -b | grep sbnb-cmds.sh` for successful execution and restore messages.
- Docker: Verify Docker starts and that `docker ps` and `docker images` show the expected state restored from backup (if one existed).
- Timers: Check `systemctl list-timers | grep docker-` to ensure the timers are active and scheduled.
- Services: Check `systemctl status *.timer *.service | grep docker-` to see loaded/active/enabled states.
- Manual Trigger: Test the backup via `sudo systemctl start docker-backup.service` and check for the archive file and symlink in `/mnt/sbnb-data/backups/docker/`. Test the purge similarly.
- Reboot Test: Create data within a container volume, trigger a backup manually or wait for the timer, reboot, and verify the data is present after the restore runs on the next boot.
#paper_trail
- Initial Goal: Configure Sbnb Linux for single-USB boot and persistent storage based on an online tutorial.
- Problem Encountered: System failed to boot, stalling after `systemd[1]: Started Journal Service.`
- Initial Analysis: Logs suggested potential issues with device dependencies (`/dev/disk/by-partlabel/sbnb`), overlayfs, or EFI mounts.
- Refined Analysis (with Tutorial Context): Determined the tutorial set a filesystem label (`LABEL=sbnb`) but Sbnb's `sbnb.service` required a partition label (`PARTLABEL=sbnb`), causing an unmet systemd dependency and the boot stall.
- Solution 1 Implemented: Partition label `sbnb` was set on the ESP using `parted`. Outcome: the system booted successfully.
- New Goal: Ensure Docker data persists on the second USB partition (`/mnt/sbnb-data`), as `/var/lib/docker` is ephemeral.
- Strategy Considered (Persistent `data-root`): Configure Docker via `/etc/docker/daemon.json` to use `/mnt/sbnb-data/docker-root`.
- Problem with Strategy 1: Realized `/etc` is ephemeral in Sbnb; `daemon.json` changes wouldn't survive a reboot.
- Strategy Considered (Backup/Restore): Keep Docker ephemeral, restore data from `/mnt/sbnb-data` on boot, and back up data to `/mnt/sbnb-data` periodically and on shutdown.
- Implementation Attempt (Backup/Restore): User attempted the implementation, but logs showed failures.
- Analysis of Failure: Logs indicated `log_message: not found` errors (an undefined function in `sbnb-cmds.sh`) and an inappropriate `systemctl reload` call within `sbnb-cmds.sh`. These errors prevented the Docker restore/config logic from running correctly, leading Docker to use default ephemeral storage and fail storage-driver setup.
- Decision: User chose to stick with the Backup/Restore strategy.
- Solution 2 Implemented: Provided a corrected `sbnb-cmds.sh` script specifically for the Backup/Restore strategy, removing the `log_message` and `systemctl reload` errors, ensuring proper restore logic, and including systemd unit enabling. Provided the necessary backup/purge scripts and systemd units to be stored on the persistent data partition.
- Current Status: User has the final set of scripts, units, and boot-script modifications required to implement the chosen Backup/Restore strategy for Docker data persistence. Verification steps provided.