`prioritylist.app` cheatsheet & `dasel`

Slug: prioritylist


https://prioritylist.app/

#health: 0~10

| Impact Level | Effort (Est. Time) | Key Question |
| --- | --- | --- |
| 10 | ~1 minute | Will delay cause irreversible parent failure? |
| 9 | ~5 minutes | Will delay make recovery prohibitively difficult/costly? |
| 8 | ~15 minutes | Will delay cause a significant slip in core resources? |
| 7 | ~30 minutes | Does delay completely block other people from working? |
| 6 | ~1 hour | Does delay create significant problems or rework for others? |
| 5 | ~2-3 hours | Are the consequences of delay manageable and absorbable without derailing parent core progress? (parent doesn’t depend on it, but it adds value) |
| 4 | ~4-6 hours | Does delay only cause minor inconvenience for you? |
| 3 | ~1 day | Is the impact a trivial, easily-fixed annoyance for you alone? |
| 2 | ~2-3 days | Would you barely notice if this was delayed for another month? |
| 1 | ~4-6 days | Is this a “nice to have” with no real negative consequence? |
| 0 | ~1+ weeks | Is this task irrelevant or obsolete? |

#financial: 0~9

| Impact Level | Key Question |
| --- | --- |
| 9 | Will delay cause irreversible parent failure? |
| 8 | Will delay make recovery prohibitively difficult/costly? |
| 7 | Will delay cause a significant slip in core resources? |
| 6 | Does delay completely block other people from working? |
| 5 | Does delay create significant problems or rework for others? |
| 4 | Are the consequences of delay manageable and absorbable without derailing parent core progress? |
| 3 | Does delay only cause minor inconvenience for you? |
| 2 | Is the impact a trivial, easily-fixed annoyance for you alone? |
| 1 | Would you barely notice if this was delayed for another month? |
| 0 | Is this task a “nice to have” with no real negative consequence, or irrelevant/obsolete? |

#relational: 0~8

| Impact Level | Key Question |
| --- | --- |
| 8 | Will a delay cause irreversible failure of the parent project or goal? |
| 7 | Will a delay make recovery prohibitively difficult or costly? |
| 6 | Will a delay cause a significant slip in core resources (time, money, or scope)? |
| 5 | Does a delay completely block other people from working, creating a bottleneck? |
| 4 | Will a delay create significant problems or force rework for others? |
| 3 | Are the consequences of the delay manageable and absorbable without derailing the parent’s core progress? |
| 2 | Does a delay cause only a minor, easily remedied inconvenience for you? |
| 1 | Would you barely notice if this task’s completion were delayed by another month? |
| 0 | Is this task essentially a “nice to have” with no real negative consequence? |

#spiritual: 0~7

| Impact Level | Key Decision Question |
| --- | --- |
| 7 | Will a delay cause irreversible failure of the parent project or make recovery prohibitively difficult/costly? (Existential threat: no recovery possible.) |
| 6 | Will a delay cause a significant slip in core resources (time, money, or scope) that jeopardizes critical milestones? (Major setback requiring urgent attention.) |
| 5 | Will a delay completely block essential work by others or create a bottleneck in the workflow? (High harm: your delay halts progress for a team.) |
| 4 | Will a delay create significant problems or force rework for others, affecting project momentum? (Moderate harm causing inefficiencies or extra work.) |
| 3 | Are the consequences of a delay noticeable but manageable, impacting non-critical elements without derailing core progress? (Tolerable harm that adds “quality debt.”) |
| 2 | Does a delay result in minor inconveniences or easily resolved issues that affect only your personal workflow? (Low harm that is quickly fixable.) |
| 1 | Would a delay have barely any noticeable impact on the overall project, making it almost optional? (Negligible disruption.) |
| 0 | Is this task essentially a “nice-to-have” or obsolete, with no real negative consequence if delayed or removed? (Backlog noise – non-actionable.) |

#recreational: 0~6

| Impact Level | Key Question | Guiding Principle and Rationale |
| --- | --- | --- |
| 6 | Will delay cause irreversible parent failure or make recovery prohibitively difficult/costly? | Catastrophic/Critical Harm: This level combines the highest stakes—existential threats where delay means the project dies (original L10) or enters a crisis so severe that recovery jeopardizes its future (original L9). |
| 5 | Will delay cause a significant slip in core resources (e.g., time, budget, scope)? | Substantial Harm: This is about major, measurable setbacks that require formal changes to the project plan. The delay consumes a significant portion of resources and guarantees a slip in delivery (original L8). |
| 4 | Does delay completely block other people from making progress? | Collaborative Gridlock: The principle here is that you have become a hard bottleneck. The cost of your delay is multiplied by the number of people who are fully stopped and waiting on you to proceed (original L7). |
| 3 | Does delay create significant problems or rework for others? | Team Friction: This is distinct from a full stop. Others can proceed, but with incomplete information that will cause inefficiency, frustration, and rework later. It erodes project velocity and morale (original L6). |
| 2 | Are the consequences of delay manageable, primarily impacting internal quality or standards without derailing core progress? | Tolerable Harm (Quality Debt): This task is important but not urgent for the parent goal’s immediate progress. Delaying it impacts professionalism and best practices (e.g., documentation, code refactoring), creating “debt” that must be paid later (original L5). |
| 1 | Does delay only cause a minor inconvenience or a trivial, easily-fixed annoyance for you alone? | Minor Harm (Personal Clutter): The negative impact is contained entirely to your personal workflow and mental load. It does not affect anyone else or the project’s bottom line. This covers overflowing inboxes or disorganized personal files (original L4 & L3). |
| 0 | Is this task a “nice-to-have,” something you’d barely notice if delayed, or is it potentially irrelevant or obsolete? | Negligible Harm (Backlog Noise): This level combines all tasks that should be aggressively challenged. The principle is to identify and remove vague ideas or outdated tasks to create focus. If you could delete it with no real consequence, it’s a Level 0 (original L2, L1, & L0). |

How to Use This Framework

  1. For Your Weekly Review: Start at the bottom and work your way up. First, ask: “Are there any cracks in my foundation (Health)?” Then, “Are my structural walls (Financial, Relational) secure?” Only after confirming the stability of the lower levels should you focus significant energy on the apex goals.
  2. In Times of Crisis: The pyramid shows you what to protect first. In an acute crisis, all resources must be diverted to shoring up the lowest affected layer.
  3. To Counteract Modern Pressures: Society often pressures us to invert the pyramid—to sacrifice Health for Financial success. This framework serves as a conscious corrective, reminding you that such a strategy is fundamentally unstable and leads to inevitable collapse.

The process involves two key steps: Triage and Strategy.

  1. Triage (The Table Below): This is the objective assessment phase. For each task, you will assign two scores, removing emotion and focusing on measurable data.
    • Impact Level: Start at Level 10 and work down. The first “Key Question” you can answer with “Yes” determines the task’s impact score.
    • Effort Score: Estimate the time required and find the corresponding score on the scale. A crucial part of this is being realistic, not optimistic, about the time commitment.
  2. Strategy (The Guide Following the Table): This is the decision-making phase. Once you have the Impact and Effort scores, use the “From Triage to Strategy” guide to decide how and when to allocate your most valuable resource: your focused time.
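A quick worked example (hypothetical task): for “send the signed contract back before the client’s offer expires tomorrow,” working down from Level 10 the first “yes” lands at Level 9 (recovery is possible, but prohibitively costly), and a realistic time estimate is ~15 minutes (Effort 8 on the same scale). A high-impact, low-effort task like this jumps to the front of the queue.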

Level 10: Catastrophic Harm (Existential Threat)

  • The Principle: Delay means the project or goal ceases to exist in its current form. There is no recovery.
  • Think About: Legal statutes of limitation, final contract deadlines with no extension, critical safety failures that would end the business.
  • Example: A software update has a bug that is actively corrupting all user data. The task to roll back the update is a Level 10 because any delay causes irreversible damage to customer trust and data integrity, from which the company may not recover.

Level 9: Critical Harm (Project Jeopardy)

  • The Principle: Delay triggers a crisis. The project can theoretically be saved, but only with an extreme, costly, and disruptive effort that jeopardizes future plans.
  • Think About: Losing a “once in a lifetime” opportunity, damage to your reputation that will take years to repair, financial penalties that cripple your budget for the next quarter.
  • Example: Forgetting to book a flight for a critical, can’t-miss client presentation tomorrow. You can still get there by paying an exorbitant last-minute fare and pulling an all-nighter, but the financial cost and personal stress are immense, and you’ll perform poorly.

Level 8: Substantial Harm (Major Setback)

  • The Principle: Delay causes a major Resource Drain (time, money, or scope) that requires formal explanation and adjustment of the project plan.
  • Think About: Will this require you to formally report a budget overrun or a change in the delivery date to stakeholders? Does it consume a significant portion (e.g., >20%) of the project’s remaining resources?
  • Example: Delaying a decision on a key software library forces an entire development team of five people to pause work for several days. This directly translates to thousands of dollars in wasted salary and a guaranteed slip in the product release date.

Level 7: High Harm (Collaborative Gridlock)

  • The Principle: Your delay creates a full stop for someone else. You are a bottleneck, and the cost of your delay is now multiplied by the number of people waiting on you.
  • Think About: Is there a person or team whose next action begins with “Wait for [Your Name] to finish…”? This is not about making their work harder; it’s about making it impossible for them to start.
  • Example: You are responsible for providing the final sales numbers to the finance department so they can close the quarterly books. They cannot proceed with their reporting, investor calls, or financial planning until you are done.

Level 6: Moderate Harm (Team Friction)

  • The Principle: Your delay creates inefficiency and frustration. It doesn’t stop others, but it makes their work harder, forces rework, or lowers their quality. This erodes morale and project velocity.
  • Think About: Will someone have to redo their work later because they had to proceed with incomplete information from you? Are you handing off a “known issue” that will cause problems downstream?
  • Example: You provide a “draft” version of a document for review, knowing it has errors. Your manager spends an hour reviewing it, but will have to spend another hour reviewing the corrected version later, doubling their effort and disrupting their schedule twice.

Level 5: Tolerable Harm (Quality Debt)

  • The Principle: Delay primarily impacts internal quality, standards, and best practices. Accumulating these creates “quality debt” that will have to be paid down later, often at a higher cost.
  • Think About: These are the “important, but not urgent” tasks that separate a professional operation from a chaotic one. Refactoring code, improving documentation, organizing files.
  • Example: The project’s shared document folder is disorganized. It doesn’t stop daily work, but it makes finding information slightly slower for everyone, every time they look. Delaying the cleanup is tolerable today, but over a year, it costs hundreds of hours of wasted team productivity.

Levels 4-3: Minor Harm (Personal Clutter)

  • The Principle: The negative impact is contained entirely to you and your personal workflow. These tasks are often a source of “productive procrastination.”
  • Think About: These tasks add to your mental load and create a feeling of being busy without generating real impact. They do not affect anyone else or the project’s bottom line.
  • Example: (L4) Your email inbox is overflowing, causing you minor stress. (L3) Your desktop has a few extra files that need to be sorted into the correct folders.

Levels 2-0: Negligible Harm (Backlog Noise)

  • The Principle: These items are not real, actionable tasks and should be aggressively challenged and removed to create clarity. A bloated backlog is the enemy of focus.
  • Think About: Does this item represent a vague idea, an optional exploration, or an outdated concept? If you deleted it right now, would anyone notice in a month?
  • Example: (L2) A task to “Explore new note-taking apps” when your current system works. (L1) A vague idea to “Learn more about marketing.” (L0) A task to “Prepare for Q2 meeting” when it’s now Q4.

#for printing

# DEFINE PARENT GOAL

Before scoring, define the "parent" project or goal for each task. The impact of a task depends on its parent's importance. Never score a task in a vacuum.

# TASK TRIAGE SUMMARY

## 10 ~1 minute Will delay cause irreversible parent failure?
## 9 ~5 minutes Will delay make recovery prohibitively difficult/costly?
## 8 ~15 minutes Will delay cause a significant slip in core resources?
## 7 ~30 minutes Does delay completely block other people from working?
## 6 ~1 hour Does delay create significant problems or rework for others?
## 5 ~2-3 hours Are the consequences of delay manageable and absorbable without derailing parent progress? (Parent doesn't depend on it, but it adds value).
## 4 ~4-6 hours Does delay only cause minor inconvenience for you?
## 3 ~1 day Is the impact a trivial, easily-fixed annoyance for you alone?
## 2 ~2-3 days Would you barely notice if this was delayed for another month?
## 1 ~4-6 days Is this a "nice to have" with no real negative consequence?
## 0 ~1+ weeks Is this task irrelevant or obsolete?

# IMPACT LEVELS

## 10: Catastrophic Harm (Existential Threat)

Principle: Delay means the project or goal ceases to exist. There is no recovery.
Think About: Legal deadlines, final contract dates, critical safety failures.
Example: A software bug is actively corrupting all user data. Rolling it back is Level 10 because delay causes irreversible damage.

## 9: Critical Harm (Project Jeopardy)

Principle: Delay triggers a crisis. Recovery is possible but extremely costly and disruptive.
Think About: Losing a "once in a lifetime" opportunity, major reputation damage.
Example: Forgetting to book a flight for a critical client presentation tomorrow. You can go, but at immense financial and personal cost.

## 8: Substantial Harm (Major Setback)

Principle: Delay causes a major Resource Drain (time, money, scope) that requires formal explanation.
Think About: Will this require you to formally report a budget overrun or a change in delivery date?
Example: Delaying a key software decision forces a team of five to pause work for days.

## 7: High Harm (Collaborative Gridlock)

Principle: Your delay creates a full stop for someone else. You are a bottleneck.
Think About: Is there a person whose next action is "Wait for [Your Name] to finish..."?
Example: Finance cannot close the quarterly books until you provide the final sales numbers.

## 6: Moderate Harm (Team Friction)

Principle: Your delay creates inefficiency and frustration. It forces rework and lowers quality.
Think About: Will someone have to redo their work because they had incomplete info from you?
Example: Providing a draft with known errors forces your manager to review it twice, doubling their effort.

## 5: Tolerable Harm (Quality Debt)

Principle: Delay impacts internal quality and best practices. This creates "quality debt" to be paid later.
Think About: The "important, but not urgent" tasks like refactoring code or organizing files.
Example: A messy shared folder makes finding info slow, costing hundreds of hours of productivity over a year.

## 4-3: Minor Harm (Personal Clutter)

Principle: The negative impact is contained entirely to you and your personal workflow.
Think About: These tasks add mental load but don't generate real impact.
Example: (L4) A messy inbox. (L3) Desktop files that need sorting.

## 2-0: Negligible Harm (Backlog Noise)

Principle: These items are not real, actionable tasks and should be aggressively removed.
Think About: If you deleted it right now, would anyone notice in a month?
Example: (L2) "Explore new note apps." (L1) "Learn about marketing." (L0) "Prep for Q2 meeting" (in Q4).

#dasel-only JSON

#1. Introduction: The Power of a Universal Selector

Before diving into the solution, it’s essential to understand why dasel is a powerful choice. The README.md documentation highlights its core principle:

“Say good bye to learning new tools just to work with a different data format. Dasel uses a standard selector syntax no matter the data format.”

This means the solution we construct for your JSON data would work identically if your data were in YAML, TOML, or another supported format, simply by changing the read flag (e.g., -r yaml). Our goal is to build a single, robust query to extract all name fields from your document.
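For example, assuming the same document also existed as data.yaml, only the read flag changes while the selector stays identical (a sketch; data.json and data.yaml are placeholder file names):

dasel -r json 'name' < data.json
dasel -r yaml 'name' < data.yaml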

#2. The Challenge: Extracting Data from Multiple Depths

A quick look at your JSON reveals that the name key exists in at least three different locations and depths:

  1. At the root of the object.
  2. Inside each object within the sublists array.
  3. Inside each object within the nested items array.

A simple selector like .name would only find the first instance. We need a strategy to query all these distinct paths simultaneously and combine their results.
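As a sketch, the shape in question looks roughly like this (placeholder values, not the real document):

{
  "name": "root-level name",
  "sublists": [
    {
      "name": "sublist-level name",
      "items": [
        { "name": "item-level name" }
      ]
    }
  ]
}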

#3. The dasel Strategy: Explicit Path Merging

The ideal tool for this job in the dasel ecosystem is the merge() function. As described in functions/merge.md, when merge() is given one or more selectors as arguments, it runs each one from the root of the document and gathers all the results into a single list.

Our strategy will be to define a selector for each unique path to a name key and pass them all to the merge() function.
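As a minimal illustration of that behavior (a hypothetical two-key document; the expected output assumes the newline-separated stream described later in this guide):

echo '{"a":{"name":"x"},"b":{"name":"y"}}' | dasel -r json 'merge(a.name, b.name)'
# expected: "x" and "y", each on its own line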

#4. Building a Robust Selector: A Step-by-Step Guide

Real-world data is often imperfect. A sublist might be missing its items array, or an item might not have a name. A robust selector must not fail in these cases. The functions/property.md documentation notes that appending a ? to a property lookup makes it optional. We will use this feature to make our query production-safe.

#Step 4.1: Targeting the Root name

This is the simplest path. We make it optional just in case the root name is ever missing.

  • Selector: name?

#Step 4.2: Targeting the sublists name

Here, we must account for the possibility that sublists itself could be missing.

  1. sublists?: Safely access the sublists array. If it doesn’t exist, the query stops gracefully for this path.
  2. .all(): Iterate over each element in the array.
  3. .name?: Safely access the name from each element.

  • Selector: sublists?.all().name?

#Step 4.3: Targeting the items name

This path is the most deeply nested and has the most potential points of failure, which we will handle with optional lookups.

  1. sublists?: Safely access the sublists array.
  2. .all(): Iterate over its elements.
  3. .items?: Safely access the items array within each sublist element.
  4. .all(): Iterate over the items.
  5. .name?: Safely access the name from each item.

  • Selector: sublists?.all().items?.all().name?

#5. Assembling the Final Command

We now combine these robust selectors using the merge() function. The JSON data will be piped (|) to dasel, which will read it from standard input.

# This is a placeholder for your actual command that generates the JSON
program_that_generates_the_json_data | dasel -r json 'merge(name?, sublists?.all().name?, sublists?.all().items?.all().name?)'

This command will produce a clean, newline-separated stream of all name values found within the document.

#6. Controlling the Output Format

By default, dasel outputs a stream of values. However, you can easily change this to produce a single, valid JSON array.

#Default Stream Output

The command from Step 5 produces a simple stream:

"What harm if delayed?" "9158" "[10;health;8;carro consertado] VERIFICAR - liquidos e manutencao basica" ...

#Generating a Valid JSON Array

To collect all the results into a single array, we can use the merge() function again, but this time at the end of the selector chain. This takes all the preceding results and merges them into one list.

program_that_generates_the_json_data | dasel -r json 'merge(name?, sublists?.all().name?, sublists?.all().items?.all().name?).merge()'

Expected JSON Array Output:

[ "What harm if delayed?", "9158", "[10;health;8;carro consertado] VERIFICAR - liquidos e manutencao basica", "[10;health;8;carro consertado] IR - em borracheiro para gastar com o conserto do pneu furado", "[10;health;8;casa entregue] GASTAR - com a troca do vidro da porta", "[10;health;8;casa entregue] CONSERTAR - danos e marcas na casa do gildenicio", "[10;health;8;carro consertado] IR - na mecanica para gastar com alinhamento e balanceamento e farois", "[10;health;8;carro consertado] IR - gastar com lavajato", "[9;financial;9;concursado] PESQUISAR - receita amarela via psiquiatra online", "[10;health] FAZER - farmers walk - EVERYDAY", "[9;financial;7;pkm] JUNTAR - arquivos e upar tudo para o googledrive (restic already configured at rk3588, but verify AI conversations regarding uploading with rclone only for gdrive usable files)", "[9;financial;7;pkm;;6;hardwares configurados] CONFIGURAR - rk3588 root space resize", "[9;financial;7;pkm;;6;hardwares configurados] CONFIGURAR - rk3588 kvm submodule" ]

#7. Contextual Comparison: dasel vs. jq

It’s helpful to understand how dasel’s approach compares to other popular tools like jq. To solve this same problem, a jq user might use the recursive descent operator (..), which implicitly searches through the entire structure.

  • jq Approach (Implicit Recursion):

    # The '..' operator recursively visits every value; '.name?' then pulls the name wherever one exists.
    # The '// empty' drops the nulls produced by values that have no 'name' key.
    program_that_generates_the_json_data | jq -r '.. | .name? // empty'

  • dasel Approach (Explicit Path Definition):

    program_that_generates_the_json_data | dasel -r json 'merge(name?, sublists?.all().name?, sublists?.all().items?.all().name?)'

This comparison highlights a key difference in philosophy: jq offers a powerful, implicit way to search an entire document, while dasel favors a more explicit and declarative approach where the user defines the exact paths to be queried.

#8. Conclusion: Key Takeaways

By following this guide, you have learned to:

  • Leverage dasel’s core merge function to combine results from multiple distinct queries.
  • Write robust selectors using optional chaining (?) to handle imperfect or evolving data schemas.
  • Control the output format to produce either a simple stream or a valid JSON array.
  • Understand dasel’s philosophy of explicit path definition and how it provides a universal query language across different data formats.

#pipeline to deduplicate and sort JSON

#Prerequisites: Installing jq

The proposed solutions rely on jq, a powerful command-line JSON processor. It is not installed by default on Debian 12, so you must first install it:

sudo apt update
sudo apt install -y jq
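To confirm the installation (Debian 12 typically ships jq 1.6):

jq --version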

#The Pipeline Strategy: A High-Level Overview

Our primary approach will be a multi-stage pipeline where the output of one command becomes the input for the next. This is a cornerstone of the Unix philosophy.

  1. dasel (Select): Extracts all values associated with the name key from the input JSON, producing a JSON array.
  2. Extractor (grep/sed/jq): Processes the array, applying a regular expression to extract only the bracketed metadata from each string.
  3. sort & uniq (Deduplicate & Order): The classic Unix duo for sorting the extracted lines and removing duplicates.
  4. jq (Re-assemble): Gathers the cleaned, unique, and sorted lines back into a final, well-formed JSON array.
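Sketched end to end, the stages line up as a single pipeline (each stage is unpacked in the steps below; the extra jq -r '.[]' simply unpacks the stage-1 array into a line-per-value stream for the extractor):

program_that_generates_the_json_data | dasel -r json 'merge(name?, sublists?.all().name?, sublists?.all().items?.all().name?).merge()' | jq -r '.[]' | grep -oP '\[\K[^]]*' | sort | uniq | jq -R . | jq -s .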

#Detailed Step-by-Step Breakdown

#Step 1: Initial Data Extraction with dasel

This command remains the foundation of our solution. It navigates the nested structure of your document and reliably extracts every name value into a single JSON array. The optional chaining (?) ensures the command doesn’t fail if a key is missing.

# In your script, this would be `program_that_generates_the_json_data | ...`
cat data.json | dasel -r json 'merge(name?, sublists?.all().name?, sublists?.all().items?.all().name?).merge()'

Output: A single JSON array containing all name values.


#Step 2: The Extraction Stage - A Comparative Analysis

This is the most critical transformation step. We will unpack the JSON array and extract the relevant substrings. Here are three excellent methods, with grep being the recommended choice for its robustness and clarity.

  • Option A: grep (Recommended) This method is concise, highly efficient, and correctly handles the data structure without being “greedy.”
    • Command: grep -oP '\[\K[^]]*'
    • Explanation:
      • -o: Print only the matching parts of a line.
      • -P: Use Perl-Compatible Regular Expressions (PCRE), which are highly expressive.
      • \[\K: Matches a literal [ and the \K escape sequence then discards this matched part from the final output.
      • [^]]*: Matches any sequence of characters that is not a closing bracket (]). Because the character class can never cross a ], the match always stops at the first closing bracket, so it is robust without needing a non-greedy quantifier.
  • Option B: sed The classic stream editor can also perform this task. This version uses modern syntax and is more robust than a simple (.*).
    • Command: sed -E -n 's/.*\[([^]]*)\].*/\1/p'
    • Explanation:
      • -E: Use Extended Regular Expressions for cleaner syntax (no need for \().
      • -n: Suppress normal output.
      • s/.../.../p: Substitute the pattern and print only if successful.
      • \[([^]]*)\]: Matches a [ followed by a captured group of non-] characters, followed by a ].
  • Option C: jq We can also perform the extraction within jq itself immediately after unpacking the array.
    • Command: jq -r '.[] | capture("\\[(?<content>.*?)\\]") | .content'
    • Explanation: The capture function applies a regex and extracts named groups. This is effective but can be more verbose than the grep equivalent for a single match.
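To see the recommended grep extractor in isolation, you can feed it a couple of hypothetical lines:

printf '%s\n' 'taskA [10;health;8] details' 'taskB with no brackets' | grep -oP '\[\K[^]]*'
# prints: 10;health;8   (the bracket-free line produces no output)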

#Step 3: Deduplication and Sorting with sort and uniq

This is a time-honored Unix pattern. The output from the extraction step is a simple stream of text, perfect for these tools.

  • Command: sort | uniq
  • Explanation:
    • sort: Sorts the lines alphabetically. This is a prerequisite for uniq, which only removes adjacent duplicate lines.
    • uniq: Removes the consecutive duplicate lines, leaving a unique, sorted list.
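As an aside, sort -u performs both steps in a single process and is interchangeable here:

printf 'b\na\nb\n' | sort | uniq   # prints: a, then b
printf 'b\na\nb\n' | sort -u       # identical output, one process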

#Step 4: Re-assembling the Final JSON Array with jq

To convert our clean stream of text back into a valid JSON array, we use a robust, two-stage jq pipe. This is superior to manually splitting strings as it’s more idiomatic and gracefully handles edge cases.

  • Command: jq -R . | jq -s .
  • Explanation:
    • jq -R .: The first jq reads the input as Raw text (-R) and outputs each line as a valid, quoted JSON string (.).
    • jq -s .: The second jq slurps (-s) this stream of JSON strings into a single, perfectly formed JSON array (.).
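A quick self-contained check of this idiom:

printf 'alpha\nbeta\n' | jq -R . | jq -s .
# prints the JSON array ["alpha","beta"] (pretty-printed)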

Using our recommended components (grep for extraction and the two-stage jq for re-assembly), the final, commented command is:

# Replace `cat data.json` with your actual program
cat data.json | dasel -r json 'sublists?.all().items?.all().name?.merge()' | jq -r '.[]' | grep -oP '\[\K[^]]*' | sort | uniq | jq -R . | jq -s .

Note that this variant queries only the item-level names; if you also want the root and sublist names, substitute the full merge(...) selector from Step 1.

#Alternative Approach: The Integrated jq Pipeline

For those who prefer to minimize the number of separate processes, it’s possible to perform the extraction, sorting, and deduplication entirely within a single jq script after the initial dasel extraction.

This approach combines steps 2, 3, and 4 into one jq command.

# Replace `cat data.json` with your actual program
cat data.json |
  # 1. Use dasel to extract all 'name' fields, as before
  dasel -r json 'merge(name?, sublists?.all().name?, sublists?.all().items?.all().name?).merge()' |
  # 2. Use a single, powerful jq script to perform all transformations
  jq '[.[] | capture("\\[(?<content>.*?)\\]") | .content] | map(select(. != null)) | unique | sort'

Explanation of the jq script:

  1. [.[] | ... ]: Iterates over the input array and builds a new one.
  2. capture(...) | .content: Extracts the bracketed content. capture is built on jq’s match, which emits no output when the regex fails to match, so non-matching strings simply drop out of the new array.
  3. map(select(. != null)): A defensive filter against null values; given the behavior above it is normally a no-op, but it is harmless to keep.
  4. unique: jq’s built-in function to remove duplicates. It also returns its result in sorted order, so it does not require pre-sorting.
  5. sort: jq’s built-in sort, kept for explicitness; after unique it is effectively a no-op.
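A small check of the capture behavior on mixed input (hypothetical strings):

echo '["a [x] b","no brackets here"]' | jq -c '[.[] | capture("\\[(?<content>.*?)\\]") | .content]'
# prints: ["x"]   (the non-matching string is dropped rather than becoming null)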

Trade-offs:

  • Pipeline Approach: More idiomatic in a general Unix context, potentially more readable to those unfamiliar with advanced jq, and allows for easy substitution of components (e.g., swapping grep for sed).
  • Integrated jq Approach: More efficient as it uses fewer processes. It’s a very powerful and concise solution if you are comfortable with jq’s syntax and functions.

Both approaches are excellent and will produce your desired output. The recommended pipeline is often easier to debug and understand, while the integrated jq approach is more streamlined.

#pipeline bash script

#!/bin/bash
set -euo pipefail

# ==============================================================================
# SCRIPT METADATA
# ==============================================================================
# Purpose:       A reusable command-line utility to process JSON data using
#                powerful pipelines.
# Tutorial:      Based on "debian 12 pipeline to maneuver JSON data".
# Target System: Debian 12 (Bookworm) / Debian 11 (Bullseye) on ARM64.
# Execution:     Run by a user with sudo privileges.
#
# Description:
#   This script serves two purposes:
#   1. As a command-line utility to process JSON files using one of two methods.
#   2. As a self-contained demonstration of the pipelines (--demo mode).
#
#   It installs necessary tools (jq, dasel), and provides fine-grained control
#   over input, output, and processing method via command-line flags.
# ==============================================================================

# --- Configuration Variables ---
# Pinning the version of dasel ensures consistent behavior.
# Checked against https://github.com/TomWright/dasel/releases
DASEL_VERSION="v2.4.0"
DASEL_INSTALL_DIR="/usr/local/bin"
DASEL_EXECUTABLE_PATH="${DASEL_INSTALL_DIR}/dasel"

# --- Temporary File Management ---
# Create a temporary directory that will be automatically removed on exit.
TEMP_DIR=$(mktemp -d)
trap 'echo "INFO: Cleaning up temporary directory..." >&2; rm -rf "$TEMP_DIR"' EXIT SIGINT SIGTERM

# ==============================================================================
# HELPER AND CORE LOGIC FUNCTIONS
# ==============================================================================

#
# Prints the script's usage instructions and exits.
#
usage() {
    cat << EOF
Usage: $(basename "$0") [OPTIONS]

A reusable command-line utility to process JSON data using powerful pipelines.

OPTIONS:
  -i, --input <file>       Required. Path to the input JSON file.
  -o, --output <file>      Optional. Path to the output file. If not provided,
                           output is written to standard output (stdout).
  -p, --pipeline <method>  Optional. The pipeline method to use.
                           - "classic": (Default) dasel | jq | grep | sort | uniq | jq
                           - "jq": dasel | jq (integrated logic)
  --demo                   Run a self-contained demonstration using sample data.
                           This option ignores all other flags.
  -h, --help               Display this help message and exit.
EOF
    exit 1
}

#
# Checks if a given command-line tool is installed. If not, it attempts to
# install it using apt-get.
#
# @param $1: The name of the command to check (e.g., "jq").
# @param $2: (Optional) The name of the package to install if different.
#
ensure_tool_installed() {
    local tool_name="$1"
    local package_name="${2:-$tool_name}"
    if ! command -v "$tool_name" >/dev/null 2>&1; then
        echo "INFO: Tool '$tool_name' not found. Installing package '$package_name'..."
        if ! sudo apt-get install -y "$package_name"; then
            echo "ERROR: Failed to install '$package_name'. Please try installing it manually." >&2
            exit 1
        fi
        echo "INFO: Package '$package_name' installed successfully."
    else
        echo "INFO: Tool '$tool_name' is already installed."
    fi
}

#
# Downloads and installs the 'dasel' binary for ARM64 architecture.
# The function is idempotent.
#
install_dasel() {
    if [ -f "$DASEL_EXECUTABLE_PATH" ]; then
        echo "INFO: Tool 'dasel' is already installed at $DASEL_EXECUTABLE_PATH."
        return 0
    fi
    echo "INFO: Tool 'dasel' not found. Attempting to install..."
    local download_url="https://github.com/TomWright/dasel/releases/download/${DASEL_VERSION}/dasel_linux_arm64"
    local temp_download_path="${TEMP_DIR}/dasel"
    echo "INFO: Downloading dasel ${DASEL_VERSION} for ARM64 from $download_url..."
    if ! curl -sSL "$download_url" -o "$temp_download_path"; then
        echo "ERROR: Failed to download dasel from $download_url" >&2
        exit 1
    fi
    echo "INFO: Installing dasel to $DASEL_INSTALL_DIR (requires sudo)..."
    # Using 'install' is more robust than 'mv' as it can set permissions in one step.
    if ! sudo install -m 0755 "$temp_download_path" "$DASEL_EXECUTABLE_PATH"; then
        echo "ERROR: Failed to install dasel. Please check permissions for $DASEL_INSTALL_DIR." >&2
        exit 1
    fi
    echo "INFO: dasel installed successfully."
}

#
# Runs all dependency checks and installations.
#
run_dependency_checks() {
    echo "--- Running Prerequisite Checks ---"
    echo "INFO: Updating package lists (requires sudo)..."
    if ! sudo apt-get update; then
        echo "ERROR: Failed to update package lists. Please check your sources.list and network." >&2
        exit 1
    fi
    ensure_tool_installed "jq"
    ensure_tool_installed "curl"
    install_dasel
    echo "--- Prerequisite checks complete ---"
    echo
}

#
# Executes the "classic" Unix pipeline.
# @param $1: Input file path.
#
run_classic_pipeline() {
    local input_file="$1"
    cat "$input_file" |
        "$DASEL_EXECUTABLE_PATH" -r json 'sublists?.all().items?.all().name?.merge()' |
        jq -r '.[]' |
        grep -oP '\[\K[^]]*' |
        sort | uniq |
        jq -R . | jq -s .
}

#
# Executes the integrated "jq" pipeline.
# @param $1: Input file path.
#
run_jq_pipeline() {
    local input_file="$1"
    cat "$input_file" |
        "$DASEL_EXECUTABLE_PATH" -r json 'merge(name?, sublists?.all().name?, sublists?.all().items?.all().name?).merge()' |
        jq '[.[] | capture("\\[(?<content>.*?)\\]") | .content] | map(select(. != null)) | unique | sort'
}

#
# Runs the self-contained demonstration.
#
run_demo() {
    echo "--- Running in Demonstration Mode ---"
    local demo_data_file="${TEMP_DIR}/data.json"
    echo "INFO: Creating a sample JSON file for demonstration..."
    cat > "$demo_data_file" << 'EOF'
{
  "name": "Project Alpha [Metadata A]",
  "sublists": [
    {"name": "Task 1 [Metadata B]", "items": [{"name": "Subtask 1.1 [Metadata C]"}, {"name": "Subtask 1.2 [Metadata B]"}]},
    {"name": "Task 2 [Metadata D]"}
  ]
}
EOF
    echo "--- Input Data ---"
    cat "$demo_data_file"
    echo "------------------"
    echo
    echo ">>> Method 1: Recommended Unix Pipeline (classic)"
    run_classic_pipeline "$demo_data_file"
    echo
    echo ">>> Method 2: Integrated jq Pipeline (jq)"
    run_jq_pipeline "$demo_data_file"
    echo
    echo "--- Demonstration Complete ---"
}

# ==============================================================================
# MAIN SCRIPT LOGIC
# ==============================================================================
main() {
    # Default values for options
    INPUT_FILE=""
    OUTPUT_FILE=""
    PIPELINE_METHOD="classic"
    RUN_MODE="utility"

    # Parse command-line arguments
    if [[ $# -eq 0 ]]; then
        usage
    fi
    while [[ $# -gt 0 ]]; do
        case $1 in
            -i|--input)
                INPUT_FILE="$2"
                shift 2
                ;;
            -o|--output)
                OUTPUT_FILE="$2"
                shift 2
                ;;
            -p|--pipeline)
                if [[ "$2" != "classic" && "$2" != "jq" ]]; then
                    echo "ERROR: Invalid pipeline method '$2'. Must be 'classic' or 'jq'." >&2
                    usage
                fi
                PIPELINE_METHOD="$2"
                shift 2
                ;;
            --demo)
                RUN_MODE="demo"
                shift 1
                ;;
            -h|--help)
                usage
                ;;
            *)
                echo "ERROR: Unknown option: $1" >&2
                usage
                ;;
        esac
    done

    # --- Run dependency checks ---
    run_dependency_checks

    # --- Execute based on run mode ---
    if [[ "$RUN_MODE" == "demo" ]]; then
        run_demo
        exit 0
    fi

    # --- Utility Mode Logic ---
    if [[ -z "$INPUT_FILE" ]]; then
        echo "ERROR: Input file is required. Use the -i or --input flag." >&2
        usage
    fi
    if [[ ! -f "$INPUT_FILE" ]]; then
        echo "ERROR: Input file not found at: $INPUT_FILE" >&2
        exit 1
    fi

    echo "INFO: Running in Utility Mode."
    echo "INFO: Input file: $INPUT_FILE"
    echo "INFO: Pipeline method: $PIPELINE_METHOD"
    if [[ -n "$OUTPUT_FILE" ]]; then
        echo "INFO: Output will be saved to: $OUTPUT_FILE"
    else
        echo "INFO: Output will be written to standard output."
    fi
    echo "---"

    local result
    if [[ "$PIPELINE_METHOD" == "classic" ]]; then
        result=$(run_classic_pipeline "$INPUT_FILE")
    else
        result=$(run_jq_pipeline "$INPUT_FILE")
    fi

    if [[ -n "$OUTPUT_FILE" ]]; then
        echo "$result" > "$OUTPUT_FILE"
        echo "---"
        echo "INFO: Successfully wrote output to $OUTPUT_FILE"
    else
        echo "$result"
    fi
}

# --- Script Entry Point ---
main "$@"
exit 0
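Assuming the script is saved as process_json.sh (the file name is arbitrary), typical invocations look like this:

chmod +x process_json.sh
./process_json.sh --demo                          # run the built-in demonstration
./process_json.sh -i data.json -p jq -o out.json  # utility mode, integrated jq pipeline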
URL: https://ib.bsb.br/prioritylist