```
├── .github/
│   ├── dependabot.yml
│   └── workflows/
│       └── rust.yml
├── .gitignore
├── CONTRIBUTING.md
├── Cargo.toml
├── LICENSE.md
├── NOTICE.md
├── README.md
├── rustfmt.toml
├── sp-cli/
│   ├── Cargo.toml
│   └── src/
│       ├── cli.rs
│       ├── cli/
│       │   ├── info.rs
│       │   ├── install.rs
│       │   ├── pipeline.rs
│       │   ├── reinstall.rs
│       │   ├── search.rs
│       │   ├── uninstall.rs
│       │   ├── update.rs
│       │   └── upgrade.rs
│       ├── main.rs
│       └── ui.rs
├── sp-common/
│   ├── Cargo.toml
│   └── src/
│       ├── cache.rs
│       ├── config.rs
│       ├── dependency/
│       │   ├── definition.rs
│       │   ├── mod.rs
│       │   ├── requirement.rs
│       │   └── resolver.rs
│       ├── error.rs
│       ├── formulary.rs
│       ├── keg.rs
│       ├── lib.rs
│       └── model/
│           ├── cask.rs
│           ├── formula.rs
│           ├── mod.rs
│           └── version.rs
└── sp-core/
    ├── Cargo.toml
    └── src/
        └── build/
            └── cask/
                └── artifacts/
                    ├── app.rs
                    ├── audio_unit_plugin.rs
                    ├── binary.rs
                    ├── colorpicker.rs
                    ├── dictionary.rs
                    ├── font.rs
                    ├── input_method.rs
                    ├── installer.rs
                    ├── internet_plugin.rs
                    └── keyboard_layout.rs
```

## /.github/dependabot.yml

```yml path="/.github/dependabot.yml"
version: 2
updates:
  - package-ecosystem: cargo
    directory: /
    schedule:
      interval: weekly
    groups:
      cargo:
        patterns:
          - "*"
  - package-ecosystem: github-actions
    directory: /
    schedule:
      interval: weekly
    groups:
      github-actions:
        patterns:
          - "*"
```

## /.github/workflows/rust.yml

```yml path="/.github/workflows/rust.yml"
name: Rust CI

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

permissions:
  contents: read

jobs:
  rust-macos-arm64:
    runs-on: macos-latest
    steps:
      - name: Check out code
        uses: actions/checkout@v4

      - name: Install Rust toolchain (Nightly for fmt)
        uses: actions-rs/toolchain@v1
        with:
          toolchain: nightly
          components: rustfmt, clippy
          override: true

      - name: Set Stable as Default (MSRV 1.86.0)
        if: success()
        uses: actions-rs/toolchain@v1
        with:
          toolchain: 1.86.0
          components: clippy

      - name: Cache cargo registry & build artifacts
        uses: actions/cache@v4
        with:
          path: |
            ~/.cargo/registry
            ~/.cargo/git
            target
          key: ${{ runner.os }}-cargo-stable-${{ hashFiles('**/Cargo.lock') }}
          restore-keys: |
            ${{ runner.os }}-cargo-stable-
            ${{ runner.os }}-cargo-

      - name: Verify runner architecture
        run: 'echo "UNAME reports: $(uname -m)"'

      - name: Check formatting
        run: cargo +nightly fmt --all -- --check

      - name: Run linters
        run: cargo clippy -- -D warnings

      - name: Build release binary
        run: cargo build --release --verbose

      - name: Run tests
        run: cargo test --verbose

      - name: Upload compiled binary
        uses: actions/upload-artifact@v4
        with:
          name: sp-macos-arm64
          path: target/release/sp
```

## /.gitignore

```gitignore path="/.gitignore"
/target
Cargo.lock
.DS_Store
NOTES.md

# Out file names I like to use for stuff
error
diff
one
two
log
```

## /CONTRIBUTING.md

# Contributing to sp

> We love merge requests! This guide shows the fastest path from **idea** to **merged code**. Skip straight to the *Quick‑Start* if you just want to get going, or dive into the details below.

---

## ⏩ Quick‑Start

### 1. Fork, clone & branch

```bash
git clone https://github.com/<your-username>/sp.git
cd sp
git checkout -b feat/<topic>
```

### 2. Install Nightly Toolchain (for formatting)

```bash
rustup toolchain install nightly
```

### 3. Compile fast (uses stable toolchain from rust-toolchain.toml)

```bash
cargo check --workspace --all-targets
```

### 4. Format (uses nightly toolchain)

```bash
cargo +nightly fmt --all
```

### 5. Lint (uses stable toolchain)

```bash
cargo clippy --workspace --all-targets --all-features -- -D warnings
```

### 6. Test (uses stable toolchain)

```bash
cargo test --workspace
```

### 7. Commit (Conventional + DCO)

```bash
git commit -s -m "feat(core): add new fetcher"
```
### 8. Push & open a Merge Request against `main`

```bash
git push origin feat/<topic>
# then open a merge request on GitHub
```

-----

## Project Layout

| Crate           | Role                                                     |
| --------------- | -------------------------------------------------------- |
| **`sp-core`**   | Library: dependency resolution, fetchers, install logic  |
| **`sp-common`** | Library: shared config, cache, errors, and data models   |
| **`sp-net`**    | Library: network fetching (Homebrew API, downloads)      |
| **`sp-cli`**    | Binary: user‑facing `sp` command                         |

All crates live in one Cargo **workspace**, so `cargo <command>` from the repo root affects everything.

-----

## Dev Environment

* **Platform**: Development and execution require **macOS**.
* **Rust (Build/Test)**: **Stable** toolchain, MSRV pinned in `rust-toolchain.toml` (currently *1.86.0*, matching CI). Install via [rustup.rs][rustup.rs]. This is used by default for `cargo build`, `cargo check`, `cargo test`, etc.
* **Rust (Format)**: **Nightly** toolchain is required *only* for formatting (`cargo fmt`) due to unstable options used in our `rustfmt.toml` configuration.
  * Install via: `rustup toolchain install nightly`
* **Rust Components**: `rustfmt`, `clippy` – install via `rustup component add rustfmt clippy`. Make sure these components are available for *both* your default stable toolchain and the nightly toolchain.
* **macOS System Tools**: Xcode Command Line Tools (provides C compiler, git, etc.). Install with `xcode-select --install`. You may also need `pkg-config` and `cmake` (e.g., install via [Homebrew][Homebrew]: `brew install pkg-config cmake`).

-----

## Coding Style

* **Format** ‑ We use custom formatting rules (`rustfmt.toml`) which include unstable options (like `group_imports`, `imports_granularity`, `wrap_comments`, etc.). Applying these requires the **nightly** toolchain. Format your code *before committing* using:

  ```bash
  cargo +nightly fmt --all
  ```

  * Ensure the nightly toolchain is installed (`rustup toolchain install nightly`).
  * CI runs `cargo +nightly fmt --all -- --check`, so MRs with incorrect formatting will fail.
* **Lint** ‑ `cargo clippy … -D warnings`; annotate false positives with `#[allow()]` + comment. (This uses the default stable toolchain.)
* **API** ‑ follow the [Rust API Guidelines][Rust API Guidelines]; document every public item; avoid `unwrap()`.
* **Dependencies** ‑ discuss new crates in the MR; future policy will use `cargo deny`.

-----

## Testing

* Unit tests live in modules, integration tests in `tests/`.
* Aim to cover new code; bug‑fix MRs **must** include a failing test that passes after the fix (see the sketch below).
* `cargo test --workspace` must pass (uses the default stable toolchain).
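A minimal sketch of such a regression test (the `Version::parse` API shown here is illustrative, not necessarily sp's real interface):

```rust
// Hypothetical regression test for a version-parsing bug fix.
#[cfg(test)]
mod tests {
    use super::Version; // illustrative type, assumed to live in this module

    #[test]
    fn parses_revision_suffix() {
        // Fails on the buggy code, passes once `_<revision>` suffixes parse.
        let v = Version::parse("1.2.3_1").expect("revision suffix should parse");
        assert_eq!(v.to_string(), "1.2.3_1");
    }
}
```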
-----

## Git & Commits

* **Fork** the repo on GitHub and add your remote if you haven’t already.
* **Branches**: use feature branches like `feat/…`, `fix/…`, `docs/…`, `test/…`.
* **Conventional Commits** preferred (`feat(core): add bottle caching`).
* **DCO**: add the `-s` flag (`git commit -s …`).
* Keep commits atomic; squash fix‑ups before marking the MR ready.

-----

## Merge‑Request Flow

1. Sync with `main`; rebase preferred.
2. Ensure your code is formatted correctly with `cargo +nightly fmt --all`.
3. Ensure CI is green (build, fmt check, clippy, tests on macOS using the appropriate toolchains).
4. Fill out the MR template; explain *why* + *how*.
5. Respond to review comments promptly – we’re friendly, promise!
6. Maintainers will *Squash & Merge* (unless history is already clean).

-----

## Reporting Issues

* **Bug** – include repro steps, expected vs. actual, macOS version & architecture (Intel/ARM).
* **Feature** – explain the use‑case, alternatives, and willingness to implement.
* **Security** – email maintainers privately; do **not** file a public issue.

-----

## License & DCO

By submitting code you agree to the BSD‑3‑Clause license and certify the [Developer Certificate of Origin][Developer Certificate of Origin].

-----

## Code of Conduct

We follow the [Contributor Covenant][Contributor Covenant]; be kind and inclusive. Report misconduct privately to the core team.

-----

Happy coding – and thanks for making sp better! ✨

[rustup.rs]: https://rustup.rs/
[homebrew]: https://brew.sh/
[Rust API Guidelines]: https://rust-lang.github.io/api-guidelines/
[Developer Certificate of Origin]: https://developercertificate.org/
[Contributor Covenant]: https://www.contributor-covenant.org/version/2/1/code_of_conduct/

## /Cargo.toml

```toml path="/Cargo.toml"
[workspace]
resolver = "3"
members = [
    "sp-cli",
    "sp-common",
    "sp-core",
    "sp-net",
]

# Shared dependencies defined once
[workspace.dependencies]
anyhow = "1.0.98"
thiserror = "2.0.12"
serde = { version = "1.0.219", features = ["derive"] } # Using the highest specified version and common features
serde_json = "1.0.140" # Used across multiple crates
reqwest = { version = "0.12.15", features = ["json", "stream", "blocking"] } # Combined features from core, net, cli, common
tokio = { version = "1.44.2", features = ["full"] } # Used across cli, core, net
futures = "0.3.31" # Used across cli, core, net
tracing = "0.1.41" # Used across cli, common, core, net
semver = "1.0.26" # Used in common, core
dirs = "6.0.0" # Used in common, core
walkdir = "2.5.0" # Used in cli, core (using highest version)
indicatif = "0.17.11" # Used in cli, core (using highest version)
env_logger = "0.11.8" # Used in cli, core
num_cpus = "1.16.0" # Used in cli, core
object = { version = "0.36.7", features = ["read_core", "write_core", "macho"] } # Used in common, core (combined features)
humantime = "2.2.0" # Used in common, core
bitflags = { version = "2.9.0", features = ["serde"] } # Used in common, core (combined features)
url = "2.5.4" # Used in core, net
sha2 = "0.10.8" # Used in core, net
hex = "0.4.3" # Used in core, net
rand = "0.9.1" # Used in core, net
infer = "0.19.0" # Used in core, net

# Workspace-wide release profile
[profile.release]
lto = true
codegen-units = 1
strip = true
```

## /LICENSE.md

BSD 3-Clause License

Copyright 2025 sp Contributors
All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

## /NOTICE.md

## Homebrew

### Sources

https://github.com/Homebrew/brew

### License

BSD 2-Clause License

Copyright (c) 2009-present, Homebrew contributors
All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

## /README.md

# sp

> [!WARNING]
> **ALPHA SOFTWARE**
> sp is experimental, under heavy development, and may be unstable. Use at your own risk!
>
> Uninstalling a cask with brew and then reinstalling it with sp will install it under slightly different paths; your user settings etc. will not be migrated automatically.

sp is a next‑generation, Rust‑powered package manager inspired by Homebrew. It installs and manages:

- **Formulae:** command‑line tools, libraries, and languages
- **Casks:** desktop applications and related artifacts on macOS

> _ARM only for now, might add x86 support eventually_

---

## ⚙️ Project Structure

- **sp‑core** Core library: fetching, dependency resolution, archive extraction, artifact handling (apps, binaries, pkg installers, fonts, plugins, zap/preflight/uninstall stanzas, etc.)
- **sp‑cli** Command‑line interface: `sp` executable wrapping the core library.

---

## 🚧 Current Status

- Bottle installation and uninstallation
- Cask installation and uninstallation
- Reinstall command for reinstalls
- Upgrade command for updates (use with care: I ran into no system breakers, but my Perl install got nuked)
- Parallel downloads and installs for speed
- Automatic dependency resolution and installation
- Building Formulae from source (very early impl)

---

## 🚀 Roadmap

- **Cleanup** old downloads, versions, caches
- **Prefix isolation:** support `/opt/sp` as standalone layout
- **`sp init`** helper to bootstrap your environment
- **Ongoing** Bug fixes and stability improvements

---

*(Screenshot 2025-04-26 at 22 09 41)*

> I know this does not follow one defined style yet.
> Still thinking about how I actually want it to look, so... we'll get there

---

## 📦 Usage

```sh
# Print help
sp --help

# Update metadata
sp update

# Search for packages
sp search <query>

# Get package info
sp info <formula/cask>

# Install bottles or casks
sp install <formula/cask>

# Build and install a formula from source
sp install --build-from-source <formula>

# Uninstall
sp uninstall <formula/cask>

# Reinstall
sp reinstall <formula/cask>

# Upgrade one package or all
sp upgrade <formula/cask>   # or: sp upgrade --all

# (coming soon)
sp cleanup
sp init
```

-----

## 🏗️ Building from Source

**Prerequisites:** Rust toolchain (stable).

```sh
git clone https://github.com/alexykn/sp.git
cd sp
cargo build --release
```

The `sp` binary will be at `target/release/sp`. Add it to your `PATH`.

-----

## 📥 Using the Latest Nightly Build

You can download the latest nightly build from [`actions/workflows/rust.yml`](../../actions/workflows/rust.yml) inside this repository (select a successful build and scroll down to `Artifacts`).

Before running the downloaded binary, remove the quarantine attribute:

```sh
xattr -d com.apple.quarantine ./sp
```

Then, you can run the binary directly:

```sh
./sp --help
```

-----

## 🤝 Contributing

sp lives and grows by your feedback and code! We’re particularly looking for:

- Testing and bug reports for Cask & Bottle installation + `--build-from-source`
- Test coverage for core and cask modules
- CLI UI/UX improvements
- See [CONTRIBUTING.md](CONTRIBUTING.md)

Feel free to open issues or PRs. Every contribution helps!

-----

## 📄 License

- **sp:** BSD‑3‑Clause - see [LICENSE.md](LICENSE.md)
- Inspired by Homebrew BSD‑2‑Clause — see [NOTICE.md](NOTICE.md)

-----

> *Alpha software. No guarantees. Use responsibly.*

## /rustfmt.toml

```toml path="/rustfmt.toml"
# configuration for https://rust-lang.github.io/rustfmt/
use_field_init_shorthand = true

# unstable options. These require cargo +nightly fmt to use
comment_width = 100 # generally more readable than 80
format_code_in_doc_comments = true
format_macro_matchers = true
group_imports = "StdExternalCrate"
imports_granularity = "Module" # generally leads to easier merges and shorter diffs
normalize_doc_attributes = true # converts #[doc = "..."] to /// and #[doc(...)] to /// ...
wrap_comments = true
```
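To see what the key unstable options do in practice, here is an illustrative before/after sketch (mirroring the import layout the formatter produces in `sp-cli/src/cli.rs`):

```rust
// Before formatting: mixed, unordered imports.
// use sp_common::cache::Cache;
// use crate::ui;
// use std::sync::Arc;
// use clap::Parser;

// After `cargo +nightly fmt` with group_imports = "StdExternalCrate" and
// imports_granularity = "Module": std first, then external crates (clap and
// the sp-* workspace crates), then local `crate::` paths, merged per module.
use std::sync::Arc;

use clap::Parser;
use sp_common::cache::Cache;

use crate::ui;
```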
## /sp-cli/Cargo.toml

```toml path="/sp-cli/Cargo.toml"
[package]
name = "sp-cli"
version = "0.1.0"
edition = "2024"
description = "Command-line interface for sp"
repository = "https://github.com/alexykn/sp"
license = "BSD-3-Clause"

[[bin]]
name = "sp"
path = "src/main.rs"

[dependencies]
# Local workspace crates
sp-core = { path = "../sp-core" }
sp-net = { path = "../sp-net" }
sp-common = { path = "../sp-common" }

# Inherited from workspace
serde = { workspace = true }
thiserror = { workspace = true }
serde_json = { workspace = true }
reqwest = { workspace = true }
tokio = { workspace = true }
futures = { workspace = true }
tracing = { workspace = true }
walkdir = { workspace = true }
indicatif = { workspace = true }
env_logger = { workspace = true }
num_cpus = { workspace = true }

# CLI specific dependencies
clap = { version = "4.5.37", features = ["derive"] }
colored = "3.0.0"
spinners = "4.1"
dialoguer = "0.11.0"
prettytable-rs = "0.10"
terminal_size = "0.4.2"
textwrap = "0.16.2"
unicode-width = "0.2.0"
tracing-subscriber = { version = "0.3.19", features = ["env-filter"] }
crossbeam-channel = "0.5.15"
threadpool = "1.8.1"
once_cell = "1.21.3"
tracing-appender = "0.2.3"

[build-dependencies]
clap_complete = "4.3"
```

## /sp-cli/src/cli.rs

```rs path="/sp-cli/src/cli.rs"
//! Defines the command-line argument structure using clap.

use std::sync::Arc;

use clap::{ArgAction, Parser, Subcommand};
use sp_common::error::Result;
use sp_common::{Cache, Config};

use crate::cli::info::Info;
use crate::cli::install::InstallArgs;
use crate::cli::reinstall::ReinstallArgs;
use crate::cli::search::Search;
use crate::cli::uninstall::Uninstall;
use crate::cli::update::Update;
use crate::cli::upgrade::UpgradeArgs;

pub mod info;
pub mod install;
pub mod pipeline;
pub mod reinstall;
pub mod search;
pub mod uninstall;
pub mod update;
pub mod upgrade;

#[derive(Parser, Debug)]
#[command(author, version, about, long_about = None, name = "sp", bin_name = "sp")]
#[command(propagate_version = true)]
pub struct CliArgs {
    /// Increase verbosity (-v for debug output, -vv for trace)
    #[arg(short, long, action = ArgAction::Count, global = true)]
    pub verbose: u8,

    #[command(subcommand)]
    pub command: Command,
}

#[derive(Subcommand, Debug)]
pub enum Command {
    /// Search for available formulas and casks
    Search(Search),
    /// Display information about a formula or cask
    Info(Info),
    /// Fetch the latest package list from the API
    Update(Update),
    /// Install a formula or cask
    Install(InstallArgs),
    /// Uninstall one or more formulas or casks
    Uninstall(Uninstall),
    /// Reinstall one or more formulas or casks
    Reinstall(ReinstallArgs),
    /// Upgrade one or more formulas or casks
    Upgrade(UpgradeArgs),
}

impl Command {
    pub async fn run(&self, config: &Config, cache: Arc<Cache>) -> Result<()> {
        match self {
            Self::Search(command) => command.run(config, cache).await,
            Self::Info(command) => command.run(config, cache).await,
            Self::Update(command) => command.run(config, cache).await,
            Self::Install(command) => command.run(config, cache).await,
            Self::Uninstall(command) => command.run(config, cache).await,
            Self::Reinstall(command) => command.run(config, cache).await,
            Self::Upgrade(command) => command.run(config, cache).await,
        }
    }
}
```
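Adding a subcommand follows the same pattern throughout: a variant, an args struct, and a dispatch arm. As a hedged sketch for the planned `cleanup` command (the `Cleanup` type and its flags are assumptions, not existing code):

```rust
// Hypothetical: how a future `sp cleanup` could slot into the enum above.
// None of these names exist in the codebase yet.
#[derive(clap::Args, Debug)]
pub struct Cleanup {
    /// Only print what would be removed
    #[arg(long)]
    pub dry_run: bool,
}

// In `enum Command`:
//     /// Remove stale downloads, caches, and old versions
//     Cleanup(Cleanup),
// and in `Command::run`:
//     Self::Cleanup(command) => command.run(config, cache).await,
```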
tracing::debug!("Formula '{}' info failed, trying cask.", name); } Err(e) => { pb.finish_and_clear(); // Ensure spinner is cleared on other errors return Err(e); // Propagate other errors (API, JSON, etc.) } } // --- Cask Fallback --- match get_cask_info(Arc::clone(&cache), name).await { Ok(info) => { pb.finish_and_clear(); print_cask_info(name, &info); Ok(()) } Err(e) => { pb.finish_and_clear(); // Clear spinner on cask error too Err(e) // Return the cask error if both formula and cask fail } } } } } /// Retrieves formula information from the cache or API as raw JSON async fn get_formula_info_raw(cache: Arc, name: &str) -> Result { match cache.load_raw("formula.json") { Ok(formula_data) => { let formulas: Vec = serde_json::from_str(&formula_data).map_err(SpError::from)?; for formula in formulas { if let Some(fname) = formula.get("name").and_then(Value::as_str) { if fname == name { return Ok(formula); } } // Also check aliases if needed if let Some(aliases) = formula.get("aliases").and_then(|a| a.as_array()) { if aliases.iter().any(|a| a.as_str() == Some(name)) { return Ok(formula); } } } tracing::debug!("Formula '{}' not found within cached 'formula.json'.", name); // Explicitly return NotFound if not in cache return Err(SpError::NotFound(format!( "Formula '{name}' not found in cache" ))); } Err(e) => tracing::debug!( "Cache file 'formula.json' not found or failed to load ({}). Fetching from API.", e ), } tracing::debug!("Fetching formula '{}' directly from API", name); // api::fetch_formula returns Value directly now let value = api::fetch_formula(name).await?; // Store in cache if fetched successfully // Note: This might overwrite the full list cache, consider storing individual files or a map // cache.store_raw(&format!("formula/{}.json", name), &value.to_string())?; // Example of // storing individually Ok(value) } /// Retrieves cask information from the cache or API async fn get_cask_info(cache: Arc, name: &str) -> Result { match cache.load_raw("cask.json") { Ok(cask_data) => { let casks: Vec = serde_json::from_str(&cask_data).map_err(SpError::from)?; for cask in casks { if let Some(token) = cask.get("token").and_then(Value::as_str) { if token == name { return Ok(cask); } } // Check aliases if needed if let Some(aliases) = cask.get("aliases").and_then(|a| a.as_array()) { if aliases.iter().any(|a| a.as_str() == Some(name)) { return Ok(cask); } } } tracing::debug!("Cask '{}' not found within cached 'cask.json'.", name); // Explicitly return NotFound if not in cache return Err(SpError::NotFound(format!( "Cask '{name}' not found in cache" ))); } Err(e) => tracing::debug!( "Cache file 'cask.json' not found or failed to load ({}). 
/// Retrieves cask information from the cache or API
async fn get_cask_info(cache: Arc<Cache>, name: &str) -> Result<Value> {
    match cache.load_raw("cask.json") {
        Ok(cask_data) => {
            let casks: Vec<Value> = serde_json::from_str(&cask_data).map_err(SpError::from)?;
            for cask in casks {
                if let Some(token) = cask.get("token").and_then(Value::as_str) {
                    if token == name {
                        return Ok(cask);
                    }
                }
                // Check aliases if needed
                if let Some(aliases) = cask.get("aliases").and_then(|a| a.as_array()) {
                    if aliases.iter().any(|a| a.as_str() == Some(name)) {
                        return Ok(cask);
                    }
                }
            }
            tracing::debug!("Cask '{}' not found within cached 'cask.json'.", name);
            // Explicitly return NotFound if not in cache
            return Err(SpError::NotFound(format!(
                "Cask '{name}' not found in cache"
            )));
        }
        Err(e) => tracing::debug!(
            "Cache file 'cask.json' not found or failed to load ({}). Fetching from API.",
            e
        ),
    }

    tracing::debug!("Fetching cask '{}' directly from API", name);
    // api::fetch_cask returns Value directly now
    let value = api::fetch_cask(name).await?;
    // Store in cache if fetched successfully
    // cache.store_raw(&format!("cask/{}.json", name), &value.to_string())?; // Example of storing
    // individually
    Ok(value)
}

/// Prints formula information in a formatted table
fn print_formula_info(_name: &str, formula: &Value) {
    // Basic info extraction
    let full_name = formula
        .get("full_name")
        .and_then(|f| f.as_str())
        .unwrap_or("N/A");
    let version = formula
        .get("versions")
        .and_then(|v| v.get("stable"))
        .and_then(|s| s.as_str())
        .unwrap_or("N/A");
    let revision = formula
        .get("revision")
        .and_then(|r| r.as_u64())
        .unwrap_or(0);
    let version_str = if revision > 0 {
        format!("{version}_{revision}")
    } else {
        version.to_string()
    };
    let license = formula
        .get("license")
        .and_then(|l| l.as_str())
        .unwrap_or("N/A");
    let homepage = formula
        .get("homepage")
        .and_then(|h| h.as_str())
        .unwrap_or("N/A");

    // Header
    println!("{}", format!("Formula: {full_name}").green().bold());

    // Summary table
    let mut table = prettytable::Table::new();
    table.set_format(*prettytable::format::consts::FORMAT_NO_BORDER_LINE_SEPARATOR);
    table.add_row(prettytable::row!["Version", version_str]);
    table.add_row(prettytable::row!["License", license]);
    table.add_row(prettytable::row!["Homepage", homepage]);
    table.printstd();

    // Detailed sections
    if let Some(desc) = formula.get("desc").and_then(|d| d.as_str()) {
        if !desc.is_empty() {
            println!("\n{}", "Description".blue().bold());
            println!("{desc}");
        }
    }
    if let Some(caveats) = formula.get("caveats").and_then(|c| c.as_str()) {
        if !caveats.is_empty() {
            println!("\n{}", "Caveats".blue().bold());
            println!("{caveats}");
        }
    }

    // Combined Dependencies Section
    let mut dep_table = prettytable::Table::new();
    dep_table.set_format(*prettytable::format::consts::FORMAT_NO_BORDER_LINE_SEPARATOR);
    let mut has_deps = false;
    let mut add_deps = |title: &str, key: &str, tag: &str| {
        if let Some(deps) = formula.get(key).and_then(|d| d.as_array()) {
            let dep_list: Vec<&str> = deps.iter().filter_map(|d| d.as_str()).collect();
            if !dep_list.is_empty() {
                has_deps = true;
                for (i, d) in dep_list.iter().enumerate() {
                    let display_title = if i == 0 { title } else { "" };
                    let display_tag = if i == 0 {
                        format!("({tag})")
                    } else {
                        "".to_string()
                    };
                    dep_table.add_row(prettytable::row![display_title, d, display_tag]);
                }
            }
        }
    };
    add_deps("Required", "dependencies", "runtime");
    add_deps(
        "Recommended",
        "recommended_dependencies",
        "runtime, recommended",
    );
    add_deps("Optional", "optional_dependencies", "runtime, optional");
    add_deps("Build", "build_dependencies", "build");
    add_deps("Test", "test_dependencies", "test");
    if has_deps {
        println!("\n{}", "Dependencies".blue().bold());
        dep_table.printstd();
    }

    // Installation hint
    println!("\n{}", "Installation".blue().bold());
    println!(
        "  {} install {}",
        "sp".cyan(),
        formula
            .get("name")
            .and_then(|n| n.as_str())
            .unwrap_or(full_name) // Use short name if available
    );
}
/// Prints cask information in a formatted table
fn print_cask_info(name: &str, cask: &Value) {
    // Header
    println!("{}", format!("Cask: {name}").green().bold());

    // Summary table
    let mut table = prettytable::Table::new();
    table.set_format(*prettytable::format::consts::FORMAT_NO_BORDER_LINE_SEPARATOR);
    if let Some(names) = cask.get("name").and_then(|n| n.as_array()) {
        if let Some(first) = names.first().and_then(|s| s.as_str()) {
            table.add_row(prettytable::row!["Name", first]);
        }
    }
    if let Some(desc) = cask.get("desc").and_then(|d| d.as_str()) {
        table.add_row(prettytable::row!["Description", desc]);
    }
    if let Some(homepage) = cask.get("homepage").and_then(|h| h.as_str()) {
        table.add_row(prettytable::row!["Homepage", homepage]);
    }
    if let Some(version) = cask.get("version").and_then(|v| v.as_str()) {
        table.add_row(prettytable::row!["Version", version]);
    }
    if let Some(url) = cask.get("url").and_then(|u| u.as_str()) {
        table.add_row(prettytable::row!["Download URL", url]);
    }
    // Add SHA if present
    if let Some(sha) = cask.get("sha256").and_then(|s| s.as_str()) {
        if !sha.is_empty() {
            table.add_row(prettytable::row!["SHA256", sha]);
        }
    }
    table.printstd();

    // Dependencies Section
    if let Some(deps) = cask.get("depends_on").and_then(|d| d.as_object()) {
        let mut dep_table = prettytable::Table::new();
        dep_table.set_format(*prettytable::format::consts::FORMAT_NO_BORDER_LINE_SEPARATOR);
        let mut has_deps = false;
        if let Some(formulas) = deps.get("formula").and_then(|f| f.as_array()) {
            if !formulas.is_empty() {
                has_deps = true;
                dep_table.add_row(prettytable::row![
                    "Formula".yellow(),
                    formulas
                        .iter()
                        .map(|v| v.as_str().unwrap_or(""))
                        .collect::<Vec<_>>()
                        .join(", ")
                ]);
            }
        }
        if let Some(casks) = deps.get("cask").and_then(|c| c.as_array()) {
            if !casks.is_empty() {
                has_deps = true;
                dep_table.add_row(prettytable::row![
                    "Cask".yellow(),
                    casks
                        .iter()
                        .map(|v| v.as_str().unwrap_or(""))
                        .collect::<Vec<_>>()
                        .join(", ")
                ]);
            }
        }
        if let Some(macos) = deps.get("macos") {
            has_deps = true;
            let macos_str = match macos {
                Value::String(s) => s.clone(),
                Value::Array(arr) => arr
                    .iter()
                    .map(|v| v.as_str().unwrap_or(""))
                    .collect::<Vec<_>>()
                    .join(" or "),
                _ => "Unknown".to_string(),
            };
            dep_table.add_row(prettytable::row!["macOS".yellow(), macos_str]);
        }
        if has_deps {
            println!("\n{}", "Dependencies".blue().bold());
            dep_table.printstd();
        }
    }

    // Installation hint
    println!("\n{}", "Installation".blue().bold());
    println!(
        "  {} install --cask {}", // Always use --cask for clarity
        "sp".cyan(),
        name // Use the token 'name' passed to the function
    );
}

// Removed is_bottle_available check
```
## /sp-cli/src/cli/install.rs

```rs path="/sp-cli/src/cli/install.rs"
// sp-cli/src/cli/install.rs
use std::sync::Arc;

use clap::Args;
use sp_common::cache::Cache;
use sp_common::config::Config;
use sp_common::error::Result;
use tracing::instrument;

// Import pipeline components from the new module
use crate::cli::pipeline::{CommandType, PipelineExecutor, PipelineFlags};

// Keep the Args struct specific to 'install' if needed, or reuse a common one
#[derive(Debug, Args)]
pub struct InstallArgs {
    #[arg(required = true)]
    names: Vec<String>,

    // Keep flags relevant to install/pipeline
    #[arg(long)]
    skip_deps: bool, // Note: May not be fully supported by core resolution yet

    #[arg(long, help = "Force install specified targets as casks")]
    cask: bool,

    #[arg(long, help = "Force install specified targets as formulas")]
    formula: bool,

    #[arg(long)]
    include_optional: bool,

    #[arg(long)]
    skip_recommended: bool,

    #[arg(
        long,
        help = "Force building the formula from source, even if a bottle is available"
    )]
    build_from_source: bool,
    // Worker/Queue size flags might belong here or be global CLI flags
    // #[arg(long, value_name = "SP_WORKERS")]
    // max_workers: Option<usize>,
    // #[arg(long, value_name = "SP_QUEUE")]
    // queue_size: Option<usize>,
}

impl InstallArgs {
    #[instrument(skip(self, config, cache), fields(targets = ?self.names))]
    pub async fn run(&self, config: &Config, cache: Arc<Cache>) -> Result<()> {
        println!("Installing: {:?}", self.names); // User feedback

        // --- Argument Validation (moved from old run) ---
        if self.formula && self.cask {
            return Err(sp_common::error::SpError::Generic(
                "Cannot use --formula and --cask together.".to_string(),
            ));
        }
        // Add validation for skip_deps if needed

        // --- Prepare Pipeline Flags ---
        let flags = PipelineFlags {
            build_from_source: self.build_from_source,
            include_optional: self.include_optional,
            skip_recommended: self.skip_recommended,
            // Add other flags...
        };

        // --- Determine Initial Targets based on --formula/--cask flags ---
        // (This logic might be better inside plan_package_operations based on CommandType)
        let initial_targets = self.names.clone(); // For install, all names are initial targets

        // --- Execute the Pipeline ---
        PipelineExecutor::execute_pipeline(
            &initial_targets,
            CommandType::Install, // Specify the command type
            config,
            cache,
            &flags, // Pass the flags struct
        )
        .await
    }
}
```
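The sibling command modules follow the same shape. As a hedged illustration (a sketch of the pattern, not the actual contents of `reinstall.rs`, whose field names may differ), a reinstall differs only in the `CommandType` it hands to the executor:

```rust
// Sketch only: the real ReinstallArgs lives in sp-cli/src/cli/reinstall.rs.
impl ReinstallArgs {
    pub async fn run(&self, config: &Config, cache: Arc<Cache>) -> Result<()> {
        let flags = PipelineFlags {
            build_from_source: false,
            include_optional: false,
            skip_recommended: false,
        };
        // Same executor, different command type.
        PipelineExecutor::execute_pipeline(
            &self.names,
            CommandType::Reinstall,
            config,
            cache,
            &flags,
        )
        .await
    }
}
```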
## /sp-cli/src/cli/pipeline.rs

```rs path="/sp-cli/src/cli/pipeline.rs"
// sp-cli/src/cli/pipeline.rs
use std::collections::{HashMap, HashSet, VecDeque};
use std::fs;
use std::path::PathBuf;
use std::sync::Arc;

// use tokio::sync::Mutex; // For async-aware locking if needed later
use colored::Colorize;
use crossbeam_channel::{Receiver, Sender, bounded};
use futures::executor::block_on;
use num_cpus;
use serde_json::Value;
use sp_common::cache::Cache;
use sp_common::config::Config;
use sp_common::dependency::{
    DependencyResolver, ResolutionContext, ResolutionStatus, ResolvedGraph,
};
use sp_common::error::{Result, SpError};
use sp_common::formulary::Formulary;
use sp_common::keg::KegRegistry;
use sp_common::model::Cask;

// --- Shared Data Structures ---

// Reusable enum to identify target type, potentially moved from sp-core if made public
// Or defined locally here if InstallTargetIdentifier from core isn't suitable.
// Assuming we use the one from core for now:
use sp_common::model::InstallTargetIdentifier;
use sp_common::model::formula::{Formula, FormulaDependencies};
use sp_core::build::{self};
use sp_core::installed::{InstalledPackageInfo, PackageType}; // Needs implementing in sp-core
use sp_core::uninstall as core_uninstall; // Alias for the new module
use sp_core::uninstall::UninstallOptions; // Needs implementing in sp-core
use sp_core::update_check::{self, UpdateInfo}; // Needs implementing in sp-core
use sp_net::fetch::api;
use threadpool::ThreadPool;
use tokio::task::JoinSet;
use tracing::{Instrument, debug, error, instrument, warn}; // Placeholder: Ensure this is accessible

// Represents the specific action for a pipeline job
#[derive(Debug, Clone)]
pub enum PipelineActionType {
    Install,
    Upgrade {
        from_version: String,
        old_install_path: PathBuf, // Path to the version being replaced
    },
    Reinstall {
        version: String,
        current_install_path: PathBuf, // Path to the version being reinstalled
    },
}

// Represents a unit of work for the pipeline
#[derive(Debug)]
pub struct PipelineJob {
    pub target: InstallTargetIdentifier, // Arc<Formula> or Arc<Cask>
    pub download_path: PathBuf, // Path to the downloaded file (bottle, source, cask)
    pub action: PipelineActionType,
    // Graph needed for source builds to know dependencies
    pub resolved_graph: Option<Arc<ResolvedGraph>>,
    pub is_source_build: bool,
}

// Represents the outcome of processing a PipelineJob
#[derive(Debug)]
pub enum PipelineJobResult {
    InstallOk(String, PackageType),
    UpgradeOk(String, PackageType, String), // Name, Type, OldVersion
    ReinstallOk(String, PackageType),       // Name, Type
    InstallErr(String, PackageType, SpError),
    UpgradeErr(String, PackageType, String, SpError), // Include old version
    ReinstallErr(String, PackageType, SpError),
}

// Represents the type of command triggering the pipeline
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum CommandType {
    Install,
    Reinstall,
    Upgrade { all: bool },
}

// Flags affecting pipeline behavior
#[derive(Debug, Clone)]
pub struct PipelineFlags {
    pub build_from_source: bool,
    pub include_optional: bool,
    pub skip_recommended: bool,
    // Add other common flags like --force if needed
}

// Add this after the PipelineFlags struct, before PipelineExecutor
type PlanResult = Result<(Vec<PipelineJob>, Vec<(String, SpError)>, HashSet<String>)>;
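// Pipeline overview: `execute_pipeline` below proceeds in five phases:
//   1. plan operations (decide install/upgrade/reinstall, resolve dependencies),
//   2. coordinate downloads,
//   3. hand completed downloads to a worker pool over bounded channels,
//   4. collect per-job results,
//   5. aggregate and report errors.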
// The main orchestrator struct
pub struct PipelineExecutor;

impl PipelineExecutor {
    /// Main entry point to run install, reinstall, or upgrade.
    #[instrument(skip(config, cache, flags), fields(cmd = ?command_type, targets = ?initial_targets))]
    pub async fn execute_pipeline(
        initial_targets: &[String],
        command_type: CommandType,
        config: &Config,
        cache: Arc<Cache>,
        flags: &PipelineFlags,
    ) -> Result<()> {
        // Define worker/queue size (same logic as before)
        let worker_count =
            std::cmp::max(1, num_cpus::get_physical().saturating_sub(1)).min(6); // Example sizing
        let queue_size = worker_count * 2;

        // --- 1. Plan Operations ---
        debug!("Planning package operations...");
        let (planned_jobs, mut overall_errors, already_installed) =
            Self::plan_package_operations(
                initial_targets,
                command_type.clone(),
                config,
                cache.clone(),
                flags,
            )
            .await?;

        // Report planning errors and already installed packages
        for name in already_installed {
            info_line(format!(
                "{} {} is already installed.",
                "✓".green(),
                name.cyan()
            ));
        }
        for (name, err) in &overall_errors {
            error!("✖ Error during planning for '{}': {}", name.cyan(), err);
        }

        if planned_jobs.is_empty() {
            if overall_errors.is_empty() {
                info_line("No packages need to be installed, upgraded, or reinstalled.");
                return Ok(());
            } else {
                error!("No operations possible due to planning errors.");
                // Combine errors into a single message for returning
                let final_error_msg = overall_errors
                    .into_iter()
                    .map(|(name, err)| format!("'{name}': {err}"))
                    .collect::<Vec<_>>()
                    .join("; ");
                return Err(SpError::InstallError(format!(
                    "Operation failed during planning: {final_error_msg}"
                )));
            }
        }
        debug!("Planning complete. {} jobs generated.", planned_jobs.len());

        // --- 2. Setup Channels & Worker Pool ---
        let (job_tx, job_rx): (Sender<PipelineJob>, Receiver<PipelineJob>) =
            bounded(queue_size);
        let (result_tx, result_rx): (Sender<PipelineJobResult>, Receiver<PipelineJobResult>) =
            bounded(queue_size);
        let pool = ThreadPool::new(worker_count);
        let client = Arc::new(reqwest::Client::new()); // HTTP client for downloads

        // --- 3. Coordinate Downloads ---
        debug!("Coordinating downloads...");
        let download_errors_count = Self::coordinate_downloads(
            planned_jobs, // Pass the Vec directly
            config,
            cache.clone(),
            client,
            job_tx.clone(), // Clone Sender for the download coordinator
            flags,
        )
        .await?;
        drop(job_tx); // Signal that no more download jobs will be sent
        debug!(
            "Download coordination finished. {} errors.",
            download_errors_count
        );
        if download_errors_count > 0 {
            // Add generic error if specific ones weren't captured during download
            overall_errors.push((
                "[Download Phase]".to_string(),
                SpError::Generic(format!(
                    "Encountered {download_errors_count} download errors."
                )),
            ));
        }

        // --- 4. Coordinate Workers & Collect Results ---
        debug!("Coordinating workers...");
        let pump_handle = Self::coordinate_workers(
            pool,              // Pass the pool
            job_rx,            // Pass the Receiver
            result_tx.clone(), // Clone Sender for workers
            config,
            cache.clone(),
            // No flags needed directly by worker coordinator? Flags are in job.
        );
        drop(result_tx); // Drop the original Sender for results

        debug!("Collecting results...");
        let install_errors = Self::collect_results(result_rx); // Collect results from the Receiver

        if let Err(e) = pump_handle.await {
            error!("Worker coordination task panicked: {}", e);
            overall_errors.push((
                "[Worker Pool]".to_string(),
                SpError::Generic(format!("Worker coordination failed: {e}")),
            ));
        }
        debug!("Result collection finished.");

        // --- 5. Combine and Report Final Status ---
        overall_errors.extend(install_errors); // Add errors collected from workers

        if overall_errors.is_empty() {
            info_line("Pipeline execution completed successfully.");
            Ok(())
        } else {
            error!(
                "Pipeline execution completed with {} error(s).",
                overall_errors.len()
            );
            let final_error_msg = overall_errors
                .into_iter()
                .map(|(name, err)| format!("'{name}': {err}"))
                .collect::<Vec<_>>()
                .join("; ");
            Err(SpError::InstallError(format!(
                "Operation failed: {final_error_msg}"
            )))
        }
    }
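    // Design note: both channels above are bounded to `queue_size`, so
    // downloads cannot run arbitrarily far ahead of the installer threads;
    // a full job queue blocks the sender and provides natural backpressure.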
    /// Determines the set of operations (Install, Upgrade, Reinstall) needed.
    #[instrument(skip(config, cache, flags), fields(cmd = ?command_type))]
    async fn plan_package_operations(
        initial_targets: &[String],
        command_type: CommandType,
        config: &Config,
        cache: Arc<Cache>,
        flags: &PipelineFlags,
    ) -> PlanResult {
        let mut jobs: Vec<PipelineJob> = Vec::new();
        let mut errors: Vec<(String, SpError)> = Vec::new();
        let mut already_installed: HashSet<String> = HashSet::new();
        let _needs_resolution: HashMap<String, InstallTargetIdentifier> = HashMap::new(); // name -> target def for resolution
        let mut processed: HashSet<String> = HashSet::new(); // Track names already decided upon

        // --- Identify Initial Targets and Action Type ---
        let mut initial_ops: HashMap<
            String,
            (PipelineActionType, Option<InstallTargetIdentifier>),
        > = HashMap::new();

        match command_type {
            CommandType::Install => {
                debug!("Planning for INSTALL command");
                for name in initial_targets {
                    if processed.contains(name) {
                        continue;
                    }
                    match sp_core::installed::get_installed_package(name, config).await? {
                        Some(_installed_info) => {
                            already_installed.insert(name.clone());
                            processed.insert(name.clone());
                        }
                        None => {
                            // Mark for install, need to fetch definition later
                            initial_ops.insert(name.clone(), (PipelineActionType::Install, None));
                        }
                    }
                }
            }
            CommandType::Reinstall => {
                debug!("Planning for REINSTALL command");
                for name in initial_targets {
                    if processed.contains(name) {
                        continue;
                    }
                    match sp_core::installed::get_installed_package(name, config).await? {
                        Some(installed_info) => {
                            initial_ops.insert(
                                name.clone(),
                                (
                                    PipelineActionType::Reinstall {
                                        version: installed_info.version.clone(),
                                        current_install_path: installed_info.path.clone(),
                                    },
                                    None,
                                ),
                            ); // Need to fetch definition
                        }
                        None => {
                            let msg = format!("Cannot reinstall '{name}': not installed.");
                            error!("✖ {msg}");
                            errors.push((name.clone(), SpError::NotFound(msg)));
                            processed.insert(name.clone());
                        }
                    }
                }
            }
            CommandType::Upgrade { all } => {
                debug!("Planning for UPGRADE command (all={})", all);
                let packages_to_check = if all {
                    sp_core::installed::get_installed_packages(config).await?
                } else {
                    let mut specific = Vec::new();
                    for name in initial_targets {
                        match sp_core::installed::get_installed_package(name, config).await? {
                            Some(info) => specific.push(info),
                            None => {
                                let msg = format!("Cannot upgrade '{name}': not installed.");
                                warn!("! {msg}"); // Warn, maybe user meant install?
                                // Don't add error here, let install handle it if they meant install
                                processed.insert(name.clone());
                            }
                        }
                    }
                    specific
                };

                if packages_to_check.is_empty() {
                    if all {
                        info_line("No installed packages found to check for upgrades.");
                    }
                    // else: warnings about specific packages already printed
                    return Ok((jobs, errors, already_installed)); // No ops needed
                }

                let updates = update_check::check_for_updates(&packages_to_check, &cache).await?;
                let update_map: HashMap<String, UpdateInfo> =
                    updates.into_iter().map(|u| (u.name.clone(), u)).collect();

                for installed in packages_to_check {
                    if processed.contains(&installed.name) {
                        continue;
                    }
                    if let Some(update_info) = update_map.get(&installed.name) {
                        initial_ops.insert(
                            installed.name.clone(),
                            (
                                PipelineActionType::Upgrade {
                                    from_version: installed.version.clone(),
                                    old_install_path: installed.path.clone(),
                                },
                                Some(update_info.target_definition.clone()),
                            ),
                        ); // Have target def!
                        processed.insert(installed.name.clone());
                    } else {
                        // Already up-to-date, mark as already installed for reporting
                        already_installed.insert(installed.name.clone());
                        processed.insert(installed.name.clone());
                    }
                }
            }
        }

        // --- Fetch Definitions for Install/Reinstall targets ---
        let definitions_to_fetch: Vec<String> = initial_ops
            .iter()
            .filter(|(_, (_, def))| def.is_none())
            .map(|(name, _)| name.clone())
            .collect();

        if !definitions_to_fetch.is_empty() {
            debug!(
                "Fetching definitions for initial targets: {:?}",
                definitions_to_fetch
            );
            let fetched_defs = Self::fetch_target_definitions(&definitions_to_fetch, &cache).await;
            for (name, result) in fetched_defs {
                match result {
                    Ok(target_def) => {
                        if let Some((_, existing_def_opt)) = initial_ops.get_mut(&name) {
                            *existing_def_opt = Some(target_def);
                        }
                    }
                    Err(e) => {
                        error!(
                            "✖ Failed to get definition for target '{}': {}",
                            name.cyan(),
                            e
                        );
                        errors.push((name.clone(), e));
                        initial_ops.remove(&name); // Remove from ops if def fetch fails
                        processed.insert(name.clone());
                    }
                }
            }
        }

        // --- Initial Dependency Resolution Setup ---
        let mut formulae_for_resolution: HashMap<String, InstallTargetIdentifier> = HashMap::new();
        let mut cask_queue: VecDeque<String> = VecDeque::new();
        let mut cask_deps_map: HashMap<String, Arc<Cask>> = HashMap::new(); // Cache fetched cask defs

        for (name, (_action, opt_def)) in &initial_ops {
            match opt_def {
                Some(InstallTargetIdentifier::Formula(f_arc)) => {
                    formulae_for_resolution.insert(
                        name.clone(),
                        InstallTargetIdentifier::Formula(f_arc.clone()),
                    );
                }
                Some(InstallTargetIdentifier::Cask(c_arc)) => {
                    if !processed.contains(name) {
                        // Only queue if not already handled/errored
                        cask_queue.push_back(name.clone());
                        cask_deps_map.insert(name.clone(), c_arc.clone());
                    }
                }
                None => {
                    // Should not happen if fetch logic is correct, but handle defensively
                    if !errors.iter().any(|(n, _)| n == name) {
                        // Avoid duplicate errors
                        let msg =
                            format!("Definition missing for target '{name}' after fetch attempt.");
                        error!("✖ {msg}");
                        errors.push((name.clone(), SpError::Generic(msg)));
                    }
                    processed.insert(name.clone());
                }
            }
        }

        // --- Resolve Cask Dependencies (Iterative) ---
        // Similar to logic in old gather_full_dependency_set, but adds formula deps to
        // formulae_for_resolution
        let mut processed_casks: HashSet<String> = initial_ops.keys().cloned().collect();
        while let Some(token) = cask_queue.pop_front() {
            let cask_ref = cask_deps_map.entry(token.clone()).or_insert_with(|| {
                // Fetch cask definition if not already cached in our map
                // This part needs to be async, might require restructuring or
                // pre-fetching all needed cask defs. For simplicity sketch, assume pre-fetched.
                // In reality, you might need another async fetch loop here or integrate into the
                // initial fetch.
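                // Blocking here is also risky: `block_on` inside a closure on
                // a tokio runtime thread can stall the executor, so a
                // dedicated pre-fetch pass before this loop would avoid both
                // the block and the dummy-value fallback below.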
                match block_on(api::get_cask(&token)) {
                    // block_on is suboptimal here
                    Ok(c) => Arc::new(c),
                    Err(e) => {
                        if !errors.iter().any(|(n, _)| n == &token) {
                            errors.push((token.clone(), e));
                        }
                        // Return a dummy Arc or handle differently
                        Arc::new(Cask {
                            token: token.clone(),
                            ..Default::default()
                        }) // Dummy
                    }
                }
            });
            let cask = cask_ref.clone();
            if errors.iter().any(|(n, _)| n == &token) {
                continue;
            } // Skip if fetch failed

            if let Some(deps) = &cask.depends_on {
                for formula_dep in &deps.formula {
                    if !formulae_for_resolution.contains_key(formula_dep) {
                        // Need to fetch formula definition before adding
                        match Self::fetch_target_definitions(&[formula_dep.clone()], &cache)
                            .await
                            .remove(formula_dep)
                        {
                            Some(Ok(target_def @ InstallTargetIdentifier::Formula(_))) => {
                                debug!(
                                    "Adding formula dependency from cask '{}': {}",
                                    token, formula_dep
                                );
                                formulae_for_resolution.insert(formula_dep.clone(), target_def);
                            }
                            Some(Err(e)) => {
                                if !errors.iter().any(|(n, _)| n == formula_dep) {
                                    errors.push((formula_dep.clone(), e));
                                }
                            }
                            _ => {
                                // Not found or not a formula
                                let msg = format!(
                                    "Dependency '{formula_dep}' for cask '{token}' not found or not a formula."
                                );
                                if !errors.iter().any(|(n, _)| n == formula_dep) {
                                    errors.push((formula_dep.clone(), SpError::NotFound(msg)));
                                }
                            }
                        }
                    }
                }
                for cask_dep in &deps.cask {
                    if processed_casks.insert(cask_dep.clone()) {
                        debug!(
                            "Queueing cask dependency from cask '{}': {}",
                            token, cask_dep
                        );
                        cask_queue.push_back(cask_dep.clone());
                        // Definition will be fetched when popped if not in cask_deps_map
                    }
                }
            }
        }

        // --- Resolve Formula Dependencies ---
        let mut resolved_formula_graph: Option<Arc<ResolvedGraph>> = None;
        if !formulae_for_resolution.is_empty() {
            let resolution_target_names: Vec<String> =
                formulae_for_resolution.keys().cloned().collect();
            debug!(
                "Resolving dependencies for formulae: {:?}",
                resolution_target_names
            );
            let formulary = Formulary::new(config.clone());
            let keg_registry = KegRegistry::new(config.clone());
            let ctx = ResolutionContext {
                formulary: &formulary,
                keg_registry: &keg_registry,
                sp_prefix: config.prefix(),
                include_optional: flags.include_optional,
                include_test: false, // Typically false for install/upgrade
                skip_recommended: flags.skip_recommended,
                force_build: flags.build_from_source, // Pass build flag here
            };
            let mut resolver = DependencyResolver::new(ctx);
            match resolver.resolve_targets(&resolution_target_names) {
                Ok(graph) => {
                    debug!("Dependency resolution successful.");
                    resolved_formula_graph = Some(Arc::new(graph));
                }
                Err(e) => {
                    error!(
                        "✖ Fatal dependency resolution error: {}. Aborting operation.",
                        e
                    );
                    // Add error for all requested formulae
                    for name in resolution_target_names {
                        if !errors.iter().any(|(n, _)| n == &name) {
                            errors.push((name.clone(), SpError::DependencyError(e.to_string())));
                        }
                    }
                    // Return early as resolution is fundamental
                    return Ok((jobs, errors, already_installed));
                }
            }
        }

        // --- Construct Final Job List ---
        let final_graph = resolved_formula_graph.clone(); // Arc clone

        // Add initial ops first (Install, Upgrade, Reinstall)
        for (name, (action, opt_def)) in initial_ops {
            if errors.iter().any(|(n, _)| n == &name) {
                continue;
            } // Skip errored targets
            if let Some(target_def) = opt_def {
                jobs.push(PipelineJob {
                    target: target_def.clone(),    // Clone here
                    download_path: PathBuf::new(), // Will be filled by download coordinator
                    action: action.clone(),
                    resolved_graph: final_graph.clone(), // Pass graph if formula
                    is_source_build: match action {
                        // Determine if source build is needed based on flags and bottle
                        // availability
                        PipelineActionType::Install | PipelineActionType::Upgrade { .. } => {
                            if let InstallTargetIdentifier::Formula(f) = &target_def {
                                flags.build_from_source
                                    || !build::formula::has_bottle_for_current_platform(f)
                            } else {
                                false
                            }
                        }
                        PipelineActionType::Reinstall { .. } => {
                            // Reinstall might need source build if forced or original bottle
                            // missing?
                            if let InstallTargetIdentifier::Formula(f) = &target_def {
                                flags.build_from_source
                                    || !build::formula::has_bottle_for_current_platform(f)
                                // Check availability for the *current* version
                            } else {
                                false
                            }
                        }
                    },
                });
            }
        }

        // Add dependency installs from the graph
        if let Some(graph) = resolved_formula_graph {
            for dep in &graph.install_plan {
                let name = dep.formula.name();
                if errors.iter().any(|(n, _)| n == name) {
                    continue;
                } // Skip errored deps
                // Add only if it wasn't an initial target already added
                if !jobs.iter().any(|j| match &j.target {
                    InstallTargetIdentifier::Formula(f) => f.name() == name,
                    _ => false,
                }) {
                    if matches!(
                        dep.status,
                        ResolutionStatus::Missing | ResolutionStatus::Requested
                    ) {
                        jobs.push(PipelineJob {
                            target: InstallTargetIdentifier::Formula(dep.formula.clone()),
                            download_path: PathBuf::new(),
                            action: PipelineActionType::Install, // Dependencies are always installs
                            resolved_graph: Some(graph.clone()),
                            is_source_build: flags.build_from_source
                                || !build::formula::has_bottle_for_current_platform(&dep.formula),
                        });
                    }
                } else {
                    // If it *was* an initial target, update its source build status based on
                    // resolution
                    if let Some(initial_job) = jobs.iter_mut().find(|j| match &j.target {
                        InstallTargetIdentifier::Formula(f) => f.name() == name,
                        _ => false,
                    }) {
                        initial_job.is_source_build = flags.build_from_source
                            || !build::formula::has_bottle_for_current_platform(&dep.formula);
                    }
                }
            }

            // Add cask dependencies identified earlier (if they need installing)
            for (token, cask_arc) in cask_deps_map {
                if errors.iter().any(|(n, _)| n == &token) {
                    continue;
                }
                if !jobs.iter().any(|j| match &j.target {
                    InstallTargetIdentifier::Cask(c) => c.token == token,
                    _ => false,
                }) {
                    // Check if cask is actually installed before adding install job
                    if sp_core::installed::get_installed_package(&token, config)
                        .await?
                        .is_none()
                    {
                        jobs.push(PipelineJob {
                            target: InstallTargetIdentifier::Cask(cask_arc.clone()),
                            download_path: PathBuf::new(),
                            action: PipelineActionType::Install,
                            resolved_graph: None,
                            is_source_build: false,
                        });
                    } else {
                        // Mark as already installed if it's just a dependency and present
                        already_installed.insert(token);
                    }
                }
            }

            // Sort all jobs by dependency order before returning
            if !jobs.is_empty() {
                debug!("Sorting {} jobs by dependency order", jobs.len());
                sort_jobs_by_dependency_order(&mut jobs, &graph);
            }
        }

        Ok((jobs, errors, already_installed))
    }

    /// Fetches Formula or Cask definitions for a list of names.
    async fn fetch_target_definitions(
        names: &[String],
        cache: &Cache,
    ) -> HashMap<String, Result<InstallTargetIdentifier>> {
        let mut results = HashMap::new();
        let mut futures = JoinSet::new();

        // Attempt to load full lists first to minimize API calls
        let formulae_map_res =
            load_or_fetch_json(cache, "formula.json", api::fetch_all_formulas())
                .await
                .map(|values| {
                    values
                        .into_iter()
                        .filter_map(|v| serde_json::from_value::<Formula>(v).ok())
                        .map(|f| (f.name.clone(), Arc::new(f)))
                        .collect::<HashMap<_, _>>()
                });
        let casks_map_res = load_or_fetch_json(cache, "cask.json", api::fetch_all_casks())
            .await
            .map(|values| {
                values
                    .into_iter()
                    .filter_map(|v| serde_json::from_value::<Cask>(v).ok())
                    .map(|c| (c.token.clone(), Arc::new(c)))
                    .collect::<HashMap<_, _>>()
            });

        for name in names {
            let name = name.clone();
            let formulae_map_clone = formulae_map_res.as_ref().ok().cloned();
            let casks_map_clone = casks_map_res.as_ref().ok().cloned();

            futures.spawn(async move {
                let formulae_map = formulae_map_clone; // Use the cloned map
                let casks_map = casks_map_clone; // Use the cloned map

                // Check formulae map first
                if let Some(map) = formulae_map {
                    if let Some(f_arc) = map.get(&name) {
                        return (name, Ok(InstallTargetIdentifier::Formula(f_arc.clone())));
                    }
                }
                // Check casks map next
                if let Some(map) = casks_map {
                    if let Some(c_arc) = map.get(&name) {
                        return (name, Ok(InstallTargetIdentifier::Cask(c_arc.clone())));
                    }
                }

                // If not found in maps (maybe maps failed to load, or item is obscure), try direct
                // API fetch. This adds redundancy but makes it more robust if full
                // list fetch fails
                match api::get_formula(&name).await {
                    // Using get_formula which returns Formula
                    Ok(formula) => {
                        return (
                            name,
                            Ok(InstallTargetIdentifier::Formula(Arc::new(formula))),
                        );
                    }
                    Err(SpError::NotFound(_)) | Err(SpError::Api(_)) | Err(SpError::Http(_)) => {
                        // Formula fetch failed, try cask
                    }
                    Err(e) => return (name, Err(e)), // Propagate other formula errors
                }

                match api::get_cask(&name).await {
                    // Using get_cask which returns Cask
                    Ok(cask) => (name, Ok(InstallTargetIdentifier::Cask(Arc::new(cask)))),
                    Err(e) => (name, Err(e)), // Return cask error (could be NotFound)
                }
            });
        }

        while let Some(res) = futures.join_next().await {
            match res {
                Ok((name, result)) => {
                    results.insert(name, result);
                }
                Err(e) => {
                    // Log join error, but difficult to associate with a name here
                    error!("Task join error during definition fetch: {}", e);
                }
            }
        }
        results
    }
    /// Coordinates the download phase.
    #[instrument(skip(planned_jobs, config, cache, client, job_tx, flags))]
    async fn coordinate_downloads(
        planned_jobs: Vec<PipelineJob>, // Takes ownership of the jobs Vec
        config: &Config,
        cache: Arc<Cache>,
        client: Arc<reqwest::Client>,
        job_tx: Sender<PipelineJob>, // Sender for jobs ready to be installed
        flags: &PipelineFlags,
    ) -> Result<usize> {
        // Returns count of download errors
        let mut download_join_set: JoinSet<Result<(PipelineJob, String)>> = JoinSet::new();
        let mut download_errors_count = 0;

        // Spawn download tasks
        for mut job in planned_jobs {
            // Mutate job to set is_source_build
            let name = match &job.target {
                InstallTargetIdentifier::Formula(f) => f.name().to_string(),
                InstallTargetIdentifier::Cask(c) => c.token.clone(),
            };
            let name_clone = name.clone();
            let target_type = job.target.clone(); // Clone Arc for the task
            let cfg_clone = config.clone();
            let cache_clone = Arc::clone(&cache);
            let client_clone = Arc::clone(&client);

            // Determine source build requirement *before* spawning download task
            job.is_source_build = match &target_type {
                InstallTargetIdentifier::Formula(f) => {
                    flags.build_from_source
                        || !build::formula::has_bottle_for_current_platform(f)
                }
                InstallTargetIdentifier::Cask(_) => false,
            };
            let is_source_build = job.is_source_build; // Copy bool for task

            download_join_set.spawn(
                async move {
                    // Now call download_target with the pre-determined is_source_build flag
                    let download_path = download_target_file(
                        &name,
                        &target_type,
                        &cfg_clone,
                        cache_clone,
                        client_clone,
                        is_source_build,
                    )
                    .await?;
                    job.download_path = download_path; // Update job with download path
                    Ok((job, name)) // Return the modified job
                }
                .instrument(tracing::info_span!("download_task", pkg = %name_clone)), // Use name_clone here
            );
        }

        // Process download results
        while let Some(result) = download_join_set.join_next().await {
            match result {
                Ok(Ok((install_job, _name))) => {
                    // Send the job with download_path populated
                    if job_tx.send(install_job).is_err() {
                        error!(
                            "Job channel closed while sending download result for {}",
                            _name
                        );
                        download_errors_count += 1; // Treat send error as a download phase error
                    }
                }
                Ok(Err(e)) => {
                    // Log error, name extraction might be needed if not DownloadError
                    let name = match &e {
                        SpError::DownloadError(n, _, _) => n.clone(),
                        _ => "[unknown]".to_string(),
                    };
                    error!("✖ Download failed for '{}': {}", name.cyan(), e);
                    download_errors_count += 1;
                }
                Err(join_error) => {
                    error!("✖ Download task panicked: {}", join_error);
                    download_errors_count += 1;
                }
            }
        }
        Ok(download_errors_count)
    }
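    // Note: the JoinSet completes downloads in whatever order they finish, and
    // each finished job is forwarded to the workers immediately, so installs
    // of early downloads overlap with downloads still in flight.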
    /// Spawns a task to coordinate worker threads.
    fn coordinate_workers(
        pool: ThreadPool,
        job_rx: Receiver<PipelineJob>,
        result_tx: Sender<PipelineJobResult>,
        config: &Config,
        cache: Arc<Cache>,
        // flags are passed within the PipelineJob
    ) -> tokio::task::JoinHandle<()> {
        let cfg_clone = config.clone(); // Clone config once for the coordinator task
        tokio::spawn(
            async move {
                while let Ok(job) = job_rx.recv() {
                    let pkg_name = match &job.target {
                        InstallTargetIdentifier::Formula(f) => f.name().to_string(),
                        InstallTargetIdentifier::Cask(c) => c.token.clone(),
                    };
                    let res_tx = result_tx.clone();
                    let worker_cfg = cfg_clone.clone(); // Clone config again for the worker thread
                    let worker_cache = Arc::clone(&cache);
                    let install_span = tracing::info_span!("install_worker", pkg = %pkg_name);

                    pool.execute(move || {
                        // Run the potentially blocking install logic in the thread pool
                        let result = install_span
                            .in_scope(|| Self::run_pipeline_job(job, &worker_cfg, worker_cache));
                        if res_tx.send(result).is_err() {
                            warn!(
                                "Result channel closed, could not send install result for {}.",
                                pkg_name
                            );
                        }
                    });
                }
                debug!("Job channel closed, worker coordinator task finishing.");
            }
            .in_current_span(), // Inherit span context
        )
    }

    /// Collects results from worker threads.
    fn collect_results(result_rx: Receiver<PipelineJobResult>) -> Vec<(String, SpError)> {
        let mut install_errors: Vec<(String, SpError)> = Vec::new();
        for result in result_rx {
            // Drains the channel
            let (_result, was_success, message) = match result {
                PipelineJobResult::InstallOk(name, pkg_type) => {
                    let pkg_type_str = match pkg_type {
                        PackageType::Formula => "Formula",
                        PackageType::Cask => "Cask",
                    };
                    (
                        name.clone(),
                        true,
                        format!("Installed {} {}", pkg_type_str, name.green()),
                    )
                }
                PipelineJobResult::UpgradeOk(name, pkg_type, old_v) => {
                    let pkg_type_str = match pkg_type {
                        PackageType::Formula => "Formula",
                        PackageType::Cask => "Cask",
                    };
                    (
                        name.clone(),
                        true,
                        format!(
                            "Upgraded {} {} (from {})",
                            pkg_type_str,
                            name.green(),
                            old_v
                        ),
                    )
                }
                PipelineJobResult::ReinstallOk(name, pkg_type) => {
                    let pkg_type_str = match pkg_type {
                        PackageType::Formula => "Formula",
                        PackageType::Cask => "Cask",
                    };
                    (
                        name.clone(),
                        true,
                        format!("Reinstalled {} {}", pkg_type_str, name.green()),
                    )
                }
                PipelineJobResult::InstallErr(name, pkg_type, e) => {
                    let pkg_type_str = match pkg_type {
                        PackageType::Formula => "Formula",
                        PackageType::Cask => "Cask",
                    };
                    let err_msg = format!("Failed {} '{}': {}", pkg_type_str, name.red(), e);
                    install_errors.push((name.clone(), e));
                    (name.clone(), false, err_msg)
                }
                PipelineJobResult::UpgradeErr(name, pkg_type, old_v, e) => {
                    let pkg_type_str = match pkg_type {
                        PackageType::Formula => "Formula",
                        PackageType::Cask => "Cask",
                    };
                    let err_msg = format!(
                        "Failed {} upgrade '{}' (from {}): {}",
                        pkg_type_str,
                        name.red(),
                        old_v,
                        e
                    );
                    install_errors.push((name.clone(), e));
                    (name.clone(), false, err_msg)
                }
                PipelineJobResult::ReinstallErr(name, pkg_type, e) => {
                    let pkg_type_str = match pkg_type {
                        PackageType::Formula => "Formula",
                        PackageType::Cask => "Cask",
                    };
                    let err_msg =
                        format!("Failed {} reinstall '{}': {}", pkg_type_str, name.red(), e);
                    install_errors.push((name.clone(), e));
                    (name.clone(), false, err_msg)
                }
            };
            if !was_success {
                error!("✖ {}", message);
            } else {
                info_line(message);
            }
        }
        install_errors
    }
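    // Design note: installs are blocking filesystem work, so they run on a
    // dedicated `threadpool` rather than on the tokio runtime; the crossbeam
    // channels bridge the async coordinator task and the synchronous workers.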
#[instrument(skip(job, config, cache), fields(pkg = %match &job.target { InstallTargetIdentifier::Formula(f) => f.name().to_string(), InstallTargetIdentifier::Cask(c) => c.token.clone(), }, action = ?job.action))] fn run_pipeline_job(job: PipelineJob, config: &Config, cache: Arc) -> PipelineJobResult { let (name, pkg_type) = match &job.target { InstallTargetIdentifier::Formula(f) => (f.name().to_string(), PackageType::Formula), InstallTargetIdentifier::Cask(c) => (c.token.clone(), PackageType::Cask), }; // --- 1. Pre-Install Step (Uninstall for Upgrade/Reinstall) --- let pre_install_result = match &job.action { PipelineActionType::Upgrade { from_version, old_install_path, } | PipelineActionType::Reinstall { version: from_version, current_install_path: old_install_path, } => { info_line(format!( "Removing existing {name} version {from_version}..." )); // Construct the InstalledPackageInfo for the *old* version let old_info = InstalledPackageInfo { name: name.clone(), version: from_version.clone(), pkg_type: pkg_type.clone(), path: old_install_path.clone(), }; let uninstall_opts = UninstallOptions { skip_zap: true }; // CRUCIAL // Call the appropriate core uninstall function match pkg_type { PackageType::Formula => core_uninstall::uninstall_formula_artifacts( &old_info, config, &uninstall_opts, ), PackageType::Cask => { core_uninstall::uninstall_cask_artifacts(&old_info, config, &uninstall_opts) } } } PipelineActionType::Install => Ok(()), // No pre-install step needed }; if let Err(e) = pre_install_result { let old_version_str = match &job.action { PipelineActionType::Upgrade { from_version, .. } => from_version.clone(), PipelineActionType::Reinstall { version, .. } => version.clone(), _ => "[N/A]".to_string(), }; error!( "Failed to remove old version {} for {}: {}", old_version_str, name, e ); // Return specific error based on action type return match job.action { PipelineActionType::Upgrade { from_version, .. } => { PipelineJobResult::UpgradeErr(name, pkg_type, from_version, e) } PipelineActionType::Reinstall { .. } => { PipelineJobResult::ReinstallErr(name, pkg_type, e) } PipelineActionType::Install => PipelineJobResult::InstallErr(name, pkg_type, e), /* Should ideally not happen here */ }; } // --- 2. Perform Installation --- info_line(format!( "Installing {} {}...", pkg_type_str(pkg_type.clone()), name )); let install_result = Self::perform_actual_installation(&job, config, cache); // Pass job by ref // --- 3. Return result based on action type and install outcome --- match (job.action, install_result) { (PipelineActionType::Install, Ok(_)) => PipelineJobResult::InstallOk(name, pkg_type), (PipelineActionType::Install, Err(e)) => { PipelineJobResult::InstallErr(name, pkg_type, e) } (PipelineActionType::Upgrade { from_version, .. }, Ok(_)) => { PipelineJobResult::UpgradeOk(name, pkg_type, from_version) } (PipelineActionType::Upgrade { from_version, .. }, Err(e)) => { PipelineJobResult::UpgradeErr(name, pkg_type, from_version, e) } (PipelineActionType::Reinstall { .. }, Ok(_)) => { PipelineJobResult::ReinstallOk(name, pkg_type) } (PipelineActionType::Reinstall { .. }, Err(e)) => { PipelineJobResult::ReinstallErr(name, pkg_type, e) } } } /// Extracted core install logic (previously part of run_install). 
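// Why `skip_zap: true` is marked CRUCIAL above: a zap-style uninstall also
// removes user data (preferences, caches), which must survive an upgrade or
// reinstall; only the old version's artifacts should be cleared. The
// action-to-result mapping at the end of the function is mechanical:
//
//     Install   + Ok -> InstallOk      Install   + Err -> InstallErr
//     Upgrade   + Ok -> UpgradeOk      Upgrade   + Err -> UpgradeErr
//     Reinstall + Ok -> ReinstallOk    Reinstall + Err -> ReinstallErr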
#[instrument(skip(job, config, _cache), fields(pkg = %match &job.target { InstallTargetIdentifier::Formula(f) => f.name().to_string(), InstallTargetIdentifier::Cask(c) => c.token.clone(), }))] fn perform_actual_installation( job: &PipelineJob, config: &Config, _cache: Arc, ) -> Result<()> { match &job.target { InstallTargetIdentifier::Formula(formula) => { let install_dir = formula.install_prefix(&config.cellar)?; // Ensure parent exists (needed after potential uninstall) if let Some(parent_dir) = install_dir.parent() { fs::create_dir_all(parent_dir).map_err(|e| SpError::Io(Arc::new(e)))?; } if job.is_source_build { // Source Build Logic info_line(format!("Building {} from source", formula.name())); let resolved_graph = job.resolved_graph.as_ref().ok_or_else(|| { SpError::Generic("Missing resolved graph for source build".to_string()) })?; let build_dep_paths = resolved_graph.build_dependency_opt_paths.clone(); let runtime_dep_paths = resolved_graph.runtime_dependency_opt_paths.clone(); let all_dep_paths = [build_dep_paths, runtime_dep_paths].concat(); let build_result = block_on(build::formula::source::build_from_source( &job.download_path, formula, // Pass the Arc by ref config, &all_dep_paths, )); match build_result { Ok(installed_dir) => build::formula::link::link_formula_artifacts( formula, &installed_dir, config, ), Err(e) => Err(e), } } else { // Bottle Install Logic info_line(format!("Installing bottle for {}", formula.name())); let installed_dir = build::formula::bottle::install_bottle( &job.download_path, formula, // Pass the Arc by ref config, )?; build::formula::link::link_formula_artifacts(formula, &installed_dir, config) } } InstallTargetIdentifier::Cask(cask) => { // Cask Install Logic info_line(format!("Installing cask {}", cask.token)); build::cask::install_cask(cask, &job.download_path, config) } } } } // --- Helper Functions (Moved from old install.rs or new) --- /// Downloads the target file (bottle, source, cask archive). 
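// Note on `block_on` above: it is only safe because this function executes on
// a dedicated threadpool worker, never on a runtime thread. Depending on which
// `block_on` is in scope, calling it from inside the async runtime would
// either panic (tokio's `Handle::block_on`) or risk stalling a worker thread;
// `tokio::task::block_in_place` is the usual escape hatch if that constraint
// ever changes.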
#[instrument(skip(cfg, cache, client), fields(name=%target_name))] async fn download_target_file( target_name: &str, target_type: &InstallTargetIdentifier, // Borrow instead of consume cfg: &Config, cache: Arc, client: Arc, is_source_build: bool, ) -> Result { debug!( "Starting download process for {} (source_build={})", target_name, is_source_build ); match target_type { InstallTargetIdentifier::Formula(formula) => { if is_source_build { info_line(format!("Downloading source for {}", formula.name)); build::formula::source::download_source(formula, cfg).await } else { info_line(format!("Downloading bottle {}", formula.name)); build::formula::bottle::download_bottle(formula, cfg, client.as_ref()).await } } InstallTargetIdentifier::Cask(cask) => { info_line(format!("Downloading cask {}", cask.token)); build::cask::download_cask(cask, cache.as_ref()).await } } .map_err(|e| { // Wrap errors nicely error!("Download failed for {}: {}", target_name, e); // Add more context if it's not already a DownloadError if matches!(e, SpError::DownloadError(_, _, _)) { e } else { SpError::DownloadError( target_name.to_string(), "[unknown URL]".to_string(), e.to_string(), ) } }) } // Simple green INFO logger for install actions (copied from old install.rs) fn info_line(message: impl AsRef) { println!("{} sp::pipeline: {}", "INFO".green(), message.as_ref()); // Indicate pipeline source } // Helper to get string representation of PackageType fn pkg_type_str(pkg_type: PackageType) -> &'static str { match pkg_type { PackageType::Formula => "Formula", PackageType::Cask => "Cask", } } async fn load_or_fetch_json( cache: &Cache, filename: &str, api_fetcher: impl std::future::Future>, ) -> Result> { match cache.load_raw(filename) { Ok(data) => { debug!("Loaded {} from cache.", filename); serde_json::from_str(&data).map_err(|e| { error!("Failed to parse cached {}: {}", filename, e); SpError::Cache(format!("Failed parse cached {filename}: {e}")) }) } Err(_) => { debug!("Cache miss for {}, fetching from API...", filename); let raw_data = api_fetcher.await?; if let Err(cache_err) = cache.store_raw(filename, &raw_data) { warn!( "Failed to cache {} data after fetching: {}", filename, cache_err ); } else { debug!("Successfully cached {} after fetching.", filename); } serde_json::from_str(&raw_data).map_err(|e| SpError::Json(Arc::new(e))) } } } // Add helper function for sorting jobs by dependency order fn sort_jobs_by_dependency_order(jobs: &mut [PipelineJob], graph: &ResolvedGraph) { let formula_order: HashMap = graph .install_plan .iter() .enumerate() .map(|(idx, dep)| (dep.formula.name().to_string(), idx)) .collect(); jobs.sort_by_key(|job| match &job.target { InstallTargetIdentifier::Formula(f) => { formula_order.get(f.name()).copied().unwrap_or(usize::MAX) } InstallTargetIdentifier::Cask(_) => usize::MAX, // Install casks after formulae }); } ``` ## /sp-cli/src/cli/reinstall.rs ```rs path="/sp-cli/src/cli/reinstall.rs" // sp-cli/src/cli/reinstall.rs use std::sync::Arc; use clap::Args; use sp_common::cache::Cache; use sp_common::config::Config; use sp_common::error::Result; use crate::cli::pipeline::{CommandType, PipelineExecutor, PipelineFlags}; #[derive(Args, Debug)] pub struct ReinstallArgs { #[arg(required = true)] pub names: Vec, #[arg( long, help = "Force building the formula from source, even if a bottle is available" )] pub build_from_source: bool, } impl ReinstallArgs { pub async fn run(&self, config: &Config, cache: Arc) -> Result<()> { println!("Reinstalling: {:?}", self.names); // User feedback let flags = 
PipelineFlags { // Populate flags from args build_from_source: self.build_from_source, include_optional: false, // Reinstall usually doesn't change optional deps skip_recommended: true, /* Reinstall usually doesn't change recommended deps * ... add other common flags if needed ... */ }; PipelineExecutor::execute_pipeline( &self.names, CommandType::Reinstall, config, cache, &flags, ) .await } } ``` ## /sp-cli/src/cli/search.rs ```rs path="/sp-cli/src/cli/search.rs" // Contains the logic for the `search` command. use std::sync::Arc; use clap::Args; use colored::Colorize; use prettytable::{Table, format}; use serde_json::Value; use sp_common::cache::Cache; use sp_common::config::Config; use sp_common::error::Result; use sp_net::fetch::api; use terminal_size::{Width, terminal_size}; use unicode_width::{UnicodeWidthChar, UnicodeWidthStr}; use crate::ui; #[derive(Args, Debug)] pub struct Search { /// The search term to look for pub query: String, /// Search only formulae #[arg(long, conflicts_with = "cask")] pub formula: bool, /// Search only casks #[arg(long, conflicts_with = "formula")] pub cask: bool, } /// Represents the type of package to search for pub enum SearchType { All, Formula, Cask, } impl Search { /// Runs the search command pub async fn run(&self, config: &Config, cache: Arc) -> Result<()> { // Determine search type based on flags let search_type = if self.formula { SearchType::Formula } else if self.cask { SearchType::Cask } else { SearchType::All }; // Run the search with the determined type run_search(&self.query, search_type, config, cache).await } } /// Searches for packages matching the query pub async fn run_search( query: &str, search_type: SearchType, _config: &Config, // kept for potential future needs cache: Arc, ) -> Result<()> { tracing::debug!("Searching for packages matching: {}", query); // Use the ui utility function to create the spinner let pb = ui::create_spinner(&format!("Searching for \"{query}\"")); // <-- CHANGED // Store search results let mut formula_matches = Vec::new(); let mut cask_matches = Vec::new(); let mut formula_err = None; let mut cask_err = None; // Search formulas if needed if matches!(search_type, SearchType::All | SearchType::Formula) { match search_formulas(Arc::clone(&cache), query).await { Ok(matches) => formula_matches = matches, Err(e) => { tracing::error!("Error searching formulas: {}", e); formula_err = Some(e); // Store error } } } // Search casks if needed if matches!(search_type, SearchType::All | SearchType::Cask) { match search_casks(Arc::clone(&cache), query).await { Ok(matches) => cask_matches = matches, Err(e) => { tracing::error!("Error searching casks: {}", e); cask_err = Some(e); // Store error } } } // Finished searching pb.finish_and_clear(); // Handle potential errors after attempting searches if formula_matches.is_empty() && cask_matches.is_empty() { if let Some(e) = formula_err.or(cask_err) { // If both searches errored, return one of the errors return Err(e); } // If no errors but no matches, print message below } // Print results (even if empty, the function handles that) print_search_results(query, &formula_matches, &cask_matches); Ok(()) } /// Search for formulas matching the query async fn search_formulas(cache: Arc, query: &str) -> Result> { let query_lower = query.to_lowercase(); let mut matches = Vec::new(); let mut data_source_name = "cache"; // Assume cache initially // Try to load from cache let formula_data_result = cache.load_raw("formula.json"); let formulas: Vec = match formula_data_result { 
Ok(formula_data) => serde_json::from_str(&formula_data)?,
        Err(e) => {
            // If cache fails, fetch from API
            tracing::debug!("Formula cache load failed ({}), fetching from API", e);
            data_source_name = "API";
            let all_formulas = api::fetch_all_formulas().await?; // This fetches String
            // Try to cache the fetched data
            if let Err(cache_err) = cache.store_raw("formula.json", &all_formulas) {
                tracing::warn!("Failed to cache formula data after fetching: {}", cache_err);
            }
            // Now parse the String fetched from API
            serde_json::from_str(&all_formulas)?
        }
    };

    // Find matching formulas from the loaded data (either cache or API)
    for formula in formulas {
        if is_formula_match(&formula, &query_lower) {
            matches.push(formula);
        }
    }

    tracing::debug!(
        "Found {} formula matches from {}",
        matches.len(),
        data_source_name
    );

    Ok(matches)
}

/// Search for casks matching the query
async fn search_casks(cache: Arc<Cache>, query: &str) -> Result<Vec<Value>> {
    let query_lower = query.to_lowercase();
    let mut matches = Vec::new();
    let mut data_source_name = "cache"; // Assume cache initially

    // Try to load from cache
    let cask_data_result = cache.load_raw("cask.json");
    let casks: Vec<Value> = match cask_data_result {
        Ok(cask_data) => serde_json::from_str(&cask_data)?,
        Err(e) => {
            // If cache fails, fetch from API
            tracing::debug!("Cask cache load failed ({}), fetching from API", e);
            data_source_name = "API";
            let all_casks = api::fetch_all_casks().await?; // Fetches String
            // Try to cache the fetched data
            if let Err(cache_err) = cache.store_raw("cask.json", &all_casks) {
                tracing::warn!("Failed to cache cask data after fetching: {}", cache_err);
            }
            // Parse the String fetched from API
            serde_json::from_str(&all_casks)?
} }; // Find matching casks for cask in casks { if is_cask_match(&cask, &query_lower) { matches.push(cask); } } tracing::debug!( "Found {} cask matches from {}", matches.len(), data_source_name ); Ok(matches) } /// Check if a formula matches the search query fn is_formula_match(formula: &Value, query: &str) -> bool { // Check name if let Some(name) = formula.get("name").and_then(|n| n.as_str()) { if name.to_lowercase().contains(query) { return true; } } // Check full_name if let Some(full_name) = formula.get("full_name").and_then(|n| n.as_str()) { if full_name.to_lowercase().contains(query) { return true; } } // Check description if let Some(desc) = formula.get("desc").and_then(|d| d.as_str()) { if desc.to_lowercase().contains(query) { return true; } } // Check aliases if let Some(aliases) = formula.get("aliases").and_then(|a| a.as_array()) { for alias in aliases { if let Some(alias_str) = alias.as_str() { if alias_str.to_lowercase().contains(query) { return true; } } } } false } /// Check if a cask matches the search query fn is_cask_match(cask: &Value, query: &str) -> bool { // Check token if let Some(token) = cask.get("token").and_then(|t| t.as_str()) { if token.to_lowercase().contains(query) { return true; } } // Check name array if let Some(names) = cask.get("name").and_then(|n| n.as_array()) { for name in names { if let Some(name_str) = name.as_str() { if name_str.to_lowercase().contains(query) { return true; } } } } // Check description if let Some(desc) = cask.get("desc").and_then(|d| d.as_str()) { if desc.to_lowercase().contains(query) { return true; } } // Check aliases if casks have them (add if necessary) if let Some(aliases) = cask.get("aliases").and_then(|a| a.as_array()) { for alias in aliases { if let Some(alias_str) = alias.as_str() { if alias_str.to_lowercase().contains(query) { return true; } } } } false } /// Truncates to max visible width, adding '…' if cut. fn truncate_vis(s: &str, max: usize) -> String { if UnicodeWidthStr::width(s) <= max { return s.to_string(); } let mut w = 0; let mut out = String::new(); // Ensure max is at least 1 for the ellipsis let effective_max = if max > 0 { max } else { 1 }; for ch in s.chars() { let cw = UnicodeWidthChar::width(ch).unwrap_or(0); // Check if adding the next char *including* ellipsis fits if w + cw >= effective_max.saturating_sub(1) { break; } out.push(ch); w += cw; } out.push('…'); out } /// Width‑aware search results with Name:Desc = 1:2 truncation and Name coloured. 
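// Illustrative behaviour of `truncate_vis` above (widths are display columns,
// so double-width CJK glyphs count as 2, not 1):
//
//     assert_eq!(truncate_vis("hello", 10), "hello");        // fits, unchanged
//     assert_eq!(truncate_vis("hello world", 8), "hello …"); // cut + ellipsis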
pub fn print_search_results(query: &str, formula_matches: &[Value], cask_matches: &[Value]) { let total = formula_matches.len() + cask_matches.len(); if total == 0 { println!("{}", format!("No matches found for '{query}'").yellow()); return; } println!( "{}", format!("Found {total} result(s) for '{query}'").bold() ); // 1) Terminal width let term_cols = terminal_size() .map(|(Width(w), _)| w as usize) .unwrap_or(120); // Default width if detection fails // 2) Fixed columns: "Formula"/"Cask" plus two " | " separators let type_col = 7; // Max width for "Formula" let sep_width = 3; // Width of " | " let total_fixed = type_col + sep_width * 2; // Ensure leftover is not negative let leftover = term_cols.saturating_sub(total_fixed); // Allocate space, ensuring minimum width for names/desc let name_min_width = 10; // Minimum columns for the name let desc_min_width = 20; // Minimum columns for the description // Calculate proportional widths, respecting minimums let name_prop_width = leftover / 3; let _desc_prop_width = leftover.saturating_sub(name_prop_width); let name_max = std::cmp::max(name_min_width, name_prop_width); // Adjust desc_max based on the actual space name_max takes, ensuring desc gets at least its // minimum let desc_max = std::cmp::max(desc_min_width, leftover.saturating_sub(name_max)); // Clamp to ensure total doesn't exceed leftover (due to minimums) let name_max = std::cmp::min(name_max, leftover.saturating_sub(desc_min_width)); let desc_max = std::cmp::min(desc_max, leftover.saturating_sub(name_max)); // 3) Build plain table with truncated cells let mut tbl = Table::new(); tbl.set_format(*format::consts::FORMAT_NO_BORDER_LINE_SEPARATOR); // Don't set titles, we'll manually handle the header coloring later if desired // tbl.set_titles(prettytable::row!["Type", "Name", "Description"]); for f in formula_matches { let raw_name = f.get("name").and_then(|n| n.as_str()).unwrap_or("Unknown"); let raw_desc = f.get("desc").and_then(|d| d.as_str()).unwrap_or(""); let _name = truncate_vis(raw_name, name_max); let desc = truncate_vis(raw_desc, desc_max); // Add colored type and name directly tbl.add_row(prettytable::row![ "Formula".cyan(), raw_name.blue().bold(), /* Color the full name before potential truncation for * simplicity here */ desc // Description remains uncolored ]); } for c in cask_matches { let raw_name = c.get("token").and_then(|t| t.as_str()).unwrap_or("Unknown"); let raw_desc = c.get("desc").and_then(|d| d.as_str()).unwrap_or(""); // let name = truncate_vis(raw_name, name_max); // Truncation might hide colored part let desc = truncate_vis(raw_desc, desc_max); // Add colored type and name directly tbl.add_row(prettytable::row![ "Cask".green(), raw_name.blue().bold(), // Color the full name desc // Description remains uncolored ]); } // 4) Print the table directly (coloring is done during row creation) tbl.printstd(); } ``` ## /sp-cli/src/cli/uninstall.rs ```rs path="/sp-cli/src/cli/uninstall.rs" use std::sync::Arc; use clap::Args; use colored::Colorize; use sp_common::Cache; use sp_common::config::Config; use sp_common::error::{Result, SpError}; use sp_core::{PackageType, UninstallOptions, installed, uninstall as core_uninstall}; use tracing::{debug, error}; // Removed warn use walkdir; use crate::ui; #[derive(Args, Debug)] pub struct Uninstall { /// The names of the formulas or casks to uninstall #[arg(required = true)] // Ensure at least one name is given pub names: Vec, } impl Uninstall { pub async fn run(&self, config: &Config, _cache: Arc) -> Result<()> { let names = 
&self.names; let mut errors: Vec<(String, SpError)> = Vec::new(); for name in names { // Basic name validation to prevent path traversal if name.contains('/') || name.contains("..") { let msg = format!("Invalid package name '{name}' contains disallowed characters"); error!("✖ {msg}"); errors.push((name.to_string(), SpError::Generic(msg))); continue; } let pb = ui::create_spinner(&format!("Uninstalling {name}")); match installed::get_installed_package(name, config).await? { Some(installed_info) => { let (file_count, size_bytes) = count_files_and_size(&installed_info.path).unwrap_or((0, 0)); let uninstall_opts = UninstallOptions { skip_zap: false }; // Explicit uninstall includes zap debug!( "Attempting uninstall for {} ({:?})", name, installed_info.pkg_type ); let uninstall_result = match installed_info.pkg_type { PackageType::Formula => core_uninstall::uninstall_formula_artifacts( &installed_info, config, &uninstall_opts, ), PackageType::Cask => core_uninstall::uninstall_cask_artifacts( &installed_info, config, &uninstall_opts, ), }; if let Err(e) = uninstall_result { error!("✖ Failed to uninstall '{}': {}", name.cyan(), e); errors.push((name.to_string(), e)); pb.finish_and_clear(); } else { pb.finish_with_message(format!( "✓ Uninstalled {:?} {} ({} files, {})", installed_info.pkg_type, name.green(), file_count, format_size(size_bytes) )); } } None => { let msg = format!("Package '{name}' is not installed."); error!("✖ {msg}"); errors.push((name.to_string(), SpError::NotFound(msg))); pb.finish_and_clear(); } } } if errors.is_empty() { Ok(()) } else { eprintln!("\n{}:", "Finished uninstalling with errors".yellow()); let mut errors_by_pkg: std::collections::HashMap> = std::collections::HashMap::new(); for (pkg_name, error) in errors { errors_by_pkg .entry(pkg_name) .or_default() .push(error.to_string()); } for (pkg_name, error_list) in errors_by_pkg { eprintln!("Package '{}':", pkg_name.cyan()); let unique_errors: std::collections::HashSet<_> = error_list.into_iter().collect(); for error_str in unique_errors { eprintln!("- {}", error_str.red()); } } Err(SpError::Generic( "Uninstall failed for one or more packages.".to_string(), )) } } } // --- Unchanged Helper Functions --- fn count_files_and_size(path: &std::path::Path) -> Result<(usize, u64)> { let mut file_count = 0; let mut total_size = 0; for entry in walkdir::WalkDir::new(path) { match entry { Ok(entry_data) => { if entry_data.file_type().is_file() || entry_data.file_type().is_symlink() { match entry_data.metadata() { Ok(metadata) => { file_count += 1; if entry_data.file_type().is_file() { total_size += metadata.len(); } } Err(e) => { tracing::warn!( "Could not get metadata for {}: {}", entry_data.path().display(), e ); } } } } Err(e) => { tracing::warn!("Error traversing directory {}: {}", path.display(), e); } } } Ok((file_count, total_size)) } fn format_size(size: u64) -> String { const KB: u64 = 1024; const MB: u64 = KB * 1024; const GB: u64 = MB * 1024; if size >= GB { format!("{:.1}GB", size as f64 / GB as f64) } else if size >= MB { format!("{:.1}MB", size as f64 / MB as f64) } else if size >= KB { format!("{:.1}KB", size as f64 / KB as f64) } else { format!("{size}B") } } ``` ## /sp-cli/src/cli/update.rs ```rs path="/sp-cli/src/cli/update.rs" //! Contains the logic for the `update` command. 
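// Flow summary for this command: fetch the formulas JSON -> cache it as
// formula.json, fetch the casks JSON -> cache it as cask.json, then touch
// `.sp_last_update_check` so the auto-update check in main.rs can skip work
// while inside its TTL window.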
use std::fs; use std::sync::Arc; use sp_common::cache::Cache; use sp_common::config::Config; use sp_common::error::Result; use sp_net::fetch::api; use crate::ui; #[derive(clap::Args, Debug)] pub struct Update; impl Update { pub async fn run(&self, config: &Config, cache: Arc) -> Result<()> { tracing::debug!("Running manual update..."); // Log clearly it's the manual one // Use the ui utility function to create the spinner let pb = ui::create_spinner("Updating package lists"); // <-- CHANGED tracing::debug!("Using cache directory: {:?}", config.cache_dir); // Fetch and store raw formula data match api::fetch_all_formulas().await { Ok(raw_data) => { cache.store_raw("formula.json", &raw_data)?; tracing::debug!("✓ Successfully cached formulas data"); pb.set_message("Cached formulas data"); } Err(e) => { let err_msg = format!("Failed to fetch/store formulas from API: {e}"); tracing::error!("{}", err_msg); pb.finish_and_clear(); // Clear spinner on error return Err(e); } } // Fetch and store raw cask data match api::fetch_all_casks().await { Ok(raw_data) => { cache.store_raw("cask.json", &raw_data)?; tracing::debug!("✓ Successfully cached casks data"); pb.set_message("Cached casks data"); } Err(e) => { let err_msg = format!("Failed to fetch/store casks from API: {e}"); tracing::error!("{}", err_msg); pb.finish_and_clear(); // Clear spinner on error return Err(e); } } // Update timestamp file let timestamp_file = config.cache_dir.join(".sp_last_update_check"); tracing::debug!( "Manual update successful. Updating timestamp file: {}", timestamp_file.display() ); match fs::File::create(×tamp_file) { Ok(_) => { tracing::debug!("Updated timestamp file successfully."); } Err(e) => { tracing::warn!( "Failed to create or update timestamp file '{}': {}", timestamp_file.display(), e ); } } pb.finish_with_message("Update completed successfully!"); Ok(()) } } ``` ## /sp-cli/src/cli/upgrade.rs ```rs path="/sp-cli/src/cli/upgrade.rs" use std::sync::Arc; use clap::Args; use sp_common::cache::Cache; use sp_common::config::Config; use sp_common::error::Result; use sp_core::installed; use crate::cli::pipeline::{CommandType, PipelineExecutor, PipelineFlags}; #[derive(Args, Debug)] pub struct UpgradeArgs { #[arg()] pub names: Vec, #[arg(long, conflicts_with = "names")] pub all: bool, #[arg(long)] pub build_from_source: bool, } impl UpgradeArgs { pub async fn run(&self, config: &Config, cache: Arc) -> Result<()> { let targets = if self.all { println!("Checking all installed packages for upgrades..."); // Get all installed package names let installed = installed::get_installed_packages(config).await?; installed.into_iter().map(|p| p.name).collect() } else { println!("Checking specified packages for upgrades: {:?}", self.names); self.names.clone() }; if targets.is_empty() && !self.all { println!("No packages specified to upgrade."); return Ok(()); } else if targets.is_empty() && self.all { println!("No packages installed to upgrade."); return Ok(()); } let flags = PipelineFlags { // Populate flags from args build_from_source: self.build_from_source, // Upgrade should respect original install options ideally, // but for now let's default them. This could be enhanced later // by reading install receipts. include_optional: false, skip_recommended: false, // ... add other common flags if needed ... 
}; PipelineExecutor::execute_pipeline( &targets, CommandType::Upgrade { all: self.all }, config, cache, &flags, ) .await } } ``` ## /sp-cli/src/main.rs ```rs path="/sp-cli/src/main.rs" // sp-cli/src/main.rs // Corrected logging setup for file output. use std::sync::Arc; use std::time::{Duration, SystemTime}; use std::{env, fs, process}; use clap::Parser; use colored::Colorize; use sp_common::cache::Cache; use sp_common::config::Config; use sp_common::error::{Result as spResult, SpError}; use tracing::Level; // Import the Level type use tracing::level_filters::LevelFilter; use tracing_subscriber::EnvFilter; use tracing_subscriber::fmt::writer::MakeWriterExt; mod cli; mod ui; use cli::{CliArgs, Command}; #[tokio::main] async fn main() -> spResult<()> { let cli_args = CliArgs::parse(); // Initialize config *before* logging setup, as we need the cache path for logs let config = Config::load().map_err(|e| SpError::Config(format!("Could not load config: {e}")))?; // --- Logging Setup --- let level_filter = match cli_args.verbose { 0 => LevelFilter::INFO, 1 => LevelFilter::DEBUG, _ => LevelFilter::TRACE, }; // Convert LevelFilter to Option for use with with_max_level // We know INFO, DEBUG, TRACE filters correspond to Some(Level), so unwrap is safe. let max_log_level = level_filter.into_level().unwrap_or(Level::INFO); let info_level = LevelFilter::INFO.into_level().unwrap_or(Level::INFO); // INFO level specifically let env_filter = EnvFilter::builder() .with_default_directive(level_filter.into()) // Use LevelFilter for general filtering .with_env_var("SP_LOG") // Allow overriding via env var .from_env_lossy(); // Create a logs directory if it doesn't exist let log_dir = config.cache_dir.join("logs"); if let Err(e) = fs::create_dir_all(&log_dir) { // Log to stderr initially if log dir creation fails eprintln!( "{} Failed to create log directory {}: {}", "Error:".red().bold(), log_dir.display(), e ); // Fallback to stderr logging tracing_subscriber::fmt() .with_env_filter(env_filter) .with_writer(std::io::stderr) .with_ansi(true) .without_time() .init(); } else { // Set up file logging only if verbose > 0 if cli_args.verbose > 0 { let file_appender = tracing_appender::rolling::daily(&log_dir, "sp.log"); let (non_blocking_appender, _guard) = tracing_appender::non_blocking(file_appender); // Log DEBUG/TRACE to file, INFO+ still goes to stderr // Use the converted Level type here let stderr_writer = std::io::stderr.with_max_level(info_level); let file_writer = non_blocking_appender.with_max_level(max_log_level); // Use the calculated Level tracing_subscriber::fmt() .with_env_filter(env_filter) .with_writer(stderr_writer.and(file_writer)) // Combine writers .with_ansi(true) // Keep ANSI codes for stderr .without_time() // Keep time disabled for CLI feel .init(); // Keep the guard alive for the duration of the program // Leaking is simpler for a CLI app's main function. Box::leak(Box::new(_guard)); tracing::debug!( "Verbose logging enabled. Writing logs to: {}/sp.log", log_dir.display() ); } else { // Default: INFO+ to stderr only tracing_subscriber::fmt() .with_env_filter(env_filter) .with_writer(std::io::stderr) .with_ansi(true) .without_time() .init(); } } // --- End Logging Setup --- // Create Cache once and wrap in Arc (after config load) let cache = Arc::new( Cache::new(&config.cache_dir) .map_err(|e| SpError::Cache(format!("Could not initialize cache: {e}")))?, ); let needs_update_check = matches!( cli_args.command, Command::Install(_) | Command::Search { .. } | Command::Info { .. 
} ); if needs_update_check { if let Err(e) = check_and_run_auto_update(&config, Arc::clone(&cache)).await { tracing::error!("Error during auto-update check: {}", e); } } else { tracing::debug!( "Skipping auto-update check for command: {:?}", cli_args.command ); } if let Err(e) = cli_args.command.run(&config, cache).await { // Log error using tracing *before* printing to stderr, so it goes to file too if verbose tracing::error!("Command failed: {:#}", e); eprintln!("{}: {:#}", "Error".red().bold(), e); process::exit(1); } tracing::debug!("Command completed successfully."); // Add success debug log Ok(()) } // check_and_run_auto_update function remains the same async fn check_and_run_auto_update(config: &Config, cache: Arc) -> spResult<()> { // 1. Check if auto-update is disabled if env::var("SP_NO_AUTO_UPDATE").is_ok_and(|v| v == "1") { tracing::debug!("Auto-update disabled via SP_NO_AUTO_UPDATE=1."); return Ok(()); } // 2. Determine update interval let default_interval_secs: u64 = 86400; // 24 hours let update_interval_secs = env::var("SP_AUTO_UPDATE_SECS") .ok() .and_then(|s| s.parse::().ok()) .unwrap_or(default_interval_secs); let update_interval = Duration::from_secs(update_interval_secs); tracing::debug!("Auto-update interval: {:?}", update_interval); // 3. Check timestamp file let timestamp_file = config.cache_dir.join(".sp_last_update_check"); tracing::debug!("Checking timestamp file: {}", timestamp_file.display()); let mut needs_update = true; // Assume update needed unless file is recent if let Ok(metadata) = fs::metadata(×tamp_file) { if let Ok(modified_time) = metadata.modified() { match SystemTime::now().duration_since(modified_time) { Ok(age) => { tracing::debug!("Time since last update check: {:?}", age); if age < update_interval { needs_update = false; tracing::debug!("Auto-update interval not yet passed."); } else { tracing::debug!("Auto-update interval passed."); } } Err(e) => { tracing::warn!( "Could not get duration since last update check (system time error?): {}", e ); // Proceed with update if we can't determine age } } } else { tracing::warn!( "Could not read modification time for timestamp file: {}", timestamp_file.display() ); // Proceed with update if we can't read time } } else { tracing::debug!("Timestamp file not found or not accessible."); // Proceed with update if file doesn't exist } // 4. Run update if needed if needs_update { println!("Running auto-update..."); // Keep user feedback on stderr // Use the existing update command logic match cli::update::Update.run(config, cache).await { Ok(_) => { println!("Auto-update successful."); // Keep user feedback on stderr // 5. Update timestamp file on success match fs::File::create(×tamp_file) { Ok(_) => { tracing::debug!("Updated timestamp file: {}", timestamp_file.display()); } Err(e) => { tracing::warn!( "Failed to create or update timestamp file '{}': {}", timestamp_file.display(), e ); // Continue even if timestamp update fails, but log it } } } Err(e) => { // Log error but don't prevent the main command from running tracing::error!("Auto-update failed: {}", e); eprintln!("{} Auto-update failed: {}", "Warning:".yellow(), e); // Also inform user // on stderr } } } else { tracing::debug!("Skipping auto-update."); } Ok(()) } ``` ## /sp-cli/src/ui.rs ```rs path="/sp-cli/src/ui.rs" //! UI utility functions for creating common elements like spinners. use std::time::Duration; use indicatif::{ProgressBar, ProgressStyle}; /// Creates and configures a default spinner ProgressBar. 
/// /// # Arguments /// /// * `message` - The initial message to display next to the spinner. /// /// # Returns /// /// A configured `ProgressBar` instance ready to be used. pub fn create_spinner(message: &str) -> ProgressBar { let pb = ProgressBar::new_spinner(); pb.set_style(ProgressStyle::with_template("{spinner:.blue.bold} {msg}").unwrap()); pb.set_message(message.to_string()); pb.enable_steady_tick(Duration::from_millis(100)); // Standard tick rate pb } ``` ## /sp-common/Cargo.toml ```toml path="/sp-common/Cargo.toml" [package] name = "sp-common" version = "0.1.0" edition = "2024" # Or "2021" if not using nightly [dependencies] # Inherited from workspace thiserror = { workspace = true } reqwest = { workspace = true } object = { workspace = true } semver = { workspace = true } serde_json = { workspace = true } dirs = { workspace = true } tracing = { workspace = true } serde = { workspace = true } humantime = { workspace = true } bitflags = { workspace = true } ``` ## /sp-common/src/cache.rs ```rs path="/sp-common/src/cache.rs" // src/utils/cache.rs // Handles caching of formula data and downloads use std::fs; use std::path::{Path, PathBuf}; use std::time::{Duration, SystemTime}; use serde::Serialize; use serde::de::DeserializeOwned; use super::error::{Result, SpError}; // TODO: Define cache directory structure (e.g., ~/.cache/sp) // TODO: Implement functions for storing, retrieving, and clearing cached data. const CACHE_SUBDIR: &str = "sp"; // Define how long cache entries are considered valid const CACHE_TTL: Duration = Duration::from_secs(24 * 60 * 60); // 24 hours /// Cache struct to manage cache operations pub struct Cache { cache_dir: PathBuf, } impl Cache { pub fn new(cache_dir: &Path) -> Result { if !cache_dir.exists() { fs::create_dir_all(cache_dir)?; } Ok(Self { cache_dir: cache_dir.to_path_buf(), }) } /// Gets the cache directory path pub fn get_dir(&self) -> &Path { &self.cache_dir } /// Stores raw string data in the cache pub fn store_raw(&self, filename: &str, data: &str) -> Result<()> { let path = self.cache_dir.join(filename); tracing::debug!("Saving raw data to cache file: {:?}", path); fs::write(&path, data)?; Ok(()) } /// Loads raw string data from the cache pub fn load_raw(&self, filename: &str) -> Result { let path = self.cache_dir.join(filename); tracing::debug!("Loading raw data from cache file: {:?}", path); if !path.exists() { return Err(SpError::Cache(format!( "Cache file {filename} does not exist" ))); } fs::read_to_string(&path).map_err(|e| SpError::Cache(format!("IO error: {e}"))) } /// Checks if a cache file exists and is valid (within TTL) pub fn is_cache_valid(&self, filename: &str) -> Result { let path = self.cache_dir.join(filename); if !path.exists() { return Ok(false); } let metadata = fs::metadata(&path)?; let modified_time = metadata.modified()?; let age = SystemTime::now() .duration_since(modified_time) .map_err(|e| SpError::Cache(format!("System time error: {e}")))?; Ok(age <= CACHE_TTL) } /// Clears a specific cache file pub fn clear_file(&self, filename: &str) -> Result<()> { let path = self.cache_dir.join(filename); if path.exists() { fs::remove_file(&path)?; } Ok(()) } /// Clears all cache files pub fn clear_all(&self) -> Result<()> { if self.cache_dir.exists() { fs::remove_dir_all(&self.cache_dir)?; fs::create_dir_all(&self.cache_dir)?; } Ok(()) } } /// Gets the path to the application's cache directory, creating it if necessary. /// Uses dirs::cache_dir() to find the appropriate system cache location. 
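// Usage sketch for the instance API above (the path is hypothetical):
//
//     let cache = Cache::new(&dirs::cache_dir().unwrap().join("sp"))?;
//     cache.store_raw("formula.json", &json_string)?;
//     if cache.is_cache_valid("formula.json")? {
//         let raw = cache.load_raw("formula.json")?;
//     }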
pub fn get_cache_dir() -> Result { let base_cache_dir = dirs::cache_dir() .ok_or_else(|| SpError::Cache("Could not determine system cache directory".to_string()))?; let app_cache_dir = base_cache_dir.join(CACHE_SUBDIR); if !app_cache_dir.exists() { tracing::debug!("Creating cache directory at {:?}", app_cache_dir); fs::create_dir_all(&app_cache_dir)?; } Ok(app_cache_dir) } /// Constructs the full path for a given cache filename. fn get_cache_path(filename: &str) -> Result { Ok(get_cache_dir()?.join(filename)) } /// Saves serializable data to a file in the cache directory. /// The data is serialized as JSON. pub fn save_to_cache(filename: &str, data: &T) -> Result<()> { let path = get_cache_path(filename)?; tracing::debug!("Saving data to cache file: {:?}", path); let file = fs::File::create(&path)?; // Use serde_json::to_writer_pretty for readable cache files (optional) serde_json::to_writer_pretty(file, data)?; Ok(()) } /// Loads and deserializes data from a file in the cache directory. /// Checks if the cache file exists and is within the TTL (Time To Live). pub fn load_from_cache(filename: &str) -> Result { let path = get_cache_path(filename)?; tracing::debug!("Attempting to load from cache file: {:?}", path); if !path.exists() { tracing::debug!("Cache file not found."); return Err(SpError::Cache("Cache file does not exist".to_string())); } // Check cache file age let metadata = fs::metadata(&path)?; let modified_time = metadata.modified()?; let age = SystemTime::now() .duration_since(modified_time) .map_err(|e| SpError::Cache(format!("System time error: {e}")))?; if age > CACHE_TTL { tracing::debug!("Cache file expired (age: {:?}, TTL: {:?}).", age, CACHE_TTL); return Err(SpError::Cache(format!( "Cache file expired ({} > {})", humantime::format_duration(age), humantime::format_duration(CACHE_TTL) ))); } tracing::debug!("Cache file is valid. Loading"); let file = fs::File::open(&path)?; let data: T = serde_json::from_reader(file)?; Ok(data) } /// Clears the entire application cache directory. pub fn clear_cache() -> Result<()> { let path = get_cache_dir()?; tracing::debug!("Clearing cache directory: {:?}", path); if path.exists() { fs::remove_dir_all(&path)?; } Ok(()) } /// Checks if a specific cache file exists and is valid (within TTL). pub fn is_cache_valid(filename: &str) -> Result { let path = get_cache_path(filename)?; if !path.exists() { return Ok(false); } let metadata = fs::metadata(&path)?; let modified_time = metadata.modified()?; let age = SystemTime::now() .duration_since(modified_time) .map_err(|e| SpError::Cache(format!("System time error: {e}")))?; Ok(age <= CACHE_TTL) } ``` ## /sp-common/src/config.rs ```rs path="/sp-common/src/config.rs" // ===== sp-core/src/utils/config.rs ===== use std::env; use std::path::{Path, PathBuf}; use dirs; use tracing::debug; use super::cache; use super::error::Result; // for home directory lookup /// Default installation prefixes const DEFAULT_LINUX_PREFIX: &str = "/home/linuxbrew/.linuxbrew"; const DEFAULT_MACOS_INTEL_PREFIX: &str = "/usr/local"; const DEFAULT_MACOS_ARM_PREFIX: &str = "/opt/homebrew"; /// Determines the active prefix for installation. /// Checks SP_PREFIX/HOMEBREW_PREFIX env vars, then OS-specific defaults. 
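// Example: overriding the install prefix from the environment. Resolution
// order below is SP_PREFIX, then HOMEBREW_PREFIX, then the OS/arch default:
//
//     SP_PREFIX=/opt/custom sp install ripgrep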
fn determine_prefix() -> PathBuf { if let Ok(prefix) = env::var("SP_PREFIX").or_else(|_| env::var("HOMEBREW_PREFIX")) { debug!("Using prefix from environment variable: {}", prefix); return PathBuf::from(prefix); } let default_prefix = if cfg!(target_os = "linux") { DEFAULT_LINUX_PREFIX } else if cfg!(target_os = "macos") { if cfg!(target_arch = "aarch64") { DEFAULT_MACOS_ARM_PREFIX } else { DEFAULT_MACOS_INTEL_PREFIX } } else { // Fallback for unsupported OS "/usr/local/sp" }; debug!("Using default prefix for OS/Arch: {}", default_prefix); PathBuf::from(default_prefix) } #[derive(Debug, Clone)] pub struct Config { pub prefix: PathBuf, pub cellar: PathBuf, pub taps_dir: PathBuf, pub cache_dir: PathBuf, pub api_base_url: String, pub artifact_domain: Option, pub docker_registry_token: Option, pub docker_registry_basic_auth: Option, pub github_api_token: Option, } impl Config { pub fn load() -> Result { debug!("Loading sp configuration"); let prefix = determine_prefix(); let cellar = prefix.join("Cellar"); let taps_dir = prefix.join("Library/Taps"); let cache_dir = cache::get_cache_dir()?; let api_base_url = "https://formulae.brew.sh/api".to_string(); let artifact_domain = env::var("HOMEBREW_ARTIFACT_DOMAIN").ok(); let docker_registry_token = env::var("HOMEBREW_DOCKER_REGISTRY_TOKEN").ok(); let docker_registry_basic_auth = env::var("HOMEBREW_DOCKER_REGISTRY_BASIC_AUTH_TOKEN").ok(); let github_api_token = env::var("HOMEBREW_GITHUB_API_TOKEN").ok(); if artifact_domain.is_some() { debug!("Loaded HOMEBREW_ARTIFACT_DOMAIN"); } if docker_registry_token.is_some() { debug!("Loaded HOMEBREW_DOCKER_REGISTRY_TOKEN"); } if docker_registry_basic_auth.is_some() { debug!("Loaded HOMEBREW_DOCKER_REGISTRY_BASIC_AUTH_TOKEN"); } if github_api_token.is_some() { debug!("Loaded HOMEBREW_GITHUB_API_TOKEN"); } debug!("Configuration loaded successfully."); Ok(Self { prefix, cellar, taps_dir, cache_dir, api_base_url, artifact_domain, docker_registry_token, docker_registry_basic_auth, github_api_token, }) } // --- Start: New Path Methods --- pub fn prefix(&self) -> &Path { &self.prefix } pub fn cellar_path(&self) -> &Path { &self.cellar } pub fn caskroom_dir(&self) -> PathBuf { self.prefix.join("Caskroom") } pub fn opt_dir(&self) -> PathBuf { self.prefix.join("opt") } pub fn bin_dir(&self) -> PathBuf { self.prefix.join("bin") } pub fn applications_dir(&self) -> PathBuf { if cfg!(target_os = "macos") { PathBuf::from("/Applications") } else { self.prefix.join("Applications") } } pub fn formula_cellar_dir(&self, formula_name: &str) -> PathBuf { self.cellar_path().join(formula_name) } pub fn formula_keg_path(&self, formula_name: &str, version_str: &str) -> PathBuf { self.formula_cellar_dir(formula_name).join(version_str) } pub fn formula_opt_link_path(&self, formula_name: &str) -> PathBuf { self.opt_dir().join(formula_name) } pub fn cask_dir(&self, cask_token: &str) -> PathBuf { self.caskroom_dir().join(cask_token) } pub fn cask_version_path(&self, cask_token: &str, version_str: &str) -> PathBuf { self.cask_dir(cask_token).join(version_str) } /// Returns the path to the current user's home directory. pub fn home_dir(&self) -> PathBuf { dirs::home_dir().unwrap_or_else(|| PathBuf::from("/")) } /// Returns the base manpage directory (e.g., /usr/local/share/man). 
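// Illustrative derived paths with the Apple Silicon default prefix
// (/opt/homebrew); version strings are hypothetical:
//
//     formula_keg_path("wget", "1.24.5")    -> /opt/homebrew/Cellar/wget/1.24.5
//     formula_opt_link_path("wget")         -> /opt/homebrew/opt/wget
//     cask_version_path("firefox", "126.0") -> /opt/homebrew/Caskroom/firefox/126.0
//     manpagedir()                          -> /opt/homebrew/share/man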
pub fn manpagedir(&self) -> PathBuf { self.prefix.join("share").join("man") } // --- End: New Path Methods --- pub fn get_tap_path(&self, name: &str) -> Option { let parts: Vec<&str> = name.split('/').collect(); if parts.len() == 2 { Some( self.taps_dir .join(parts[0]) .join(format!("homebrew-{}", parts[1])), ) } else { None } } pub fn get_formula_path_from_tap(&self, tap_name: &str, formula_name: &str) -> Option { self.get_tap_path(tap_name).and_then(|tap_path| { let json_path = tap_path .join("Formula") .join(format!("{formula_name}.json")); if json_path.exists() { return Some(json_path); } let rb_path = tap_path.join("Formula").join(format!("{formula_name}.rb")); if rb_path.exists() { return Some(rb_path); } None }) } } impl Default for Config { fn default() -> Self { Self::load().expect("Failed to load default configuration") } } pub fn load_config() -> Result { Config::load() } ``` ## /sp-common/src/dependency/definition.rs ```rs path="/sp-common/src/dependency/definition.rs" // **File:** sp-core/src/dependency/dependency.rs // Should be in the model module use std::fmt; use bitflags::bitflags; use serde::{Deserialize, Serialize}; // For derive macros and attributes bitflags! { #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Serialize, Deserialize)] /// Tags associated with a dependency, mirroring Homebrew's concepts. pub struct DependencyTag: u8 { /// Standard runtime dependency, needed for the formula to function. const RUNTIME = 0b00000001; /// Needed only at build time. const BUILD = 0b00000010; /// Needed for running tests (`brew test`). const TEST = 0b00000100; /// Optional dependency, installable via user flag (e.g., `--with-foo`). const OPTIONAL = 0b00001000; /// Recommended dependency, installed by default but can be skipped (e.g., `--without-bar`). const RECOMMENDED = 0b00010000; // Add other tags as needed (e.g., :implicit) } } impl Default for DependencyTag { // By default, a dependency is considered runtime unless specified otherwise. fn default() -> Self { Self::RUNTIME } } impl fmt::Display for DependencyTag { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { write!(f, "{self:?}") // Simple debug format for now } } /// Represents a dependency declared by a Formula. #[derive(Debug, Clone, PartialEq, Eq, Hash, Serialize, Deserialize)] pub struct Dependency { /// The name of the formula dependency. pub name: String, /// Tags associated with this dependency (e.g., build, optional). #[serde(default)] // Use default tags (RUNTIME) if missing in serialization pub tags: DependencyTag, // We could add requirements here later: // pub requirements: Vec, } impl Dependency { /// Creates a new runtime dependency. pub fn new_runtime(name: impl Into) -> Self { Self { name: name.into(), tags: DependencyTag::RUNTIME, } } /// Creates a new dependency with specific tags. pub fn new_with_tags(name: impl Into, tags: DependencyTag) -> Self { Self { name: name.into(), tags, } } } /// Extension trait for Vec for easier filtering. pub trait DependencyExt { /// Filters dependencies based on included tags and excluded tags. /// For example, to get runtime dependencies that are *not* optional: /// `filter_by_tags(DependencyTag::RUNTIME, DependencyTag::OPTIONAL)` fn filter_by_tags(&self, include: DependencyTag, exclude: DependencyTag) -> Vec<&Dependency>; /// Get only runtime dependencies (excluding build, test). fn runtime(&self) -> Vec<&Dependency>; /// Get only build-time dependencies (includes :build, excludes others unless also :build). 
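// Usage sketch for the filtering helpers above (the dependency list shown is
// hypothetical):
//
//     let deps: Vec<Dependency> = formula_dependencies(); // illustrative source
//     let required_runtime = deps.filter_by_tags(
//         DependencyTag::RUNTIME,
//         DependencyTag::OPTIONAL | DependencyTag::TEST,
//     );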
fn build_time(&self) -> Vec<&Dependency>; } impl DependencyExt for Vec { fn filter_by_tags(&self, include: DependencyTag, exclude: DependencyTag) -> Vec<&Dependency> { self.iter() .filter(|dep| dep.tags.contains(include) && !dep.tags.intersects(exclude)) .collect() } fn runtime(&self) -> Vec<&Dependency> { // Runtime deps are those *not* exclusively build or test // (A dep could be both runtime and build, e.g., a compiler needed at runtime too) self.iter() .filter(|dep| { !dep.tags .contains(DependencyTag::BUILD | DependencyTag::TEST) || dep.tags.contains(DependencyTag::RUNTIME) }) // Alternatively, be more explicit: include RUNTIME | RECOMMENDED | OPTIONAL // .filter(|dep| dep.tags.intersects(DependencyTag::RUNTIME | DependencyTag::RECOMMENDED // | DependencyTag::OPTIONAL)) .collect() } fn build_time(&self) -> Vec<&Dependency> { self.filter_by_tags(DependencyTag::BUILD, DependencyTag::empty()) } } // Required for bitflags! ``` ## /sp-common/src/dependency/mod.rs ```rs path="/sp-common/src/dependency/mod.rs" pub mod definition; // Renamed from 'dependency' pub mod requirement; pub mod resolver; // Re-export key types for easier access pub use definition::{Dependency, DependencyExt, DependencyTag}; // Updated source module pub use requirement::Requirement; pub use resolver::{ DependencyResolver, ResolutionContext, ResolutionStatus, ResolvedDependency, ResolvedGraph, }; ``` ## /sp-common/src/dependency/requirement.rs ```rs path="/sp-common/src/dependency/requirement.rs" // **File:** sp-core/src/dependency/requirement.rs (New file) use std::fmt; use serde::{Deserialize, Serialize}; /// Represents a requirement beyond a simple formula dependency. /// Placeholder - This needs significant expansion based on Homebrew's Requirement system. #[derive(Debug, Clone, PartialEq, Eq, Hash, Serialize, Deserialize)] pub enum Requirement { /// Minimum macOS version required. MacOS(String), // e.g., "12.0" /// Minimum Xcode version required. Xcode(String), // e.g., "14.1" // Add others: Arch, specific libraries, environment variables, etc. /// Placeholder for unparsed or complex requirements. 
Other(String), } impl fmt::Display for Requirement { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { match self { Self::MacOS(v) => write!(f, "macOS >= {v}"), Self::Xcode(v) => write!(f, "Xcode >= {v}"), Self::Other(s) => write!(f, "Requirement: {s}"), } } } ``` ## /sp-common/src/dependency/resolver.rs ```rs path="/sp-common/src/dependency/resolver.rs" // FILE: sp-core/src/dependency/resolver.rs use std::collections::{HashMap, HashSet, VecDeque}; use std::path::{Path, PathBuf}; use std::sync::Arc; use tracing::{debug, error, warn}; use crate::dependency::{Dependency, DependencyTag}; use crate::error::{Result, SpError}; use crate::formulary::Formulary; use crate::keg::KegRegistry; use crate::model::formula::Formula; #[derive(Debug, Clone)] pub struct ResolvedDependency { pub formula: Arc, pub keg_path: Option, pub opt_path: Option, pub status: ResolutionStatus, pub tags: DependencyTag, pub failure_reason: Option, } #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub enum ResolutionStatus { Installed, Missing, Requested, SkippedOptional, NotFound, Failed, } #[derive(Debug, Clone)] pub struct ResolvedGraph { pub install_plan: Vec, pub build_dependency_opt_paths: Vec, pub runtime_dependency_opt_paths: Vec, pub resolution_details: HashMap, } pub struct ResolutionContext<'a> { pub formulary: &'a Formulary, pub keg_registry: &'a KegRegistry, pub sp_prefix: &'a Path, pub include_optional: bool, pub include_test: bool, pub skip_recommended: bool, pub force_build: bool, } pub struct DependencyResolver<'a> { context: ResolutionContext<'a>, formula_cache: HashMap>, visiting: HashSet, resolution_details: HashMap, // Store Arc instead of SpError errors: HashMap>, } impl<'a> DependencyResolver<'a> { pub fn new(context: ResolutionContext<'a>) -> Self { Self { context, formula_cache: HashMap::new(), visiting: HashSet::new(), resolution_details: HashMap::new(), errors: HashMap::new(), } } pub fn resolve_targets(&mut self, targets: &[String]) -> Result { debug!("Starting dependency resolution for targets: {:?}", targets); self.visiting.clear(); self.resolution_details.clear(); self.errors.clear(); for target_name in targets { if let Err(e) = self.resolve_recursive(target_name, DependencyTag::RUNTIME, true) { // Wrap error in Arc for storage self.errors.insert(target_name.clone(), Arc::new(e)); warn!( "Resolution failed for target '{}', but continuing for others.", target_name ); } } debug!( "Raw resolved map after initial pass: {:?}", self.resolution_details .iter() .map(|(k, v)| (k.clone(), v.status, v.tags)) .collect::>() ); let sorted_list = match self.topological_sort() { Ok(list) => list, Err(e @ SpError::DependencyError(_)) => { error!("Topological sort failed due to dependency cycle: {}", e); return Err(e); } Err(e) => { error!("Topological sort failed: {}", e); return Err(e); } }; let install_plan: Vec = sorted_list .into_iter() .filter(|dep| { matches!( dep.status, ResolutionStatus::Missing | ResolutionStatus::Requested ) }) .collect(); let mut build_paths = Vec::new(); let mut runtime_paths = Vec::new(); let mut seen_build_paths = HashSet::new(); let mut seen_runtime_paths = HashSet::new(); for dep in self.resolution_details.values() { if matches!( dep.status, ResolutionStatus::Installed | ResolutionStatus::Requested | ResolutionStatus::Missing ) { if let Some(opt_path) = &dep.opt_path { if dep.tags.contains(DependencyTag::BUILD) && seen_build_paths.insert(opt_path.clone()) { debug!("Adding build dep path: {}", opt_path.display()); build_paths.push(opt_path.clone()); } if 
dep.tags.intersects( DependencyTag::RUNTIME | DependencyTag::RECOMMENDED | DependencyTag::OPTIONAL, ) && seen_runtime_paths.insert(opt_path.clone()) { debug!("Adding runtime dep path: {}", opt_path.display()); runtime_paths.push(opt_path.clone()); } } else if dep.status != ResolutionStatus::NotFound && dep.status != ResolutionStatus::Failed { debug!( "Warning: No opt_path found for resolved dependency {} ({:?})", dep.formula.name(), dep.status ); } } } if !self.errors.is_empty() { warn!( "Resolution encountered errors for specific targets: {:?}", self.errors .iter() .map(|(k, v)| (k, v.to_string())) .collect::>() ); } debug!( "Final installation plan (needs install/build): {:?}", install_plan .iter() .map(|d| (d.formula.name(), d.status)) .collect::>() ); debug!( "Collected build dependency paths: {:?}", build_paths.iter().map(|p| p.display()).collect::>() ); debug!( "Collected runtime dependency paths: {:?}", runtime_paths .iter() .map(|p| p.display()) .collect::>() ); Ok(ResolvedGraph { install_plan, build_dependency_opt_paths: build_paths, runtime_dependency_opt_paths: runtime_paths, resolution_details: self.resolution_details.clone(), }) } /// Walk a dependency node, collecting status and propagating errors fn resolve_recursive( &mut self, name: &str, tags_from_parent: DependencyTag, is_target: bool, ) -> Result<()> { debug!( "Resolving: {} (requested as {:?}, is_target: {})", name, tags_from_parent, is_target ); // -------- cycle guard ------------------------------------------------------------- if self.visiting.contains(name) { error!("Dependency cycle detected involving: {}", name); return Err(SpError::DependencyError(format!( "Dependency cycle detected involving '{name}'" ))); } // -------- if we have a previous entry, maybe promote status / tags ----------------- if let Some(existing) = self.resolution_details.get_mut(name) { let original_status = existing.status; let original_tags = existing.tags; // status promotion rules ------------------------------------------------------- let mut new_status = original_status; if is_target && new_status == ResolutionStatus::Missing { new_status = ResolutionStatus::Requested; } if new_status == ResolutionStatus::SkippedOptional && (tags_from_parent.contains(DependencyTag::RUNTIME) || tags_from_parent.contains(DependencyTag::BUILD) || (tags_from_parent.contains(DependencyTag::RECOMMENDED) && !self.context.skip_recommended) || (is_target && self.context.include_optional)) { new_status = if existing.keg_path.is_some() { ResolutionStatus::Installed } else if is_target { ResolutionStatus::Requested } else { ResolutionStatus::Missing }; } // apply any changes ------------------------------------------------------------ let mut needs_revisit = false; if new_status != original_status { debug!( "Updating status for '{name}' from {:?} to {:?}", original_status, new_status ); existing.status = new_status; needs_revisit = true; } let combined_tags = original_tags | tags_from_parent; if combined_tags != original_tags { debug!( "Updating tags for '{name}' from {:?} to {:?}", original_tags, combined_tags ); existing.tags = combined_tags; needs_revisit = true; } // nothing else to do if !needs_revisit { debug!("'{}' already resolved with compatible status/tags.", name); return Ok(()); } debug!( "Re-evaluating dependencies for '{}' due to status/tag update", name ); } // -------- first time we see this node --------------------------------------------- else { self.visiting.insert(name.to_string()); // load / cache the formula 
----------------------------------------------------- let formula: Arc = match self.formula_cache.get(name) { Some(f) => f.clone(), None => { debug!("Loading formula definition for '{}'", name); match self.context.formulary.load_formula(name) { Ok(f) => { let arc = Arc::new(f); self.formula_cache.insert(name.to_string(), arc.clone()); arc } Err(e) => { error!("Failed to load formula definition for '{}': {}", name, e); let msg = e.to_string(); self.resolution_details.insert( name.to_string(), ResolvedDependency { formula: Arc::new(Formula::placeholder(name)), keg_path: None, opt_path: None, status: ResolutionStatus::NotFound, tags: tags_from_parent, failure_reason: Some(msg.clone()), }, ); self.visiting.remove(name); self.errors .insert(name.to_string(), Arc::new(SpError::NotFound(msg))); return Ok(()); // treat “not found” as a soft failure } } } }; // work out installation state -------------------------------------------------- let installed_keg = if self.context.force_build { None } else { self.context.keg_registry.get_installed_keg(name)? }; let opt_path = self.context.keg_registry.get_opt_path(name); let (status, keg_path) = match installed_keg { Some(keg) => (ResolutionStatus::Installed, Some(keg.path)), None => ( if is_target { ResolutionStatus::Requested } else { ResolutionStatus::Missing }, None, ), }; debug!( "Initial status for '{}': {:?}, keg: {:?}, opt: {}", name, status, keg_path, opt_path.display() ); self.resolution_details.insert( name.to_string(), ResolvedDependency { formula, keg_path, opt_path: Some(opt_path), status, tags: tags_from_parent, failure_reason: None, }, ); } // --------------------------------------------------------------------- recurse ---- let dep_snapshot = self .resolution_details .get(name) .expect("just inserted") .clone(); // if this node is already irrecoverably broken, stop here if matches!( dep_snapshot.status, ResolutionStatus::Failed | ResolutionStatus::NotFound ) { self.visiting.remove(name); return Ok(()); } // iterate its declared dependencies ----------------------------------------------- for dep in dep_snapshot.formula.dependencies()? 
        let dep_name = &dep.name;
        let dep_tags = dep.tags;
        debug!(
            "Processing dependency '{}' for '{}' with tags {:?}",
            dep_name, name, dep_tags
        );

        // optional / test filtering
        if !self.should_consider_dependency(&dep) {
            if !self.resolution_details.contains_key(dep_name.as_str()) {
                debug!("Marking '{}' as SkippedOptional", dep_name);
                if let Ok(f) = self.context.formulary.load_formula(dep_name) {
                    let arc = Arc::new(f);
                    let opt = self.context.keg_registry.get_opt_path(dep_name);
                    self.formula_cache.insert(dep_name.to_string(), arc.clone());
                    self.resolution_details.insert(
                        dep_name.to_string(),
                        ResolvedDependency {
                            formula: arc,
                            keg_path: None,
                            opt_path: Some(opt),
                            status: ResolutionStatus::SkippedOptional,
                            tags: dep_tags,
                            failure_reason: None,
                        },
                    );
                }
            }
            continue;
        }

        // --- real recursion -----------------------------------------------------------
        if let Err(e) = self.resolve_recursive(dep_name, dep_tags, false) {
            warn!(
                "Recursive resolution for '{}' (child of '{}') failed: {}",
                dep_name, name, e
            );

            // we’ll need the details after moving `e`, so harvest now
            let is_cycle = matches!(e, SpError::DependencyError(_));
            let msg = e.to_string();

            // move `e` into the error map
            self.errors
                .entry(dep_name.to_string())
                .or_insert_with(|| Arc::new(e));

            // mark the node as failed
            if let Some(node) = self.resolution_details.get_mut(dep_name.as_str()) {
                node.status = ResolutionStatus::Failed;
                node.failure_reason = Some(msg);
            }

            // propagate cycles upward
            if is_cycle {
                self.visiting.remove(name);
                return Err(SpError::DependencyError(
                    "Circular dependency detected".into(),
                ));
            }
        }
    }

    self.visiting.remove(name);
    debug!("Finished resolving '{}'", name);
    Ok(())
}

fn topological_sort(&self) -> Result<Vec<ResolvedDependency>> {
    debug!("Starting topological sort");
    let mut in_degree: HashMap<String, usize> = HashMap::new();
    let mut adj: HashMap<String, HashSet<String>> = HashMap::new();
    let mut sorted_list = Vec::new();
    let mut queue = VecDeque::new();

    let relevant_nodes: Vec<_> = self
        .resolution_details
        .iter()
        .filter(|(_, dep)| {
            matches!(
                dep.status,
                ResolutionStatus::Installed
                    | ResolutionStatus::Missing
                    | ResolutionStatus::Requested
            )
        })
        .map(|(name, _)| name.clone())
        .collect();

    for name in &relevant_nodes {
        in_degree.entry(name.clone()).or_insert(0);
        adj.entry(name.clone()).or_default();
    }

    for name in &relevant_nodes {
        let resolved_dep = self.resolution_details.get(name).unwrap();
        match resolved_dep.formula.dependencies() {
            Ok(dependencies) => {
                for dep in dependencies {
                    if relevant_nodes.contains(&dep.name)
                        && self.should_consider_dependency(&dep)
                        && adj
                            .entry(dep.name.clone())
                            .or_default()
                            .insert(name.clone())
                    {
                        *in_degree.entry(name.clone()).or_insert(0) += 1;
                    }
                }
            }
            Err(e) => {
                error!(
                    "Failed to get dependencies for '{}' during sort: {}",
                    name, e
                );
                return Err(e);
            }
        }
    }
    debug!("In-degrees (relevant nodes only): {:?}", in_degree);

    for name in &relevant_nodes {
        if *in_degree.get(name).unwrap_or(&1) == 0 {
            queue.push_back(name.clone());
        }
    }
    debug!("Initial queue: {:?}", queue);
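    // Kahn's algorithm: repeatedly pop a node whose in-degree is zero, emit it,
    // and decrement the in-degree of every node that depends on it. E.g. with
    // edges zlib -> libpng -> cairo (dependency -> dependent), the queue starts
    // as [zlib]; emitting zlib unlocks libpng, which in turn unlocks cairo,
    // giving the install order [zlib, libpng, cairo].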
    while let Some(u_name) = queue.pop_front() {
        if let Some(resolved_dep) = self.resolution_details.get(&u_name) {
            if matches!(
                resolved_dep.status,
                ResolutionStatus::Installed
                    | ResolutionStatus::Missing
                    | ResolutionStatus::Requested
            ) {
                sorted_list.push(resolved_dep.clone());
            }
        } else {
            error!(
                "Error: Node '{}' from queue not found in resolved map!",
                u_name
            );
            return Err(SpError::Generic(format!(
                "Topological sort inconsistency: node {u_name} not found"
            )));
        }

        if let Some(neighbors) = adj.get(&u_name) {
            for v_name in neighbors {
                if relevant_nodes.contains(v_name) {
                    if let Some(degree) = in_degree.get_mut(v_name) {
                        *degree = degree.saturating_sub(1);
                        if *degree == 0 {
                            queue.push_back(v_name.clone());
                        }
                    }
                }
            }
        }
    }

    if sorted_list.len() != relevant_nodes.len() {
        error!(
            "Cycle detected! Sorted count ({}) != Relevant node count ({}).",
            sorted_list.len(),
            relevant_nodes.len()
        );
        let cyclic_nodes: Vec<_> = relevant_nodes
            .iter()
            .filter(|n| in_degree.get(*n).unwrap_or(&0) > &0)
            .cloned()
            .collect();
        error!(
            "Nodes potentially involved in cycle (relevant, in-degree > 0): {:?}",
            cyclic_nodes
        );
        return Err(SpError::DependencyError(
            "Circular dependency detected".to_string(),
        ));
    }

    debug!(
        "Topological sort successful. {} relevant nodes in sorted list.",
        sorted_list.len()
    );
    Ok(sorted_list)
}

fn should_consider_dependency(&self, dep: &Dependency) -> bool {
    let tags = dep.tags;
    if tags.contains(DependencyTag::TEST) && !self.context.include_test {
        return false;
    }
    if tags.contains(DependencyTag::OPTIONAL) && !self.context.include_optional {
        return false;
    }
    if tags.contains(DependencyTag::RECOMMENDED) && self.context.skip_recommended {
        return false;
    }
    true
}
}

impl Formula {
    fn placeholder(name: &str) -> Self {
        Self {
            name: name.to_string(),
            stable_version_str: "0.0.0".to_string(),
            version_semver: semver::Version::new(0, 0, 0),
            revision: 0,
            desc: Some("Placeholder for unresolved formula".to_string()),
            homepage: None,
            url: String::new(),
            sha256: String::new(),
            mirrors: Vec::new(),
            bottle: Default::default(),
            dependencies: Vec::new(),
            requirements: Vec::new(),
            resources: Vec::new(),
            install_keg_path: None,
        }
    }
}
```

## /sp-common/src/error.rs

```rs path="/sp-common/src/error.rs"
use std::sync::Arc;

use thiserror::Error;

#[derive(Error, Debug, Clone)]
pub enum SpError {
    #[error("I/O Error: {0}")]
    Io(#[from] Arc<std::io::Error>),
    #[error("HTTP Request Error: {0}")]
    Http(#[from] Arc<reqwest::Error>),
    #[error("JSON Parsing Error: {0}")]
    Json(#[from] Arc<serde_json::Error>),
    #[error("Semantic Versioning Error: {0}")]
    SemVer(#[from] Arc<semver::Error>),
    #[error("Object File Error: {0}")]
    Object(#[from] Arc<object::read::Error>),
    #[error("Configuration Error: {0}")]
    Config(String),
    #[error("API Error: {0}")]
    Api(String),
    #[error("API Request Error: {0}")]
    ApiRequestError(String),
    #[error("DownloadError: Failed to download '{0}' from '{1}': {2}")]
    DownloadError(String, String, String),
    #[error("Cache Error: {0}")]
    Cache(String),
    #[error("Resource Not Found: {0}")]
    NotFound(String),
    #[error("Installation Error: {0}")]
    InstallError(String),
    #[error("Generic Error: {0}")]
    Generic(String),
    #[error("HttpError: {0}")]
    HttpError(String),
    #[error("Checksum Mismatch: {0}")]
    ChecksumMismatch(String),
    #[error("Validation Error: {0}")]
    ValidationError(String),
    #[error("Checksum Error: {0}")]
    ChecksumError(String),
    #[error("Parsing Error in {0}: {1}")]
    ParseError(&'static str, String),
    #[error("Version error: {0}")]
    VersionError(String),
    #[error("Dependency Error: {0}")]
    DependencyError(String),
    #[error("Build environment setup failed: {0}")]
    BuildEnvError(String),
    #[error("IoError: {0}")]
    IoError(String),
    #[error("Failed to execute command: {0}")]
    CommandExecError(String),
    #[error("Mach-O Error: {0}")]
    MachOError(String),
    #[error("Mach-O Modification Error: {0}")]
    MachOModificationError(String),
    #[error("Mach-O Relocation Error: Path too long - {0}")]
    PathTooLongError(String),
    #[error("Codesign Error: {0}")]
    CodesignError(String),
}

impl From<std::io::Error> for SpError {
    fn from(err: std::io::Error) -> Self {
        SpError::Io(Arc::new(err))
    }
}

impl From<reqwest::Error> for SpError {
    fn from(err: reqwest::Error) -> Self {
        SpError::Http(Arc::new(err))
    }
}

impl From<serde_json::Error> for SpError {
    fn from(err: serde_json::Error) -> Self {
        SpError::Json(Arc::new(err))
    }
}
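// `std::io::Error`, `reqwest::Error`, and the other source errors do not
// implement `Clone`, so they are wrapped in `Arc` above; that is what lets
// `SpError` itself derive `Clone` while still carrying the original error
// for source chains.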
impl From<semver::Error> for SpError {
    fn from(err: semver::Error) -> Self {
        SpError::SemVer(Arc::new(err))
    }
}

impl From<object::read::Error> for SpError {
    fn from(err: object::read::Error) -> Self {
        SpError::Object(Arc::new(err))
    }
}

pub type Result<T> = std::result::Result<T, SpError>;
```

## /sp-common/src/formulary.rs

```rs path="/sp-common/src/formulary.rs"
use std::collections::HashMap; // For caching parsed formulas
use std::sync::Arc; // Import Arc for thread-safe shared ownership

// Removed: use std::fs;
// Removed: use std::path::PathBuf;
// Removed: const DEFAULT_CORE_TAP: &str = "homebrew/core";
use tracing::debug;

use super::cache::Cache; // Import the Cache struct
use super::config::Config;
use super::error::{Result, SpError};
use super::model::formula::Formula;

/// Responsible for finding and loading Formula definitions from the API cache.
#[derive()]
pub struct Formulary {
    // config: Config, // Keep config if needed for cache path, etc.
    cache: Cache,
    // Optional: Add a cache for *parsed* formulas to avoid repeated parsing of the large JSON
    parsed_cache: std::sync::Mutex<HashMap<String, Arc<Formula>>>, /* Using Arc for thread-safety */
}

impl Formulary {
    pub fn new(config: Config) -> Self {
        // Initialize the cache helper using the directory from config
        let cache = Cache::new(&config.cache_dir).unwrap_or_else(|e| {
            // Handle error appropriately - maybe panic or return Result?
            // Using expect here for simplicity, but Result is better.
            panic!("Failed to initialize cache in Formulary: {e}");
        });
        Self {
            // config,
            cache,
            parsed_cache: std::sync::Mutex::new(HashMap::new()),
        }
    }

    // Removed: resolve_formula_path
    // Removed: parse_qualified_name

    /// Loads a formula definition by name from the API cache.
    pub fn load_formula(&self, name: &str) -> Result<Formula> {
        // 1. Check parsed cache first
        let mut parsed_cache_guard = self.parsed_cache.lock().unwrap();
        if let Some(formula_arc) = parsed_cache_guard.get(name) {
            debug!("Loaded formula '{}' from parsed cache.", name);
            return Ok(Arc::clone(formula_arc).as_ref().clone());
        }
        // Release lock early if not found
        drop(parsed_cache_guard);

        // 2. Load the raw formula list from the main cache file
        debug!("Loading raw formula data from cache file 'formula.json'...");
        let raw_data = self.cache.load_raw("formula.json")?; // Assumes update stored it here

        // 3. Parse the entire JSON array
        // This could be expensive, hence the parsed_cache above.
        debug!("Parsing full formula list");
        let all_formulas: Vec<Formula> = serde_json::from_str(&raw_data)
            .map_err(|e| SpError::Cache(format!("Failed to parse cached formula data: {e}")))?;
        debug!("Parsed {} formulas.", all_formulas.len());

        // 4. Find the requested formula and populate the parsed cache
        let mut found_formula: Option<Formula> = None;
        // Lock again to update the parsed cache
        parsed_cache_guard = self.parsed_cache.lock().unwrap();
        // Use entry API to avoid redundant lookups if another thread populated it
        for formula in all_formulas {
            let formula_name = formula.name.clone(); // Clone name for insertion
            let formula_arc = std::sync::Arc::new(formula); // Create Arc once

            // If this is the formula we're looking for, store it for return value
            if formula_name == name {
                found_formula = Some(Arc::clone(&formula_arc).as_ref().clone()); // Clone Formula out
            }

            // Insert into parsed cache using entry API
            parsed_cache_guard
                .entry(formula_name)
                .or_insert(formula_arc);
        }
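        // After this loop the parsed cache holds an Arc for every formula in
        // formula.json, so the expensive full-file parse above happens at most
        // once per Formulary instance; later lookups for any name are plain
        // HashMap hits.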
        // 5. Return the found formula or an error
        match found_formula {
            Some(f) => {
                debug!(
                    "Successfully loaded formula '{}' version {}",
                    f.name,
                    f.version_str_full()
                );
                Ok(f)
            }
            None => {
                debug!(
                    "Formula '{}' not found within the cached formula data.",
                    name
                );
                Err(SpError::Generic(format!(
                    "Formula '{name}' not found in cache."
                )))
            }
        }
    }
}
```

## /sp-common/src/keg.rs

```rs path="/sp-common/src/keg.rs"
use std::fs;
use std::path::{Path, PathBuf};

use semver::Version;

use super::config::Config;
use super::error::Result;

/// Represents information about an installed package (Keg).
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct InstalledKeg {
    pub name: String,
    pub version: Version,
    pub path: PathBuf,
    pub revision: u32,
}

/// Manages querying installed packages in the Cellar.
#[derive(Debug)]
pub struct KegRegistry {
    config: Config,
}

impl KegRegistry {
    pub fn new(config: Config) -> Self {
        Self { config }
    }

    /// Gets the path to the directory containing all versions for a formula.
    fn formula_cellar_path(&self, name: &str) -> PathBuf {
        self.config.cellar.join(name)
    }

    /// Calculates the conventional 'opt' path for a formula (e.g., /opt/homebrew/opt/foo).
    /// This path typically points to the currently linked/active version.
    pub fn get_opt_path(&self, name: &str) -> PathBuf {
        self.config.prefix.join("opt").join(name)
    }

    /// Checks if a formula is installed and returns its Keg info if it is.
    /// If multiple versions are installed, returns the latest version (considering revisions).
    pub fn get_installed_keg(&self, name: &str) -> Result<Option<InstalledKeg>> {
        let formula_dir = self.formula_cellar_path(name);
        if !formula_dir.is_dir() {
            return Ok(None);
        }
        let mut latest_keg: Option<InstalledKeg> = None;
        for entry_result in fs::read_dir(&formula_dir)? {
            let entry = entry_result?;
            let path = entry.path();
            if path.is_dir() {
                if let Some(version_str_full) = path.file_name().and_then(|n| n.to_str()) {
                    let mut parts = version_str_full.splitn(2, '_');
                    let version_part = parts.next().unwrap_or(version_str_full);
                    let revision = parts
                        .next()
                        .and_then(|s| s.parse::<u32>().ok())
                        .unwrap_or(0);
                    let version_str_padded = if version_part.split('.').count() < 3 {
                        let v_parts: Vec<&str> = version_part.split('.').collect();
                        match v_parts.len() {
                            1 => format!("{}.0.0", v_parts[0]),
                            2 => format!("{}.{}.0", v_parts[0], v_parts[1]),
                            _ => version_part.to_string(),
                        }
                    } else {
                        version_part.to_string()
                    };
                    if let Ok(version) = Version::parse(&version_str_padded) {
                        let current_keg = InstalledKeg {
                            name: name.to_string(),
                            version: version.clone(),
                            revision,
                            path: path.clone(),
                        };
                        match latest_keg {
                            Some(ref latest) => {
                                if version > latest.version
                                    || (version == latest.version && revision > latest.revision)
                                {
                                    latest_keg = Some(current_keg);
                                }
                            }
                            None => {
                                latest_keg = Some(current_keg);
                            }
                        }
                    }
                }
            }
        }
        Ok(latest_keg)
    }
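    // Example: a keg directory named "1.2_1" splits into version_part "1.2" and
    // revision 1; "1.2" is then padded to "1.2.0" so `semver::Version::parse`
    // accepts it, while "1.2.3_2" parses directly as version 1.2.3, revision 2.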
    /// Lists all installed kegs.
    /// Reads the cellar directory and parses all valid keg structures found.
    pub fn list_installed_kegs(&self) -> Result<Vec<InstalledKeg>> {
        let mut installed_kegs = Vec::new();
        let cellar_dir = self.cellar_path();
        if !cellar_dir.is_dir() {
            return Ok(installed_kegs);
        }
        for formula_entry in fs::read_dir(cellar_dir)? {
            let formula_entry = formula_entry?;
            let formula_path = formula_entry.path();
            if formula_path.is_dir() {
                if let Some(formula_name) = formula_path.file_name().and_then(|n| n.to_str()) {
                    for version_entry in fs::read_dir(&formula_path)? {
                        let version_entry = version_entry?;
                        let version_path = version_entry.path();
                        if version_path.is_dir() {
                            if let Some(version_str_full) =
                                version_path.file_name().and_then(|n| n.to_str())
                            {
                                let mut parts = version_str_full.splitn(2, '_');
                                let version_part = parts.next().unwrap_or(version_str_full);
                                let revision = parts
                                    .next()
                                    .and_then(|s| s.parse::<u32>().ok())
                                    .unwrap_or(0);
                                let version_str_padded = if version_part.split('.').count() < 3 {
                                    let v_parts: Vec<&str> = version_part.split('.').collect();
                                    match v_parts.len() {
                                        1 => format!("{}.0.0", v_parts[0]),
                                        2 => format!("{}.{}.0", v_parts[0], v_parts[1]),
                                        _ => version_part.to_string(),
                                    }
                                } else {
                                    version_part.to_string()
                                };
                                if let Ok(version) = Version::parse(&version_str_padded) {
                                    installed_kegs.push(InstalledKeg {
                                        name: formula_name.to_string(),
                                        version,
                                        revision,
                                        path: version_path.clone(),
                                    });
                                }
                            }
                        }
                    }
                }
            }
        }
        Ok(installed_kegs)
    }

    /// Returns the root path of the Cellar.
    pub fn cellar_path(&self) -> &Path {
        &self.config.cellar
    }

    /// Returns the path for a specific versioned keg (whether installed or not).
    /// Includes revision in the path name if revision > 0.
    pub fn get_keg_path(&self, name: &str, version: &Version, revision: u32) -> PathBuf {
        let version_string = if revision > 0 {
            format!("{version}_{revision}")
        } else {
            version.to_string()
        };
        self.formula_cellar_path(name).join(version_string)
    }
}
```

## /sp-common/src/lib.rs

```rs path="/sp-common/src/lib.rs"
// sp-common/src/lib.rs
pub mod cache;
pub mod config;
pub mod dependency;
pub mod error;
pub mod formulary;
pub mod keg;
pub mod model;
// Optional: pub mod dependency_def;

// Re-export key types
pub use cache::Cache;
pub use config::Config;
pub use error::{Result, SpError};
pub use model::{Cask, Formula}; // etc.
// Optional: pub use dependency_def::{Dependency, DependencyTag};
```

## /sp-common/src/model/cask.rs

```rs path="/sp-common/src/model/cask.rs"
// ===== sp-core/src/model/cask.rs =====
use std::collections::HashMap;
use std::fs;

use serde::{Deserialize, Serialize};

use crate::config::Config; // <-- Added import

pub type Artifact = serde_json::Value;

/// Represents the `url` field, which can be a simple string or a map with specs
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum UrlField {
    Simple(String),
    WithSpec {
        url: String,
        #[serde(default)]
        verified: Option<String>,
        #[serde(flatten)]
        other: HashMap<String, serde_json::Value>,
    },
}

/// Represents the `sha256` field: hex, no_check, or per-architecture
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum Sha256Field {
    Hex(String),
    #[serde(rename_all = "snake_case")]
    NoCheck {
        no_check: bool,
    },
    PerArch(HashMap<String, String>),
}
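// Example JSON shapes this untagged enum accepts (variants are tried in order):
//   "sha256": "8e9f..."                                -> Hex
//   "sha256": { "no_check": true }                     -> NoCheck
//   "sha256": { "arm": "8e9f...", "intel": "3c4d..." } -> PerArch (keys illustrative)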
[":catalina", ":big_sur"] Comparison(String), // ">= :big_sur" Map(HashMap>), } /// Helper to coerce string-or-list into Vec #[derive(Debug, Clone, Serialize, Deserialize)] #[serde(untagged)] pub enum StringList { One(String), Many(Vec), } impl From for Vec { fn from(item: StringList) -> Self { match item { StringList::One(s) => vec![s], StringList::Many(v) => v, } } } /// Represents the specific architecture details found in some cask definitions #[derive(Debug, Clone, Serialize, Deserialize)] pub struct ArchSpec { #[serde(rename = "type")] // Map the JSON "type" field pub type_name: String, // e.g., "arm" pub bits: u32, // e.g., 64 } /// Represents `depends_on` block with multiple possible keys #[derive(Debug, Clone, Serialize, Deserialize, Default)] pub struct DependsOn { #[serde(default)] pub cask: Vec, #[serde(default)] pub formula: Vec, #[serde(default)] pub arch: Option, #[serde(default)] pub macos: Option, #[serde(flatten)] pub extra: HashMap, } /// The main Cask model matching Homebrew JSON v2 #[derive(Debug, Clone, Serialize, Deserialize, Default)] pub struct Cask { pub token: String, #[serde(default)] pub name: Option>, pub version: Option, pub desc: Option, pub homepage: Option, #[serde(default)] pub artifacts: Option>, #[serde(default)] pub url: Option, #[serde(default)] pub url_specs: Option>, #[serde(default)] pub sha256: Option, pub appcast: Option, pub auto_updates: Option, #[serde(default)] pub depends_on: Option, #[serde(default)] pub conflicts_with: Option, pub caveats: Option, pub stage_only: Option, #[serde(default)] pub uninstall: Option>, #[serde(default)] pub zap: Option>, } #[derive(Debug, Clone, Serialize, Deserialize)] pub struct CaskList { pub casks: Vec, } impl Cask { /// Check if this cask is installed by looking for a manifest file /// in any versioned directory within the Caskroom. pub fn is_installed(&self, config: &Config) -> bool { let cask_dir = config.cask_dir(&self.token); // e.g., /opt/homebrew/Caskroom/firefox if !cask_dir.exists() || !cask_dir.is_dir() { return false; } // Iterate through entries (version dirs) inside the cask_dir match fs::read_dir(&cask_dir) { Ok(entries) => { // Clippy fix: Use flatten() to handle Result entries directly for entry in entries.flatten() { // <-- Use flatten() here let version_path = entry.path(); // Check if it's a directory (representing a version) if version_path.is_dir() { // Check for the existence of the manifest file let manifest_path = version_path.join("CASK_INSTALL_MANIFEST.json"); // <-- Correct filename if manifest_path.is_file() { // Found a manifest in at least one version directory, consider it // installed return true; } } } // If loop completes without finding a manifest in any version dir false } Err(e) => { // Log error if reading the directory fails, but assume not installed tracing::warn!( "Failed to read cask directory {} to check for installed versions: {}", cask_dir.display(), e ); false } } } /// Get the installed version of this cask by reading the directory names /// in the Caskroom. Returns the first version found (use cautiously if multiple /// versions could exist, though current install logic prevents this). 
pub fn installed_version(&self, config: &Config) -> Option { let cask_dir = config.cask_dir(&self.token); // if !cask_dir.exists() { return None; } // Iterate through entries and return the first directory name found match fs::read_dir(&cask_dir) { Ok(entries) => { // Clippy fix: Use flatten() for entry in entries.flatten() { // <-- Use flatten() here let path = entry.path(); // Check if it's a directory (representing a version) if path.is_dir() { if let Some(version_str) = path.file_name().and_then(|name| name.to_str()) { // Return the first version directory name found return Some(version_str.to_string()); } } } // No version directories found None } Err(_) => None, // Error reading directory } } /// Get a friendly name for display purposes pub fn display_name(&self) -> String { self.name .as_ref() .and_then(|names| names.first().cloned()) .unwrap_or_else(|| self.token.clone()) } } ``` ## /sp-common/src/model/formula.rs ```rs path="/sp-common/src/model/formula.rs" // sp-core/src/model/formula.rs // *** Corrected: Removed derive Deserialize from ResourceSpec, removed unused SpError import, // added ResourceSpec struct and parsing *** use std::collections::HashMap; use std::path::{Path, PathBuf}; use semver::Version; use serde::{Deserialize, Deserializer, Serialize, de}; use serde_json::Value; use tracing::{debug, error}; use crate::dependency::{Dependency, DependencyTag, Requirement}; use crate::error::Result; // <-- Import only Result // Use log crate imports // --- Resource Spec Struct --- // *** Added struct definition, REMOVED #[derive(Deserialize)] *** #[derive(Debug, Clone, Serialize, PartialEq, Eq)] pub struct ResourceSpec { pub name: String, pub url: String, pub sha256: String, // Add other potential fields like version if needed later } // --- Bottle Related Structs (Original structure) --- #[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)] pub struct BottleFileSpec { pub url: String, pub sha256: String, } #[derive(Debug, Clone, Serialize, Deserialize, Default, PartialEq, Eq)] pub struct BottleSpec { pub stable: Option, } #[derive(Debug, Clone, Serialize, Deserialize, Default, PartialEq, Eq)] pub struct BottleStableSpec { pub rebuild: u32, #[serde(default)] pub files: HashMap, } // --- Formula Version Struct (Original structure) --- #[derive(Deserialize, Serialize, Debug, Clone, Default, PartialEq, Eq)] pub struct FormulaVersions { pub stable: Option, pub head: Option, #[serde(default)] pub bottle: bool, } // --- Main Formula Struct --- // *** Added 'resources' field *** #[derive(Debug, Clone, Serialize, PartialEq, Eq)] pub struct Formula { pub name: String, pub stable_version_str: String, #[serde(rename = "versions")] pub version_semver: Version, #[serde(default)] pub revision: u32, #[serde(default)] pub desc: Option, #[serde(default)] pub homepage: Option, #[serde(default)] pub url: String, #[serde(default)] pub sha256: String, #[serde(default)] pub mirrors: Vec, #[serde(default)] pub bottle: BottleSpec, #[serde(skip_deserializing)] pub dependencies: Vec, #[serde(default, deserialize_with = "deserialize_requirements")] pub requirements: Vec, #[serde(skip_deserializing)] // Skip direct deserialization for this field pub resources: Vec, // Stores parsed resources #[serde(skip)] pub install_keg_path: Option, } // Custom deserialization logic for Formula impl<'de> Deserialize<'de> for Formula { fn deserialize(deserializer: D) -> std::result::Result where D: Deserializer<'de>, { // Temporary struct reflecting the JSON structure more closely // *** Added 'resources' field 
to capture raw JSON Value *** #[derive(Deserialize, Debug)] struct RawFormulaData { name: String, #[serde(default)] revision: u32, desc: Option, homepage: Option, versions: FormulaVersions, #[serde(default)] url: String, #[serde(default)] sha256: String, #[serde(default)] mirrors: Vec, #[serde(default)] bottle: BottleSpec, #[serde(default)] dependencies: Vec, #[serde(default)] build_dependencies: Vec, #[serde(default)] test_dependencies: Vec, #[serde(default)] recommended_dependencies: Vec, #[serde(default)] optional_dependencies: Vec, #[serde(default, deserialize_with = "deserialize_requirements")] requirements: Vec, #[serde(default)] resources: Vec, // Capture resources as generic Value first #[serde(default)] urls: Option, } let raw: RawFormulaData = RawFormulaData::deserialize(deserializer)?; // --- Version Parsing (Original logic) --- let stable_version_str = raw .versions .stable .clone() .ok_or_else(|| de::Error::missing_field("versions.stable"))?; let version_semver = match crate::model::version::Version::parse(&stable_version_str) { Ok(v) => v.into(), Err(_) => { let mut majors = 0u32; let mut minors = 0u32; let mut patches = 0u32; let mut part_count = 0; for (i, part) in stable_version_str.split('.').enumerate() { let numeric_part = part .chars() .take_while(|c| c.is_ascii_digit()) .collect::(); if numeric_part.is_empty() && i > 0 { break; } if numeric_part.len() < part.len() && i > 0 { if let Ok(num) = numeric_part.parse::() { match i { 0 => majors = num, 1 => minors = num, 2 => patches = num, _ => {} } part_count += 1; } break; } if let Ok(num) = numeric_part.parse::() { match i { 0 => majors = num, 1 => minors = num, 2 => patches = num, _ => {} } part_count += 1; } if i >= 2 { break; } } let version_str_padded = match part_count { 1 => format!("{majors}.0.0"), 2 => format!("{majors}.{minors}.0"), _ => format!("{majors}.{minors}.{patches}"), }; match Version::parse(&version_str_padded) { Ok(v) => v, Err(_) => { error!( "Warning: Could not parse version '{}' (sanitized to '{}') for formula '{}'. 
Using 0.0.0.", stable_version_str, version_str_padded, raw.name ); Version::new(0, 0, 0) } } } }; // --- URL/SHA256 Logic (Original logic) --- let mut final_url = raw.url; let mut final_sha256 = raw.sha256; if final_url.is_empty() { if let Some(Value::Object(urls_map)) = raw.urls { if let Some(Value::Object(stable_url_info)) = urls_map.get("stable") { if let Some(Value::String(u)) = stable_url_info.get("url") { final_url = u.clone(); } if let Some(Value::String(s)) = stable_url_info .get("checksum") .or_else(|| stable_url_info.get("sha256")) { final_sha256 = s.clone(); } } } } if final_url.is_empty() && raw.versions.head.is_none() { debug!("Warning: Formula '{}' has no stable URL defined.", raw.name); } // --- Dependency Processing (Original logic) --- let mut combined_dependencies: Vec = Vec::new(); let mut seen_deps: HashMap = HashMap::new(); let mut process_list = |deps: &[String], tag: DependencyTag| { for name in deps { *seen_deps .entry(name.clone()) .or_insert(DependencyTag::empty()) |= tag; } }; process_list(&raw.dependencies, DependencyTag::RUNTIME); process_list(&raw.build_dependencies, DependencyTag::BUILD); process_list(&raw.test_dependencies, DependencyTag::TEST); process_list( &raw.recommended_dependencies, DependencyTag::RECOMMENDED | DependencyTag::RUNTIME, ); process_list( &raw.optional_dependencies, DependencyTag::OPTIONAL | DependencyTag::RUNTIME, ); for (name, tags) in seen_deps { combined_dependencies.push(Dependency::new_with_tags(name, tags)); } // --- Resource Processing --- // *** Added parsing logic for the 'resources' field *** let mut combined_resources: Vec = Vec::new(); for res_val in raw.resources { // Homebrew API JSON format puts resource spec inside a keyed object // e.g., { "resource_name": { "url": "...", "sha256": "..." } } if let Value::Object(map) = res_val { // Assume only one key-value pair per object in the array if let Some((res_name, res_spec_val)) = map.into_iter().next() { // Use the manual Deserialize impl for ResourceSpec match ResourceSpec::deserialize(res_spec_val.clone()) { // Use ::deserialize Ok(mut res_spec) => { // Inject the name from the key if missing if res_spec.name.is_empty() { res_spec.name = res_name; } else if res_spec.name != res_name { debug!( "Resource name mismatch in formula '{}': key '{}' vs spec '{}'. Using key.", raw.name, res_name, res_spec.name ); res_spec.name = res_name; // Prefer key name } // Ensure required fields are present if res_spec.url.is_empty() || res_spec.sha256.is_empty() { debug!( "Resource '{}' for formula '{}' is missing URL or SHA256. Skipping.", res_spec.name, raw.name ); continue; } debug!( "Parsed resource '{}' for formula '{}'", res_spec.name, raw.name ); combined_resources.push(res_spec); } Err(e) => { // Use display for the error which comes from serde::de::Error::custom debug!( "Failed to parse resource spec value for key '{}' in formula '{}': {}. 
Value: {:?}", res_name, raw.name, e, res_spec_val ); } } } else { debug!("Empty resource object found in formula '{}'.", raw.name); } } else { debug!( "Unexpected format for resource entry in formula '{}': expected object, got {:?}", raw.name, res_val ); } } Ok(Self { name: raw.name, stable_version_str, version_semver, revision: raw.revision, desc: raw.desc, homepage: raw.homepage, url: final_url, sha256: final_sha256, mirrors: raw.mirrors, bottle: raw.bottle, dependencies: combined_dependencies, requirements: raw.requirements, resources: combined_resources, // Assign parsed resources install_keg_path: None, }) } } // --- Formula impl Methods --- impl Formula { // dependencies() and requirements() are unchanged pub fn dependencies(&self) -> Result> { Ok(self.dependencies.clone()) } pub fn requirements(&self) -> Result> { Ok(self.requirements.clone()) } // *** Added: Returns a clone of the defined resources. *** pub fn resources(&self) -> Result> { Ok(self.resources.clone()) } // Other methods (set_keg_path, version_str_full, accessors) are unchanged pub fn set_keg_path(&mut self, path: PathBuf) { self.install_keg_path = Some(path); } pub fn version_str_full(&self) -> String { if self.revision > 0 { format!("{}_{}", self.stable_version_str, self.revision) } else { self.stable_version_str.clone() } } pub fn name(&self) -> &str { &self.name } pub fn version(&self) -> &Version { &self.version_semver } pub fn source_url(&self) -> &str { &self.url } pub fn source_sha256(&self) -> &str { &self.sha256 } pub fn get_bottle_spec(&self, bottle_tag: &str) -> Option<&BottleFileSpec> { self.bottle.stable.as_ref()?.files.get(bottle_tag) } } // --- BuildEnvironment Dependency Interface (Unchanged) --- pub trait FormulaDependencies { fn name(&self) -> &str; fn install_prefix(&self, cellar_path: &Path) -> Result; fn resolved_runtime_dependency_paths(&self) -> Result>; fn resolved_build_dependency_paths(&self) -> Result>; fn all_resolved_dependency_paths(&self) -> Result>; } impl FormulaDependencies for Formula { fn name(&self) -> &str { &self.name } fn install_prefix(&self, cellar_path: &Path) -> Result { let version_string = self.version_str_full(); Ok(cellar_path.join(self.name()).join(version_string)) } fn resolved_runtime_dependency_paths(&self) -> Result> { Ok(Vec::new()) } fn resolved_build_dependency_paths(&self) -> Result> { Ok(Vec::new()) } fn all_resolved_dependency_paths(&self) -> Result> { Ok(Vec::new()) } } // --- Deserialization Helpers --- // deserialize_requirements remains unchanged fn deserialize_requirements<'de, D>( deserializer: D, ) -> std::result::Result, D::Error> where D: serde::Deserializer<'de>, { #[derive(Deserialize, Debug)] struct ReqWrapper { #[serde(default)] name: String, #[serde(default)] version: Option, #[serde(default)] cask: Option, #[serde(default)] download: Option, } let raw_reqs: Vec = Deserialize::deserialize(deserializer)?; let mut requirements = Vec::new(); for req_val in raw_reqs { if let Ok(req_obj) = serde_json::from_value::(req_val.clone()) { match req_obj.name.as_str() { "macos" => { requirements.push(Requirement::MacOS( req_obj.version.unwrap_or_else(|| "any".to_string()), )); } "xcode" => { requirements.push(Requirement::Xcode( req_obj.version.unwrap_or_else(|| "any".to_string()), )); } "cask" => { requirements.push(Requirement::Other(format!( "Cask Requirement: {}", req_obj.cask.unwrap_or_else(|| "?".to_string()) ))); } "download" => { requirements.push(Requirement::Other(format!( "Download Requirement: {}", req_obj.download.unwrap_or_else(|| 
"?".to_string()) ))); } _ => requirements.push(Requirement::Other(format!( "Unknown requirement type: {req_obj:?}" ))), } } else if let Value::String(req_str) = req_val { match req_str.as_str() { "macos" => requirements.push(Requirement::MacOS("latest".to_string())), "xcode" => requirements.push(Requirement::Xcode("latest".to_string())), _ => { requirements.push(Requirement::Other(format!("Simple requirement: {req_str}"))) } } } else { debug!("Warning: Could not parse requirement: {:?}", req_val); requirements.push(Requirement::Other(format!( "Unparsed requirement: {req_val:?}" ))); } } Ok(requirements) } // Manual impl Deserialize for ResourceSpec (unchanged, this is needed) impl<'de> Deserialize<'de> for ResourceSpec { fn deserialize(deserializer: D) -> std::result::Result where D: Deserializer<'de>, { #[derive(Deserialize)] struct Helper { #[serde(default)] name: String, // name is often the key, not in the value url: String, sha256: String, } let helper = Helper::deserialize(deserializer)?; // Note: The actual resource name comes from the key in the map during Formula // deserialization Ok(Self { name: helper.name, url: helper.url, sha256: helper.sha256, }) } } ``` ## /sp-common/src/model/mod.rs ```rs path="/sp-common/src/model/mod.rs" // src/model/mod.rs // Declares the modules within the model directory. use std::sync::Arc; pub mod cask; pub mod formula; pub mod version; // Re-export pub use cask::Cask; pub use formula::Formula; #[derive(Debug, Clone)] pub enum InstallTargetIdentifier { Formula(Arc), Cask(Arc), } ``` ## /sp-common/src/model/version.rs ```rs path="/sp-common/src/model/version.rs" // **File:** sp-core/src/model/version.rs (New file) use std::fmt; use std::str::FromStr; use serde::{Deserialize, Deserializer, Serialize, Serializer}; use crate::error::{Result, SpError}; /// Wrapper around semver::Version for formula versions. #[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)] pub struct Version(semver::Version); impl Version { pub fn parse(s: &str) -> Result { // Attempt standard semver parse first semver::Version::parse(s).map(Version).or_else(|_| { // Homebrew often uses versions like "1.2.3_1" (revision) or just "123" // Try to handle these by stripping suffixes or padding // This is a simplified handling, Homebrew's PkgVersion is complex let cleaned = s.split('_').next().unwrap_or(s); // Take part before _ let parts: Vec<&str> = cleaned.split('.').collect(); let padded = match parts.len() { 1 => format!("{}.0.0", parts[0]), 2 => format!("{}.{}.0", parts[0], parts[1]), _ => cleaned.to_string(), // Use original if 3+ parts }; semver::Version::parse(&padded).map(Version).map_err(|e| { SpError::VersionError(format!( "Failed to parse version '{s}' (tried '{padded}'): {e}" )) }) }) } } impl FromStr for Version { type Err = SpError; fn from_str(s: &str) -> std::result::Result { Self::parse(s) } } impl fmt::Display for Version { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { // TODO: Preserve original format if possible? PkgVersion complexity. // For now, display the parsed semver representation. write!(f, "{}", self.0) } } // Manual Serialize/Deserialize to handle the Version<->String conversion impl Serialize for Version { fn serialize(&self, serializer: S) -> std::result::Result where S: Serializer, { serializer.serialize_str(&self.to_string()) } } impl AsRef for Version { fn as_ref(&self) -> &Self { self } } // Removed redundant ToString implementation as it conflicts with the blanket implementation in std. 
impl From for semver::Version { fn from(version: Version) -> Self { version.0 } } impl<'de> Deserialize<'de> for Version { fn deserialize(deserializer: D) -> std::result::Result where D: Deserializer<'de>, { let s = String::deserialize(deserializer)?; Self::from_str(&s).map_err(serde::de::Error::custom) } } // Add to sp-core/src/utils/error.rs: // #[error("Version error: {0}")] // VersionError(String), // Add to sp-core/Cargo.toml: // [dependencies] // semver = "1.0" ``` ## /sp-core/Cargo.toml ```toml path="/sp-core/Cargo.toml" # sp-core: ./sp-core/Cargo.toml [package] name = "sp-core" version = "0.1.0" edition = "2024" # Or "2021" if not using nightly description = "Core library for the sp package manager" repository = "https://github.com/alexykn/sp" license = "BSD-3 Clause" [dependencies] sp-net = { path = "../sp-net" } sp-common = { path = "../sp-common" } # Inherited from workspace anyhow = { workspace = true } thiserror = { workspace = true } serde = { workspace = true } serde_json = { workspace = true } env_logger = { workspace = true } semver = { workspace = true } dirs = { workspace = true } walkdir = { workspace = true } reqwest = { workspace = true } url = { workspace = true } sha2 = { workspace = true } indicatif = { workspace = true } hex = { workspace = true } object = { workspace = true } tokio = { workspace = true } futures = { workspace = true } rand = { workspace = true } infer = { workspace = true } num_cpus = { workspace = true } humantime = { workspace = true } bitflags = { workspace = true } tracing = { workspace = true } devtools = "0.3.3" which = "7.0.3" toml = "0.8.21" fs_extra = "1.3" git2 = "0.20.1" cmd_lib = "1.9.5" tempfile = "3.19.1" regex = "1.11.1" glob = "0.3.2" flate2 = "1.1.1" bzip2 = "0.5.2" xz2 = "0.1.7" tar = "0.4.44" zip = "2.6.1" chrono = { version = "0.4.40", features = ["serde"] } async-recursion = "1.1.1" ``` ## /sp-core/src/build/cask/artifacts/app.rs ```rs path="/sp-core/src/build/cask/artifacts/app.rs" // In sp-core/src/build/cask/app.rs use std::fs; use std::path::Path; use std::process::Command; use sp_common::config::Config; use sp_common::error::{Result, SpError}; use sp_common::model::cask::Cask; use tracing::{debug, error}; // Added log imports use crate::build::cask::InstalledArtifact; /// Installs an app bundle from a staged location to /Applications and creates a symlink in the /// caskroom. Returns a Vec containing the details of artifacts created. pub fn install_app_from_staged( _cask: &Cask, // Keep cask for potential future use (e.g., specific app flags) staged_app_path: &Path, cask_version_install_path: &Path, config: &Config, ) -> Result> { // <-- Return type changed if !staged_app_path.exists() || !staged_app_path.is_dir() { return Err(SpError::NotFound(format!( "Staged app bundle not found or is not a directory: {}", staged_app_path.display() ))); } let app_name = staged_app_path .file_name() .ok_or_else(|| { SpError::Generic(format!( "Invalid staged app path: {}", staged_app_path.display() )) })? 
.to_string_lossy(); let applications_dir = config.applications_dir(); let final_app_destination = applications_dir.join(app_name.as_ref()); debug!( "Moving app '{}' from stage to {}", app_name, applications_dir.display() ); // --- Remove Existing Destination --- if final_app_destination.exists() || final_app_destination.symlink_metadata().is_ok() { debug!( "Removing existing app at {}", final_app_destination.display() ); let remove_result = if final_app_destination.is_dir() { fs::remove_dir_all(&final_app_destination) } else { fs::remove_file(&final_app_destination) // Remove file or symlink }; if let Err(e) = remove_result { if e.kind() == std::io::ErrorKind::PermissionDenied || e.kind() == std::io::ErrorKind::DirectoryNotEmpty { debug!("Direct removal failed ({}). Trying with sudo rm -rf", e); debug!("Executing: sudo rm -rf {}", final_app_destination.display()); let output = Command::new("sudo") .arg("rm") .arg("-rf") .arg(&final_app_destination) .output()?; if !output.status.success() { let stderr = String::from_utf8_lossy(&output.stderr); error!("sudo rm -rf failed ({}): {}", output.status, stderr); return Err(SpError::InstallError(format!( "Failed to remove existing app at {}: {}", final_app_destination.display(), stderr ))); } debug!("Successfully removed existing app with sudo."); } else { error!( "Failed to remove existing app at {}: {}", final_app_destination.display(), e ); return Err(SpError::Io(std::sync::Arc::new(e))); } } else { debug!("Successfully removed existing app."); } } // --- Move/Copy from Stage --- debug!( "Moving staged app {} to {}", staged_app_path.display(), final_app_destination.display() ); let move_output = Command::new("mv") .arg(staged_app_path) // Source .arg(&final_app_destination) // Destination .output()?; if !move_output.status.success() { let stderr = String::from_utf8_lossy(&move_output.stderr).to_lowercase(); if stderr.contains("cross-device link") || stderr.contains("operation not permitted") || stderr.contains("permission denied") { debug!("Direct mv failed ({}), trying cp -R", stderr); debug!( "Executing: cp -R {} {}", staged_app_path.display(), final_app_destination.display() ); let copy_output = Command::new("cp") .arg("-R") // Recursive copy .arg(staged_app_path) .arg(&final_app_destination) .output()?; if !copy_output.status.success() { let copy_stderr = String::from_utf8_lossy(©_output.stderr); error!("cp -R failed ({}): {}", copy_output.status, copy_stderr); return Err(SpError::InstallError(format!( "Failed to copy app from stage to {}: {}", final_app_destination.display(), copy_stderr ))); } debug!("Successfully copied app using cp -R."); } else { error!("mv command failed ({}): {}", move_output.status, stderr); return Err(SpError::InstallError(format!( "Failed to move app from stage to {}: {}", final_app_destination.display(), stderr ))); } } else { debug!("Successfully moved app using mv."); } // --- Record the main app artifact --- let mut created_artifacts = vec![InstalledArtifact::App { path: final_app_destination.clone(), }]; // --- Create Caskroom Symlink --- let caskroom_app_link_path = cask_version_install_path.join(app_name.as_ref()); debug!( "Linking {} -> {}", caskroom_app_link_path.display(), final_app_destination.display() ); if caskroom_app_link_path.exists() || caskroom_app_link_path.symlink_metadata().is_ok() { if let Err(e) = fs::remove_file(&caskroom_app_link_path) { debug!( "Failed to remove existing item at caskroom link path {}: {}", caskroom_app_link_path.display(), e ); } } #[cfg(unix)] { if let Err(e) = 
std::os::unix::fs::symlink(&final_app_destination, &caskroom_app_link_path) { debug!( "Failed to create symlink {} -> {}: {}", caskroom_app_link_path.display(), final_app_destination.display(), e ); // Decide if this should be a fatal error or just a warning // For now, let's just warn and continue. } else { debug!("Successfully created caskroom link."); // Record the link artifact if created successfully created_artifacts.push(InstalledArtifact::CaskroomLink { link_path: caskroom_app_link_path.clone(), target_path: final_app_destination.clone(), }); } } #[cfg(not(unix))] { debug!( "Symlink creation not supported on this platform. Skipping link for {}.", caskroom_app_link_path.display() ); } debug!("Successfully installed app artifact: {}", app_name); Ok(created_artifacts) // <-- Return the collected artifacts } ``` ## /sp-core/src/build/cask/artifacts/audio_unit_plugin.rs ```rs path="/sp-core/src/build/cask/artifacts/audio_unit_plugin.rs" // ===== sp-core/src/build/cask/artifacts/audio_unit_plugin.rs ===== use std::fs; use std::os::unix::fs::symlink; use std::path::Path; use std::process::Command; use sp_common::config::Config; use sp_common::error::Result; use sp_common::model::cask::Cask; use tracing::debug; use crate::build::cask::InstalledArtifact; /// Installs `audio_unit_plugin` bundles from the staging area into /// `~/Library/Audio/Plug-Ins/Components`, then symlinks them into the Caskroom. /// /// Mirrors Homebrew’s `AudioUnitPlugin < Moved` pattern. pub fn install_audio_unit_plugin( cask: &Cask, stage_path: &Path, cask_version_install_path: &Path, config: &Config, ) -> Result> { let mut installed = Vec::new(); if let Some(artifacts_def) = &cask.artifacts { for art in artifacts_def { if let Some(obj) = art.as_object() { if let Some(entries) = obj.get("audio_unit_plugin").and_then(|v| v.as_array()) { // Target directory for Audio Unit components let dest_dir = config .home_dir() .join("Library") .join("Audio") .join("Plug-Ins") .join("Components"); fs::create_dir_all(&dest_dir)?; for entry in entries { if let Some(bundle_name) = entry.as_str() { let src = stage_path.join(bundle_name); if !src.exists() { debug!( "AudioUnit plugin '{}' not found in staging; skipping", bundle_name ); continue; } let dest = dest_dir.join(bundle_name); if dest.exists() { fs::remove_dir_all(&dest)?; } debug!( "Installing AudioUnit plugin '{}' → '{}'", src.display(), dest.display() ); // Try move, fallback to copy let status = Command::new("mv").arg(&src).arg(&dest).status()?; if !status.success() { Command::new("cp").arg("-R").arg(&src).arg(&dest).status()?; } // Record moved plugin installed.push(InstalledArtifact::App { path: dest.clone() }); // Symlink into Caskroom for reference let link = cask_version_install_path.join(bundle_name); let _ = fs::remove_file(&link); symlink(&dest, &link)?; installed.push(InstalledArtifact::CaskroomLink { link_path: link, target_path: dest, }); } } break; // one stanza only } } } } Ok(installed) } ``` ## /sp-core/src/build/cask/artifacts/binary.rs ```rs path="/sp-core/src/build/cask/artifacts/binary.rs" // ===== sp-core/src/build/cask/artifacts/binary.rs ===== use std::fs; use std::os::unix::fs::symlink; use std::path::Path; use std::process::Command; use sp_common::config::Config; use sp_common::error::{Result, SpError}; use sp_common::model::cask::Cask; use tracing::debug; use crate::build::cask::InstalledArtifact; /// Installs `binary` artifacts, which can be declared as: /// - a simple string: `"foo"` (source and target both `"foo"`) /// - a map: `{ "source": 
"path/in/stage", "target": "name", "chmod": "0755" }` /// - a map with just `"target"`: automatically generate a wrapper script /// /// Copies or symlinks executables into the prefix bin directory, /// and records both the link and caskroom reference. pub fn install_binary( cask: &Cask, stage_path: &Path, cask_version_install_path: &Path, config: &Config, ) -> Result> { let mut installed = Vec::new(); if let Some(artifacts_def) = &cask.artifacts { for art in artifacts_def { if let Some(obj) = art.as_object() { if let Some(entries) = obj.get("binary") { // Normalize into an array let arr = if let Some(arr) = entries.as_array() { arr.clone() } else { vec![entries.clone()] }; let bin_dir = config.bin_dir(); fs::create_dir_all(&bin_dir)?; for entry in arr { // Determine source, target, and optional chmod let (source_rel, target_name, chmod) = if let Some(tgt) = entry.as_str() { // simple form: "foo" (tgt.to_string(), tgt.to_string(), None) } else if let Some(m) = entry.as_object() { let target = m .get("target") .and_then(|v| v.as_str()) .map(String::from) .ok_or_else(|| { SpError::InstallError(format!( "Binary artifact missing 'target': {m:?}" )) })?; let chmod = m.get("chmod").and_then(|v| v.as_str()).map(String::from); // If `source` is provided, use it; otherwise generate wrapper let source = if let Some(src) = m.get("source").and_then(|v| v.as_str()) { src.to_string() } else { // generate wrapper script in caskroom let wrapper_name = format!("{target}.wrapper.sh"); let wrapper_path = cask_version_install_path.join(&wrapper_name); // assume the real executable lives inside the .app bundle let app_name = format!("{}.app", cask.display_name()); let exe_path = format!("/Applications/{app_name}/Contents/MacOS/{target}"); let script = format!("#!/usr/bin/env bash\nexec \"{exe_path}\" \"$@\"\n"); fs::write(&wrapper_path, script)?; Command::new("chmod") .arg("+x") .arg(&wrapper_path) .status()?; wrapper_name }; (source, target, chmod) } else { debug!("Invalid binary artifact entry: {:?}", entry); continue; }; let src_path = stage_path.join(&source_rel); if !src_path.exists() { debug!("Binary source '{}' not found, skipping", src_path.display()); continue; } // Link into bin_dir let link_path = bin_dir.join(&target_name); let _ = fs::remove_file(&link_path); debug!( "Linking binary '{}' → '{}'", src_path.display(), link_path.display() ); symlink(&src_path, &link_path)?; // Apply chmod if specified if let Some(mode) = chmod.as_deref() { let _ = Command::new("chmod").arg(mode).arg(&link_path).status(); } installed.push(InstalledArtifact::BinaryLink { link_path: link_path.clone(), target_path: src_path.clone(), }); // Also create a Caskroom symlink for reference let caskroom_link = cask_version_install_path.join(&target_name); let _ = fs::remove_file(&caskroom_link); symlink(&link_path, &caskroom_link)?; installed.push(InstalledArtifact::CaskroomLink { link_path: caskroom_link, target_path: link_path, }); } // Only one binary stanza per cask break; } } } } Ok(installed) } ``` ## /sp-core/src/build/cask/artifacts/colorpicker.rs ```rs path="/sp-core/src/build/cask/artifacts/colorpicker.rs" // ===== sp-core/src/build/cask/artifacts/colorpicker.rs ===== use std::fs; use std::os::unix::fs::symlink; use std::path::Path; use std::process::Command; use sp_common::config::Config; use sp_common::error::Result; use sp_common::model::cask::Cask; use tracing::debug; use crate::build::cask::InstalledArtifact; /// Installs any `colorpicker` stanzas from the Cask definition. 
/// /// Homebrew’s `Colorpicker` artifact simply subclasses `Moved` with /// `dirmethod :colorpickerdir` → `~/Library/ColorPickers` :contentReference[oaicite:3]{index=3}. pub fn install_colorpicker( cask: &Cask, stage_path: &Path, cask_version_install_path: &Path, config: &Config, ) -> Result> { let mut installed = Vec::new(); if let Some(artifacts_def) = &cask.artifacts { for art in artifacts_def { if let Some(obj) = art.as_object() { if let Some(entries) = obj.get("colorpicker").and_then(|v| v.as_array()) { // For each declared bundle name: for entry in entries { if let Some(bundle_name) = entry.as_str() { let src = stage_path.join(bundle_name); if !src.exists() { debug!( "Colorpicker bundle '{}' not found in stage; skipping", bundle_name ); continue; } // Ensure ~/Library/ColorPickers exists // :contentReference[oaicite:4]{index=4} let dest_dir = config .home_dir() // e.g. /Users/alxknt .join("Library") .join("ColorPickers"); fs::create_dir_all(&dest_dir)?; let dest = dest_dir.join(bundle_name); // Remove any previous copy if dest.exists() { fs::remove_dir_all(&dest)?; } debug!( "Moving colorpicker '{}' → '{}'", src.display(), dest.display() ); // mv, fallback to cp -R if necessary (cross‑device) let status = Command::new("mv").arg(&src).arg(&dest).status()?; if !status.success() { Command::new("cp").arg("-R").arg(&src).arg(&dest).status()?; } // Record as a moved artifact (bundle installed) installed.push(InstalledArtifact::App { path: dest.clone() }); // Symlink back into Caskroom for reference // :contentReference[oaicite:5]{index=5} let link = cask_version_install_path.join(bundle_name); let _ = fs::remove_file(&link); symlink(&dest, &link)?; installed.push(InstalledArtifact::CaskroomLink { link_path: link, target_path: dest, }); } } break; // only one `colorpicker` stanza per cask } } } } Ok(installed) } ``` ## /sp-core/src/build/cask/artifacts/dictionary.rs ```rs path="/sp-core/src/build/cask/artifacts/dictionary.rs" // ===== sp-core/src/build/cask/artifacts/dictionary.rs ===== use std::fs; use std::os::unix::fs::symlink; use std::path::Path; use std::process::Command; use sp_common::config::Config; use sp_common::error::Result; use sp_common::model::cask::Cask; use tracing::debug; use crate::build::cask::InstalledArtifact; /// Implements the `dictionary` stanza by moving each declared /// `.dictionary` bundle from the staging area into `~/Library/Dictionaries`, /// then symlinking it in the Caskroom. /// /// Homebrew’s Ruby definition is simply: /// \`\`\`ruby /// class Dictionary < Moved; end /// \`\`\` /// :contentReference[oaicite:2]{index=2} pub fn install_dictionary( cask: &Cask, stage_path: &Path, cask_version_install_path: &Path, config: &Config, ) -> Result> { let mut installed = Vec::new(); // Find any `dictionary` arrays in the raw JSON artifacts if let Some(artifacts_def) = &cask.artifacts { for art in artifacts_def { if let Some(obj) = art.as_object() { if let Some(entries) = obj.get("dictionary").and_then(|v| v.as_array()) { for entry in entries { if let Some(bundle_name) = entry.as_str() { let src = stage_path.join(bundle_name); if !src.exists() { debug!( "Dictionary bundle '{}' not found in staging; skipping", bundle_name ); continue; } // Standard user dictionary directory: ~/Library/Dictionaries // :contentReference[oaicite:3]{index=3} let dest_dir = config .home_dir() // e.g. 
/Users/alxknt .join("Library") .join("Dictionaries"); fs::create_dir_all(&dest_dir)?; let dest = dest_dir.join(bundle_name); // Remove any previous install if dest.exists() { fs::remove_dir_all(&dest)?; } debug!( "Moving dictionary '{}' → '{}'", src.display(), dest.display() ); // Try a direct move; fall back to recursive copy let status = Command::new("mv").arg(&src).arg(&dest).status()?; if !status.success() { Command::new("cp").arg("-R").arg(&src).arg(&dest).status()?; } // Record the moved bundle installed.push(InstalledArtifact::App { path: dest.clone() }); // Symlink back into Caskroom for reference let link = cask_version_install_path.join(bundle_name); let _ = fs::remove_file(&link); symlink(&dest, &link)?; installed.push(InstalledArtifact::CaskroomLink { link_path: link, target_path: dest, }); } } break; // Only one `dictionary` stanza per cask } } } } Ok(installed) } ``` ## /sp-core/src/build/cask/artifacts/font.rs ```rs path="/sp-core/src/build/cask/artifacts/font.rs" // ===== sp-core/src/build/cask/artifacts/font.rs ===== use std::fs; use std::os::unix::fs::symlink; use std::path::Path; use std::process::Command; use sp_common::config::Config; use sp_common::error::Result; use sp_common::model::cask::Cask; use tracing::debug; use crate::build::cask::InstalledArtifact; /// Implements the `font` stanza by moving each declared /// font file or directory from the staging area into /// `~/Library/Fonts`, then symlinking it in the Caskroom. /// /// Mirrors Homebrew’s `Dictionary < Moved` and `Colorpicker < Moved` pattern. pub fn install_font( cask: &Cask, stage_path: &Path, cask_version_install_path: &Path, config: &Config, ) -> Result> { let mut installed = Vec::new(); // Look for "font" entries in the JSON artifacts if let Some(artifacts_def) = &cask.artifacts { for art in artifacts_def { if let Some(obj) = art.as_object() { if let Some(entries) = obj.get("font").and_then(|v| v.as_array()) { // Target directory for user fonts let dest_dir = config.home_dir().join("Library").join("Fonts"); fs::create_dir_all(&dest_dir)?; for entry in entries { if let Some(name) = entry.as_str() { let src = stage_path.join(name); if !src.exists() { debug!("Font '{}' not found in staging; skipping", name); continue; } let dest = dest_dir.join(name); if dest.exists() { fs::remove_file(&dest)?; } debug!("Installing font '{}' → '{}'", src.display(), dest.display()); // Try move, fallback to copy let status = Command::new("mv").arg(&src).arg(&dest).status()?; if !status.success() { Command::new("cp").arg("-R").arg(&src).arg(&dest).status()?; } // Record moved font installed.push(InstalledArtifact::App { path: dest.clone() }); // Symlink into Caskroom for reference let link = cask_version_install_path.join(name); let _ = fs::remove_file(&link); symlink(&dest, &link)?; installed.push(InstalledArtifact::CaskroomLink { link_path: link, target_path: dest, }); } } break; // single font stanza per cask } } } } Ok(installed) } ``` ## /sp-core/src/build/cask/artifacts/input_method.rs ```rs path="/sp-core/src/build/cask/artifacts/input_method.rs" // ===== sp-core/src/build/cask/artifacts/input_method.rs ===== use std::fs; use std::os::unix::fs as unix_fs; use std::path::Path; use sp_common::config::Config; use sp_common::error::Result; use sp_common::model::cask::Cask; use crate::build::cask::{InstalledArtifact, write_cask_manifest}; /// Install `input_method` artifacts from the staged directory into /// `~/Library/Input Methods` and record installed artifacts. 
pub fn install_input_method( cask: &Cask, stage_path: &Path, cask_version_install_path: &Path, config: &Config, ) -> Result> { let mut installed = Vec::new(); // Ensure we have an array of input_method names if let Some(artifacts_def) = &cask.artifacts { for artifact_value in artifacts_def { if let Some(obj) = artifact_value.as_object() { if let Some(names) = obj.get("input_method").and_then(|v| v.as_array()) { for name_val in names { if let Some(name) = name_val.as_str() { let source = stage_path.join(name); if source.exists() { // Target directory: ~/Library/Input Methods let target_dir = config.home_dir().join("Library").join("Input Methods"); if !target_dir.exists() { fs::create_dir_all(&target_dir)?; } let target = target_dir.join(name); // Remove existing input method if present if target.exists() { fs::remove_dir_all(&target)?; } // Move (or rename) the staged bundle fs::rename(&source, &target) .or_else(|_| unix_fs::symlink(&source, &target))?; // Record the main artifact installed.push(InstalledArtifact::App { path: target.clone(), }); // Create a caskroom symlink for uninstallation let link_path = cask_version_install_path.join(name); if link_path.exists() { fs::remove_file(&link_path)?; } #[cfg(unix)] std::os::unix::fs::symlink(&target, &link_path)?; installed.push(InstalledArtifact::CaskroomLink { link_path, target_path: target, }); } } } } } } } // Write manifest for these artifacts write_cask_manifest(cask, cask_version_install_path, installed.clone())?; Ok(installed) } ``` ## /sp-core/src/build/cask/artifacts/installer.rs ```rs path="/sp-core/src/build/cask/artifacts/installer.rs" // ===== sp-core/src/build/cask/artifacts/installer.rs ===== use std::path::Path; use std::process::{Command, Stdio}; use sp_common::config::Config; use sp_common::error::{Result, SpError}; use sp_common::model::cask::Cask; use tracing::debug; use crate::build::cask::InstalledArtifact; // Helper to validate that the executable is a filename (relative, no '/' or "..") fn validate_filename_or_relative_path(file: &str) -> Result { if file.starts_with("/") || file.contains("..") || file.contains("/") { return Err(SpError::Generic(format!( "Invalid executable filename: {file}" ))); } Ok(file.to_string()) } // Helper to validate a command argument based on allowed characters or allowed option form fn validate_argument(arg: &str) -> Result { if arg.starts_with("-") { return Ok(arg.to_string()); } if arg.starts_with("/") || arg.contains("..") || arg.contains("/") { return Err(SpError::Generic(format!("Invalid argument: {arg}"))); } if !arg .chars() .all(|c| c.is_alphanumeric() || c == '-' || c == '_' || c == '.') { return Err(SpError::Generic(format!( "Invalid characters in argument: {arg}" ))); } Ok(arg.to_string()) } /// Implements the `installer` stanza: /// - `manual`: prints instructions to open the staged path. /// - `script`: runs the given executable with args, under sudo if requested. /// /// Mirrors Homebrew’s `Cask::Artifact::Installer` behavior :contentReference[oaicite:1]{index=1}. 
## /sp-core/src/build/cask/artifacts/installer.rs

```rs path="/sp-core/src/build/cask/artifacts/installer.rs"
// ===== sp-core/src/build/cask/artifacts/installer.rs =====

use std::path::Path;
use std::process::{Command, Stdio};

use sp_common::config::Config;
use sp_common::error::{Result, SpError};
use sp_common::model::cask::Cask;
use tracing::debug;

use crate::build::cask::InstalledArtifact;

// Helper to validate that the executable is a filename (relative, no '/' or "..")
fn validate_filename_or_relative_path(file: &str) -> Result<String> {
    if file.starts_with('/') || file.contains("..") || file.contains('/') {
        return Err(SpError::Generic(format!(
            "Invalid executable filename: {file}"
        )));
    }
    Ok(file.to_string())
}

// Helper to validate a command argument based on allowed characters or allowed option form
fn validate_argument(arg: &str) -> Result<String> {
    if arg.starts_with('-') {
        return Ok(arg.to_string());
    }
    if arg.starts_with('/') || arg.contains("..") || arg.contains('/') {
        return Err(SpError::Generic(format!("Invalid argument: {arg}")));
    }
    if !arg
        .chars()
        .all(|c| c.is_alphanumeric() || c == '-' || c == '_' || c == '.')
    {
        return Err(SpError::Generic(format!(
            "Invalid characters in argument: {arg}"
        )));
    }
    Ok(arg.to_string())
}

/// Implements the `installer` stanza:
/// - `manual`: prints instructions to open the staged path.
/// - `script`: runs the given executable with args, under sudo if requested.
///
/// Mirrors Homebrew’s `Cask::Artifact::Installer` behavior.
pub fn run_installer(
    cask: &Cask,
    stage_path: &Path,
    _cask_version_install_path: &Path,
    _config: &Config,
) -> Result<Vec<InstalledArtifact>> {
    let mut installed = Vec::new();

    // Find the `installer` definitions in the raw JSON artifacts
    if let Some(artifacts_def) = &cask.artifacts {
        for art in artifacts_def {
            if let Some(obj) = art.as_object() {
                if let Some(insts) = obj.get("installer").and_then(|v| v.as_array()) {
                    for inst in insts {
                        if let Some(inst_obj) = inst.as_object() {
                            // `manual` installers just tell the user what to open
                            if let Some(man) = inst_obj.get("manual").and_then(|v| v.as_str()) {
                                debug!(
                                    "Cask {} requires manual install. To finish:\n  open {}",
                                    cask.token,
                                    stage_path.join(man).display()
                                );
                                continue;
                            }

                            let exe_key = if inst_obj.contains_key("script") {
                                "script"
                            } else {
                                "executable"
                            };
                            let executable = inst_obj
                                .get(exe_key)
                                .and_then(|v| v.as_str())
                                .ok_or_else(|| {
                                    SpError::Generic(format!(
                                        "installer stanza missing '{exe_key}' field"
                                    ))
                                })?;

                            let args: Vec<String> = inst_obj
                                .get("args")
                                .and_then(|v| v.as_array())
                                .map(|arr| {
                                    arr.iter()
                                        .filter_map(|a| a.as_str().map(String::from))
                                        .collect()
                                })
                                .unwrap_or_default();

                            let use_sudo = inst_obj
                                .get("sudo")
                                .and_then(|v| v.as_bool())
                                .unwrap_or(false);

                            let validated_executable =
                                validate_filename_or_relative_path(executable)?;
                            let mut validated_args = Vec::new();
                            for arg in &args {
                                validated_args.push(validate_argument(arg)?);
                            }

                            let script_path = stage_path.join(&validated_executable);
                            if !script_path.exists() {
                                return Err(SpError::NotFound(format!(
                                    "Installer script not found: {}",
                                    script_path.display()
                                )));
                            }

                            debug!(
                                "Running installer script '{}' for cask {}",
                                script_path.display(),
                                cask.token
                            );

                            let mut cmd = if use_sudo {
                                let mut c = Command::new("sudo");
                                c.arg(script_path.clone());
                                c
                            } else {
                                Command::new(script_path.clone())
                            };
                            cmd.args(&validated_args);
                            cmd.stdin(Stdio::null())
                                .stdout(Stdio::inherit())
                                .stderr(Stdio::inherit());

                            let status = cmd.status().map_err(|e| {
                                SpError::Generic(format!("Failed to spawn installer script: {e}"))
                            })?;
                            if !status.success() {
                                return Err(SpError::InstallError(format!(
                                    "Installer script exited with {status}"
                                )));
                            }

                            installed
                                .push(InstalledArtifact::CaskroomReference { path: script_path });
                        }
                    }
                }
            }
        }
    }

    Ok(installed)
}
```
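The two validators lend themselves to table-style unit tests. A sketch of cases the code above accepts and rejects, written as if it lived in the same module (the test module itself is illustrative, not part of the repo):

```rs
#[cfg(test)]
mod validator_sketch {
    use super::*;

    #[test]
    fn executables_must_be_bare_filenames() {
        assert!(validate_filename_or_relative_path("install.sh").is_ok());
        assert!(validate_filename_or_relative_path("/usr/bin/env").is_err()); // absolute
        assert!(validate_filename_or_relative_path("../escape.sh").is_err()); // traversal
        assert!(validate_filename_or_relative_path("sub/dir.sh").is_err()); // any slash
    }

    #[test]
    fn args_allow_options_and_plain_tokens_only() {
        // Anything starting with '-' is passed through verbatim; this is
        // tolerable only because args go to Command::args as discrete argv
        // entries, never through a shell.
        assert!(validate_argument("--silent").is_ok());
        assert!(validate_argument("target_1.0").is_ok()); // alnum plus '-', '_', '.'
        assert!(validate_argument("/tmp/x").is_err()); // absolute path
        assert!(validate_argument("a b").is_err()); // space not in the allowlist
    }
}
```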
## /sp-core/src/build/cask/artifacts/internet_plugin.rs

```rs path="/sp-core/src/build/cask/artifacts/internet_plugin.rs"
// ===== sp-core/src/build/cask/artifacts/internet_plugin.rs =====

use std::fs;
use std::os::unix::fs::symlink;
use std::path::Path;
use std::process::Command;

use sp_common::config::Config;
use sp_common::error::Result;
use sp_common::model::cask::Cask;
use tracing::debug;

use crate::build::cask::InstalledArtifact;

/// Implements the `internet_plugin` stanza by moving each declared
/// internet plugin bundle from the staging area into
/// `~/Library/Internet Plug-Ins`, then symlinking it in the Caskroom.
///
/// Mirrors Homebrew’s `InternetPlugin < Moved` pattern.
pub fn install_internet_plugin(
    cask: &Cask,
    stage_path: &Path,
    cask_version_install_path: &Path,
    config: &Config,
) -> Result<Vec<InstalledArtifact>> {
    let mut installed = Vec::new();

    // Look for "internet_plugin" entries in the JSON artifacts
    if let Some(artifacts_def) = &cask.artifacts {
        for art in artifacts_def {
            if let Some(obj) = art.as_object() {
                if let Some(entries) = obj.get("internet_plugin").and_then(|v| v.as_array()) {
                    // Target directory for user internet plugins
                    let dest_dir = config.home_dir().join("Library").join("Internet Plug-Ins");
                    fs::create_dir_all(&dest_dir)?;

                    for entry in entries {
                        if let Some(name) = entry.as_str() {
                            let src = stage_path.join(name);
                            if !src.exists() {
                                debug!("Internet plugin '{}' not found in staging; skipping", name);
                                continue;
                            }

                            let dest = dest_dir.join(name);
                            if dest.exists() {
                                fs::remove_dir_all(&dest)?;
                            }

                            debug!(
                                "Installing internet plugin '{}' → '{}'",
                                src.display(),
                                dest.display()
                            );

                            // Try move, fallback to copy
                            let status = Command::new("mv").arg(&src).arg(&dest).status()?;
                            if !status.success() {
                                Command::new("cp").arg("-R").arg(&src).arg(&dest).status()?;
                            }

                            // Record moved plugin
                            installed.push(InstalledArtifact::App { path: dest.clone() });

                            // Symlink into Caskroom for reference
                            let link = cask_version_install_path.join(name);
                            let _ = fs::remove_file(&link);
                            symlink(&dest, &link)?;
                            installed.push(InstalledArtifact::CaskroomLink {
                                link_path: link,
                                target_path: dest,
                            });
                        }
                    }
                    break; // single stanza
                }
            }
        }
    }

    Ok(installed)
}
```
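The `mv`-then-`cp -R` fallback appears verbatim in the dictionary, font, and internet-plugin handlers. A sketch of one way it could be shared (`move_or_copy` and its placement are hypothetical, not existing code in this crate):

```rs
use std::io;
use std::path::Path;
use std::process::Command;

/// Try a direct `mv`; if that fails (e.g. the source sits on a
/// read-only mount), fall back to a recursive copy with `cp -R`.
fn move_or_copy(src: &Path, dest: &Path) -> io::Result<()> {
    if Command::new("mv").arg(src).arg(dest).status()?.success() {
        return Ok(());
    }
    let status = Command::new("cp").arg("-R").arg(src).arg(dest).status()?;
    if status.success() {
        Ok(())
    } else {
        Err(io::Error::other(format!(
            "failed to copy {} → {}",
            src.display(),
            dest.display()
        )))
    }
}
```

Each call site would keep its own destination directory and artifact bookkeeping; only the move-with-fallback step is common.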