```
├── .github/
│   ├── ISSUE_TEMPLATE/
│   │   ├── bug_report.yml
│   │   └── feature_request.yml
│   ├── PULL_REQUEST_TEMPLATE.md
│   ├── scripts/
│   │   ├── integration_detect_changed_files.sh
│   │   ├── integration_generate_pr_content.sh
│   │   └── integration_pr_review.py
│   └── workflows/
│       ├── backend.yml
│       ├── devportal.yml
│       └── integration_code_review.yml
├── .gitignore
├── .pre-commit-config.yaml
├── CLA-Corporate.md
├── CLA-Individual.md
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── LICENSE
├── README.md
├── SECURITY.md
└── backend/
    ├── .dockerignore
    ├── .env.example
    ├── .env.shared
    ├── .gitignore
    ├── .prettierrc.json
    ├── .python-version
    ├── Dockerfile.runner
    ├── Dockerfile.server
    ├── README.md
    └── aci/
        ├── __init__.py
        ├── alembic/
        │   ├── README
        │   ├── env.py
        │   ├── script.py.mako
        │   └── versions/
        │       ├── 2025_01_10_1906-c6f47d7d2fa1_first_migration.py
        │       ├── 2025_01_27_1657-adcfaa729f61_added_custom_instructions_to_agent_table.py
        │       ├── 2025_02_10_2318-6482e8fa201e_store_app_name_function_name_for_non_.py
        │       ├── 2025_03_08_1922-70dd635d80d4_add_new_protocol_enum_value.py
        │       ├── 2025_03_10_2339-28702a5576f5_change_agent_level_app_function_acl.py
        │       ├── 2025_03_11_2315-949afaf258c3_json_to_jsonb.py
        │       ├── 2025_03_14_2000-1b82aeb7431f_create_secret_table.py
        │       ├── 2025_04_11_1232-7a159de1064c_add_nullable_org_id_column.py
        │       ├── 2025_04_11_1236-a79cdd14460e_make_org_id_of_project_table_not_.py
        │       ├── 2025_04_11_1237-af2ecf7ca19a_drop_owner_id_column_of_project_table.py
        │       ├── 2025_04_11_1238-bce2fbe6273b_drop_user_entity_organization_etc_tables.py
        │       ├── 2025_04_15_0929-c5978747c602_add_encrypted_key_and_key_hmac_columns_.py
        │       ├── 2025_04_15_0930-0846452f51ac_drop_the_key_column_of_the_api_keys_.py
        │       └── 2025_04_15_0932-7ecafab6f8f9_rename_encrypted_key_column_to_key_and_.py
        ├── cli/
        │   ├── __init__.py
        │   ├── __main__.py
        │   ├── aci.py
        │   ├── commands/
        │   │   ├── __init__.py
        │   │   ├── create_agent.py
        │   │   ├── create_project.py
        │   │   ├── create_random_api_key.py
        │   │   ├── delete_app.py
        │   │   ├── fuzzy_test_function_execution.py
        │   │   ├── get_app.py
        │   │   ├── rename_app.py
        │   │   ├── update_agent.py
        │   │   ├── upsert_app.py
        │   │   └── upsert_functions.py
        │   ├── config.py
        │   └── tests/
        │       ├── __init__.py
        │       ├── conftest.py
        │       ├── test_upsert_app.py
        │       └── test_upsert_functions.py
        └── common/
            ├── __init__.py
            ├── config.py
            ├── db/
            │   ├── crud/
            │   │   ├── __init__.py
            │   │   ├── app_configurations.py
            │   │   ├── apps.py
            │   │   ├── functions.py
            │   │   ├── linked_accounts.py
            │   │   ├── projects.py
            │   │   └── secret.py
            │   ├── custom_sql_types.py
            │   └── sql_models.py
            ├── embeddings.py
            ├── encryption.py
            ├── enums.py
            ├── exceptions.py
```

## /.github/ISSUE_TEMPLATE/bug_report.yml

```yml path="/.github/ISSUE_TEMPLATE/bug_report.yml"
name: 🐛 Bug Report
description: Report a bug or issue in the ACI backend API, dev portal, or SDK.
title: "[BUG] "
labels: [bug]
body:
  - type: markdown
    attributes:
      value: |
        Please make sure the issue is clear, reproducible, and easy to act on.
        (Feel free to ask in [Discussions](https://github.com/aipotheosis-labs/aci/discussions) first if unsure)
  - type: checkboxes
    id: pre-requisites
    attributes:
      label: "Required Pre-requisites"
      description: "Please make sure you've completed the following steps before submitting. Thank you! 🙏"
      options:
        - label: "I have read the [Documentation](https://www.aci.dev/docs)"
          required: true
        - label: "I have searched the [Issue Tracker](https://github.com/aipotheosis-labs/aci/issues) and [Discussions](https://github.com/aipotheosis-labs/aci/discussions) and confirmed that this hasn't been reported yet."
          required: true
        - label: "Consider asking in [Discussions](https://github.com/aipotheosis-labs/aci/discussions) first"
          required: false
  - type: textarea
    id: description
    attributes:
      label: "Description"
      description: "Please provide a clear and concise description of the bug or issue, the expected behavior, the actual behavior, and steps to reproduce the issue. Thank you! 🙏"
      placeholder: |
        Description:
        Expected behavior:
        Actual behavior:
        Steps to reproduce:
    validations:
      required: true
  - type: textarea
    id: additional-context
    attributes:
      label: Additional context
      description: >-
        Add any other context about the problem here. Screenshots, logs, etc. may also be helpful.
        If you know or suspect the reason for this bug, paste the code lines and suggest modifications.
    validations:
      required: false
```

## /.github/ISSUE_TEMPLATE/feature_request.yml

```yml path="/.github/ISSUE_TEMPLATE/feature_request.yml"
name: 💡 Feature Request
description: Request a new feature or enhancement in the ACI backend API, dev portal, or SDK.
title: "[FEATURE] "
labels: [enhancement]
body:
  - type: markdown
    attributes:
      value: |
        Please make sure the feature request is clear and easy to act on.
        (Feel free to ask in [Discussions](https://github.com/aipotheosis-labs/aci/discussions) first if unsure)
  - type: checkboxes
    id: pre-requisites
    attributes:
      label: "Required Pre-requisites"
      description: "Please make sure you've completed the following steps before submitting. Thank you!"
      options:
        - label: "I have read the [Documentation](https://www.aci.dev/docs)"
          required: true
        - label: "I have searched the [Issue Tracker](https://github.com/aipotheosis-labs/aci/issues) and [Discussions](https://github.com/aipotheosis-labs/aci/discussions) and confirmed that this hasn't been requested yet."
          required: true
        - label: "Consider asking in [Discussions](https://github.com/aipotheosis-labs/aci/discussions) first"
          required: false
  - type: textarea
    id: motivation
    attributes:
      label: "Motivation"
      description: "Please provide a clear and concise description of the feature request, the motivation for the feature, and any additional context that would be helpful. Thank you! 🙏"
      placeholder: |
        Motivation:
        Additional context:
    validations:
      required: true
  - type: textarea
    id: proposed-solution
    attributes:
      label: "Proposed Solution"
      description: "Please provide a description of the proposed solution."
      placeholder: |
        Proposed solution:
        Additional context:
    validations:
      required: true
```
## /.github/PULL_REQUEST_TEMPLATE.md

### 🏷️ Ticket

[link the issue or ticket you are addressing in this PR here, or use the **Development** section on the right sidebar to link the issue]

### 📝 Description

[Describe your changes in detail (optional if the issue you linked already contains a detailed description of the change)]

### 🎥 Demo (if applicable)

### 📸 Screenshots (if applicable)

### ✅ Checklist

- [ ] I have signed the [Contributor License Agreement]() (CLA) and read the [contributing guide](./../CONTRIBUTING.md) (required)
- [ ] I have linked this PR to an issue or a ticket (required)
- [ ] I have updated the documentation related to my change if needed
- [ ] I have updated the tests accordingly (required for a bug fix or a new feature)
- [ ] All checks on CI passed

## /.github/scripts/integration_detect_changed_files.sh

```sh path="/.github/scripts/integration_detect_changed_files.sh"
#!/bin/bash

# Takes base SHA and head SHA as inputs
BASE_SHA="$1"
HEAD_SHA="$2"

# Get the list of changed files, focusing on integration-related files
CHANGED_FILES=$(git diff --name-only $BASE_SHA $HEAD_SHA | grep -E '^(apps)/' | grep -E '\.json$' || echo "")

# Echo debugging information to stderr, not stdout
# This way it won't be captured in $GITHUB_OUTPUT
echo "Changed files: $CHANGED_FILES" >&2

# Check if there are integration-related changes
if [ -z "$CHANGED_FILES" ]; then
  echo "No relevant integration files changed, skipping review" >&2
  echo "skip=true"
else
  echo "skip=false"
fi

# Output the changed files list - using proper GitHub Actions output syntax
echo "changed_files<<EOF"
echo "$CHANGED_FILES"
echo "EOF"
```
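The script's final block relies on GitHub Actions' multiline-output convention: a `name<<DELIMITER` header line, the value, then the delimiter alone, all appended to the file that `$GITHUB_OUTPUT` points to. As a minimal sketch of the same protocol from Python (the helper name below is ours, not part of this repo):

```python
import os

def write_multiline_output(name: str, value: str, delimiter: str = "EOF") -> None:
    """Append a multiline step output using the name<<DELIMITER ... DELIMITER format."""
    # $GITHUB_OUTPUT names a file that the Actions runner parses after each step.
    with open(os.environ["GITHUB_OUTPUT"], "a") as fh:
        fh.write(f"{name}<<{delimiter}\n{value}\n{delimiter}\n")

# e.g. write_multiline_output("changed_files", "apps/slack/app.json\napps/slack/functions.json")
```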
## /.github/scripts/integration_generate_pr_content.sh

```sh path="/.github/scripts/integration_generate_pr_content.sh"
> pr_content.txt
```

## /.github/scripts/integration_pr_review.py

```py path="/.github/scripts/integration_pr_review.py"
import os
import sys

import anthropic

# Check if the file exists
if not os.path.exists("pr_content.txt"):
    print("ERROR: The PR content file does not exist")
    sys.exit(1)

# Read the content of the PR
with open("pr_content.txt") as f:
    pr_content = f.read()

# Print the content of the PR
print("PR Content:")
print(pr_content)

# Define the prompt for Claude
prompt = f"""
## Roles
You are a senior integration engineer with deep expertise in API configurations, function specifications, and integration documentation. You are reviewing a pull request that modified integration configuration files.

## Objectives
- Analyze the changes in a pull request that modified integration files, focusing on the required and visible fields in the function specifications
- Identify potential issues, improvements, and best practices, and provide actionable feedback for the developer
- Evaluate the overall quality of the integration changes

## Integration Context
This project has integrations in the 'apps/' directory:
- Each subdirectory represents a different integration (e.g., Discord, Slack, GitHub)
- Each integration typically has an app.json file (containing configuration) and a functions.json file (defining API operations)
- The function specifications follow a structured format with metadata and parameters

## Workflow
1. Search relevant documentation for the integration
2. Check the pull request against the API documentation, the function specification rules, and the special rules below, and provide feedback

## Function Specification (Function Object)
- The function specification includes the metadata of the API, such as name, description, tags, visibility, active, protocol, protocol_data, and parameters.
- The function spec does not include the return value, because the return value is not relevant to the LLM in the current design. Each "function" object should detail a specific API operation or endpoint. This includes information such as the operation name, HTTP method, URL path, parameters, etc., as well as some custom fields like "visible" and "required".
- For the metadata fields, follow these rules:
  - name: the name of the function; should be unique, uppercase, beginning with the application name, then a double underscore, then the function name, like "GITHUB__GET_USER"
  - description: the description of the function; should be a short sentence.
  - tags: the tags of the function; should be a list of strings.
  - visibility: the visibility of the function; should be a string, either "public" or "private"; default is "public".
  - active: the active status of the function; should be a boolean; default is true.
  - protocol: the protocol of the function; should be a string, either "rest" or "graphql"; default is "rest".
  - protocol_data: the protocol data of the function; should be an object including method, path, and server_url.
    - method: the HTTP method of the function, e.g. "GET", "POST", "PUT", "DELETE".
    - path: the path of the function, e.g. "/users", "/repos".
    - server_url: the server URL of the function, e.g. "https://api.github.com", "https://discord.com/api/v10".
  - parameters: the parameters of the function; should be an object.
    - For the required field: it should be a list of required parameters, filled in according to the original markdown documentation.
    - For the visible field: think about whether we need to show this parameter to the LLM. Usually we don't need to show parameters like version.

## Special Rules
- Authorization information such as tokens or API keys is configured in the app.json file and should not appear in the function specification.
- The version number and API base path belong in the `server_url` field, not in the `path` field of the `protocol_data` object.

## Output Format
For each issue you find, include:
- The file and line numbers
- A description of the problem
- A suggested solution or improvement

At the end, provide:
- A summary of the changes and their impact on the integration functionality
- An overall assessment rating (High quality / Acceptable / Needs improvement)
- Actionable next steps for the developer

## Pull Request Content
{pr_content}
"""

# Get API key
api_key = os.environ.get("ANTHROPIC_API_KEY")
if not api_key:
    print("ERROR: ANTHROPIC_API_KEY environment variable is not set")
    sys.exit(1)

try:
    # Create the client
    client = anthropic.Anthropic(api_key=api_key)

    # Call the API with streaming for long requests
    with open("claude_review.md", "w") as f:
        # Start the streaming request
        stream = client.messages.create(
            model="claude-3-7-sonnet-20250219",
            max_tokens=64000,
            temperature=0.0,
            system=(
                "You are an expert code reviewer specialized in API integrations and "
                "configurations. Focus on analyzing the diff sections where lines are "
                "prefixed with + (additions) or - (deletions)."
            ),
            messages=[{"role": "user", "content": prompt}],
            stream=True,
        )

        # Process the stream, writing text deltas to the review file as they arrive
        for event in stream:
            if event.type == "content_block_delta":
                f.write(event.delta.text)

    print("Integration code review completed successfully")
except Exception as e:
    print(f"ERROR: Failed to call Anthropic API: {e!s}")
    sys.exit(1)
```
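To ground the specification rules embedded in the prompt above, here is a hypothetical functions.json entry written as a Python literal; all values are invented for illustration and are not taken from an actual integration file:

```python
# Hypothetical function object following the prompt's rules: uppercase
# APP__FUNCTION name, version prefix kept in server_url (not path), and
# JSON-Schema-style parameters with the custom "visible" and "required" fields.
example_function = {
    "name": "GITHUB__GET_USER",
    "description": "Get a GitHub user's public profile.",
    "tags": ["github", "users"],
    "visibility": "public",
    "active": True,
    "protocol": "rest",
    "protocol_data": {
        "method": "GET",
        "path": "/users/{username}",          # endpoint path only
        "server_url": "https://api.github.com",  # host and any version prefix live here
    },
    "parameters": {
        "type": "object",
        "properties": {
            "username": {"type": "string", "visible": True},
        },
        "required": ["username"],  # mirrors the upstream API documentation
    },
}
```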
## /.github/workflows/backend.yml

```yml path="/.github/workflows/backend.yml"
name: Backend Checks

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
  schedule:
    - cron: "0 9 * * *"

jobs:
  lint:
    name: Format & Lint
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Install uv
        uses: astral-sh/setup-uv@v5
      - name: "Set up Python"
        uses: actions/setup-python@v5
        with:
          python-version-file: "backend/.python-version"
      - name: Install Dependencies with uv
        working-directory: backend
        run: |
          uv sync --all-extras --dev
      - name: Run Ruff Linting
        working-directory: backend
        run: |
          uv run ruff check .
      - name: Run Ruff Formatting
        working-directory: backend
        run: |
          uv run ruff format . --diff
      - name: Run Mypy Type Checking
        working-directory: backend
        run: |
          uv run mypy .

  test:
    name: Compose Tests
    runs-on: ubuntu-latest
    timeout-minutes: 30
    env:
      SERVER_OPENAI_API_KEY: ${{ secrets.SERVER_OPENAI_API_KEY }}
      SERVER_PROPELAUTH_API_KEY: ${{ secrets.SERVER_PROPELAUTH_API_KEY }}
      SERVER_SVIX_SIGNING_SECRET: ${{ secrets.SERVER_SVIX_SIGNING_SECRET }}
      CLI_OPENAI_API_KEY: ${{ secrets.CLI_OPENAI_API_KEY }}
    steps:
      - uses: actions/checkout@v4
      - name: Set up Docker Compose
        uses: docker/setup-compose-action@v1
      - name: Run compose test
        id: run_tests
        working-directory: backend
        run: |
          docker compose -f compose.yml -f compose.ci.yml run runner
      - name: Get service names
        id: service_names
        working-directory: backend
        if: always()
        run: |
          SERVICE_NAMES=$(docker compose config --services)
          echo "names<<EOF" >> $GITHUB_OUTPUT
          echo "$SERVICE_NAMES" >> $GITHUB_OUTPUT
          echo "EOF" >> $GITHUB_OUTPUT
      - name: Print logs for each service
        if: always()
        working-directory: backend
        run: |
          echo "${{ steps.service_names.outputs.names }}" | while read -r service; do
            echo "Logs for service: $service"
            docker compose logs "$service"
          done
```

## /.github/workflows/devportal.yml

```yml path="/.github/workflows/devportal.yml"
name: Dev Portal Checks

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
  schedule:
    - cron: "0 9 * * *"

jobs:
  ci:
    name: Format, Lint, and Test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "18"
          cache: "npm"
          cache-dependency-path: frontend/package-lock.json
      - name: Install dependencies
        working-directory: frontend
        run: npm ci --legacy-peer-deps
      - name: Check formatting
        working-directory: frontend
        run: npm run format:check
      - name: Run linter
        working-directory: frontend
        run: npm run lint
      - name: Run tests
        working-directory: frontend
        run: npm run test
      - name: Build
        working-directory: frontend
        env:
          NEXT_PUBLIC_AUTH_URL: https://8367878.propelauthtest.com
        run: npm run build
```

## /.github/workflows/integration_code_review.yml

```yml path="/.github/workflows/integration_code_review.yml"
name: Integration PR Review

on:
  pull_request:
    types: [opened, synchronize, reopened]
    paths:
      - 'apps/**'

jobs:
  integration-pr-review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0 # Fetches all history for all branches and tags

      - name: Detect changed files
        id: changed-files
        run: |
          # Execute the script and redirect its output to $GITHUB_OUTPUT
          bash .github/scripts/integration_detect_changed_files.sh "${{ github.event.pull_request.base.sha }}" "${{ github.event.pull_request.head.sha }}" >> $GITHUB_OUTPUT

      - name: Get PR title and body
        id: pr-info
        if: steps.changed-files.outputs.skip != 'true'
        run: |
          echo "title=${{ github.event.pull_request.title }}" >> $GITHUB_OUTPUT
          echo "body<<BODY_DELIMITER" >> $GITHUB_OUTPUT
          echo "${{ github.event.pull_request.body }}" >> $GITHUB_OUTPUT
          echo "BODY_DELIMITER" >> $GITHUB_OUTPUT

      - name: Generate file content for Claude
        id: file-content
        if: steps.changed-files.outputs.skip != 'true'
        run: |
          # Generate PR content using our script
          bash .github/scripts/integration_generate_pr_content.sh "${{ steps.pr-info.outputs.title }}" "${{ steps.pr-info.outputs.body }}" "${{ steps.changed-files.outputs.changed_files }}" "${{ github.event.pull_request.base.sha }}" "${{ github.event.pull_request.head.sha }}"

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.12'

      - name: Run Claude Code Review
        id: claude-review
        if: steps.changed-files.outputs.skip != 'true'
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
        run: |
          # Install the Anthropic SDK
          pip install anthropic

          # Run our Python script
          python .github/scripts/integration_pr_review.py

      - name: Read Review and Post Comment
        uses: actions/github-script@v7
        if: steps.changed-files.outputs.skip != 'true'
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          script: |
            const fs = require('fs');

            // Read review from file
            let review = '';
            try {
              review = fs.readFileSync('claude_review.md', 'utf8');
              console.log('Successfully read review file');
            } catch (error) {
              console.error('Error reading review file:', error);
              review = 'Error: Could not read review content.';
            }

            // Post review as comment
            try {
              const commentPrefix = 'ACI Integration Code Review (Sonnet 3.7)';
              const timestamp = new Date().toISOString().replace('T', ' ').replace('Z', '');
              const commentBody = `## ${commentPrefix} - ${timestamp}\n\nThis review analyzes changes to integration files in the apps/ directory.\n\n${review}`;
              console.log('Comment body:', commentBody);

              await github.rest.issues.createComment({
                owner: context.repo.owner,
                repo: context.repo.repo,
                issue_number: context.issue.number,
                body: commentBody
              });
              console.log('Posted integration review comment to PR');
            } catch (error) {
              console.error('Error posting review comment:', error);
            }
```
## /.gitignore

```gitignore path="/.gitignore"
# custom ignores
.vscode/
.DS_Store

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# UV
# Similar to Pipfile.lock, it is generally recommended to include uv.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
#uv.lock

# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/latest/usage/project/#working-with-version-control
.pdm.toml
.pdm-python
.pdm-build/

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/

# Ruff stuff:
.ruff_cache/

# PyPI configuration file
.pypirc

# Ignore all .secrets.json files
.app.secrets.*

# CDK asset staging directory
.cdk.staging
cdk.out
```

## /.pre-commit-config.yaml

```yaml path="/.pre-commit-config.yaml"
repos:
  # General checks
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v5.0.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
      - id: check-json
      - id: check-added-large-files
      - id: check-merge-conflict

  # Dev Portal checks
  - repo: local
    hooks:
      - id: format check
        name: running dev portal format check
        language: system
        entry: sh -c "cd frontend && npm run format"
        pass_filenames: false
        require_serial: true
      - id: lint check
        name: running dev portal lint check
        language: system
        entry: sh -c "cd frontend && npm run lint"
        pass_filenames: false
        require_serial: true
      - id: test check
        name: running dev portal test check
        language: system
        entry: sh -c "cd frontend && npm run test:run"
        pass_filenames: false
        require_serial: true

  # Backend checks
  - repo: local
    hooks:
      - id: format
        name: running backend format checks
        language: system
        entry: sh -c "cd backend && uv run ruff format ."
        pass_filenames: false
        require_serial: true
      - id: lint
        name: running backend lint checks
        language: system
        entry: sh -c "cd backend && uv run ruff check . --fix"
        pass_filenames: false
        require_serial: true
      - id: mypy
        name: running backend mypy checks
        language: system
        entry: sh -c "cd backend && uv run mypy ."
        pass_filenames: false
        require_serial: true
```
## /CLA-Corporate.md

# Corporate Contributor License Agreement (v1.0, Aipolabs)

By commenting **"We have read the CLA Document and we hereby sign the CLA"** on a Pull Request, **the entity identified below ("Corporation") agrees to the following terms** for all past and future "Contributions" submitted to the **Aipolabs ACI project (the "Project")**.

## 1. Definitions

- **"Contribution"** – any original work of authorship submitted to the Project (code, documentation, designs, etc.) by the Corporation's designated employees.
- **"Corporation"** – the legal entity posting the acceptance comment.
- **"Designated Employees"** – the employees or contractors of the Corporation who are authorized to submit Contributions on behalf of the Corporation.

## 2. Copyright License

The Corporation grants **Aipolabs Ltd.** and all recipients of software distributed by the Project a perpetual, worldwide, non‑exclusive, royalty‑free, irrevocable copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, sublicense, and distribute the Corporation's Contributions and derivative works.

## 3. Patent License

The Corporation grants **Aipolabs Ltd.** and all recipients of the Project a perpetual, worldwide, non‑exclusive, royalty‑free, irrevocable (except as below) patent license to make, have made, use, sell, offer to sell, import, and otherwise transfer the Corporation's Contributions alone or in combination with the Project. If any entity brings patent litigation alleging that the Project or a Contribution infringes a patent, the patent licenses granted by the Corporation to that entity under this CLA terminate.

## 4. Representations

The Corporation represents that:

1. It is legally entitled to grant the licenses above.
2. Each Contribution is either the Corporation's original creation or the Corporation has rights to submit it under this CLA.
3. Each employee of the Corporation who submits Contributions is authorized to do so on behalf of the Corporation.
4. No Contribution knowingly violates the intellectual property rights of any third party.
5. All Contributions are provided **"AS IS"** without warranties of any kind.
6. The Corporation will notify the Project if any statement above becomes inaccurate.

## 5. Designated Employees

The Corporation acknowledges that it is responsible for all Contributions submitted by its Designated Employees and that it is the Corporation's responsibility to inform the Project of changes to the list of Designated Employees authorized to submit Contributions on behalf of the Corporation.

## 6. Miscellany

This Agreement is governed by the laws of England and Wales, without regard to its conflict of laws provisions. If any provision is held unenforceable, the remaining provisions remain in force.

---

## Acceptance Process

When posting the acceptance comment "We have read the CLA Document and we hereby sign the CLA" on a pull request, **please include the following information in your comment**:

- **Corporation name:** [Your corporation's legal name]
- **Corporation address:** [Your corporation's registered address]
- **Point of Contact:** [Name of authorized representative]
- **Contact Email:** [Email address for CLA-related communications]
- **Contact Phone:** [Phone number for CLA-related communications]

Without this information, your acceptance of the CLA cannot be properly recorded.

## /CLA-Individual.md

# Individual Contributor License Agreement (v1.0, Aipolabs)

*Based on the Apache Software Foundation Individual CLA v 2.2.*

By commenting **“I have read the CLA Document and I hereby sign the CLA”** on a Pull Request, **you (“Contributor”) agree to the following terms** for any past and future “Contributions” submitted to the **Aipolabs ACI project (the “Project”)**.

---

## 1. Definitions

- **“Contribution”** – any original work of authorship submitted to the Project (code, documentation, designs, etc.).
- **“You” / “Your”** – the individual (or legal entity) posting the acceptance comment.

## 2. Copyright License

You grant **Aipolabs Ltd.** and all recipients of software distributed by the Project a perpetual, worldwide, non‑exclusive, royalty‑free, irrevocable license to reproduce, prepare derivative works of, publicly display, publicly perform, sublicense, and distribute Your Contributions and derivative works.

## 3. Patent License

You grant **Aipolabs Ltd.** and all recipients of the Project a perpetual, worldwide, non‑exclusive, royalty‑free, irrevocable (except as below) patent license to make, have made, use, sell, offer to sell, import, and otherwise transfer Your Contributions alone or in combination with the Project. If any entity brings patent litigation alleging that the Project or a Contribution infringes a patent, the patent licenses granted by You to that entity under this CLA terminate.

## 4. Representations

1. You are legally entitled to grant the licenses above.
2. Each Contribution is either Your original creation or You have authority to submit it under this CLA.
3. Your Contributions are provided **“AS IS”** without warranties of any kind.
4. You will notify the Project if any statement above becomes inaccurate.

## 5. Miscellany

This Agreement is governed by the **laws of England and Wales**, without regard to its conflict of laws provisions. If any provision is held unenforceable, the remaining provisions remain in force.
## /CODE_OF_CONDUCT.md

# Contributor Covenant Code of Conduct

## Our Pledge

We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.

## Our Standards

Examples of behavior that contributes to a positive environment for our community include:

* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience
* Focusing on what is best not just for us as individuals, but for the overall community

Examples of unacceptable behavior include:

* The use of sexualized language or imagery, and sexual attention or advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting

## Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.

Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.

## Scope

This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the project maintainers responsible for enforcement. All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the reporter of any incident.

## Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:

### 1. Correction

**Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.

**Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.

### 2. Warning

**Community Impact**: A violation through a single incident or series of actions.

**Consequence**: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.

### 3. Temporary Ban

**Community Impact**: A serious violation of community standards, including sustained inappropriate behavior.

**Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.

### 4. Permanent Ban

**Community Impact**: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.

**Consequence**: A permanent ban from any sort of public interaction within the community.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.0, available at [https://www.contributor-covenant.org/version/2/0/code_of_conduct.html][v2.0].

Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder][Mozilla CoC].

For answers to common questions about this code of conduct, see the FAQ at [https://www.contributor-covenant.org/faq][FAQ]. Translations are available at [https://www.contributor-covenant.org/translations][translations].

[homepage]: https://www.contributor-covenant.org
[v2.0]: https://www.contributor-covenant.org/version/2/0/code_of_conduct.html
[Mozilla CoC]: https://github.com/mozilla/diversity
[FAQ]: https://www.contributor-covenant.org/faq
[translations]: https://www.contributor-covenant.org/translations

## /CONTRIBUTING.md

# Contributing to ACI.dev

Thank you for your interest in contributing to ACI.dev! We welcome contributions from everyone, whether it's submitting bug reports, suggesting new features, improving documentation, or contributing code.

## Code of Conduct

Please read and follow our [Code of Conduct](CODE_OF_CONDUCT.md) to foster an open and welcoming community.

## Getting Started

Before you begin contributing, please set up your local development environment by following the instructions in the [README.md](README.md).

## Contributor License Agreement (CLA)

Before we can merge your contribution, we need you to agree to our Contributor License Agreement (CLA). This protects both you and the project.

- **Individuals:** Please read the [Individual CLA (CLA-Individual.md)](CLA-Individual.md). To signify your agreement, comment on your Pull Request with the exact phrase: `I have read the CLA Document and I hereby sign the CLA`
- **Corporations:** If you are contributing on behalf of a company, please ensure **both** of the following steps are completed:
  1. An authorized representative from your company must read the [Corporate CLA (CLA-Corporate.md)](CLA-Corporate.md) and signify agreement by commenting on the Pull Request with the exact phrase: `We have read the CLA Document and we hereby sign the CLA`. This comment must also include the Corporation Name, Address, Point of Contact Name, Email, and Phone number as specified in the Corporate CLA.
  2. The individual developer submitting the code must **also** sign the [Individual CLA (CLA-Individual.md)](CLA-Individual.md) by commenting with the phrase specified in the "Individuals" section above.

We cannot accept pull requests until **all** required CLA agreements are met.

## Repository Structure

Our monorepo contains two main components:

- **`/backend`**: Contains the main ACI platform server, including the APIs, core logic, database models, and the entire integration library (600+ tools).
- **`/frontend`**: Contains the Next.js application for the ACI.dev Developer Portal. This is the web interface for managing projects, integrations, authentication, and testing agents.

## Getting in Touch

Before starting work on a contribution, especially for larger features or changes, we kindly request that you first get in touch with the core team by email. This helps us coordinate efforts, provide guidance, and ensure your contribution aligns with the project's roadmap.

## How to Contribute

### Reporting Bugs

If you've found a bug:

1. Check if the bug has already been reported in the GitHub Issues
2. If not, create a new issue with a clear title and description
3. Include steps to reproduce, expected behavior, and actual behavior
4. Add any relevant screenshots or error logs

### Suggesting Features

We welcome feature suggestions:

1. Describe the feature and its use case
2. Explain how it would benefit ACI.dev users
3. Provide any examples or references if available

### Code Contributions

1. **Fork the repository** to your GitHub account
2. **Clone your fork** to your local machine
3. **Create a new branch** for your feature or bugfix:

   ```bash
   git checkout -b feature/your-feature-name
   ```

4. **Make your changes** following our coding standards (see below)
5. **Commit your changes** with clear, descriptive commit messages and make sure all commit hooks pass
6. **Push your branch** to your fork
7. **Create a pull request** to the main repository

#### Pull Request Process

1. Ensure your code follows our coding standards
2. Update documentation if necessary
3. Make sure all tests pass
4. Reference any related issues in your pull request description
5. Wait until all CI checks are green
6. Request a review from a maintainer

## Coding Standards

Check out the README file for each component for more information on coding standards.

- **Backend:** [backend/README.md](backend/README.md)
- **Frontend:** [frontend/README.md](frontend/README.md)

## Testing

Check out the README file for each component for more information on testing.

- **Backend:** [backend/README.md](backend/README.md)
- **Frontend:** [frontend/README.md](frontend/README.md)

## License

By contributing to ACI.dev, you agree that your contributions will be licensed under the project's [Apache 2.0 License](LICENSE).
## /LICENSE

``` path="/LICENSE"
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.

"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

(a) You must give any other recipients of the Work or Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.

You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives.

Copyright 2025 Aipotheosis Labs (Aipolabs Ltd)

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```

## /README.md


# ACI: Open Source Tool-Use Infrastructure for AI Agents


:rocket: Top strategy, product, and ops teams are building AI agents that can take action across their stack — not just chat. :point_right: Star this repo to stay ahead of the agent-native future. :wrench: Works with any LLM, framework, or workflow engine.

ACI.dev connects your AI agents to 600+ tool integrations with multi-tenant authentication, granular permissions, and dynamic tool discovery, accessible through either direct tool/function-calling or a **Unified MCP server**.

**Example:** Instead of writing separate OAuth flows and API clients for Google Calendar, Slack, and more, use ACI.dev to manage authentication and provide your AI agents with unified, secure function calls. Access these capabilities through our lightweight [Python SDK](https://github.com/aipotheosis-labs/aci-python-sdk) or via our **Unified** [MCP server](https://github.com/aipotheosis-labs/aci-mcp), compatible with any LLM framework.

Build production-ready AI agents without the infrastructure headaches.

![ACI.dev Architecture](frontend/public/aci-architecture-intro.svg)

## Demo Video

ACI.dev **Unified MCP Server** Demo

[![ACI.dev Unified MCP Server Demo](https://img.youtube.com/vi/8zOYLp9Dn0U/0.jpg)](https://youtu.be/8zOYLp9Dn0U)

## Key Features

- **600+ Pre-built Integrations**: Connect to popular services and apps in minutes.
- **Flexible Access Methods**: Use our lightweight SDK for direct function calling or our unified MCP server.
- **Multi-tenant Authentication**: Built-in OAuth flows and secrets management for both developers and end-users.
- **Enhanced Agent Reliability**: Natural language permission boundaries and dynamic tool discovery.
- **Framework & Model Agnostic**: Works with any LLM framework and agent architecture.
- **100% Open Source**: Everything released under Apache 2.0 (backend, dev portal, integrations).

## Why Use ACI.dev?

ACI.dev solves your critical infrastructure challenges for production-ready AI agents:

- **Authentication at Scale**: Connect multiple users to multiple services securely.
- **Discovery Without Overload**: Find and use the right tools without overwhelming LLM context windows.
- **Natural Language Permissions**: Control agent capabilities with human-readable boundaries.
- **Build Once, Run Anywhere**: No vendor lock-in with our open source, framework-agnostic approach.

## Common Use Cases

- **Personal Assistant Chatbots:** Build chatbots that can search the web, manage calendars, send emails, interact with SaaS tools, etc.
- **Research Agent:** Conducts research on specific topics and syncs results to other apps (e.g., Notion, Google Sheets).
- **Outbound Sales Agent:** Automates lead generation, email outreach, and CRM updates.
- **Customer Support Agent:** Provides answers, manages tickets, and performs actions based on customer queries.
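To make the function-calling pattern above concrete, here is a minimal illustrative sketch of the loop an agent might run. The `ACIClient` class and its methods are hypothetical placeholders, not the actual aci-python-sdk API (see the SDK repo linked above for the real interface):

```python
# Illustrative only: `ACIClient`, `search_functions`, and `execute` are
# hypothetical names sketching the flow described above, not the actual
# aci-python-sdk interface.
from dataclasses import dataclass, field


@dataclass
class ToolCall:
    name: str                       # e.g. "GOOGLE_CALENDAR__CREATE_EVENT"
    arguments: dict = field(default_factory=dict)


class ACIClient:
    """Hypothetical stand-in for an ACI client."""

    def search_functions(self, intent: str) -> list[dict]:
        # Would return JSON-schema tool definitions relevant to the intent,
        # ready to hand to any LLM's function-calling API.
        return []

    def execute(self, call: ToolCall, linked_account_owner_id: str) -> dict:
        # Would run the tool call server-side using the end user's stored
        # credentials (OAuth tokens, API keys) instead of keys embedded in code.
        return {"status": "not implemented in this sketch"}


# Typical loop: fetch tool schemas, pass them to the LLM, then send the
# model's chosen tool call back for authenticated execution.
client = ACIClient()
tools = client.search_functions("schedule a meeting")
result = client.execute(
    ToolCall("GOOGLE_CALENDAR__CREATE_EVENT"), linked_account_owner_id="user-123"
)
```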
## Quick Links

- **Managed Service:** [aci.dev](https://www.aci.dev/)
- **Documentation:** [aci.dev/docs](https://www.aci.dev/docs)
- **Available Tools List:** [aci.dev/tools](https://www.aci.dev/tools)
- **Python SDK:** [github.com/aipotheosis-labs/aci-python-sdk](https://github.com/aipotheosis-labs/aci-python-sdk)
- **Unified MCP Server:** [github.com/aipotheosis-labs/aci-mcp](https://github.com/aipotheosis-labs/aci-mcp)
- **Agent Examples Built with ACI.dev:** [github.com/aipotheosis-labs/aci-agents](https://github.com/aipotheosis-labs/aci-agents)
- **Blog:** [aci.dev/blog](https://www.aci.dev/blog)
- **Community:** [Discord](https://discord.com/invite/UU2XAnfHJh) | [Twitter/X](https://x.com/AipoLabs) | [LinkedIn](https://www.linkedin.com/company/aipotheosis-labs-aipolabs/posts/?feedView=all)

## Repository Structure

This is a monorepo that contains the core components of ACI.dev:

- **`/backend`**: Contains the main ACI platform server, including the APIs, core logic, database models, and the entire integration library (600+ tools).
- **`/frontend`**: Contains the Next.js application for the ACI.dev Developer Portal. This is the web interface for managing projects, integrations, authentication, and testing agents.

## Getting Started: Local Development

To run the full ACI.dev platform (backend server and frontend portal) locally, follow the individual README files for each component:

- **Backend:** [backend/README.md](backend/README.md)
- **Frontend:** [frontend/README.md](frontend/README.md)

## Contributing

We welcome contributions! Please see our [CONTRIBUTING.md](CONTRIBUTING.md) for more information.

## Star History

[![Star History Chart](https://api.star-history.com/svg?repos=aipotheosis-labs/aci&type=Date)](https://www.star-history.com/#aipotheosis-labs/aci&Date)

## /SECURITY.md

# Security Policy

## Scope

This security policy applies to vulnerabilities discovered in the `main` branch of the following components within the ACI.dev monorepo:

- `/backend`
- `/frontend`

Please report vulnerabilities related to other branches or components if you believe they are critical, but be aware that our primary focus for security patches is the `main` branch of these core components.

## **Reporting a Vulnerability**

We take the security of [ACI.dev](http://ACI.dev) very seriously. If you believe you've found a security vulnerability, please follow these steps:

1. **Do not disclose the vulnerability publicly** or to any third parties.
2. **Minimize Harm:** Make every effort to avoid accessing or downloading data that does not belong to you, disrupting services, or violating user privacy during your testing. If access to user data or confidential information is necessary to demonstrate the vulnerability, please minimize the amount accessed and report this immediately.
3. **Email us directly** with:
   1. Title format "[Vulnerability] Summary of issue".
   2. Details of the vulnerability in the body.
4. **Include the following information** in your report:
   - Description of the vulnerability
   - Steps to reproduce
   - Potential impact
   - Any suggestions for mitigation
5. We will acknowledge receipt of your vulnerability report within 48 hours and provide an estimated timeline for a fix.
6. Once the vulnerability is fixed, we will notify you and publicly acknowledge your contribution (unless you prefer to remain anonymous).

## Safe Harbor

We consider security research and vulnerability disclosure activities conducted following this policy to be authorized and beneficial. We will not pursue legal action against individuals who report vulnerabilities in good faith and adhere to this policy, including the restrictions on public disclosure.

This safe harbor does not apply to any actions that intentionally cause harm, disrupt services, violate user privacy, access or modify data beyond what is necessary to demonstrate the vulnerability, or violate any applicable laws.

## /backend/.dockerignore

```dockerignore path="/backend/.dockerignore"
# custom ignores
**/.env
**/.env.example
**/__pycache__
**/*.pyc

# Python
__pycache__
app.egg-info
*.pyc
.mypy_cache
.coverage
htmlcov
.venv
```

## /backend/.env.example

```example path="/backend/.env.example"
# .env file mostly should only be used for local development (locally or docker) or for running pytest

# Server
SERVER_OPENAI_API_KEY=
SERVER_PROPELAUTH_API_KEY=
SERVER_SVIX_SIGNING_SECRET=

# CLI
CLI_OPENAI_API_KEY=
```
## /backend/.env.shared

```shared path="/backend/.env.shared"
########################################################
# Common
########################################################

# AWS
COMMON_AWS_REGION=us-east-2
COMMON_AWS_ENDPOINT_URL=http://aws:4566

# Encryption & Hashing
COMMON_KEY_ENCRYPTION_KEY_ARN=arn:aws:kms:us-east-2:000000000000:key/00000000-0000-0000-0000-000000000001
COMMON_API_KEY_HASHING_SECRET=5ef74d594f5edf1f98219ddfeb79056cb9ab8198d11820791c407befc5075166

########################################################
# Local Only
########################################################

# For local testing only, AWS credentials are required for boto3 to work, therefore we
# must set it, even though the localstack container in local compose setup doesn't
# verify it.
# On prod we use AWS IAM instead of static credentials. Therefore, we don't need to set
# it on prod.
AWS_ACCESS_KEY_ID=test
AWS_SECRET_ACCESS_KEY=test

########################################################
# Server
########################################################

SERVER_ENVIRONMENT=local
SERVER_SIGNING_KEY=SErq6tYWOXsCQZ0B-ynjAIOxVFyOQX71E8vprZx6Msg
SERVER_JWT_ALGORITHM=HS256
SERVER_JWT_ACCESS_TOKEN_EXPIRE_MINUTES=1440
SERVER_DB_SCHEME=postgresql+psycopg
SERVER_DB_USER=user
SERVER_DB_PASSWORD=password
SERVER_DB_HOST=db
SERVER_DB_PORT=5432
SERVER_DB_NAME=local_db
SERVER_OPENAI_EMBEDDING_MODEL=text-embedding-3-small
SERVER_OPENAI_EMBEDDING_DIMENSION=1024
# need to set a high rate limit for running tests without triggering the rate limit
SERVER_RATE_LIMIT_IP_PER_SECOND=999
SERVER_RATE_LIMIT_IP_PER_DAY=100000
SERVER_PROJECT_DAILY_QUOTA=100000
SERVER_APPLICATION_LOAD_BALANCER_DNS=127.0.0.1
SERVER_REDIRECT_URI_BASE=http://localhost:8000
SERVER_DEV_PORTAL_URL=http://localhost:3000
SERVER_MAX_PROJECTS_PER_ORG=3
SERVER_MAX_AGENTS_PER_PROJECT=10

# PropelAuth
SERVER_PROPELAUTH_AUTH_URL=https://8367878.propelauthtest.com

# LOGFIRE
# NOTE: locally we don't want to send logs to logfire, just setting a dummy token
# here so that the config.py won't fail
SERVER_LOGFIRE_WRITE_TOKEN=""
SERVER_LOGFIRE_READ_TOKEN=""

########################################################
# Alembic
########################################################

ALEMBIC_DB_SCHEME=postgresql+psycopg
ALEMBIC_DB_USER=user
ALEMBIC_DB_PASSWORD=password
ALEMBIC_DB_HOST=db
ALEMBIC_DB_PORT=5432
ALEMBIC_DB_NAME=local_db

########################################################
# CLI
########################################################

CLI_OPENAI_EMBEDDING_MODEL=text-embedding-3-small
CLI_OPENAI_EMBEDDING_DIMENSION=1024
CLI_DB_SCHEME=postgresql+psycopg
CLI_DB_USER=user
CLI_DB_PASSWORD=password
CLI_DB_HOST=db
CLI_DB_PORT=5432
CLI_DB_NAME=local_db
CLI_SERVER_URL=http://server:8000
```
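The `COMMON_API_KEY_HASHING_SECRET` above is a 64-character hex value (32 random bytes) used to fingerprint API keys; the April 2025 migrations add matching `key_hmac` columns. As a hedged sketch of one plausible scheme, assuming HMAC-SHA256 over the raw key (the real logic lives in `aci/common/encryption.py`, so treat this as illustrative only):

```python
# Hedged sketch: one plausible way an API key could be fingerprinted with
# COMMON_API_KEY_HASHING_SECRET. Not the project's actual implementation.
import hashlib
import hmac
import os

def hash_api_key(api_key: str) -> str:
    # The secret in .env.shared is a 64-char hex string, i.e. 32 random bytes.
    secret = bytes.fromhex(os.environ["COMMON_API_KEY_HASHING_SECRET"])
    # An HMAC lets equal keys be looked up in the DB without storing plaintext.
    return hmac.new(secret, api_key.encode(), hashlib.sha256).hexdigest()
```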
#Pipfile.lock

# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/latest/usage/project/#working-with-version-control
.pdm.toml
.pdm-python
.pdm-build/

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/

# Temporary files in the current directory
tmp/
```

## /backend/.prettierrc.json

```json path="/backend/.prettierrc.json"
{
    "tabWidth": 4,
    "useTabs": false,
    "semi": true,
    "singleQuote": true,
    "trailingComma": "es5",
    "bracketSpacing": true,
    "printWidth": 100,
    "arrowParens": "avoid"
}
```

## /backend/.python-version

```python-version path="/backend/.python-version"
3.12
```

## /backend/Dockerfile.runner

```runner path="/backend/Dockerfile.runner"
FROM python:3.12

COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/

WORKDIR /workdir
COPY ./pyproject.toml ./uv.lock /workdir/
RUN uv sync --all-extras --dev --no-install-project

ENV PATH="/workdir/.venv/bin:$PATH"
# Put /workdir on PYTHONPATH. Note: an `export` inside a RUN instruction would not
# persist past that layer's shell, so ENV is used instead.
ENV PYTHONPATH=/workdir
```

## /backend/Dockerfile.server

```server path="/backend/Dockerfile.server"
FROM python:3.12

COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/

WORKDIR /workdir
COPY ./pyproject.toml ./uv.lock /workdir/
RUN uv sync --all-extras --no-install-project

ENV PATH="/workdir/.venv/bin:$PATH"
# Put /workdir on PYTHONPATH. Note: an `export` inside a RUN instruction would not
# persist past that layer's shell, so ENV is used instead.
ENV PYTHONPATH=/workdir

# .env files are excluded by default via .dockerignore
COPY ./aci/server /workdir/aci/server
COPY ./aci/common /workdir/aci/common
COPY ./aci/__init__.py /workdir/aci/__init__.py

# remove unnecessary or sensitive files (.env files are already excluded via .dockerignore)
RUN rm -rf /workdir/aci/server/tests

CMD ["uvicorn", "aci.server.main:app", "--proxy-headers", "--forwarded-allow-ips=*", "--host", "0.0.0.0", "--port", "8000"]
```

## /backend/README.md

# ACI.dev Backend
[![Backend CI](https://github.com/aipotheosis-labs/aci/actions/workflows/backend.yml/badge.svg)](https://github.com/aipotheosis-labs/aci/actions/workflows/backend.yml)
[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)

## Overview

The backend component of ACI.dev provides the server infrastructure, API endpoints, database models, and integration libraries that enable 600+ tool integrations with multi-tenant authentication and granular permissions.

- [ACI.dev Backend](#acidev-backend)
  - [Overview](#overview)
  - [Code Structure](#code-structure)
  - [Development Setup](#development-setup)
    - [Prerequisites](#prerequisites)
    - [Code Style](#code-style)
    - [IDE Configuration](#ide-configuration)
    - [Getting Started](#getting-started)
    - [Running Tests](#running-tests)
  - [Database Management](#database-management)
    - [Working with Migrations](#working-with-migrations)
  - [Webhooks (for local end-to-end development with frontend)](#webhooks-for-local-end-to-end-development-with-frontend)
  - [Admin CLI](#admin-cli)
  - [Contributing](#contributing)
  - [License](#license)

## Code Structure

The backend consists of several main components:

- **Server**: FastAPI application handling API requests, authentication, and tool executions
- **Database**: PostgreSQL with pgvector for vector similarity search
- **CLI**: Command-line interface for local testing and development
- **Common**: Shared code and utilities used across components

## Development Setup

### Prerequisites

- Python 3.12+
- Docker and Docker Compose
- `uv` package manager

### Code Style

We follow strict code quality standards:

- **Formatting & Linting**: We use `ruff` for code formatting and linting
- **Type Checking**: We use `mypy` for static type checking
- **Pre-commit Hooks**: Install with `pre-commit install`

### IDE Configuration

For VS Code users, configure Ruff formatter:

```json
{
  "[python]": {
    "editor.formatOnSave": true,
    "editor.defaultFormatter": "charliermarsh.ruff",
    "editor.codeActionsOnSave": {
      "source.organizeImports.ruff": "always"
    }
  }
}
```

### Getting Started

1. Clone the repository:

   ```bash
   git clone https://github.com/aipotheosis-labs/aci.git
   cd aci/backend
   ```

2. Install dependencies and activate the virtual environment:

   ```bash
   uv sync
   source .venv/bin/activate
   ```

3. Install `pre-commit` hooks:

   ```bash
   pre-commit install
   ```

4. Set up environment variables:

   ```bash
   cp .env.example .env
   ```

   There are 4 env vars you need to set in `.env`:

   - `SERVER_OPENAI_API_KEY`: create an API key yourself
   - `CLI_OPENAI_API_KEY`: create an API key yourself
   - `SERVER_PROPELAUTH_API_KEY`: we'll give you an API key if you are one of our approved contributors. You can also create a PropelAuth Org yourself. See the [Webhooks](#webhooks-for-local-end-to-end-development-with-frontend) section for how to get an API key once you have access to a PropelAuth Org.
   - `SERVER_SVIX_SIGNING_SECRET`: you don't need it if you aren't developing the dev portal. But if you are, complete the [Webhooks](#webhooks-for-local-end-to-end-development-with-frontend) section before moving on.

   Note: Most non-sensitive variables are already defined in `.env.shared`

5. Start services with Docker Compose:

   ```bash
   docker compose up --build
   ```

   This will start:

   - `server`: Backend API service
   - `db`: PostgreSQL database
   - `aws`: LocalStack for mocking AWS services
   - `runner`: Container for running commands like tests or database seeds
6. (Optional) Seed the database with sample data:

   ```bash
   docker compose exec runner ./scripts/seed_db.sh
   ```

7. (Optional) Connect to the database using a GUI client (e.g., `DBeaver`)
   - Parameters for the db connection can be found in the `.env.shared` file

8. Create a random API key for local development (step 6 also creates a random API key when you run the seed db script):

   ```bash
   docker compose exec runner python -m aci.cli create-random-api-key --visibility-access public
   ```

9. Access the API documentation at:

   ```bash
   http://localhost:8000/v1/notforhuman-docs
   ```

10. (Optional) If you are developing the dev portal, follow the instructions in the [frontend README](../frontend/README.md) to start the dev portal.

### Running Tests

Ensure the `db` service is running and the database is empty (in case you seeded the db in the previous steps) before running tests:

```bash
docker compose exec runner pytest
```

## Database Management

### Working with Migrations

When making changes to database models:

1. Check for detected changes:

   ```bash
   docker compose exec runner alembic check
   ```

2. Generate a migration:

   ```bash
   docker compose exec runner alembic revision --autogenerate -m "description of changes"
   ```

3. Manually review and edit the generated file in `aci/alembic/versions/` if needed to add custom changes (see the illustrative sketch after this list), e.g.,:
   - pgvector library imports
   - Index creation/deletion
   - Vector extension setup
   - Other database-specific operations

4. Apply the migration (to the local db):

   ```bash
   docker compose exec runner alembic upgrade head
   ```

5. To revert the latest migration:

   ```bash
   docker compose exec runner alembic downgrade -1
   ```
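The kind of manual edit step 3 refers to might look like the following. This is a hypothetical sketch, not a real migration from this repo: the revision ids are placeholders, and the index is shown purely to illustrate a hand-written addition alongside autogenerated commands.

```py
"""add index on functions.app_id (illustrative)

Revision ID: 000000000000
Revises: ffffffffffff
"""

from typing import Sequence, Union

from alembic import op

# revision identifiers, used by Alembic (placeholder values).
revision: str = "000000000000"
down_revision: Union[str, None] = "ffffffffffff"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    # hand-written addition: create an index that autogenerate did not emit
    op.create_index("ix_functions_app_id", "functions", ["app_id"])


def downgrade() -> None:
    op.drop_index("ix_functions_app_id", table_name="functions")
```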
## Webhooks (for local end-to-end development with frontend)

If you are developing the dev portal, you will need a real `user` and `org` in the PropelAuth test environment, as well as a default `project` and `agent` in your local db. Follow the steps below to set up webhooks so that when you sign up in the PropelAuth test environment, PropelAuth notifies your local server, which then creates an org for you in the test environment as well as a default project and agent in the local db.

1. Install and set up ngrok:
   - Follow [ngrok's getting started guide](https://ngrok.com/docs/getting-started/?os=macos)
   - Expose your local server: `ngrok http http://localhost:8000`
   - Copy the public endpoint you exposed in the previous step and create a new endpoint in the [ngrok dashboard](https://dashboard.ngrok.com/endpoints) (e.g. )

2. Configure PropelAuth:
   - Go to the `aipolabs local` PropelAuth Org [dashboard](https://app.propelauth.com/proj/1b327933-ffbf-4a36-bd05-76cd896b0d56) if you have access, or create your own local dev organization if you don't.
   - Go to the **Users** and **Organizations** tabs and delete your previously created user and organization. (Note: only delete your own user and org.)

     ![delete user](./images/delete-user.png)
     ![delete org](./images/delete-org.png)

   - If you don't have a PropelAuth API key already, go to the **Backend Integration** tab, create an API key for the test environment, and set it as `SERVER_PROPELAUTH_API_KEY` in `.env`.

     ![propelauth-api-key](./images/propelauth-api-key.png)

   - Go to the **Integrations** tab on the dashboard and click **Webhooks**. Then click **Set Up Webhooks** for the **TEST ENV**, which will take you to the [Svix endpoints](https://app.svix.com/app_2uuG50X13IEu2cVRRL5fnXOeWWv/endpoints) page.

     ![webhook-setup](./images/webhook-setup.png)

   - Click `Add Endpoint`, set the endpoint URL to your public ngrok URL with the path `/v1/webhooks/auth/user-created`, and subscribe to the `user.created` event. Hit Create.

     ![svix](./images/svix.png)

   - Copy the `Signing Secret` of the endpoint and set it as `SERVER_SVIX_SIGNING_SECRET` in `.env`.

     ![svix](./images/svix-signing-secret.png)

   - Go back to step 5 of the [Getting Started](#getting-started) section to bring up docker compose.

## Admin CLI

The CLI module is an internal admin tool for ACI to manage apps, functions, users, etc. For local development, the commands can be executed via the `runner` container.

```bash
docker compose exec runner python -m aci.cli upsert-app --app-file ./apps/brave_search/app.json --secrets-file ./apps/brave_search/.app.secrets.json
```

## Contributing

Please refer to the [Contributing Guide](../CONTRIBUTING.md) for details on making contributions to this project.

## License

This project is licensed under the Apache License 2.0 - see the [LICENSE](../LICENSE) file for details.

## /backend/aci/__init__.py

```py path="/backend/aci/__init__.py"
```

## /backend/aci/alembic/README

``` path="/backend/aci/alembic/README"
Generic single-database configuration.
```

## /backend/aci/alembic/env.py

```py path="/backend/aci/alembic/env.py"
import os
from logging.config import fileConfig

from alembic import context
from dotenv import load_dotenv
from sqlalchemy import engine_from_config, pool

from aci.common.db.sql_models import Base

load_dotenv()

# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config

# Interpret the config file for Python logging.
# This line sets up loggers basically.
if config.config_file_name is not None:
    fileConfig(config.config_file_name)

# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
target_metadata = Base.metadata

# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.


def _check_and_get_env_variable(name: str) -> str:
    value = os.getenv(name)
    if value is None:
        raise ValueError(f"Environment variable '{name}' is not set")
    if value == "":
        raise ValueError(f"Environment variable '{name}' is empty string")

    return value


def _get_db_url() -> str:
    # construct db url from env variables
    DB_SCHEME = _check_and_get_env_variable("ALEMBIC_DB_SCHEME")
    DB_USER = _check_and_get_env_variable("ALEMBIC_DB_USER")
    DB_PASSWORD = _check_and_get_env_variable("ALEMBIC_DB_PASSWORD")
    DB_HOST = _check_and_get_env_variable("ALEMBIC_DB_HOST")
    DB_PORT = _check_and_get_env_variable("ALEMBIC_DB_PORT")
    DB_NAME = _check_and_get_env_variable("ALEMBIC_DB_NAME")
    return f"{DB_SCHEME}://{DB_USER}:{DB_PASSWORD}@{DB_HOST}:{DB_PORT}/{DB_NAME}"


def run_migrations_offline() -> None:
    """Run migrations in 'offline' mode.

    This configures the context with just a URL
    and not an Engine, though an Engine is acceptable
    here as well. By skipping the Engine creation
    we don't even need a DBAPI to be available.

    Calls to context.execute() here emit the given string to the
    script output.
    """
    context.configure(
        url=_get_db_url(),
        target_metadata=target_metadata,
        literal_binds=True,
        dialect_opts={"paramstyle": "named"},
    )

    with context.begin_transaction():
        context.run_migrations()


def run_migrations_online() -> None:
    """Run migrations in 'online' mode.
In this scenario we need to create an Engine and associate a connection with the context. """ configuration = config.get_section(config.config_ini_section, {}) configuration["sqlalchemy.url"] = _get_db_url() connectable = engine_from_config( configuration, prefix="sqlalchemy.", poolclass=pool.NullPool, ) with connectable.connect() as connection: context.configure(connection=connection, target_metadata=target_metadata) with context.begin_transaction(): context.run_migrations() if context.is_offline_mode(): run_migrations_offline() else: run_migrations_online() ``` ## /backend/aci/alembic/script.py.mako ```mako path="/backend/aci/alembic/script.py.mako" """${message} Revision ID: ${up_revision} Revises: ${down_revision | comma,n} Create Date: ${create_date} """ from typing import Sequence, Union from alembic import op import sqlalchemy as sa ${imports if imports else ""} # revision identifiers, used by Alembic. revision: str = ${repr(up_revision)} down_revision: Union[str, None] = ${repr(down_revision)} branch_labels: Union[str, Sequence[str], None] = ${repr(branch_labels)} depends_on: Union[str, Sequence[str], None] = ${repr(depends_on)} def upgrade() -> None: ${upgrades if upgrades else "pass"} def downgrade() -> None: ${downgrades if downgrades else "pass"} ``` ## /backend/aci/alembic/versions/2025_01_10_1906-c6f47d7d2fa1_first_migration.py ```py path="/backend/aci/alembic/versions/2025_01_10_1906-c6f47d7d2fa1_first_migration.py" # type: ignore """first migration Revision ID: c6f47d7d2fa1 Revises: Create Date: 2025-01-10 19:06:15.190427+00:00 """ from typing import Sequence, Union import sqlalchemy as sa from alembic import op from pgvector.sqlalchemy import Vector from sqlalchemy.dialects import postgresql # revision identifiers, used by Alembic. revision: str = "c6f47d7d2fa1" down_revision: Union[str, None] = None branch_labels: Union[str, Sequence[str], None] = None depends_on: Union[str, Sequence[str], None] = None def upgrade() -> None: # create extension if not exists vector; op.execute("CREATE EXTENSION IF NOT EXISTS vector;") # ### commands auto generated by Alembic - please adjust! 
### op.create_table( "apps", sa.Column("id", sa.UUID(), nullable=False), sa.Column("name", sa.String(length=100), nullable=False), sa.Column("display_name", sa.String(length=255), nullable=False), sa.Column("provider", sa.String(length=255), nullable=False), sa.Column("version", sa.String(length=255), nullable=False), sa.Column("description", sa.Text(), nullable=False), sa.Column("logo", sa.Text(), nullable=True), sa.Column("categories", postgresql.ARRAY(sa.String()), nullable=False), sa.Column("visibility", sa.Enum("PUBLIC", "PRIVATE", name="visibility"), nullable=False), sa.Column("active", sa.Boolean(), nullable=False), sa.Column("security_schemes", sa.JSON(), nullable=False), sa.Column("default_security_credentials_by_scheme", sa.JSON(), nullable=False), sa.Column("embedding", Vector(dim=1024), nullable=False), sa.Column("created_at", sa.DateTime(), server_default=sa.text("now()"), nullable=False), sa.Column("updated_at", sa.DateTime(), server_default=sa.text("now()"), nullable=False), sa.PrimaryKeyConstraint("id"), sa.UniqueConstraint("name"), ) op.create_table( "entities", sa.Column("id", sa.UUID(), nullable=False), sa.Column( "type", sa.Enum("ENTITY", "USER", "ORGANIZATION", name="entitytype"), nullable=False ), sa.Column("name", sa.String(length=255), nullable=False), sa.Column("email", sa.String(length=255), nullable=False), sa.Column("profile_picture", sa.Text(), nullable=True), sa.Column("created_at", sa.DateTime(), server_default=sa.text("now()"), nullable=False), sa.Column("updated_at", sa.DateTime(), server_default=sa.text("now()"), nullable=False), sa.PrimaryKeyConstraint("id"), ) op.create_table( "functions", sa.Column("id", sa.UUID(), nullable=False), sa.Column("app_id", sa.UUID(), nullable=False), sa.Column("name", sa.String(length=255), nullable=False), sa.Column("description", sa.Text(), nullable=False), sa.Column("tags", postgresql.ARRAY(sa.String()), nullable=False), sa.Column("visibility", sa.Enum("PUBLIC", "PRIVATE", name="visibility"), nullable=False), sa.Column("active", sa.Boolean(), nullable=False), sa.Column("protocol", sa.Enum("REST", name="protocol"), nullable=False), sa.Column("protocol_data", sa.JSON(), nullable=False), sa.Column("parameters", sa.JSON(), nullable=False), sa.Column("response", sa.JSON(), nullable=False), sa.Column("embedding", Vector(dim=1024), nullable=False), sa.Column("created_at", sa.DateTime(), server_default=sa.text("now()"), nullable=False), sa.Column("updated_at", sa.DateTime(), server_default=sa.text("now()"), nullable=False), sa.ForeignKeyConstraint( ["app_id"], ["apps.id"], ), sa.PrimaryKeyConstraint("id"), sa.UniqueConstraint("name"), ) op.create_table( "organizations", sa.Column("id", sa.UUID(), nullable=False), sa.ForeignKeyConstraint( ["id"], ["entities.id"], ), sa.PrimaryKeyConstraint("id"), ) op.create_table( "projects", sa.Column("id", sa.UUID(), nullable=False), sa.Column("owner_id", sa.UUID(), nullable=False), sa.Column("name", sa.String(length=255), nullable=False), sa.Column( "visibility_access", sa.Enum("PUBLIC", "PRIVATE", name="visibility"), nullable=False ), sa.Column("daily_quota_used", sa.Integer(), nullable=False), sa.Column( "daily_quota_reset_at", sa.DateTime(), server_default=sa.text("now()"), nullable=False ), sa.Column("total_quota_used", sa.Integer(), nullable=False), sa.Column("created_at", sa.DateTime(), server_default=sa.text("now()"), nullable=False), sa.Column("updated_at", sa.DateTime(), server_default=sa.text("now()"), nullable=False), sa.ForeignKeyConstraint( ["owner_id"], ["entities.id"], ), 
sa.PrimaryKeyConstraint("id"), ) op.create_table( "subscriptions", sa.Column("id", sa.UUID(), nullable=False), sa.Column("entity_id", sa.UUID(), nullable=False), sa.Column( "plan", sa.Enum("CUSTOM", "FREE", "PRO", "ENTERPRISE", name="subscriptionplan"), nullable=False, ), sa.Column( "status", sa.Enum("ACTIVE", "CANCELLED", "EXPIRED", name="subscriptionstatus"), nullable=False, ), sa.Column("expires_at", sa.DateTime(), nullable=True), sa.Column("created_at", sa.DateTime(), server_default=sa.text("now()"), nullable=False), sa.Column("updated_at", sa.DateTime(), server_default=sa.text("now()"), nullable=False), sa.ForeignKeyConstraint( ["entity_id"], ["entities.id"], ), sa.PrimaryKeyConstraint("id"), ) op.create_table( "users", sa.Column("id", sa.UUID(), nullable=False), sa.Column("identity_provider", sa.String(length=255), nullable=False), sa.Column("user_id_by_provider", sa.String(length=255), nullable=False), sa.ForeignKeyConstraint( ["id"], ["entities.id"], ), sa.PrimaryKeyConstraint("id"), sa.UniqueConstraint( "identity_provider", "user_id_by_provider", name="uc_auth_provider_user" ), ) op.create_table( "agents", sa.Column("id", sa.UUID(), nullable=False), sa.Column("project_id", sa.UUID(), nullable=False), sa.Column("name", sa.String(length=255), nullable=False), sa.Column("description", sa.Text(), nullable=False), sa.Column("excluded_apps", postgresql.ARRAY(sa.UUID()), nullable=False), sa.Column("excluded_functions", postgresql.ARRAY(sa.UUID()), nullable=False), sa.Column("created_at", sa.DateTime(), server_default=sa.text("now()"), nullable=False), sa.Column("updated_at", sa.DateTime(), server_default=sa.text("now()"), nullable=False), sa.ForeignKeyConstraint( ["project_id"], ["projects.id"], ), sa.PrimaryKeyConstraint("id"), ) op.create_table( "app_configurations", sa.Column("id", sa.UUID(), nullable=False), sa.Column("project_id", sa.UUID(), nullable=False), sa.Column("app_id", sa.UUID(), nullable=False), sa.Column( "security_scheme", sa.Enum( "NO_AUTH", "API_KEY", "HTTP_BASIC", "HTTP_BEARER", "OAUTH2", name="securityscheme" ), nullable=False, ), sa.Column("security_scheme_overrides", sa.JSON(), nullable=False), sa.Column("enabled", sa.Boolean(), nullable=False), sa.Column("all_functions_enabled", sa.Boolean(), nullable=False), sa.Column("enabled_functions", postgresql.ARRAY(sa.UUID()), nullable=False), sa.Column("created_at", sa.DateTime(), server_default=sa.text("now()"), nullable=False), sa.Column("updated_at", sa.DateTime(), server_default=sa.text("now()"), nullable=False), sa.ForeignKeyConstraint( ["app_id"], ["apps.id"], ), sa.ForeignKeyConstraint( ["project_id"], ["projects.id"], ), sa.PrimaryKeyConstraint("id"), sa.UniqueConstraint("project_id", "app_id", name="uc_project_app"), ) op.create_table( "linked_accounts", sa.Column("id", sa.UUID(), nullable=False), sa.Column("project_id", sa.UUID(), nullable=False), sa.Column("app_id", sa.UUID(), nullable=False), sa.Column("linked_account_owner_id", sa.String(length=255), nullable=False), sa.Column( "security_scheme", sa.Enum( "NO_AUTH", "API_KEY", "HTTP_BASIC", "HTTP_BEARER", "OAUTH2", name="securityscheme" ), nullable=False, ), sa.Column("security_credentials", sa.JSON(), nullable=False), sa.Column("enabled", sa.Boolean(), nullable=False), sa.Column("created_at", sa.DateTime(), server_default=sa.text("now()"), nullable=False), sa.Column("updated_at", sa.DateTime(), server_default=sa.text("now()"), nullable=False), sa.ForeignKeyConstraint( ["app_id"], ["apps.id"], ), sa.ForeignKeyConstraint( ["project_id"], ["projects.id"], ), 
sa.PrimaryKeyConstraint("id"), sa.UniqueConstraint( "project_id", "app_id", "linked_account_owner_id", name="uc_project_app_linked_account_owner", ), ) op.create_table( "memberships", sa.Column("id", sa.UUID(), nullable=False), sa.Column("user_id", sa.UUID(), nullable=False), sa.Column("organization_id", sa.UUID(), nullable=False), sa.Column("role", sa.Enum("ADMIN", "MEMBER", name="organizationrole"), nullable=False), sa.Column("created_at", sa.DateTime(), server_default=sa.text("now()"), nullable=False), sa.Column("updated_at", sa.DateTime(), server_default=sa.text("now()"), nullable=False), sa.ForeignKeyConstraint( ["organization_id"], ["organizations.id"], ), sa.ForeignKeyConstraint( ["user_id"], ["users.id"], ), sa.PrimaryKeyConstraint("id"), ) op.create_table( "api_keys", sa.Column("id", sa.UUID(), nullable=False), sa.Column("key", sa.String(length=255), nullable=False), sa.Column("agent_id", sa.UUID(), nullable=False), sa.Column( "status", sa.Enum("ACTIVE", "DISABLED", "DELETED", name="apikeystatus"), nullable=False ), sa.Column("created_at", sa.DateTime(), server_default=sa.text("now()"), nullable=False), sa.Column("updated_at", sa.DateTime(), server_default=sa.text("now()"), nullable=False), sa.ForeignKeyConstraint( ["agent_id"], ["agents.id"], ), sa.PrimaryKeyConstraint("id"), sa.UniqueConstraint("agent_id"), sa.UniqueConstraint("key"), ) # ### end Alembic commands ### def downgrade() -> None: # ### commands auto generated by Alembic - please adjust! ### op.drop_table("api_keys") op.drop_table("memberships") op.drop_table("linked_accounts") op.drop_table("app_configurations") op.drop_table("agents") op.drop_table("users") op.drop_table("subscriptions") op.drop_table("projects") op.drop_table("organizations") op.drop_table("functions") op.drop_table("entities") op.drop_table("apps") # ### end Alembic commands ### # Drop the Enum types op_bind = op.get_bind() sa.Enum( "PUBLIC", "PRIVATE", name="visibility", ).drop(op_bind) sa.Enum( "ENTITY", "USER", "ORGANIZATION", name="entitytype", ).drop(op_bind) sa.Enum( "ACTIVE", "DISABLED", "DELETED", name="apikeystatus", ).drop(op_bind) sa.Enum( "REST", name="protocol", ).drop(op_bind) sa.Enum( "CUSTOM", "FREE", "PRO", "ENTERPRISE", name="subscriptionplan", ).drop(op_bind) sa.Enum( "ACTIVE", "CANCELLED", "EXPIRED", name="subscriptionstatus", ).drop(op_bind) sa.Enum( "NO_AUTH", "API_KEY", "HTTP_BASIC", "HTTP_BEARER", "OAUTH2", name="securityscheme", ).drop(op_bind) sa.Enum( "ADMIN", "MEMBER", name="organizationrole", ).drop(op_bind) # drop extentions op.execute("DROP EXTENSION IF EXISTS vector;") ``` ## /backend/aci/alembic/versions/2025_01_27_1657-adcfaa729f61_added_custom_instructions_to_agent_table.py ```py path="/backend/aci/alembic/versions/2025_01_27_1657-adcfaa729f61_added_custom_instructions_to_agent_table.py" """added custom instructions to agent table Revision ID: adcfaa729f61 Revises: c6f47d7d2fa1 Create Date: 2025-01-27 16:57:57.358842+00:00 """ from typing import Sequence, Union import sqlalchemy as sa from alembic import op # revision identifiers, used by Alembic. revision: str = "adcfaa729f61" down_revision: Union[str, None] = "c6f47d7d2fa1" branch_labels: Union[str, Sequence[str], None] = None depends_on: Union[str, Sequence[str], None] = None def upgrade() -> None: # ### commands auto generated by Alembic - please adjust! ### op.add_column("agents", sa.Column("custom_instructions", sa.JSON(), nullable=False)) # ### end Alembic commands ### def downgrade() -> None: # ### commands auto generated by Alembic - please adjust! 
    op.drop_column("agents", "custom_instructions")
    # ### end Alembic commands ###
```

## /backend/aci/alembic/versions/2025_02_10_2318-6482e8fa201e_store_app_name_function_name_for_non_.py

```py path="/backend/aci/alembic/versions/2025_02_10_2318-6482e8fa201e_store_app_name_function_name_for_non_.py"
"""store app_name/function_name for non foreign key fields

Revision ID: 6482e8fa201e
Revises: adcfaa729f61
Create Date: 2025-02-10 23:18:55.457877+00:00

"""

from typing import Sequence, Union

import sqlalchemy as sa
from alembic import op
from sqlalchemy.dialects import postgresql

# revision identifiers, used by Alembic.
revision: str = "6482e8fa201e"
down_revision: Union[str, None] = "adcfaa729f61"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    # ### commands auto generated by Alembic - please adjust! ###
    op.alter_column(
        "agents",
        "excluded_apps",
        existing_type=postgresql.ARRAY(sa.UUID()),
        type_=postgresql.ARRAY(sa.String(length=255)),
        existing_nullable=False,
    )
    op.alter_column(
        "agents",
        "excluded_functions",
        existing_type=postgresql.ARRAY(sa.UUID()),
        type_=postgresql.ARRAY(sa.String(length=255)),
        existing_nullable=False,
    )
    op.alter_column(
        "app_configurations",
        "enabled_functions",
        existing_type=postgresql.ARRAY(sa.UUID()),
        type_=postgresql.ARRAY(sa.String(length=255)),
        existing_nullable=False,
    )
    # ### end Alembic commands ###


def downgrade() -> None:
    # ### commands auto generated by Alembic - please adjust! ###
    op.alter_column(
        "app_configurations",
        "enabled_functions",
        existing_type=postgresql.ARRAY(sa.String(length=255)),
        type_=postgresql.ARRAY(sa.UUID()),
        existing_nullable=False,
    )
    op.alter_column(
        "agents",
        "excluded_functions",
        existing_type=postgresql.ARRAY(sa.String(length=255)),
        type_=postgresql.ARRAY(sa.UUID()),
        existing_nullable=False,
    )
    op.alter_column(
        "agents",
        "excluded_apps",
        existing_type=postgresql.ARRAY(sa.String(length=255)),
        type_=postgresql.ARRAY(sa.UUID()),
        existing_nullable=False,
    )
    # ### end Alembic commands ###
```

## /backend/aci/alembic/versions/2025_03_08_1922-70dd635d80d4_add_new_protocol_enum_value.py

```py path="/backend/aci/alembic/versions/2025_03_08_1922-70dd635d80d4_add_new_protocol_enum_value.py"
"""add new protocol enum value

Revision ID: 70dd635d80d4
Revises: 6482e8fa201e
Create Date: 2025-03-08 19:22:45.910952+00:00

"""

from typing import Sequence, Union

from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic.
revision: str = '70dd635d80d4'
down_revision: Union[str, None] = '6482e8fa201e'
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    # Add the new 'CONNECTOR' value to the existing protocol enum type
    op.execute("ALTER TYPE protocol ADD VALUE 'CONNECTOR'")
    # Note: PostgreSQL allows adding values to enum types directly with the command above.
    # If you were using a different database, you might need a more complex migration.
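    #
    # A minimal sketch (illustrative only; not executed here) of what removing the
    # value would take on PostgreSQL, assuming no rows still reference 'CONNECTOR'
    # and that functions.protocol is the only column using the type:
    #
    #   op.execute("ALTER TYPE protocol RENAME TO protocol_old")
    #   op.execute("CREATE TYPE protocol AS ENUM ('REST')")
    #   op.execute(
    #       "ALTER TABLE functions ALTER COLUMN protocol "
    #       "TYPE protocol USING protocol::text::protocol"
    #   )
    #   op.execute("DROP TYPE protocol_old")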
def downgrade() -> None: # Unfortunately, PostgreSQL doesn't provide a direct way to remove enum values # The only way would be to create a new type without the value and migrate data # This is complex and potentially dangerous, so it's often left as a no-op pass ``` ## /backend/aci/alembic/versions/2025_03_10_2339-28702a5576f5_change_agent_level_app_function_acl.py ```py path="/backend/aci/alembic/versions/2025_03_10_2339-28702a5576f5_change_agent_level_app_function_acl.py" """change agent level app/function acl Revision ID: 28702a5576f5 Revises: 70dd635d80d4 Create Date: 2025-03-10 23:39:42.611170+00:00 """ from typing import Sequence, Union from alembic import op import sqlalchemy as sa from sqlalchemy.dialects import postgresql # revision identifiers, used by Alembic. revision: str = '28702a5576f5' down_revision: Union[str, None] = '70dd635d80d4' branch_labels: Union[str, Sequence[str], None] = None depends_on: Union[str, Sequence[str], None] = None def upgrade() -> None: # ### commands auto generated by Alembic - please adjust! ### op.add_column('agents', sa.Column('allowed_apps', postgresql.ARRAY(sa.String(length=255)), nullable=False)) op.drop_column('agents', 'excluded_apps') op.drop_column('agents', 'excluded_functions') # ### end Alembic commands ### def downgrade() -> None: # ### commands auto generated by Alembic - please adjust! ### op.add_column('agents', sa.Column('excluded_functions', postgresql.ARRAY(sa.VARCHAR(length=255)), autoincrement=False, nullable=False)) op.add_column('agents', sa.Column('excluded_apps', postgresql.ARRAY(sa.VARCHAR(length=255)), autoincrement=False, nullable=False)) op.drop_column('agents', 'allowed_apps') # ### end Alembic commands ### ``` ## /backend/aci/alembic/versions/2025_03_11_2315-949afaf258c3_json_to_jsonb.py ```py path="/backend/aci/alembic/versions/2025_03_11_2315-949afaf258c3_json_to_jsonb.py" # type: ignore """json to jsonb Revision ID: 949afaf258c3 Revises: 28702a5576f5 Create Date: 2025-03-11 23:15:10.615837+00:00 """ from typing import Sequence, Union from alembic import op import sqlalchemy as sa from sqlalchemy.dialects import postgresql # revision identifiers, used by Alembic. revision: str = '949afaf258c3' down_revision: Union[str, None] = '28702a5576f5' branch_labels: Union[str, Sequence[str], None] = None depends_on: Union[str, Sequence[str], None] = None def upgrade() -> None: # ### commands auto generated by Alembic - please adjust! 
### op.alter_column('agents', 'custom_instructions', existing_type=postgresql.JSON(astext_type=sa.Text()), type_=postgresql.JSONB(astext_type=sa.Text()), existing_nullable=False) op.alter_column('app_configurations', 'security_scheme_overrides', existing_type=postgresql.JSON(astext_type=sa.Text()), type_=postgresql.JSONB(astext_type=sa.Text()), existing_nullable=False) op.alter_column('apps', 'security_schemes', existing_type=postgresql.JSON(astext_type=sa.Text()), type_=postgresql.JSONB(astext_type=sa.Text()), existing_nullable=False) op.alter_column('apps', 'default_security_credentials_by_scheme', existing_type=postgresql.JSON(astext_type=sa.Text()), type_=postgresql.JSONB(astext_type=sa.Text()), existing_nullable=False) op.alter_column('functions', 'protocol_data', existing_type=postgresql.JSON(astext_type=sa.Text()), type_=postgresql.JSONB(astext_type=sa.Text()), existing_nullable=False) op.alter_column('functions', 'parameters', existing_type=postgresql.JSON(astext_type=sa.Text()), type_=postgresql.JSONB(astext_type=sa.Text()), existing_nullable=False) op.alter_column('functions', 'response', existing_type=postgresql.JSON(astext_type=sa.Text()), type_=postgresql.JSONB(astext_type=sa.Text()), existing_nullable=False) op.alter_column('linked_accounts', 'security_credentials', existing_type=postgresql.JSON(astext_type=sa.Text()), type_=postgresql.JSONB(astext_type=sa.Text()), existing_nullable=False) # ### end Alembic commands ### def downgrade() -> None: # ### commands auto generated by Alembic - please adjust! ### op.alter_column('linked_accounts', 'security_credentials', existing_type=postgresql.JSONB(astext_type=sa.Text()), type_=postgresql.JSON(astext_type=sa.Text()), existing_nullable=False) op.alter_column('functions', 'response', existing_type=postgresql.JSONB(astext_type=sa.Text()), type_=postgresql.JSON(astext_type=sa.Text()), existing_nullable=False) op.alter_column('functions', 'parameters', existing_type=postgresql.JSONB(astext_type=sa.Text()), type_=postgresql.JSON(astext_type=sa.Text()), existing_nullable=False) op.alter_column('functions', 'protocol_data', existing_type=postgresql.JSONB(astext_type=sa.Text()), type_=postgresql.JSON(astext_type=sa.Text()), existing_nullable=False) op.alter_column('apps', 'default_security_credentials_by_scheme', existing_type=postgresql.JSONB(astext_type=sa.Text()), type_=postgresql.JSON(astext_type=sa.Text()), existing_nullable=False) op.alter_column('apps', 'security_schemes', existing_type=postgresql.JSONB(astext_type=sa.Text()), type_=postgresql.JSON(astext_type=sa.Text()), existing_nullable=False) op.alter_column('app_configurations', 'security_scheme_overrides', existing_type=postgresql.JSONB(astext_type=sa.Text()), type_=postgresql.JSON(astext_type=sa.Text()), existing_nullable=False) op.alter_column('agents', 'custom_instructions', existing_type=postgresql.JSONB(astext_type=sa.Text()), type_=postgresql.JSON(astext_type=sa.Text()), existing_nullable=False) # ### end Alembic commands ### ``` ## /backend/aci/alembic/versions/2025_03_14_2000-1b82aeb7431f_create_secret_table.py ```py path="/backend/aci/alembic/versions/2025_03_14_2000-1b82aeb7431f_create_secret_table.py" """Create Secret table Revision ID: 1b82aeb7431f Revises: 949afaf258c3 Create Date: 2025-03-14 20:00:46.127853+00:00 """ from typing import Sequence, Union from alembic import op import sqlalchemy as sa from sqlalchemy.dialects import postgresql # revision identifiers, used by Alembic. 
revision: str = '1b82aeb7431f' down_revision: Union[str, None] = '949afaf258c3' branch_labels: Union[str, Sequence[str], None] = None depends_on: Union[str, Sequence[str], None] = None def upgrade() -> None: # ### commands auto generated by Alembic - please adjust! ### op.create_table('secrets', sa.Column('id', sa.UUID(), nullable=False), sa.Column('linked_account_id', sa.UUID(), nullable=False), sa.Column('key', sa.String(length=255), nullable=False), sa.Column('value', postgresql.BYTEA(), nullable=False), sa.Column('created_at', sa.DateTime(), server_default=sa.text('now()'), nullable=False), sa.Column('updated_at', sa.DateTime(), server_default=sa.text('now()'), nullable=False), sa.ForeignKeyConstraint(['linked_account_id'], ['linked_accounts.id'], ), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('linked_account_id', 'key', name='uc_linked_account_key') ) # ### end Alembic commands ### def downgrade() -> None: # ### commands auto generated by Alembic - please adjust! ### op.drop_table('secrets') # ### end Alembic commands ### ``` ## /backend/aci/alembic/versions/2025_04_11_1232-7a159de1064c_add_nullable_org_id_column.py ```py path="/backend/aci/alembic/versions/2025_04_11_1232-7a159de1064c_add_nullable_org_id_column.py" """Add nullable org_id column Revision ID: 7a159de1064c Revises: 1b82aeb7431f Create Date: 2025-04-11 12:32:20.415772+00:00 """ from typing import Sequence, Union from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision: str = '7a159de1064c' down_revision: Union[str, None] = '1b82aeb7431f' branch_labels: Union[str, Sequence[str], None] = None depends_on: Union[str, Sequence[str], None] = None def upgrade() -> None: # ### commands auto generated by Alembic - please adjust! ### op.add_column('projects', sa.Column('org_id', sa.UUID(), nullable=True)) # ### end Alembic commands ### def downgrade() -> None: # ### commands auto generated by Alembic - please adjust! ### op.drop_column('projects', 'org_id') # ### end Alembic commands ### ``` ## /backend/aci/alembic/versions/2025_04_11_1236-a79cdd14460e_make_org_id_of_project_table_not_.py ```py path="/backend/aci/alembic/versions/2025_04_11_1236-a79cdd14460e_make_org_id_of_project_table_not_.py" """Make org_id of project table not nullable Revision ID: a79cdd14460e Revises: 7a159de1064c Create Date: 2025-04-11 12:36:10.907555+00:00 """ from typing import Sequence, Union from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision: str = 'a79cdd14460e' down_revision: Union[str, None] = '7a159de1064c' branch_labels: Union[str, Sequence[str], None] = None depends_on: Union[str, Sequence[str], None] = None def upgrade() -> None: # ### commands auto generated by Alembic - please adjust! ### op.alter_column('projects', 'org_id', existing_type=sa.UUID(), nullable=False) # ### end Alembic commands ### def downgrade() -> None: # ### commands auto generated by Alembic - please adjust! 
### op.alter_column('projects', 'org_id', existing_type=sa.UUID(), nullable=True) # ### end Alembic commands ### ``` ## /backend/aci/alembic/versions/2025_04_11_1237-af2ecf7ca19a_drop_owner_id_column_of_project_table.py ```py path="/backend/aci/alembic/versions/2025_04_11_1237-af2ecf7ca19a_drop_owner_id_column_of_project_table.py" """Drop owner_id column of project table Revision ID: af2ecf7ca19a Revises: a79cdd14460e Create Date: 2025-04-11 12:37:30.824896+00:00 """ from typing import Sequence, Union from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision: str = 'af2ecf7ca19a' down_revision: Union[str, None] = 'a79cdd14460e' branch_labels: Union[str, Sequence[str], None] = None depends_on: Union[str, Sequence[str], None] = None def upgrade() -> None: # ### commands auto generated by Alembic - please adjust! ### op.drop_constraint('projects_owner_id_fkey', 'projects', type_='foreignkey') op.drop_column('projects', 'owner_id') # ### end Alembic commands ### def downgrade() -> None: # ### commands auto generated by Alembic - please adjust! ### op.add_column('projects', sa.Column('owner_id', sa.UUID(), autoincrement=False, nullable=False)) op.create_foreign_key('projects_owner_id_fkey', 'projects', 'entities', ['owner_id'], ['id']) # ### end Alembic commands ### ``` ## /backend/aci/alembic/versions/2025_04_11_1238-bce2fbe6273b_drop_user_entity_organization_etc_tables.py ```py path="/backend/aci/alembic/versions/2025_04_11_1238-bce2fbe6273b_drop_user_entity_organization_etc_tables.py" """Drop User, Entity, Organization etc tables Revision ID: bce2fbe6273b Revises: af2ecf7ca19a Create Date: 2025-04-11 12:38:43.626898+00:00 """ from typing import Sequence, Union from alembic import op import sqlalchemy as sa from sqlalchemy.dialects import postgresql # revision identifiers, used by Alembic. revision: str = 'bce2fbe6273b' down_revision: Union[str, None] = 'af2ecf7ca19a' branch_labels: Union[str, Sequence[str], None] = None depends_on: Union[str, Sequence[str], None] = None def upgrade() -> None: # ### commands auto generated by Alembic - please adjust! ### op.drop_table('memberships') op.drop_table('subscriptions') op.drop_table('users') op.drop_table('organizations') op.drop_table('entities') # ### end Alembic commands ### def downgrade() -> None: # ### commands auto generated by Alembic - please adjust! 
### op.create_table('subscriptions', sa.Column('id', sa.UUID(), autoincrement=False, nullable=False), sa.Column('entity_id', sa.UUID(), autoincrement=False, nullable=False), sa.Column('plan', postgresql.ENUM('CUSTOM', 'FREE', 'PRO', 'ENTERPRISE', name='subscriptionplan'), autoincrement=False, nullable=False), sa.Column('status', postgresql.ENUM('ACTIVE', 'CANCELLED', 'EXPIRED', name='subscriptionstatus'), autoincrement=False, nullable=False), sa.Column('expires_at', postgresql.TIMESTAMP(), autoincrement=False, nullable=True), sa.Column('created_at', postgresql.TIMESTAMP(), server_default=sa.text('now()'), autoincrement=False, nullable=False), sa.Column('updated_at', postgresql.TIMESTAMP(), server_default=sa.text('now()'), autoincrement=False, nullable=False), sa.ForeignKeyConstraint(['entity_id'], ['entities.id'], name='subscriptions_entity_id_fkey'), sa.PrimaryKeyConstraint('id', name='subscriptions_pkey') ) op.create_table('organizations', sa.Column('id', sa.UUID(), autoincrement=False, nullable=False), sa.ForeignKeyConstraint(['id'], ['entities.id'], name='organizations_id_fkey'), sa.PrimaryKeyConstraint('id', name='organizations_pkey'), postgresql_ignore_search_path=False ) op.create_table('users', sa.Column('id', sa.UUID(), autoincrement=False, nullable=False), sa.Column('identity_provider', sa.VARCHAR(length=255), autoincrement=False, nullable=False), sa.Column('user_id_by_provider', sa.VARCHAR(length=255), autoincrement=False, nullable=False), sa.ForeignKeyConstraint(['id'], ['entities.id'], name='users_id_fkey'), sa.PrimaryKeyConstraint('id', name='users_pkey'), sa.UniqueConstraint('identity_provider', 'user_id_by_provider', name='uc_auth_provider_user'), postgresql_ignore_search_path=False ) op.create_table('entities', sa.Column('id', sa.UUID(), autoincrement=False, nullable=False), sa.Column('type', postgresql.ENUM('ENTITY', 'USER', 'ORGANIZATION', name='entitytype'), autoincrement=False, nullable=False), sa.Column('name', sa.VARCHAR(length=255), autoincrement=False, nullable=False), sa.Column('email', sa.VARCHAR(length=255), autoincrement=False, nullable=False), sa.Column('profile_picture', sa.TEXT(), autoincrement=False, nullable=True), sa.Column('created_at', postgresql.TIMESTAMP(), server_default=sa.text('now()'), autoincrement=False, nullable=False), sa.Column('updated_at', postgresql.TIMESTAMP(), server_default=sa.text('now()'), autoincrement=False, nullable=False), sa.PrimaryKeyConstraint('id', name='entities_pkey'), postgresql_ignore_search_path=False ) op.create_table('memberships', sa.Column('id', sa.UUID(), autoincrement=False, nullable=False), sa.Column('user_id', sa.UUID(), autoincrement=False, nullable=False), sa.Column('organization_id', sa.UUID(), autoincrement=False, nullable=False), sa.Column('role', postgresql.ENUM('ADMIN', 'MEMBER', name='organizationrole'), autoincrement=False, nullable=False), sa.Column('created_at', postgresql.TIMESTAMP(), server_default=sa.text('now()'), autoincrement=False, nullable=False), sa.Column('updated_at', postgresql.TIMESTAMP(), server_default=sa.text('now()'), autoincrement=False, nullable=False), sa.ForeignKeyConstraint(['organization_id'], ['organizations.id'], name='memberships_organization_id_fkey'), sa.ForeignKeyConstraint(['user_id'], ['users.id'], name='memberships_user_id_fkey'), sa.PrimaryKeyConstraint('id', name='memberships_pkey') ) # ### end Alembic commands ### ``` ## /backend/aci/alembic/versions/2025_04_15_0929-c5978747c602_add_encrypted_key_and_key_hmac_columns_.py ```py 
path="/backend/aci/alembic/versions/2025_04_15_0929-c5978747c602_add_encrypted_key_and_key_hmac_columns_.py" """Add encrypted_key and key_hmac columns for api_keys table Revision ID: c5978747c602 Revises: bce2fbe6273b Create Date: 2025-04-15 09:29:41.112876+00:00 """ from typing import Sequence, Union from alembic import op import sqlalchemy as sa from aci.common.db.custom_sql_types import Key # revision identifiers, used by Alembic. revision: str = 'c5978747c602' down_revision: Union[str, None] = 'bce2fbe6273b' branch_labels: Union[str, Sequence[str], None] = None depends_on: Union[str, Sequence[str], None] = None def upgrade() -> None: # ### commands auto generated by Alembic - please adjust! ### op.add_column('api_keys', sa.Column('encrypted_key', Key(), nullable=True)) op.add_column('api_keys', sa.Column('key_hmac', sa.String(length=64), nullable=True)) op.create_unique_constraint('api_keys_encrypted_key_key', 'api_keys', ['encrypted_key']) op.create_unique_constraint('api_keys_key_hmac_key', 'api_keys', ['key_hmac']) # ### end Alembic commands ### def downgrade() -> None: # ### commands auto generated by Alembic - please adjust! ### op.drop_constraint('api_keys_encrypted_key_key', 'api_keys', type_='unique') op.drop_constraint('api_keys_key_hmac_key', 'api_keys', type_='unique') op.drop_column('api_keys', 'key_hmac') op.drop_column('api_keys', 'encrypted_key') # ### end Alembic commands ### ``` ## /backend/aci/alembic/versions/2025_04_15_0930-0846452f51ac_drop_the_key_column_of_the_api_keys_.py ```py path="/backend/aci/alembic/versions/2025_04_15_0930-0846452f51ac_drop_the_key_column_of_the_api_keys_.py" """Drop the key column of the api_keys table Revision ID: 0846452f51ac Revises: c5978747c602 Create Date: 2025-04-15 09:30:48.205741+00:00 """ from typing import Sequence, Union from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision: str = '0846452f51ac' down_revision: Union[str, None] = 'c5978747c602' branch_labels: Union[str, Sequence[str], None] = None depends_on: Union[str, Sequence[str], None] = None def upgrade() -> None: # ### commands auto generated by Alembic - please adjust! ### op.drop_constraint('api_keys_key_key', 'api_keys', type_='unique') op.drop_column('api_keys', 'key') # ### end Alembic commands ### def downgrade() -> None: # ### commands auto generated by Alembic - please adjust! ### op.add_column('api_keys', sa.Column('key', sa.VARCHAR(length=255), autoincrement=False, nullable=False)) op.create_unique_constraint('api_keys_key_key', 'api_keys', ['key']) # ### end Alembic commands ### ``` ## /backend/aci/alembic/versions/2025_04_15_0932-7ecafab6f8f9_rename_encrypted_key_column_to_key_and_.py ```py path="/backend/aci/alembic/versions/2025_04_15_0932-7ecafab6f8f9_rename_encrypted_key_column_to_key_and_.py" """Rename encrypted_key column to key and make encrypted_key and key_hmac not nullable Revision ID: 7ecafab6f8f9 Revises: 0846452f51ac Create Date: 2025-04-15 09:32:55.974507+00:00 """ from typing import Sequence, Union from alembic import op # revision identifiers, used by Alembic. revision: str = '7ecafab6f8f9' down_revision: Union[str, None] = '0846452f51ac' branch_labels: Union[str, Sequence[str], None] = None depends_on: Union[str, Sequence[str], None] = None def upgrade() -> None: # ### commands auto generated by Alembic - please adjust! 
### op.alter_column('api_keys', 'encrypted_key', new_column_name='key', nullable=False) op.alter_column('api_keys', 'key_hmac', nullable=False) # ### end Alembic commands ### def downgrade() -> None: # ### commands auto generated by Alembic - please adjust! ### op.alter_column('api_keys', 'key', new_column_name='encrypted_key', nullable=True) op.alter_column('api_keys', 'key_hmac', nullable=True) # ### end Alembic commands ### ``` ## /backend/aci/cli/__init__.py ```py path="/backend/aci/cli/__init__.py" ``` ## /backend/aci/cli/__main__.py ```py path="/backend/aci/cli/__main__.py" from aci.cli.aci import cli if __name__ == "__main__": cli() ``` ## /backend/aci/cli/aci.py ```py path="/backend/aci/cli/aci.py" import click from aci.cli.commands import ( create_agent, create_project, create_random_api_key, delete_app, fuzzy_test_function_execution, get_app, rename_app, update_agent, upsert_app, upsert_functions, ) from aci.common.logging_setup import setup_logging @click.group(context_settings={"help_option_names": ["-h", "--help"]}) def cli() -> None: """AIPO CLI Tool""" setup_logging() # Add commands to the group cli.add_command(create_project.create_project) cli.add_command(create_agent.create_agent) cli.add_command(update_agent.update_agent) cli.add_command(upsert_app.upsert_app) cli.add_command(get_app.get_app) cli.add_command(rename_app.rename_app) cli.add_command(delete_app.delete_app) cli.add_command(upsert_functions.upsert_functions) cli.add_command(create_random_api_key.create_random_api_key) cli.add_command(fuzzy_test_function_execution.fuzzy_test_function_execution) if __name__ == "__main__": cli() ``` ## /backend/aci/cli/commands/__init__.py ```py path="/backend/aci/cli/commands/__init__.py" ``` ## /backend/aci/cli/commands/create_agent.py ```py path="/backend/aci/cli/commands/create_agent.py" import json from uuid import UUID import click from rich.console import Console from aci.cli import config from aci.common import utils from aci.common.db import crud console = Console() @click.command() @click.option( "--project-id", "project_id", required=True, type=UUID, help="project id under which the agent is created", ) @click.option( "--name", "name", required=True, help="agent name", ) @click.option( "--description", "description", required=True, help="agent description", ) @click.option( "--allowed-apps", "allowed_apps", required=False, default="", help="comma-separated list of app names to allow the agent to access (e.g., 'app1,app2,app3')", ) @click.option( "--custom-instructions", "custom_instructions", required=False, default="{}", type=str, help="function level custom instructions for the agent", ) @click.option( "--skip-dry-run", is_flag=True, help="provide this flag to run the command and apply changes to the database", ) def create_agent( project_id: UUID, name: str, description: str, allowed_apps: str, custom_instructions: str, skip_dry_run: bool, ) -> UUID: """ Create an agent in db. 
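    Example invocation (illustrative; the UUID and app names are placeholders):

        python -m aci.cli create-agent \
            --project-id 00000000-0000-0000-0000-000000000000 \
            --name "Test Agent" \
            --description "agent for local testing" \
            --allowed-apps "BRAVE_SEARCH,GITHUB" \
            --skip-dry-run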
""" # Parse comma-separated string into list, handling empty string case list_of_allowed_apps = [app.strip() for app in allowed_apps.split(",")] if allowed_apps else [] return create_agent_helper( project_id, name, description, list_of_allowed_apps, json.loads(custom_instructions), skip_dry_run, ) def create_agent_helper( project_id: UUID, name: str, description: str, allowed_apps: list[str], custom_instructions: dict[str, str], skip_dry_run: bool, ) -> UUID: with utils.create_db_session(config.DB_FULL_URL) as db_session: agent = crud.projects.create_agent( db_session, project_id, name, description, allowed_apps, custom_instructions, ) if not skip_dry_run: console.rule("[bold green]Provide --skip-dry-run to Create Agent[/bold green]") db_session.rollback() else: db_session.commit() console.rule(f"[bold green]Agent created: {agent.name}[/bold green]") console.print(agent) return agent.id ``` ## /backend/aci/cli/commands/create_project.py ```py path="/backend/aci/cli/commands/create_project.py" from uuid import UUID import click from rich.console import Console from aci.cli import config from aci.common import utils from aci.common.db import crud from aci.common.enums import Visibility console = Console() @click.command() @click.option( "--name", "name", required=True, help="project name", ) @click.option( "--org-id", "org_id", required=True, type=UUID, help="organization id", ) @click.option( "--visibility-access", "visibility_access", required=True, type=Visibility, help="visibility access of the project, if 'public', the project can only access public apps and functions", ) @click.option( "--skip-dry-run", is_flag=True, help="provide this flag to run the command and apply changes to the database", ) def create_project( name: str, org_id: UUID, visibility_access: Visibility, skip_dry_run: bool, ) -> UUID: """ Create a project in db. Note this is a privileged command, as it can create projects under any user or organization. """ return create_project_helper(name, org_id, visibility_access, skip_dry_run) def create_project_helper( name: str, org_id: UUID, visibility_access: Visibility, skip_dry_run: bool, ) -> UUID: with utils.create_db_session(config.DB_FULL_URL) as db_session: project = crud.projects.create_project(db_session, org_id, name, visibility_access) if not skip_dry_run: console.rule( f"[bold green]Provide --skip-dry-run to Create Project: {project.name}[/bold green]" ) db_session.rollback() else: db_session.commit() console.rule(f"[bold green]Project created: {project.name}[/bold green]") console.print(project) return project.id ``` ## /backend/aci/cli/commands/create_random_api_key.py ```py path="/backend/aci/cli/commands/create_random_api_key.py" """ A convenience command to create a test api key for local development. 
This will:
- create a new dummy project under a randomly generated org id
- create a new dummy agent in the project
"""

import json
import uuid
from pathlib import Path

import click
from rich.console import Console

from aci.cli import config
from aci.cli.commands import create_agent, create_project
from aci.common import utils
from aci.common.db import crud
from aci.common.db.sql_models import APIKey
from aci.common.enums import Visibility

console = Console()


@click.option(
    "--visibility-access",
    "visibility_access",
    required=True,
    type=Visibility,
    help="visibility access of the project that the api key belongs to, either 'public' or 'private'",
)
@click.command()
def create_random_api_key(visibility_access: Visibility) -> str:
    """Create a random test api key for local development."""
    return create_random_api_key_helper(visibility_access)


def create_random_api_key_helper(visibility_access: Visibility) -> str:
    # cannot do a dry run here because of the dependencies between the steps below
    skip_dry_run = True

    random_id = str(uuid.uuid4())[:8]  # Get first 8 chars of UUID
    project_id = create_project.create_project_helper(
        name=f"Test Project {random_id}",
        org_id=uuid.uuid4(),
        visibility_access=visibility_access,
        skip_dry_run=skip_dry_run,
    )

    # Load app names from app.json files
    allowed_apps = []
    for app_file in Path("./apps").glob("*/app.json"):
        with open(app_file) as f:
            app_data = json.load(f)
            allowed_apps.append(app_data["name"])

    agent_id = create_agent.create_agent_helper(
        project_id=project_id,
        name=f"Test Agent {random_id}",
        description=f"Test Agent {random_id}",
        allowed_apps=allowed_apps,
        custom_instructions={},
        skip_dry_run=skip_dry_run,
    )

    # get the api key by agent id
    with utils.create_db_session(config.DB_FULL_URL) as db_session:
        api_key: APIKey | None = crud.projects.get_api_key_by_agent_id(db_session, agent_id)
        if not api_key:
            raise ValueError(f"API key with agent ID {agent_id} not found")

        console.rule("[bold green]Summary[/bold green]")
        console.print(
            {
                "Project Id": str(project_id),
                "Agent Id": str(agent_id),
                "API Key": str(api_key.key),
            }
        )
        return str(api_key.key)
```

## /backend/aci/cli/commands/delete_app.py

```py path="/backend/aci/cli/commands/delete_app.py"
import click
from rich.console import Console

from aci.cli import config
from aci.common import utils
from aci.common.db import crud

console = Console()


@click.command()
@click.option(
    "--app-name",
    "app_name",
    required=True,
    help="Name of the app to delete",
)
@click.option(
    "--skip-dry-run",
    is_flag=True,
    help="Provide this flag to run the command and apply changes to the database",
)
def delete_app(
    app_name: str,
    skip_dry_run: bool,
) -> None:
    """
    Delete an app and all its references from the database.

    This command will:
    1. Delete all functions associated with the app
    2. Delete linked accounts associated with the app
    3. Delete app configurations referencing the app
    4. Update agents that reference the app in allowed_apps or custom_instructions
    5. Delete the app itself

    WARNING: This operation cannot be undone.
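    Example invocation (illustrative; the app name is a placeholder):

        python -m aci.cli delete-app --app-name BRAVE_SEARCH --skip-dry-run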
""" # if skip dry run, warn user if skip_dry_run: console.print( "[bold red]WARNING: This operation will delete all data associated with the app " "including functions, linked accounts, app configurations, and agents's allowed_apps " "and custom_instructions.[/bold red]" ) if not click.confirm("Are you sure you want to continue?", default=False): raise click.Abort() with utils.create_db_session(config.DB_FULL_URL) as db_session: # Check if app exists app = crud.apps.get_app( db_session, app_name, public_only=False, active_only=False, ) if app is None: raise click.ClickException(f"App '{app_name}' not found") # Get associated data that will be affected functions = crud.functions.get_functions_by_app_id(db_session, app.id) app_configurations = crud.app_configurations.get_app_configurations_by_app_id( db_session, app.id ) agents = crud.projects.get_agents_whose_allowed_apps_contains(db_session, app_name) # Get linked accounts linked_accounts = crud.linked_accounts.get_linked_accounts_by_app_id(db_session, app.id) if not skip_dry_run: console.rule("[bold yellow]Dry run mode - no changes applied[/bold yellow]") try: # 1. Update agents - remove from allowed_apps and custom_instructions for agent in agents: # Remove app from allowed_apps agent.allowed_apps = [app for app in agent.allowed_apps if app != app_name] console.print(f"Removed '{app_name}' from allowed_apps for agent {agent.id}") # Remove custom instructions for this app keys_to_remove = [ key for key in agent.custom_instructions if key.startswith(f"{app_name}__") ] for key in keys_to_remove: del agent.custom_instructions[key] console.print(f"Removed custom instruction '{key}' for agent {agent.id}") # 2. Delete linked accounts for linked_account in linked_accounts: db_session.delete(linked_account) console.print( f"Deleted linked account {linked_account.id} for project {linked_account.project_id}" ) # 3. Delete app configurations for app_config in app_configurations: db_session.delete(app_config) console.print( f"Deleted app configuration of {app_config.app_name} for project {app_config.project_id}" ) # 4. Delete functions (SQLAlchemy will handle this via cascade) for function in functions: console.print(f"Function '{function.name}' will be deleted with app") # 5. 
Delete the app (will cascade to functions) db_session.delete(app) console.print(f"Deleted app '{app_name}'") # Commit changes if skip_dry_run: db_session.commit() console.rule(f"[bold green]Successfully deleted app '{app_name}'[/bold green]") else: console.rule( "[bold yellow]Run with [bold green]--skip-dry-run[/bold green] to apply these changes[/bold yellow]" ) db_session.rollback() except Exception as e: db_session.rollback() console.print(f"[bold red]Error deleting app: {e}[/bold red]") ``` ## /backend/aci/cli/commands/fuzzy_test_function_execution.py ```py path="/backend/aci/cli/commands/fuzzy_test_function_execution.py" """Sanity check function execution with GPT-generated inputs.""" import json from typing import Any from uuid import UUID import click import httpx from openai import OpenAI from rich.console import Console from aci.cli import config from aci.common.enums import FunctionDefinitionFormat from aci.common.schemas.function import FunctionExecute console = Console() @click.command() @click.option( "--function-name", "function_name", required=True, type=str, help="Name of the function to test", ) @click.option( "--aci-api-key", "aci_api_key", required=True, type=str, help="ACI API key to use for authentication", ) @click.option( "--linked-account-owner-id", "linked_account_owner_id", required=True, type=str, help="ID of the linked account owner to use for authentication", ) @click.option( "--prompt", "prompt", type=str, help="Prompt for LLM to generate function call arguments", ) @click.option( "--model", "model", type=str, required=False, default="gpt-4o", help="LLM model to use for function call arguments generation", ) def fuzzy_test_function_execution( aci_api_key: str, function_name: str, model: str, linked_account_owner_id: UUID, prompt: str | None = None, ) -> None: """Test function execution with GPT-generated inputs.""" return fuzzy_test_function_execution_helper( aci_api_key, function_name, model, linked_account_owner_id, prompt ) def fuzzy_test_function_execution_helper( aci_api_key: str, function_name: str, model: str, linked_account_owner_id: UUID, prompt: str | None = None, ) -> None: """Test function execution with GPT-generated inputs.""" # Get function definition response = httpx.get( f"{config.SERVER_URL}/v1/functions/{function_name}/definition", params={"format": FunctionDefinitionFormat.OPENAI}, headers={"x-api-key": aci_api_key}, ) if response.status_code != 200: raise click.ClickException(f"Failed to get function definition: {response.json()}") function_definition = response.json() console.rule("[bold green]Function definition Fetched[/bold green]") console.print(function_definition) # Use OpenAI function calling to generate a random input openai_client = OpenAI(api_key=config.OPENAI_API_KEY) function_args = _generate_fuzzy_function_call_arguments( openai_client, model, function_definition, prompt=prompt ) console.rule("[bold green]Generated Function Call Arguments[/bold green]") console.print(function_args) # Execute function with generated input function_execute = FunctionExecute( function_input=function_args, linked_account_owner_id=str(linked_account_owner_id) ) response = httpx.post( f"{config.SERVER_URL}/v1/functions/{function_name}/execute", json=function_execute.model_dump(mode="json"), headers={"x-api-key": aci_api_key}, timeout=30.0, ) if response.status_code != 200: raise click.ClickException(f"Function execution failed: {response.json()}") result = response.json() console.rule(f"[bold green]Execution Result for {function_name}[/bold 
green]") console.print(result) def _generate_fuzzy_function_call_arguments( openai_client: OpenAI, model: str, function_definition: dict, prompt: str | None = None, ) -> Any: """ Generate fuzzy input arguments for a function with LLM. """ messages = [ { "role": "system", "content": "You are a helpful assistant that generates test inputs for API functions. Generate reasonable test values that would work with the function.", }, { "role": "user", "content": f"Generate test input for this function {function_definition['function']['name']}, definition provided to you separately.", }, ] if prompt: messages.append( { "role": "user", "content": prompt, } ) response = openai_client.chat.completions.create( model=model, messages=messages, tools=[function_definition], tool_choice="required", # force the model to generate a tool call ) # type: ignore tool_call = ( response.choices[0].message.tool_calls[0] if response.choices[0].message.tool_calls else None ) if tool_call: if tool_call.function.name != function_definition["function"]["name"]: console.print( f"[bold red]Generated function name {tool_call.function.name} does not match expected function name {function_definition['function']['name']}[/bold red]" ) raise click.ClickException( "Generated function name does not match expected function name" ) else: return json.loads(tool_call.function.arguments) else: console.print("[bold red]No tool call was generated[/bold red]") raise click.ClickException("No tool call was generated") ``` ## /backend/aci/cli/commands/get_app.py ```py path="/backend/aci/cli/commands/get_app.py" import json import click from rich.console import Console from rich.syntax import Syntax from aci.cli import config from aci.common import utils from aci.common.db import crud console = Console() @click.command() @click.option( "--app-name", "app_name", required=True, help="Name of the app to retrieve", ) def get_app( app_name: str, ) -> None: """ Get an app by name from the database. """ with utils.create_db_session(config.DB_FULL_URL) as db_session: app = crud.apps.get_app( db_session, app_name, public_only=False, active_only=False, ) if app is None: console.rule(f"[bold red]App '{app_name}' not found[/bold red]") return console.rule(f"[bold green]App: {app.name}[/bold green]") # print without excluded fields excluded_fields = ["functions", "_sa_instance_state"] app_dict = {} for key, value in vars(app).items(): if key not in excluded_fields: app_dict[key] = value # Add function count app_dict["function_count"] = len(app.functions) if hasattr(app, "functions") else 0 # Convert to JSON string with nice formatting json_str = json.dumps(app_dict, indent=2, default=str) # Print with syntax highlighting console.print(Syntax(json_str, "json", theme="monokai")) ``` ## /backend/aci/cli/commands/rename_app.py ```py path="/backend/aci/cli/commands/rename_app.py" from copy import deepcopy import click from rich.console import Console from aci.cli import config from aci.common import utils from aci.common.db import crud console = Console() @click.command() @click.option( "--current-name", "current_name", required=True, help="Current name of the app to rename", ) @click.option( "--new-name", "new_name", required=True, help="New name for the app", ) @click.option( "--skip-dry-run", is_flag=True, help="Provide this flag to run the command and apply changes to the database", ) def rename_app( current_name: str, new_name: str, skip_dry_run: bool, ) -> None: """ Rename an app and update all related table entities. 
This command changes the app name and updates all functions that begin with the app name prefix. It also updates any references to the app in other tables like AppConfigurations and Agents. """ # if skip dry run, warn user if skip_dry_run: console.print( "[bold red]WARNING: This operation will change the name of the app and all data " "associated with the app including functions, linked accounts, app configurations, " "and agents' allowed_apps and custom_instructions.[/bold red]" ) if not click.confirm("Are you sure you want to continue?", default=False): raise click.Abort() with utils.create_db_session(config.DB_FULL_URL) as db_session: # Check if old app exists app = crud.apps.get_app( db_session, current_name, public_only=False, active_only=False, ) if app is None: raise click.ClickException(f"App '{current_name}' not found") # Check if new app name already exists new_app = crud.apps.get_app( db_session, new_name, public_only=False, active_only=False, ) if new_app is not None: raise click.ClickException(f"App with name '{new_name}' already exists") # Get functions that need to be renamed functions = crud.functions.get_functions_by_app_id(db_session, app.id) # Get app configurations that need to be updated app_configurations = crud.app_configurations.get_app_configurations_by_app_id( db_session, app.id ) # Get agents that include this app in allowed_apps agents = crud.projects.get_agents_whose_allowed_apps_contains(db_session, current_name) if not skip_dry_run: console.rule("[bold yellow]Dry run mode - no changes applied[/bold yellow]") try: # Update app name app.name = new_name console.print(f"Updating app name from '{current_name}' to '{new_name}'") # Update function names for function in functions: assert function.name.startswith(f"{current_name}__") new_function_name = function.name.replace(f"{current_name}__", f"{new_name}__", 1) console.print( f"Updating function name from '{function.name}' to '{new_function_name}'" ) function.name = new_function_name # Update app configurations' enabled_functions (if the functions are from the app) for app_config in app_configurations: # Update enabled_functions if they contain the old app name for i, func_name in enumerate(app_config.enabled_functions): if func_name.startswith(f"{current_name}__"): new_func_name = func_name.replace(f"{current_name}__", f"{new_name}__", 1) app_config.enabled_functions[i] = new_func_name console.print( f"Updating enabled_functions from '{func_name}' to '{new_func_name}' for app configuration {app_config.id}" ) # Update agents allowed_apps for agent in agents: # Directly modify the allowed_apps list in place for i, app_name in enumerate(agent.allowed_apps): if app_name == current_name: agent.allowed_apps[i] = new_name console.print( f"Updating allowed_apps from '{app_name}' to '{new_name}' for agent {agent.id}" ) # Update custom_instructions if they reference the old app name new_custom_instructions = deepcopy(agent.custom_instructions) # Find keys that need to be updated before modifying the dictionary keys_to_update = [ key for key in new_custom_instructions.keys() if key.startswith(f"{current_name}__") ] for func_name in keys_to_update: new_func_name = func_name.replace(f"{current_name}__", f"{new_name}__", 1) new_custom_instructions[new_func_name] = new_custom_instructions[func_name] del new_custom_instructions[func_name] console.print( f"Updating custom_instructions from '{func_name}' to '{new_func_name}' for agent {agent.id}" ) agent.custom_instructions = new_custom_instructions # Commit changes if not
skip_dry_run: console.rule( "[bold yellow]Run with [bold green]--skip-dry-run[/bold green] to apply these changes[/bold yellow]" ) else: db_session.commit() console.rule( f"[bold green]Successfully renamed app from '{current_name}' to '{new_name}'[/bold green]" ) except Exception as e: db_session.rollback() console.print(f"[bold red]Error renaming app: {e}[/bold red]") ``` ## /backend/aci/cli/commands/update_agent.py ```py path="/backend/aci/cli/commands/update_agent.py" import json from uuid import UUID import click from rich.console import Console from aci.cli import config from aci.common import utils from aci.common.db import crud from aci.common.exceptions import AgentNotFound from aci.common.schemas.agent import AgentUpdate console = Console() # TODO: Make an upsert update agent command so you can use json files to update the agent @click.command() @click.option("--agent-id", "agent_id", required=True, type=UUID, help="id of the agent to update") @click.option( "--name", "name", required=False, help="new agent name", ) @click.option( "--description", "description", required=False, help="new agent description", ) @click.option( "--allowed-apps", "allowed_apps", required=False, help="comma-separated list of app names to allow the agent to access (e.g., 'app1,app2,app3')", ) @click.option( "--custom-instructions", "custom_instructions", required=False, type=str, help="new custom instructions for the agent", ) @click.option( "--skip-dry-run", is_flag=True, help="provide this flag to run the command and apply changes to the database", ) def update_agent( agent_id: UUID, name: str | None, description: str | None, allowed_apps: str | None, custom_instructions: str | None, skip_dry_run: bool, ) -> UUID: """ Update an existing agent in db. """ list_of_allowed_apps = ( [app.strip() for app in allowed_apps.split(",")] if allowed_apps is not None else None ) return update_agent_helper( agent_id, name, description, list_of_allowed_apps, json.loads(custom_instructions) if custom_instructions else None, skip_dry_run, ) def update_agent_helper( agent_id: UUID, name: str | None, description: str | None, allowed_apps: list[str] | None, custom_instructions: dict[str, str] | None, skip_dry_run: bool, ) -> UUID: with utils.create_db_session(config.DB_FULL_URL) as db_session: agent = crud.projects.get_agent_by_id(db_session, agent_id) if not agent: raise AgentNotFound(f"agent={agent_id} not found.") update = AgentUpdate( name=name, description=description, allowed_apps=allowed_apps, custom_instructions=custom_instructions, ) updated_agent = crud.projects.update_agent(db_session, agent, update) if not skip_dry_run: console.rule( f"[bold green]Provide --skip-dry-run to Update Agent: {updated_agent.name}[/bold green]" ) db_session.rollback() else: db_session.commit() console.rule(f"[bold green]Updated Agent: {updated_agent.name}[/bold green]") console.print(updated_agent) return updated_agent.id ``` ## /backend/aci/cli/commands/upsert_app.py ```py path="/backend/aci/cli/commands/upsert_app.py" import json from pathlib import Path from uuid import UUID import click from deepdiff import DeepDiff from jinja2 import Environment, FileSystemLoader, StrictUndefined, Template from openai import OpenAI from rich.console import Console from sqlalchemy.orm import Session from aci.cli import config from aci.common import embeddings, utils from aci.common.db import crud from aci.common.db.sql_models import App from aci.common.schemas.app import AppEmbeddingFields, AppUpsert console = Console() openai_client = 
OpenAI(api_key=config.OPENAI_API_KEY) @click.command() @click.option( "--app-file", "app_file", required=True, type=click.Path(exists=True, path_type=Path), help="Path to the app JSON file", ) @click.option( "--secrets-file", "secrets_file", type=click.Path(exists=True, path_type=Path), default=None, show_default=True, help="Path to the secrets JSON file", ) @click.option( "--skip-dry-run", is_flag=True, help="Provide this flag to run the command and apply changes to the database", ) def upsert_app(app_file: Path, secrets_file: Path | None, skip_dry_run: bool) -> UUID: """ Insert or update an App in the DB from a JSON file, optionally injecting secrets. If an app with the given name already exists, performs an update; otherwise, creates a new app. To change the name of an existing app, use the `rename-app` command instead. """ with utils.create_db_session(config.DB_FULL_URL) as db_session: return upsert_app_helper(db_session, app_file, secrets_file, skip_dry_run) def upsert_app_helper( db_session: Session, app_file: Path, secrets_file: Path | None, skip_dry_run: bool ) -> UUID: # Load secrets if provided secrets = {} if secrets_file: with open(secrets_file) as f: secrets = json.load(f) # Render the template in-memory and load JSON data try: rendered_content = _render_template_to_string(app_file, secrets) except Exception as e: console.print(f"[bold red]Error rendering template, failed to upsert app: {e}[/bold red]") raise e app_upsert = AppUpsert.model_validate(json.loads(rendered_content)) existing_app = crud.apps.get_app( db_session, app_upsert.name, public_only=False, active_only=False ) if existing_app is None: return create_app_helper(db_session, app_upsert, skip_dry_run) else: return update_app_helper( db_session, existing_app, app_upsert, skip_dry_run, ) def create_app_helper(db_session: Session, app_upsert: AppUpsert, skip_dry_run: bool) -> UUID: # Generate app embedding using the fields defined in AppEmbeddingFields app_embedding = embeddings.generate_app_embedding( AppEmbeddingFields.model_validate(app_upsert.model_dump()), openai_client, config.OPENAI_EMBEDDING_MODEL, config.OPENAI_EMBEDDING_DIMENSION, ) # Create the app entry in the database app = crud.apps.create_app(db_session, app_upsert, app_embedding) if not skip_dry_run: console.rule(f"Provide [bold green]--skip-dry-run[/bold green] to create App={app.name}") db_session.rollback() else: db_session.commit() console.rule(f"Created App={app.name}") return app.id def update_app_helper( db_session: Session, existing_app: App, app_upsert: AppUpsert, skip_dry_run: bool ) -> UUID: """ Update an existing app in the database. If fields used for generating embeddings (name, display_name, provider, description, categories) are changed, re-generates the app embedding.
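Illustration (field names per AppEmbeddingFields; the scenario is hypothetical): editing `description` touches an embedding field, so a new embedding is generated; flipping `active` or `visibility` does not, so the existing embedding is kept.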
""" existing_app_upsert = AppUpsert.model_validate(existing_app, from_attributes=True) if existing_app_upsert == app_upsert: console.rule(f"App={existing_app.name} exists and is up to date") return existing_app.id else: console.rule(f"App={existing_app.name} exists and will be updated") # Determine if any fields affecting the embedding have changed new_embedding = None if _need_embedding_regeneration(existing_app_upsert, app_upsert): new_embedding = embeddings.generate_app_embedding( AppEmbeddingFields.model_validate(app_upsert.model_dump()), openai_client, config.OPENAI_EMBEDDING_MODEL, config.OPENAI_EMBEDDING_DIMENSION, ) # Update the app in the database with the new fields and optional embedding update updated_app = crud.apps.update_app(db_session, existing_app, app_upsert, new_embedding) diff = DeepDiff(existing_app_upsert.model_dump(), app_upsert.model_dump(), ignore_order=True) if not skip_dry_run: console.rule( f"Provide [bold green]--skip-dry-run[/bold green] to update App={existing_app.name} with the following changes:" ) db_session.rollback() else: db_session.commit() console.rule(f"Updated App={existing_app.name}") console.print(diff.pretty()) return updated_app.id def _render_template_to_string(template_path: Path, secrets: dict[str, str]) -> str: """ Render a Jinja2 template with the provided secrets and return as string. """ env = Environment( loader=FileSystemLoader(template_path.parent), undefined=StrictUndefined, # Raise error if any placeholders are missing autoescape=False, trim_blocks=True, lstrip_blocks=True, ) template: Template = env.get_template(template_path.name) rendered_content: str = template.render(secrets) return rendered_content def _need_embedding_regeneration(old_app: AppUpsert, new_app: AppUpsert) -> bool: fields = set(AppEmbeddingFields.model_fields.keys()) return bool(old_app.model_dump(include=fields) != new_app.model_dump(include=fields)) ``` ## /backend/aci/cli/commands/upsert_functions.py ```py path="/backend/aci/cli/commands/upsert_functions.py" import json from pathlib import Path import click from deepdiff import DeepDiff from openai import OpenAI from rich.console import Console from rich.table import Table from sqlalchemy.orm import Session from aci.cli import config from aci.common import embeddings, utils from aci.common.db import crud from aci.common.schemas.function import FunctionEmbeddingFields, FunctionUpsert console = Console() openai_client = OpenAI(api_key=config.OPENAI_API_KEY) @click.command() @click.option( "--functions-file", "functions_file", required=True, type=click.Path(exists=True, path_type=Path), help="Path to the functions JSON file", ) @click.option( "--skip-dry-run", is_flag=True, help="Provide this flag to run the command and apply changes to the database", ) def upsert_functions(functions_file: Path, skip_dry_run: bool) -> list[str]: """ Upsert functions in the DB from a JSON file. This command groups the functions into three categories: - New functions to create, - Existing functions that require an update, - Functions that are unchanged. Batch creation and update operations are performed. 
""" return upsert_functions_helper(functions_file, skip_dry_run) def upsert_functions_helper(functions_file: Path, skip_dry_run: bool) -> list[str]: with utils.create_db_session(config.DB_FULL_URL) as db_session: with open(functions_file) as f: functions_data = json.load(f) # Validate and parse each function record functions_upsert = [ FunctionUpsert.model_validate(func_data) for func_data in functions_data ] app_name = _validate_all_functions_belong_to_the_app(functions_upsert) console.rule(f"App={app_name}") _validate_app_exists(db_session, app_name) new_functions: list[FunctionUpsert] = [] existing_functions: list[FunctionUpsert] = [] for function_upsert in functions_upsert: existing_function = crud.functions.get_function( db_session, function_upsert.name, public_only=False, active_only=False ) if existing_function is None: new_functions.append(function_upsert) else: existing_functions.append(function_upsert) console.rule("Checking functions to create...") functions_created = create_functions_helper(db_session, new_functions) console.rule("Checking functions to update...") functions_updated = update_functions_helper(db_session, existing_functions) # for functions that are in existing_functions but not in functions_updated functions_unchanged = [ func.name for func in existing_functions if func.name not in functions_updated ] if not skip_dry_run: console.rule("Provide [bold green]--skip-dry-run[/bold green] to upsert functions") db_session.rollback() else: db_session.commit() console.rule("[bold green]Upserted functions[/bold green]") table = Table("Function Name", "Operation") for func in functions_created: table.add_row(func, "Create") for func in functions_updated: table.add_row(func, "Update") for func in functions_unchanged: table.add_row(func, "No changes") console.print(table) return functions_created + functions_updated def create_functions_helper( db_session: Session, functions_upsert: list[FunctionUpsert] ) -> list[str]: """ Batch creates functions in the database. Generates embeddings for each new function and calls the CRUD layer for creation. Returns a list of created function names. """ functions_embeddings = embeddings.generate_function_embeddings( [FunctionEmbeddingFields.model_validate(func.model_dump()) for func in functions_upsert], openai_client, embedding_model=config.OPENAI_EMBEDDING_MODEL, embedding_dimension=config.OPENAI_EMBEDDING_DIMENSION, ) created_functions = crud.functions.create_functions( db_session, functions_upsert, functions_embeddings ) return [func.name for func in created_functions] def update_functions_helper( db_session: Session, functions_upsert: list[FunctionUpsert] ) -> list[str]: """ Batch updates functions in the database. For each function to update, determines if the embedding needs to be regenerated. Regenerates embeddings in batch for those that require it and updates the functions accordingly. Returns a list of updated function names. 
""" functions_with_new_embeddings: list[FunctionUpsert] = [] functions_without_new_embeddings: list[FunctionUpsert] = [] for function_upsert in functions_upsert: existing_function = crud.functions.get_function( db_session, function_upsert.name, public_only=False, active_only=False ) if existing_function is None: raise click.ClickException(f"Function '{function_upsert.name}' not found.") existing_function_upsert = FunctionUpsert.model_validate( existing_function, from_attributes=True ) if existing_function_upsert == function_upsert: continue else: diff = DeepDiff( existing_function_upsert.model_dump(), function_upsert.model_dump(), ignore_order=True, ) console.rule( f"Will update function '{existing_function.name}' with the following changes:" ) console.print(diff.pretty()) if _need_function_embedding_regeneration(existing_function_upsert, function_upsert): functions_with_new_embeddings.append(function_upsert) else: functions_without_new_embeddings.append(function_upsert) # Generate new embeddings in batch for functions that require regeneration. functions_embeddings = embeddings.generate_function_embeddings( [ FunctionEmbeddingFields.model_validate(func.model_dump()) for func in functions_with_new_embeddings ], openai_client, embedding_model=config.OPENAI_EMBEDDING_MODEL, embedding_dimension=config.OPENAI_EMBEDDING_DIMENSION, ) # Note: the order matters here because the embeddings need to match the functions functions_updated = crud.functions.update_functions( db_session, functions_with_new_embeddings + functions_without_new_embeddings, functions_embeddings + [None] * len(functions_without_new_embeddings), ) return [func.name for func in functions_updated] def _validate_app_exists(db_session: Session, app_name: str) -> None: app = crud.apps.get_app(db_session, app_name, False, False) if not app: raise click.ClickException(f"App={app_name} does not exist") def _validate_all_functions_belong_to_the_app( functions_upsert: list[FunctionUpsert], ) -> str: app_names = {utils.parse_app_name_from_function_name(func.name) for func in functions_upsert} if len(app_names) != 1: raise click.ClickException( f"All functions must belong to the same app, instead found multiple apps={app_names}" ) return app_names.pop() def _need_function_embedding_regeneration( old_func: FunctionUpsert, new_func: FunctionUpsert ) -> bool: """ Determines if the function embedding should be regenerated based on changes in the fields used for embedding (name, description, parameters). 
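For example (illustrative): changing only `tags` or `active` leaves `model_dump(include=fields)` equal on both sides, so no re-embedding occurs; changing `description` makes the dumps differ and triggers regeneration.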
""" fields = set(FunctionEmbeddingFields.model_fields.keys()) return bool(old_func.model_dump(include=fields) != new_func.model_dump(include=fields)) ``` ## /backend/aci/cli/config.py ```py path="/backend/aci/cli/config.py" from dotenv import load_dotenv from aci.common.utils import check_and_get_env_variable, construct_db_url load_dotenv() OPENAI_API_KEY = check_and_get_env_variable("CLI_OPENAI_API_KEY") OPENAI_EMBEDDING_MODEL = check_and_get_env_variable("CLI_OPENAI_EMBEDDING_MODEL") OPENAI_EMBEDDING_DIMENSION = int(check_and_get_env_variable("CLI_OPENAI_EMBEDDING_DIMENSION")) DB_SCHEME = check_and_get_env_variable("CLI_DB_SCHEME") DB_USER = check_and_get_env_variable("CLI_DB_USER") DB_PASSWORD = check_and_get_env_variable("CLI_DB_PASSWORD") DB_HOST = check_and_get_env_variable("CLI_DB_HOST") DB_PORT = check_and_get_env_variable("CLI_DB_PORT") DB_NAME = check_and_get_env_variable("CLI_DB_NAME") DB_FULL_URL = construct_db_url(DB_SCHEME, DB_USER, DB_PASSWORD, DB_HOST, DB_PORT, DB_NAME) SERVER_URL = check_and_get_env_variable("CLI_SERVER_URL") ``` ## /backend/aci/cli/tests/__init__.py ```py path="/backend/aci/cli/tests/__init__.py" ``` ## /backend/aci/cli/tests/conftest.py ```py path="/backend/aci/cli/tests/conftest.py" import json import logging from collections.abc import Generator from pathlib import Path from typing import cast import pytest from sqlalchemy import inspect from sqlalchemy.engine.reflection import Inspector from sqlalchemy.orm import Session from aci.cli import config # override the rate limit to a high number for testing before importing aipolabs modules from aci.common import utils from aci.common.db.sql_models import Base logger = logging.getLogger(__name__) @pytest.fixture(scope="function") def db_session() -> Generator[Session, None, None]: with utils.create_db_session(config.DB_FULL_URL) as db_session: yield db_session @pytest.fixture(scope="function", autouse=True) def database_setup_and_cleanup(db_session: Session) -> Generator[None, None, None]: """ Setup and cleanup the database for each test case. """ # make sure we are connecting to the local db not the production db # TODO: it's part of the environment separation problem, need to properly set up failsafe prod isolation assert config.DB_HOST in ["localhost", "db"] # Use 'with' to manage the session context inspector = cast(Inspector, inspect(db_session.bind)) # Check if all tables defined in models are created in the db for table in Base.metadata.tables.values(): if not inspector.has_table(table.name): pytest.exit(f"Table {table} does not exist in the database.") # Go through all tables and make sure there are no records in the table # (skip alembic_version table) for table in Base.metadata.tables.values(): if table.name != "alembic_version" and db_session.query(table).count() > 0: pytest.exit(f"Table {table} is not empty.") yield # This allows the test to run # Clean up: Empty all tables after tests in reverse order of creation for table in reversed(Base.metadata.sorted_tables): if table.name != "alembic_version" and db_session.query(table).count() > 0: logger.debug(f"Deleting all records from table {table.name}") db_session.execute(table.delete()) db_session.commit() @pytest.fixture def dummy_app_data() -> dict: return { "name": "GOOGLE_CALENDAR", "display_name": "Google Calendar", "logo": "https://example.com/google-logo.png", "provider": "Google", "version": "3.0.0", "description": "The Google Calendar API is a RESTful API that can be accessed through explicit HTTP calls. 
The API exposes most of the features available in the Google Calendar Web interface.", "security_schemes": { "oauth2": { "location": "header", "name": "Authorization", "prefix": "Bearer", "client_id": "{{ AIPOLABS_GOOGLE_APP_CLIENT_ID }}", "client_secret": "{{ AIPOLABS_GOOGLE_APP_CLIENT_SECRET }}", "scope": "openid email profile https://www.googleapis.com/auth/calendar", "server_metadata_url": "https://accounts.google.com/.well-known/openid-configuration", } }, "default_security_credentials_by_scheme": {}, "categories": ["calendar"], "visibility": "public", "active": True, } @pytest.fixture def dummy_app_secrets_data() -> dict: return { "AIPOLABS_GOOGLE_APP_CLIENT_ID": "dummy_client_id", "AIPOLABS_GOOGLE_APP_CLIENT_SECRET": "dummy_client_secret", } @pytest.fixture def dummy_functions_data() -> list[dict]: return [ { "name": "GOOGLE_CALENDAR__CALENDARLIST_LIST", "description": "Returns the calendars on the user's calendar list", "tags": ["calendar"], "visibility": "public", "active": True, "protocol": "rest", "protocol_data": { "method": "GET", "path": "/users/me/calendarList", "server_url": "https://www.googleapis.com/calendar/v3", }, "parameters": { "type": "object", "properties": { "query": { "type": "object", "description": "query parameters", "properties": { "maxResults": { "type": "integer", "description": "Maximum number of entries returned on one result page. By default the value is 100 entries. The page size can never exceed 250 entries.", "default": 100, } }, "required": [], "visible": ["maxResults"], "additionalProperties": False, }, }, "required": [], "visible": ["query"], "additionalProperties": False, }, } ] @pytest.fixture def dummy_app_file(tmp_path: Path, dummy_app_data: dict) -> Path: dummy_app_file = tmp_path / "app.json" dummy_app_file.write_text(json.dumps(dummy_app_data)) return dummy_app_file @pytest.fixture def dummy_app_secrets_file(tmp_path: Path, dummy_app_secrets_data: dict) -> Path: dummy_app_secrets_file = tmp_path / ".app.secrets.json" dummy_app_secrets_file.write_text(json.dumps(dummy_app_secrets_data)) return dummy_app_secrets_file @pytest.fixture def dummy_functions_file(tmp_path: Path, dummy_functions_data: list[dict]) -> Path: dummy_functions_file = tmp_path / "functions.json" dummy_functions_file.write_text(json.dumps(dummy_functions_data)) return dummy_functions_file ``` ## /backend/aci/cli/tests/test_upsert_app.py ```py path="/backend/aci/cli/tests/test_upsert_app.py" import json from pathlib import Path import pytest from click.testing import CliRunner from sqlalchemy.orm import Session from aci.cli.commands.upsert_app import upsert_app from aci.common.db import crud from aci.common.db.sql_models import SecurityScheme @pytest.mark.parametrize("skip_dry_run", [True, False]) def test_create_app( db_session: Session, dummy_app_data: dict, dummy_app_file: Path, dummy_app_secrets_data: dict, dummy_app_secrets_file: Path, skip_dry_run: bool, ) -> None: runner = CliRunner() command = [ "--app-file", dummy_app_file, "--secrets-file", dummy_app_secrets_file, ] if skip_dry_run: command.append("--skip-dry-run") result = runner.invoke(upsert_app, command) # type: ignore assert result.exit_code == 0, result.output # new record is created by a different db session, so we need to # expire the injected db_session to see the new record db_session.expire_all() app = crud.apps.get_app( db_session, dummy_app_data["name"], public_only=False, active_only=False ) if skip_dry_run: assert app is not None assert app.name == dummy_app_data["name"] assert ( 
app.security_schemes[SecurityScheme.OAUTH2]["client_id"] == dummy_app_secrets_data["AIPOLABS_GOOGLE_APP_CLIENT_ID"] ) assert ( app.security_schemes[SecurityScheme.OAUTH2]["client_secret"] == dummy_app_secrets_data["AIPOLABS_GOOGLE_APP_CLIENT_SECRET"] ) else: assert app is None, "App should not be created for dry run" @pytest.mark.parametrize("skip_dry_run", [True, False]) def test_update_app( db_session: Session, dummy_app_data: dict, dummy_app_file: Path, dummy_app_secrets_data: dict, dummy_app_secrets_file: Path, skip_dry_run: bool, ) -> None: # create the app first test_create_app( db_session, dummy_app_data, dummy_app_file, dummy_app_secrets_data, dummy_app_secrets_file, True, ) # modify the app data new_oauth2_scope = "updated_scope" new_oauth2_client_id = "updated_client_id" new_api_key = {"location": "header", "name": "X-API-KEY"} dummy_app_data["security_schemes"]["oauth2"]["scope"] = new_oauth2_scope dummy_app_secrets_data["AIPOLABS_GOOGLE_APP_CLIENT_ID"] = new_oauth2_client_id dummy_app_data["security_schemes"]["api_key"] = new_api_key # write the modified app data and secrets to the files dummy_app_file.write_text(json.dumps(dummy_app_data)) dummy_app_secrets_file.write_text(json.dumps(dummy_app_secrets_data)) # update the app runner = CliRunner() command = [ "--app-file", dummy_app_file, "--secrets-file", dummy_app_secrets_file, ] if skip_dry_run: command.append("--skip-dry-run") result = runner.invoke(upsert_app, command) # type: ignore assert result.exit_code == 0, result.output db_session.expire_all() app = crud.apps.get_app( db_session, dummy_app_data["name"], public_only=False, active_only=False ) assert app is not None assert app.name == dummy_app_data["name"] if skip_dry_run: assert app.security_schemes[SecurityScheme.OAUTH2]["scope"] == new_oauth2_scope assert app.security_schemes[SecurityScheme.OAUTH2]["client_id"] == new_oauth2_client_id assert app.security_schemes[SecurityScheme.API_KEY] == new_api_key else: # nothing should change for dry run assert ( app.security_schemes[SecurityScheme.OAUTH2]["scope"] == "openid email profile https://www.googleapis.com/auth/calendar" ) assert app.security_schemes[SecurityScheme.OAUTH2]["client_id"] == "dummy_client_id" assert SecurityScheme.API_KEY not in app.security_schemes ``` ## /backend/aci/cli/tests/test_upsert_functions.py ```py path="/backend/aci/cli/tests/test_upsert_functions.py" import json from pathlib import Path import pytest from click.testing import CliRunner from sqlalchemy.orm import Session from aci.cli.commands.upsert_functions import upsert_functions from aci.cli.tests.test_upsert_app import test_create_app from aci.common.db import crud from aci.common.db.sql_models import Function from aci.common.schemas.function import FunctionUpsert @pytest.mark.usefixtures( "dummy_app_data", "dummy_app_file", "dummy_app_secrets_data", "dummy_app_secrets_file", "dummy_functions_data", "dummy_functions_file", ) @pytest.mark.parametrize("skip_dry_run", [True, False]) def test_create_functions( db_session: Session, dummy_app_data: dict, dummy_app_file: Path, dummy_app_secrets_data: dict, dummy_app_secrets_file: Path, dummy_functions_data: list[dict], dummy_functions_file: Path, skip_dry_run: bool, ) -> None: # create the app first test_create_app( db_session, dummy_app_data, dummy_app_file, dummy_app_secrets_data, dummy_app_secrets_file, skip_dry_run=True, ) # create the functions runner = CliRunner() command = [ "--functions-file", dummy_functions_file, ] if skip_dry_run: command.append("--skip-dry-run") result = 
runner.invoke(upsert_functions, command) # type: ignore assert result.exit_code == 0, result.output # check the functions are created db_session.expire_all() functions = [ crud.functions.get_function( db_session, function_data["name"], public_only=False, active_only=False ) for function_data in dummy_functions_data ] functions = [f for f in functions if f is not None] if skip_dry_run: assert len(functions) > 0 assert len(functions) == len(dummy_functions_data) for i, function in enumerate(functions): assert FunctionUpsert.model_validate( function, from_attributes=True ) == FunctionUpsert.model_validate(dummy_functions_data[i]) else: assert len(functions) == 0, "Functions should not be created for dry run" @pytest.mark.parametrize("skip_dry_run", [True, False]) def test_update_functions( db_session: Session, dummy_app_data: dict, dummy_app_file: Path, dummy_app_secrets_data: dict, dummy_app_secrets_file: Path, dummy_functions_data: list[dict], dummy_functions_file: Path, skip_dry_run: bool, ) -> None: # create the functions first test_create_functions( db_session, dummy_app_data, dummy_app_file, dummy_app_secrets_data, dummy_app_secrets_file, dummy_functions_data, dummy_functions_file, skip_dry_run=True, ) # modify the functions data new_description = "UPDATED_DESCRIPTION" new_parameters = { "type": "object", "properties": { "query": { "type": "object", "description": "UPDATED_DESCRIPTION", "properties": { "UPDATED_PROPERTY": { "type": "string", "description": "UPDATED_DESCRIPTION", "default": "UPDATED_DEFAULT", }, }, "required": [], "visible": ["UPDATED_PROPERTY"], "additionalProperties": False, }, }, "required": [], "visible": ["query"], "additionalProperties": False, } for function_data in dummy_functions_data: function_data["description"] = new_description function_data["parameters"] = new_parameters # write the modified functions data to the file dummy_functions_file.write_text(json.dumps(dummy_functions_data)) # update the functions runner = CliRunner() command = [ "--functions-file", dummy_functions_file, ] if skip_dry_run: command.append("--skip-dry-run") result = runner.invoke(upsert_functions, command) # type: ignore assert result.exit_code == 0, result.output db_session.expire_all() functions: list[Function] = [] for function_data in dummy_functions_data: function = crud.functions.get_function( db_session, function_data["name"], public_only=False, active_only=False ) if function is not None: functions.append(function) assert len(functions) > 0 assert len(functions) == len(dummy_functions_data) if skip_dry_run: for function in functions: assert function.description == new_description assert function.parameters == new_parameters else: for function in functions: assert function.description != new_description assert function.parameters != new_parameters # TODO: # - test throw error if app does not exist # - test throw error if functions file contains functions for different apps # - test embedding is updated if app embedding fields are changed # - test embedding is not updated if app embedding fields are not changed # - test functions file contains both new and existing functions # - test functions file contains invalid function data ``` ## /backend/aci/common/__init__.py ```py path="/backend/aci/common/__init__.py" ``` ## /backend/aci/common/config.py ```py path="/backend/aci/common/config.py" from aci.common.utils import check_and_get_env_variable AWS_REGION = check_and_get_env_variable("COMMON_AWS_REGION") AWS_ENDPOINT_URL = check_and_get_env_variable("COMMON_AWS_ENDPOINT_URL") 
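# For reference, a minimal sketch of what check_and_get_env_variable is assumed
# to do (the real helper lives in aci.common.utils and is not shown here):
#
#     import os
#
#     def check_and_get_env_variable(name: str) -> str:
#         value = os.environ.get(name)
#         if not value:
#             raise RuntimeError(f"environment variable {name} is not set")
#         return value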
KEY_ENCRYPTION_KEY_ARN = check_and_get_env_variable("COMMON_KEY_ENCRYPTION_KEY_ARN") API_KEY_HASHING_SECRET = check_and_get_env_variable("COMMON_API_KEY_HASHING_SECRET") ``` ## /backend/aci/common/db/crud/__init__.py ```py path="/backend/aci/common/db/crud/__init__.py" from . import app_configurations, apps, functions, linked_accounts, projects, secret __all__ = [ "app_configurations", "apps", "functions", "linked_accounts", "projects", "secret", ] ``` ## /backend/aci/common/db/crud/app_configurations.py ```py path="/backend/aci/common/db/crud/app_configurations.py" from uuid import UUID from sqlalchemy import select from sqlalchemy.orm import Session from aci.common.db.sql_models import App, AppConfiguration from aci.common.logging_setup import get_logger from aci.common.schemas.app_configurations import ( AppConfigurationCreate, AppConfigurationUpdate, ) logger = get_logger(__name__) def create_app_configuration( db_session: Session, project_id: UUID, app_configuration_create: AppConfigurationCreate, ) -> AppConfiguration: """ Create a new app configuration record """ app_id = db_session.execute( select(App.id).filter_by(name=app_configuration_create.app_name) ).scalar_one() app_configuration = AppConfiguration( project_id=project_id, app_id=app_id, security_scheme=app_configuration_create.security_scheme, security_scheme_overrides=app_configuration_create.security_scheme_overrides, enabled=True, all_functions_enabled=app_configuration_create.all_functions_enabled, enabled_functions=app_configuration_create.enabled_functions, ) db_session.add(app_configuration) db_session.flush() db_session.refresh(app_configuration) return app_configuration def update_app_configuration( db_session: Session, app_configuration: AppConfiguration, update: AppConfigurationUpdate, ) -> AppConfiguration: """ Update the given app configuration in place. If a field is None, it will not be changed. """ # TODO: a better way to do update?
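# One generic alternative (a sketch, not the current behavior): apply only the
# fields the caller explicitly set, e.g.
#
#     for field, value in update.model_dump(exclude_unset=True).items():
#         setattr(app_configuration, field, value)
#
# Note the semantic difference: exclude_unset would permit explicitly setting a
# field to None, whereas the None checks below always skip such fields.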
if update.security_scheme is not None: app_configuration.security_scheme = update.security_scheme if update.security_scheme_overrides is not None: app_configuration.security_scheme_overrides = update.security_scheme_overrides if update.enabled is not None: app_configuration.enabled = update.enabled if update.all_functions_enabled is not None: app_configuration.all_functions_enabled = update.all_functions_enabled if update.enabled_functions is not None: app_configuration.enabled_functions = update.enabled_functions db_session.flush() db_session.refresh(app_configuration) return app_configuration def delete_app_configuration(db_session: Session, project_id: UUID, app_name: str) -> None: statement = ( select(AppConfiguration) .join(App, AppConfiguration.app_id == App.id) .filter(AppConfiguration.project_id == project_id, App.name == app_name) ) app_to_delete = db_session.execute(statement).scalar_one() db_session.delete(app_to_delete) db_session.flush() def get_app_configurations( db_session: Session, project_id: UUID, app_names: list[str] | None, limit: int, offset: int, ) -> list[AppConfiguration]: """Get all app configurations for a project, optionally filtered by app names""" statement = select(AppConfiguration).filter_by(project_id=project_id) if app_names: statement = statement.join(App, AppConfiguration.app_id == App.id).filter( App.name.in_(app_names) ) statement = statement.offset(offset).limit(limit) app_configurations = list(db_session.execute(statement).scalars().all()) return app_configurations def get_app_configuration( db_session: Session, project_id: UUID, app_name: str ) -> AppConfiguration | None: """Get an app configuration by project id and app name""" app_configuration: AppConfiguration | None = db_session.execute( select(AppConfiguration) .join(App, AppConfiguration.app_id == App.id) .filter(AppConfiguration.project_id == project_id, App.name == app_name) ).scalar_one_or_none() return app_configuration def get_app_configurations_by_app_id(db_session: Session, app_id: UUID) -> list[AppConfiguration]: statement = select(AppConfiguration).filter(AppConfiguration.app_id == app_id) return list(db_session.execute(statement).scalars().all()) def app_configuration_exists(db_session: Session, project_id: UUID, app_name: str) -> bool: stmt = ( select(AppConfiguration) .join(App, AppConfiguration.app_id == App.id) .filter( AppConfiguration.project_id == project_id, App.name == app_name, ) ) return db_session.execute(stmt).scalar_one_or_none() is not None ``` ## /backend/aci/common/db/crud/apps.py ```py path="/backend/aci/common/db/crud/apps.py" """ CRUD operations for apps. (not including app_configurations) """ from sqlalchemy import select, update from sqlalchemy.orm import Session from aci.common.db.sql_models import App from aci.common.enums import SecurityScheme, Visibility from aci.common.logging_setup import get_logger from aci.common.schemas.app import AppUpsert logger = get_logger(__name__) def create_app( db_session: Session, app_upsert: AppUpsert, app_embedding: list[float], ) -> App: logger.debug(f"creating app: {app_upsert}") app_data = app_upsert.model_dump(mode="json", exclude_none=True) app = App( **app_data, embedding=app_embedding, ) db_session.add(app) db_session.flush() db_session.refresh(app) return app def update_app( db_session: Session, app: App, app_upsert: AppUpsert, app_embedding: list[float] | None = None, ) -> App: """ Update an existing app. With the option to update the app embedding. 
(needed if AppEmbeddingFields are updated) """ new_app_data = app_upsert.model_dump(mode="json", exclude_unset=True) for field, value in new_app_data.items(): setattr(app, field, value) if app_embedding is not None: app.embedding = app_embedding db_session.flush() db_session.refresh(app) return app def update_app_default_security_credentials( db_session: Session, app: App, security_scheme: SecurityScheme, security_credentials: dict, ) -> None: # Note: this update works because of the MutableDict.as_mutable(JSON) in the sql_models.py # TODO: check if this is the best practice and double confirm that nested dict update does NOT work app.default_security_credentials_by_scheme[security_scheme] = security_credentials def get_app(db_session: Session, app_name: str, public_only: bool, active_only: bool) -> App | None: statement = select(App).filter_by(name=app_name) if active_only: statement = statement.filter(App.active) if public_only: statement = statement.filter(App.visibility == Visibility.PUBLIC) app: App | None = db_session.execute(statement).scalar_one_or_none() return app def get_apps( db_session: Session, public_only: bool, active_only: bool, app_names: list[str] | None, limit: int | None, offset: int | None, ) -> list[App]: statement = select(App) if public_only: statement = statement.filter(App.visibility == Visibility.PUBLIC) if active_only: statement = statement.filter(App.active) if app_names is not None: statement = statement.filter(App.name.in_(app_names)) if offset is not None: statement = statement.offset(offset) if limit is not None: statement = statement.limit(limit) return list(db_session.execute(statement).scalars().all()) def search_apps( db_session: Session, public_only: bool, active_only: bool, app_names: list[str] | None, categories: list[str] | None, intent_embedding: list[float] | None, limit: int, offset: int, ) -> list[tuple[App, float | None]]: """Get a list of apps, with optional filtering by categories, sorting by vector similarity to intent, and pagination.""" statement = select(App) # filter out private apps if public_only: statement = statement.filter(App.visibility == Visibility.PUBLIC) # filter out inactive apps if active_only: statement = statement.filter(App.active) # filter out apps by app_names if app_names is not None: statement = statement.filter(App.name.in_(app_names)) # filter out apps by categories # TODO: Is there any way to get typing for cosine_distance, label, overlap?
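# For reference: cosine_distance comes from pgvector's SQLAlchemy integration
# and roughly renders as the SQL operator `<=>` (cosine distance), while
# App.categories.overlap(...) renders as PostgreSQL's array overlap `&&`.
# A smaller cosine distance means a closer match, so the ascending ORDER BY
# below returns the most similar apps first.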
if categories is not None: statement = statement.filter(App.categories.overlap(categories)) # sort by similarity to intent if intent_embedding is not None: similarity_score = App.embedding.cosine_distance(intent_embedding) statement = statement.add_columns(similarity_score.label("similarity_score")) statement = statement.order_by("similarity_score") statement = statement.offset(offset).limit(limit) logger.debug(f"Executing statement: {statement}") results = db_session.execute(statement).all() if intent_embedding is not None: return [(app, score) for app, score in results] else: return [(app, None) for (app,) in results] def set_app_active_status(db_session: Session, app_name: str, active: bool) -> None: statement = update(App).filter_by(name=app_name).values(active=active) db_session.execute(statement) def set_app_visibility(db_session: Session, app_name: str, visibility: Visibility) -> None: statement = update(App).filter_by(name=app_name).values(visibility=visibility) db_session.execute(statement) ``` ## /backend/aci/common/db/crud/functions.py ```py path="/backend/aci/common/db/crud/functions.py" from uuid import UUID from sqlalchemy import select, update from sqlalchemy.orm import Session from aci.common import utils from aci.common.db import crud from aci.common.db.sql_models import App, Function from aci.common.enums import Visibility from aci.common.logging_setup import get_logger from aci.common.schemas.function import FunctionUpsert logger = get_logger(__name__) def create_functions( db_session: Session, functions_upsert: list[FunctionUpsert], functions_embeddings: list[list[float]], ) -> list[Function]: """ Create functions. Note: each function might belong to a different app. """ logger.debug(f"creating functions: {functions_upsert}") functions = [] for i, function_upsert in enumerate(functions_upsert): app_name = utils.parse_app_name_from_function_name(function_upsert.name) app = crud.apps.get_app(db_session, app_name, False, False) if not app: logger.error(f"App={app_name} does not exist for function={function_upsert.name}") raise ValueError(f"App={app_name} does not exist for function={function_upsert.name}") function_data = function_upsert.model_dump(mode="json", exclude_none=True) function = Function( app_id=app.id, **function_data, embedding=functions_embeddings[i], ) db_session.add(function) functions.append(function) db_session.flush() return functions def update_functions( db_session: Session, functions_upsert: list[FunctionUpsert], functions_embeddings: list[list[float] | None], ) -> list[Function]: """ Update functions. Note: each function might belong to a different app. With the option to update the function embedding.
(needed if FunctionEmbeddingFields are updated) """ logger.debug(f"updating functions: {functions_upsert}") functions = [] for i, function_upsert in enumerate(functions_upsert): function = crud.functions.get_function(db_session, function_upsert.name, False, False) if not function: logger.error(f"Function={function_upsert.name} does not exist") raise ValueError(f"Function={function_upsert.name} does not exist") function_data = function_upsert.model_dump(mode="json", exclude_unset=True) for field, value in function_data.items(): setattr(function, field, value) if functions_embeddings[i] is not None: function.embedding = functions_embeddings[i] # type: ignore functions.append(function) db_session.flush() return functions def search_functions( db_session: Session, public_only: bool, active_only: bool, app_names: list[str] | None, intent_embedding: list[float] | None, limit: int, offset: int, ) -> list[Function]: """Get a list of functions with optional filtering by app names and sorting by vector similarity to intent.""" statement = select(Function).join(App, Function.app_id == App.id) # filter out all functions of inactive apps and all inactive functions # (where app is active but specific functions can be inactive) if active_only: statement = statement.filter(App.active).filter(Function.active) # if the corresponding project (api key belongs to) can only access public apps and functions, # filter out all functions of private apps and all private functions (where app is public but specific function is private) if public_only: statement = statement.filter(App.visibility == Visibility.PUBLIC).filter( Function.visibility == Visibility.PUBLIC ) # filter out functions that are not in the specified apps if app_names is not None: statement = statement.filter(App.name.in_(app_names)) if intent_embedding is not None: similarity_score = Function.embedding.cosine_distance(intent_embedding) statement = statement.order_by(similarity_score) statement = statement.offset(offset).limit(limit) logger.debug(f"Executing statement: {statement}") return list(db_session.execute(statement).scalars().all()) def get_functions( db_session: Session, public_only: bool, active_only: bool, app_names: list[str] | None, limit: int, offset: int, ) -> list[Function]: """Get a list of functions and their details.
Sorted by function name.""" statement = select(Function).join(App, Function.app_id == App.id) if app_names is not None: statement = statement.filter(App.name.in_(app_names)) # exclude private Apps' functions and private functions if public_only is True if public_only: statement = statement.filter(App.visibility == Visibility.PUBLIC).filter( Function.visibility == Visibility.PUBLIC ) # exclude inactive functions (including all functions if apps are inactive) if active_only: statement = statement.filter(App.active).filter(Function.active) statement = statement.order_by(Function.name).offset(offset).limit(limit) return list(db_session.execute(statement).scalars().all()) def get_functions_by_app_id(db_session: Session, app_id: UUID) -> list[Function]: statement = select(Function).filter(Function.app_id == app_id) return list(db_session.execute(statement).scalars().all()) def get_function( db_session: Session, function_name: str, public_only: bool, active_only: bool ) -> Function | None: statement = select(Function).filter(Function.name == function_name) # filter out all functions of inactive apps and all inactive functions # (where app is active but specific functions can be inactive) if active_only: statement = ( statement.join(App, Function.app_id == App.id) .filter(App.active) .filter(Function.active) ) # if the corresponding project (api key belongs to) can only access public apps and functions, # filter out all functions of private apps and all private functions (where app is public but specific function is private) if public_only: statement = statement.filter(App.visibility == Visibility.PUBLIC).filter( Function.visibility == Visibility.PUBLIC ) return db_session.execute(statement).scalar_one_or_none() def set_function_active_status(db_session: Session, function_name: str, active: bool) -> None: statement = update(Function).filter_by(name=function_name).values(active=active) db_session.execute(statement) def set_function_visibility( db_session: Session, function_name: str, visibility: Visibility ) -> None: statement = update(Function).filter_by(name=function_name).values(visibility=visibility) db_session.execute(statement) ``` ## /backend/aci/common/db/crud/linked_accounts.py ```py path="/backend/aci/common/db/crud/linked_accounts.py" from uuid import UUID from sqlalchemy import select from sqlalchemy.orm import Session from aci.common import validators from aci.common.db.sql_models import App, LinkedAccount from aci.common.enums import SecurityScheme from aci.common.logging_setup import get_logger from aci.common.schemas.linked_accounts import LinkedAccountUpdate from aci.common.schemas.security_scheme import ( APIKeySchemeCredentials, NoAuthSchemeCredentials, OAuth2SchemeCredentials, ) logger = get_logger(__name__) def get_linked_accounts( db_session: Session, project_id: UUID, app_name: str | None, linked_account_owner_id: str | None, ) -> list[LinkedAccount]: """Get all linked accounts under a project, with optional filters""" statement = select(LinkedAccount).filter_by(project_id=project_id) if app_name: statement = statement.join(App, LinkedAccount.app_id == App.id).filter(App.name == app_name) if linked_account_owner_id: statement = statement.filter( LinkedAccount.linked_account_owner_id == linked_account_owner_id ) return list(db_session.execute(statement).scalars().all()) def get_linked_account( db_session: Session, project_id: UUID, app_name: str, linked_account_owner_id: str ) -> LinkedAccount | None: statement = ( select(LinkedAccount) .join(App, LinkedAccount.app_id ==
App.id) .filter( LinkedAccount.project_id == project_id, App.name == app_name, LinkedAccount.linked_account_owner_id == linked_account_owner_id, ) ) linked_account: LinkedAccount | None = db_session.execute(statement).scalar_one_or_none() return linked_account def get_linked_accounts_by_app_id(db_session: Session, app_id: UUID) -> list[LinkedAccount]: statement = select(LinkedAccount).filter_by(app_id=app_id) linked_accounts: list[LinkedAccount] = list(db_session.execute(statement).scalars().all()) return linked_accounts # TODO: the access control (project_id check) should probably be done at the route level? def get_linked_account_by_id_under_project( db_session: Session, linked_account_id: UUID, project_id: UUID ) -> LinkedAccount | None: """Get a linked account by its id, scoped to a project - linked_account_id uniquely identifies a linked account across the platform. - project_id is an extra precaution useful for access control; the linked account must belong to the project. """ statement = select(LinkedAccount).filter_by(id=linked_account_id, project_id=project_id) linked_account: LinkedAccount | None = db_session.execute(statement).scalar_one_or_none() return linked_account def delete_linked_account(db_session: Session, linked_account: LinkedAccount) -> None: db_session.delete(linked_account) db_session.flush() def create_linked_account( db_session: Session, project_id: UUID, app_name: str, linked_account_owner_id: str, security_scheme: SecurityScheme, security_credentials: OAuth2SchemeCredentials | APIKeySchemeCredentials | NoAuthSchemeCredentials | None = None, enabled: bool = True, ) -> LinkedAccount: """Create a linked account. When security_credentials is None, the linked account will use the App's default security credentials if they exist # TODO: there is some ambiguity with "no auth" and "use app's default credentials", needs a refactor. """ app_id = db_session.execute(select(App.id).filter_by(name=app_name)).scalar_one() linked_account = LinkedAccount( project_id=project_id, app_id=app_id, linked_account_owner_id=linked_account_owner_id, security_scheme=security_scheme, security_credentials=( security_credentials.model_dump(mode="json") if security_credentials else {} ), enabled=enabled, ) db_session.add(linked_account) db_session.flush() db_session.refresh(linked_account) return linked_account def update_linked_account_credentials( db_session: Session, linked_account: LinkedAccount, security_credentials: OAuth2SchemeCredentials | APIKeySchemeCredentials | NoAuthSchemeCredentials, ) -> LinkedAccount: """ Update the security credentials of a linked account. Removing the security credentials (setting it to empty dict) is not handled here.
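For example (illustrative): a linked account whose security_scheme is SecurityScheme.API_KEY must be given APIKeySchemeCredentials; passing OAuth2SchemeCredentials would fail the scheme/credentials type check performed below.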
""" # TODO: paranoid validation, should be removed if later the validation is done on the schema level validators.security_scheme.validate_scheme_and_credentials_type_match( linked_account.security_scheme, security_credentials ) linked_account.security_credentials = security_credentials.model_dump(mode="json") db_session.flush() db_session.refresh(linked_account) return linked_account def update_linked_account( db_session: Session, linked_account: LinkedAccount, linked_account_update: LinkedAccountUpdate, ) -> LinkedAccount: if linked_account_update.enabled is not None: linked_account.enabled = linked_account_update.enabled db_session.flush() db_session.refresh(linked_account) return linked_account def delete_linked_accounts(db_session: Session, project_id: UUID, app_name: str) -> int: statement = ( select(LinkedAccount) .join(App, LinkedAccount.app_id == App.id) .filter(LinkedAccount.project_id == project_id, App.name == app_name) ) linked_accounts_to_delete = db_session.execute(statement).scalars().all() for linked_account in linked_accounts_to_delete: db_session.delete(linked_account) db_session.flush() return len(linked_accounts_to_delete) ``` ## /backend/aci/common/db/crud/projects.py ```py path="/backend/aci/common/db/crud/projects.py" """ CRUD operations for projects, including direct entities under a project such as agents and API keys. TODO: function todelete project and all related data (app_configurations, agents, api_keys, etc.) """ import secrets from datetime import UTC, datetime, timedelta from uuid import UUID from sqlalchemy import func, select, update from sqlalchemy.orm import Session from aci.common import encryption from aci.common.db.sql_models import Agent, APIKey, Project from aci.common.enums import APIKeyStatus, Visibility from aci.common.logging_setup import get_logger from aci.common.schemas.agent import AgentUpdate, ValidInstruction logger = get_logger(__name__) def create_project( db_session: Session, org_id: UUID, name: str, visibility_access: Visibility = Visibility.PUBLIC, ) -> Project: project = Project( org_id=org_id, name=name, visibility_access=visibility_access, ) db_session.add(project) db_session.flush() db_session.refresh(project) return project def project_exists(db_session: Session, project_id: UUID) -> bool: return ( db_session.execute(select(Project).filter_by(id=project_id)).scalar_one_or_none() is not None ) def get_project(db_session: Session, project_id: UUID) -> Project | None: """ Get a project by primary key. 
""" project: Project | None = db_session.execute( select(Project).filter_by(id=project_id) ).scalar_one_or_none() return project def get_projects_by_org(db_session: Session, org_id: UUID) -> list[Project]: projects = list(db_session.execute(select(Project).filter_by(org_id=org_id)).scalars().all()) return projects def get_project_by_api_key_id(db_session: Session, api_key_id: UUID) -> Project | None: # api key id -> agent id -> project id project: Project | None = db_session.execute( select(Project) .join(Agent, Project.id == Agent.project_id) .join(APIKey, Agent.id == APIKey.agent_id) .filter(APIKey.id == api_key_id) ).scalar_one_or_none() return project def set_project_visibility_access( db_session: Session, project_id: UUID, visibility_access: Visibility ) -> None: statement = update(Project).filter_by(id=project_id).values(visibility_access=visibility_access) db_session.execute(statement) # TODO: TBD by business model def increase_project_quota_usage(db_session: Session, project: Project) -> None: now: datetime = datetime.now(UTC) need_reset = now >= project.daily_quota_reset_at.replace(tzinfo=UTC) + timedelta(days=1) if need_reset: # Reset the daily quota statement = ( update(Project) .where(Project.id == project.id) .values( { Project.daily_quota_used: 1, Project.daily_quota_reset_at: now, Project.total_quota_used: project.total_quota_used + 1, } ) ) else: # Increment the daily quota statement = ( update(Project) .where(Project.id == project.id) .values( { Project.daily_quota_used: project.daily_quota_used + 1, Project.total_quota_used: project.total_quota_used + 1, } ) ) db_session.execute(statement) def create_agent( db_session: Session, project_id: UUID, name: str, description: str, allowed_apps: list[str], custom_instructions: dict[str, ValidInstruction], ) -> Agent: """ Create a new agent under a project, and create a new API key for the agent. 
""" # Create the agent agent = Agent( project_id=project_id, name=name, description=description, allowed_apps=allowed_apps, custom_instructions=custom_instructions, ) db_session.add(agent) key = secrets.token_hex(32) key_hmac = encryption.hmac_sha256(key) # Create the API key for the agent api_key = APIKey(key=key, key_hmac=key_hmac, agent_id=agent.id, status=APIKeyStatus.ACTIVE) db_session.add(api_key) db_session.flush() db_session.refresh(agent) return agent def update_agent( db_session: Session, agent: Agent, update: AgentUpdate, ) -> Agent: """ Update Agent record by agent id """ if update.name is not None: agent.name = update.name if update.description is not None: agent.description = update.description if update.allowed_apps is not None: agent.allowed_apps = update.allowed_apps if update.custom_instructions is not None: agent.custom_instructions = update.custom_instructions db_session.flush() db_session.refresh(agent) return agent def delete_agent(db_session: Session, agent: Agent) -> None: db_session.delete(agent) db_session.flush() def delete_app_from_agents_allowed_apps( db_session: Session, project_id: UUID, app_name: str ) -> None: statement = ( update(Agent) .where(Agent.project_id == project_id) .values(allowed_apps=func.array_remove(Agent.allowed_apps, app_name)) ) db_session.execute(statement) def get_agents_by_project(db_session: Session, project_id: UUID) -> list[Agent]: return list(db_session.execute(select(Agent).filter_by(project_id=project_id)).scalars().all()) def get_agent_by_id(db_session: Session, agent_id: UUID) -> Agent | None: return db_session.execute(select(Agent).filter_by(id=agent_id)).scalar_one_or_none() def get_agent_by_api_key_id(db_session: Session, api_key_id: UUID) -> Agent | None: return db_session.execute( select(Agent).join(APIKey, Agent.id == APIKey.agent_id).filter(APIKey.id == str(api_key_id)) ).scalar_one_or_none() def get_agents_whose_allowed_apps_contains(db_session: Session, app_name: str) -> list[Agent]: statement = select(Agent).where(Agent.allowed_apps.contains([app_name])) return list(db_session.execute(statement).scalars().all()) def get_api_key_by_agent_id(db_session: Session, agent_id: UUID) -> APIKey | None: return db_session.execute(select(APIKey).filter_by(agent_id=agent_id)).scalar_one_or_none() def get_api_key(db_session: Session, key: str) -> APIKey | None: key_hmac = encryption.hmac_sha256(key) return db_session.execute(select(APIKey).filter_by(key_hmac=key_hmac)).scalar_one_or_none() def get_all_api_key_ids_for_project(db_session: Session, project_id: UUID) -> list[UUID]: agents = get_agents_by_project(db_session, project_id) project_api_key_ids = [] for agent in agents: api_key = get_api_key_by_agent_id(db_session, agent.id) if api_key: project_api_key_ids.append(api_key.id) return project_api_key_ids ``` ## /backend/aci/common/db/crud/secret.py ```py path="/backend/aci/common/db/crud/secret.py" from uuid import UUID from sqlalchemy import select from sqlalchemy.orm import Session from aci.common.db.sql_models import Secret from aci.common.logging_setup import get_logger from aci.common.schemas.secret import SecretCreate, SecretUpdate logger = get_logger(__name__) def create_secret( db_session: Session, linked_account_id: UUID, secret_create: SecretCreate, ) -> Secret: """ Create a new secret. 
""" secret = Secret( linked_account_id=linked_account_id, key=secret_create.key, value=secret_create.value, ) db_session.add(secret) db_session.flush() db_session.refresh(secret) return secret def get_secret(db_session: Session, linked_account_id: UUID, key: str) -> Secret | None: """ Get a secret by linked_account_id and key. """ statement = select(Secret).filter_by(linked_account_id=linked_account_id, key=key) return db_session.execute(statement).scalar_one_or_none() def list_secrets(db_session: Session, linked_account_id: UUID) -> list[Secret]: """ List all secrets for a linked account. """ statement = select(Secret).filter_by(linked_account_id=linked_account_id) secrets = db_session.execute(statement).scalars().all() return list(secrets) def update_secret( db_session: Session, secret: Secret, update: SecretUpdate, ) -> Secret: """ Update a secret's value. """ secret.value = update.value db_session.flush() db_session.refresh(secret) return secret def delete_secret(db_session: Session, secret: Secret) -> None: """ Delete a secret. """ db_session.delete(secret) db_session.flush() ``` ## /backend/aci/common/db/custom_sql_types.py ```py path="/backend/aci/common/db/custom_sql_types.py" import base64 import copy import json from sqlalchemy.dialects.postgresql import JSONB from sqlalchemy.engine import Dialect from sqlalchemy.types import LargeBinary, TypeDecorator from aci.common import encryption from aci.common.enums import SecurityScheme def _encrypt_value(value: str) -> str: """Encrypt a string value and return base64-encoded result.""" encrypted_bytes = encryption.encrypt(value.encode("utf-8")) # The bytes returned by the encryption.encrypt method can be any bytes, # which is not always valid for utf-8 decoding, so we need to encode it # using base64 first to ensure it's a valid bytes for utf-8. Then, we # decode it back to a string using utf-8. 

class EncryptedSecurityScheme(TypeDecorator[dict]):
    impl = JSONB
    cache_ok = True

    def process_bind_param(self, value: dict | None, dialect: Dialect) -> dict | None:
        if value is not None:
            encrypted_value = copy.deepcopy(value)  # Use deepcopy to handle nested structures
            for scheme_type, scheme_data in encrypted_value.items():
                # We only need to encrypt the client_secret in OAuth2Scheme
                if scheme_type == SecurityScheme.OAUTH2 and "client_secret" in scheme_data:
                    client_secret = scheme_data["client_secret"]
                    if isinstance(client_secret, str):
                        scheme_data["client_secret"] = _encrypt_value(client_secret)
            return encrypted_value
        return None

    def process_result_value(self, value: dict | None, dialect: Dialect) -> dict | None:
        if value is not None:
            decrypted_value = copy.deepcopy(value)  # Use deepcopy to handle nested structures
            for scheme_type, scheme_data in decrypted_value.items():
                # We only need to decrypt the client_secret in OAuth2Scheme
                if scheme_type == SecurityScheme.OAUTH2 and "client_secret" in scheme_data:
                    client_secret_b64 = scheme_data["client_secret"]
                    if isinstance(client_secret_b64, str):
                        scheme_data["client_secret"] = _decrypt_value(client_secret_b64)
            return decrypted_value
        return None


class EncryptedSecurityCredentials(TypeDecorator[dict]):
    impl = JSONB
    cache_ok = True

    def process_bind_param(self, value: dict | None, dialect: Dialect) -> dict | None:
        if value is not None:
            encrypted_value = copy.deepcopy(value)  # Avoid modifying the original dict
            # TODO: if we add a new field or rename a field in the future,
            # we need to update the process_result_value method to handle the new field

            # APIKeySchemeCredentials
            if "secret_key" in encrypted_value:
                secret_key = encrypted_value["secret_key"]
                if isinstance(secret_key, str):
                    encrypted_value["secret_key"] = _encrypt_value(secret_key)
            # OAuth2SchemeCredentials
            elif "access_token" in encrypted_value:
                access_token = encrypted_value.get("access_token")
                if isinstance(access_token, str):
                    encrypted_value["access_token"] = _encrypt_value(access_token)
                refresh_token = encrypted_value.get("refresh_token")
                if isinstance(refresh_token, str):
                    encrypted_value["refresh_token"] = _encrypt_value(refresh_token)
                raw_token_response = encrypted_value.get("raw_token_response")
                if isinstance(raw_token_response, dict):
                    raw_token_response_str = json.dumps(raw_token_response)
                    encrypted_value["raw_token_response"] = _encrypt_value(raw_token_response_str)
            # NoAuthSchemeCredentials (empty dict) - do nothing

            return encrypted_value
        return None

    def process_result_value(self, value: dict | None, dialect: Dialect) -> dict | None:
        if value is not None:
            decrypted_value = copy.deepcopy(value)  # Avoid modifying the original dict

            # APIKeySchemeCredentials
            if "secret_key" in decrypted_value:
                secret_key_b64 = decrypted_value["secret_key"]
                if isinstance(secret_key_b64, str):
                    decrypted_value["secret_key"] = _decrypt_value(secret_key_b64)
            # OAuth2SchemeCredentials
            elif "access_token" in decrypted_value:
                access_token_b64 = decrypted_value.get("access_token")
                if isinstance(access_token_b64, str):
                    decrypted_value["access_token"] = _decrypt_value(access_token_b64)
                refresh_token_b64 = decrypted_value.get("refresh_token")
                if isinstance(refresh_token_b64, str):
                    decrypted_value["refresh_token"] = _decrypt_value(refresh_token_b64)
                raw_token_response_b64 = decrypted_value.get("raw_token_response")
                if isinstance(raw_token_response_b64, str):
                    decrypted_str = _decrypt_value(raw_token_response_b64)
                    decrypted_value["raw_token_response"] = json.loads(decrypted_str)
            # NoAuthSchemeCredentials (empty dict) - do nothing

            return decrypted_value
        return None
```
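These `TypeDecorator`s encrypt at bind time and decrypt at result time, so model code only ever sees plaintext. A minimal sketch of attaching `Key` to a toy table (the table and column here are hypothetical, not part of the repo; it mirrors how `APIKey.key` uses the type in sql_models.py below):

```py
# Hypothetical model using the Key decorator above; not a table from this repo.
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

from aci.common.db.custom_sql_types import Key


class Base(DeclarativeBase):
    pass


class DemoToken(Base):  # hypothetical table
    __tablename__ = "demo_tokens"

    id: Mapped[int] = mapped_column(primary_key=True)
    # written to the DB as KMS-encrypted bytes (LargeBinary), read back as the original str
    token: Mapped[str] = mapped_column(Key(), nullable=False)
```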
## /backend/aci/common/db/sql_models.py

```py path="/backend/aci/common/db/sql_models.py"
"""
TODO:
Note: try to keep dependencies on other internal packages to a minimum.
Note: at the time of writing, it's still too early to do optimizations on the database schema,
but we should keep an eye on it and be prepared for potential future optimizations. For example:
1. should use enums where possible, such as Plan, Visibility, etc
2. create index on embedding and other fields that are frequently used for filtering
3. materialized views for frequently queried data
4. limit string length for fields that have string type
5. Note we might need to set up the index for embedding manually to customize the similarity
   search algorithm (https://github.com/pgvector/pgvector)
"""

# TODO: ideally shouldn't need it in python 3.12 for forward reference?
from __future__ import annotations

from datetime import datetime
from uuid import UUID, uuid4

from pgvector.sqlalchemy import Vector
from sqlalchemy import (
    Boolean,
    DateTime,
    ForeignKey,
    Integer,
    String,
    Text,
    UniqueConstraint,
    func,
)
from sqlalchemy import Enum as SqlEnum

# Note: need to use postgresql ARRAY in order to use the overlap operator
from sqlalchemy.dialects.postgresql import ARRAY, BYTEA, JSONB
from sqlalchemy.dialects.postgresql import UUID as PGUUID
from sqlalchemy.ext.mutable import MutableDict
from sqlalchemy.orm import DeclarativeBase, Mapped, MappedAsDataclass, mapped_column, relationship

from aci.common.db.custom_sql_types import (
    EncryptedSecurityCredentials,
    EncryptedSecurityScheme,
    Key,
)
from aci.common.enums import (
    APIKeyStatus,
    Protocol,
    SecurityScheme,
    Visibility,
)

EMBEDDING_DIMENSION = 1024
APP_DEFAULT_VERSION = "1.0.0"
# need app name to be shorter because it's used as a prefix for function names
APP_NAME_MAX_LENGTH = 100
MAX_STRING_LENGTH = 255


class Base(MappedAsDataclass, DeclarativeBase):
    pass


# TODO: might need to limit number of projects a user can create
class Project(Base):
    """
    Project is a logical container for isolating and managing API keys, selected apps, and other data.
    Each project can have multiple agents (associated with API keys), which are logical actors
    that access our platform.
    """

    __tablename__ = "projects"

    id: Mapped[UUID] = mapped_column(
        PGUUID(as_uuid=True), primary_key=True, default_factory=uuid4, init=False
    )
    org_id: Mapped[UUID] = mapped_column(PGUUID(as_uuid=True), nullable=False)
    name: Mapped[str] = mapped_column(String(MAX_STRING_LENGTH), nullable=False)
    # if public, the project can only access public apps and functions
    # if private, the project can access all apps and functions, useful for A/B testing
    # and internal testing before releasing newly added apps and functions to public
    visibility_access: Mapped[Visibility] = mapped_column(SqlEnum(Visibility), nullable=False)

    """
    quota related fields:
    TODO: TBD how to implement quota system
    """
    daily_quota_used: Mapped[int] = mapped_column(Integer, default=0, nullable=False, init=False)
    daily_quota_reset_at: Mapped[datetime] = mapped_column(
        DateTime(timezone=False), server_default=func.now(), nullable=False, init=False
    )
    total_quota_used: Mapped[int] = mapped_column(Integer, default=0, nullable=False, init=False)

    created_at: Mapped[datetime] = mapped_column(
        DateTime(timezone=False), server_default=func.now(), nullable=False, init=False
    )
    updated_at: Mapped[datetime] = mapped_column(
        DateTime(timezone=False),
        server_default=func.now(),
        onupdate=func.now(),
        nullable=False,
        init=False,
    )

    # deleting project will delete all associated resources under the project
    agents: Mapped[list[Agent]] = relationship(
        "Agent", lazy="select", cascade="all, delete-orphan", init=False
    )
    app_configurations: Mapped[list[AppConfiguration]] = relationship(
        "AppConfiguration", lazy="select", cascade="all, delete-orphan", init=False
    )
""" __tablename__ = "agents" id: Mapped[UUID] = mapped_column( PGUUID(as_uuid=True), primary_key=True, default_factory=uuid4, init=False ) project_id: Mapped[UUID] = mapped_column( PGUUID(as_uuid=True), ForeignKey("projects.id"), nullable=False ) name: Mapped[str] = mapped_column(String(MAX_STRING_LENGTH), nullable=False) description: Mapped[str] = mapped_column(Text, nullable=False) # agent level control of what apps are accessible by the agent, should be asubset of project configured apps # we store a list of app names. # TODO: reconsider if this should be in a separate table to enforce data integrity, or use periodic task to clean up allowed_apps: Mapped[list[str]] = mapped_column( ARRAY(String(MAX_STRING_LENGTH)), nullable=False ) # TODO: should we use JSONB instead? As this will be frequently queried # TODO: reconsider if this should be in a separate table to enforce data integrity, or use periodic task to clean up # Custom instructions for the agent to follow. The key is the function name, and the value is the instruction. custom_instructions: Mapped[dict[str, str]] = mapped_column( MutableDict.as_mutable(JSONB), nullable=False, ) created_at: Mapped[datetime] = mapped_column( DateTime(timezone=False), server_default=func.now(), nullable=False, init=False ) updated_at: Mapped[datetime] = mapped_column( DateTime(timezone=False), server_default=func.now(), onupdate=func.now(), nullable=False, init=False, ) # Note: for now each agent has one API key, but we can add more flexibility in the future if needed # deleting agent will delete all API keys under the agent api_keys: Mapped[list[APIKey]] = relationship( "APIKey", lazy="select", cascade="all, delete-orphan", init=False ) class APIKey(Base): """ APIKey is the authentication token to access the platform. In this opinionated design, api key belongs to an agent. """ __tablename__ = "api_keys" # id is not the actual API key, it's just a unique identifier to easily reference each API key entry without depending # on the API key string itself. Also for logging without exposing the actual API key string. id: Mapped[UUID] = mapped_column( PGUUID(as_uuid=True), primary_key=True, default_factory=uuid4, init=False ) # "key" is the encrypted actual API key string that the user will use to authenticate key: Mapped[str] = mapped_column(Key(), nullable=False, unique=True) key_hmac: Mapped[str] = mapped_column(String(64), nullable=False, unique=True) agent_id: Mapped[UUID] = mapped_column( PGUUID(as_uuid=True), ForeignKey("agents.id"), unique=True, nullable=False ) status: Mapped[APIKeyStatus] = mapped_column(SqlEnum(APIKeyStatus), nullable=False) created_at: Mapped[datetime] = mapped_column( DateTime(timezone=False), server_default=func.now(), nullable=False, init=False ) updated_at: Mapped[datetime] = mapped_column( DateTime(timezone=False), server_default=func.now(), onupdate=func.now(), nullable=False, init=False, ) # TODO: how to do versioning for app and funcitons to allow backward compatibility, or we don't actually need to # because function schema is loaded dynamically from the database to user # TODO: do we need auth_required on function level? class Function(Base): """ Function is a callable function that can be executed. Each function belongs to one App. 
""" __tablename__ = "functions" id: Mapped[UUID] = mapped_column( PGUUID(as_uuid=True), primary_key=True, default_factory=uuid4, init=False ) app_id: Mapped[UUID] = mapped_column( PGUUID(as_uuid=True), ForeignKey("apps.id"), nullable=False ) # Note: the function name is unique across the platform and should have app information, e.g., "GITHUB_CLONE_REPO" # ideally this should just be _ (uppercase) name: Mapped[str] = mapped_column(String(MAX_STRING_LENGTH), nullable=False, unique=True) description: Mapped[str] = mapped_column(Text, nullable=False) tags: Mapped[list[str]] = mapped_column(ARRAY(String), nullable=False) # if private, the function is only visible to privileged Projects (e.g., useful for internal and A/B testing) visibility: Mapped[Visibility] = mapped_column(SqlEnum(Visibility), nullable=False) # can be used to control if the app's discoverability active: Mapped[bool] = mapped_column(Boolean, nullable=False) protocol: Mapped[Protocol] = mapped_column(SqlEnum(Protocol), nullable=False) protocol_data: Mapped[dict] = mapped_column(MutableDict.as_mutable(JSONB), nullable=False) # empty dict for function that takes no args parameters: Mapped[dict] = mapped_column(MutableDict.as_mutable(JSONB), nullable=False) # TODO: should response schema be generic (data + execution success of not + optional error) or specific to the function response: Mapped[dict] = mapped_column(MutableDict.as_mutable(JSONB), nullable=False) # TODO: should we provide EMBEDDING_DIMENSION here? which makes it less flexible if we want to change the embedding dimention in the future embedding: Mapped[list[float]] = mapped_column(Vector(EMBEDDING_DIMENSION), nullable=False) created_at: Mapped[datetime] = mapped_column( DateTime(timezone=False), server_default=func.now(), nullable=False, init=False ) updated_at: Mapped[datetime] = mapped_column( DateTime(timezone=False), server_default=func.now(), onupdate=func.now(), nullable=False, init=False, ) # the App that this function belongs to app: Mapped[App] = relationship("App", lazy="select", back_populates="functions", init=False) @property def app_name(self) -> str: return str(self.app.name) class App(Base): __tablename__ = "apps" id: Mapped[UUID] = mapped_column( PGUUID(as_uuid=True), primary_key=True, default_factory=uuid4, init=False ) # Need name to be unique to support globally unique function name. 

class App(Base):
    __tablename__ = "apps"

    id: Mapped[UUID] = mapped_column(
        PGUUID(as_uuid=True), primary_key=True, default_factory=uuid4, init=False
    )
    # Need name to be unique to support globally unique function names.
    name: Mapped[str] = mapped_column(String(APP_NAME_MAX_LENGTH), nullable=False, unique=True)
    display_name: Mapped[str] = mapped_column(String(MAX_STRING_LENGTH), nullable=False)
    # provider (or company) of the app, e.g., google, github, or ACI or user (if we allow users to create custom apps)
    provider: Mapped[str] = mapped_column(String(MAX_STRING_LENGTH), nullable=False)
    version: Mapped[str] = mapped_column(String(MAX_STRING_LENGTH), nullable=False)
    description: Mapped[str] = mapped_column(Text, nullable=False)
    logo: Mapped[str | None] = mapped_column(Text, nullable=True)
    categories: Mapped[list[str]] = mapped_column(ARRAY(String), nullable=False)
    # if private, the app is only visible to privileged Projects (e.g., useful for internal and A/B testing)
    visibility: Mapped[Visibility] = mapped_column(SqlEnum(Visibility), nullable=False)
    # operational status of the app, can be used to control the app's discoverability
    active: Mapped[bool] = mapped_column(Boolean, nullable=False)
    # security schemes (including their configs) supported by the app, e.g., API key, OAuth2, etc
    security_schemes: Mapped[dict[SecurityScheme, dict]] = mapped_column(
        MutableDict.as_mutable(EncryptedSecurityScheme),
        nullable=False,
    )
    # default security credentials (provided by ACI, if any) for the app that can be used by any client
    default_security_credentials_by_scheme: Mapped[dict[SecurityScheme, dict]] = mapped_column(
        MutableDict.as_mutable(EncryptedSecurityCredentials),
        nullable=False,
    )
    # embedding vector for similarity search
    embedding: Mapped[list[float]] = mapped_column(Vector(EMBEDDING_DIMENSION), nullable=False)
    created_at: Mapped[datetime] = mapped_column(
        DateTime(timezone=False), server_default=func.now(), nullable=False, init=False
    )
    updated_at: Mapped[datetime] = mapped_column(
        DateTime(timezone=False),
        server_default=func.now(),
        onupdate=func.now(),
        nullable=False,
        init=False,
    )

    # deleting app will delete all functions under the app
    functions: Mapped[list[Function]] = relationship(
        "Function",
        lazy="select",
        cascade="all, delete-orphan",
        back_populates="app",
        init=False,
    )

# TODO: We made the decision to only allow one configuration per app per project to avoid unjustified
# complexity and mental overhead on the client side (simplify apis and sdks). But we can revisit this
# decision if later a valid use case is found.
# TODO: should we delete the app's associated linked accounts when the user deletes the app configuration?
# TODO: revisit if we should disallow the client changing the security scheme after the record is created,
# to enforce a consistent linked accounts type.
# TODO: Reconsider if "enabled_functions" should be in a separate table to enforce data integrity, or use periodic task to clean up
class AppConfiguration(Base):
    """
    App configuration is a configuration for an app in a project.
    A record is created when the user enables and configures an app for a project.
    """

    __tablename__ = "app_configurations"

    id: Mapped[UUID] = mapped_column(
        PGUUID(as_uuid=True), primary_key=True, default_factory=uuid4, init=False
    )
    project_id: Mapped[UUID] = mapped_column(
        PGUUID(as_uuid=True), ForeignKey("projects.id"), nullable=False
    )
    app_id: Mapped[UUID] = mapped_column(
        PGUUID(as_uuid=True), ForeignKey("apps.id"), nullable=False
    )
    # selected (by client) as default security scheme for the linking accounts. Although making
    # security_scheme constant is easier for implementation, we keep the flexibility for future use
    # to allow the user to select a different security scheme for different linked accounts.
    # So, ultimately the actual security scheme and credentials should be decided by individual
    # linked accounts stored in the linked_accounts table.
    security_scheme: Mapped[SecurityScheme] = mapped_column(SqlEnum(SecurityScheme), nullable=False)
    # can store security scheme overrides for each app, e.g., store client id and secret for OAuth2
    # if the client wants to use their own OAuth2 app for whitelabeling
    # TODO: create a pydantic model for security scheme overrides once we finalize overridable fields
    security_scheme_overrides: Mapped[dict[SecurityScheme, dict]] = mapped_column(
        MutableDict.as_mutable(EncryptedSecurityScheme),
        nullable=False,
    )
    # controlled by users to enable or disable the app
    # TODO: what are the implications of enabling/disabling the app?
    enabled: Mapped[bool] = mapped_column(Boolean, nullable=False)
    # indicate if all functions of the app are enabled for this app
    all_functions_enabled: Mapped[bool] = mapped_column(Boolean, nullable=False)
    # if all_functions_enabled is false, this list contains the unique names of the functions that are enabled for this app
    enabled_functions: Mapped[list[str]] = mapped_column(
        ARRAY(String(MAX_STRING_LENGTH)), nullable=False
    )
    created_at: Mapped[datetime] = mapped_column(
        DateTime(timezone=False), server_default=func.now(), nullable=False, init=False
    )
    updated_at: Mapped[datetime] = mapped_column(
        DateTime(timezone=False),
        server_default=func.now(),
        onupdate=func.now(),
        nullable=False,
        init=False,
    )

    app: Mapped[App] = relationship("App", lazy="select", init=False)

    @property
    def app_name(self) -> str:
        return str(self.app.name)

    # unique constraint
    __table_args__ = (
        # If in the future we want to allow a project to integrate the same app multiple times,
        # we can remove the unique constraint, but that would require changes in other places
        # (business logic and other tables)
        UniqueConstraint("project_id", "app_id", name="uc_project_app"),
    )

# TODO: table can get large if there are a significant number of clients
# (O(n) = #clients * #projects_per_client * #apps * #linked_accounts_per_app)
# need to keep an eye on performance and revisit if we should:
# - use nosql (or sharding) to store linked accounts instead.
# - use a separate database instance for clients with a large number of linked accounts
# - use separate tables per project. Some benefits include: easier to delete the record and
#   associated linked accounts when the user deletes the app configuration, without locking the
#   table for too long. But the number of tables can be too big for postgres.
class LinkedAccount(Base):
    """
    Linked account is a specific account under an app in a project.
    """

    __tablename__ = "linked_accounts"

    id: Mapped[UUID] = mapped_column(
        PGUUID(as_uuid=True), primary_key=True, default_factory=uuid4, init=False
    )
    project_id: Mapped[UUID] = mapped_column(
        PGUUID(as_uuid=True), ForeignKey("projects.id"), nullable=False
    )
    app_id: Mapped[UUID] = mapped_column(
        PGUUID(as_uuid=True), ForeignKey("apps.id"), nullable=False
    )
    # linked_account_owner_id should be unique per app per project; it should identify the end user,
    # which is the owner of the linked account. One common design is to use the same
    # linked_account_owner_id that identifies an end user for all configured apps in a project.
    linked_account_owner_id: Mapped[str] = mapped_column(String(MAX_STRING_LENGTH), nullable=False)
    security_scheme: Mapped[SecurityScheme] = mapped_column(SqlEnum(SecurityScheme), nullable=False)
    # security credentials are different for each security scheme, e.g., API key, OAuth2 (access token, refresh token, scope, etc)
    # it can be an empty dict because the linked account could be created to use default credentials provided by ACI
    security_credentials: Mapped[dict] = mapped_column(
        MutableDict.as_mutable(EncryptedSecurityCredentials),
        nullable=False,
    )
    enabled: Mapped[bool] = mapped_column(Boolean, nullable=False)
    created_at: Mapped[datetime] = mapped_column(
        DateTime(timezone=False), server_default=func.now(), nullable=False, init=False
    )
    updated_at: Mapped[datetime] = mapped_column(
        DateTime(timezone=False),
        server_default=func.now(),
        onupdate=func.now(),
        nullable=False,
        init=False,
    )

    app: Mapped[App] = relationship("App", lazy="select", init=False)

    @property
    def app_name(self) -> str:
        return str(self.app.name)

    __table_args__ = (
        # TODO: write test
        UniqueConstraint(
            "project_id",
            "app_id",
            "linked_account_owner_id",
            name="uc_project_app_linked_account_owner",
        ),
    )


class Secret(Base):
    __tablename__ = "secrets"

    id: Mapped[UUID] = mapped_column(
        PGUUID(as_uuid=True), primary_key=True, default_factory=uuid4, init=False
    )
    linked_account_id: Mapped[UUID] = mapped_column(
        PGUUID(as_uuid=True), ForeignKey("linked_accounts.id"), nullable=False
    )
    key: Mapped[str] = mapped_column(String(MAX_STRING_LENGTH), nullable=False)
    value: Mapped[bytes] = mapped_column(BYTEA, nullable=False)
    created_at: Mapped[datetime] = mapped_column(
        DateTime(timezone=False), server_default=func.now(), nullable=False, init=False
    )
    updated_at: Mapped[datetime] = mapped_column(
        DateTime(timezone=False),
        server_default=func.now(),
        onupdate=func.now(),
        nullable=False,
        init=False,
    )

    __table_args__ = (UniqueConstraint("linked_account_id", "key", name="uc_linked_account_key"),)


__all__ = [
    "APIKey",
    "Agent",
    "App",
    "AppConfiguration",
    "Base",
    "Function",
    "LinkedAccount",
    "Project",
    "Secret",
]
```
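Both `App.embedding` and `Function.embedding` are pgvector columns, so similarity search can be expressed directly in SQLAlchemy. A sketch of a top-k query using pgvector's `cosine_distance` comparator (the repo's actual search code lives outside this excerpt, so treat this as an illustration):

```py
# Hypothetical top-k similarity query over Function.embedding.
from sqlalchemy import select
from sqlalchemy.orm import Session

from aci.common.db.sql_models import Function


def top_k_functions(
    db_session: Session, query_embedding: list[float], k: int = 5
) -> list[Function]:
    # cosine_distance is one of the comparators pgvector adds to Vector columns
    statement = (
        select(Function)
        .order_by(Function.embedding.cosine_distance(query_embedding))
        .limit(k)
    )
    return list(db_session.execute(statement).scalars().all())
```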
## /backend/aci/common/embeddings.py

```py path="/backend/aci/common/embeddings.py"
from openai import OpenAI

from aci.common.logging_setup import get_logger
from aci.common.schemas.app import AppEmbeddingFields
from aci.common.schemas.function import FunctionEmbeddingFields

logger = get_logger(__name__)


def generate_app_embedding(
    app: AppEmbeddingFields,
    openai_client: OpenAI,
    embedding_model: str,
    embedding_dimension: int,
) -> list[float]:
    """
    Generate embedding for app.
    TODO: what else should be included or not in the embedding?
    """
    logger.debug(f"Generating embedding for app: {app.name}...")
    # generate app embeddings based on app config's name, display_name, provider, description, categories
    text_for_embedding = app.model_dump_json()
    logger.debug(f"Text for app embedding: {text_for_embedding}")

    return generate_embedding(
        openai_client, embedding_model, embedding_dimension, text_for_embedding
    )


# TODO: batch generate function embeddings
# TODO: update app embedding to include function embeddings whenever functions are added/updated?
def generate_function_embeddings(
    functions: list[FunctionEmbeddingFields],
    openai_client: OpenAI,
    embedding_model: str,
    embedding_dimension: int,
) -> list[list[float]]:
    logger.debug(f"Generating embeddings for {len(functions)} functions...")
    function_embeddings: list[list[float]] = []

    for function in functions:
        function_embeddings.append(
            generate_function_embedding(
                function, openai_client, embedding_model, embedding_dimension
            )
        )
    return function_embeddings


def generate_function_embedding(
    function: FunctionEmbeddingFields,
    openai_client: OpenAI,
    embedding_model: str,
    embedding_dimension: int,
) -> list[float]:
    logger.debug(f"Generating embedding for function: {function.name}...")
    text_for_embedding = function.model_dump_json()
    logger.debug(f"Text for function embedding: {text_for_embedding}")
    return generate_embedding(
        openai_client, embedding_model, embedding_dimension, text_for_embedding
    )


# TODO: allow different inference providers
# TODO: exponential backoff?
def generate_embedding(
    openai_client: OpenAI, embedding_model: str, embedding_dimension: int, text: str
) -> list[float]:
    """
    Generate an embedding for the given text using OpenAI's model.
    """
    logger.debug(f"Generating embedding for text: {text}")
    try:
        response = openai_client.embeddings.create(
            input=[text],
            model=embedding_model,
            dimensions=embedding_dimension,
        )
        embedding: list[float] = response.data[0].embedding
        return embedding
    except Exception:
        logger.error("Error generating embedding", exc_info=True)
        raise
```
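A sketch of calling the helper above. The model name is an assumption (the real value comes from config elsewhere in the repo), while the dimension matches `EMBEDDING_DIMENSION` in sql_models.py:

```py
# Hypothetical wiring for generate_embedding; the model choice is illustrative.
from openai import OpenAI

from aci.common.embeddings import generate_embedding

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment

vector = generate_embedding(
    openai_client,
    embedding_model="text-embedding-3-small",  # assumed; the actual model comes from config
    embedding_dimension=1024,                  # matches EMBEDDING_DIMENSION
    text="GITHUB_CLONE_REPO: Clone a GitHub repository",
)
assert len(vector) == 1024
```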
## /backend/aci/common/encryption.py

```py path="/backend/aci/common/encryption.py"
import hashlib
import hmac
from typing import cast

import aws_encryption_sdk  # type: ignore
import boto3  # type: ignore
from aws_cryptographic_material_providers.mpl import (  # type: ignore
    AwsCryptographicMaterialProviders,
)
from aws_cryptographic_material_providers.mpl.config import MaterialProvidersConfig  # type: ignore
from aws_cryptographic_material_providers.mpl.models import CreateAwsKmsKeyringInput  # type: ignore
from aws_cryptographic_material_providers.mpl.references import IKeyring  # type: ignore
from aws_encryption_sdk import CommitmentPolicy

from aci.common import config

client = aws_encryption_sdk.EncryptionSDKClient(
    commitment_policy=CommitmentPolicy.REQUIRE_ENCRYPT_REQUIRE_DECRYPT
)
kms_client = boto3.client(
    "kms",
    region_name=config.AWS_REGION,
    endpoint_url=config.AWS_ENDPOINT_URL,
)
mat_prov: AwsCryptographicMaterialProviders = AwsCryptographicMaterialProviders(
    config=MaterialProvidersConfig()
)
keyring_input: CreateAwsKmsKeyringInput = CreateAwsKmsKeyringInput(
    kms_key_id=config.KEY_ENCRYPTION_KEY_ARN,
    kms_client=kms_client,
)
kms_keyring: IKeyring = mat_prov.create_aws_kms_keyring(input=keyring_input)


def encrypt(plain_data: bytes) -> bytes:
    # TODO: ignore encryptor_header for now
    my_ciphertext, _ = client.encrypt(source=plain_data, keyring=kms_keyring)
    return cast(bytes, my_ciphertext)


def decrypt(cipher_data: bytes) -> bytes:
    # TODO: ignore decryptor_header for now
    my_plaintext, _ = client.decrypt(source=cipher_data, keyring=kms_keyring)
    return cast(bytes, my_plaintext)


def hmac_sha256(message: str) -> str:
    return hmac.new(
        config.API_KEY_HASHING_SECRET.encode("utf-8"), message.encode("utf-8"), hashlib.sha256
    ).hexdigest()
```

## /backend/aci/common/enums.py

```py path="/backend/aci/common/enums.py"
from enum import StrEnum


class APIKeyStatus(StrEnum):
    ACTIVE = "active"
    # can only be disabled by ACI
    DISABLED = "disabled"
    # TODO: this is soft delete (requested by user), in the future might consider hard delete
    # and keep audit logs somewhere else
    DELETED = "deleted"


class SecurityScheme(StrEnum):
    """
    security scheme type for an app (or function if we support overrides)
    """

    NO_AUTH = "no_auth"
    API_KEY = "api_key"
    HTTP_BASIC = "http_basic"
    HTTP_BEARER = "http_bearer"
    OAUTH2 = "oauth2"


class Protocol(StrEnum):
    """
    function protocol type
    ideally all functions under the same app should use the same protocol,
    but we don't enforce that for maximum flexibility
    """

    REST = "rest"
    CONNECTOR = "connector"
    # GRAPHQL = "graphql"
    # WEBSOCKET = "websocket"
    # GRPC = "grpc"


class HttpLocation(StrEnum):
    PATH = "path"
    QUERY = "query"
    HEADER = "header"
    COOKIE = "cookie"
    BODY = "body"


# TODO: use lowercase for consistency?
class HttpMethod(StrEnum):
    GET = "GET"
    POST = "POST"
    PUT = "PUT"
    DELETE = "DELETE"
    PATCH = "PATCH"
    HEAD = "HEAD"
    OPTIONS = "OPTIONS"


class Visibility(StrEnum):
    """visibility of an app or function"""

    PUBLIC = "public"
    PRIVATE = "private"


class OrganizationRole(StrEnum):
    """
    role for a user in an organization.
    """

    OWNER = "Owner"
    ADMIN = "Admin"
    MEMBER = "Member"


class FunctionDefinitionFormat(StrEnum):
    """
    format for a function definition.
    """

    BASIC = "basic"  # only return name and description
    OPENAI = "openai"
    ANTHROPIC = "anthropic"
    OPENAI_RESPONSES = "openai_responses"


class ClientIdentityProvider(StrEnum):
    GOOGLE = "google"
    # GITHUB = "github"
```
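Because these are `StrEnum`s, members compare equal to their string values, which keeps `SqlEnum` columns and JSON payloads readable. A quick illustration:

```py
# StrEnum members behave like their string values.
from aci.common.enums import HttpMethod, SecurityScheme, Visibility

assert SecurityScheme.OAUTH2 == "oauth2"
assert Visibility.PUBLIC.value == "public"
assert HttpMethod.GET == "GET"
```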