```
├── .github/
│   ├── dangerous_example.png
│   ├── demo.gif
│   ├── demo.mov
│   ├── demo.mp4
│   └── workflows/
│       └── publish.yml
├── .gitignore
├── LICENSE
├── README.md
├── pyproject.toml
└── src/
    └── zev/
        ├── config/
        │   ├── __init__.py
        │   ├── setup.py
        │   └── types.py
        ├── constants.py
        ├── llms/
        │   ├── __init__.py
        │   ├── gemini/
        │   │   ├── __init__.py
        │   │   ├── provider.py
        │   │   └── setup.py
        │   ├── inference_provider_base.py
        │   ├── llm.py
        │   ├── ollama/
        │   │   ├── __init__.py
        │   │   ├── provider.py
        │   │   └── setup.py
        │   ├── openai/
        │   │   ├── __init__.py
        │   │   ├── provider.py
        │   │   └── setup.py
        │   └── types.py
        ├── main.py
        └── utils.py
```

## /.github/dangerous_example.png

Binary file available at https://raw.githubusercontent.com/dtnewman/zev/refs/heads/main/.github/dangerous_example.png

## /.github/demo.gif

Binary file available at https://raw.githubusercontent.com/dtnewman/zev/refs/heads/main/.github/demo.gif

## /.github/demo.mov

Binary file available at https://raw.githubusercontent.com/dtnewman/zev/refs/heads/main/.github/demo.mov

## /.github/demo.mp4

Binary file available at https://raw.githubusercontent.com/dtnewman/zev/refs/heads/main/.github/demo.mp4

## /.github/workflows/publish.yml

```yml path="/.github/workflows/publish.yml"
name: Publish to PyPI

on:
  release:
    types: [published]
  push:
    tags:
      - 'v*'
  workflow_dispatch:

# Add permissions configuration
permissions:
  id-token: write

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.9'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install build
      - name: Build package
        run: python -m build
      - name: Publish to PyPI
        uses: pypa/gh-action-pypi-publish@release/v1
        with:
          password: ${{ secrets.PYPI_API_TOKEN }}
```

## /.gitignore

```gitignore path="/.gitignore"
.env
src/*.egg-info
**/__pycache__

# mac os
.DS_Store

# dist
/dist
```

## /LICENSE

``` path="/LICENSE"
Copyright 2025 Daniel Newman

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
documentation files (the “Software”), to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and
to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of
the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
```

## /README.md

# Zev 🔍

[![PyPI version](https://badge.fury.io/py/zev.svg)](https://badge.fury.io/py/zev)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

Zev helps you remember (or discover) terminal commands using natural language.

![Description](./.github/demo.gif)

## 🔧 Installation

```bash
pip install zev
```

- **Note:** This project runs on top of LLM APIs like OpenAI, Google's Gemini, or [Ollama](https://ollama.com/).
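Putting install and setup together, a first session might look like this (an illustrative transcript; the suggested commands you get back depend on the provider and model you configure):

```bash
pip install zev
zev --setup    # one-time: choose OpenAI, Gemini, or Ollama and enter credentials
zev 'show disk usage for current directory'
```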
## 📦 Dependencies

For clipboard functionality (copying and pasting) to work properly, you may need to install:

- On Linux: `xclip` or `xsel` (for X11) or `wl-clipboard` (for Wayland)
- On macOS: No additional dependencies needed
- On Windows: No additional dependencies needed

## 🎮 Usage

#### Option 1: Interactive Mode

```bash
zev
```

#### Option 2: Direct Query

```bash
zev ''
```

## 📝 Examples

```bash
# Find running processes
zev 'show all running python processes'

# File operations
zev 'find all .py files modified in the last 24 hours'

# System information
zev 'show disk usage for current directory'

# Network commands
zev 'check if google.com is reachable'

# Git operations
zev 'show uncommitted changes in git'
```

## 🛡️ Safety Considerations

⚠️ Commands are generated by LLMs. While the tool attempts to flag dangerous commands, it may not always do so. Use caution.

![Example of dangerous command warning](./.github/dangerous_example.png)

## ⚙️ Settings

### **Supported LLM Providers:**

- OpenAI
- Google Gemini
- Ollama

You can update your API keys and provider settings by running:

```bash
zev --setup
```

### OpenAI

To use OpenAI, you need an OpenAI account and a subscription. You can create an API key on [this page](https://platform.openai.com/settings/organization/api-keys).

### Google Gemini (Free)

To use Google's Gemini models, you need a Google AI Studio account. You can create a Gemini API key in [Google AI Studio](https://aistudio.google.com/).

## 🐪 Using with Ollama

You can use Zev with [Ollama](https://ollama.ai/) as an alternative to hosted providers, which lets you run all commands locally. To set this up:

1. Install and start [Ollama](https://ollama.com/) with a model of your choice
2. Run `zev --setup` and put in the proper settings. For example:

```
? Pick your LLM provider: Ollama
? Enter the Ollama URL: http://localhost:11434/v1
? Enter the model to use (e.g. llama3.2): llama3.2
```

Note that to switch backends, you can re-run `zev --setup` at any time.

## 🤝 Contributing

Contributions are welcome! Feel free to open issues or submit pull requests.

## 📄 License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## /pyproject.toml

```toml path="/pyproject.toml"
[project]
name = "zev"
version = "0.6.2"
description = "Lookup CLI commands easily"
readme = "README.md"
dependencies = [
    "google-genai>=1.11.0",
    "openai>=1.72.0",
    "pydantic>=2.10.6",
    "pyperclip>=1.9.0",
    "python-dotenv>=1.0.1",
    "questionary>=2.1.0",
    "rich>=13.9.4"
]
requires-python = ">=3.9"
urls = { Repository = "https://github.com/dtnewman/zev" }

[project.scripts]
zev = "zev.main:app"

[project.optional-dependencies]
dev = [
    "ruff>=0.11.2"
]

[build-system]
requires = ["setuptools>=61.0"]
build-backend = "setuptools.build_meta"

[tool.setuptools.packages.find]
where = ["src"]
include = ["zev*"]

[tool.ruff]
line-length = 120
```

## /src/zev/config/__init__.py

```py path="/src/zev/config/__init__.py"
from pathlib import Path

from dotenv import dotenv_values


class Config:
    def __init__(self):
        self.config_path = Path.home() / ".zevrc"
        self.vals = dotenv_values(self.config_path)

    @property
    def llm_provider(self):
        return self.vals.get("LLM_PROVIDER")

    # OpenAI
    @property
    def openai_api_key(self):
        return self.vals.get("OPENAI_API_KEY")

    @property
    def openai_model(self):
        return self.vals.get("OPENAI_MODEL")

    # Ollama
    @property
    def ollama_base_url(self):
        return self.vals.get("OLLAMA_BASE_URL")

    @property
    def ollama_model(self):
        return self.vals.get("OLLAMA_MODEL")

    # Gemini
    @property
    def gemini_model(self):
        return self.vals.get("GEMINI_MODEL")

    @property
    def gemini_api_key(self):
        return self.vals.get("GEMINI_API_KEY")


config = Config()
```

## /src/zev/config/setup.py

```py path="/src/zev/config/setup.py"
from dotenv import dotenv_values
from pathlib import Path
import questionary
from typing import Dict

from zev.config.types import (
    SetupQuestion,
    SetupQuestionSelect,
    SetupQuestionText,
    SetupQuestionSelectOption,
)
from zev.constants import LLMProviders
from zev.llms.ollama.setup import questions as ollama_questions
from zev.llms.openai.setup import questions as openai_questions
from zev.llms.gemini.setup import questions as gemini_questions

setup_questions = [
    SetupQuestionSelect(
        name="LLM_PROVIDER",
        prompt="Pick your LLM provider:",
        options=[
            SetupQuestionSelectOption(
                value=LLMProviders.OPENAI,
                label="OpenAI",
                follow_up_questions=openai_questions,
            ),
            SetupQuestionSelectOption(
                value=LLMProviders.OLLAMA,
                label="Ollama",
                follow_up_questions=ollama_questions,
            ),
            SetupQuestionSelectOption(
                value=LLMProviders.GEMINI,
                label="Gemini",
                follow_up_questions=gemini_questions,
            ),
        ],
    )
]


def prompt_question(question: SetupQuestion, answers: Dict[str, str]) -> Dict[str, str]:
    existing_answer = answers.get(question.name)
    if type(question) == SetupQuestionSelect:
        # Find the matching option for the default value
        default_option = None
        if existing_answer:
            default_option = next((opt for opt in question.options if opt.value == existing_answer), None)
        answer: SetupQuestionSelect = questionary.select(
            question.prompt,
            choices=[
                questionary.Choice(option.label, description=option.description, value=option)
                for option in question.options
            ],
            default=default_option,
        ).ask()
        answers[question.name] = answer.value
        for q in answer.follow_up_questions:
            answers.update(prompt_question(q, answers=answers))
    elif type(question) == SetupQuestionText:
        answer = questionary.text(
            question.prompt, default=existing_answer or question.default, validate=question.validator
        ).ask()
        answers[question.name] = answer
    else:
        raise Exception("Invalid question type")
    return answers


def run_setup():
    config_path = Path.home() / ".zevrc"
    answers = dotenv_values(config_path)  # load in current values and then override as necessary
    for question in setup_questions:
        answers.update(prompt_question(question, answers))

    new_file = ""
    for env_var_name, value in answers.items():
        new_file += f"{env_var_name}={value}\n"

    with open(config_path, "w") as f:
        f.write(new_file)
```

## /src/zev/config/types.py

```py path="/src/zev/config/types.py"
from dataclasses import dataclass
from typing import List, Optional, Tuple
from enum import Enum


@dataclass
class SetupQuestionSelectOption:
    """Represents a possible answer to a question"""

    value: str
    label: str
    description: Optional[str] = None
    follow_up_questions: Tuple["SetupQuestion"] = ()


@dataclass
class SetupQuestion:
    name: str
    prompt: str


@dataclass
class SetupQuestionSelect(SetupQuestion):
    """Prompts the user with a select menu"""

    options: List[SetupQuestionSelectOption]


@dataclass
class SetupQuestionText(SetupQuestion):
    """Prompts the user to enter text"""

    validator: Optional[callable] = None  # a function that takes answer and returns a bool
    default: Optional[str] = ""
```

## /src/zev/constants.py

```py path="/src/zev/constants.py"
class LLMProviders:
    OPENAI = "openai"
    OLLAMA = "ollama"
    GEMINI = "gemini"


DEFAULT_PROVIDER = LLMProviders.OPENAI

# Default model names for each provider
OPENAI_DEFAULT_MODEL = "gpt-4o-mini"
GEMINI_DEFAULT_MODEL = "gemini-2.0-flash"

OPENAI_BASE_URL = "https://api.openai.com/v1"
CONFIG_FILE_NAME = ".zevrc"

PROMPT = """
You are a helpful assistant that helps users remember commands for the terminal. You will return a JSON object
with a list of at most three options. The options should be related to the prompt that the user provides (the
prompt might either be descriptive or in the form of a question). The options should be in the form of a command
that can be run in a bash terminal.

If the user prompt is not clear, return an empty list and set is_valid to false, and provide an explanation of
why it is not clear in the explanation_if_not_valid field.

If you provide an option that is likely to be dangerous, set is_dangerous to true for that option.
For example, the command 'git reset --hard' is dangerous because it can delete all the user's local changes.
'rm -rf' is dangerous because it can delete all the files in the user's directory. If something is marked as
dangerous, provide a short explanation of why it is dangerous in the dangerous_explanation field (leave this
field empty if the option is not dangerous).

Otherwise, set is_valid to true, leave explanation_if_not_valid empty, and provide the commands in the commands
field (remember, up to 3 options, and they all must be commands that can be run in a bash terminal without
changing anything). Each command should have a short explanation of what it does.

Here is some context about the user's environment:

==============

{context}

==============

Here is the user's prompt:

==============

{prompt}
"""
```

## /src/zev/llms/__init__.py

```py path="/src/zev/llms/__init__.py"

```

## /src/zev/llms/gemini/__init__.py

```py path="/src/zev/llms/gemini/__init__.py"

```

## /src/zev/llms/gemini/provider.py

```py path="/src/zev/llms/gemini/provider.py"
from google import genai
import json

from zev.config import config
from zev.constants import PROMPT, GEMINI_DEFAULT_MODEL
from zev.llms.inference_provider_base import InferenceProvider
from zev.llms.openai.provider import OpenAIProvider
from zev.llms.types import OptionsResponse


class GeminiProvider(InferenceProvider):
    def __init__(self):
        if not config.gemini_api_key:
            raise ValueError("GEMINI_API_KEY must be set. Try running `zev --setup`.")
        self.client = genai.Client(api_key=config.gemini_api_key)
        self.model = config.gemini_model or GEMINI_DEFAULT_MODEL

    def get_options(self, prompt: str, context: str) -> OptionsResponse | None:
        try:
            assembled_prompt = PROMPT.format(prompt=prompt, context=context)
            response = self.client.models.generate_content(
                model=self.model,
                contents=assembled_prompt,
                config={
                    "response_mime_type": "application/json",
                    "response_schema": OptionsResponse,
                },
            )
            return response.parsed
        except genai.errors.ClientError as e:
            print("Error:", e.details["error"]["message"])
            print("Note that to update settings, you can run `zev --setup`.")
            return
```

## /src/zev/llms/gemini/setup.py

```py path="/src/zev/llms/gemini/setup.py"
from zev.config.types import SetupQuestionText, SetupQuestionSelect, SetupQuestionSelectOption

questions = (
    SetupQuestionText(
        name="GEMINI_API_KEY",
        prompt="Your GEMINI api key:",
        default="",
    ),
    SetupQuestionSelect(
        name="GEMINI_MODEL",
        prompt="Choose which model you would like to default to:",
        options=[
            SetupQuestionSelectOption(
                value="gemini-1.5-flash",
                label="gemini-1.5-flash",
                description="Low latency, good for summarization, good performance",
            ),
            SetupQuestionSelectOption(
                value="gemini-2.0-flash",
                label="gemini-2.0-flash",
                description="Long context, good performance",
            ),
        ],
    ),
)
```

## /src/zev/llms/inference_provider_base.py

```py path="/src/zev/llms/inference_provider_base.py"
from zev.llms.types import OptionsResponse


class InferenceProvider:
    def __init__(self):
        raise NotImplementedError("Subclasses must implement this method")

    def get_options(self, prompt: str, context: str) -> OptionsResponse | None:
        raise NotImplementedError("Subclasses must implement this method")
```

## /src/zev/llms/llm.py

```py path="/src/zev/llms/llm.py"
from zev.config import config
from zev.constants import LLMProviders
from zev.llms.openai.provider import OpenAIProvider
from zev.llms.ollama.provider import OllamaProvider
from zev.llms.gemini.provider import GeminiProvider
from zev.llms.inference_provider_base import InferenceProvider


def get_inference_provider() -> InferenceProvider:
    if config.llm_provider == LLMProviders.OPENAI:
        return OpenAIProvider()
    elif config.llm_provider == LLMProviders.OLLAMA:
        return OllamaProvider()
    elif config.llm_provider == LLMProviders.GEMINI:
        return GeminiProvider()
    else:
        raise ValueError(f"Invalid LLM provider: {config.llm_provider}")
```

## /src/zev/llms/ollama/__init__.py

```py path="/src/zev/llms/ollama/__init__.py"

```

## /src/zev/llms/ollama/provider.py

```py path="/src/zev/llms/ollama/provider.py"
from openai import OpenAI

from zev.llms.inference_provider_base import InferenceProvider
from zev.llms.openai.provider import OpenAIProvider
from zev.config import config


class OllamaProvider(OpenAIProvider):
    """
    Same as OpenAIProvider, but takes a different base url and model.
    """

    def __init__(self):
        if not config.ollama_base_url:
            raise ValueError("OLLAMA_BASE_URL must be set. Try running `zev --setup`.")
        if not config.ollama_model:
            raise ValueError("OLLAMA_MODEL must be set. Try running `zev --setup`.")

        # api_key is not used, but is still required by the OpenAI client
        # https://github.com/ollama/ollama/blob/5cfc1c39f3d5822b0c0906f863f6df45c141c33b/docs/openai.md?plain=1#L19
        self.client = OpenAI(base_url=config.ollama_base_url, api_key="ollama")
        self.model = config.ollama_model
```

## /src/zev/llms/ollama/setup.py

```py path="/src/zev/llms/ollama/setup.py"
from zev.config.types import SetupQuestionText

questions = (
    SetupQuestionText(
        name="OLLAMA_BASE_URL",
        prompt="Enter the Ollama URL:",
        default="http://localhost:11434/v1",
    ),
    SetupQuestionText(name="OLLAMA_MODEL", prompt="Enter the model to use (e.g. llama3.2):"),
)
```

## /src/zev/llms/openai/__init__.py

```py path="/src/zev/llms/openai/__init__.py"

```

## /src/zev/llms/openai/provider.py

```py path="/src/zev/llms/openai/provider.py"
from openai import OpenAI, AuthenticationError

from zev.config import config
from zev.constants import OPENAI_BASE_URL, OPENAI_DEFAULT_MODEL, PROMPT
from zev.llms.inference_provider_base import InferenceProvider
from zev.llms.types import OptionsResponse


class OpenAIProvider(InferenceProvider):
    def __init__(self):
        if not config.openai_api_key:
            raise ValueError("OPENAI_API_KEY must be set. Try running `zev --setup`.")
        self.client = OpenAI(base_url=OPENAI_BASE_URL, api_key=config.openai_api_key)
        self.model = config.openai_model or OPENAI_DEFAULT_MODEL

    def get_options(self, prompt: str, context: str) -> OptionsResponse | None:
        try:
            assembled_prompt = PROMPT.format(prompt=prompt, context=context)
            response = self.client.beta.chat.completions.parse(
                model=self.model,
                messages=[{"role": "user", "content": assembled_prompt}],
                response_format=OptionsResponse,
            )
            return response.choices[0].message.parsed
        except AuthenticationError as e:
            print("Error: There was an error with your OpenAI API key. You can change it by running `zev --setup`.")
            return
```

## /src/zev/llms/openai/setup.py

```py path="/src/zev/llms/openai/setup.py"
from zev.config.types import SetupQuestionText, SetupQuestionSelect, SetupQuestionSelectOption

questions = (
    SetupQuestionText(
        name="OPENAI_API_KEY",
        prompt="Your OPENAI api key:",
        default="",
    ),
    SetupQuestionSelect(
        name="OPENAI_MODEL",
        prompt="Choose which model you would like to default to:",
        options=[
            SetupQuestionSelectOption(
                value="gpt-4o-mini",
                label="gpt-4o-mini",
                description="Good performance and speed, and cheaper",
            ),
            SetupQuestionSelectOption(
                value="gpt-4o",
                label="gpt-4o",
                description="More expensive and slower, but better performance",
            ),
        ],
    ),
)
```

## /src/zev/llms/types.py

```py path="/src/zev/llms/types.py"
from pydantic import BaseModel
from typing import Optional


class Command(BaseModel):
    command: str
    short_explanation: str
    is_dangerous: bool
    dangerous_explanation: Optional[str] = None


class OptionsResponse(BaseModel):
    commands: list[Command]
    is_valid: bool
    explanation_if_not_valid: Optional[str] = None
```

## /src/zev/main.py

```py path="/src/zev/main.py"
import dotenv
from pathlib import Path
import pyperclip
import questionary
from rich import print as rprint
from rich.console import Console
import sys

from zev.config.setup import run_setup
from zev.constants import CONFIG_FILE_NAME
from zev.llms.llm import get_inference_provider
from zev.utils import get_env_context, get_input_string


def setup():
    run_setup()


def show_options(words: str):
    context = get_env_context()
    console = Console()

    with console.status("[bold blue]Thinking...", spinner="dots"):
        inference_provider = get_inference_provider()
        response = inference_provider.get_options(prompt=words, context=context)

    if response is None:
        return

    if not response.is_valid:
        print(response.explanation_if_not_valid)
        return

    if not response.commands:
        print("No commands available")
        return

    options = [
        questionary.Choice(cmd.command, description=cmd.short_explanation, value=cmd) for cmd in response.commands
    ]
    options.append(questionary.Choice("Cancel"))
    options.append(questionary.Separator())

    selected = questionary.select(
        "Select command:",
        choices=options,
        use_shortcuts=True,
        style=questionary.Style(
            [
                ("answer", "fg:#61afef"),
                ("question", "bold"),
                ("instruction", "fg:#98c379"),
            ]
        ),
    ).ask()

    if selected != "Cancel":
        print("")
        if selected.dangerous_explanation:
            rprint(f"[red]⚠️ Warning: {selected.dangerous_explanation}[/red]\n")
        try:
            pyperclip.copy(selected.command)
            rprint("[green]✓[/green] Copied to clipboard")
        except pyperclip.PyperclipException as e:
            rprint(
                f"[red]Could not copy to clipboard: {e} (the clipboard may not work at all if you are running over SSH)[/red]"
            )
            rprint("[cyan]Here is your command:[/cyan]")
            print(selected.command)


def run_no_prompt():
    input = get_input_string("input", "Describe what you want to do", "", False)
    show_options(input)


def app():
    # check if .zevrc exists or if setting up again
    config_path = Path.home() / CONFIG_FILE_NAME
    args = [arg.strip() for arg in sys.argv[1:]]

    if not config_path.exists():
        run_setup()
        print("Setup complete...\n")
        if len(args) == 1 and args[0] == "--setup":
            return
    elif len(args) == 1 and args[0] == "--setup":
        dotenv.load_dotenv(config_path, override=True)
        run_setup()
        print("Setup complete...\n")
        return
    elif len(args) == 1 and args[0] == "--version":
        print(f"zev version: 0.6.2")
        return

    # important: make sure this is loaded before actually running the app (in regular or interactive mode)
    dotenv.load_dotenv(config_path, override=True)

    if not args:
        run_no_prompt()
        return

    # Strip any trailing question marks from the input
    query = " ".join(args).rstrip("?")
    show_options(query)


if __name__ == "__main__":
    app()
```

## /src/zev/utils.py

```py path="/src/zev/utils.py"
import os
import platform


def get_input_string(
    field_name: str,
    prompt: str,
    default: str = "",
    required: bool = False,
) -> str:
    if default:
        prompt = f"{prompt} (default: {default})"
    else:
        prompt = f"{prompt}"

    # ANSI escape code for green color (#98c379)
    green_color = "\033[38;2;152;195;121m"
    reset_color = "\033[0m"

    value = input(f"{green_color}{prompt}{reset_color}: ") or default
    if required and not value:
        print(f"{field_name} is required, please try again")
        return get_input_string(field_name, prompt, default, required)
    return value or default


def get_env_context() -> str:
    os_name = platform.platform(aliased=True)
    shell = os.environ.get("SHELL") or os.environ.get("COMSPEC")
    return f"OS: {os_name}\nSHELL: {shell}" if shell else f"OS: {os_name}"
```
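To make the data flow above concrete: every provider returns an `OptionsResponse` (defined in `src/zev/llms/types.py`), which `main.py` turns into a selection menu. The sketch below mimics that shape with stdlib-only stand-ins (`json` and `dataclasses` in place of pydantic, and an invented sample response), so it illustrates the structure rather than the real parsing path:

```python
import json
from dataclasses import dataclass
from typing import Optional


# Stand-in for the pydantic Command model in src/zev/llms/types.py
@dataclass
class Command:
    command: str
    short_explanation: str
    is_dangerous: bool
    dangerous_explanation: Optional[str] = None


# A hypothetical provider response, shaped like OptionsResponse
raw = json.dumps({
    "is_valid": True,
    "explanation_if_not_valid": None,
    "commands": [
        {
            "command": "df -h .",
            "short_explanation": "Show disk usage for the current directory",
            "is_dangerous": False,
        },
    ],
})

payload = json.loads(raw)
commands = [Command(**c) for c in payload["commands"]]
print(commands[0].command)  # prints: df -h .
```

In the real code, the OpenAI and Gemini SDKs do this validation step themselves (via `response_format`/`response_schema`), so the providers hand `main.py` an already-parsed object.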