This file provides agent-specific guidance for working on gptme. For general project information, see README.md and docs.
- Never push directly to master - always use branches and PRs
- Branch naming: use `feat/`, `fix/`, `docs/`, `refactor/` prefixes
- Commit format: Use Conventional Commits
  - `feat:` for new features (not just docs)
  - `fix:` for bug fixes
  - `docs:` for documentation only
  - `refactor:`, `test:`, `chore:` as appropriate
- Stage files explicitly: Never use `git add .` or `git commit -a`
- Create PRs: Use `gh pr create` after pushing the branch
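As a hedged sketch of the branch-and-commit part of that workflow (the tool name `widget` and file names here are made up for illustration, and the commands run in a throwaway repo):

```shell
set -e
repo=$(mktemp -d)                      # throwaway repo, just for illustration
cd "$repo"
git init -q
git checkout -q -b feat/add-widget-tool            # feat/ prefix for a feature branch
echo "demo" > widget.py
git add widget.py                                  # stage files explicitly, never `git add .`
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "feat: add widget tool"           # Conventional Commits format
git branch --show-current
```

For real work, follow this with `git push -u origin <branch>` and then `gh pr create`.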
- Type hints: All functions must have type annotations
- Formatting: `ruff format` and `ruff check` (run via pre-commit)
- Type checking: `mypy` must pass
- KISS: Keep it simple - avoid over-engineering
- Small functions: Refactor deeply nested code into smaller units
- Minimal mocking: Prefer integration tests over heavy mocking
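As a small illustrative sketch (not gptme code) of the type-hints and small-functions guidance above, a fully annotated, flat helper:

```python
def word_counts(text: str) -> dict[str, int]:
    """Count word occurrences; annotated, small, and not deeply nested."""
    counts: dict[str, int] = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return counts


print(word_counts("to be or not to be"))  # {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```

A function this size is trivial to type-check with mypy and to test directly, with no mocking needed.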
Run tests before submitting PRs:

```shell
make test        # Fast tests (excludes slow/eval)
make test SLOW=1 # Include slow tests
make typecheck   # mypy
make lint        # ruff + other checks
```

Key directories:
- `gptme/` - Core library code
- `gptme/cli/` - CLI entry points
- `gptme/tools/` - Tool implementations
- `gptme/llm/` - LLM provider integrations
- `gptme/server/` - REST API server
- `tests/` - Test suite
- `docs/` - Sphinx documentation (RST + MD)
- `scripts/` - Build and utility scripts
We aim to keep gptme core small and focused. See docs/arewetiny.rst.
Belongs in core (gptme):
- Essential tools (shell, save, patch, browser, vision)
- Core infrastructure (chat loop, message handling, LLM providers)
- Features needed by most users
Belongs in gptme-contrib:
- Specialized tools (Twitter/X, Discord, email)
- Experimental features
- Integrations with specific services
- Multi-agent patterns (consortium)
When in doubt, start in gptme-contrib. If widely adopted, consider upstreaming.
We track startup time and code size. See docs/arewetiny.rst.
CI benchmarks enforce startup thresholds.
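One rough way to check cold startup time locally is to time a fresh interpreter importing the package. This is a sketch, not the actual CI benchmark setup (see docs/arewetiny.rst for that):

```python
# Rough local check of interpreter-spawn + import time.
import subprocess
import sys
import time

start = time.perf_counter()
subprocess.run(
    [sys.executable, "-c", "import gptme"],
    capture_output=True,  # suppress output; we only care about the timing
)
elapsed = time.perf_counter() - start
print(f"cold 'import gptme' took {elapsed:.3f}s")
```

Keeping imports lazy in `gptme/__init__.py` and the CLI entry points is what keeps this number small.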
- Tool: A function the assistant can execute (shell, save, patch, etc.)
- ToolUse: Parsed representation of a tool invocation in a message
- Message: A single message in the conversation
- LogManager: Manages conversation history persistence
- Step: One LLM generation + tool execution cycle
- Turn: Complete user→assistant exchange (may include multiple steps)
See docs/glossary.md for full terminology.
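A simplified, hypothetical sketch of how the Message and ToolUse concepts relate (these field names are assumptions for illustration, not gptme's actual classes; see the source for the real definitions):

```python
from dataclasses import dataclass


@dataclass
class Message:
    """One message in the conversation."""
    role: str      # "user", "assistant", or "system"
    content: str


@dataclass
class ToolUse:
    """Parsed representation of a tool invocation found in a message."""
    tool: str            # e.g. "shell", "save", "patch"
    args: list[str]      # arguments to the tool
    content: str         # body of the invocation, e.g. a shell command


msg = Message(role="assistant", content="Listing files:")
use = ToolUse(tool="shell", args=[], content="ls")
print(use.tool)
```

In a step, the assistant's generated Message is parsed for ToolUse instances, each is executed, and the results feed the next step of the turn.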
- Create `gptme/tools/toolname.py`
- Implement `ToolSpec` with an `execute()` function
- Tools are auto-discovered - no manual registration needed
- Add tests in `tests/test_tools_toolname.py`
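The steps above can be sketched roughly as follows. The `ToolSpec` here is a minimal stand-in defined locally to show the shape; the real class lives in `gptme.tools` and has more fields, so consult existing modules in `gptme/tools/` for the actual interface:

```python
from dataclasses import dataclass
from typing import Callable


# Minimal stand-in for gptme's ToolSpec, just to illustrate the pattern.
@dataclass
class ToolSpec:
    name: str
    desc: str
    execute: Callable[[str], str]


def execute_greet(code: str) -> str:
    """Toy execute() that echoes its input back."""
    return f"greeting: {code}"


# A module-level ToolSpec instance is what auto-discovery would pick up.
tool = ToolSpec(name="greet", desc="Example greeting tool", execute=execute_greet)
print(tool.execute("hello"))  # greeting: hello
```

A matching `tests/test_tools_greet.py` would call `tool.execute()` directly and assert on the result, in line with the minimal-mocking guideline.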
Run the development server:

```shell
uv run gptme-server --port 5000
```

Build the documentation:

```shell
make docs
```