feiskyer/codex-settings
OpenAI Codex CLI settings, configurations, skills and prompts for vibe coding
Deep Analysis
A comprehensive management suite of OpenAI Codex CLI configurations, skills, and prompts, supporting multiple model providers and extensible workflow automation
Core Features
- Multiple model providers: supports LiteLLM, ChatGPT, Azure OpenAI, OpenRouter, and more
- Custom prompt system: 7 preset prompts with positional-argument support
- Skill plugin system: 6 experimental skill bundles
- Configuration profile management: quick switching between predefined configurations
- Permission and sandbox controls: fine-grained approval policies and sandbox modes
- MCP server integration: extensible via Model Context Protocol servers
Technical Implementation
- Clone the repository to ~/.codex
- Edit config.toml to choose a provider
- Set the approval policy and sandbox mode
- Access templates via /prompts: commands
- SKILL.md files are discovered automatically
- Plug-and-play support for 6 AI providers
- Parameterized Markdown templates improve reusability
- Automatic skill discovery lowers integration cost
- Fine-grained permission control balances safety and automation
- Unified environment management across multiple AI providers
- Automation of complex development workflows
- Long-running automated tasks
- Multimodal tasks such as image generation and YouTube transcription
- The skill system is still experimental
- LiteLLM and API keys must be configured manually
OpenAI Codex CLI Settings and Custom Prompts
A curated collection of configurations, skills and custom prompts for OpenAI Codex CLI, designed to enhance your development workflow with various model providers and reusable prompt templates.
For Claude Code settings, skills, agents and custom commands, please refer to feiskyer/claude-code-settings.
Overview
This repository provides:
- Flexible Configuration: Support for multiple model providers (LiteLLM/Copilot proxy, ChatGPT subscription, Azure OpenAI, OpenRouter, ModelScope, Kimi)
- Custom Prompts: Reusable prompt templates for common development tasks
- Skills (Experimental): Discoverable instruction bundles for specialized tasks (image generation, YouTube transcription, spec-driven workflows)
- Best Practices: Pre-configured settings optimized for development workflows
- Easy Setup: Simple installation and configuration process
Quick Start
Installation
```shell
# Backup existing Codex configuration (if any)
mv ~/.codex ~/.codex.bak

# Clone this repository to ~/.codex
git clone https://github.com/feiskyer/codex-settings.git ~/.codex

# Or symlink if you prefer to keep it elsewhere
ln -s /path/to/codex-settings ~/.codex
```
Basic Configuration
The default config.toml uses LiteLLM as a gateway. To use it:
1. Install LiteLLM and Codex CLI:

```shell
pip install -U 'litellm[proxy]'
npm install -g @openai/codex
```

2. Create a LiteLLM config file (full example: litellm_config.yaml):

```yaml
general_settings:
  master_key: sk-dummy

litellm_settings:
  drop_params: true

model_list:
  - model_name: gpt-5.1-codex-max
    model_info:
      mode: responses
      supports_vision: true
    litellm_params:
      model: github_copilot/gpt-5.1-codex-max
      drop_params: true
      extra_headers:
        editor-version: "vscode/1.95.0"
        editor-plugin-version: "copilot-chat/0.26.7"
  - model_name: claude-opus-4.5
    litellm_params:
      model: github_copilot/claude-opus-4.5
      drop_params: true
      extra_headers:
        editor-version: "vscode/1.95.0"
        editor-plugin-version: "copilot-chat/0.26.7"
  - model_name: "*"
    litellm_params:
      model: "github_copilot/*"
      extra_headers:
        editor-version: "vscode/1.95.0"
        editor-plugin-version: "copilot-chat/0.26.7"
```

3. Start the LiteLLM proxy:

```shell
litellm --config ~/.codex/litellm_config.yaml   # Runs on http://localhost:4000 by default
```

4. Run Codex:

```shell
codex
```
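Before launching Codex, it can help to confirm the proxy is actually accepting connections. A minimal sketch, assuming the default port 4000; `wait_for_port` is a hypothetical helper, not part of this repository:

```shell
# Hypothetical helper: block until a TCP port accepts connections,
# so Codex isn't started before the LiteLLM proxy is up.
wait_for_port() {
  host="$1"; port="$2"; tries="${3:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    # bash /dev/tcp probe; substitute `nc -z "$host" "$port"` if you prefer
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "nothing listening on $host:$port" >&2
  return 1
}

# wait_for_port localhost 4000 && codex
```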
Configuration Files
Main Configuration
- config.toml: Default configuration using the LiteLLM gateway
  - Model: `gpt-5` via `model_provider = "github"` (Copilot proxy on `http://localhost:4000`)
  - Approval policy: `on-request`; reasoning summary: `detailed`; reasoning effort: `high`; raw agent reasoning visible
  - MCP servers: `claude` (local), `exa` (hosted), `chrome` (DevTools over `npx`)
Alternative Configurations
Located in configs/ directory:
- OpenAI ChatGPT: Use ChatGPT subscription provider
- Azure OpenAI: Use Azure OpenAI service provider
- GitHub Copilot: Use GitHub Copilot via LiteLLM proxy
- OpenRouter: Use OpenRouter provider
- Model Scope: Use ModelScope provider
- Kimi: Use Moonshot Kimi provider
To use an alternative config:
```shell
# Take ChatGPT for example
cp ~/.codex/configs/chatgpt.toml ~/.codex/config.toml
codex
```
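The copy step can be wrapped in a small helper that also backs up the previous config before overwriting it. A sketch; the `switch_codex_config` function and the `CODEX_HOME` override are illustrative, not part of the repository:

```shell
# Hypothetical helper: activate a profile from configs/ and keep a
# backup of the current config.toml. CODEX_HOME defaults to ~/.codex.
switch_codex_config() {
  codex_home="${CODEX_HOME:-$HOME/.codex}"
  profile="$1"   # e.g. chatgpt, azure, openrouter
  src="$codex_home/configs/$profile.toml"
  if [ ! -f "$src" ]; then
    echo "no such profile: $profile" >&2
    return 1
  fi
  # Preserve the previous provider settings before overwriting
  if [ -f "$codex_home/config.toml" ]; then
    cp "$codex_home/config.toml" "$codex_home/config.toml.bak"
  fi
  cp "$src" "$codex_home/config.toml"
  echo "active config: $profile"
}
```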
Custom Prompts
Custom prompts are stored in the prompts/ directory. Access them via the /prompts: slash menu in Codex.
- `/prompts:deep-reflector` - Analyze development sessions to extract learnings, patterns, and improvements for future interactions.
- `/prompts:insight-documenter [breakthrough]` - Capture and document significant technical breakthroughs into reusable knowledge assets.
- `/prompts:instruction-reflector` - Analyze and improve Codex instructions in AGENTS.md based on conversation history.
- `/prompts:github-issue-fixer [issue-number]` - Systematically analyze, plan, and implement fixes for GitHub issues with PR creation.
- `/prompts:github-pr-reviewer [pr-number]` - Perform thorough GitHub pull request code analysis and review.
- `/prompts:ui-engineer [requirements]` - Create production-ready frontend solutions with modern UI/UX standards.
- `/prompts:prompt-creator [requirements]` - Create Codex custom prompts with proper structure and best practices.
Creating Custom Prompts
- Create a new `.md` file in `~/.codex/prompts/`
- Use argument placeholders:
  - `$1` to `$9`: Positional arguments
  - `$ARGUMENTS`: All arguments joined by spaces
  - `$$`: Literal dollar sign
- Restart Codex to load new prompts
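For example, a minimal review prompt could be created like this (the file name `review-file.md` and its wording are illustrative; the heredoc is quoted so `$1`, `$ARGUMENTS`, and `$$` stay literal for Codex instead of being expanded by the shell):

```shell
# Create a sample custom prompt; PROMPTS_DIR is overridable for testing.
PROMPTS_DIR="${PROMPTS_DIR:-$HOME/.codex/prompts}"
mkdir -p "$PROMPTS_DIR"
cat > "$PROMPTS_DIR/review-file.md" <<'EOF'
Review the file $1 for bugs and style issues.
Extra focus areas: $ARGUMENTS
Write $$ when you need a literal dollar sign.
EOF
```

After restarting Codex, this could then be invoked as `/prompts:review-file src/main.py`, with `$1` bound to the first argument.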
Skills (Experimental)
Skills are reusable instruction bundles that Codex automatically discovers at startup. Each skill has a name, description, and detailed instructions stored on disk. Codex injects only metadata (name, description, path) into context - the body stays on disk until needed.
How to Use Skills
Skills are automatically loaded when Codex starts. To use a skill:
1. List all skills: use the `/skills` command to see all available skills:

```
/skills
```

2. Invoke a skill: use `$<skill-name> [prompt]` to invoke a skill with an optional prompt:

```
$kiro-skill Create a feature spec for user authentication
$nanobanana-skill Generate an image of a sunset over mountains
```
Skills are stored in ~/.codex/skills/**/SKILL.md. Only files named exactly SKILL.md are recognized.
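That discovery rule can be checked by hand; a sketch (note the exact, case-sensitive name match, so a `skill.md` would be ignored):

```shell
# List every skill Codex should discover: only files named exactly
# SKILL.md anywhere under the skills directory count.
SKILLS_DIR="${SKILLS_DIR:-$HOME/.codex/skills}"
find "$SKILLS_DIR" -type f -name 'SKILL.md' 2>/dev/null | sort
```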
Available Skills
claude-skill - Handoff task to Claude Code CLI
claude-skill
Non-interactive automation mode for hands-off task execution using Claude Code. Use when you want to leverage Claude Code to implement features or review code.
Key Features:
- Multiple permission modes (default, acceptEdits, plan, bypassPermissions)
- Autonomous execution without approval prompts
- Streaming progress updates
- Structured final summaries
Requirements: Claude Code CLI installed (`npm install -g @anthropic-ai/claude-code`)
autonomous-skill - Long-running task automation
autonomous-skill
Execute complex, long-running tasks across multiple sessions using a dual-agent pattern (Initializer + Executor) with automatic session continuation.
Warning: workflows may pause when Codex requests permissions. Treat this as experimental; expect to babysit early runs and keep iterating on approvals/sandbox settings.
Key Features:
- Dual-agent pattern (Initializer creates task list, Executor completes tasks)
- Auto-continuation across sessions with progress tracking
- Task isolation with per-task directories (`.autonomous/<task-name>/`)
- Progress persistence via `task_list.md` and `progress.md`
- Non-interactive mode execution
Usage:
```shell
# Start a new autonomous task
~/.codex/skills/autonomous-skill/scripts/run-session.sh "Build a REST API for todo app"

# Continue an existing task
~/.codex/skills/autonomous-skill/scripts/run-session.sh --task-name build-rest-api-todo --continue

# List all tasks
~/.codex/skills/autonomous-skill/scripts/run-session.sh --list
```
nanobanana-skill - Image generation with Gemini
nanobanana-skill
Generate or edit images using Google Gemini API via nanobanana. Use when creating, generating, or editing images.
Key Features:
- Image generation with various aspect ratios (square, portrait, landscape, ultra-wide)
- Image editing capabilities
- Multiple model options (gemini-3-pro-image-preview, gemini-2.5-flash-image)
- Resolution options (1K, 2K, 4K)
Requirements:
- `GEMINI_API_KEY` configured in `~/.nanobanana.env`
- Python3 with google-genai, Pillow, python-dotenv
youtube-transcribe-skill - Extract YouTube subtitles
youtube-transcribe-skill
Extract subtitles/transcripts from a YouTube video URL and save as a local file.
Key Features:
- Dual extraction methods: CLI (`yt-dlp`) and browser automation (fallback)
- Automatic subtitle language selection (zh-Hans, zh-Hant, en)
- Cookie handling for age-restricted content
- Saves transcripts to local text files
Requirements:
yt-dlp(for CLI method), or- Browser automation MCP server (for fallback method)
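Under the hood, the CLI method amounts to a yt-dlp invocation along these lines (the flags are standard yt-dlp options; the language priority mirrors the skill's zh-Hans/zh-Hant/en order, and the URL is a placeholder). It is built as a string here so it can be inspected before running:

```shell
# Sketch of the CLI extraction method: fetch subtitles only, prefer
# zh-Hans, then zh-Hant, then en, and convert to SRT.
VIDEO_URL="https://www.youtube.com/watch?v=VIDEO_ID"   # placeholder URL
YTDLP_CMD="yt-dlp --skip-download --write-subs --write-auto-subs \
  --sub-langs zh-Hans,zh-Hant,en --convert-subs srt \
  -o transcript.%(ext)s $VIDEO_URL"
echo "$YTDLP_CMD"
# eval "$YTDLP_CMD"   # run it for real once yt-dlp is installed
```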
kiro-skill - Interactive feature development
kiro-skill
Interactive feature development workflow from idea to implementation. Creates requirements (EARS format), design documents, and implementation task lists.
Triggered by: "kiro" or references to .kiro/specs/ directory
Workflow:
- Requirements → Define what needs to be built (EARS format with user stories)
- Design → Determine how to build it (architecture, components, data models)
- Tasks → Create actionable implementation steps (test-driven, incremental)
- Execute → Implement tasks one at a time
Storage: Creates files in .kiro/specs/{feature-name}/ directory
spec-kit-skill - Constitution-based development
spec-kit-skill
GitHub Spec-Kit integration for constitution-based spec-driven development.
Triggered by: "spec-kit", "speckit", "constitution", "specify", or references to .specify/ directory
Prerequisites:
```shell
# Install spec-kit CLI
uv tool install specify-cli --from git+https://github.com/github/spec-kit.git

# Initialize project
specify init . --ai codex
```
Related Skills
- wshobson/agents - Intelligent automation and multi-agent orchestration for Claude Code. The most comprehensive Claude Code plugin ecosystem, covering full-stack development scenarios with a three-tier model strategy balancing performance and cost.
- ComposioHQ/awesome-claude-skills - A curated list of awesome Claude Skills, resources, and tools for customizing Claude AI workflows. The most comprehensive Claude Skills resource list; connect-apps is a killer feature.
- code-yeongyu/oh-my-opencode - The Best Agent Harness. Meet Sisyphus: the batteries-included agent that codes like you. A powerful multi-agent coding tool, but note its OAuth limitations.
- nextlevelbuilder/ui-ux-pro-max-skill - An AI skill that provides design intelligence for building professional UI/UX across multiple platforms. Essential for designers; a comprehensive UI/UX knowledge base.
- thedotmack/claude-mem - A Claude Code plugin that automatically captures everything Claude does during your coding sessions, compresses it with AI (using Claude's agent-sdk), and injects relevant context back into future sessions. A practical solution for Claude's memory issues.
- OthmanAdi/planning-with-files - A Claude Code skill implementing Manus-style persistent markdown planning, the workflow pattern behind the $2B acquisition. Context engineering best practices; an open-source implementation of Manus mode.

