feiskyer/codex-settings

OpenAI Codex CLI settings, configurations, skills and prompts for vibe coding

License: MIT · Language: Shell
Topics: agentic-ai, claude-code, claude-skills, codex, copilot, litellm, openai, spec-driven-development, vibe-coding

Deep Analysis

A comprehensive management suite of OpenAI Codex CLI configurations, skills, and prompts, supporting multiple model providers and extensible workflow automation.

Recommended

Core Features

  • Multiple model providers: supports LiteLLM, ChatGPT, Azure OpenAI, OpenRouter, and more
  • Custom prompt system: 7 preset prompts with positional-argument support
  • Skill plugin system: 6 experimental skill bundles
  • Configuration profile management: quick switching between predefined configurations
  • Permission and sandbox controls: fine-grained approval policies and sandbox modes
  • MCP server integration: extensible via Model Context Protocol servers

Technical Implementation

Architecture: layered, modular design composed of a configuration layer, a prompt layer, and a skill layer
Execution Flow:

  1. Install and initialize: clone the repository to ~/.codex
  2. Select a model provider: edit config.toml to choose a provider
  3. Configure reasoning parameters: set the approval policy and sandbox mode
  4. Load prompts: access templates via /prompts: commands
  5. Enable skills: SKILL.md files are discovered automatically

Key Components:
  • TOML configuration: multi-profile configuration management
  • Markdown templates: parameterized prompts
  • LiteLLM gateway: unified proxy across multiple models
Highlights
  • Plug-and-play support for 6 AI providers
  • Parameterized Markdown templates improve reusability
  • Automatic skill discovery lowers integration cost
  • Fine-grained permission controls balance safety and automation
Use Cases
  • Unified environment management across multiple AI providers
  • Automation of complex development workflows
  • Long-running automated tasks
  • Multimodal tasks such as image generation and YouTube transcription
Limitations
  • The skill system is still experimental
  • LiteLLM and API keys must be configured manually
Tech Stack
OpenAI Codex CLI, LiteLLM, TOML, Markdown, MCP

OpenAI Codex CLI Settings and Custom Prompts

A curated collection of configurations, skills and custom prompts for OpenAI Codex CLI, designed to enhance your development workflow with various model providers and reusable prompt templates.

For Claude Code settings, skills, agents, and custom commands, please refer to feiskyer/claude-code-settings.

Overview

This repository provides:

  • Flexible Configuration: Support for multiple model providers (LiteLLM/Copilot proxy, ChatGPT subscription, Azure OpenAI, OpenRouter, ModelScope, Kimi)
  • Custom Prompts: Reusable prompt templates for common development tasks
  • Skills (Experimental): Discoverable instruction bundles for specialized tasks (image generation, YouTube transcription, spec-driven workflows)
  • Best Practices: Pre-configured settings optimized for development workflows
  • Easy Setup: Simple installation and configuration process

Quick Start

Installation

# Backup existing Codex configuration (if any)
mv ~/.codex ~/.codex.bak

# Clone this repository to ~/.codex
git clone https://github.com/feiskyer/codex-settings.git ~/.codex

# Or symlink if you prefer to keep it elsewhere
ln -s /path/to/codex-settings ~/.codex

Basic Configuration

The default config.toml uses LiteLLM as a gateway. To use it:

  1. Install LiteLLM and Codex CLI:

    pip install -U 'litellm[proxy]'
    npm install -g @openai/codex
    
  2. Create a LiteLLM config file (full example litellm_config.yaml):

    general_settings:
      master_key: sk-dummy
    litellm_settings:
      drop_params: true
    model_list:
    - model_name: gpt-5.1-codex-max
      model_info:
        mode: responses
        supports_vision: true
      litellm_params:
        model: github_copilot/gpt-5.1-codex-max
        drop_params: true
        extra_headers:
          editor-version: "vscode/1.95.0"
          editor-plugin-version: "copilot-chat/0.26.7"
    - model_name: claude-opus-4.5
      litellm_params:
        model: github_copilot/claude-opus-4.5
        drop_params: true
        extra_headers:
          editor-version: "vscode/1.95.0"
          editor-plugin-version: "copilot-chat/0.26.7"
    - model_name: "*"
      litellm_params:
        model: "github_copilot/*"
        extra_headers:
          editor-version: "vscode/1.95.0"
          editor-plugin-version: "copilot-chat/0.26.7"
    
  3. Start LiteLLM proxy:

    litellm --config ~/.codex/litellm_config.yaml
    # Runs on http://localhost:4000 by default
    
  4. Run Codex:

    codex
    

Configuration Files

Main Configuration

  • config.toml: Default configuration using LiteLLM gateway
    • Model: gpt-5 via model_provider = "github" (Copilot proxy on http://localhost:4000)
    • Approval policy: on-request; reasoning summary: detailed; reasoning effort: high; raw agent reasoning visible
    • MCP servers: claude (local), exa (hosted), chrome (DevTools over npx)
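
For orientation, the settings listed above map onto keys roughly like the ones below. The key names follow the Codex CLI configuration schema, but the values and the scratch-file path are illustrative; compare against the repository's actual config.toml rather than applying this sketch directly.

# Hypothetical sketch, written to a scratch file so the bundled config.toml stays untouched
cat > /tmp/codex-config-sketch.toml <<'EOF'
model = "gpt-5"
model_provider = "github"            # provider id pointing at the LiteLLM/Copilot proxy
approval_policy = "on-request"
model_reasoning_effort = "high"
model_reasoning_summary = "detailed"
show_raw_agent_reasoning = true

[model_providers.github]
name = "LiteLLM (Copilot proxy)"
base_url = "http://localhost:4000"
wire_api = "responses"
EOF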

Alternative Configurations

Alternative provider configurations live in the configs/ directory.

To use an alternative config:

# Take ChatGPT for example
cp ~/.codex/configs/chatgpt.toml ~/.codex/config.toml
codex

Custom Prompts

Custom prompts are stored in the prompts/ directory. Access them via the /prompts: slash menu in Codex.

  • /prompts:deep-reflector - Analyze development sessions to extract learnings, patterns, and improvements for future interactions.
  • /prompts:insight-documenter [breakthrough] - Capture and document significant technical breakthroughs into reusable knowledge assets.
  • /prompts:instruction-reflector - Analyze and improve Codex instructions in AGENTS.md based on conversation history.
  • /prompts:github-issue-fixer [issue-number] - Systematically analyze, plan, and implement fixes for GitHub issues with PR creation.
  • /prompts:github-pr-reviewer [pr-number] - Perform thorough GitHub pull request code analysis and review.
  • /prompts:ui-engineer [requirements] - Create production-ready frontend solutions with modern UI/UX standards.
  • /prompts:prompt-creator [requirements] - Create Codex custom prompts with proper structure and best practices.

Creating Custom Prompts

  1. Create a new .md file in ~/.codex/prompts/
  2. Use argument placeholders (see the example after this list):
    • $1 to $9: Positional arguments
    • $ARGUMENTS: All arguments joined by spaces
    • $$: Literal dollar sign
  3. Restart Codex to load new prompts
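
For example, a hypothetical commit-message prompt using the placeholders above might look like this (the file name and wording are illustrative, not part of this repository):

cat > ~/.codex/prompts/commit-writer.md <<'EOF'
Write a conventional commit message for the staged changes.

Scope: $1
Additional context from the user: $ARGUMENTS

Use a literal dollar sign (written as $$) only when quoting shell snippets.
EOF

After restarting Codex, the template would be available as /prompts:commit-writer [scope] [notes...].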

Skills (Experimental)

Skills are reusable instruction bundles that Codex automatically discovers at startup. Each skill has a name, description, and detailed instructions stored on disk. Codex injects only metadata (name, description, path) into context - the body stays on disk until needed.

How to Use Skills

Skills are automatically loaded when Codex starts. To use a skill:

  1. List all skills: Use the /skills command to see all available skills

    /skills
    
  2. Invoke a skill: Use $<skill-name> [prompt] to invoke a skill with an optional prompt

    $kiro-skill Create a feature spec for user authentication
    $nanobanana-skill Generate an image of a sunset over mountains
    

Skills are stored in ~/.codex/skills/**/SKILL.md. Only files named exactly SKILL.md are recognized.
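
As a rough illustration, a new skill could be scaffolded as follows. The directory name, frontmatter fields, and instructions here are assumptions for the example; check the bundled skills under ~/.codex/skills/ for the canonical layout.

mkdir -p ~/.codex/skills/hello-skill
cat > ~/.codex/skills/hello-skill/SKILL.md <<'EOF'
---
name: hello-skill
description: Greet the user and report the current working directory.
---

When invoked, greet the user, run `pwd`, and summarize the result in one sentence.
EOF

After a restart, /skills should list hello-skill and $hello-skill would invoke it.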

Available Skills

claude-skill - Handoff task to Claude Code CLI

Non-interactive automation mode for hands-off task execution using Claude Code. Use when you want to leverage Claude Code to implement features or review code.

Key Features:

  • Multiple permission modes (default, acceptEdits, plan, bypassPermissions)
  • Autonomous execution without approval prompts
  • Streaming progress updates
  • Structured final summaries

Requirements: Claude Code CLI installed (npm install -g @anthropic-ai/claude-code)
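
The skill hands work to the Claude Code CLI in non-interactive mode. A rough sketch of that kind of invocation is shown below; the flags are standard Claude Code CLI options, but the exact command the skill runs is defined by its own instructions:

# -p / --print runs a single prompt non-interactively and exits;
# --permission-mode selects one of the modes listed above (e.g. plan, acceptEdits).
claude -p "Review the changes in src/ and summarize any issues" --permission-mode plan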

autonomous-skill - Long-running task automation

Execute complex, long-running tasks across multiple sessions using a dual-agent pattern (Initializer + Executor) with automatic session continuation.

Warning: workflows may pause when Codex requests permissions. Treat this as experimental; expect to babysit early runs and keep iterating on approvals/sandbox settings.

Key Features:

  • Dual-agent pattern (Initializer creates task list, Executor completes tasks)
  • Auto-continuation across sessions with progress tracking
  • Task isolation with per-task directories (.autonomous/<task-name>/)
  • Progress persistence via task_list.md and progress.md
  • Non-interactive mode execution

Usage:

# Start a new autonomous task
~/.codex/skills/autonomous-skill/scripts/run-session.sh "Build a REST API for todo app"

# Continue an existing task
~/.codex/skills/autonomous-skill/scripts/run-session.sh --task-name build-rest-api-todo --continue

# List all tasks
~/.codex/skills/autonomous-skill/scripts/run-session.sh --list

nanobanana-skill - Image generation with Gemini

Generate or edit images using Google Gemini API via nanobanana. Use when creating, generating, or editing images.

Key Features:

  • Image generation with various aspect ratios (square, portrait, landscape, ultra-wide)
  • Image editing capabilities
  • Multiple model options (gemini-3-pro-image-preview, gemini-2.5-flash-image)
  • Resolution options (1K, 2K, 4K)

Requirements:

  • GEMINI_API_KEY configured in ~/.nanobanana.env
  • Python 3 with the google-genai, Pillow, and python-dotenv packages
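
A minimal setup sketch for the requirements above (the API key value is a placeholder; the env-file format assumes the standard KEY=VALUE style read by python-dotenv):

# Store the Gemini API key where the skill expects it
echo 'GEMINI_API_KEY=your-api-key-here' > ~/.nanobanana.env
chmod 600 ~/.nanobanana.env

# Install the Python dependencies listed above
pip install google-genai Pillow python-dotenv
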
youtube-transcribe-skill - Extract YouTube subtitles

Extract subtitles/transcripts from a YouTube video URL and save as a local file.

Key Features:

  • Dual extraction methods: CLI (yt-dlp) and Browser Automation (fallback)
  • Automatic subtitle language selection (zh-Hans, zh-Hant, en)
  • Cookie handling for age-restricted content
  • Saves transcripts to local text files

Requirements:

  • yt-dlp (for CLI method), or
  • Browser automation MCP server (for fallback method)
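
For reference, the CLI method boils down to a subtitle-only yt-dlp download along these lines; the flags are standard yt-dlp options, while the exact invocation and language fallback order are defined by the skill's own instructions:

# Download subtitles only (no video), preferring zh-Hans, then zh-Hant, then en;
# add --cookies-from-browser chrome for age-restricted videos.
yt-dlp --skip-download --write-subs --write-auto-subs \
  --sub-langs "zh-Hans,zh-Hant,en" \
  "https://www.youtube.com/watch?v=VIDEO_ID"
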
kiro-skill - Interactive feature development

Interactive feature development workflow from idea to implementation. Creates requirements (EARS format), design documents, and implementation task lists.

Triggered by: "kiro" or references to .kiro/specs/ directory

Workflow:

  1. Requirements → Define what needs to be built (EARS format with user stories)
  2. Design → Determine how to build it (architecture, components, data models)
  3. Tasks → Create actionable implementation steps (test-driven, incremental)
  4. Execute → Implement tasks one at a time

Storage: Creates files in .kiro/specs/{feature-name}/ directory
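
So a feature named user-authentication would typically end up with a layout along these lines (the file names below are illustrative of the three spec phases; the skill's instructions define the exact names):

ls .kiro/specs/user-authentication/
# requirements.md   EARS-format requirements with user stories
# design.md         architecture, components, and data models
# tasks.md          incremental, test-driven implementation tasks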

spec-kit-skill - Constitution-based development

GitHub Spec-Kit integration for constitution-based spec-driven development.

Triggered by: "spec-kit", "speckit", "constitution", "specify", or references to .specify/ directory

Prerequisites:

# Install spec-kit CLI
uv tool install specify-cli --from git+https://github.com/github/spec-kit.git

# Initialize project
specify init . --ai codex

Related Projects

wshobson/agents (Highly Recommended)

Intelligent automation and multi-agent orchestration for Claude Code. The most comprehensive Claude Code plugin ecosystem, covering full-stack development scenarios with a three-tier model strategy balancing performance and cost.

25.6k stars · 2.8k forks · updated 3 days ago

ComposioHQ/awesome-claude-skills (Highly Recommended)

A curated list of awesome Claude Skills, resources, and tools for customizing Claude AI workflows. The most comprehensive Claude Skills resource list; connect-apps is a killer feature.

19.9k stars · 2.0k forks · updated 3 days ago

code-yeongyu/oh-my-opencode (Recommended)

The Best Agent Harness. Meet Sisyphus: The Batteries-Included Agent that codes like you. A powerful multi-agent coding tool, but note its OAuth limitations.

17.5k stars · 1.2k forks · updated 3 days ago

nextlevelbuilder/ui-ux-pro-max-skill (Highly Recommended)

An AI skill that provides design intelligence for building professional UI/UX across multiple platforms. Essential for designers; a comprehensive UI/UX knowledge base.

15.3k stars · 1.5k forks · updated 3 days ago

thedotmack/claude-mem (Recommended)

A Claude Code plugin that automatically captures everything Claude does during your coding sessions, compresses it with AI (using Claude's agent-sdk), and injects relevant context back into future sessions. A practical solution for Claude's memory issues.

14.0k stars · 914 forks · updated 3 days ago

OthmanAdi/planning-with-files (Highly Recommended)

Claude Code skill implementing Manus-style persistent markdown planning, the workflow pattern behind the $2B acquisition. Context engineering best practices; an open-source implementation of the Manus approach.

9.3k stars · 811 forks · updated 3 days ago