Vibe Coding¶

This work is licensed under a Creative Commons Attribution 4.0 International License.
"Vibe coding" refers to using an LLM to generate and edit code directly within your IDE (e.g., VS Code). This approach allows for a more fluid and interactive coding experience, where the LLM acts as a collaborative partner.
Who coined the term 'vibe coding'?
The term "vibe coding" originated with a tweet by Andrej Karpathy in February 2025:

> There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. I just mass chat, mass accept, skip to errors,…
>
> — Andrej Karpathy (@karpathy) February 2, 2025
Vibe coding hands an AI agent meaningful authority over your machine. Before turning one loose on a project that matters, read the Coding safely with AI section at the bottom of this page — it covers code review, local execution risk, licensing, privacy, and accessibility.
Available Platforms¶
Each platform listing below is tagged with emoji indicating its attributes:

- VS Code
- GitHub Codespaces
- Apple OS
- Windows
- Linux
- Command Line Interface
- Open Source
- Licensed
- API based
Desktop IDEs and Standalone Editors¶
Claude Desktop¶
An easy-to-install desktop application that connects to Anthropic's Claude API and supports connections to MCP servers.
VS Code¶
Microsoft's popular open-source code editor with extensive extension ecosystem, including numerous AI coding assistants (see VS Code Extensions section below).
Cursor¶
A popular standalone fork of VS Code, focused on integrating new models with stability and offering a flat-fee pricing model.
Positron¶
A next-generation data science IDE built on VS Code, developed by Posit (formerly RStudio), with native support for Python, R, and AI-assisted coding.
Firebase Studio¶
Firebase's integrated development environment for building and deploying Firebase apps with AI-powered code generation and assistance.
Google Antigravity¶
Google's experimental AI-powered standalone IDE with advanced Gemini integration for next-generation development workflows.
Windsurf¶
Standalone editor offering similar agentic and inline features with tiered pricing and a "just works" usability orientation.
VS Code Extensions¶
Claude Code¶
Official Anthropic VS Code extension providing AI pair programming with Claude models, featuring multi-file editing, debugging, and terminal integration.
Gemini CLI Companion¶
Google's VS Code extension powered by Gemini models, offering code completion, generation, and chat assistance with Google Cloud integration.
OpenAI Codex¶
OpenAI's Codex coding agent, available as a VS Code extension for AI-assisted code generation and editing.
GitHub Copilot¶
Integrated with VS Code and GitHub Codespaces, Copilot provides agentic coding with tiered pricing, though performance fluctuates periodically.
Cline¶
VS Code extension that's open-source and model-agnostic, pioneering features like "bring your own model" (BYOM) and operating on a per-request billing structure.
Roo Code¶
VS Code extension derived from Cline, prioritizes rapid feature development and customization, serving users interested in experimental capabilities.
Command Line Interface (CLI) Tools¶
Aider¶
A popular command-line tool for AI-driven coding, often used with local or remote LLMs.
Claude Code CLI¶
Official Anthropic command-line interface for Claude, enabling AI-assisted development directly from the terminal with support for MCP servers.
OpenAI Codex CLI¶
Open-source command-line interface for OpenAI's Codex models, bringing agentic coding assistance to the terminal.
Google Gemini CLI¶
Google's command-line interface for Gemini models, providing AI coding assistance and integration with Google Cloud services.
OpenCode.ai¶
Open-source CLI tool supporting multiple AI models for code generation, analysis, and refactoring from the command line.
Browser-based Vibe Coding¶
Claude Code¶
Browser-based version of Claude Code providing AI pair programming capabilities through the web, featuring multi-file editing, code generation, and debugging without requiring a desktop installation.
ChatGPT¶
OpenAI's ChatGPT runs a sandboxed Python environment for executing code, analyzing data, and generating visualizations directly in the browser. Available on Plus and Team tiers.
Google Gemini¶
Google Gemini's web interface features code execution capabilities, allowing you to run Python code and see results inline with AI-generated explanations.
OpenWebUI¶
Self-hostable, open-source web interface supporting multiple LLM providers (OpenAI, Anthropic, Ollama) with built-in code execution, function calling, and customizable workflows.
Coding safely with AI¶
Vibe coding hands an AI agent real authority over your machine — your files, your network, your shell, sometimes your credentials. Most of the safety questions you'll face fall into six buckets: code review, local execution risk, bias and licensing, privacy, accessibility, and environmental footprint. Work through them in order before pointing an agent at code that matters.
Review every line¶
Never trust generated code blindly.
- Always review for correctness, efficiency, and maintainability.
- Test thoroughly with unit tests, integration tests, and edge cases.
- Check for common security flaws: SQL injection, XSS, weak authentication, secret leakage.
- Verify the code matches your project's coding standards and existing patterns.
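SQL injection is the classic example of a flaw that generated code can reintroduce. As a minimal sketch (using Python's built-in `sqlite3`; table and column names are illustrative), compare a string-interpolated query against a parameterized one:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # VULNERABLE: string interpolation lets crafted input rewrite the query
    return conn.execute(f"SELECT id FROM users WHERE name = '{username}'").fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver treats the value as data, not SQL
    return conn.execute("SELECT id FROM users WHERE name = ?", (username,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "x' OR '1'='1"
# The unsafe version returns every row for this payload; the safe version returns none.
```

If an assistant emits the first pattern, ask it to rewrite with parameterized queries before accepting.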
Understand before using. If you don't understand a generated block, ask the AI to explain it or research the libraries it pulls in. Will you be able to debug this code in six months?
Refine iteratively. Start with a basic implementation, test it, then refine. Use the AI to help debug and improve, not just to generate-and-walk-away.
Local execution risks¶
What an agent on your machine can actually do
Desktop apps like Claude Desktop and ChatGPT Desktop, plus IDE-integrated agents like Cursor, Cline, and Claude Code, can run code on your laptop. Once you grant that capability, the agent can:
- File system access: read, modify, and delete files anywhere your user has permission
- Network access: make API calls and external connections from your machine
- Terminal access: execute arbitrary shell commands
- Environment variables: read sensitive credentials your shell exposes
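To make the environment-variable risk concrete, here is a hedged sketch of how little code it takes for any single approved command to harvest credential-looking variables from your shell:

```python
import os

# Collect every environment variable whose name looks like a credential.
# An agent with network access could then send `leaked` anywhere.
leaked = {
    k: v for k, v in os.environ.items()
    if any(s in k.upper() for s in ("KEY", "TOKEN", "SECRET", "PASSWORD"))
}
print(sorted(leaked))  # names only; values stay in the dict
```

This is why "review commands before approving them" below is not optional.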
Practices that keep this manageable:
- Review commands before approving them — most tools prompt; don't auto-approve everything.
- Work inside project-specific virtual environments rather than at user-root.
- Never store secrets in code. Use environment variables and secret managers.
- Be cautious with `sudo` or administrator privileges; agents rarely need them.
- Monitor agent actions actively when you're learning a new tool.
- Follow your institution's security and privacy policies.
- Consider sandboxed development environments (containers, VMs) for sensitive work — see ai_sandboxes.md.
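The "no secrets in code" rule above can be sketched as follows; the variable name `SERVICE_API_KEY` is illustrative, not a real service's convention:

```python
import os

# Don't do this: a hard-coded secret ends up in git history and in
# any code you paste into a prompt.
# API_KEY = "sk-live-abc123..."

# Do this instead: read the secret from the environment at runtime,
# and fail loudly if it is missing.
def get_api_key(var="SERVICE_API_KEY"):  # hypothetical variable name
    key = os.environ.get(var)
    if key is None:
        raise RuntimeError(f"Set the {var} environment variable")
    return key
```

For production systems, a dedicated secret manager is preferable to raw environment variables, but this baseline already keeps secrets out of anything an agent reads or generates.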
Malicious code lives on the internet, and your agent might install it
LLMs occasionally hallucinate package names that an attacker can register on PyPI or npm. If your agent installs dependencies without review, it can pull in malicious code from a "false package." Read commands and requirements.txt / package.json diffs before approving them. (Vibe Check: False Packages — Hackaday)
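One lightweight defense is to diff any new dependency names against a project allowlist before installing. This is a hypothetical helper, not a real tool; `KNOWN_GOOD` stands in for whatever vetted-dependency list your project maintains:

```python
# Flag packages in a requirements file that are not on a project allowlist,
# so a hallucinated or typosquatted name gets caught before install.
KNOWN_GOOD = {"requests", "numpy", "pandas"}  # your vetted dependencies

def unknown_packages(requirements_text):
    names = []
    for line in requirements_text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and blanks
        if not line:
            continue
        # Take the bare name before any version specifier.
        name = line.split("==")[0].split(">=")[0].split("<=")[0].strip()
        names.append(name)
    return [n for n in names if n.lower() not in KNOWN_GOOD]

reqs = "requests==2.31.0\nnumpy\nrequestz  # one typo away from malware\n"
print(unknown_packages(reqs))  # flags 'requestz' for manual review
```

Real requirements syntax has more forms (extras, URLs, markers) than this sketch parses, but the principle holds: unrecognized names get a human look before `pip install` runs.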
Institutional policies. Universities and employers often restrict which AI tools may run against work code or sensitive data. Check with your IT or security team about approved tools, data classifications, code-review requirements for AI-generated code, and network access policies.
Bias, licensing, and intellectual property¶
AI coding models are trained on public code repositories. That training data carries baggage:
- Biased implementations — non-inclusive variable names, accessibility blind spots
- Licensed code that may conflict with your project's license
- Outdated patterns or deprecated APIs
- Historical security flaws that the model has learned to reproduce
Practices that limit the damage:
- Review generated code for inclusive language and accessibility — see bias.md.
- Check license compatibility for any libraries the AI suggests.
- Validate that patterns are current and recommended, not historical.
- Don't assume AI-generated code is "best practice" — it's "common practice."
Intellectual property. Most AI providers claim no copyright on generated output, but generated code can inadvertently replicate licensed code from training data. Your organization may have its own policies on AI-generated code ownership and disclosure. Document when and how you used AI tools during development. See legal.md for institutional and academic considerations.
Privacy and data handling¶
When you use a cloud AI agent, the following typically leaves your machine:
- Your prompts and code snippets
- File contents (with MCP, when explicitly attached, or when the agent reads files autonomously)
- Error messages and terminal output
- Project structure and metadata
Privacy best practices:
- Don't share sensitive data, credentials, or personal information in prompts.
- Review your organization's data classification policies before connecting agents to sensitive directories.
- Use local or self-hosted models for highly sensitive code when possible — Cline and Roo Code support BYOM via Ollama; Aider and OpenCode.ai work with local LLMs. See ollama.md.
- Be aware of each service's data retention policy.
- Consider anonymizing or redacting data before sharing with AI tools.
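Redaction before sharing can be as simple as a pass of regular expressions over the prompt. The patterns below are illustrative starting points (emails, OpenAI-style `sk-` keys, US SSNs), not a complete PII scrubber:

```python
import re

# Illustrative patterns only; extend for your own data classes.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"sk-[A-Za-z0-9]{8,}"), "<API_KEY>"),   # OpenAI-style keys
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<SSN>"),
]

def redact(text):
    # Replace each sensitive match with a placeholder label.
    for pattern, label in PATTERNS:
        text = pattern.sub(label, text)
    return text

prompt = "Debug this: user bob@example.com got 401 with key sk-abcdef123456"
print(redact(prompt))
```

Run sensitive prompts through a function like this before they leave your machine; regex redaction is best-effort, so treat it as a supplement to data-classification policy, not a replacement.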
Accessibility and inclusive development¶
Use AI to improve accessibility.
- Ask for WCAG compliance review on UI code.
- Generate accessible alternatives for visual content (alt text, ARIA labels, descriptive captions).
- Check color contrast and screen-reader compatibility.
- Implement keyboard navigation as a default, not an afterthought.
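Color contrast is one of the few accessibility checks you can automate directly. A minimal sketch of the WCAG 2.x relative-luminance and contrast-ratio formulas:

```python
def _channel(c8):
    # Linearize one sRGB channel (0-255) per the WCAG 2.x formula.
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    r, g, b = (_channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white: the maximum ratio of 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
# WCAG AA requires >= 4.5:1 for normal text, >= 3:1 for large text.
```

Asking an AI assistant to "check contrast" is only as good as your verification; a function like this lets you confirm its claims numerically.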
Avoid perpetuating bias.
- Review generated identifiers and comments for inclusive language.
- Ask the AI to suggest alternatives if you spot problematic patterns.
- Consider diverse user needs when prompting for UI/UX implementations.
Environmental footprint¶
LLM inference is energy-intensive. Don't use a frontier model when a smaller one will do, cache results when you can, and avoid agentic loops that fire off speculative requests. Cumulative compute is the cost.
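Caching repeated prompts is the easiest of these wins. As a sketch, memoizing a model-call wrapper means identical prompts never hit the API twice (`call_model` here is a stand-in for a real, expensive API call):

```python
from functools import lru_cache

calls = {"count": 0}  # track how often we actually hit the "API"

@lru_cache(maxsize=256)
def call_model(prompt):
    # Stand-in for a real LLM API request.
    calls["count"] += 1
    return f"response to: {prompt}"

call_model("explain list comprehensions")
call_model("explain list comprehensions")  # served from cache, no second call
print(calls["count"])  # 1
```

For production use, a persistent cache keyed on model plus prompt (rather than in-process `lru_cache`) extends the savings across sessions.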
Model Context Protocol (MCP)¶
Model Context Protocol (MCP) is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools. This ensures interoperability and allows developers to more easily swap out models or context sources without re-engineering their entire application.
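Under the hood, MCP messages are JSON-RPC 2.0. As a hedged sketch of the wire format (the tool name and schema below are illustrative, not from a real server), a client asking a server which tools it offers exchanges messages shaped roughly like this:

```python
import json

# Client -> server: list the tools this server exposes.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Server -> client: a catalog of tools, each with a JSON Schema
# describing its expected arguments.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [{
            "name": "read_file",  # illustrative tool
            "description": "Read a file from the workspace",
            "inputSchema": {
                "type": "object",
                "properties": {"path": {"type": "string"}},
                "required": ["path"],
            },
        }]
    },
}
print(json.dumps(request))
```

Because every server speaks this same shape, a host application like Claude Desktop or Cursor can discover and invoke tools from any MCP server without custom integration code.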