AI Agent Embedded in NachUI

I built an AI agent as part of the NachUI ecosystem, implemented as a dedicated package within a monorepo. It lets developers interact with the design system through natural language while accessing real documentation and component source code in real time.

Problem

Modern UI libraries and design systems often require developers to manually navigate documentation, understand APIs, and inspect source code to use components effectively.

This creates friction, especially when:

  • exploring unfamiliar components
  • understanding props and variants
  • needing quick access to real implementation details

Solution

I developed an AI agent embedded within NachUI that allows developers to query the system conversationally and receive accurate, context-based answers.

The agent can:

  • retrieve relevant documentation
  • access real component source code
  • explain usage and implementation details

All grounded strictly in the actual NachUI codebase and documentation.

How It Works

  • The agent is built using a tool-based architecture
  • User queries are processed by a language model (Gemini)
  • The agent dynamically decides when to call tools
  • Tools retrieve:
    • documentation from .velite/docs.json
    • component source code directly from the filesystem
  • Results are validated and returned as structured responses
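The flow above can be sketched in TypeScript. Everything here is illustrative, not the actual @repo/ai implementation: the doc-entry shape, function names, and the reduced tool loop are assumptions. In the real agent the language model (via the Vercel AI SDK) decides when to call a tool; in this self-contained sketch the decision sequence is passed in directly, and the docs array stands in for the parsed contents of .velite/docs.json:

```typescript
// Hypothetical shape of one entry in .velite/docs.json.
interface DocEntry {
  slug: string;
  title: string;
  body: string;
}

// Tool the agent can call: look up documentation by component name.
// The real package would read .velite/docs.json from disk; here the
// parsed docs array is passed in to keep the sketch self-contained.
function getComponentDocs(docs: DocEntry[], query: string): DocEntry | null {
  const q = query.toLowerCase();
  return (
    docs.find(
      (d) =>
        d.slug.toLowerCase().includes(q) || d.title.toLowerCase().includes(q),
    ) ?? null
  );
}

// Simplified agent step: at each turn the model either answers the
// user or requests a tool call (in reality this comes from the LLM).
type ModelStep =
  | { type: 'answer'; text: string }
  | { type: 'tool-call'; tool: 'getComponentDocs'; query: string };

// Minimal tool loop: execute tool calls, collect their results, and
// stop as soon as the model produces a final answer.
function runToolLoop(steps: ModelStep[], docs: DocEntry[]): string {
  const toolResults: string[] = [];
  for (const step of steps) {
    if (step.type === 'answer') return step.text;
    const hit = getComponentDocs(docs, step.query);
    toolResults.push(hit ? hit.body : 'not found');
  }
  return toolResults.join('\n');
}
```

The key design point mirrored here is that tool results are grounded in real data (the docs file, the filesystem) rather than the model's prior knowledge.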

Tech Stack

  • Core: TypeScript (monorepo package: @repo/ai)
  • AI SDK: Vercel AI SDK
  • Model: Google Gemini
  • Validation: Zod
  • Architecture: Tool-based agent (ToolLoopAgent)
  • Monorepo: Modular architecture with isolated AI package
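A possible monorepo layout for this stack (illustrative only; directory names other than @repo/ai are assumptions):

```
packages/
  ai/          # @repo/ai — agent core, tools, provider adapters
apps/
  docs/        # documentation site, consumes @repo/ai
  playground/  # interactive playground, consumes @repo/ai
```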

Technical Decisions

  • Tool-based agent architecture: allows the model to access real data (docs + code) instead of relying only on static prompts
  • Zod for structured outputs: ensures type-safe and predictable responses for UI rendering and integrations
  • Filesystem access for components: guarantees accuracy by reading real source code instead of approximations
  • Custom system prompt: enforces strict rules to prevent hallucinations and maintain consistency with NachUI philosophy
  • Modular provider architecture: supports multiple AI providers (Google, OpenAI, Groq, etc.) without changing core logic
  • Low-temperature configuration: improves determinism and reduces incorrect or inconsistent outputs
  • Monorepo package separation (@repo/ai): isolates the agent logic, enables reuse across multiple applications (docs, playground, etc.), and allows the system to scale without tight coupling to the frontend
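Two of the decisions above, the modular provider architecture and the low-temperature configuration, can be sketched as a small factory. This is a hypothetical shape, not the real @repo/ai code, and the model identifiers are illustrative defaults:

```typescript
// Providers the agent can be backed by; the core logic never
// depends on a concrete vendor, only on this shared config shape.
type ProviderName = 'google' | 'openai' | 'groq';

interface AgentModelConfig {
  provider: ProviderName;
  model: string;
  temperature: number; // kept low for deterministic, grounded answers
}

// Illustrative default model per provider (assumed names).
const DEFAULT_MODELS: Record<ProviderName, string> = {
  google: 'gemini-1.5-flash',
  openai: 'gpt-4o-mini',
  groq: 'llama-3.1-70b',
};

// Swapping providers means changing one argument, not the agent core.
function createAgentConfig(
  provider: ProviderName,
  overrides: Partial<Omit<AgentModelConfig, 'provider'>> = {},
): AgentModelConfig {
  return {
    provider,
    model: DEFAULT_MODELS[provider],
    temperature: 0.1, // low-temperature default
    ...overrides,
  };
}
```

With this shape, adding a new provider is an entry in the registry plus an adapter, leaving the tool loop and system prompt untouched.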

Challenges & Learnings

  • Preventing hallucinations when working with LLMs
  • Designing reliable tool-calling flows
  • Balancing flexibility and strictness in the system prompt
  • Structuring documentation for efficient retrieval
  • Handling multi-language responses consistently