Architecture Overview

System Overview

ReCloud is a three-tier system that extends a standard coding assistant into an AI orchestration engine. It implements multi-step workflows through tool chaining, enabling task execution that goes beyond simple prompt-response interactions.

graph TB
    subgraph "User Interface Layer"
        UI[Desktop Application<br/>System Orchestrator]
    end

    subgraph "Logic Layer"
        MCP[MCP Server<br/>Tool Interface]
    end

    subgraph "Service Layer"
        API[REST API Server<br/>Workflow Engine]
    end

    subgraph "External Services"
        GEMINI[Google Gemini AI]
        FS[(File System)]
        CURSOR[Cursor IDE<br/>Client]
    end

    CURSOR --> UI
    UI --> MCP
    MCP --> API
    API --> GEMINI
    MCP --> FS
    API --> FS

Core Architecture Patterns

Dynamic Configuration System

The system assembles agent configuration at runtime from multiple policy sources, so agent behavior adapts to user preferences and task requirements without code changes.

Protocol Integration

Integration with IDEs such as Cursor through the Model Context Protocol (MCP), exposing tools to the editor while preserving context across requests.

Hybrid Execution Model

Intelligent execution routing across three processing targets:

  • Remote Execution: Cloud-based AI processing for complex tasks
  • Local Execution: Hardware-accelerated processing on user systems
  • Hybrid Execution: Optimized combination of remote and local processing

Declarative Configuration Framework

Configuration framework that enables declarative workflow specification without requiring code modifications.

System Components

1. Backend API Server

Purpose: REST API server for AI task processing and complex workflow execution.

Technology Stack:

  • Node.js + TypeScript + Express.js for robust API development
  • Google Generative AI integration for advanced AI processing
  • Comprehensive data validation and type safety
  • Template processing for dynamic content generation

Key Capabilities:

  • REST API with endpoints for task processing and workflow execution
  • Google Gemini AI integration with support for multiple models
  • Workflow execution engine for multi-step task orchestration
  • File processing and data transformation
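The "data validation and type safety" layer can be illustrated with a small request guard. The `TaskRequest` shape and field names here are assumptions for the sketch, not the actual ReCloud API contract:

```typescript
// Hypothetical request body for a task-processing endpoint (illustrative only).
interface TaskRequest {
  workflow: string;                 // workflow name, e.g. "analyzeSentiment"
  input: Record<string, unknown>;   // flow input handed to the workflow engine
}

// Type guard standing in for the server's schema validation.
function isTaskRequest(body: unknown): body is TaskRequest {
  if (typeof body !== "object" || body === null) return false;
  const b = body as Record<string, unknown>;
  return (
    typeof b.workflow === "string" &&
    b.workflow.length > 0 &&
    typeof b.input === "object" &&
    b.input !== null
  );
}
```

Validated requests narrow to `TaskRequest`, so downstream handlers get compile-time field access instead of untyped JSON.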

2. MCP Logic Server

Purpose: MCP server exposing a unified tool interface and routing tool calls between local and remote execution.

Technology Stack:

  • Node.js + TypeScript for type-safe development
  • Model Context Protocol SDK integration
  • Dynamic configuration engine
  • Intelligent execution routing

Innovative Features:

  • Unified Tool Interface: Single tool with comprehensive subcommand system
  • Dynamic Configuration: Runtime directive creation from multiple policy sources
  • Intelligent Routing: Advanced hybrid execution with optimal resource utilization
  • Context Preservation: Complete state management across complex workflows
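The unified tool interface can be sketched as a single entry point that dispatches on a subcommand. The subcommand names and handler signatures below are illustrative assumptions, not the actual ReCloud tool surface:

```typescript
// Illustrative subcommand set; the real tool's operations may differ.
type Subcommand = "run_workflow" | "list_workflows" | "get_status";

type Handler = (args: Record<string, unknown>) => string;

const handlers: Record<Subcommand, Handler> = {
  run_workflow: (args) => `running ${String(args.name)}`,
  list_workflows: () => "analyzeSentiment",
  get_status: () => "ok",
};

// Single tool entry point: the subcommand selects the handler, so the IDE
// sees one registered tool instead of one tool per operation.
function unifiedTool(subcommand: string, args: Record<string, unknown> = {}): string {
  const handler = handlers[subcommand as Subcommand];
  if (!handler) throw new Error(`unknown subcommand: ${subcommand}`);
  return handler(args);
}
```

Keeping one tool with subcommands reduces the surface the IDE has to enumerate and lets new operations ship without re-registering tools.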

3. Desktop Orchestrator

Purpose: Cross-platform desktop application providing complete system orchestration and user experience management.

Technology Stack:

  • Electron framework for native cross-platform desktop applications
  • Embedded server distribution for seamless operation
  • Persistent configuration management

Core Functions:

  • Complete System Orchestration: Full server lifecycle and resource management
  • Advanced Configuration: Sophisticated agent behavior profile management
  • Seamless IDE Integration: Automatic IDE configuration
  • Real-time Monitoring: Comprehensive health checks and status reporting
  • Intelligent Tool Management: Project-level tool activation with persistence

Data Flow Architecture

Primary Execution Flow

sequenceDiagram
    participant Cursor as Cursor IDE
    participant TrayApp
    participant MCP
    participant API
    participant Gemini

    Cursor->>TrayApp: Tool request
    TrayApp->>MCP: Execute command
    MCP->>MCP: Apply configuration
    MCP->>API: Remote processing
    API->>Gemini: AI processing
    Gemini-->>API: AI response
    API-->>MCP: Formatted result
    MCP-->>Cursor: Protocol response

Configuration Assembly Process

sequenceDiagram
    participant TrayApp
    participant Environment
    participant ConfigBuilder
    participant MCP

    TrayApp->>Environment: Set configuration
    Environment->>ConfigBuilder: Read settings
    ConfigBuilder->>ConfigBuilder: Build configuration
    ConfigBuilder->>MCP: Apply configuration
    MCP->>MCP: Use configuration
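The assembly step above can be sketched as an environment-driven builder: the tray app sets environment variables, and the builder reads them into a typed config. The variable names (`RECLOUD_*`) and defaults are assumptions for illustration:

```typescript
// Minimal sketch of the ConfigBuilder step; settings names are hypothetical.
interface AgentConfig {
  executionTarget: "remote" | "local" | "hybrid";
  model: string;
}

function buildConfig(env: Record<string, string | undefined>): AgentConfig {
  const target = env.RECLOUD_EXECUTION_TARGET ?? "remote";
  if (target !== "remote" && target !== "local" && target !== "hybrid") {
    throw new Error(`invalid execution target: ${target}`);
  }
  return {
    executionTarget: target,
    model: env.RECLOUD_MODEL ?? "gemini-2.5-flash",
  };
}
```

Passing `env` in explicitly (rather than reading `process.env` inside) keeps the builder testable and makes the TrayApp → Environment → ConfigBuilder hand-off visible in the code.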

Workflow Engine Architecture

Declarative Configuration System

The workflow engine uses a configuration framework that enables declarative workflow specification:

name: analyzeSentiment
execution_target: remote
model: "gemini-2.5-flash"
parameters:
  - name: "text"
    type: "string"
    required: true
step_types:
  - name: "ai_prompt"
    handler: "ai_prompt"
    prompt_template: "Analyze sentiment: {{flow_input.text}}"
flow:
  - step: "analyze_sentiment"
    type: "ai_prompt"
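The `prompt_template` placeholders such as `{{flow_input.text}}` can be rendered with a small substitution pass; this is a sketch under the assumption of simple dotted-path lookup, not the engine's actual template implementation:

```typescript
// Minimal {{path.to.value}} substitution for prompt templates.
function renderTemplate(template: string, context: Record<string, unknown>): string {
  return template.replace(/\{\{([\w.]+)\}\}/g, (_match, path: string) => {
    // Walk the dotted path through the context object.
    const value = path.split(".").reduce<unknown>(
      (obj, key) => (obj as Record<string, unknown> | undefined)?.[key],
      context,
    );
    return value === undefined ? "" : String(value);
  });
}
```

For example, `renderTemplate("Analyze sentiment: {{flow_input.text}}", { flow_input: { text: "great!" } })` produces the prompt `"Analyze sentiment: great!"`.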

Step Handler System

The workflow engine implements a modular handler system:

  • AI-powered text processing with templating
  • External API integrations with authentication
  • File system operations with security validation
  • Cloud execution environments
  • Internal service orchestration
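The modular handler system can be sketched as a registry keyed by step type. The `"ai_prompt"` name comes from the YAML example earlier; the registry API itself is an assumption:

```typescript
// Sketch of a step-handler registry; a real handler would be async and
// call out to the backend API or file system.
type StepHandler = (input: Record<string, unknown>) => string;

const stepHandlers = new Map<string, StepHandler>();

function registerHandler(name: string, handler: StepHandler): void {
  stepHandlers.set(name, handler);
}

// The workflow engine resolves each flow step's "type" to a handler.
function runStep(type: string, input: Record<string, unknown>): string {
  const handler = stepHandlers.get(type);
  if (!handler) throw new Error(`no handler for step type: ${type}`);
  return handler(input);
}

// Stub "ai_prompt" handler for illustration.
registerHandler("ai_prompt", (input) => `prompt: ${String(input.prompt)}`);
```

New step types (API calls, file operations, cloud execution) plug in by registering a handler, which is what keeps the engine declarative: workflows reference handlers by name rather than embedding code.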

Execution Routing

Intelligent routing between execution environments:

flowchart TD
    A[Receive command] --> B[Check configuration]
    B --> C{Execution target?}

    C -->|remote| D[Remote API call]
    C -->|local| E[Local handler]
    C -->|hybrid| E

    D --> F[Return result]
    E --> G[Execute locally]
    G --> F
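The routing decision in the flowchart reduces to a small dispatch function. This sketch mirrors the diagram, where `remote` calls the backend API and both `local` and `hybrid` fall through to the local handler; the function names are illustrative:

```typescript
type ExecutionTarget = "remote" | "local" | "hybrid";

// Route a command to the remote API or the local handler based on the
// configured execution target, matching the flowchart's branches.
function route(
  target: ExecutionTarget,
  remoteCall: () => string,
  localHandler: () => string,
): string {
  return target === "remote" ? remoteCall() : localHandler();
}
```

Injecting the two execution paths as callbacks keeps the routing decision itself trivial to test, independent of any network or handler code.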

System Dependencies

Component Relationships

Desktop Application
    └── Embedded MCP Server
        └── Tool routing system

MCP Server
    └── Backend API (HTTP orchestration)
        └── Google Gemini AI (processing)

Backend API
    └── Google Gemini AI (processing)
    └── File System (persistence)

External Dependencies

  • Google Gemini AI: Advanced AI processing and reasoning
  • Cursor IDE: Primary IDE integration
  • File System: Results storage and caching infrastructure

Performance & Scalability

Optimization Features

  • Multi-level Caching: Intelligent cache hierarchies for configuration and execution data
  • Resource Management: Efficient memory and processing resource utilization
  • Concurrent Processing: Parallel execution support for multi-step workflows
  • Load Distribution: Intelligent routing across available processing resources
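One level of such a cache hierarchy can be sketched as an in-memory store with time-to-live expiry; the class and its API are assumptions for illustration, not ReCloud's actual cache:

```typescript
// In-memory TTL cache for configuration and execution data. A real
// multi-level hierarchy would layer this over a persistent store.
class TtlCache<V> {
  private store = new Map<string, { value: V; expires: number }>();

  constructor(private ttlMs: number) {}

  set(key: string, value: V): void {
    this.store.set(key, { value, expires: Date.now() + this.ttlMs });
  }

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expires) {
      this.store.delete(key);   // lazily evict expired entries
      return undefined;
    }
    return entry.value;
  }
}
```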

Monitoring & Observability

  • Execution Metrics: Comprehensive tracking of workflow performance and latency
  • Health Checks: System status monitoring and automated recovery
  • Logging Integration: Structured logging with configurable verbosity levels
  • Performance Analytics: Detailed insights into system utilization and bottlenecks

ReCloud provides the architectural foundation for AI orchestration, enabling complex multi-step workflows with consistent performance and reliability.
