
PMCR-O Quickstart: From Zero to Autonomous AI Agent in 30 Minutes

⏱️ 30 Minutes

Zero to autonomous AI agent. Complete .NET 10 setup with Aspire orchestration, Ollama LLM, and PostgreSQL knowledge vault.

Welcome to the PMCR-O framework. Over the next 30 minutes, you'll build a microservices-based autonomous AI system with four main parts:

  • gRPC Services - Individual agents (Planner, Maker, Checker, Reflector, Orchestrator)
  • REST API Gateway - HTTP endpoints for external communication
  • Knowledge Vault - RAG system with pgvector for memory
  • Workflow Orchestration - Microsoft Agents AI for complex processes

This structure mirrors the reference PMCR-O source architecture.

Prerequisites

✅ What You Need

  • .NET 10 SDK - Download from dotnet.microsoft.com
  • Ollama - Local LLM server (ollama.ai)
  • PostgreSQL - Database with pgvector extension
  • Visual Studio Code or your preferred IDE

💡 Pro Tip: Use Docker for PostgreSQL if you don't want to install it locally.

Step 1: Create the Solution (5 minutes)

Step 1 of 7

Create the microservices solution structure:

Bash
# Create solution directory
mkdir PmcroQuickstart
cd PmcroQuickstart

# Create solution
dotnet new sln -n PmcroQuickstart

# Core projects
# AppHost - use the Aspire template if available (dotnet new aspire-apphost);
# otherwise a console app plus the Aspire.Hosting.AppHost package works
dotnet new console -n PmcroQuickstart.AppHost
dotnet new classlib -n PmcroQuickstart.ServiceDefaults
dotnet new classlib -n PmcroQuickstart.Shared

# gRPC Services (individual microservices)
dotnet new webapi -n PmcroQuickstart.PlannerService
dotnet new webapi -n PmcroQuickstart.MakerService
dotnet new webapi -n PmcroQuickstart.CheckerService
dotnet new webapi -n PmcroQuickstart.ReflectorService
dotnet new webapi -n PmcroQuickstart.OrchestratorService

# REST API Gateway
dotnet new webapi -n PmcroQuickstart.OrchestrationApi

# Knowledge Service (RAG)
dotnet new webapi -n PmcroQuickstart.KnowledgeService

# Add all projects to the solution
dotnet sln add PmcroQuickstart.AppHost/PmcroQuickstart.AppHost.csproj
dotnet sln add PmcroQuickstart.ServiceDefaults/PmcroQuickstart.ServiceDefaults.csproj
dotnet sln add PmcroQuickstart.Shared/PmcroQuickstart.Shared.csproj
dotnet sln add PmcroQuickstart.PlannerService/PmcroQuickstart.PlannerService.csproj
dotnet sln add PmcroQuickstart.MakerService/PmcroQuickstart.MakerService.csproj
dotnet sln add PmcroQuickstart.CheckerService/PmcroQuickstart.CheckerService.csproj
dotnet sln add PmcroQuickstart.ReflectorService/PmcroQuickstart.ReflectorService.csproj
dotnet sln add PmcroQuickstart.OrchestratorService/PmcroQuickstart.OrchestratorService.csproj
dotnet sln add PmcroQuickstart.OrchestrationApi/PmcroQuickstart.OrchestrationApi.csproj
dotnet sln add PmcroQuickstart.KnowledgeService/PmcroQuickstart.KnowledgeService.csproj

# Alternative (Linux/macOS): for proj in */*.csproj; do dotnet sln add "$proj"; done
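The code in the later steps relies on NuGet packages that the plain templates don't include. A minimal install script is sketched below; the package names are real, but pin versions compatible with your .NET 10 SDK (check nuget.org), and repeat the gRPC/AI lines for the Maker, Checker, Reflector, and Orchestrator services when you implement them:

Bash
# ServiceDefaults: telemetry, service discovery, HTTP resilience
dotnet add PmcroQuickstart.ServiceDefaults package OpenTelemetry.Extensions.Hosting
dotnet add PmcroQuickstart.ServiceDefaults package OpenTelemetry.Instrumentation.AspNetCore
dotnet add PmcroQuickstart.ServiceDefaults package OpenTelemetry.Instrumentation.Http
dotnet add PmcroQuickstart.ServiceDefaults package OpenTelemetry.Instrumentation.Runtime
dotnet add PmcroQuickstart.ServiceDefaults package Microsoft.Extensions.ServiceDiscovery
dotnet add PmcroQuickstart.ServiceDefaults package Microsoft.Extensions.Http.Resilience

# AppHost: Aspire orchestration + hosting integrations
dotnet add PmcroQuickstart.AppHost package Aspire.Hosting.AppHost
dotnet add PmcroQuickstart.AppHost package Aspire.Hosting.PostgreSQL
dotnet add PmcroQuickstart.AppHost package CommunityToolkit.Aspire.Hosting.Ollama

# Knowledge service: EF Core + pgvector + Ollama embeddings
dotnet add PmcroQuickstart.KnowledgeService package Npgsql.EntityFrameworkCore.PostgreSQL
dotnet add PmcroQuickstart.KnowledgeService package Pgvector.EntityFrameworkCore
dotnet add PmcroQuickstart.KnowledgeService package Microsoft.Extensions.AI
dotnet add PmcroQuickstart.KnowledgeService package OllamaSharp

# Planner service: gRPC server + Ollama chat client
dotnet add PmcroQuickstart.PlannerService package Grpc.AspNetCore
dotnet add PmcroQuickstart.PlannerService package Microsoft.Extensions.AI
dotnet add PmcroQuickstart.PlannerService package OllamaSharp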

Step 2: Configure ServiceDefaults (5 minutes)

Step 2 of 7

The ServiceDefaults project provides shared configuration for OpenTelemetry, health checks, and HTTP resilience. Replace the contents of ServiceDefaults/Extensions.cs:

C#
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Diagnostics.HealthChecks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Diagnostics.HealthChecks;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.ServiceDiscovery;
using OpenTelemetry;
using OpenTelemetry.Metrics;
using OpenTelemetry.Trace;

namespace Microsoft.Extensions.Hosting
{
    public static class Extensions
    {
        public static TBuilder AddServiceDefaults<TBuilder>(this TBuilder builder)
            where TBuilder : IHostApplicationBuilder
        {
            builder.ConfigureOpenTelemetry();
            builder.AddDefaultHealthChecks();
            builder.Services.AddServiceDiscovery();

            builder.Services.ConfigureHttpClientDefaults(http =>
            {
                http.AddStandardResilienceHandler();
                http.AddServiceDiscovery();
            });

            return builder;
        }

        public static TBuilder ConfigureOpenTelemetry<TBuilder>(this TBuilder builder)
            where TBuilder : IHostApplicationBuilder
        {
            builder.Logging.AddOpenTelemetry(logging =>
            {
                logging.IncludeFormattedMessage = true;
                logging.IncludeScopes = true;
            });

            builder.Services.AddOpenTelemetry()
                .WithMetrics(metrics =>
                {
                    metrics.AddAspNetCoreInstrumentation()
                        .AddHttpClientInstrumentation()
                        .AddRuntimeInstrumentation();
                })
                .WithTracing(tracing =>
                {
                    tracing.AddSource(builder.Environment.ApplicationName)
                        .AddAspNetCoreInstrumentation()
                        .AddHttpClientInstrumentation();
                });

            return builder;
        }

        public static TBuilder AddDefaultHealthChecks<TBuilder>(this TBuilder builder)
            where TBuilder : IHostApplicationBuilder
        {
            builder.Services.AddHealthChecks()
                .AddCheck("self", () => HealthCheckResult.Healthy(), ["live"]);

            return builder;
        }

        public static WebApplication MapDefaultEndpoints(this WebApplication app)
        {
            if (app.Environment.IsDevelopment())
            {
                app.MapHealthChecks("/health");
                app.MapHealthChecks("/alive", new HealthCheckOptions
                {
                    Predicate = r => r.Tags.Contains("live")
                });
            }

            return app;
        }
    }
}
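Once a service wires in AddServiceDefaults() and MapDefaultEndpoints(), you can probe the health endpoints while it runs in Development. The port below is a placeholder assumption; use whatever URL Kestrel prints at startup:

Bash
# Runs all registered health checks; responds "Healthy" when the service is up
curl -s http://localhost:5000/health

# Runs only the checks tagged "live" (just the "self" check here)
curl -s http://localhost:5000/alive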

Step 3: Set Up Ollama with GPU Support (5 minutes)

Step 3 of 7

Install Ollama and pull a model for our agent:

Bash
# Install Ollama (Linux/Mac)
curl -fsSL https://ollama.ai/install.sh | sh

# Or download from https://ollama.ai for Windows

# Start Ollama service
ollama serve

# In another terminal, pull the chat model (Qwen2.5 Coder handles tool-calling well)
ollama pull qwen2.5-coder:7b

# Pull the embedding model used by the Knowledge service in Step 5
ollama pull nomic-embed-text

# Verify both models are available
ollama list

💡 GPU Support: If you have an NVIDIA GPU, Ollama will automatically detect and use it for faster inference.
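Beyond ollama list, you can hit Ollama's HTTP API directly, which is exactly what the .NET services will do later (assumes ollama serve is listening on the default port 11434):

Bash
# List installed models via the REST API
curl -s http://localhost:11434/api/tags

# One-shot, non-streaming generation with the chat model
curl -s http://localhost:11434/api/generate -d '{
  "model": "qwen2.5-coder:7b",
  "prompt": "Reply with OK",
  "stream": false
}'

# Request an embedding vector from the embedding model
curl -s http://localhost:11434/api/embed -d '{
  "model": "nomic-embed-text",
  "input": "hello world"
}'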

Step 4: Configure PostgreSQL with pgvector (5 minutes)

Step 4 of 7

Set up PostgreSQL with vector extensions for the knowledge vault:

Bash
# Using Docker (recommended)
docker run -d \
  --name pmcro-postgres \
  -e POSTGRES_DB=knowledge \
  -e POSTGRES_USER=pmcro \
  -e POSTGRES_PASSWORD=pmcro123 \
  -p 5432:5432 \
  pgvector/pgvector:pg16

# Wait for the container to start
sleep 10

# Enable the vector extension in the knowledge database, then verify it
docker exec pmcro-postgres psql -U pmcro -d knowledge -c "CREATE EXTENSION IF NOT EXISTS vector;"
docker exec pmcro-postgres psql -U pmcro -d knowledge -c "SELECT extname FROM pg_extension WHERE extname = 'vector';"
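The Knowledge service in Step 5 leans on pgvector's cosine-distance operator (<=>). You can try it directly against the container to confirm the extension behaves as expected:

Bash
# Enable the extension (idempotent), then compute a cosine distance
docker exec pmcro-postgres psql -U pmcro -d knowledge -c \
  "CREATE EXTENSION IF NOT EXISTS vector;"

# Orthogonal vectors have cosine similarity 0, so cosine distance 1
docker exec pmcro-postgres psql -U pmcro -d knowledge -c \
  "SELECT '[1,0,0]'::vector <=> '[0,1,0]'::vector AS cosine_distance;"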

Step 5: Build the Knowledge Service (5 minutes)

Step 5 of 7

The Knowledge service provides RAG (Retrieval-Augmented Generation): it embeds content with Ollama's nomic-embed-text model and stores the vectors in pgvector. Replace PmcroQuickstart.KnowledgeService/Program.cs (this assumes the OllamaSharp and Pgvector.EntityFrameworkCore packages are installed):

C#
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.AI;
using OllamaSharp;
using Pgvector;
using Pgvector.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);

// Add service defaults & Aspire components
builder.AddServiceDefaults();

// Add PostgreSQL with pgvector
builder.Services.AddDbContext<KnowledgeContext>(options =>
    options.UseNpgsql(builder.Configuration.GetConnectionString("knowledge"),
        npgsql => npgsql.UseVector()));

// Add an Ollama embedding generator (nomic-embed-text emits 768-dimension vectors)
builder.Services.AddSingleton<IEmbeddingGenerator<string, Embedding<float>>>(sp =>
    new OllamaApiClient(new Uri("http://localhost:11434"), "nomic-embed-text"));

var app = builder.Build();

// Create the schema on startup (fine for a quickstart; use migrations in production)
using (var scope = app.Services.CreateScope())
{
    scope.ServiceProvider.GetRequiredService<KnowledgeContext>().Database.EnsureCreated();
}

// Configure the HTTP request pipeline
if (app.Environment.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
}

app.MapDefaultEndpoints();

// Knowledge API endpoints
app.MapPost("/knowledge/store", async (KnowledgeItem item, KnowledgeContext db,
    IEmbeddingGenerator<string, Embedding<float>> embedder) =>
{
    // Generate an embedding for the content
    var embeddings = await embedder.GenerateAsync([item.Content]);

    // Store in database
    item.Id = Guid.NewGuid();
    item.Embedding = new Vector(embeddings[0].Vector);
    item.Timestamp = DateTime.UtcNow;

    db.KnowledgeItems.Add(item);
    await db.SaveChangesAsync();

    return Results.Created($"/knowledge/{item.Id}", item);
});

app.MapGet("/knowledge/search", async (string query, KnowledgeContext db,
    IEmbeddingGenerator<string, Embedding<float>> embedder) =>
{
    // Generate an embedding for the search query
    var embeddings = await embedder.GenerateAsync([query]);
    var queryVector = new Vector(embeddings[0].Vector);

    // Vector similarity search: smaller cosine distance = more similar
    var results = await db.KnowledgeItems
        .OrderBy(k => k.Embedding!.CosineDistance(queryVector))
        .Take(5)
        .ToListAsync();

    return Results.Ok(results);
});

app.Run();

// Data models
public class KnowledgeItem
{
    public Guid Id { get; set; }
    public string Content { get; set; } = "";
    public Vector? Embedding { get; set; }
    public DateTime Timestamp { get; set; }
}

public class KnowledgeContext : DbContext
{
    public KnowledgeContext(DbContextOptions<KnowledgeContext> options) : base(options) { }

    public DbSet<KnowledgeItem> KnowledgeItems => Set<KnowledgeItem>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.HasPostgresExtension("vector");

        modelBuilder.Entity<KnowledgeItem>().HasKey(k => k.Id);
        modelBuilder.Entity<KnowledgeItem>().Property(k => k.Embedding)
            .HasColumnType("vector(768)") // adjust to your embedding model's dimension
            .HasColumnName("embedding");

        // Approximate-nearest-neighbor index for cosine distance
        modelBuilder.Entity<KnowledgeItem>()
            .HasIndex(k => k.Embedding)
            .HasMethod("ivfflat")
            .HasOperators("vector_cosine_ops");
    }
}
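With the service running (and Ollama serving nomic-embed-text), you can exercise both endpoints. The port 5100 is a placeholder; use the URL Kestrel prints at startup:

Bash
# Store a knowledge item (the service generates and saves the embedding)
curl -s -X POST http://localhost:5100/knowledge/store \
  -H "Content-Type: application/json" \
  -d '{"content": "PMCR-O agents coordinate via gRPC"}'

# Semantic search: returns the 5 nearest items by cosine distance
curl -s "http://localhost:5100/knowledge/search?query=how%20do%20agents%20communicate"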

Step 6: Build the Planner gRPC Service (5 minutes)

Step 6 of 7

The Planner is a standalone gRPC service. Replace PmcroQuickstart.PlannerService/Program.cs:

C#
using Microsoft.AspNetCore.Server.Kestrel.Core;
using Microsoft.Extensions.AI;
using OllamaSharp;
using PmcroQuickstart.PlannerService.Services;

var builder = WebApplication.CreateBuilder(args);

// Add service defaults (OpenTelemetry, health checks, etc.)
builder.AddServiceDefaults();

// Add gRPC services
builder.Services.AddGrpc();

// Configure Kestrel for HTTP/2 (required for gRPC)
builder.WebHost.ConfigureKestrel(options =>
{
    options.ConfigureEndpointDefaults(listenOptions =>
    {
        listenOptions.Protocols = HttpProtocols.Http2;
    });
});

// Ollama setup (Aspire injects the connection string when run from the AppHost)
var ollamaUri = builder.Configuration.GetConnectionString("ollama")
    ?? "http://localhost:11434";
var modelId = "qwen2.5-coder:7b";

// Register the Ollama chat client with tool-calling support
builder.Services.AddSingleton<IChatClient>(sp =>
{
    var baseClient = new OllamaApiClient(new Uri(ollamaUri), modelId);
    return new ChatClientBuilder(baseClient)
        .UseFunctionInvocation()
        .Build();
});

var app = builder.Build();

// Map gRPC service
app.MapGrpcService<PlannerAgentService>();
app.MapGet("/", () => "Planner Service - gRPC endpoint available");

app.Run();

Create PmcroQuickstart.PlannerService/Services/PlannerAgentService.cs:

C#
using Grpc.Core;
using Microsoft.Extensions.AI;
using PmcroQuickstart.Shared.Grpc;

namespace PmcroQuickstart.PlannerService.Services;

public class PlannerAgentService : AgentService.AgentServiceBase
{
    private readonly IChatClient _chatClient;
    private readonly ILogger<PlannerAgentService> _logger;

    public PlannerAgentService(IChatClient chatClient, ILogger<PlannerAgentService> logger)
    {
        _chatClient = chatClient;
        _logger = logger;
    }

    public override async Task<AgentResponse> ExecuteTask(
        AgentRequest request,
        ServerCallContext context)
    {
        _logger.LogInformation("🧭 I AM the Planner. Analyzing: {Intent}", request.Intent);

        // BIP Identity-First Prompt
        var systemPrompt = """
            I AM the Planner.
            I TRANSFER complex intent into minimal viable plans.
            I EVOLVE through reflection on execution outcomes.
            """;

        var messages = new List<ChatMessage>
        {
            new(ChatRole.System, systemPrompt),
            new(ChatRole.User, request.Intent)
        };

        var chatOptions = new ChatOptions
        {
            ResponseFormat = ChatResponseFormat.Json
        };

        var response = await _chatClient.GetResponseAsync(messages, chatOptions);

        return new AgentResponse
        {
            Content = response.Text,
            Success = true
        };
    }
}
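The AgentService.AgentServiceBase base class is generated from a Protocol Buffers contract, assumed here to live in the Shared project. A minimal sketch (the file path and field names are illustrative assumptions, not the canonical PMCR-O contract):

Protobuf
// PmcroQuickstart.Shared/Protos/agent.proto
syntax = "proto3";

option csharp_namespace = "PmcroQuickstart.Shared.Grpc";

service AgentService {
  rpc ExecuteTask (AgentRequest) returns (AgentResponse);
}

message AgentRequest {
  string intent = 1;
}

message AgentResponse {
  string content = 1;
  bool success = 2;
}

Reference it from each agent service's .csproj with a <Protobuf Include="..\PmcroQuickstart.Shared\Protos\agent.proto" GrpcServices="Server" /> item so the C# base classes are generated at build time.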

Step 7: Configure the AppHost (5 minutes)

Final Step

Configure Aspire to orchestrate the PMCR-O microservices. Replace PmcroQuickstart.AppHost/Program.cs (the AddOllama and AddModel extensions come from the CommunityToolkit.Aspire.Hosting.Ollama package):

C#
var builder = DistributedApplication.CreateBuilder(args);

// Infrastructure
var postgres = builder.AddPostgres("postgres")
    .WithImage("pgvector/pgvector:pg16")
    .WithDataVolume();

var knowledgeDb = postgres.AddDatabase("knowledge");

// Ollama (drop the GPU args on CPU-only machines)
var ollama = builder.AddOllama("ollama", port: 11434)
    .WithDataVolume()
    .WithContainerRuntimeArgs("--gpus=all")
    .WithEnvironment("OLLAMA_NUM_GPU", "1");

// Models
var chatModel = ollama.AddModel("qwen2.5-coder:7b");
var embedder = ollama.AddModel("nomic-embed-text");

// Individual gRPC agent services
var planner = builder.AddProject<Projects.PmcroQuickstart_PlannerService>("planner-agent")
    .WithReference(ollama)
    .WaitFor(chatModel);

var maker = builder.AddProject<Projects.PmcroQuickstart_MakerService>("maker-agent")
    .WithReference(ollama)
    .WaitFor(chatModel);

var checker = builder.AddProject<Projects.PmcroQuickstart_CheckerService>("checker-agent")
    .WithReference(ollama)
    .WaitFor(chatModel);

var reflector = builder.AddProject<Projects.PmcroQuickstart_ReflectorService>("reflector-agent")
    .WithReference(ollama)
    .WaitFor(chatModel);

var orchestrator = builder.AddProject<Projects.PmcroQuickstart_OrchestratorService>("orchestrator-agent")
    .WithReference(ollama)
    .WaitFor(chatModel);

// Knowledge Service (RAG) - needs the database and the embedding model
var knowledgeService = builder.AddProject<Projects.PmcroQuickstart_KnowledgeService>("knowledge-service")
    .WithReference(knowledgeDb)
    .WithReference(ollama)
    .WaitFor(knowledgeDb)
    .WaitFor(embedder);

// REST API Gateway (OrchestrationApi)
var orchestrationApi = builder.AddProject<Projects.PmcroQuickstart_OrchestrationApi>("orchestration-api")
    .WithReference(planner)
    .WithReference(maker)
    .WithReference(checker)
    .WithReference(reflector)
    .WithReference(orchestrator)
    .WithReference(knowledgeService)
    .WithReference(ollama);

builder.Build().Run();

🚀 Launch Your Autonomous Agent

Start everything with a single command:

Bash
# From the solution directory
dotnet run --project PmcroQuickstart.AppHost

# The Aspire dashboard URL (including its login token) is printed to the console
# Your autonomous agent system is now running!

🎉 Congratulations!

You've just built a complete PMCR-O microservices system! It includes:

  • Individual gRPC Services for each PMCR-O agent
  • REST API Gateway for external HTTP communication
  • Knowledge Vault with pgvector for RAG
  • Workflow Orchestration using Microsoft Agents AI


Next Steps

Your PMCR-O agent is ready for expansion:

  • Implement the Maker & Checker agents - Complete the PMCR-O cycle, using the Planner service as a template
  • Integrate the Reflector - Enable self-improvement
  • Connect the Orchestrator - Multi-agent coordination
  • Scale to production - Deploy with Kubernetes

Ready to go deeper? Check out the complete PMCR-O tutorial series and technical documentation.



About Shawn Delaine Bellazan

Resilient Architect & PMCR-O Framework Creator

Shawn is the creator of the PMCR-O framework, a self-referential AI architecture that embodies the strange loop it describes. With 15+ years in enterprise software development, Shawn specializes in building resilient systems at the intersection of philosophy and technology. His work focuses on autonomous AI agents that evolve through vulnerability and expression.