Documentation

What Transmute does, what it needs from your codebase, and how the delivery process works — with no hand-waving.

What Transmute Does

Transmute adds AI-powered capabilities to existing .NET codebases without requiring a rewrite, a microservices migration, or a platform change.

The core workflow is: analyze → blueprint → approve → generate. Transmute reads your repository, maps your architecture, and produces a detailed integration plan. Your team reviews and approves that plan. Only then does any code get generated — and it arrives as a pull request, not a wholesale replacement of your codebase.

Examples of what a Transmute integration can deliver:

  • Natural language interfaces to existing data layers (search, filtering, summarization)
  • AI-assisted triage or classification built into existing service layer patterns
  • Document understanding and extraction wired into existing data models
  • Copilot-style suggestion features that operate on your domain vocabulary
  • Embedding pipelines connected to your existing entity model

Transmute does not replace your architecture, change your deployment model, introduce undisclosed dependencies, or generate code without your team's explicit approval.

What Codebases Work Best

Transmute is not a general-purpose AI code tool. It is optimized for a specific type of .NET project. If your codebase fits the profile below, the integration process is predictable. If it doesn't, we will tell you in the discovery sprint before any further money is spent.

Hard requirements

  • .NET 6+ — Framework 4.x and .NET Core 3.x projects can work but require additional scoping
  • Standard layered architecture — some recognizable separation of presentation, business logic, and data access
  • Fewer than 500 core files in scope — this is the ceiling for the analysis phase within a single engagement; larger codebases are supported by scoping the integration to a specific bounded context
  • Build must pass before we start — we do not debug pre-existing compilation errors

What helps

  • Consistent naming conventions (even informal ones)
  • EF Core, Dapper, or another recognizable data access pattern
  • Existing interfaces or abstractions at service boundaries
  • Unit or integration tests (not required, but useful)

What we've seen that still works

Inconsistent naming. No interfaces. Giant controller actions. Business logic in the data layer. Stored procedures everywhere. These are common. They slow the analysis phase slightly but don't disqualify a project. We surface what we find in the Blueprint and adjust the integration approach accordingly.

Not sure if your codebase qualifies? The discovery sprint will answer that definitively. At the end of the sprint, you either have a Blueprint you can act on or a clear explanation of why this engagement isn't right for your project — with no further commitment required.

The Blueprint

The Blueprint is the deliverable from the discovery sprint. It is a structured technical document — not a presentation, not a sales artifact — that your engineering team reads and annotates before any code is generated.

What it contains

  • Architecture summary — what we observed about your codebase: layering, naming patterns, data access style, existing abstractions
  • Integration targets — the specific files, services, and interfaces where AI capabilities will be introduced, with rationale for each choice
  • Code sketches — representative examples of what the generated code will look like, in your naming conventions
  • Dependencies introduced — every external package that will be added, with version and justification
  • Boundary conditions — what the integration explicitly will not touch, and why
  • Risk notes — anything we observed that could complicate the integration or require special handling

The approval step

After Blueprint delivery, your team has a review window — standard is five business days, adjustable by agreement. During this window:

  • You can accept the Blueprint as written
  • You can request changes to scope, approach, or specific integration targets
  • You can reject the Blueprint entirely — no code generation proceeds, and the engagement closes at this stage

Code generation begins only after a written sign-off from your team. This is not a formality. The approval step is where integration mismatches get caught, not in your CI pipeline.

The PR

Code generation produces a pull request against your target branch. It arrives like any other PR in your workflow — your team reviews it, requests changes, and merges when ready.

What a generated PR includes

  • Compile-verified output — the PR builds against your project before it is submitted. We do not submit code that does not compile.
  • Pattern-matching code — naming conventions, indentation style, and layering choices follow what is already in your repo, not a house style we impose
  • No surprise dependencies — every package added was disclosed in the Blueprint and approved by your team before generation began
  • PR description — a plain-English explanation of what changed and why, keyed to the Blueprint sections it implements
  • Test stubs — if your project has an existing test suite, the PR includes skeleton tests for generated methods, following your existing test patterns
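
As an illustration of the test-stub bullet above, a generated skeleton might look like the following. This is a hypothetical sketch assuming an xUnit-based suite; the class and method names are illustrative, not part of any real engagement output — the actual stubs follow whatever patterns already exist in your test project.

```csharp
// Hypothetical skeleton test, assuming the project already uses xUnit.
// All names here are illustrative placeholders.
using System.Threading.Tasks;
using Xunit;

public class DocumentSearchServiceTests
{
    [Fact(Skip = "Generated stub — fill in after PR review")]
    public async Task SearchAsync_ReturnsResults_ForSimpleQuery()
    {
        // Arrange: construct the service against a fake repository.
        // Act:     call SearchAsync with a representative query.
        // Assert:  verify result count and ordering.
        await Task.CompletedTask;
    }
}
```

Stubs are marked skipped so they compile and appear in your test explorer without failing the build before your team fills them in.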

What it doesn't do

  • Refactor existing code outside the integration targets — the PR only adds what was in the Blueprint
  • Introduce architectural changes — if the Blueprint called for adding a service interface, the PR adds a service interface; it doesn't reorganize your project structure
  • Auto-merge — the PR waits for your review like everything else

// Example PR description excerpt

Implements Blueprint § 3.2 — DocumentSearchService

Adds natural language search to the existing IDocumentRepository
interface. New method: SearchAsync(string query, int topK).

Pattern: follows existing async service pattern in
Services/DocumentService.cs lines 44–82.

Dependencies added: Microsoft.SemanticKernel 1.x (approved Blueprint § 5)

Does not modify: DocumentRepository.cs, existing query methods,
EF Core configuration.
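
To make the excerpt concrete, here is a hypothetical sketch of the shape of change it describes. Only `IDocumentRepository`, `DocumentSearchService`, and `SearchAsync(string query, int topK)` come from the excerpt above; the `Document` type and the method bodies are illustrative placeholders, not actual generated output.

```csharp
// Hypothetical sketch of the change described in the PR excerpt.
// The Document record and implementation details are placeholders.
using System.Collections.Generic;
using System.Threading.Tasks;

public record Document(string Id, string Title, string Body);

public interface IDocumentRepository
{
    // Existing query methods remain untouched.

    // New method added by the PR:
    Task<IReadOnlyList<Document>> SearchAsync(string query, int topK);
}

public class DocumentSearchService
{
    private readonly IDocumentRepository _repository;

    public DocumentSearchService(IDocumentRepository repository)
        => _repository = repository;

    // Delegates to the repository; in the real integration this is where
    // the natural language query would be embedded and ranked.
    public Task<IReadOnlyList<Document>> SearchAsync(string query, int topK = 10)
        => _repository.SearchAsync(query, topK);
}
```

The point of the sketch: the new capability arrives as an additive method on an existing abstraction, not as a restructuring of the repository or its callers.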

FAQ

These are the questions we hear most often from engineering managers and CTOs evaluating whether Transmute is the right fit.

What access do you need to our codebase?

Read-only. We need to clone the repository to perform the analysis. We do not need write access at any stage — the PR is submitted through your standard review process. If your security policy requires it, we can perform the analysis in an air-gapped or VPN-connected environment.

Who owns the generated code?

You do. The generated code becomes part of your repository, under your IP ownership, the moment the PR is merged. We retain no rights to it and do not store copies of your codebase or generated output after the engagement closes.

What happens if the generated code doesn't meet our standards?

The Blueprint approval step is designed to catch this before generation begins. If the delivered PR has issues your team identifies in review, we revise. The engagement doesn't close until the PR meets the standards agreed to in the approved Blueprint. There is no additional charge for revision cycles within agreed scope.

Can this work alongside our existing AI tooling (Copilot, Cursor, etc.)?

Yes. Transmute is not a developer tool — it runs once to produce a specific integration, not continuously as part of your IDE. It neither depends on nor conflicts with the AI tooling your developers are already using. The PR it generates looks like any other PR to those tools.

We've heard "AI-generated code" before and it never compiles. How is this different?

The compile requirement is not a marketing claim — it is an exit condition. The PR is not submitted unless it builds against your project. This means we test against your solution file, your project references, and your existing NuGet packages. If it doesn't compile, you don't see it. We've turned down engagements because analysis showed the codebase had enough structural inconsistency that we couldn't meet the compile guarantee within reasonable scope. We'd rather say no than ship broken code.

Still have questions?

The discovery sprint is where the real due diligence happens. It ends with a Blueprint, not a pitch deck.