Portfolio OS Engineering Strategy

Product Engineer — Portfolio Experience

Engineering Strategy

How I'd support the team building the Portfolio OS — making fast prototypes production-ready and setting foundations that let everyone keep shipping.

APIs · Auth & GDPR · Databases · Deployments · Observability · AI / LLM · Web Products · Tool Integration

This is an initial framework, not a final plan.

01

Understand the Challenge

One customer, many products, many systems — stitched together today by the support agent's memory.

At Enpal's scale — hundreds of thousands of customers, each with multiple products — every manual step compounds. What works for 100 customers becomes a bottleneck at 100,000.

Product breadth

A customer has solar panels on the roof, a battery in the garage, a wallbox outside, a smart meter on the wall, and a heat pump in the basement. Five products, each with its own contract, lifecycle, and support history.

The math

5 product types mean 5 separate systems to maintain, 5 sets of agent training, and 5 potential failure points. One unified view eliminates the multiplication.

Internal side

Data fragmentation

A customer calls about their battery. The agent checks one system. Then the customer asks about their solar contract. That's a different system. Then about the wallbox installation. A third system.

The cost

Every extra minute searching across systems × hundreds of thousands of customers = massive cost. One unified view turns a 10-minute call into a 3-minute call.

Agent pain

A new agent joins the team. They need to learn five different tools, five different logins, five different interfaces — just to answer one customer's question.

The impact

Simpler tools mean faster onboarding, less burnout, lower turnover. An agent who's productive in a week instead of a month changes the economics of the whole team.

Customer side

Self-service gap

A customer wants to check their installation date. Today: they call, wait on hold, an agent looks it up. Tomorrow: they open the app, see the date, done. No ticket created.

At scale

Every question a customer answers themselves is one fewer ticket. At scale, self-service can reduce ticket volume by 30-50% — that's thousands of calls that never need to happen.

Disconnected experience

A customer has solar panels and a heat pump from Enpal, but each feels like a separate company. Two different portals, two different support flows, no shared history.

The risk

Customers who feel unknown churn faster. When someone sees all their Enpal products in one place, it builds trust and increases lifetime value.

AI needs actions — the bridge

A customer asks the chatbot “Can you reschedule my installation?” Without a real API, the AI can only say “Please call us.” With a real API, it actually reschedules it. That's the difference.

The difference

AI without real APIs generates polite but useless responses. AI with real APIs resolves issues in seconds. The bridge between both sides is backend skills — real endpoints that do real things.

So what architecture would support all of this?

02

Design the Architecture

One API layer, one data model, one source of truth. The no-code team and AI builders stay fast — the engineering anchor makes it solid underneath.

API-first design

The no-code team builds a dashboard in Retool. An AI agent answers customer questions. A production app shows contracts. All three get their data from the exact same API.

In practice

If tomorrow we change how contracts work, we update one API. Every consumer — Retool, AI, production app — gets the change automatically. No drift, no duplicated logic.

Clear data model

Customer, Products, Contracts, Communications, Tickets — five entities that describe every customer relationship. Defined once in the database, used by every tool equally.

The guarantee

The agent cockpit and the AI chatbot see the exact same customer data. No 'the dashboard shows something different from the app' bugs.
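As a sketch, those five entities can live as shared type definitions that every consumer imports. All field names below are illustrative assumptions, not an existing schema:

```typescript
// Illustrative shared data model — Retool, the AI layer, and the production
// app would all import these same definitions instead of redefining them.
interface Customer {
  id: string;
  name: string;
  city: string;
}

interface Product {
  id: string;
  customerId: string;
  type: "solar" | "battery" | "wallbox" | "smart_meter" | "heat_pump";
}

interface Contract {
  id: string; // e.g. "SL-4821"
  customerId: string;
  productId: string;
  status: "active" | "pending" | "ended";
}

interface Communication {
  id: string;
  customerId: string;
  channel: "phone" | "email" | "whatsapp";
  timestamp: string; // ISO 8601
}

interface Ticket {
  id: string;
  customerId: string;
  status: "open" | "resolved";
}

// A sample record type-checks against the shared model:
const sample: Contract = {
  id: "SL-4821",
  customerId: "cust-1",
  productId: "prod-1",
  status: "active",
};
```

Change a field here and every consumer sees it at compile time rather than at runtime.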

Data isolation per customer

Customer A calls support. The agent sees Customer A's products, contracts, and tickets — and nothing else. Customer B's data doesn't exist in that view. Built in from day one.

I've built this

In Odys, every query is scoped by professional ID — same approach here with customer ID. GDPR compliance becomes automatic, not a feature you bolt on later.
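A minimal sketch of that scoping, with illustrative names: the data layer injects the customer ID itself, last, so no caller can widen or override the scope.

```typescript
// Sketch of customer-scoped data access. Because customerId is spread in
// last, a buggy or malicious caller-supplied filter can never escape its
// own customer's data.
type Filter = Record<string, unknown>;

function scopeToCustomer(customerId: string, filter: Filter = {}): Filter {
  return { ...filter, customerId };
}

// Even a filter that tries to reach another customer gets pinned back:
const f = scopeToCustomer("cust-a", { status: "active", customerId: "cust-b" });
```

With every query funneled through one helper like this, isolation is a property of the data layer, not a convention each developer has to remember.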

Modular components

The agent cockpit is one module. The customer-facing app is another. Admin tools are a third. They all share the same API, but if you update one, the others don't change.

The benefit

The team fixing a bug in admin tools doesn't risk breaking the agent cockpit. Ship independently, break less.

AI layer — the skills agents need

Intelligent routing

A customer writes in. AI reads the issue, checks which products they have, and routes it — to an AI agent for simple questions, or to the right human team for complex ones.

How it works

No one manually assigns tickets. The system decides in seconds: AI can handle this, or this needs a specialist. The API provides the product context to make it accurate.
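As a sketch under heavy assumptions — real routing would use a classifier and real product context from the API, the keywords and team names here are placeholders:

```typescript
// Illustrative routing: simple questions go to an AI agent, everything
// else to the human team owning the mentioned product.
type Route = { handler: "ai" | "human"; team?: string };

// Placeholder list of question types the AI can safely resolve alone.
const SIMPLE = ["installation date", "contract status", "invoice copy"];

function routeTicket(text: string, products: string[]): Route {
  const lower = text.toLowerCase();
  if (SIMPLE.some((k) => lower.includes(k))) return { handler: "ai" };
  // Complex: route to the team for the first product mentioned in the text.
  const team = products.find((p) => lower.includes(p)) ?? "general";
  return { handler: "human", team };
}
```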

AI service agents

“When is my installation?” — the AI looks it up in the real system via the API, gives an accurate answer, and closes the ticket. No human needed.

I've built this

My RAG career chatbot works this way — an AI agent answering questions from real data with hybrid retrieval. The key: it calls an API for facts, it doesn't guess.

Expert agents

For complex issues, AI assists the human agent: pulls up the customer's full history, suggests a response, flags what's unusual. The human decides.

The principle

AI augments, not replaces. The agent gets relevant context in seconds instead of searching through five tools. Faster resolution, fewer mistakes.

AI service journey

Customer Issue → Routing
Simple → AI Agent → Resolved ✓
Complex → Expert + Human → Resolved ✓

Key insight: When a no-code prototype graduates to real code, the data layer doesn't change — only the frontend does.

Speed is the advantage. The question is how to keep it.

03

Equip the Team

The no-code builders and AI tinkerers are already fast. These patterns make what they ship solid — without slowing them down.

Graduation rule

Someone on the team builds a prototype in Retool in two days. It works. People start using it. A month later, 200 agents depend on it daily. That's when it needs to graduate to real code — before it breaks under load.

The handoff

When it crosses the threshold, I sit with the builder, understand what they built, and rebuild it with proper auth, tests, and deployment — keeping the same API so nothing breaks downstream.

No-Code Prototype → Threshold Crossed?
Yes → Rebuild in Real Code → Production
No → Keep in No-Code

Reusable component library

A builder needs a customer details card. Instead of designing it from scratch, they grab it from the shared library. Same look, same behavior, five minutes instead of two hours.

The result

One library, one design language. A new team member joins and is productive on day one because the building blocks already exist.

Schema-first approach

Before anyone builds anything, we agree: what is a 'Customer'? What fields does a 'Contract' have? This gets documented once. Every tool — no-code, AI, production — uses the same definition.

The guarantee

No surprises. When the AI reads customer data, it sees the exact same structure as the agent cockpit. One schema means zero integration bugs.
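A minimal sketch of what "one schema at every boundary" can look like at runtime — hand-rolled here for illustration (a schema library like zod would do this in practice), with assumed field names:

```typescript
// The agreed schema, written down once. Every boundary — no-code webhook,
// AI tool call, production endpoint — validates incoming data against it.
const CONTRACT_FIELDS: Record<string, string> = {
  id: "string",
  customerId: "string",
  status: "string",
};

// Returns a list of violations; an empty list means the payload conforms.
function validateContract(input: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const [field, type] of Object.entries(CONTRACT_FIELDS)) {
    if (typeof input[field] !== type) errors.push(`${field}: expected ${type}`);
  }
  return errors;
}
```

The specifics matter less than the habit: data that doesn't match the shared definition is rejected at the door, not discovered as a bug downstream.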

Observability by default

A new feature goes live on Monday. By Tuesday we know: how many people used it, how fast it loaded, if any errors occurred. Not because someone added monitoring later — it shipped with it.

In practice

Sentry for errors, analytics for usage. If something is live, it's monitored. No 'we didn't know it was broken' moments.
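As an illustration of "shipped with monitoring": every handler goes live already wrapped in a reporter. The reporter below is a stand-in (in production it would be a Sentry capture call); only the pattern is the point.

```typescript
// Stand-in error sink — in production this would forward to Sentry.
const reported: string[] = [];

// Handlers are deployed wrapped, so an unhandled error is always recorded
// rather than silently swallowed.
function withMonitoring<T>(name: string, fn: () => T): T | undefined {
  try {
    return fn();
  } catch (e) {
    reported.push(`${name}: ${(e as Error).message}`);
    return undefined;
  }
}
```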

Documentation as code

Six months from now, someone asks “why does this work this way?” The answer is in the code repository, next to the code — not in someone's memory or a lost Slack message.

When someone leaves

Decisions live where the code lives. When someone leaves the team, the knowledge stays. When someone joins, they can read the 'why' before touching the code.

Safeguards — keeping speed sustainable

Cleanup cycles

Sprint 1: features. Sprint 2: features. Sprint 3: stop. Fix the things that have been piling up. Improve the foundations. Then go fast again. This cycle is non-negotiable.

The habit

Scheduled, not reactive. You don't wait until things break — you build maintenance into the team's rhythm. Predictable upkeep beats emergency firefighting.

Testing at boundaries

Someone updates a no-code workflow. It sends different data to the API. Before any user sees the problem, an automated test catches it and flags it. Fixed before lunch.

How it works

Contract tests at every API boundary. If any consumer — no-code, AI, or production code — sends unexpected data, the test suite catches it before users do.
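A contract test can be as small as pinning the response shape. This sketch assumes a hypothetical `getCustomer` handler — names and fields are illustrative:

```typescript
// Hypothetical handler whose response shape the contract pins down.
function getCustomer(id: string) {
  return { id, name: "Maria Schmidt", products: 5 };
}

// The contract every consumer relies on: exactly these fields, these types.
// Run in CI, this fails the build before a shape change reaches any user.
function meetsCustomerContract(resp: Record<string, unknown>): boolean {
  return (
    typeof resp.id === "string" &&
    typeof resp.name === "string" &&
    typeof resp.products === "number"
  );
}
```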

Guardrails & monitoring

The AI reschedules an appointment — allowed. The AI cancels a contract — blocked. Error rates spike after a deploy — feature work stops until it's fixed. Clear lines, enforced automatically.

The line

Real-time monitoring means the team knows within minutes, not days. AI guardrails are measurable and auditable — not just promises in a document.
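One way to make the "reschedule allowed, cancel blocked" line measurable is an explicit allowlist with an audit trail — the action names below are illustrative:

```typescript
// The only actions the AI may invoke; everything else is blocked.
const AI_ALLOWED = new Set(["reschedule_appointment", "lookup_contract", "send_status_update"]);

// Every decision produces an audit line, so the guardrail is auditable,
// not just a promise in a document.
function aiGuard(action: string): { allowed: boolean; audit: string } {
  const allowed = AI_ALLOWED.has(action);
  return { allowed, audit: `${action}: ${allowed ? "allowed" : "blocked"}` };
}
```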

One owner per system

It's 2 AM. The payment webhook is failing. There's no question about who to call — the owner's name is right there next to the system. No orphaned code, no 'I think Maria built that.'

The rule

Every production system has a name next to it. When someone leaves, ownership transfers explicitly. No system is ever truly unknown.

Bottom line: Moving fast is the advantage. The engineering anchor's job is to keep it that way — without letting speed turn into debt.

With the right foundations, here's how to roll it out.

04

Execute in Phases

Short feedback cycles with the team. Learn from what exists first, build foundations second, measure impact third.

Phase 1 — Weeks 1-4

Week 1: sit with agents, watch them work, understand the pain. Weeks 2-4: build the first customer portfolio screen — type in a customer name, see everything in one place. That's the MVP.

Deliverables

One working screen that agents can actually use. Plus a written map of existing systems, pain points, and which prototypes are closest to needing real code.

Phase 2 — Weeks 5-8

The portfolio view becomes a full agent cockpit. Agents use it daily. Admin tools for edge cases. After each week, we ask: what's missing? What's annoying? Then fix it.

Deliverables

Cockpit in daily use by the support team. First real feedback from internal users. The tool starts saving time instead of adding overhead.

Phase 3 — Weeks 9-12

The AI layer plugs into the same APIs from phases 1 and 2. AI agents start handling simple queries. We measure: did resolution time drop? Did CSAT improve? If not, we adjust.

Deliverables

AI agents handling simple queries autonomously. Guardrails tested. Before/after metrics proving whether customer experience actually improved — not assumed.

Interactive mockups — both apps are connected

Portfolio OS — Customer View

🔍 Maria Schmidt

Maria Schmidt

Customer since March 2024 · Berlin · 5 products

Active

Contracts

Solar Lease #SL-4821 — Active
Battery Add-on #BA-1203 — Active
Wallbox Install #WB-892 — Pending

Need help?

Portfolio OS — Agent Cockpit

My Queue · 4 open

Maria Schmidt

5 products · Berlin · Since 2024

Reschedule · Add Note

Current Ticket

I need to change my wallbox installation date — I won't be home on April 28

via WhatsApp · 12 min ago

⚡ AI Suggestion

Next available date: May 2. Offer to reschedule? The customer has rescheduled once before (Mar → Apr).

How: Weekly syncs with Operations and Service team leads to validate what we're building. The no-code team ships prototypes; I make them production-ready. Short loops, shared ownership.

But before building anything — listen first.

05

Learn Before Building

I'd learn before I build. These are the first things I'd want to understand.

Systems & tools

  • What does the current system look like? What tools do agents use today?
  • What no-code prototypes already exist and what state are they in?
  • What's the current tech stack and deployment setup?

Pain points & data

  • Where are the biggest pain points for agents right now?
  • What data is already available and what's missing?
  • Which prototypes are closest to needing production-grade code?

Team & collaboration

  • How does the no-code/AI team work today? What's the handoff process?
  • Who are the key stakeholders in Operations and Service?
  • What does the feedback loop with support agents look like?

The real strategy starts with the first conversation. I'm ready for it.