
From Static Screens to Adaptive Surfaces: Inside Calfus’s Generative UI Engine

  • Sai Teja Yalla
  • Dec 29, 2025
  • 6 min read

For most of the last decade, UI development has revolved around one core idea: assemble reusable components into fixed screens. Designers craft layouts in Figma, engineers translate them into React or native views, and every variation - by role, plan, device, or feature flag - becomes another screen to design, build, and maintain.


That model is predictable and reliable, but it starts to creak as soon as you want truly adaptive products. If every “if this, show that” needs to be encoded by hand, deep personalization quickly hits a wall.


At Calfus, we’ve started to invert that relationship. Instead of treating screens as static artifacts, we treat them as dynamic surfaces that can be composed on the fly by an AI agent. The agent decides what the user needs right now; our UI engine decides how it should look and behave.


The Core Idea: UI as Data, Not Templates

The key shift in our architecture is that the UI is no longer hardwired into templates. Instead, the UI is described as data that our frontend can render safely and consistently.

Under the hood, every generative view in Calfus is driven by a structured payload that answers three questions:

  1. What should be shown?

    A list of components: cards, charts, tables, buttons, inputs, tags, alerts, panels, and so on.

  2. How are they arranged?

    Layout metadata: sections, grids, stacks, responsive behavior, and priority hints.

  3. Where does the data come from?

    Bindings between UI elements and domain data, plus actions that fire when the user interacts.

Think of it as a compact blueprint rather than a finished floor plan. The blueprint travels from the agent to the client as plain data. The client renders it using our own component library and styling system. The agent can change the blueprint over time - adding a panel, rearranging sections, or inserting an inline tool - without shipping new front-end code for each variation.
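To make the blueprint idea concrete, here is a minimal sketch of what such a payload might look like. The type names, props, and binding fields below are illustrative assumptions, not Calfus's actual schema.

```typescript
// Hypothetical shape of a generative UI payload: what to show (components),
// how to arrange it (layout hints), and where data comes from (bindings).
// All names here are illustrative, not the real Calfus schema.
interface UIComponent {
  id: string;
  type: "Card" | "TimeseriesChart" | "Table" | "PrimaryButton";
  props: Record<string, unknown>;
  // Optional binding to a backend query or action, resolved by the client.
  binding?: { source: string; params?: Record<string, unknown> };
}

interface LayoutHint {
  region: "main" | "sidebar";
  order: number;
  componentId: string;
}

interface UIPayload {
  version: string;
  components: UIComponent[];
  layout: LayoutHint[];
}

// Example payload an agent might emit for an anomaly-investigation view.
const payload: UIPayload = {
  version: "1.0",
  components: [
    {
      id: "latency-chart",
      type: "TimeseriesChart",
      props: { title: "p95 latency" },
      binding: { source: "metrics.latency", params: { window: "24h" } },
    },
    {
      id: "ack-button",
      type: "PrimaryButton",
      props: { label: "Acknowledge", action: "alerts.acknowledge" },
    },
  ],
  layout: [
    { region: "main", order: 0, componentId: "latency-chart" },
    { region: "sidebar", order: 0, componentId: "ack-button" },
  ],
};
```

Because the payload is plain data, the agent can add, remove, or reorder entries in `components` and `layout` without any new front-end code shipping.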


What Generative UI Looks Like

Generative UI surfaces show up in the places where users’ questions and workflows branch the most.

  • Adaptive investigation panels: When a user clicks into an anomaly, opportunity, or alert, they land on a workspace that isn’t static. The system looks at the context (type of event, user role, historical patterns) and composes a view tailored to that situation. A data analyst might see time-series charts and pivot controls; an operations owner might see a prioritized set of actions and impact summaries.


  • Context-aware sidebars and helpers: As the user explores, we stream in additional components: comparison charts, related items, suggested next steps, or short explanations. These appear in sidebars or inline sections that can expand, collapse, or reorder themselves depending on engagement and relevance.


  • Task-oriented layouts: For workflows like triaging a queue, planning work, or reviewing a segment of accounts, the system can reorder lists, highlight different dimensions, or swap controls (e.g., toggles vs. sliders vs. presets) based on what seems to help the user move faster.


From the user’s perspective, the app feels like it is constantly “laying out the table” with the most relevant tools and insights. From the engineering side, we’re not building infinite variants of each page; we’re supplying building blocks and policies.


Inside the Generative UI Stack

To make this work, we broke the problem into four main layers: the component catalog, the UI schema, the agent layer, and the renderer.


1. Component catalog and design system

Everything starts with a strictly defined component catalog:

  • Each component has a clear, typed API: required props, optional props, and allowed variations.

  • All visual details, like colors, spacing, and typography, come from design tokens, not from the agent.

  • Certain components are marked as “foundational” (layout primitives, containers), while others are “high-level” (dashboards, summary cards, call-to-action blocks).

This catalog is the sandbox the agent is allowed to play in. If a component isn’t in that list, it doesn’t exist as far as the agent is concerned.
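A minimal sketch of what such a catalog could look like follows; the entry names, props, and tiers are illustrative assumptions, not the real Calfus catalog.

```typescript
// A minimal sketch of a component catalog. Each entry declares its
// required/optional props and its tier, so anything the agent emits can be
// checked against this list before rendering. Names are illustrative.
type Tier = "foundational" | "high-level";

interface CatalogEntry {
  requiredProps: string[];
  optionalProps: string[];
  tier: Tier;
}

const catalog: Record<string, CatalogEntry> = {
  Stack: { requiredProps: [], optionalProps: ["gap"], tier: "foundational" },
  Card: {
    requiredProps: ["title"],
    optionalProps: ["description"],
    tier: "high-level",
  },
  TimeseriesChart: {
    requiredProps: ["title", "series"],
    optionalProps: ["range"],
    tier: "high-level",
  },
};

// If a component type isn't in the catalog, it doesn't exist for the agent.
function isKnownComponent(type: string): boolean {
  return type in catalog;
}
```

The catalog doubles as documentation: it is the single list of everything the agent is allowed to reach for.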


2. UI schema

On top of the catalog, we define a schema that describes how components fit together:

  • A view is a tree (or graph) of components, each with:

    • a type (e.g., Card, TimeseriesChart, PrimaryButton),

    • a set of props (title, description, metrics, actions),

    • layout hints (importance, region, order, breakpoint behavior).

  • Components can reference each other by ID so that a filter component can control a table, or a summary card can link to a detail panel.

  • Data bindings connect components to domain objects, queries, or actions in our backend.


The schema is versioned and validated. If a payload doesn’t conform to the schema - wrong component name, missing props, invalid layout combination - we reject it before it ever hits the screen.
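A validation pass in this spirit might look like the following sketch, which rejects unknown component types and missing required props. The shapes and names are assumptions for illustration, not the actual Calfus validator.

```typescript
// A sketch of schema validation: reject payloads that reference unknown
// component types or omit required props, before anything hits the screen.
// The catalog and payload shapes here are illustrative assumptions.
interface ComponentNode {
  type: string;
  props: Record<string, unknown>;
}

const requiredProps: Record<string, string[]> = {
  Card: ["title"],
  TimeseriesChart: ["title", "series"],
};

function validate(nodes: ComponentNode[]): string[] {
  const errors: string[] = [];
  for (const node of nodes) {
    const required = requiredProps[node.type];
    if (required === undefined) {
      errors.push(`unknown component type: ${node.type}`);
      continue;
    }
    for (const prop of required) {
      if (!(prop in node.props)) {
        errors.push(`${node.type} missing required prop: ${prop}`);
      }
    }
  }
  return errors; // empty array means the payload is safe to render
}
```

Running validation at the boundary means a malformed payload degrades into an error the agent can retry on, rather than a broken screen.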


3. Agent layer

The agent layer is responsible for deciding what to show next:

  • It takes in context: user identity and role, current view, data state, past interactions in the session.

  • It has access to a set of capabilities: “fetch this data,” “summarize these records,” “suggest next actions,” “generate an explanation,” etc.

  • It produces a UI payload that conforms to our schema, using only approved components and props.


Crucially, the agent doesn’t control styling or low-level layout details. It works at the level of: “Add a two-column section with this chart on the left and these actions on the right,” not “use a 14px gray font with a specific hex code.”


When the user interacts with the UI (clicks a button, changes a filter, or drills into a record), that event goes back to the agent, which can in turn respond with an updated payload. Sometimes the update is small (toggling a section), and sometimes it’s structural (replacing a table with a chart because the user switched questions).

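This round trip can be sketched as a small event-to-payload loop. The agent below is a deterministic stub standing in for an LLM-backed service; all names are illustrative assumptions.

```typescript
// A sketch of the interaction loop: UI events flow back to the agent, which
// replies with an updated payload. The "agent" is a deterministic stub here;
// in a real system it would be an LLM-backed service. Names are illustrative.
interface UIEvent {
  componentId: string;
  kind: "click" | "filter-change" | "drill-down";
  detail?: Record<string, unknown>;
}

interface Payload {
  components: { id: string; type: string }[];
}

// Stub policy: swap a table for a chart when the user drills into a record,
// leave the payload untouched for other event kinds.
function agentRespond(current: Payload, event: UIEvent): Payload {
  if (event.kind === "drill-down") {
    return {
      components: current.components.map((c) =>
        c.id === event.componentId ? { ...c, type: "TimeseriesChart" } : c
      ),
    };
  }
  return current; // no structural change needed
}
```

The important property is that the agent only ever sees and emits schema-level data; the event never carries pixels or styling.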

4. Client renderer

On the client side, we have a renderer that translates schema into live views:

  • Each schema component type is mapped to a concrete implementation in our front-end codebase.

  • The renderer is responsible for:

    • applying design tokens and layout rules,

    • handling responsiveness and accessibility,

    • reconciling updates (only re-rendering what changed).

  • We treat the schema like a “virtual UI tree.” When a new payload arrives, we diff it against the current one and apply the minimal set of updates.


Because the renderer is just normal application code, we can test it like any other UI: unit tests for individual components, integration tests for common payloads, and visual regression checks for key layouts.
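The diffing step can be illustrated with a flat version of the idea. Real reconciliation handles nesting, moves, and keyed children; this sketch, with hypothetical names, only shows the add/update/remove core.

```typescript
// A sketch of diffing two schema trees to compute a minimal set of updates,
// in the spirit of virtual-DOM reconciliation. This flat version is
// illustrative only; real reconciliation handles nesting and moves.
interface SchemaNode {
  id: string;
  type: string;
  props: Record<string, unknown>;
}

type Patch =
  | { op: "add"; node: SchemaNode }
  | { op: "remove"; id: string }
  | { op: "update"; node: SchemaNode };

function diff(prev: SchemaNode[], next: SchemaNode[]): Patch[] {
  const prevById = new Map(prev.map((n) => [n.id, n]));
  const nextIds = new Set(next.map((n) => n.id));
  const patches: Patch[] = [];

  for (const node of next) {
    const old = prevById.get(node.id);
    if (!old) {
      patches.push({ op: "add", node });
    } else if (JSON.stringify(old) !== JSON.stringify(node)) {
      patches.push({ op: "update", node });
    }
  }
  for (const node of prev) {
    if (!nextIds.has(node.id)) patches.push({ op: "remove", id: node.id });
  }
  return patches;
}
```

Only the components named in the patch list need to re-render, which keeps structural updates from the agent from feeling like full page reloads.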


Where We Still Use Traditional UI

Generative UI isn’t a replacement for everything. At Calfus, we intentionally limit it to areas where adaptability provides outsized value.

We still use traditional, handcrafted UI for:

  • Core navigation and shells: The main layout, global navigation, and key brand touchpoints remain highly controlled and stable.

  • Compliance- and security-sensitive flows: Anything involving permissions, approvals, legal text, or irreversible actions runs through carefully designed, versioned screens.

  • High-traffic marketing and onboarding pages: These benefit from meticulous copy, layout, and experimentation that’s easier to manage with explicit designs.

Generative surfaces live inside this shell, in zones that are clearly understood as adaptive and context-driven.


How This Changes the Developer’s Role

From the outside, it may look like the agent is “doing the UI work,” but in practice, generative UI shifts developers into more leveraged positions rather than replacing them.

At Calfus, engineers are responsible for:

  • Defining the primitives: Designing and building the component library, layout primitives, and design tokens that keep experiences consistent and accessible.

  • Encoding guardrails: Implementing constraints around density, hierarchy, contrast, localization, and responsiveness so that even a novel layout respects our standards.

  • Exposing safe capabilities: Wiring the agent’s abilities to well-defined backend APIs and domain operations, with clear permissions and failure modes.

  • Building observability into the UI engine: Logging which payloads are generated, how users interact with them, and where things fail, so we can iterate on both the agent’s behavior and the catalog.

  • Curating patterns: When the agent repeatedly converges on a useful layout pattern, we often promote that pattern into a reusable template or higher-level component.

In other words, developers are not “handing the UI off to the AI.” They are building the infrastructure that makes adaptive interfaces safe, coherent, and aligned with the product’s vision.

Benefits We’re Seeing So Far

Even in its current form, this approach is already shifting how we deliver product at Calfus:

  • Faster iteration on complex workflows: Instead of designing and implementing a dozen screen variants, we iterate on the agent’s prompts, policies, and payloads while reusing the same catalog and renderer.

  • Richer personalization without exploding the codebase: Multiple user segments can see different layouts and entry points, all backed by the same front-end code.

Looking Ahead

We’re still early in this journey, but the direction is clear: the future of UI at Calfus is less about shipping static screens and more about building an engine that can compose the right interface for each moment.

Traditional, reusable components are not going away. If anything, they are more important than ever—they’re the foundation on which generative layouts stand. The difference is that instead of wiring every screen by hand, we’re teaching our systems how to assemble those components intelligently, within guardrails defined by our designers and engineers.

As we continue to invest in this stack, we expect to:

  • expand the catalog with richer, higher-level components,

  • refine policies that balance adaptability with predictability, and

  • open up more surfaces in the product to agent-driven layouts.

For teams considering a similar move, our advice is simple: start small and start constrained. Pick a single workflow where users have diverse needs, define a tight component catalog, and let an agent orchestrate the layout within strict bounds. The combination of solid engineering discipline and generative intelligence can unlock experiences that feel uniquely tailored - without sacrificing control.
