Reducers
Reducers control how parallel branches in a workflow merge their state when they fan in. When two or more branches write to the same field, the Foreman needs to know whether to add the values, append them to a list, take their union, or just keep the last write. Reducers answer that question.
In Microbus, reducers are picked automatically by field-name prefix. There is no per-branch configuration, and no decision needed at fan-in time - the field name itself encodes the merge behavior.
The Fan-In Problem
When a graph fans out into parallel branches and those branches converge on a shared successor, every branch writes its piece of state. If two branches write to the same field, the framework has to combine those writes somehow.
Consider this graph:
```go
graph.AddTransition(submit, verifyCredit)
graph.AddTransition(submit, verifyIdentity)
graph.AddTransition(submit, verifyEmployment)
graph.AddTransition(verifyCredit, decide)
graph.AddTransition(verifyIdentity, decide)
graph.AddTransition(verifyEmployment, decide)
```

Each verification task writes to a `failures` field if it failed:
```go
func (svc *Service) VerifyCredit(ctx context.Context, flow *workflow.Flow, score int) (failures []string, err error) {
	if score < 600 {
		return []string{"low credit score"}, nil
	}
	return nil, nil
}
```

When `decide` runs, it expects to see all the failures from all three verifications. But three branches are about to write to the same `failures` field. Without a reducer, fan-in would just pick one (last write wins) and silently lose the others.
The Four Built-In Reducers
Microbus ships with four reducers, selected by the prefix of the field name:
| Prefix | Reducer | Behavior | Example |
|---|---|---|---|
| `sum*` | numeric add | Sums numeric writes from each branch. | `sumScore`, `sumFailures` |
| `list*` | append | Concatenates array writes in branch-completion order; duplicates kept. | `listMessages`, `listEvents` |
| `set*` | union or merge | Array → element union (de-duplicated); object → field-by-field merge. | `setTags`, `setUsers` |
| anything else | replace | Last write wins. | `decision`, `status` |
Renaming failures to listFailures (or setFailures if the verifications might produce duplicate strings) is enough to fix the fan-in problem in the example above. No graph change, no per-task configuration.
The Naming Convention
Reducers are recognized by a strict prefix-then-uppercase rule:
- `sumScore` → matches `sum*` (the character after the prefix is uppercase).
- `sumscore` → does not match; `sum` is treated as part of the field name.
- `summary` → does not match. The character after `sum` is `m`, lowercase.
- `Listings` → does not match. The prefix is case-sensitive (must be lowercase `list`).
The rule keeps everyday English words (summary, listening, setup) from being interpreted as reducer-managed fields. The flip side: if you genuinely want a reducer-managed field, the field name must be camelCase with the prefix as the first lowercase token.
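The prefix-then-uppercase rule is mechanical enough to sketch in a few lines of Go. This is an illustration of the rule as described above, not the framework's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// reducerFor sketches prefix-based reducer selection: a field matches a
// reducer prefix only if the prefix is followed by an uppercase character.
func reducerFor(field string) string {
	for _, prefix := range []string{"sum", "list", "set"} {
		rest := strings.TrimPrefix(field, prefix)
		// rest != field: the prefix was present.
		// rest != "": the prefix isn't the whole field name.
		if rest != field && rest != "" && unicode.IsUpper(rune(rest[0])) {
			return prefix
		}
	}
	return "replace"
}

func main() {
	fmt.Println(reducerFor("sumScore")) // sum
	fmt.Println(reducerFor("summary"))  // replace
	fmt.Println(reducerFor("Listings")) // replace
	fmt.Println(reducerFor("setTags"))  // set
}
```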
The Delta Rule
Reducers operate on the delta each branch produces, not on the full accumulated value. This is the rule that bites in practice.
Wrong:
```go
func (svc *Service) VerifyCredit(ctx context.Context, flow *workflow.Flow, listFailures []string, score int) (listFailuresOut []string, err error) {
	if score < 600 {
		return append(listFailures, "low credit score"), nil
	}
	return listFailures, nil
}
```

This task reads the existing `listFailures`, appends to it, and returns the appended array. Across three parallel branches, fan-in receives three arrays that already contain each other's contents. The `list*` reducer then appends them all - producing duplicates of every entry up to nine times.
Right:
```go
func (svc *Service) VerifyCredit(ctx context.Context, flow *workflow.Flow, score int) (listFailures []string, err error) {
	if score < 600 {
		return []string{"low credit score"}, nil
	}
	return nil, nil
}
```

Each branch writes only its own contribution. The reducer combines them. The branches do not need to know what the other branches produced.
The same rule applies to sum* (return the increment, not the running total) and set* (return the elements you want added, not the existing set).
set* Semantics
set* behaves differently depending on the value type, by design - so a single reducer name can handle both element-set and field-merge use cases.
Array Values: Element Union
When all branches write arrays, set* does a de-duplicated union:
```go
// Branch A returns: setTags = ["red", "blue"]
// Branch B returns: setTags = ["blue", "green"]
// Fan-in produces:  setTags = ["red", "blue", "green"]
```

The de-duplication is by value comparison. For arrays of primitives (strings, numbers, booleans) this is exact-match dedup. For arrays of objects, two objects are equal only if every field is equal.
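The union behavior can be simulated outside the framework. This hypothetical `union` helper mirrors the described semantics for string elements (the framework's own reducer is not shown here):

```go
package main

import "fmt"

// union sketches set*'s array semantics: concatenate the branch writes,
// de-duplicating by value, preserving first-seen order.
func union(branches ...[]string) []string {
	seen := map[string]bool{}
	var out []string
	for _, branch := range branches {
		for _, v := range branch {
			if !seen[v] {
				seen[v] = true
				out = append(out, v)
			}
		}
	}
	return out
}

func main() {
	branchA := []string{"red", "blue"}
	branchB := []string{"blue", "green"}
	fmt.Println(union(branchA, branchB)) // [red blue green]
}
```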
Object Values: Field-by-Field Merge
When all branches write objects, `set*` merges field-by-field:

```go
// Branch A returns: setUsers = {"alice": {...}, "bob": {...}}
// Branch B returns: setUsers = {"bob": {...}, "carol": {...}}
// Fan-in produces:  setUsers = {"alice": {...}, "bob": {...}, "carol": {...}}
```

If both branches write the same key, last write wins for that key. Inner objects are not recursively merged - merge is one level deep.
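The object behavior amounts to a one-level map merge. A minimal sketch, assuming last write wins on colliding keys as described (illustration only, not the framework's code):

```go
package main

import "fmt"

// mergeObjects sketches set*'s object semantics: a one-level,
// field-by-field merge where later writes overwrite colliding keys.
// Inner values are copied as-is, never recursively merged.
func mergeObjects(branches ...map[string]any) map[string]any {
	out := map[string]any{}
	for _, branch := range branches {
		for k, v := range branch {
			out[k] = v // last write wins on collision
		}
	}
	return out
}

func main() {
	branchA := map[string]any{"alice": 1, "bob": 2}
	branchB := map[string]any{"bob": 3, "carol": 4}
	merged := mergeObjects(branchA, branchB)
	fmt.Println(len(merged), merged["bob"]) // 3 3
}
```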
Mixed Types
A branch writing an array and another writing an object on the same set* field is undefined. Don’t do this. The framework treats the field as either array-set or object-set per flow based on the first write; subsequent writes of the wrong type fail the step.
SetReducer Escape Hatch
Some field names cannot follow the prefix convention - for example, when the workflow’s outputs are consumed by an external system that dictates the schema. graph.SetReducer overrides the prefix-based selection:
```go
graph.SetReducer("failures", workflow.ReducerList)
graph.SetReducer("totalScore", workflow.ReducerSum)
graph.SetReducer("tags", workflow.ReducerSet)
```

Use this sparingly. The prefix convention is a readability feature: reading `listFailures` makes the merge behavior obvious; `failures` with a `SetReducer` somewhere upstream does not. New code should prefer the convention.
Edge Cases
Empty Branches
A forEach transition over an empty array spawns zero branches. No writes happen on that field, and fan-in proceeds with whatever the upstream task wrote (possibly nothing). The reducer does not get invoked at all.
nil Writes
A branch that returns nil for a reducer-managed field is a no-op for that field. The reducer does not receive a nil to combine with the existing value. Useful for branches that conditionally contribute - return nil to skip.
```go
func (svc *Service) VerifyCredit(ctx context.Context, flow *workflow.Flow, score int) (listFailures []string, err error) {
	if score < 600 {
		return []string{"low credit score"}, nil
	}
	return nil, nil // No-op for fan-in.
}
```

Ordering in list*
`list*` appends in branch-completion order, which is non-deterministic: branches finish in whatever order the foreman's workers complete them. If you need a stable order, sort the result downstream of the fan-in:
```go
func (svc *Service) Decide(ctx context.Context, flow *workflow.Flow, listFailures []string) (decision string, err error) {
	sort.Strings(listFailures) // make output stable for downstream consumers
	if len(listFailures) > 0 {
		return "rejected", nil
	}
	return "approved", nil
}
```

Nested Subgraphs
When a subgraph writes to a reducer-managed field, the same rules apply - the subgraph’s final state is treated as one branch contribution from the parent’s perspective. The subgraph itself can have its own reducer fan-ins internally; they resolve before the subgraph’s state is merged back into the parent.
Reducer Selection Cannot Vary by Branch
The reducer for a field is fixed by the field name (or by SetReducer) for the lifetime of the flow. Two branches writing the same field always go through the same reducer - there’s no “this branch should sum, that branch should append” mode.
See Also
- Building Agentic Workflows - the fan-out and fan-in patterns where reducers come into play.
- Workflows overview - conceptual background on tasks, graphs, and state.
- Package `workflow` - the `Reducer` type and the constants used by `SetReducer`.
- Package `coreservices/foreman` - the orchestration engine that applies reducers at fan-in time.