R2R (Run-to-Run) control is an iterative, learning-based control structure that continuously updates process conditions (recipes) by reflecting each run's outcome and the system state into the next run. In manufacturing environments with variation and gradual drift, R2R-based APC is widely used as an industry-standard approach to keep the process aligned with the target quality over time.

Feedback Control — Outcome-based correction

Feedback control adjusts the next run's conditions based on the measurements from the previous run. It computes the error between the target and the actual outcome, then updates the recipe to reduce that error.

This is not a simple "rule of thumb" tweak; it is an error-convergence mechanism that operates within designed control rules and constraints.
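As a concrete instance, the widely used EWMA (exponentially weighted moving average) R2R controller can be sketched as below. The linear model y = b * x + disturbance, the gain, and all names are illustrative assumptions for this sketch, not a specific tool's API:

```python
# Minimal sketch of a single-input EWMA feedback R2R controller.
# Assumed process model: y = b * x + disturbance (names are illustrative).

def make_ewma_controller(target, gain_b, lam=0.3, a0=0.0):
    """Return a closure that computes the next run's recipe from the last outcome."""
    state = {"a": a0}  # EWMA estimate of the process disturbance

    def next_recipe(last_x, last_y):
        # Feedback: update the disturbance estimate from the latest measurement.
        state["a"] = lam * (last_y - gain_b * last_x) + (1 - lam) * state["a"]
        # Invert the model so the predicted next output hits the target.
        return (target - state["a"]) / gain_b

    return next_recipe

# Example: "true" process y = 2x + 0.5, with the 0.5 offset unknown
# to the controller; target output is 10.
controller = make_ewma_controller(target=10.0, gain_b=2.0, lam=0.4)
x = 5.0
for run in range(8):
    y = 2.0 * x + 0.5      # outcome of the current run
    x = controller(x, y)   # feedback update for the next run
print(round(2.0 * x + 0.5, 3))  # output has converged close to the target 10.0
```

Each run's measurement nudges the disturbance estimate, and the recipe update drives the outcome error toward zero over successive runs, which is exactly the convergence behavior described above.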

Feedforward Control — State-aware proactive adjustment

Feedforward control proactively adjusts conditions before the run starts, reflecting the current system state and historical context. Rather than correcting after results appear, it shifts the starting point by accounting for predictable changes and disturbances.

Therefore, feedforward is not merely correction; it is state-aware predictive adjustment.
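The same idea can be shown as a pre-run adjustment. In this hedged sketch, a context variable (say, incoming-material thickness) predicts a known output shift, and the recipe is shifted in advance to cancel it; the model and all names are assumptions for illustration:

```python
# Sketch of a feedforward adjustment: pre-compensate the recipe before
# the run starts, using a predicted disturbance (illustrative model/names).

def feedforward_recipe(nominal_x, gain_b, predicted_shift):
    """Shift the starting recipe so a predicted output shift is cancelled."""
    return nominal_x - predicted_shift / gain_b

# Example: the current system state predicts a +0.6 shift in the output.
x0 = feedforward_recipe(nominal_x=5.0, gain_b=2.0, predicted_shift=0.6)
y = 2.0 * x0 + 0.6  # run the process; the predicted shift actually occurs
print(y)            # prints 10.0 -- the shift was cancelled before the run
```

Unlike feedback, no outcome error is needed here: the correction happens before any result appears, purely from the predicted state.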

Summary

  • Feedback: After-the-fact correction to reduce outcome error
  • Feedforward: Before-the-fact adjustment based on state and history
  • Together, these mechanisms continuously align the process to the target.
In short: R2R is not repeating a fixed recipe. It is a dynamic control framework that keeps updating the recipe by reflecting outcomes and system state.
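One run-to-run update combining both mechanisms can be sketched as follows. The linear model, the assumption that the next run's shift is predictable in advance, and every name here are illustrative, not a real APC API:

```python
# Sketch of one combined R2R update: EWMA feedback plus feedforward
# pre-compensation (assumed model: y = b * x + disturbance + known shift).

def r2r_step(x_prev, y_prev, shift_prev, shift_next,
             target, gain_b, dist_est, lam):
    """Return (next recipe, updated disturbance estimate)."""
    # Feedback: estimate the slow disturbance from the last outcome,
    # excluding the part already explained by feedforward on that run.
    dist_est = lam * (y_prev - gain_b * x_prev - shift_prev) + (1 - lam) * dist_est
    # Feedforward: pre-compensate the shift predicted for the next run.
    x_next = (target - dist_est - shift_next) / gain_b
    return x_next, dist_est

# Example: true process y = 2x + 0.5 + shift, target 10; the per-run
# shift is assumed known in advance (a feedforward input).
shifts = [0.0, 0.6, 0.6, -0.4, 0.0, 0.0, 0.0]
x, est = 5.0, 0.0
for k in range(6):
    y = 2.0 * x + 0.5 + shifts[k]                 # outcome of run k
    x, est = r2r_step(x, y, shifts[k], shifts[k + 1],
                      target=10.0, gain_b=2.0, dist_est=est, lam=0.4)
```

Feedback slowly learns the unknown 0.5 offset while feedforward cancels each predictable shift immediately, so the recipe keeps converging to the target even as conditions change run to run.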

Concept diagram

A quick view of how the feedback loop and feedforward inputs combine.

[Diagram] R2R-based APC (Feedback + Feedforward): the loop runs Run (process execution) → Measurement (outcome) → Controller (recipe update) → Next run. The feedback path feeds the measured outcome back for outcome-error correction; the feedforward path feeds system state and history in for state-aware proactive adjustment. Both paths combine into the updated recipe.

Cooking analogy — Not "following," but "converging"

Instant noodles come with instructions: a fixed amount of water and a fixed cooking time. In reality, the outcome still varies because the environment is never exactly the same -- heat intensity, cookware, and surrounding conditions all change.

So, naturally, we do two things:

  • Feedback: If the result was slightly off, we adjust next time.
  • Feedforward: If we know this environment needs a different approach, we adjust in advance.

It's not repeating the same recipe; it's adjusting until the result converges to what you want. R2R in semiconductor manufacturing follows the same principle -- continuously aligning conditions toward the target quality.

This analogy is meant to make the concept intuitive. In practice, real control is executed safely under process-specific rules and constraints.

Analogy diagram

A compact view of the structure of feedback versus feedforward in the analogy.

[Diagram] Analogy: a "noodle recipe" is not a fixed rule -- it is a process of converging to the desired taste. Feedback (outcome-based): Recipe → Cook → Taste/Result, then correct the cooking conditions next time ("adjust after the outcome"). Feedforward (state-based): State/Context → Conditions → Cook, adjusting proactively before cooking ("adjust before you start"). Summary: feedback = outcome-based correction, feedforward = state-based proactive adjustment -- together they drive convergence to the target.