
Traditional DSP Optimisation Is Broken—Here’s Why
Design of Experiments (DoE) is the industry norm—but it’s no longer enough.
The industry-standard for DSP development is Design of Experiments (DoE)—a statistical method used to screen and optimise process variables. This approach has been widely adopted because it offers a structured framework for experimentation. It allows process engineers to test multiple variables systematically and understand how they affect outcomes such as yield, purity, and throughput.
But while DoE is powerful, it comes with major limitations:
- It's empirical, not predictive: DoE maps correlations, but it doesn't offer mechanistic insight.
- It's resource-intensive: optimising just 8 variables at 2 levels each in a full factorial design requires 2^8 = 256 experiments.
- It's piecemeal: each unit operation is typically optimised independently, ignoring interactions with upstream or downstream steps.
- It's slow: lagging assays, equipment constraints, and material bottlenecks delay iterations.
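The experiment-count explosion behind the "resource-intensive" point is simple arithmetic, which a few lines of Python make concrete (a minimal sketch; `full_factorial_runs` is an illustrative helper, not part of any DoE package):

```python
# Run count for a full factorial design: levels ** k for k variables.
def full_factorial_runs(n_vars: int, levels: int = 2) -> int:
    """Number of runs needed to test every combination of factor levels."""
    return levels ** n_vars

for k in (4, 6, 8):
    print(f"{k} variables at 2 levels -> {full_factorial_runs(k)} experiments")
# 8 variables at 2 levels -> 256 experiments
```

Doubling the variable count squares the workload, which is why screening designs exist at all.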
A Closer Look at DoE Methods in DSP
- Full Factorial Designs - Tests every possible combination of high and low levels for each variable. While it offers complete coverage of variable interactions, it becomes infeasible as the number of variables grows: 6 variables at 2 levels each already require 64 experiments.
- Fractional Factorial Designs - To reduce the experimental burden, fractional designs test only a carefully chosen subset of combinations. They can still detect main effects and some interactions, but they sacrifice resolution and risk missing subtle but important variable interactions.
- Response Surface Methodology (RSM) - RSM builds a more detailed model of the optimal region using designs such as Central Composite or Box-Behnken, which fit a polynomial regression model describing the response landscape. Although more efficient than full factorials, RSM still requires dozens of experiments and careful interpretation.
- Plackett–Burman Screening - Often used in early-stage development, this approach quickly identifies the most influential factors among many. However, it ignores interactions and non-linearities, making it a coarse tool suitable only for narrowing scope.
Each of these methods has merit, but they all share common downsides: they operate on localised datasets, assume static interactions, and must be repeated at each development stage.
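To make the full-versus-fractional trade-off concrete, here is a generic sketch (not any vendor's DoE tool; `full_factorial` and `half_fraction` are illustrative names). The half fraction uses the standard defining relation I = AB…K: keep only the runs whose coded levels multiply to +1, halving the run count while confounding high-order interactions:

```python
from itertools import product

def full_factorial(k):
    """All 2**k runs for k factors at coded levels -1/+1."""
    return list(product((-1, 1), repeat=k))

def half_fraction(k):
    """2**(k-1) half fraction via the defining relation I = AB...K:
    keep only runs whose coded levels multiply to +1."""
    runs = []
    for run in full_factorial(k):
        sign = 1
        for level in run:
            sign *= level
        if sign == 1:
            runs.append(run)
    return runs

full = full_factorial(6)
half = half_fraction(6)
print(len(full), len(half))  # 64 32
```

The fraction costs half the experiments, but any effect is now indistinguishable from its aliased higher-order interaction, which is exactly the "sacrificed resolution" described above.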
The Real-World Challenge
Beyond the theoretical structure of DoE lies a much messier reality—especially when scaling DSP across development stages. Challenges appear at every turn, and their severity often depends on your company’s maturity, resources, and internal capabilities. But across the board, common issues emerge:
- Experiments are costly and limited by material, time, and capacity.
- Assays are slow, imprecise, and expensive, creating lag between data generation and decisions.
- Access to representative feedstock, especially early in development, is difficult, particularly when using wave bags or other non-production systems.
- Capital expense and semi-consumable costs (membranes, resins, etc.) are high, even if you use only a fraction of the equipment's lifecycle.
- Tech-transfer surprises abound, especially when models don't scale linearly or knowledge doesn't transfer well between teams or facilities.
- Lead times for consumables, ingredients, and CDMO access are increasingly long.
- Data integration is painful: vendor-specific hardware and proprietary control systems lock up useful datasets, making holistic optimisation hard to perform.
The real challenge? DSP is inherently interdisciplinary. It spans biology, chemistry, engineering, and data science. The pain points often appear at the intersections—where disciplines meet, but knowledge doesn’t flow. And because scale-up is nonlinear, even small errors in process assumptions can compound dramatically.
When problems occur at pilot or manufacturing scale, you need a cool head and deep knowledge—but few teams have broad enough expertise across every unit operation to solve issues quickly.
What’s needed now is a shift: away from disconnected, empirical trial-and-error, toward integrated, adaptive, and predictive optimisation methods that reflect the complexity of modern biomanufacturing. Smarter methods now exist—namely, hybrid models that combine the rigour of mechanistic modelling with the flexibility of AI/machine learning. These models learn from a small number of real experiments to create a virtual representation of your process. Unlike DoE, which operates in disconnected silos, hybrid models can simulate entire DSP workflows holistically, allowing for faster, more accurate, and more scalable optimisation across multiple variables and process steps.
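The hybrid idea can be sketched in a few lines: a mechanistic backbone predicts what the physics says it should, and a data-driven model fitted to a handful of real experiments corrects the residual. Everything below is a deliberately simplified, hypothetical illustration; the process model, numbers, and function names are invented for the sketch, not taken from any real platform:

```python
import numpy as np

# Stand-in for a real lab run: step yield vs. load density x (g/L resin),
# including behaviour the simple mechanistic model fails to capture.
def run_experiment(x):
    return 0.95 * np.exp(-0.03 * x) - 0.0001 * x**2

# Mechanistic backbone: first-order capacity decay (known but rough physics).
def mechanistic_yield(x):
    return 0.95 * np.exp(-0.03 * x)

# A small number of real experiments, as the hybrid approach requires.
x_train = np.array([5.0, 15.0, 25.0, 35.0, 45.0])
y_train = run_experiment(x_train)

# Data-driven half: fit the residual (measured minus mechanistic) with a
# quadratic least-squares model.
residual = y_train - mechanistic_yield(x_train)
X = np.column_stack([np.ones_like(x_train), x_train, x_train**2])
coef, *_ = np.linalg.lstsq(X, residual, rcond=None)

def hybrid_yield(x):
    """Mechanistic prediction plus the learned residual correction."""
    return mechanistic_yield(x) + coef[0] + coef[1] * x + coef[2] * x**2

# Query a condition that was never run in the lab.
x_new = 30.0
print(hybrid_yield(x_new), run_experiment(x_new))
```

In production settings the residual model would be a neural network or Gaussian process over many variables rather than a quadratic, but the division of labour is the same: physics supplies extrapolation, data supplies correction, and far fewer wet-lab runs are needed than a factorial design would demand.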

