Digital twins. Who isn't using the phrase nowadays? In 2014, use of the phrase started exploding (check Google Ngram and you'll see the hockey-stick curve I'm referring to). But while everyone is talking about digital twins, only a few know what the term means and what it can offer in terms of practical value for the water industry.
While multiple definitions are still evolving, for now I would describe a digital twin (DT) as a digital version of a real system or process that offers additional value compared to the situation without the digital component. That 'value' can mean different things, as you'll discover later in this article.
In my opinion, there's paralysis by overanalysis and a lack of inspiring practical examples, especially on the 'treatment side': the treatment of wastewater, the production of process or drinking water, and water reuse. We have seen quite a few DT examples on the 'network side', aimed at solving leakages or saving pumping energy, but these were almost exclusively data-driven (e.g., AI) applications.
There is also a lot of confusion, and few people distinguish between data-driven and mechanistic models, for example. Data-driven models are black-box approaches: an algorithm is trained on available (big) data sets. Mechanistic models are based on process understanding and are often forgotten amid the AI buzz. An example is the family of activated sludge models (ASM), which have been applied in wastewater treatment plant design since the late 1980s. Another is AMOZONE, our company's mechanistic model for ozonation and advanced oxidation. Mechanistic models can be applied generically across plants, and their parameters have a physical, chemical or biological meaning. They can therefore be interpreted by process people, which accelerates process understanding and builds trust. Our laws of physics, chemistry and biology are too often forgotten. While data-driven approaches will play their role, a major barrier I see is bridging the gap between data science and process engineering. Only that bridge will lead to practical outcomes.
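To make the distinction concrete, here is a minimal Python sketch contrasting the two approaches for a simplified ozonation case. The mechanistic side uses the standard second-order rate law for micropollutant abatement by ozone and OH radical exposure; the data-driven side simply fits a regression to historical plant data. All rate constants, exposures and data points are illustrative placeholders, not taken from AMOZONE or any real plant.

```python
import numpy as np

# --- Mechanistic: second-order kinetics with interpretable parameters ---
# ln(C/C0) = -(k_O3 * O3_exposure + k_OH * OH_exposure)
# Rate constants have a chemical meaning and are transferable across plants.

def mechanistic_removal(o3_exposure, oh_exposure, k_o3=2.0e4, k_oh=5.0e9):
    """Predict the removed fraction of a trace organic compound.

    o3_exposure: integral of dissolved ozone over time (M*s), illustrative
    oh_exposure: integral of OH radical concentration over time (M*s), illustrative
    k_o3, k_oh:  illustrative second-order rate constants (1/(M*s))
    """
    ln_ratio = -(k_o3 * o3_exposure + k_oh * oh_exposure)
    return 1.0 - np.exp(ln_ratio)  # fraction removed

# --- Data-driven: black-box fit on historical data ---
# The model only sees inputs and outputs; its coefficients carry no
# physical meaning and may not extrapolate beyond the training range.

dose = np.array([0.5, 1.0, 1.5, 2.0, 2.5])          # ozone dose (mg/L), made-up
removal = np.array([0.22, 0.41, 0.58, 0.71, 0.80])  # observed removal, made-up
coeffs = np.polyfit(dose, removal, deg=2)           # quadratic black-box fit

def data_driven_removal(new_dose):
    return np.polyval(coeffs, new_dose)

print(f"Mechanistic prediction:        {mechanistic_removal(1e-4, 1e-11):.0%}")
print(f"Data-driven at 1.8 mg/L dose:  {data_driven_removal(1.8):.0%}")
```

The point of the sketch: a process engineer can read and challenge every parameter in the mechanistic function, while the fitted coefficients of the black-box model can only be validated against more data.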
We have been building a mechanistic digital twin for the oxidation process of a Dutch drinking water utility, a unit process chosen because of urgent needs. The digital twin predicts in real time the removal of individual trace organics and the formation of undesired by-products, variables that cannot be measured with an onsite sensor. In addition, the utility is using the DT to run 'what-if' scenarios to save energy and better prepare for the future. For example: how would our treatment process react to blending different surface water sources? Finally, the DT will be used to support a future plant extension. Going live in 2023, this DT will be the first of its kind in the world.
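The article doesn't detail how the utility runs its scenarios, but conceptually a 'what-if' run simply re-evaluates the calibrated model under hypothetical inputs instead of live sensor data. A minimal sketch, reusing the mechanistic_removal function from the earlier example, with entirely made-up exposure values for hypothetical source-water blends:

```python
# Hypothetical 'what-if' runs: how might blending two surface water
# sources affect trace-organic removal at a fixed ozone dose?
# The exposure values per scenario are illustrative placeholders.

scenarios = {
    "100% source A": (1.2e-4, 1.2e-11),  # (O3 exposure, OH exposure) in M*s
    "70/30 blend":   (1.0e-4, 0.9e-11),
    "50/50 blend":   (0.8e-4, 0.7e-11),
}

for name, (o3_exp, oh_exp) in scenarios.items():
    predicted = mechanistic_removal(o3_exp, oh_exp)
    print(f"{name}: predicted removal {predicted:.0%}")
```

Because the parameters are mechanistic, the same model can be queried for conditions the plant has never experienced, which is exactly what blending studies and extension planning require.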
Even though we could have started building a DT for the whole treatment train, we started small. That's important to understand: at this early stage of DTs, there is no need to establish a complete roadmap upfront. What matters most is getting started with a concrete case focused on acute needs. Module by module, the DT can then be extended while demonstrating its value.
The value of DTs can be summarized in five categories. The first and most obvious one is smart and proactive operation, with the goal of balancing cost, performance and carbon footprint in a changing world. A second one is fast and efficient training. We see lots of potential in disruptive training programs through models combined with augmented or virtual reality. A third is preparing for change, including climate impact. The fourth category is better communication and documentation. A DT serves as a database for operational experiences and improves with time. And finally, a DT enables informed and fast decision making.
We need to be practical: basic education around DTs is needed, and real examples need to be shown. The business case for DTs has never been stronger, and they will be a key enabler for a bright future.