Nuclear waste clean-up steps up a gear
Washington River Protection Solutions (WRPS) is contracted by the US Department of Energy to manage clean-up of the Hanford site, a 586-square-mile area holding 56 million gallons of nuclear waste. Its mission is to reduce the time associated with clean-up, given that hoteling and operating costs add up to millions of dollars a day.
Since 2016, Twinn, formerly known as Lanner, has worked as part of the extended WRPS Mission Analysis Engineering Team. Together, we’ve put the operation at the vanguard of predictive simulation, using digital twins to support safe, efficient clean-up.
The first phase of our partnership was to create a digital twin ecosystem. The initial models, developed using our Witness Horizon predictive simulation software, have helped uncover previously unknown bottlenecks and direct millions of dollars of investment more efficiently.
The second phase of collaboration has focused on evolving WRPS’ digital twin capabilities to facilitate better, faster decision-making.
Stakeholders within WRPS and the Department of Energy were increasingly impressed with predictive simulation – and there was rising demand for modelling to be integrated into decision-making processes.
The team was asked to answer extremely complex questions. A single query about improving throughput could easily involve 80 scenarios. Preparing and checking the vast number of input files took hours, running the scenarios took days of computing time, and analysing the resulting data volumes took days more. A single round of experimentation could easily take a month to inform a decision.
The WRPS Mission Analysis Engineering Team and Twinn wanted to find innovative ways to speed up this process.
Together, we looked at two ways to boost modelling efficiency: reducing compute time and speeding up analysis. This led to a two-pronged approach: a bespoke cloud experimentation app, plus AI and machine learning for analysis.
We developed the bespoke app collaboratively using Witness.io, Twinn’s web service offering scalable simulation performance for mass experimentation. Running in the cloud can drastically reduce overall computing time because many scenarios can be executed in parallel on dedicated simulation cores. It also boosts flexibility and enhances business continuity: this was crucial during the Covid-19 pandemic, when the team were able to access the experimentation service remotely. Plus, it’s easy to scale capabilities as the digital twin ecosystem expands.
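As a rough illustration of why parallel execution collapses wall-clock time, the Python sketch below fans a batch of mock scenarios out across worker processes. It is not the Witness.io API; run_scenario and its one-second delay are hypothetical stand-ins for simulation runs that in reality take hours each.

```python
# Minimal sketch of parallel scenario execution, assuming a hypothetical
# run_scenario() function. Not the Witness.io API.
from concurrent.futures import ProcessPoolExecutor
import time

def run_scenario(scenario_id: int) -> dict:
    """Stand-in for one simulation run; real runs take hours, not seconds."""
    time.sleep(1)  # pretend this is hours of simulation work
    return {"scenario": scenario_id, "throughput": 100 + scenario_id}

if __name__ == "__main__":
    scenario_ids = range(80)  # a single throughput query could involve 80 scenarios
    start = time.perf_counter()
    # With N dedicated workers, wall-clock time is roughly 80 / N runs
    # instead of 80 sequential runs.
    with ProcessPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(run_scenario, scenario_ids))
    print(f"Ran {len(results)} scenarios in {time.perf_counter() - start:.1f}s")
```

The same batch that takes 80 time units sequentially finishes in roughly 10 with eight workers, which is the effect cloud simulation cores deliver at much larger scale.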
The AI and machine learning elements were developed in partnership with data science specialists from Royal HaskoningDHV. The team is now using AI and machine learning to analyse simulation outcomes and identify possible improvements via automatic bottleneck detection. They’re also working on ways to speed up and extend simulations with surrogate models based on fast AI approximations.
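To make the surrogate-model idea concrete, here is a minimal sketch of the general technique: train a fast regressor on past (scenario inputs, simulated outcome) pairs, then use it to screen candidate scenarios in milliseconds rather than hours. The data, parameter names, and choice of random forest are illustrative assumptions, not the team's actual pipeline.

```python
# A minimal surrogate-model sketch using scikit-learn; the data and model
# choice are hypothetical, not the WRPS/Royal HaskoningDHV pipeline.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical training data: each row is one past simulation run, with
# columns standing in for scenario parameters (e.g. transfer rate, crew size).
X = rng.uniform(0, 1, size=(500, 3))
y = 100 * X[:, 0] - 20 * X[:, 1] ** 2 + 5 * X[:, 2] + rng.normal(0, 1, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(X_train, y_train)
print(f"Held-out R^2: {surrogate.score(X_test, y_test):.3f}")

# Screen thousands of candidate scenarios cheaply; only the most promising
# few would then be confirmed with the full predictive simulation.
candidates = rng.uniform(0, 1, size=(10_000, 3))
best = candidates[np.argmax(surrogate.predict(candidates))]
print("Most promising candidate parameters:", best)
```

The design pay-off is that the expensive simulator is reserved for confirming the handful of scenarios the cheap approximation flags as promising.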
This is accelerating the iterative experimentation process, helping the team home in on blind spots and answer questions more quickly.