From weeks to days to hours … Deploying & Running Real-Time Applications Efficiently

This piece is based on an edited transcript of our webcast on “Deploying & Running Real-Time Applications Efficiently”, a conversation with Sigurd Jevne, Solutions Delivery Manager.

Sigurd, in a nutshell, can you explain what you and your team do at Turbulent Flux?
My name is Sigurd Jevne, and I am Solutions Delivery Manager at Turbulent Flux. I am part of the Solutions Delivery Team, which is in charge of all customer-facing activities in the company. We are responsible for deploying our real-time hybrid solutions; “hybrid” means a combination of physics-based simulation models and machine learning models. After the solutions are deployed, the delivery team ensures that our customers can use our applications successfully over the lifetime of the field. One example of an application we deploy is our virtual flow meter – the FLUX VFM.

Before you can deploy our solutions, you certainly have to gather quite a bit of data. What is required from an operator in terms of data and technology infrastructure?
The sensor data we need from our customers are pressure, temperature and valve positions; no additional sensors need to be installed. Our customers must be able to stream real-time sensor data from a data platform or historian database through an API. This is generally not an issue, since our FLUX API supports all major data platform providers. Then, to build the simulation models of the wells or the pipelines, all we need is standard documentation that every customer has available.
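
To make the streaming setup concrete, here is a minimal sketch of a client polling a historian for the three sensor tags and forwarding the latest values. The endpoint, tag names, and payload shape are illustrative assumptions, not the actual FLUX API.

```python
# Minimal sketch of streaming sensor data from a historian. The endpoint,
# tag names, and payload shape are assumptions for illustration only.
import time
import requests

HISTORIAN_URL = "https://historian.example.com/api/v1/values"  # hypothetical endpoint
TAGS = ["WHP", "WHT", "CHOKE_POS"]  # wellhead pressure, temperature, valve position

def poll_latest(tag: str) -> dict:
    """Fetch the most recent value recorded for a sensor tag."""
    resp = requests.get(HISTORIAN_URL, params={"tag": tag, "limit": 1}, timeout=10)
    resp.raise_for_status()
    return resp.json()

while True:
    sample = {tag: poll_latest(tag) for tag in TAGS}
    print(sample)   # in practice, forwarded to the VFM's real-time input stream
    time.sleep(60)  # poll once per minute
```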

Once you have the data, what do you do with it?
Once all the data and documentation are received, and the connection to the data platform is in place, we start deploying the solutions. This means the delivery team first builds the simulation models in our web-based solution, while the data scientists build the machine learning models. Since we have access to historical sensor data, the next step is to tune the simulation models; we generally tune the friction and heat transfer to match the field sensor data. Our data scientists then train the machine learning models on the historical sensor data. After this, we are basically ready to deliver outputs from our FLUX VFM.
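
To illustrate the machine learning step, here is a minimal sketch of training a data-driven flow model on historical sensor data. The file name, column names, and model choice are assumptions for demonstration, not Turbulent Flux's actual pipeline.

```python
# Illustrative sketch: train a regression model to estimate flow rate from
# historical sensor data. Column names and model choice are assumptions,
# not the actual Turbulent Flux pipeline.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("well_history.csv")  # hypothetical historian export
features = df[["wellhead_pressure", "wellhead_temperature", "choke_position"]]
target = df["measured_flow_rate"]

# Keep the time order when splitting, since this is time-series data
X_train, X_test, y_train, y_test = train_test_split(
    features, target, test_size=0.2, shuffle=False
)
model = GradientBoostingRegressor().fit(X_train, y_train)
print(f"Hold-out R^2: {model.score(X_test, y_test):.3f}")
```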

How long is the process from receipt of data to actually deploying a real-time FLUX VFM?
Deploying and calibrating a VFM for a single well takes us only two to three days, and it is undertaken by one delivery engineer with some assistance from a data scientist. It has been interesting to observe how the deployment time has gradually decreased, thanks to continued development of our software over the last few years and an increasing level of automation.

Talking about automation, what exactly is automated when it comes to deploying VFMs at Turbulent Flux?
Our development team has built a range of tools that automate large parts of the work around tuning simulation models and training machine learning models. Our long-term ambition is to also automate the deployment of our solutions to the point where users simply structure their input data, click a button, and all software modules are configured automatically. This gives the operator great independence.

How do you validate and visualize the results?
As part of the deployment phase, we validate the VFM outputs against reference measurements provided by our customers, such as flow measurements at the test separator or outputs from multi-phase meters. During operations, the VFM outputs are validated against well tests and, if necessary, the VFMs are re-tuned.
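
As a simple illustration of this validation step, the sketch below compares a VFM estimate with a well-test reference and flags the well for re-tuning when the relative deviation exceeds a tolerance. The 5 % threshold is an assumed example value, not a Turbulent Flux specification.

```python
# Illustrative sketch: flag a well for re-tuning when the VFM estimate
# deviates too far from a reference measurement. The tolerance is an
# assumed example value.
def needs_retune(vfm_rate: float, reference_rate: float,
                 tolerance: float = 0.05) -> bool:
    """Return True if the relative deviation exceeds the tolerance."""
    deviation = abs(vfm_rate - reference_rate) / reference_rate
    return deviation > tolerance

# Example: the VFM estimates 1250 Sm3/d, the well test measured 1180 Sm3/d
print(needs_retune(vfm_rate=1250.0, reference_rate=1180.0))  # True (~5.9 % off)
```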

How do we make the tuning process efficient?
Tuning the simulation models is to a large extent automated, and we are continuously taking steps on the software side to automate it further. We use our FLUX Optimizer tool for tuning the models, which makes the process very efficient. First of all, setting up tuning simulations through the optimizer is simple and quick. Furthermore, the optimizer performs multivariate optimization, which means we can tune all necessary tuning parameters in a single run.
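
To show what tuning all parameters in a single run can look like, here is a minimal sketch using generic least-squares optimization. The simulator, measurements, and bounds are hypothetical stand-ins; this is not the FLUX Optimizer itself.

```python
# Illustrative sketch of multivariate tuning: adjust friction and heat-transfer
# multipliers together so simulated values match field measurements.
# `run_simulation` is a hypothetical stand-in for the physics model.
import numpy as np
from scipy.optimize import least_squares

measured = np.array([85.0, 62.0])  # e.g. pressure [bara] and temperature [degC]

def run_simulation(friction_mult: float, heat_mult: float) -> np.ndarray:
    """Hypothetical simulator returning [pressure, temperature] at a sensor."""
    return np.array([80.0 * friction_mult, 55.0 + 8.0 * heat_mult])

def residuals(params: np.ndarray) -> np.ndarray:
    return run_simulation(*params) - measured

# Tune both parameters in one run, within physically plausible bounds
result = least_squares(residuals, x0=[1.0, 1.0], bounds=([0.5, 0.5], [2.0, 2.0]))
print(f"friction: {result.x[0]:.3f}, heat transfer: {result.x[1]:.3f}")
```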

On a different topic, our FLUX VFM is self-adjusting. What does self-adjustment actually mean, and how does it impact your workflows?
The FLUX VFM is self-adjusting because it applies a combination of machine learning and numerical optimization methods to calculate some of its own inputs. These inputs are typically the reservoir flow model inputs: reservoir pressure, temperature, gas-oil ratio and water cut. The calculations are fully automated, and the self-adjustment typically runs once or twice per day. This way, the VFM keeps itself up to date with changing reservoir conditions over time, with very limited intervention from us in the delivery team.
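
As a rough sketch of this cadence, the loop below re-estimates the reservoir inputs and updates the VFM configuration twice per day. The estimation function is a hypothetical placeholder for the machine learning and optimization step described above.

```python
# Illustrative sketch of the self-adjustment cadence. The estimation function
# is a hypothetical placeholder returning fixed example values.
import time
from datetime import datetime

def estimate_reservoir_inputs(recent_data) -> dict:
    """Hypothetical: re-estimate reservoir pressure, GOR and water cut."""
    return {"reservoir_pressure": 210.0, "gor": 95.0, "water_cut": 0.32}

def self_adjust(vfm_config: dict, recent_data) -> dict:
    updates = estimate_reservoir_inputs(recent_data)
    vfm_config.update(updates)
    print(f"{datetime.now().isoformat()} self-adjusted: {updates}")
    return vfm_config

# Run twice per day, e.g. from a scheduler or a simple loop
while True:
    self_adjust(vfm_config={}, recent_data=None)
    time.sleep(12 * 60 * 60)
```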

So where do you actually still intervene in the deployment and running of the VFMs? In other words, how do you divide responsibilities within the team when you are working on a specific asset? How many people are involved?
We divide the work between the deployment phase and the operational phase. During the deployment phase, our customers interact with a project team: a project manager and a technical lead. The technical lead is typically an experienced engineer who is responsible for the quality of the delivery and for leading the technical team. For large deployments, a typical team consists of two to three engineers and scientists. Once the solutions are deployed and in the operational phase, the technical lead typically becomes the customers' point of contact, helping to monitor and maintain the solution and providing training. They also facilitate communication between the customer and our product and development team. In this way, we ensure the success of the solution, and of the customer, for the lifetime of its use.

Can you expand a bit on the next steps that we are taking, short-term to medium-term, in terms of automating the deployment and running of our applications? And how do we think about creating even more efficiencies here?
We have to a large degree automated the tuning of the VFMs; the next step is to automate this process fully. Once that is in place, tuning simulations will be triggered automatically when new reference data becomes available, and the users can evaluate the results of the tuning simulations before pushing the update into production. This is one improvement that will be made available later this year. Another improvement is to further automate the retraining of the machine learning models. This means that as new data becomes available, and provided the machine learning model actually needs retraining, it will be retrained automatically; the user can then evaluate the results and push the update into the solution.
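
To illustrate the conditional retraining idea, here is a minimal sketch that retrains only when new reference data shows the model has drifted, and holds the candidate for user review. It assumes a scikit-learn-style estimator; the names and the drift threshold are assumptions for demonstration.

```python
# Illustrative sketch of conditional retraining: retrain only when performance
# on newly arrived reference data has degraded, then hold the candidate for
# user review. Assumes a scikit-learn-style estimator; the threshold is an
# assumed example value.
def maybe_retrain(model, X_new, y_new, drift_threshold: float = 0.8):
    """Return (model, status) after checking performance on new reference data."""
    score = model.score(X_new, y_new)  # R^2 on the new reference data
    if score >= drift_threshold:
        return model, "no retraining needed"
    candidate = type(model)(**model.get_params()).fit(X_new, y_new)
    return candidate, "candidate ready for user review before production"
```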

Lastly, what is our long-term vision on optimizing the solution?
Our vision is to provide solutions that are scalable, automated and user-friendly. We would like to enable our customers to perform the deployment and monitoring of the solutions themselves. This means that gradually, our role as a delivery team will shift from deploying the solutions to providing training to our customers, and ensuring that they are successful in deploying and using our solutions.

Thank you for your insight, Sigurd.
