This piece is based on an edited transcript of our webcast on “Real-time Applications that Self-Adjust”, a conversation between Darren Mansfield, Business Development, and CTO Lars Wollebæk.
Lars, tell me more about what you envisioned when building our proprietary FLUX Simulator, and please expand on how you have built the tool!
First of all, we have years of experience in multiphase flow intelligence and simulation. We know how the available simulators have been used in the oil and gas industry – most often for engineering studies. In other words, it has been the de facto standard to use such tools for engineering projects and troubleshooting.
Flow simulators never really gained traction within operations. This is where we saw the gap! We saw that these simulators had the potential for real-time analysis, which would give the operator greater insight for monitoring and optimizing production. That said, we knew we needed to rethink the philosophy of how the technology is designed in order to get it into day-to-day operations. Traditionally, flow simulators were very much a tool for specialists – something you can only use after spending years studying and working with such tools. We needed to change this perception!
We wanted to build a simulator that was a lot simpler to consume – something you can set up and run without the experience required for traditional flow assurance tools.
In addition, we saw a benefit in using data-driven models within the system, to scale the tools from both an applicability and a technology perspective. All in all, this was the outset we had when we started.
Lastly, we wanted to build something we envisioned as a single source of truth. For us this means that you can always go back to the same model and run or re-run your simulations. We saw a huge bottleneck between the asset managers who needed simulations and results, and the flow assurance team that had to perform those simulations.


We describe our tools as being close to plug and play – what exactly does this mean?
We initially run some configuration to set up the system. The system is designed around the philosophy that you can subscribe to essentially any type of data and connect it wherever you think it makes sense. That data is then used either as boundary conditions for the simulations or as input data. To a large extent it is very much plug and play, at least once the initial configuration has been done. We have built this in a way that makes it as robust as possible – so that it will always be up and running, always be maintained, and always give you the results you are after. That is why we focus on maintaining the right balance of complexity and usability to make this happen.
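To make this concrete, here is a minimal sketch of what such a subscription mapping could look like. The tag names, roles and structure are purely hypothetical illustrations, not the actual FLUX configuration format.

```python
# Hypothetical sketch: map live sensor tags to the roles they play in a simulation,
# so any subscribed signal can act either as a boundary condition or as plain input.

sensor_subscriptions = {
    # tag name in the client's data source -> role in the simulation (illustrative names)
    "WELL_A/WHP": {"quantity": "pressure", "use_as": "boundary_condition"},
    "WELL_A/WHT": {"quantity": "temperature", "use_as": "boundary_condition"},
    "WELL_A/CHOKE_OPENING": {"quantity": "choke_opening", "use_as": "input"},
}

def build_simulation_inputs(latest_values: dict) -> dict:
    """Translate the latest sensor readings into simulator inputs."""
    result = {"boundary_conditions": {}, "inputs": {}}
    for tag, spec in sensor_subscriptions.items():
        value = latest_values.get(tag)
        if value is None:
            continue  # robustness: skip tags with no fresh reading
        bucket = "boundary_conditions" if spec["use_as"] == "boundary_condition" else "inputs"
        result[bucket][spec["quantity"]] = value
    return result

# Example: latest readings pulled from the client's data endpoint
print(build_simulation_inputs({"WELL_A/WHP": 85.2, "WELL_A/WHT": 64.0}))
```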
This makes a lot of sense and corresponds well with our commercial model, which is subscription-based and lets us flex according to the size of the operations. Coming back to the simulator, we refer to it as a self-adjusting solution. What does self-adjustment mean and how is it done?
Self-adjustment means that we utilize the available sensor data to find the optimum configuration for the simulator – the best fit to the data – while keeping the system up to date at all times. We know that conditions inside the system change continuously. Typically, the reservoir is changing, which means you might get a higher influx of water, some additional gas, or the reservoir might be depleting.
By continuously self-adjusting, we do not have to rely on modifications that are done every now and then, whenever someone has the time. Solutions should maintain themselves at all times!
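As a rough illustration of the self-adjustment idea, the sketch below tunes the uncertain parameters of a toy pressure-drop model so that its output best fits recent measurements. The model, parameters and numbers are invented for illustration; the actual FLUX calibration is not shown here.

```python
# Illustrative sketch: fit uncertain model parameters to live sensor data, and
# re-run the fit continuously as conditions (e.g. water influx) drift.

import numpy as np
from scipy.optimize import minimize

# Recent measurements (invented numbers): flow rates and observed pressure drops
flow_rates  = np.array([1000.0, 1020.0, 1050.0, 1100.0])  # [Sm3/d]
measured_dp = np.array([12.1, 12.4, 12.8, 13.5])           # [bar]

def simulated_dp(params, rates):
    """Toy physics model: quadratic friction term plus a constant offset."""
    friction, offset = params
    return friction * (rates / 1000.0) ** 2 + offset

def mismatch(params):
    """Sum of squared differences between model output and measurements."""
    return np.sum((simulated_dp(params, flow_rates) - measured_dp) ** 2)

# Calibrate: find the parameter set that best matches the data.
result = minimize(mismatch, x0=[10.0, 1.0])
print("tuned parameters:", result.x)
```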
A self-adjusting, real-time solution based on a hybrid modeling approach. Why do we believe in combining physics-based models with machine learning models?
This is a good question, because there was a lot of hype around data-driven models a few years back, when they seemed able to solve every problem. We do not quite see it that way. We see these as two approaches that really complement each other. Physics modeling is about building causal models that connect cause and effect. Data-driven models, on the other hand, are about finding correlations in data, and can focus on individual outputs to a larger extent. You can draw on different data sets with different meanings and establish a correlation, but that does not necessarily say anything about causation.
That is why making these two technologies work together is really beneficial: you can play to the strengths and compensate for the weaknesses of each approach. For instance, the physics model relies on a lot of input data, and you have to know a lot about your system – information you might not actually have. With data-driven models, on the other hand, you risk finding correlations that are not really causal relationships. So having a hybrid model is central to us.
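To illustrate one common way of combining the two (a sketch under our own assumptions, not the FLUX implementation), a physics-based estimate can provide the causal backbone while a simple data-driven correction learns the residual between that estimate and the measurements:

```python
# Illustrative hybrid model: physics prediction plus a learned residual correction.

import numpy as np

def physics_pressure_drop(rate):
    """Simplified physics model: quadratic friction pressure drop (toy formula)."""
    return 8.0 * (rate / 1000.0) ** 2

# Invented observations
rates    = np.array([900.0, 1000.0, 1100.0, 1200.0])
measured = np.array([7.1, 8.6, 10.4, 12.5])

# The data-driven part learns what the physics model misses.
residuals = measured - physics_pressure_drop(rates)
A = np.vstack([rates, np.ones_like(rates)]).T
slope, intercept = np.linalg.lstsq(A, residuals, rcond=None)[0]

def hybrid_pressure_drop(rate):
    """Physics prediction plus the learned residual correction."""
    return physics_pressure_drop(rate) + slope * rate + intercept

print(hybrid_pressure_drop(1050.0))
```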
We often get asked why we use the cloud, and a typical concern raised in many business development discussions is data security. Can you expand on why our software is cloud-native and how we address security concerns?
Regarding ‘why cloud’: first of all, data is increasingly being made available via the internet. This means we can use the cloud and do not have to deploy systems on premises. With that, the accessibility of the system is significantly eased. This benefits both us and the operator, as we can maintain a system easily without having to go on site and carry out all updates and modifications manually. Instead, we can look at the system, see what the problem is and then fix, modify or extend the models. The ease and timeliness of maintenance is really one of the greatest benefits.
Secondly, it allows for scalability. Within a cloud environment, we can easily scale the system up and down. Let’s say you would like to add new models for new wells or pipelines – we can simply scale up the system. This means you do not have to wait for new resources or new hardware to be installed; your new model becomes available immediately. The same applies if you want to scale down, if you would like to take something off your system that you no longer need. We can easily do so. The flexibility this offers is significant.
Regarding security, first of all, we rely on well-known cloud providers. They focus heavily on security and continuously monitor what is happening on their systems, and they have a lot of expertise in this area. In addition, we have our own security measures: we perform authentication and authorization, and all deployments are specific to each client, which means every client has their own area in the cloud. There is no cross-communication or read-over between different clients, so everyone is isolated. We feel comfortable that this is a safe approach. Additionally, we apply a data-in-transit approach. This means we do not actually store data on our system; data is constantly in transit. We read the data, we process it and we transfer it back immediately.
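To show the shape of this data-in-transit pattern, here is a minimal sketch with placeholder functions and hypothetical URLs; nothing in it is the actual FLUX pipeline, it only illustrates read, process and write back with no local storage step.

```python
# Illustrative data-in-transit loop: read a batch, process it, write results straight
# back. No persistence step anywhere in the pipeline.

import time

def read_latest(endpoint):
    """Placeholder: read the newest sensor values from the client's endpoint."""
    return {"WELL_A/WHP": 85.2}

def run_models(readings):
    """Placeholder: run the simulations / virtual sensors on this batch."""
    return {"WELL_A/estimated_rate": 1042.0}

def write_back(endpoint, results):
    """Placeholder: write results back to the client's endpoint."""
    print("writing", results)

for _ in range(3):  # bounded here for illustration; in practice this runs continuously
    batch = read_latest("https://client.example/api/data")      # hypothetical URL
    write_back("https://client.example/api/results", run_models(batch))
    time.sleep(60)  # e.g. once per minute; results leave as soon as they are computed
```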

One aspect we have not yet spoken about is how we connect our solutions to our clients’ data.
In simple terms, you need to give us an endpoint where we can read data and write it back to you through our API. This might be the only thing you need for set-up. Our API is fully open, and as long as you have a subscription you will be able to use it. It is the same API we use to build our own in-house UI, which means everything we do when configuring and setting up the system, you can also do and program on your end. I should also mention that this is a highly configurable system. You can set it up easily, and we offer connections to all core providers. If you use a provider we do not support yet, it is typically an easy extension to add.
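As a rough sketch of the integration pattern only, the snippet below writes a measurement in and reads results back over HTTP. The base URL, paths, payload fields and token are all hypothetical; the real endpoints and schemas come with the subscription and its API documentation.

```python
# Illustrative client-side integration: push data in, pull results out.

import requests

BASE_URL = "https://example-flux-instance/api/v1"   # hypothetical, client-specific deployment
HEADERS  = {"Authorization": "Bearer <token>"}       # placeholder credential

# Push the latest measurement the simulator should use (hypothetical payload shape).
requests.post(
    f"{BASE_URL}/measurements",
    json={"tag": "WELL_A/WHP", "timestamp": "2024-01-01T00:00:00Z", "value": 85.2},
    headers=HEADERS,
    timeout=30,
)

# Read back the latest simulation results for a well (hypothetical path).
response = requests.get(f"{BASE_URL}/results/WELL_A", headers=HEADERS, timeout=30)
print(response.json())
```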
And on the data aspect – what does the operator require for our solution to work effectively?
This is a good question, and it really depends on your needs and the solution you want to set up. We have a set of standard data requirements, which includes information about the physical setup of your system – for example, your well trajectory, your well diameters and what kind of fluid you are producing. In addition to that, for a well setup we typically want pressure and temperature measurements: pressure and temperature data upstream and downstream of the production choke, choke information (preferably downstream), as well as bottomhole pressure and temperature. That said, our system is highly flexible and configurable, so you can subscribe to any kind of sensor that can provide input to the system.
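To summarize what such a well setup could contain, here is a hypothetical sketch of the typical inputs; the field names, units and structure are illustrative and do not represent the actual FLUX input format.

```python
# Hypothetical well setup: geometry, fluid description and the sensor tags to subscribe to.

well_setup = {
    "geometry": {
        # (measured depth, true vertical depth) pairs in metres, illustrative values
        "trajectory_md_tvd": [(0.0, 0.0), (1500.0, 1480.0), (3200.0, 2100.0)],
        "tubing_inner_diameter_m": 0.1143,
    },
    "fluid": {
        "type": "black_oil",
        "gas_oil_ratio_sm3_per_sm3": 120.0,
        "water_cut_fraction": 0.15,
    },
    "sensors": {
        "upstream_choke_pressure": "WELL_A/WHP",
        "upstream_choke_temperature": "WELL_A/WHT",
        "downstream_choke_pressure": "WELL_A/DCP",
        "bottomhole_pressure": "WELL_A/BHP",
        "bottomhole_temperature": "WELL_A/BHT",
        "choke_opening": "WELL_A/CHOKE",
    },
}
```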
Thank you, Lars, for this insight.