News

Is Cloud-native computing a hype?

Aug 14, 2020

By Gjermund Weisz, Chief Operating Officer

Many in the oil and gas industry would answer yes. To address the title of this article, let's think about the types of consumer software we use every day. As an example, today I checked the local weather forecast, traffic conditions, my meeting schedule, and my children's activity plans, and made some phone calls. All these software services benefit from being designed for Cloud-native computing, with access to a scalable pool of machine resources managed in a resource-efficient manner. In my daily activities I do not need to worry about local machine capacity limitations, outdated software, or a lack of network capability to gather and process information in a split second. This is because the consumer software service industry was among the first adopters of Cloud-native services, and successful companies have considerably advanced their solutions over the years. This development is driven by the increasing demand for data availability that only cloud architecture can provide. In other words, if consumer software service companies choose to stay with legacy solutions, they will be outperformed by the competition.

The oil and gas industry has not experienced this rapid cloud transformation and still relies on legacy software installed on local machines. Industry domain software for exploration and production that uses operational data is still at an early cloud computing stage. Operational data used for input and comparisons are most often locked down in silos and not easily accessible to those who could use the data for process improvements. Limited availability of data stalls innovation in domain software and in new Cloud-native computing solutions. The good news is that data availability is improving as innovative companies create open data environments where operational data can be used in the cloud and managed through open APIs.

Let us look at what Cloud-native computing means. Software deployed on a Cloud-native architecture takes advantage of Cloud-native technology such as a container orchestrator. Parts of an application can be packaged into containers, which are then controlled or “orchestrated” by a technology such as Kubernetes. The orchestrator scales computing resources up or down as needed and includes automation to restart crashed or degraded containers. Updates are automated with no downtime. True Cloud-native solutions also offer an open machine-based interface (machine-to-machine), such as a rich REST API, that makes it easy to automate communication between applications. This is beneficial as it offers the ability to scale with use and makes third-party integration possible, thereby supporting innovation.
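To make the machine-to-machine idea concrete, the sketch below shows what automated communication with such a REST API could look like from another application. The endpoint, payload fields, and token are hypothetical placeholders, not a documented Turbulent Flux or cloud-provider interface.

```python
# Minimal sketch of machine-to-machine communication over a REST API.
# Endpoint, payload fields, and token are hypothetical placeholders.
import requests

API_BASE = "https://api.example.com/v1"      # hypothetical base URL
TOKEN = "REPLACE_WITH_YOUR_ACCESS_TOKEN"     # hypothetical credential


def submit_simulation_job(case_definition: dict) -> str:
    """POST a job definition and return the job id assigned by the service."""
    response = requests.post(
        f"{API_BASE}/jobs",                           # hypothetical endpoint
        json=case_definition,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["job_id"]                  # hypothetical response field


if __name__ == "__main__":
    job_id = submit_simulation_job({"model": "well-A", "horizon_hours": 24})
    print("Submitted job:", job_id)
```

Because the interface is plain HTTP, any programming language or workflow engine can integrate in the same way, which is what makes third-party integration and scaling with use practical.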

Traditional deployments underutilize computing resources. They lack an effective orchestrator with on-demand computing power and have less effective communication APIs, if any. Going Cloud-native is also cost effective: it is far easier and cheaper to outsource digital clouds to companies that specialize in this (titans like Google, Microsoft, and Amazon) than to train or hire software experts to build the infrastructure in house.

In short, Cloud-native computing enables almost limitless scaling as well as secure and easy data sharing. This allows companies to perform more powerful computing and share data more easily. People use Cloud-native solutions for consumer services every day, and we will see the same trend in our industry, too. I predict that within five years, Cloud-native software in oil and gas will no longer be considered hype by anyone, but a critical requirement to deliver scalable and affordable user value. Companies that fail to fully adopt cloud-thinking in their operations will face a competitive disadvantage, mostly because of the cost competitiveness, enhanced functionality, and insights gained from cloud computing and easy data sharing. Cloud-thinking fosters collaboration!

Turbulent Flux utilized the latest Cloud-native technologies when we developed our own software. This means that our solutions are user scalable and can be operated in whole or in part by other systems or people, in any desired combination. In practice, many of our clients will use our foundation technology to create their own solutions with a high degree of automation. We now work with oil and gas companies and partners that are among the early adopters of our Cloud-native technology, as they see it as a building block in their digital transformation.

Meet the Turbo-Team - Johan Henriksson

Aug 11, 2020

Meet Johan Henriksson. Product Portfolio Manager at Turbulent Flux. Commercial and technical alike. Reflective. Thirsty for knowledge. Motto: “Accept nothing just for the sake of acceptance. Challenge everything to make things better. Simplify, so that more people can understand and make use of it.”

Johan is the guy who loves to share our product knowledge with others and educate people about what we are doing. For him, it is about creating and maintaining relationships, and he enjoys discussions and interaction with clients. “I am in a hybrid role between our commercial and technical teams. I am in constant dialogue with clients and our commercial team to understand client needs and expectations. I then translate this into requirements for what we need to achieve internally on the technical and service fronts.”

Johan is involved in every aspect of our business. He supports tender work, runs product demonstrations and client feedback sessions, presents at new business opportunities, manages our internal Lunch & Learn sessions, and works closely with marketing to express what we do and what we are good at. The list is endless.

Not surprisingly, Johan comes with the right background to do so. He previously spent 11 years at SPT Group and Schlumberger, initially in software development and later as the OLGA Product Analyst. This shaped his interest in both technical and commercial discussions – “a combination that continuously pushes me to learn new things and adapt old knowledge to new situations,” he states. Like many here, Johan has a Ph.D. – in Computational Physics from Linköping University, Sweden.

“I knew the founders of Turbulent Flux well after working closely with them before. Working on something that hadn’t been solved before is what really triggered my interest. I love such challenges, and when I got the opportunity, I was keen to join the group.” He comments that it is also the diversity of the people – their different backgrounds, both culturally and in domain – that makes Turbulent Flux unique. “We have people here from nine different countries, from China to Iran to France. Even though we all bring different stories, it feels like working with friends. Actually, it really feels like family.”

Johan is never truly off-work but does enjoy gardening, cooking, judo, and books. “I have just finished Aldrig Näcken by my favorite Swedish crime author Stieg Trenter, and now I am re-reading the autobiography Surely You’re Joking, Mr. Feynman!”

On the question of where Johan sees Turbulent Flux in five years, he responds confidently: “I see us as a preferred and trusted partner with a global footprint, a company at the forefront of technology development for multiphase flow simulations, renowned for easy-to-use, user-centric solutions.” No one could have worded this better.

Because with Johan, we stay one step ahead!

How we obtain accurate VFM flow rates

Aug 06, 2020

By Torgeir Ruden, Software Development Manager

Virtual Flow Metering (VFM) is an increasingly attractive method for estimating multiphase flow rates in oil & gas production systems. It is used to compute flow rates from readily available field measurements such as pressure and temperature. Since a VFM is purely software-based, it is easily integrated with other software systems, making the results from the VFM readily available for further production optimization analysis.

For a VFM to be a viable option, either as a standalone system or as a digital twin to a physical flow meter, its results have to be sufficiently accurate. In most cases, the industry requires a consistent full-scale error in the range of 5-10% for phase flow rates. The main challenge, and the key to a successful VFM solution, is to maintain that level of accuracy over the lifetime of an asset, adjusting to changes as the field matures. Self-adjusting VFMs like our FLUX VFM are designed to provide the required accuracy without costly maintenance and interruptions over the lifetime of an asset.
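For reference, full-scale error is commonly understood as the deviation expressed as a percentage of the meter's full-scale (maximum) flow rate rather than of the instantaneous reading; the exact convention assumed here is

$$ e_{\mathrm{FS}} = \frac{\lvert q_{\text{estimated}} - q_{\text{reference}} \rvert}{q_{\text{full scale}}} \times 100\%, $$

which is why a given percentage translates into a fixed absolute tolerance across the operating range.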

One of the main challenges in consistently obtaining accurate phase flow rates using a first-principles transient flow simulator in a VFM, such as our transient FLUX Simulator, is to accurately determine how the inflow boundary condition changes over time, specifically for fluid composition and the fluid driving force (reservoir pressure in a well). A good representation of the inflow boundary enables the multiphase simulator to model the fluid flow through the well or pipeline, giving accurate phase flow rates at the desired measurement point.

Provided there are enough sensor values that are sufficiently sensitive to changes in phase flow rates, the boundary inflow parameters can be determined at any given point in time. In the FLUX VFM this is done using our FLUX Optimizer, which looks at how pressure and temperature change across the well. We require a minimum of three measured pressure or temperature differences to resolve the three relevant inflow parameters: Gas-Oil Ratio (GOR), Water Cut (WC), and the fluid driving force. In a well, these would typically be the pressure-drop across the choke and the pressure-drop and temperature-drop from the bottom-hole sensor to the upstream choke sensor.
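Conceptually, this is an inverse problem: find the inflow parameters for which the simulated pressure and temperature differences match the measured ones. The sketch below illustrates the idea with a generic least-squares solver and a placeholder algebraic forward model; it is not the FLUX Optimizer or FLUX Simulator, and all numbers are illustrative.

```python
# Conceptual sketch: resolve inflow parameters from measured pressure and
# temperature differences as an inverse problem. `simulate_differences` is a
# placeholder algebraic model standing in for a transient flow simulator.
import numpy as np
from scipy.optimize import least_squares

# Measured differences: [dP across choke (bar),
#                        dP bottom-hole to upstream choke (bar),
#                        dT bottom-hole to upstream choke (degC)]
measured = np.array([58.0, 127.5, 8.45])

def simulate_differences(params: np.ndarray) -> np.ndarray:
    """Forward model: [GOR, WC, reservoir pressure] -> simulated differences."""
    gor, wc, p_res = params
    dp_choke = 0.35 * p_res - 0.02 * gor - 30.0 * wc
    dp_well = 0.60 * p_res + 0.01 * gor + 20.0 * wc
    dt_well = 8.0 - 0.005 * gor + 4.0 * wc
    return np.array([dp_choke, dp_well, dt_well])

def residuals(params: np.ndarray) -> np.ndarray:
    return simulate_differences(params) - measured

# Three measured differences are enough to resolve the three unknowns.
solution = least_squares(residuals, x0=[100.0, 0.3, 180.0],
                         bounds=([0.0, 0.0, 50.0], [500.0, 1.0, 500.0]))
gor, wc, p_res = solution.x
print(f"GOR = {gor:.0f}, WC = {wc:.2f}, reservoir pressure = {p_res:.0f} bar")
```

With three independent measured differences and three unknowns, the system is in principle fully determined; additional sensors add redundancy that improves robustness against measurement noise.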

To provide even better accuracy in the predicted inflow parameters, we have introduced data analytics in addition to sensor values. We call it FLUX Analytics. Our data-driven model is trained to provide guidance to the FLUX VFM. This means that the accuracy requirements on such a data-driven guidance model are less severe than for a fully data-driven VFM. We have taken this approach because a first-principles VFM may find certain flow features useful even if they are not of direct interest to the end user. It is worth noting that this approach can also be used to create a VFM in cases where the number of useful sensors is lower than the required number of sensors. In that way, we create sufficient extra information to assure the required accuracy of our FLUX VFM.
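As an illustration of what a data-driven guidance model can look like in principle, the sketch below trains a standard regressor on synthetic historical sensor data to predict a flow feature (total volumetric flow rate). The features, model choice, and data are assumptions made for this illustration; they do not describe the actual FLUX Analytics models.

```python
# Illustrative sketch of a data-driven guidance model: a regressor trained on
# historical sensor data predicts a flow feature that a first-principles VFM
# can use as extra information. All features and data here are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=1)
n = 2000
# Synthetic history: [bottom-hole pressure, wellhead pressure, wellhead temperature]
X = rng.uniform([150.0, 20.0, 40.0], [250.0, 60.0, 90.0], size=(n, 3))
# Synthetic target: total volumetric flow rate with measurement noise
y = 12.0 * (X[:, 0] - X[:, 1]) + 3.0 * X[:, 2] + rng.normal(0.0, 25.0, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print("R^2 on held-out data:", round(model.score(X_test, y_test), 3))

# The prediction for the latest sensor reading can now guide the VFM.
latest_sensors = np.array([[210.0, 35.0, 72.0]])
print("Predicted total volumetric flow rate:", round(float(model.predict(latest_sensors)[0]), 1))
```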

FLUX VFM Architecture


Our FLUX VFM has successfully been used on a number of offshore and onshore wells. In one onshore well project, the pressure-drop across the topside choke was insignificant because the choke was fully open at all times. This meant that the pressure-drop was not sensitive to changes in flow. In addition, due to the small pressure-drop across the choke, the measurement error from the sensors was quite significant. Without any additional information, this would have severely impacted the accuracy of the results.

One of the inflow parameters would have had to be fixed to obtain results of acceptable accuracy. However, the data models in our FLUX Analytics were able to capture flow features such as the total volumetric flow rate based on sensor values. These features were used to guide the FLUX VFM alongside the remaining sensor values. As a result, all inflow parameters could be derived and no parameter needed to be fixed. The overall full-scale error was reduced to well within the 5-10% bounds. For one well, an average full-scale error of less than 4% was obtained for all phase flow rates over a period of two months, even though the well experienced a 15-20% change in both GOR and WC.

If you would like to learn more about our solutions, contact us at info@turbulentflux.com.

Meet the Turbo-Team - Torgeir Ruden

Aug 04, 2020

Meet Torgeir Ruden. Co-founder of Turbulent Flux. Software Development Manager. Visionary. Curious. Motto: “Tomorrow is even better than today.” Loves the mountains and a good bottle of red wine from the Bordeaux region.

Torgeir manages the software development team at Turbulent Flux. As he frames it, “I help developers do their job and provide my technical expertise. Yet, I am coding every day. Coding is my passion. It is essential as a software developer to code every day, otherwise you lose touch with the changing technological landscape.” Torgeir has been with Turbulent Flux from the start, with the ambition to create the best possible product for flow simulation. “For us it has always been about breaking down the complexity of flow simulation software. This is a major challenge, but not unrealistic. Solutions should be available for many more operations than is the case today, but deployment time, processes, and costs hinder this. We are changing this. Good solutions are complex, but they should always be paired with ease of use. Our ambition is to make flow simulation software as easy to switch on as starting a car.”

Turbulent Flux is taking the right steps in this direction. Together with a strong team, Torgeir has over the years worked on the development of the software through client-driven projects, assuring a market-centered solution. As such, his job is as much about working behind the scenes as it is about building relationships and dialogue with clients. “I regard this as highly rewarding. The direct discussions and feedback from clients assure that we address the issues the clients face. This makes for better solutions and quick turnaround times.”

Torgeir highlights the easy and quick communication between competent, good people as what makes Turbulent Flux a fantastic workplace. He was initially the Lead Scientific Programmer for the Turbo-Team. He previously worked for SPT Group / Schlumberger as a Senior Software Engineer and as a Senior Engineer for the Center for Scientific Computing at the University of Oslo (UiO). Torgeir has a PhD in Theoretical Chemistry from UiO.

And what does Torgeir do when not coding? “I love mountain hikes in summer and vineyard tours in Europe with family and friends. Norway is home, but I enjoy travelling – San Diego is a place I could easily live in.” Torgeir highlights the brilliant weather Southern California offers – warm summers, mild winters. We are happy Torgeir has chosen to stay in Norway and use the long winters and short summers to continue building on the success of Turbulent Flux.

With Torgeir, we stay one step ahead!

A Brief History of Flow Simulations

Jul 24, 2020

By Johan Henriksson, Product Portfolio Manager

While the history of practical flow simulations and their wider application within the oil and gas industry is about half a century old, the foundational concepts date back to Ancient Greece. The seed of what later became fluid mechanics was planted by Archimedes around 250 BC. While most famous for the ‘Law of buoyancy,’ commonly known as Archimedes’ principle, he also introduced fundamental concepts within fluid mechanics by postulating: “If fluid parts are continuous and uniformly distributed, then that of them which is the least compressed is driven along by that which is more compressed” [1]. In other words, Archimedes postulated pressure-driven fluid flow.

Two millennia later, the mathematical foundation of fluid mechanics was laid down by Leonhard Euler and Joseph-Louis Lagrange. Then, in 1822, Claude-Louis Navier expanded on this work and incorporated viscosity, leading to what we now know as the Navier–Stokes equations. The equations constitute a rigorous description of fluid flow, but the devil is in the details, and analytic solutions can only be obtained for simple systems. More practical applications – flow simulations – were still 150 years away, awaiting computers to enter the stage. With increasing access to ever more powerful computers, computational fluid dynamics (CFD) became commonplace in, for example, the aeronautics and automotive industries, but the techniques are still not practically transferable to multiphase flow in pipes at larger scale, not even today.
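For reference, the incompressible form of the Navier–Stokes equations can be written as

$$ \rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right) = -\nabla p + \mu \nabla^{2}\mathbf{u} + \rho\,\mathbf{g}, \qquad \nabla\cdot\mathbf{u} = 0, $$

where $\mathbf{u}$ is the velocity field, $p$ the pressure, $\rho$ the density, $\mu$ the dynamic viscosity, and $\mathbf{g}$ the gravitational acceleration. Even in this comparatively simple incompressible form, closed-form solutions exist only for a handful of idealized geometries.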

Focusing on oil and gas applications, there are a number of complicating factors. Pressure and temperature may vary significantly along the length of a pipe, and with them the fluid properties. Further, there are phase transitions such as evaporation and condensation, dispersions, liquid droplets carried in the gas phase, mechanical and thermal interactions between the fluids and the pipe, and much more. Already, the challenges at hand span multiple disciplines, e.g., fluid mechanics and thermodynamics. Still, we have not addressed flow in pipeline networks, time-dependent aspects of the flow, or the fact that the flow characteristics (flow regimes) change dramatically with changing flow conditions.

A saving grace for fluid flow in pipes is that the fluid motion is primarily in the axial direction of the pipe. Effects in the cross section are of limited interest and can often be treated in an averaged manner, resulting in one-dimensional, so-called hydraulic models. While simpler, a hydraulic model is by no means trivial for multiphase flow. Yet again, the devil is in the details, as models must be created to address the challenging phenomena mentioned above. This leads to a framework of bespoke models, not only for the different phenomena but also for varying flow conditions.
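As a simplified single-phase illustration of what a hydraulic model reduces to (not the multiphase formulation used by any particular simulator), the steady-state one-dimensional momentum balance along a pipe can be written as

$$ -\frac{\mathrm{d}p}{\mathrm{d}x} = \frac{f\,\rho\,v^{2}}{2D} + \rho\,g\sin\theta + \rho\,v\,\frac{\mathrm{d}v}{\mathrm{d}x}, $$

with friction, gravity, and acceleration terms, where $f$ is the friction factor, $D$ the pipe diameter, $v$ the cross-sectionally averaged velocity, and $\theta$ the pipe inclination. Multiphase hydraulic models carry equations like this for each phase, plus closure relations for how the phases share the cross section and exchange mass, momentum, and heat.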

In the 1970s, computers enabled the first wave of multiphase flow simulations. The first models introduced belong to a family called steady-state models. These models disregard any time-dependent variations in the flow, i.e., each simulation estimates the conditions in a pipe under the assumption that no conditions or properties change with time. Despite this simplification, the models offer fundamental insights into multiphase flow and are commonly used in all phases from concept engineering through to operations. With fast and easy-to-use workflows, steady-state simulations have reached broad adoption across domains.

While the oil and gas industry adopted steady-state models in the 70s, the development of transient models, i.e., models that incorporate time-dependent changes in the flow, gained interest in a different energy sector, namely nuclear power. Models were developed for multiphase flow of water and steam in nuclear reactors. In the 1980s, work started to adapt these models to oil and gas applications. The added dimension of time increases the complexity by orders of magnitude compared to steady-state models. As a consequence, transient simulations have not reached as broad an adoption across domains but remain largely limited to smaller groups of expert users. Nonetheless, transient multiphase flow simulations changed the oil and gas industry forever. Up until the late 80s, every offshore development had required its own platform. With transient multiphase flow simulations at hand, it became possible to perform the comprehensive design assessments and risk analyses required to safely move from individual platforms to subsea multiphase flowlines. Following the first successful subsea installations on the Norwegian Continental Shelf in the late 80s, the technology has redefined development concepts and helped push the limits further and deeper all over the world.

So far, we have considered flow simulations for the design of installations and better operational practices. Around the turn of the millennium, interest grew in the logical next step: applying existing engineering software in online applications connected to live measurements, to mimic the multiphase flow in real time. The adaptation to online environments is, however, not without challenges. Engineering applications require flow simulations that deliver reasonable results under almost any circumstances. The solutions rely on numerous model components, and which one to apply depends on the physical conditions. In operations, predictability is key. To achieve the best possible performance, calibration is required, but the numerous model components and their interdependencies make calibration difficult. On top of that comes the challenge of adapting engineering software to meet the demands on performance and maintainability associated with online deployment.

Over the first two decades of the 21st century, improvements in computer storage and network connectivity allowed operators to collect and store more production data than ever before. This led flow simulations in a new direction and introduced machine learning and so-called data-driven models. With historical information about a producing asset available, computers can create (train) models to predict complex phenomena without detailed knowledge of the underlying physics. No matter how complex the phenomenon, the resulting data-driven model is always fast to execute and easy to deploy. However, data-driven models are no silver bullet and come with challenges of their own. For example, training data-driven models requires large amounts of high-quality data. Furthermore, since the models are based purely on historical data, they have very limited capability to extrapolate beyond the range of historically known conditions.

Five decades into flow simulations, three decades into transient flow simulations, and two decades into flow simulations online, the question remains: is there a solution for predictive and practical transient flow simulations in an operational real-time setting? We believe there is – hybrid technology. Hybrid solutions combine first-principles physics with machine learning. First-principles physics is a necessity to address operational conditions that have not occurred before. It is also required to gain a complete understanding of the entire flowing system from inlet to outlet. However, the first-principles flow simulation software must be robust, fast, easy to calibrate, and easy to deploy to meet the demands of real-time operations. Combine this with the ability of machine learning models to make the most out of available data, and you have a solution that can meet your expectations. In other words, it is about performant and autonomous flow simulations that self-adjust to changing conditions, deliver the best possible predictions, and give easy access to always up-to-date simulation models that can be consumed to analyze and optimize your operations.
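One common hybrid pattern (shown here as an assumption-laden illustration, not necessarily the FLUX architecture) is to let a first-principles model produce the baseline prediction and train a machine-learning model on historical data to correct its residual error:

```python
# Hybrid sketch: physics baseline plus a learned residual correction.
# The physics model, features, and data are placeholders for illustration.
import numpy as np
from sklearn.linear_model import Ridge

def physics_model(inputs: np.ndarray) -> np.ndarray:
    """Placeholder first-principles estimate of flow rate from [p_in, p_out]."""
    dp = np.clip(inputs[:, 0] - inputs[:, 1], 0.0, None)   # pressure difference
    return 10.0 * np.sqrt(dp)

rng = np.random.default_rng(seed=2)
X_hist = rng.uniform([150.0, 20.0], [250.0, 60.0], size=(500, 2))
# Synthetic "measured" flow rates deviate from the physics model in a way the
# physics alone does not capture (purely illustrative).
q_measured = 1.08 * physics_model(X_hist) + 0.5 * X_hist[:, 1] + rng.normal(0.0, 2.0, 500)

# Train a correction model on the residuals of the physics model.
residual_model = Ridge().fit(X_hist, q_measured - physics_model(X_hist))

def hybrid_predict(inputs: np.ndarray) -> np.ndarray:
    """Physics baseline plus learned correction."""
    return physics_model(inputs) + residual_model.predict(inputs)

X_new = np.array([[210.0, 35.0]])
print("Physics only:", round(float(physics_model(X_new)[0]), 1))
print("Hybrid      :", round(float(hybrid_predict(X_new)[0]), 1))
```

The physics baseline keeps the prediction meaningful outside the historically observed conditions, while the learned correction absorbs systematic deviations the physics alone does not capture.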

[1] G.A. Tokaty. A History and Philosophy of Fluid Mechanics. ISBN 0-486-68103-3. 1994.

If you would like to learn more about our solutions, contact us at info@turbulentflux.com.

Automation

Jul 17, 2020

Why should automation matter to you?

Automation revolves around efficiency and getting the most out of the resources available – it is about increased throughput and a better return on investment. For organizations, this ultimately boils down to the nickels and dimes invested, but it is not synonymous with cost cuts, and the two must not be confused. Rather, it is about us and how we make the most of our time. In fact, one can argue that each and every one of us stands to gain personally from automation.

So, how is it that we personally stand to gain from automation? Most of us have tasks that are tedious and repetitive. Tasks that need to be done and are routine. These tasks are treacherous: they lull us into a false sense of security, which makes us error prone and causes us to spend even more time on them. Through automation, we can marry personal goals with organizational ones and increase throughput as well as improve quality. As if ridding ourselves of the boring, repetitive tasks were not reward enough, there is an added bonus – we free up time to spend on the interesting parts of our job, the parts that really excite us and bring additional value to our companies.

To exemplify, consider software development. Automation permeates development practices, e.g., through automated builds, automated and integrated testing, and automated deployment. In this context, automation means a reduction of manual work. According to the 2017 State of DevOps Report, high-performing organizations spend 20-40% less time on manual work than their peers. Furthermore, they deliver at higher standards, as reflected by 20% less time spent on unplanned work and rework. In the end, this results in a staggering 45% more time spent on new development.
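As a small, hypothetical illustration of what this reduction of manual work looks like in practice, a unit test like the one below runs automatically on every commit in a typical CI pipeline, replacing a manual check that would otherwise be repeated endlessly:

```python
# test_unit_conversion.py - a hypothetical automated test. Running `pytest`
# locally, or letting a CI pipeline run it on every commit, replaces a manual
# check that would otherwise be repeated over and over.
import pytest

def barrels_to_cubic_metres(barrels: float) -> float:
    """Convert oil barrels to cubic metres (1 bbl = 0.158987 m3)."""
    return barrels * 0.158987

@pytest.mark.parametrize("barrels, expected", [
    (0.0, 0.0),
    (1.0, 0.158987),
    (1000.0, 158.987),
])
def test_barrels_to_cubic_metres(barrels, expected):
    assert barrels_to_cubic_metres(barrels) == pytest.approx(expected)
```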

Thus, automation creates a win-win situation for us and our organizations. While the organization gets increased throughput and improved quality, we get to spend more of our time on tackling the interesting challenges. That is why automation matters to you!

Johan Henriksson, Product Portfolio Manager

If you would like to learn more about our solutions, contact us at info@turbulentflux.com.

Create value from data democratization

Jul 15, 2020

By Gjermund Weisz, Chief Operating Officer

Data democratization means that everyone has access to data and there are no gatekeepers creating a bottleneck at the gateway to the data. The goal is for anybody to be able to use data at any time to make decisions, without barriers to access or understanding. This is already happening in the oil & gas industry, but the industry needs to work more on creating value on top of the data itself.

This is nothing new. In the early 2000s, for example, I started a software company in location-based services for outdoor activities. At that time, we needed access to maps to visualize activity locations. Map data originated from multiple sources that were difficult or expensive to access. In 2005, Google changed it all when they made a massive map database available (Google Maps) with a free and open Application Programming Interface (API). This fueled massive software innovation on top of map data, combining other data sources and algorithms to create strong growth in user value. Today, we consume map-fueled solutions in our daily life without thinking about the time before map data became easily available. Just think of traffic advice, local weather forecasts, store locators, and activity tracking. In industry, maps are used for office locations, asset tracking, augmented reality, and much more.

The oil & gas industry is now creating its own “maps”, generated by software solutions that gather data from independent sources and make it available through APIs. Open APIs make data available for external consumption, where before it was contained in inaccessible silos. Innovations have started to create user value that matters for the industry. There is a need for new software tools that do more with fewer resources. We will continue to see growth in the number of innovations in areas such as production surveillance, production optimization, and equipment monitoring and maintenance. The winning innovations will prove their user value and be scalable. Scalability is about the right technology stack – technology that is accessible, quick to implement, and easy to maintain – combined with a working business model.

At Turbulent Flux, we have built next-generation, scalable flow simulation software to create concrete user value on top of production data. The software uses available production data to calculate production-critical flow information and enables increased production through optimized operations. The software is accessible through an open API published on the web to facilitate innovation and easy adoption. This enables the industry to effectively take advantage of production data as digital transformation changes the way we consume and act on available information.

If you would like to learn more about our solutions, contact us at info@turbulentflux.com.

Why accessibility to flow rates matters more than ever

Jul 03, 2020

By Johan Henriksson, Product Portfolio Manager

Since the early days of oil and gas production, operators have faced the challenge of measuring the flow of gas, oil, and water to manage and optimize production in wells and pipelines and to assure the safety of their operations. The solutions available for determining flow rates all have their own advantages and disadvantages; however, they are generally costly to put in place and to operate. As a result, state-of-the-art technology has typically been restricted to the more profitable wells.

The year 2020 is mind-changing for both society and industry, and the oil and gas industry is no exception. With Covid-19 causing medical and economic issues globally, and the Russia–Saudi Arabia oil price war putting further pressure on the oil price, the price of crude went negative for the first time in history in April this year. At the same time as the economic impact resulted in furloughs, many companies implemented work-from-home practices to keep their employees safe. On a personal level, this caused new challenges for many households. On a corporate level, it led to rethinking, e.g., policies on on-site versus remote work, how digitalization can further increase efficiency, and how to optimize operations to avoid shutdowns and deferrals. Once more, we see an increased focus on cost sensitivity and cost cutting – perhaps the one constant familiar to us in current times.

It is hardly questionable that oil and gas assets with flow simulation technology already in place will look heavily into optimizing operations. However, many fields are not equipped with such technology and will continue to face challenges. Operators know they need to address this and work smarter – use better tools, reduce overhead costs, make quick, intelligent decisions. Access to the right technology should not be a luxury but a given in times when the economic stability of businesses, health and safety, and the welfare of individual households demand it.

Access to real-time insights into the production flow is the starting point for real-time production monitoring and production optimization. Such insights rely on real-time access to already available information, e.g., pressure and temperature data. Pressure and temperature sensors are commonplace in oil and gas assets, and these sensors have long been connected devices that deliver data to historians, databases, data platforms, etc. However, while the information has been available, it has not been generally accessible. With the transition to Cloud-based solutions, this landscape changes. Data is not only centralized to a larger extent but also more accessible, as data platforms enable secure access to necessary data through APIs. This offers great opportunities for solutions that can monetize the increasing accessibility of data. The winners will be the solutions that can easily connect to the various data sources, consume the information, and convert it into real-time, actionable insights. These solutions will leverage the accessibility of data and be one step closer to the ultimate goal – accurate production monitoring and production optimization.

The digitalization era has given us the ability to automate processes, deploy faster, and make information more accessible. Turbulent Flux echoes this and offers advanced production monitoring and optimization technology that combines the predictive capabilities of physical flow models with the speed and self-correcting abilities of data analytics. With Cloud-native software and a rich API open for any third-party use and integration, Turbulent Flux allows you to address your concerns quickly and widely.

For businesses to survive 2020, the right solutions must be in place: solutions that are easy and quick to deploy at a low cost.

If you would like to learn more about our solutions, contact us at info@turbulentflux.com.
