Today’s high oil prices make every production moment crucial and well downtime costlier than ever. When a well fails, money sits beneath the earth’s surface, and you cannot get to it. Meanwhile, equipment and a crew drain money out of your pocket while you wait to replace a critical component. Ideally, you wouldn’t wait; you would be ready for equipment failure.
Example: One operator reported that downtime averaging 400 bbl per day of lost production is normal practice. Assuming a minimum margin of $50 per bbl, that is more than $7 million of uncaptured revenue in a year. That’s a hefty price tag. Oil companies need to ask themselves: “What more can be done? Have all measures been taken to keep downtime to a minimum?”
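The back-of-the-envelope math behind that figure is simple enough to write down:

```python
# Annual revenue lost to downtime, using the figures from the example above.
daily_loss_bbl = 400   # average production deferred per day (bbl)
margin_per_bbl = 50    # assumed minimum margin ($/bbl)

annual_loss = daily_loss_bbl * margin_per_bbl * 365
print(f"${annual_loss:,}")  # $7,300,000
```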
With high equipment costs, companies used to balk at owning spare equipment. Today, however, some companies treat backups as standard procedure. The trick is striking a balance between stockpiling backups and knowing ahead of time what you really need. I believe this balance can be achieved with “Automated Predictive Analytics”.
Predictive analytics compares incoming data from the field to expected or understood behaviors and trends. It encompasses a variety of techniques from statistics, modeling, and data mining that analyze current and historical facts to make predictions about the future.
Automated predictive analytics leverages systems to sift through large amounts of data and alert on issues. Automating predictive analytics means you can monitor and address ALL equipment on the critical path daily, or more frequently if your data permits. Automation steps up the productivity of your engineers by minimizing the time spent searching for problem wells; instead, your engineers can focus on addressing them.
If you are not already in predictive mode, these two cost-effective solutions can get you started on the right path.
1. Collect well and facility failure data, including causes of failure. Technology, processes, and the right training make this happen. A few tools are available off the shelf; you may already have them in house, so activate their use.
2. Integrate systems and data, then automate the analysis: expected well models, trends, and thresholds need to be compared against the actual daily data flowing in from the field. “Workflow” automation tools on the market can exceed your expectations when it comes to integration and automating some of the analysis.
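At its core, the second step is a daily comparison of modeled against measured production. The sketch below is a minimal illustration of that idea; the well names, rates, and 10% tolerance are assumptions for the example, not values from any operator.

```python
# Hypothetical sketch: flag wells whose actual daily production falls
# more than a set tolerance below the expected (modeled) rate.
def flag_problem_wells(expected_rates, actual_rates, tolerance=0.10):
    """Return (well, shortfall %) pairs for wells underperforming the model."""
    flagged = []
    for well, expected in expected_rates.items():
        actual = actual_rates.get(well)
        if actual is None:
            continue  # no data received today; a real system might flag this too
        shortfall = (expected - actual) / expected
        if shortfall > tolerance:
            flagged.append((well, round(shortfall * 100, 1)))
    return flagged

# Illustrative modeled vs. actual daily production (bbl/day)
expected = {"Well-A": 400, "Well-B": 250, "Well-C": 180}
actual = {"Well-A": 395, "Well-B": 190, "Well-C": 175}

print(flag_problem_wells(expected, actual))  # [('Well-B', 24.0)]
```

Run daily against every well on the critical path, a check like this replaces manual surveillance with an exception list your engineers can act on.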
Example: One operator in North Dakota reported that 22 of its 150 producing wells in the Bakken had failed within their first two years due to severe scaling in the pump and production tubing. Analytics correlated the rate and timing of failures with transient alkalinity spikes in the water analyses; the cause was attributed to fracturing-fluid flowback (Journal of Petroleum Technology, March 2012).
In the example above, changes in production and pressure data would trigger a check of water composition, which could in turn prompt engineers to check the level of scale inhibitor used on the well before the pump fails. This kind of analysis requires data (and systems) integration.
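That trigger chain can be expressed as a simple rule. The sketch below assumes illustrative field names and limits (the 15% rate drop, 10% pressure rise, and 300 mg/L alkalinity threshold are placeholders, not published values):

```python
# Hypothetical trigger chain: production/pressure anomaly -> water analysis
# -> alkalinity spike -> scale-inhibitor review. All thresholds are assumed.
ALKALINITY_LIMIT = 300  # mg/L as CaCO3 (illustrative threshold)

def next_action(daily_reading):
    rate_drop = daily_reading["rate_drop_pct"] > 15
    pressure_rise = daily_reading["tubing_pressure_rise_pct"] > 10
    if not (rate_drop or pressure_rise):
        return "no action"
    if daily_reading.get("alkalinity_mg_l") is None:
        return "request water analysis"  # anomaly seen, chemistry unknown
    if daily_reading["alkalinity_mg_l"] > ALKALINITY_LIMIT:
        return "review scale-inhibitor program"
    return "investigate other causes"

reading = {"rate_drop_pct": 22, "tubing_pressure_rise_pct": 4,
           "alkalinity_mg_l": 450}
print(next_action(reading))  # review scale-inhibitor program
```

The point is not the specific thresholds but the chaining: each alert escalates to the next data source, which only works when production, pressure, and water-chemistry data live in integrated systems.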
One more point on well-failure data: too many equipment failures occur without proper knowledge of what went wrong. The rush to get the well producing again discourages what is sometimes seen as “low-priority research”, yet that research could prevent future disruptions. By bringing the data together and using it to its full potential, companies can save money now and for years to come.