
How To Turbocharge Oil & Gas Analyses With Machine Learning and The Right EIM Foundation

It is generally accepted that good analysis of oil and gas data yields actionable insights, which in turn lead to better profits and growth. With today’s advancements in technology and processing power, more data and better analysis are within easy reach, but they require the right EIM (Enterprise Information Management) foundation to make “all” data available and “analyses-ready”.

The evidence for such analytics is clear and ubiquitous. In a JPT (Journal of Petroleum Technology) article by Stephen Rassenfoss, “Four Answers To the Question: What Can I Learn From Analytics?”, Devon Energy concludes it is possible to increase production by 25% by drilling the lateral toe-up in the Cana-Woodford Shale. Range Resources, answering a different question with Machine Learning (ML) analysis, concluded that higher production in the Marcellus is associated with wells fracked with as much sand volume as the reservoir can handle.

All Data All The Time = More Studies More Return

Looking closer at the article, both studies were based on relatively small data sets; Devon Energy and Range Resources used only 300 and 156 wells respectively. Both companies stated that a larger data set would help their respective studies. So why do some studies rely on a small population of wells when thousands more could have been included to reach a deeper understanding?

While the answer depends on the study itself, we find two key data “preparation” problems that often contribute: a) data findability/availability and b) data readiness for analyses. In some E&P companies, data preparation can consume over 50% of a study’s total time. This is where I believe EIM can make a difference by taking a proactive role.

Three Strategic EIM Initiatives to Turbocharge Your Organization’s Analytics

Information preparation for exploratory analytics like the above requires oil and gas companies to embrace a new paradigm in EIM. Traditional “data management” has its applications but can be rigid and limiting because it requires predefined schemas.

We share our three favorite EIM strategic initiatives to deliver more trustworthy, analyses-ready information:

  • Strategic and Selective Information Governance Program – A strong data governance model ensures data can be trusted, correlated and integrated. This is a foundational step and will take standardizing and mastering key entities and attributes. Tip: the key enabling technology is Master Data Management (MDM).
  • Multi-Stream Data Correlation – Together with MDM, “Big Data” technology and processes enable the inclusion and further correlation of data from a variety of streams, without the prejudice of a predefined data schema (see the sketch after this list).
  • Collaborative Process and Partnership – From years of lessons learned, we’ve noticed that none of the above will move the needle much at all if implemented in isolation. A collaborative process with the sole purpose of fostering a close partnership between IM engineers/architects, data scientists, and the business is what differentiates success from failure. As the organization finds new “nuggets of insight,” the EIM team’s role is to put the necessary structure in place to capture the required data systematically and then embed it in the organization’s DNA.
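To make the first two initiatives more concrete, here is a minimal sketch of how a governed master well identifier can be used to correlate data from separate streams. It assumes Python with pandas, and every system, table, and column name is hypothetical; a real MDM implementation would sit on dedicated tooling rather than a script.

```python
# Minimal sketch (pandas assumed): correlate silo data streams on a master well ID.
import pandas as pd

# MDM crosswalk: every source system's well identifier maps to one governed master ID.
mdm_xref = pd.DataFrame({
    "master_well_id": ["W-001", "W-001", "W-002"],
    "source_system":  ["drilling_db", "production_db", "drilling_db"],
    "source_well_id": ["DRL-17", "PRD-9234", "DRL-22"],
})

# Two silo data streams, each keyed by its own local identifier.
drilling = pd.DataFrame({"source_well_id": ["DRL-17", "DRL-22"],
                         "lateral_length_ft": [7500, 9800]})
production = pd.DataFrame({"source_well_id": ["PRD-9234"],
                           "cum_oil_90d_bbl": [41000]})

def to_master(df, system):
    """Attach the governed master ID so streams can be correlated."""
    xref = mdm_xref[mdm_xref["source_system"] == system]
    return df.merge(xref, on="source_well_id", how="left")

# Join the streams on the master ID instead of fragile local keys.
combined = (to_master(drilling, "drilling_db")
            .merge(to_master(production, "production_db"),
                   on="master_well_id", how="left",
                   suffixes=("_drl", "_prd")))
print(combined[["master_well_id", "lateral_length_ft", "cum_oil_90d_bbl"]])
```

The design choice is the point: each silo keeps its local keys, but analyses always join on the governed master ID, so new data streams can be added without reworking every downstream study.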

New analytics are positively changing how we produce and manage oil and gas fields. Companies that invest in getting their EIM foundation right will lead the race against their competition.

Disclosure:

For help defining and implementing an EIM strategy, please contact us.
With Petroleum Engineers, Geoscientists, Data Scientists and Enterprise Information Architects on the Certis team, we help companies design and implement EIM solutions that support their business goals. For more information on our services, please email us at info@certisinc.com.

Why Connecting Silos With Better IM Architecture Is Important

If you work in an oil and gas company, then you know the functional divides. We are all familiar with the jokes about geologists vs. engineers. We laugh and even create our own. But jokes aside, oil and gas companies operate in silos, and with reason.

But while organizational silos may be necessary to excel and maintain standards of excellence, collaboration and connection across the silos are crucial for survival.

For an energy company to produce hydrocarbons from an asset, it needs all the departments to work together (geoscience, engineering, finance, land, supply chain, etc.). This requires sharing detailed information and collaborating beyond meeting rooms and email attachments. But the reality in many oil and gas companies today is different: functional silos extend into information silos.

Connected Silos Are Good. Isolated Silos Are Bad

In an attempt to connect silos, “Asset Teams” or “Matrix” organizations are formed and incentive plans are carefully crafted to share goals between functions. These are great strides, but no matter the organizational structure or the incentive provided, miscommunications, delays, and poor information hand-over are still commonplace. Until we solve the problem of seamless information sharing, the gap between functional departments will persist, because we are human and we rationalize our decisions differently. This is where technology and automation (if architected correctly) can play a role in closing the gap between the silos.

Asset team members and supporting business staff have an obligation to share information not only through meetings and email attachments but also by organizing and indexing asset files throughout the life of the asset. Fit-for-purpose IM architecture has a strategic role to play in closing the gap between the functional silos.

Connecting Functional Silos With IM Takes Vision & Organizational Commitment 

Advancements in IM (Information Management) and BPMS (Business Process Management Systems) can easily close a big part of the remaining gap. But many companies have not been successful in doing so, despite significant investments in data and process projects. There can be many reasons for this; I share with you two of the most common pitfalls I come across:

  • Silo IM projects or systems – Architecting and working on IM projects within one function without regard to the impact on other departments. I have seen millions of dollars spent to solve isolated geoscience data needs without accounting for the impact on engineering and land departments, or spent on exploration IM projects without regard to the appraisal and development phases of the asset. Quite often, organizations do not take the time to look at the end-to-end processes and their impact on the company’s goals. As a result, millions of dollars are spent on IM projects without bringing the silos any closer. Connecting silos through an IM architecture requires a global vision.
  • Lack of commitment to enterprise standards – If each department defines and collects information according to its own needs without regard for the company’s needs, it is up to other departments to translate and reformat. This often means rework and repetitive verification whenever information reaches a new departmental ‘checkpoint’.

The above pitfalls can be mitigated by recognizing the information dependencies and commonalities between departments then architecting global solutions based on accepted standards and strong technology. It takes a solid vision and commitment.

For a free consultation on how to connect silos effectively, please schedule your appointment with a Certis consultant. Email us at info@certisinc.com or call us on 281-377-5523.

Part 3 of 3: Are we progressing? Oil & Gas Data Management Journey: The 2000s

The 1990s shopping spree for applications produced a spaghetti of links between databases and applications while also chipping away at petro professionals’ effective time with manual data entry. Then a wave of mega M&As hit the industry in the late 1990s and early 2000s.

Mega M&As (mergers and acquisitions) continued into the first part of the 2000s, bringing with them, at least for those on the acquiring side, a new level of data management complexity.

With mega M&As, the acquiring companies inherit more databases and systems, and many physical boxes upon boxes of data. This influx of information proved to be too much at the outset, and companies struggled, and continue to struggle, to check the quality of the technical data they’d inherited. Unknown at the time, the data quality issues present at the outset of these M&As would have lasting effects on current and future data management efforts. In some cases they gave rise to lawsuits that were settled for millions of dollars.

Early 2000s

Companies started to experiment with the Internet.  At that time, that meant experimenting with simple reporting and limited intelligence on the intranet.  Reports were still mostly distributed via email attachments and/or posted in a centralized network folder.

I am convinced that it was the Internet that necessitated cleaning technical data and key header information, for two reasons: 1) Web reports forced integration between systems, as business users wanted data from multiple silo databases on one page. More often than not, real-time integration could not be realized without cleaning the data first. 2) Reports on the web linked directly to databases exposed more “holes” and multiple “versions” of the same data; it revealed how necessary it was to have only ONE VERSION of information, and that it had better be the truth.

The majors were further ahead, but in many other E&P companies engineers were still integrating technical information manually, taking a day or more to get a complete view and understanding of their wells; Excel was mostly the tool of choice. Theoretically, with these new technologies, it should have been possible to automate and instantaneously give a 360-degree view of a well, field, basin and what have you. In practice it was a different story because of poor data quality. Many companies started data cleaning projects; some efforts were massive, in the tens of millions of dollars, and involved merging systems from many past acquisitions.

In the USA, in addition to the Internet, the collapse of Enron in October 2001 and the Sarbanes–Oxley Act enacted on July 30, 2002 forced publicly traded oil and gas companies to document their operations and finances and make them more transparent. Data management professionals were busy implementing their understanding of SOX in the USA. This required tighter definitions and processes around data.

Mid 2000s

By the mid-2000s, many companies had started looking into data governance. Sustaining data quality was now at the forefront. The need for both sustainable data quality and data integration gave rise to Well Master Data Management initiatives. Projects on well hierarchy, data definitions, data standards, data processes and more were all evolving around reporting and data cleaning projects. Each company worked on its own standards, sharing success stories from time to time. The Energistics, PPDM and DAMA organizations came in handy but were not fully relied on.

Late 2000s

When working on sustaining data quality, one runs into the much-debated question of who owns the data. While for years the IT department tried to lead the “data management” efforts, it was not fit to clean technical oil and gas data alone; it needed heavy support from the business. However, the engineers and geoscientists did not feel it was their priority to clean “company-wide” data.

CIOs and CEOs started realizing that separating data from systems is a better proposition for E&P. Data lives forever while systems come and go. We started seeing a movement toward a data management department, separate and independent from IT but working closely with it. A few majors made this move in the mid-2000s with good success stories; others started in the late 2000s, first by having a Data Management Manager reporting to the CIO (and maybe a dotted line to a business VP), then reporting directly to the business.

Who would staff a separate data management department? You guessed it; resources came from both the business and IT. In the past, each department or asset had its own team of technical assistants (“Techs”) who would support its data needs (purchase, clean, load, massage, etc.). Now many companies are seeing a consolidation of “Techs” into one data management department supporting many departments.

Depending on how the DM department is run, this can be a powerful model if it is truly run as a service organization with the matching sense of urgency that E&P operations see. In my opinion, this could result in cheaper, faster and better data services for the company, and a more rewarding career path for those who are passionate about data.

In late 2008 and throughout 2009, gas prices fell, more so in the USA than in other parts of the world. Shale gas had caught up with demand and was exceeding it. In April 2010, we woke up to witness one of the largest offshore oil spill disasters in history: BP’s Macondo well blew out and was gushing oil.

Companies that had put all their bets on gas fields or offshore fields had no appetite for data management projects. For those well diversified or more focused on onshore liquids, data management projects were either full speed ahead or business as usual.

2010 to 2015 …

Companies that had enjoyed the high oil prices since 2007 started investing heavily in “digital” oilfields. More than 20 years had passed since the majors started this initiative (I was on this type of project with Schlumberger for one of the majors back in 1998), but now it was more justifiable than ever. Technology prices have come down, system capacities are up, network reliability is strong, wireless connections are reasonably steady, and more. All have come together like a perfect storm to resurrect the “smart” field initiatives like never before. Even the small independents were now investing in this initiative. High oil prices were justifying the price tag (multiple millions of dollars) on these projects. A good part of these projects is in managing and integrating real-time data streams and intelligent calculations.

Two more trends appeared in the first half of the 2010s:

  • Professionalizing petroleum data management. This seemed like a natural progression now that data management departments are in every company. The PPDM organization has a competency model that is worth looking into. Some of the majors have their own models that are tied to their HR structure. The goal is to reward a DM professional’s contribution to the business’ assets. (Also please see my blog on the MSc in Petroleum DM.)
  • Larger companies are starting to experiment with and harness the power of Big Data, and the integration of structured with unstructured data. Metadata and managing unstructured data have become more important than ever.

Both trends have tremendous contributions that are yet to be fully harnessed. The Big Data trend in particular is nudging data managers to start thinking of more sophisticated “analysis” than they did before, although one could argue that the technical assistants who helped engineers with some analysis were also nudging toward data analytics initiatives.

In December 2015, the oil price collapsed more than 60% from its peak.

But to my friends’ disappointment, standards are still being defined. Well hierarchy, while it seems simple to business folks, will seemingly require the intervention of the UN to get automated and running smoothly across all asset types and locations. With the data quality commotion, some data management departments are a bit detached from operations reality and take too long to deliver.

This concludes my series on the history of Petroleum Data Management. Please add your thoughts; I would love to hear your views.

For Data Nerds

  1. Data ownership has now come full circle, from the business to IT and back to business.
  2. The rise of shale and coal-bed methane properties and the fast evolution of field technologies are introducing new data needs. Data management systems and services need to stay nimble and agile. The old way of taking years to come up with a usable system is too slow.
  3. Data cleaning projects are costly, especially when cleaning legacy data, so prioritizing and having a complete strategy that aligns with the business’ goals are key to success. Well-header data is a very good place to start, but aligning with what operations really need will require paying attention to many other data types, including real-time measurements.
  4. When instituting governance programs, having a sustainable, agile and robust quality program is more important than temporarily patching problems based on a specific system.
  5. Tying data rules to business processes, starting at the wellspring of the data, is prudent and makes for sustainable solutions.
  6. Consider outsourcing all your legacy data cleanups if they take resources away from supporting day-to-day business needs. Legacy data cleaning outsourced to specialized companies will almost always be faster, cheaper and more accurate.
  7. Consider leveraging standardized data rules from organizations like PPDM instead of building them from scratch, and consider adding to the PPDM rules database as you define new ones. When rules are standardized, sharing and exchanging data becomes easier and more cost-effective (a minimal example of an automated rule check follows this list).
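To illustrate point 7, below is a minimal sketch of what automated well-header rule checks can look like, assuming Python with pandas. The rules shown are simplified hypothetical examples, not actual PPDM rules; in practice the rule definitions would come from your governance catalog or the PPDM rules database.

```python
# Minimal sketch: run simple, declarative data rules against well-header records.
import pandas as pd

wells = pd.DataFrame({
    "uwi":         ["42-123-45678", "42-123-45679", None],
    "surface_lat": [31.9, 47.2, 29.5],
    "surface_lon": [-102.1, -102.3, -95.4],
    "spud_date":   pd.to_datetime(["2014-03-01", "2014-05-10", None]),
})

# Each rule is a name plus a check returning True for records that pass.
rules = [
    ("UWI is populated",               lambda df: df["uwi"].notna()),
    ("Latitude within Texas bounds",   lambda df: df["surface_lat"].between(25.8, 36.5)),
    ("Spud date is populated",         lambda df: df["spud_date"].notna()),
]

# Evaluate every rule against every record and report the violations.
for name, check in rules:
    failed = wells[~check(wells)]
    print(f"{name}: {len(failed)} violation(s)")
```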

E&P Companies Looking to New Ways to Deliver Data Management Services to Improve Efficiency and Transparency

Effective data management, specifically in the exploration and production (E&P) business, has a significant positive impact on the operational efficiency and profitability of oil and gas companies. Upstream companies are now realizing that a separate “Data Management” or “Data Services” department is needed in addition to the conventional IT department. These departments’ key responsibilities are to “professionally” and “effectively” manage E&P technical data assets worth millions, and in some cases billions, of dollars.

Traditional Data Management Processes Cannot Keep up with Today’s Industry Information Flow 

Currently, day-to-day “data management” tasks in the oil and gas industry are directed and partially tracked using Excel spreadsheets, emails and phone calls. One of the companies I visited last month was using Excel to validate receipt of seismic data against contracts and POs, e.g. all surveys and all their associated data. Another used Excel to maintain a list of all wireline log data ordered by petrophysicists in a month to compare against end-of-month invoices.

Excel might be adequate if an E&P company is small and has little ambition to grow. However, the larger a company’s capital (and ambitions), the more information and processes are involved in managing the life cycle of data and documents. Consider the more than 20,000 drilling permits issued per year in Texas alone. When trying to manage this much information with a spreadsheet, some tasks are bound to fall through the cracks.

Senior managers are more interested than ever in the source, integrity and accuracy of the information that affects and influences their HSE and financial statements.
Providing senior managers with such information requires transparent data management processes that are clearly defined, repeatable and verifiable, and that allow managers to identify, evaluate and address or alleviate any obvious or potential risks… before they become a problem. Preferably, all delivered efficiently and cost-effectively.

Choosing the Right E&P Data Management Workflow Tools

It’s tempting to stay with the old way of doing things – the “way we have always done it” – because you have already established a certain (and personal) rhythm of working, inputting and managing data. Even the promise of improved profitability and efficiency is often not enough to convince people to try something new. But the advantages of new workflow tools and programs cannot and should not be underestimated.

For example, a workflow tool can help automate the creation of data management tasks, log and document technical metadata, track data-related correspondence, and alert you to brewing issues. When all is said and done, the data management department would be set for growth and for handling a greater workload without skipping a beat. Growing by adding more people is not sustainable.

So, where to start?

There are multiple data management workflow tools available from a variety of different vendors, so how do you know which one will work best for your company? You will need to ensure that your workflow tool or software is able to do the following (a minimal sketch of such a task-and-alert model follows the list):

  • Keep detailed technical documentation of incoming data from vendors, thereby minimizing duplication of work associated with “cataloging” or “tagging”;
  • Integrate with other systems in your organization such as seismic, contracts, accounting, etc., including proprietary software programs;
  • Allow sharing of tasks between data managers;
  • Enable collaboration and discussion to minimize scattered email correspondence; and,
  • Automatically alert others of issues such as requests that still need to be addressed.
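As a rough illustration of the task tracking and alerting such a tool automates (instead of a spreadsheet), here is a minimal sketch in Python; the class, fields, and tasks are all hypothetical and stand in for what a commercial workflow product would provide.

```python
# Minimal sketch: a data management task with metadata tags and an overdue alert.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class DataTask:
    description: str                              # e.g. a load or QC request
    owner: str                                    # data manager responsible
    due: date
    tags: dict = field(default_factory=dict)      # technical metadata / catalog tags
    done: bool = False

def overdue_alerts(tasks: list[DataTask], today: date) -> list[str]:
    """Flag open tasks past their due date so nothing falls through the cracks."""
    return [f"OVERDUE: {t.description} (owner: {t.owner}, due {t.due})"
            for t in tasks if not t.done and t.due < today]

tasks = [
    DataTask("Verify seismic survey delivery against PO", "J. Smith",
             date.today() - timedelta(days=3), {"data_type": "seismic"}),
    DataTask("Reconcile wireline log orders with invoices", "A. Lee",
             date.today() + timedelta(days=5), {"data_type": "logs"}),
]

for alert in overdue_alerts(tasks, date.today()):
    print(alert)
```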


What Impact Does Big Data Technology Bring To Oil and Gas?

Dealing with the massive influx of information gathered from exploration projects or real-time gauges at established fields is pushing traditional data-management architecture to its limits in the oil and gas industry. More sensors, from 4-D seismic or from fiber optics in wells, widen the gap between data-capture advancements and the traditional ways of managing and analyzing data. It is the challenge of managing the sheer volume of collected data, and the need to sift through it in a timely fashion, that Big Data technologies promise to help us solve. This was just one of the suggestions on the table at the Data Management Workshop I attended in Turkey earlier this month.

For me, one of the main issues with the whole Big Data concept within the oil and gas industry is that, while it sounds promising, it has yet to deliver the tangible returns that companies need to see in order to prove its worth. To overcome this dilemma, Big Data vendors such as Teradata, Oracle, and IBM should consider demonstrating concrete new examples of real-life oil & gas wins. By new I mean challenges that are not possible to solve with traditional data architecture and tools. Vendors should also be able to offer Big Data technology at a price that makes it viable for companies to “try” it and experiment.

The oil and gas industry is notoriously slow to adopt new software technology, particularly when it comes to anything that tries to take the place of traditional methods that have already proven to work, unless its value is apparent. To quote my good friend: “we operate with fat margins; we don’t feel the urgency.” However, E&P companies should put their creative hats on and work alongside Big Data technology vendors. Big Data may just be the breakthrough that we need to make a tangible step-change in how we consume and analyse subsurface and surface data with agility.

If either side, vendors or E&P companies, fails to deliver, Big Data becomes a commercial white elephant and is doomed to very slow adoption.

At the workshop we had Oracle, Teradata, and IBM all showing interesting tools. However, they showed examples from other industries and occasionally referred to examples that could be solved with conventional data technology. They left the audience still wondering!

One Big Data example that is relevant and hits home was presented by CGG. CGG used pattern recognition (on Teradata technology) to find all logs that exhibit a specific pattern a petrophysicist may be interested in. This type of analysis requires scanning through millions of log curves, not just the metadata that traditional architecture had bound us to. It opens up new horizons for serendipity and, who knows, maybe for new discoveries.
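As a rough illustration of the idea, and not CGG’s actual method, the sketch below scans a synthetic log curve for intervals that correlate strongly with a target shape. It assumes Python with NumPy; a production system would run a similar search in parallel across millions of curves.

```python
# Minimal sketch: sliding-window correlation to find a target shape in a log curve.
import numpy as np

def find_pattern(curve: np.ndarray, template: np.ndarray, threshold: float = 0.9):
    """Return indices where the log curve locally resembles the template."""
    w = len(template)
    t = (template - template.mean()) / (template.std() + 1e-9)
    hits = []
    for i in range(len(curve) - w + 1):
        window = curve[i:i + w]
        z = (window - window.mean()) / (window.std() + 1e-9)
        corr = float(np.dot(z, t) / w)          # normalized cross-correlation
        if corr >= threshold:
            hits.append(i)
    return hits

# Synthetic gamma-ray curve with a "funnel-shaped" interval planted in it.
rng = np.random.default_rng(0)
template = np.linspace(120, 40, 30)             # the shape the petrophysicist seeks
curve = rng.normal(80, 5, 500)
curve[200:230] = template + rng.normal(0, 2, 30)

print("Pattern found near depth indices:", find_pattern(curve, template))
```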


What More Can Be Done To Reduce Well Failure & Downtime? Predictive Analytics.

Today’s high oil prices make every production moment crucial and well downtime costlier than ever. When a well fails, money sits underneath the earth’s surface, but you cannot get to it. In addition, you have equipment and a crew draining money out of your pocket while you wait to replace a critical component. Ideally, you wouldn’t wait. You would be ready for equipment failure.

Example: one operator reported that downtime causing an average production loss of 400 bbl per day is normal practice. If we assume a minimum margin of $50 per bbl, that is roughly $7.3 million of uncaptured revenue in a year (400 bbl/day × $50/bbl × 365 days). That’s a hefty price tag. Oil companies need to ask themselves: “What more can be done? Have all measures been taken to keep downtime to a minimum?”

With high equipment costs, companies used to balk at owning spare equipment. Today, by contrast, some companies consider backups standard procedure. The trick is deciding on a balance between stockpiling backups and knowing what you really need ahead of time. I believe this balance can be achieved with “Automated Predictive Analytics”.

Predictive analytics compares incoming data from the field to expected or understood behaviors and trends to predict the future. It encompasses a variety of techniques from statistics, modeling and data mining that analyze current and historical facts to make future predictions.

Automated predictive analytics leverages systems to sift through large amounts of data and alert on issues. Automating predictive analytics means you can monitor and address ALL equipment on the critical path on a daily basis, or more frequently if your data permits. Automation steps up the productivity of your engineers by minimizing the need to search for problem wells; instead, your engineers can focus on addressing them (see the sketch below).
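Here is a minimal sketch of that kind of automated screening, assuming Python with pandas; the wells, rates, expected models, and tolerance are all hypothetical.

```python
# Minimal sketch: compare each well's actual daily rate to its expected trend
# and flag wells deviating beyond a tolerance.
import pandas as pd

daily = pd.DataFrame({
    "well":          ["W-001", "W-002", "W-003"],
    "actual_bopd":   [310, 95, 402],
    "expected_bopd": [330, 180, 410],   # from the well's decline/expected model
})

TOLERANCE = 0.15   # flag anything more than 15% below expectation

daily["deviation"] = (daily["expected_bopd"] - daily["actual_bopd"]) / daily["expected_bopd"]
problem_wells = daily[daily["deviation"] > TOLERANCE]

# Engineers review only the flagged wells instead of scanning the whole field.
for _, row in problem_wells.iterrows():
    print(f"ALERT {row['well']}: {row['deviation']:.0%} below expected rate")
```

The same pattern scales from three wells to thousands: the system does the sifting, and the engineer only sees the exceptions.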

If you are not already in predictive mode, these two cost-effective solutions can get you started on the right path.

1. Collect well and facility failure data (including causes of failure). Technology, processes and the right training make this happen. There are a few tools available off the shelf; you may already have them in house, so activate their use.

2. Integrate systems and data, then automate the analysis: expected well models, trends and thresholds need to be integrated with actual daily data flowing in from the field. “Workflow” automation tools on the market can exceed your expectations when it comes to integration and automating some of the analysis.

Example: One operator in North Dakota reported that 22 of its 150 producing wells in the Bakken have failed within the first two years due to severe scaling in the pump and production tubing. Analytics correlated rate and timing of failure with transient alkalinity spikes in the water analyses. The cause was attributed to fracturing-fluid flowback. (Journal of Petroleum Technology, March 2012).

In the above example, changes in production and pressure data would trigger the need to check water composition, which could in turn trigger an action for engineers to check the level of scale inhibitors used on the well before the pump fails. This kind of analysis requires data (and systems) integration.
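As a rough sketch of such a cross-domain trigger, the chained rules might look like the following; all thresholds and field names are hypothetical, and real rules would be calibrated by the operator’s engineers. The sketch assumes Python.

```python
# Minimal sketch: a production/pressure anomaly triggers a water-chemistry check,
# which in turn triggers a scale-inhibitor review before the pump fails.
def evaluate_well(rate_drop_pct: float, alkalinity_mg_l: float,
                  inhibitor_ppm: float) -> list[str]:
    """Chain simple rules across integrated production and water-analysis data."""
    actions = []
    if rate_drop_pct > 10:                       # unexplained decline vs. trend
        actions.append("Pull latest water analysis for this well")
        if alkalinity_mg_l > 500:                # transient alkalinity spike
            actions.append("Check scale-inhibitor program")
            if inhibitor_ppm < 25:               # dosage below target
                actions.append("Schedule inhibitor treatment before pump fails")
    return actions

# Example run with integrated daily data for one well.
for action in evaluate_well(rate_drop_pct=18, alkalinity_mg_l=640, inhibitor_ppm=12):
    print(action)
```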

One more point on well failure systems: too many equipment failures occur without proper knowledge of what went wrong. The rush to get the well producing again discourages what is sometimes seen as “low-priority research”. Yet this research could prevent future disruptions. By bringing the data together and using it to its full potential, companies can save money now and for years to come.