Category Archives: Information Management

Part 3 of 3: Are we progressing? Oil & Gas Data Management Journey: the 2000s

The 1990s shopping spree for applications produced a spaghetti of links between databases and applications, while also chipping away at the petro professional's effective time with manual data entry. Then a wave of mega M&As hit the industry in the late 90s and early 2000s.

Mega M&As (mergers and acquisitions) continued into the first part of the 2000s, bringing with them, at least for those on the acquiring side, a new level of data management complexity.

With mega M&As, the acquiring companies inherited more databases and systems, and boxes upon boxes of physical data. This influx of information proved to be too much at the outset, and companies struggled, and continue to struggle, to check the quality of the technical data they'd inherited. Unknown at the time, the data quality issues present at the outset of these M&As would have lasting effects on current and future data management efforts. In some cases they gave rise to lawsuits that were settled for millions of dollars.

Early 2000s

Companies started to experiment with the Internet.  At that time, that meant experimenting with simple reporting and limited intelligence on the intranet.  Reports were still mostly distributed via email attachments and/or posted in a centralized network folder.

I am convinced that it was the Internet that necessitated cleaning technical data and key header information, for two reasons: 1) Web reports forced integration between systems, as business users wanted data from multiple silo databases on one page; more often than not, real-time integration could not be realized without cleaning the data first. 2) Reports on the web linked directly to databases exposed more "holes" and multiple "versions" of the same data; it revealed how necessary it was to have only ONE VERSION of information, and that it had better be the truth.

The majors were further ahead, but at many other E&P companies engineers were still integrating technical information manually, taking a day or more to get a complete view and understanding of their wells; Excel was mostly the tool of choice. Theoretically, with these new technologies, it should have been possible to automate this and instantaneously give a 360-degree view of a well, field, basin and what have you. In practice it was a different story because of poor data quality. Many companies started data cleaning projects; some efforts were massive, costing tens of millions of dollars and involving the merging of systems from many past acquisitions.

In the USA, in addition to the Internet, the collapse of Enron in October 2001 and the Sarbanes–Oxley Act enacted on July 30, 2002 forced publicly traded oil and gas companies to document their operations and finances and provide better transparency into them. Data management professionals were busy implementing their understanding of SOX in the USA. This required tighter definitions and processes around data.

Mid 2000s

By the mid-2000s, many companies had started looking into data governance. Sustaining data quality was now at the forefront. The need for both sustainable quality data and data integration gave rise to Well Master Data Management initiatives. Projects on well hierarchy, data definitions, data standards, data processes and more were all revolving around reporting and data cleaning projects. Each company worked on its own standards, sharing success stories from time to time. Organizations such as Energistics, PPDM and DAMA came in handy but were not fully relied on.

Late 2000s

When working on sustaining data quality, one runs into the much-debated question of who owns the data. While for years the IT department tried to lead the "data management" efforts, they were not fit to clean technical oil and gas data alone; they needed heavy support from the business. However, the engineers and geoscientists did not feel it was their priority to clean "company-wide" data.

CIOs and CEOs started realizing that separating data from systems is a better proposition for E&P. Data lives forever while systems come and go. We started seeing a movement towards a data management department, separate and independent from IT but working closely with it. A few majors made this move in the mid-2000s with good success stories; others started in the late 2000s. First by having a Data Management Manager reporting to the CIO (perhaps with a dotted line to a business VP), then reporting directly to the business.

Who would staff a separate data management department? You guessed it; resources came from both the business and IT. In the past, each department or asset had its own team of technical assistants, "Techs," who would support its data needs (purchase, clean, load, massage, etc.). Now many companies are seeing a consolidation of Techs into one data management department supporting many departments.

Depending on how the DM department is run, this can be a powerful model, provided it is truly run as a service organization with a sense of urgency matching that of E&P operations. In my opinion, this could result in cheaper, faster and better data services for the company, and a more rewarding career path for those who are passionate about data.

In late 2008 and throughout 2009, gas prices fell, more so in the USA than in other parts of the world. Shale natural gas supply had caught up with demand and was exceeding it. In April 2010, we woke up to witness one of the largest offshore oil spill disasters in history: a BP well, Macondo, blew out and gushed oil.

Companies that had put all their bets on gas fields or offshore fields had no appetite for data management projects. For those well diversified or more focused on onshore liquids, data management projects were either full speed ahead or business as usual.

 2010 to 2015 ….

Companies that had enjoyed the high oil prices since 2007 started investing heavily in "digital" oilfields. More than 20 years had passed since the majors started this initiative (I was on this type of project with Schlumberger for one of the majors back in 1998), but now it was more justifiable than ever. Technology prices had come down, system capacities were up, network reliability was strong, wireless connections were reasonably steady, and more. All came together like a perfect storm to resurrect the "smart" field initiatives like never before. Even the small independents were now investing in this initiative. High oil prices were justifying the price tag (multiple millions of dollars) on these projects. A good part of these projects lies in managing and integrating real-time data streams and intelligent calculations.

Two more trends appeared in the first half of the 2010s:

  • Professionalizing petroleum data management. This seemed like a natural progression now that data management departments are in every company. The PPDM organization has a competency model that is worth looking into. Some of the majors have their own models that are tied to their HR structure. The goal is to reward a DM professional's contribution to the business's assets. (Also please see my blog on the MSc in Petroleum DM.)
  • Larger companies started to experiment with and harness the power of Big Data, and the integration of structured with unstructured data. Metadata and managing unstructured content have become more important than ever.

Both trends have tremendous contributions that are yet to be fully harnessed. The Big Data trend in particular is nudging data managers to start thinking of more sophisticated analysis than they did before, albeit one could argue that the technical assistants who helped engineers with some analysis were also nudging toward data analytics initiatives.

In December 2015, the oil price collapsed more than 60% from its peak.

But to my friend's disappointment, standards are still being defined. Well hierarchy, while it seems simple to the business folks, will seemingly require the intervention of the UN to get automated and running smoothly across all types and locations of assets. With the data quality commotion, some data management departments are a bit detached from the reality of operations and take too long to deliver.

This concludes my series on the history of Petroleum Data Management. Please add your thoughts; I would love to hear your views.

For Data Nerds

  1. Data ownership has now come full circle, from the business to IT and back to business.
  2. The rise of shale and coal-bed methane properties and the fast evolution of field technologies are introducing new data needs. Data management systems and services need to stay nimble and agile. The old way of taking years to come up with a usable system is too slow.
  3. Data cleaning projects are costly, especially when cleaning legacy data, so prioritizing and having a complete strategy that aligns with the business's goals are key to success. Well-header data is a very good place to start, but aligning with what operations really need will require paying attention to many other data types, including real-time measurements.
  4. When instituting governance programs, having a sustainable, agile and robust quality program is more important than temporarily patching problems based on a specific system.
  5. Tying data rules to business processes, starting from the wellspring of the data, is prudent for sustainable solutions.
  6. Consider outsourcing all your legacy data cleanups if they take resources away from supporting day-to-day business needs. Legacy data cleaning outsourced to specialized companies will always be faster, cheaper and more accurate.
  7. Consider leveraging standardized data rules from organizations like PPDM instead of building them from scratch, and consider adding to the PPDM rules database as you define new ones. When rules are standardized, sharing and exchanging data becomes easier and more cost effective (a minimal sketch of such a rule appears after this list).
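
To make that last point concrete, here is a minimal sketch of what a shared set of well-header data rules could look like in code. The rule names, field names and thresholds below are illustrative assumptions, not actual PPDM rules.

```python
# Minimal sketch of standardized well-header data rules (illustrative only;
# the rule names, fields and thresholds are assumptions, not actual PPDM rules).
from datetime import date

def check_well_header(rec):
    """Return a list of rule violations for one well-header record."""
    issues = []

    # Rule 1: API/UWI should be a 10- or 14-digit value (common US convention).
    api = str(rec.get("api", "")).replace("-", "")
    if not (api.isdigit() and len(api) in (10, 14)):
        issues.append("API number is not a 10- or 14-digit value")

    # Rule 2: surface coordinates must fall within a plausible range.
    lat, lon = rec.get("surface_lat"), rec.get("surface_lon")
    if lat is None or not -90 <= lat <= 90:
        issues.append("surface latitude missing or out of range")
    if lon is None or not -180 <= lon <= 180:
        issues.append("surface longitude missing or out of range")

    # Rule 3: spud date cannot be in the future.
    spud = rec.get("spud_date")
    if spud is not None and spud > date.today():
        issues.append("spud date is in the future")

    return issues

# Every record, from any system, runs through the same shared rules.
wells = [
    {"api": "42-123-45678", "surface_lat": 28.9, "surface_lon": -98.1,
     "spud_date": date(2012, 5, 14)},
    {"api": "123", "surface_lat": None, "surface_lon": -98.1, "spud_date": None},
]
for w in wells:
    print(w["api"], "->", check_well_header(w) or "passes all rules")
```

The value is less in the code itself than in the fact that every system, and ideally every company, applies the same rule definitions.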

Part 2: Are we progressing? Oil & Gas Data Management Journey

In my previous blog, I looked back at the 1960s, 70s and 80s, and how E&P technical data was generated and stored. For those three decades, data management was virtually exclusively on paper. As I looked at the 90s, I found them packed with events that affected all areas of the data value chain, from generation to consumption to archival.

Early 90s: Driving Productivity Forward

The early 90s continued one dominant theme from the late 1980s: the relentless drive for increased productivity throughout the business. This productivity focus coincided with three technological advancements that made their way into the industry. First, dropping hardware costs and growing capacity meant that computers, with meaningful scientific programs on them, became part of every office. Second, the increased capabilities of networks and client/server architectures opened up new possibilities by centralizing and sharing one source of data. Third, the proven success of relational databases and SQL offered sophisticated ways to access and manipulate more data.

All this meant that, by the early 90s, engineers and the majority of geoscientists were able to do an increasing portion of their work on their own computers. At that time, the world of computing was divided in two: UNIX for G&G professionals and PCs for the rest. Despite the divide in technologies, increases in productivity were tangible. Technology had proven itself useful and helpful to the cause, and was here to stay.

Petroleum geoscience- and engineering-specific software applications started springing up in the market like Texas wildflowers in March. Although some companies built seismic and log interpretation software back in the 70s on Cray supercomputers and DEC minicomputers, not many could afford an $800,000 computer (yes, one computer, that is) with limited capacity. "I remember selling software on time share for CGG back in the 80s," my friend commented. "Companies had to connect to expensive supercomputers on extremely slow connections," he added. So when computers became affordable, with the right power for E&P technical applications, the software market flourished.

The industry was thirsty for software and absorbed all of what was produced on the market and then some; operators who could afford it created their own. The big service companies decided they were not going to miss out. Schlumberger acquired Geoquest in 1992 for its seismic data processing services and tools, then also acquired Finder, Eclipse and a long string of other applications.

The only problem with all these different software applications was that they existed standalone; each application had its own database and did not communicate with the others. As a result, working on each hydrocarbon asset meant multiple data entry points or multiple rounds of reformatting and re-loading. This informational and collaborative disconnect between the different E&P applications was chipping away at the very productivity and efficiency the industry was desperate to harness.

Nevertheless, the standardization of defining, capturing, storing and exchanging E&P data was starting to be of interest to many organizations. PPDM in Canada and later POSC in the USA (now Energistics) were formed in 1988 and 1990 respectively. PPDM's mission at the time was focused on creating an upstream data model that could be utilized by different applications. POSC's mission was broader: to develop a standardized E&P data model and data exchange standards.

Schlumberger had a solution for its own suite of applications; it offered both GeoFrame and Finder as answers to the data mess, with Finder being the master database that fed GeoFrame with information, while GeoFrame integrated the various software applications.

Mid-90s: Making Connections

In the mid-90s, Halliburton acquired Landmark Graphics and unveiled the OpenWorks platform for its suites of applications in April 1997 at the AAPG. Their market positioning? Integrated reservoir management and data management solutions. OpenWorks offered data integration similar to GeoFrame but with its own set of scientific software. GeoFrame and OpenWorks would butt heads for years to come, both promoting their vision of data management and integrated workflows. It seemed that the larger companies were either a Schlumberger or a Landmark shop.

In 1997, the OpenSpirit Alliance, funded by a consortium (Schlumberger, Shell and Chevron), was born, and interoperability was its mission. PrismTech was to develop and market an application integration framework that any company could utilize; it was to be open. The OpenSpirit platform was officially launched at the SEG in 1998.

Late 90s: Big Industry Changes

Come the late 90s, another drop in oil prices combined with other macroeconomic forces appeared to trigger a surge in "mega" M&A activity, starting with Exxon acquiring Mobil and BP acquiring Amoco in 1998-1999, and then Conoco merging with Phillips in 2002. These mega acquisitions continued through the early 2000s.

All this M&A in the 90s added complexity to what was already a complex technical dataflow environment.

For the data nerds

  • In the 90s, the industry rapidly evolved from hand-written scout tickets and hand-drawn maps to electronic data.
  • The "E&P software spring" produced many silo databases. These databases often overlapped in what they stored, creating multiple versions of the same data.
  • The IT department's circle of influence was slowly but surely expanding to include managing E&P data. IT was building data systems, supporting them, uploading data to them and generating reports.
  • Engineers and geoscientists still kept their own versions of data, but now in MANY locations. While hardcopies were the most trusted form (perceived to be the most reliable), technical data was also stored on disks, network drives, personal drives and in various applications' databases and flat files. This compounded the data management problems of the years prior to the computerization of processes.
  • Relational databases and SQL proved to be valuable to the industry. But it was expensive to support a variety of databases; many operators standardized and requested systems on Oracle (or SQL Server later).
  • Systems not on relational databases either faded into the background or were converted to relational databases accepted by operators.
  • Two standard data models emerged, PPDM and POSC (now Energistics), along with one data integration platform, OpenSpirit (now part of the TIBCO suite).
  • Geos and engineers validated and cleaned their own data (sometimes with the help of Geotechs or technical assistants) prior to their analyses.

Stay tuned for the Millennium, and please add your own memories (and of course please correct me where I am not accurate…).

Are we progressing? Oil & Gas Data Management Journey…

Last month, I had dinner with a long-term friend who is now part of a team that sets strategic technical plans for his E&P employer. Setting strategies requires a standardized view of technical & financial data across all assets, in this case, multinational assets around the world. This data is required at both granular and helicopter level.  One of the things he mentioned was “I have to start by fixing data standards. I am surprised how little progress data-management standards have made since the POSC days in the mid 90s.”

How did Data Management evolve in oil & gas? Are we repeating mistakes? Are we making any progress? Here is what my oil and gas friends and I remember in this first part of a three-part series.  Please join me on this journey down memory lane and add your own thoughts.

The 1960s & 70s

Perhaps we can call these times the mainframe times. Mainframes started to make their way into our industry around the mid-60s. At that time, they were mostly used to maintain accounting data. Like most data at the time, E&P accounting data was manually entered into systems, and companies employed large data-entry staffs to input it. Any computation on the data was done by feeding programs in on punch cards.

Wireline logs (together with seismic data) were among the very first technical data to require the use of computers, mainly at the service providers' computer centers and then at the large offices of the largest major operators. A friend of mine at Schlumberger remembers the first log data processing center in Houston opening around 1970. In the mid-70s, more oil-city offices (Midland, Oklahoma City, etc.) established regional computing centers. Here, wireline log data, including petrophysical and geological processing, was "translated" from films into paper log graphics for clients.

A geophysicist friend remembers using mainframe computers to read seismic tapes in the mid-70s. He said, “Everything was scheduled. I would submit my job, consisting of data and many Punch Cards, into boxes to get the output I needed to start my interpretation. That output could be anything from big roll of papers for seismic sections to an assemblage of data that could then be plotted. Jobs that took 4 hours  to process on a mainframe in the 70’s are instantaneous today”

The Society of Exploration Geophysicists (SEG) introduced and published the SEG-Y data formatting standard in 1975. SEG-Y formats are still utilized today.

The need for a standard well-numbering process became apparent as early as 1956. In the USA, regulatory agencies started assigning API numbers to wells in the late 60s. The concept of a worldwide, global well ID is still being discussed today, with some organizations making good progress.
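
For readers who have not handled API numbers, the 14-digit US form breaks down into a state code, a county code, a unique well number, and sidetrack/event suffixes. The helper below is a small illustrative sketch of that structure only; it does not validate against any regulator's actual assignments.

```python
def parse_api14(api):
    """Split a 14-digit US API well number into its standard components
    (illustrative sketch; no state/county lookup tables included)."""
    digits = api.replace("-", "")
    if not (digits.isdigit() and len(digits) == 14):
        raise ValueError("expected a 14-digit API number")
    return {
        "state_code": digits[0:2],        # e.g. 42 = Texas
        "county_code": digits[2:5],
        "unique_well_id": digits[5:10],
        "sidetrack_code": digits[10:12],
        "event_sequence": digits[12:14],
    }

print(parse_api14("42-255-31234-00-00"))
```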

In the second half of the 70s, pocket calculators and minicomputers made their way into the industry. With that, some computations could be done at the office, or on the logging truck in the field, without the need for mainframes.

The 1980s

Early 80s. With the proven success of 3D seismic, introduced by Exxon, large and special projects started heavily processing 3D seismic on mainframes. However, the majority of technical data was still mainly on paper. Wireline logs were still printed on paper for petrophysicists to add their handwritten interpretations. Subsurface maps were still drawn, contoured and colored by hand. Engineering data came in from the field on paper and was then recorded in running paper tables. A reservoir engineer remembers: "We hired data clerks to read field paper forms and write the data into tables (also on paper)."

As personal computers (PCs) made their way into the industry, some large companies started experimenting, albeit the machines lacked the personal side, since PCs were numbered and located in a common area. Employees were only given occasional access to them. These were also standalone computers, not networked. Data transfer from one PC to another happened via floppy disk. It was during this time that engineers were first exposed to spreadsheets (boy, did they love those spreadsheets! I know I do).

Mid-80s. In March 1986, oil prices crashed, a 55% drop over a few days. In the three years following the crash, the industry shed staff the way cats shed hair: the number of petroleum staff dropped from approximately 1,000,000 to approximately 500,000.

[Chart: oil price]

Late 80s. But what seemed bad for the industry may have also done the industry a favor. The oil price crash may have actually accelerated the adoption of technology. With a lot less staff, companies were looking for ways to accomplish more with fewer people.

A geologist friend remembers using Zmap as early as 1988, which was the beginning of the move towards predominantly computer-based maps and technical data.

For data nerds: 

  • Engineers and geo professionals were responsible for maintaining their own data in their offices.
  • Although not very formal, copies of the data were maintained in centralized "physical" libraries. Data was very important in the "heat of the moment"; after the project was complete, that data was someone else's issue. Except there was no "someone else" yet.
  • This system produced many, many versions of the same data (or slight variations of it) all over. This data was predominantly kept on physical media, with some kept on floppy disks that were mostly maintained by individuals.
  • From the 60s through to the end of the 80s, we can say there were mostly two global standards, one for seismic data formatting (SEG-Y) and the other for log data (LAS, the Log ASCII Standard). Any other standards were country- or company-specific.

I would love to hear from you if you feel I have missed anything or if you can add to our knowledge of how technical E&P data was managed during the above period.

Stay tuned for the 90s …

Data and Processes are your two friends in fat or skinny margin times: some tools and ideas to weather low oil prices

Well, 2014 is ending with oil prices down and an upward trend in M&A activity. For those who are nearing retirement age, this is not all bad news. For those of us who are still building our careers and companies, well, we have uncertain times ahead of us. This got me asking: is it a double whammy to have the most knowledgeable staff retiring when oil prices are low? I think it is.

At the very least, companies will no longer have the "fat margins" to forgive errors or to sweep costly mistakes under the rug! While costs must be watched closely, with the right experience some costs can be avoided altogether. That experience is about to retire.

E&P companies that have already invested (or are investing) in putting in place the right data and processes, capturing knowledge into their analysis and opportunity prioritization, will be better equipped to weather low prices. On the other hand, companies that have been making money "despite themselves" will be living on their savings, hoping to weather the storm. If the storm lasts too long or is too strong, they will not survive.

Controlling cost the right way

Blanket cost cutting across all projects is not good business. For example, some wells cannot withstand being shut down or having repairs deferred; you would risk losing the wells altogether. Selectively prioritizing capital and operational costs with higher margins and controllable risks, however, is good business. Supporting this good business practice is a robust foundation of systems, processes and data practices that empowers a company to watch the important metrics and act fast!

We also argue that without relevant experience some opportunities may not be recognized or fully realized.

Here are some good tools to weather these low prices:

Note that this is a quick list of things that you can do NOW for just a few tens or a few hundred thousand dollars (rather than the million-dollar projects that may not be agile in these times).

  • If you do not have this already, consider implementing a system that will give you a 360-degree view of your operations and capital projects. Systems like these need to have the capability to bring in data from various data systems, including spreadsheets. We love the OVS solutions (http://ovsgroup.com/). It is lean, comes with good processes right out of the box and can be implemented to get you up and running within 90 days.
  • When integrating systems you may need some data cleaning. Don't let that deter you; in less than a few weeks you can get some data cleaned. Companies like us, Certisinc.com, will take thousands of records, validate, de-duplicate, correct errors, complete what is missing and give you a pristine set. So consider outsourcing data cleaning efforts. By outsourcing you can have 20, maybe 40, data admins going through thousands of records in a matter of a few days (see the sketch after this list for a simple picture of what that work involves).
  • Weave the about-to-retire knowledge into your processes before it is too late. Basically, understand their workflow and decision-making process, take what is good, and implement it in systems, processes and automated workflows. It takes a bit of time to discover them and put them in place, but now is the time to do it. Examples are ESP surveillance, well failure diagnosis, identifying sweet fracking spots, etc. There are thousands upon thousands of workflows that can be implemented to forge almost error-proof procedures for new-on-the-job staff.
  • Many of your resources are retiring; consider hiring retirees back, but if they would rather be on the beach than sitting around the office after 35+ years of work, then leverage systems like OGmentors™ (http://youtu.be/9nlI6tU9asc).
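
As a simple picture of what that cleanup work involves, whether outsourced or done in-house, here is a minimal pandas sketch that standardizes, de-duplicates and flags incomplete well records. The column names and sample values are assumptions for illustration.

```python
# Minimal data-cleaning sketch (column names and values are illustrative assumptions).
import pandas as pd

raw = pd.DataFrame({
    "api":       ["42-255-31234", "4225531234", "42-255-31235", None],
    "well_name": ["Smith 1H", "SMITH 1H ", "Smith 2H", "Jones 3H"],
    "operator":  ["Acme Oil", "Acme Oil", "Acme Oil", "Acme Oil"],
})

clean = raw.copy()
# 1) Standardize formats so true duplicates line up.
clean["api"] = clean["api"].str.replace("-", "", regex=False)
clean["well_name"] = clean["well_name"].str.upper().str.strip()
# 2) De-duplicate on the standardized keys.
clean = clean.drop_duplicates(subset=["api", "well_name"])
# 3) Flag records that still need manual completion.
clean["needs_review"] = clean["api"].isna()

print(clean)
```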

In short, timely and efficient access to the right and complete data, and the right knowledge woven into systems and processes, are just as important, if not more important, during skinny-margin times.

Good luck. I wish you all Happy Holidays and a Happy 2015.

A master’s degree in Petroleum Data Management?

I had dinner with one of the VPs of a major E&P company last week. One of the hot topics on the table was universities agreeing to offer an MSc in Petroleum Data Management. Great idea, I thought! But it brought so many questions to my mind.

 

Considering where our industry is with information management (way behind many other industries), I asked who will define the syllabus for this PDM MSc? The majors? The service companies? Small independents? Boutique specialized service providers? IMS professors? All of the above?

 

Should we allow ourselves to be swayed by the majors and the giant service companies? With their funding they certainly have the capability to influence the direction, but is this the correct (or only) direction? I can think of a few areas where the majors' implementation of DM would be overkill for small independents; they would get bogged down with processes that make it difficult to be agile, the very agility that made the independents successful with unconventionals.

 

What should the prerequisite be? A science degree? Any science degree? Is a degree required at all? I know at least a couple of exceptional managers who are managing data management projects and setting up DM from scratch for oil and gas companies; they manage billions of dollars' worth of data. They do not have a degree; what happens to them?

 

It takes technology to manage data, and an MSc in Petroleum Data Management is no different. But unlike petroleum engineering and geoscience technologies, the technology to manage data progresses fast; what is valid today may not still be valid next year! Are we going to teach technology, or are we teaching about oil and gas data? This is an easy one, at least in my mind: we need both, but more about the data itself and how it is used to help operators and their partners be safer, find more and grow. We should encourage innovation to support what companies need.

 

PPDM (http://www.ppdm.org/) is still trying to define some standards, POSC (the Petrotechnical Open Standards Consortium) came and went, Energistics (http://www.energistics.org/) is here and is making a dent, and OpenSpirit (http://www.tibco.com/industries/oil-gas/openspirit) made a dent but is no longer non-profit. Will there be standards that are endorsed by the universities?

The variation from company to company in how data management is implemented today is large. Studying and comparing the different approaches would make a good thesis, I think…

I am quite excited about the potential of this program and will be watching developments with interest.

 

E&P Companies Looking to New Ways to Deliver Data Management Services to Improve Efficiency and Transparency

Effective data management, specifically in the exploration and production (E&P) business, has a significant positive impact on the operational efficiency and profitability of oil and gas companies. Upstream companies are now realizing that a separate "Data Management" or "Data Services" department is needed in addition to the conventional IT department. These departments' key responsibilities are to "professionally" and "effectively" manage E&P technical data assets worth millions, in some cases billions, of dollars.

Traditional Data Management Processes Cannot Keep up with Today’s Industry Information Flow 

Currently, day-to-day "data management" tasks in the oil and gas industry are directed and partially tracked using Excel spreadsheets, emails and phone calls. One of the companies I visited last month was using Excel to validate receipt of seismic data against contracts and POs, e.g. all surveys and all their associated data. Another used Excel to maintain a list of all wireline log data ordered by petrophysicists in a month to compare against end-of-month invoices.

Excel might be adequate if an E&P company is small and has little ambition to grow. However, the larger a company's capital (and ambitions), the more information and processes are involved in managing the data and document life cycle. Consider that more than 20,000 drilling permits are issued a year in Texas alone. Trying to manage this much information with a spreadsheet, some tasks are bound to fall through the cracks.
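
The kind of reconciliation those spreadsheets were doing is easy to picture in code. The sketch below compares a month of log-data orders against the vendor's invoice lines and flags anything unmatched or mispriced; the data frames and column names are assumptions for illustration.

```python
# Minimal reconciliation sketch: data orders vs. end-of-month invoice lines.
# (Column names and values are illustrative assumptions.)
import pandas as pd

orders = pd.DataFrame({
    "order_id": ["LOG-101", "LOG-102", "LOG-103"],
    "well":     ["Smith 1H", "Smith 2H", "Jones 3H"],
    "amount":   [12000, 8500, 9900],
})
invoice = pd.DataFrame({
    "order_id": ["LOG-101", "LOG-103", "LOG-104"],
    "invoiced": [12000, 10400, 4300],
})

recon = orders.merge(invoice, on="order_id", how="outer", indicator=True)

# Items ordered but never billed, or billed but never ordered.
print(recon[recon["_merge"] != "both"])

# Matched items whose invoiced amount does not agree with the order.
matched = recon[recon["_merge"] == "both"]
print(matched[matched["amount"] != matched["invoiced"]])
```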

Senior managers are more interested than ever in the source, integrity and accuracy of the information that affects and influences their HSE and financial statements.
Providing senior managers with such information requires transparent data management processes that are clearly defined, repeatable and verifiable, and that allow managers to identify, evaluate and address or alleviate any obvious or potential risks before they materialize. Preferably all delivered efficiently and cost-effectively.

Choosing the Right E&P Data Management Workflow Tools

It's tempting to stay with the old way of doing things, the "way we have always done it," because you have already established a certain (and personal) rhythm of working, inputting and managing data. Even the promise of improved profitability and efficiency is often not enough to convince people to try something new. But the advantages of new workflow tools and programs cannot and should not be underestimated.

For example, a workflow tool can help automate the creation of data management tasks, log and document technical metadata, track data-related correspondence, and alert you to brewing issues. When all is said and done, the data management department would be set for growth and for handling a larger workload without skipping a beat. Growing by adding more people is not sustainable.

So, where to start?

There are multiple data management workflow tools available from a variety of different vendors, so how do you know which one will work best for your company? You will need to ensure that your workflow tool or software is able to do the following:

  • Keep detailed technical documentation of incoming data from vendors, thereby minimizing duplication of work associated with “cataloging” or “tagging”;
  • Integrate with other systems in your organization, such as seismic, contracts, accounting, etc., including proprietary software programs;
  • Allow sharing of tasks between data managers;
  • Enable collaboration and discussion to minimize scattered email correspondence; and,
  • Automatically alert others of issues, such as requests that still need to be addressed (a minimal sketch of such a task record follows this list).
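
The task record behind such a tool does not have to be complicated. The sketch below shows one hypothetical shape it might take, covering the capabilities listed above (technical metadata, assignment, discussion, automatic alerts); the field names are assumptions, not any vendor's schema.

```python
# Hypothetical shape of a data-management workflow task (not any vendor's schema).
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DataTask:
    title: str                                      # e.g. "Load Q2 seismic deliverables"
    data_type: str                                  # seismic, logs, production, ...
    source_system: str                              # vendor or internal source of the data
    assigned_to: str                                # data manager responsible for the task
    due: date
    metadata: dict = field(default_factory=dict)    # technical tags used for cataloging
    discussion: list = field(default_factory=list)  # comments instead of scattered emails
    status: str = "open"

    def is_overdue(self, today=None):
        """Drives automatic alerts on requests that still need to be addressed."""
        return self.status == "open" and (today or date.today()) > self.due

task = DataTask("Load Q2 seismic deliverables", "seismic", "VendorCo FTP",
                "j.doe", due=date(2016, 7, 1), metadata={"survey": "Eagle Ford 3D"})
print(task.is_overdue(today=date(2016, 7, 15)))   # True -> raise an alert
```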

 

 

Change Coming Our Way: Prepare Data Systems to Store Laterals' Details

During the past decade, oil and gas companies have aimed their spotlight on efficiency. But should this efficiency come at the expense of data collection? Many companies are now realizing that it shouldn't.

Consider the increasingly important re-fracturing effort. It turns out, in at least one area, that only 45% of re-fracs were considered successful when the candidates were selected using production data alone. However, when additional information (such as detailed completion, production, well integrity and reservoir characterization data) was also used, a success rate of 80% was observed. See the snip below from the Society of Petroleum Engineers paper SPE 134330 by M.C. Vincent (2010).

[Snip from SPE 134330]

Prepare data systems to store details otherwise left in files.

Measurements while drilling (MWD), mud log and cuttings analysis, and granular frac data are some of the data that can be collected without changing the drilling or completion operations workflow or the efficiency already achieved. This information, when acquired in the field, will make its way to petrophysicists and engineers. Most likely it ends up in reports, folders and project databases. Many companies do not think about storing this data beyond that.

We argue, however, that to take advantage of this opportunity, archival databases should also be expanded to store this information in a structured manner. This information should also funnel its way to various analytical tools. This practice will allow technical experts to dive straight into analyzing the well data instead of diverting a large portion of their time to looking for and piecing data together. Selecting the best re-frac candidates in a field will require the above well data and then some. Many companies are starting to study those opportunities.

Good data practices to consider

To maximize economic success from re-stimulation (or from first stimulation for that matter) consider these steps that are often overlooked:

  1. Prepare archival databases to specifically capture and retain data from the lateral portions of wells. This data may include cuttings analysis, mud log analysis, rock mechanics analysis, rock properties, granular frac data, and well integrity data (a minimal schema sketch follows this list).
  2. Don't stop at archiving the data; expose it to engineers and make it readily accessible to statistical and artificial intelligence tools. One of those tools is TIBCO Spotfire.
  3. Integrate, integrate, integrate. Engineers depend on ALL data sources (internal, partner, third-party, the latest research and media) to find new correlations and possibilities. Analytic platforms that can bring together a variety of data sources and types should be made available. Consider Big Data platforms.
  4. Clean, complete and accurate data will integrate well. If you are not there yet, get a company that will clean the data for you.
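
As a concrete starting point for item 1, here is a minimal sketch of what an archival table for lateral-level data might look like, using an in-memory SQLite database purely for illustration. The table and column names are assumptions, not a recommended standard.

```python
# Minimal sketch of an archival table for lateral-level data (illustrative only;
# the table and column names are assumptions, not a recommended standard).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE lateral_detail (
        api14             TEXT,     -- well identifier
        stage_number      INTEGER,  -- frac stage along the lateral
        measured_depth_ft REAL,     -- location of the stage
        proppant_lbs      REAL,     -- granular frac data per stage
        fluid_bbl         REAL,
        cuttings_note     TEXT,     -- mud log / cuttings analysis summary
        integrity_flag    TEXT,     -- well integrity observation, if any
        PRIMARY KEY (api14, stage_number)
    )
""")
conn.execute(
    "INSERT INTO lateral_detail VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("42255312340000", 1, 11250.0, 250000.0, 5200.0, "calcite-filled fractures", None),
)
for row in conn.execute("SELECT api14, stage_number, proppant_lbs FROM lateral_detail"):
    print(row)
```

Once the data lands in a structured table like this rather than in a report folder, exposing it to Spotfire or any other analytics tool (item 2) becomes a simple connection rather than a data-hunting exercise.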

Quality, granular well data is the cornerstone of increasing re-frac success in horizontal wells, and of other processes as well. Collecting data and managing it well, even if you do not need it immediately, is an exercise in discipline, but it is also a strategic decision that must be made and committed to from the top down, whether you are drilling to "flip" or developing for the long term. Data is your asset.

 

Capture The Retiring Knowledge

The massive amount of knowledge that is retiring, or about to retire in the next five years, will bring some companies to a new low in productivity. The U.S. Bureau of Labor Statistics reported that 60% of job openings from 2010 to 2020 across all industries will result from retirees leaving the workforce, and it's estimated that up to half of the current oil & gas workforce could retire in the next five to ten years.

For companies that do not have their processes defined and woven into their everyday culture and systems, relying instead on their engineers' and geoscientists' knowledge, the retirement of these professionals will cause a 'brain drain,' potentially costing these companies real downtime and real money.

One way to minimize the impact of “Brain Drain” is by documenting a company’s unique technical processes and weaving them into training programs and, where possible, into automating technology. Why are process flows important to document? Process flow maps and documents are the geographical maps that give new employees the direction and the transparency they need, not only to ramp up a learning curve faster, but also to repeat the success that experienced resources deliver with their eyes closed.

For example, if a reservoir engineer decides to commission a transient test, equipment must be transported to the location, the well is shut in and gauges are run, the pressure buildup is measured, the data is interpreted, and BHP is extrapolated and kh is calculated.
The above transient test process, if well mapped, would consist of: 1) decisions, 2) tasks/activities, 3) a sequence flow, 4) responsible and accountable parties, 5) clear inputs and outputs, and 6) possible reference materials and safety rules. These process components, when well documented and defined, allow a relatively new engineer to easily run the operation from start to end without downtime.
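
For readers unfamiliar with the last step of that example, the permeability-thickness product is commonly taken from the slope of the semilog (Horner) buildup plot; in conventional oilfield units, kh = 162.6 q B μ / m, where m is the slope in psi per log cycle. The snippet below is a simplified illustration of that single calculation, not a full well-test workflow, and the input values are made up.

```python
# Simplified illustration of one buildup-analysis step: permeability-thickness
# from the Horner semilog slope, in oilfield units (not a full well-test workflow).
def kh_from_horner_slope(q_stb_d, b_rb_stb, mu_cp, slope_psi_per_cycle):
    """kh (md-ft) = 162.6 * q * B * mu / m, with m the semilog slope in psi per log cycle."""
    return 162.6 * q_stb_d * b_rb_stb * mu_cp / slope_psi_per_cycle

# Made-up example inputs.
kh = kh_from_horner_slope(q_stb_d=350.0, b_rb_stb=1.2, mu_cp=0.8, slope_psi_per_cycle=45.0)
print(f"kh = {kh:.0f} md-ft")
```

Capturing even one formula like this inside a documented, automated step is exactly the kind of knowledge transfer the process map enables.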

When documenting this knowledge, some of the rules will make their way into contracts and sometimes into technology enablers, such as software and workflow applications. The retiring knowledge can easily be woven into the rules, reference materials, the sequence flow, and information systems.

Documenting technical processes is one of the tools to minimize the impact of a retiring workforce. Another equally important way to capture and preserve knowledge is to ensure that data in personal network drives is accumulated, merged with mainstream information, and put in context early enough for the retiring workforce to verify its accuracy before they leave.

A company's processes and data form the foundation of a competitive edge, cut back on rework and errors, and help in quickly identifying new opportunities.

To learn more about our services on Processes or Data contact us at info@certisinc.com

In-hand data, both internal and external, can be the difference between millions of dollars gained and millions of dollars lost

The Eagle Ford… Bakken… Permian… Montney… Booming plays with over 50 active operators each. Each operator brings its own development strategy and its own philosophy. While some operators appear successful in every unconventional play they touch, others always seem to come last to the party, or to miss the party altogether. Why?

 Information. With all things being equal (funding availability, access to geoscience and engineering expertise), one variable becomes timely access to quality information and understanding what the data is telling you, faster than the competition.

 “Few if any operators understand how (shale) behaves, why one fracture stage within a well produces 10 times more oil or gas than its neighbor, or how to find sweet spots to overcome inequity.”  Colorado School of Mines Rhonda Duey

Over 60 operators in the Eagle Ford alone. Studying the strategy and philosophy of each operator in a play would, and should, yield insight into what works, what does not work and why. Landing depth, fracking parameters, lateral length, flow-back design, etc. All may matter; all may contribute to better production rates, better ultimate recoveries and better margins. And yes, each play really is unique.

 WHERE TO LOOK?

 A wealth of information from each operator is buried in shareholders’ presentations, their reported regulatory data, and published technical papers. Collecting relevant information and organizing it correctly will enable engineers and geo staff to find those insights. Today, engineers and geologists cannot fully take advantage of this information as it’s not readily consumable and their time is stretched as it is.

 We all agree, taking advantage of Shale plays is not only about efficiency, but it is also about being effective. The fastest and cheapest way to effectiveness is to build on what others have proven to work and avoid what is known not to work.

 Here are some thoughts on how to leverage external data sources to your advantage:

  • Understand the goal of the study from engineers and geoscientists. Optimized lateral completion? Optimized fracking? Reducing drilling costs? All of the above?
  • Implement “big data” technology with a clear vision of the output. This requires integration between data systems to correlate data from various external sources with various internal sources.
  • Not ready to invest in “big data” initiatives or don’t have the time? Outsource information curation (gathering and loading data) for focused studies.
  • Utilize data scientists and analytical tools to find trends in your data, then qualify findings with solid engineering and geoscience understanding.
  • Consider a consortium among operators to exchange key data otherwise not made available publicly. If all acreage in the play is already leased, then the competition among operators for land is over. Shouldn't play data then be shared to maximize recovery from the reservoirs?
  • Build a culture of “complete understanding” by leveraging various sources of external data. 

Better Capital Allocation With A Rear-View Mirror – Look Back

In front of you are two choices: tie up $100 million with a low return, or overspend by $50 million with no reliable return. Which option do you choose? Neither is acceptable.

“It seemed we were either tying up cash and missing on other opportunities, or overspending where we should not have in the first place,” said a former officer of a US independent. “We heard great stories at presentations from engineers and geoscientists as they were painting the picture to executives to fund their programs. But at the end of the year, the growth was never where we had expected it to be.”

Passing on poor investments through better allocation of capital greatly enhances company performance. To achieve this, executives needed a system to look back and evaluate what each asset team had predicted compared with the actual performance of the asset. They needed a look-back system, where hindsight is always 20/20.

A look-back system is beneficial not only for better capital allocation, but also to identify and understand the reasons for low or high performance of an investment.

Implementing a look-back system is data intensive. The data needed, however, has typically already been collected and stored as part of everyday operations. For example, most companies have an AFE system that captures the predicted economics of well projects. All companies keep a system (or systems) to capture production volumes and accounting data for both revenue and costs. The data for evaluating an investment after the fact is already available, for the most part. The reason executives did not have a look-back system was buried in their processes: in how each asset's economic returns were calculated and allocated.

Here are a few tips to consider when implementing a look-back system for an oil and gas company:

  • Start with the end. Identify the key performance indicators (KPIs) required to measure assets' performance.
  • Standardize how economics are prepared by each asset team. Only then will you be able to compare apples to apples.
  • Allocate costs and revenue back to each well. Granularity matters and is key. With granularity, mistakes of lumping costs under a wrong category can be avoided and easily rectified.
  • Missing information for the KPIs? Introduce processes to capture and enter the data into the company's systems (historically this information may live in presentation slides and personal spreadsheets).
  • If well information is scattered across systems, data integration will be needed. Well, AFE, production, reserves and accounting data will need to be correlated (see the sketch after this list).
  • Automate the generation of information for executives. Engineers and geoscientists should not have to prepare reports for management at the end of each month or quarter. Their time is FAR better spent making money and making assets work harder for their investors.
  • Know that it is a change to the culture. Leadership support must be behind the initiative and well communicated to all stakeholders.
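
To make the data-integration point concrete, here is a minimal sketch of the core look-back join: predicted AFE economics matched against actuals allocated back to each well, then rolled up by asset. The column names, values and KPIs are assumptions for illustration.

```python
# Minimal look-back sketch: predicted AFE economics vs. actuals allocated per well.
# (Column names, values and KPIs are illustrative assumptions.)
import pandas as pd

afe = pd.DataFrame({          # predicted at project approval, from the AFE system
    "well_id": ["W-1", "W-2", "W-3"],
    "asset":   ["Eagle Ford", "Eagle Ford", "Permian"],
    "predicted_capex": [6.0, 5.5, 7.0],   # $MM
    "predicted_npv":   [4.0, 3.0, 5.0],   # $MM
})
actuals = pd.DataFrame({      # from production and accounting systems, allocated per well
    "well_id": ["W-1", "W-2", "W-3"],
    "actual_capex": [6.8, 5.4, 9.1],
    "actual_npv":   [2.9, 3.2, 1.5],
})

lookback = afe.merge(actuals, on="well_id")
lookback["capex_variance_pct"] = 100 * (lookback["actual_capex"] / lookback["predicted_capex"] - 1)
lookback["npv_delivery_pct"] = 100 * lookback["actual_npv"] / lookback["predicted_npv"]

# Roll well-level KPIs up to asset teams for the next capital-allocation round.
print(lookback.groupby("asset")[["capex_variance_pct", "npv_delivery_pct"]].mean().round(1))
```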

"Once we implemented a look-back system, we funded successful teams more and reduced the budget for underperforming assets, then we utilized the freed-up money to grow. We were a better company all around." – Former officer of a large independent.