
Are we progressing? Oil & Gas Data Management Journey…

Last month, I had dinner with a long-time friend who is now part of a team that sets strategic technical plans for his E&P employer. Setting strategies requires a standardized view of technical and financial data across all assets, in this case, multinational assets around the world. This data is required at both the granular and the helicopter level. One of the things he mentioned was: “I have to start by fixing data standards. I am surprised how little progress data-management standards have made since the POSC days in the mid-90s.”

How did Data Management evolve in oil & gas? Are we repeating mistakes? Are we making any progress? Here is what my oil and gas friends and I remember in this first part of a three-part series.  Please join me on this journey down memory lane and add your own thoughts.

The 1960s & 70s

Perhaps we can call these the mainframe times. Mainframes started to make their way into our industry around the mid-60s. At that time, they were mostly used to maintain accounting data. Like most data of that era, E&P accounting data was manually entered into systems, and companies employed large data-entry staffs to input it. Any computation on that data was done by feeding programs in on “punch cards”.

Wireline logs (together with seismic data) were among the very first technical data types that required the use of computers, mainly at the service providers’ computer centers and then at the large offices of the biggest major operators. A friend of mine at Schlumberger remembers the first log data processing center in Houston opening around 1970. In the mid-70s, more oil-city offices (Midland, Oklahoma City, etc.) established regional computing centers. There, wireline log data was processed, including petrophysical and geological processing, and “translated” from film into paper log graphics for clients.

A geophysicist friend remembers using mainframe computers to read seismic tapes in the mid-70s. He said, “Everything was scheduled. I would submit my job, consisting of data and many punch cards, into boxes to get the output I needed to start my interpretation. That output could be anything from big rolls of paper for seismic sections to an assemblage of data that could then be plotted. Jobs that took 4 hours to process on a mainframe in the 70s are instantaneous today.”

The Society of Exploration Geophysicists (SEG) introduced and published the SEG-Y data formatting standard in 1975. The SEG-Y format is still in use today.
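For readers who have never looked inside one, a SEG-Y file starts with a 3,200-byte EBCDIC textual header, followed by a 400-byte binary header, and then the traces. Here is a minimal sketch, using only the Python standard library, of reading a few fields from that binary header; a real workflow would more likely use a maintained reader such as segyio, and the file name in the usage note is hypothetical.

```python
# A minimal sketch of peeking at a SEG-Y binary file header with the
# standard library only. Byte offsets follow the SEG-Y rev 1 layout:
# 3200-byte EBCDIC textual header, then a 400-byte binary header.
import struct

def read_segy_binary_header(path):
    with open(path, "rb") as f:
        f.seek(3200)                # skip the EBCDIC textual header
        binary_header = f.read(400)
    # Big-endian 2-byte integers at fixed offsets within the binary header
    sample_interval_us = struct.unpack(">h", binary_header[16:18])[0]
    samples_per_trace  = struct.unpack(">h", binary_header[20:22])[0]
    format_code        = struct.unpack(">h", binary_header[24:26])[0]
    return {
        "sample_interval_us": sample_interval_us,
        "samples_per_trace": samples_per_trace,
        "data_sample_format_code": format_code,  # e.g. 1 = IBM float, 5 = IEEE float
    }

# Example usage (hypothetical file name):
# print(read_segy_binary_header("line_001.sgy"))
```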

The need for a standard well identification numbering process became apparent as early as 1956. In the USA, regulatory agencies started assigning API numbers to wells in the late 60s. The concept of developing globally unique well identifiers is still being discussed today, with some organizations making good progress.

In the second half of the 70s, pocket calculators and minicomputers made their way into the industry. With them, some computations could be done at the office, or on the logging truck in the field, without the need for mainframes.

The 1980s

Early 80s. With the proven success of 3D seismic, introduced by Exxon, large and special projects started heavily processing 3D seismic on mainframes. However, the majority of technical data was still mainly on paper. Wireline logs were still printed on paper for petrophysicists to add their handwritten interpretations. Subsurface maps were still drawn, contoured, and colored by hand. Engineering data came in from the field on paper and was then recorded in a running paper table. A reservoir engineer remembers: “We hired data clerks to read the paper field forms and write the data into tables (also on paper).”

As personal computers (PCs) made their way into the industry, some large companies started experimenting with them, albeit without much of the “personal” side: PCs were numbered and located in a common area, and employees were only given occasional access to them. These were also standalone computers, not networked; data transfer from one PC to another happened via floppy disk. It was during this time that engineers were first exposed to spreadsheets (boy, did they love those spreadsheets! I know I do).

Mid-80s. In March 1986, oil prices crashed, a 55% drop over a few days. In the three years following the crash, the industry shed staff the way cats shed hair: the number of petroleum industry staff dropped from approximately 1,000,000 to approximately 500,000.

[Figure: oil price]

Late 80s. What seemed bad for the industry may also have done it a favor: the oil price crash may have actually accelerated the adoption of technology. With far fewer staff, companies were looking for ways to accomplish more with less.

A geologist friend remembers using Zmap as early as 1988, which was the beginning of the move towards predominantly computer-based maps and technical data.

For data nerds: 

  • Engineers and geo professionals were responsible for maintaining their own data in their offices.
  • Although not very formally, copies of the data were maintained in centralized “physical” libraries. Data was all-important in the “heat of the moment”; after the project was complete, that data became someone else’s issue. Except there was no “someone else” yet.
  • This system produced many, many versions of the same data (or slight variations of it) scattered everywhere. The data was predominantly kept on physical media, with some on floppy disks that were mostly maintained by individuals.
  • From the 60s through to the end of the 80s, we can say there were mostly two global standards: one for seismic data formatting, SEG-Y, and the other for log data, LAS (Log ASCII Standard); a minimal LAS parsing sketch follows this list. Any other standards were country- or company-specific.
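As a taste of how simple LAS was designed to be, here is a minimal sketch of reading the curve mnemonics and data rows from an LAS 2.0 file with only the Python standard library. It ignores wrapped files and other edge cases; in practice a maintained reader such as lasio would be the safer choice, and the file name in the usage note is hypothetical.

```python
# A minimal sketch of pulling curve mnemonics and data rows out of an
# LAS 2.0 file using only the standard library.
def read_las(path, null_value=-999.25):
    curves, rows, section = [], [], None
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue                      # skip blanks and comments
            if line.startswith("~"):
                section = line[1].upper()     # V, W, C, P, A ...
                continue
            if section == "C":                # ~Curve section: MNEM.UNIT  DATA : DESCRIPTION
                mnemonic = line.split(".")[0].strip()
                curves.append(mnemonic)
            elif section == "A":              # ~ASCII data section
                values = [float(v) for v in line.split()]
                rows.append([None if v == null_value else v for v in values])
    return curves, rows

# Example usage (hypothetical file name):
# curves, rows = read_las("well_42.las")
# print(curves)   # e.g. ['DEPT', 'GR', 'RHOB', ...]
```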

I would love to hear from you if you feel I have missed anything or if you can add to our knowledge of how technical E&P data was managed during the above period.

Stay tuned for the 90s …

What Impact Does Big Data Technology Bring To Oil and Gas?

Dealing with the massive influx of information gathered from exploration projects or from real-time gauges at established fields is pushing the traditional data-management architecture in the oil and gas industry to its limits. More sensors, from 4D seismic or from fiber optics in wells, widen the gap between advances in data capture and the traditional ways of managing and analyzing data. It is the challenge of managing the sheer volume of collected data, and the need to sift through it in a timely fashion, that Big Data technologies promise to help us solve. This was just one of the suggestions on the table at the Data Management Workshop I attended in Turkey earlier this month.

For me, one of the main issues with the whole Big Data concept within the oil and gas industry is that, while it sounds promising, it has yet to deliver the tangible returns that companies need to see in order to prove its worth. To overcome this dilemma, Big Data vendors such as Teradata, Oracle, and IBM should consider demonstrating concrete new examples of real-life oil and gas wins. By new I mean challenges that are not possible to solve with traditional data architecture and tools. Vendors should also be able to offer Big Data technology at a price that makes it viable for companies to “try” it and experiment.

The oil and gas industry is notoriously slow to adopt new software technology, particularly anything that tries to take the place of traditional methods that have already proven to work, unless its value is apparent. To quote my good friend: “We operate with fat margins; we don’t feel the urgency.” However, E&P companies should put their creative hats on and work alongside Big Data technology vendors. Big Data may just be the breakthrough we need to make a tangible step change in how we consume and analyze subsurface and surface data with agility.

If either side, vendors or E&P companies, fails to deliver, Big Data becomes a commercial white elephant and is doomed to very slow adoption.

At the workshop, Oracle, Teradata, and IBM all showed interesting tools. However, they showed examples from other industries and occasionally referred to problems that can already be solved with conventional data technology. They left the audience still wondering!

One Big Data example that is relevant and hits home was presented by CGG. CGG used pattern recognition (on Teradata technology) to find all logs that exhibit a specific pattern a petrophysicist may be interested in. This type of analysis requires scanning through millions of log curves, not just the metadata that traditional architectures had bound us to. It opens up new horizons for serendipity and, who knows, maybe for new discoveries.
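To make the idea concrete, here is a minimal sketch of curve-shape search: slide a template pattern along a log curve and score each window by normalized cross-correlation. This only illustrates the concept, not CGG’s or Teradata’s actual implementation; the names, data, and threshold are made up.

```python
# A minimal sketch of curve-shape search over a single log curve.
import numpy as np

def find_similar_intervals(curve, template, threshold=0.9):
    """Return start indices where the log curve resembles the template shape."""
    w = len(template)
    t = (template - template.mean()) / (template.std() + 1e-12)
    hits = []
    for i in range(len(curve) - w + 1):
        window = curve[i:i + w]
        z = (window - window.mean()) / (window.std() + 1e-12)
        score = float(np.dot(z, t)) / w        # normalized cross-correlation
        if score >= threshold:
            hits.append(i)
    return hits

# Example usage with synthetic data:
# gr_curve = np.random.rand(5000)              # stand-in for a gamma-ray curve
# template = gr_curve[1200:1260]               # a shape a petrophysicist flagged
# print(find_similar_intervals(gr_curve, template))
```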

 

In-hand data, both internal and external, can be the difference between millions of dollars gained or millions of dollars lost

The Eagle Ford… Bakken… Permian… Montney… Booming plays with over 50 active operators each. Each operator brings its own development strategy and its own philosophy. While some operators appear successful in every unconventional play they touch, others always seem to come last to the party, or to miss the party altogether. Why?

Information. With all things being equal (funding availability, access to geoscience and engineering expertise), one variable becomes timely access to quality information, and understanding what the data is telling you faster than the competition.

“Few if any operators understand how (shale) behaves, why one fracture stage within a well produces 10 times more oil or gas than its neighbor, or how to find sweet spots to overcome inequity.” – Rhonda Duey, Colorado School of Mines

There are over 60 operators in the Eagle Ford alone. Studying the strategy and philosophy of each operator in a play should yield insight into what works, what does not work, and why. Landing depth, fracking parameters, lateral length, flow-back design, and more: all may matter, and all may contribute to better production rates, better ultimate recoveries, and better margins. And yes, each play really is unique.

 WHERE TO LOOK?

A wealth of information from each operator is buried in shareholder presentations, reported regulatory data, and published technical papers. Collecting the relevant information and organizing it correctly will enable engineers and geoscience staff to find those insights. Today, engineers and geologists cannot fully take advantage of this information because it is not readily consumable and their time is stretched as it is.

We can all agree that taking advantage of shale plays is not only about efficiency; it is also about being effective. The fastest and cheapest way to effectiveness is to build on what others have proven to work and avoid what is known not to work.

 Here are some thoughts on how to leverage external data sources to your advantage:

  • Understand the goal of the study from engineers and geoscientists. Optimized lateral completion? Optimized fracking? Reducing drilling costs? All of the above?
  • Implement “big data” technology with a clear vision of the output. This requires integration between data systems to correlate data from various external sources with various internal sources.
  • Not ready to invest in “big data” initiatives or don’t have the time? Outsource information curation (gathering and loading data) for focused studies.
  • Utilize data scientists and analytical tools to find trends in your data, then qualify the findings with solid engineering and geoscience understanding (a minimal sketch follows this list).
  • Consider a consortium among operators to exchange key data otherwise not made available publicly. Once all acreage in a play is leased, the land competition among operators is over; shouldn’t play data then be shared to maximize recovery from the reservoirs?
  • Build a culture of “complete understanding” by leveraging various sources of external data. 
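As an example of the kind of correlation work mentioned above, here is a minimal sketch that joins curated external completion data with reported production and looks for first-pass trends. The file names and column names are hypothetical placeholders, and a real study would need far more careful normalization (per-lateral-foot rates, vintage, geology).

```python
# A minimal sketch of joining external completion data with reported
# production and screening for simple trends. Inputs are hypothetical.
import pandas as pd

# Completion parameters curated from investor presentations and filings,
# plus reported production by well (e.g. from regulatory data).
completions = pd.read_csv("eagle_ford_completions.csv")  # api, operator, lateral_length_ft, frac_stages, landing_zone
production  = pd.read_csv("eagle_ford_production.csv")   # api, first_12mo_boe

wells = completions.merge(production, on="api", how="inner")

# Which completion parameters move with first-year production?
print(wells[["lateral_length_ft", "frac_stages", "first_12mo_boe"]].corr())

# First-pass screen: average first-year production by operator and landing zone
print(wells.groupby(["operator", "landing_zone"])["first_12mo_boe"].mean()
           .sort_values(ascending=False).head(10))
```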

Bring It On Sooner & Keep It Lifting Longer. Solutions To Consider For ESPs (Or Any Field Equipment)

Set on average 6,000 feet below the surface, electrical submersible pumps (a.k.a. ESPs) provide artificial lift for liquid hydrocarbons in more than 130,000 wells worldwide.
Installing the correct ESP system for the well, installing it precisely, and monitoring it carefully are paramount to reducing the risk of a premature end to an ESP’s life cycle. But the increasingly long laterals of horizontal wells, along with rapid drilling in remote areas, are creating challenges for efficient operations and for the ESP’s life span. Implementing the correct processes and data strategies will, undoubtedly, be the cheapest and fastest way to overcome some of these challenges.

1- Implement A Process Flow That Works, Break The Barriers

When a decision is made to install an ESP in a well, a series of actions is triggered: preparing specifications, arranging for power, ordering equipment, scheduling operations, testing, and finally installing it in the well, to name a few. These actions and decisions involve individuals from multiple departments within the organization as well as external vendors and contractors. Together they form a process flow that is sometimes inefficient and drawn out, delaying revenue. In addition, processes sometimes fall short, causing premature pump failures that interrupt production and raise operational costs.
Research into many industry processes shows that communication challenges are one of the root causes of delays, according to LMA Consulting Group Inc. Furthermore, communication challenges increase exponentially when actions change hands and departments. A good workflow will cut across departmental barriers to focus on the ultimate goal: making sure engineering, procurement, logistics, accounting, vendors, contractors, and field operations are all on the same page and have a simple, direct means to communicate effectively. More importantly, the workflow will allow the team to share the same level of urgency and keep stakeholders well informed with correct information about their projects. If you are still relying on phones, paper, and emails to communicate, look for workflow technology that will bring all parties onto one page.

A well-thought-out workflow, coupled with fit-for-purpose technology and data, is critical, not only to ensure consistently successful results each time but also to minimize delays in revenue.
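As a toy illustration of what “one page” can mean in practice, here is a minimal sketch of recording an ESP installation workflow as explicit, time-stamped handoffs that every department can see and that make delays measurable. The stage names and fields are illustrative assumptions, not a prescribed industry process or a specific product.

```python
# A minimal sketch of tracking an ESP installation workflow as explicit,
# time-stamped handoffs shared by all departments.
from dataclasses import dataclass, field
from datetime import datetime

STAGES = [
    "specification", "power_arrangement", "equipment_order",
    "operations_scheduling", "testing", "installation",
]

@dataclass
class EspWorkflow:
    well: str
    history: list = field(default_factory=list)   # (stage, owner, timestamp)

    def advance(self, stage, owner):
        """Record a handoff to the next stage, with who owns it and when."""
        assert stage in STAGES, f"unknown stage: {stage}"
        self.history.append((stage, owner, datetime.utcnow()))

    def days_in_flight(self):
        """Elapsed days from the first recorded handoff to the latest one."""
        if len(self.history) < 2:
            return 0.0
        return (self.history[-1][2] - self.history[0][2]).total_seconds() / 86400

# Example usage:
# wf = EspWorkflow(well="WELL-042")
# wf.advance("specification", owner="production engineering")
# wf.advance("equipment_order", owner="procurement")
# print(wf.days_in_flight())
```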

2- ESP Rented Or Purchased, It Does Not Matter… QA/QC Should Be Part Of Your Process

Although ESPs are often rented and the vendor will switch out non-performing ones, ensuring that the right ESP is installed in a well should be an important step in the operator’s own process and procedures. Skipping this step means operators will incur the cost of shutdowns and the disturbance of reservoir conditions that might otherwise have stabilized, not to mention exposure to risks each time a well is re-entered.
More importantly, a thoughtful workflow ensures a safe and optimal life span for ESPs regardless of the engineers or vendors involved, especially in this age of mass retirement of knowledge.

At today’s oil prices of roughly $50 per barrel, interrupted production from a well making 1,000 barrels per day will cost an operator at least $250,000 in delayed revenue for a 5-day operation (1,000 bbl/d × 5 days × $50/bbl). Predictive and prescriptive analytics, in addition to efficient processes, can keep that interruption to a minimum, if not push it off altogether.

3- Know Why And How It Failed Then Improve Your Processes – You Need The Data And The Knowledge

One last point in this post: because ESPs consist of several components (a motor, a pump, a cable, elastomers, etc.), an ESP failure can be electrical, mechanical, thermal, or related to fluid/gas composition. Capturing and understanding the reasons for a failure, in a system that allows for effective data analysis, provides insight that can be carried forward to future wells and to monitoring systems. Integrating this knowledge into predictive, or even prescriptive, analytics to guide new engineers will have an effect on the operator’s bottom line. A few vendors in the market offer these kinds of technology; weaving the right technology, data, and processes together so they work in synergy is where the future is.
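To make the data side of this concrete, here is a minimal sketch of training a simple classifier on structured historical ESP run records and using it to flag likely failure modes across the current fleet. The file names, column names, and model choice are illustrative assumptions, not any particular vendor’s approach, and a real model would need proper validation.

```python
# A minimal sketch of learning from past ESP run histories to flag likely
# failure modes. All inputs and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

FAILURE_MODES = ["electrical", "mechanical", "thermal", "fluid_composition"]

# Historical run life data, one row per ESP run:
# run_days, motor_temp_f, intake_pressure_psi, gas_fraction, sand_rate,
# vibration, failure_mode
runs = pd.read_csv("esp_run_history.csv")
assert set(runs["failure_mode"]).issubset(FAILURE_MODES)

features = ["run_days", "motor_temp_f", "intake_pressure_psi",
            "gas_fraction", "sand_rate", "vibration"]

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(runs[features], runs["failure_mode"])

# Score the current fleet (same feature columns plus a "well" identifier)
fleet = pd.read_csv("esp_fleet_current.csv")
fleet["predicted_failure_mode"] = model.predict(fleet[features])
print(fleet[["well", "predicted_failure_mode"]].head())
```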

For help implementing these solutions, please contact our team at info@certisinc.com.


More Shale Data Should Equal More Production, Unless Your Data is an Unusable Mess

As the U.S. overtakes Russia in Oil & Gas production because of its unconventional fields, new operators flood the industry. Inevitably, competition increases. The need for predictable and optimized well performance is greater than ever. The fastest route to optimization is quality data that can be used to understand unconventional fields better and drive the flow of operations efficiently.

However, as more data pours in, the cracks in many E&P companies’ data management systems are exposed. Geoscientists and engineers are left to make their own integrations and correlations between disparate systems, and left digging through folders trying to find the documents that hold the knowledge.

Some of the trouble lies in new methods of analyzing vast arrays of data that were not considered as prominent in conventional fields. For example, geoscientists break shale down by geology, geochemistry, and geomechanics, and engineers now look at fracs through a microseismic lens. While this data was used in conventional fields before, the emphasis on it and the ways of analyzing it are different now; new parameters, such as TOC and brittleness, have emerged as key measures. When it comes to shale fields, the industry is still learning from the acquired data.

Well-organized, quality information that is easily found and flows efficiently through internal departments and supplying vendors will not only allow faster reaction to operational needs and opportunities; it will also translate into a better strategy for increasing EUR per well through a better understanding of the reservoirs.

How you take care of your data directly impacts your engineers’ and geoscientists’ efficiency and the speed at which they can find good production opportunities. Fast and efficient is the name of the game in the competitive unconventional world.

It is not enough to provide a place to store new unconventional information and flow it to analytical systems. While those are the first steps, they must fit into a holistic approach that takes integrated unconventional operational processes to the next level of efficiency.

Cut Search Time for Critical Documents from Days to Seconds. It is Time to Stop Digging in Folder Structures

It wasn’t long ago that geoscientists and petroleum engineers at one renowned oil company might spend days searching for documents. “Searching” meant digging through folders (as many as 1,500 of them!) and discerning whether a “found” file was an official report or only an earlier draft. To give you an idea, some critical HSE documents were buried as deep as the 13th level of sub-folders (and then the correct version still had to be selected!).

In this situation, emergency and critical decision cycle times were obviously lengthened by the difficulty of finding the “buried” technical documents. The average time to locate a document and validate its accuracy was calculated at three days.

When Certis arrived, the company’s folder system looked like an episode of “Hoarders”. The hoarder believes there is an organized system to his “madness”, but nobody else in the home can quite figure it out. Over the years, more than 2,000,000 documents had been amassed at this location, and that total was growing fast. As engineers and geoscientists floated in and out, the system fell victim to hundreds of interpretations. Unlike the hoarder’s goods, these documents contained vital information accumulated over years of studies and billions of dollars of data acquisition. Years of knowledge, literally buried.

In today’s competitive, fast-paced oil and gas operations, data is accumulating faster than ever, and decisions must be made faster than ever by petro-professionals who are already overextended. Compound that with the fact that a large portion of the knowledge sits with a workforce that may soon retire, and it becomes clear that oil and gas companies that want to stay exceptional and competitive cannot afford to waste petro-professionals’ time hunting for critical records.

So, how do you get to a point where your organization can locate the right document instantly? We believe it is all about the processes, technology, and people put in place (a cliché, but so true).

When Certis completed this project, the technical community could locate their documents within a few seconds using a “Google-like” search. More importantly, they were (and still are) able to locate the “latest” version and trust it. The solution had to address three elements: people, processes, and technology.
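For readers wondering why indexed search is so much faster than digging through folders, here is a minimal sketch of the underlying idea: build an inverted index over document text or metadata once, and look-ups become near-instant. This only illustrates the principle; the actual project used a DRM platform, which does far more (security, versioning, ranking), and the document IDs and text below are made up.

```python
# A minimal sketch of an inverted index: map each token to the set of
# documents that contain it, then answer queries by set intersection.
from collections import defaultdict

def build_index(documents):
    """documents: {doc_id: text of title/metadata/extracted content}"""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for token in text.lower().split():
            index[token].add(doc_id)
    return index

def search(index, query):
    """Return doc ids containing every query term."""
    terms = query.lower().split()
    if not terms:
        return set()
    results = index.get(terms[0], set()).copy()
    for term in terms[1:]:
        results &= index.get(term, set())
    return results

# Example usage with hypothetical documents:
# docs = {"RPT-001": "final well test report eagle ford 2012 approved",
#         "RPT-002": "draft well test report eagle ford 2012"}
# idx = build_index(docs)
# print(search(idx, "final well test"))   # {'RPT-001'}
```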

The final solution meant collapsing the folders from 2,000 down to 150, using a DRM system without burdening the technical community, and implementing complete processes with a service element that ensured sustainability.

Centralized, standardized and institutionalized systems and processes were configured to take full advantage of the taxonomy and DRM systems. Once the ease of use and the value were demonstrated to the people, buy-in was easy to get.

Technology advances faster than our ability to keep up. This is especially true when working with professionals whose focus is (and should be!) on their projects, not on data management. We had to break the fear of change by proving there is a better way to work, one that increases efficiency and makes employees’ lives easier.

Legacy Documents, what do you do with them?

Because solving operational issues at the field requires access to complete historical information, exhuming technical legacy documents, physical or electronic, from their buried locations was the next task.

On this project, the work involved prioritizing, locating, removing duplicates, clustering, and tagging files with standard metadata. With a huge number of files accumulated on network drives and in library rooms, a company must keep an eye on the cost/benefit ratio. How to prioritize and how to tag technical files became two key success factors in designing a cost-effective migration project.
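As one small, concrete piece of such a migration, here is a minimal sketch of finding exact duplicates by content hash so that only one copy gets tagged and loaded. Near-duplicate detection, prioritization, and metadata tagging all require more than this, and the share path in the usage note is hypothetical.

```python
# A minimal sketch of exact-duplicate detection by content hash.
import hashlib
import os
from collections import defaultdict

def find_exact_duplicates(root):
    """Walk a folder tree and group file paths by SHA-256 of their contents."""
    by_hash = defaultdict(list)
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            by_hash[h.hexdigest()].append(path)
    # keep only hashes that appear more than once
    return {digest: paths for digest, paths in by_hash.items() if len(paths) > 1}

# Example usage (hypothetical network share path):
# for digest, paths in find_exact_duplicates(r"\\share\subsurface_reports").items():
#     print(digest[:12], len(paths), "copies")
```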

This topic can go on and on since there were so many details that made this project successful. But that may be for another post.

Read more about Certis and about our oil and gas DRM services http://ow.ly/oRQ5f

How an E&P Subsidiary Took Its Information Communications from Risky to Efficient

It starts with chatter around the workplace. A company is growing. Procedures that were once “nice to have” are now serious money bleeds. That is exactly what Certis found when they revamped a major E&P subsidiary’s communication procedures.

When an oil and gas company plants itself in a nation to explore for business opportunities, its communications with the nation’s government and with its JV partners can, understandably, be informal in the early stages of the project. As the company moves from the Exploration and Appraisal phases towards full-fledged Development and Operations, the lax communications that once worked become a risky endeavor.

While these risks can be underplayed next to health and safety hazards, we discovered they warranted immediate action if the company was to survive long term. Consider these two real situations:

1) Sensitive information leaks. For example, in the early stages of exploration efforts, any discovery would have a large impact on a company’s stock price (if public) and serious implications for its competitors’ behavior.

2) Growing companies watch millions of dollars become billions of dollars almost overnight. Those large dollar amounts require complete technical data and timely communications to satisfy the government and the JV partner. The flow of information becomes crucial.

Knowing something is broken isn’t the same as understanding how it is broken and how to fix it.

Most employees can feel the weak spots in their company. When you start to sense problems, the cost of fixing them seems outlandish. But over time the scales tip. Often, by the time they do, the problem has grown too overwhelming for employees to handle alone.

The scales had long since tipped for this client. Our team’s role was to quickly identify the causes of the communication problems and to orchestrate a long-term plan and processes to mitigate the risks.

Over a period of a few weeks, we surveyed the office, the field, and rigs on two different continents, and went through a full cycle of process improvement. In the end, we were able to divide the company’s information communication needs into four process categories: 1) Documents and Data Management, 2) Decision Documentation, 3) Security and Access Management, and 4) Request Management.

Our plan started with ‘Quick Wins’ that changed the way the subsidiary did business within the first month. Imagine being able to institute relevant changes in your company in one month. Yes, it was that easy to solve. The rest of the implementation plan spanned four months, during which communication policies, standards, and procedures were defined and complied with across the organization.

We all know that the cost of fixing is cheap compared to the cost of cleaning up a huge mess later.

The costs of missed opportunities, reduced stock prices, and million-dollar lawsuits make this kind of project important; combined with the relatively low cost of fixing the problem, they make it a high priority.

I believe a company needs to do more than simply comply with government or JV partner contracts. To build strong relationships, you must be able to readily prove your compliance. That’s just good business.

Our client’s new transparent business practices allow the government to view them as a serious and trusted part of the country’s future. It is impossible to put a price on a valued relationship. But successful business people know that gaining trust means big business over time.

What about your company? Is it starting to feel the risks of outdated communication systems?

$250 Million Oil Take-Over Deal Implodes Due To Disastrous Data Management

As professionals in the oil and gas sector, we all know that when it comes to mergers and acquisitions (M&A), having access to quality data is essential. In its absence, deals don’t get made, investors lose millions, and livelihoods are put at risk.

So we were taken aback recently to hear of one deal, involving a public company, which fell through because the organization couldn’t even list its complete assets with confidence, such was the mess of its data.

We were talking recently with a CEO who “vented” about a failed acquisition. A major player who has worked in the sector since the mid-1970s, he told us why the $150 million to $250 million investment his company was prepared to make didn’t just fall flat, but imploded: “Despite asking this company repeatedly to give us access to their ‘complete’ data sets, they failed to deliver time and again. We became increasingly frustrated and discouraged, to the extent that we wouldn’t even make a proposal in the region of $80 million for the company. What was so galling to us was that it was obvious this company badly needed an investor and had approached us to bid.”

We all know what data is needed for M&A investments to happen, some of which we can get from public records and from commercial organizations such as IHS and Drilling Info (in the USA). But those sources alone are not nearly sufficient. So what were they thinking? Did they think the data would take care of itself? Or was someone just not doing his or her job well?

The CEO continued: “… In the past, when companies were under pressure, a lot of data typically got swept under the rug, as it were. Today, though, investors demand tighter regulation of data, and I suspect that, because of this, in ten years’ time some companies just aren’t going to make it. If our company had been allowed to invest and take over, we could have solved many of the organization’s problems, saved some jobs, and even added value. Sadly, in this case, due to poor management of critical data, that scenario never took place. The deal never even got past the first hurdle. No one is going to invest millions when they don’t have a clue about (or confidence in the data behind) what they’re buying.”

Considering this was a company with responsibility for public money, the management team should never have been allowed free rein without critical data management regulations, or at the very least “guidelines”.

What is your opinion?