
Managing Data For The Sake Of Managing Data Or Are You Making a Difference?

A client and now dear friend of mine told me once “We are not managing data for the sake of data management, we are doing it to support the business.” We connected immediately, and I took this as a sign that she would achieve great things for her company.

Supporting the business is the only reason to justify an IM group in an E&P company. But how does an Information Management group connect enterprise initiatives that may take years to complete, and prove their value, to business operations that fluctuate with commodity prices?

Let’s look at the typical experience of many companies in the past few years:

When oil prices hovered for a lengthy period at approximately $100 a barrel, most businesses prioritized exploration and production to find new plays as fast as possible. Drill faster, complete faster, produce sooner, and find more. In this “growth” mode, data came in fast and furious, and companies threw serious money at gathering and analyzing every piece of it.

However, when oil prices hit $26 a barrel, “survival” mode kicked in. Most companies renegotiated their contracts and loans while trying to maintain base oil or gas production (revenue) at the lowest possible cost. Meeting or exceeding production targets became existential, not just good for business. In this mode, some data gathering slowed significantly, while the focus on producing wells and their facilities heightened.

Two entirely different sets of processes, with completely different priorities, force totally different data management projects. In “growth” mode, the focus was on the speed of processing directional surveys, logs, perforations, costs, and frac data. In “survival” mode, the focus shifted to well and facility performance and integrity.

TECHNICAL DATA

All technical data is critical to an oil and gas company and should be available, boom or bust. It is also entirely understandable that, in a world of limited resources, projects with the highest impact on the business are prioritized first. Shifting IM priorities with changes in commodity prices or business focus is not simple.

However, a good EIM strategy will support the business with ease in any mode: growth, survival, or anything in between. The good news is that such an EIM strategy is entirely possible, simply by focusing efforts on organizational goals through growth and lean times alike. Today’s advancements in technology also allow for increased agility in organizational response. But you have to have a strategy.

Once a strategy is defined and embraced, every information management project, for both structured and unstructured information, must advance the ball toward the goal, or be killed. This is not as easy as it sounds, of course; it requires expertise and dedicated effort. Prioritizing efforts, identifying weaknesses, and choosing the right technology can all help your organization grow faster in growth mode, and swim, rather than tread water, in survival mode.

Has your organization defined a strategy yet? Is it working to support the business, or just managing data for the sake of data management?

For greater clarity on your position, call or email us to schedule a complimentary strategy appraisal with one of our consultants.

Reminded Again, Narrow Focus Leads To Failure Every Time. Why do Some Data Projects Never Make It?

In 1993, an incident occurred in the Toronto Dominion Bank Tower that caught national attention, enough so that it made the infamous “Darwin Awards”. A lawyer, in an attempt to demonstrate the safety and strength of the building’s windows to visiting law students, crashed through a pane of glass with his shoulder and fell 24 floors to his death. The glass itself may not have broken; it popped out of its frame.
 
The lawyer made a classic mistake: he focused on one specific thing to the exclusion of the big picture. If he had looked at his hypothesis from a wider angle, he might have considered the numerous other factors that could doom his demonstration. The bond between the glass and the frame, the weakening of materials under repeated tests, or simply the view of the courtyard below (the high risk should the glass fail) might have been enough to make him reconsider his “leap of logic”. Instead, he focused on one specific item and ignored the other factors.
 
Such a narrow focus is equally risky in an information management project, or any project really. Although we are getting better, we often focus on one thing, technology implementation, and ignore the other aspects.
In my experience, many factors contribute to the success or failure of information management projects in Oil & Gas: people, technology, processes, legacy data, integration, a company’s culture, operational model, infrastructure, time constraints, and external influences such as vendors and partners, just to name a few. Each has a degree of influence on the project, but rarely will any of them cause its demise, unless they are ignored! The key to success in any project is to consider all aspects, and to assess the risks they impose, prior to spending millions.
As an example, let’s look at survey data. How would you manage that data?
Often, companies focus on two elements:
  • Finding the technology to host the data
  • Migrating the data to the new solution
Success is declared at the end of these two steps, but two years down the road, the business has not embraced the solution, or worse yet, it continues to see incomplete surveys, the very problem the new technology was supposed to solve. Failure, in this case, is less abrupt than an appointment with the Toronto Dominion courtyard, but it is failure nonetheless.
 
More often than not, projects like the one above fail to take into consideration the other aspects that will keep data quality intact.
Even more often, these projects fail to consider external factors such as data acquisition vendors, who have their own processes and formats. If your project ignores our increasingly integrated world and cannot accommodate the processes, technology, and data formats of key external vendors and business partners, it will yield very limited results and will not be sustainable.
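As a concrete illustration, a simple control on incoming vendor survey files can catch incomplete surveys before they reach the corporate store. The sketch below is a minimal, hypothetical example (the column name, file name, and gap tolerance are assumptions, not a prescribed format):

```python
# Minimal sketch: flag incomplete directional surveys as they arrive from a vendor.
# The column name (md), file name, and gap tolerance are hypothetical.
import csv

def check_survey(rows, total_depth, md_gap_limit=200.0):
    """Return a list of completeness issues for one well's survey stations."""
    issues = []
    mds = sorted(float(r["md"]) for r in rows)
    if not mds:
        return ["no survey stations received"]
    if total_depth - mds[-1] > md_gap_limit:
        issues.append(f"survey stops {total_depth - mds[-1]:.0f} m short of TD")
    for prev, curr in zip(mds, mds[1:]):
        if curr - prev > md_gap_limit:
            issues.append(f"gap of {curr - prev:.0f} m between stations at {prev} and {curr}")
    return issues

with open("vendor_survey.csv") as f:          # hypothetical vendor deliverable
    stations = list(csv.DictReader(f))
print(check_survey(stations, total_depth=3200.0))
```

A check like this, run automatically on every vendor delivery, is one way the “other aspects” of the project can be covered without waiting two years to discover the gap.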

To achieve sustainable success in data management projects or any projects for that matter, it is necessary to consider the context surrounding the project, not just the specifics. Without this context, like the unfortunate lawyer, your project too can look forward to a rather significant fall.

Juicy Data Aligned


Around the corner from my house is a local shop selling an excellent assortment of fresh vegetable and fruit juices. Having tried their product, I was hooked, and thought it would be a good daily addition to my diet. But I knew with my schedule that unless I made a financial commitment and paid ahead of time, I would simply forget to return on a regular basis. For this reason, I broached the subject of a subscription with the vendor. If the juice was already paid for, and all I had to do was drop in and pick it up, I’d save time and have an incentive to stop by (or else waste my money).

However, the owner of the shop did not have a subscription model, and had no set process for handling one. But as any great business person does when dealing with a potential long-term loyal customer, the owner accommodated my proposition: she simply wrote the subscription terms on a piece of paper (my name, total number of juices owed, and date of first purchase) and communicated the arrangement to her staff. This piece of paper was tacked to the wall behind the counter. I could now walk in at any time and ask for my juice. Yess!

Of course, this wasn’t a perfect system, but it aligned with business needs (more repeat business) and worked without fail, until, of course, it eventually failed. On my second-to-last visit, the clerk behind the counter could not find the paper. Whether or not I got the juice owed to me that day is irrelevant to the topic at hand… the business’s response, however, is not.

When I went in today, they had a bigger piece of paper, with a fluorescent tag on it and large fonts. More importantly, they had also added another data point, labeled ‘REMAINING DRINKS’. This simple addition to their data, and slight change to their process, made it easier and faster for the business to serve a client. Previously, the salesperson would have to count the number of drinks I had had to date, add the current order, and then deduct the total from my subscription. Now, at a glance, a salesperson can tell whether I have drinks remaining, and as you can imagine, deducting the two juices I picked up today from the twelve remaining is far simpler. Not to mention that the data and process adjustment helped them avoid liability and improved their margins (more time to serve other customers). To me, this is a perfect example of aligning data solutions to business needs.
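To put the same idea in data terms, storing the derived value next to the raw record trades a tiny bit of extra bookkeeping for a faster, less error-prone transaction. This is a deliberately toy sketch; the field names are made up:

```python
# Toy sketch of the juice-shop subscription record; field names are made up.
from dataclasses import dataclass

@dataclass
class Subscription:
    customer: str
    total_purchased: int
    remaining: int          # the new "REMAINING DRINKS" data point

    def redeem(self, count: int) -> None:
        if count > self.remaining:
            raise ValueError("not enough drinks left on the subscription")
        self.remaining -= count   # one subtraction, no re-counting of history

sub = Subscription(customer="Loyal Juice Fan", total_purchased=24, remaining=12)
sub.redeem(2)
print(sub.remaining)   # 10
```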

There are several parallels between the above analogy and our business, the oil and gas industry, albeit with a great deal more complexity. The data needs of our petro professionals in land, geoscience, and engineering have been proven to translate directly into financial gains, but are we listening closely enough to what the real needs of the business are? See our blog post Better Capital Allocation With A Rear-View Mirror – Look Back for an example of what it takes to align data to corporate needs.

There is real value to harvest inside an individual organization when data strategies are elevated to higher standards. Like the juice shop, oil and gas companies can reap benefits from improved data handling in terms of response time, reduced overhead, and client (stakeholder) satisfaction, but on a far larger scale. If the juice shop had not adapted its methodology in response to the process failure (even if it wasn’t likely to recur), the customer’s perception might be that it didn’t care to provide better service. Instead, it might just get unofficial advertising from readers asking where I get my juice. I’d suggest that the oil and gas industry could benefit from similar data-handling improvements. Most companies today align their data management strategies to departmental and functional needs. Unless the data is also aligned to corporate goals, many companies will continue to leave money on the table.

We have been handicapped by high margins. Will this happen again, or will we learn?

About 15 to 20 years ago, we started to discuss and plan the implementation of databases in Oil and Gas, in hopes of reaping all of their promised benefits. And we did plan and deploy those databases. It is now no longer conceivable to draw geological maps by hand or to store production volumes in books. In the last ten years, we have also moved beyond simple storage of digital content and have started managing data quality more aggressively. Here too, we have made inroads. But have we done enough?

Have you ever wondered why companies are still cleaning their data over and over again? Or why we are still putting up building blocks such as standards for master well lists and hierarchies? It seems to me that the industry as a whole is unable to break through the foundational stages of enterprise information management. Because it can’t break through, it cannot achieve a sustainable, robust foundation that allows its systems to keep pace with business growth or asset diversification.

Perversely, I believe this is because the oil and gas industry has been handicapped by high margins. When a company is making money despite itself, throwing additional bodies and resources at a pressing issue seems like the fastest and most effective solution in the moment. And because the industry is structured so that opportunities have to be seized in the moment, there is often little time to wait for the right solution to be implemented.

Throwing money at a problem is not always the wrong thing to do. However, if it becomes your go-to solution, you are asking for trouble.

I would argue that highly leveraged companies have put themselves at high risk of bankruptcy because they do not invest sufficiently in efficiency and agility through optimized processes and quality information flow. For example, coming up with the most effective completion for your reservoir requires access to quality, granular technical data. This data does not just happen; it takes a great deal of wiring and plumbing work across your organization’s data and processes. Luckily, if done right, it is largely a one-time investment with minimal operational upkeep.

According to Bloomberg, CNN, and Oil & Gas 360 reports, at least 60 companies have entered Chapter 11 in the USA alone during this ongoing downturn. Ultra, Swift, Sabine, Quicksilver, and American Energy are just a few of these highly leveraged but otherwise technically excellent companies.

Without the required behind-the-scenes investment, engineers and geoscientists will find a way to get the data they need to make decisions. They will, and often do, work hard to pull data together from many siloed systems. Having each engineer massage data individually is throwing money at the problem. With the correct platform implemented in your company, this information would flow like clockwork to everyone who needs it, with little to no manual work.

WHAT COULD HAVE BEEN DONE?

We all know it is never the wrong time to make a profit. Consequently, it is never the wrong time to invest in the right foundation. During a downturn, lower demand creates an abundance of the only resource unavailable during an upturn: time. This time, spent wisely, could pay huge dividends during the next upswing in prices. Conversely, during a period of high prices, it is every other resource we cannot afford to waste. During a boom, we cannot neglect building sustainable, long-term data and process solutions the RIGHT way.

It is never the wrong time to make a profit. Consequently, it is never the wrong time to invest in the right foundation.

Of course, there is no single “right way” that will work for everyone. The right way for your organization is entirely subjective, the only rule being that it must align with your company’s operating model and goals. By contrast, the only truly wrong way is to do nothing, or to invest nothing at all.

If your organization has survived more than ten years, then it has seen more than one downturn, along with prosperous times. If you’ve been bitten before, it’s time to be twice shy. Don’t let the false security of high margins keep you from attaining sustainable, long-term information management solutions.

Here are some key pointers that you probably already know:

  • Track and automate repeatable tasks – Many of your organization’s manual, repeatable tasks have become easier to track and automate with the help of BPMS solutions. Gain transparency into your processes, automate them, and make them leaner whenever possible.

  • Avoid Duplication of Effort – Siloed systems and departmental communication issues result in significant duplicated effort and rework of the same data. Implementing a strong data QA process upstream can resolve this; the farther upstream, the better. For example, geoscientists are forced to rework their maps when they discover inaccuracies in elevation or directional survey data. This is low-hanging fruit that is easy to pick by implementing controls at the source, and at each stop along the way (see the sketch after this list).

  • Take an Enterprise View – Most E&P companies fall under the enterprise category. Even the smaller players often employ more people than the average small-to-medium business (especially during a boom) and deal with a large number of vendors, suppliers, and clients. Your organization should deploy enterprise solutions that match your company’s enterprise operating model. Most E&P companies fall in the lower-right quadrant of the MIT matrix below.

[Figure: MIT operating model matrix]
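As a minimal illustration of the “controls at the source” idea from the second pointer above, a check like the one below can cross-check elevation data between two systems before it reaches the mapping workflow. This is a hypothetical sketch: the system names, field names, and the 0.5 m tolerance are assumptions, not a standard:

```python
# Minimal sketch of a "control at the source": cross-check elevation values
# between the master well record and the drilling report before maps are built.
# Field names and the tolerance are hypothetical.

def elevation_issues(master_record: dict, drilling_report: dict, tolerance_m: float = 0.5):
    issues = []
    for field in ("kb_elevation_m", "ground_elevation_m"):
        master = master_record.get(field)
        report = drilling_report.get(field)
        if master is None or report is None:
            issues.append(f"{field} is missing in one of the systems")
        elif abs(master - report) > tolerance_m:
            issues.append(f"{field} differs: master={master}, report={report}")
    return issues

print(elevation_issues(
    {"kb_elevation_m": 712.3, "ground_elevation_m": 707.8},
    {"kb_elevation_m": 715.0, "ground_elevation_m": 707.8},
))
# -> ['kb_elevation_m differs: master=712.3, report=715.0']
```

Catching a discrepancy like this at the point of entry is far cheaper than having a geoscientist rework a map after the fact.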

Are we progressing? Oil & Gas Data Management Journey…

Last month, I had dinner with a long-term friend who is now part of a team that sets strategic technical plans for his E&P employer. Setting strategies requires a standardized view of technical and financial data across all assets, in this case, multinational assets around the world. This data is required at both a granular and a helicopter level. One of the things he mentioned was, “I have to start by fixing data standards. I am surprised how little progress data-management standards have made since the POSC days in the mid-90s.”

How did Data Management evolve in oil & gas? Are we repeating mistakes? Are we making any progress? Here is what my oil and gas friends and I remember in this first part of a three-part series.  Please join me on this journey down memory lane and add your own thoughts.

The 1960s & 70s

Perhaps we can call these the mainframe times. Mainframes started to make their way into our industry around the mid-60s. At that time, they were mostly used to maintain accounting data. Like most data of the era, E&P accounting data was manually entered into systems, and companies employed large data-entry staffs to input it. Any computation on that data was done by feeding programs in on punch cards.

Wireline logs (together with seismic data) were among the very first technical data types that required the use of computers, mainly at the service providers’ computer centers and then at the large offices of the largest of the major operators. A friend of mine at Schlumberger remembers the first log data processing center in Houston opening around 1970. In the mid-70s, more oil-city offices (Midland, Oklahoma City, etc.) established regional computing centers. There, wireline log data, including petrophysical and geological processing, was “translated” from film into paper log graphics for clients.

A geophysicist friend remembers using mainframe computers to read seismic tapes in the mid-70s. He said, “Everything was scheduled. I would submit my job, consisting of data and many punch cards, in boxes to get the output I needed to start my interpretation. That output could be anything from big rolls of paper for seismic sections to an assemblage of data that could then be plotted. Jobs that took four hours to process on a mainframe in the 70s are instantaneous today.”

The Society of Exploration Geophysicists (SEG) introduced and published the SEG-Y data formatting standard in 1975. SEG-Y formats are still utilized today.

The need for a standard well identification numbering process became apparent as early as 1956. In the USA, regulatory agencies started assigning API numbers to wells in the late 60s. The concept of a worldwide global well ID is still being discussed today, with some organizations making good progress.

In the second half of the 70s, pocket calculators and minicomputers made their way into the industry. With that, some computations could be done at the office, or on the logging truck in the field, without the need for mainframes.

The 1980s

Early 80s. With the proven success of 3D seismic introduced by ExxonMobil, large and special projects started heavily processing 3D seismic on mainframes. However, the majority of technical data was still mainly on paper. Wireline logs were still printed on paper for petrophysicists to add their handwritten interpretations. Subsurface maps were still drawn, contoured, and colored by hand. Engineering data came in from the field on paper and was then recorded in running paper tables. A reservoir engineer remembers: “We hired data clerks to read field paper forms and write the data into tables (also on paper).”

As personal computers (PCs) made their way into the industry, some large companies started experimenting, although the machines lacked the “personal” side: PCs were numbered and located in a common area, and employees were only given occasional access to them. These were also standalone computers, not networked; data transfer from one PC to another happened via floppy disk. It was during this time that engineers were first exposed to spreadsheets (and boy, did they love those spreadsheets! I know I do).

Mid-80s. In March 1986, oil prices crashed, a 55% drop over a few days. In the three years following the crash, the industry shed staff the way cats shed hair: the number of petroleum employees dropped from approximately 1,000,000 to approximately 500,000.

[Figure: oil price history]

Late 80s. What seemed bad for the industry may have also done it a favor: the oil price crash likely accelerated the adoption of technology. With far fewer staff, companies were looking for ways to accomplish more with less.

A geologist friend remembers using Zmap as early as 1988, which was the beginning of the move towards predominantly computer-based maps and technical data.

For data nerds: 

  • Engineers and geo professionals were responsible for maintaining their own data in their offices.
  • Although not very formal, copies of the data were maintained in centralized “physical” libraries. Data was very important in the “heat of the moment”; after the project was complete, that data became someone else’s issue. Except there was no “someone else” yet.
  • This system produced many, many versions of the same data (or slight variations of it) all over. The data was predominantly kept on physical media, with some on floppy disks, mostly maintained by individuals.
  • From the 60s through to the end of the 80s, we can say there were mostly two global standards: one for seismic data formatting, SEG-Y, and the other for log data, LAS (Log ASCII Standard). Any other standards were country- or company-specific.

I would love to hear from you if you feel I have missed anything or if you can add to our knowledge of how technical E&P data was managed during the above period.

Stay tuned for the 90s …

What Impact Does Big Data Technology Bring To Oil and Gas?

Dealing with the massive influx of information gathered from exploration projects or real-time gauges at established fields is pushing traditional data-management architecture to its limits in the oil and gas industry. More sensors, from 4D seismic or from fiber optics in wells, widen the gap between data-capture advancements and the traditional ways of managing and analyzing data. It is the challenge of managing the sheer volume of collected data, and the need to sift through it in a timely fashion, that Big Data technologies promise to help us solve. This was just one of the suggestions on the table at the recent Data Management Workshop I attended in Turkey earlier this month.

For me, one of the main issues with the whole Big Data concept within the oil and gas industry is that, while it sounds promising, it has yet to deliver the tangible returns that companies need to see in order to prove its worth. To overcome this dilemma, Big Data vendors such as Teradata, Oracle, and IBM should consider demonstrating concrete new examples of real-life oil and gas wins. By new, I mean challenges that are not possible to solve with traditional data architecture and tools. Vendors should also be able to offer Big Data technology at a price that makes it viable for companies to “try” it and experiment.

The oil and gas industry is notoriously slow to adopt new software technology, particularly anything that tries to take the place of traditional methods that already work, unless its value is apparent. To quote my good friend, “we operate with fat margins; we don’t feel the urgency.” However, E&P companies should put on their creative hats and work alongside Big Data technology vendors. Big Data may just be the breakthrough we need to make a tangible step-change in how we consume and analyze subsurface and surface data with agility.

If either side, vendors or E&P companies, fails to deliver, Big Data becomes a commercial white elephant, doomed to very slow adoption.

At the workshop, Oracle, Teradata, and IBM all showed interesting tools. However, they showed examples from other industries, and occasionally referred to problems that can already be solved with conventional data technology. They left the audience still wondering!

One Big Data example that is relevant and hits home was presented by CGG. CGG used pattern recognition (on Teradata technology) to find all logs that exhibit a specific pattern a petrophysicist may be interested in. This type of analysis requires scanning through millions of log curves, not just the metadata we had been bound to in traditional architectures. This opens up new horizons for serendipity and, who knows, maybe for new discoveries.
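CGG did not share its implementation in detail, so the sketch below is only an illustration of the kind of curve-level pattern matching described: sliding a template signature along a log curve and scoring each window with normalized cross-correlation. The data is synthetic and the function names are mine:

```python
# Illustrative sketch only: locate where a template signature best matches a log curve
# using normalized cross-correlation. Synthetic data; not CGG's implementation.
import numpy as np

def best_match(curve: np.ndarray, template: np.ndarray):
    """Slide the template along the curve and return (best_index, best_score)."""
    n = len(template)
    t = (template - template.mean()) / template.std()
    best_idx, best_score = -1, -1.0
    for i in range(len(curve) - n + 1):
        w = curve[i:i + n]
        if w.std() == 0:
            continue                       # skip flat windows
        score = float(np.dot((w - w.mean()) / w.std(), t)) / n   # in [-1, 1]
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx, best_score

rng = np.random.default_rng(0)
curve = rng.normal(size=2000)                  # stand-in for a gamma-ray curve
template = np.sin(np.linspace(0, np.pi, 50))   # stand-in for the signature of interest
curve[800:850] += 3 * template                 # bury the signature in the noise
print(best_match(curve, template))             # expect an index near 800
```

Run at scale across millions of curves, this is exactly the kind of workload that pushes past metadata-only search, which is where Big Data platforms earn their keep.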

 

In-hand data, both internal and external, can be the difference between millions of dollars gained or millions of dollars lost

The Eagle Ford… Bakken… Permian… Montney… Booming plays with over 50 active operators each. Each operator brings its own development strategy and its own philosophy. While some operators appear successful in every unconventional play they touch, others always seem to come last to the party, or to miss the party altogether. Why?

Information. With all things being equal (funding availability, access to geoscience and engineering expertise), the deciding variable becomes timely access to quality information, and understanding what the data is telling you faster than the competition.

“Few if any operators understand how (shale) behaves, why one fracture stage within a well produces 10 times more oil or gas than its neighbor, or how to find sweet spots to overcome inequity.” – Rhonda Duey, Colorado School of Mines

There are over 60 operators in the Eagle Ford alone. Studying the strategy and philosophy of each operator in a play would, and should, yield insight into what works, what does not, and why. Landing depth, fracking parameters, lateral length, flow-back design, etc. All may matter; all may contribute to better production rates, better ultimate recoveries, and better margins. And yes, each play really is unique.

 WHERE TO LOOK?

A wealth of information from each operator is buried in shareholder presentations, reported regulatory data, and published technical papers. Collecting the relevant information and organizing it correctly will enable engineers and geo staff to find those insights. Today, engineers and geologists cannot fully take advantage of this information because it is not readily consumable and their time is stretched as it is.

We all agree that taking advantage of shale plays is not only about efficiency; it is also about being effective. The fastest and cheapest path to effectiveness is to build on what others have proven to work and to avoid what is known not to work.

 Here are some thoughts on how to leverage external data sources to your advantage:

  • Understand the goal of the study from engineers and geoscientists. Optimized lateral completion? Optimized fracking? Reducing drilling costs? All of the above?
  • Implement “big data” technology with a clear vision of the output. This requires integration between data systems to correlate data from various external sources with various internal sources.
  • Not ready to invest in “big data” initiatives or don’t have the time? Outsource information curation (gathering and loading data) for focused studies.
  • Utilize data scientists and analytical tools to find trends in your data, then qualify the findings with solid engineering and geoscience understanding (a minimal example follows this list).
  • Consider a consortium among operators to exchange key data not otherwise made available publicly. If all acreage in the play is already leased, then the competition among operators for land is over. Shouldn’t play data then be shared to maximize recovery from the reservoirs?
  • Build a culture of “complete understanding” by leveraging various sources of external data.
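As a minimal illustration of correlating external and internal sources once the data has been curated (the file names, column names, and join key below are hypothetical), a first-pass trend check can be as simple as a join and a correlation matrix:

```python
# Hypothetical sketch: join curated public completion data with internal production
# data and look for a simple trend. File and column names are made up.
import pandas as pd

external = pd.read_csv("public_completions.csv")   # api_number, lateral_length_m, proppant_tonnes
internal = pd.read_csv("internal_production.csv")  # api_number, cum_oil_90d_bbl

wells = external.merge(internal, on="api_number", how="inner")

# First-pass look: does lateral length or proppant loading track 90-day cumulative oil?
print(wells[["lateral_length_m", "proppant_tonnes", "cum_oil_90d_bbl"]].corr())
```

The point is not the statistics; it is that none of this is possible until the external information has been collected, organized, and tied to your internal well identifiers.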