Crossing the Border from a Mere Change to a Cultural Expectation for QUALITY DATA

Culture sets certain expectations of behavior, and once accepted, there is no deviation. Even if you are removed from the cultural origin, these behaviors are ingrained and follow you long after.
I recently experienced this firsthand when a dear friend of mine was diagnosed with cancer. Of course, I was very distraught. When, a few weeks later, he was admitted to hospital for surgery, I visited him and his wonderful wife. This was natural to me: visiting a sick or injured friend at home or in the hospital is not only a kind gesture but an expected social obligation ingrained in me since childhood. What seemed to my friends a thoughtful gesture was something I could not imagine not doing, or imagine friends of my culture not doing for me.
 
It made me wonder what makes a behavior “culturally” accepted and ingrained? When did visiting a sick friend become more than a thoughtful gesture, and cross the barrier into social obligation? How did this transition occur?  
 
These musings extended to Oil and Gas corporate culture. What behaviors were so ingrained at work that they had become second nature? Did they serve a purpose, such as improving data quality? If not, what would it take to weave in these behaviors and make them the expected social norm, a clear moral obligation, or standard practice within an organization? In an ideal world, these cultural obligations would lead to employees and employers alike feeling that it is “on them” to report and correct data quality issues, no matter at what point in the process they are discovered.
 
I thought it might be a good idea to ask my readers these questions. Are such behaviors ingrained deeply enough in your workplace to be considered cultural? If not, how would you weave them in? If they are part of your corporate culture, can you point to any policies and practices that may have led to this?

Reminded Again: Narrow Focus Leads to Failure Every Time. Why Do Some Data Projects Never Make It?

In 1993, an incident occurred at the Toronto Dominion Bank Tower that caught national attention, enough so that it made the infamous “Darwin Awards”. A lawyer, in an attempt to demonstrate the safety and strength of the building’s windows to visiting law students, crashed through a pane of glass with his shoulder and fell 24 floors to his death. The glass may not even have broken; it reportedly popped out of its frame.
 
The lawyer made a classic mistake: he focused on one specific thing to the exclusion of the big picture. If he had looked at his hypothesis from a wider angle, he might have considered the numerous other factors that could have contributed to his doomed demonstration – the bond between the glass and the frame, the weakening of the material after repeated impacts, or simply the view of the courtyard below (the high risk should the test fail) – any of which might have been enough to make him reconsider his “leap of logic”.
 
Such a narrow focus is equally risky in an information management project, or any project really. Although we are getting better, we often focus on one thing – technology implementation – and ignore all other aspects.
From my experience, many factors contribute to the success or failure of information management in Oil & Gas projects: people, technology, processes, legacy data, integration, a company’s culture, operational model, infrastructure, time constraints, and external influences such as vendors and partners, just to name a few. Each has a degree of influence on the project, but rarely will any of them cause the demise of the project – unless they are ignored! The key to success in any project is the consideration of all aspects, and an assessment of the risks they pose, prior to spending millions.
As an example, let’s look at survey data. How would you manage that data?
Often, companies focus on two elements:
  • Finding the technology to host the data
  • Migration of the data to the new solution
Success is declared at the end of these two steps, but two years down the road the business has not embraced the solution, or, worse yet, it continues to see incomplete surveys – a problem the new technology was supposed to solve. Failure, in this case, is less abrupt than an appointment with the Toronto Dominion courtyard, but it is failure nonetheless.
 
More often than not, projects like the one above fail to take into consideration the other aspects that will keep data quality intact.
Even more often, these projects fail to consider external factors such as data acquisition vendors. These external vendors have their own processes and formats. If your project ignores our increasingly integrated world, and cannot cooperate with the processes, technology, and data formats of key external vendors and business partners, your project will yield very limited results and will not be sustainable. 

To achieve sustainable success in data management projects or any projects for that matter, it is necessary to consider the context surrounding the project, not just the specifics. Without this context, like the unfortunate lawyer, your project too can look forward to a rather significant fall.

To Build Fit Enterprise Solutions, Be Physical …

The British and the Americans speak the same language. But say “I have a flat” to a Brit, and it means something completely different than it does to an American. The former would congratulate you; the latter would feel sorry for you. A flat in the UK means an apartment; a flat in Houston means a flat tire. The same four words, arranged in exactly the same way, in what is ostensibly the same language – and yet either speaker would confuse their audience if the audiences were transposed.

It is the same thing in business – if you cross different corporate cultures, or even boundaries within the same organization, industry terminology might sound the same but mean very different things. Sometimes we think we are communicating, but we are not.

Why is this a problem? Because it is not possible to build an enterprise data management solution that serves all departments without addressing variations in expectations for the same word – especially if the term in question is one that defines your organization’s values and activities.

“Sometimes we think we are communicating, but we are not”

In the corporate world of Energy E&P, the word “completion” means different things to different departments. Mention a “Completion” to a Landman, and he will assume you are referring to the subsurface horizon for his leases (it is more complex than this, but for the sake of this argument we need not dive into details). Mention a “Completion” to a Production Engineer, and she immediately thinks of the intersection of a wellbore and a reservoir horizon. To a Completion Engineer, the same term means the process of completing a well after it has reached final depth.

As organizations’ data management practices mature, they make their way toward the right of the EIM MM (Enterprise Information Management Maturity Model). Centralized solutions such as Master Data Management (MDM) are important and are designed to serve ALL departments, breaking down as many silos as possible.

Naturally, to create a centralized solution that addresses needs across the enterprise, you must first reach consensus on how to build that solution. The solution must ensure that the data is NOT LOST, NOT OVERWRITTEN and is FULLY CAPTURED and useful to EVERYONE. What is the best way to reach consensus without the risk of losing data?

Get Physical

To answer the above question, many agree that information systems need to be built on physical reality and gather granular data…

By basing your data on the physical world and capturing data at as granular a level as practically possible, you not only make it possible to capture all related information, but also to report it in any combination of groupings and queries. See the example in Figure 1.

Focus on Enterprise Needs, and Departmental Needs Will Follow…

I have seen systems that ignore wellbore data and store only completions per well. At other clients, I have seen systems that take shortcuts by storing well, wellbore, and wellbore completion data in one line (which necessitates overwriting old completion data with new every time there is a change). These are “fit-for-purpose” systems. They are not enterprise-level solutions; they serve departmental needs.

Too often, systems are designed for the needs of one group, department, or purpose rather than for the needs of the company as a whole. However, if the needs of the whole are defined and understood, both the company and its groups will have what they need, and then some.

Let’s look at an example to clarify this position:

Figure 1: Multi-lateral well

In Figure 1 above, how would you store the data for the well in your organization or your department? Would you define the data captured as one well, three bores, and three completions? Or maybe two completions? One?
Depending on your departmental or organizational definitions, any of the above could be fit-for-purpose correct. Accounting systems might keep track of ONLY one completion if it made payroll and tax sense, while Land may keep track of only two completions if the bores are in two zones. An engineer would track three completions, one per wellbore. The regulatory department may want you to report something entirely different.
How do we decide the number of completions so that the information is captured accurately, yet remains useful to a Landman, Accountant, Engineer, and Geoscientist? Build based on the physical reality and stay granular.
In Figure 1, physically speaking, we see one well with three paths (three wellbores). Each bore has its own configuration that opens to the reservoir (a completion). In total, this well has three different ‘Completions’, one for each of the horizontal bores.
Accounting can query how many different cost centers the well has; depending on production (and other complex rules) the answer could be three, but it could also be one. Depending on the lease agreement, the Landman could get a result of one or three completions. An engineer can easily query and graph this data to find the three pathways and determine each completion job per wellbore.
While it could be argued that data needs to be presented differently to each department, the underlying source data must reflect the physical truth. After all, we cannot control what people call things and certainly cannot change the lingo.
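
To make this concrete, here is a minimal sketch (in Python, with hypothetical identifiers and zone names of my own invention) of what storing the physical reality might look like: one well, its wellbores, and one completion record per wellbore, with each department’s view derived by query rather than by flattening the data.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Completion:
    completion_id: str
    reservoir_zone: str   # horizon the bore opens to
    cost_center: str      # used by Accounting groupings

@dataclass
class Wellbore:
    wellbore_id: str
    completions: List[Completion] = field(default_factory=list)

@dataclass
class Well:
    well_id: str
    wellbores: List[Wellbore] = field(default_factory=list)

    def engineering_completions(self) -> List[Completion]:
        # Engineering view: one completion per wellbore path
        return [c for wb in self.wellbores for c in wb.completions]

    def zones(self) -> set:
        # Land view: completions grouped by reservoir zone (lease horizon)
        return {c.reservoir_zone for wb in self.wellbores for c in wb.completions}

    def cost_centers(self) -> set:
        # Accounting view: distinct cost centers, which may collapse to one
        return {c.cost_center for wb in self.wellbores for c in wb.completions}

# The multi-lateral well of Figure 1: one well, three bores, three completions.
well = Well("WELL-001", [
    Wellbore("WB-01", [Completion("C-01", "Zone A", "CC-100")]),
    Wellbore("WB-02", [Completion("C-02", "Zone A", "CC-100")]),
    Wellbore("WB-03", [Completion("C-03", "Zone B", "CC-200")]),
])

print(len(well.engineering_completions()))  # 3 completions for the engineer
print(len(well.zones()))                    # 2 zones for the Landman
print(len(well.cost_centers()))             # 2 cost centers here; rules could collapse this to 1
```

Because the granular, physical records are never collapsed, Accounting, Land, and Engineering can each count “completions” their own way without overwriting anyone else’s answer.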

Juicy Data Aligned


Around the corner from my house is a local shop selling an excellent assortment of fresh vegetable and fruit juices. Having tried their product, I was hooked and thought it would be a good daily addition to my diet. But I knew that with my schedule, unless I made a financial commitment and paid ahead of time, I would simply forget to return on a regular basis. For this reason, I broached the subject of a subscription with the vendor. If the juice was already paid for, and all I had to do was drop in and pick it up, I’d save time and have an incentive to stop by (or waste money).

However, the owner of the shop did not have a subscription model and had no set process for handling one. But as any great business person does when dealing with a potentially loyal long-term customer, the owner accommodated my proposition: she simply wrote the subscription terms on a piece of paper (my name, total number of juices owed, and date of first purchase) and communicated the arrangement to her staff. This piece of paper was tacked to the wall behind the counter. I could now walk in at any time and ask for my juice. Yes!

Of course, this wasn’t a perfect system, but it aligned with business needs (more repeat business), and worked without fail, until, of course, it eventually failed. On my second to last visit, the clerk behind the counter could not find the paper. Whether or not I got the juice owed to me that day is irrelevant to the topic at hand…the business response, however, is not.

When I went in today, they had a bigger piece of paper, with a fluorescent tag on it and a larger font. More importantly, they had also added another data point, labeled ‘REMAINING DRINKS’. This simple addition to their data, and slight change to the process, made it easier and faster for the business to serve a client. Previously, the salesperson would have to count the number of drinks I had had to date, add the current order, and then deduct the total from the subscription. Now, at a glance, a salesperson can tell whether I have remaining drinks, and, as you can imagine, deducting the two juices I picked up today from the twelve remaining is far simpler. Not to mention that the data and process adjustment helped them avoid liability and improved their margins (more time to serve other customers). To me, this is a perfect example of aligning data solutions to business needs.

There are several parallels between the above analogy and our business, the oil and gas industry, albeit with a great deal more complexity. The data needs of our petro-professionals in land, geoscience, and engineering have been proven to translate directly into financial gains, but are we listening closely enough to the real needs of the business? Reference our blog on Better Capital Allocation With A Rear-View Mirror – Look Back for an example of what it takes to align data to corporate needs.

There is real value to harvest inside an individual organization when data strategies are elevated to higher standards. Like the juice shop, oil and gas can reap benefits from improved data handling in terms of response time, reduction in overhead, and client (stakeholder) satisfaction, but on a far larger scale. If the juice shop had not adapted its methodology in response to the process failure (even if it wasn’t likely to recur), the customer perception might be that they didn’t care to provide better service. Instead, they might just get unofficial advertising from readers asking where I get my juice. I’d suggest that the oil and gas industry could benefit from similar data-handling improvements. Most companies today align their data management strategies to departmental and functional needs. Unless the data is also aligned to corporate goals, many companies will continue to leave money on the table.

We have been handicapped by high margins. Will this happen again, or will we learn?

About 15 to 20 years ago, we started to discuss and plan the implementation of databases in Oil and Gas, in hopes of reaping the benefits of all their promises. And we did plan and deploy those databases. It is now inconceivable to draw geological maps by hand or to store production volumes in books. In the last ten years, we have also moved beyond simple storage of digital content and started managing data quality more aggressively. Here too, we have made inroads. But have we done enough?

Have you ever wondered why companies are still cleaning their data over and over again? Or why we are still putting up building blocks such as standards for master well lists and hierarchies? It seems to me that the industry as a whole is unable to break through the foundational stages of enterprise information management. Because it can’t break through, it cannot achieve a sustainable, robust foundation that allows its systems to keep pace with business growth or asset diversification.

Perversely, I believe this is because the oil and gas industry has been handicapped by high margins. When a company is making money despite itself, throwing additional bodies and resources at a pressing issue seems like the fastest and most effective solution in the moment. Because the industry is structured in such a way that opportunities have to be seized in the moment, there is often little time to wait for the right solution to be implemented.

Throwing money at a problem is not always the wrong thing to do. However, if it becomes your go-to solution, you are asking for trouble.

I would argue that highly leveraged companies have put themselves at high risk of bankruptcy because they do not invest sufficiently in efficiency and agility through optimized processes and quality information flow. For example, coming up with the most effective completion for your reservoir requires access to quality, granular technical data. This data does not just happen; it takes a great deal of wiring and plumbing work across your organization’s data and processes. Luckily, if done right, it is largely a one-time investment with minimal operational upkeep.

According to Bloomberg, CNN, and Oil & Gas 360 reports, during this ongoing downturn at least 60 companies have entered Chapter 11 in the USA alone. Ultra, Swift, Sabine, Quicksilver, and American Energy are just a few of these highly leveraged but otherwise technically excellent companies.

Without the required behind-the-scenes investment, engineers and geoscientists will find a way to get the data they need to make decisions. They will, and often do, work hard to bring together data from many siloed systems. Having each engineer massage data individually is throwing money at the problem. If the correct platform is implemented in your company, this information flows like clockwork to everyone who needs it, with little to no manual work.

WHAT COULD HAVE BEEN DONE?

We all know it is never the wrong time to make a profit. Consequently, it is never the wrong time to invest in the right foundation. During a downturn, lower demand creates an abundance of the only resource unavailable during an upturn – time. This time, spent wisely, could bring huge dividends during the next upswing in prices. Conversely, during a period of high prices, it is the other resources we cannot afford to waste. During a boom, we cannot ignore building sustainable, long-term data and process solutions the RIGHT way.

It is never the wrong time to make a profit. Consequently, it is never the wrong time to invest in the right foundation.

Of course, there is no single “right way” that will work for everyone. The right way for your organization is entirely subjective, the only rule being that it must align with your company’s operations models and goals. By contrast, the only truly wrong way is to do nothing, or invest nothing at all.

If your organization has survived more than ten years, then it has seen more than one downturn, along with prosperous times. If you’ve been bitten before, it’s time to be twice shy. Don’t let the false security of high margins handicap you from attaining sustainable and long-term information management solutions.

Here are some key pointers that you probably already know:

  • Track and automate repeatable tasks – many of your organization’s manual, repeatable tasks have become easier to track and automate with the help of BPMS solutions. Gain transparency into your processes, automate them, and make them leaner whenever possible.

  • Avoid Duplication of Effort – siloed systems and departmental communication issues result in significant duplicated effort and rework of the same data. Implementing strong data QA processes upstream can resolve this; the farther upstream, the better. For example, geoscientists are forced to rework their maps when they discover inaccuracies in elevation or directional survey data. This is low-hanging fruit that is easy to pick by implementing controls at the source, and at each stop along the way (see the sketch at the end of this post).

  • Take an Enterprise View – most E&P companies fall into the enterprise category. Even the smaller players often employ more people than the average small-to-medium business (especially during a boom) and deal with a large number of vendors, suppliers, and clients. Your organization should deploy enterprise solutions that match your company’s enterprise operations model. Most E&P companies fall in the lower-right quadrant of the MIT operating model matrix below.

Figure: MIT operating model matrix
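
As a minimal illustration of the “controls at the source” idea from the list above (the field names and thresholds are illustrative assumptions, not any particular vendor format), a loading script can flag a directional survey before it ever reaches a geoscientist’s map:

```python
# A minimal sketch of an upstream QA gate for incoming directional surveys.
# Field names and thresholds are illustrative assumptions, not any standard.

def validate_survey(rows):
    """Return a list of issues found in a directional survey; an empty list means pass."""
    issues = []
    last_md = None
    for i, row in enumerate(rows):
        md, inc, azi = row["md"], row["inclination"], row["azimuth"]
        if last_md is not None and md <= last_md:
            issues.append(f"row {i}: measured depth not increasing ({md} <= {last_md})")
        if not 0.0 <= inc <= 120.0:
            issues.append(f"row {i}: inclination {inc} out of plausible range")
        if not 0.0 <= azi < 360.0:
            issues.append(f"row {i}: azimuth {azi} out of range")
        last_md = md
    return issues

survey = [
    {"md": 0.0,    "inclination": 0.0,  "azimuth": 0.0},
    {"md": 1500.0, "inclination": 12.5, "azimuth": 135.0},
    {"md": 1400.0, "inclination": 14.0, "azimuth": 400.0},  # two problems in this row
]

for issue in validate_survey(survey):
    print("FLAGGED:", issue)  # catch it at the source, before maps are built on it
```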

Non-Disruptive, Non-Invasive Data Governance for Oil & Gas


Establishing data governance is not a new activity. It is, at its heart, an extension of man’s desire to define the world and to communicate these discoveries more efficiently. A good data standard can be likened to the use of Latin as a lingua franca by merchants in medieval Europe. Few English merchants could speak Dutch, but most were taught Latin (and vice versa). Latin provided a set of definitions and rules understood by all, promoted by rote memorization of grammar and a large number of books, and policed by data stewards in the form of tutors who rapped children’s knuckles when they got it wrong. (OK, maybe this is a bit of a stretch, but I like the story :-))

Not a Blank Slate

In Oil and Gas, I see data governance programs in many forms, from centralized formats to a completely distributed approach, and everything in between. These implementations come with varying degrees of success.

So when I came across Robert Seiner’s book “Non-Invasive Data Governance”, I asked myself: could this work for oil and gas? In my judgment, a distributed, organic, and non-invasive approach could be an option for deploying a data governance program in a faster, more uniform, and more comprehensive manner, which in turn would yield better success.

Non-invasive data governance is built around identifying the de facto standards and processes already in place for capturing and manipulating data. If there isn’t one standard, the existing practices are “converged” and “formalized” into one standard that suits. In the new world, data stewards are recognized “formally” and maintain “universal” standards for work they have been doing all along…

To me, this approach has far-reaching implications for raising the bar on data quality standards: it weaves quality standards into the DNA and culture of an organization.

Business Specific Pidgin

Let’s continue the historical analogy a little. Where merchants lacked a shared lingua franca, trade was still conducted, albeit with greater difficulty – typically through the evolution of pidgin languages. The first encounters, however, were most likely exercises in frustration, as both parties attempted to learn one another’s needs, define goods and services, and establish their perceived value. In speaking a pidgin language with another merchant, if either party used a differing definition, or even presented his offer in an unfamiliar sentence structure, the business venture could go south very quickly. The same holds within a single oil and gas organization: for each data group, there needs to be one standard for all.

The oil and gas industry would not be where it is today without some established data standards and data processes already in place. Data governance will never be a blank slate. The problem is that while standards exist that are recognized across the industry, there are many terms that differ from one team to another and are not quite formalized or fully recognized.

The non-invasive DG approach is to formalize what is currently informal and monitor it for continuous improvement over time. For example, wellbore survey data can be captured in different ways, none of which are wrong, just different. One team would store latitude, longitude, geodetic system, easting, northing, and distance. Another team might use negative and positive signs to indicate direction instead of easting and northing labels. These are very subtle differences; however, when data flows from one system to another (and we flow data a lot), a level of accuracy is lost in translation.
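
To illustrate what “formalizing” such a convention might look like in practice (a sketch only; the field names and the chosen canonical form are my assumptions, not an industry standard), the two team conventions can be converted to one agreed representation at the point of exchange, so nothing is lost in translation:

```python
# Minimal sketch: normalize two team conventions for survey offsets into one
# canonical form (signed easting/northing, in metres). Field names are assumptions.

def from_labeled(value_m: float, direction: str) -> float:
    """Team A convention: a magnitude plus a direction label (E/W or N/S)."""
    sign = -1.0 if direction.upper() in ("W", "S") else 1.0
    return sign * value_m

def from_signed(value_m: float) -> float:
    """Team B convention: already signed (negative means west or south)."""
    return float(value_m)

# Both records describe the same point; after normalization they agree exactly.
team_a = {"easting": from_labeled(250.0, "W"), "northing": from_labeled(100.0, "N")}
team_b = {"easting": from_signed(-250.0),      "northing": from_signed(100.0)}

assert team_a == team_b
print(team_a)  # {'easting': -250.0, 'northing': 100.0}
```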

Let me know your thoughts…

 

Technical Documents Architecture: Separate for Sustainable Efficiency

If, like many oil and gas companies, your technical documents are scattered or buried in folders and nested folders, you have an opportunity to increase the efficiency of your petro-professionals by organizing their technical documents, speeding up how they locate those documents, or, better yet, doing both. Done right, you can architect a solution that is sustainable.

Organizing electronic files for an oil and gas company is not as complex as it may first seem. It is very similar to the way you organize files and documents on your PC. The fundamental question you must ask yourself is: “How do I sustainably tag or organize my files so that I can find what I need in 5 seconds or less?” This question guides all the designs and solutions that I help my clients’ engineers and geoscientists build.
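
As a toy illustration of that 5-second test (the tags, well names, and paths below are purely hypothetical), a handful of agreed tags per document can turn a hunt through nested folders into a single lookup:

```python
# Toy sketch: tag documents with a few agreed attributes and index them, so
# retrieval is a single lookup rather than a hunt through nested folders.
# The tags, well names, and paths are hypothetical examples.

documents = [
    {"path": "/docs/0001.pdf", "well": "SMITH 14-2", "doc_type": "completion report",  "year": 2014},
    {"path": "/docs/0002.pdf", "well": "SMITH 14-2", "doc_type": "directional survey", "year": 2013},
    {"path": "/docs/0003.pdf", "well": "JONES 7-1",  "doc_type": "completion report",  "year": 2015},
]

# Build a simple index keyed on (well, document type).
index = {}
for doc in documents:
    index.setdefault((doc["well"], doc["doc_type"]), []).append(doc["path"])

# "Find the completion report for SMITH 14-2" becomes one lookup.
print(index[("SMITH 14-2", "completion report")])  # ['/docs/0001.pdf']
```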

The topic is big, with many angles; for this blog I would like to share my thoughts on architecting a healthy technical-documents environment.

This Valentine’s month: Separate for Sustainability

Many oil and gas companies do not distinguish between an active work area and a long-term, final-version area for their electronic files. What we find instead is one environment with ALL technical files in one place. This repository often contains both active and long-gone projects, including files for divested wells. With this kind of architecture, attempts to organize the chaotic mess happen every few years, each with a hefty price tag!

 

In my opinion, for an oil and gas company to have a sustainable document management practice, there should be at least four working areas within your environment (see the diagram below).

Figure: TDRM architecture – the four working areas

An Amicable Separation

Area #1: Team work area

By establishing a day-to-day work area that can, by definition, be an organic mess coordinated by those who know exactly what everything is, your team can collaborate in whatever way works best for them. This area is usually cluttered with analysis files, multiple versions of a document, manuals, research material, cost proposals from vendors, and more.

This flexibility is key to a productive, happy work environment. It only becomes a problem when others are exposed to a ‘mess’ that is not of their own making. For this reason, each team should have their own defined work area.

(Yes, I hear some of you say that today’s technologies are designed to curtail this mess, so there is no need for a separate work environment. That may be possible if the technology is used by a homogeneously skilled staff. Oil and gas staff are of all ages and at different levels of software savviness.)

Area #2: Final versions area

Separating your final-versions area from the working area has two immediate benefits:

1) It allows you to efficiently and effectively declare and distribute final versions to the enterprise.

2) It allows the removal of inactive files from the work area (decluttering).

Your final-versions area should provide access to (and easy identification of) the latest version of a report in a timely manner and without delays. Unfortunately, this area is often not formalized (it is not separated from Area #1), causing delays for other teams who need access to a given file – they need to notify your team, have a member of your team identify the correct file, and then possibly send them the file.

Often, distribution of final versions is a complex dance of requests and delivery between multiple teams or individuals. By separating the archival/final versions area, and providing access to authorized resources, this jitterbug contest can become a synchronized line dance. If all parties that need a file can identify the right file on their own, and retrieve it themselves, significant delays can be avoided.

Furthermore, by separating the final-version area from the work area, you have a chance to sustain the integrity and completeness of technical files, specifically well files and records (the most important assets you can have), allowing the company to easily open a data room if and when needed.
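
One way to picture the separation (a sketch only, with made-up paths and a simple copy-based promotion; a real implementation would likely sit inside a document management system): when a team declares a report final, it is promoted out of the team work area into a read-only final-versions area, where the rest of the organization can find it without asking.

```python
# Sketch of "declaring a final version": copy a document from the team work
# area into a read-only final-versions area. Paths and naming are assumptions.

import shutil
import stat
from pathlib import Path

WORK_AREA = Path("/teams/drilling/work")
FINAL_AREA = Path("/enterprise/final_versions")

def declare_final(relative_path: str) -> Path:
    """Promote a document from the team work area to the final-versions area."""
    source = WORK_AREA / relative_path
    target = FINAL_AREA / relative_path
    target.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(source, target)  # keep content and timestamps
    # Make the final copy read-only so it cannot be overwritten casually.
    target.chmod(stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)
    return target

# Example (illustrative path only):
# declare_final("SMITH 14-2/end_of_well_report_v3.pdf")
```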

Areas #3 and #4: External Collaboration and Access

When considering work areas and final versions, it is important to consider accessibility, external as well as internal. Providing data to JV partners based on their WI and JOV data requirements, and collaborating with vendors during directional well design or completion treatment, is essential to keeping technical documents preserved and not lost in a web of email attachments.

To me, this architecture is neither invasive nor intrusive to engineers’ and geoscientists’ workflows.

Summary: Separate, but don’t go far

Separating the final-version area from the work area can have an immediate and strong benefit to productivity, balancing team flexibility with the requirements of other teams in the organization. While the day-to-day work area should be organic and flexible, it is important that the archival/final-version repository is well defined, because it serves not just one team but the needs of the organization as a whole. This separation of the working area and the final-versions/archival area provides a sustainable solution that meets the 5-second accessibility requirement outlined above.

Having outlined the benefits of this simple change in a complex working environment, we’d love to hear from the community. Do you have a better approach? Questions regarding implementation? Have you implemented something like this, and if so, what was your experience? Whatever the input, we would love to hear from you.

Coming Current with E&P Data Management Efforts

During the PNEC 2015 conference last week, we managed to entice some of the attendees passing by our booth to take part in a short survey. As an incentive, we offered a chance to win a prize, and we kept the survey brief; we couldn’t make it too long and risk getting little or no intelligence.

I’m not sure whether any of you will find the results to be a revelation or to offer anything you did not already know anecdotally. But if nothing else, they may substantiate “feelings” with some numbers.

You will be pleased to know that more than 60% of the replies are from operators or NOCs. In this week’s blog, I share the results and offer my thoughts on the first survey question.

Figure: Survey results for “Which data projects are of high priority in your mind?”

For the question graphed above, “Which data projects are of high priority in your mind?”, it appears the industry continues to pursue data integration projects: the majority of participants (73%) consider them the highest priority. Following closely on the priority list were “data quality” projects (data governance and legacy data cleaning); 65% consider these a priority.

Thoughts…

Integration will always be at the top of the priority list in the E&P world until we truly connect the surface measurements with the subsurface data in real time. Also, given that data integration cannot be achieved without pristine data, it is no surprise that data quality follows integration as a close second.

Because many “data cleaning” projects are driven by the need to integrate, data quality efforts are still focused on incoming data, and mostly on “identification” data, as is the case in MDM projects.

Nonetheless, how a well was configured 20 years earlier, and what failures (or not) were encountered during those 20 years, are telling facts to engineers. Therefore, the quality of “legacy” technical data is just as important as that of new incoming data.

Reaching deeper than identification and header data to ensure technical information is complete and accurate is not only important for decision making; as my friend at a major company would say, it is important firstly for safety reasons, then for removing waste (a lean principle), and then for decisions. Of course, chipping away slowly at a large mountain of data is a grueling task and can be demotivating if it yields only limited results.

To get these projects done right, with impactful E&P business results, they should be tackled with a clear vision and a holistic approach. As an industry, we need to think about legacy data preparation strategically: do it once and be done with it.

Legacy data cleanup projects are temporary (with a start and an end date); experience tells me they are best accomplished by outsourcing them to professional data cleaning firms that fully understand E&P data.

This blog is getting too long; I’d better cover the rest of the survey results in the next one.

Please share your thoughts and correct me where you feel I got it wrong….

 

Part 3 of 3: Are we progressing? Oil & Gas Data Management Journey – the 2000s

The 1990s shopping spree for applications produced a spaghetti of links between databases and applications, while also chipping away at the petro-professional’s effective time with manual data entry. Then a wave of mega M&As hit the industry in the late 90s and the early part of the 2000s.

Mega M&As (mergers and acquisitions) continued into the first part of the 2000s, bringing with them – at least for those on the acquiring side – a new level of data management complexity.

With mega M&As, the acquiring companies inherited more databases and systems, and many physical boxes upon boxes of data. This influx of information proved to be too much at the outset, and companies struggled – and continue to struggle – to check the quality of the technical data they’d inherited. Unknown at the time, the data quality issues present at the outset of these M&As would have lasting effects on current and future data management efforts. In some cases they gave rise to lawsuits that were settled for millions of dollars.

Early 2000s

Companies started to experiment with the Internet.  At that time, that meant experimenting with simple reporting and limited intelligence on the intranet.  Reports were still mostly distributed via email attachments and/or posted in a centralized network folder.

I am convinced that it was the Internet that necessitated cleaning technical data and key header information, for two reasons: 1) Web reports forced integration between systems, as business users wanted data from multiple siloed databases on one page; more often than not, real-time integration could not be realized without cleaning the data first. 2) Web reports linked directly to databases exposed more “holes” and multiple “versions” of the same data; this revealed how necessary it was to have only ONE VERSION of information, and that version had better be the truth.

The majors were further ahead, but at many other E&P companies engineers were still integrating technical information manually, taking a day or more to get a complete view and understanding of their wells; Excel was mostly the tool of choice. Theoretically, with these new technologies, it should have been possible to automate this and instantaneously give a 360-degree view of a well, field, basin, and what have you. In practice, however, it was a different story because of poor data quality. Many companies started data cleaning projects; some efforts were massive, in the tens of millions of dollars, and involved merging systems from many past acquisitions.

In the USA, in addition to the Internet, the collapse of Enron in October 2001 and the Sarbanes–Oxley Act enacted on July 30, 2002, forced publicly traded oil and gas companies to document their operations and finances and provide better transparency into them. Data management professionals were busy implementing their understanding of SOX in the USA. This required tighter definitions and processes around data.

Mid 2000s

By the mid-2000s, many companies had started looking into data governance. Sustaining data quality was now at the forefront. The need for both sustainable data quality and data integration gave rise to Well Master Data Management initiatives. Projects on well hierarchy, data definitions, data standards, data processes, and more all evolved around reporting and data cleaning projects. Each company worked on its own standards, sharing success stories from time to time. The Energistics, PPDM, and DAMA organizations came in handy but were not fully relied on.

Late 2000s

When working on sustaining data quality, one runs into the much-debated subject of who owns the data. While for years the IT department tried to lead “data management” efforts, IT was not fit to clean technical oil and gas data alone; it needed heavy support from the business. However, the engineers and geoscientists did not feel it was their priority to clean “company-wide” data.

CIOs and CEOs started realizing that separating data from systems is a better proposition for E&P: data lives forever, while systems come and go. We started seeing a movement toward a data management department, separate and independent from IT but working closely with it. A few majors made this move in the mid-2000s with good success stories; others started in the late 2000s – first by having a Data Management Manager report to the CIO (perhaps with a dotted line to a business VP), then by reporting directly to the business.

Who would staff a separate data management department? You guessed it; resources came from both the business and IT. In the past, each department or asset had its own team of technical assistants (“Techs”) who would support its data needs (purchase, clean, load, massage, etc.). Now many companies are consolidating “Techs” into one data management department that supports many departments.

Depending on how the DM department is run, this can be a powerful model if it is truly run as a service organization with the matching sense of urgency that E&P operations see. In my opinion, this could result in cheaper, faster and better data services for the company, and a more rewarding career path for those who are passionate about data.

In late 2008 and throughout 2009, gas prices fell, more so in the USA than in other parts of the world; shale natural gas had caught up with demand and was exceeding it. Then, in April 2010, we woke up to witness one of the largest offshore oil spill disasters in history, when BP’s Macondo well blew out and gushed oil.

Companies that had put all their bets on gas fields or offshore fields had no appetite for data management projects. For those that were well diversified or more focused on onshore liquids, data management projects proceeded either at full speed or as business as usual.

2010 to 2015…

Companies that had enjoyed the high oil prices since 2007 started investing heavily in “digital” oilfields. More than 20 years had passed since the majors started this initiative (I was on this type of project with Schlumberger for one of the majors back in 1998), but now it was more justifiable than ever. Technology prices had come down, system capacities were up, network reliability was strong, wireless connections were reasonably steady, and more. All of it came together like a perfect storm to resurrect the “smart” field initiatives like never before. Even the small independents were now investing in this initiative; high oil prices were justifying the price tag (multiple millions of dollars) on these projects. A good part of these projects is in managing and integrating real-time data streams and intelligent calculations.

Two more trends appeared in the first half of the 2010s:

  • Professionalizing petroleum data management. This seemed like a natural progression now that data management departments are in every company. The PPDM organization has a competency model that is worth looking into. Some of the majors have their own models that are tied to their HR structure. The goal is to reward a DM professional’s contribution to the business’ assets. (Also, please see my blog on the MSc in Petroleum DM.)
  • Larger companies are starting to experiment with and harness the power of Big Data, and the integration of structured with unstructured data. Metadata and the management of unstructured content have become more important than ever.

Both trends have tremendous contributions that are yet to be fully harnessed. The Big Data trend in particular is nudging data managers to start thinking about more sophisticated “analysis” than they did before – albeit one could argue that the technical assistants who helped engineers with some analysis were already nudging toward data analytics initiatives.

By December 2015, the oil price had collapsed more than 60% from its peak.

But to my friends’ disappointment, standards are still being defined. Well hierarchy, while it seems simple to the business folks, will practically require the intervention of the UN to get fully automated and running smoothly across all asset types and locations. And with all the data quality commotion, some data management departments are a bit detached from operational reality and take too long to deliver.

This concludes my series on the history of Petroleum Data Management. Please add your thoughts; I would love to hear your views.

For Data Nerds

  1. Data ownership has now come full circle, from the business to IT and back to business.
  2. The rise of shale and coal-bed methane properties and the fast evolution of field technologies are introducing new data needs. Data management systems and services need to stay nimble and agile; the old way of taking years to come up with a usable system is too slow.
  3. Data cleaning projects are costly, especially when cleaning legacy data, so prioritizing and having a complete strategy that aligns with the business’ goals are key to success. Well-header data is a very good place to start, but aligning with what operations really need will require paying attention to many other data types, including real-time measurements.
  4. When instituting governance programs, having a sustainable, agile and robust quality program is more important than temporarily patching problems based on a specific system.
  5. Tying data rules to business processes, starting from the wellspring of the data, is prudent for sustainable solutions.
  6. Consider outsourcing all your legacy data cleanups if they take resources away from supporting day-to-day business needs. Legacy data cleaning outsourced to specialized companies will always be faster, cheaper, and more accurate.
  7. Consider leveraging standardized data rules from organizations like PPDM instead of building them from scratch, and consider adding to the PPDM rules database as you define new ones. When rules are standardized, sharing and exchanging data becomes easier and more cost-effective (a minimal sketch of rules-as-data follows this list).
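
To illustrate item 7 (a sketch only; the rules and field names below are examples I made up, not actual PPDM rules), keeping quality rules as data rather than code is what makes them easy to standardize, share, and exchange:

```python
# Sketch: keep data quality rules as data, not code, so they can be shared,
# versioned, and exchanged. The rules and the record below are illustrative only.

RULES = [
    {"field": "well_name",   "check": "required"},
    {"field": "spud_date",   "check": "required"},
    {"field": "total_depth", "check": "min",   "value": 0},
    {"field": "surface_lat", "check": "range", "min": -90.0, "max": 90.0},
]

def apply_rules(record: dict, rules=RULES) -> list:
    """Return rule violations for a single well-header record."""
    violations = []
    for rule in rules:
        value = record.get(rule["field"])
        if rule["check"] == "required" and value in (None, ""):
            violations.append(f"{rule['field']} is missing")
        elif rule["check"] == "min" and value is not None and value < rule["value"]:
            violations.append(f"{rule['field']} below minimum {rule['value']}")
        elif rule["check"] == "range" and value is not None and not rule["min"] <= value <= rule["max"]:
            violations.append(f"{rule['field']} outside {rule['min']}..{rule['max']}")
    return violations

print(apply_rules({"well_name": "SMITH 14-2", "total_depth": 8500, "surface_lat": 47.2}))
# ['spud_date is missing']
```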

Part 2: Are we progressing? Oil & Gas Data Management Journey

In my previous blog, I looked back at the 1960s, 70s, and 80s, and how E&P technical data was generated and stored. For those three decades, data management was predominantly, indeed almost exclusively, on paper. As I looked at the 90s, I found them packed with events that affected all areas of the data value chain, from generation to consumption to archival.

Early 90s: Driving Productivity Forward

The early 90s continued one dominant theme from the late 1980s: the relentless drive for increased productivity throughout the business. This productivity focus coincided with three technological advancements that made their way into the industry. First, dropping hardware costs and growing capacity meant that computers, loaded with meaningful scientific programs, became part of every office. Second, the increased capabilities of networks and client/server architectures opened up new possibilities by centralizing and sharing one source of data. Third, the proven success of relational databases and SQL offered sophisticated ways to access and manipulate more data.

All this meant that, by the early 90s, engineers and the majority of geoscientists were able to do an increasing portion of their work on their own computers. At that time, the computing world was divided in two: UNIX for G&G professionals and PCs for the rest. Despite the technology divide, increases in productivity were tangible. Technology had proven itself useful and helpful to the cause, and was here to stay.

Petroleum geoscience- and engineering-specific software applications started springing up in the market like Texas wildflowers in March. Although some companies built seismic and log interpretation software back in the 70s on Cray supercomputers and DEC minicomputers, not many could afford an $800,000 computer (yes, that is one computer) with limited capacity. “I remember selling software on time share for CGG back in the 80s,” my friend commented. “Companies had to connect to expensive supercomputers over extremely slow connections,” he added. So when computers became affordable, with the right power for E&P technical applications, the software market flourished.

The industry was thirsty for software and absorbed all of what was produced on the market and then some; operators who could afford it created their own. The big service companies decided they were not going to miss out. Schlumberger acquired Geoquest in 1992 for its seismic data processing services and tools, then also acquired Finder, Eclipse and a long string of other applications.

The only problem with all these different software applications was that they existed standalone; each application had its own database and did not communicate with the others. As a result, working on each hydrocarbon asset meant multiple data entry points or repeated reformatting and re-loading. This informational and collaborative disconnect between the different E&P applications was chipping away at the very productivity and efficiency the industry was desperate to harness.

Nevertheless, the standardization of defining, capturing, storing, and exchanging E&P data was starting to be of interest to many organizations. PPDM in Canada and later POSC in the USA (now Energistics) were formed in 1988 and 1990, respectively. PPDM’s mission at the time was focused on creating an upstream data model that could be utilized by different applications. POSC’s mission was broader: to develop a standardized E&P data model and data exchange standards.

Schlumberger had a solution for its own suite of applications: it offered Geoframe and Finder as answers to the data mess, with Finder being the master database that fed Geoframe with information, while Geoframe integrated the various software applications.

Mid-90s: Making Connections

In the mid-90s, Halliburton acquired Landmark Graphics and unveiled the OpenWorks platform for its suite of applications at the AAPG in April 1997. Their market positioning? Integrated reservoir management and data management solutions. OpenWorks offered data integration similar to Geoframe but with its own set of scientific software. Geoframe and OpenWorks would butt heads for years to come, both promoting their vision of data management and integrated workflows. It seemed that the larger companies were either a Schlumberger shop or a Landmark shop.

In 1997, the OpenSpirit Alliance, funded by a consortium (Schlumberger, Shell, and Chevron), was born, with interoperability as its mission. PrismTech was to develop and market an application integration framework that any company could utilize; it was to be open. The OpenSpirit platform was officially launched at the SEG in 1998.

Late 90s: Big Industry Changes

Come the late 90s, another drop in oil prices, combined with other macroeconomic factors, appeared to trigger a surge in “mega” M&A activity, starting with BP acquiring Amoco in 1998 and Exxon acquiring Mobil in 1999, followed by Conoco merging with Phillips in 2002; these mega acquisitions continued through the early 2000s.

All this M&A activity in the late 90s added complexity to what was already a complex technical dataflow environment.

For the data nerds

  • In the 90s, the industry rapidly evolved from hand-written scout tickets and hand-drawn maps to electronic data.
  • The “E&P software spring” produced many siloed databases. These databases often overlapped in what they stored, creating multiple versions of the same data.
  • The IT department’s circle of influence was slowly but surely expanding to include managing E&P data. IT was building data systems, supporting them, uploading data to them and generating reports.
  • Engineers and geoscientists still kept their own versions of data, but now in MANY locations. While hardcopies were the most trusted form (perceived to be the most reliable), technical data was also stored on disks, network drives, and personal drives, and in various applications’ databases and flat files. This compounded the data management problems of the years prior to the computerization of processes.
  • Relational databases and SQL proved to be valuable to the industry. But it was expensive to support a variety of databases; many operators standardized and requested systems on Oracle (or, later, SQL Server).
  • Systems not built on relational databases either faded into the background or were converted to relational databases accepted by operators.
  • Two standard data models emerged, PPDM and POSC (now Energistics), along with one data integration platform from OpenSpirit (now part of the TIBCO suite).
  • Geos and engineers validated and cleaned their own data (sometimes with the help of Geotechs or technical assistants) prior to their analyses.

Stay tuned for the millennium, and please add your own memories (and, of course, please correct me where I am not accurate…).