Part 2: Are we progressing? Oil & Gas Data Management Journey

In my previous blog, I looked back at the 1960s, 70s, and 80s and how E&P technical data was generated and stored. For those three decades, data management was done almost exclusively on paper. As I looked at the 90s, I found them packed with events that affected every area of the data value chain, from generation to consumption to archival.

Early 90s: Driving Productivity Forward

The early 90s continued one dominant theme from the late 1980s: the relentless drive for increased productivity throughout the business. This focus coincided with three technological advancements that made their way into the industry. First, the dropping cost and growing capacity of hardware meant that computers, running meaningful scientific programs, became part of every office. Second, the increased capabilities of networks and client/server architectures opened up new possibilities by centralizing and sharing a single source of data. Third, the proven success of relational databases and SQL offered sophisticated ways to access and manipulate more data.
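To make that third point concrete, here is a minimal sketch of the kind of centralized, shared access SQL made possible. It uses Python's built-in sqlite3 as a modern stand-in for the Oracle-style servers of the era, and the well table and its contents are hypothetical:

```python
import sqlite3

# One shared relational store instead of per-user copies of the data
# (in-memory here; a real shop would point every client at one server).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE well (
        uwi            TEXT PRIMARY KEY,  -- unique well identifier
        well_name      TEXT,
        spud_date      TEXT,
        total_depth_ft REAL
    )
""")
conn.executemany(
    "INSERT INTO well VALUES (?, ?, ?, ?)",
    [
        ("42-501-20001", "SMITH 1", "1991-03-14", 9850.0),
        ("42-501-20002", "SMITH 2", "1992-07-02", 10120.0),
        ("42-501-20003", "JONES A-1", "1993-01-20", 7400.0),
    ],
)

# Any application or user can now ask the same question of the same data.
deep_wells = conn.execute(
    "SELECT well_name FROM well WHERE total_depth_ft > 9000 ORDER BY well_name"
).fetchall()
print([name for (name,) in deep_wells])  # → ['SMITH 1', 'SMITH 2']
```

The point is less the syntax than the architecture: one authoritative table, queried declaratively by many clients, instead of each department keeping its own copy.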

All this meant that, by the early 90s, engineers and the majority of geoscientists were able to do an increasing portion of their work on their own computers. At that time, the computing world was divided in two: UNIX for G&G professionals and PCs for everyone else. Despite the divide in technologies, the increases in productivity were tangible. Technology had proven itself useful to the cause, and it was here to stay.

Petroleum geoscience- and engineering-specific software applications started springing up in the market like Texas wildflowers in March. Although some companies had built seismic and log interpretation software back in the 70s on Cray supercomputers and DEC minicomputers, not many could afford an $800,000 computer (yes, one computer) with limited capacity. “I remember selling software on time share for CGG back in the 80s,” my friend commented. “Companies had to connect to expensive supercomputers on extremely slow connections.” So when computers became affordable, with the right power for E&P technical applications, the software market flourished.

The industry was thirsty for software and absorbed all of what the market produced and then some; operators who could afford it created their own. The big service companies decided they were not going to miss out. Schlumberger acquired GeoQuest in 1992 for its seismic data processing services and tools, then also acquired Finder, Eclipse, and a long string of other applications.

The only problem with all these different software applications was that they existed standalone; each application had its own database and did not communicate with the others. As a result, working on each hydrocarbon asset meant multiple data entry points or multiple rounds of reformatting and re-loading. This informational and collaborative disconnect between the different E&P applications was chipping away at the very productivity and efficiency the industry was desperate to harness.

Nevertheless, standardizing how E&P data is defined, captured, stored, and exchanged was starting to be of interest to many organizations. PPDM in Canada and later POSC in the USA (now Energistics) were formed in 1988 and 1990, respectively. PPDM’s mission at the time was focused on creating an upstream data model that could be utilized by different applications. POSC’s mission was broader: to develop a standardized E&P data model and data exchange standards.

Schlumberger had a solution for its own suite of applications: it offered both GeoFrame and Finder as answers to the data mess, with Finder acting as the master database that fed GeoFrame with information, and GeoFrame integrating the various software applications together.

Mid-90s: Making Connections

In the mid-90s, Halliburton acquired Landmark Graphics, which unveiled the OpenWorks platform for its suite of applications at the AAPG convention in April 1997. Their market positioning? Integrated reservoir management and data management solutions. OpenWorks offered data integration similar to GeoFrame’s, but with its own set of scientific software. GeoFrame and OpenWorks would butt heads for years to come, each promoting its own vision of data management and integrated workflows. It seemed that the larger companies were either a Schlumberger shop or a Landmark shop.

In 1997, the OpenSpirit Alliance, funded by a consortium (Schlumberger, Shell, and Chevron), was born, and interoperability was its mission. PrismTech was to develop and market an application integration framework that any company could utilize; it was to be open. The OpenSpirit platform was officially launched at the SEG in 1998.

Late 90s: Big Industry Changes

Come the late 90s, another drop in oil prices combined with other macroeconomic factors appeared to trigger a surge in “mega” M&A activity: the BP-Amoco and Exxon-Mobil mergers were announced in 1998, Conoco and Phillips merged in 2002, and mega-acquisitions continued through the early 2000s.

All this M&A in the 90s added complexity to what was already a complex technical dataflow environment.

For the data nerds

  • In the 90s, the industry rapidly evolved from handwritten scout tickets and hand-drawn maps to electronic data.
  • The “E&P software spring” produced many silo databases. These databases often overlapped in what they stored, creating multiple versions of the same data.
  • The IT department’s circle of influence was slowly but surely expanding to include managing E&P data. IT was building data systems, supporting them, uploading data to them and generating reports.
  • Engineers and geoscientists still kept their own versions of data, but now in MANY locations. While hardcopies remained the most trusted form (perceived to be the most reliable), technical data was also stored on disks, network drives, personal drives, and in various applications’ databases and flat files. This compounded the data management problems of the years prior to the computerization of processes.
  • Relational databases and SQL proved to be valuable to the industry. But it was expensive to support a variety of database products, so many operators standardized and requested systems built on Oracle (or, later, SQL Server).
  • Systems not built on relational databases either faded into the background or were converted to relational databases that operators accepted.
  • Two standard data models emerged, PPDM and POSC (now Energistics), along with one data integration platform from OpenSpirit (now part of the TIBCO suite).
  • Geos and engineers validated and cleaned their own data (sometimes with the help of Geotechs or technical assistants) prior to their analyses.
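The overlapping silo databases noted above meant the same well could carry different values in different systems. A toy sketch of detecting such conflicts (plain Python dicts stand in for two hypothetical application stores; the well names and values are invented for illustration):

```python
# Two hypothetical application silos, each holding its own copy of the
# same well header -- loaded at different times, so the values drifted.
seismic_app = {"42-501-20001": {"well_name": "SMITH #1", "total_depth_ft": 9850.0}}
log_app     = {"42-501-20001": {"well_name": "Smith 1",  "total_depth_ft": 9800.0}}

def find_conflicts(silo_a, silo_b):
    """Report wells whose attributes disagree between two silos."""
    conflicts = {}
    for uwi in silo_a.keys() & silo_b.keys():  # wells present in both silos
        diffs = {
            attr: (silo_a[uwi][attr], silo_b[uwi].get(attr))
            for attr in silo_a[uwi]
            if silo_a[uwi][attr] != silo_b[uwi].get(attr)
        }
        if diffs:
            conflicts[uwi] = diffs
    return conflicts

print(find_conflicts(seismic_app, log_app))
```

Deciding which version wins, and keeping the silos in sync afterward, was exactly the work that master-database products like Finder and the 90s data management teams took on.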

Stay tuned for the Millennium, and please add your own memories (and of course please correct anything that is not accurate…)

3 thoughts on “Part 2: Are we progressing? Oil & Gas Data Management Journey”

  1. Nigel Goodwin

    Wasn’t OpenWorks well before 1997? I remember Tigress was built on top of OpenWorks back in 1990 or even before. It used Sybase, if I remember correctly.

    POSC’s goal of interoperability and plug and play between multiple vendors was always going to be a challenge, but it was a great way to meet and share amongst those of us interested in E&P data management, and it was a great catalyst for emerging ‘exploration databases’.

  2. Certis Inc. Post author

    With Landmark Graphics before it was acquired by Halliburton, right? I could not find any material. But you must be right, otherwise why would Halliburton be interested in them?

  3. Steve Hawtin

    Your comment about relational databases is at best misleading. First of all, there were plenty of key data repositories that were not based on relational databases: the two leading petrophysics systems, for example, were Recall (from Z&S) and Geolog (from Mincom), neither of which had a relational database. The most commonly used rendition of the POSC Epicentre model was the ObjectSIP one, which employed an object store. The software I was responsible for, GeoScene, employed an object store. Most seismic data was processed in custom structures; the relational databases couldn’t cope. OpenWorks, while it stored most of the actual bits in Oracle, did so in a way that required access through their API, and GeoFrame (which was later of course) stored the actual “data” in files outside the relational database.

    Landmark hit the big time in the 1980s, they had taken the concepts of personal power machines pioneered by the Xerox 1186 and Symbolics machines. Their first big “growth spurt” came when they sold custom built workstations (with two big screens) direct to Geophysicists in 1985-88 ish.

    The key integration platform of the early to mid 1990s was Geoshare (see http://dm4ep.com/art07.htm for my take on that). I would also claim the DAEX (aka “Data Exchange”) which I was responsible for was a key integration platform. Both of those were killed off by the dip in investment that occurred in 2000-2001. The Crouse conference (PNEC) originated as the Geoshare User Group annual meeting.

    If you want to see a snapshot of what “Integration” looked like in 1999 I could suggest you look at “Data Integration Technologies in Exploration and Production” which I published in 2000 (available at http://dm4ep.com/data/ditep-2000.pdf ). It provides a contemporary view on the topic and quite a lot of what it says remains relevant today.

    To claim that “two data models existed” is also not my recollection. I think I am right in saying that only one set of data (the “haddock” data) was ever loaded into the Epicentre model (that was done as the European part of the IPP). The complete model was never used commercially. Lots of vendors claimed to use “an extended subset” of the model, someone produced a presentation claiming that their cat was “an extended subset” for a particular meaning of the words “extended” and “subset”. It would be more true to claim that “Finder” and “PPDM” were the two most employed data models, but that would ignore OpenWorks, Recall, Geolog, CGG, GeoScene, Tigress, Terralog, GeoFrame, COMPASS, CPS3 and a host of other data models that each played an important part for some oil companies, let alone all the custom built systems and the host of key data held in spreadsheets.

