Tag Archives: data assets

Why Connecting Silos With Better IM Architecture Is Important

If you work in an oil and gas company, then you are familiar with the functional divides. We are all familiar with the jokes about geologists vs. engineers. We laugh and even create our own. But jokes aside, oil and gas companies operate in silos, and with good reason.

But while organizational silos may be necessary to excel and maintain standards of excellence, collaboration and connection across the silos are crucial for survival.

For an energy company to produce hydrocarbons from an asset, it needs all the departments to work together (geoscience, engineering, finance, land, supply chain, etc.). This requires sharing detailed information and collaborating beyond meeting rooms and email attachments. But the reality in many oil and gas companies today is different: functional silos extend into information silos.

Connected Silos Are Good. Isolated Silos Are Bad

In an attempt to connect silos, “Asset Team” or “Matrix” organizations are formed and incentive plans are carefully crafted to share goals between functions. These are great strides, but no matter the organizational structure or the incentives provided, miscommunication, delays, and poor information hand-over are still commonplace. Until we solve the problem of seamless information sharing, the gap between functional departments will persist, because we are human and we rationalize our decisions differently. This is where technology and automation (if architected correctly) can play a role in closing the gap between the silos.

Asset team members and supporting business staff have an obligation to share information not only through meetings and email attachments but also by organizing and indexing asset files throughout the life of the asset. A fit-for-purpose IM architecture has a strategic role to play in closing the gap between the functional silos.

Connecting Functional Silos With IM Takes Vision & Organizational Commitment 

Advancements in IM (Information Management) and BPMS (Business Process Management Systems) can easily close a big part of the remaining gap. But many companies have not been successful in doing so, despite significant investments in data and process projects. There can be many reasons for this; here are two of the most common pitfalls I come across:

  • Silo IM projects or systems – architecting and working on IM projects within one function without regard to the impact on other departments. I have seen millions of dollars spent to solve isolated geoscience data needs without accounting for the impact on engineering and land departments, or spent on exploration IM projects without regard to the appraisal and development phases of the asset. Quite often, organizations do not take the time to look at end-to-end processes and their impact on the company’s goals. As a result, millions of dollars are spent on IM projects without bringing the silos any closer. Connecting silos through an IM architecture requires a global vision.
  • Lack of commitment to enterprise standards – if each department defines and collects information according to its own needs without regard for the company’s needs, it is up to other departments to translate and reformat. This often means rework and repetitive verification whenever information reaches a new departmental ‘checkpoint’.

The above pitfalls can be mitigated by recognizing the information dependencies and commonalities between departments, then architecting global solutions based on accepted standards and strong technology. It takes a solid vision and commitment.

For a free consultation on how to connect silos effectively, please schedule your appointment with a Certis consultant. Email us at info@certisinc.com or call us on 281-377-5523.

Part 3 of 3: Are we progressing? Oil & Gas Data Management Journey the 2000s

The 1990s shopping spree for applications produced a spaghetti of links between databases and applications, while also chipping away at petro-professionals’ effective time with manual data entry. Then a wave of mega M&As hit the industry in the late 1990s and early 2000s.

Mega M&As (mergers and acquisitions) continued into the first part of the 2000s, bringing with them, at least for those on the acquiring side, a new level of data management complexity.

With mega M&As, the acquiring companies inherited more databases and systems, and many boxes upon boxes of physical data. This influx of information proved to be too much at the outset, and companies struggled, and continue to struggle, to check the quality of the technical data they had inherited. Unknown at the time, the data quality issues present at the outset of these M&As would have lasting effects on current and future data management efforts. In some cases they gave rise to lawsuits that were settled for millions of dollars.

Early 2000s

Companies started to experiment with the Internet.  At that time, that meant experimenting with simple reporting and limited intelligence on the intranet.  Reports were still mostly distributed via email attachments and/or posted in a centralized network folder.

I am convinced that it was the Internet that necessitated cleaning technical data and key header information, for two reasons: 1) web reports forced integration between systems, as business users wanted data from multiple silo databases on one page, and more often than not, real-time integration could not be realized without cleaning the data first; 2) reports on the web linked directly to databases exposed more “holes” and multiple “versions of the same data”, revealing how necessary it was to have only ONE VERSION of information, and that it had better be the truth.

The majors were further ahead, but in many other E&P companies, engineers were still integrating technical information manually, taking a day or more to get a complete view and understanding of their wells; Excel was mostly the tool. Theoretically, with these new technologies, it should have been possible to automate this and instantaneously give a 360-degree view of a well, field, basin and what have you. In practice, however, it was a different story because of poor data quality. Many companies started data cleaning projects; some efforts were massive, costing tens of millions of dollars, and involved merging systems from many past acquisitions.

In the USA, in addition to the Internet, the collapse of Enron in October 2001 and the Sarbanes–Oxley Act enacted on July 30, 2002, forced publicly traded oil and gas companies to document their operations and finances and provide better transparency into them. Data management professionals were busy implementing their understanding of SOX in the USA. This required tighter definitions and processes around data.

Mid 2000s

By the mid-2000s, many companies had started looking into data governance. Sustaining data quality was now at the forefront. The need for both sustainable quality data and data integration gave rise to Well Master Data Management initiatives. Projects on well hierarchy, data definitions, data standards, data processes and more all evolved around reporting and data cleaning projects. Each company worked on its own standards, sharing success stories from time to time. Organizations such as Energetics, PPDM and DAMA came in handy, but were not fully relied on.

Late 2000s

When working on sustaining data quality, one runs into the much-debated question of who owns the data. While for years the IT department tried to lead the “data management” efforts, they were not fit to clean technical oil and gas data alone; they needed heavy support from the business. However, the engineers and geoscientists did not feel it was their priority to clean “company-wide” data.

CIOs and CEOs started realizing that separating data from systems is a better proposition for E&P. Data lives forever, while systems come and go. We started seeing a movement towards a data management department, separate and independent from IT but working closely with it. A few majors made this move in the mid-2000s with good success stories; others started in the late 2000s. It began with a Data Management Manager reporting to the CIO (and perhaps a dotted line to a business VP), then reporting directly to the business.

Who would staff a separate data management department? You guessed it; resources came from both the business and IT. In the past, each department or asset had its own team of technical assistants (“Techs”) who would support its data needs (purchase, clean, load, massage, etc.). Now many companies are seeing a consolidation of “Techs” into one data management department supporting many departments.

Depending on how the DM department is run, this can be a powerful model if it is truly run as a service organization with the matching sense of urgency that E&P operations see. In my opinion, this could result in cheaper, faster and better data services for the company, and a more rewarding career path for those who are passionate about data.

In late 2008 and throughout 2009, gas prices started to fall, more so in the USA than in other parts of the world. Shale natural gas had caught up with demand and was exceeding it. In April 2010, we woke up to witness one of the largest offshore oil spill disasters in history: BP’s Macondo well blew out and was gushing oil.

Companies that had put all their bets on gas fields or offshore fields had no appetite for data management projects. For those that were well diversified or more focused on onshore liquids, data management projects were either full speed ahead or business as usual.

2010 to 2015

Companies that had enjoyed the high oil prices since 2007 started investing heavily in “digital” oilfields. More than 20 years had passed since the majors started this initiative (I was on this type of project with Schlumberger for one of the majors back in 1998), but now it was more justifiable than ever. Technology prices had come down, system capacities were up, network reliability was strong, wireless connections were reasonably steady, and more. All had come together like a perfect storm to resurrect the “smart” field initiatives like never before. Even the small independents were now investing in this initiative. High oil prices were justifying the price tag (multiple millions of dollars) on these projects. A good part of these projects involves managing and integrating real-time data streams and intelligent calculations.

Two more trends appeared in the first half of the 2010s:

  • Professionalizing petroleum data management. This seemed like a natural progression now that data management departments are in every company. The PPDM organization has a competency model that is worth looking into. Some of the majors have their own models that are tied to their HR structure. The goal is to reward a DM professional’s contribution to the business’ assets. (Also please see my blog on MSc in Petroleum DM.)
  • Larger companies are starting to experiment with and harness the power of Big Data, and the integration of structured with unstructured data. Metadata and managing unstructured data have become more important than ever.

Both trends have tremendous contributions that are yet to be fully harnessed. The Big Data trend in particular is nudging data managers to start thinking of more sophisticated “analysis” than they did before. Albeit one could argue that the technical assistants who helped engineers with some analysis were also nudging towards data analytics initiatives.

In December 2015, the oil price collapsed more than 60% from its peak.

But to my friends’ disappointment, standards are still being defined. Well hierarchy, while it seems simple to the business folks, will require the intervention of the UN to get fully automated and running smoothly across all types and locations of assets. Amid the data quality commotion, some data management departments are a bit detached from operations reality and take too long to deliver.

This concludes my series on the history of Petroleum Data Management. Please add your thoughts; I would love to hear your views.

For Data Nerds

  1. Data ownership has now come full circle, from the business to IT and back to business.
  2. The rise of shale and coal-bed methane properties and the fast evolution of field technologies are introducing new data needs. Data management systems and services need to stay nimble and agile. The old way of taking years to come up with a usable system is too slow.
  3. Data cleaning projects are costly, especially when cleaning legacy data, so prioritizing and having a complete strategy that aligns with the business’ goals are key to success. Well-header data is a very good place to start, but aligning with what operations really need will require paying attention to many other data types, including real-time measurements.
  4. When instituting governance programs, having a sustainable, agile and robust quality program is more important than temporarily patching problems based on a specific system.
  5. Tying data rules to business processes while starting from the wellspring of the data is prudent to sustainable solutions.
  6. Consider outsourcing all your legacy data cleanups if they take resources away from supporting day-to-day business needs. Legacy data cleaning outsourced to specialized companies will almost always be faster, cheaper and more accurate.
  7. Consider leveraging standardized data rules from organizations like PPDM instead of building them from scratch, and consider adding to the PPDM rules database as you define new ones. When rules are standardized, sharing and exchanging data becomes easier and more cost-effective.
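To make the idea of standardized, shareable data rules concrete, here is a minimal sketch in Python. The rule names, field names and thresholds are hypothetical illustrations of my own, not actual PPDM rule definitions; the point is that rules expressed as named, self-contained checks can be shared between departments instead of being re-implemented in each system.

```python
# A minimal sketch of standardized, shareable data-quality rules for
# well-header records. Rule names, field names and thresholds are
# hypothetical illustrations, not actual PPDM rule definitions.

def rule_latitude_in_range(rec):
    """Surface latitude must be a valid geographic coordinate."""
    lat = rec.get("surface_latitude")
    return lat is not None and -90 <= lat <= 90

def rule_spud_before_completion(rec):
    """Spud date must not be later than the completion date."""
    spud, comp = rec.get("spud_date"), rec.get("completion_date")
    return spud is None or comp is None or spud <= comp

# Because each rule is a named, self-contained check, the same list can
# travel with the data rather than being buried in one application.
WELL_HEADER_RULES = [rule_latitude_in_range, rule_spud_before_completion]

def validate(record):
    """Return the names of all rules the record fails."""
    return [r.__name__ for r in WELL_HEADER_RULES if not r(record)]

well = {"surface_latitude": 29.76,
        "spud_date": "2014-03-01", "completion_date": "2014-05-20"}
print(validate(well))  # an empty list means the record passes all rules
```

The same catalog of checks can then run at the wellspring of the data (point 5 above) and again at any departmental checkpoint, with identical results.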

E&P Companies Looking for New Ways to Deliver Data Management Services to Improve Efficiency and Transparency

Effective data management, specifically in the exploration and production (E&P) business, has a significant positive impact on the operational efficiency and profitability of oil and gas companies. Upstream companies are now realizing that a separate “Data Management” or “Data Services” department is needed in addition to the conventional IT department. Those departments’ key responsibilities are to professionally and effectively manage E&P technical data assets worth millions, and in some cases billions, of dollars.

Traditional Data Management Processes Cannot Keep up with Today’s Industry Information Flow 

Currently, day-to-day “data management” tasks in the oil and gas industry are directed and partially tracked using Excel spreadsheets, emails and phone calls. One of the companies I visited last month was using Excel to validate receipt of seismic data against contracts and POs, e.g. all surveys and all their associated data. Another used Excel to maintain a list of all wireline log data ordered by petrophysicists in a month to compare against end-of-month invoices.

Excel might be adequate if an E&P company is small and has little ambition to grow. However, the larger a company’s capital (and ambitions), the more information and processes are involved in managing the life cycle of data and documents. Consider the more than 20,000 drilling permits issued per year in Texas alone; when trying to manage this much information with a spreadsheet, some tasks are bound to fall through the cracks.
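To show why the manual reconciliation described above is a natural candidate for automation, here is a sketch in Python of the wireline-log-versus-invoice check. The field names and the two sample lists are hypothetical illustrations; real orders and invoices would come from the procurement and accounting systems.

```python
# A sketch of reconciling data orders against vendor invoices, the kind
# of month-end check often done by hand in Excel. Field names and the
# sample records are hypothetical illustrations.

def reconcile(ordered, invoiced):
    """Compare ordered items to invoiced items by (well, item) key.

    Returns items that were invoiced but never ordered, and items that
    were ordered but are missing from the invoice.
    """
    ordered_keys = {(o["well"], o["item"]) for o in ordered}
    invoiced_keys = {(i["well"], i["item"]) for i in invoiced}
    return {
        "invoiced_not_ordered": sorted(invoiced_keys - ordered_keys),
        "ordered_not_invoiced": sorted(ordered_keys - invoiced_keys),
    }

orders = [{"well": "Smith 1H", "item": "gamma ray log"},
          {"well": "Smith 1H", "item": "resistivity log"}]
invoice = [{"well": "Smith 1H", "item": "gamma ray log"},
           {"well": "Jones 2H", "item": "sonic log"}]

result = reconcile(orders, invoice)
print(result["invoiced_not_ordered"])  # items to query with the vendor
print(result["ordered_not_invoiced"])  # data paid for but not yet billed
```

A workflow tool does essentially this on every data receipt, rather than once a month when someone remembers to open the spreadsheet.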

Senior managers are more interested than ever in the source, integrity and accuracy of the information that affects and influences their HSE and financial statements.
Providing senior managers with such information requires transparent data management processes that are clearly defined, repeatable and verifiable, and that allow managers to identify, evaluate and address or alleviate any obvious or potential issues before they become risks. Preferably all delivered efficiently and cost-effectively.

Choosing the Right E&P Data Management Workflow Tools

It’s tempting to stay with the old way of doing things – the “way we have always done it” – because you have already established a certain (and personal) rhythm of working, inputting and managing data. Even the promise of improved profitability and efficiency is often not enough to convince people to try something new. But the advantages of new workflow tools and programs cannot and should not be underestimated.

For example, a workflow tool can help automate the creation of data management tasks, log and document technical metadata, track data-related correspondence, and alert you to brewing issues. When all is said and done, the data management department would be set for growth, handling a larger workload without skipping a beat. Growing by adding more people is not sustainable.

So, where to start?

There are multiple data management workflow tools available from a variety of different vendors, so how do you know which one will work best for your company? You will need to ensure that your workflow tool or software is able to do the following:

  • Keep detailed technical documentation of incoming data from vendors, thereby minimizing the duplication of work associated with “cataloging” or “tagging”;
  • Integrate with other systems in your organization such as seismic, contracts, accounting, etc., including proprietary software programs;
  • Allow sharing of tasks between data managers;
  • Enable collaboration and discussion to minimize scattered email correspondence; and,
  • Automatically alert others of issues, such as requests that still need to be addressed.
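As a concrete illustration of the last requirement, here is a minimal sketch in Python of an overdue-request alert. The task fields and the seven-day threshold are hypothetical assumptions of mine, not the behavior of any particular vendor's product.

```python
from datetime import date, timedelta

# A minimal sketch of the "automatically alert others" requirement:
# flag open data requests that have been waiting longer than a threshold.
# Task fields and the seven-day threshold are hypothetical illustrations.

ALERT_AFTER = timedelta(days=7)

def overdue_requests(tasks, today):
    """Return open tasks that have been waiting longer than ALERT_AFTER."""
    return [t for t in tasks
            if t["status"] == "open" and today - t["opened"] > ALERT_AFTER]

tasks = [
    {"id": 101, "status": "open",   "opened": date(2015, 6, 1)},
    {"id": 102, "status": "closed", "opened": date(2015, 6, 1)},
    {"id": 103, "status": "open",   "opened": date(2015, 6, 9)},
]

for t in overdue_requests(tasks, today=date(2015, 6, 10)):
    print(f"ALERT: request {t['id']} has been open since {t['opened']}")
```

In a real workflow tool, the same check would run on a schedule and notify the request owner and their manager, instead of relying on someone noticing a stale row in a spreadsheet.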