
We have been handicapped by high margins. Will this happen again, or will we learn?

About 15 to 20 years ago, we started to discuss and plan the implementation of databases in oil and gas, in hopes of reaping the benefits of all their promises. And we did plan and deploy those databases. It is now no longer conceivable to draw geological maps by hand or to store production volumes in books. In the last ten years, we have also moved beyond simple storage of digital content and started managing data quality more aggressively. Here too, we have made inroads. But have we done enough?

Have you ever wondered why companies are still cleaning their data over and over again? Or why we are still putting up building blocks such as standards for master well lists and hierarchies? It seems to me that the industry as a whole is unable to break through the foundational stages of enterprise information management. Because it can't break through, it cannot achieve the sustainable, robust foundation that would allow its systems to keep pace with business growth or business asset diversification.

Perversely, I believe this is because the oil and gas industry has been handicapped by high margins. When a company is making money despite itself, throwing additional bodies and resources at a pressing issue seems like the fastest and most effective solution in the moment. Because the industry is structured in such a way that opportunities have to be seized in the moment, there is often little time to wait for the right solution to be implemented.

Throwing money at a problem is not always the wrong thing to do. However, if it becomes your go-to solution, you are asking for trouble.

I would argue that highly leveraged companies have put themselves at high risk of bankruptcy because they do not invest sufficiently in efficiency and agility through optimized processes and quality information flow. For example, coming up with the most effective completion for your reservoir requires access to quality, granular technical data. This data does not just happen; it takes a great deal of wiring and plumbing work across your organization's data and processes. Luckily, if done right, it is largely a one-time investment with minimal operational upkeep.

According to Bloomberg, CNN, and Oil & Gas 360 reports, at least 60 companies have entered Chapter 11 during this ongoing downturn in the USA alone. Ultra, Swift, Sabine, Quicksilver, and American Energy are just a few of these highly leveraged but otherwise technically excellent companies.

Without the required behind-the-scenes investment, engineers and geoscientists will find a way to get the data they need to make decisions. They will, and often do, work hard to bring together data from many siloed systems. But having each engineer massage data individually is just another form of throwing money at the problem. With the correct platform in place, this information would flow like clockwork to everyone who needs it, with little to no manual work.

WHAT COULD HAVE BEEN DONE?

We all know it is never the wrong time to make a profit. Consequently, it is never the wrong time to invest in the right foundation. During a downturn, lower demand creates an abundance of the only resource unavailable during an upturn: time. Spent wisely, this time can pay huge dividends during the next upswing in prices. Conversely, during a period of high prices, it is the other resources we cannot afford to waste. Even during a boom, we cannot ignore building sustainable, long-term data and process solutions the RIGHT way.


Of course, there is no single "right way" that will work for everyone. The right way for your organization is entirely subjective; the only rule is that it must align with your company's operating model and goals. By contrast, the only truly wrong way is to do nothing, or to invest nothing at all.

If your organization has survived more than ten years, then it has seen more than one downturn, along with prosperous times. If you’ve been bitten before, it’s time to be twice shy. Don’t let the false security of high margins handicap you from attaining sustainable and long-term information management solutions.

Here are some key pointers that you probably already know:

Track and automate repeatable tasks – Many of your organization's manual, repeatable tasks have become easier to track and automate with the help of BPMS solutions. Gain transparency into your processes, automate them, and make them leaner whenever possible.

Avoid Duplication of Effort – Siloed systems and departmental communication issues result in significant duplicated effort and rework of the same data. Implementing strong data QA processes upstream can resolve this; the farther upstream, the better. For example, geoscientists are forced to rework their maps when they discover inaccuracies in elevation or directional survey data. This is low-hanging fruit that is easy to pick by implementing controls at the source, and at each stop along the way (see the sketch following this list).

Take an Enterprise View – Most E&P companies fall under the enterprise category. Even smaller players often employ more people than the average small-to-medium business (especially during a boom) and deal with a large number of vendors, suppliers, and clients. Your organization should deploy enterprise solutions that match your company's enterprise operating model. Most E&P companies fall in the lower-right quadrant of the MIT matrix below.

[Figure: MIT operating model matrix]
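Here is the promised sketch of a "control at the source." It is a minimal, hypothetical example only: the field names, UWI value, and plausibility ranges are made up for illustration, and real rules would come from your own master well list and data standards.

```python
# A minimal sketch of a QA control applied where data enters the system.
# Field names and ranges are hypothetical illustrations, not a standard.
def validate_well_header(record: dict) -> list[str]:
    """Return the list of QA issues found in an incoming well header record."""
    issues = []
    if not record.get("uwi"):
        issues.append("missing UWI (master well list key)")
    elevation = record.get("kb_elevation_ft")
    if elevation is None:
        issues.append("missing KB elevation")
    elif not -1000.0 <= elevation <= 20000.0:
        issues.append(f"KB elevation {elevation} ft outside plausible range")
    if not record.get("datum"):
        issues.append("missing geodetic datum")
    return issues

# Quarantine bad records before they reach downstream mapping tools.
incoming = {"uwi": "100123456789W500", "kb_elevation_ft": 2890.0}
print(validate_well_header(incoming))  # -> ['missing geodetic datum']
```

The point is not the specific checks but where they run: at the source, before a geoscientist builds a map on bad elevations.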

Non-Disruptive, Non-Invasive Data Governance for Oil & Gas

Trading Post

Establishing data governance is not a new activity. It is, at its heart, an extension of man's desire to define the world, and to communicate these discoveries more efficiently. A good data standard can be likened to the use of Latin as a lingua franca by merchants in medieval Europe. Few English merchants could speak Dutch, but most were taught Latin (and vice versa). Latin provided a set of definitions and rules understood by all, promoted by rote memorization of grammar and a large number of books, policed by data stewards in the form of tutors who rapped children's knuckles when they got it wrong. (OK, maybe this is a bit of a stretch, but I like the story :-))

Not a Blank Slate

In oil and gas, I see data governance programs in many forms, from fully centralized to completely distributed approaches, and everything in between. These implementations come with varying degrees of success.

So when I came across Robert Seiner's book "Non-Invasive Data Governance," I asked myself: could this work for oil and gas? In my judgment, a distributed, organic, and non-invasive approach could deploy a data governance program in a faster, more uniform, and more comprehensive manner, which in turn would yield better success.

Non-invasive data governance is built around identifying the de facto standards and processes already in place to capture and manipulate data. Where there is no single standard, the organization "converges" on and "formalizes" one that suits. In this new world, data stewards are recognized formally and maintain universal standards for the work they have been doing all along…

To me, this approach has far-reaching implications for raising the bar on data quality, because it weaves quality standards into the DNA and culture of an organization.

Business Specific Pidgin

Let's continue the historical analogy a little. Trade was still conducted without the advantage of a lingua franca, albeit with greater difficulty, typically through the evolution of pidgin languages. The first encounters, however, were most likely exercises in frustration, as both parties attempted to learn one another's needs, define goods and services, and establish their perceived value. In speaking a pidgin language with another merchant, if either party used a differing definition, or even presented his offer in an unfamiliar sentence structure, the business venture could go south very quickly. The same is true within a single oil and gas organization: for each data group, there needs to be one standard for all.

The oil and gas industry would not be where it is today without some established data standards and data processes already in place. Data governance will never be a blank slate. The problem is that while standards exist that are recognized across the industry, there are many terms that differ from one team to another and are not quite formalized or fully recognized.

The non-invasive DG approach is to formalize what is currently informal and monitor it for continuous improvement over time. For example, wellbore survey data can be captured in different ways, none of which are wrong, just different. One team might store latitude, longitude, geodetic system, Easting, Northing, and distance. Another team might use negative and positive signs to indicate direction instead of Easting and Northing labels. These are very subtle differences; however, when flowing data from one system to another (and we flow data a lot), a level of accuracy is lost in translation.
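To make the translation risk concrete, here is a minimal sketch under assumed record layouts (the field names and conventions are hypothetical): one team stores unsigned offsets with "E"/"W" and "N"/"S" labels, the other stores signed offsets. A single formalized convention, plus one explicit converter, keeps meaning from being lost between systems.

```python
from dataclasses import dataclass

@dataclass
class SurveyStation:
    md_ft: float      # measured depth
    ew_offset: float  # formal convention: East positive, West negative
    ns_offset: float  # formal convention: North positive, South negative

def from_labeled(md_ft: float, ew: float, ew_dir: str,
                 ns: float, ns_dir: str) -> SurveyStation:
    """Convert a record that uses unsigned offsets plus direction labels
    into the signed convention adopted as the formal standard."""
    ew_sign = 1.0 if ew_dir.upper() == "E" else -1.0
    ns_sign = 1.0 if ns_dir.upper() == "N" else -1.0
    return SurveyStation(md_ft, ew_sign * abs(ew), ns_sign * abs(ns))

# One team's record: 125.3 ft West, 40.2 ft North at 5,000 ft MD
print(from_labeled(5000.0, 125.3, "W", 40.2, "N"))
# -> SurveyStation(md_ft=5000.0, ew_offset=-125.3, ns_offset=40.2)
```

The converter is trivial; the value is in the governance step of declaring one convention the formal one.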

Let me know your thoughts…


Technical Documents Architecture: Separate for Sustainable Efficiency

If, like many oil and gas companies, your technical documents are scattered or buried in folders and nested folders, you have an opportunity to increase the efficiency of your petro-professionals: organize their technical documents, speed up how they locate those documents, or better yet, do both. Done right, you can architect a solution that is sustainable.

Organizing electronic files for an oil and gas company is not as complex as it may first seem. It is very similar to the way you organize files and documents on your PC. The fundamental question you must ask yourself is: "How do I sustainably tag or organize my files so that I can find what I need in 5 seconds or less?" This question guides all the designs and solutions I help my clients' engineers and geoscientists build. The topic is big, with many angles; in this post I would like to share my thoughts on architecting a healthy technical-documents environment.
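As a toy illustration of that 5-second test (the tags, paths, and function below are hypothetical), a small controlled vocabulary of tags can outperform deeply nested folders:

```python
# A toy sketch of tag-based retrieval; tags and paths are hypothetical.
documents = [
    {"path": "/final/W-001_completion_report_v3.pdf",
     "tags": {"well": "W-001", "doc_type": "completion_report", "status": "final"}},
    {"path": "/work/W-001_completion_report_draft.docx",
     "tags": {"well": "W-001", "doc_type": "completion_report", "status": "draft"}},
]

def find(**criteria):
    """Return every document whose tags match all of the given criteria."""
    return [d["path"] for d in documents
            if all(d["tags"].get(k) == v for k, v in criteria.items())]

print(find(well="W-001", status="final"))
# -> ['/final/W-001_completion_report_v3.pdf']
```

The architecture below is largely about giving a "final" tag like this a formal home, separate from the daily mess.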

This Valentine’s month: Separate for Sustainability

Many oil and gas companies do not distinguish between an active work area and a long-term, final-version area for their electronic files. What we find instead is one environment with ALL technical files in one place. This repository often contains both active and long-gone projects, including files for divested wells. With this kind of architecture, attempts to organize the chaotic mess have to be repeated every few years, each time with a hefty price tag!


In my opinion, for an oil and gas company to have a sustainable document management practice, there should be at least four working areas within your environment (see diagram below).

[Diagram: TDRM architecture – the four working areas]

An Amicable Separation

Area #1: Team Work Area

By establishing a day-to-day work area that can, by definition, be an organic mess coordinated by those who know exactly what everything is, your team can collaborate in whatever way works best for them. This area is usually cluttered with analysis files, multiple versions of a document, manuals, research material, cost proposals from vendors, and more.

This flexibility is key to a productive, happy work environment. It only becomes a problem when others are exposed to a ‘mess’ that is not of their own making. For this reason, each team should have their own defined work area.

(Yes, I hear some of you say that today's technologies are designed to curtail this mess, removing the need for a separate work environment. That may be possible when the technology is used by homogeneously skilled staff; oil and gas staff, however, are of all ages and at different levels of software savviness.)

Area #2: Final Versions Area

Separating your final-versions area from the working area has two immediate benefits:

1) It lets you efficiently and effectively declare and distribute the final version to the enterprise.

2) It allows the removal of inactive files from the work area (decluttering).

Your final-versions area should provide access to, and easy identification of, the latest version of a report in a timely manner. Unfortunately, this area is often not formalized (it is not separated from Area #1), causing delays for other teams who need access to a given file: they need to notify your team, have a member of your team identify the correct file, and then possibly send them the file.

Often, distribution of final versions is a complex dance of requests and deliveries between multiple teams or individuals. By separating the archival/final-versions area and granting access to authorized users, you can turn this jitterbug contest into a synchronized line dance. If all parties that need a file can identify the right file on their own, and retrieve it themselves, significant delays can be avoided.

Furthermore, by separating the final-versions area from the work area, you have a chance to sustain the integrity and completeness of your technical files, specifically well files and records (the most important assets you can have), allowing the company to easily open a data room if and when needed.

Areas #3 and #4: External Collaboration and Access

When considering work areas and final versions, it is important to consider accessibility, external as well as internal. Providing data to JV partners based on their WI and JOV data requirements, and collaborating with vendors during well directional design or completion treatment, is essential to keeping technical documents preserved rather than lost in a web of email attachments.

To me, this architecture is neither invasive nor intrusive to engineers' and geoscientists' workflows.

Summary: Separate, but don’t go far

Separating the final-versions area from the work area can have an immediate, strong benefit to productivity, balancing team flexibility with the requirements of other teams in the organization. While the day-to-day work area should be organic and flexible, it is important that the archival/final-versions repository is well defined, because it serves not just one team but the needs of the organization as a whole. This separation provides a sustainable solution that meets the 5-second accessibility requirement outlined above.

Having outlined the benefits of this simple change in a complex working environment, we’d love to hear from the community. Do you have a better approach? Questions regarding implementation? Have you implemented something like this, and if so, what was your experience? Whatever the input, we would love to hear from you.

What Impact Does Big Data Technology Bring To Oil and Gas?

Dealing with the massive influx of information gathered from exploration projects or real-time gauges at established fields is pushing the traditional data-management architecture to its limits in the oil and gas industry. More sensors, from 4-D seismic to fiber optics in wells, widen the gap between advances in data capture and the traditional ways of managing and analyzing data. It is the challenge of managing the sheer volume of collected data, and the need to sift through it in a timely fashion, that Big Data technologies promise to help us solve. This was just one of the suggestions on the table at the Data Management Workshop I attended in Turkey earlier this month.

For me, one of the main issues with the whole Big Data concept in the oil and gas industry is that, while it sounds promising, it has yet to deliver the tangible returns companies need to see in order to prove its worth. To overcome this dilemma, Big Data vendors such as Teradata, Oracle, and IBM should consider demonstrating concrete new examples of real-life oil and gas wins. By new, I mean challenges that are not possible to solve with traditional data architecture and tools. Vendors should also offer Big Data technology at a price that makes it viable for companies to try it and experiment.

The oil and gas industry is notoriously slow to adopt new software technology, particularly anything that tries to take the place of traditional methods that already work, unless its value is apparent. To quote a good friend: "We operate with fat margins; we don't feel the urgency." However, E&P companies should put their creative hats on and work alongside Big Data technology vendors. Big Data may just be the breakthrough we need to make a tangible step-change in how we consume and analyze subsurface and surface data with agility.

If either side, vendors or E&P companies, fails to deliver, Big Data becomes a commercial white elephant, doomed to very slow adoption.

At the workshop, Oracle, Teradata, and IBM all showed interesting tools. However, they drew their examples from other industries and occasionally pointed to problems that conventional data technology can already solve. They left the audience still wondering!

One Big Data example that is relevant and hits home was presented by CGG. CGG used pattern recognition (on Teradata technology) to find all logs that exhibit a specific pattern a petrophysicist may be interested in. This type of analysis requires scanning through millions of log curves, not just the metadata that traditional architectures had bound us to. It opens up new horizons for serendipity and, who knows, maybe for new discoveries.
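As an illustration only (this is a sketch, not CGG's actual implementation), searching curve values rather than metadata can be as simple as a sliding-window correlation; assume each log curve is an array of samples:

```python
import numpy as np

def match_pattern(curve: np.ndarray, pattern: np.ndarray,
                  threshold: float = 0.9) -> list[tuple[int, float]]:
    """Return (start index, similarity) wherever a window of `curve`
    correlates with `pattern` above `threshold`."""
    m = len(pattern)
    p = (pattern - pattern.mean()) / (pattern.std() + 1e-12)  # z-normalize once
    hits = []
    for i in range(len(curve) - m + 1):
        w = curve[i:i + m]
        w = (w - w.mean()) / (w.std() + 1e-12)
        score = float(np.dot(w, p)) / m  # Pearson-like similarity in [-1, 1]
        if score >= threshold:
            hits.append((i, score))
    return hits

# Hypothetical usage: scan a collection of curves for one signature shape.
# logs = {"WELL-001/GR": np.array([...]), ...}
# matches = {name: match_pattern(c, signature) for name, c in logs.items()}
```

At millions of curves, the hard part is not this loop but running it in parallel next to the data, which is exactly where Big Data platforms are supposed to earn their keep.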


How an E&P Subsidiary Took Its Information Communications from Risky to Efficient

It starts with chatter around the workplace. A company is growing. Procedures that were once “nice to have” are now serious money bleeds. That is exactly what Certis found when they revamped a major E&P subsidiary’s communication procedures.

When an oil and gas company plants itself in a nation to explore for business opportunities, its communications with the nation's government and with its JV partners can, understandably, be informal in the early stages of the project. As the company moves from the Exploration and Appraisal phases toward full-fledged Development and Operations, what once worked with lax communications becomes a risky endeavor.

While these risks can be underplayed next to health and safety hazards, we discovered they warranted immediate action if the company was to survive long term. Consider these two real situations:

1) Sensitive information leaks. For example, at the early stages of exploration efforts, any discovery would have a large impact on a company's stock price (if public) and serious implications for its competitors' behavior.

2) Growing companies watch millions of dollars become billions of dollars almost overnight. Those large dollar amounts require complete technical data and timely communications to satisfy the government and the JV partner. The flow of information becomes crucial.

Knowing something is broken isn’t the same as understanding how it is broken and how to fix it.

Most employees can feel the weak spots in their company. When you start to sense problems, the cost of fixing them seems outlandish. But over time, the scales tip. Often, by the time they do, the problem has grown too overwhelming for employees to handle alone.

The scales had long ago tipped for this client. Our team's role was to quickly identify the causes of the communication problems and to orchestrate a long-term plan and processes to mitigate the risks.

Over a period of a few weeks, we surveyed the office, field, and rigs on two different continents and went through a full cycle of process improvement. In the end, we were able to divide their information communication needs into four process categories: 1) Documents and Data Management, 2) Decisions Documentation, 3) Security and Access Management, and 4) Request Management.

Our plan started with "quick wins" that changed the way the subsidiary did business within the first month. Imagine being able to institute relevant changes in your company in one month. Yes, some of it was that easy to solve. The rest of the implementation plan spanned four months, during which communication policies, standards, and procedures were defined and complied with across the organization.

We all know that the cost of fixing is cheap compared to the cost of cleaning up a huge mess later.

The costs of missed opportunities, reduced stock prices, and million-dollar lawsuits make this kind of project important; combine that with the relatively low cost of fixing, and it becomes a high priority.

I believe a company needs to do more than simply comply with government or JV partner contracts. To build strong relationships, you must be able to readily prove your compliance. That’s just good business.

Our client’s new transparent business practices allow the government to view them as a serious and trusted part of the country’s future. It is impossible to put a price on a valued relationship. But successful business people know that gaining trust means big business over time.

What about your company? Is it starting to feel the risks of outdated communication systems?

Improved Seismic Data Services Cycle Time By 50 Weeks – Looking back at a project delivered by Certis Inc.

When you’re an Oil & Gas exploration company that manages over 5,000 lines and surveys a year, efficiency in managing data is vital. Yet, three years ago, a company that size had never measured the efficiency of the process of delivering data to the business. There was a general feeling that 80% of the time the process was acceptable. However, they could not see what was going wrong the other 20% of the time. Even within the 80%, they had to wonder if ‘acceptable’ was the best they could do. In essence, they knew they had their weaknesses, but were too close to the problems to see the solutions. They turned to a process improvement firm with oil and gas data experience to shed some light on any workflow problems they felt, but couldn’t see. This is when we came in.

To give you a visual of the work done for the E&P company, we need to take a step back in time. Remember the days before email? Now go further back, to a time you don't remember. Imagine a time before mail trucks and planes, a time when, if you wanted to mail a letter to Spain, you needed to put it on a ship. It would take months to get there. It would need to withstand seasick sailors, ocean storms, and Moby Dick to arrive at its destination. It might never be as pretty as the day you mailed it. Now, pretend that time was three years ago. Three years ago, your letter spent months aboard a ship; now, you zap it electronically in seconds.

That is what this oil and gas company has gone through in the past three years. By improving their seismic data processes, they have gone from taking a year to complete a full cycle from receipt to archival, to two weeks or less. Remember, in 2010 the company considered the delivery time for data to the business acceptable in 80% of cases. While they were suspicious of inefficiencies, they could not imagine going from the mail ship to email in three years. But that is exactly what they did.

Every business occasionally needs an outsider to look in and get a better view of its systems' potential. The company began to see that its processing was far from optimal, but that it could be.

$250 Million Oil Take-Over Deal Implodes Due To Disastrous Data Management

As professionals in the oil and gas sector, we all know that when it comes to mergers and acquisitions (M&A), having access to quality data is essential. In its absence, deals don't get made, investors lose hundreds of thousands of dollars, and livelihoods are put at risk.

So we were pretty taken aback recently to hear of one deal, involving a public company, which fell through because the organization couldn't even list its complete assets with confidence – such was the mess of its data.

We were talking with a CEO recently who "vented" about a recently failed acquisition. A major player who has worked in the sector since the mid-1970s, he told us why the $150 million to $250 million investment his company was prepared to make didn't just fall flat, but imploded: "Despite asking this company repeatedly to give us access to their 'complete' data sets, they failed to deliver time and again. We became increasingly frustrated and discouraged, to the extent that we wouldn't even make a proposal in the region of $80 million for the company. What was so galling to us was that it was obvious this company badly needed an investor and had approached us to bid."

We all know what data is needed for M&A investments to happen; some of it we can get from public records and from commercial organizations such as IHS and Drilling Info (in the USA). But those sources alone are not nearly sufficient. So what were they thinking? Did they think the data would take care of itself? Or was someone just not doing his or her job well?

The CEO continued: "…in the past, when companies were under pressure, a lot of data typically got swept under the rug, as it were. Today, though, investors demand tighter regulation of data, and I suspect that, because of this, in ten years' time some companies just aren't going to make it. If our company had been allowed to invest and take over, we could have solved many of the organization's problems, saved some jobs, and even added value. Sadly, in this event, due to poor management of critical data, that scenario was never allowed to take place. The deal never even got past the first hurdle. No one is going to invest millions when they don't have a clue about (or confidence in the data of) what they're buying."

Considering this was a company with responsibility for public money, the management team should never have been allowed free rein without critical data management regulations, or at the very least "guidelines."

What is your opinion?