
Part 3 of 3: Are we progressing? The Oil & Gas Data Management Journey in the 2000s

The 1990s' shopping spree for applications produced a spaghetti of links between databases and applications while also chipping away at the petro professional's effective time with manual data entry. Then a wave of mega M&As hit the industry in the late 1990s and early 2000s.

Mega M&As (mergers and acquisitions) continued into the first part of the 2000s, bringing with them, at least for those on the acquiring side, a new level of data management complexity.

With mega M&As, the acquiring companies inherited more databases and systems, and many physical boxes upon boxes of data. This influx of information proved to be too much at the outset, and companies struggled, and continue to struggle, to check the quality of the technical data they'd inherited. Unknown at the time, the data quality issues present at the outset of these M&As would have lasting effects on current and future data management efforts. In some cases they gave rise to lawsuits that were settled for millions of dollars.

Early 2000s

Companies started to experiment with the Internet.  At that time, that meant experimenting with simple reporting and limited intelligence on the intranet.  Reports were still mostly distributed via email attachments and/or posted in a centralized network folder.

I am convinced that it was the Internet that necessitated cleaning technical data and key header information, for two reasons: 1) Web reports forced integration between systems, as business users wanted data from multiple silo databases on one page. More often than not, real-time integration could not be realized without cleaning the data first. 2) Reports on the web linked directly to databases exposed more "holes" and multiple "versions" of the same data; it revealed how necessary it was to have only ONE VERSION of information, and that it had better be the truth.

The majors were further ahead, but at many other E&P companies, engineers were still integrating technical information manually, taking a day or more to get a complete view and understanding of their wells; Excel was mostly the tool of choice. Theoretically, with these new technologies, it should have been possible to automate this and instantaneously give a 360-degree view of a well, field, basin and what have you. In practice it was a different story because of poor data quality. Many companies started data cleaning projects; some efforts were massive, in the tens of millions of dollars, and involved merging systems from many past acquisitions.

In the USA, in addition to the Internet, the collapse of Enron in October 2001 and the Sarbanes-Oxley Act, enacted on July 30, 2002, forced publicly traded oil and gas companies to document their operations and finances and provide better transparency into both. Data management professionals were busy implementing their understanding of SOX, which required tighter definitions and processes around data.

Mid 2000s

By the mid-2000s, many companies had started looking into data governance. Sustaining data quality was now at the forefront. The need for both sustainable data quality and data integration gave rise to Well Master Data Management initiatives. Projects on well hierarchy, data definitions, data standards, data processes and more all evolved around reporting and data cleaning projects. Each company worked on its own standards, sharing success stories from time to time. Organizations such as Energistics, PPDM and DAMA came in handy but were not fully relied on.

Late 2000s

When working on sustaining data quality, one runs into the much-debated subject of who owns the data. While for years the IT department tried to lead "data management" efforts, it was not fit to clean technical oil and gas data alone; it needed heavy support from the business. However, the engineers and geoscientists did not feel it was their priority to clean "company-wide" data.

CIOs and CEOs started realizing that separating data from systems is a better proposition for E&P: data lives forever while systems come and go. We started seeing a movement toward a data management department, separate and independent from IT but working closely with it. A few majors made this move in the mid-2000s with good success stories; others started in the late 2000s. It typically began with a Data Management Manager reporting to the CIO (perhaps with a dotted line to a business VP), and later reporting directly to the business.

Who would staff a separate data management department? You guessed it: resources came from both the business and IT. In the past, each department or asset had its own team of technical assistants ("Techs") who would support its data needs (purchase, clean, load, massage, etc.). Now many companies are consolidating Techs into one data management department that supports many departments.

Depending on how the DM department is run, this can be a powerful model, provided it is truly run as a service organization with the same sense of urgency that E&P operations demand. In my opinion, this could result in cheaper, faster and better data services for the company, and a more rewarding career path for those who are passionate about data.

In late 2008 and throughout 2009, gas prices fell, more so in the USA than in other parts of the world; shale natural gas supply had caught up with demand and was exceeding it. Then, in April 2010, we woke up to witness one of the largest offshore oil spill disasters in history, when BP's Macondo well blew out and gushed oil.

Companies that had put all their bets on gas fields or offshore fields had no appetite for data management projects. For those that were well diversified or more focused on onshore liquids, data management projects continued either at full speed or as business as usual.

2010 to 2015

Companies that had enjoyed the high oil prices since 2007 started investing heavily in "digital" oilfields. More than 20 years had passed since the majors started this type of initiative (I was on such a project with Schlumberger for one of the majors back in 1998), but now it was more justifiable than ever. Technology prices had come down, system capacities were up, network reliability was strong, wireless connections were reasonably steady, and more. All of it came together like a perfect storm to resurrect "smart" field initiatives like never before. Even the small independents were now investing, and high oil prices justified the price tag (multiple millions of dollars) on these projects. A good part of these projects lies in managing and integrating real-time data streams and intelligent calculations.

Two more trends appeared in the first half of the 2010s:

  • Professionalizing petroleum data management. This seemed like a natural progression now that data management departments are in every company. The PPDM organization has a competency model that is worth looking into, and some of the majors have their own models tied to their HR structure. The goal is to reward a DM professional's contribution to the business's assets. (See also my blog on the MSc in Petroleum DM.)
  • Larger companies are starting to experiment with and harness the power of Big Data, and the integration of structured with unstructured data. Metadata and the management of unstructured content have become more important than ever.

Both trends have tremendous contributions that are yet to be fully harnessed. The Big Data trend in particular is nudging data managers to start thinking about more sophisticated analysis than they did before, although one could argue that the Technical Assistants who helped engineers with some analysis were already nudging toward data analytics initiatives.

By December 2015, the oil price had collapsed more than 60% from its peak.

But to my friends' disappointment, standards are still being defined. Well hierarchy, while it seems simple to business folks, is so hard to automate and run smoothly across all asset types and locations that it sometimes feels like it will require the intervention of the UN. And with all the data quality commotion, some data management departments are a bit detached from operational reality and take too long to deliver.

This concludes my series on the history of Petroleum Data Management. Please add your thoughts; I would love to hear your views.

For Data Nerds

  1. Data ownership has now come full circle, from the business to IT and back to business.
  2. The rise of shale and coal-bed methane properties and the fast evolution of field technologies are introducing new data needs. Data management systems and services need to stay nimble and agile; the old way of taking years to come up with a usable system is too slow.
  3. Data cleaning projects are costly, especially when cleaning legacy data, so prioritizing and having a complete strategy that aligns with the business's goals are key to success. Well-header data is a very good starting point, but aligning with what operations really need will require paying attention to many other data types, including real-time measurements.
  4. When instituting governance programs, having a sustainable, agile and robust quality program is more important than temporarily patching problems based on a specific system.
  5. Tying data rules to business processes, starting from the wellspring of the data, is prudent for sustainable solutions.
  6. Consider outsourcing your legacy data cleanups if they take resources away from supporting day-to-day business needs. Legacy data cleaning outsourced to specialized companies will always be faster, cheaper and more accurate.
  7. Consider leveraging standardized data rules from organizations like PPDM instead of building them from scratch, and consider adding to the PPDM rules database as you define new ones. When rules are standardized, sharing and exchanging data becomes easier and more cost effective. (A minimal sketch of an automated rule check follows this list.)
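To make item 7 concrete, here is a minimal sketch of what an automated well-header rule check might look like. The field names (uwi, spud_date, surface coordinates) and the rules themselves are hypothetical examples for illustration, not PPDM's published rule definitions; the point is only that rules expressed as small, testable functions can be run routinely and shared.

```python
from dataclasses import dataclass
from datetime import date
from typing import Callable, Optional

@dataclass
class WellHeader:
    uwi: str                      # unique well identifier
    spud_date: Optional[date]     # date drilling started
    surface_lat: Optional[float]  # surface latitude, decimal degrees
    surface_lon: Optional[float]  # surface longitude, decimal degrees

# Each rule returns None when the record passes, or a message when it fails.
Rule = Callable[[WellHeader], Optional[str]]

def uwi_present(w: WellHeader) -> Optional[str]:
    return None if w.uwi and w.uwi.strip() else "UWI is missing"

def spud_date_not_in_future(w: WellHeader) -> Optional[str]:
    if w.spud_date and w.spud_date > date.today():
        return f"Spud date {w.spud_date} is in the future"
    return None

def coordinates_in_range(w: WellHeader) -> Optional[str]:
    if w.surface_lat is None or w.surface_lon is None:
        return "Surface coordinates are missing"
    if not (-90 <= w.surface_lat <= 90 and -180 <= w.surface_lon <= 180):
        return "Surface coordinates are out of range"
    return None

RULES: list[Rule] = [uwi_present, spud_date_not_in_future, coordinates_in_range]

def audit(wells: list[WellHeader]) -> dict[str, list[str]]:
    """Return a map of UWI -> list of rule violations for wells that fail."""
    findings: dict[str, list[str]] = {}
    for w in wells:
        issues = [msg for rule in RULES if (msg := rule(w)) is not None]
        if issues:
            findings[w.uwi or "<missing UWI>"] = issues
    return findings

if __name__ == "__main__":
    sample = [
        WellHeader("42-123-00001", date(2012, 5, 1), 28.9, -98.4),
        WellHeader("", None, 95.0, -98.4),
    ]
    for uwi, issues in audit(sample).items():
        print(uwi, issues)
```

The same functions can run nightly against the master database or be applied at load time, so quality is sustained rather than patched once.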

Change Coming Our Way: Prepare Data Systems to Store Laterals' Details

During the past decade, oil and gas companies have aimed their spotlight at efficiency. But should this efficiency come at the expense of data collection? Many companies are now realizing that it shouldn't.

Consider the increasingly important re-fracturing effort. It turns out, in at least one area, that only 45% of re-fracs were considered successful when the candidates were selected using production data alone. However, when additional information (such as detailed completion, production, well integrity and reservoir characterization data) was also used, a success rate of 80% was observed. See the excerpt below from the Society of Petroleum Engineers paper SPE 134330 (M.C. Vincent, 2010).

[Image: excerpt from SPE 134330 (Vincent, 2010)]

Prepare data systems to store details that would otherwise be left in files.

Measurement-while-drilling (MWD) data, mud log and cuttings analyses, and granular frac data are some of the data types that can be collected without changing the drilling or completion operations workflow or giving up the efficiency already achieved. This information, when acquired in the field, will make its way to petrophysicists and engineers; most likely it ends up in reports, folders and project databases. Many companies do not think about storing this data beyond that.

We argue, however, that to take advantage of this opportunity, archival databases should also be expanded to store this information in a structured manner, and the information should funnel its way to various analytical tools. This practice allows technical experts to dive straight into analyzing the well data instead of diverting a large portion of their time to looking for and piecing data together. Selecting the best re-frac candidates in a field will require the above well data and then some, and many companies are starting to study those opportunities.

Good data practices to consider

To maximize economic success from re-stimulation (or from first stimulation for that matter) consider these steps that are often overlooked:

  1. Prepare archival databases to specifically capture and retain data from the lateral portions of wells. This data may include cuttings analysis, mud log analysis, rock mechanics analysis, rock properties, granular frac data, and well integrity data. (A minimal schema sketch follows this list.)
  2. Don't stop at archiving the data; expose it to engineers and make it readily accessible to statistical and artificial intelligence tools. One such tool is TIBCO Spotfire.
  3. Integrate, integrate, integrate. Engineers depend on ALL data sources (internal, partner, third party, the latest research and media) to find new correlations and possibilities. Analytic platforms that can bring together a variety of data sources and types should be made available; consider Big Data platforms.
  4. Clean, complete and accurate data will integrate well. If you are not there yet, engage a company that will clean the data for you.
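As a minimal sketch of item 1, the snippet below creates a small relational store for per-stage lateral data. The table and column names (frac_stages, cuttings_samples, and so on) are illustrative assumptions, not a standard schema; in practice you would extend your existing archival database rather than create a standalone file.

```python
import sqlite3

# Illustrative schema only: per-stage frac data and per-depth cuttings analysis,
# keyed to the well's unique identifier (UWI).
SCHEMA = """
CREATE TABLE IF NOT EXISTS frac_stages (
    uwi          TEXT NOT NULL,     -- unique well identifier
    stage_no     INTEGER NOT NULL,  -- stage number along the lateral
    top_md_ft    REAL,              -- top measured depth of the stage
    base_md_ft   REAL,              -- base measured depth of the stage
    proppant_lbs REAL,              -- proppant pumped
    fluid_bbl    REAL,              -- fluid pumped
    avg_rate_bpm REAL,              -- average treatment rate
    PRIMARY KEY (uwi, stage_no)
);
CREATE TABLE IF NOT EXISTS cuttings_samples (
    uwi          TEXT NOT NULL,
    sample_md_ft REAL NOT NULL,     -- measured depth of the sample
    toc_wt_pct   REAL,              -- total organic carbon
    brittleness  REAL,              -- brittleness index
    PRIMARY KEY (uwi, sample_md_ft)
);
"""

conn = sqlite3.connect("lateral_archive.db")
conn.executescript(SCHEMA)
conn.execute(
    "INSERT OR REPLACE INTO frac_stages VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("42-123-00001", 1, 11250.0, 11500.0, 250000.0, 5800.0, 85.0),
)
conn.commit()

# Once stored this way, per-stage data can be queried and joined with production
# data instead of being re-keyed by hand from PDF reports.
for row in conn.execute("SELECT uwi, stage_no, proppant_lbs FROM frac_stages"):
    print(row)
conn.close()
```

The design choice that matters is not SQLite versus an enterprise database; it is that each stage and each sample becomes a row that analytical tools can reach, rather than a value buried in a report.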

Quality, granular well data is the cornerstone of increasing re-frac success in horizontal wells, and of other processes as well. Collecting data and managing it well, even if you do not need it immediately, is an exercise in discipline, but it is also a strategic decision that must be made and committed to from the top down, whether you are drilling to "flip" or developing for the long term. Data is your asset.

 

In-hand data, both internal and external, can be the difference between millions of dollars gained and millions of dollars lost

The Eagle Ford… Bakken… Permian… Montney… Booming plays with over 50 active operators each. Each operator brings its own development strategy and its own philosophy. While some operators appear successful in every unconventional play they touch, others always seem to come last to the party, or to miss the party altogether. Why?

Information. With all other things being equal (funding availability, access to geoscience and engineering expertise), the differentiating variable becomes timely access to quality information and understanding what the data is telling you faster than the competition.

“Few if any operators understand how (shale) behaves, why one fracture stage within a well produces 10 times more oil or gas than its neighbor, or how to find sweet spots to overcome inequity.” – Colorado School of Mines' Rhonda Duey

There are over 60 operators in the Eagle Ford alone. Studying the strategy and philosophy of each operator in a play would, and should, yield insight into what works, what does not work, and why. Landing depth, fracking parameters, lateral length, flowback design, and more: all may matter, and all may contribute to better production rates, better ultimate recoveries and better margins. And yes, each play really is unique.

 WHERE TO LOOK?

 A wealth of information from each operator is buried in shareholders’ presentations, their reported regulatory data, and published technical papers. Collecting relevant information and organizing it correctly will enable engineers and geo staff to find those insights. Today, engineers and geologists cannot fully take advantage of this information as it’s not readily consumable and their time is stretched as it is.

We all agree that taking advantage of shale plays is not only about efficiency; it is also about being effective. The fastest and cheapest way to effectiveness is to build on what others have proven to work and avoid what is known not to work.

 Here are some thoughts on how to leverage external data sources to your advantage:

  • Understand the goal of the study from engineers and geoscientists. Optimized lateral completion? Optimized fracking? Reducing drilling costs? All of the above?
  • Implement “big data” technology with a clear vision of the output. This requires integration between data systems to correlate data from various external sources with various internal sources.
  • Not ready to invest in “big data” initiatives or don’t have the time? Outsource information curation (gathering and loading data) for focused studies.
  • Utilize data scientists and analytical tools to find trends in your data, then qualify findings with solid engineering and geoscience understanding. (A minimal sketch of this kind of analysis follows this list.)
  • Consider a consortium among operators to exchange key data not otherwise made available publicly. Once all the acreage in a play is leased, the competition among operators for land is over; shouldn't play data then be shared to maximize recovery from the reservoirs?
  • Build a culture of “complete understanding” by leveraging various sources of external data. 
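As a minimal sketch of the kind of analysis mentioned above, the snippet below groups hypothetical public completion records by operator and compares completion design against early performance. The column names and numbers are made up for illustration; a real study would pull from regulatory filings, commercial databases and internal systems, and would use far more variables.

```python
import pandas as pd

# Hypothetical, simplified extract of publicly reported completion and production data.
external = pd.DataFrame({
    "operator":        ["Op A", "Op A", "Op B", "Op B", "Op C"],
    "lateral_len_ft":  [7500, 8200, 10400, 9800, 6400],
    "proppant_lb_ft":  [1800, 2000, 2400, 2300, 1500],
    "cum_oil_90d_bbl": [38000, 41000, 52000, 50000, 26000],
})

# Compare completion design and early performance by operator.
by_operator = external.groupby("operator").agg(
    wells=("operator", "size"),
    med_lateral_ft=("lateral_len_ft", "median"),
    med_proppant_lb_ft=("proppant_lb_ft", "median"),
    med_cum_oil_90d=("cum_oil_90d_bbl", "median"),
)

# Normalize early oil by lateral length to compare designs more fairly.
by_operator["bbl_per_ft_90d"] = by_operator["med_cum_oil_90d"] / by_operator["med_lateral_ft"]
print(by_operator.sort_values("bbl_per_ft_90d", ascending=False))
```

The findings from a table like this are only hypotheses; as the list above says, they still need to be qualified by engineers and geoscientists who understand the geology behind the numbers.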

Better Capital Allocation With A Rear-View Mirror – Look Back

In front of you are two choices: tie up $100 million with a low return, or overspend by $50 million with no reliable return. Which option do you choose? Neither is acceptable.

“It seemed we were either tying up cash and missing on other opportunities, or overspending where we should not have in the first place,” said a former officer of a US independent. “We heard great stories at presentations from engineers and geoscientists as they were painting the picture to executives to fund their programs. But at the end of the year, the growth was never where we had expected it to be.”

Passing on poor investments through better allocation of capital greatly enhances company performance. To achieve this, executives needed a system to look back and evaluate what each asset team had predicted compared to the actual performance of the asset. They needed a look-back system, where hindsight is always 20/20.

A look-back system is beneficial not only for better capital allocation, but also to identify and understand the reasons for low or high performance of an investment.

Implementing a look-back system is data intensive. The data needed, however, has typically already been collected and stored as part of everyday operations. For example, most companies have an AFE system that captures the predicted economics of well projects, and all companies keep systems to capture production volumes and accounting data for both revenue and costs. The data for evaluating an investment after the fact is, for the most part, already available. The reason executives did not have a look-back system was buried in their processes: in how each asset's economic returns were calculated and allocated.

Here are a few tips to consider when implementing a look-back system for an oil and gas company:

  • Start with the end. Identify the key performance indicators (KPIs) required to measure each asset's performance.
  • Standardize how economics are prepared by each asset team. Only then will you be able to compare apples to apples.
  • Allocate costs and revenue back to each well. Granularity matters and is key; with granularity, mistakes of lumping costs under a wrong category can be avoided and easily rectified.
  • Missing information for the KPIs? Introduce processes to capture and enter that data into the company's systems (historically this information may live in presentation slides and personal spreadsheets).
  • If well information is scattered across systems, data integration will be needed. Well, AFE, production, reserves, and accounting data will need to be correlated. (A minimal sketch of this correlation follows this list.)
  • Automate the generation of information to executives. Engineers and geoscientists should not have to prepare reports for management at the end of each month or quarter; their time is FAR better spent making money and making assets work harder for their investors.
  • Know that it is a change to the culture. Leadership support must be behind the initiative and well communicated to all stakeholders.
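As a minimal sketch of that correlation, the snippet below joins hypothetical per-well AFE predictions with actual costs and recoveries and computes two simple look-back KPIs. The column names (afe_cost_usd, predicted_eur_mboe, and so on) and the numbers are assumptions for illustration; in a real system these extracts would come from the AFE, accounting and production databases.

```python
import pandas as pd

# Hypothetical extracts; in practice these come from the AFE system,
# the accounting system and the production/reserves systems respectively.
afe = pd.DataFrame({
    "uwi": ["42-1", "42-2", "42-3"],
    "afe_cost_usd": [6.5e6, 7.2e6, 5.9e6],
    "predicted_eur_mboe": [450, 520, 390],
})
actuals = pd.DataFrame({
    "uwi": ["42-1", "42-2", "42-3"],
    "actual_cost_usd": [7.4e6, 7.0e6, 6.8e6],
    "actual_eur_mboe": [410, 545, 300],
})

lookback = afe.merge(actuals, on="uwi", how="inner")

# Simple look-back KPIs: cost overrun and EUR delivered versus prediction.
lookback["cost_overrun_pct"] = (
    (lookback["actual_cost_usd"] - lookback["afe_cost_usd"]) / lookback["afe_cost_usd"] * 100
)
lookback["eur_delivery_pct"] = (
    lookback["actual_eur_mboe"] / lookback["predicted_eur_mboe"] * 100
)

print(lookback[["uwi", "cost_overrun_pct", "eur_delivery_pct"]].round(1))
```

Rolled up by asset team, the same two columns show at a glance which teams consistently deliver what they promised, which is exactly the question a look-back system exists to answer.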

“Once we implemented a look-back system, we funded successful teams more and reduced the budget for underperforming assets, then utilized the freed money to grow. We were a better company all around.” – Former Officer of a Large Independent.

Bring It On Sooner & Keep It Lifting Longer. Solutions To Consider For ESPs (Or Any Field Equipment)

Settled on average 6,000 feet below the surface, electrical submersible pumps (a.k.a. ESPs) provide artificial lift for liquid hydrocarbons in more than 130,000 wells worldwide.
Installing the correct ESP system for the well, installing it precisely, and carefully monitoring the system are paramount to reducing the risk of a premature end to an ESP's life cycle. But the increasingly long laterals of horizontal wells, along with rapid drilling in remote areas, are creating challenges for efficient operations and for the ESP's life span. Implementing the correct processes and data strategies will, undoubtedly, be the cheapest and fastest way to overcome some of these challenges.

1- Implement A Process Flow That Works, Break The Barriers

When a decision is made to install an ESP in a well, a series of actions is triggered: preparing specifications, arranging for power, ordering equipment, scheduling operations, testing, and finally installing the pump, to name a few. These actions and decisions involve individuals from multiple departments within an organization as well as external vendors and contractors. This series of actions forms a process flow that is sometimes inefficient and drawn out, causing delays in producing revenue. In addition, processes sometimes fall short, causing premature pump failures that interrupt production and raise operational costs.
Research across many industry processes shows that communication challenges are one of the root causes of delays, according to LMA Consulting Group Inc. Furthermore, communication challenges increase exponentially when actions change hands and departments. A good workflow will cut across departmental barriers and focus on the ultimate goal: making sure engineering, procurement, logistics, accounting, vendors, contractors and field operations are all on the same page and have a simple and direct means to communicate effectively. More importantly, the workflow will allow the team to share the same level of urgency and keep stakeholders well informed with the correct information about their projects. If you are still relying on phones, paper and emails to communicate, look for workflow technology that will bring all parties onto one page. (A minimal sketch of what such a shared view might track follows below.)
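As a minimal sketch only, the structure below shows the kind of shared status view such a workflow tool maintains. The step names, owners and dates are illustrative assumptions, not a specific vendor's product.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class WorkflowStep:
    name: str                      # e.g. "Prepare specifications", "Order equipment"
    owner: str                     # department, vendor or contractor responsible
    due: date
    status: str = "not started"    # "not started" | "in progress" | "done"

# Illustrative ESP-installation workflow visible to all parties.
steps = [
    WorkflowStep("Prepare specifications", "Engineering", date(2015, 3, 2), "done"),
    WorkflowStep("Arrange for power", "Field operations", date(2015, 3, 6), "in progress"),
    WorkflowStep("Order equipment", "Procurement", date(2015, 3, 9)),
    WorkflowStep("Schedule installation", "Logistics", date(2015, 3, 16)),
]

def behind_schedule(steps: list[WorkflowStep], today: date) -> list[WorkflowStep]:
    """Steps past their due date that are not done; every party sees the same list."""
    return [s for s in steps if s.status != "done" and s.due < today]

for step in behind_schedule(steps, today=date(2015, 3, 10)):
    print(f"LATE: {step.name} (owner: {step.owner}, due {step.due}, status: {step.status})")
```

The value is not in the code itself but in the single, shared list: everyone, including the vendor, sees the same late steps at the same time instead of discovering them by phone and email.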

A well-thought-out workflow, coupled with fit-for-purpose technology and data, is critical not only to ensure consistently successful results each time but also to minimize delays in revenue.

2- ESP Rented Or Purchased, It Does Not Matter… QA/QC Should Be Part Of Your Process

Although ESPs are rented and the vendor will switch out non-performing ones, ensuring that the right ESP is being installed for a well should be an important step in the operator's process and procedures. Skipping this step means operators will incur the cost of shutdowns and of disturbing reservoir conditions that might otherwise have stabilized, not to mention exposure to risks each time a well is penetrated.
More importantly, a thoughtful workflow ensures a safe and optimal life span for ESPs regardless of the engineers or vendors involved, especially in this age of mass retirement of knowledge.

At today's oil prices, interrupted production for a well making 1,000 barrels per day will cost an operator at least $250,000 of delayed revenue for a 5-day operation. Predictive and prescriptive analytics, in addition to efficient processes, can keep the interruption to a minimum, if not avoid it altogether.

3- Know Why And How It Failed Then Improve Your Processes – You Need The Data And The Knowledge

One last point in this blog: because ESPs consist of several components (a motor, a pump, a cable, an elastomer, etc.), an ESP failure can be electrical, mechanical, thermal, or related to fluid/gas composition. Capturing and understanding the reasons for a failure in a system that allows for effective data analysis provides insight that can be carried forward to future wells and to monitoring systems. Integrating this knowledge into systems such as predictive, or even prescriptive, analytics to guide new engineers will have an effect on the operator's bottom line. A few vendors in the market offer these kinds of technology; weaving the right technology, data and processes to work in synergy is where the future is.

To learn how to implement these solutions, please contact our team at info@certisinc.com.


What More Can Be Done To Reduce Well Failure & Downtime? Predictive Analytics.

Today’s high oil prices make every production moment crucial and well downtime costlier than ever. When a well fails, money sits underneath the earth’s surface, but you cannot get to it. In addition, you have equipment and a crew draining money out of your pocket while you wait to replace a critical component. Ideally, you wouldn’t wait. You would be ready for equipment failure.

Example: One operator reported that downtime causing an average of 400 bbl per day of production loss is normal practice. If we assume a minimum margin of $50 per bbl, that is more than $7 million of uncaptured revenue in a year. That's a hefty price tag. Oil companies need to ask themselves: "What more can be done? Have all measures been taken to keep downtime to a minimum?"
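A quick back-of-the-envelope check of that figure, assuming the average loss persists over a full year:

```python
# Back-of-the-envelope check of the uncaptured revenue figure quoted above.
avg_loss_bbl_per_day = 400   # average production lost to downtime
margin_usd_per_bbl = 50      # assumed minimum margin
days_per_year = 365

annual_uncaptured_revenue = avg_loss_bbl_per_day * margin_usd_per_bbl * days_per_year
print(f"${annual_uncaptured_revenue:,.0f} per year")  # $7,300,000 per year
```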

With high equipment costs, companies used to balk at owning spare equipment. Today, by contrast, some companies consider backups standard procedure. The trick is striking a balance between stockpiling backups and knowing what you really need ahead of time. I believe this balance can be achieved with "Automated Predictive Analytics".

Predictive analytics compares incoming data from the field to expected or understood behaviors and trends to predict the future. It encompasses a variety of techniques from statistics, modeling and data mining that analyze current and historical facts to make future predictions.

Automated predictive analytics leverages systems to sift through large amounts of data and alert on issues. Automating predictive analytics means you can monitor and address ALL equipment on the critical path on a daily basis, or more frequently if your data permits. Automation also steps up the productivity of your engineers by minimizing the need to search for problem wells; instead, your engineers can focus on addressing them.

If you are not already on the predictive mode, these two cost-effective solutions can get you started on the right path.

1. Collect well and facility failure data (including causes of failure data). Technology, processes and the right training make this happen. There are a few tools available off-the-shelf. You may already have them in house; activate their use.

2. Integrate systems and data, then automate the analysis: expected well models, trends and thresholds need to be integrated with the actual daily data flowing in from the field. "Workflow" automation tools on the market can exceed your expectations when it comes to integration and automating some of the analysis.

Example: One operator in North Dakota reported that 22 of its 150 producing wells in the Bakken had failed within the first two years due to severe scaling in the pump and production tubing. Analytics correlated the rate and timing of failure with transient alkalinity spikes in the water analyses, and the cause was attributed to fracturing-fluid flowback (Journal of Petroleum Technology, March 2012).

In the above example, changes in production and pressure data would trigger the need to check water composition, which could in turn trigger an action for engineers to check the level of scale inhibitor used on the well before the pump fails. This kind of analysis requires data (and systems) integration.
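As a minimal sketch of that kind of automated check, a daily job might scan incoming well data and raise alerts along these lines. The thresholds, field names and values are made-up illustrations, not a vendor's actual workflow logic or the thresholds from the cited case.

```python
from dataclasses import dataclass

@dataclass
class DailyWellReading:
    uwi: str
    oil_bbl: float              # today's oil production
    tubing_pressure_psi: float
    alkalinity_mg_l: float      # from the latest water analysis, if any

# Illustrative thresholds; in practice these come from expected well models and trends.
PRODUCTION_DROP_PCT = 20        # % drop vs. trailing average that triggers a check
ALKALINITY_LIMIT = 400          # mg/L spike that suggests scaling risk

def check_well(today: DailyWellReading, trailing_avg_oil_bbl: float) -> list[str]:
    """Return alert messages for one well based on today's data."""
    alerts = []
    if trailing_avg_oil_bbl > 0:
        drop_pct = (trailing_avg_oil_bbl - today.oil_bbl) / trailing_avg_oil_bbl * 100
        if drop_pct >= PRODUCTION_DROP_PCT:
            alerts.append(f"{today.uwi}: production down {drop_pct:.0f}% vs. trend; "
                          "request a water analysis")
    if today.alkalinity_mg_l >= ALKALINITY_LIMIT:
        alerts.append(f"{today.uwi}: alkalinity spike ({today.alkalinity_mg_l} mg/L); "
                      "verify the scale inhibitor program before the pump fails")
    return alerts

if __name__ == "__main__":
    reading = DailyWellReading("33-053-00001", oil_bbl=310.0,
                               tubing_pressure_psi=950.0, alkalinity_mg_l=520.0)
    for alert in check_well(reading, trailing_avg_oil_bbl=420.0):
        print(alert)
```

Run across every well every morning, simple rules like these are what turn predictive analytics from a slide into a daily work list for engineers.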

One more point on well failure systems: too many equipment failures occur without proper knowledge of what went wrong. The rush to get the well producing again discourages what is sometimes seen as "low-priority research", yet this research could prevent future disruptions. By bringing the data together and using it to its full potential, companies can save money now and for years to come.

More Shale Data Should Equal More Production, Unless Your Data is an Unusable Mess

As the U.S. overtakes Russia in Oil & Gas production because of its unconventional fields, new operators flood the industry. Inevitably, competition increases. The need for predictable and optimized well performance is greater than ever. The fastest route to optimization is quality data that can be used to understand unconventional fields better and drive the flow of operations efficiently.

However, as more data pours in, the cracks in many E&P companies' data management systems are exposed. Geoscientists and engineers are left to make their own integrations and correlations between disparate systems, and left digging through folders trying to find the documents that hold the knowledge they need.

Some of the trouble lies in new methods of analyzing a vast array of data that was not considered as prominent in conventional fields. For example, geoscientists break shale down by geology, geochemistry and geomechanics, and engineers now look into fracs through a microseismic lens. While this data was used in conventional fields before, the emphasis on it and the ways of analyzing it are different now; new parameters such as TOC and brittleness have emerged as key measures. When it comes to shale fields, the industry is still learning from the acquired data.

Well-organized, quality information that is easily found and flows efficiently through internal departments and supplying vendors will not only allow faster reaction to operational needs and opportunities; it will also turn into a better strategy for increasing EUR per well through a better understanding of the reservoirs.

How you take care of your data directly impacts your engineers' and geoscientists' efficiency and the speed at which they can find good production opportunities. Fast and efficient is the name of the game in the competitive world of unconventionals.

It is not enough to provide a place to store new unconventional information and flow it to analytical systems. While those are the first steps, they must fit into a holistic approach that takes integrated unconventional operational processes to the next level of efficiency.

Cut Search Time for Critical Documents from Days to Seconds. It is Time to Stop Digging in Folder Structures

It wasn't long ago that geoscientists and petroleum engineers at one renowned oil company might spend days searching for documents. "Searching" meant digging through folders (as many as 1,500 of them!) and discerning whether a "found" file was an official report or only an earlier draft. To give you an idea, some critical HSE documents were buried as deep as the 13th level of sub-folders (and then the correct version still had to be selected!).

Obviously in this situation emergency and critical decision cycle times were lengthened by the difficulty of finding the “buried” technical documents. The average time to locate and validate the accuracy of a document was calculated at 3 days.

When Certis arrived, the company’s folder system looked like an episode of “Hoarders”. The hoarder believes there is an organized system to his “madness”, but nobody else in the home can quite figure it out. Over the years, over 2,000,000 documents had been amassed at this location, and that total was growing fast. As engineers and geoscientists floated in and out, the system fell victim to hundreds of interpretations. Unlike the hoarder’s goods, these documents contained vital information that accumulated years of studies and billions of dollars of data acquisitions. Years of knowledge, buried, literally.

In today's competitive and fast-paced oil and gas operations, data is accumulating faster than ever, and decisions must be made faster than ever by petro-professionals who are already overextended. Compound that with the fact that a large portion of the knowledge resides in a workforce that may soon retire, and it is clear that oil and gas companies that want to stay exceptional and competitive cannot afford to waste petro-professionals' time hunting for critical records.

So, how do you get to the point where your organization can locate the right document instantly? We believe it is all about the processes, technology and people put in place (a cliché, but so true).

When Certis completed this project, the technical community could locate their documents within a few seconds using a "Google-like" search. More importantly, they were (and still are) able to locate the "latest" version and trust it. The solution had to address three elements: people, processes and technology.

The final solution meant collapsing the folders from 2,000 down to 150, using a DRM system without burdening the technical community, and implementing complete processes with a service element that ensured sustainability.

Centralized, standardized and institutionalized systems and processes were configured to take full advantage of the taxonomy and DRM systems. Once the ease of use and the value were demonstrated to the people, buy-in was easy to get.

Technology advances faster than our ability to keep up. This is especially true when working with professionals whose focus is (and should be!) on their projects, not on data management. We had to break the fear of change by proving there is a better way to work, one that increases efficiency and makes employees' lives easier.

Legacy Documents, what do you do with them?

Because solving operational issues in the field requires access to complete historical information, exhuming technical legacy documents, physical or electronic, from their buried locations was the next task.

On this project the work involved prioritizing, locating, removing duplicates, clustering, and tagging files with standard metadata. With a huge number of files accumulated on network drives and in library rooms, a company must keep an eye on the cost/benefit ratio. How to prioritize and how to tag technical files become two key success factors in designing a cost-effective migration project. (A minimal sketch of the de-duplication and tagging step follows.)
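As a minimal sketch of one of those steps, the snippet below flags exact duplicates by content hash and writes a simple metadata sidecar for each unique file. The folder path and tag fields are illustrative assumptions; a real migration would map tags to the corporate taxonomy and load them into the DRM system rather than leave JSON files on the drive.

```python
import hashlib
import json
from pathlib import Path

ROOT = Path("legacy_drive")   # hypothetical mount of the legacy network drive

def file_hash(path: Path) -> str:
    """SHA-256 of the file contents, read in 1 MB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

seen: dict[str, Path] = {}                 # hash -> first file seen with that content
duplicates: list[tuple[Path, Path]] = []   # (duplicate, original) pairs

for path in ROOT.rglob("*"):
    if not path.is_file() or path.name.endswith(".meta.json"):
        continue
    digest = file_hash(path)
    if digest in seen:
        duplicates.append((path, seen[digest]))   # exact duplicate of an earlier file
        continue
    seen[digest] = path
    # Minimal metadata "sidecar"; real projects map these fields to the corporate taxonomy.
    sidecar = {
        "source_path": str(path),
        "sha256": digest,
        "doc_type": "unclassified",   # e.g. well report, HSE, log, study
        "well_uwi": None,             # filled in during the tagging pass
    }
    path.with_name(path.name + ".meta.json").write_text(json.dumps(sidecar, indent=2))

print(f"{len(seen)} unique files, {len(duplicates)} exact duplicates flagged")
```

Exact-duplicate detection is only the first pass; near-duplicates (drafts versus official reports) still need the clustering and human review described above.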

This topic can go on and on since there were so many details that made this project successful. But that may be for another post.

Read more about Certis and about our oil and gas DRM services http://ow.ly/oRQ5f

$250 Million Oil Take-Over Deal Implodes Due To Disastrous Data Management

As professionals in the oil and gas sector, we all know that when it comes to a merger and acquisition (M&A), having access to quality data is essential. In its absence, deals don't get made, investors lose hundreds of thousands of dollars, and livelihoods are put at risk.

So we were taken aback recently to hear of one deal, involving a public company, which fell through because the organization couldn't even list its complete assets with confidence, such was the mess of its data.

We were talking recently with a CEO who "vented" about a failed acquisition. A major player who has worked in the sector since the mid-1970s, he told us why the $150 million to $250 million investment his company was prepared to make didn't just fall flat, but imploded: "Despite asking this company repeatedly to give us access to their 'complete' data sets, they failed to deliver time and again. We became increasingly frustrated and discouraged, to the extent that we wouldn't even make a proposal in the region of $80 million for the company. What was so galling to us was that it was obvious this company badly needed an investor and had approached us to bid."

We all know what data is needed for M&A investments to happen, some of which we can get from public records and from commercial organizations such as IHS and Drilling Info (in the USA). But those sources alone are not nearly sufficient. So what were they thinking? Did they think the data would take care of itself? Or was someone just not doing his or her job well?

The CEO continued: "… In the past, when companies were under pressure, typically a lot of data got swept under the rug, as it were. Today, though, investors demand tighter regulation of data, and I suspect that, because of this, in ten years' time some companies just aren't going to make it. If our company had been allowed to invest and take over, we could have solved many of the organization's problems, saved some jobs and even added value. Sadly, in this case, due to poor management of critical data, that scenario was never allowed to take place. The deal never even got past the first hurdle. No one is going to invest millions when they don't have a clue about, or confidence in the data behind, what they're buying."

Considering this was a company with a responsibility for public money, the management team should never have been allowed free rein without critical data management regulations, or at the very least "guidelines".

What is your opinion?