Dealing with the massive influx of information gathered from exploration projects and real-time gauges at established fields is pushing the traditional data-management architecture in the oil and gas industry to its limits. More sensors, whether from 4-D seismic or from fiber optics in wells, widen the gap between advances in data capture and the traditional ways of managing and analysing data. It is this challenge, managing the sheer volume of collected data and sifting through it in a timely fashion, that Big Data technologies promise to help us solve. This was just one of the suggestions on the table at the Data Management Workshop I attended in Turkey earlier this month.
For me, one of the main issues with the whole Big Data concept in the oil and gas industry is that, while it sounds promising, it has yet to deliver the tangible returns companies need to see in order to prove its worth. To overcome this dilemma, Big Data vendors such as Teradata, Oracle, and IBM should consider demonstrating concrete new examples of real-life oil and gas wins. By new, I mean challenges that cannot be solved with traditional data architecture and tools. Vendors should also offer Big Data technology at a price that makes it viable for companies to “try” it and experiment.
The oil and gas industry is notoriously slow to adopt new software technology, particularly anything that tries to take the place of traditional methods that already work, unless its value is immediately apparent. To quote my good friend: “we operate with fat margins; we don’t feel the urgency”. However, E&P companies should put their creative hats on and work alongside Big Data technology vendors. Big Data may just be the breakthrough we need to make a tangible step-change in how we consume and analyse subsurface and surface data with agility.
If either side, vendors or E&P companies, fails to deliver, Big Data becomes a commercial white elephant and is doomed to very slow adoption.
At the workshop, Oracle, Teradata, and IBM all showed interesting tools. However, they drew their examples from other industries and occasionally referred to problems that conventional data technology can already solve. They left the audience still wondering!
One Big Data example that is relevant and hits home was presented by CGG. CGG used pattern recognition (on Teradata technology) to find all logs that exhibit a specific pattern a petrophysicist may be interested in. This type of analysis requires scanning through millions of log curves, not just the metadata that traditional architectures have bound us to. It opens up new horizons for serendipity and, who knows, maybe for new discoveries.
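To make that concrete, here is a minimal Python sketch of what such a curve-pattern search might look like. It is purely illustrative: CGG’s actual Teradata implementation is not public, the normalised cross-correlation matcher is an assumption on my part, and names like `find_similar_logs` are hypothetical.

```python
# Illustrative sketch only: a naive pattern search over well log curves using
# normalised cross-correlation. Not CGG's implementation; names are hypothetical.
import numpy as np

def normalise(x: np.ndarray) -> np.ndarray:
    """Zero-mean, unit-variance scaling so curves of different magnitude compare."""
    return (x - x.mean()) / (x.std() + 1e-9)

def best_match_score(curve: np.ndarray, template: np.ndarray) -> float:
    """Slide the template along the curve and return the highest correlation found."""
    t = normalise(template)
    n = len(t)
    best = -1.0
    for start in range(0, len(curve) - n + 1):
        window = normalise(curve[start:start + n])
        score = float(np.dot(window, t) / n)  # Pearson-like similarity in [-1, 1]
        best = max(best, score)
    return best

def find_similar_logs(logs: dict[str, np.ndarray],
                      template: np.ndarray,
                      threshold: float = 0.9) -> list[str]:
    """Return the IDs of log curves containing a segment similar to the template."""
    return [log_id for log_id, curve in logs.items()
            if len(curve) >= len(template)
            and best_match_score(curve, template) >= threshold]
```

At the scale CGG describes, millions of curves, the matching would be pushed down into the database or parallelised rather than looped in Python, but the idea is the same: search the curve data itself, not just its metadata.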
I’d say ignore Big Data in oil and gas at your peril. Hospitals can use it to predict which patients will become infected, and hotels can use it to improve their efficiency; the examples go on and on.
How could oil companies NOT see the value in data analytics? They are more data-driven than many businesses; indeed, they tend to be drowning in data. This alone is probably enough to justify getting serious about Data Management, because you can’t really perform much in the way of data analytics if your data is biased, incomplete, or wrong, or if you can’t find it 🙂
Don’t think of Big Data as big simply in terms of the number of bytes; seismic, for all its volume, is not “big” in that sense. Think of big as gathering complex, multiple sources of data and extracting intelligence from them through analytics. So maybe vendors who talk about the sheer volume of oil and gas data are missing the point?
Maybe the majors are already doing it, but keeping it private. Gathering the decline curves for all their wells, analysing them, and using them to predict the performance of new wells probabilistically? Gathering equipment and vibration data and optimising maintenance plans?
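For the decline-curve idea, here is a minimal Python sketch, assuming monthly production rates are already available as arrays. The Arps hyperbolic decline model and the analogue-well percentile forecast are standard textbook techniques, not any particular operator’s method, and all function names are hypothetical.

```python
# Illustrative sketch only: fit an Arps hyperbolic decline curve to each well's
# history, then use the spread of fitted parameters across analogue wells to give
# a probabilistic (P10/P50/P90) forecast for a new well. Names are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def arps_hyperbolic(t, qi, di, b):
    """Arps hyperbolic decline: production rate as a function of time (months)."""
    return qi / np.power(1.0 + b * di * t, 1.0 / b)

def fit_decline(t: np.ndarray, q: np.ndarray) -> tuple:
    """Fit (qi, di, b) to one well's observed rates."""
    p0 = (q[0], 0.1, 0.5)                            # rough initial guess
    bounds = ([0, 1e-4, 0.01], [np.inf, 2.0, 2.0])   # keep parameters physical
    params, _ = curve_fit(arps_hyperbolic, t, q, p0=p0, bounds=bounds)
    return tuple(params)

def forecast_percentiles(fitted_params: list, months: int = 120) -> np.ndarray:
    """Use fitted parameters from analogue wells to get P10/P50/P90 cumulative volumes."""
    t = np.arange(months)
    cums = [arps_hyperbolic(t, *p).sum() for p in fitted_params]
    return np.percentile(cums, [90, 50, 10])  # P10 = optimistic, P90 = conservative
```

The same pattern, fitting a simple physical model per asset and learning from the population of fits, would apply to the equipment-vibration and maintenance example as well.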
To analyse data, you first have to have it managed.