Curve 3.1.0 Introduction – What is special about Curve?
[Figure: Curve 3.1.0 main screen]

Do you remember how often you have tried to overlay your core description on top of your logs or LAS depth plot, only to see little or nothing in common? The relationship between the core data and the log data simply wasn't apparent.

Apart from gamma ray logs, which can have a vertical resolution as fine as 10 cm (0.1 m), many porosity logs have a vertical resolution of 60 cm or worse. Electric logs are often a lot more frustrating and may sometimes not even resolve features on a meter scale. Dipmeters, on the other hand, measure features as thin as a bedding contact. This is partly due to the tools involved and their vintage, but it is also related to the fact that wireline logs are recorded at up to 10 samples per meter (STEP = 0.1 m). The logging values are measured against depth in increments of 10 cm (0.1 m) or half a foot (0.5 ft = 0.1524 m).

Dept = start depth for the first sample; each following sample is Dept = Dept + depth increment, and so on. To crossplot these measurements, they must come from data sampled at the same depth increments. How, then, can you plot core porosity (CPOR) versus density (RHOB)? You plot a core porosity curve (CPOR) that has been resampled to the same depth increments, and depth shifted, against RHOB at that same depth increment. The resampling to the depth increments of the density curve is required because conventional core analysis data is measured not at regular STEPs (depth increments) but at irregular depths. To make matters worse, the data points of the logging curve are often a moving average over 10 measurements at STEP depth increments. Are you dizzy yet?
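
To make those mechanics concrete, here is a minimal sketch, assuming plain numpy: it resamples irregularly spaced core-plug porosity onto a log's regular depth grid, applies a depth shift, and mimics the 10-sample moving average. The depths, porosities, and the 0.25 m shift are hypothetical illustration values, not CURVE's internal routine.

```python
# Minimal sketch (not CURVE's code): put irregular core-plug porosity onto a
# log's regular depth grid, with a core-to-log depth shift, and mimic a
# 10-sample moving average. All values below are hypothetical.
import numpy as np

def resample_core_to_log(core_depth, core_por, log_depth, depth_shift=0.0):
    """Linearly interpolate irregular core data onto the regular log grid."""
    shifted = np.asarray(core_depth) + depth_shift   # core-to-log depth shift
    order = np.argsort(shifted)
    return np.interp(log_depth, shifted[order], np.asarray(core_por)[order],
                     left=np.nan, right=np.nan)      # NaN outside cored span

def moving_average(curve, n=10):
    """Running mean over n samples, as many logging curves effectively are."""
    return np.convolve(curve, np.ones(n) / n, mode="same")

# A log sampled at STEP = 0.1 m: Dept = start depth + i * depth increment
log_depth = 2500.0 + 0.1 * np.arange(501)                  # 2500-2550 m
core_depth = np.array([2510.3, 2510.9, 2511.8, 2513.2])    # irregular plugs
core_por   = np.array([0.071, 0.095, 0.088, 0.064])
cpor = resample_core_to_log(core_depth, core_por, log_depth, depth_shift=0.25)
```

Once CPOR sits on the same depth grid as RHOB, the crossplot is a simple one-to-one pairing of samples.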

So, a core description feature measured at 10 cm steps is difficult to compare with a wireline log response that is a moving average over 10 depth increments of 10 cm each, recorded by a tool with a vertical resolution of 10 cm or much worse, against the detailed 'resolution' of a core feature. Comparing a 1 cm thick depositional facies observed in core with a petrophysical facies derived from wireline logs is a difficult process. Curve 3.0 can shed some light on these issues, but it still requires a lot of experience on the part of the interpreter. We deal with two different approaches: our specialized core descriptions with depositional facies, and grouping wireline log responses into petro-facies.
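
As one illustration of that second approach, petro-facies are often derived by unsupervised clustering of log responses. The sketch below is a generic k-means example, not CURVE's grouping method; the curve mnemonics (GR, RHOB, NPHI) and the choice of four facies are assumptions.

```python
# Hedged sketch: petro-facies by clustering standardized log responses.
# A generic k-means example, not CURVE's method; the input mnemonics and
# n_facies=4 are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def petro_facies(gr, rhob, nphi, n_facies=4, seed=0):
    """Return one integer facies label per depth sample (-1 where data gaps)."""
    X = np.column_stack([gr, rhob, nphi])
    ok = ~np.isnan(X).any(axis=1)              # cluster complete samples only
    labels = np.full(len(X), -1)
    Xs = StandardScaler().fit_transform(X[ok])
    labels[ok] = KMeans(n_clusters=n_facies, n_init=10,
                        random_state=seed).fit_predict(Xs)
    return labels
```

Whether such statistically defined petro-facies line up with the depositional facies seen in core is exactly the comparison an experienced interpreter still has to judge.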

We also provide curves of various rock compositional data, such as clay or Vclay curves determined from observations through the cuttings microscope. How precise do you think a shale volume curve (VSH, really a clay volume) derived from wireline log data is, or a grain size curve? Especially considering the amount of silt embedded in the shale. You need to know the shale/clay ratio for a specific formation or stratigraphic interval in a particular area. Do you really think that an empirical relationship derived for some shales in the Gulf of Mexico, such as the Clavier curve, is valid for the Montney in the Deep Basin? Does it come even close to the truth (whatever that is)? Well, neither Clavier nor the VSH derived from a cuttings microscope comes close. That is why a third source of detailed information is required: point-counted thin section petrography. The latter can provide great insight into understanding your reservoir. Our core descriptions are set up to integrate thin section observations with log response. If you measure grain size using a cuttings microscope, how difficult is it to differentiate between very fine sand, coarse silt, fine silt, and/or clay? That is why so many wellsite geologists cannot differentiate between sand, silt, and shale. We incorporate our TS point-counted data in our core descriptions to better calibrate the lithology description from the cuttings microscope AND to analyze the thin section point-counted trends in terms of log response and reservoir quality trends.
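
For reference, the Clavier relation questioned above is commonly written as VSH = 1.7 - sqrt(3.38 - (IGR + 0.7)^2), where IGR is the gamma ray index. The sketch below is a generic implementation of that published formula, not CURVE's calculation; the baseline picks are yours to make per interval.

```python
# The commonly cited Clavier (1971) shale-volume relation; a generic
# reference implementation, not CURVE's calculation. gr_min and gr_max are
# the clean-sand and shale gamma ray baselines picked for the interval.
import numpy as np

def vsh_clavier(gr, gr_min, gr_max):
    igr = np.clip((np.asarray(gr) - gr_min) / (gr_max - gr_min), 0.0, 1.0)
    return 1.7 - np.sqrt(3.38 - (igr + 0.7) ** 2)   # 0 at IGR=0, 1 at IGR=1
```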

Curve 3.1.0 is designed to optimize the relationships between core and wireline log data. And there is a whole slew of data, such as the lateral and vertical distribution of water saturation, that many use to determine original oil in place and remaining reserves; the latter is one hotchpotch of numbers and confusion that is also very difficult to estimate. We then use capillary pressure data to better constrain these numbers, for instance by estimating SW from capillary pressure data, reservoir quality, and height above free water.
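
To make that last point concrete: height above the free water level converts to capillary pressure through the fluid density contrast, Pc = (rho_w - rho_hc) * g * h, and a saturation-height function can then be applied. The Brooks-Corey form below is one common choice, not necessarily CURVE's model, and every parameter value is an illustrative assumption.

```python
# Hedged sketch of a saturation-height calculation. The Brooks-Corey model
# and all parameter values are illustrative assumptions, not CURVE defaults.
import numpy as np

G = 9.81  # gravity, m/s^2

def pc_from_height(h_m, rho_w=1050.0, rho_hc=800.0):
    """Capillary pressure (Pa) from height above free water (m):
    Pc = (rho_w - rho_hc) * g * h."""
    return (rho_w - rho_hc) * G * np.asarray(h_m, dtype=float)

def sw_brooks_corey(pc, pd=20e3, lam=1.5, swir=0.15):
    """Brooks-Corey: Sw = Swir + (1 - Swir) * (Pd / Pc)**lam above entry Pd."""
    pc = np.asarray(pc, dtype=float)
    se = (pd / np.maximum(pc, pd)) ** lam    # Se = 1 below entry pressure
    return swir + (1.0 - swir) * se

# Water saturation 1 m, 10 m, and 50 m above the free water level
sw = sw_brooks_corey(pc_from_height([1.0, 10.0, 50.0]))
```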

Every reservoir engineer can tell you that your reservoir fluid properties change as soon as one starts to produce from that reservoir; how, then, can you compare water saturation from logs recorded at pool discovery with logs recorded from infill wells ten or twenty years later? With Curve we are trying to address these issues. Remember Numac, an oil producer in Alberta's past? It went broke because it wanted to lower its reserve estimates by 30% (god knows based on what). Or, some ten years ago, Shell International wanted to do a large reserves write-down, and everyone was up in arms. Curve 3.1.0 is focused on getting the best numbers, but even with that, many problems and questions remain.

Curve 3.1.0 is the best tool to learn to understand your reservoir and to highlight where reservoir occurs. To be honest, if you think you can estimate your reservoir fluid distribution by measuring your horizontal well project with a single gamma ray log and throwing the cuttings away, a very nasty surprise is waiting for your company in the not too distant future.

If you think about connecting core with wireline logging data, then what about data consistency when working with data from different LAS data providers, data sources, tool vintages and tool types, data ownership, and much more? This is a data hornet's nest of an entirely different kind, which may impact your corporation more than you think possible. Curve provides you the workstream and tools to optimize your data quality. Your corporate data management may prove to be the key to your future success. Curve includes tools to keep track of what is yours and what is theirs (the data provider's).
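
As a small illustration of the consistency problem, the sketch below uses the open-source lasio library to pull basic header values (STRT, STOP, STEP, NULL, depth unit, service company) out of LAS files so they can be compared across vendors. It is a generic check, not CURVE's QC workstream, and the file names are hypothetical.

```python
# Generic LAS header consistency check with the open-source lasio library;
# illustrates the vendor-consistency problem, not CURVE's QC tooling.
# The file names are hypothetical.
import lasio

def header_summary(path):
    las = lasio.read(path)
    strt = las.well["STRT"]
    try:
        service = las.well["SRVC"].value       # logging company, if present
    except KeyError:
        service = None
    return {
        "file": path,
        "strt": strt.value,
        "stop": las.well["STOP"].value,
        "step": las.well["STEP"].value,        # vendors may differ here
        "null": las.well["NULL"].value,        # -999.25 vs -9999, etc.
        "depth_unit": strt.unit,               # 'M' vs 'F' across vendors
        "service": service,
    }

for f in ["vendor_a.las", "vendor_b.las"]:     # hypothetical files
    print(header_summary(f))
```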

Our help files comprise a series of articles discussing these issues and how Curve may help you define your reservoir. We are considering using a series of AI routines to generate supplemental data to better evaluate incomplete data sets, rather than to generate black-box outcomes. AI won't solve your reservoir models; for that we have CURVE's built-in methods (scripts and command-line calculations), with the resulting data saved in LAS files that you can easily import into your mapping software, and in CAS files (our own methodology for presenting core data).
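
As a sketch of that LAS round trip, again assuming the lasio library: the snippet appends a computed curve (here using the vsh_clavier function from the earlier sketch) to a LAS file so your mapping software can import it. The file names, baselines, and the mnemonic 'VSHCL' are hypothetical.

```python
# Minimal sketch, assuming lasio: save a computed curve back to LAS for
# import into mapping software. File names and the 'VSHCL' mnemonic are
# hypothetical; vsh_clavier() is the function from the earlier sketch.
import lasio

las = lasio.read("vendor_a.las")                     # hypothetical input
vsh = vsh_clavier(las["GR"], gr_min=20.0, gr_max=120.0)
las.append_curve("VSHCL", vsh, unit="v/v",
                 descr="Clavier shale volume (illustrative)")
las.write("vendor_a_vsh.las", version=2)
```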

Finally, CURVE 3.1.0 is a work in progress. We need your help to find bugs and to develop future tools for reservoir evaluation. Never forget that we only work with data models, some more and some less appropriate. A model is nothing more than a model; it is as good as we can make it. Never trust a model, and always look for improvements. That is why we work on an annual subscription basis, which helps pay for debugging and improving our models. Your suggestions will be an important part of this.
