Assisted History Matching Workflows
Once the reservoir engineer has access to HPC facilities, they will be in a better position to use Assisted History Matching (AHM) technology, which usually requires performing large numbers of runs to improve the dynamic model's history match while assessing multiple parameters and their ranges.
In general, the recommended workflow for each AHM technology depends on the tool used and its ability to link the static and dynamic models to the objectives, so that the identified variables can be assessed in parallel as far as possible.
In this article, I'm going to present three key workflows that can be implemented efficiently to enhance the dynamic model's history match:
(A) Use the static model to generate multi-realization parameters (e.g. permeability, porosity, RRTs, structure, Kv/Kh, etc.), which are exported once to the dynamic model. This process usually needs large numbers of parameter realizations to cover all possible trends. Then use Latin Hypercube sampling to perform many runs combining all target static and dynamic variables over their possible ranges, and pick the case that matches the observed data most closely. The history-match case with the minimum objective-function error becomes the reference case for further optimization.
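The Latin Hypercube step in (A) can be sketched in pure Python. Everything below is illustrative: the two parameters, their ranges, and the one-line "simulator" inside `objective` are assumptions standing in for the real dynamic model.

```python
import random

def latin_hypercube(n_runs, bounds, seed=0):
    """Latin Hypercube sample: one point per equal-width stratum in each
    dimension, with the strata shuffled independently per parameter."""
    rng = random.Random(seed)
    dims = len(bounds)
    samples = [[0.0] * dims for _ in range(n_runs)]
    for d, (lo, hi) in enumerate(bounds):
        # one random point inside each of n_runs strata of [0, 1)
        points = [(i + rng.random()) / n_runs for i in range(n_runs)]
        rng.shuffle(points)
        for i in range(n_runs):
            samples[i][d] = lo + points[i] * (hi - lo)
    return samples

def objective(params, observed=1.0):
    """Toy misfit: squared error of a stand-in 'simulator' response
    against observed data (hypothetical, for illustration only)."""
    kv_kh, perm_mult = params
    simulated = kv_kh * perm_mult  # placeholder for a real simulation run
    return (simulated - observed) ** 2

bounds = [(0.01, 0.5),  # Kv/Kh ratio range (assumed)
          (0.5, 3.0)]   # permeability multiplier range (assumed)
runs = latin_hypercube(200, bounds)
best_case = min(runs, key=objective)  # reference case for optimization
```

The stratification is what distinguishes this from plain Monte Carlo sampling: each parameter range is covered evenly even with a modest run count.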
(B) Use the dynamic and static models to generate a single set of input parameter variables, and perform a history match. Once the results are calculated, use the Assisted History Matching (AHM) tool to redefine, redesign, and redistribute the parameters in the 3D static model based on the objective function, then resubmit them to the dynamic model. This process requires fewer variable runs in each iteration, but the static model must be kept active throughout.
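A minimal sketch of the iterative loop in (B), assuming a single parameter and a toy misfit function in place of the real simulator; in a real workflow the AHM tool would redistribute the parameter in the 3D static model between iterations:

```python
import random

def iterate_ahm(misfit, bounds, n_iter=5, n_per_iter=10, shrink=0.5, seed=0):
    """Sketch of workflow (B): sample, evaluate the objective function,
    keep the best case, narrow the search range around it, and resubmit.
    The shrink factor and iteration counts are illustrative choices."""
    rng = random.Random(seed)
    lo, hi = bounds
    best_x, best_f = None, float("inf")
    for _ in range(n_iter):
        for _ in range(n_per_iter):
            x = rng.uniform(lo, hi)   # stand-in for one simulation run
            f = misfit(x)
            if f < best_f:
                best_x, best_f = x, f
        # redistribute: center a narrower range on the current best case
        half = (hi - lo) * shrink / 2.0
        lo = max(bounds[0], best_x - half)
        hi = min(bounds[1], best_x + half)
    return best_x, best_f

# Toy misfit whose minimum sits at an assumed "true" value of 0.3.
best_x, best_f = iterate_ahm(lambda x: (x - 0.3) ** 2, (0.0, 1.0))
```

Compared with workflow (A), each pass uses far fewer runs, at the cost of keeping the loop (and the static model) alive across iterations.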
(C) Use Python, which gives reservoir engineers more flexibility and advanced options to define and adjust parameters through logic functions.
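For example, a hypothetical logic function of the kind an RE might script; the rock types, depth threshold, and multiplier values are all assumed for illustration:

```python
def perm_multiplier(rrt, depth_m, base=1.0):
    """Hypothetical logic function: adjust a permeability multiplier
    per reservoir rock type (RRT) and cell depth. Thresholds and
    factors are illustrative, not taken from any real model."""
    if rrt == 1:                        # best-quality rock: keep base value
        return base
    if rrt == 2 and depth_m > 2500.0:   # damp deep, mid-quality rock harder
        return base * 0.6
    return base * 0.8                   # default damping for all other cells
```

Encoding rules like these in code, rather than in fixed per-region tables, is what makes the adjustments easy to audit and to vary between AHM iterations.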
Comments
In short, the mathematical methods, algorithms, and commercial tools have been around for over 20 years; they do not depend on recent machine-learning advances or languages. However, the practical issues, and dissemination amongst the community, are much more difficult.
Throughout, an important consideration is whether the dynamic variables make sense for each static model. If you are doing regionalisation and relative-permeability changes per region, the regions may be different for each static model, or a region may not exist in some static models. What then? Maybe the only way is to do independent history matches for a small number of distinct static models. But then how do you build a proper probabilistic ensemble?
In the article, I don't understand the distinctions between the three workflows (and it is nothing to do with Python). I have gone through various stages in the static vs. dynamic concepts, and implemented them for clients over 10 years ago. 1) Link the AHM tool directly to the Petrel workflow, and generate static models on the fly using input parameters generated by the AHM tool. This is sometimes called Big Loop (many, including me, have claimed naming it). This kind of Big Loop does not work; the static models can be garbage. 2) Generate a set of 81 (say) static models, based on a high/mid/low (H/M/L) set of 4 parameters. These can then be validated by the geologist and used as input to the AHM. These static models are also ordered, so a proxy model can easily be applied. 3) Generate some set of 'possible' geological static models, which can be unordered. Use a single parameter to select among them during the AHM, and automatically sort them at each chosen history-match point. This automatic sorting is like a travelling-salesman problem, which is not so easy. I did this for the Olympus optimisation challenge.
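The 81-model figure in point 2 is just the 3-level, 4-parameter enumeration (3^4 = 81); a quick sketch, with illustrative parameter names:

```python
from itertools import product

levels = ("L", "M", "H")  # low / mid / high value per parameter
# Parameter names are purely illustrative, not from the comment.
params = ("NTG", "perm_trend", "OWC", "aquifer_strength")
scenarios = list(product(levels, repeat=len(params)))
print(len(scenarios))  # 3**4 = 81 ordered static-model scenarios
```

Because `product` emits the combinations in a fixed lexicographic order, the resulting model set is ordered, which is what makes a proxy model straightforward to fit.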