Oracle EPM - Leveraging Python to navigate artifact relationships

One of the most frustrating things to deal with in EPM Cloud is the web of relationships that can exist between different artifacts. Between rules, forms, dashboards, infolets, and reports, there are a lot of artifacts that can be referenced by others. The toolset does have some nice integrity checks to make sure you don't delete certain artifacts that are referenced by others, but nowhere that I have seen lets you just spit out a report of what is being actively referenced. Now, I don't have a perfect solution to this yet, but I wanted to share some of the things I do know!

First is the easiest one: rules. The platform does have a system report that allows you to see where rules are being referenced by other objects:

[Image: navigating to the rules usage system report]

This page allows you to spit out a report showing which rules are referenced by forms or rulesets, which is a good start.

Next are the more complicated ones that require some Python. Navigation Flows recently had an update that lets you see whether there is a broken reference just from looking at the nav structure, but as far as I can tell you cannot generate a report that tells you every artifact referenced in a nav. So, I built a Python script to do so. First, download the nav you want to analyze (Migrate > Core > Global Artifacts > Navigation Flows, then export). Next, extract the zip file. Finally, copy the Navigation Flows directory filepath and use it in the Python script below:

import os
import xml.etree.ElementTree as ET

topfolder = 'C:/FileLocation/LCMName/AppName/resource/Global Artifacts/Navigation Flows'

for subdir, dirs, files in os.walk(topfolder):
    for file in files:
        filepath = os.path.join(subdir, file)
        if filepath.endswith(".xml"):
            print("Generating list of base artifacts referenced in " + filepath)
            print("Artifact Type:\tArtifact Name:")
            tree = ET.parse(filepath)
            # Default so the name check below never hits an unset variable
            artifactType = "N/A"
            for elem in tree.iter():
                refId = elem.attrib.get('refObjectDefId', '')
                if refId == 'FORMS_RT_TF':
                    artifactType = "Form"
                elif refId == 'DASHBOARDS_RT_TF':
                    artifactType = "Dashboard"
                elif refId == 'FR_REPORTS_TF':
                    artifactType = "FR Report"
                elif refId:
                    artifactType = "N/A"
                if artifactType != "N/A" and elem.attrib.get('artifactName'):
                    print(artifactType + "\t" + elem.attrib['artifactName'])

Now, unfortunately, due to how Oracle encodes the XML file, we have to do some manual manipulation before Python can read it. If you open the XML manually, you will see something like this:

[Image: exported Navigation Flow XML showing the CDATA wrapper and a second XML declaration]

This CDATA construction prevents the Python script from parsing the file. The best solution I've found is honestly to just remove the CDATA tags by hand. Delete the <![CDATA[ opening tag, then find its ]]> closing tag farther down the file and delete that too. Depending on the size of your nav, it may be broken up into multiple CDATA sections, so repeat until the tags are gone. Also, make sure you only have one XML declaration at the top of the file; as you can see above, there will be at least two. Once the XML is cleaned up, you can run the above script and get a somewhat usable output like this:
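If you do this often, the manual cleanup can also be scripted. Here is a minimal sketch of a helper that strips the CDATA wrappers and drops any XML declarations after the first one; it assumes the CDATA payload is itself well-formed XML (which is what I've seen in these exports), so treat it as a starting point rather than a guaranteed fix:

```python
import re

def clean_nav_xml(text):
    """Strip CDATA wrappers and duplicate XML declarations so
    ElementTree can parse an exported Navigation Flow file."""
    # Remove every <![CDATA[ ... ]]> wrapper, keeping the inner content
    text = text.replace('<![CDATA[', '').replace(']]>', '')
    # Keep the first <?xml ...?> declaration, drop any later ones
    m = re.search(r'<\?xml[^?]*\?>', text)
    if m:
        head = text[:m.end()]
        tail = re.sub(r'<\?xml[^?]*\?>', '', text[m.end():])
        text = head + tail
    return text
```

You would read the exported file into a string, pass it through this helper, and feed the result to ET.fromstring instead of ET.parse.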

[Image: sample output from the Navigation Flow script]

Now, the environment I currently work in only mounts forms, dashboards, and reports in navs, so if you need more artifact types you can expand the logic to account for them.
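One way to make that expansion easier is to swap the if/elif chain for a lookup table. Only the three IDs used in the script above are ones I can confirm; any further refObjectDefId values are something you would need to verify against your own exported XML before adding them:

```python
# Confirmed IDs from the Navigation Flow script above; extend this dict
# with additional refObjectDefId values once you spot them in your own export.
ARTIFACT_TYPES = {
    'FORMS_RT_TF': 'Form',
    'DASHBOARDS_RT_TF': 'Dashboard',
    'FR_REPORTS_TF': 'FR Report',
}

def classify(ref_id):
    """Map an Oracle refObjectDefId to a friendly name (None if unknown)."""
    return ARTIFACT_TYPES.get(ref_id)
```

Inside the parsing loop you would then call classify(elem.attrib.get('refObjectDefId', '')) instead of walking the elif chain.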

The Navigation Flows process was messy, but seeing which artifacts are used in Dashboards is way easier! This is mainly because dashboards are stored as raw XML and don't use the annoying CDATA construction. My current environment only uses the original Dashboards, so I'm not sure how the definition differs for Dashboards 2.0, but if you use the OGs you can leverage this simple script to spit out a list of all the forms used in Dashboards:

import os
import xml.etree.ElementTree as ET

topfolder = 'C:/FileLocation/LCMName/AppName/resource/Global Artifacts/Dashboards'

for subdir, dirs, files in os.walk(topfolder):
    for file in files:
        filepath = os.path.join(subdir, file)
        if filepath.endswith(".xml"):
            print("Generating list of base artifacts referenced in " + filepath)
            tree = ET.parse(filepath)
            for elem in tree.iter():
                # Each <block> element carries the name of the form it mounts
                if elem.tag == "block" and elem.attrib.get('name'):
                    print(elem.attrib['name'])

And there you have it! These simple Python scripts can make identifying used artifacts way easier. You can dump the output into Excel and do some lookups to figure out which artifacts are needed. This is primarily useful if you have a large, mature environment that has gone through lots of iterations and you want to do some artifact cleanup, but don't want to delete anything that still has references to it.
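To smooth out the Excel step, the print calls in either script could instead collect (type, name) pairs and write them to a tab-separated file that Excel opens directly. A minimal sketch; the filename and the sample rows here are purely illustrative, not from a real export:

```python
import csv

# In practice these pairs would be collected by the scripts above;
# the two rows here are made-up examples for illustration only.
rows = [
    ('Form', 'Revenue Input'),
    ('Dashboard', 'Exec Summary'),
]

with open('referenced_artifacts.tsv', 'w', newline='') as f:
    writer = csv.writer(f, delimiter='\t')
    writer.writerow(['Artifact Type', 'Artifact Name'])
    writer.writerows(rows)
```

From there, a VLOOKUP or COUNTIF against your full artifact list tells you which artifacts are safe to retire.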
