How to: Analyze the criticality of water mains using ArcGIS Utility Network
You might be wondering how to execute the analysis that I briefly described in the article “Analyze the criticality of water mains using ArcGIS Utility Network”. In this post, I will share some pointers on how to do this.
The trace framework of the Utility Network offers near-endless possibilities for analyzing the connectivity and traversability of your network. In my previous post I compared connectivity to proximity, but to be more precise, it isn't connectivity but traversability that we are analyzing. To learn more about tracing and the difference between connectivity and traversability, I encourage you to read the following post by Remi Myers, Jon DeRose, Derek J Nelson, and Robert Krisher:
The script is built around two geoprocessing tools: Add Trace Locations and Trace. These two tools are executed inside a loop over the water mains so that each main is processed. For more information on these tools, please refer to the documentation on AddTraceLocations and Trace.
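To make the structure concrete, here is a minimal sketch of the loop, assuming a layer named “Water Main” and two hypothetical wrapper functions for the tools (both are expanded in the snippets further down):

import arcpy

# Sketch of the per-main loop; the layer name and the wrapper functions
# are assumptions, not the exact code from the script.
with arcpy.da.SearchCursor("Water Main", ["GLOBALID"]) as cursor:
    for (globalid,) in cursor:
        add_trace_location(globalid)    # wraps AddTraceLocations (see below)
        run_isolation_trace(globalid)   # wraps Trace (see below)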
The AddTraceLocations tool offers an easy way to define a starting point at the center of a water main. It takes a handful of parameters; for the trace locations you can use something like:
trace_loc = f"'Water Main' {globalid} # 0.5"
This uses the water main layer, identifies the feature to process by its Global ID, skips the terminal setting (the "#"), and places a point at 50 percent along the line.
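Putting this together, a hedged sketch of the call (parameter names follow the AddTraceLocations documentation; un and out_fc are assumed variables holding the utility network path and the trace locations feature class):

def add_trace_location(globalid):
    # Place a trace location halfway along the given water main.
    trace_loc = f"'Water Main' {globalid} # 0.5"
    arcpy.un.AddTraceLocations(
        in_utility_network=un,
        out_feature_class=out_fc,
        clear_trace_locations="CLEAR_LOCATIONS",  # start fresh for each main
        trace_locations=trace_loc,
    )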
To configure the trace, it is easiest to set it up manually in ArcGIS Pro and copy the Python snippet once you are happy with the result. For this analysis, I configured the output asset types to include only the different service connections and service meters. In the result types, make sure to include the “ELEMENTS” option; this enables the creation of a JSON file with the affected services. When running an isolation trace, make sure to configure the filter barriers to stop the trace when the Category attribute equals the specific value “Isolating”:
filter_barriers="Category IS_EQUAL_TO SPECIFIC_VALUE Isolating #"
The trace will return an error when it is run from a water main that has no traversability to a subnetwork controller (for instance, a water main behind a closed valve). To catch and handle these errors, wrap the Trace tool in a try/except/else statement, as in the sketch below.
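Continuing the sketch above, the Trace call wrapped in try/except/else (the parameter values mirror a snippet copied from ArcGIS Pro; any domain network and tier settings from your own snippet would be added here as well; json_path and process_json are assumed names):

def run_isolation_trace(globalid):
    try:
        arcpy.un.Trace(
            in_utility_network=un,
            trace_type="ISOLATION",
            starting_points=out_fc,  # trace locations from the previous step
            filter_barriers="Category IS_EQUAL_TO SPECIFIC_VALUE Isolating #",
            result_types="ELEMENTS",
            out_json_file=json_path,
        )
    except arcpy.ExecuteError:
        # Typically: no traversability to a subnetwork controller,
        # e.g. a water main behind a closed valve. Log it and move on.
        print(f"Trace failed for {globalid}: {arcpy.GetMessages(2)}")
    else:
        process_json(json_path)  # hypothetical helper; see the snippet below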
Since the “ELEMENTS” option is used, the “out_json_file” parameter is enabled, which allows us to specify the output JSON file with the affected elements. Each trace therefore outputs a JSON file that contains the services that would be affected by a potential rupture of the water main. The JSON contains a key called “elements” with the list of affected services, including information such as globalId and objectId to identify each service, plus its asset group and asset type:
? "elements": [
??? {
????? "networkSourceId": 9,
????? "globalId": "{502A4400-CEC6-4701-AE16-855538EFAACC}",
????? "objectId": 12755,
????? "terminalId": 1,
????? "assetGroupCode": 12,
????? "assetTypeCode": 65
??? },{ ...
For detailed information on how to parse the JSON that is created, I recommend reading this post by Robert Krisher:
A snippet of the code to process the JSON file can be found below:
import ujson  # third-party parser; the standard library json module also works

# json_path, results, and assetid are defined in the surrounding loop.
with open(json_path, "r") as json_file:
    json_content = ujson.load(json_file)

elements = json_content.get("elements", [])
dct = {}
for element in elements:
    nwsid = element["networkSourceId"]
    agc = element["assetGroupCode"]
    atc = element["assetTypeCode"]
    key = f"{nwsid}#{agc}#{atc}"
    if key in dct:
        dct[key] += 1
    else:
        dct[key] = 1

results[assetid] = dct
This reads the JSON file into memory, extracts the Asset Group (AG) and Asset Type (AT) information, and builds a dictionary with the counts per AG/AT combination. The point of the counts is to be able to define criticality using a weight per type of service: an industrial service might be more important than a commercial service, which in turn might be more important than a residential service. Since I don't want to predefine the weights for each type of service, the dictionary is written to the feature and an Arcade expression translates it into an impact value.
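The post uses an Arcade expression on the layer for this, but purely to illustrate the idea in Python, with entirely hypothetical weights and codes:

# Hypothetical weights per "networkSourceId#assetGroupCode#assetTypeCode" key;
# the actual codes depend on your asset package.
weights = {
    "9#12#65": 3.0,  # e.g. an industrial service (assumed codes)
    "9#12#66": 2.0,  # e.g. a commercial service (assumed codes)
    "9#12#67": 1.0,  # e.g. a residential service (assumed codes)
}

def impact(dct):
    # Translate the per-type counts into a single weighted impact value;
    # unknown keys fall back to a weight of 1.0.
    return sum(count * weights.get(key, 1.0) for key, count in dct.items())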
The results from this process are linked back to the water main layer and visualized in ArcGIS Pro.
Customer Vulnerability
The process generates a JSON file per water main, and those files contain information about which customers will be affected if that main breaks. We can therefore aggregate this information per customer and try to say something about the vulnerability of each customer. The assumption here is that a customer affected by a larger total length of water mains is more vulnerable than a customer who is only impacted when a smaller length of mains suffers a rupture. Is this really true? Maybe, probably not; I leave that to the water distribution experts to decide.
To determine this “vulnerability”, the script reads all the JSON files and, for each file, adds the length of the traced water main to a running total per affected customer. At the end, the total length of water mains is written for each customer, and this value is used to visualize each service by the length of pipe whose failure would affect it.
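Purely as an illustration, a sketch of that aggregation, assuming one JSON file per main named after its asset id, a json_dir folder holding the trace output, and a hypothetical main_lengths lookup (main id to length):

import os
import ujson

customer_length = {}  # customer globalId -> total length of affecting mains
for fname in os.listdir(json_dir):
    if not fname.endswith(".json"):
        continue
    assetid = os.path.splitext(fname)[0]
    length = main_lengths[assetid]  # hypothetical lookup: main id -> length
    with open(os.path.join(json_dir, fname)) as f:
        content = ujson.load(f)
    for element in content.get("elements", []):
        gid = element["globalId"]
        customer_length[gid] = customer_length.get(gid, 0.0) + length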
Considerations
Running a large number of traces takes a considerable amount of time. I don't recommend doing this on your production environment if your architecture has not been dimensioned for this type of usage; you can run the analysis against a local file geodatabase (FGDB) containing the utility network. Note that you will need a utility network without errors and with functioning subnetworks to perform this type of analysis.
One way of reducing the processing time is to maintain a dictionary of water mains and register the number of affected customers for every main that a trace isolates. This is less precise and will overrate the criticality of some water mains, but it remains valid for a large number of mains.
When I first ran the optimized script, I didn't define a threshold to force the script to recalculate a water main that was already included in a previous trace. This resulted in a situation where a single trace that affected the entire network would assign all affected services to every water main it isolated.
In a second run, I defined a threshold of 500 customers, so that when a trace resulted in more affected customers than that, the cached result was not assigned to all the water mains that would be without service. The run took just over an hour to complete (a reduction of the processing time to a mere 10% of the original run), but the results differed for 26% of the water mains, with an exaggeration of 70% over the original (correct) impact.
A third run with a threshold of only 25 customers increased the processing time to almost 2.5 hours, but the “accuracy” improved: 92% of the water mains had the same result as the original (correct) run.
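A sketch of the caching-with-threshold idea (water_mains, results, and the layout of the result dictionary are assumptions; trace_main is a hypothetical wrapper that runs the isolation trace and returns the customer counts plus the list of mains it isolated):

THRESHOLD = 25  # max affected customers for which a cached result is reused

cached = {}  # water main globalid -> result of the trace that isolated it
for globalid in water_mains:
    hit = cached.get(globalid)
    if hit is not None and hit["customer_count"] <= THRESHOLD:
        # Reuse the earlier trace; this slightly overrates criticality.
        results[globalid] = hit
        continue
    result = trace_main(globalid)  # hypothetical wrapper around Trace
    results[globalid] = result
    # Every main isolated by this trace can reuse the same result.
    for isolated in result["isolated_mains"]:
        cached[isolated] = result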
In conclusion, you can reduce the processing time by only fully processing water mains that were not included in a previous isolation trace, but it is important to configure the threshold according to your data and requirements. This approach will also let you detect anomalies in the behavior of the network that would otherwise be much harder to find.
I exchanged some emails with Robert Krisher, and he mentioned that using isolation subnetworks or the subnetwork JSON export can significantly reduce processing time without losing precision. He may publish something on this in the future. If you are interested in the Utility Network and are not following him yet, please do so now; he continuously shares valuable information and recommendations on different aspects of ArcGIS and the Utility Network.
Comments
Xander, very cool analysis. It reminds me of the Summary Isolation Trace tool we had in the Water Editing and Analysis GN toolbar. We are working on a new tool to process the results of a trace and convert them into a GDB. We could use Batch Trace to create the isolation JSON file per main, then use the new tool to convert those files to a GDB, and then use the Trace Key to create a count of features. This might achieve the same result with no coding, just some SQL view logic. Happy to work with you on it if you are interested.