Contributions to Big Geospatial Data Rendering and Visualisations - Ph.D Thesis: Specification chapter 5
Within this chapter we discuss the specification needs for the project. The specification states what is needed to achieve the aims and objectives of the project, and to overcome the issues identified within the background, literature review, and big geospatial challenges chapters.
We reiterate the aims and objectives of this project, which are as follows:
1. To review and consolidate the knowledge and state of the art on big geospatial data rendering and visualisation.
2. To identify and collect the real-world data needed to generate complex real-world 3D environments.
3. To design a framework and software components to develop a big geospatial data rendering and visualisation system.
4. To create novel data structures to combine several data sources, and to design new algorithms to process and visualise these data using a 3D game engine.
5. To implement a geospatial data visualisation system.
6. To develop new metrics and benchmarks to evaluate the performance of the system and of the resulting 3D scenes.
We discuss the specifications from a high-level point of view, then the needs of processing data, the needs for 3D city visualisations, the data structures needed to manage the big geospatial data, and then the algorithms which will be applied to the data structures. Finally, we discuss the use of the Interactive Visualisation Interface (IVI).
5.1 High-Level Specification
This section introduces the high-level specification of the framework. The specifications will culminate in a framework which can process geospatial data into a procedurally generated, realistic 3D virtual environment which can be manipulated both spatially and visually by a user.
To generate visualisations for a modern GIS, datasets are available for use, but additional processes and algorithms are needed to combine them. Datasets are combined to infer data where existing data is erroneous or missing; where data is missing, inference algorithms are to be created and used. The use of both private and open data justifies this combination, as both contain multiple, varying errors. The datasets chosen will guide the development of the algorithms and data structures needed to infer additional data, as well as to visualise real-world assets. To date, the combination of these data for the generation of virtual scenes projected and rendered within a game engine has not been achieved within industry or research domains. As stated within the literature review chapter, similar research is underway, but it uses alternatives to game engines for processing and visualisation. The generation of 3D assets is the key component, and for this, novel algorithms need to be created. Due to the complexity of asset generation, employing a 3D model artist to model every possible location within the UK, or the world, would be impractical and inefficient; thus PG algorithms are needed. These algorithms will utilise the data extracted from the datasets to create procedurally generated 3D assets, or, where available, load corresponding user-generated content in place of procedural generation.
To allow flexibility with the processing pipeline, from data input, data fusion, data inference, and data generation, a scalable and flexible procedure is needed to allow processes to be removed or added to improve processing speeds, or visualisation outcomes. The flexible nature will allow additional visualisation layers to be added with future iterations, and allow layers to be removed or used within various projects and domains.
Computer games commonly employ a large number of rendering techniques which have not yet been implemented within GIS frameworks or visualisations of real-world scenes; this, combined with the large number of assets, calls for a flexible rendering pipeline. The flexible rendering pipeline should allow all 2D and 3D model assets to be rendered with any rendering technique and its various lighting calculations. This specifies the need for a unified model object and a unified rendering pipeline.
To reduce latency in processing, rendering, and interaction with the virtual scenes, processes must provide techniques to skip the generation of assets where possible. During runtime, algorithms must be employed to organise large scenes so as to reduce latency when updating and rendering the scene.
The processes must be usable within alternative frameworks or projects, and their output should be flexible enough to be used within alternative projects, rendering engines, game engines, or similar frameworks. A common output schema is therefore needed; binary, XML, GML, etc.
The functional and non-functional requirements are stated next, but first we iterate some use cases for the framework from the perspective of potential stakeholders.
A fire department may wish to view a virtual scene which contains the visualisation of real-world city environments at crisis time. If a building is on fire, the fire department may wish to query a scene, or the objects around a fire to render the likelihood that the queried object will also catch fire. Using this data, the planning of fire engine placement can commence. This same scenario can be utilised in training at non-crisis time.
The police can utilise a virtual real-world scene by overlaying additional data onto processed maps. These data can be visualised by the procedural generation of virtual spheres. The spheres can represent potential incidents, with the size of each sphere representing the severity of the incident. This technique has been implemented for sound-level mapping within a real-world location in the paper listed in section 1.4.1, item 4.
The functional and non-functional requirements are stated as follows.
Functional Requirements
· Utilise real-world open and private data. If data is error prone, or missing, then infer and generate data.
· Create a flexible visualisation pipeline.
· Geospatial data parser. This functionality converts between data structures. For example, converting OSM XML format to a runtime class object which can be processed further.
· Geospatial data combiner.
· Geospatial data interpolator.
· Allow scalable and flexible processes and algorithms to combine real world big-data datasets for the reduction of errors and generation of additional data for visualisation.
· Combine big-data data-sets using novel algorithms.
· Procedurally generate 3D assets from real-world geospatial datasets.
· Use user generated content in place of procedurally generated assets.
· Data structures to organise a city's, a country's, or a world's worth of data.
· Interaction techniques with GPU parameters.
· Advanced rendering techniques.
· Allow users to navigate a scene utilising common 3D camera systems.
· Scene traversal and searching for user-generated queries. Traversal and searching of scenes for assets which satisfy user queries should be real-time, and thus should take no longer than 2 seconds. This is required by the use-case scenarios stated.
Non-Functional Requirements
· Use of commonly used input devices (keyboard, mouse, gamepad) through a framework specific interface.
· 3D model importer.
· 3D model processor. This is stated as non-functional because model assets can be imported through many open APIs. A custom 3D model processor is needed to reduce data storage and to apply needed data parameters if the 3D model mesh is imported without them (UV coordinates, tangents and binormals, etc.).
Within the rest of the chapter we will discuss details and issues of each component of the framework. We start by discussing the specification needs for the processing of data.
5.2 Processing Data
We remind the reader that the data sets which we have available are OS terrain data, LiDAR DTM and DSM terrain data, and OSM data.
The issues stated for the LiDAR data show that additional processing is needed to combine the erroneous LiDAR sets with a complete set; the complete set being OS. A process, or multiple processes, are needed to combine the two datasets. The multiple resolutions of the LiDAR sets mean interpolation procedures are needed to interpolate the low-resolution OS set up to the high-resolution LiDAR sets. Interpolation between resolutions is needed, as well as processes to combine terrain sets together and to extract subsets of terrain sets.
In order to determine the most appropriate interpolation function to use, an evaluation of interpolation functions is needed.
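As an illustration, one candidate interpolation function, a bilinear scheme over a regular height grid, can be sketched in Python as below. The names `bilinear` and `upsample` are ours, and the real evaluation would compare this against nearest-neighbour, bicubic, and other alternatives on the actual terrain sets.

```python
def bilinear(grid, x, y):
    """Bilinearly interpolate a 2D height grid (rows of floats) at a
    fractional (x, y) position."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(grid[0]) - 1)
    y1 = min(y0 + 1, len(grid) - 1)
    fx, fy = x - x0, y - y0
    top = grid[y0][x0] * (1 - fx) + grid[y0][x1] * fx
    bottom = grid[y1][x0] * (1 - fx) + grid[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

def upsample(grid, factor):
    """Resample a low-resolution grid to a higher resolution by `factor`,
    mimicking interpolation of the OS set up to LiDAR resolution."""
    h, w = len(grid), len(grid[0])
    out_h, out_w = (h - 1) * factor + 1, (w - 1) * factor + 1
    return [[bilinear(grid, x / factor, y / factor) for x in range(out_w)]
            for y in range(out_h)]
```

The same `upsample` harness could drive each candidate function, timing it and measuring its error against held-out high-resolution LiDAR points.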
The errors to check for are: incomplete maps, missing maps, and errors such as spikes in height data. Incomplete maps are maps which have data points missing, normally over large areas of the map. Missing maps are maps which have not been captured yet, either due to cost or to there being no need to capture the area (mountains, non-populated areas, seas/lakes). Spikes in height data may represent flocks of birds or mobile artefacts within a scene at capture time. Procedures are needed to mathematically model and present these potential errors. Combining OSM data may give the additional information needed to query irregular spikes in height data.
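One possible spike check is a local median filter, sketched below in Python. The window size and threshold are illustrative assumptions; in practice they would be tuned against the LiDAR resolution and validated by a domain expert.

```python
def flag_spikes(heights, window=3, threshold=10.0):
    """Flag indices whose height deviates from the local median by more
    than `threshold` metres -- candidate artefacts (e.g. birds at capture)."""
    flagged = []
    half = window // 2
    for i in range(len(heights)):
        lo, hi = max(0, i - half), min(len(heights), i + half + 1)
        neighbourhood = sorted(heights[lo:hi])
        median = neighbourhood[len(neighbourhood) // 2]
        if abs(heights[i] - median) > threshold:
            flagged.append(i)
    return flagged
```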
The data held within OSM is vast, extensive, and user-generated. The data is also erroneous, lacking, and varies greatly depending on the area chosen, due to multiple languages, dialects, and understandings of the scene being mapped. To utilise the data of OSM, additional checks are needed to ensure the data extracted is suitable for asset generation and visualisation. Initial prototypes show that although the OSM API does error-check data entered by users, errors still remain within the database when downloaded.
The XML file which describes the OSM objects is unreadable and unmanageable without the aid of parsing algorithms. Objects within OSM are referenced with a unique ID, which we have found to be unreliable; multiple objects have been found to share the same ID, and an object may be entered into the XML file multiple times. This may lead to additional and unneeded processing and data generation. Another issue is the formatting of input from users. For these reasons, additional checks are needed to remove these errors so as to minimise system failures and duplication of data.
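A minimal sketch of such a parsing-and-deduplication pass, using Python's standard XML parser on a toy OSM fragment, might look as follows. The prototype itself would target the full OSM schema; here only nodes, ways, and tags are handled, and duplicated IDs are simply skipped.

```python
import xml.etree.ElementTree as ET

# Toy fragment: node id "2" appears twice, mimicking a duplicated entry.
OSM_XML = """<osm>
  <node id="1" lat="53.801" lon="-1.548"/>
  <node id="2" lat="53.802" lon="-1.547"/>
  <node id="2" lat="53.802" lon="-1.547"/>
  <way id="10">
    <nd ref="1"/><nd ref="2"/>
    <tag k="building" v="yes"/>
  </way>
</osm>"""

def parse_osm(xml_text):
    """Parse OSM XML into dictionaries keyed by ID, keeping the first
    occurrence of any duplicated ID and discarding repeats."""
    root = ET.fromstring(xml_text)
    nodes, ways = {}, {}
    for n in root.iter("node"):
        nid = n.get("id")
        if nid in nodes:          # duplicate ID: keep first, skip repeat
            continue
        nodes[nid] = (float(n.get("lat")), float(n.get("lon")))
    for w in root.iter("way"):
        wid = w.get("id")
        if wid in ways:
            continue
        ways[wid] = {
            "nds": [nd.get("ref") for nd in w.iter("nd")],
            "tags": {t.get("k"): t.get("v") for t in w.iter("tag")},
        }
    return nodes, ways
```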
The OSM objects depict assets within the real world. Parsing the assets into usable data structures is needed, as is the design of those data structures. As stated, OSM has numerous errors or incomplete data, as do the terrain datasets. Data-generation techniques are needed to create data where there is none, through inference or through default values created by domain experts; i.e. the height of a floor can be estimated through government sectors or private companies39.
The projections of the terrain data and OSM data are different. An algorithm is needed to convert between the projections; longitude/latitude to X, Y coordinates for use within a game engine.
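For illustration, a simple local projection can be sketched as below. This equirectangular approximation is our assumption, not the exact OSGB transform used by the OS data; it is adequate at city scale, while an exact conversion would need a full datum transformation.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def lonlat_to_xy(lon, lat, origin_lon, origin_lat):
    """Project longitude/latitude to local game-engine metres about an
    origin, using an equirectangular approximation (illustrative only)."""
    x = (math.radians(lon - origin_lon) * EARTH_RADIUS_M
         * math.cos(math.radians(origin_lat)))
    y = math.radians(lat - origin_lat) * EARTH_RADIUS_M
    return x, y
```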
The information held within OSM to describe an object often lacks the detail needed for analysis and examination. The minimum information stating that an OSM boundary outline represents a building is a single Tag object with the key, from a key/value pair, of 'Building'.
Techniques which analyse the spatial parameters and location of the assets can infer relevant data which can be inputted into the OSM database. This need states that inference algorithms are needed. The inference algorithms will need to be checked by domain experts. For the prototype we will generate our own conclusions on the success of the inference algorithms. If data is not available, nor can it be inferred, then default parameters are needed; again a domain expert should be consulted to determine these default parameters.
To create a visualisation of real-world locations, the generation of 3D model data is needed. Due to the large number of assets, and the need to visualise any location within the UK, or the world, employing a 3D artist would be expensive; thus PG processes are needed. The processes will need to convert the information extracted from the datasets into runtime 3D model mesh objects.
Organising the assets within scenes, and within internal processes needs the use of a spatial organisation technique. This is needed to allow data organisation and quick look up of location referenced data.
39 https://www.ctbuh.org/TallBuildings/HeightStatistics/HeightCalculator/tabid/1007/language/en-US/Default.aspx
The processing of the data needs to output objects into a format which can be used within runtime simulations; a complete processing environment which combines, extrapolates, and generates data from the available datasets, to be rendered and interacted with by a user, or by multiple users.
To improve testing, and to remove data which a user may not need (thereby removing unneeded processing), trigger arguments are needed. If only buildings and terrain are needed within the runtime simulation, removing the processes not used for the creation of buildings will optimise the speed of generation.
We conclude that many algorithms and processes are needed to combine the various geospatial datasets into a single pipeline. Processes are needed to combine the OS and LiDAR terrain data, while separate processes convert OSM data assets into model mesh objects utilising PG techniques. To generate accurate and complete data, inference algorithms are needed both before and after model mesh generation.
After the generations of assets, runtime processes are needed to organise and modify assets both spatially, and visually.
Due to the time scale of processing (combining and interpolating terrain data, and processing the large OSM database), we specify the ability to pre-process the creation of runtime assets.
To summarise, the processes needed are:
· Process LiDAR to remove errors/missing data points.
· Processes to combine LiDAR and OS data sets if errors persist within the LiDAR datasets.
· Processes to interpolate through the resolutions of the LiDAR datasets we have.
· Evaluate interpolation functions for accuracy and processing pros and cons.
· Processes to check OSM data parameters and data types, as well as duplicated data.
· Generation of data structures to store and process OSM objects parsed from XML.
· Utilise third-party software processes to convert between the projections of LiDAR/OS and OSM.
· Inference algorithms to generate additional data where data is missing.
· PCG processes are needed to create 3D model mesh objects depicting OSM objects; highways, buildings, amenities etc.
· Allow user interaction through an IVI.
· Allow user generated content to be inputted during pre-processing and runtime.
· Allow users to remove unneeded objects at pre-processing time to improve processing speeds.
Utilising the processes stated, a single pipeline can be created to overcome the issues stated within the background, literature review, and big geospatial challenges chapters. This data can be used within a number of algorithms and real-time visualisation procedures, creating a large, dynamic, and interactive 3D virtual scene which a user can modify in real time.
5.3 City Visualisation
To visualise modern cities and urban environments, certain objects stand out as common elements; buildings, highways, amenities, terrain, and objects such as street lamps, post boxes, cars, benches, and many others.
To view these objects on a multitude of screens, whether computers, phones, or TVs, a commercial API is needed to render the objects and manipulate them for interaction needs. A commercial game engine combines a rendering API and the components needed.
A 2D and 3D camera system is needed. The camera system must not only provide an eye within the 3D environments, but must allow the interaction to navigate the scene in a multitude of ways.
Common 3D camera systems are built of multi-functional cameras which provide specific interaction capabilities. We specify the use of such multi-functional cameras. 2D cameras are also needed to view and project overlays within the 3D virtual environment; these are usually Heads-Up Display (HUD) overlays.
Many commercial GIS platforms provide one of two projections; Orthographic, or Perspective. We specify that a 3D city visualisation must be viewed in both projections.
Before objects are rendered to the screen, a flexible rendering technique is needed. Due to the large number of varied physical objects needed to create a modern city visualisation, various rendering and lighting techniques and algorithms are needed to allow the realistic rendering of physically based objects. The elements which need to be rendered with realism are water, glass, plastic, and reflective surfaces.
The framework should allow all 3D data structures to be rendered with all specified rendering techniques, so as to minimise system errors and crashes.
To generate 3D city visualisations, 3D model mesh objects are needed. Creating custom hand-made models with a 3D artist is impractical for many reasons: cost, time, the ever-changing environments of cities, etc. Procedural techniques and algorithms have been shown to be a feasible alternative for the generation of real-world assets. Some algorithms stated within the background and literature review chapters generate building models from grammar techniques to a high standard of realism but do not utilise real-world data, while others create buildings of limited realism and minimal visual variance generated purely, or partially, from real-world data.
Procedural techniques are needed for the multiple types of assets required, and for the types of objects commonly found within real-world locations. Common assets which do not vary (benches, phone boxes, lamp-posts) are to be generated by a 3D artist, due to their simplicity and consistently similar looks. The addition of hand-made models is also needed; these will be used in place of procedurally generated assets.
To allow the importing of user-created model mesh objects, algorithms are needed for importing, parsing, and processing the external model datatypes and converting the models. This will guarantee the models can be used within the flexible rendering pipeline needed.
A spatial organisation technique is needed to organise all assets spatially within a scene, as well as to organise the assets depending on their type. Rendering large scenes may not be achievable without additional spatial organisation techniques if the scenes are dense and tightly packed, as many modern urban environments are.
Assets should be spatially organised into manageable chunks for improved processing and rendering. A scenegraph structure is commonly used within commercial products and many commercial computer games to spatially organise assets of both static and dynamic nature, from macro to micro elements. The scenegraph structure should provide spatial organisation and categorisation within a single scenegraph structure used for the complete scene. The scenegraph structure can be utilised by other algorithms to infer data, as well as inject data or user input into a single, or a group of nodes.
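The core of such a structure can be sketched as a node with children and a recursive update, as below. The class and its fields are illustrative; the real scenegraph would also carry bounding volumes, type categorisation, and rendering state.

```python
class SceneNode:
    """Minimal scenegraph node: a local position, child nodes, and a
    recursive update that accumulates world positions down the tree."""

    def __init__(self, name, local=(0.0, 0.0, 0.0)):
        self.name = name
        self.local = local       # position relative to the parent
        self.world = local       # absolute position, set by update()
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def update(self, parent_world=(0.0, 0.0, 0.0)):
        # Accumulate the parent's world position, then recurse.
        self.world = tuple(p + l for p, l in zip(parent_world, self.local))
        for child in self.children:
            child.update(self.world)
```

Moving a district node would then move every building beneath it in a single recursive call, which is the property the chapter relies on for scene-wide modification.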
Rendering objects in a scene for visualisation needs light sources. Within city visualisations, the main light source is the Sun. A global illumination technique is needed to apply the light source to models, be it sunlight or spot lights around a virtual environment. To provide flexibility and additional rendering capabilities, assets, specifically the model mesh objects, should be allowed to utilise their own lighting parameters, or to utilise the global scene lights. This allows the unique rendering of an asset by modifying its spatial or visual parameters, or by modifying its internal reference illumination lights. Modifying the global illumination lights will modify the entire scene and all assets, creating realistic environments.
We conclude that a modern GIS visualisation framework which showcases assets within a city requires a commercial rendering API, and potentially a game engine which encapsulates the components needed. Before visualisation of city assets, processing of the datasets is needed to convert the data into representations which are apparent within real-world city scenes; 3D model mesh objects depicting buildings, highways, amenities, etc. The components needed are 2D and 3D camera systems to allow rendering of assets in multiple projections. A scenegraph is needed to position assets within the 3D virtual world, and to organise the assets depending on their internal state and type. To render objects which are imported or procedurally generated, a custom model representation is needed. To render objects to screen using a large number of rendering and lighting calculations, a flexible rendering algorithm is needed.
We specify that for the visualisation of real-world urban environments, light structures are needed. The number of lights supported depends on the rendering technique chosen. We state that forward rendering is needed for this prototype pipeline; it allows flexibility with rendering techniques and is chosen for its ease of implementation. Deferred rendering may provide a higher number of lights within a visualisation, but is unneeded for this prototype.
If these specifications are met, large virtual 3D scenes can be visualised, manipulated, and used to present real-world locations inside virtual scenes.
5.3.1 Evaluation of an IASF and multi-branched Shader Effect
The visualisation of large numbers of vertices and polygons needs techniques which do not rely on expensive dedicated hardware configurations. Scenes generated for a modern GIS need high rendering speeds for large, dense scenes. An IASF improves the rendering speed of large numbers of vertices over branched shader effects. A multi-branched shader function relies on the Boolean flags of if-statements within the function code of a GPU HLSL shading function. Creating branches is expensive for the GPU and increases rendering times due to the increased overhead of each function call. Removing branches from the code base is the role of the IASF. This increases program duplication, but allows increased rendering speeds. The experiment undertaken compared the two techniques. The functions of the IASF are selected CPU-side, with each function representing a branch, or series of branches, as would be generated within a shader function with if-statements.
The results shown in Figure 27 show that the use of an IASF to render large buffers of vertex data increases the rendering speed. The experiment compares a low, medium, high, and very high polygonal scene being rendered with both the dynamic branching shader and the IASF. The dynamic branching shader consisted of if-statements which chose the lighting calculations (textured, Blinn-Phong, Phong, per-vertex, or per-pixel) dependent on the shader technique chosen. The technique would then send pre-defined parameters depicting the needs of the shader. The IASF utilised an indexed array data structure which points to specific functions; rather than passing parameters from technique to function, it simply calls separate functions, thus removing all if-statements.
Figure 28 shows the percentage gains from using an IASF over the branching shader. It shows that the larger the number of vertices being processed, the higher the framerate compared to that of the commonly used dynamic branching shaders. We reiterate that an IASF has not previously been used within the domain of GIS visualisation and real-time interaction. Even with a small number of vertices, the IASF is still beneficial. Figure 28 does show that the draw call is a drawback, but the overall framerate is higher, which is the main benefit of utilising the IASF for our work.
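The principle can be illustrated CPU-side in Python: the branching shader is mimicked by a chain of conditionals, while the IASF is mimicked by indexing into an array of functions so that no conditional is evaluated per call. The lighting maths here is a placeholder; the real implementation lives in HLSL on the GPU.

```python
# Branching version: one function full of conditionals, evaluated per call,
# analogous to a dynamic branching shader evaluated per vertex.
def shade_branching(technique, intensity):
    if technique == 0:
        return intensity          # textured: pass-through (placeholder maths)
    elif technique == 1:
        return intensity * 0.5    # per-vertex diffuse (placeholder maths)
    elif technique == 2:
        return intensity * 0.75   # Blinn-Phong-like term (placeholder maths)
    return 0.0

# IASF-style version: the branch is resolved once into an index, and the
# corresponding function is called directly with no conditionals.
SHADER_TABLE = [
    lambda i: i,
    lambda i: i * 0.5,
    lambda i: i * 0.75,
]

def shade_indexed(technique, intensity):
    return SHADER_TABLE[technique](intensity)
```

Both versions produce identical results; the difference in the GPU setting is that the indexed form avoids per-invocation branch overhead at the cost of duplicated program code.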
Figure 27 Runtime rendering rates between IASF and Dynamic Branching.
Figure 28 Percentage of Improvement between IASF and Dynamic Branching.
5.4 Algorithms Needed
We have specified many needs for this project. To complete these specifications and to generate the data needed to visualise a virtual 3D city, algorithms are needed, either created specifically or obtained from open sources. The generation of these algorithms is essential to overcoming the problems stated with the available datasets and with the current techniques used by commercial projects and researchers.
To combine the datasets of OS and LiDAR, an algorithm is needed to make the datasets consistent with each other. Interpolation techniques are needed within an algorithm to produce missing data within the terrain data.
An algorithm is needed to extract and infer information from the OSM dataset for use with PG algorithms. The extraction of OSM data needs domain expertise to create default attributes for OSM assets which have little to no additional data attached. Algorithms to infer information relevant to each asset can be used during extraction and after generation. For example, a building's height can be inferred by defining a constant height for a single floor and multiplying it by the number of floors stated within the description of the building. Algorithms to infer data after the generation of assets are used to check the spatial relations between assets. Spatial analysis techniques can be employed on 3D assets through the use of commercial physics engines or collision-check algorithms. If a building has no type, and lies within an OSM boundary, the boundary's type can be used to classify the building.
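Such a fallback chain can be sketched as follows. The 3-metre floor height is an assumed default which, as stated, a domain expert should set; `height` and `building:levels` are genuine OSM tag keys.

```python
DEFAULT_FLOOR_HEIGHT_M = 3.0  # assumed constant; a domain expert should set this

def infer_building_height(tags):
    """Infer a building's height in metres from its OSM tags, falling back
    to floors x assumed floor height, then to a single-storey default."""
    if "height" in tags:
        return float(tags["height"])
    if "building:levels" in tags:
        return int(tags["building:levels"]) * DEFAULT_FLOOR_HEIGHT_M
    return DEFAULT_FLOOR_HEIGHT_M
```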
Algorithms are needed to procedurally categorise the OSM assets. As stated, the common assets found within modern cities are buildings, highways, amenities, boundaries, and objects such as benches, post boxes, etc. The highways of OSM encompass waterways and railways. To categorise them, the algorithm must loop over the OSM dataset, extracting and checking the relevant data. Because assets can be categorised, and data can be inferred from other assets within the dataset (such as classifying buildings by their location in the world and whether they lie within a specific type of boundary), the organisation and categorisation needs to process and generate assets in a specific order. This is to reduce processing and duplication of data.
During classification, processing branches; each branch will generate a building, highway, amenity, etc. Additional checks are needed within the algorithm to make sure data has not been duplicated within the OSM dataset.
Once classified, the asset is branched to be procedurally generated. To allow hand-made models, an algorithm is needed to check the hard disk for whether a model exists for the currently processed asset. If a model does exist, the algorithm should load and process the model, using it in place of the procedurally generated asset. This should be done for all objects. As OSM uniquely identifies assets with their own ID number, the lookup of models can be procedurally checked within the algorithm.
If no model object is referenced on disk to be used instead of a procedurally generated asset, an algorithm is needed to check the type of the object; if it is a common type, such as a residential building, a general-purpose model should be used. This removes the limitation of primitive procedural assets needing custom complex algorithms.
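The disk lookup described above can be sketched as below. The directory layout (one file named after each OSM ID) and the extension list are our assumptions for illustration.

```python
from pathlib import Path

def find_model(osm_id, model_dir, extensions=(".obj", ".fbx")):
    """Return the path of a hand-made model for this OSM ID if one exists
    on disk, else None, signalling that the asset must be procedurally
    generated (or a general-purpose model used for common types)."""
    for ext in extensions:
        candidate = Path(model_dir) / f"{osm_id}{ext}"
        if candidate.exists():
            return candidate
    return None
```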
If no model object is to be used, procedural algorithms are needed to create the 3D model mesh objects. Procedural algorithms are needed for each asset to be generated.
The algorithm to generate boundary model mesh objects will need to utilise a triangulation algorithm to generate the model mesh object from an arbitrary array of points.
The algorithm for the PG of buildings needs to account for a building which is generated from multiple parts. The rooftops of buildings will utilise the triangulation algorithms used to create polygonal model mesh objects.
The algorithm for the PG of highways, waterways, and railways will need to utilise interpolation techniques to interpolate between the data points. We reiterate that the highway structures held on OSM are represented by a limited number of points which, when visualised, create angular lines. Figure 29 represents the procedure needed.
Figure 29 Highway mesh generation. Step 1 is OSMWay representation. Step 2 is the result of the interpolated points. Step 3 shows the points generated from the interpolation. Step 4 generates the perpendicular sides of the highway.
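Steps 2 to 4 of the procedure in Figure 29 can be sketched as below: the polyline is densified by linear interpolation (Catmull-Rom or similar would give smoother curves), and each point is then offset perpendicular to the local direction to form the two sides of the highway ribbon. The function names and width parameter are illustrative.

```python
import math

def densify(points, segment_len=1.0):
    """Insert linearly interpolated points along a polyline so that no
    segment exceeds `segment_len` (steps 2-3 of the figure)."""
    out = [points[0]]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dist = math.hypot(x1 - x0, y1 - y0)
        steps = max(1, math.ceil(dist / segment_len))
        for s in range(1, steps + 1):
            t = s / steps
            out.append((x0 + (x1 - x0) * t, y0 + (y1 - y0) * t))
    return out

def extrude_sides(points, half_width):
    """Offset each point perpendicular to the local direction, producing
    the left/right edge vertices of the highway ribbon (step 4)."""
    left, right = [], []
    for i, (x, y) in enumerate(points):
        ax, ay = points[max(0, i - 1)]
        bx, by = points[min(len(points) - 1, i + 1)]
        dx, dy = bx - ax, by - ay
        norm = math.hypot(dx, dy) or 1.0
        nx, ny = -dy / norm, dx / norm  # unit normal to the direction
        left.append((x + nx * half_width, y + ny * half_width))
        right.append((x - nx * half_width, y - ny * half_width))
    return left, right
```

The paired left/right vertices can then be stitched into triangles to form the final mesh.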
During the runtime simulation, rendering algorithms need to provide optimisations to categorised assets depending on their rendering needs; translucent, opaque, and others. The algorithms must also allow the changing and procedural setting of rendering techniques depending on the needs of the model mesh object. For example, to render reflective water, the model mesh asset must be set to utilise reflective rendering techniques. This type of rendering technique is process-intensive, so an algorithm is needed to monitor runtime simulation details such as frame rate and procedurally set the rendering technique to a less process-intensive technique, improving rendering rates to allow for real-time visualisation.
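A minimal sketch of such a frame-rate monitor follows; the technique ordering and the threshold values are illustrative assumptions.

```python
# Techniques ordered from most to least process-intensive (assumed ordering).
TECHNIQUES = ["reflective", "blinn_phong", "per_vertex"]

def select_technique(frame_rate, current_index, target_fps=30, headroom_fps=50):
    """Drop to a cheaper technique when the frame rate dips below the
    target, and restore a costlier one when there is comfortable headroom."""
    if frame_rate < target_fps and current_index < len(TECHNIQUES) - 1:
        return current_index + 1
    if frame_rate > headroom_fps and current_index > 0:
        return current_index - 1
    return current_index
```

The hysteresis gap between `target_fps` and `headroom_fps` prevents the technique from oscillating every frame.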
For a group of nodes within the scenegraph structure, or a group of model objects, to be modified, search techniques must be employed to allow the procedural querying of assets within a scene. The user will create a query, which an algorithm will interpret and use to search a scene for objects which satisfy it. This leads to multiple needs from the algorithm: search techniques which work with the scenegraph layout, the PG of queries, and the ability to act upon the assets which satisfy the user-generated query.
Once an asset or assets are found from the query, controllers will be employed to modify specific properties of the spatial parameters or the visual parameters. From our background and literature review, the animation of the properties of assets within a virtual scene depicting geospatial data has not been used as of yet. An algorithm which exposes all properties of the spatial and visual parameters is needed to allow the animation, interpolation, and modification of those properties. The algorithm will allow users to encode their own datasets and meanings onto a city's worth of assets. For example, querying for buildings which are over 20 metres tall and not fire-proof will search the scenegraph for buildings which satisfy this query, and a user can state that the returned assets be rendered in red, or have their scaling pulsate, bringing them to the user's attention.
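The query step can be sketched over a toy asset tree as follows; the dictionary-based assets and the example predicate (buildings over 20 metres which are not fire-proof, mirroring the example above) are illustrative stand-ins for the real scenegraph nodes.

```python
def query_scene(assets, predicate):
    """Recursively search a nested asset tree, returning every asset that
    satisfies the user's predicate."""
    hits = []
    for asset in assets:
        if predicate(asset):
            hits.append(asset)
        hits.extend(query_scene(asset.get("children", []), predicate))
    return hits

# Toy scene standing in for scenegraph nodes.
scene = [
    {"type": "building", "height": 25.0, "fire_proof": False, "children": []},
    {"type": "building", "height": 12.0, "fire_proof": False, "children": []},
    {"type": "highway", "width": 7.5, "children": []},
]

# Buildings over 20 metres which are not fire-proof.
tall_risky = query_scene(
    scene,
    lambda a: a.get("type") == "building"
    and a.get("height", 0) > 20 and not a.get("fire_proof", True))
```

A controller could then be attached to each returned asset to recolour it or pulsate its scaling.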
To animate objects, controller objects should be employed to update the parameters of the asset which the controller is attached to. This will improve the processing of assets; if an asset does not have a controller attached, then it does not need to be processed.
Various optimisation algorithms are needed to improve the rendering and batch processing of assets. For example, the scenegraph structure can be searched for assets which have a controller, and put these assets within a referenced array used to linearly update the assets, instead of recursively traversing a scenegraph checking if assets need updating.
We conclude that generating the algorithms stated will remove, or partially remove, the issues surrounding modern GIS visualisations and allow for large, accurate rendering of, and interaction with, procedurally generated 3D city scenes. We have stated the need for algorithms to parse, process, and combine terrain datasets, to be utilised with the inference of data for the assets procedurally extracted from OSM datasets. The inference of data is also programmatically checked within the processing of OSM data, creating more accurate asset objects. The assets are then procedurally generated to create 3D model mesh objects. If a model is already available and suitable to be used in place of the procedurally generated asset, then it will be processed and referenced accordingly. Once PG of assets is completed, they are used within a post-processing algorithm to infer further information. During runtime visualisations, algorithms are utilised for animating and modifying assets within the scene, allowing the flexible rendering techniques to be applied to single assets or to complete scenes of assets. These techniques are encoded and dispersed through the scenegraph structure, allowing a flexible, single-pipeline process for a modern GIS visualisation framework. Dispersing data through the scenegraph from top to bottom can change a scene visually within a single recursive function call.
Next we discuss the requirements for the User Interface.
5.5 Interactive Visualisation Interface Requirements
The IVI system is required to provide interaction with the algorithms and runtime simulations, to modify the spatial and visual properties of assets within the scene, and to allow user-defined queries to be applied to scenes. This will improve the overall functionality of the framework, improve searching, and provide flexibility in the visualisation of scenes. As well as visualisation flexibility, interaction with the IASF parameters is achieved, extending the functionality and benefits that the IASF brings to large-scale scene visualisation. The use of the IVI will also allow inspection of the PCG, and specification of which procedurally generated models need to be visualised first and then inspected.
The IVI system must allow multiple functionalities and interactions. We have stated that a scenegraph structure is needed to spatially organise and classify assets within the scene; the IVI system must allow interactions with that scenegraph structure. The creation and deletion of nodes, and their spatial modification, will allow adaptation of the scenegraph to alternative needs.
The IVI system must also allow searching of the scenegraph for a single node or group of nodes selected by the user's procedural queries, querying the various data structures or information attached to a node. By allowing queryable functionality on a scene, or on an individual asset, the framework can be utilised within a number of domains regardless of the background knowledge of the user; the fire brigade can utilise the framework to search for non-fire-proof buildings, while the police can use it to search for highways of a specific width to navigate large vehicles through a city.
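A minimal sketch of this requirement follows (illustrative Python; SceneNode and the property names are hypothetical, not the thesis implementation). The scenegraph is searched recursively, and every node whose attached properties satisfy a user-supplied predicate is returned:

```python
class SceneNode:
    def __init__(self, name, children=(), **properties):
        self.name = name
        self.children = list(children)
        self.properties = properties   # arbitrary data attached to the node

def query(node, predicate, results=None):
    """Collect every node in the subtree whose properties satisfy the predicate."""
    if results is None:
        results = []
    if predicate(node.properties):
        results.append(node)
    for child in node.children:
        query(child, predicate, results)
    return results

city = SceneNode("city", children=[
    SceneNode("station", kind="building", fireproof=True),
    SceneNode("mill",    kind="building", fireproof=False),
    SceneNode("A57",     kind="highway",  width=7.5),
])

# The fire brigade searches for non-fire-proof buildings ...
at_risk = query(city, lambda p: p.get("kind") == "building"
                                and not p.get("fireproof", True))
# ... while the police search for highways wide enough for large vehicles.
wide_roads = query(city, lambda p: p.get("kind") == "highway"
                                   and p.get("width", 0) > 6)

print([n.name for n in at_risk])     # ['mill']
print([n.name for n in wide_roads])  # ['A57']
```

The same traversal serves both domains; only the predicate changes, which is what makes the framework usable without domain-specific background knowledge.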
As stated, once an asset or a group of assets has been chosen, control units are needed which modify the properties of assets, such as their spatial and/or visual parameters. Generation of these control units through the IVI system is needed, where the user can select the parameter to change, the animation of that parameter, and the values to which the property should be modified.
To allow modification of the global lighting system, the attributes of the lights must be presented to the user so that properties such as colour, position, and direction can be changed. Assets within the environment may have their own internal lighting parameters, which must also be presented and modified if needed.
To support the modification of individual assets, an IVI system depicting all asset parameters, attached objects, and interactions is needed.
For development, and further analysis techniques, debugging IVI systems are needed for the internal components of the system: cameras, assets, global lighting, timers for rendering, GPU processes, frames per second, and others. These will provide insights into issues within the system, and can also be used as teaching aids for new users of the framework.
To allow the selection of areas within the UK, the system needs to complete multiple tasks to enable the user to select a 1 km² area tile to load from the pre-processed data. The selection of the tile will navigate the user through the OS reference scheme, from 500 km grid squares through 100 km and 10 km squares down to 1 km² area tiles. At each stage of selection, the hard disk will be scanned for areas which have been created, and the tiles available on disk will be highlighted for the user to choose. Once a 1 km² tile is chosen, the middle-out loading algorithm will load that 1 km² area of assets for viewing and interaction by the user. We assert this will allow the user to know which areas are already processed, and guide their selection of areas.
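The hierarchical narrowing can be sketched as follows (a simplification of the OS reference scheme: plain easting/northing metres are used in place of grid letters, and the on-disk layout of one file per 1 km tile is an assumption for illustration):

```python
import os

LEVELS = [500_000, 100_000, 10_000, 1_000]   # tile side lengths in metres

def tile_keys(easting, northing):
    """Return the (level, tile-origin) key at every level containing the point."""
    return [(level, easting // level * level, northing // level * level)
            for level in LEVELS]

def tile_on_disk(keys, processed_dir="tiles"):
    """Check whether the chosen 1 km tile has already been pre-processed
    (assumed layout: one file per tile named '<easting>_<northing>.bin')."""
    names = set(os.listdir(processed_dir)) if os.path.isdir(processed_dir) else set()
    _, e, n = keys[-1]
    return f"{e}_{n}.bin" in names

keys = tile_keys(easting=433_512, northing=389_244)
print(keys)
# [(500000, 0, 0), (100000, 400000, 300000), (10000, 430000, 380000), (1000, 433000, 389000)]
```

At each of the four levels the user interface would highlight only those tiles for which `tile_on_disk` reports pre-processed data, guiding the selection as described.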
For the selection of assets/objects within a scene, a user must be able to query the objects on their properties. For example, a user may wish to find a building which is fire-proof and over 20 m in height. The system must allow the querying of said properties of an object, in any order the user wishes. The query states the conditions the properties must satisfy; if any of the query parameters evaluate to false, the object will not be returned. An example query may use relational and logical operators on the properties of an object:
building.height > 20 && building.fireproof == true
The query will check that the building has a height above 20, and that the fireproof property is equal to true. This is a common querying notation. The framework must also allow a user to type the query in, and procedurally convert the text to a valid query.
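One possible sketch of this text-to-query conversion is shown below (illustrative only: the token substitutions and the use of `eval` against a restricted namespace are a minimal stand-in; a production system would use a proper parser):

```python
from types import SimpleNamespace

def compile_query(text):
    """Translate C-style query text into a Python predicate over named objects."""
    source = (text.replace("&&", " and ")
                  .replace("||", " or ")
                  .replace("true", "True")
                  .replace("false", "False"))
    def predicate(**objects):
        # Evaluate with no builtins available; only the supplied objects.
        return bool(eval(source, {"__builtins__": {}}, objects))
    return predicate

query = compile_query("building.height > 20 && building.fireproof == true")

tall_safe  = SimpleNamespace(height=25, fireproof=True)
short_safe = SimpleNamespace(height=12, fireproof=True)

print(query(building=tall_safe))   # True
print(query(building=short_safe))  # False
```

The compiled predicate can then be handed to the scenegraph search, so a typed query selects assets directly.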
In conclusion, the IVI system must present information about the assets in the 3D virtual scene and the nodes within the scenegraph structure. If a component interacts with either a node or nodes of the scenegraph, or with the model mesh, then the component's properties must be presented to the user and modifications must be allowed. The same is true for the nodes of the scenegraph and the model mesh objects. Users of the framework can then manipulate the scene in a manner which suits them, and encode their own data to be rendered within the scene by the PG of custom controllers.
The use of the IVI will provide functionality not available in alternative GISs such as Google Maps, ArcGIS, and SAVE. The IVI also allows real-time interactions with scenes, allowing the modification of properties on a frame-to-frame basis.
5.6 Software Development Life Cycle
This section discusses the need for a development life cycle model for the project. Balaji [75] compares the Waterfall model, the V-Model, and the Agile model, and states that the project itself determines the life cycle needed.
· The Agile [76], [77] life cycle should be chosen if changes are frequent and there is a need to deliver small subsets of a larger framework.
· The Waterfall model [78] should be chosen if the requirements are fully known before development begins.
· The V-Model should be chosen for larger projects whose requirements may change after the development, testing, and delivery of each phase of a project.
We state that the Agile model is a sufficient model for this project. This is due to the need to rapidly prototype sections of the framework, and to further prototype the combination of, and interaction between, multiple prototypes to generate a working prototype framework.
5.7 Conclusion
We have specified the needs of the project for the creation of a single pipeline to combine, extrapolate, infer, and generate data from available geospatial datasets. We have presented the processes needed to combine the datasets. We have also presented the novel spatial data structure and algorithms needed to combine and organise the data during visualisation and rendering. The algorithms are responsible for the generation of additional data and assets, specifically the 3D model mesh assets used for visualisations. To interact with the runtime simulation, an IVI system is needed to modify assets and objects within the scene. The IVI system is also needed to present data to the user, as well as to the developers.
Agile development lifecycle
The Agile development life cycle is chosen for the benefits its design patterns bring to projects in which unforeseen issues are expected.
Geospatial data parser
The geospatial data parser is needed to load various geospatial datasets into runtime objects for data analysis.
Geospatial data combiner
The combination of various datasets is needed to fill missing data and amend errors.
Geospatial data interpolator
Interpolation algorithms are needed due to the characteristics of the terrain maps and the model mesh generation techniques. LiDAR maps have missing data which can be generated by interpolating between known data height points.
Geospatial data inference algorithms
Inference algorithms are needed due to the characteristics of the datasets at hand. OSM contains data from which alternative data can be inferred; for example, the height of a building can be calculated by multiplying the number of floors by a default floor height value.
Data structures to organise a country's worth of data
Due to the size of the areas we wish to tackle (a single square kilometre, a country, and potentially a global mapping schema), the organisation of assets is needed. Both the location and the data type of an asset need to be encoded into a single data structure.
3D model importer
To limit issues with rendering techniques, all model mesh objects need to contain the same data required for rendering (normals, UV coordinates, tangents, binormals, etc.). This removes the runtime need to check whether a model can be rendered using a chosen rendering technique. Default data should be applied if data is missing while importing.
3D model processor
The processing of procedural model mesh objects created from geospatial data is needed. The output will be the same as a user-generated 3D model asset, containing the same data requirements for rendering.
Procedural algorithms for model mesh generation
Due to the differences between the geospatial data types (terrain, buildings, boundaries, etc.), procedural generation algorithms are needed for each type of object. The algorithms are determined by the attributes of the imported geospatial data.
Advanced rendering techniques
Advanced rendering techniques correspond to the use of an Array Indexed Shader Function structure, and also to the shader effects applied to the model objects: normal mapping, texturing, refraction, hemispherical ambient lighting, and many others.
Allow user-generated content importing
The inclusion of user-generated assets (3D model meshes) is needed to improve scene renderings.
Interaction techniques with GPU parameters
Materials used by models are needed, and are exposed to the user to allow interaction with GPU settings and material parameters.
Table 9 Specification requirements conclusion.
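The inference requirement in Table 9 can be illustrated with a toy sketch (the tag names follow OSM conventions; the 3 m default storey height is an assumed framework parameter, not a fixed value):

```python
DEFAULT_FLOOR_HEIGHT = 3.0  # metres; assumed default storey height

def infer_height(tags):
    """Infer a building height from OSM-style tags."""
    if "height" in tags:                       # trusted explicit value
        return float(tags["height"])
    if "building:levels" in tags:              # infer: floors x default height
        return int(tags["building:levels"]) * DEFAULT_FLOOR_HEIGHT
    return DEFAULT_FLOOR_HEIGHT                # fall back to a single storey

print(infer_height({"height": "22.5"}))        # 22.5
print(infer_height({"building:levels": "4"}))  # 12.0
print(infer_height({}))                        # 3.0
```

The same pattern (prefer explicit data, otherwise infer from related attributes, otherwise apply a default) applies to the other inferred properties discussed in this chapter.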
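Similarly, the interpolation requirement for LiDAR terrain maps can be sketched as filling gaps along a scan row by linear interpolation between the nearest known height points (purely illustrative; missing samples are marked `None` here):

```python
def fill_gaps(row):
    """Linearly interpolate missing (None) height samples in a scan row."""
    out = list(row)
    known = [i for i, v in enumerate(out) if v is not None]
    for a, b in zip(known, known[1:]):
        for i in range(a + 1, b):
            t = (i - a) / (b - a)              # fractional position in the gap
            out[i] = out[a] + t * (out[b] - out[a])
    return out

print(fill_gaps([10.0, None, None, 16.0, None, 18.0]))
# [10.0, 12.0, 14.0, 16.0, 17.0, 18.0]
```

A full 2D height map would interpolate across both axes, but the per-row case captures the essential operation the requirement describes.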
Given the specifications listed in Table 9, the high-level overview of the framework is shown in Figure 30. The pre-processors contribute to the generation of data which is imported by pre-processor and runtime libraries into the runtime simulation. The data is stored within separate data storage devices and combined to produce virtual scenes organised by a runtime scenegraph structure.
Figure 30 High level framework overview
Designs of the novel PVS framework and related data-structure and algorithms to overcome the issues discussed will be presented in the next chapter.