Intro to 3D Terrestrial Scanner Registration - Understanding Target-based Registration

A Disclaimer…

There are many tutorials and demonstrations online that have been developed over the years to address the sometimes-complex process of stitching (registering) 3D laser scans together (or said another way, “putting together a 3D jigsaw puzzle”). I personally started working with point cloud data for the construction industry about 7 years ago while working at another firm. I had extensive experience with CAD, both 2D and 3D drafting and cadastral mapping projects. But when I first saw the FARO Photon 3D laser scanner that our firm had purchased and experienced what it did – I immediately committed myself to learning everything I could about it. Some professionals in this industry have been working with 3D laser scan data since it first came on the market; and so…

here is my big disclaimer…

I have dug my heels in, learning this technology as deeply as I am able to. However, my experience is what it is and to those with more experience, it is my hope that this introductory article does some justice to the industry as a whole and that in some way, sheds light on this process of registration to the newcomer who may feel left in the dark on what this is all about (as I was when I first started). And for those of you reading that have much more experience than I, perhaps something will speak to you as well. I certainly appreciate feedback you veterans have, as this is certainly an industry with a lot of moving parts warranting continuous learning.

I have been using FARO products since the beginning of my 3D scanning career. However, I’ve also used Trimble, Artec, DotProduct, Leica and other instruments. I’ve processed, manipulated, and created visualizations from scan data in FARO Scene, Cyclone, CloudWorx, CloudCompare, LAStools, Rhino, SketchUp, Bentley Systems, Autodesk products and a few others. One thing I can say with certainty is that all 3D scan registration software has similar functionality. While I refer primarily to FARO Scene registration software in this article, I believe the overall concepts translate to other 3D scan processing software. And it should be noted that this is by no means an exhaustive article on the subject of registration. Rather, it covers one small topic of a much larger process.


It all begins in the field…or in the office?

Registering 3D scan data (scans) can be easy or extremely difficult. It’s not as simple as just learning the software, nor just learning the field operations that make a registration go well; rather, it’s a combination of the two. A new field tech cannot possibly know what goes on in the registration process and therefore would not be able to make sound, strategic decisions in the field. Conversely, a processing tech would not know what the field subject looks like and could have difficulty knowing how all the scans should fit together if the automatic registration options offered by the software fail. This is often a pain point in companies first adopting scan technology. It requires a lot of communication between the field and the office, and in some ways it can be easier to assign one or two people to learn the technology in a rounded fashion – both field and processing operations. One way to mitigate this communication breakdown is for the field technician to create a rough map of where all the scans were taken, which is a good practice anyway, especially when first learning this process.

 

Tolerances

Scanning from multiple locations to create a larger, contiguous point cloud requires that the software somehow find strategic ways to stitch all the scans to one another – accurately. Depending on the project, a client may set a tolerance, meaning that any measurement taken within your point cloud should be accurate to within that stated tolerance. A current industry standard for scanning architecture with a terrestrial scanner (such as a FARO Focus3D or Leica P40) is about 4-6mm or better over the entirety of the registered point cloud. Hand scanners may reach sub-millimeter tolerances, and aerial LiDAR is typically at centimeter levels. In one way, these standards have been driven by scanner manufacturers, who have attempted to replicate what an architect or engineer could achieve with hand measurements and then do just a little better, faster, and easier – hence the current disruption in the metrology industry, which looks much like a fierce, competitive race to be better, faster, easier (and cheaper) with measurement technology.

4-6mm or better sounds great, but it isn’t always easy to get that tight, especially with large scan projects requiring, say, hundreds or thousands of scans. In fact, the process has been so difficult and unpredictable with registration software that many companies continue to rely on survey control to anchor the scans. Some companies have had such bad experiences with registration software that they survey in each and every target with a total station to align the scan data, bypassing the registration algorithms altogether.

This is unfortunate, expensive and time-consuming. Yet for some, better that than delivering inaccurate data. Part of the pain point of registration software is that the algorithms, while exponentially more sophisticated than 7 years ago, still produce false positives (good registration reports, but clearly mis-registered scans). Ever noticed all-green indicator lights in the registration report along with that toilet on the ceiling of the main lobby in your point cloud? Yeah, I don’t think that’s supposed to be there…

 

Targeting and Not Targeting

There are a few techniques that I’ve found help quite a bit with registration. It’s never a bad idea, especially for a new scanning firm, to survey in your targets on the first few projects to verify your registration workflow – but over time, and with some established quality control processes in place, you can start trusting registration software more and more. You just get to help those clever algorithms figure things out a little bit. It should be noted that for large scan projects, using survey control is always a good practice. Although, depending on the coordinate system, some 3D laser scanning registration packages may not handle large coordinate values well (a floating-point precision issue) and can therefore produce mixed results or even partially corrupted data (more on that in later blogs).

In order to register scans together, the algorithms must first have at least 3 reference points that correlate between scans. Some call this a “constellation” of reference markers: if we have two scans that are to be registered together, the first scan must identify 3 geometric references (which could be a sphere or checkerboard target, a spot on a wall, a cylindrical object, a plane on a floor, etc.) that the second scan can also see – even if from different vantage points. The software then calculates the geometric arrangement of these references and attempts to match a similar arrangement in other scans.
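To make the constellation idea concrete, here is a minimal sketch (in Python with NumPy) of how 3 or more matched references determine the rigid rotation and translation that stitch one scan onto another. This is the textbook SVD (Kabsch) solution, not FARO Scene’s actual implementation, and the function name and arrays are illustrative assumptions.

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate the rotation R and translation t that map reference
    points seen in one scan (src) onto the same references seen in
    another (dst). Both are (N, 3) arrays of matched points, N >= 3."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t                                # so that dst ~ src @ R.T + t
```

With exactly 3 non-collinear correspondences the transform is fully determined; additional references over-determine it, and the least-squares fit that results is where residual “tension” comes from.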

It is also possible to register scans without using any targets at all. Field technicians do not need to place physical references so long as there is sufficient overlap between scans and enough variation in the overlapping geometry for the software to identify matches algorithmically. The scanner’s onboard sensors can also support the matching process. I’ve had projects with only two physical targets visible from a certain vantage point, and the software still made the match thanks to the inclinometer and other enabled sensors.

 

Targeting for Registration: Downright Mean, with Tension

It’s first important to understand a few terms that come out of a registration report. They indicate what the algorithms are trying to do and how they calculate point cloud matches. Finding some technique to understand these concepts can go a long way toward successful field operations as well as post-processing quality control.

Weighted Statistics: I like to think of weighted means, minimums, maximums and deviations as the software’s way of telling you that sometimes the targets are pulling the "strings" too tight and sometimes too loose. As the algorithms evaluate candidate registration solutions, they average and attempt to distribute the error evenly – sometimes known as “global softening.” The stitch is made, then the software tries to ‘soften’ the stitch so the error is averaged across the entirety of the registered scans.
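As a rough illustration only (these tension values and weights are made up, and real registration software may weight its statistics differently), a weighted mean and weighted deviation over per-target tensions could be computed like this:

```python
import numpy as np

# Hypothetical per-target tension values (mm) from a registration report,
# weighted here by how many scan pairs observe each target (also made up).
tensions = np.array([2.1, 3.4, 1.8, 9.7, 2.9])
weights = np.array([4, 3, 5, 2, 4])

weighted_mean = np.average(tensions, weights=weights)
weighted_dev = np.sqrt(np.average((tensions - weighted_mean) ** 2,
                                  weights=weights))
print(f"weighted mean: {weighted_mean:.2f} mm, "
      f"weighted deviation: {weighted_dev:.2f} mm, "
      f"max: {tensions.max():.1f} mm")
```

Note how the single high-tension target (9.7mm here) drags the weighted mean and deviation upward – that outlier is exactly the signal you act on when pruning.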

"Balancing the Ball" - Using a bit of imagination, consider a red rubber ball which represents a single scan. Now imagine 3 strings attached to that rubber ball which is suspended by these strings, with each string thumb-tacked evenly to walls in a room in a perfect triangular balance. The thumb-tacks (green diamond shapes) represent targets and the string carries the tension between the target (tack) and the scan (ball).

If we take the tacks and string and move them anywhere else in the room, some strings will become more tense and others more slack:

Fig.1) Balanced Tension

We might be left with something like this:

Fig.2) Imbalanced Tension


Notice how moving the strings and tacks around changes the tension on the strings. A very simple example, but perhaps instructive as to the role the term “tension” plays in the software. This concept supports our overall goal of targeting in the field: assisting the software in creating an accurate registration.

When placing physical targets in the field, it’s important to keep this illustration in mind, because an even and wide distribution of targets creates a more probable, balanced tension than targets that are close together. Having targets in a straight line from the scanner’s vantage point is one of the worst scenarios for registration, because the algorithms cannot easily determine the scans’ spatial orientation relative to one another. Distributing targets widely and at varying elevations is a good practice. That goes for spheres, checkerboards and other physical targets. Also, while it may seem tidy to distribute targets at an even elevation along the walls at equal distances, nothing can confuse registration algorithms more than a repeating pattern (constellation) of targets. Variety is your friend when placing targets – not too close together, and at varying distances from one another – and will save you a lot of headache in post-processing.
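One way to see why collinear targets are degenerate: the singular values of the centered target positions measure how well the constellation spans space. The check below is a hypothetical QC helper of my own construction, not a feature of any registration package:

```python
import numpy as np

def constellation_spread(targets):
    """Singular values of the centered target positions.
    targets: (N, 3) array, N >= 3. If the second value is near zero,
    the targets are nearly collinear and the scan's orientation is
    poorly constrained; if the third is near zero, they are coplanar."""
    centered = targets - targets.mean(axis=0)
    return np.linalg.svd(centered, compute_uv=False)

line = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0]])      # bad: collinear
spread = np.array([[0.0, 0, 0], [3, 0, 0], [1, 2, 1.5]])  # better: spread out
```

Targets laid out in a line give a second singular value of essentially zero, while a well-spread triangle gives a healthy non-zero value.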

In the QA/QC process, if you are using physical targets in the field, it is inevitable with larger data-sets that some scans will not come together perfectly, and the overall statistics will indicate your ‘weighted mean’ is beyond the 4-6mm tolerance you were hoping for with automatic registration. Your registration report can provide clues on how to correct this and tighten your registration tolerance. I think of it as ‘pruning’ or ‘snipping’ the strings that have too much tension in order to balance out the rubber ball. Looking over the Target Tensions tables, you may see that some targets have tension values that are quite high, descending to those that are low. At first glance, and with some visual QC, it can be clear that a few scans are having trouble coming together, which in turn is throwing everything else off. I suggest deleting the targets with the highest tensions, one at a time, re-running your registration tool (aka “Place Scans” in FARO Scene), and seeing what happens. If needed, you can always select those targets again in the planar view, adding them back into the equation. It’s a bit of trial and error, but a good place to start.
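The prune-one-and-re-run loop described above can be sketched as follows. Here `run_registration` is a hypothetical stand-in for your software’s registration step (e.g. “Place Scans” in FARO Scene) that returns a tension value per enabled target:

```python
# A sketch of the "snip the tightest string" loop. The tolerance and the
# callback are illustrative assumptions, not real software hooks.

def prune_targets(targets, run_registration, tolerance=6.0, max_removals=5):
    """Remove the single worst target, re-register, and repeat until the
    highest tension is within tolerance or the removal budget is spent."""
    removed = []
    targets = list(targets)
    for _ in range(max_removals):
        tensions = run_registration(targets)   # {target_name: tension_mm}
        worst = max(tensions, key=tensions.get)
        if tensions[worst] <= tolerance:
            break                              # everything within tolerance
        targets.remove(worst)                  # "snip" the tightest string
        removed.append(worst)                  # record it so it can be re-added
    return targets, removed
```

In practice each re-registration and the visual QC are done by hand; the point of the sketch is simply to remove only the single worst target, then re-evaluate before removing more.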

Note that if you are using top-down or cloud-to-cloud registration for any reason, the target tensions may increase quite a lot since the software is now ignoring physical targets and trying to make the best matches using the point cloud’s geometry only. My workflow for registration in a broad sense is to use targets first, prune, get it as tight as possible, then run cloud-to-cloud on what remains. This produces good results in almost every scenario.

As a parting thought: even when targets are placed well in the field, several optical anomalies can occur in the automated target identification and registration process. When I teach registration workshops to newcomers, after the initial pre-processing of the scans (before “Placing Scans”), I walk technicians through a “target validation” process. Target validation means opening each and every scan in planar view to ensure the software has identified all the physical targets placed in the field, and deleting mis-identified targets such as those cast in reflections, water, or other objects (like my round head).

It’s all very interesting…

 

Your friend in 3D,

ToPa 3D~

 

Duje Kalaj?i?

Technician at laboratory at Faculty of Civil Engineering, Rijeka

7y

Great work! Thanks!

Paul Tice

CEO | Reality Capture Consultant | LinkedIn Learning Author

7y

I'll check it out - thanks Eddie


Very thorough. We could talk for days about FARO registration. I started with them in ’06; with Scene v3 we introduced “filters”! Can you imagine the data without these automatic default filters applied? It’s come a very long way, but there are still some things you should know. Instead of just posting on the forums and LinkedIn, I’m hoping to work with folks like you on my new platform to create an organized repository accessible to everyone.

Jared M.

Surveyor and Director at Sova Surveys Ltd | Customer-focused Service | Quality product for great Value

7y

Nice article Paul. I like your ball and string metaphor. I can tell from your descriptions of "green fit" and "top-down placement" that you use FARO Scene. Even so, the same principles are applicable to the other software suites. If I may add a note regarding cloud-to-cloud registration (scanning without reference targets): it’s often very helpful to crop out all the "scatter" points and let the software use only the remaining points for the overlap calculations. For example, if you were scanning an external building elevation with vegetation around it, and you had two scans to stitch together, your focus might only be on the alignment of the building in both scans. The software obviously doesn’t know this, so it tries to check all standpoints for overlap. There will be good (and easy) overlap calculations on the building, but when the software tries to factor in the vegetation (which will most likely move in a gentle breeze, for example) it will pick up these variances and perform the same "averaging" you mentioned. This averaging may affect the wall alignment ever so slightly. By cropping out the vegetation and leaving only the wall points available for calculation, the fit will be much more accurate.

Mark Mayers

Looking for what comes next

7y

Hi Paul, another great article. Sorry we didn’t get to talk while I was with Euclideon. I guess I can now give you an independent view from inside and out, if you’re still interested?

