The number of choices for “Point Clouds,” “Reality Capture,” or, more recently, “Digital Twins” has come a long way in recent years, and the landscape changes more every day. However, like many of the more powerful tools within the visualization realm, the appropriate workflow isn’t one-size-fits-all; it depends on the specific situation at hand. Therefore, figuring out a good path forward isn’t always straightforward. Today we’ll attempt to offer some layman’s guidance on the various applications of this technology available today.
I heard my cell phone does scanning now, so I don’t need an expensive scanner anymore, right?
Before embarking on a reality capture effort, one of the first questions to ask is “What level of information do we need out of this effort?” Some scans are purely for showcasing a space, and dimensional accuracy isn’t particularly important. This type of scanning is very popular in real estate and office planning, and in this space we often find folks using smartphones and tablets to scan small spaces, with success for their application.
On the other side of the spectrum we see scanning to document a space as accurately as possible. Applications here include crime or accident scene documentation, dam inspection, and prefabrication of tie-in or replacement portions of mechanical piping or duct systems, to name a few. In these applications, the dimensional information that can be gleaned from the clouds is paramount. The scanners that provide this level of documentation often cost more and take a little longer per scan, as they capture a much higher level of resolution and accuracy.
At yet another level of the technology are drones capturing geographical scans of very large outdoor areas, identifying topography and/or real-time conditions of a construction site, desert race course, or military battlefield.
I have an area identified for capture and I have my device. Now what?
The process for most levels of reality capture (excluding drones) involves the following “basic” steps. Obviously, the amount of work and/or expertise required within each step varies, but at a high level it goes like this:
First we have the “shooting” of the cloud. This is where a person sets a scanning device of some sort up on a tripod or mount at a spot in the target space, hits “go,” and then waits as the device emits LiDAR (“light detection and ranging,” or “laser imaging, detection, and ranging”) or infrared signals and bounces them off everything it can see in 360 degrees. In this step, the “what it can see” part is important, and one must think about where to put the device in order to maximize its line of sight. Once a location is complete, the person moves the device some distance away and repeats. The distance between scans, and the total number of scans required to capture the entire target space, depend on the device being used. Typically, the higher-end the device, the farther it can capture information from each location, hence the fewer total scans required to finish, but each of these longer-range scans will likely take longer to complete. Elapsed time per scan can range from less than a minute up to 20 minutes. The distance at which points can be captured from each location ranges from a few feet with a cell phone to hundreds of feet with a high-end Faro or Leica device.
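The trade-off above, longer reach per setup versus more time per scan, can be sketched as back-of-the-envelope arithmetic. The ranges, times, and 50% overlap factor below are illustrative assumptions for an open area, not vendor specifications:

```python
import math

def estimate_scan_effort(area_sqft, effective_range_ft, minutes_per_scan):
    """Rough estimate of scan count and total field time for an open area.

    Assumes each setup covers a circle of the device's effective range,
    padded ~50% for overlap and obstructions (an illustrative fudge factor).
    """
    coverage_per_scan = math.pi * effective_range_ft ** 2
    scans = math.ceil(area_sqft / coverage_per_scan * 1.5)
    return scans, scans * minutes_per_scan

# Illustrative comparison for a 10,000 sq ft space:
phone = estimate_scan_effort(10_000, 15, 1)       # short range, fast scans
high_end = estimate_scan_effort(10_000, 150, 10)  # long range, slow scans
print(phone, high_end)
```

In practice, obstructions and line-of-sight requirements dominate real scan planning, but the arithmetic shows why device range and per-scan time pull total effort in opposite directions.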
The next step in the process is referred to as “registration.” This is the process of stitching together all of the individual scan worlds into a single comprehensive “point cloud.” Again, depending on the workflow being applied, this process varies, but most current-era workflows do the registration in real time in the field while the individual scans are being shot. Typically, a tablet can be connected to the scanner via Bluetooth, allowing the operator to see the environment being built as they move through the space. When a device is capable of registering on the fly, it typically requires roughly 20% of the points in a new scan location to overlap with a previous scan location. It uses this overlap to determine where the new space sits relative to the other spaces and aligns the new space accordingly. The total number of scans can range from 2 or 3 for a small room to several hundred in a large manufacturing area.
Other workflows use targeting of some sort, where the person must place targets in the field that are later referenced by the software to aid in a registration process done back in the office once the field scanning is complete. Ten years ago, targeting was the only dependable way to register a cloud, but many current-era workflows have evolved beyond this requirement and use the on-the-fly process described above, saving time.
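At its core, aligning one scan onto another is a rigid-body math problem: find the rotation and translation that best maps the overlapping points of the new scan onto the reference. Here is a minimal sketch using the classic Kabsch/SVD solution, assuming matched point pairs are already known; real registration software also has to *find* those correspondences (e.g., via ICP), which is the hard part this sketch skips:

```python
import numpy as np

def align_scans(overlap_ref, overlap_new):
    """Rigid alignment (rotation R + translation t) of a new scan onto a
    reference scan, given matched (N, 3) point arrays from the overlap.
    Classic Kabsch algorithm via singular value decomposition."""
    c_ref = overlap_ref.mean(axis=0)
    c_new = overlap_new.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (overlap_new - c_new).T @ (overlap_ref - c_ref)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_ref - R @ c_new
    return R, t  # apply as R @ p + t for each point p of the new scan

# Demo: a "new scan" that is the reference rotated 90° about Z and shifted
ref = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
new = ref @ Rz.T + np.array([5., 2, 0])
R, t = align_scans(ref, new)
recovered = new @ R.T + t
print(np.allclose(recovered, ref))  # True
```

This also makes the ~20% overlap requirement intuitive: with too few shared points, there isn’t enough information to pin down a unique rotation and translation.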
The point cloud makes the CAD model for me right?
A big misconception with reality capture is that coming out of the field process, we are left with a CAD drawing or BIM model that we can automatically open in Revit, SolidWorks, AutoCAD, etc. and work with natively. This typically isn’t the case.
The point cloud (when applied correctly) provides a modeler with sufficient information to build 3D geometry and/or families, but this process isn’t fully automatic. There are certain exceptions with some of the more advanced workflows, where the registration software can “sense” certain steel beam and pipe sizes in the point data and place a piece of geometry in the space those points occupy. The software references AISC or ASTM documented standards to infer what it is seeing in the field. Unfortunately, for many of the “things” the scanner sees in a real-world environment, the software is unable to determine what they are. As such, a modeler still needs to interpret what they are seeing in the cloud and model the appropriate geometry to represent the points as required.
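The “sensing” idea boils down to matching a dimension measured in the cloud against a table of standard sizes. A minimal sketch of that matching step, using the standard outside diameters from ASME B36.10 for a handful of nominal pipe sizes (the tolerance and matching rule are illustrative simplifications of what real feature-recognition software does):

```python
def infer_pipe_size(measured_od_in, tolerance_in=0.25):
    """Match a diameter measured from point-cloud data to the nearest
    standard nominal pipe size (NPS), or return None if no close match."""
    standard_od = {  # NPS (inches) -> actual outside diameter (inches)
        1: 1.315, 2: 2.375, 3: 3.5, 4: 4.5, 6: 6.625, 8: 8.625,
    }
    nps, od = min(standard_od.items(),
                  key=lambda kv: abs(kv[1] - measured_od_in))
    if abs(od - measured_od_in) <= tolerance_in:
        return nps
    return None  # no confident match; leave it for a human modeler

print(infer_pipe_size(4.48))  # 4 (close to NPS 4's 4.5" OD)
print(infer_pipe_size(5.6))   # None (falls between standard sizes)
```

The `None` branch is the honest part of the sketch: when a measured shape doesn’t fit a catalog entry, the software gives up, which is exactly why a modeler’s interpretation remains necessary.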
Fortunately, within the point cloud itself, a person can obtain dimensions and other measurements as required to reach their objectives without “modeling” everything captured. How much or how little modeling is required depends on the objectives of a specific application. A host of software supports working with cloud information, ranging from web-browser-based applications up to brand-specific licensed software from companies like Autodesk, Leica, Faro, and Matterport, to name a few.
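Those dimension take-offs are ultimately simple geometry on picked points. A trivial sketch (the picked coordinates and units are hypothetical):

```python
import math

def point_distance(p1, p2):
    """Straight-line distance between two picked cloud points (x, y, z)."""
    return math.dist(p1, p2)

# e.g., checking a duct run between two picked corner points (units: feet)
print(point_distance((0.0, 0.0, 8.5), (12.0, 0.0, 8.5)))  # 12.0
```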
In closing, the value of reality capture when trying to engineer, understand, and communicate within an existing space is undeniable. The workflows have simplified greatly over the last few years, as have the costs of entry to procure the equipment and software. If you are in an industry where you have a need to know “what is”, some form of this workflow should be in your arsenal!
—Jim Counts, St. Onge Company