Photogrammetry is the process of running a collection of photos through specialized software to produce a realistic, textured 3D mesh that artists can modify and add to 3D scenes. It allows 3D artists to use photography of real-world objects to produce realistic representations of those objects, and it has the potential to produce higher-quality scenes than manual 3D asset creation. We've spent some time investigating photogrammetry and assessing its potential role in creating our Build 3D scenes.

The Process - Capturing Photos

There are many variables to consider when photographing the object to be converted, especially lighting:

  • Ideal conditions are an overcast day outdoors, or flat, even lighting on the subject in a controlled indoor environment.
  • The subject should have as few shadows and hot spots as possible. This reduces the editing needed for the final model and texture.
  • The lighting and the camera settings should stay the same throughout the shoot if possible. Changes in either can compromise the scan entirely or create unnecessary cleanup work.

Once you have your lighting established, you can follow these guidelines while gathering photos of your subject to ensure consistent results:

  • Photos should have about 40 percent overlap with adjacent photos.
  • Take photos in rows around or along the subject if possible.
  • Take at least three rows of photos, with every row at a different height/angle while still maintaining 40 percent overlap with the adjacent row.
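As a rough sanity check, the 40 percent overlap guideline above can be turned into an estimate of how many photos a single ring around the subject requires. The sketch below is a simplification: it assumes the camera orbits the subject at a fixed distance and that overlap maps linearly onto the camera's horizontal field of view. The 50-degree FOV is a hypothetical example value, not a recommendation from this post.

```python
import math

def photos_per_ring(horizontal_fov_deg, overlap=0.40):
    """Estimate how many photos one ring around a subject needs.

    Assumes the camera orbits at a fixed distance, so each photo
    covers roughly `horizontal_fov_deg` of the orbit, and each
    successive photo must share `overlap` of that coverage.
    """
    # Each new photo only advances by the non-overlapping portion.
    step_deg = horizontal_fov_deg * (1 - overlap)
    return math.ceil(360 / step_deg)

# Example: a hypothetical 50-degree horizontal FOV with 40% overlap.
print(photos_per_ring(50))  # → 12 photos for one full ring
```

Multiplying by the number of rows (at least three, per the guideline above) gives a quick lower bound on the size of the shoot.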

The Process - Generating the Texture

There are multiple photogrammetry programs available, such as RealityCapture or, what we used, Agisoft PhotoScan. While each has its own guidelines for stitching your photos together, we found these tips generally helpful:

  • Don’t use the highest quality setting when generating the first round of scans, as this can take an unnecessarily long time.
  • Keep in mind the texture limit for the final product. If that limit is 1024 pixels, there is no need for the scan to produce an 8000-pixel texture.
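To make the second tip concrete, the quick arithmetic below compares the raw memory footprint of an oversized scan texture against a 1024-pixel target. The 8000-pixel figure comes from the tip above; assuming 4 bytes per texel (uncompressed RGBA) is our own simplification, and real engines typically use compressed formats.

```python
def texture_size_mb(resolution_px, bytes_per_texel=4):
    """Approximate uncompressed memory of a square texture, in MiB.

    Assumes 4 bytes per texel (8-bit RGBA, no compression or mipmaps).
    """
    texels = resolution_px * resolution_px
    return texels * bytes_per_texel / (1024 * 1024)

scan = texture_size_mb(8000)    # oversized scan output
target = texture_size_mb(1024)  # final product's texture limit

print(f"8000 px: {scan:.1f} MiB, 1024 px: {target:.1f} MiB")
# → 8000 px: 244.1 MiB, 1024 px: 4.0 MiB
```

The oversized texture carries roughly 60x more data than the final product can use, which is wasted processing time during the scan and wasted disk space afterward.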

Here's an example of a textured mesh for the base of a tree we built using photogrammetry:

An image of the 3D tree base with its mesh highlighted

The mesh produced by photogrammetry

An image of the fully textured 3D tree base

The fully textured object

Further Exploration

In our 3D workflow at CBRE Build, we are always looking for new techniques to create 3D environments that are as realistic and compelling as possible. In commercial real estate and architectural visualization, the primary goal of the 3D model is to reproduce as accurately as possible what an existing or soon-to-be-built space looks like. In most of our projects, photorealism is emphasized over the more atmospheric or subjective qualities prioritized in other 3D environments, such as those created for games or advertisements.

Typically, the CBRE Build 3D team uses traditional asset creation techniques for interactive environments. One challenge with this workflow is that 3D assets built for interactivity must be lightweight, and therefore have a low poly count and small texture size compared with assets used for 2D architectural visualization renderings. The result can sometimes be that scenes look too “video gamey” or “like SimCity”; put another way, the 3D assets do not look realistic enough. Photogrammetry could be incredibly helpful in addressing this critique and in striking a balance between lightweight, efficient models and realism. Using photography of real-world objects as the source for content in our 3D scenes could make the CBRE Build 3D scenes more convincing while maintaining fully immersive environments.