At CBRE Build, we have the opportunity to test out new technologies that push the boundaries of what 3D content is capable of delivering. One such technology is Augmented Reality (AR), which is the use of computer-generated imagery to "augment" a real world environment. Put simply, you can have a 3D object viewable through the camera on a device. This blog post will detail why we chose to make two different AR apps, and the development process from the design side.
To explain the advantage of AR, we must first talk about Virtual Reality (VR), which is a fully realized 3D environment viewable through a headset. The initial hype around VR was tremendous when robust pieces of hardware like the Oculus and the HTC Vive launched, with numerous requests to translate our scenes into that space. However, we quickly learned that it was not an easily shared experience. Between a cumbersome, bulky headset that only one person can engage with at a time and the potential for a dizzying amount of lag in the viewer, VR was shaping up to be a quickly passing phenomenon for commercial applications.
A few months later, a similar buzz started to develop around AR technology, particularly around Apple's ARKit platform. Even though it seemed to be on a similar trajectory to the VR hype, AR offered some key differences that were important for our 3D services. Simply not having a headset allows the experience to be shared amongst a group of people. But even more crucially, AR apps offer a more scalable platform and much greater distribution potential than VR. Instead of carrying around a large box full of computer equipment (and subsequently having to set it up), the AR apps can be downloaded directly to your iPhone or iPad and immediately consumed. There's also the advantage of people already being acclimated to touch controls on their mobile devices. The VR setup was always custom, which required an initial tutorial to learn.
The key difference between AR and VR, however, is that AR superimposes 3D content onto the real world. This allowed us to create a more unique type of experience than our standard Build scenes. A client could be physically standing in an empty building and see their office come to life directly on their device, accurately tracked and properly sized to their position.
We chose to create the two AR apps in the game engine Unity, using Apple's ARKit. Unity has both the built-in functionality that works smoothly with AR development and the graphical power we needed to make the scenes look great.
That said, we had to consider the size limitations of the types of spaces you can export to mobile devices. The biggest limitation is the polygon count: exceeding 500,000 polygons will slow down the system and cause the AR tracking to become less accurate (the scene will start sliding around in your camera). When developing scenes for Build, we are always looking to keep file sizes small so scenes can run on a variety of computers. As a point of context, a simple office chair might be somewhere in the range of 1,200 polygons. In a moderately sized office this number multiplies dramatically once every type of furniture available is added, placing us well above the 500,000 limit.
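The budget math here can be sketched as a quick sanity check. The 500,000-polygon ceiling and the ~1,200-polygon chair are the figures from above; every other asset name and count is purely illustrative:

```python
# Hypothetical polygon-budget check for a mobile AR scene.
# Only the 500,000 ceiling and the ~1,200-polygon chair come from
# our testing; the other assets and counts are assumed examples.

POLY_BUDGET = 500_000

# (asset name, polygons per instance, instance count) -- assumed values
furniture = [
    ("office chair", 1_200, 120),
    ("desk",         2_500, 100),
    ("monitor",        900, 100),
    ("sofa",         4_000,  10),
    ("plant",        1_500,  30),
]

total = sum(polys * count for _, polys, count in furniture)
print(f"total polygons: {total:,}")  # 569,000 with these assumed counts
print("over budget" if total > POLY_BUDGET else "within budget")
```

Even with modest per-asset counts, a full office blows past the ceiling, which is what pushed us toward the two options below.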
We came up with two options: we could either reduce polygons in our scenes until they hit the threshold required to run on mobile, or limit the experience to a few well-crafted areas at a time. We decided to try each option in a separate app.
The first app works by choosing an area type (in our demo's case, an office breakout space), setting two points from a top down floorplan of that space, then using the device's camera to place those same two points in the real world. This appropriately sizes the space to real world coordinates and places it correctly against the chosen wall types so the user can now walk around their new 3D space in the real world. Here is a video that demonstrates this functionality:
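Mapping two floorplan points onto two real-world points determines a uniform scale, a rotation, and a translation for the whole scene. Below is a minimal 2D sketch of that idea (function and variable names are our own; this is not the app's actual code, which works in Unity's coordinate space):

```python
import math

def two_point_alignment(plan_a, plan_b, world_a, world_b):
    """Derive the uniform scale, rotation, and translation that map the
    floorplan segment plan_a -> plan_b onto the real-world segment
    world_a -> world_b. Points are (x, y) tuples. Illustrative sketch."""
    pax, pay = plan_a
    pbx, pby = plan_b
    wax, way = world_a
    wbx, wby = world_b

    # Scale: ratio of the real-world distance to the floorplan distance.
    plan_len = math.hypot(pbx - pax, pby - pay)
    world_len = math.hypot(wbx - wax, wby - way)
    scale = world_len / plan_len

    # Rotation: difference between the two segment headings.
    theta = (math.atan2(wby - way, wbx - wax)
             - math.atan2(pby - pay, pbx - pax))

    def transform(p):
        # Scale and rotate about plan_a, then translate onto world_a.
        x, y = p[0] - pax, p[1] - pay
        xr = scale * (x * math.cos(theta) - y * math.sin(theta))
        yr = scale * (x * math.sin(theta) + y * math.cos(theta))
        return (wax + xr, way + yr)

    return scale, theta, transform
```

Once the transform is known, every object in the floorplan can be pushed through it, which is why the space lands correctly sized and oriented against the chosen wall.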
Initially we planned on doing a full office with this app, but it turned out that individual spaces were a better fit for several reasons. For one, we could retain high-quality graphics that would hold up to the scrutiny of a camera examining them. Rather than conservatively choosing which objects would get a higher polygon count (and thus more detail), we could contain the area and represent each object properly. Having a contained area also accounts for placement problems that AR might face. Suppose that when setting the two placement points, the AR space ends up off center. The scene would need to be reset and replaced, or else the objects wouldn't be properly situated in the real world. This problem only grows with a larger office space, with desks potentially ending up stuck inside walls. And finally, a smaller area makes for a more fluid, faster-moving app. Rather than loading in an enormous space, a user can load smaller areas, test out different types, and then try the software in other parts of the space.
The second app represents a more information-driven experience. Users move the camera on their device to scan a floorplan (in this case, a large industrial space), which corresponds to a 3D model loaded in the app. A reticle appears and allows the user to place the model on a real world surface. Once the model loads, you can manipulate it by spinning it around or changing its size. Scattered around the space are interactive dots that list the core features of the space (parking lot size, number of offices available, etc.). Below is a video that shows off the app's functionality:
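The spin-and-resize manipulation boils down to keeping two values on the placed model and updating them from gesture deltas. A minimal sketch, with class and limit values that are purely illustrative rather than the app's actual API:

```python
# Illustrative model-manipulation state for the second app.
# Names and clamp limits are assumptions, not the shipped code.

class PlacedModel:
    def __init__(self):
        self.scale = 1.0        # uniform scale factor
        self.yaw_degrees = 0.0  # rotation about the vertical axis

    def pinch(self, gesture_scale, min_scale=0.1, max_scale=10.0):
        """Multiply the current scale by the pinch gesture's factor,
        clamped so the model can't vanish or grow without bound."""
        self.scale = max(min_scale,
                         min(max_scale, self.scale * gesture_scale))

    def rotate(self, delta_degrees):
        """Spin the model about its vertical axis, wrapping to [0, 360)."""
        self.yaw_degrees = (self.yaw_degrees + delta_degrees) % 360.0
```

Clamping the scale matters in practice: an unbounded pinch can shrink the model past visibility or blow it up far beyond the tracked surface.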
The industrial spaces we typically receive often come in far above the 500,000-polygon limit. Because the point of the app is to look at a space from a holistic point of view (in contrast to the office breakout app, which is far more detail oriented) and get key pieces of information, it became a much easier task to allocate polygons to the areas that needed them most. In this case, concentrating them on the industrial racks and the exterior walls kept the scene looking great, while dropping the polygon count on other assets let the space load much faster.
I think these apps demonstrate that AR can provide a meaningful experience, one different enough to complement our other 3D offerings. The ability to physically walk around a 3D space, coupled with the ease of loading the apps onto an iPhone or iPad, translates into a new way to engage with 3D content. These initial experiments were important for seeing what the technology can offer, and I believe we can continue crafting them into something that delivers a truly unique experience in the commercial real estate world.