The Journey Before Looq AI

My Story

Author: Dominique Meyer, PhD – Co-founder and CEO Looq AI

The question I get most commonly is: how did I start Looq AI, and why? How did I come up with the idea of applying this technology to the survey and utility space?

The short answer is, I didn’t plan any of this, and it didn’t come overnight. It was a combination of decades of passion for photography and exploring how I could apply the fundamental ability to take an image of the world and transform it into a contextual understanding in multiple dimensions.

Over this period, I have evolved my thoughts on how we should document the world effectively. This journey has been about changing workflows, and I am sure it will continue to evolve into a full suite of processes and products that we are excited to lead.

First, I want to take you through some past events: my passion for imaging, and how those experiences shaped the ability to build what is fundamentally a new technology, a way to effectively, quickly, and reliably capture information about our built world. This journey includes my past work taking images of our universe, galaxies, and planets; documenting ancient sites in the jungle and archaeological excavations in caves in Italy; and building systems that helped various institutions, including government bodies, fundamentally change how they document some of the most difficult-to-image assets in the world.

I am excited to share a little bit about this past because I feel it’s important to share the journey that has evolved into what our technology is today.

Photography of the World

To begin, I want to take you through my passion for photography. Back in the day, I used to travel a lot, and I still do. I traveled the world from Nepal to Mexico to the US while living in Switzerland. On my journeys, I was equipped with a DSLR camera, specifically a Canon 5D system.

I discovered a particular photography style that essentially began my journey into multi-image capture: panoramic imagery. Typically, a camera takes a picture in a single direction, often missing what is around the top, bottom, and sides. This limitation affects the viewer’s sense of being in the scene.

One of my passions was creating panoramas of landscapes and scenes that captured a much larger horizontal and vertical area, immersing the viewer into the scene.

To achieve this, I would take multiple images, one forward, then to the sides, the top, the bottom, and the corners, and spend days stitching them together. This process, known as panorama stitching, involved finding features in the images, aligning them, blending them smoothly, fixing any exposure discrepancies, and creating stunning large-scale landscapes. The benefit was the high resolution of these images, often nearly a gigabyte of pixel data, allowing for detailed prints.
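The exposure-fixing step of that stitching process can be sketched in a few lines. This is an illustrative toy, not the actual pipeline I used: it feather-blends two overlapping 1-D brightness strips so a seam with an exposure mismatch fades smoothly from one image into the other.

```python
# Illustrative sketch of the linear "feather" blend used when stitching
# two overlapping image strips. Real stitching pipelines also handle
# feature matching and warping; this shows only the exposure-blending
# idea, on 1-D brightness profiles instead of full images.

def feather_blend(left, right, overlap):
    """Blend two 1-D brightness strips that share `overlap` samples.

    In the overlap region the weight shifts linearly from the left
    strip to the right one, hiding exposure discrepancies at the seam.
    """
    out = list(left[:-overlap])                # left-only region
    for i in range(overlap):                   # blended seam
        w = (i + 1) / (overlap + 1)            # weight ramps 0 -> 1 across the seam
        out.append((1 - w) * left[len(left) - overlap + i] + w * right[i])
    out.extend(right[overlap:])                # right-only region
    return out

# Two strips with a 4-sample overlap and a slight exposure mismatch:
left = [100, 100, 100, 100, 100, 100]
right = [110, 110, 110, 110, 110, 110]
blended = feather_blend(left, right, overlap=4)
print(blended)  # brightness ramps smoothly from 100 to 110 across the seam
```

Production stitchers do the same thing in two dimensions, often with multi-band blending, but the principle of weighted cross-fading in the overlap is the same.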

As I pushed the boundaries of what was known as gigapixel imaging, capturing billions of pixels in a single image, I started building intricate robotic systems to capture full 360-degree environments. One notable system was my stereo gigapan system, which involved two Sony QX1 sensors paired with 200mm telephoto lenses. These would orbit around a tripod system, capturing about 400 images to fully cover the sphere of the scene at a resolution of 20 megapixels per image, with stereo disparity.

This setup allowed for a virtual simulation of being in the scene. It made me realize that if we build camera systems correctly and move them around a space, we can document the world in dimensionalities that create a sense of presence otherwise not possible with a single image.

Photography of the Universe

An exciting chapter of my photography passion was the night sky. I was very lucky to have the opportunity to play around with telescopes, both ones I built and those I had access to at my high school in Switzerland.

I figured out how to assemble the best optical system, whether it was a refractor telescope or a reflector with mirror-based optics. These were paired with sensing and motion tracking on a tripod to take long exposure images. As you know, the Earth spins, and as a consequence, the stars in the sky appear to move over time. To compensate for this motion, I used a robotic system, typically an equatorial mount or an autonomous system, allowing the camera sensor to be exposed for multiple minutes at a time.
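The tracking requirement is easy to quantify with a back-of-envelope calculation. The 1000 mm focal length and 5 µm pixel size below are illustrative assumptions, not my actual setup; the point is how quickly an untracked star trails across a single pixel.

```python
# Why tracking mounts are needed: the sky drifts ~15 arcseconds per
# second, and at telescope focal lengths that crosses a pixel fast.

SIDEREAL_DAY_S = 86164.1  # one Earth rotation relative to the stars
DRIFT_ARCSEC_PER_S = 360 * 3600 / SIDEREAL_DAY_S  # ~15.04 arcsec/s of sky motion

def max_untracked_exposure_s(focal_length_mm, pixel_size_um):
    """Longest exposure before a star trails by more than one pixel.

    plate_scale = 206.265 * pixel_size / focal_length gives arcsec per
    pixel; dividing by the sky's drift rate gives the time to cross it.
    """
    plate_scale_arcsec = 206.265 * pixel_size_um / focal_length_mm
    return plate_scale_arcsec / DRIFT_ARCSEC_PER_S

# Hypothetical setup: 1000 mm telescope, 5 um pixels.
t = max_untracked_exposure_s(1000, 5.0)
print(f"{t:.3f} s before a 1-pixel trail")
```

At roughly a fifteenth of a second per pixel of trail, multi-minute exposures are impossible without a mount that cancels the Earth's rotation.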

Why expose for so long? The amount of light captured through a telescope is very small compared to daytime photography. By doing this, I was able to capture images of different galaxies, planets in our solar system, the moon, and the sun, leading to my second passion: astrophotography.

I researched how to optimize image processing for low light and long exposure, combining wavelengths of light not visible to the human eye but detectable by the sensors into stunning image composites. These are some of the images I am most proud of. Additionally, I was passionate about sharing this ability to document the night sky with the local community. I hosted events to show families and kids what it looks like to view and capture images of the night sky through a telescope.
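One standard technique behind that low-light processing is stacking many frames: the signal adds coherently while the random noise averages down by roughly the square root of the frame count. A synthetic sketch of the effect (the numbers are made up for illustration):

```python
import random
import statistics

# Why stacking helps in low light: averaging N noisy frames keeps the
# signal but shrinks the noise by about sqrt(N). Synthetic demo only.

random.seed(42)
TRUE_SIGNAL = 100.0   # hypothetical pixel brightness of a faint object
NOISE_SIGMA = 10.0    # per-frame noise

def noisy_frame():
    """One simulated pixel reading: true signal plus Gaussian noise."""
    return TRUE_SIGNAL + random.gauss(0, NOISE_SIGMA)

def stack(n_frames):
    """Average n_frames readings of the same pixel."""
    return sum(noisy_frame() for _ in range(n_frames)) / n_frames

single = [noisy_frame() for _ in range(2000)]
stacked = [stack(64) for _ in range(2000)]

print(statistics.stdev(single))   # ~10: one frame's noise
print(statistics.stdev(stacked))  # ~10 / sqrt(64) = ~1.25 after stacking 64 frames
```

The same square-root law is why combining dozens of long exposures reveals galaxy structure that no single frame could show.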

Undigging the World's Ancient Wonders

Fast-forward a few years, and I landed in San Diego, CA, where I was doing my undergrad at the University of California, San Diego. Within a few months, I was eager to join a research lab and do something productive with my time outside of classes. One day, while walking down Engineering Hall, I found a stand where a research lab was recruiting a UAV pilot—a drone pilot—to help build drones and fly them in archaeological environments.

Since childhood, I had always enjoyed flying radio-controlled planes and helicopters, and later drones. This led to my first endeavor in a research lab. I was selected to become one of the pilots for the lab, and a few weeks later, I was sent to Guatemala for an expedition to help archaeological researchers document historical Maya pyramids about 50 kilometers west of the Caldera site.

At the time, we built drones ourselves, as the technology available today, like DJI components, wasn’t accessible. Everything was experimental. We assembled sensors and motors with flight controllers we built in the lab to fly above the jungle and take images of the pyramids to create 3D models. For archaeologists, who had been using measuring tapes, pen and paper, and sometimes total stations, creating a 3D model from a drone was groundbreaking.

Creating 3D models is essential in archaeology to put a site in context. It helps understand how civilizations lived, the purpose of buildings, and the layout of cities. Our technology helped reconstruct and estimate what the sites looked like before the jungle covered them. We slept in hammocks in the jungle, dealing with snakes and jaguars, which made me appreciate the importance of reliable, robust technology.

Soon after, I wrote a research grant for the National Geographic Society to use drones to capture images in multispectral wavelengths. The hypothesis was that the Maya’s construction materials affected the ground’s water content and chemicals, which in turn affected the vegetation’s chlorophyll levels. We aimed to detect and predict undiscovered Maya sites using hyperspectral or multispectral imaging from drones.

In 2014, we identified three sites in the northeast Yucatan Peninsula with signatures in the multispectral imaging that we believed were actual sites. We later confirmed these sites, finding artifacts and ornaments around the pyramids. This was my first discovery of a new archaeological site.

In Maya archaeology, documenting structures underground or in caves is also crucial. Some cave sites in the southeast Yucatan Peninsula had shrines at risk of looting. We aimed to document these sites to have a record for archaeological research. We built imaging technologies capable of capturing these shrines in 3D, using images rather than the then-emerging lidar technology, to get a visual representation of the ornaments and colors.

This innovation in 3D imaging from handheld and drone-based photography inside caves was a significant step in realizing the potential of leveraging low-light conditions to create powerful imaging tools and methodologies for archaeological sites.

Gallery: Cave Imaging – Quintana Roo

Building Tech to Assess 91K Dams in the US

When you think about PhD programs, you probably imagine researchers sitting behind desks, completing diligent, repetitive scientific experiments to prove new methodologies, processes, and discoveries in their field. However, I was fortunate to be part of a research lab focused on applied research. We combined fundamental technological engineering research with real-world scenarios to bring our lab innovations to practical use. This approach was wonderful because I could see that what I built was actually useful and would change how we work.

One of the projects we worked on was in partnership with the Army Corps of Engineers, which is responsible for many government-owned civil assets, including dams. There are over 91,000 dams in the United States, many of which are very old, built 50, 100, or more years ago. These structures have a life expectancy, but when they reach that mark, it’s not always feasible to tear them down and build new ones. Instead, we need to assess their structural integrity and determine what can be done to keep them stable and strong.

We were tasked with building sensory and imaging systems to inspect these dams from the inside to understand if they were structurally sound. Specifically, we focused on the gates that control water flow and the concrete structure, looking for changes due to settlements or tectonic movements.

To inspect a dam, you can enter from the top, where the water is stored, or from the bottom, where the water flows out. Often, dams are very inaccessible because they are large concrete structures. We had to enter through the bottom, navigating tunnels between 50 and 500 meters long, either walking up from a downstream river or using a boat to reach the inside of the dam where the gates are located.

These gates are on a pulley system that controls water flow, crucial for managing water levels and preventing overflow during storms. We built a camera system to navigate the outflow conduit of the dams and capture high-resolution images of the gates and concrete structure. These images allowed us to inspect for rust, warping, and concrete deformation, comparing them to the original hand-drawn blueprints from the early 1900s.

Our camera system, which captured over 200 megapixels and was equipped with LED rings, was mounted on a small metal boat. As we pulled it through the dam’s tunnel, it continuously captured images. We later used software tools to reconstruct these images into 3D point clouds and models representing the geometry of the structures.

One challenge we faced was geometric drift when aligning many images along a long, linear path, which warped the reconstruction. We carefully designed both the hardware and software systems to ensure the quality and reliability of the data. Using lidar systems in such environments was difficult due to the repetitive structure, making it hard to register scans together. However, with the power of images, we fundamentally changed how we documented these outflow tunnels, creating a new way of capturing 3D data for structures with limited information.

Helping to Build 3D Vision for Self-Driving Cars

By the end of my PhD, I was approaching the ability to build intricate multi-camera systems capable of capturing various environments. However, all this data needed to be processed after the fact because we were dealing with hundreds of gigabytes of data for each capture. Some applications required real-time data processing, such as self-driving vehicles.

Back in 2019-2020, the industry was excited about the potential of self-driving vehicles replacing human drivers. At the time, high-end autonomous vehicles capable of level 4 and level 5 autonomy were equipped with expensive lidar systems. These systems, like those on Waymo cars, used spinning lasers to detect pedestrians, cars, and buildings. However, lidar systems are mechanical and expensive, limiting their accessibility.

My colleagues and I concluded that to scale autonomous vehicles for the larger population, we needed to replace lidar systems with camera systems. Cameras are cheap, accessible, and ubiquitous, but they lack the ability to estimate depth without complex 3D algorithms. Cameras also offer high resolution, which is crucial for tasks like pedestrian detection.

For example, an autonomous vehicle driving at 45 mph needs to detect a pedestrian 100 meters away. A lidar system might only get 5 to 10 returns to represent that pedestrian, which isn’t enough for reliable detection. In contrast, a camera sensor can capture tens of millions of pixels at 30 frames per second, providing thousands of pixels to represent the pedestrian. Multiple cameras can then determine the exact distance and movement of the pedestrian.
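That pixel-count comparison is easy to sanity-check with the pinhole camera model: image size equals object size times focal length over distance. The focal length and pixel pitch below are illustrative assumptions, not the sensors we actually used.

```python
# Back-of-envelope check of the pixels-on-target claim: how many pixels
# land on a pedestrian 100 m away? Pinhole model with assumed optics.

def pixels_on_target(width_m, height_m, distance_m, focal_m, pixel_pitch_m):
    """Approximate pixel footprint of an object via the pinhole model:
    image size on sensor = object size * focal_length / distance,
    then divide by the pixel pitch to count pixels."""
    px_w = width_m * focal_m / (distance_m * pixel_pitch_m)
    px_h = height_m * focal_m / (distance_m * pixel_pitch_m)
    return px_w * px_h

# Pedestrian roughly 0.5 m x 1.7 m, 100 m away, 25 mm lens, 3 um pixels.
px = pixels_on_target(0.5, 1.7, 100.0, 0.025, 3e-6)
print(round(px), "pixels on target, vs a handful of lidar returns")
```

Even with modest assumed optics the camera puts thousands of samples on the pedestrian, which is the resolution gap the paragraph above describes.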

To achieve this, we needed to make 3D sensing from cameras real-time and affordable. We built 360-degree camera arrays that covered the full field of view around the vehicle and included depth components. These arrays used stereo and trinocular setups to estimate distances in all directions. The computation was complex, so we leveraged FPGA systems to efficiently compute the geometry and perform contextual segmentation.
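The depth estimation in those stereo setups boils down to triangulation on a rectified camera pair: depth = focal length × baseline / disparity. A minimal sketch, with an assumed baseline and focal length rather than our actual array geometry:

```python
# Sketch of stereo triangulation: a feature that shifts by `disparity`
# pixels between two rectified cameras lies at depth Z = f * B / d.
# The 1400-px focal length and 30 cm baseline are assumptions.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Rectified stereo pair: depth Z = focal_length * baseline / disparity.
    Smaller disparities mean farther objects."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (zero means infinitely far)")
    return focal_px * baseline_m / disparity_px

# A 4.2-pixel shift between the two views puts the feature ~100 m out.
z = depth_from_disparity(focal_px=1400, baseline_m=0.30, disparity_px=4.2)
print(f"{z:.1f} m")
```

The formula also shows why long-range stereo is hard: at 100 m the disparity is only a few pixels, so sub-pixel matching accuracy and careful calibration, the kind of work the FPGA pipeline accelerated, are what make the depth estimates trustworthy.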

This realization allowed us to build a foundation for reconstructing environments with accuracy and trust, at a price and speed that could inform robotic systems. This was the culmination of my PhD, where I knew that leveraging cameras and imaging could build something very powerful.
