Inside an AI-Powered Aerial Data Analysis Startup: AirWorks

AirWorks is an AI-powered software company that converts 2D and 3D aerial data into CAD linework. Currently, it serves the civil engineering and construction domains, but in the future it could also be applied to defense, large-scale agriculture, and other drone-inspection-related fields.

The Business Face

With a $2.4M seed fund and a very small team, the company is thriving and building a next-level AI automation engine for the land surveying side of civil engineering. It was initially funded by MIT's startup accelerator Delta V. With very few competitors in this industry, the initial impression looks promising.

The Technical Face

The system takes two inputs:

  1. A high-resolution aerial 2D or 3D image
  2. A digital boundary of the area to be inspected, provided by the user

It then runs its AI and ML engine in the backend and produces a detailed CAD drawing of the area, along with an analysis of the types of structures on the land and other information required during a land surveying process in civil engineering. A sample of the work done by this system is shown below (taken from the AirWorks website).

Image taken from the AirWorks website

Drawing on concepts I recently learned in a machine learning course, I can imagine the pipeline below for the backend ML engine. This is entirely my own version of how the backend engine might work, broken down into ML and non-ML components. The engineering team has almost certainly built something very different and far more complex; this is simply my attempt to apply what I have learned.

Below are my assumptions about the ML algorithms and activities the product team might have used to build the engine.

1. High-resolution image

The system asks for a very high-resolution image so that it can predict the structures more accurately in later stages. Although a high-resolution image is very costly in terms of computation, the team might have applied dimensionality reduction wherever required. Dimensionality reduction simply means converting data with a large number of features into a lower-dimensional representation by projecting it onto fewer dimensions (for example, 3D to 2D, as in the figure below). It retains the important features to a large extent while drastically reducing the size.

Since this is a B2B tool for long-term civil engineering projects, computation time should not be a major constraint, as is evident from the turnaround of roughly 20–30 minutes per analysis.
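To make the dimensionality reduction idea concrete, here is a minimal sketch using PCA from scikit-learn. The patch size, component count, and random data are my own illustrative assumptions, not anything from AirWorks' actual pipeline.

```python
# A minimal sketch of dimensionality reduction with PCA (scikit-learn).
# The patch size and component count are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA

# Pretend each row is a flattened image patch (64x64 grayscale = 4096 features).
patches = np.random.rand(500, 4096)

# Project the 4096-dimensional patches down to 100 components
# while keeping most of the variance.
pca = PCA(n_components=100)
reduced = pca.fit_transform(patches)

print(reduced.shape)                        # (500, 100)
print(pca.explained_variance_ratio_.sum())  # fraction of variance retained
```

The projection keeps most of the useful signal while cutting the feature count by a factor of about 40, which is exactly the trade-off described above.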

2. Boundary detection

Boundary/edge detection has been studied for a long time, and with the emergence of ML it has become much easier to train neural networks for it by teaching them to recognize sudden changes in pixel values. Boundary detection lets the engine draw virtual boundaries around the different components in an image and feed them to the next element of the pipeline. There are many efficient edge detection algorithms, such as the Sobel, Canny, Prewitt, and Laplacian edge detectors. The Canny algorithm is widely used because of how sharply it defines boundaries.
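A classic, non-ML way to try this idea is OpenCV's Canny detector. The file name and thresholds below are placeholders of mine, not values AirWorks uses.

```python
# A minimal sketch of edge detection with OpenCV's Canny detector.
# The file name and thresholds are placeholders.
import cv2

image = cv2.imread("aerial_tile.png", cv2.IMREAD_GRAYSCALE)

# Blur slightly to suppress noise, then detect edges with hysteresis thresholds.
blurred = cv2.GaussianBlur(image, (5, 5), 0)
edges = cv2.Canny(blurred, threshold1=100, threshold2=200)

cv2.imwrite("aerial_edges.png", edges)
```

The two thresholds control which gradient magnitudes count as strong and weak edges; tuning them is what makes Canny produce the sharp boundaries mentioned above.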

3. Structure determination

Once the boundaries are identified, the development team might have used the sliding-box (sliding-window) method, shown below, to determine what type of object or structure is present. This would be a case of supervised learning, where a limited set of labeled examples is fed to the system so it learns, for instance, what a typical vegetation field or a parking lot looks like. Once the structure is determined, the data flows to the next element(s) of the pipeline; a small sketch of the idea follows the figure below.

Each box is matched against the labeled training data to find the best match
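The sketch below shows the sliding-window bookkeeping only: step a fixed-size box across the image and ask a classifier what each box contains. The classify_patch function is a hypothetical stand-in for whatever trained model the team actually uses.

```python
# A minimal sketch of the sliding-window idea.
# `classify_patch` is a hypothetical stand-in for a trained classifier.
import numpy as np

def classify_patch(patch):
    # Placeholder rule: a real system would run a trained model here.
    return "vegetation" if patch.mean() > 0.5 else "pavement"

def sliding_window(image, box_size=64, stride=32):
    h, w = image.shape[:2]
    for top in range(0, h - box_size + 1, stride):
        for left in range(0, w - box_size + 1, stride):
            patch = image[top:top + box_size, left:left + box_size]
            yield (top, left), classify_patch(patch)

image = np.random.rand(256, 256)
for (top, left), label in sliding_window(image):
    print(f"box at ({top}, {left}) -> {label}")
```

The stride being smaller than the box size means neighboring boxes overlap, which reduces the chance of a structure being split awkwardly across windows.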

4. Structure analysis

The n identified structure types (vegetation, buildings, empty land, water bodies, etc.) are analyzed separately by further classifying each structure into substructures. For example, a building can be a school, a commercial building, or a residential building. When real data is scarce, more synthetic data can be created by distorting real images to produce new variants, which helps the model learn more and predict more precisely.
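One common way to create such synthetic variants is image augmentation. Here is a minimal sketch using torchvision transforms; the specific augmentations and file names are assumptions for illustration, not the team's actual recipe.

```python
# A minimal sketch of generating synthetic training data by distorting
# real images with torchvision transforms. The augmentations are assumptions.
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
])

original = Image.open("building_patch.png")

# Each call produces a slightly different synthetic variant of the same patch.
for i in range(10):
    variant = augment(original)
    variant.save(f"building_patch_aug_{i}.png")
```

Ten distorted copies of one labeled patch give the substructure classifier more examples of, say, a residential roof without anyone having to label new imagery.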

5. Input to the CAD tool and Drawing/Analysis results

The final classified, cleaned, and structured data, along with its attributes, is fed to the CAD tool to generate the final drawing and a summary of results that would have taken a human weeks to create.

A typical final output (image taken from the AirWorks website)
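As a rough illustration of this hand-off, classified polygons could be written out as CAD linework with a library such as ezdxf. The layer names, coordinates, and DXF format below are made-up examples; I do not know what formats or attributes AirWorks actually emits.

```python
# A minimal sketch of turning classified polygons into CAD linework with ezdxf.
# Layer names and coordinates are invented for illustration.
import ezdxf

# Hypothetical output of the classification stage: label -> (x, y) vertices.
structures = {
    "BUILDING": [(0, 0), (40, 0), (40, 25), (0, 25)],
    "PARKING":  [(50, 0), (90, 0), (90, 30), (50, 30)],
}

doc = ezdxf.new("R2010")
msp = doc.modelspace()

for layer_name, vertices in structures.items():
    doc.layers.add(layer_name)
    polyline = msp.add_lwpolyline(vertices, dxfattribs={"layer": layer_name})
    polyline.closed = True  # join the last vertex back to the first

doc.saveas("site_linework.dxf")
```

Keeping each structure type on its own layer is what lets a surveyor toggle buildings, parking, vegetation, and so on independently in the CAD tool.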

A ceiling analysis can be done to identify which element contributes most to prediction accuracy, so that more resources, analysis, and data can be directed at that element of the pipeline. Also, depending on whether the model suffers from high bias or high variance, one can decide whether more data is needed or whether a different algorithm is required. Starting with a reasonable algorithm and using learning curves to guide improvements along the way is the standard approach.
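The bookkeeping behind ceiling analysis is simple: measure end-to-end accuracy while substituting ground truth for one pipeline stage at a time. The stage names and numbers below are invented purely to show the calculation.

```python
# A minimal sketch of ceiling analysis. All numbers are invented examples.
baseline = 0.72  # assumed end-to-end accuracy of the full learned pipeline

# Assumed accuracy when each stage is replaced by perfect (ground-truth) output,
# applied cumulatively in pipeline order.
with_perfect_stage = {
    "boundary detection":      0.74,
    "structure determination": 0.86,
    "structure analysis":      0.89,
}

previous = baseline
for stage, accuracy in with_perfect_stage.items():
    gain = accuracy - previous
    print(f"perfecting '{stage}' adds {gain:.2f} accuracy")
    previous = accuracy

# The stage with the largest gain is where extra data or a better model
# would pay off most.
```

In this made-up example, structure determination shows the largest jump, so that is where the team would invest effort first.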

Thousands of tech startups are born every year, and almost 90% of them fail, yet the remaining 10% are shaping our tech world and our future. With artificial intelligence and machine learning, now coupled with deep learning, there are innumerable possibilities for what can be solved and achieved through technology.

Product Enthusiast — Utilizing the power of AI and Design to rethink possibilities and reframe the problem statement! Website: www.hellodeepaksingh.com
