Advancing lidar technologies—an interview with Shauna McIntyre

I recently interviewed Shauna McIntyre, who joined Sense Photonics as CEO after many years of engineering and executive roles at companies such as Google, Ford, and Honeywell, along with consulting on strategic issues related to transportation. Sense Photonics is a small company, founded in Durham, NC in 2016, with offices in San Francisco and Edinburgh.

Conard Holton: What attracted you to the opportunity?

Shauna McIntyre: The pandemic is telling us that the world needs core systems to be automated beyond what we have today, to enable people to work at a distance and goods to be transported with less human intervention. To do this, we need to give our industrial facilities greater capability to see and orchestrate the activities within; plus, objects need the ability to see so they can maneuver autonomously: automated guided vehicles, forklifts, and eventually cars and trucks. I saw the playing field of vision companies, lidar especially, attempting to solve this 3D problem mechanically: spinning, scanning, MEMS, all trying to manipulate and detect light to enable vision.

However, as a mechanical engineer who has launched millions of complex mechanical systems into high-volume production in my career, I’ve seen firsthand the reliability issues customers face when deploying these systems for millions of hours or miles. I know that the complex system never wins. Therefore, I saw a gaping hole in the market and a huge opportunity for a high-performance, low-cost, reliable solution that will unlock opportunities for high-volume deployment, enabling new levels of automation and intelligence for our customers.

CH: Could you talk about your core flash lidar technologies to our engineering audience? From descriptions I’ve read, it sounds like a combination of VCSEL arrays and an RGB camera. Are the VCSELs of a proprietary design and wavelength, or commercially available? And the camera?

SM: We have core flash lidar technology in the laser emitter, the detector array, and the algorithms and software stack. The proprietary laser emitter is based on a large VCSEL array, which provides high, eye-safe optical output power for long-range detection and a wide field of view at a game-changing low cost.

Because the emitter’s wavelength is centered around 940 nm, our detector array can be based on inexpensive CMOS technology, and we get the added benefit of lower background light from the sun for a higher signal-to-noise ratio. From an architecture perspective, we intentionally chose a flash architecture because of its simple, camera-like global-shutter design, its scalability to high-volume manufacturing, the benefit of having no moving parts, and, most importantly, its low cost.

CH: Flash lidar is typically either single-laser flash or multilaser flash. What are the relative capabilities, and which does Sense Photonics use?

SM: Our laser array is a network of thousands of VCSELs interconnected in a way that produces short pulses of high-power light. In keeping with our philosophy of design simplicity and high performance for our customers, we actuate the array to generate a single laser flash rather than adding the complexity and cost associated with a multi-flash approach.

CH: This approach sounds comprehensive, but expensive. How does the cost/performance compare to a camera-only approach such as Tesla’s? What’s the balance between imaging and lidar functions?

SM: All sensors are additive. Objects (obstacles) manifest themselves on the road in different ways: a silhouette, sound, 3D depth, and so on. The fundamental principle of sensing-based perception is that the more sensors, the better. “More” refers to coverage as well as modalities.
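As background for readers new to the field, the single-flash architecture described above still rests on the basic time-of-flight principle: each detector pixel times how long the emitted pulse takes to return, and that round-trip time converts directly to distance. A minimal sketch of that conversion (illustrative only, not Sense Photonics code):

```python
# Time-of-flight range conversion: a flash lidar emits one light pulse
# and each pixel times the round trip to its target. Illustrative sketch.

C = 299_792_458.0  # speed of light in a vacuum, m/s


def tof_range_m(round_trip_s: float) -> float:
    """Distance to a target, given the measured round-trip time of a pulse."""
    # Divide by 2 because the pulse travels out to the target and back.
    return C * round_trip_s / 2.0


# A return arriving roughly 333.6 ns after the flash corresponds to ~50 m.
print(round(tof_range_m(333.6e-9), 1))
```

The same arithmetic applies per pixel across the whole detector array in a flash system, which is what makes the global-shutter, camera-like readout possible.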

When a perception system has confident access to different sensing capabilities across all stringent operating conditions, not just in good weather, the resulting system is safer and has more-effective failure-mitigation options, such as more-comfortable braking. The immediate priorities in Level 3+ are navigating traffic jams, highways, and varied urban driving conditions, and, importantly, coping with corner cases in unfamiliar environments. On the balance between imaging and lidar, the industry to a large extent (and I don’t include Tesla’s own lidar trials!) has converged on the view that delivering effective L2.5+ is difficult without lidar.

Several industry white papers have addressed camera perception’s limitations in darkness, bad weather, low-contrast objects crossing the road, floaters such as tumbleweeds and plastic bags, other unfamiliar objects on the road, tunnels, irregular lane markings, and dawn/dusk/high-dynamic-range situations. In general, the superior depth advantage of lidar manifests itself in so many real-world driving conditions that a brute-force approach to addressing every corner case, whether with camera-based perception or even with data aggregation, becomes progressively more difficult, since corner cases by nature have a very long tail (referring to Philip Koopman’s research). What has held the industry back from mass adoption has been cost and industrialization, and the gaps on both have rapidly closed.

Industrialization has been proven on solid-state lidars with automotive-grade advanced driver-assistance system (ADAS) deployments. Meanwhile, costs have dropped over 10X, and simple architectures such as the one Sense Photonics has pioneered have lowered the cost of lidar to within striking distance of a camera. When the industry can get all the benefits of lidar (especially improved safety), why wouldn’t it replace cameras and adopt lidar at mass scale?

Sense Photonics has gone a step further, with usability and painless adoption in mind, and architected probably the most camera-like lidar in the world, making it easy for customers’ perception engineers to migrate their camera-based algorithms to being lidar- and camera-driven.

CH: How are AI and machine learning being used in your products?

SM: Our philosophy is that AI and machine learning are a means to an end, enabling ROI by unlocking operational efficiencies on top of superior smart-edge hardware.

In the short term, we are shipping sensors that produce data-rich outputs, are easy to use, facilitate quick adoption by customers, and are feature-rich. This allows customer engineers to take full advantage of the intelligence we provide, ensuring applicability to indoor and outdoor use cases and industrial-grade operation under all operating conditions. As our sensor continues to mature, improve, and ruggedize with extensive customer feedback, we are concurrently maturing our AI perception stack, which features object counting, obstacle detection, segmentation, classification, x/y/z velocity, localization, free-space detection, and much more.

These features will be offered to customers together with the sensor. Customers who already have perception teams will be supplied annotated datasets to accelerate time to market.  Longer-term, we will also lend our software and AI expertise to customers to enable them to collect and use aggregated sensor data and unlock operational improvements, such as reducing downtime, performing predictive maintenance, and orchestrating functions across robots in space and time.

There are a lot of ideas to be explored in this space, and we are continually learning.

CH: You seem to be approaching two distinct markets (automotive and industrial) with your two products [Osprey and Sense One]. What are the differences and similarities in criteria between the markets, and how do you design for them?

SM: Our classification of industrial applications is relatively broad, from indoor factory automation to retail to outdoor operations, in form factors that may be stationary or moving.

Requirements are stringent (wide temperature range, IP67 ratings, rugged connectors, accurate calibration), but the time to deployment is typically shorter and the need is immediate. We are currently shipping the Sense One and Osprey sensors into these use cases, and customers appreciate the sweet spot we fill between low-end 3D time-of-flight (ToF) cameras and the much more expensive, less reliable spinning lidars that are not optimized for industrial use cases, where the predominant majority of information lies within 50 meters (the range in which the richness of our point clouds is unmatched in the industry). We are taking a more measured, but also uniquely differentiated, approach to the automotive market.

For ADAS applications, we believe that our product (to be announced) has achieved the industry’s most compelling price point with state-of-the-art performance for long-range and, eventually, short-range (blind-spot coverage) operation. We are engineering this product to very tight automotive-grade specifications and pushing the boundaries of physics. Yet the beauty of the product is its simple architecture, which in turn allows us to position it at a very low cost.

We are working with several OEMs and Tier 1s to obtain early validation of the product, and we believe that, overall, we are well positioned to compete successfully for the next set of ADAS contracts. Within automotive, the AV market is taking longer to mature at scale, but we are observing that specific segments within AV, such as goods delivery, are accelerating, partly due to COVID-19.

The Sense One and Osprey products that we are shipping today address unique near-field use cases, and we are enjoying excellent traction with leading companies in this sector, as well as with robotaxi companies as they continue to roll out pilots and conduct testing and validation activities. Overall, within the automotive segment, the cost advantage that we drive makes us a compelling pick as the best price/performance-optimized 360-degree system.

CH: Could you talk about your management team and your management style? It must be a challenge with several offices and COVID-19.

SM: Having just come from Google, I believe in setting objectives and measuring progress (OKRs), being transparent and data-driven, and being inclusive of all types of talent that can add value to our company. I believe in empowering the voices of my team, and as managers, we do everything we can to give our teams the tools they need to succeed.

As for our three sites in the San Francisco Bay Area, Research Triangle Park, and the UK, I haven’t found this to be a challenge beyond the obvious immediate limitations on physically being together as a team. COVID-19 is a challenge for everyone. I consider us fortunate, as we have all stayed healthy and we’ve been able to stay open as an “essential business” due to a government contract.

To do so, we have put safety measures in place to ensure that our cleanroom and assembly technicians are safely distanced. This has allowed us to keep our momentum strong during such an unpredictable time.

CH: How involved is your Board of Directors in guiding you?

SM: One major reason I joined Sense Photonics was the strength of our investor base and Board of Directors. They have been incredibly supportive from day one and are fantastic partners. I’m fortunate to have such great minds around the table, with both business-building and technical backgrounds.

I consult with them on major decisions and appreciate the guidance I get from them.

CH: There are many lidar companies out there, and many have already been acquired by major automotive manufacturers. You must like a challenge!

SM: It may be a bit of an understatement, but I do enjoy a good challenge.

I’m an avid runner (and I’m a maniacal skier!), so I like to push myself and feel the rewards of hard work.

I learned this professionally during my early days at Ford when running night-shift truck production in the dead of winter in Minnesota: when the stakes are high, I really enjoy rising to the occasion and building trust with my team to achieve great results.

Ever since, I’ve enjoyed the satisfaction that comes from leading teams to push boundaries and accomplish goals together, well beyond what we had imagined. 