Backstage of Our Autonomous Robot
Autonomous Robot Technology
Our Sensor Suite
Our Autonomous Driving Software
Earth Robotics has software, hardware, and robotics engineers working together. This integrated approach allows us to develop state-of-the-art strategies for our robot to safely navigate and interact with humans and other obstacles.
See
Our robot Lu perceives its surroundings through a multi-sensor suite. These sensors work together to provide a detailed 360-degree view of the world around it.
Go
Lu has extremely precise localization and planning tools that direct our robot to the places our customers want it to go. Additionally, we have developed the tools to build and regularly update high-definition 3D maps of the areas our robot drives.
Learn
Lu is on the path to fully autonomous driving by learning from detailed simulation and controlled testing.
Stuck
In the rare instance that a robot stops in an unknown situation, it contacts a remote operations center, where an operator helps unblock it.
3D SLAM
Earth Robotics is building a fully integrated autonomous mobility system. Our system allows us to deliver an experience with safety incorporated by design into every aspect of our service.
Sensor Suite
RADARS
CAMERAS
LIDAR
Lu uses a unique sensor architecture combining cameras, lidar, and radar to see its surroundings. This allows our robot to handle many scenarios.
360-DEGREE FIELD OF VIEW
Our sensor placement provides overlapping fields of view and 360° coverage. This provides redundancy and allows the robot to perceive equally well in all directions.
SEES OVER 131 FT
Autonomous Robot Software
Lu seamlessly integrates our physical sensor suite with our sophisticated autonomous software. Using state-of-the-art software, our robot senses its environment and predicts the upcoming movement of objects around it. With this information, it plans and steers its trajectories. The software also includes a localization function that allows Lu to know precisely where it is at all times.
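To make that flow concrete, here is a minimal sketch of a sense-predict-plan-steer loop. The class and method names are illustrative placeholders, not Earth Robotics' actual API.

```python
# Minimal sketch of one cycle of the autonomy loop described above.
# All classes and methods here are hypothetical stand-ins, not a real API.
from dataclasses import dataclass, field


@dataclass
class WorldState:
    pose: tuple                              # (x, y, heading) from the localization function
    obstacles: list = field(default_factory=list)
    predictions: list = field(default_factory=list)


def control_step(sensors, perception, prediction, planner, controller):
    """One cycle: sense the world, predict movement, plan a trajectory, steer."""
    frame = sensors.read()                       # camera / lidar / radar frame + pose estimate
    state = WorldState(pose=frame.pose)
    state.obstacles = perception.detect(frame)   # detect and track surrounding objects
    state.predictions = prediction.forecast(state.obstacles)  # anticipate their movement
    trajectory = planner.plan(state)             # choose a path that avoids predicted paths
    controller.follow(trajectory)                # steer the robot along the chosen trajectory
```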
Perception
Our robot Lu sees its surroundings through computer vision technologies. It uses the images and data from its sensors to detect, track, and avoid objects. Our state-of-the-art technology uses deep-learning methods to segment and classify objects from our sensor data.
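As an illustration of this kind of deep-learning segmentation, the sketch below runs an off-the-shelf pretrained network from torchvision over a single camera frame. It is a stand-in for our in-house models, and the image file name is hypothetical.

```python
# Illustrative only: a generic semantic-segmentation pass over one camera image,
# using a pretrained torchvision model rather than Earth Robotics' own networks.
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights="DEFAULT").eval()   # pretrained segmentation network

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("camera_frame.png").convert("RGB")  # hypothetical sensor frame
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    logits = model(batch)["out"]        # per-pixel class scores
class_map = logits.argmax(dim=1)        # per-pixel class labels (the segmentation)
```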
Prediction
Robot Lu predicts the future actions of others by using a software framework that integrates the following:
Domain-specific rules: Our software takes the context of the situation into account (e.g., the direction a person is heading).
Physics-based modeling: The software anticipates where a dynamic object will be, given its estimated speed and acceleration (e.g., a person's walking speed); a minimal sketch of this extrapolation follows this list.
Data-driven machine-learned behavior modeling: Our robot interprets human behavior and uses this information to anticipate the actions of dynamic objects (e.g., a person veering in a certain direction).
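The rule-based and learned components are not shown here, but the physics-based part can be sketched as a simple constant-acceleration extrapolation. The function below is a hypothetical illustration, not our production prediction model.

```python
# Sketch of physics-based prediction: extrapolate a tracked object's future positions
# from its current position, velocity, and acceleration.
import numpy as np


def predict_positions(position, velocity, acceleration, horizon=3.0, dt=0.1):
    """Constant-acceleration forecast: p(t) = p0 + v0*t + 0.5*a*t^2, sampled every dt seconds."""
    times = np.arange(dt, horizon + dt, dt)
    p0, v0, a = map(np.asarray, (position, velocity, acceleration))
    return [p0 + v0 * t + 0.5 * a * t**2 for t in times]


# Example: a person 5 m ahead of the robot, walking toward it at 1.2 m/s.
future = predict_positions(position=[5.0, 0.0],
                           velocity=[-1.2, 0.0],
                           acceleration=[0.0, 0.0])
```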
Mapping
High-fidelity maps are crucial for enabling autonomous robots to know exactly where they are. We develop both our own mapping technology and the maps themselves, which guarantees a high level of resolution and quality.
Planning & Control
Our planning methodology uses our software's perception of the scene and its predictions of what other objects will do to plan a path for our robot. This enables the robot to drive where it needs to go. The software constantly evaluates the robot's surroundings and the predicted paths of other objects to plan its driving actions.
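As a rough illustration of that planning step, the sketch below scores a handful of candidate paths against predicted obstacle positions and keeps the collision-free one that ends closest to the goal. All names and the one-metre clearance threshold are assumptions for the example, not our production planner.

```python
# Sketch: pick a candidate path that stays clear of predicted obstacle positions
# and ends nearest the goal.
import numpy as np


def is_clear(path, predicted_obstacles, min_gap=1.0):
    """A path is usable if every waypoint stays at least min_gap metres from every predicted obstacle position."""
    for waypoint in path:
        for obstacle_positions in predicted_obstacles:
            gaps = [np.linalg.norm(np.asarray(waypoint) - np.asarray(p)) for p in obstacle_positions]
            if gaps and min(gaps) < min_gap:
                return False
    return True


def choose_path(candidate_paths, predicted_obstacles, goal):
    """Return the collision-free candidate whose endpoint is nearest the goal, or None."""
    usable = [p for p in candidate_paths if is_clear(p, predicted_obstacles)]
    if not usable:
        return None  # no safe path: slow down, stop, and replan
    return min(usable, key=lambda p: np.linalg.norm(np.asarray(p[-1]) - np.asarray(goal)))
```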
Our Indoor Mirror Map (IMM)
The IMM is a digital replica of the real world created by our robots. Digital data about buildings is an important element for smart cities, autonomous driving, service robots, XR, AR, and the metaverse. In addition, active public-private investment is taking place to make cities more intelligent.
Advanced Computer Vision
Color
Depth
Segmentation
Detected object
Point 3D map view
Local View
Indoor Localization 
Our computer vision technique allows us to identify and locate objects in an image or video. We frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance.
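To make the idea of regressing boxes and class probabilities in a single evaluation concrete, the sketch below runs an off-the-shelf single-stage detector from torchvision. It is a stand-in for our own network, and the image path is hypothetical.

```python
# Illustrative single-stage detection: one forward pass over the full image yields
# bounding boxes, class labels, and confidence scores.
import torch
from torchvision.io import read_image
from torchvision.models.detection import ssd300_vgg16
from torchvision.transforms.functional import convert_image_dtype

model = ssd300_vgg16(weights="DEFAULT").eval()        # pretrained single-stage detector

image = convert_image_dtype(read_image("indoor_frame.png"), torch.float)  # hypothetical frame
with torch.no_grad():
    detections = model([image])[0]                    # one evaluation of the full image

for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score > 0.5:                                   # keep only confident detections
        print(label.item(), round(score.item(), 2), box.tolist())
```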
Powerful Compute System
Our robot Lu comes with advanced hardware capable of providing autonomous features and full self-driving capabilities through software updates designed to improve functionality over time.