Saturday, July 29, 2023

Mojo5 - First concepts and Franken-prototype (#2)

Mojo5: Building a New Robot!

To kick off the design process, the main concept behind Mojo5 was to closely bind the servos together while using an unconventional material – chopsticks(!) – to create a minimalistic chassis.

Initial CAD Drawings:

The journey started with creating initial CAD drawings to visualize the robot's structure and mechanics.

Servo mount binding the two leg servos together

(Blue) knee servo arm, linkage, and co-axial cam
(Yellow) Upper leg, also co-axial and servo-connected

Microcontroller and Servo Driver:

For Mojo5, a major change was adopting the ESP32 controller instead of the simpler Arduino Pro Mini used in previous Mojo variants. The ESP32 provides added functionality, including built-in Wi-Fi and Bluetooth connectivity, and ample I/O pins to accommodate future expansions.

Mojo5: ESP32 microcontroller connected to PCA9685 servo driver


To drive the servos, I chose the PCA9685 servo driver, which communicates over I2C. Although my existing 11 kg·cm servos are not the most powerful, the PCA9685 controls them effectively, and my familiarity with this driver makes it a natural companion for the ESP32.
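To make the setup concrete, here's a minimal sketch of the kind of initialization involved. It uses the Adafruit PWM Servo Driver library; the I2C address, pulse range, and channel assignments are illustrative placeholders, not necessarily Mojo5's actual values.

```cpp
// Minimal ESP32 sketch driving hobby servos through a PCA9685 over I2C.
// Uses the Adafruit PWM Servo Driver library. The I2C address (0x40),
// pulse range (500-2500 us), and channel numbers are illustrative only.
#include <Wire.h>
#include <Adafruit_PWMServoDriver.h>

Adafruit_PWMServoDriver pwm = Adafruit_PWMServoDriver(0x40);

const int SERVO_FREQ_HZ = 50;    // standard analog-servo frame rate
const int PULSE_MIN_US  = 500;   // assumed pulse width at 0 degrees
const int PULSE_MAX_US  = 2500;  // assumed pulse width at 180 degrees

// Map a target angle (0-180 deg) to a pulse width on a PCA9685 channel.
void setServoAngle(uint8_t channel, float degrees) {
  int us = PULSE_MIN_US
         + (int)((PULSE_MAX_US - PULSE_MIN_US) * degrees / 180.0f);
  pwm.writeMicroseconds(channel, us);
}

void setup() {
  Wire.begin();                  // ESP32 default I2C pins (SDA 21, SCL 22)
  pwm.begin();
  pwm.setPWMFreq(SERVO_FREQ_HZ);
  setServoAngle(0, 90);          // hip servo to neutral
  setServoAngle(1, 90);          // knee servo to neutral
}

void loop() {}
```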


The Franken-Prototype:

To quickly test the concept's feasibility, an initial prototype was assembled. It consisted of a newly designed 'upper-leg' or 'femur' and a knee-cam sharing the same axis as the upper-leg. Other parts were recycled from earlier robots, giving birth to what we affectionately call a 'franken-prototype'. Franken-prototyping is a common and pragmatic approach to quickly determining the feasibility of an idea.

The first test was crucial – connecting the servos to the PCA9685 and ESP32. I taped the servos together and assembled the required components. Powering the system with a 5V source, I eagerly sought to witness the initial motion and validate the success of this build. And to my delight, it worked! This promising outcome lays a solid foundation for the upcoming design iterations.
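That first motion was essentially a slow sweep. Reusing the setServoAngle() helper from the sketch above (and swapping this loop() in for the empty one), it looked roughly like the following; the arc and timing are illustrative.

```cpp
// A slow back-and-forth sweep on both leg channels -- the "does it move?"
// test described above. Assumes the setServoAngle() helper and channel
// assignments from the previous sketch.
void loop() {
  for (int a = 60; a <= 120; a += 2) {  // sweep a safe 60-degree arc
    setServoAngle(0, a);                // hip
    setServoAngle(1, 180 - a);          // knee, mirrored
    delay(20);
  }
  for (int a = 120; a >= 60; a -= 2) {
    setServoAngle(0, a);
    setServoAngle(1, 180 - a);
    delay(20);
  }
}
```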

Mojo5: Franken-Prototype

In addition to the prototype, I set up a new repository for the Mojo5 code and tested the basic setup, communications, and control loops.

With Mojo5's initial movements validated, I'm excited to delve deeper into refining and enhancing its capabilities for even greater achievements in the future!

Sunday, July 16, 2023

Mojo5 - Minimal Viable Mojo

 


Lately, I've been on a bit of a hiatus from robot building, due to the stresses of life and my role as the technical lead in a startup. Yet my thoughts have never strayed from the topic, frequently returning to many different types of robots. But quadrupeds – they have a special place in my heart. You always go back to your first love.

In recent times, there have been numerous discussions about the prerequisites for a good, affordable quadruped. Since embarking on this journey, I've seen a few open-source designs come into existence, such as the Stanford Pupper. To me, the real thrill lies in creating my own, even though I do keep an eye on the work of others in this field.

Looking back, the primary challenges with my previous designs stemmed from the strength of the servos and the overall weight of the robot. To be successful, you need 25+ kg·cm servos with speeds faster than 0.2 seconds per 60 degrees. These servos typically cost $20-25 each (when on sale), and a basic 8-degree-of-freedom quadruped needs eight of them. Before I take the plunge and invest in these high-end servos, I want to try one more design to see whether the 11 kg·cm servos I currently have might work.

Enter the Minimal Viable Mojo - Mojo5.

For my Mojo5 design, I aim to strip everything down to the bare minimum necessary to create a walking quadruped.

Minimal parts:

  • 4 leg assemblies
  • 4 quadrant servo mounts
  • 8× MG995 servos (60 g each, 480 g total)
  • Lightweight wooden dowels connecting the parts
  • Arduino Pro Mini
  • 7.4 V battery for servos
  • 9 V battery for controller

This MVM (Minimal Viable Mojo) will be a mini-mini quadruped.

Moving forward with this reduced configuration, we have to address several fundamental challenges, the foremost being power optimization. Lighter and less powerful servos mean we have to be extra diligent about weight distribution and energy efficiency.

We're dealing with MG995 servos that are considerably less powerful than the ideal 25+ kg·cm servos, so we need to keep the total weight within their operating parameters. Every gram matters in this design, which is why we're using lightweight wooden dowels as linkages.

The Arduino Pro Mini will be the brain of Mojo5, controlling the servos and coordinating their movement to achieve walking and possibly turning. It's a cost-effective and well-documented controller, which makes it ideal for this type of application, although its processing power is limited compared to higher-end controllers.
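For a sense of what that control layer looks like, here's a minimal sketch using the standard Arduino Servo library, which supports up to twelve servos on most AVR boards. The pin assignments below are hypothetical.

```cpp
// Direct servo control on the Arduino -- no external driver board.
// Pin assignments are placeholders for illustration.
#include <Servo.h>

const uint8_t SERVO_PINS[8] = {2, 3, 4, 5, 6, 7, 8, 9};  // hypothetical wiring
Servo legServos[8];

void setup() {
  for (int i = 0; i < 8; i++) {
    legServos[i].attach(SERVO_PINS[i]);
    legServos[i].write(90);  // start every joint at neutral
  }
}

void loop() {
  // Gait code would update the eight joint angles here each cycle.
}
```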

Power-wise, we're going with a 7.4 V battery for the servos and a separate 9 V battery for the controller. This dual power source approach is necessary to avoid a brownout when the servos draw significant current, which could cause a temporary loss of control.

One innovative aspect of the Mojo5 design will be its cam-and-pushrod-controlled lower leg. This mechanism can provide the torque needed for locomotion without adding much weight, but it does introduce some complexity into the design and programming.
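To illustrate one source of that programming complexity: because the knee cam shares the hip's axis, rotating the hip also drags the lower leg along, so the knee servo has to compensate. A hypothetical mapping, assuming an idealized 1:1 linkage (the real cam geometry will be nonlinear):

```cpp
// Hypothetical joint-to-servo mapping for a co-axial cam + pushrod knee.
// Assumes an idealized 1:1 linkage ratio; treat as a first approximation.
#include <Servo.h>

void commandLeg(Servo& hipServo, Servo& kneeServo,
                float hipDeg, float kneeDeg) {
  hipServo.write((int)hipDeg);               // femur angle vs. the body
  kneeServo.write((int)(kneeDeg + hipDeg));  // add hip angle back in to
                                             // compensate the shared axis
}
```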

For the leg assemblies, we're starting with 70 mm sections. This is a reasonable starting point, but we'll likely have to iterate on these dimensions as we move forward and start testing the prototype. It's going to be a careful balancing act: the legs must be long enough for effective locomotion, yet not so long that they overload our MG995 servos. As a rough sanity check, an 11 kg·cm servo acting through a 7 cm lever arm can exert only about 1.6 kgf at the foot, so every extra centimeter of leg cuts directly into the force available.

Overall, the Mojo5 design will be a journey of continuous iteration and improvement. Each step will likely present new challenges, from structural tweaks to software adjustments, but that's part of the excitement of building our own quadruped from scratch. Stay tuned as we start prototyping and begin to bring Mojo5 to life. We're looking forward to sharing this journey with you.

Tuesday, July 11, 2023

Robots in the Wild!

Have you ever heard the phrase "as rare as seeing a unicorn in the wild"? Well, the times they are a-changing, and a new, equally exciting creature is becoming more and more common in our everyday lives - robots. As you know, they're leaving the labs and factories and starting to roam free in our streets, our parks, and even our homes! This is just a migration, not an invasion; let's not panic. It's "Totally" Not Evil, I promise.

Earlier this year, I was taking a leisurely stroll around the magnificent grounds of the Belvedere in Vienna. I was there to admire the exquisite Baroque architecture and to gaze at the mesmerizing works of Klimt. However, I spotted something extraordinary: a Robot in the Wild. A Husqvarna robotic mower, completely autonomous, humming a silent tune as it meticulously trimmed the lush green carpet surrounding the palace. It's quite something to see these machines tending the gardens of such historic landmarks - as if the future decided to pay a visit to the past. Who needs garden gnomes when you have lawn-mowing robots?

Robot mowing the gardens at the Belvedere, Vienna

As I continued my travels, this time to an airport in Phuket, Thailand, the robots made their presence felt yet again. As I waited for my flight, a cute, round robot diligently scrubbed the floors, tirelessly ensuring cleanliness for the thousands of footfalls it would encounter every day. While we all were busy staring at our screens, this little robo-janitor was doing its part to keep our world a tad cleaner and shinier. 

Airport floor cleaner - Phuket, Thailand

Remember how I mentioned special purpose robots? Well, let's turn the spotlight to the solar industry. At a recent convention, four or five solar-panel-cleaning robot companies were exhibiting. These little machines, like well-trained bees in a hive, were cleaning the dust off solar panels, making sure the path for sunlight was as clear as day. Why climb up on the roof under the scorching sun when you can delegate it to these handy helpers?

Solar panel cleaner robot at InterSOLAR

two models - Solar panel cleaner robot at InterSOLAR

Finally, at a Power and Energy AWS convention, I spotted the ANYmal robot. This four-legged, dog-like creation is something out of a science fiction movie. It is designed to operate in challenging environments, like inspecting offshore oil rigs, navigating through disaster-stricken areas, or simply delivering packages - your personal mechanical courier!

The ANYmal robot

Every day, the presence of robots in our world becomes less of a novelty and more of a norm. Some might find this unsettling, but remember, they're here to help us, to make our lives easier, safer, and more efficient. So, the next time you spot a robot in the wild, give them a nod, a wave, or a friendly beep-boop-beep in their language. It's best to be friends with them! ;)

In this thrilling time of transformation, we'll continue to document these mechanical marvels in their new habitats – our world. So, stay tuned for more chronicles of Robots in the Wild. You may think it's unusual to see a robot in your daily surroundings, but take a moment and observe. Are they becoming more common? Are they quietly blending into our everyday lives? You might be surprised.

Monday, May 29, 2023

Game of Shrooms 2023

The Totally Not Evil Robot Army will be participating in the 2023 Game of Shrooms!




For information on the Game of Shrooms, check out the website: Game of Shrooms

Clues:

Your first clue: it will be located in/around/near the Golzheimer Friedhof in Golzheim/Pempelfort, Düsseldorf.


Details and Clues on this page:

The Shrooms are Growing!






Thursday, March 30, 2023

SLAM! A conversation with ChatGPT-4

The following is a NEW conversation about SLAM with ChatGPT. In my previous post I conversed with ChatGPT 3.5; this time it is with the latest model, GPT-4. Can you spot the differences?



Prompt

Act as an expert in robotics on the topic of simultaneous localization and mapping, and an expert blog writer. The audience for this blog is technical professionals who are interested in learning about the latest advancements in robotics. Provide a comprehensive overview of the most popular localization and mapping processes, including their strengths and weaknesses. Include real-life examples and case studies to illustrate how these processes have been successfully used in various industries.


Simultaneous Localization and Mapping: A Deep Dive into Popular Techniques and Real-Life Applications

Welcome back to our expert robotics blog, where we explore the latest advancements and innovations in the world of robotics. Today, we'll be discussing a topic that is crucial for the autonomy of modern robots: simultaneous localization and mapping (SLAM). This post will provide a comprehensive overview of the most popular SLAM techniques, their strengths and weaknesses, and real-life examples of their successful implementation.


What is SLAM?

SLAM stands for Simultaneous Localization and Mapping. It is a fundamental problem in robotics, where a robot must simultaneously build a map of its environment and localize itself within that map. This process is essential for enabling robots to navigate and interact with their surroundings effectively.


Popular SLAM Techniques

There are several popular SLAM techniques employed in the robotics community, each with its own strengths and weaknesses. Let's explore some of these techniques:


1. Extended Kalman Filter (EKF) SLAM

The Extended Kalman Filter (EKF) SLAM is one of the earliest and most well-known SLAM techniques. It is a recursive estimation algorithm that uses a probabilistic framework to model the robot's state and its uncertainty. The main strength of EKF SLAM is its ability to provide real-time estimates of the robot's position and map features, making it suitable for various applications.

However, EKF SLAM has some limitations. It assumes linear motion and observation models, which can lead to inaccuracies in highly non-linear environments. Additionally, it suffers from high computational complexity, which can be problematic for large-scale mapping and localization tasks.
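[Note: to make the predict/correct cycle concrete, here is a stripped-down single EKF iteration using Eigen. The motion model f, observation model h, and Jacobians F and H are placeholders; this sketch is my addition, not ChatGPT's.]

```cpp
// One EKF iteration (Eigen). The state x holds the robot pose plus
// landmark positions; P is its covariance. f/h and the Jacobians F/H
// stand in for the actual motion and observation models.
#include <Eigen/Dense>
using Eigen::MatrixXd;
using Eigen::VectorXd;

void ekfStep(VectorXd& x, MatrixXd& P,
             const VectorXd& u, const VectorXd& z,
             const MatrixXd& F, const MatrixXd& H,
             const MatrixXd& Q, const MatrixXd& R,
             VectorXd (*f)(const VectorXd&, const VectorXd&),
             VectorXd (*h)(const VectorXd&)) {
  // Predict: propagate state and covariance through the motion model.
  x = f(x, u);
  P = F * P * F.transpose() + Q;

  // Correct: fold the measurement z back in via the Kalman gain.
  VectorXd y = z - h(x);                         // innovation
  MatrixXd S = H * P * H.transpose() + R;        // innovation covariance
  MatrixXd K = P * H.transpose() * S.inverse();  // Kalman gain
  x = x + K * y;
  P = (MatrixXd::Identity(x.size(), x.size()) - K * H) * P;
}
```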


2. FastSLAM

FastSLAM is an improvement over EKF SLAM that addresses some of its limitations. It uses a particle filter-based approach for estimating the robot's trajectory and a set of EKFs for estimating map features. This results in a more computationally efficient algorithm, making it suitable for large-scale mapping tasks.

One of the main strengths of FastSLAM is its ability to handle non-linear motion and observation models, leading to more accurate localization and mapping. However, it may suffer from particle depletion, which can result in the loss of localization accuracy.


3. Graph-based SLAM

Graph-based SLAM is a more recent technique that represents the SLAM problem as a graph optimization problem. The robot's poses and map features are represented as nodes in the graph, while the spatial constraints between them are represented as edges. By solving the graph optimization problem, the robot can estimate its trajectory and build a map of the environment.

Graph-based SLAM has several advantages, such as its ability to handle large-scale mapping tasks, loop closure, and robustness to outliers. However, it can be computationally expensive, especially when the graph size becomes large.
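[Note: graph-based SLAM is easiest to see in code. Below is a minimal 2D pose graph built with the open-source GTSAM library; the odometry values and noise figures are invented purely for illustration. This sketch is my addition, not ChatGPT's.]

```cpp
// Minimal 2D pose-graph example using GTSAM: a prior anchoring the first
// pose, plus two odometry edges. Values and noise models are illustrative.
#include <gtsam/geometry/Pose2.h>
#include <gtsam/nonlinear/NonlinearFactorGraph.h>
#include <gtsam/nonlinear/LevenbergMarquardtOptimizer.h>
#include <gtsam/nonlinear/Values.h>
#include <gtsam/slam/BetweenFactor.h>
#include <gtsam/slam/PriorFactor.h>

using namespace gtsam;

int main() {
  NonlinearFactorGraph graph;
  auto priorNoise = noiseModel::Diagonal::Sigmas(Vector3(0.1, 0.1, 0.05));
  auto odomNoise  = noiseModel::Diagonal::Sigmas(Vector3(0.2, 0.2, 0.1));

  // Poses are nodes; the prior and odometry constraints are edges.
  graph.add(PriorFactor<Pose2>(1, Pose2(0, 0, 0), priorNoise));
  graph.add(BetweenFactor<Pose2>(1, 2, Pose2(2, 0, 0), odomNoise));
  graph.add(BetweenFactor<Pose2>(2, 3, Pose2(2, 0, 0), odomNoise));

  // Deliberately noisy initial guesses for the three poses.
  Values initial;
  initial.insert(1, Pose2(0.3, 0.1, -0.1));
  initial.insert(2, Pose2(2.1, 0.2, 0.1));
  initial.insert(3, Pose2(4.2, -0.1, 0.0));

  // Solving the graph optimization recovers a consistent trajectory.
  Values result = LevenbergMarquardtOptimizer(graph, initial).optimize();
  result.print("optimized poses:\n");
  return 0;
}
```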


Real-Life Examples and Case Studies

SLAM techniques have been successfully applied across various industries. Here are a few notable examples:

Autonomous Vehicles: Companies like Tesla, Waymo, and Cruise use SLAM algorithms to enable their self-driving cars to navigate through complex urban environments. Graph-based SLAM techniques are particularly popular in this domain due to their ability to handle large-scale mapping tasks.

Drones: SLAM has been used in drones for applications such as aerial mapping, search and rescue, and infrastructure inspection. For instance, FastSLAM has been employed in the DARPA Subterranean Challenge, where drones were required to navigate and map underground environments autonomously.

Warehouse Automation: SLAM is an integral component of warehouse automation systems, where robots must navigate and interact with their surroundings. Amazon Robotics, for example, uses SLAM techniques to enable its robots to localize themselves and transport goods efficiently within their vast warehouses.

Additional Real World Examples

please provide additional real world examples where SLAM has been used. Please omit already mentioned autonomous driving, drones, or warehouse.

Certainly! Here are more real-world examples of SLAM being used outside of autonomous driving, drones, and warehouse automation:

  1. Mining: SLAM has been used in autonomous mining vehicles to help them navigate and map underground tunnels. The robots can perform tasks like drilling, excavation, and transportation of materials while avoiding collisions and ensuring the safety of human workers. One example is the LHD (Load, Haul, Dump) robot developed by Sandvik, which utilizes SLAM to navigate in underground mines.
  2. Agriculture: SLAM techniques are employed in precision agriculture to enable autonomous robots to navigate and perform tasks such as harvesting, planting, and soil analysis. For instance, the company Blue River Technology developed the "See & Spray" system, which uses SLAM to navigate through crop fields and identify weeds for precision spraying.
  3. Indoor Mapping and Navigation: SLAM is used in indoor mapping systems to create accurate floor plans and 3D models of building interiors. This is particularly useful for facilities management, emergency response planning, and architectural design. Companies like NavVis and Matterport offer solutions that use SLAM for indoor mapping and navigation.
  4. Archaeology and Cultural Heritage: SLAM has been applied in the field of archaeology to document and preserve historical sites. For example, researchers have used SLAM-based systems to create 3D models of ancient ruins, caves, and other cultural heritage sites, providing valuable data for preservation and study. One notable project is the Zamani Project, which employs SLAM to document African cultural heritage sites.
  5. Robotic Vacuum Cleaners: SLAM is a key technology in advanced robotic vacuum cleaners, enabling them to autonomously navigate and clean homes and offices. These devices use SLAM algorithms to build maps of the environment, plan efficient cleaning paths, and avoid obstacles. Companies like iRobot (with their Roomba series) and Neato Robotics (with their Botvac series) employ SLAM in their robotic vacuum cleaners.
  6. Search and Rescue: SLAM is used in search and rescue operations, where robots are required to navigate through complex environments, such as disaster-stricken areas or collapsed buildings, to locate and assist survivors. SLAM enables these robots to map their surroundings and efficiently search for victims, even in low-visibility conditions or GPS-denied areas.

These examples showcase the versatility of SLAM techniques and their potential applications across a wide range of industries and domains.


SLAM and OpenSource

What OpenSource software projects are successfully implementing SLAM?

There are several open-source software projects that successfully implement SLAM algorithms. These projects provide an excellent starting point for researchers and developers looking to experiment with SLAM techniques in their applications. Some of the most popular open-source SLAM projects include:

  1. GMapping: GMapping is an open-source implementation of FastSLAM 2.0, which uses a particle filter-based approach for SLAM. It is widely used in the robotics community and has been integrated into the Robot Operating System (ROS) as a standard SLAM package. GitHub Repository
  2. Cartographer: Developed by Google, Cartographer is a real-time, loop-closing SLAM library that works with both 2D and 3D data. It employs a combination of scan matching and graph optimization techniques to generate high-quality maps. Cartographer is compatible with ROS and is designed for various platforms, including autonomous vehicles and drones. GitHub Repository
  3. ORB-SLAM: ORB-SLAM is a feature-based monocular SLAM system that uses ORB features (Oriented FAST and Rotated BRIEF) for localization and mapping. It is capable of handling real-time operation, loop closing, and relocalization. There are different versions of ORB-SLAM available, including monocular, stereo, and RGB-D camera configurations. GitHub Repository
  4. RTAB-Map (Real-Time Appearance-Based Mapping): RTAB-Map is a graph-based SLAM library that works with 2D and 3D data. It employs a loop closure detection approach based on visual appearance, making it suitable for large-scale and long-term mapping tasks. RTAB-Map is compatible with ROS and can be used with various sensor configurations, including RGB-D cameras, stereo cameras, and lidar sensors. GitHub Repository
  5. OpenVSLAM: OpenVSLAM is a versatile and customizable SLAM framework that supports monocular, stereo, and RGB-D cameras. It uses a combination of ORB features and bundle adjustment for localization and mapping. OpenVSLAM provides a user-friendly interface and includes a wide range of features, such as loop closing, global relocalization, and map saving/loading. GitHub Repository

These open-source projects offer a solid foundation for those looking to explore SLAM techniques and implement them in various robotic applications.


Sensor Configurations and Unstructured Environments

what are the possible sensor configurations for accurate SLAM in an unstructured environments? include pros and cons for the various solutions.

SLAM can be performed using a variety of sensor configurations, each with its own advantages and disadvantages. The choice of sensor configuration largely depends on the specific application and the environment in which the robot will operate. Here, we will discuss some common sensor configurations for SLAM in unstructured environments:

Lidar (Light Detection and Ranging): Lidar sensors emit laser beams and measure the time it takes for the light to return after reflecting off surfaces. This allows the sensor to generate accurate distance measurements and create detailed 3D point clouds of the environment.

Pros:

  • High accuracy and resolution
  • Works well in both indoor and outdoor environments
  • Performs well in various lighting conditions

Cons:

  • Expensive compared to other sensors
  • Can be sensitive to environmental factors such as rain, fog, or dust
  • May struggle with highly reflective or transparent surfaces

RGB-D Cameras: RGB-D cameras capture color images (RGB) along with depth information (D), either using structured light, time-of-flight (ToF), or active stereo techniques.

Pros:

  • Provides both color and depth information
  • More affordable than lidar sensors
  • Useful for object recognition and semantic mapping

Cons:

  • Limited range compared to lidar sensors
  • Can be sensitive to lighting conditions and surface reflectivity
  • Lower accuracy and resolution compared to lidar

Stereo Cameras: Stereo cameras consist of two cameras placed at a fixed distance apart, which capture synchronized images. By comparing these images and identifying corresponding features, the system can estimate depth information using triangulation.

Pros:

  • More affordable than lidar and RGB-D cameras
  • Passive sensing, which means it does not emit any signals
  • Works well in outdoor environments

Cons:

  • Requires sufficient texture in the environment for accurate feature matching
  • Sensitive to lighting conditions
  • Lower accuracy and resolution compared to lidar

Monocular Cameras: Monocular SLAM uses a single camera to estimate the robot's motion and map the environment by tracking features in consecutive images.

Pros:

  • Highly cost-effective and lightweight
  • Low power consumption
  • Can be used for visual odometry and loop closure detection

Cons:

  • Scale ambiguity: the absolute scale of the environment cannot be directly measured
  • Requires sufficient texture and features in the environment for accurate tracking
  • Sensitive to lighting conditions and motion blur

Sonar (Sound Navigation and Ranging): Sonar sensors emit acoustic waves and measure the time it takes for the waves to return after reflecting off surfaces, allowing the system to estimate distances.

Pros:

  • Works well in low-visibility environments, such as underwater or in smoke-filled areas
  • Can penetrate certain materials, such as foliage or thin walls

Cons:

  • Lower accuracy and resolution compared to lidar and vision-based systems
  • Susceptible to noise and multipath reflections
  • Limited range and field of view

In unstructured environments, it is common to use a combination of these sensors to achieve accurate SLAM. By fusing the data from multiple sensors, the system can compensate for the limitations of each sensor type and improve overall performance. The choice of sensor configuration depends on factors such as cost, weight, power consumption, and the specific challenges posed by the unstructured environment.

Saturday, February 25, 2023

SLAM! A conversation with ChatGPT

The following is a conversation with OpenAI's ChatGPT about simultaneous localization and mapping (SLAM).  I will be updating the transcript with links as well as verifying ChatGPT's responses.

Prompt

Act as an expert in robotics on the topic of simultaneous localization and mapping, and an expert blog writer. The audience for this blog is technical professionals who are interested in learning about the latest advancements in robotics. Provide a comprehensive overview of the most popular localization and mapping processes, including their strengths and weaknesses. Include real-life examples and case studies to illustrate how these processes have been successfully used in various industries.

[NOTE:  OpenAI's ChatGPT (GPT-3.5) is trained on data up to September 2021. At the time of this conversation, it was almost 18 months out of date.]


Introduction to SLAM!

Simultaneous Localization and Mapping (SLAM) is a critical aspect of robotics that involves generating a map of an unknown environment while simultaneously localizing the robot within that environment. SLAM is used in a wide range of applications, including autonomous vehicles, drones, and robotics used in manufacturing, agriculture, and healthcare. In this blog, we will provide an overview of the most popular localization and mapping processes, along with their strengths and weaknesses.

Extended Kalman Filter (EKF) SLAM:

EKF-SLAM is a popular approach to SLAM that uses a Kalman filter to estimate the position and orientation of the robot, along with the location of landmarks in the environment. EKF-SLAM is relatively simple to implement and has been used in a wide range of applications, including mobile robotics and autonomous vehicles. However, EKF-SLAM assumes that the environment is static, which may not always be the case. Additionally, EKF-SLAM may struggle to handle large and complex environments.

Graph-based SLAM:

Graph-based SLAM represents the environment as a graph, where nodes represent the robot's position and orientation and landmarks in the environment. The edges between the nodes represent the constraints that must be satisfied to ensure consistency between the robot's estimate of its location and the landmarks in the environment. Graph-based SLAM is relatively robust and can handle complex environments. However, it can be computationally expensive and may struggle to handle large-scale environments.

Particle Filter SLAM:

Particle Filter SLAM is a probabilistic approach to SLAM that uses a set of particles to represent the possible locations of the robot in the environment. The particles are updated based on sensor measurements, and the particle set is used to generate a map of the environment. Particle Filter SLAM can handle non-linearities and non-Gaussian noise in the sensor measurements, making it a popular choice for SLAM in unstructured environments. However, Particle Filter SLAM can struggle to handle multi-modal distributions and can be computationally expensive.

Real-life examples and case studies:

Autonomous vehicles:

SLAM is a critical component of autonomous vehicles. Autonomous vehicles use SLAM to create a map of their surroundings and to determine their position within that map. For example, Waymo's autonomous vehicles use a combination of Graph-based SLAM and EKF-SLAM to navigate complex environments.

Drones:

SLAM is also used in drones to navigate and map unknown environments. For example, DJI's Phantom 4 Pro drone uses a combination of EKF-SLAM and Visual SLAM to navigate and map environments in real-time.

Manufacturing:

SLAM is also used in manufacturing to automate material handling and assembly tasks. For example, Omron's LD mobile robot uses a combination of Graph-based SLAM and LiDAR to navigate and map environments in manufacturing facilities.

Conclusion:

In conclusion, SLAM is a critical aspect of robotics that involves generating a map of an unknown environment while simultaneously localizing the robot within that environment. EKF-SLAM, Graph-based SLAM, and Particle Filter SLAM are the most popular approaches to SLAM, each with their strengths and weaknesses. Real-life examples and case studies demonstrate the importance of SLAM in a wide range of industries, including autonomous vehicles, drones, and manufacturing. As robotics technology continues to advance, we can expect to see even more sophisticated approaches to SLAM emerge, enabling robots to navigate and operate in increasingly complex environments.


In Depth with Particle filter SLAM:

explain Particle Filter SLAM in simple terms, provide additional real world examples.

Particle Filter SLAM is a probabilistic approach to Simultaneous Localization and Mapping (SLAM) in robotics. In Particle Filter SLAM, the robot's position and orientation, as well as the location of landmarks in the environment, are represented by a set of particles. Each particle represents a possible location of the robot and landmarks, and the set of particles is updated over time based on sensor measurements.

Particle Filter SLAM is commonly used in unstructured environments, where the sensor measurements can be noisy and the robot's movements can be unpredictable. Unlike other SLAM approaches that assume Gaussian noise and linear motion, Particle Filter SLAM can handle non-linearities and non-Gaussian noise.


The basic steps in Particle Filter SLAM are as follows:

Initialization: The particles are randomly placed in the environment, representing possible locations of the robot and landmarks.

Prediction: The particles are updated based on the robot's movement. Each particle is moved according to the robot's motion model, which accounts for the uncertainty in the robot's movement.

Measurement Update: The particles are updated based on sensor measurements. The sensor measurements are used to compute the likelihood of each particle, which represents how well the particle's position and orientation match the sensor measurements.

Resampling: The particles are resampled based on their likelihood. Particles with higher likelihood are more likely to be selected, while particles with lower likelihood are less likely to be selected. This step helps to reduce the number of particles with low likelihood, improving the accuracy of the particle set.
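[Note: the four steps above map almost line-for-line onto code. Here is a toy 1-D localization sketch of my own; the noise levels and measurement model are made up for illustration.]

```cpp
// Toy 1-D particle filter mirroring the four steps above: initialize,
// predict, weight by measurement, resample.
#include <cmath>
#include <random>
#include <vector>

struct Particle { double x; double weight; };

std::mt19937 rng{42};

void predict(std::vector<Particle>& ps, double motion) {
  std::normal_distribution<double> noise(0.0, 0.1);  // motion uncertainty
  for (auto& p : ps) p.x += motion + noise(rng);
}

void weightByMeasurement(std::vector<Particle>& ps, double z) {
  const double sigma = 0.5;  // assumed Gaussian measurement noise
  for (auto& p : ps) {
    double err = z - p.x;
    p.weight = std::exp(-(err * err) / (2 * sigma * sigma));
  }
}

std::vector<Particle> resample(const std::vector<Particle>& ps) {
  // Draw in proportion to weight: likely particles survive and multiply.
  std::vector<double> w;
  for (const auto& p : ps) w.push_back(p.weight);
  std::discrete_distribution<size_t> pick(w.begin(), w.end());
  std::vector<Particle> out;
  for (size_t i = 0; i < ps.size(); i++)
    out.push_back({ps[pick(rng)].x, 1.0});
  return out;
}
```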


Particle Filter SLAM has been used in a wide range of applications, including robotics, computer vision, and navigation. For example, it is used in drones to navigate and map unknown environments. It is also used in mobile robots for automated material handling and assembly tasks in manufacturing. Additionally, it has been used in virtual reality and augmented reality to track the movement of a user's head and hands in real-time.

Overall, Particle Filter SLAM is a powerful approach to SLAM that can handle non-linearities and non-Gaussian noise, making it well-suited for unstructured environments. Its ability to provide accurate and real-time localization and mapping makes it a critical component of many robotics and navigation systems.


Cameras and SLAM

what methods of SLAM are best used with computer vision camera as opposed to lidar or other sensors?

When it comes to SLAM, the choice of sensor plays a critical role in determining the best approach to use. Computer vision cameras are commonly used in SLAM systems, and there are several methods that are particularly well-suited to this type of sensor.

One of the most popular methods for SLAM using a camera is visual odometry. Visual odometry relies on detecting and tracking visual features in consecutive frames of a camera feed to estimate the camera's motion. This method is well-suited to cameras because they are able to capture high-resolution images at high frame rates, allowing for accurate and detailed feature tracking. Visual odometry is commonly used in applications such as drones, autonomous vehicles, and augmented reality.
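[Note: in OpenCV terms, a single frame-to-frame visual odometry step looks roughly like the sketch below, which is my addition. The camera intrinsics are placeholders, and the recovered translation is only known up to scale - a standard monocular limitation.]

```cpp
// One frame-to-frame step of monocular visual odometry with OpenCV:
// track features, estimate the essential matrix, recover rotation and
// translation. Focal length and principal point are placeholders.
#include <opencv2/opencv.hpp>

void visualOdometryStep(const cv::Mat& prevGray, const cv::Mat& currGray,
                        cv::Mat& R, cv::Mat& t) {
  // 1. Detect corners in the previous frame and track them into the
  //    current frame with pyramidal Lucas-Kanade optical flow.
  std::vector<cv::Point2f> prevPts, currPts;
  cv::goodFeaturesToTrack(prevGray, prevPts, 500, 0.01, 10);
  std::vector<uchar> status;
  std::vector<float> err;
  cv::calcOpticalFlowPyrLK(prevGray, currGray, prevPts, currPts, status, err);

  // 2. Keep only the successfully tracked points.
  std::vector<cv::Point2f> p1, p2;
  for (size_t i = 0; i < status.size(); i++)
    if (status[i]) { p1.push_back(prevPts[i]); p2.push_back(currPts[i]); }

  // 3. Estimate camera motion (up to scale) from the point correspondences.
  double focal = 700.0;              // placeholder intrinsics
  cv::Point2d pp(320.0, 240.0);
  cv::Mat mask;
  cv::Mat E = cv::findEssentialMat(p1, p2, focal, pp,
                                   cv::RANSAC, 0.999, 1.0, mask);
  cv::recoverPose(E, p1, p2, R, t, focal, pp, mask);
}
```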

Another method that is often used with cameras is visual SLAM. Visual SLAM uses the same principles as visual odometry, but in addition to estimating the camera's motion, it also builds a map of the environment using the visual features detected by the camera. This map can then be used for localization and navigation. Visual SLAM is often used in applications such as robotics, virtual and augmented reality, and 3D scanning.

Another approach that can be used with cameras is Structure from Motion (SfM). SfM uses a series of images captured by a camera to reconstruct a 3D model of the scene. SfM is well-suited to cameras because they are able to capture high-resolution images, allowing for accurate 3D reconstruction. SfM is commonly used in applications such as 3D modeling and mapping, virtual and augmented reality, and cultural heritage preservation.

While cameras are an excellent sensor for SLAM, there are also other sensors that can be used depending on the application. Lidar, for example, is often used in robotics and autonomous vehicles to build 3D maps of the environment. Other sensors such as radar, sonar, and GPS can also be used in SLAM systems depending on the application and the environment.

In summary, when it comes to SLAM with cameras, visual odometry, visual SLAM, and Structure from Motion are among the most popular methods used. However, the choice of method will ultimately depend on the specific application and the characteristics of the environment being mapped.


SLAM and OpenSource

what open source software projects are implementing SLAM?

There are several open source software projects that implement SLAM algorithms. Here are some of the most popular ones:

ROS (Robot Operating System): ROS is a popular open source framework for robotics development that includes a wide range of tools and libraries for SLAM. ROS provides a range of SLAM algorithms, including gmapping, hector_slam, and cartographer. These algorithms are widely used in applications such as autonomous vehicles, drones, and mobile robots.

OpenSLAM: OpenSLAM is a collaborative open source project that provides a wide range of SLAM algorithms, including visual SLAM, RGB-D SLAM, and graph-based SLAM. The project includes a library of C++ code that can be used to develop SLAM applications.

GTSAM (Georgia Tech Smoothing and Mapping): GTSAM is a library of C++ code that provides tools for implementing SLAM algorithms. GTSAM includes a range of algorithms for factor graph optimization, including pose graph optimization, bundle adjustment, and factor graph smoothing.

ORB-SLAM: ORB-SLAM is a visual SLAM algorithm that uses feature points detected in camera images to perform localization and mapping. ORB-SLAM is implemented in C++ and has been optimized for real-time performance. It is widely used in applications such as autonomous vehicles and augmented reality.

RTAB-Map (Real-Time Appearance-Based Mapping): RTAB-Map is a real-time SLAM algorithm that uses a database of visual features to perform localization and mapping. RTAB-Map is implemented in C++ and is optimized for real-time performance. It is widely used in applications such as drones, mobile robots, and virtual reality.

These are just a few examples of the many open source software projects that implement SLAM algorithms. Each of these projects provides a range of tools and libraries that can be used to develop SLAM applications, making it easier for developers to get started with SLAM and build their own custom applications.


OpenSource SLAM using Graph

which open source projects are using Graph-based SLAM?

Graph-based SLAM is a popular approach to solving the SLAM problem, and there are several open source software projects that use this approach. Here are some examples of open source projects that use graph-based SLAM:

GTSAM (Georgia Tech Smoothing and Mapping): GTSAM is a C++ library that provides a suite of tools for implementing graph-based SLAM algorithms. GTSAM includes a range of algorithms for factor graph optimization, including pose graph optimization, bundle adjustment, and factor graph smoothing.

OpenVSLAM: OpenVSLAM is an open source visual SLAM system that uses graph-based optimization to estimate the camera trajectory and build a map of the environment. OpenVSLAM uses a bundle adjustment algorithm to optimize the pose graph and minimize the reprojection error.

Cartographer: Cartographer is a 2D and 3D SLAM system that uses graph-based optimization to estimate the robot's pose and build a map of the environment. Cartographer is implemented in C++ and is designed to work with a variety of sensors, including lidar, RGB-D cameras, and IMUs.

SLAM++: SLAM++ is an open source SLAM library that uses graph-based optimization to estimate the robot's pose and build a map of the environment. SLAM++ is implemented in C++ and includes a range of features, such as loop closure detection, map merging, and dynamic object tracking.

iSAM: iSAM (incremental Smoothing and Mapping) is a C++ library that provides tools for solving graph-based SLAM problems in an incremental manner. iSAM is designed to work with large-scale SLAM problems and provides a range of features, such as loop closure detection and bundle adjustment.

These are just a few examples of open source projects that use graph-based SLAM. Graph-based SLAM is a powerful approach to solving the SLAM problem, and it has been widely used in applications such as robotics, autonomous vehicles, and augmented reality.


SLAM in the Real World - other than driving

provide real world examples where SLAM has been used, omit any autonomous driving and manufacturing examples.

Certainly! SLAM has been used in many applications beyond autonomous driving and manufacturing. Here are a few examples:

Robotics: SLAM is commonly used in robotics for navigation and mapping. For example, SLAM can be used in unmanned aerial vehicles (UAVs) to create maps of the environment and navigate through unknown terrain.

Augmented reality: SLAM is used in augmented reality to create virtual 3D models of real-world environments. For example, the ARKit and ARCore software development kits for iOS and Android respectively use SLAM to create immersive AR experiences.

Indoor mapping: SLAM can be used to create accurate maps of indoor environments. This can be useful for applications such as facility management, indoor navigation, and asset tracking. For example, the company IndoorAtlas uses SLAM to create maps of indoor spaces that can be used for indoor navigation.

Archaeology: SLAM can be used to create detailed 3D models of archaeological sites. For example, the company CyArk uses SLAM to create high-resolution 3D models of historic sites and cultural artifacts.

Search and rescue: SLAM can be used to create maps of disaster areas to aid in search and rescue operations. For example, researchers at the University of California, Berkeley have used SLAM to create maps of disaster areas that can be used to guide rescue teams and locate survivors.

These are just a few examples of the many applications of SLAM beyond autonomous driving and manufacturing. SLAM is a versatile technology that can be used in many different industries and applications to create accurate maps and navigate through unknown environments.


SLAM in Agriculture

how has SLAM been used in agriculture, omit autonomous tractors.

SLAM has many potential applications in agriculture, beyond autonomous tractors. Here are a few examples of how SLAM has been used in agriculture:

Crop mapping and monitoring: SLAM can be used to create high-resolution maps of crop fields, which can be used to monitor crop health and detect issues such as water stress or nutrient deficiencies. For example, researchers at the University of California, Davis have used SLAM to create maps of almond orchards that can be used to monitor the health and growth of the trees.

Autonomous drone navigation: SLAM can be used to help drones navigate through complex environments such as crop fields. This can be useful for applications such as crop monitoring and mapping, as well as pesticide and fertilizer application. For example, researchers at the University of Illinois at Urbana-Champaign have used SLAM to create a drone navigation system that can fly through apple orchards and detect signs of disease or pest infestations.

Fruit harvesting: SLAM can be used to help robots or autonomous vehicles navigate through orchards and pick fruit. For example, researchers at the University of Plymouth in the UK have used SLAM to create a robotic fruit picker that can navigate through strawberry fields and pick ripe fruit.

Precision agriculture: SLAM can be used to create high-resolution maps of soil properties, which can be used to create precise prescriptions for fertilizer and pesticide application. For example, researchers at the University of Nebraska-Lincoln have used SLAM to create maps of soil compaction that can be used to optimize tillage practices and improve crop yields.

These are just a few examples of how SLAM has been used in agriculture. SLAM has the potential to revolutionize agriculture by providing accurate mapping and navigation capabilities that can help farmers optimize their practices and improve crop yields.


Sensors for SLAM

what is the optimal sensor configuration for accurate SLAM in an unstructured environments?

The optimal sensor configuration for accurate SLAM in unstructured environments depends on a variety of factors, including the size and complexity of the environment, the type of features in the environment, and the mobility and computational resources of the robot or platform performing the SLAM.

In general, a combination of different sensors is often used to achieve the best results. These sensors may include:

LIDAR: LIDAR sensors use lasers to create high-resolution 3D maps of the environment. LIDAR is particularly useful in unstructured environments because it can provide accurate depth information, even in low-light or low-visibility conditions.

RGB-D cameras: RGB-D cameras, such as the Microsoft Kinect, use both color and depth information to create 3D maps of the environment. RGB-D cameras can be useful for SLAM in unstructured environments because they provide both visual and depth information.

Stereo cameras: Stereo cameras use two cameras to create a 3D image of the environment. Stereo cameras can be useful for SLAM in unstructured environments because they provide accurate depth information and can work in a variety of lighting conditions.

Inertial sensors: Inertial sensors, such as accelerometers and gyroscopes, can be used to track the movement of the robot or platform performing SLAM. Inertial sensors can be useful for SLAM in unstructured environments because they provide accurate motion information, even in areas where other sensors may not work well.

GPS: GPS sensors can be used to provide global position information for the robot or platform performing SLAM. While GPS may not be accurate enough for SLAM on its own, it can be useful as a complementary sensor.

Ultimately, the optimal sensor configuration for SLAM in unstructured environments will depend on the specific requirements of the application. In some cases, a combination of all of these sensors may be needed to achieve the best results. In other cases, a simpler sensor configuration may be sufficient.




Thursday, February 16, 2023

SWARM - A conversation with ChatGPT

The following is a conversation with OpenAI's ChatGPT about swarm robotic systems.  I will be updating the transcript with links as well as verifying ChatGPT's responses.

Prompt:

Act as an expert in robotics on the topic of swarm robotics, and an expert blog writer. The audience for this blog is technical professionals who are interested in learning about the latest advancements in swarm robotics. Provide a comprehensive overview of the most popular swarm management processes and software packages, including their strengths and weaknesses. Include real-life examples and case studies to illustrate how these processes have been successfully used in various industries.

[NOTE:  OpenAI's ChatGPT (GPT-3.5) is trained on data up to September 2021. At the time of this conversation, it was almost 18 months out of date.]

Introduction to Swarm Robotics

Swarm robotics is a rapidly growing field that involves the coordination of multiple robots to achieve a common goal. The approach is inspired by the collective behavior of social animals such as ants, bees, and termites. Swarm robotics offers several advantages over traditional robotics, including increased flexibility, adaptability, and fault tolerance.

Swarm robotics has found applications in various fields, including agriculture, construction, search and rescue, and surveillance. In this blog, we will discuss the most popular swarm management processes and software packages used in swarm robotics and their strengths and weaknesses.

Particle Swarm Optimization (PSO)

Particle swarm optimization is a popular algorithm used in swarm robotics for optimization problems. The algorithm is based on the behavior of bird flocks and fish schools. PSO is used to optimize a function by iteratively adjusting a population of particles or agents. The algorithm's strength lies in its ability to converge to the optimal solution quickly, but it may get stuck in local optima.

PSO has been applied in various industries, such as in the energy industry, to optimize the placement and sizing of renewable energy systems. A real-life example of PSO in action is the optimization of wind turbines in offshore wind farms.
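[Note: the core PSO update is only a few lines. Here is a toy minimization sketch of my own; the inertia and acceleration coefficients are typical textbook values, and the objective function is a stand-in.]

```cpp
// Core particle swarm optimization loop minimizing a toy 1-D function.
#include <random>
#include <vector>

double objective(double x) { return (x - 3.0) * (x - 3.0); }  // stand-in

int main() {
  const double w = 0.7, c1 = 1.5, c2 = 1.5;  // inertia + acceleration terms
  std::mt19937 rng{1};
  std::uniform_real_distribution<double> u01(0.0, 1.0), init(-10.0, 10.0);

  const int N = 30;
  std::vector<double> x(N), v(N, 0.0), pbest(N);
  for (int i = 0; i < N; i++) { x[i] = init(rng); pbest[i] = x[i]; }
  double gbest = pbest[0];

  for (int iter = 0; iter < 100; iter++) {
    for (int i = 0; i < N; i++) {
      // Velocity pulls each particle toward its own best and the swarm's.
      v[i] = w * v[i] + c1 * u01(rng) * (pbest[i] - x[i])
                      + c2 * u01(rng) * (gbest - x[i]);
      x[i] += v[i];
      if (objective(x[i]) < objective(pbest[i])) pbest[i] = x[i];
      if (objective(x[i]) < objective(gbest))    gbest = x[i];
    }
  }
  return 0;
}
```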

Ant Colony Optimization (ACO)

Ant colony optimization is another popular algorithm used in swarm robotics. The algorithm is based on the foraging behavior of ants. ACO is used to solve combinatorial optimization problems. The algorithm's strength lies in its ability to find the optimal solution by exploring the search space thoroughly. However, ACO may take longer to converge to the optimal solution.

ACO has been applied in various industries, such as in the logistics industry, to optimize transportation routes. A real-life example of ACO in action is the optimization of delivery routes for a package delivery company.
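[Note: for comparison, the heart of ACO is a probabilistic edge choice weighted by pheromone, plus evaporation and reinforcement. A sketch of mine; alpha, beta, and the evaporation rate are conventional textbook parameters, not tuned for any real problem.]

```cpp
// Core of ant colony optimization for route problems: pick the next city
// with probability proportional to pheromone^alpha * (1/distance)^beta,
// then evaporate and deposit pheromone along the finished tour.
#include <cmath>
#include <random>
#include <vector>

std::mt19937 rng{7};

int chooseNextCity(int current, const std::vector<bool>& visited,
                   const std::vector<std::vector<double>>& pheromone,
                   const std::vector<std::vector<double>>& dist,
                   double alpha = 1.0, double beta = 2.0) {
  std::vector<double> score(visited.size(), 0.0);
  for (size_t j = 0; j < visited.size(); j++)
    if (!visited[j])
      score[j] = std::pow(pheromone[current][j], alpha)
               * std::pow(1.0 / dist[current][j], beta);
  std::discrete_distribution<int> pick(score.begin(), score.end());
  return pick(rng);  // cities with zero score are never chosen
}

void evaporateAndDeposit(std::vector<std::vector<double>>& pheromone,
                         const std::vector<int>& tour, double tourLength,
                         double rho = 0.5) {
  for (auto& row : pheromone)
    for (auto& tau : row) tau *= (1.0 - rho);               // evaporation
  for (size_t i = 0; i + 1 < tour.size(); i++)
    pheromone[tour[i]][tour[i + 1]] += 1.0 / tourLength;    // reinforcement
}
```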

Artificial Bee Colony (ABC)

Artificial bee colony is a metaheuristic algorithm inspired by the behavior of honeybees. ABC is used to solve optimization problems by iteratively adjusting a population of candidate solutions. The algorithm's strength lies in its simplicity and ability to explore the search space effectively. However, ABC may converge slowly to the optimal solution.

ABC has been applied in various industries, such as in the manufacturing industry, to optimize the production process. A real-life example of ABC in action is the optimization of a manufacturing process to minimize the production cost.

Robotic Operating System (ROS)

The Robotic Operating System (ROS) is a popular software package used in swarm robotics. ROS provides a set of tools and libraries for developing and managing robotic systems. The system's strength lies in its modularity, flexibility, and compatibility with a wide range of sensors and hardware. However, ROS may have a steep learning curve for new users.

ROS has been applied in various industries, such as in the healthcare industry, to develop robotic systems for assisting with surgical procedures. A real-life example of ROS in action is the development of the da Vinci surgical system, which is used in minimally invasive surgeries.

Multi-Agent Platform (MAP)

The Multi-Agent Platform (MAP) is a software package designed for developing and managing multi-agent systems. MAP provides a set of tools and libraries for developing and managing swarm robotics systems. The platform's strength lies in its ability to manage large-scale multi-agent systems and its support for fault tolerance. However, MAP may have limited compatibility with certain sensors and hardware.

MAP has been applied in various industries, such as in the defense industry, to develop swarm robotics systems for surveillance and reconnaissance missions. A real-life example of MAP in action is the development of a swarm of autonomous drones for border surveillance.

Conclusion

Swarm robotics is a rapidly growing field that offers several advantages over traditional robotics. In this blog, we discussed the most popular swarm management processes


About ROS?

Me: what packages are used for Swarm management in ROS?

ROS (Robot Operating System) is a widely used software framework for developing robotic systems, including swarm robotics systems. There are several ROS packages that are specifically designed for swarm management in ROS.

Here are some popular ROS packages for swarm management:

swarm_behaviors: This package provides high-level swarm behaviors for robots in a swarm, such as maintaining a formation, exploring a space, and coordinating tasks. The package is designed to be used with other ROS packages to build more complex swarm behaviors.

swarm_msgs: This package provides message types for swarm robotics systems, such as swarm commands, swarm state information, and swarm sensor data. These message types enable communication and coordination between robots in a swarm.

swarm_simulator: This package provides a simulator for swarm robotics systems in ROS. The simulator allows users to test and validate swarm behaviors and algorithms in a simulated environment before deploying them on real robots.

swarm_navigation: This package provides navigation algorithms for robots in a swarm, such as swarm path planning and obstacle avoidance. The package is designed to work with other ROS packages to provide more complex swarm navigation behaviors.

swarm_control: This package provides low-level control algorithms for robots in a swarm, such as formation control and synchronization. The package is designed to be used with other ROS packages to build more complex swarm control behaviors.

These packages are just a few examples of the many ROS packages available for swarm management. Depending on the specific application, different combinations of packages may be used to build a complete swarm robotics system in ROS.


Communications for Swarms:

ME:  what technology and methods are used for swarm communications in areas of 100s of meters? what protocols of communication are used?

Swarm communications in areas of 100s of meters require a robust and reliable communication system that can support a large number of robots in a swarm. There are several technologies and methods that can be used for swarm communications, including wireless mesh networks, ad-hoc networks, and satellite communications.

Wireless mesh networks are a popular technology for swarm communications. In this approach, each robot in the swarm acts as a node in a wireless network, and communication is facilitated through peer-to-peer links between nodes. Mesh networks are highly scalable and can support a large number of nodes, making them ideal for swarm robotics applications.

Ad-hoc networks are another technology that can be used for swarm communications. In this approach, robots in the swarm establish direct communication links with nearby robots to create a decentralized communication network. Ad-hoc networks are highly flexible and can adapt to changes in the swarm's structure, making them ideal for dynamic swarm robotics applications.

Satellite communications are a technology that can be used for swarm communications in remote areas where other communication technologies are not available. In this approach, robots in the swarm communicate with a satellite to transmit data and commands to other robots in the swarm. Satellite communications are highly reliable but may have high latency due to the long distance between the swarm and the satellite.

In terms of communication protocols, several protocols are used in swarm robotics, including Bluetooth, Zigbee, and IEEE 802.11. Bluetooth is a short-range wireless protocol that can be used for communication between nearby robots in a swarm. Zigbee is a low-power wireless protocol that can be used for communication between robots in a swarm over longer distances. IEEE 802.11 is a high-speed wireless protocol that can be used for communication between robots in a swarm over long distances.

In summary, the choice of communication technology and protocols for swarm robotics applications depends on several factors, including the size and structure of the swarm, the range of communication required, and the environmental conditions in which the swarm operates.


Swarm with Mesh networks

ME: what swarm robot management systems support mesh networks?

There are several swarm robot management systems that support mesh networks for communication between swarm robots. Here are a few examples:

Kilobot: Kilobot is a low-cost robot designed for swarm robotics research. The Kilobot system includes a custom wireless mesh network protocol that allows the robots to communicate with each other and coordinate their behaviors.

DroNet: DroNet is a swarm robotics platform designed for monitoring and surveillance applications. The system includes a wireless mesh network protocol that enables communication between the robots in the swarm.

Buzz: Buzz is a programming language and runtime system for swarm robotics applications. The Buzz system includes a wireless mesh network protocol that allows the robots to communicate with each other and share information.

Aseba: Aseba is a programming language and runtime system for swarm robotics applications. The Aseba system includes a wireless mesh network protocol that allows the robots to communicate with each other and coordinate their behaviors.

MORSE: MORSE is a simulation platform for swarm robotics research. The MORSE system includes a wireless mesh network protocol that simulates the communication between the robots in the swarm.

These are just a few examples of swarm robot management systems that support mesh networks for communication between swarm robots. Depending on the specific application, different swarm robot management systems may be more appropriate.


Swarm in Industry:

ME: what industries has swarm robotics been applied to?

Swarm robotics has been applied to a wide range of industries, from manufacturing and logistics to agriculture and environmental monitoring. Here are some examples of industries where swarm robotics has been successfully applied:

Manufacturing: In manufacturing, swarm robotics has been used to optimize production processes and improve product quality. For example, swarm robots can work together to assemble complex products, such as electronic devices or automobiles, with high precision and efficiency.

Logistics: In logistics, swarm robotics has been used to automate warehouse operations and improve supply chain management. For example, swarm robots can work together to pick and pack items in a warehouse, reducing the need for human labor and improving efficiency.

Agriculture: In agriculture, swarm robotics has been used to monitor and manage crops, as well as perform tasks such as planting and harvesting. For example, swarm robots can work together to map and monitor fields, identify areas where crops are struggling, and apply targeted treatments to improve crop health.

Environmental monitoring: In environmental monitoring, swarm robotics has been used to collect data and monitor sensitive environments, such as forests, oceans, and wetlands. For example, swarm robots can work together to collect water or soil samples, monitor air quality, and track wildlife populations.

Search and rescue: In search and rescue operations, swarm robotics has been used to locate and rescue people in emergency situations. For example, swarm robots can work together to search for survivors in the rubble of a collapsed building or to locate people stranded in a remote wilderness area.

These are just a few examples of the many industries where swarm robotics has been applied. As swarm robotics technology continues to advance, we can expect to see even more applications in the future.


Swarm and SLAM

ME: what systems combine swarm robotics management with larger mapping (SLAM)?

Simultaneous Localization and Mapping (SLAM) is an important technology for mobile robotics that allows robots to build a map of their environment while simultaneously localizing themselves within that map. Combining SLAM with swarm robotics can be particularly useful in applications where a large number of robots need to navigate and map an environment. Here are some examples of swarm robotics systems that combine SLAM with swarm management:

ARGoS: ARGoS is a multi-robot simulation framework that supports swarm robotics research. The ARGoS system includes a SLAM module that allows the robots to build a map of their environment and localize themselves within that map. The system also includes a swarm management module that allows the robots to communicate and coordinate their behaviors.

Robot Operating System (ROS): ROS is a popular platform for mobile robotics research that includes many modules and tools for building complex robotic systems. ROS includes several SLAM packages, such as GMapping and Hector SLAM, that can be used in conjunction with swarm robotics management packages, such as Swarm Robotics Tool Kit (SRTK), to build large-scale robotic systems.

Robotic Operating System for Everyone (ROS2): ROS2 is a newer version of ROS that includes many improvements and updates. ROS2 also includes several SLAM packages, such as Navigation2 and Cartographer, that can be used in conjunction with swarm robotics management packages, such as Swarm Intelligence in Cellular Systems (SWARMS), to build complex robotic systems.

Distributed Autonomous Robotic Systems (DARS): DARS is a research program that aims to develop decentralized, self-organizing robotic systems. The DARS program includes several projects that combine SLAM with swarm robotics management to create large-scale robotic systems that can operate autonomously.

These are just a few examples of swarm robotics systems that combine SLAM with swarm management. There are many other systems and research projects that are exploring this area, and as the technology continues to advance, we can expect to see even more sophisticated swarm robotics systems that can operate in complex and dynamic environments.


SWARM software that is commercially available

ME: what commercially available software systems provide swam management?

There are several commercially available software systems that provide swarm management capabilities. Here are some examples:

Robot Operating System (ROS): ROS is a popular open-source platform for building robotic systems. ROS includes several packages for swarm robotics, such as the Swarm Robotics Tool Kit (SRTK) and the ARGoS simulator. ROS can be used for a wide range of robotic applications, including swarm robotics, and is supported by a large community of developers.

Blue River Technology: Blue River Technology is a company that develops precision agriculture technologies, including a swarm management system called See & Spray. See & Spray uses computer vision and machine learning to identify and target weeds in agricultural fields, and uses a swarm of robots to apply targeted herbicide to the weeds. The system is designed to reduce the amount of herbicide used in agriculture and improve crop yields.

Aerialtronics: Aerialtronics is a company that develops drone technologies for a wide range of applications, including inspection, surveillance, and security. Aerialtronics' Altura Zenith drone system includes a swarm management module that allows multiple drones to be controlled and coordinated by a single operator. The system can be used for applications such as search and rescue, disaster response, and infrastructure inspection.

Auterion: Auterion is a company that develops open-source software for drone and robotics applications. Auterion's Dronecode platform includes a swarm management module that allows multiple drones to be controlled and coordinated. The platform is designed to be flexible and customizable, and can be used for a wide range of applications, including agriculture, inspection, and search and rescue.

These are just a few examples of commercially available software systems that provide swarm management capabilities. There are many other companies and platforms that are developing swarm robotics technologies, and as the field continues to evolve, we can expect to see even more advanced and sophisticated systems become available.


Swarm systems in unstructured environments

Me: which swarm systems operate best in unstructured and non-static environments?

Swarm systems that operate best in unstructured and non-static environments are those that are designed to be adaptive and resilient. Here are some examples of swarm systems that are well-suited for such environments:

Kilobot: Kilobot is a low-cost swarm robot developed by researchers at Harvard University. The Kilobots are designed to operate in large groups and are capable of self-organizing and adapting to changes in their environment. The robots use a simple communication protocol to coordinate their behaviors, and can perform a variety of tasks, such as pattern formation and aggregation.

[Note: Kilobot is not suitable for unstructured environments. (-1 Chat GPT!)]

SWARMS: SWARMS (Swarm Intelligence in Cellular Systems) is a software platform developed by researchers at the University of Sheffield that allows robots to self-organize and adapt to changing environments. The platform includes a swarm management module that allows the robots to communicate and coordinate their behaviors. The SWARMS platform has been used in a variety of applications, such as environmental monitoring and search and rescue.

RoboBees: RoboBees are a type of micro-robot developed by researchers at Harvard University that are designed to mimic the behavior of bees. The robots are capable of flying and can be used for a variety of applications, such as environmental monitoring and crop pollination. The RoboBees are designed to operate in large swarms and can adapt to changes in their environment.

AMR Swarm: The AMR Swarm is a swarm robotics platform developed by researchers at the University of Lincoln that is designed for use in industrial environments. The platform includes a swarm management module that allows the robots to communicate and coordinate their behaviors, and is designed to be adaptable to changes in the environment. The AMR Swarm has been used in a variety of applications, such as warehouse logistics and manufacturing.

These are just a few examples of swarm systems that are well-suited for unstructured and non-static environments. In general, swarm systems that are designed to be adaptive, resilient, and capable of self-organization are likely to perform well in such environments. Additionally, systems that use sensors and machine learning algorithms to perceive and interpret their environment can further enhance their adaptability and resilience.