Monday, October 28, 2024

SLAM! - The Basics

[Note: this article is mostly a conversation I had with an AI. I liked the output, so I am including it here. It may not be the most genuine of blog posts, but it is something I will refer back to, so I am sharing it with you as well.]

A Simple SLAM Simulation

SLAM, which stands for Simultaneous Localization and Mapping, is a critical concept in robotics, especially in autonomous systems where the robot must navigate through an unknown environment. Here’s a detailed breakdown of the basic components and methods involved in SLAM:

What is SLAM?

SLAM is the process by which a robot can:

1.  Map an Environment: Create a representation (usually a 2D or 3D map) of an unfamiliar area.

2.  Localize Itself: Determine its own position within that map, in real time.

SLAM is a solution to the "chicken and egg" problem of robotics: To map the environment, a robot needs to know where it is, but to know where it is, it needs a map. SLAM solves both tasks simultaneously.


Key Components of SLAM

1. Sensors

   - Lidar: Light Detection and Ranging (uses laser beams to measure distances).

   - Cameras: Visual or stereo cameras for Visual SLAM (VSLAM).

   - Ultrasonic Sensors: Simple but effective for obstacle detection.

   - IMUs (Inertial Measurement Units): Track orientation and movement changes.

   - Encoders: Track wheel or joint rotation, providing odometry for estimating movement.


2. State Estimation

   - Pose: Position (x, y, z) and orientation (yaw, pitch, roll) are estimated continuously.

   - Common methods for estimating this are Kalman Filters or Particle Filters. For complex non-linear systems, the Extended Kalman Filter (EKF) or Unscented Kalman Filter (UKF) is often used. (A minimal pose-prediction sketch appears after this section.)


3. Mapping

   - The robot gathers spatial information about obstacles and free space.

   - Maps are usually grid-based (like occupancy grids) or feature-based. 

   - Occupancy Grid Mapping divides space into a grid of cells, marking each cell as occupied, free, or unknown.


4. Loop Closure

   - Loop closure is the robot's ability to recognize a location it has visited before.

   - Without recognizing these loops, errors can accumulate, leading to an inaccurate map.

   - Visual features, like walls or landmarks, are often used for loop closure.
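
As a concrete illustration of the state-estimation piece above, here is the pose-prediction sketch mentioned earlier: a minimal 2D pose and a dead-reckoning update from odometry with additive noise. The struct, function name, and noise values are assumptions for illustration only, not drawn from any particular SLAM library.

#include <cmath>
#include <random>

// Minimal 2D pose: position (x, y) and heading theta.
struct Pose2D {
  double x = 0.0, y = 0.0, theta = 0.0;
};

// Dead-reckoning prediction from odometry (distance travelled, change in heading),
// with additive Gaussian noise to mimic wheel slippage and drift.
Pose2D predictPose(const Pose2D& p, double dist, double dtheta, std::mt19937& rng) {
  std::normal_distribution<double> distNoise(0.0, 0.02 * std::fabs(dist) + 1e-6);
  std::normal_distribution<double> turnNoise(0.0, 0.01 * std::fabs(dtheta) + 1e-6);

  Pose2D next = p;
  next.theta += dtheta + turnNoise(rng);
  double noisyDist = dist + distNoise(rng);
  next.x += noisyDist * std::cos(next.theta);
  next.y += noisyDist * std::sin(next.theta);
  return next;
}

A filter such as the EKF keeps, alongside this predicted pose, a covariance that grows with every noisy prediction and shrinks when a measurement corrects it.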


Mathematical Foundations

1. Probabilistic Approach:

   - SLAM typically involves Bayesian Estimation. A robot's state is represented as a probability distribution that evolves as new data arrives from the sensors.

   - Bayes Filters and Markov Localization are used to manage the uncertainty of sensor data and robot motion.


2. State-Space Representation:

   - The SLAM problem can be represented in terms of states and measurements, where states represent robot and feature positions and measurements represent distances and angles to landmarks.

   - The sensor model and motion model help update the robot’s understanding of the environment.
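
To make the predict/correct cycle concrete, here is a minimal 1D Markov-localization sketch over a discrete belief. The motion model (the 0.8/0.1/0.1 split) and the sensor likelihood are invented values for illustration only.

#include <vector>

// One Bayes-filter cycle over a 1D world of n cells (the world wraps around for simplicity;
// 'move' is assumed small relative to n).
// belief[i] = probability the robot is in cell i; likelihood[i] = p(measurement | cell i).
void bayesUpdate(std::vector<double>& belief, int move, const std::vector<double>& likelihood) {
  const int n = static_cast<int>(belief.size());

  // Predict: shift the belief by the commanded motion, allowing some under/overshoot.
  std::vector<double> predicted(n, 0.0);
  for (int i = 0; i < n; ++i) {
    predicted[(i + move + n) % n]     += 0.8 * belief[i];  // moved as commanded
    predicted[(i + move - 1 + n) % n] += 0.1 * belief[i];  // undershoot
    predicted[(i + move + 1 + n) % n] += 0.1 * belief[i];  // overshoot
  }

  // Correct: weight by the sensor likelihood, then normalize (Bayes' rule).
  double total = 0.0;
  for (int i = 0; i < n; ++i) { predicted[i] *= likelihood[i]; total += predicted[i]; }
  for (int i = 0; i < n; ++i) belief[i] = (total > 0.0) ? predicted[i] / total : 1.0 / n;
}

Full SLAM extends the same idea: the state being estimated includes the map (or the landmark positions) as well as the robot's pose.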


Approaches to SLAM

1. Kalman Filter-Based SLAM:

   - The Extended Kalman Filter (EKF) SLAM is one of the earliest methods.

   - It’s efficient but suffers from scalability issues in large environments due to the computational cost of updating a large covariance matrix.


2. Particle Filter-Based SLAM:

   - Built on the same machinery as Monte Carlo Localization (MCL); the full SLAM variant is commonly implemented as FastSLAM.

   - Uses particles to represent possible positions of the robot. It’s highly robust for non-linear problems and can handle multi-modal distributions better than EKF. (A minimal weight-and-resample sketch appears after this list.)


3. Graph-Based SLAM:

   - The problem is represented as a graph, where nodes represent robot poses and landmarks, and edges represent constraints between them (from sensors).

   - After creating the graph, optimization is used to adjust all nodes, effectively minimizing the error in both localization and mapping.


4. Visual SLAM (VSLAM):

   - Uses cameras to detect visual landmarks. ORB-SLAM (Oriented FAST and Rotated BRIEF) is a popular implementation.

   - Visual SLAM can use either monocular or stereo cameras. Stereo cameras provide depth information directly, while monocular cameras infer depth using techniques like feature triangulation over time.
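
Here is the weight-and-resample step at the heart of the particle-filter approach (referenced in the list above). The Particle struct and the resampling strategy are simplified assumptions; real implementations typically use low-variance resampling and, for full SLAM, per-particle maps as in FastSLAM.

#include <vector>
#include <random>

struct Particle {
  double x, y, theta;  // one hypothesized robot pose
  double weight;       // how well this hypothesis explains the latest measurement
};

// Draw a new particle set in proportion to the weights, so unlikely hypotheses die out
// and likely ones are duplicated.
std::vector<Particle> resample(const std::vector<Particle>& particles, std::mt19937& rng) {
  std::vector<double> weights;
  for (const Particle& p : particles) weights.push_back(p.weight);
  std::discrete_distribution<int> pick(weights.begin(), weights.end());

  std::vector<Particle> next;
  next.reserve(particles.size());
  for (size_t i = 0; i < particles.size(); ++i) {
    Particle p = particles[pick(rng)];
    p.weight = 1.0;  // weights are reset after resampling
    next.push_back(p);
  }
  return next;
}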


Mapping Techniques

1. Occupancy Grid Mapping:

   - The environment is divided into a discrete grid where each cell holds a probability of being occupied.

   - Updates occur using sensor data and can be represented by Bayes’ rule (see the log-odds sketch after this list).

2. Feature-Based Mapping:

   - Instead of dividing space into grids, the system identifies key features or landmarks (corners, edges, unique patterns) and tracks them.

   - It’s computationally efficient in sparse environments.

3. Octomap (3D Occupancy Grid Mapping):

   - Uses octrees to create maps that efficiently represent 3D environments.
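
Here is the log-odds sketch referenced above: the per-cell Bayes update that occupancy grids commonly use in practice. The grid layout and the example probabilities are assumptions for illustration.

#include <cmath>
#include <vector>

// Occupancy grid stored as log-odds: 0 = unknown, > 0 leans occupied, < 0 leans free.
struct OccupancyGrid {
  int width, height;
  std::vector<double> logOdds;
  OccupancyGrid(int w, int h) : width(w), height(h), logOdds(w * h, 0.0) {}

  // Bayes update for one cell: add the log-odds of the inverse sensor model,
  // where p is the probability the cell is occupied given the current measurement.
  void update(int x, int y, double p) {
    logOdds[y * width + x] += std::log(p / (1.0 - p));
  }

  // Recover the occupancy probability for display or planning.
  double probability(int x, int y) const {
    return 1.0 / (1.0 + std::exp(-logOdds[y * width + x]));
  }
};

// Example usage: a lidar hit might call update(x, y, 0.7) on the struck cell and
// update(x, y, 0.3) on the cells the beam passed through on its way there.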


Challenges in SLAM

1. Sensor Noise: Every sensor has inherent noise that needs to be filtered out.

2. Motion Model Uncertainty: The real-world movements of robots have slippage and drift, leading to deviations between intended and actual motion.

3. Loop Closure Errors: Loop closures that are missed or matched incorrectly can leave the map misaligned.

4. Scalability: Larger maps become computationally intensive, and optimizing these graphs for large environments requires sophisticated approaches.


Popular SLAM Frameworks and Libraries

1. GMapping: A well-known 2D SLAM algorithm using Particle Filters, often used with ROS (Robot Operating System).

2. Hector SLAM: Uses a Lidar and an IMU, known for high speed and low latency mapping in indoor environments.

3. Cartographer by Google: Supports both 2D and 3D SLAM, popular for its real-time mapping capabilities.

4. ORB-SLAM: A Visual SLAM library focusing on feature-based landmarks, very popular in robotics and augmented reality.


Use Cases in Robotics

1. Autonomous Vacuum Cleaners: These use SLAM to map a house layout and optimize cleaning routes while avoiding obstacles.

2. Delivery Robots: Like those used in Amazon warehouses, where they navigate autonomously through aisles and handle dynamic obstacles.

3. Drones: Drones often employ Visual SLAM to build maps of unknown environments for navigation and collision avoidance.

4. Self-Driving Cars: While more sophisticated, they often employ a form of SLAM to localize the vehicle and build environmental maps for path planning.

Ah... these use cases are just the basic ones the AI can share, the low-hanging fruit. The real set of use cases is much, much more expansive. All robots and machines interacting in physical space will need some form of SLAM. Once the machine is localized and positions are known, the applications of its sensors, manipulators, and motion become effective. This is just the beginning; there are many applications yet to be realized or identified!


Advanced SLAM Topics to Explore

- Multi-Robot SLAM: Where a team of robots collaborates to create a common map.

- SLAM with Deep Learning: Incorporating neural networks to aid feature extraction and recognition, which can improve SLAM reliability, especially in challenging environments.

- SLAM in Dynamic Environments: Handling environments where objects are constantly in motion, requiring more sophisticated filtering to identify static versus dynamic features.


Conclusion

SLAM forms the backbone of autonomous navigation, providing a robot with the ability to understand and navigate in its environment. Whether using traditional Lidar-based approaches, visual methods, or newer graph-based optimization techniques, each approach has its own strengths and trade-offs depending on the application scenario.

Moving forward, I will be expanding my Simple SLAM! Simulator to explore the necessary components of SLAM. My interest is (at least!) twofold:

1) The digitalization of physical space: fusing sensor data to observe and record what in our world is difficult for humans to perceive across spectral, spatial, and temporal dimensions.

2) Developing the concepts around Machine Perception of the physical world.

Sunday, September 29, 2024

SLAM! - Building a Simple SLAM Simulator

SLAM_play - A simple SLAM simulator

Welcome to the next step in creating my Totally Not Evil Robot Army! Today, we kick off our journey into the fascinating world of robotics with a simple SLAM (Simultaneous Localization and Mapping) project.

What’s SLAM?

In the world of robots, SLAM is what helps our mechanical minions make sense of their surroundings. Imagine a robot moving through an unknown environment—how does it know where it is and where the walls (or, you know, other targets) are? That's where SLAM comes in. The robot uses sensor data to map its environment and figure out its position on that map in real time.

My SLAM_play Project

My initial SLAM project is a basic 2D simulator called SLAM_play. This simulation features a simple robot equipped with an ultrasonic sensor that explores a grid-like environment. The robot moves around, detecting obstacles and marking unexplored areas as it builds a map of its world.

The robot updates its map in real time:

  • Grey zones represent the areas it has explored and found clear.
  • Green spots mark obstacles (which may or may not be future targets for…um...peaceful interaction).
  • Dark grey zones represent the "frontier," where the robot has reached the limits of its sensor's range and hasn’t detected anything yet.
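
For the curious, the update behind those colors boils down to tracing each ultrasonic reading through the grid. The sketch below is a hypothetical illustration of that idea, not the actual SLAM_play code; the cell states and function name are my own labels.

constexpr int GRID_W = 40;
constexpr int GRID_H = 40;

enum class Cell { Unknown, Free, Obstacle, Frontier };

// Apply one ultrasonic reading taken from cell (rx, ry) in grid direction (dx, dy).
// 'range' is the measured distance in cells; 'maxRange' is the sensor's limit.
void applyUltrasonicReading(Cell grid[GRID_H][GRID_W], int rx, int ry, int dx, int dy,
                            int range, int maxRange) {
  for (int step = 1; step <= range && step <= maxRange; ++step) {
    int x = rx + dx * step;
    int y = ry + dy * step;
    if (x < 0 || x >= GRID_W || y < 0 || y >= GRID_H) return;  // beam left the map
    if (step == range && range < maxRange) {
      grid[y][x] = Cell::Obstacle;   // the echo came back from here (green)
    } else if (step == maxRange && range >= maxRange) {
      grid[y][x] = Cell::Frontier;   // sensor limit reached, nothing detected yet (dark grey)
    } else {
      grid[y][x] = Cell::Free;       // the beam passed through, so the cell is clear (grey)
    }
  }
}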

What's Next?

This is just the beginning. As I dive deeper into SLAM, I’ll explore adding more complex sensors, like LIDAR, and even experiment with autonomous pathfinding. The ultimate goal is to build a robot army that can navigate any environment, no matter how complex…for purely benevolent purposes, of course.

For now, I'm taking small steps, but these small steps will one day become the foundation of an unstoppable (yet totally friendly) robot force.

If you’re interested in exploring the code behind this simulator, check out my public GitHub repo here:
SLAM_play Repository

Let's continue the adventure of giving the machine some SLAM, one sensor at a time!

Robot Army Motto: Mapping the future, one ultrasonic ping at a time.  (for now!)

[UPDATE]

Here is a short video of the Simple SLAM simulator:



more to come...

Sunday, August 11, 2024

Mojo5: The End of a Journey (#12)


Every project reaches that pivotal moment when progress halts, and the creative spark seems to fade. For Mojo5, that moment has arrived. This little robot has encountered a significant challenge in the design of its abductors. The servos have developed excessive backlash, compromising its ability to stand and walk effectively.

In an attempt to resolve the issue, I removed the servos and installed a 'static' gear to lock them in place. Unfortunately, this fix didn't fully eliminate the backlash, and the problem persists. Addressing this issue would require a major redesign, far more extensive than initially anticipated.

With a heavy heart, I’ve decided that it’s time to retire Mojo5. But as with all endings, this marks the beginning of something new. Long live Mojo6.

Sunday, June 23, 2024

Mojo5 - Enhancing Performance with More Powerful Batteries (11)

 

Mojo5 - with power bank battery

Welcome back to the "Totally Not Evil Robot Army" blog. In this 11th installment of the Mojo5 series, we’re going to dive into a crucial aspect of building robust quadruped robots: using more powerful batteries to drive hobby servos. Specifically, we'll explore how increasing the available amperage and voltage can significantly improve the performance of underpowered servos and the considerations needed to ensure safe and efficient operation.

The Problem with Underpowered Servos

Building a quadruped robot like Mojo5 with cheap hobby servos often leads to performance issues. Many of these servos, such as the MG995, are slow and lack the torque needed for dynamic movements. Even with a 5V power supply, these servos can become wobbly and unreliable, especially under load. This is where upgrading to more powerful batteries comes into play.

The Limitations of the Original Power Configuration

Initially, Mojo5 used a 5V 12000mAh power bank. While this power source had a high capacity, it was limited to a maximum output of 3A. Given that the MG995 servos have a stall current rating of up to 3A each, the total current demand for the robot could easily exceed 20A during peak operation. This significant shortfall in available current was a primary cause of the robot's poor performance.
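
To put rough numbers on it: assuming, for illustration, eight to twelve MG995-class servos across the four legs at a 3A stall rating each, the worst case is 24A to 36A if everything stalls at once, roughly an order of magnitude more than the power bank's 3A ceiling.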

Upgrading the Power Source: LiPo Batteries

Based on previous discussions and experiments, we found that using a more powerful battery can drastically improve servo performance. Upgrading to a LiPo battery with a 2200mAh capacity, 7.4V voltage, and a 50C discharge rating provides the needed boost. This battery can supply well over the 20A needed by the servos, addressing the current limitations of the previous power source.

Considerations for Over-Volting

  1. Risk of Burnout: Exceeding the servo’s voltage rating does carry a risk of burnout. However, anecdotal evidence suggests that slight over-volting is generally safe if monitored properly. Adding a voltage regulator could limit the available current, which might be counterproductive; in general, voltage regulators on servo power supplies are not recommended.

  2. Separate Circuits for Different Servos: For servos with lower voltage ratings, such as the MG90 (max 6V), consider creating a separate circuit to avoid over-volting. This will add an additional circuit which can be problematic. For my experiments with Mojo5, I used the same circuit.

Practical Application: Experimenting

Initial Setup

  1. Battery Connection: Connect the 7.4V 50C LiPo battery directly to the power bus of the PCA9685 servo driver, bypassing capacitors and regulators. This setup ensures the servos receive power directly.

  2. Handling Large Wires: Safely connecting XT-60 battery connectors and their thick 12 AWG wires to smaller electronics is challenging. Using a bare copper PCB or prototyping PCBs with multiple copper lines can provide a more robust connection. Soldering all wires together is an ugly but space-efficient solution.


Not Recommended!


In this experiment, I took a shortcut and soldered the large wires directly into a DuPont connector, which was then connected directly to the servo bus. This is not a recommended practice! A better connection with larger wires is recommended. In this particular case, as long as all of the servo motors do not stall, only brief spikes above 20A would be expected. Current greater than 7A for sustained periods would destroy the connectors.

Experiment Results

After connecting the 7.4V battery:

  • Servo Performance: There was a noticeable improvement in servo actuation. The MG995 servos operated better, with no immediate burnout.
  • Heat Management: Monitoring the temperature of connectors and wires is crucial. Using a finger-test method (keeping fingers on components for one minute) can help identify safe current levels. If components become too hot to touch, reduce the current.

To visualize the improvement, I have embedded a short video showing Mojo5's performance with the original 5V power bank versus the upgraded 7.4V LiPo battery. Notice the difference in servo response time and stability.



Long-Term Solutions

While the initial experiments are promising, long-term solutions require more robust hardware:

  1. Custom Servo Driver Board: Developing a servo driver board with better connectors and thicker traces can handle higher currents more efficiently. This would replace the PCA9685 board, which is not designed for high current loads.
  2. Current Monitoring: Implementing current measurements and safety features like e-fuses can prevent overcurrent situations and protect your components.

Conclusion

Upgrading to more powerful batteries can significantly enhance the performance of underpowered servos in your DIY robots. While there are risks associated with over-volting, careful monitoring and proper hardware can mitigate these risks. As we continue to push the boundaries of DIY robotics, sharing these experiences and solutions will help us all build more capable and reliable robots.

Sunday, May 19, 2024

Mojo5 - Mirrored Servo Control for Opposite Side Legs (#10)

Mojo5 - Two Legs with Symmetric Control

Introduction to Mirrored Servo Control (Symmetric Control)

One of the primary challenges in developing Mojo5 was ensuring synchronized movements between the servos on opposite sides of the robot. To achieve this, we employed a straightforward yet effective approach: mirroring the servo movements by reflecting the target angles.

Technical Implementation

To implement the mirroring effect, we introduced a boolean parameter in our Servo structure to indicate whether a servo should be mirrored. The adjustment is applied directly in the servo control function.

void moveServo(const Servo& servo, int pos) {
  if (servo.mirror) {
    pos = 180 - pos;  // reflect the target angle for servos on the opposite side
  }
  pos = max(servo.minPos, min(pos, servo.maxPos));  // clamp to the servo's limits
  int pulseWidth = map(pos, servo.minPos, servo.maxPos, servo.minPWM, servo.maxPWM);
  pca9685.setPWM(servo.num, 0, pulseWidth);
}

The following structure is used to define all the elements of the servo:

struct Servo {
  uint8_t num;
  int minPos;
  int maxPos;
  int minPWM;
  int maxPWM;
  int minRange;
  int maxRange;
  int minRangePWM;
  int maxRangePWM;
  bool mirror;  //true => adjust IK if opposite side
};
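
As a usage sketch (the channel numbers and PWM calibration values below are placeholders, not Mojo5's actual calibration), a mirrored pair of hip servos can share the same commanded angle:

// Placeholder channel numbers and PWM calibration values, for illustration only.
Servo hipLeft  = {0, 0, 180, 150, 600, 0, 180, 150, 600, false};
Servo hipRight = {1, 0, 180, 150, 600, 0, 180, 150, 600, true};  // opposite side => mirror

void setHipAngle(int angle) {
  moveServo(hipLeft, angle);   // driven to the commanded angle
  moveServo(hipRight, angle);  // same commanded angle, reflected internally to 180 - angle
}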


The results of this code can be observed directly in the pulse width modulation (PWM) values sent to the servos. Here you can see that the servo values are 'reflected' or 'mirrored', symmetric to one another:

Mojo5 - Symmetric control - pwm values over a rectangle gait


Practical Application and Results

In our setup, the mirror effect (symmetric control) is particularly useful for maintaining symmetry in the leg movements. This approach simplifies the inverse kinematics (IK) calculations, as the same code can be used for both sides of the robot, with the mirroring adjustment applied only at the servo-control level where necessary.

To illustrate this concept, we've embedded a short video demonstrating the addition of a leg from the opposite side of Mojo5. The IK calculations are identical, but the servo angles are adjusted by reflecting the target angles, resulting in a mirrored motion that maintains the robot's symmetry.



Insights on Mirrored Movements

Creating a mirrored movement (symmetric control) for servos is crucial for several reasons:

  • Symmetry and Balance: Ensuring that both sides of the robot move in a symmetrical manner is essential for maintaining balance, especially in quadruped robots. Asymmetrical movements can lead to instability and erratic behavior.
  • Simplified Coding: By mirroring movements, the same IK code can be reused for both sides, reducing complexity and the potential for errors. This makes the development process more efficient and the codebase easier to maintain. The adjustment is needed because every servo sweeps the same direction over its 0 to 180° range; servos mounted on the opposite side face the other way, so their target angles must be reflected.
  • Consistent Gait Patterns: Symmetrical leg movements are vital for creating smooth and natural-looking gait patterns. Mirroring helps in achieving uniform step lengths and timings, which are important for the robot's locomotion.

Summary

The mirroring technique (symmetric control) we've implemented in Mojo5 represents a significant simplification in controlling symmetrical movements in quadruped robots. By introducing a boolean flag in the servo structure and adjusting the servo angles accordingly, we achieve mirrored movements without duplicating the IK code. This not only enhances the efficiency of our development process but also ensures more consistent and predictable robotic movements.