Monday, October 28, 2024

SLAM! - The Basics

[Note: this article is mostly a conversation I had with an AI. I liked the output, so I am including it here. It may not be the most genuine of blog posts, but it is something I will refer back to, so I am sharing it with you as well.]

A Simple SLAM Simulation

SLAM, which stands for Simultaneous Localization and Mapping, is a critical concept in robotics, especially in autonomous systems where the robot must navigate through an unknown environment. Here’s a detailed breakdown of the basic components and methods involved in SLAM:

What is SLAM?

SLAM is the process by which a robot can:

1.  Map an Environment: Create a representation (usually a 2D or 3D map) of an unfamiliar area.

2.  Localize Itself: Determine its own position within that map, in real time.

SLAM is a solution to the "chicken and egg" problem of robotics: To map the environment, a robot needs to know where it is, but to know where it is, it needs a map. SLAM solves both tasks simultaneously.


Key Components of SLAM

1. Sensors

   - Lidar: Light Detection and Ranging (uses laser beams to measure distances).

   - Cameras: Visual or stereo cameras for Visual SLAM (VSLAM).

   - Ultrasonic Sensors: Simple but effective for obstacle detection.

   - IMUs (Inertial Measurement Units): Track orientation and movement changes.

   - Encoders: Measure wheel rotation to track distance traveled (odometry).


2. State Estimation

   - Pose: Position (x, y, z) and orientation (yaw, pitch, roll) are estimated continuously.

   - Common methods for estimating this are Kalman Filters or Particle Filters. For complex non-linear systems, the Extended Kalman Filter (EKF) or Unscented Kalman Filter (UKF) is often used.


3. Mapping:

   - The robot gathers spatial information about obstacles and free space.

   - Maps are usually grid-based (like occupancy grids) or feature-based. 

   - Occupancy Grid Mapping divides space into a grid of cells, marking each cell as occupied, free, or unknown.


4. Loop Closure

   - The robot must recognize a location it has visited before; this recognition is called loop closure.

   - Without recognizing these loops, errors can accumulate, leading to an inaccurate map.

   - Visual features, like walls or landmarks, are often used for loop closure.


Mathematical Foundations

1. Probabilistic Approach:

   - SLAM typically involves Bayesian Estimation. A robot's state is represented as a probability distribution that evolves as new data arrives from the sensors.

   - Bayes Filters and Markov Localization are used to manage the uncertainty of sensor data and robot motion.


2. State-Space Representation:

   - The SLAM problem can be represented in terms of states and measurements, where states represent robot and feature positions and measurements represent distances and angles to landmarks.

   - The sensor model and motion model help update the robot’s understanding of the environment.
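To make the Bayesian predict/update cycle above concrete, here is a minimal one-dimensional sketch using a Kalman filter (the Gaussian special case of a Bayes filter). The robot moves along a single axis toward a wall at a known distance; all noise values and sensor readings below are illustrative assumptions, not taken from any particular system.

#include <cstdio>

int main() {
  // Belief over the robot's x position: mean and variance
  float x = 0.0f, P = 1.0f;
  const float Q = 0.05f;    // motion (process) noise variance per step
  const float R = 0.10f;    // range sensor noise variance
  const float wall = 5.0f;  // known wall position, in meters

  float moves[]    = {1.0f, 1.0f, 1.0f};    // commanded forward motion (motion model input)
  float readings[] = {3.9f, 3.1f, 1.95f};   // simulated noisy range-to-wall readings

  for (int k = 0; k < 3; ++k) {
    // Predict: apply the motion model and grow the uncertainty
    x += moves[k];
    P += Q;

    // Update: the sensor model says range = wall - x, so the reading implies an x observation
    float z = wall - readings[k];
    float K = P / (P + R);   // Kalman gain: how much to trust the measurement
    x += K * (z - x);
    P *= (1.0f - K);

    printf("step %d: x = %.2f m (variance %.3f)\n", k + 1, x, P);
  }
  return 0;
}

The same predict/update structure carries over to full SLAM; the difference is that the state grows to include the map (landmark positions) as well as the robot pose.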


Approaches to SLAM

1. Kalman Filter-Based SLAM:

   - The Extended Kalman Filter (EKF) SLAM is one of the earliest methods.

   - It’s efficient but suffers from scalability issues in large environments due to the computational cost of updating a large covariance matrix.


2. Particle Filter-Based SLAM:

   - Builds on the particle-filtering idea behind Monte Carlo Localization (MCL); FastSLAM is a well-known implementation.

   - Uses particles to represent possible positions of the robot. It’s highly robust for non-linear problems and can handle multi-modal distributions better than EKF.


3. Graph-Based SLAM:

   - The problem is represented as a graph, where nodes represent robot poses and landmarks, and edges represent constraints between them (from sensors).

   - After creating the graph, an optimization step adjusts all the nodes, minimizing the error in both localization and mapping (a minimal data-structure sketch follows this list).


4. Visual SLAM (VSLAM):

   - Uses cameras to detect visual landmarks. ORB-SLAM (Oriented FAST and Rotated BRIEF) is a popular implementation.

   - Visual SLAM can use either monocular or stereo cameras. Stereo cameras provide depth information directly, while monocular cameras infer depth using techniques like feature triangulation over time.
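As promised for the graph-based approach above, here is a minimal sketch of the nodes-and-edges representation in C++. The optimization step itself is omitted, and the field names and information weighting are simplifying assumptions rather than any specific library's API.

#include <vector>

// A node: one robot pose (x, y, heading) along the trajectory.
struct PoseNode {
  double x, y, theta;
};

// An edge: a relative-motion or loop-closure constraint between two poses,
// weighted by how much the sensor measurement is trusted.
struct PoseConstraint {
  int from, to;            // indices into the node list
  double dx, dy, dtheta;   // measured relative transform between the two poses
  double information;      // inverse-covariance style confidence weight
};

struct PoseGraph {
  std::vector<PoseNode> nodes;
  std::vector<PoseConstraint> edges;

  // Odometry adds a chain of sequential constraints between consecutive poses...
  void addOdometry(const PoseNode& pose, double dx, double dy, double dth, double info) {
    nodes.push_back(pose);
    if (nodes.size() > 1)
      edges.push_back({(int)nodes.size() - 2, (int)nodes.size() - 1, dx, dy, dth, info});
  }

  // ...while a recognized revisit (loop closure) adds an edge between distant nodes,
  // which an optimizer later uses to pull accumulated drift back into line.
  void addLoopClosure(int from, int to, double dx, double dy, double dth, double info) {
    edges.push_back({from, to, dx, dy, dth, info});
  }
};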


Mapping Techniques

1. Occupancy Grid Mapping:

   - The environment is divided into a discrete grid where each cell holds a probability of being occupied.

   - Updates occur using sensor data and can be represented by Bayes’ rule (a minimal log-odds sketch follows this list).

2. Feature-Based Mapping:

   - Instead of dividing space into grids, the system identifies key features or landmarks (corners, edges, unique patterns) and tracks them.

   - It’s computationally efficient in sparse environments.

3. Octomap (3D Occupancy Grid Mapping):

   - Uses octrees to create maps that efficiently represent 3D environments.
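Here is the minimal log-odds sketch promised above for the occupancy grid update. It applies Bayes' rule in log-odds form; the sensor probabilities (0.7 for a beam endpoint, 0.4 for a pass-through) are illustrative assumptions.

#include <cmath>
#include <cstdio>
#include <vector>

// Each cell stores the log-odds of being occupied; 0 means unknown (p = 0.5).
struct OccupancyGrid {
  int width, height;
  std::vector<float> logOdds;
  OccupancyGrid(int w, int h) : width(w), height(h), logOdds(w * h, 0.0f) {}

  // Bayes update in log-odds form: simply add the sensor's evidence for this cell.
  void update(int x, int y, bool hit) {
    const float lHit  = std::log(0.7f / 0.3f);   // beam ended here: likely occupied (assumed 0.7)
    const float lMiss = std::log(0.4f / 0.6f);   // beam passed through: likely free (assumed 0.4)
    logOdds[y * width + x] += hit ? lHit : lMiss;
  }

  // Convert back to a probability when reading the map out.
  float probability(int x, int y) const {
    return 1.0f - 1.0f / (1.0f + std::exp(logOdds[y * width + x]));
  }
};

int main() {
  OccupancyGrid grid(10, 10);
  grid.update(3, 4, true);    // two beams ended in this cell
  grid.update(3, 4, true);
  grid.update(5, 4, false);   // one beam passed through this cell
  printf("p(3,4 occupied) = %.2f\n", grid.probability(3, 4));
  printf("p(5,4 occupied) = %.2f\n", grid.probability(5, 4));
  return 0;
}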


Challenges in SLAM

1. Sensor Noise: Every sensor has inherent noise that needs to be filtered out.

2. Motion Model Uncertainty: The real-world movements of robots have slippage and drift, leading to deviations between intended and actual motion.

3. Loop Closure Errors: If not correctly handled, identifying previously visited locations can lead to misaligned maps.

4. Scalability: Larger maps become computationally intensive, and optimizing these graphs for large environments requires sophisticated approaches.


Popular SLAM Frameworks and Libraries

1. GMapping: A well-known 2D SLAM algorithm using Particle Filters, often used with ROS (Robot Operating System).

2. Hector SLAM: Uses a Lidar and an IMU, known for high speed and low latency mapping in indoor environments.

3. Cartographer by Google: Supports both 2D and 3D SLAM, popular for its real-time mapping capabilities.

4. ORB-SLAM: A Visual SLAM library focusing on feature-based landmarks, very popular in robotics and augmented reality.


Use Cases in Robotics

1. Autonomous Vacuum Cleaners: These use SLAM to map a house layout and optimize cleaning routes while avoiding obstacles.

2. Delivery Robots: Like those used in Amazon warehouses, where they navigate autonomously through aisles and handle dynamic obstacles.

3.  Drones: Drones often employ Visual SLAM to create maps of unknown environments for navigation and collision avoidance.

4. Self-Driving Cars: While more sophisticated, they often employ a form of SLAM to localize the vehicle and build environmental maps for path planning.

Ah...  These Use Cases are just the basic ones the AI can share, the low-hanging fruit. The real Use Cases are much, much more expansive. All robots and machines interacting in physical space will need some form of SLAM. Once the machine is localized and positions are known, its sensors, manipulators, and motion can be applied effectively. This is just the beginning; there are many applications yet to be realized or identified!


Advanced SLAM Topics to Explore

- Multi-Robot SLAM: Where a team of robots collaborates to create a common map.

- SLAM with Deep Learning: Incorporating neural networks to aid feature extraction and recognition, which can improve SLAM reliability, especially in challenging environments.

- SLAM in Dynamic Environments: Handling environments where objects are constantly in motion, requiring more sophisticated filtering to identify static versus dynamic features.


Conclusion

SLAM forms the backbone of autonomous navigation, providing a robot with the ability to understand and navigate in its environment. Whether using traditional Lidar-based approaches, visual methods, or newer graph-based optimization techniques, each approach has its own strengths and trade-offs depending on the application scenario.

Moving forward, I will be expanding my Simple SLAM! Simulator to explore the necessary components of SLAM.  My interest is (at least!) twofold: 

1) The digitalization of physical space: fusing sensor data to observe and record aspects of our world that are difficult for humans to perceive across spectral, spatial, and temporal dimensions.

2) Developing the concepts around Machine Perception of the physical world.

Sunday, September 29, 2024

SLAM! - Building a Simple SLAM Simulator

SLAM_play - A simple SLAM simulator

Welcome to the next step in creating my Totally Not Evil Robot Army! Today, we kick off our journey into the fascinating world of robotics with a simple SLAM (Simultaneous Localization and Mapping) project.

What’s SLAM?

In the world of robots, SLAM is what helps our mechanical minions make sense of their surroundings. Imagine a robot moving through an unknown environment—how does it know where it is and where the walls (or, you know, other targets) are? That's where SLAM comes in. The robot uses sensor data to map its environment and figure out its position on that map in real time.

My SLAM_play Project

My initial SLAM project is a basic 2D simulator called SLAM_play. This simulation features a simple robot equipped with an ultrasonic sensor that explores a grid-like environment. The robot moves around, detecting obstacles and marking unexplored areas as it builds a map of its world.

The robot updates its map in real time:

  • Grey zones represent the areas it has explored and found clear.
  • Green spots mark obstacles (which may or may not be future targets for…um...peaceful interaction).
  • Dark grey zones represent the "frontier," where the robot has reached the limits of its sensor's range and hasn’t detected anything yet.
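For the curious, here is a rough sketch of how such a map update could look in code. This is an illustrative reconstruction rather than the actual SLAM_play source; the names and the simplified one-axis ping handling are assumptions.

#include <vector>

enum CellState { UNKNOWN, FRONTIER, EXPLORED_CLEAR, OBSTACLE };

struct GridMap {
  int width, height;
  std::vector<CellState> cells;
  GridMap(int w, int h) : width(w), height(h), cells(w * h, UNKNOWN) {}
  CellState& at(int x, int y) { return cells[y * width + x]; }
};

// Update the map from one ultrasonic ping fired along +x (simplified to a single axis).
// Cells the pulse passed through become clear (grey), the cell it hit becomes an obstacle
// (green), and the cell at maximum range with no echo becomes frontier (dark grey).
void applyPing(GridMap& map, int rx, int ry, int rangeCells, int maxRangeCells) {
  for (int d = 1; d < rangeCells && d < maxRangeCells; ++d)
    map.at(rx + d, ry) = EXPLORED_CLEAR;
  if (rangeCells < maxRangeCells)
    map.at(rx + rangeCells, ry) = OBSTACLE;      // echo returned: something is there
  else
    map.at(rx + maxRangeCells, ry) = FRONTIER;   // no echo within range: edge of the known world
}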

What's Next?

This is just the beginning. As I dive deeper into SLAM, I’ll explore adding more complex sensors, like LIDAR, and even experiment with autonomous pathfinding. The ultimate goal is to build a robot army that can navigate any environment, no matter how complex…for purely benevolent purposes, of course.

For now, I'm taking small steps, but these small steps will one day become the foundation of an unstoppable (yet totally friendly) robot force.

If you’re interested in exploring the code behind this simulator, check out my public GitHub repo here:
SLAM_play Repository

Let's continue the adventure of giving the machine some SLAM—one sensor at a time!

Robot Army Motto: Mapping the future, one ultrasonic ping at a time.  (for now!)

[UPDATE]

Here is a short video of the Simple SLAM simulator:



more to come...

Sunday, August 11, 2024

Mojo5: The End of a Journey (#12)


Every project reaches that pivotal moment when progress halts, and the creative spark seems to fade. For Mojo5, that moment has arrived. This little robot has encountered a significant challenge in the design of its abductors. The servos have developed excessive backlash, compromising its ability to stand and walk effectively.

In an attempt to resolve the issue, I removed the servos and installed a 'static' gear to lock them in place. Unfortunately, this fix didn't fully eliminate the backlash, and the problem persists. Addressing this issue would require a major redesign, far more extensive than initially anticipated.

With a heavy heart, I’ve decided that it’s time to retire Mojo5. But as with all endings, this marks the beginning of something new. Long live Mojo6.

Sunday, June 23, 2024

Mojo5 - Enhancing Performance with More Powerful Batteries (11)

 

Mojo5 - with power bank battery

Welcome back to the "Totally Not Evil Robot Army" blog. In this 11th installment of the Mojo5 series, we’re going to dive into a crucial aspect of building robust quadruped robots: using more powerful batteries to drive hobby servos. Specifically, we'll explore how increasing the available amperage and voltage can significantly improve the performance of underpowered servos and the considerations needed to ensure safe and efficient operation.

The Problem with Underpowered Servos

Building a quadruped robot like Mojo5 with cheap hobby servos often leads to performance issues. Many of these servos, such as the MG995, are slow and lack the torque needed for dynamic movements. Even with a 5V power supply, these servos can become wobbly and unreliable, especially under load. This is where upgrading to more powerful batteries comes into play.

The Limitations of the Original Power Configuration

Initially, Mojo5 used a 5V 12000mAh power bank. While this power source had a high capacity, it was limited to a maximum output of 3A. Given that the MG995 servos have a stall current rating of up to 3A each, the total current demand for the robot could easily exceed 20A during peak operation. This significant shortfall in available current was a primary cause of the robot's poor performance.
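As a rough worked example (the servo count here is an assumption based on a typical quadruped layout, not a stated Mojo5 specification): eight MG995-class hip and knee servos stalling at 3 A each would draw 24 A, before counting the abductor servos, while the power bank could deliver only 3 A. A gap of roughly an order of magnitude like that lines up with the wobbly, unreliable behavior seen under load.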

Upgrading the Power Source: LiPo Batteries

Based on previous discussions and experiments, we found that using a more powerful battery can drastically improve servo performance. Upgrading to a LiPo battery with a 2200mAh capacity, 7.4V voltage, and a 50C discharge rating provides the needed boost. This battery can supply well over the 20A needed by the servos, addressing the current limitations of the previous power source.

Considerations for Over-Volting

  1. Risk of Burnout: Exceeding the servo’s voltage rating does carry a risk of burnout. However, anecdotal evidence suggests that slight over-volting is generally safe if monitored properly. Implementing a voltage regulator could limit the current capacity, which might be counterproductive. In general voltage regulators on servo power sources are not recommended.

  2. Separate Circuits for Different Servos: For servos with lower voltage ratings, such as the MG90 (max 6V), consider creating a separate circuit to avoid over-volting. This will add an additional circuit which can be problematic. For my experiments with Mojo5, I used the same circuit.

Practical Application: Experimenting

Initial Setup

  1. Battery Connection: Connect the 7.4V 50C LiPo battery directly to the power bus of the PCA9685 servo driver, bypassing capacitors and regulators. This setup ensures the servos receive power directly.

  2. Handling Large Wires: Safely connecting XT-60 battery connectors and their thick 12 AWG wires to smaller electronics is challenging. Using a bare copper PCB or prototyping PCBs with multiple copper lines can provide a more robust connection. Soldering all wires together is an ugly but space-efficient solution.


Not Recommended!


In this experiment, I took a shortcut and soldered the large wires directly into a DuPont connector. The connector was connected directly to the servo bus. This is not a recommended practice! A better connection with larger wires is recommended. In this particular case, as long as all of the servo motors do not stall at once, any current above 20 A should last only brief moments. Current greater than 7 A sustained for long periods would destroy the connectors.

Experiment Results

After connecting the 7.4V battery:

  • Servo Performance: There was a noticeable improvement in servo actuation. The MG995 servos operated better, with no immediate burnout.
  • Heat Management: Monitoring the temperature of connectors and wires is crucial. Using a finger-test method (keeping fingers on components for one minute) can help identify safe current levels. If components become too hot to touch, reduce the current.

To visualize the improvement, I have embedded a short video showing Mojo5's performance with the original 5V power bank versus the upgraded 7.4V LiPo battery. Notice the difference in servo response time and stability.



Long-Term Solutions

While the initial experiments are promising, long-term solutions require more robust hardware:

  1. Custom Servo Driver Board: Developing a servo driver board with better connectors and thicker traces can handle higher currents more efficiently. This would replace the PCA9685 board, which is not designed for high current loads.
  2. Current Monitoring: Implementing current measurements and safety features like e-fuses can prevent overcurrent situations and protect your components.

Conclusion

Upgrading to more powerful batteries can significantly enhance the performance of underpowered servos in your DIY robots. While there are risks associated with over-volting, careful monitoring and proper hardware can mitigate these risks. As we continue to push the boundaries of DIY robotics, sharing these experiences and solutions will help us all build more capable and reliable robots.

Sunday, May 19, 2024

Mojo5 - Mirrored Servo Control for Opposite Side Legs (#10)

Mojo5 - Two Legs with Symmetric Control

Introduction to Mirrored Servo Control (Symmetric Control)

One of the primary challenges in developing Mojo5 was ensuring synchronized movements between the servos on opposite sides of the robot. To achieve this, we employed a straightforward yet effective approach: mirroring the servo movements by reflecting the target angles.

Technical Implementation

To implement the mirroring effect, we introduced a boolean parameter in our Servo structure to indicate whether a servo should be mirrored. The adjustment is applied directly in the servo control function.

void moveServo(const Servo& servo, int pos) {
  if (servo.mirror) {
    pos = 180 - pos;  // Reflect the target angle for servos on the opposite side
  }
  pos = max(servo.minPos, min(pos, servo.maxPos));                                    // Clamp to the servo's allowed range
  int pulseWidth = map(pos, servo.minPos, servo.maxPos, servo.minPWM, servo.maxPWM);  // Convert angle to PWM counts
  pca9685.setPWM(servo.num, 0, pulseWidth);                                           // Send the pulse to the PCA9685 channel
}

The following structure is used to define all the elements of the servo:

struct Servo {
  uint8_t num;
  int minPos;
  int maxPos;
  int minPWM;
  int maxPWM;
  int minRange;
  int maxRange;
  int minRangePWM;
  int maxRangePWM;
  bool mirror;  //true => adjust IK if opposite side
};
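As a usage sketch, two hip servos on opposite sides can then share the same IK output. The channel numbers and calibration values below are placeholders, not Mojo5's actual configuration:

Servo hipLeft  = {0, 0, 180, 102, 512, 0, 180, 102, 512, false};
Servo hipRight = {1, 0, 180, 102, 512, 0, 180, 102, 512, true};   // mirror = true for the opposite side

// The same IK angle drives both legs; the right side is reflected inside moveServo().
moveServo(hipLeft,  45);
moveServo(hipRight, 45);   // effectively commands 180 - 45 = 135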


The results of this code can be observed directly in the Pulse Width Modulation (PWM) values sent to the servos. Here you can see the servo values are 'reflected' or 'mirrored', symmetrical to one another:

Mojo5 - Symmetric control - pwm values over a rectangle gait


Practical Application and Results

In our setup, the mirror effect (symmetric control) is particularly useful for maintaining symmetry in the leg movements. This approach simplifies the inverse kinematics (IK) calculations, as the same code can be used for both sides of the robot, with the mirrored adjustment applied only at the servo-control level where necessary.

To illustrate this concept, we've embedded a short video demonstrating the addition of a leg from the opposite side of Mojo5. The IK calculations are identical, but the servo angles are adjusted by reflecting the target angles, resulting in a mirrored motion that maintains the robot's symmetry.



Insights on Mirrored Movements

Creating a mirrored movement (symmetric control) for servos is crucial for several reasons:

  • Symmetry and Balance: Ensuring that both sides of the robot move in a symmetrical manner is essential for maintaining balance, especially in quadruped robots. Asymmetrical movements can lead to instability and erratic behavior.
  • Simplified Coding: By mirroring movements, the same IK code can be reused for both sides, reducing complexity and the potential for errors. This makes the development process more efficient and the codebase easier to maintain. The servo angles are simply adjusted because all servos turn counter-clockwise and have a 0 to 180° range based on their orientation. Servos on the opposite side face differently, so this mirroring adjustment is necessary.
  • Consistent Gait Patterns: Symmetrical leg movements are vital for creating smooth and natural-looking gait patterns. Mirroring helps in achieving uniform step lengths and timings, which are important for the robot's locomotion.

Summary

The mirroring technique (symmetric control) we've implemented in Mojo5 represents a significant simplification in controlling symmetrical movements in quadruped robots. By introducing a boolean flag in the servo structure and adjusting the servo angles accordingly, we achieve mirrored movements without duplicating the IK code. This not only enhances the efficiency of our development process but also ensures more consistent and predictable robotic movements.

Sunday, April 28, 2024

Mojo5 - Inverse Kinematics in Motion (#9)

I've recently implemented an inverse kinematics (IK) solution to enhance the precision and control of robotic leg movements. Our primary experiment involved programming the leg to execute a simple rectangular movement pattern, with a focus on maintaining accuracy and consistency. Here, I will share some intriguing geometrical observations and challenges we encountered.

Intended Path and Test Setup

For this test, the robot leg starts in the (0, -130) position. Moving clockwise, it steps backward (in my case more positive; note the inverted x axis) to the (50, -130) position, then moves up by 25 mm to the top. At this point I use a different z position, but it is hard to see. The motion continues around the rectangle, ending back at the start.

Planned path for the simple gait test, starting at the center bottom
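The corner points can be written as a small waypoint table that the IK code (calcIK from the previous post, #8) steps through. The coordinates follow the description above; the loop structure is a placeholder sketch rather than the exact test code:

// Corners of the rectangle, in mm, traversed clockwise from the start position.
float waypoints[][2] = {
  {  0, -130 },   // start, center bottom
  { 50, -130 },   // step backward (positive x, inverted axis)
  { 50, -105 },   // lift by 25 mm
  {  0, -105 },   // travel back at the raised height
  {  0, -130 }    // lower to the start position
};

for (auto& wp : waypoints) {
  float s1, s2;
  calcIK(wp[0], wp[1], s1, s2);   // convert the target point to hip and knee angles
  // ...send s1 and s2 to the servos, then pause before the next corner
}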

Video Analysis

In the below short video, the robot steps through the gait path. The leg is mounted above the table and horizontal to it, not in a typical robot position. I have traced each step of the path, with a small dot at the end of the foot.


As seen in the video, the motion lacks the expected precision, influenced by several factors:

  • Fixture Stability: The fixture holding the leg is unstable, contributing to erratic movements.
  • Manual Marking Inaccuracy: Yellow dots indicating key positions were marked by eye, leading to imprecision.
  • Servo Calibration: The calibration of the servo points was approximate. At a setting meant to be 90°, the actual angle could range between 85° and 95°.

Geometrical Challenges: The Skew Issue

One of the more fascinating issues we observed is the skew to the left when the leg is raised vertically. This skew is likely influenced by the constraints of the 'solution space'—the range within which the actuator operates. As a result, there is a slight warping effect in the grid area over which the leg can move.

Mojo5 Inverse Kinematics - geometric skew

Implications of the Skew

  • Motor-Driven Distortions: The two servomotors, designed to drive the leg in specific rotational angles, do not guarantee a perfectly perpendicular alignment of the movement path. This introduces a mathematical distortion in the intended trajectory.
  • Impact on Gait Creation: While this skew does not necessarily hinder the robot's ability to perform meaningful gaits, it highlights an essential aspect of robotic movement—geometric imperfections inherent in mechanical and software solutions.

Further Research on the Mapped or Solution Space

The space in which the robot can operate can be understood as the geometric area or volume within which the robot can effectively reach and manipulate objects. This space is defined by the following:

  • Reachability: Determined by the length of the robot's arms and the range of motion of its joints. The maximum and minimum extents of each joint define the outer and inner boundaries of this space.
  • Compliance: In the context of SCARA robots, the vertical compliance allows for certain movements in the vertical plane within the workspace. This selective compliance helps in absorbing forces during tasks like assembly, where vertical give is beneficial.
  • Kinematic Constraints: Defined by the robot's mechanical design and the kinematic equations governing its movements. These constraints delineate the paths and patterns the robot can execute.
  • Control Resolution: The precision with which the robot's controllers can position its joints also defines the resolution within the mapped space, influencing how finely the robot can maneuver within its reach.

Certainly a very interesting area of study in robotics!

Monday, April 22, 2024

Mojo5 - My Inverse Kinematics Simplified (#8)

Inverse Kinematics pictured by Dalle3, it can be much simpler!

A much simpler approach

In my previous post on Inverse Kinematics (IK), the complexity might have left some readers puzzled about whether there could be a simpler method to achieve the same results. The good news is, yes, there is a simpler way! Thanks to insights from my roboticist friend Oracid over at the French Robot Forum on Robot Maker, I've streamlined the IK calculations for our Mojo5.

Understanding the Basics

The challenge remains the same: given a target point (x, y), we need to determine the angles of the hip servo (S1) and knee servo (S2). The geometric configuration of the robot's leg has not changed; we still deal with the leg segments L1 (upper) and L2 (lower), equal in length, connected at the knee and forming two sides of a parallelogram.

Geometric Simplification

This parallelogram setup allows us to simplify our calculations considerably. The angle from the horizontal to the leg segment L1 directly provides the angle for S1, and similarly, the angle from the horizontal to the side of L2 gives us S2.

Mojo5 - simplified Inverse Kinematics

Simplified Calculation Steps

Calculate Distance D:  First, we determine the distance from the hip servo to our target (x, y) using the Pythagorean theorem:  D = sqrt(x*x + y*y).  Note D plays a very important role in the following calculations for angles alpha and beta. It is also the line that divides our parallelogram in half.

Calculate alpha:  We will use the trigonometric law of cosines. Given that we know the length of our upper leg L1 and the length of D (aka the hypotenuse), we can calculate the angle between them.  We will call this alpha:  alpha = acos(D / (2*L)).  You may notice this is the same calculation that I used previously. More importantly, this angle alpha is the same on either side of the bisection of the parallelogram by D.

Calculate beta:  We will use a much simpler approach. Also using trigonometry, we see that there is a right triangle formed by the y position, the x length, and the hypotenuse D. It is possible to calculate the angle opposite y from y and D:  beta = asin(abs(y) / D). It must be adjusted to its reflection in the event that x < 0; this is done (in radians) by subtracting beta from PI. Now one more interesting note: the angle we just calculated reappears at the top, measured from the horizontal, which makes it clearer how this beta angle will be used in our final calculations.

Final Servo Calculations:  For S1 we take the value of beta and subtract alpha from it. Likewise, to calculate the angle for S2, we take the value of beta and add alpha to it.

  • S1 = beta - alpha
  • S2 = beta + alpha
Hopefully this explanation will help in visualizing the relationships and how to calculate the correct angles for S1 and S2.  

Here you can view a simplified set of C++ code:

void calcIK(float x, float y, float &s1, float &s2) {
  float L = 70;                      // leg segment length in mm (L1 == L2)
  float d = sqrt(x*x + y*y);         // distance D from the hip to the target point
  float alpha = acos(d / (2*L));     // angle between D and each leg segment
  float beta = asin(abs(y) / d);     // angle of D measured from the horizontal
  if (x < 0) beta = (M_PI - beta);   // reflect beta when the target is behind the hip

  s1 = (beta - alpha) * (180.0 / M_PI);  // hip servo angle, in degrees
  s2 = (beta + alpha) * (180.0 / M_PI);  // knee servo angle, in degrees
}
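A quick usage sketch, feeding a target point through calcIK (the surrounding servo-driving code, such as a moveServo()-style helper, is assumed to exist elsewhere):

float s1, s2;
calcIK(0, -130, s1, s2);   // a target 130 mm directly below the hip (x = 0 mm, y = -130 mm)
// s1 and s2 now hold the hip and knee angles in degrees, ready to be clamped,
// mapped to PWM counts, and sent to the servo driver.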


 


Sunday, March 31, 2024

Mojo5 - Inverse Kinematics (#7)

Inverse Kinematics

At first glance, one might think that Inverse Kinematics (IK) boils down to simple trigonometry. However, determining the appropriate angles is merely scratching the surface. The real challenge lies in considering the physical arrangement of servos, the reference frame, and the constraints tied to both the servo capabilities and their placement. Collectively, these factors embody the essence of Inverse Kinematics.

For  Mojo5, I've chosen to use two MG995 servos, arranged in a stack. One servo is responsible for the 'hip' motion, and the other controls the 'knee.' The crux of IK in this setup is to map a target position within the coordinate plane to specific angles for these servos. To streamline the calculations, I've made a series of strategic design decisions. The lengths of the leg segments, L1 and L2, are set to be equal, each measuring 70mm. The hip servo is positioned as the origin point of our coordinate system. The mechanism for the knee is somewhat intricate, primarily because the servo controlling it is not mounted directly on the leg. Additionally, I've introduced a concept of 'yaw' movement along the z-axis, although, for the time being, our IK calculations will focus solely on movements within the x and y axes.

When it comes to calculating the necessary angles through IK, the approach is to visualize a triangle formed by the leg segments. Given that L1 and L2 are of equal length, this triangle is always isosceles. While this detail may seem minor at first, it becomes crucial when applying the Pythagorean theorem—a^2 = b^2 + c^2—to determine the distance (D) between the endpoints of the leg segments. This distance is key to assessing the feasibility of reaching a given target (x, y) position. To ensure reachability, D must not exceed the sum of the lengths of the two leg segments, or in other words, D <= 2*L.

Calculating the Hip Angle (Theta1)

To determine the hip angle, one must sum two distinct angles. The initial angle is formed between the horizontal axis and the target point (x, y) at the end of line D in our coordinate system. This can be calculated using the ArcTangent function, specifically atan2(y, x) in standard practice. However, in my application, I employ atan2(-y, x). The choice to negate y is due to the fact that my y values consistently fall below zero. An alternative approach could involve taking the absolute value of the ArcTangent result to ensure a positive angle.

Following this, it's necessary to find the interior angle between line D and leg segment L1 within our conceptualized triangle. This angle, designated as alpha, can be determined through the law of cosines. In a simplified form, the calculation of alpha is expressed as acos(D / (2*L)). By adding alpha to the previously calculated angle, we derive the hip angle. However, there's a twist due to the servo's counterclockwise incrementation: the actual Theta1 is the supplement of the sum of alpha and our initial angle, mathematically expressed as Theta1 = 180 - (alpha + theta).

Mojo5 - Inverse Kinematics of the Hip Joint
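A small code sketch of the hip-angle steps just described; it follows the prose above (working in radians, then converting to degrees) rather than any final Mojo5 implementation:

// x and y are the target foot coordinates (y is negative, below the hip)
float L = 70;                    // leg segment length in mm
float D = sqrt(x*x + y*y);       // distance to the target; reachable only if D <= 2*L
float theta = atan2(-y, x);      // angle from the horizontal to D
float alpha = acos(D / (2*L));   // interior angle between D and leg segment L1
float theta1 = 180.0 - (alpha + theta) * (180.0 / M_PI);   // Theta1 = 180 - (alpha + theta), in degrees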

Calculating the Knee Angle (Theta2)

To calculate the knee angle, our first step involves identifying the interior angle between the two legs, L1 and L2, which we'll refer to as beta. Once again, the law of cosines proves invaluable for this calculation. While the deeper mathematical proofs are better left to academia, the simplified formula to compute beta is given by acos((2*L^2 - D^2) / (2*L^2)). This equation allows us to calculate beta, which represents the angle between the leg segments in our model.

However, to translate this angle into a form usable by the servo mechanism, additional adjustments are necessary due to the servo being linked to the leg segments via cams. We must take the supplementary angle to beta. This supplementary angle, once processed through the cam system, achieves the effect of pulling the leg segments into the correct position but in a reversed direction. Consequently, we must employ the complement of this supplementary angle to align with the actual geometry and movement direction required by the servo mechanism. This raises an interesting question: could the calculation have been simplified to just beta minus 90 degrees?

Mojo5 - Inverse Kinematics of the Knee Joint


Moving forward

This step concludes the basic framework for the Inverse Kinematics (IK) calculation pertinent to our robotic project. It's important to note that the physical setup of the servos and the joint mechanics imposes certain limitations on movement. Preliminary observations suggest the knee joint is limited to 90 degrees of motion, while the hip joint can achieve approximately 130 degrees. These constraints are not absolute; certain hip positions may permit additional knee movement, although these specific interactions remain to be fully mapped out.

Friday, March 29, 2024

Mojo5 - Video Update, Next IK (#6)

The development of Mojo5 steadily progresses, and we've put Mojo5's new leg design through its paces. This latest test, captured in a YouTube Short, showcases a significant enhancement in the design, specifically the addition of a 'yaw mount'. Although the abduction servo—responsible for the 'yaw' movement—is not operational in this iteration, the primary focus was on the leg's up/down movement capabilities.



One notable improvisation was the use of a hastily clamped mount to a flexible support. This setup was crucial in providing the freedom of movement required while still managing to lift a weight of 370g. It's a testament to our iterative design process, where even makeshift solutions can lead to valuable insights.

On the Horizon: Inverse Kinematics

Moving forward, our journey takes a calculated turn towards the precision of Inverse Kinematics (IK). IK stands at the intersection of design and mathematics, simply translating desired leg positions into specific servo angles. This mathematical approach is the cornerstone for designing diverse robot gaits.

Before we dive into the complex world of gaits, our immediate next steps involve crafting a robust design, delving into the mathematics, coding the solution, and rigorous testing. Stay tuned for our next update, where we'll share our progress in making these calculations a reality for Mojo5.

Saturday, March 23, 2024

Mojo5 - Moving forward in the design (#5)

In our latest chapter of the Mojo5 saga, we're diving deep into the tangible world of robot building, where every breakthrough is hard-won, and every detail counts. This update is all about the real, hands-on progress we've made since last time, focusing on practical challenges and our solutions.

Here's what we've been up to:

Refining the Yaw Axis: We've successfully printed and assembled the new components that accommodate Mojo5's improved yaw movement. This time around, we're talking about the nuts and bolts—literally. We're using 30 mm M3 screws to provide the axis for the yaw. It is important to consider the build order, as the 30 mm bolt is added through the servo mount into the yaw frame; the locking nuts and Yaw Gear have to be added before it passes through the frame.

Mojo5 - New 3rd Axis added to design

In the photo you can see the original paper sketch and the 3D-printed version. The previous post has the OpenSCAD version.


Putting Strength to the Test: We didn't stop at design improvements. To really see what Mojo5 can handle, we added a 370g load to its leg. It's essential for us to keep testing the limits and capabilities of our design, especially when it comes to real-world functionality. You can catch this test in action on our YouTube channel https://youtube.com/shorts/EplPBtQ46vw , where we've captured the whole process.



Streamlining the Design: While always iterating on the next improvement, we've started to pare down Mojo5, removing unnecessary parts and integrating a smaller Abductor Servo. This step might seem like we're taking things away, but in reality, we're optimizing for efficiency and performance. Sometimes the best part is the part you leave out, as the engineering saying goes.

Mojo5 - The next iteration, removing unneeded structure


Looking Back and Charging Ahead

These updates are more than just progress; they're a testament to the iterative nature of building robots. Each step, whether it's a new screw or a test under load, teaches us something vital about our design and its possibilities.

And as we dive into refining Mojo5 further, removing the extraneous and focusing on what truly matters, we're reminded of the essence of hobby robotics: it's a journey of constant learning, adjusting, and, most importantly, enjoying the process.

We Want to Hear From You

Your thoughts, feedback, and ideas have been incredibly valuable throughout this project. As we continue to navigate the complexities and joys of robot building, we're eager to hear more from you. What challenges have you faced in your projects? How do you approach problem-solving and iteration?

Let's keep the conversation going. Stay tuned for more updates as we push forward, one prototype at a time.

Catch you in the next post!

Tuesday, March 12, 2024

Mojo5 - Rethinking for the Yaw (#4)

Hey everyone! Diving into the heart of our Mojo5 robot project, we’ve hit that exhilarating moment where stepping forward means transforming everything from the ground up. Realizing this early in the prototyping phase? Priceless.

In this fourth installment, we’re rolling out the 'Yaw' – and yes, for our savvy crew following along, that's our latest dive into giving our robot an extra degree of freedom. This isn't just any move; it's what will let our robot's foot pivot with finesse, moving in and out from under the hip, enhancing its agility.

Mojo5 "yaw" mount around servos mount

Our solution? Introducing a lateral axis by creatively engineering a yaw mount that wraps around our existing servo setup. For those of you who’ve been with us, you know the servos and mounts are already pushing our size limits. So, in a stroke of insight, we decided to rotate the entire servo mount by 90 degrees. It’s a straightforward yet revolutionary tweak that cuts through our design challenges, though it does mean we’ve got our work cut out for us updating the OpenSCAD code. But, considering we're still early in the design phase, it's the perfect time for such ambitious changes.

Working in the CAD system really shines here, allowing us to simulate and verify our modifications in real-time. We managed to dial in the yaw movement to achieve +30 and -30 degrees of motion by adjusting just one variable. It’s clear, though, we’re not done iterating; we’ll need to push beyond these angles for the mobility we’re aiming for.

Mojo5: Yaw at +30 degrees

Figure: Yaw at -30 degrees

These figures highlight not just the potential of our design but the iterative nature of robotics itself. While 'Yaw' may be our current terminology, it's the concept and the progress that truly matter.

As we navigate through these updates, your insights and enthusiasm fuel this journey. The path of rapid prototyping is fraught with revisions, but each step brings us closer to realizing the full capabilities of our Mojo5. Stay tuned for the next leap forward!


Note: the correct term for "yaw" would likely be abduction and adduction:

  • Abduction is the movement away from the central axis of the body or, in this case, moving the leg outward from the body's midline.
  • Adduction is the movement towards the body’s central axis or moving the leg inward toward the body's midline.

Sunday, March 3, 2024

Mojo5 - Testing the Leg Design (#3)

Mojo5 Leg Design Test Setup
Mojo5 - Leg Design Test setup

It has been a few months since I had some focus on this robot. As a stalled project, it always sits in the back of your head. The next task was to test the configuration to see if this design would be strong enough to lift its own weight in addition to the weight of the rest of the robot: chassis, controller, battery, etc.

This weekend, I had a clear desk and I set up the test stand - which failed immediately, snapping my quick clamp with a loud crack. After some time, it occurred to me that I could have the 'chopsticks' extend out in two directions, and this would be sufficient to test. Voila!

The robot leg design, with its cam and push rods, was easily able to lift 450 g or more in addition to its own weight. I could not easily strap any additional weight to it. But I am quite satisfied with this amount of payload for the moment!

Hey - did you notice I am using a Champagne Cork for the foot! :)

Saturday, January 13, 2024

A more RUGGED robot platform for Exploration

Continuing on the theme of robotic design, with the help of AI (robotics) tools... 

It seems our last attempts resulted in an unexpected robot! The broad pink and white stripes, soft lighting, and big goofy grin add up to a "Total Adorable Robot".

Extra Adorable, Totally Not Evil Robot

However, I do not think that robot would survive very long in the Sonoran Desert! Therefore, back to the discussion, and let's add some design elements that would work in a sandy, rocky region:


The first image in the conversation: when reminded that the robot needed to operate in rocky or sandy environments, the AI added broad wheels in an awkward arrangement. I could imagine this also as a treaded solution. The bigger innovation, perhaps, is having the sensor platform as a (potentially) pivoting top component over the drive chassis. The streamlined frame looks like it would handle wind storms much better. The front sensors and display may need some protection and covering. Overall, this is a much more robust robot. But where is the manipulator arm?

The next variation of the prompt placed the robot in an icy, rocky environment and requested it have a manipulator that inspects geology:


The wheels of this robot are considerably more realistic. It is also quite plausible that this robot would fall over. If you examine closely, there is a suggestion of the manipulator arm jutting out between the wheels, but it gets blended into the axle and is lost.

The next request required that the manipulator be the focus of the image.


This version has even more rugged wheels and a top structure that looks like it could support a wide variety of payloads. The manipulator arm is placed in a difficult position, but it looks robust and effective.

The Rugged, Wheeled Hexapod:

What if the Wheels of the robot were replaced with rugged manipulators with wheels? The robot could crawl over rocks. Here is a potential variant:


I like this concept robot a lot. It would require very strong actuators and some compliant joints. By having some wheel motion capability at the end of the manipulators, the robot could roll and climb in difficult terrain. The top structure has the appearance of a turret, able to rotate and present its sensors in all directions.

There is a lot of potential with a robot design like this.

For those interested in learning how OpenAI's ChatGPT and DALLE-3 work, the last three images were all generated using the exact same prompt. Here you can see how variation is introduced when executing the same prompt.


and once again:



Prompt: 
A robust, medium-sized exploration robot designed for rugged terrains, now uniquely featuring six multipurpose manipulator arms, used for both agile movement and examination. The robot has no wheels and moves by crawling on these manipulator arms. The body of the robot is adorned with broad pink and white stripes, presenting a friendly appearance. It has smooth, rounded contours, similar to an animal-like form. A digital display on the front shows a goofy grin, adding to its welcoming demeanor. The manipulator arms are made of strong, lightweight material, matching the robot's color scheme. These arms are intricately designed, demonstrating their flexibility and dexterity as the robot crawls across a challenging landscape, using them to navigate as well as examine geological features. Soft, warm lighting and clear branding with labels like 'Research Robot' and 'TOTALLY Not Evil!' are included in the design, with a focus on the unique functionality and design of the manipulator arms in the rugged environment.

It is interesting to consider what might actually make these images unique. Once rendered, it would be almost impossible to re-render the exact same picture. See for yourself how differently they are drawn, given the same instructions.

Of course, they are all supposed to have six arms, and the last two only have four! They are also each supposed to have "Research Robot" and "Totally Not Evil!" stickers on them. It seems we still have some learning and progress ahead to get DALLE3 on task, even when it is in a moderated thread with ChatGPT.