Max Freeman

Hi! I'm Max, a senior studying Mechanical Engineering at Cornell University. This website serves as an online portfolio for the work completed during MAE 4190 (Fast Robots).


Lab 1 - Part 1

This part of Lab 1 was primarily focused on getting our Artemis boards set up and ensuring that core functionalities were working correctly before proceeding any further. As such, it mostly involved running the example scripts available in the Arduino IDE.

1) Blink it Up

The first task in this lab was to run the "Blink it Up" example that comes with the Arduino IDE. This worked without any issues, as seen in the video below.



2) Example4_Serial

The purpose of this task was to ensure that the Artemis was correctly sending data to the serial monitor. In the image below you can see that this worked correctly - when I typed "hi", the serial monitor successfully echoed this back to the serial output.



3) Example2_analogRead

This task was intended to test that the onboard temperature sensor was working correctly. In the video below, you can see how the temperature output to the Serial Monitor changes when I put my thumb over the sensor.



4) Example1_MicrophoneOutput

This task was intended to test that the onboard microphone was working correctly. In the video below, you can see how the "loudest frequency" output to the Serial Monitor changes when I tap the microphone.



Discussion & Reflection

Lab 1 was a good opportunity to get up and running with the Artemis board and ensure there were no major software or hardware issues before proceeding.


Lab 1 - Part 2

The second part of Lab 1 was focused on setting up a Bluetooth connection between our computers and our Artemis boards. We worked on testing several basic functionalities in order to build our understanding of the BLE library and how to send and receive data with our Artemis board.

Prelab

The prelab for this Lab required us to set up a virtual environment on our computers and then download the codebase for the Lab into this environment. Once this was done, we started a Jupyter server from the virtual environment and then set up our Artemis boards for bluetooth connection. We could verify that this step was properly completed by checking that the Artemis successfully output its MAC address to the Serial Monitor, as seen below.



Initiating a bluetooth connection between our computers and our Artemis board required us to generate a unique UUID, as well as reference the correct MAC address. This UUID was then referenced in the ble_arduino.ino file to ensure that we connected to the correct device. The MAC address and UUID were also referenced in the connections.yaml configuration file. With this done, a successful connection was established.



1) ECHO

This task required us to implement a new command on our Arduino that received an input string and then sent an augmented string back to our computers via bluetooth. In this case, the text "Robot says -> " and " :)" was wrapped around the input string, so an input of "HiHello" would return "Robot says -> HiHello :)" to our computers. Arduino code for this command, as well as evidence of it working, can be seen below. Before implementing this function, the command had to be added to the CommandTypes enum in the Arduino code and to the cmd_types.py file.
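A rough sketch of what this case looked like is shown below. It assumes the EString and RobotCommand helpers from the course BLE codebase (tx_estring_value, tx_characteristic_string, robot_cmd); the exact names may differ slightly from my actual implementation.

    case ECHO: {
        char char_arr[MAX_MSG_SIZE];   // buffer for the incoming string

        // Extract the string argument from the received command
        if (!robot_cmd.get_next_value(char_arr))
            return;

        // Build the augmented reply and send it back over BLE
        tx_estring_value.clear();
        tx_estring_value.append("Robot says -> ");
        tx_estring_value.append(char_arr);
        tx_estring_value.append(" :)");
        tx_characteristic_string.writeValue(tx_estring_value.c_str());
        break;
    }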



2) GET_TIME_MILLIS

This task required us to implement a command on our Arduino that sends the current value of the millis() function from the Arduino to our computer. millis() returns the time, in milliseconds, since the Arduino booted up.



3) Creating a notification handler

In this part of the Lab, we created a notification handler to receive strings from our Arduino. Some minor parsing was added to the handler so that it stripped the "T:" prefix from the returned time and only printed the actual value of the time. A global list, messages, was created for storing all strings in a single transmission, which was useful for some of the data analysis tasks later in the Lab.



The latter portion of the lab focused on comparing different methods of sending data from the Arduino to our computers in order to find out which methods were fastest.



4) TEST_TIME

Here, we created a new command that rapidly recorded the current time using millis() and sent the values one by one to the computer via bluetooth. I created a for loop and used it to send 1000 time values to my computer. Once I had received them, I used this information to work out how many messages per second I was sending to my computer, which came out to roughly 160 messages per second. Using the fact that each of these messages is 16 bits (2 bytes), we can work out that the effective data transfer rate is about 320 B/s.



5) TEST_TIME_ARR & SEND_TIME_DATA

In the TEST_TIME_ARR command, the process differed in that the time stamps were all collected and stored locally in an array on the Arduino. Once all the values were collected in the array, the SEND_TIME_DATA command would iterate through the array and send the values one by one. In the notification handler (see 3), I stored each of these values into an array called messages. I then checked the length of this array to ensure 1000 timestamps were correctly received.
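A rough sketch of the collect-then-send pattern is shown below (the array name and size are illustrative, and the BLE helper names are assumed from the course codebase):

    const int NUM_SAMPLES = 1000;
    unsigned long time_stamps[NUM_SAMPLES];

    // TEST_TIME_ARR: fill the array as quickly as possible
    for (int i = 0; i < NUM_SAMPLES; i++) {
        time_stamps[i] = millis();
    }

    // SEND_TIME_DATA: loop back through the array and send each value over BLE
    for (int i = 0; i < NUM_SAMPLES; i++) {
        tx_estring_value.clear();
        tx_estring_value.append("T:");
        tx_estring_value.append((int) time_stamps[i]);
        tx_characteristic_string.writeValue(tx_estring_value.c_str());
    }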



6) GET_TEMP_READINGS

In the TIME_TEMP_ARR command, temperature and time data were collected simultaneously and stored in two different arrays, 'temp' and 'time'. The GET_TEMP_READINGS command then iterated through each of these arrays and combined the information into a single, comma separated string (e.g. "24013, 13") to send to the computer.

In order to parse this information, I created a new notification handler called temp_handler, which used the split() method to split the incoming string using the comma as a separator. I then stored the time and temperature data into separate lists, while preserving the order. The images below show the code for the notification handler, as well as sample output from running the code.



7) Comparing data transfer methods

The final part of this Lab asked us to compare the two data transfer methods demonstrated in tasks 4 and 5 - namely, sending each value as soon as it is recorded versus storing the data locally in a batch and then sending it afterwards. It also asked us to consider how much data could be stored on the Artemis board before running out of memory.

The second method (where data was stored locally and sent in a batch) resulted in a much greater data transfer rate, but the primary drawback is that it uses up much more of the Artemis' onboard memory, so we have to be careful not to overfill it.

The Artemis has 384kB of RAM. The second method was writing data at a rate of ~32,000 messages per second which, at 16 bits (2 bytes) per value, corresponds to a write speed of about 64kB/s. At this rate, the Artemis would fill its RAM in roughly 6 seconds. Looked at differently, if each piece of data is a 16-bit value, we could store roughly 192,000 values before running out of space (less in practice, since the program itself also uses some of this memory).



Discussion & Reflection

Overall, Lab 1 offered invaluable insight into the workings of bluetooth communication, as well as an appreciation that sending data is not always just a matter of sending the data - some transfer methods are much faster or more efficient than others.


Lab 2

Lab 2 was focused on setting up the IMU, beginning to collect data, and implementing a Low Pass Filter and a Complementary Filter. The purpose of the lab was to investigate how these different methods of post-processing affect the data, as well as to see how information from different sensors could be combined to produce more accurate results. We are using the ICM-20948 IMU.

1) Setup the IMU

The beginning of this lab required us to set up our IMUs with our Artemis boards. We began by connecting the IMU to the board using the QWIIC connectors, as can be seen in the image below. I also added a small piece of code to my setup() function, which makes the Artemis' on-board LED blink when it boots up. This was useful for adding a bit of visual feedback for debugging purposes.



In establishing an I2C connection between our Artemis and IMU, it was also important for us to consider AD0_VAL. This value determines the last bit of the IMU's I2C address. In my case, the IMU jumper was closed, so the value was 0. The AD0_VAL value is important because it allows us to connect two IMUs to the Artemis with unique addresses.

After connecting the IMU and setting up a connection to it via I2C, I used the ICM-20948 example script, "Example1_Basics" to test out the IMU. The video below shows the output from the Serial Plotter when running this example script. This helped to confirm that the IMU was working correctly.

In order to better understand the data from the IMU, such as the differences between the accelerometer and gyroscope, I chose to send packages of recorded data via bluetooth to a Python environment. From here, I was able to conduct a more detailed analysis. This will be discussed in Sections 2 and 3 below.

2) Accelerometer

After ensuring the IMU was correctly set up we then had to convert the raw data from the accelerometer (in the form of accelerations in the x, y and z directions) into pitch and roll values. This was done using the equations below. Note that it is not possible to calculate yaw from the accelerometer since the force of gravity acts along the z-axis, which is the axis that yaw acts around. Any rotation about the z-axis (yaw) will not result in a change in gravitational forces measured by the accelerometer and, hence, yaw cannot be measured.
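For reference, the accelerometer pitch and roll relations typically take the form below (the exact axis pairing and signs depend on how the IMU is mounted):

$$\theta_{pitch} = \operatorname{atan2}(a_x, a_z)\cdot\frac{180}{\pi}, \qquad \phi_{roll} = \operatorname{atan2}(a_y, a_z)\cdot\frac{180}{\pi}$$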



To confirm that these equations were correctly implemented, the IMU was placed at orientations of -90, 0, and 90 degrees in both the pitch and roll directions. The surface and edges of a table were used in order to get these values as close as possible. There is some discrepancy between the reported IMU pitches and rolls and the actual pitches and rolls but, in general, they are in accordance with one another. Any discrepancies are small enough to be attributed to a mix of inaccuracy in the setup (i.e., placing the IMU at an angle that isn't exactly 90 degrees) and noise.



I then used a Fast Fourier Transform (FFT) to conduct a frequency spectrum analysis of the accelerometer signal. This was used to analyse the impact of background noise on the accuracy of the signal. A Low Pass Filter was then implemented to remove noise from the final signal. The results of this analysis in the pitch and roll directions can be seen below.

In an ideal world, this analysis would have been conducted with the IMU attached to the robot as it was driving. This would allow us to record the vibrational frequency of the robot's driving and filter this noise out. However, this lab was carried out before the connections were soldered, so we were not able to do this. As a result, there was relatively little background noise (about +/- 2 degrees) when the FFT was carried out. It is worth noting that the IMU we are using already has a built-in Low Pass Filter, which may explain the relatively small amount of noise present in the unfiltered signal.

I still opted to implement a low pass filter with an alpha value of 0.08 because, after some testing, I found that this reduced the amount of noise in the data. Going forward, once the IMU is mounted to the robot, I would like to repeat this analysis to see if a better value of alpha can be chosen to minimise vibrational noise. The code snippet below gives a high-level version of the logic applied in the Low Pass Filter's implementation. As we can see, the Low Pass Filter successfully removes much of the noise in the signal, reducing the error to roughly +/- 0.1 degrees. The effect of the Low Pass Filter can also be seen on the Frequency Spectrum FFT graph, as the amplitude of many of the noisy frequencies decreases.
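A minimal sketch of that logic is shown below (variable names are illustrative; the filtered value is carried over between loop iterations):

    // First-order low pass filter: alpha weights the newest reading.
    // alpha = 0.08 is the value chosen from the testing described above.
    float low_pass(float new_reading, float prev_filtered, float alpha = 0.08) {
        return alpha * new_reading + (1.0f - alpha) * prev_filtered;
    }

    // Inside the main loop:
    // pitch_a_lpf = low_pass(pitch_a_raw, pitch_a_lpf);
    // roll_a_lpf  = low_pass(roll_a_raw,  roll_a_lpf);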

3) Gyroscope & Comparison With Accelerometer

I then used the data from the IMU's gyroscope to compute pitch, roll, and yaw using a different set of equations. Since the gyroscope measures a rate of change in angle [deg/s], I multiplied the gyroscope reading by a time differential, dt, to get an angle in degrees. Some high-level pseudocode that demonstrates the logic for this can be seen below.
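A rough sketch of that integration is shown below. It assumes myICM is the SparkFun ICM-20948 object and last_time is an unsigned long holding the previous micros() value; which gyro axis maps to pitch, roll, and yaw depends on how the IMU is mounted.

    // Time since the last reading, in seconds
    float dt = (micros() - last_time) / 1000000.0f;
    last_time = micros();

    // Integrate the angular rates (deg/s) into angles (deg)
    pitch_g += myICM.gyrX() * dt;
    roll_g  += myICM.gyrY() * dt;
    yaw_g   += myICM.gyrZ() * dt;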

I then recorded this gyroscope data over a period of time and sent it to my Python environment via bluetooth to plot it. The plot below shows the pitch, roll and yaw data collected from the gyroscope while the robot was stationary.

Plotting the data revealed a few key differences between the signals from the gyroscope and accelerometer. Namely, although the gyroscope data isn't as noisy as the accelerometer's, the method of computation results in a significant drift in the signal. That is, the signal continually increases or decreases even when the robot is not moving. This drift can produce large errors in the angle recorded by the robot. As a result, I looked at combining data from both the gyroscope and accelerometer in a Complementary Filter. This is discussed in Section 4.

The drifting of the gyroscope data has an important relationship with sampling rate. The slower the sampling rate - i.e., the longer the delay between fetching new data - the greater the data drift is. So, rapid sampling is important to reduce drift.

4) Complementary Filter

In order to implement a Complementary Filter, I fused the data from both the gyroscope and the accelerometer. The goal here is to get readings for pitch, roll and yaw that are stable, noise-free and not susceptible to drift. This relies on defining a value, alpha_comp, which determines the weighting between the gyroscope and accelerometer data. After testing, I found a value of 0.3 was optimal.
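A minimal sketch of the fusion step is shown below (pitch and roll only, since yaw cannot come from the accelerometer; variable names are illustrative and alpha_comp = 0.3 is the value chosen above):

    const float alpha_comp = 0.3;

    // Gyro integration provides the fast, drift-prone part; the (low-passed)
    // accelerometer angle slowly pulls the estimate back towards the true value.
    pitch = (pitch + myICM.gyrX() * dt) * (1 - alpha_comp) + pitch_a_lpf * alpha_comp;
    roll  = (roll  + myICM.gyrY() * dt) * (1 - alpha_comp) + roll_a_lpf  * alpha_comp;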

I chose to feed the results of my LPF accelerometer data into the Complementary Filter rather than passing in raw data because, during my testing, this produced more reliable and less noisy results. However, this also introduces a slight lag in the data. This may be a problem when implementing a PID controller later in the semester, as it will slow down the control loop, which could cause some of the robot's movements to be less smooth. I iterated over several different values of the Low Pass Filter's alpha, alpha_lpf, in order to reduce noise without introducing too much lag. The figures below show two examples with different alpha_lpf values recorded while moving the IMU. The first plot was created using an alpha_lpf value of 0.08, while the second was created using an alpha_lpf value of 0.2. The data in the second plot lags less, while still reducing noise. I chose 0.2 as my final value for alpha_lpf.

5) Sample Data

The final part of this lab involved restructuring the code base to speed up the sampling rate. Notably, I moved the filtering code into the main loop and removed all delays and print statements. I also updated my case so that it acted as a flag to start and stop recording data in the main loop, rather than carrying out the data collection in the case itself. With these changes I was able to achieve a sampling rate of ~300 Hz, which corresponds to one measurement every ~3ms.

I then tested this over a period of 5s to confirm that I could collect and send that amount of data over bluetooth. I also printed the first 10 values of each of the pitch, roll, yaw, and time arrays to confirm they were correctly populated. The results of that test can be seen in the image below.



6) Record A Stunt

Finally, I tested the RC car out to get a sense of its behaviour and to establish a baseline against which to check that it was working correctly. A video of this can be seen below.


Lab 3

Lab 3 focused on setting up the Time of Flight (ToF) sensors on our robots. We installed a QWIIC breakout board to allow us to connect three sensors to our Artemis boards: the IMU and two ToF sensors. We then soldered the QWIIC connectors to our ToF sensors and tested them, considering strategies to manage having both sensors communicating over I2C at the same time. We are using the VL53L1X ToF sensor.

1) I2C Communication With Two Sensors

One of the major problems in this lab was working out how to communicate with both ToF sensors over I2C, since both sensors share the same default I2C address. The default address listed in the datasheet for the ToF sensors is 0x52. However, by performing an I2C scan of connected devices, we can see that the ToF sensor's address shows up as 0x29 (see image below) - this is the same address in its 7-bit form (0x52 is the 8-bit form, which includes the read/write bit). In order to connect two ToF sensors and communicate with both of them over I2C, I had to change the address of the second sensor.



I chose to change the address of the second sensor to 0x30. This was done by using the XSHUT pin to temporarily shut one sensor down, changing the address of the sensor that was still powered, and then powering the other sensor back on so that it kept the default address. Sample code for this can be seen below.
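A rough sketch of this sequence, using the SparkFun VL53L1X library, is shown below. The pin number and sensor object names are placeholders, and the exact address format follows that library's conventions.

    #define XSHUT_PIN 8   // whichever Artemis pin the XSHUT line is wired to

    pinMode(XSHUT_PIN, OUTPUT);
    digitalWrite(XSHUT_PIN, LOW);        // hold one sensor in shutdown

    tof_1.begin();
    tof_1.setI2CAddress(0x30);           // re-address the sensor that is still awake

    digitalWrite(XSHUT_PIN, HIGH);       // wake the other sensor at the default address
    delay(10);
    tof_2.begin();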

2) Wiring

The image below shows a wiring diagram for how the sensors were connected to the Artemis board. The thicker purple lines indicate connections where ordinary QWIIC cables were used, while the thinner lines indicate soldered connections.



The finished board with IMU and two ToF sensors can be seen below.



3) Sensor Placement

As the robot only has two sensors it will only be able to detect obstacles in a maximum of two directions using the ToF sensors. It makes sense to place at least one sensor in the front since this is the default direction of travel for the robot. The second could either be doubled up at the front to increase the reliability of the results by offering two sets of readings, or placed on one of the sides to increase the robot's awareness of its surrounding area. I will likely choose to mount the second sensor to the back of the robot. Ultimately, this means that obstacles coming towards the robot from the sides will be missed.

4) Sensor Readings & Comparison of Short vs Long Distance Mode

The ToF sensors have two modes that optimize their ranging performance for different maximum ranges. "Short Mode" is optimized for distances of up to 1.3m, while "Long Mode" is optimized for distances of up to 4m. In order to test each of these modes, and the performance of my sensors in general, I mounted the sensor to a fixed point and used a tape-measure to place an object at different distances from the sensor. This setup can be seen below.



I took readings from the sensor at distances of: 0.2m, 0.4m, 0.6m, 0.8m, 1.0m, 1.5m, and 2.0m across both modes. For each distance, I took 50 recordings. I then sent these results to my Jupyter environment and plotted them, as well as computing average distances, standard deviations and percentage errors for each case. These results can be seen below.

The results were reasonably consistent across both modes. As expected, though, Long Mode was more accurate than Short Mode at longer ranges. This analysis also revealed that the reliability of the data decreases dramatically at range, with the standard deviation between readings rising to roughly 30 mm at distances beyond 2m. Overall, given that the two modes performed similarly at shorter ranges, I decided that Long Mode should be used going forward. This will give the robot the opportunity to respond to obstacles at a greater distance, which will help given how fast it travels.

5) Using Two Sensors

Once the address was changed, as described in Section 1 above, I was able to collect data from both ToF sensors simultaneously. This is demonstrated in the video below.



6) ToF Sensor Speed

I also wanted to compare the rate at which the ToF sensors were collecting data with the rate at which the main loop of the Artemis board was running. To do this, I printed the current on-board time as quickly as possible and printed a "NEW DATA" flag only once data from the sensors became available. This analysis showed that the main loop ran about once every 1-3ms when the sensors were not trying to collect data. Once the data collection process started, the main loop only printed the time once every ~10ms. Additionally, the sensors report a new value about once every ~100ms. A screenshot from a moment during this test where the sensors were collecting data can be seen below. I think the current limiting factor on the speed of the loop during data collection likely comes from the call to check if new sensor data is available - "checkForDataReady()".



In order to carry out this test, I used the following code in my main loop() function:
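A rough version of this test is sketched below; it assumes distanceSensor1 is a SparkFun VL53L1X object and that startRanging() was already called in setup().

    void loop() {
        // Print the on-board time on every iteration so the loop speed can be measured
        Serial.println(millis());

        // Flag only the iterations where the ToF sensor actually has new data
        if (distanceSensor1.checkForDataReady()) {
            int distance = distanceSensor1.getDistance();   // distance in mm
            distanceSensor1.clearInterrupt();

            Serial.print("NEW DATA: ");
            Serial.println(distance);
        }
    }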

7) Timestamped Sensor Data

Finally, I recorded timestamps along with all of my sensor data. I then sent this data to my Jupyter environment via bluetooth and plotted distance vs time graphs. An example of one of these graphs is shown below.


Lab 4

Lab 4 focused on setting up the dual motor drivers for our robot and switching from manual control to open loop control. We first soldered the motor drivers to the Artemis board and then connected and mounted all components from the previous labs inside the chassis of the robot. After this, we calibrated and tested our motors before demonstrating open-loop control of our robot.

1) Wiring Dual Motor Drivers

Before we could start soldering the motor drivers to the board, we first had to plan out how we would wire them. The wiring diagram below shows the final setup for this. One key consideration for this step was ensuring that the motor drivers were connected to PWM enabled pins on the Artemis board. After consulting the Artemis specifications, pins 4, A5, 6, and 7 were selected for use, as they were PWM enabled and in locations that were physically close to the motor drivers. Though the motor drivers we are using are designed to control up to two motors each, we opted to bridge the A and B pins on each of the motor drivers. This allowed each motor driver to deliver twice as much current to the motors, allowing for faster motor speeds.



It is also worth discussing battery power in this arrangement. Both motor drivers draw power from one battery, while the Artemis draws power from its own, separate, battery. Because of this, it was necessary to bridge the Vin and GND terminals of the two motor drivers so that they could be powered from the same battery. The Artemis and motor drivers are powered from different batteries in order to avoid any undesirable effects under load. If they were powered from the same battery, it is possible that the motors would draw too much current under load, causing the Artemis to power down or reset. Powering the motors and Artemis separately also acts as something of a back-up, so that the Artemis can still remain powered on even if the motor battery is fully discharged.

2) Oscilloscope Testing

Once I had soldered the first motor driver, I tested it using an oscilloscope and an external power supply to verify that the PWM signals were being correctly generated. I connected the Vin and GND pins of my motor driver to the power supply and set its output to 3.7 V in order to replicate the voltage that would come from the battery (3.7 V, 850 mAh). Also, the datasheet for the motor drivers we are using gives an operating voltage range of 2.7-10.8 V, so 3.7 V is acceptable. After powering the motor driver board, I attached the oscilloscope probe to one of the board's output pins and attached the probe's ground clip to the common ground from the power supply. This setup can be seen in the image below.



I then sent a simple PWM signal to the motor driver. I began by defining each of the pins that were connected and switching the pin mode for these pins to be output. I then used the analogWrite function to send a PWM value of 200 to the motor driver. A code snippet of this can be seen below.
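A minimal sketch of this test is shown below. Pins 4 and A5 are two of the PWM pins mentioned above, though which input pair maps to which driver channel is an assumption here.

    const int MOTOR_IN1 = 4;
    const int MOTOR_IN2 = A5;

    void setup() {
        pinMode(MOTOR_IN1, OUTPUT);
        pinMode(MOTOR_IN2, OUTPUT);

        // Drive one input with a high duty cycle and hold the other low
        analogWrite(MOTOR_IN1, 200);
        analogWrite(MOTOR_IN2, 0);
    }

    void loop() {
    }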

I then used the oscilloscope to verify the output, which is shown below. The output from the oscilloscope confirms our expectations in two ways. Firstly, it is a square wave, as we would expect for a PWM output; and, secondly, it reflects the high duty cycle (200 out of 255) that I used as an input, because the output is HIGH for most of the period.



3) Spinning One Side

After confirming the motor driver was working correctly, I then soldered it into the car to test it on the actual motors. To do this, I sent another simple PWM command to the robot, except this time I alternated between sending forward and backward commands to test whether the wheels could spin both ways. The code snippet below shows the code used, and the video below shows the result of this test. As you can see, the wheels on one side of the robot spin one way before switching and spinning in the opposite direction (this change in direction is also evidenced by the robot rocking back and forth because of its inertia).



4) Spinning Both Sides

Next, I soldered the second motor driver in place and connected both drivers to a 3.7V 850mAh battery, rather than powering them from the external power supply. I then ran a simple test to see if both wheels would spin forward. The results of this test can be seen below.



5) Final Layout

The image below shows the final layout of the components mounted onto the robot.

6) Motor Calibration

With the components all soldered and mounted onto the car's chassis, it was now time to test driving the car. The first issue was calibrating the motor drivers. Given that the motors we are using are very cheap and of low quality, the two sides may spin at different rates, even when given the same PWM input. This could be due to different amounts of friction in their internal gearboxes, for example. The video below shows the result of the car driving forward with the same input PWM to both motors. As you can see in the video, the car veers left. This is because the motors controlling the wheels on the left-hand side of my car had higher friction and therefore spun slower than the right-hand side.



In order to address this, I created calibration constants for the left and right motors to scale the PWM inputs such that the motors would spin at the same speeds. After testing various values for these constants, I found that a right_cal value of 0.6 and a left_cal value of 1 (i.e., unchanged) helped match the motor speeds better. The code below shows how I accomplished this.
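The logic looked roughly like the sketch below (the pin macros and the helper name are placeholders for my actual motor functions):

    // Calibration factors found experimentally: the right side is scaled down
    const float left_cal  = 1.0;
    const float right_cal = 0.6;

    void drive_forward(int pwm) {
        analogWrite(LEFT_FORWARD_PIN,  (int)(pwm * left_cal));
        analogWrite(LEFT_REVERSE_PIN,  0);
        analogWrite(RIGHT_FORWARD_PIN, (int)(pwm * right_cal));
        analogWrite(RIGHT_REVERSE_PIN, 0);
    }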

Finally, the video below shows the car moving in a (relatively) straight line after the motor inputs had been scaled, as described above. While the car can drive straight for distances of ~8ft, it does still start to veer off over larger distances, as can be seen towards the end of the video. Again, this is likely due to the low quality of these motors. However, this isn't overly concerning because once we implement our PID controllers, any small deviations like this can be corrected for.



This veering behaviour can be explained in terms of the different levels of internal friction in each of the motors. Even when calibrated to run at the same speeds, I found that the right motor would freely spin for longer than the left motor once they were both shut off. The video below shows both motors spinning (at approximately the same speed, thanks to calibration) and then being shut off. As you can see, the right motor continues spinning for significantly longer than the left one.



7) Motor Issues During Testing

While calibrating my motors, I initially ran into an issue where my left motor was spinning extremely slowly, which significantly limited the maximum speed of the car. At first, I thought this might just have been a result of the poor quality of the motors but, after further testing, I realized that some of my soldered connections on the left motor driver pads did not have enough solder, which was limiting the amount of current that could flow to the motors. After applying more solder to these pads, I was able to achieve normal motor speeds and carry out the calibration described in Section 6, above.

8) Lower Limit PWM Values

I also wanted to find the minimum PWM values required for each of the following behaviours: driving forwards, driving backwards, and on-axis turns in each direction. That is, what minimum PWM input is required for the wheels to spin fast enough to overcome friction and perform each of these behaviours. After repeatedly testing different values, I found the following approximate lower limits:

  • Forwards: ~55
  • Backwards: ~55
  • On-Axis Spin Clockwise (Left Motor Forwards): ~160
  • On-Axis Spin Anti-Clockwise (Right Motor Forwards): ~165

9) Open-Loop Control Over Bluetooth

Finally, I wanted to test open-loop control of the robot including turns and both backwards and forwards motion. Though not strictly required, I decided to test sending these commands to the robot via Bluetooth, rather than hard-coding a particular path into the robot. In order to do this, I defined three new commands: FORWARD, BACKWARD, and TURN90R. The first two would drive the robot in the named direction for 0.5 seconds, and the third would turn the robot approximately 90 degrees to the right about its axis. Following from what we had learnt in the previous labs, I implemented flags that were triggered once the relevant command was called, which would then run code in the main loop to move the robot as needed. The first code snippet below shows the code in the FORWARD case, which triggers the flag when the command is called, and the snippet after that shows the corresponding logic in the main loop. In my actual implementation, this concept was expanded to also include the BACKWARD and TURN90R commands.

After implementing this code, I was able to send commands to the robot via bluetooth from the Jupyter environment, as seen in the image below.

The robot's response to these commands can be seen in the video below.



10) Linear Speed Control Function

As a last step, I created a function that would drive the motors based on an input value, u, which ranged from -100 to 100. This function would map the u value to a PWM speed through a linear relationship with a positive y-intercept designed to increase the PWM input above the minimum PWM values found in Section 8. A negative value would correspond to driving the motors backwards, while a positive one would drive forwards. The higher the magnitude of the input, the faster the motors would spin. This type of control is useful for implementing the PID controllers in the next lab. The code for this function can be seen below.


Lab 5

The purpose of Lab 5 was to set up a position-based feedback controller on our robots using data from the ToF sensors. I opted to implement a basic P controller on my robot at first and then branched into a PD controller once I got this working. Given that the main controller loop runs more quickly than the ToF sensor can return data, I also implemented linear extrapolation to extrapolate the sensor values between readings.

1) Prelab - Setting Up Debugging & Data Logging Infrastructure

A key consideration in this lab was to have a strong debugging infrastructure set-up. In this case, that involved logging and storing data on the robot in real-time and then sending that data to my computer over Bluetooth, where I could perform analysis and post-processing. I chose to store the following pieces of data in arrays for debugging:

  • The control input from the controller, u
  • PWM inputs for both the left and right motors
  • "Raw" Time of Flight sensor readings
  • Extrapolated Time of Flight sensor readings
  • Time stamps for all of these pieces of data

In order to do this, I created an array for each separate piece of data I wanted to store. Within each iteration of the main control loop, I would then call functions that collected the relevant pieces of data and stored them in their respective arrays. I would continue this process until my arrays were full, at which point I would stop PID control on the robot and send all of the data to my computer. For the purposes of this testing, I set my array size to a relatively small value of 1000. While I could have chosen a larger array size to collect data for a longer amount of time without overflowing the onboard memory, using a small value helped ensure relatively fast tests and data-collection, as the behaviours I wanted to capture would generally have taken place within this timespan.

The code snippet below shows some rough pseudo code for the structure of this main loop. In practice, the actual logic of exactly how I store some of the values within the functions, particularly in the case of the ToF values, was more complex. This will be explained in greater detail when discussing extrapolation.

The data is then sent from the Artemis board once the arrays are full. Data is only sent at the end, rather than in real time, so as to speed up execution of the loop.

On the Python side of things, I implemented the following notification handler to "listen" for data being sent over bluetooth. Data is sent from the Arduino in a comma separated string - e.g. "1,2,3" - where each number represents a value from a different array. The values are always sent in the same order, so parsing them is easy. One minor detail to note about my implementation is that the "distance_front_raw" array (and its companion "times" array) only stores values when a real sensor reading is collected, while "distance_front_extrap" records both the real values and the extrapolated values between them. Logically, this means that distance_front_extrap will reach the maximum array size long before distance_front_raw. So, there will be some 0 values in both distance_front_raw and times that need to be filtered out before plotting. The code snippet below shows my notification handler.

2) Setting Up Bluetooth Commands

As another small "quality of life" addition, I implemented a function that allowed me to input values for Kp, Ki, and Kd over bluetooth. This was useful because it was much faster than having to recompile and upload the code to my robot each time I wanted to tweak a value. The code snippet below shows my implementation of this case on the Artemis.

I also implemented a START_PID command, which would activate PID control within the main loop, as well as resetting and initialising any relevant variables. Most of the variables and steps taken here are self-explanatory. One thing to note is clearing the lists at the end by filling them with zeros. Usually this isn't necessary, since you just write over the old values, but because the number of "real" sensor readings recorded in a given run is not consistent, I found that in some cases there would be issues if the lists weren't cleared. For example, if one run recorded 9 values in dist_front_raw but the next recorded only 8, the 9th value from the previous run would not be overwritten and would remain in the data.

3) Implementing Proportional Control

The goal of this lab was to design a controller that would have the robot drive as fast as possible towards the wall before stopping at a given setpoint. In order to do this, I first implemented a simple proportional controller. A proportional controller works by providing a control input that is proportional to the error. If the error is large and positive (i.e., the robot is far from the wall), the control input, u, will be large and positive, causing the robot to drive forwards. Once the START_PID command is sent, the PID_pos_flag variable is set to true and the PID control loop begins to run. The structure of this controller is fairly simple:

  1. Get a distance reading from the ToF sensor
  2. Work out the error between your setpoint and this reading
  3. Multiply this error by Kp to get your control input, u
  4. Clamp u such that: -100 <= u <= 100
  5. Pass u into the drive_motors_same function, where it is converted into a PWM input to drive the motors

The code snippet below shows this implementation in my main loop.
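A simplified version of that loop is sketched below (it omits the extrapolation discussed in the next section, and variable names are illustrative):

    // Update the distance whenever the ToF sensor has a new reading
    if (distanceSensor1.checkForDataReady()) {
        distance_front = distanceSensor1.getDistance();   // mm
        distanceSensor1.clearInterrupt();
    }

    // Proportional control: the error is positive when the robot is far from the wall
    float error = distance_front - setpoint;
    float u = Kp * error;

    // Clamp u to the allowed control range
    if (u > 100)  u = 100;
    if (u < -100) u = -100;

    drive_motors_same(u);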

4) Extrapolating Between Readings

By post-processing data from the robot, I was able to determine that the ToF sensor has a sampling rate of ~10 Hz, while the main loop runs at a frequency of about ~130 Hz. Given that the main control loop runs at a higher frequency than the ToF sensor's sampling rate, there will be some iterations where the ToF sensor does not have a value ready. If nothing is done about this, the robot moves very erratically and is unable to react to obstacles quickly. In order to mitigate this problem, a simple extrapolation is done using the last two "real" ToF sensor readings obtained. The slope between these two points is calculated and is then assumed to be constant until the next sensor reading comes in. One key factor in this solution is that extrapolation cannot be carried out until the first two real data points have been recorded, so my implementation waits until these first two readings come in before beginning. The code snippet below shows how this works.
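A rough sketch of the extrapolation step is shown below, where (d1, t1) and (d2, t2) are the two most recent real readings (names are illustrative):

    // Extrapolation only starts once two real readings have been collected
    if (num_readings >= 2) {
        // Slope between the last two real readings, in mm per ms
        float slope = (d2 - d1) / (float)(t2 - t1);

        // Project that slope forward from the most recent reading to the current time
        distance_extrap = d2 + slope * (millis() - t2);
    }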

The image below gives an example of a graph of ToF data versus time for one run. The blue dots show actual sensor readings, while the red line is the extrapolated data between them.



5) Motor Input

Since I scaled my u values from -100 to 100, I wanted to map these values to sensible PWM values rather than feeding them in as raw values. A u input of magnitude 100 corresponds to full speed, while 0 corresponds to not moving (and negative values correspond to backwards motion). To do this, I created a simple linear relationship between the two points (u = 0, PWM = 50) and (u = 100, PWM = 255), which gives the equation: PWM = 2.05u + 50. In practice, I had to play around with these values a bit to get the relationship right, as well as set different y-intercepts for backwards and forwards motion. Note that a u input of 0 corresponds to a non-zero PWM input because of the deadband of the motors.
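A sketch of this mapping is shown below; move_forward(), move_backward(), and stop_motors() stand in for my actual motor helper functions.

    // Map a control input u in [-100, 100] onto a PWM value, offsetting by the
    // deadband found in Lab 4 (PWM = 2.05*u + 50 in the forward direction).
    void drive_motors_same(float u) {
        if (u > 0) {
            move_forward((int)(2.05 * u + 50));
        } else if (u < 0) {
            move_backward((int)(2.05 * (-u) + 50));
        } else {
            stop_motors();
        }
    }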

6) Adding Derivative Control

Derivative control works by adding a control input term that is proportional to the rate of change of the error of the system - i.e. u = Kd*(de/dt). In this way, it acts as a dampener on the system, acting against changes. The derivative controller helps to minimise overshoot, as well as minimising disturbances and unwanted oscillations at steady state. After adding derivative control to my controller, the control input, u, is calculated as follows:
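In standard form, with e the distance error, this is:

$$u = K_p\,e + K_d\,\frac{de}{dt}$$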

Adding derivative control was fairly straightforward once the proportional controller had been implemented. Four new variables were defined: current_error, current_time, previous_error, and previous_time. These were updated within the loop and used to calculate de/dt for each loop iteration. Ordinarily, derivative kick might be a problem here but, since we are not currently changing the setpoint during the robot's operation, this is not something we have to worry about. If I did have to address this issue, I would just use the time derivative of the sensor reading itself rather than of the error. In this way, any change in the setpoint is not reflected in the control input.

7) Tuning The Controller

Given the range of motor input values (55-255) and the maximum distance that can be recorded by the ToF sensor (4000mm), we would expect Kp to be somewhere on the order of 255/4000 = ~0.06.

Kp = 0.03

I began by setting Kd = 0 and only experimenting with the proportional controller. If Kp was too low the system would not generate a large enough control input to drive the motors forwards or, if it could, it would move very slowly, having a long rise time. If it was too large, the system would overshoot the setpoint and experience oscillations from being overly sensitive to error. After iterating on different values, I settled on a Kp value of 0.03. This value was approximately the maximum Kp value that I could use without experiencing significant overshoot or oscillations. The image below shows a graph of data from one run with this value.



Next, I tried adding in derivative control in order to minimise the overshoot and oscillations from my proportional controller. Here, I experimented with different values, looking for the minimum value that would produce the effects I was looking for.

Kd Too Small - Kd = 5

If Kd was too small, it would not have a large enough effect and the system would overshoot.



Kd Too Large - Kd = 9

If Kd was too large, it would cause the system to undershoot.



8) Final Values & Trials

After continuing to tune my value for Kd, I found that a value of 7 was optimal for my system. The graphs and videos below show results from my system over three trials. In each trial I started the robot at a different distance from the wall ("Medium", "Short", "Long").

  • Kp = 0.03
  • Kd = 7

While the system performed well in all cases, I found that the starting distance from the wall did have an impact on the system dynamics, particularly with overshoot and undershoot, as it affects the robot's acceleration. By taking the maximum slope of the distance vs time graphs below, I was able to find that the maximum velocity of the robot was approximately 0.5 m/s.

Trial 1 - "Medium" Distance

The system performed optimally at this distance.





Trial 2 - "Short" Distance

When starting closer to the wall, the system had difficulty reaching the setpoint, though it did eventually reach it towards the end. This is because, when starting from closer distances, the robot has less momentum as it comes close to the wall.





Trial 3 - "Long" Distance

When starting further away, the opposite is true - the system now has more momentum so overshoot is more likely.






Lab 6

The purpose of Lab 6 was to set up an orientation feedback controller on our robots using data from the on-board IMU. In order to do this, I implemented a PD controller on my robot.

1) Prelab - Setting Up Debugging & Data Logging Infrastructure

As in Lab 5, it was important to have a strong debugging infrastructure set-up. This involved logging data in real-time on the robot and then sending it over Bluetooth to my computer for analysis. For the sake of brevity, I will only describe which variables I decided to log, as a more detailed account of the methodology I used can be found in Lab 5 above. The variables I opted to store for debugging purposes were:

  • Yaw from Gyroscope
  • PWM inputs for both the left and right motors
  • Control signal, u, as well as each of its individual components: P, I, and D.
  • Time stamps for all of the data

2) Key Considerations

Before beginning to implement the PD controller, there were some key points to consider.

Sensor Limitations

The first question was whether there are any limitations of the sensor to be aware of. In the case of this IMU, there is a maximum rotational velocity that the gyroscope can read. By default, this value is 250 dps (degrees per second), which is not sufficient for some of the fast turns carried out. To address this, the maximum rotational speed can be re-configured; I chose a value of 1000 dps, which was more than sufficient. There are also options to configure the sensor to eliminate any constant bias that might exist in the readings but, given that I did not find any issues with bias in my sensor readings, I decided not to use these. The code below shows how I was able to reconfigure the sensor range for this purpose.
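The configuration follows the pattern from the SparkFun ICM-20948 library's Example2_Advanced sketch and looks roughly like this:

    // Raise the gyro full-scale range so fast turns are not clipped at 250 dps
    ICM_20948_fss_t myFSS;
    myFSS.g = dps1000;    // 1000 degrees-per-second full scale

    myICM.setFullScale(ICM_20948_Internal_Gyr, myFSS);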

Derivative Term - Low Pass Filter

Another important question concerned the derivative term - namely, whether a Low Pass Filter should be included on this term. While this is something that I experimented with, I found that adequately filtering out the noise on this term would add too much delay to my signal, which would inhibit the ability of my controller to rapidly react to changes. Instead, I chose to use a relatively small value of Kd, so that small changes due to noise would not adversely impact the trajectory of the robot but would still add enough damping to be effective. In practice, I found this approach worked very well, as demonstrated in the trials below.

Codebase - Changing Setpoint While Robot Runs

I also edited my code from Lab 5 so that I could send both K values and my setpoint over bluetooth while the robot was running. This allowed me to change the robot's setpoint in real-time, which is crucial for implementing more complex behaviours later. The code snippet below shows how I implemented this functionality on the Artemis. To send this information to the Artemis over bluetooth I would just run a function like: ble.send_command(CMD.SET_PID_ANGLE, "1|2|3|90")

3) PD Controller Setup & Tuning

P Control - Kp = 0.7

As in Lab 5, I began by implementing a simple P Controller and tuned the value until reaching the maximum value possible that didn't introduce significant overshoot or oscillations. Getting the P value as high as possible without adding overshoot is important, as it decreases the system's rise time, leading to a faster control response. After experimenting with multiple values, I found that a P value of 0.7 worked best. The graphs below show the results of a test run with a setpoint of 180 degrees using only P control.



As you can see, the system overshoots on the way up to 180 degrees. This overshoot can be reduced by adding derivative control. Steady state error is very low - from analysing the steady state data, I can see that it is less than 1 degree.

PD Control - Kp = 0.7, Kd = 30

The D parameter is used to help reduce overshoot and oscillations around steady state. Generally, it acts as a damper on the system. I increased the D value until reaching the minimum amount that would minimise overshoot without introducing any instability or unwanted behaviours to the system, which can happen for large values of D. The graphs below show the performance of the system with the same setpoint of 180 degrees, but this time using derivative control too.



Adding the derivative control has now removed the overshoot that was present in the system when only P control was used. The system now performs very well.

4) Integral Control

Justification of Lack of Integral Control In Final Controller

The integral term in PID control is typically used to remove steady state error. However, in practice, I found that my controller had very little steady state error (typically less than ~1 degree). Also, adding integral control can have undesirable effects on the system, increasing both overshoot and settling time. As a result, while I did successfully implement integral control on my controller, I decided not to use it in my final controller tuning.

[5000-Level Task] Implementation of Integral Control & Wind-up Protection

In order to implement my integral controller, I added a new variable, culm_sum, which kept a cumulative sum of the error accumulated over time. I also added wind-up protection to my controller, in the form of capping the magnitude of my I term at 100. Integrator wind-up can occur when the controller output saturates, causing the integral term to continue integrating error beyond useful limits. In this case, the integral term grows very large, causing an overshoot. It is therefore necessary to implement wind-up protection to prevent this by limiting the growth of the integral term. My final implementation of my PID controller can be seen below.
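The integral term and its cap looked roughly like the sketch below (dt is the loop time step and the other variable names are illustrative):

    // Accumulate the error over time for the integral term
    culm_sum += error * dt;

    // Wind-up protection: cap the magnitude of the I contribution at 100
    float I_term = Ki * culm_sum;
    if (I_term > 100)  I_term = 100;
    if (I_term < -100) I_term = -100;

    float u = Kp * error + I_term + Kd * d_term;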

5) Trials

Next, I carried out several trials on my final controller (Kp = 0.7, Kd = 30). The aim of these trials was to characterize the effectiveness of my controller under different conditions.

Trial 1 - Large Setpoint (720 deg)

This trial was the simplest and consisted of giving the controller a large setpoint of 720 degrees to ensure it could reach it without spinning at a speed that was too high for the gyro to pick up.





In this case, the controller performs very well. Some minor oscillations can be seen at the end but the overall performance of the system is still good.

Trial 2 - Multiple Setpoints (90, 0, -90 deg)

Here, the setpoint was changed three times while the robot was running to ensure it could respond adequately to changes in setpoint without any derivative kick. The first set of images shows the system's response before any anti-kick functionality was added. As you can see, there are large spikes ("kicks") in the derivative signal when the setpoint changes.



To remove derivative kick, I simply altered the code so that instead of taking the derivative of the error, de/dt, it takes the derivative of the yaw sensor data itself. This removes any impact from changing the setpoint by removing any reference to it in the derivative term. The code snippet below shows the difference between this implementation and the normal implementation.
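The change amounts to differentiating the measurement instead of the error, roughly as sketched below (variable names are illustrative):

    // d(error)/dt = d(setpoint - yaw)/dt jumps whenever the setpoint changes,
    // so differentiate the yaw measurement instead (note the sign flip).
    float d_term = -(yaw - previous_yaw) / dt;
    previous_yaw = yaw;

    float u = Kp * error + Kd * d_term;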

The images and videos below show the system's response after the code was altered to remove derivative kick. Though eliminating the kick didn't produce a huge impact on the system in this particular case, it did help smooth out the response near the setpoints.





Trial 3 - Disturbance Response

Here, the setpoint was set to 0 degrees and the robot was nudged around to see if it could act against these disturbances to remain at 0 degrees.





The controller performed adequately in this test, though it could have been slightly more responsive. This indicates that I may have to increase my P value slightly more. It could also be that my battery was beginning to get lower as I ran this test at the end of a long run of testing. Another option is to add tape to the wheels of my robot, which I have heard from other students helps increase responsiveness by reducing friction.


Lab 7

In Lab 7, I developed a Kalman Filter and simulated implementing it on my robot using data collected from a test run. Kalman Filters are useful for estimating the state of our robot even in the presence of noisy measurements. This is particularly useful given the relatively low sampling rate of our ToF sensors, as it allows us to generate estimates of our state to drive our robot without having to wait for new data.

1) Characterising The System - Drag and Momentum

In order to implement a Kalman Filter, the first thing I had to do was characterise the robot to generate a state space model of my system. To do this, I drove my car towards the wall with a constant motor input and collected ToF distance readings while doing so, from which the robot's velocity could be computed. The goal was to reach steady state over the course of recording this data. I chose to use a u value of 30, corresponding to 30% of my maximum speed, as I found this best enabled me to reach steady state in a reasonable time-frame.

Graphs

After running this test several times to get useful results, I ended up with the following graph. Note that there is a slight discrepancy near t = 35s in the ToF sensor readings. However, since we are only really concerned with the data at steady state and at the 90% rise time, this isn't a big issue. I also chose to compute estimates of the speed of the robot from the ToF sensor data. This was useful in assessing exactly when the robot's speed reached steady state, and the computed speed is shown in the final plot of the image below. To calculate it, I used a simple central difference approximation to estimate the slope locally. The code for this can be seen below.





Analysis

With this data, I could then extract the 90% rise time, as well as the steady state velocity. The steady state velocity was calculated using the mean of the last four velocities (excluding the very last value, since it seemed anomalous). I found the values were as follows:

  • 90% Rise Time: 2.004 seconds
  • Steady State Velocity: -2215.8355 mm/s

I then calculated my values of d and m, which correspond to pseudo-values that characterise the system's drag and mass. These values are needed to populate the A and B matrices in the Kalman Filter. In order to obtain these values, we perform a force balance (F = ma) on our system at steady state; the resulting relations are given after the values below. I found the values were as follows:

  • d: -0.000451
  • m: -0.000363
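These come from the standard steady-state and rise-time relations for this first-order model (with the step input normalised to u = 1):

$$d = \frac{u}{\dot{x}_{ss}} = \frac{1}{\dot{x}_{ss}}, \qquad m = \frac{-d\,t_{0.9}}{\ln(0.1)}$$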

Saving to CSV

Finally, I saved this data to a CSV file so that I could preserve it for future use and would not have to repeat the trial if my kernel restarted. I used Python's built-in "csv" library to do this. The code below shows how the values were stored.



2) Initialising Kalman Filter

Initialise State Space Equation Matrices

With the d and m values calculated, I can now begin to initialise the A and B matrices that are used in my state space equation, as well as defining the state vector, x. These are shown below.



In order to initialise these arrays, I ran the following piece of code. It is important to note that once the A and B matrices are created, they must then be discretized according to the timestep we are using. Since I am sampling values every time the main loop runs, my sampling rate is around 100 Hz, so my Delta_T works out to about 0.01 seconds.
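For reference, the model and the simple discretization used here (with $\Delta T$ the loop period) are:

$$\frac{d}{dt}\begin{bmatrix} x \\ \dot{x} \end{bmatrix} = \underbrace{\begin{bmatrix} 0 & 1 \\ 0 & -\frac{d}{m} \end{bmatrix}}_{A}\begin{bmatrix} x \\ \dot{x} \end{bmatrix} + \underbrace{\begin{bmatrix} 0 \\ \frac{1}{m} \end{bmatrix}}_{B}u, \qquad A_d = I + A\,\Delta T, \quad B_d = B\,\Delta T$$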



Initialise Noise Matrices & Define Kalman Filter Function

In order to implement a Kalman Filter, it is also very important to correctly specify both the process noise and sensor noise covariance matrices. The significance of these matrices is further explored in Section 3, below. The code snippet below shows how these matrices and the Kalman Filter function were defined. One key point in the function definition is the scaling applied to u. My u inputs have a magnitude of 0 to 100, so I first divide by 100 to get them between 0 and 1. Next, since we assumed u = 1 when first calculating d and m, even though the real input was 30, we also have to divide by 0.3 to get the scaled u value.
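For reference, the function implements the standard Kalman Filter predict and update steps, where $\Sigma_u$ is the process noise covariance, $\Sigma_z$ the sensor noise covariance, and $C$ the observation matrix:

$$\begin{aligned}
\bar{\mu} &= A_d\,\mu + B_d\,u, \qquad \bar{\Sigma} = A_d\,\Sigma\,A_d^{T} + \Sigma_u \\
K &= \bar{\Sigma}\,C^{T}\left(C\,\bar{\Sigma}\,C^{T} + \Sigma_z\right)^{-1} \\
\mu &= \bar{\mu} + K\,(z - C\,\bar{\mu}), \qquad \Sigma = (I - K\,C)\,\bar{\Sigma}
\end{aligned}$$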



3) Run Kalman Filter On Test Data

I could now run my Kalman Filter on the test data I had collected during a run of my position-based PD controller from Lab 5. This was accomplished by looping through the data points and applying the Kalman Filter to each of them, as shown below.



Results

After applying my final Kalman Filter to my data, I plotted it to compare. This plot can be seen below. The Kalman Filter appears to perform very well and tracks the expected path nicely (first plot), as well as giving sensible values for the speed of the robot (last plot).



Parameter Tuning Discussion

In order to get a good Kalman Filter, it is important to correctly tune the values in the covariance matrices. These matrices essentially determine the weighting applied by the Kalman Filter to the sensor data vs. the estimated data. If the sensor uncertainty is low, the Kalman Filter will more closely match the sensor data and if it is high, the Kalman Filter will place a greater emphasis on the predictions generated by the state space model we have created. Meanwhile, the process noise describes the uncertainty in the state space model we are using, with sigma_1 being the uncertainty in position and sigma_2 being the uncertainty in velocity. So, a high uncertainty for the process noise will make the Kalman Filter "trust" our sensor data more. The test below demonstrates this effect in practice: by setting the process noise uncertainty to be low, our Kalman Filter will place a greater emphasis on its own internal model, departing from the sensor readings.




Lab 8

Lab 8 was the culmination of much of the work we have done on our robots this semester. Here, we implemented a stunt on the robot, choosing between a 180 degree drift and a flip. I chose to implement the drift (Task B) using my PID orientation controller from Lab 6.

1) Stunt Requirements & Implementation

The drift stunt required us to start our robot within 4m from the wall, drive fast forward and then, once the robot was within 914mm of the wall, initiate a 180 degree turn before driving back in the direction it came from.

My basic approach was to create a simple function that drove the robot forwards for a set amount of time (800ms in this case), while logging data. Once the time had elapsed, the PID controller was activated with a setpoint of 180 degrees, causing the robot to turn around while drifting. Once the setpoint was reached, the PID controller would stop and control would return to the drive-forward function. In order to decide when the PID controller should stop running, I added a few lines of code that counted each time the current angle got within 5% of the setpoint. If the robot stayed within this tolerance for 5 iterations in a row, the turn would stop. The code snippet below shows how that was handled.
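The check looked roughly like the sketch below (variable names are illustrative):

    // Count consecutive iterations where the yaw is within 5% of the setpoint
    if (fabs(yaw - setpoint) < 0.05 * fabs(setpoint)) {
        within_count++;
    } else {
        within_count = 0;
    }

    // After 5 consecutive iterations inside the tolerance, end the turn
    if (within_count >= 5) {
        pid_turn_active = false;   // hand control back to the drive-forward stage
    }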

2) Videos of Successful Trials

The videos below show the results of three successful trial runs of this stunt, along with data from each of the runs. The system performed acceptably in all three trials, though the PID controller seemed to stutter a bit when turning in the second trial.

Trial 1





Trial 2





Trial 3






Lab 9

In Lab 9, we mapped out a static arena in the lab by placing our robot in five different locations in the arena and having it spin about its axis while collecting distance readings from the ToF sensors. After collecting data from each of these locations in the local reference frame of the robot, we then applied transformation matrices to the data to convert these readings to the global reference frame in cartesian coordinates.

1) Orientation Control

The first part of this lab required us to decide on a control method for spinning our robot. I opted to use my orientation PID controller from Lab 6 for this, spinning the robot in 10 degree increments for a total of 36 readings per rotation per sensor - resulting in a total of 72 readings across both sensors. I chose this method, rather than continuously spinning the robot, to ensure that the robot was stationary when each reading was taken.

A key concern with spinning the robot was ensuring it was actually spinning roughly about its own axis. If the robot was translating at all while spinning, then its distance from the obstacles around it would change over time, causing errors in the map. The video below shows that the robot turned roughly about its own axis for the duration of the turn.



Given the drift from the gyroscope, the accuracy of the turns decreased with each subsequent increment.

2) Error Estimation

Given the drift in the sensor, the accuracy of the angle increments and the reliability with which the robot turns on axis, we can reason about the errors in our readings. From implementing the Kalman Filter in Lab 7, as well as the ToF testing in Lab 3, I estimate that the average uncertainty in a given ToF sensor reading is about ~10mm. From the video of the robot turning about its axis, we can see that the robot deviates at most about 1/5th of a tile (0.2 ft) while turning, which corresponds to roughly 60mm. From the video, we can also estimate that the robot deviates from the "true" angle by no more than about 5 degrees over the course of a 360 degree turn. Assuming the robot is in an empty 4x4m square room, the on-axis error over the course of a turn ranges from 0-60mm, so the average error is about 30mm. The uncertainty from the ToF sensor itself can be assumed to be constant. This gives us an average error of ~40mm and a maximum error on the order of ~70mm.

3) Data Collection

With the turning behaviour set up, the robot could now be placed into the arena to collect data. The image below shows this arena, labelling each point with a number that is used to differentiate the data from each point in the maps. The image also contains the coordinates of each point (measured in feet) and shows the sign convention used in the GLOBAL reference frame.



The robot was then placed in each position and a map() command was sent. This command would set the PID orientation setpoint to 10 degrees; once the setpoint had been reached, a reading would be taken from the ToF sensors and stored in an array. The setpoint would then be incremented by 10 degrees and the robot would begin turning to the next position. This behaviour would repeat until a full 360 degree rotation had taken place, at which point the data would be sent over Bluetooth to my computer. Once I had received the data, I generated polar plots for each point as a sanity check. The image below shows a polar plot of the data recorded at Position 4 in the arena.
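The polar plots were generated with matplotlib; a minimal sketch of this sanity check is shown below, with random placeholder data standing in for the yaw angles and ToF distances received over Bluetooth.

import numpy as np
import matplotlib.pyplot as plt

# Placeholder data: 36 readings at 10 degree increments (the real values come over BLE)
yaw_deg = np.arange(0, 360, 10)
tof_mm  = 1000 + 200 * np.random.rand(36)

fig, ax = plt.subplots(subplot_kw={'projection': 'polar'})
ax.scatter(np.radians(yaw_deg), tof_mm)    # polar axes expect angles in radians
ax.set_title("ToF readings vs. yaw (sanity check)")
plt.show()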



4) Plotting in Cartesian

Once data had been collected at each of the five points in the arena, it was time to transform it into Cartesian coordinates so that a map of the entire space could be constructed from the data taken at each location. To do this, the saved front distance, side distance and theta angles were first read from the CSV file. Next, a small offset was added to the side and front distance values to account for the sensors' distance from the true center of the floor tile, due to where they are mounted on the robot: an extra 100mm was added to each front-sensor data point and 30mm to each side-sensor data point. These distance values were then placed into 2x1 matrices of the form [x; y] and converted to feet. It is also worth noting the sign convention used: the positive x-axis points towards the robot's front, while the positive y-axis points out of the robot's left side (when looking from behind the robot). As a result, in my case, the side sensor's values were all entered as negative because my side sensor was mounted on the right side.

Next, two transformations were applied to each data point in order to transform it from the robot's local reference frame to the global frame. First, a rotation is applied, following the standard form for a 2x2 rotation matrix in 2D Cartesian space: [cos(theta), -sin(theta); sin(theta), cos(theta)], where theta corresponds to the yaw angle of the robot at that data point. Then a translation of the form [dx; dy] is applied, where the translation at each point corresponds to its grid location - for example, in the case of point 1, the translation applied is [-5; -3]. Applying these transformations to the position matrices gives the points in the global frame. The code below shows how this was done for point 1.
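A sketch of this transformation for point 1 is shown below. The helper name and the example readings are illustrative; the sensor offsets (100mm and 30mm), the conversion to feet, and point 1's grid location and 5 degree angle correction (described in the next section) match the values quoted above.

import numpy as np

MM_TO_FT = 1 / 304.8

def to_global(front_mm, side_mm, yaw_deg, dx_ft, dy_ft, angle_offset_deg=0.0):
    """Transform one pair of ToF readings from the robot frame to the global frame."""
    theta = np.radians(yaw_deg + angle_offset_deg)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

    # Robot-frame points: +x out of the front sensor, +y out of the robot's left side.
    # The side sensor is mounted on the right, hence the negative y value.
    p_front = np.array([[(front_mm + 100) * MM_TO_FT], [0.0]])
    p_side  = np.array([[0.0], [-(side_mm + 30) * MM_TO_FT]])

    t = np.array([[dx_ft], [dy_ft]])            # translation to the point's grid location
    return (R @ p_front + t), (R @ p_side + t)

# Point 1 sits at (-5, -3) ft; its data got a 5 degree angle correction
gf, gs = to_global(front_mm=1200, side_mm=800, yaw_deg=40,
                   dx_ft=-5, dy_ft=-3, angle_offset_deg=5)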



Error Correction

A small offset was also added to this angle to straighten the data points; in the case of point 1, this offset was 5 degrees. This was necessary because many of the data points were at a slightly incorrect angle - possibly due to inaccuracies in the yaw readings or inconsistent turning angles, for example. The images below show the Cartesian plot for point 1 before (first image) and after (second image) applying this offset. For all subsequent plots, only the final "adjusted" plot is shown. The importance of doing this is shown in Section 5, below.

Cartesian Plots @ Point 1

Raw


Adjusted


Cartesian Plots @ Point 2



Cartesian Plots @ Point 3



Cartesian Plots @ Point 4



Cartesian Plots @ Point 5



5) Merging Into One Map

Finally, the data from each point could be plotted together. The first image below shows what this would look like without the offset added to the data, while the second shows the final version with the offset added. As you can see, once the angles are adjusted, the walls are much straighter and form better right angles, as they do in real life.

Raw


Adjusted


Adding Walls

From this map, I then manually plotted walls on top of the data points. In order to correct for errors, I applied a few basic intuitions: data taken from points closer to a given wall was trusted more than data taken from points further away, and points without many other points near them (such as the ones outside the edges of the map) were treated as outliers and ignored. The image below shows the result of this approach.

Final Map w/ Walls



Lab 10

In Lab 10, I implemented a grid localization scheme using a Bayes Filter. In this lab, we implemented the filter using a Python simulation of the robot. The filter uses its knowledge of the environment, along with state estimation, to compute the probability that the robot is at a given location, given a set of sensor readings and knowledge of its movements.

1) Setup & Algorithmic Overview

The Bayes Filter is based on a grid-like representation of the world around the robot. In order to make computations feasible, we discretize the space along three dimensions: x, y, and theta. The resolution of the grid is 0.3048m, 0.3048m, and 20 degrees, respectively.

In each iteration of the Bayes Filter, we update the probability that we are in any given cell in the grid, given the movement carried out by the robot and the data reported by its sensors.

2) Compute Control

The first step of building the Bayes Filter required us to create a function that would take two odometry poses and calculate the two rotations and one translation that correspond to this change in position. This is crucial in evaluating the probabilities of being in different locations later in the model. These values can be calculated based on a simple geometric model of the robot's movement, which can be seen below.



With this model in mind, the function was then implemented, as shown below.
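A minimal sketch of what such a compute_control function can look like is shown below, assuming each pose is an (x, y, yaw) tuple with yaw in degrees and using a small angle-wrapping helper; the exact signature in the provided lab skeleton may differ slightly.

import numpy as np

def normalize_angle(a):
    """Wrap an angle in degrees into the range [-180, 180)."""
    return (a + 180) % 360 - 180

def compute_control(cur_pose, prev_pose):
    """Decompose the motion between two odometry poses into
    (initial rotation, translation, final rotation)."""
    x0, y0, th0 = prev_pose
    x1, y1, th1 = cur_pose

    dx, dy  = x1 - x0, y1 - y0
    heading = np.degrees(np.arctan2(dy, dx))       # direction of travel in the global frame

    delta_rot_1 = normalize_angle(heading - th0)   # turn to face the direction of travel
    delta_trans = np.hypot(dx, dy)                 # straight-line distance travelled
    delta_rot_2 = normalize_angle(th1 - heading)   # turn to the final heading

    return delta_rot_1, delta_trans, delta_rot_2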

3) Odometry Motion Model

The next step involved creating a function that takes the current and previous pose of the robot, as well as a corresponding control input (in the form of two rotations and a translation). With this information, and assuming a Gaussian probability distribution, we can then calculate the probability that the robot was in fact at the given position.
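The gist of the model is sketched below, reusing compute_control and normalize_angle from the previous snippet and assuming independent Gaussian noise on each motion component; the gaussian() helper and the noise parameters are placeholders rather than the exact values used in my notebook.

import numpy as np

odom_rot_sigma   = 15.0   # degrees -- placeholder noise parameters
odom_trans_sigma = 0.4    # metres

def gaussian(x, mu, sigma):
    """1D Gaussian probability density."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def odom_motion_model(cur_pose, prev_pose, u):
    """Probability that the robot moved from prev_pose to cur_pose given control u."""
    # What control input WOULD have produced this particular transition?
    rot1, trans, rot2 = compute_control(cur_pose, prev_pose)

    # Compare it to the control input we actually measured, assuming independent
    # Gaussian noise on each of the three components.
    p1 = gaussian(normalize_angle(rot1 - u[0]), 0, odom_rot_sigma)
    p2 = gaussian(trans - u[1], 0, odom_trans_sigma)
    p3 = gaussian(normalize_angle(rot2 - u[2]), 0, odom_rot_sigma)
    return p1 * p2 * p3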



4) Prediction Step

Next, we had to create a function for the prediction step of the model. This step calculates the prior belief of the robot. To do this, we first compute the "actual" control input of the robot given its current and previous odometry poses. We then iterate through each cell in the grid and check whether there is a non-negligible (> 0.001) probability that the robot is there; this check saves computational time by skipping cells with very low probability. If the probability is non-negligible, we iterate through every cell in the grid again so that we now have a pair of grid positions. We then use the odom_motion_model function to return the probability that the robot travelled between those two locations given its "actual" control input - that is, the probability that the robot started at prev_pose and ended up at cur_pose, given its control input. Finally, the bel_bar (prior belief) matrix is updated to reflect this new probability.
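A standalone sketch of this step is shown below, reusing compute_control and odom_motion_model from the snippets above. It operates on a plain numpy belief grid and a cell_pose() helper that returns the pose at the centre of a grid cell; in the actual lab framework the grid and cell poses come from the provided helper classes, so my notebook code is structured slightly differently.

import numpy as np
from itertools import product

def prediction_step(cur_odom, prev_odom, bel, cell_pose):
    """Compute the prior belief bel_bar given the latest odometry motion.

    bel       -- belief grid of shape (nx, ny, na)
    cell_pose -- function mapping grid indices (i, j, k) to an (x, y, yaw) pose
    """
    u = compute_control(cur_odom, prev_odom)    # "actual" control input from odometry
    nx, ny, na = bel.shape
    bel_bar = np.zeros_like(bel)

    for i, j, k in product(range(nx), range(ny), range(na)):
        if bel[i, j, k] < 0.001:                # skip cells we almost certainly aren't in
            continue
        prev_pose = cell_pose(i, j, k)
        for a, b, c in product(range(nx), range(ny), range(na)):
            cur_pose = cell_pose(a, b, c)
            # Probability of moving from this previous cell to this current cell
            bel_bar[a, b, c] += odom_motion_model(cur_pose, prev_pose, u) * bel[i, j, k]

    return bel_bar / np.sum(bel_bar)            # keep the prior normalised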



5) Sensor Model

Just as with the motion model, we also needed to create a function for the sensor model, which is used in the update step of the Bayes Filter. This function returns the probability of a set of sensor measurements, given the robot's position in the map. To do this, we model each measurement with a Gaussian distribution centered around the "true" measurement expected at that pose (generated from the map of the space). These values are computed, placed into an array and returned.
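Sketched below, reusing the gaussian() helper from the motion-model snippet: obs_ranges are the 18 measured distances from the observation loop and expected_ranges are the pre-computed "true" distances for the pose being evaluated (sensor_sigma is a placeholder value).

import numpy as np

sensor_sigma = 0.1   # metres -- placeholder measurement noise

def sensor_model(obs_ranges, expected_ranges):
    """Per-measurement likelihoods of the observed ranges given the expected ones."""
    prob_array = np.zeros(len(obs_ranges))
    for i in range(len(obs_ranges)):
        prob_array[i] = gaussian(obs_ranges[i], expected_ranges[i], sensor_sigma)
    return prob_array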



6) Update Step

Finally, the update step of the Bayes Filter was implemented. This step updates the probabilities in the belief matrix based on the sensor measurements from the robot. The logic here is to iterate through each possible cell in the grid and calculate the probability that the robot is in that cell given the measurements it returned (this is done using the sensor_model function); this probability is then used to update the belief matrix. Lastly, the belief matrix is normalised to ensure that the probabilities of being in each cell sum to 1.
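A sketch of the update step in the same standalone style is shown below, where expected_views is an assumed array holding the pre-computed "true" ranges for every cell in the grid.

import numpy as np
from itertools import product

def update_step(bel_bar, obs_ranges, expected_views):
    """Fold the sensor measurements into the prior belief bel_bar.

    expected_views -- array of shape (nx, ny, na, 18) of pre-computed "true" ranges
                      for each cell in the grid
    """
    nx, ny, na = bel_bar.shape
    bel = np.zeros_like(bel_bar)

    for i, j, k in product(range(nx), range(ny), range(na)):
        # Likelihood of the full scan at this cell = product of per-ray likelihoods
        p = np.prod(sensor_model(obs_ranges, expected_views[i, j, k]))
        bel[i, j, k] = p * bel_bar[i, j, k]

    return bel / np.sum(bel)   # normalise so the probabilities sum to 1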



7) Testing The Filter

The video below shows a test of the implemented Bayes Filter running in the simulated environment. Overall, the filter performs extremely well. The blue line in the video shows the most likely state after each iteration of the filter, the green line indicates the actual position of the robot (the "ground truth"), and the red line shows the position that would have been predicted if we had relied on odometry alone. Clearly, the Bayes Filter performs much better than the odometry.


Lab 11

In Lab 11, I developed the "real" implementation of the Bayes Filter developed in Lab 10. The difference here was that the Filter was deployed on the actual robot using real data from its sensor. The robot was then placed in the arena that we tested in during Lab 9 and the update step of the filter was run as a test.

1) Simulation Test

Before starting, I first ran my code for the Bayes Filter in the simulation in order to verify that it was working correctly. The results of this test run are shown in the screenshot below.



2) Python Set-Up

While the filter used real data collected on the robot, the actual processing for the Bayes Filter was done remotely, within the Jupyter environment in Python. In essence: the robot would spin about its axis in 20 degree increments, taking measurements from the front time of flight sensor at each increment. Once it had completed a full 360 degree turn, it would send the data via Bluetooth to my computer, where the Bayes Filter would take it as an input to carry out the update step. To implement this on the Python end, two key functions had to be created.

First, the notification handler, map_handler(), was implemented. This was fairly simple code that saved the time of flight and yaw data into two arrays, converting the values into meters and radians, respectively.
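A sketch of the handler is shown below. The exact message format depends on how my Arduino code packs the string, so the parsing (splitting on a "|" separator) is illustrative rather than exact.

import numpy as np

tof_readings = []   # metres
yaw_readings = []   # radians

def map_handler(uuid, byte_array):
    """Notification handler: parse one 'distance|yaw' message sent by the robot."""
    msg = byte_array.decode()                            # raw BLE payload -> string
    dist_mm, yaw_deg = msg.split("|")
    tof_readings.append(float(dist_mm) / 1000.0)         # mm  -> m
    yaw_readings.append(np.radians(float(yaw_deg)))      # deg -> rad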



Secondly, the perform_observation_loop() function was implemented within the RealRobot class. This function triggers the spinning, data-recording behaviour on the robot. First, the PID controller's K values are set, along with a 20 degree setpoint. Next, the START_MAP command is sent, which tells the robot to spin about its axis in 20 degree increments, collecting data along the way. The asyncio sleep routine is used to wait for the notification handler to finish running - this allows us to wait only on this coroutine, rather than pausing the entire Python script. Finally, the data was mapped to numpy column arrays and returned, as per the function definition.
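A condensed sketch of this function is shown below, reusing the lists and handler from the previous snippet and assuming the command enum from cmd_types.py is imported as CMD. The SET_PID command name, its gain string, the characteristic name used for notifications, and the sleep duration are all illustrative; they stand in for my own command set and the course BLE wrapper rather than reproducing them exactly.

import asyncio
import numpy as np

class RealRobot:
    # ... other methods from the provided RealRobot skeleton omitted ...

    async def perform_observation_loop(self, rot_vel=120):
        """Spin in 20 degree increments and return the ToF ranges and bearings
        as Nx1 numpy column arrays (metres and radians)."""
        tof_readings.clear()
        yaw_readings.clear()

        # Subscribe to the robot's data messages (characteristic name is an assumption)
        self.ble.start_notify(self.ble.uuid['RX_STRING'], map_handler)

        # Set the orientation PID gains and a 20 degree setpoint, then start the mapping turn
        # (SET_PID and its argument string are illustrative of my command set)
        self.ble.send_command(CMD.SET_PID, "0.04|0.0001|0.5|20")
        self.ble.send_command(CMD.START_MAP, "")

        # Yield control while the robot turns and the handler fills the lists above
        await asyncio.sleep(30)

        sensor_ranges   = np.array(tof_readings)[np.newaxis].T
        sensor_bearings = np.array(yaw_readings)[np.newaxis].T
        return sensor_ranges, sensor_bearings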



3) Testing

The robot was then placed in the arena at each of the marked-out waypoints and tested. The images below show its performance in each test. Note that where only one dot is visible, the ground truth and the belief were in the same location (i.e., a good result). Overall, the localisation algorithm performed very well.

Test @ (-3, -2)



Test @ (0, 0)



Test @ (0, 3)



Test @ (5, -3)



Test @ (5, 0)




Lab 12

Lab 12 was a culmination of much of the work we completed this semester, combining PID orientation control, PID position control and localisation via a Bayes Filter. In this lab, we had to get the robot to traverse a series of waypoints as accurately and rapidly as possible. I opted to use a combination of Open Loop and Closed Loop control, as well as localisation at certain points.

1) Problem Statement & Solution Setup

The problem was to traverse 9 waypoints spaced throughout the arena, while also avoiding obstacles. We were provided with a map of the space and were free to do this however we wanted. The image below shows the map with the waypoints marked in green.



For my solution, I initially tried to implement localisation at every point. That is, at each point, my robot would do a 360 degree turn while collecting observation data, and then the update step of the Bayes Filter would run to estimate the location of the robot. I subsequently chose to reduce the number of locations at which I localised, for two reasons. The first was to speed up the operation of the robot: the 360 degree turn needed for localisation takes time and often doesn't produce particularly accurate results. The second was that I found I could get much better results for the first three points by manually hardcoding a path, so localisation was not needed at these points. The reasons for this decision are discussed in Section 2.

Ultimately, my solution involved hardcoding the path to the first two waypoints and then localising at every other waypoint number - e.g. at waypoints 4, 6 and 8. At points where I did not localise (3, 5, 7, 9), I would assume my ground-truth pose was correct (i.e., that I had successfully made it to the right waypoint and was trying to travel to the next one) in order to work out where my robot had to go. While I could probably have hardcoded all of the points to complete the path, I thought it would be more interesting to try to incorporate localisation.

2) Position Control

In order to make this work, I had to make some slight adjustments to my PID position controller. Rather than directly specifying a distance from the wall at which my robot should stop, which is how it worked before, I now needed to specify a distance to travel and have my robot work out where to stop based on this. Changing this was relatively straightforward and essentially involved storing the starting distance from the wall using the ToF sensor; I chose to take 10 initial readings and average them to increase the reliability of the result. Once this value was stored, the distance to travel could be subtracted from it to work out what the setpoint from the wall should be. For instance, if you start 1000mm away from the wall and want to travel 200mm, your setpoint would be 800mm from the wall. The code below shows how I got the initial ToF readings (incidentally, this is the same code I used to get averaged readings at each of the increments when performing my localisation turn).
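The on-robot version of this is a few lines of Arduino C++, but the arithmetic is easy to sketch out; the Python snippet below is illustrative only, with a made-up helper standing in for a single ToF read.

def compute_setpoint(read_tof_mm, distance_to_travel_mm, n_samples=10):
    """Average a handful of initial ToF readings, then work out how far
    from the wall the robot should stop."""
    start_distance = sum(read_tof_mm() for _ in range(n_samples)) / n_samples
    return start_distance - distance_to_travel_mm

# e.g. starting ~1000mm from the wall and wanting to travel 200mm
# gives a setpoint of ~800mm from the wall.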

While this worked well for most points, some waypoints proved to be very problematic. The main issue with using this approach to measure distance travelled was that the robots do not travel particularly straight. This means that when the ToF sensors are pointed at an angle to a wall far away, any small deviation from a straight line will result in a large change in distance perceived. This issue is demonstrated on the map below for the first waypoint. You can see that the ToF sensor is measuring a very distant point at an angle. This is why I chose to hardcode this point.



3) Orientation Control

My orientation controller remained largely the same as in previous labs - no significant restructuring was necessary. One issue I did have was the slight inaccuracy of my PID controller: over the course of a 360 degree turn for localisation, it often ended up ~15-30 degrees off course, so I added an offset at the end of the turn to correct the robot's orientation back to zero. This can be seen in my solution implementation in Section 4.

4) Solution Implementation

The code snippet below shows my implementation of this solution. One thing to note is the structure of my move() function. The third parameter of the function is an angle, "offset", that is subtracted from the first rotation performed by the robot. There are a few different possible values for this offset. If the robot has just performed a localisation loop, the offset is 20 degrees to account for the fact that the robot consistently overshoots by about this amount when performing the 360. If the robot has not localised, the offset is 0 degrees as no turn has taken place that needs correcting. There are also a few manual offsets that can be seen for points 6 and 7. These were added because I found that my robot was consistently overshooting the angles in these locations.
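To give a sense of the overall flow, the run boils down to something like the outline below. This is a simplified Python-style sketch rather than my actual code: move() and localise() stand in for the commands described above, and the turn/distance values are illustrative only.

# Simplified outline of the run: each leg is (turn in degrees, distance in mm,
# whether to localise after arriving).  Values here are illustrative only.
plan = [
    (-45, 850, False),   # hardcoded legs covering the first waypoints
    ( 45, 600, False),
    ( 90, 900, True),    # localise at waypoint 4
    # ... remaining waypoints follow the same pattern (localising at 6 and 8) ...
]

offset = 0
for turn, dist, localise_after in plan:
    move(turn, dist, offset)      # the offset is subtracted from this first rotation
    if localise_after:
        localise()                # 360 degree observation loop + Bayes Filter update
        offset = 20               # correct the consistent overshoot from the spin
    else:
        offset = 0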

5) Testing Solution & Thoughts on Improvements

After hours of testing, my robot was largely successful in reaching most of the waypoints (up to waypoint 7). The main issue limiting the success of my solution was the lack of accuracy in my orientation and position control. The orientation accuracy could likely be improved in three ways: firstly, I could tune my PID controller further to improve the accuracy of the turns, although I am doubtful I could make significant improvements with this method alone; secondly, I could reset my yaw value to zero after every 20 degree increment, which would help avoid the compounding effects of gyroscope drift when localising; finally, the magnetometer could be used to reset the robot to an absolute zero heading after each turning manoeuvre was complete. To improve the reliability of the position control, I would likely try to implement a Kalman Filter to increase the reliability of the distance readings.

The video below shows the results of one of my final runs. The largest failure point was at the end - since my orientation controller wasn't accurate enough, it would overshoot the angle it needed by ~10-20 degrees and travel into the empty space around waypoint 1 instead of hitting waypoint 8.