Webcam Controlled Rover

The Webcam Controlled Rover is a mock-up consisting of a robot capable of locating a target, picking it up and moving it through its field of movement. Different learning outcomes can be obtained from using this mock-up; path-following algorithms and image processing are just two examples of what this project covers.

This project is developed using a combination of MATLAB and Simulink.

FUNCTIONING

This mock-up is capable of detecting the position of both the rover and the target within a white area designated for the project. This white area is the arena in which the rover operates; outside of it, localization will not take place. Image processing is used to obtain the position of both the rover and the target, as well as the rover's heading.

CONTROL THE ROVER AND FORKLIFT FROM MATLAB

The first thing to do is to test both the rover itself and its forklift, in order to verify that both work correctly. These tests will make configuration and use easier later on.

To test the correct functioning of the rover, the first step is to ensure that a connection to all of the electronic components is possible. If one or more of them do not connect properly, try running different tests with a multimeter so the malfunctioning section can be detected.

Once all the components are operative, the next step is to test them in operation, that is, commanding them to see whether they work properly or not. The ones to check are both wheel motors and the servo. The first thing to do is to test the rover going straight forward. To do so, each motor needs to be assigned a speed value, and that value needs to be the opposite of the value assigned to the other motor. That means that if one motor has a value of 0.25, the other motor's value must be -0.25. Make sure that the rover goes forward or backward. If it turns instead, the cables of the motors are wired inversely to each other. Change the connections of one of them so there is no need to change the Simulink models later on.

After this test, it is possible to detect an error in the rover (unsolvable in hardware; the changes needed are made in software, in later Simulink models). The error is that the rover turns slightly to one side even when it should go straight, because one of its wheels does not spin freely. This error must be solved later, but whether you face it or not may require changes in some models. The models in this Git repository were built for a rover with a motor that did not spin freely, so changes may be needed if both motors spin freely, or if one of them spins less freely than in the tests performed here.

After testing the rover going forward, it is also advisable to test it turning around, both with one motor stopped and the other spinning, and with both motors spinning at the same speed value.

Once the rover is tested, it is also extremely important to check the maximum and minimum values that the servo motor can withstand, as exceeding either of them may burn the servo out. To check this, try different values and detect at which points the servo is still able to move and at which it is not. Store these values for later use.
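As a sketch of how those stored limits might be used later, here is a minimal Python example that clamps any command into the tested safe range. The limit values are hypothetical; replace them with the ones measured on your own servo:

```python
# Hypothetical safe limits found during the servo sweep test;
# replace them with the values measured on your own servo.
SERVO_MIN = 0.05
SERVO_MAX = 0.95

def clamp_servo(value, lo=SERVO_MIN, hi=SERVO_MAX):
    """Clamp a servo command so it never exceeds the tested safe range."""
    return max(lo, min(hi, value))

print(clamp_servo(1.2))   # 0.95
print(clamp_servo(-0.3))  # 0.05
print(clamp_servo(0.5))   # 0.5
```

Guarding every servo command this way means a bad waypoint or controller overshoot can never drive the servo past its mechanical limits.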

The next step is to make it possible to convert a desired linear speed (v) and rotation rate (w) of the rover into angular speed values for the right and left motors.

Using the basicKin_sim model, it is possible to test which values must be passed to each motor so the desired linear speed and rotation rate are reached. Once the desired values are specified in the blocks on the left, all that is left to do is to test the results. This model has a callback that loads into MATLAB some values it needs in order to work. If an error occurs, check that all the values in the paramsBasicKinematics.mat file are loaded into the MATLAB workspace.

The convToWheelVel block in this model (which is used in the models from now on) is a combination of Simulink blocks which, together, replicate the formula written inside the block.
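The exact formula is the one written inside the convToWheelVel block; the usual differential-drive version of it, sketched here in Python with assumed wheel radius r and wheel separation L (not taken from the repository), is:

```python
import math

def conv_to_wheel_vel(v, w, r=0.03, L=0.12):
    """Convert linear speed v (m/s) and rotation rate w (rad/s) into
    right/left wheel angular speeds (rad/s).
    r: wheel radius, L: distance between wheels -- assumed values."""
    w_right = (v + w * L / 2.0) / r
    w_left  = (v - w * L / 2.0) / r
    return w_right, w_left

# Pure forward motion: both wheels spin at the same rate.
print(conv_to_wheel_vel(0.15, 0.0))
# Pure rotation: the wheels spin at opposite rates.
print(conv_to_wheel_vel(0.0, math.pi))
```

Setting w = 0 reproduces the straight-line test from earlier, and setting v = 0 reproduces the turn-in-place test.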

CHOOSING BETWEEN OPEN-LOOP CONTROL AND CLOSED-LOOP CONTROL

There are two possible ways of handling the movement of the rover. The first is to specify the distance to move or the angle to rotate (without any kind of feedback). The second is to receive data that determines whether the rover is moving correctly or not. In this mock-up, this can be done in two different ways: determining the position by converting the values of each wheel's motor, or by using the camera to detect where the rover is and acting accordingly.

Between open-loop control and closed-loop control, the second option must undoubtedly be picked, as reaching the desired point using open-loop control is extremely difficult. With closed-loop control it is not, as the system itself is able to correct the trajectory in order to reach the desired point. Of the two closed-loop options, both were tested and both are explained on this Web page. Both are useful, but the one using only encoder data works only if the hardware functions perfectly. As that was not the case with the rover used for the tests, the model using data from the camera was chosen.

CALIBRATION FOR THE LOCALIZATION WITH WEBCAM

It is absolutely necessary to localize the rover within the arena in which it will work. To do so, the first step is to perform the calibrations needed to make constant localization possible. The required steps can be found in the MATLAB script roverCalibration.mlx.

The first thing to do, apart from clearing any variables or windows in MATLAB, is to measure the height and width of the arena and define those values in the workspace. After that, the camera is initialized and a snapshot is taken. It is necessary to ensure that the whole arena can be seen in the camera preview. If that is not the case, try changing the position of the camera so the arena is fully visible. In order to make conversions, the previously taken image will be used to define the four corners of the arena.
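The mapping from clicked pixel corners to arena coordinates is a projective (perspective) transform, which MATLAB can fit for you; as an illustration of what that fitting does, here is a self-contained Python sketch. The pixel coordinates and arena size below are invented for the example:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_homography(pixel_corners, world_corners):
    """Fit the projective transform sending 4 pixel points to 4 world points."""
    A, b = [], []
    for (x, y), (X, Y) in zip(pixel_corners, world_corners):
        A.append([x, y, 1, 0, 0, 0, -X * x, -X * y]); b.append(X)
        A.append([0, 0, 0, x, y, 1, -Y * x, -Y * y]); b.append(Y)
    return solve(A, b) + [1.0]   # h33 normalized to 1

def apply_homography(h, x, y):
    d = h[6] * x + h[7] * y + h[8]
    return ((h[0] * x + h[1] * y + h[2]) / d,
            (h[3] * x + h[4] * y + h[5]) / d)

# Example: clicked pixel corners of the arena -> a 1.2 m x 0.8 m world frame
# (these corner coordinates are made up for illustration).
px = [(102, 88), (540, 95), (560, 410), (85, 400)]
world = [(0, 0), (1.2, 0), (1.2, 0.8), (0, 0.8)]
H = fit_homography(px, world)
print(apply_homography(H, 102, 88))  # ~ (0.0, 0.0)
```

Once fitted, the same transform converts any pixel inside the arena into metric coordinates, which is what "translating the image to its orthogonal view" relies on.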

After the camera is initialized, the rover is placed at different points of the arena, taking snapshots at each of them, so the orthogonal view can be calculated. Those places are TOP, BOTTOM, LEFT and RIGHT, and it is necessary to follow that order (the script itself asks for those images directly, so there is no chance of making a mistake). Afterwards, it is necessary to click, on each image, the center of the rover (which is the center of the colored disk) and the point on the paper directly beneath it. If the camera is set directly above the arena, those points will coincide exactly; if the camera is at an angle, they will not.

The exact same steps are needed for the target. First, four snapshots are taken, so both the center of the target and its corresponding point on the paper can be defined.

Lastly, it is necessary to calibrate the color thresholds for the disk of the rover. The heading is calculated using the colors in the disk (so the sticker must have been applied correctly), so it is necessary to store the value of each of the colors. The original script did this in RGB, but that color detection model had a flaw: the luminance component is not separated, so the color value changes as the luminosity varies. This is problematic when the light reaching each part of the arena changes. If the color detection model is changed to HSV, that does not happen, as luminance is a value separate from the color itself. That is why the color model was changed. It is advisable anyway to switch on some lights, so the lighting varies as little as possible.
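The advantage of HSV can be seen with Python's standard colorsys module: when only the brightness of a color changes, its hue stays put, so a hue threshold survives uneven lighting where an RGB threshold would not:

```python
import colorsys

# Two samples of the "same" red sticker under bright and dim light.
bright = (0.8, 0.1, 0.1)
dim    = (0.4, 0.05, 0.05)   # same color at half the brightness

h1, s1, v1 = colorsys.rgb_to_hsv(*bright)
h2, s2, v2 = colorsys.rgb_to_hsv(*dim)

print("RGB values differ:", bright, dim)
print("Hue is stable:", h1, h2)          # identical hue
print("Brightness moved into V:", v1, v2)
```

All the brightness variation lands in the V channel, leaving H (and S) to identify the color itself.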

All the steps followed create different .mat files, which are stored on the computer so they can be used by future scripts.

TESTING THE LOCALIZATION OF THE ROVER AND TARGET USING THE CAMERA

Once all the values necessary for the localizations performed from now on are stored in .mat files, the next step is to test whether both the target and the rover can be localized within the limits of the arena. To test that, the script roverLocalization.mlx can be used.

In this script, the previously obtained data is loaded and the camera is initialized. After that, a snapshot is taken and the image is translated to its orthogonal view. Then, the location of the center of the colored disk, as well as the heading of the rover, is obtained without any corrections. Finally, the offsets are introduced so the location of the rover is accurate.

The same is repeated for the target: first its location is determined without corrections, with those introduced right after. Then, a representation is drawn over the image taken, so it can be checked whether the obtained data is accurate or not (especially the heading).

It is possible that either the target or the rover may not be detected. If the target is missed, try reducing the value passed to the strel function inside the targetPos function. If that happens with the rover, there are two possibilities. The first is to reduce the value of the strel function inside the roverPosAng function; this is not advisable, as it may make the location inaccurate. The other is to check which of the centroids was not detected (green, blue or red) and change the threshold of that malfunctioning centroid. This is also done inside roverPosAng.
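At its core, what roverPosAng and targetPos do can be reduced to thresholding a color channel and taking the centroid of the surviving pixels. A toy Python sketch of that idea (the image and threshold values below are made up, and the real scripts additionally use strel-based morphology to clean the mask):

```python
def color_centroid(image, threshold):
    """Return the centroid (row, col) of pixels at or above `threshold`.
    `image` is a list of lists of scalar values -- a toy stand-in for
    one color channel of the arena snapshot."""
    rows = cols = count = 0
    for r, line in enumerate(image):
        for c, val in enumerate(line):
            if val >= threshold:
                rows += r; cols += c; count += 1
    if count == 0:
        return None   # blob not detected: the threshold is too strict
    return rows / count, cols / count

img = [
    [0, 0, 0, 0],
    [0, 9, 9, 0],
    [0, 9, 9, 0],
    [0, 0, 0, 0],
]
print(color_centroid(img, 5))    # (1.5, 1.5)
print(color_centroid(img, 10))   # None -> lower the threshold
```

The second call shows the failure mode described above: an over-strict threshold leaves no pixels, so the centroid (and hence the rover or target) is simply not found.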

CLOSED-LOOP CONTROL WITH ENCODER DATA AND PREVIOUSLY OBTAINED VALUES

Having done the calibration and the localization, it is possible to build a closed-loop control system using the data obtained from the encoders in each wheel's motor. This data is converted into values usable within the Simulink model, which are angle and distance.
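The standard conversion from wheel angle increments to travelled distance and heading change, sketched in Python with assumed wheel radius and separation (not the repository's values), looks like this:

```python
def encoders_to_pose_delta(d_theta_l, d_theta_r, r=0.03, L=0.12):
    """Convert left/right wheel angle increments (rad) into distance
    travelled (m) and heading change (rad).
    r: wheel radius, L: wheel separation -- assumed geometry values."""
    distance = r * (d_theta_l + d_theta_r) / 2.0
    d_heading = r * (d_theta_r - d_theta_l) / L
    return distance, d_heading

# Both wheels advance equally: straight motion, no heading change.
print(encoders_to_pose_delta(2.0, 2.0))    # (0.06, 0.0)
# Wheels move oppositely: turn in place, no net distance.
print(encoders_to_pose_delta(-1.0, 1.0))   # (0.0, 0.5)
```

Accumulating these deltas over time is what lets the chart track the rover's pose from encoder data alone, and also why any encoder inaccuracy compounds into the drift described below.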

This data is then passed to a chart, in which it is processed in order to move the rover. The waypoints defining the initial point, the target location and the ending point are also sent to the chart using a constant block. This means that if those points change, it is necessary to measure them again.

This closed-loop control model was discarded, as it only works under very specific conditions: if the data from the encoders is not accurate, the rover will never reach the target. To avoid this, a camera is used, which periodically sends the location and heading of the rover via WiFi to the rover itself, so it can redefine the path to follow.

USING WIFI TO CONTROL THE ROVER

Using WiFi to connect the rover with a MATLAB script is also possible. With it, data that would otherwise be impossible to transmit is sent to the rover. As it is important to know the IP address of the rover and be able to communicate with it, it is advisable to use the Mobile Hotspot available on Windows.

In order to establish WiFi communication between the rover and the script, it is necessary to use blocks that allow this communication in the Simulink model. These blocks are the "Arduino WiFi TCP Receive" and "Arduino WiFi TCP Send" ones. In each of them, it can be specified whether that connection will act as a server or a client. If it is a server, it only needs to know which port to listen on in order to obtain data. If it is a client, it is also necessary to specify the IP address that holds the server.

The script also needs to contain code capable of starting the communication. Although it is currently deprecated, this project uses the tcpip function, specifying the port and the IP address. If this function becomes unusable, try using "tcpclient" or "tcpserver" instead.
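The client/server pattern that tcpip (or tcpclient/tcpserver) implements in MATLAB can be sketched with Python's standard socket module; the host, port and payload below are illustrative, not the project's:

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 50007   # assumed values; the hotspot IP differs

ready = threading.Event()

def echo_server():
    """Accept one client and echo back whatever it sends -- the role
    the server side (the computer) plays in this project."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                    # listening; client may now connect
        conn, _ = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024))

t = threading.Thread(target=echo_server, daemon=True)
t.start()
ready.wait()

# Client side: conceptually what the rover's "Arduino WiFi TCP" blocks do.
with socket.create_connection((HOST, PORT)) as cli:
    cli.sendall(b"cmX,cmY,heading")
    reply = cli.recv(1024)
t.join()
print(reply.decode())   # cmX,cmY,heading
```

Note the ordering: the server must be listening before the client connects, which mirrors the hotspot/rover power-up sequence described below.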

As the rover needs to send information to the script, so the script knows when to stop sending information to the rover (when all the waypoints have been reached), the computer needs to be a server. Because the connection established by the "tcpip" function behaves oddly, the steps to take each time the rover is to be used are the following:

  • Switch off both the rover and the WiFi hotspot.

  • Switch on the WiFi hotspot.

  • Switch on the rover AFTER the hotspot is already on.

Following these steps, the server created on the computer will be opened. Otherwise, the script will not function.

After opening the server, the client connections can be created, so they can receive data from the rover and act accordingly.

CLOSED-LOOP CONTROL WITH DATA OBTAINED FROM THE CAMERA DURING EXECUTION

As the model using encoder data could not be used due to the hardware malfunction, WiFi was integrated into that model, which underwent some changes in order to work as expected. To use this method, MATLAB and Simulink work together: the Simulink model is uploaded to the rover, and a MATLAB live script is used to retrieve data from the camera and pass it to the rover.

The Simulink model establishes four server connections in order to receive data from the MATLAB live script. Three of them are redirected to a chart that ensures the data reaches the StateLogic chart uninterruptedly; without this chart, the values would oscillate between 0 and the obtained value. The fourth server connection is in charge of reading the waypoints of the rover. Once all the values (6 in total) have been sent over that connection, the rover can start moving; without that data, the rover will never move. Once the waypoints are stored and the script is sending data on the position of the rover, the core chart starts to function.

Within the core chart, the process to make the rover move all the way to the target and back is the following:

The core chart starts in a pause state, where both the angular and linear speeds are 0 and the angle of the servo is set to a pre-defined value. active is a variable that keeps the connection between the rover and the MATLAB live script active only while it is 1. If the variable changes to 0, the sending of data stops.

When the waypoints are defined, the value st is set to one. When neither cmX nor cmY is 0 (that is, data about the position of the rover is being received) and st is 1, the chart re-enters the same state, where it waits one second before moving on to the next state. This is done in order to avoid malfunctioning.

Once that second passes, the chart transitions to the next state, turn. This state remains active as long as the difference between the actual angle and the angle to achieve is greater than 8 degrees. To adjust the angular speed, a PI controller is used within angCtrl.
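The logic of such an angle controller can be sketched in Python. The gains and sample time below are illustrative, not the values inside angCtrl, and the wrapping step is what keeps the rover turning the short way around:

```python
def wrap_deg(angle):
    """Wrap an angle difference into [-180, 180) degrees."""
    return (angle + 180.0) % 360.0 - 180.0

class PI:
    """Minimal PI controller; gains kp/ki and dt are illustrative."""
    def __init__(self, kp=0.02, ki=0.005, dt=0.1):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, error):
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

ctrl = PI()
heading, target = 170.0, -170.0
err = wrap_deg(target - heading)   # 20, not -340: turn the short way
print(err)
w = ctrl.step(err)                 # positive angular speed command
print(round(w, 3))
```

In the chart, this loop would repeat until the wrapped error drops below the 8-degree tolerance, at which point the turn state exits.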

When the desired angle is achieved, the chart checks whether this is the first turn (going to take the target) or the second (returning with the target). Accordingly, it selects one of two possible states, each containing different substates. The first time, the rover moves forward, using a PI controller, until it is 20 cm from the target. There, the servo starts going down with the rover stopped. Once the servo is in the desired position, the rover continues to advance until it reaches the target. The established distance is 6 cm, but this may vary between setups (due to the position of the camera, for example), so checking that this value is correct is advisable.

When the target has been reached, the rover stops and the servo starts to go up. When it reaches its upper position, the rover returns to the turn state, where it starts spinning to face the point where the target is being taken. Then, it passes to the second forward-moving possibility (as it is the second time). That consists of the rover moving to the position desired for the target, except that it stops 10 cm before reaching the point, so that when the servo releases the target, the target's position is the desired one.
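The whole sequence described above can be summarized as a linear state machine. The state names below are an illustrative reconstruction, not the actual names in the Stateflow chart; the distances and the 8-degree tolerance come from the text:

```python
# Simplified sequence of the core chart's states, as described above.
STATES = [
    ("pause",      "wait 1 s once waypoints and position data arrive"),
    ("turn",       "rotate until heading error <= 8 deg (PI on w)"),
    ("approach",   "drive forward until 20 cm from the target"),
    ("lower_fork", "stop and lower the servo"),
    ("pick_up",    "advance the last ~6 cm and raise the servo"),
    ("turn_back",  "rotate toward the drop-off point"),
    ("return",     "drive forward, stopping 10 cm short"),
    ("drop",       "release the target at the desired position"),
]

def next_state(current):
    """Advance linearly through the mission; stay at the end when done."""
    names = [s for s, _ in STATES]
    i = names.index(current)
    return names[min(i + 1, len(names) - 1)]

state = "pause"
while state != "drop":
    state = next_state(state)
print(state)   # drop
```

The real chart is richer (the turn state is re-entered, and each forward state has its own substates), but the linear outline above is the path a successful run follows.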

The v and w values are passed to the subsystem previously explained (see IMPLEMENTING DIFFERENTIAL DRIVE KINEMATICS IN SIMULINK) in order to convert them into suitable values for the motors. The FLAng value is sent to the servo motor, so the position of the servo is correct. Lastly, the active value is sent via WiFi to the script, indicating whether the rover is still moving, so the script continues or stops sending data to the rover.
