Drawing Robot

The last project consists of a robot capable of drawing different images based on the vectorized result obtained from them. Several learning outcomes can be derived from this mock-up, among them trigonometry and image processing.

This project has been entirely developed using MATLAB.

COMPETENCES

FUNCTIONING

This project is capable of drawing vectorized images on a whiteboard, as well as capturing images with a camera and then drawing either those images or images stored in the computer. The process that needs to be followed is explained in the following sections:

VALUES FOR THE LEFT AND RIGHT MARKERS AND COMPONENT FUNCTIONING TEST

First of all, in order to avoid burning out the servo motor in charge of moving the markers up and down, it is vital to measure the values at which each marker touches the whiteboard without pushing more than necessary. With those two values, a third one (the value where no marker is in use) can be obtained.
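As a reference, a minimal sketch of what such a probing test could look like, assuming the marker lift servo is driven through the MATLAB Support Package for Arduino Hardware; the pin name and the three candidate values are placeholders, not the real ones used in the project:

% Probe candidate servo positions for the marker lift mechanism.
% Assumes an Arduino-driven servo on pin 'D9'; pin and values are placeholders.
a = arduino();                 % connect to the board
s = servo(a, 'D9');            % marker lift servo

leftDown  = 0.35;              % candidate value: left marker touches the board
rightDown = 0.65;              % candidate value: right marker touches the board
neutral   = (leftDown + rightDown) / 2;   % candidate value: no marker touching

% Move slowly between the three positions and check the contact by eye.
for pos = [neutral, leftDown, neutral, rightDown, neutral]
    writePosition(s, pos);
    pause(1);                  % give the servo time to settle
end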

After that, it is worth testing the capabilities of the motors used in the mock-up. Different ways of driving them are possible, and each one is worth trying out.

Both obtaining the values and testing the motors can be done using the MATLAB live script Task1.mlx.

CALCULATING THE POSITION OF THE ROVER AND DRAWING SIMPLE LINES

After having tested the functioning of the robot, the first step is to measure the base, that is, the distance between the pulleys. Once this has been done and the data stored for later use, the robot's initial position must be calculated, as these calculations are fundamental for the robot to work properly.
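As an illustration of the geometry involved, a possible way of obtaining the initial position from the base and the two measured string lengths (variable names, measured values and the output file name are only examples):

% Hanging-plotter geometry: pulleys at (0,0) and (base,0), y measured downwards.
base = 0.50;                      % measured distance between the pulleys, m (example)
L1   = 0.40;                      % measured length of the left string, m (example)
L2   = 0.45;                      % measured length of the right string, m (example)

% Trilateration: intersection of the two circles centred on the pulleys.
x0 = (base^2 + L1^2 - L2^2) / (2*base);
y0 = sqrt(L1^2 - x0^2);

initialPosition = [x0, y0];
save('robotBase.mat', 'base', 'initialPosition');   % store for later scripts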

After that, the next logical step is to test the robot's drawing capability. This also helps to understand how the robot behaves when drawing different types of lines. It is important to make sure that the motors are not given excessively high speed values, as this may stall them and eventually burn them out.

The last step is to calculate the new position of the rover after the movements performed. As the motors count how many times the Hall effect was triggered (see the image below for how this effect works), that count value can be used, along with some additional calculations, to obtain the exact position in which the robot currently is.
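A simplified sketch of that update, assuming hypothetical values for the encoder resolution, spool radius and encoder readings:

% Convert encoder counts (Hall-effect pulses) into new string lengths and a new position.
base = 0.50;  L1 = 0.40;  L2 = 0.45;     % base and current string lengths, m (example values)
countsPerRev = 720;                      % assumed encoder counts per motor revolution
spoolRadius  = 0.005;                    % assumed spool radius, m
countsLeft   = 1200;                     % example reading from the left motor encoder
countsRight  = -800;                     % example reading from the right motor encoder

L1 = L1 + countsLeft  / countsPerRev * 2*pi*spoolRadius;   % updated left string length
L2 = L2 + countsRight / countsPerRev * 2*pi*spoolRadius;   % updated right string length

% The same trilateration as before gives the new position.
x = (base^2 + L1^2 - L2^2) / (2*base);
y = sqrt(L1^2 - x^2);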

Each of the calculations performed, as well as the SimplePlotterApp used to test the robot's line-drawing capacity, can be found in the live script Task2.mlx. This script is meant to be used only during testing, as afterwards a function containing the same calculations will be used.

CONVERT DESIRED POSITION INTO RADIANS

In order to obtain the desired workflow, it must be possible to convert a point given as an x and y position into radian values for the motors; that is, the exact opposite of what was performed in the previous step.

This conversion allows the robot to move to an exact position on the whiteboard. Previously, movements were interrupted manually by the user, so reaching an exact position was hard to achieve. With the calculations done in the script Task3.mlx, it is possible to move the robot to the desired position on the whiteboard (or to a series of points), since the rotation of each motor is specified in radians. This script is intended only as a test script, as from now on all the calculations are done in a function (which works the same way as Task3).
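A minimal sketch of what such a conversion could look like; the project's own xyToRadians function may differ, and the spool radius passed in is an assumed value:

function [thetaL, thetaR] = xyToRadiansSketch(x, y, base, L1_0, L2_0, spool)
% Convert a target (x, y) point into motor rotations, in radians.
% base        : distance between the pulleys, m
% L1_0, L2_0  : current string lengths, m
% spool       : spool radius, m; a larger spool means less rotation per meter of string
L1 = sqrt(x^2 + y^2);             % required left string length
L2 = sqrt((base - x)^2 + y^2);    % required right string length
thetaL = (L1 - L1_0) / spool;     % angular displacement of the left motor
thetaR = (L2 - L2_0) / spool;     % angular displacement of the right motor
end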

DEFINE WHITEBOARD LIMITS

Defining where the robot should and should not draw on the whiteboard is highly valuable, as otherwise the robot's three motors may be damaged. To prevent this, it is recommended to use Task4.mlx, where some calculations are performed to establish the limits within which the robot will draw.

Firstly, the specifications of the motors need to be defined. These specifications are provided in the datasheet, so all it takes to find them is searching the aforementioned document. After the required values are specified, the next step is to calculate some motor constants, which make it possible to determine the torque limit for the usable voltage.
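As an example of the kind of calculation involved (all datasheet figures below are placeholders, not the real values of the motors used):

% Placeholder datasheet specifications of the DC motors.
Vnominal    = 12;       % nominal voltage, V
stallTorque = 0.40;     % stall torque at nominal voltage, N*m
stallCurr   = 2.0;      % stall current at nominal voltage, A

% Derived motor constants.
kt = stallTorque / stallCurr;   % torque constant, N*m/A
R  = Vnominal / stallCurr;      % winding resistance, Ohm

% Torque limit for the voltage actually applied to the motor.
Vusable     = 9;                      % assumed usable voltage, V
torqueLimit = kt * Vusable / R;       % stall torque at the usable voltage, N*m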

Secondly, the whiteboard dimensions must be specified and a grid of all possible board positions created. Next, the robot constants used for calculating torque need to be defined, which makes it possible to compute the torque at every position.

Once this is done, the bad regions must be eliminated from the torque computation, so the drawable region limits can be chosen.
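One possible, simplified way of carrying out those steps is a static string-tension model evaluated on the grid; all dimensions and constants below are assumptions, not the values used in Task4:

% Assumed board dimensions and robot constants.
boardW = 0.60;  boardH = 0.45;        % whiteboard size, m
base   = 0.50;                        % distance between the pulleys, m
mass   = 0.20;  g = 9.81;             % gondola mass, kg, and gravity
spool  = 0.005;                       % spool radius, m
torqueLimit = 0.30;                   % from the motor calculations above, N*m

% Grid of candidate positions (y measured downwards from the pulley line).
[X, Y] = meshgrid(linspace(0, boardW, 100), linspace(0.01, boardH, 100));

% Static equilibrium of the gondola hanging from the two strings.
L1 = hypot(X, Y);          L2 = hypot(base - X, Y);
T1 = mass*g .* (base - X) .* L1 ./ (base .* Y);   % left string tension
T2 = mass*g .* X          .* L2 ./ (base .* Y);   % right string tension

torque = max(T1, T2) * spool;         % worst-case motor torque at each point

% Mask out the bad regions and visualize the remaining drawable area.
drawable = torque < torqueLimit & T1 > 0 & T2 > 0;
imagesc([0 boardW], [0 boardH], drawable); axis image;
title('Drawable region (1 = allowed)');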

Every calculation necessary to carry out this process is visible in Task4, and the resulting data is stored in a .mat file, so the script only needs to be run once.

DRAW PREPARED IMAGES

The most important script is the one that performs exactly what the robot was designed to do: drawing images. Those images need to be vectorized before they can be drawn. To learn how to vectorize images, go to IMAGE PROCESSING SCRIPT. Once the image has been vectorized, Task5 can be used to draw it on the whiteboard.

First of all, the data stored in a .mat file must be loaded. This data contains the segments that compose the drawing, as well as the x and y limits in pixels.

Next, a conversion transforms the pixel values into meters. The portion of the whiteboard the drawing should occupy can be specified by changing the value of the fraction parameter.
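A sketch of the idea behind that scaling; the board width and fraction are example values, and segments and xLimits are assumed to come from the loaded .mat file:

% Scale pixel coordinates so the drawing spans a fraction of the board width.
boardW   = 0.60;                       % whiteboard width, m (example value)
fraction = 0.5;                        % portion of the board the drawing should span

pxWidth  = xLimits(2) - xLimits(1);    % drawing width in pixels, from the loaded data
scale    = fraction * boardW / pxWidth;            % meters per pixel

segmentsMeters = cellfun(@(s) s * scale, segments, 'UniformOutput', false);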

After that, the size of each segment needs to be reduced, as there may be a considerable number of points within no more than a millimeter. To avoid long drawing times, as well as burning out the motors, it is preferable to thin out the points within an interval defined by the radius value (in meters). This value may be changed if the result is not satisfactory.
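One way such a reduction could be done, continuing from the scaled segments above; the radius value is the tunable parameter mentioned:

% Keep only points that are at least 'radius' meters away from the last kept point.
radius = 0.002;                        % minimum spacing between consecutive points, m
for k = 1:numel(segmentsMeters)
    pts  = segmentsMeters{k};          % N-by-2 list of [x y] points
    keep = pts(1, :);                  % always keep the first point of the segment
    for i = 2:size(pts, 1)
        if norm(pts(i, :) - keep(end, :)) >= radius
            keep(end+1, :) = pts(i, :); %#ok<AGROW>
        end
    end
    segmentsMeters{k} = keep;
end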

Once all these steps have been completed, the next one is starting the drawing. For that, the length of both strings must be measured. Then, all the distances obtained beforehand are converted to angular displacements in order to obtain high enough precision. If the scale turns out to be incorrect, try changing the spool value inside the function "xyToRadians": the higher the spool value, the smaller the drawing will be.

The next requirement is to initialize each component of the robot, loading the stored position with no marker down, and then to start the drawing.

The drawing follows the same pattern throughout the execution: in a loop from the first segment to the last, the robot picks the points of the next segment yet to be drawn, moves to its first position (with the marker up), lowers the marker to touch the whiteboard, and draws the whole segment until its last point. There, it raises the marker, and the process is repeated until the drawing is finished.
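In outline, that loop looks roughly like this; markerUp, markerDown and the simplified two-argument calls to xyToRadians and moveToRadians stand in for the project's own helper routines:

% Sketch of the main drawing loop; helper functions are placeholders.
for k = 1:numel(segmentsMeters)
    pts = segmentsMeters{k};

    % Travel to the first point of the segment with the marker raised.
    [thL, thR] = xyToRadians(pts(1,1), pts(1,2));
    moveToRadians(thL, thR);
    markerDown();                      % lower the marker onto the whiteboard

    % Draw the rest of the segment point by point.
    for i = 2:size(pts, 1)
        [thL, thR] = xyToRadians(pts(i,1), pts(i,2));
        moveToRadians(thL, thR);
    end

    markerUp();                        % raise the marker before the next segment
end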

It may happen that some lines are drawn while the robot is moving from the end point of one segment to the starting point of the next. To avoid this, try increasing the value assigned to nD in the function moveToRadians.

It is highly recommended to avoid drawings that have a considerable number of segments. The maximum used during the tests was close to 170, so it is preferable to avoid drawings surpassing that amount.

Different plots are produced throughout the script, so it can be checked visually whether each step is performed correctly.

IMAGE PROCESSING SCRIPT

The images used in the previous section had already been processed; what was being used was the vectorized result obtained from the script Task6. If an image that has not been vectorized yet is to be drawn, it needs to go through this script first in order to be drawable.

First, the image goes through several conversions in order to be ready to be drawn. It is converted to grayscale, binarized, and then transformed so that the image is composed of thin lines. Every conversion is displayed, making it possible to check how it is done and where a conversion changes the look of the image (due to all the conversions, sometimes the resulting image is not similar at all to the original one).
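A condensed version of that chain, using standard Image Processing Toolbox functions and assuming an RGB input image (the file name is an example, and Task6 may use different functions for the thinning step):

img  = imread('drawing.png');          % example input image
gray = rgb2gray(img);                  % 1. convert to grayscale
bw   = ~imbinarize(gray);              % 2. binarize (dark lines become true)
thin = bwskel(bw);                     % 3. reduce the strokes to one-pixel-wide lines

% Show every intermediate result so the effect of each conversion can be checked.
figure;
subplot(2,2,1); imshow(img);  title('Original');
subplot(2,2,2); imshow(gray); title('Grayscale');
subplot(2,2,3); imshow(bw);   title('Binarized');
subplot(2,2,4); imshow(thin); title('Thinned');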

After the image is composed of thin lines, those lines need to be extracted from it. Once all of the pixels are extracted, each segment of the pixel list is stored in a cell array that contains every segment of the drawing.
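In simplified form, the extraction could be done with connected components; the ordering of the pixels inside each segment is left out here, while the real script traces them along the line:

% Group the thinned pixels into connected components; each one becomes a segment.
cc       = bwconncomp(thin, 8);                 % 8-connected components
props    = regionprops(cc, 'PixelList');        % [x y] pixel coordinates of each component
segments = {props.PixelList};                   % cell array with one segment per component

% Overall pixel limits, later used for scaling in Task5.
xLimits = [min(cellfun(@(s) min(s(:,1)), segments)), max(cellfun(@(s) max(s(:,1)), segments))];
yLimits = [min(cellfun(@(s) min(s(:,2)), segments)), max(cellfun(@(s) max(s(:,2)), segments))];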

After that, segments that can be merged are connected in order to reduce the number of segments to be drawn, and the x and y limits are stored too, to be used when scaling in Task5. This information, along with the segments, is stored wherever the user wants, in a .mat file.

TAKING IMAGES WITH A CAMERA

In order to draw images from real life (for example, a drawing made on the whiteboard), a camera is needed to take a photo and store it in the computer so that Task6 can process it. That is exactly what Task7 does.

First of all, the command webcamlist is used to identify the camera to be used. After that, the camera is selected and initialized. A preview of what the camera sees is shown, the user chooses when to take the photo, and once it is taken it can be stored in the computer. This way, it can then be fed into Task6 so it can be drawn.
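The whole capture sequence boils down to a few calls from the MATLAB Support Package for USB Webcams; the camera index and file name below are examples:

cams = webcamlist;                     % list the cameras connected to the computer
cam  = webcam(1);                      % select the first camera (example index)

preview(cam);                          % live preview to frame the shot
pause;                                 % wait until the user presses a key to take the photo

img = snapshot(cam);                   % capture one frame
imwrite(img, 'capture.png');           % store it so Task6 can process it (example file name)

closePreview(cam);
clear cam;                             % release the camera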
