Today I went through Aiden's code for the limit switches on the linear rail and scissor lift. I rearranged the code to make it clearer and work more consistently.  I started by adding four new variables, one for each of the four uses of the limit switches: when the scissor lift is too far up or down, and when the linear rail is too far forward or back.  I then used these new variables in Aiden's old "if" statements and separated them so that when one limit switch is triggered it only blocks movement in the offending direction, instead of stopping the scissor lift or linear rail from moving in either direction.  After fixing this, I used an old 3D-printed part and foam tape on the parts of the scissor lift and linear rail that activate the limit switches, so that the switches are triggered earlier; before, the screw gear could extend far enough to damage its thread.  A limit switch is a contact-activated component that sends an input to the processor, which then tells the scissor lift motors to stop lowering the lift.
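
Below is a minimal sketch of what the separated limit checks might look like, assuming TouchSensor-style limit switches and hypothetical names for the switches and motors (liftUpSwitch, liftDownSwitch, railForwardSwitch, railBackSwitch, liftMotor, railMotor); the actual names and sign conventions in Aiden's code may differ.

// Read each of the four limit switches into its own variable (names are hypothetical).
boolean liftUpLimit      = liftUpSwitch.isPressed();
boolean liftDownLimit    = liftDownSwitch.isPressed();
boolean railForwardLimit = railForwardSwitch.isPressed();
boolean railBackLimit    = railBackSwitch.isPressed();

double liftPower = -gamepad2.left_stick_y;   // assumed: positive = lift up
double railPower = -gamepad2.right_stick_y;  // assumed: positive = rail forward

// Each switch only blocks movement in the direction that triggered it,
// so the mechanism can still back away from a pressed switch.
if (liftUpLimit && liftPower > 0)      liftPower = 0;
if (liftDownLimit && liftPower < 0)    liftPower = 0;
if (railForwardLimit && railPower > 0) railPower = 0;
if (railBackLimit && railPower < 0)    railPower = 0;

liftMotor.setPower(liftPower);
railMotor.setPower(railPower);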

[Photos: limit switch holder (IMG 1044, IMG 1045)]

Today on the programming front I added the two new scissor lift motors into the configuration and code of the robot.  Luckily, we only use an encoder value on the first motor, and both motors will always be at the same position, so most of the code didn't need to be touched.  Also, since one of the two motors functions exactly the same as the old motor, half of the configurations on the phones were already correct.  The only code changes I needed were to add a second DC motor to the hardware section and to copy the power input of the first motor over to the second motor in the autonomous and TeleOp modes.  For the configuration, I only had to add a second motor on the second Hub and change the type of the first motor.  When we first tested the new motors and code, the two motors ran in opposite directions, but we only had to reverse the power cables of one motor and afterwards it ran smoothly.  Surprisingly, the first major change I made to the code worked the first time we tried it.
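
A minimal sketch of those code changes, assuming hypothetical hardware map names liftMotor1 and liftMotor2; the real configuration names, run modes, and exactly where the power copy happens in our OpModes may differ.

// Hardware section: map both scissor lift motors (configuration names are hypothetical).
DcMotor liftMotor1 = hardwareMap.get(DcMotor.class, "liftMotor1");
DcMotor liftMotor2 = hardwareMap.get(DcMotor.class, "liftMotor2");

// Only the first motor's encoder is used; the second motor simply mirrors its power.
liftMotor1.setMode(DcMotor.RunMode.RUN_USING_ENCODER);
liftMotor2.setMode(DcMotor.RunMode.RUN_WITHOUT_ENCODER);

// In autonomous and TeleOp, wherever the first motor is given a power,
// the same value is copied to the second motor.
double liftPower = -gamepad2.left_stick_y;
liftMotor1.setPower(liftPower);
liftMotor2.setPower(liftPower);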

This meeting, I mostly worked on trimming down the included Vuforia demos so that they can be easily implemented in the Autonomous classes, as well as setting up the REV UltraUSB hub. Since I did not finish either of these things, I do not have any pictures or code to add, but I am half done, so next meeting will have double the content. What I did get done was removing all of the for-each loops and extra trackables in the Vuforia navigator class. These are unnecessary for me because I do not need them (vision is only being used to find the location of the skystones), so they were just adding bulk to the code. Once I finish cleaning up the class and setting up the hub, I should be able to test it with a webcam. Speaking of the hub, I wired everything up and it is more or less done: the hub's power and USB cables are plugged in. The only thing left is to attach it to the REV Expansion Hub, power it, and test it. As far as I can tell, everything will work, so not much of the next meeting will be taken up by that. Other than that, not much was done at this meeting regarding programming.

This meeting, I fulfilled my promise from last meeting and finished trimming down the demo vision class, got it working with the hub, and was able to successfully test out vision. Regarding the demo vision class, I took the file named ConceptVuforiaSkyStoneNavigationWebcam.java and got rid of every trackable except the stone target, as seen below:

VuforiaTrackable stoneTarget = targetsSkyStone.get(0);
stoneTarget.setName("Stone Target");

stoneTarget.setLocation(OpenGLMatrix
        .translation(0, 0, stoneZ)
        .multiplied(Orientation.getRotationMatrix(EXTRINSIC, XYZ, DEGREES, 90, 0, -90)));

Another part that I trimmed was the loop section, since we do not need to iterate over every trackable when there is only one (the Skystone):

// check all the trackable targets to see which one (if any) is visible.
targetVisible = false;
if (((VuforiaTrackableDefaultListener)stoneTarget.getListener()).isVisible()) {
    telemetry.addData("Visible Target", stoneTarget.getName());
    targetVisible = true;

    // getUpdatedRobotLocation() will return null if no new information is available since
    // the last time that call was made, or if the trackable is not currently visible.
    OpenGLMatrix robotLocationTransform = ((VuforiaTrackableDefaultListener)stoneTarget.getListener()).getUpdatedRobotLocation();
    if (robotLocationTransform != null) {
        lastLocation = robotLocationTransform;
    }
}

// Provide feedback as to where the robot is located (if we know).
if (targetVisible) {
    // express position (translation) of robot in inches.
    VectorF translation = lastLocation.getTranslation();
    telemetry.addData("Pos (in)", "{X, Y, Z} = %.1f, %.1f, %.1f",
            translation.get(0) / mmPerInch, translation.get(1) / mmPerInch, translation.get(2) / mmPerInch);

    // express the rotation of the robot in degrees.
    Orientation rotation = Orientation.getOrientation(lastLocation, EXTRINSIC, XYZ, DEGREES);
    telemetry.addData("Rot (deg)", "{Roll, Pitch, Heading} = %.0f, %.0f, %.0f", rotation.firstAngle, rotation.secondAngle, rotation.thirdAngle);
}
else {
    telemetry.addData("Visible Target", "none");
}
telemetry.update();

Once I finished doing this, I turned on the robot, hooked up the USB hub, and tested vision. To make things short, it worked nearly perfectly with the Logitech webcam we have (the cheap one on Amazon). I say nearly because it only sees skystones when they are between 5 and 20 inches away, but other than that, it works. In fact, I was able to get values for each skystone position: when the skystone is on the right, the y position value is greater than 1; when it is in the center, it is around 0; and when it is on the left, it is less than -1. The separation between these values is large enough that I can easily program it into autonomous, which I began doing tonight. Overall, I accomplished a lot this meeting, and anything mentioned but not explained here is likely in my last blog entry (Programming 2/12/2020). I believe I am ahead of the schedule I set for myself, but that will probably change soon.
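
Since the three positions map so cleanly onto the y value, a short check like the following (using the lastLocation and mmPerInch from the code above) is roughly what I began adding to autonomous; the exact thresholds and the string labels here are placeholders rather than the final autonomous code.

// Classify the skystone position from the y translation, converted to inches.
// Thresholds are placeholders based on the observed values (> 1 right, ~0 center, < -1 left).
double y = lastLocation.getTranslation().get(1) / mmPerInch;
String skystonePosition;
if (y > 1.0) {
    skystonePosition = "RIGHT";
} else if (y < -1.0) {
    skystonePosition = "LEFT";
} else {
    skystonePosition = "CENTER";
}
telemetry.addData("Skystone", skystonePosition);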

At these past two meetings, I have mostly been finishing my self-imposed task of implementing my new drive command in autonomous mode, as well as (briefly) testing it. In one of my previous blog entries, I included my new drive command code, which uses an inverted parabola (the graph of which is shown below) rather than two different functions. This is much simpler and lowers both the processing cost and the possibility of error. Not only that, but it stays at maximum power for longer, making it faster. In short, this new drive command should speed things up and simplify the code. Here is the graph for that curve:

[Graph: inverted-parabola drive power curve (Capture 2/6/2020)]
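
For illustration, here is a rough sketch of how a single inverted-parabola power function could be written; the constants (MAX_POWER, MIN_POWER, SCALE), the motor and target-tick names, and the encoder-based progress calculation are my own assumptions, not the exact code from the earlier entry.

// Hypothetical constants; the real values come from the drive command in the earlier entry.
final double MAX_POWER = 1.0;
final double MIN_POWER = 0.2;
final double SCALE = 1.5;  // pushing the parabola's peak above MAX_POWER keeps the drive at full power longer

// t is the fraction of the move completed, from 0.0 at the start to 1.0 at the target.
double t = Math.abs(leftDrive.getCurrentPosition() / (double) targetTicks);
double raw = SCALE * 4.0 * t * (1.0 - t);                      // inverted parabola, peak at t = 0.5
double power = Math.min(MAX_POWER, Math.max(MIN_POWER, raw));  // clamp between MIN_POWER and MAX_POWER

leftDrive.setPower(power);
rightDrive.setPower(power);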

The other main thing that happened was that I was able to briefly test autonomous. After removing some code for distance/color sensors that were not yet attached, I ran autonomous mode once and the robot moved forward. I am hopeful, but whether or not it fully works will be left up to more testing at later meetings.