It’s a mega update! We’ve hit a milestone: the majority of the first chapter instructions are complete, with images, and now with the pieces for 3D printing.
Today: wrote the Python program that parses the pieces sheet CSV into HTML for the page. Got caught up on some CSS for a border; the trick was to make the border 0px and white. Yesterday we made the template for the pieces HTML, which was then used in the Python parser. Luckily the HTML was a bit quicker this time, since it was based on the work already done with the flexbox CSS. Special thanks to Beck for helping with the instruction descriptions for the arm, chassis, drive system, and hopper.
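The parser itself isn’t shown in the log, so here’s a minimal sketch of the CSV-to-HTML idea. The column names (`name`, `quantity`, `material`), the class names, and the template string are all assumptions for illustration — the real pieces sheet and flexbox template may differ.

```python
import csv
import io

# Hypothetical pieces-sheet CSV (the real column names may differ).
PIECES_CSV = """name,quantity,material
Wheel hub,4,PLA
Hopper latch,2,PETG
"""

# Simple per-piece template, echoing the flexbox HTML template
# mentioned in the log (class names are assumptions).
ITEM_TEMPLATE = (
    '<div class="piece">'
    '<span class="piece-name">{name}</span>'
    '<span class="piece-qty">{quantity}x</span>'
    '<span class="piece-material">{material}</span>'
    '</div>'
)

def pieces_to_html(csv_text):
    """Parse the pieces CSV and render one div per piece."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return "\n".join(ITEM_TEMPLATE.format(**row) for row in reader)

html = pieces_to_html(PIECES_CSV)
print(html)
```

The generated fragment can then be dropped into the page template that was built the day before.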
Here’s what was accomplished today, following the list from the previous log:
build – drive system
– DONE next steps: process the steps and parts sheet
– DONE next steps: process the pieces sheet
– DONE combined wheels into drive system model
– DONE going to part4 on this page redirected to the brain kit https://www.robotmissions.org/learn/build-drivesys/part3/ — it was because the page had not been renamed to part4
– DONE create & post new pages online https://www.robotmissions.org/learn/build-drivesys/pieces/ https://www.robotmissions.org/learn/build-drivesys/part1/
– DONE process the steps images for thumbnail and web versions
– DONE fix sheets part3 description
Today we finished the brain kit landing page. It contains all the info in one place about the brain kit, what it does, and how to get started. Go check it out! This will be our template for the other kit landing pages as well.
The progress started two days ago (March 21) but only finished today (March 23) because of figuring out how to make the gallery work. The brain kit links to its page on the store, and now it is complete. Actually, upon writing this, I just realised that the 3D models are missing. They will likely go into the guide page.
There was a good amount of progress that culminated this week! The uOttawa group got TensorFlow object detection working with the IR camera attached to the RPi! We were able to see what the objects were being detected as. This was just using the standard MobileNet model, so there were some odd results, such as a surfboard, in there too.
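The detection pipeline itself ran on the RPi and isn’t reproduced in the log; a common next step after seeing stray labels like “surfboard” is to screen detections by confidence. This sketch assumes detections arrive as `(label, score)` pairs — the threshold value and the example scores are made up for illustration.

```python
# Hypothetical post-processing for object-detection results.
# The real setup ran a standard MobileNet model via TensorFlow on the RPi;
# here we only sketch filtering raw (label, score) detections,
# which is one way spurious labels like "surfboard" can be screened out.
CONFIDENCE_THRESHOLD = 0.5

# Example raw detections (made-up scores for illustration).
raw_detections = [
    ("bottle", 0.82),
    ("surfboard", 0.31),   # low-confidence false positive
    ("person", 0.91),
    ("kite", 0.12),
]

def filter_detections(detections, threshold=CONFIDENCE_THRESHOLD):
    """Keep only detections at or above the confidence threshold."""
    return [(label, score) for label, score in detections if score >= threshold]

kept = filter_detections(raw_detections)
print(kept)  # only the high-confidence detections remain
```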
Beck completed the Bowie Brain Kit! It is now completely soldered. Next step is programming the first blink! As well, we had the chance to meet someone new, Queenie, who was interested in learning more about Robot Missions.
Hooray! It’s great to see it when progress meets a milestone. Congrats to the uOttawa team!
Meeting Queenie, who’s also interested in environmental robotics
TensorFlow is running!
Now setting up the RPi
We ran into an issue with this Wi-Fi network, but switched to another and it worked
Tonight the uOttawa group tested the IR camera. They connected it to the Raspberry Pi and took pictures with it using raspistill. The test images showed some objects with the overhead lights on and off. The IR LEDs beside the camera do a good job of illuminating the objects when the overhead lights are off. This progress showed that IR images can be captured; the next step is to pass them to TensorFlow for object detection. Beck added the headers to the Teensy 3.6 and inserted it into the Bowie Brain.
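For reference, a capture like the ones in tonight’s test can be scripted rather than typed by hand. The sketch below only builds the raspistill argument list (raspistill is the Raspberry Pi camera CLI); the resolution, timeout, and filename are assumptions, not the values the group actually used.

```python
# Building a raspistill capture command like the ones used for the
# IR camera tests. The -o/-w/-h/-t flags are standard raspistill
# options; the specific values here are assumptions.
def raspistill_cmd(filename, width=1280, height=720, timeout_ms=2000):
    """Return the argv list for a raspistill still capture."""
    return [
        "raspistill",
        "-o", filename,          # output file
        "-w", str(width),        # image width in pixels
        "-h", str(height),       # image height in pixels
        "-t", str(timeout_ms),   # preview time before capture, in ms
    ]

cmd = raspistill_cmd("ir_test.jpg")
print(" ".join(cmd))
# On the Pi this would be executed with subprocess.run(cmd, check=True)
```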
Tonight the groups continued the progress. The pan-tilt servo connections were added to the Bowie brain board, the pan-tilt mechanism was assembled, and test code was written to make sure the pan-tilt works. Beck added the motor wires to the brain kit. It’s interesting to see how much less dusty / sandy the brain is when new.
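The actual pan-tilt test code runs on the Teensy, so it isn’t shown here; as a rough illustration of what any such test needs, this Python sketch shows the usual hobby-servo mapping from angle to pulse width. The 1000–2000 µs range over 0–180° is a common convention, not a measured value for these servos.

```python
# Sketch of the angle-to-pulse mapping a pan-tilt servo test relies on.
# The 1000-2000 us over 0-180 degree range is an assumption; real
# servos (and the Teensy-side code) may use different endpoints.
MIN_PULSE_US = 1000
MAX_PULSE_US = 2000

def angle_to_pulse(angle_deg):
    """Map a 0-180 degree servo angle to a pulse width in microseconds."""
    angle_deg = max(0.0, min(180.0, angle_deg))  # clamp to the servo range
    span = MAX_PULSE_US - MIN_PULSE_US
    return MIN_PULSE_US + int(round(span * angle_deg / 180.0))

# Check the centre and the extremes of a pan or tilt sweep
print(angle_to_pulse(0), angle_to_pulse(90), angle_to_pulse(180))
```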
Jessica checking the pan-tilt wires
Motor wires added
Comparing the dust / sand free Bowie brain to the one that’s been in the field