Exercise: Creative Code with ML5
In this exercise, we’ll use Teachable Machine and ml5.js to build an original, responsive web interface. This should be a creative project and doesn’t need to have any “purpose” beyond an aesthetic response to webcam input.
Your experience can either use a custom model trained on your own webcam input, as in the Rock Paper Scissors demo, or make use of an existing web-optimized model. The piece should either work from user tracking (if using the library’s face- and hand-recognition capabilities) or distinguish between at least two types of input, responding differently to each.
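As a starting point, here is a minimal sketch of the custom-model approach, assuming the usual p5.js setup (ml5.js and p5.js loaded via script tags in `index.html`). The model URL is a placeholder for your own exported Teachable Machine model, and the `topLabel` helper is a hypothetical name, not part of ml5:

```javascript
// Placeholder: replace with the URL of your own exported Teachable Machine model.
const modelURL = "https://teachablemachine.withgoogle.com/models/YOUR_MODEL/";

let classifier;
let video;
let label = "waiting...";

function preload() {
  // ml5.imageClassifier can load a Teachable Machine model from its URL
  classifier = ml5.imageClassifier(modelURL + "model.json");
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  classifier.classify(video, gotResult); // start the classify loop
}

function draw() {
  image(video, 0, 0);
  // Your aesthetic response goes here; as a stand-in, just show the label.
  textSize(32);
  fill(255);
  text(label, 10, height - 20);
}

function gotResult(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  label = topLabel(results);
  classifier.classify(video, gotResult); // classify the next frame
}

// Pure helper (hypothetical): pick the highest-confidence label
// from an ml5 results array of {label, confidence} objects.
function topLabel(results) {
  return results.reduce((best, r) => (r.confidence > best.confidence ? r : best)).label;
}
```

The sketch keeps re-classifying the webcam feed in a loop; your creative work happens in `draw()`, where you replace the text overlay with whatever visual response each class should trigger.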
We’ll work through these examples and learn from their approaches:
Here are a few of my examples:
I recommend the ml5.js Coding Train videos as an additional reference on the current capabilities of the library.