Creative Code with ML5
This week, we'll build on our experiments with Teachable Machine and ml5.js to create an original web implementation of a responsive interface. This is a creative project and doesn't need to have any "purpose" beyond an aesthetic response to webcam input.
Your experience can either use a custom model trained on your own webcam input, as in the Rock Paper Scissors demo, or make use of an existing web-optimized model. The piece should either work from user tracking (using the library's face- or hand-recognition capabilities) or distinguish between at least two types of input, each with a distinct response.
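As a starting point for the custom-model route, here is a minimal sketch of the classification approach, assuming ml5.js v1.x and p5.js are loaded via script tags in your HTML. The model URL and the class names (`ClassA`, `ClassB`) are placeholders: substitute the URL you get from publishing your own Teachable Machine image model and the labels you chose when training it.

```javascript
// Minimal sketch: classify the webcam feed with a Teachable Machine model
// and respond differently to two classes. Assumes ml5.js v1.x + p5.js.
let classifier;
let video;
let label = 'waiting...';

// Placeholder -- replace with your published Teachable Machine model URL
const modelURL = 'https://teachablemachine.withgoogle.com/models/YOUR_MODEL/';

function preload() {
  classifier = ml5.imageClassifier(modelURL + 'model.json');
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  // Classify the video feed continuously
  classifier.classifyStart(video, gotResult);
}

function gotResult(results) {
  // Results are sorted by confidence; keep the top label
  label = results[0].label;
}

function draw() {
  image(video, 0, 0);
  // Two distinct responses -- class names are placeholders from your training
  if (label === 'ClassA') {
    background(255, 0, 0, 80); // red wash over the video
  } else if (label === 'ClassB') {
    background(0, 0, 255, 80); // blue wash
  }
  fill(255);
  textSize(24);
  text(label, 10, height - 10);
}
```

The color washes are only stand-ins; the assignment is for your own aesthetic response, so replace the `draw()` logic with whatever visual behavior you design.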
In class, we’ll work through these examples and learn from their approaches:
I recommend the ml5.js Coding Train videos as an additional reference on the library's current capabilities.
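For the tracking route, a sketch along these lines shows the basic shape of hand tracking with ml5.js, again assuming ml5.js v1.x and p5.js are loaded in the page. It only draws dots at the detected keypoints; the keypoint positions are the raw material for your own responsive visuals.

```javascript
// Minimal sketch: track hands in the webcam feed with ml5's handPose model
// and mark each detected keypoint. Assumes ml5.js v1.x + p5.js.
let handPose;
let video;
let hands = [];

function preload() {
  handPose = ml5.handPose();
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  // Detect hands continuously; results arrive in the callback
  handPose.detectStart(video, gotHands);
}

function gotHands(results) {
  hands = results;
}

function draw() {
  image(video, 0, 0);
  // Each hand carries an array of keypoints with x/y positions
  for (let hand of hands) {
    for (let kp of hand.keypoints) {
      noStroke();
      fill(0, 255, 0);
      circle(kp.x, kp.y, 10);
    }
  }
}
```

The same pattern applies to the face-tracking models; swapping in `ml5.faceMesh()` with its own detect callback follows the same load-detect-respond structure.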
Submit either an OpenProcessing / p5.js Web Editor link or a GitHub Pages deployment of your experiment.