PoseNet estimates poses (the joint positions of a human figure) from a webcam or other image source. It runs in a web page and can be used with p5.js or other JavaScript programs.
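For orientation, here is a minimal p5.js sketch that uses the ml5.js PoseNet API (ml5 0.x). It assumes p5.js and ml5.js are already loaded in the page, and is a sketch of typical usage rather than one of the starter templates listed below.

```js
// Minimal p5.js + ml5.js PoseNet sketch (assumes p5.js and ml5.js are
// loaded in the page, e.g. via <script> tags in index.html).
let video;
let poses = [];

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();

  // Load the PoseNet model and listen for pose results.
  const poseNet = ml5.poseNet(video, () => console.log('PoseNet model loaded'));
  poseNet.on('pose', results => {
    poses = results;
  });
}

function draw() {
  image(video, 0, 0, width, height);

  // Draw a circle at each detected keypoint.
  for (const result of poses) {
    for (const keypoint of result.pose.keypoints) {
      if (keypoint.score > 0.2) {
        fill(255, 0, 0);
        noStroke();
        circle(keypoint.position.x, keypoint.position.y, 10);
      }
    }
  }
}
```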
References
- ml5.js PoseNet API – if you are using PoseNet within p5.js
- TensorFlow PoseNet – if you are using PoseNet from JavaScript, within the browser. The ml5.js PoseNet API uses this library, and this page contains additional documentation beyond the ml5.js documentation – for example, the list of body parts.
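For reference, a pose result from the ml5.js API has roughly the shape sketched below. The comment lists PoseNet's 17 body parts; the exact field names (for example `pose.nose`) should be checked against the documentation above.

```js
// Rough shape of one PoseNet result from ml5.js (a sketch, not a verbatim dump).
// Each keypoint's `part` is one of PoseNet's 17 body parts:
// nose, leftEye, rightEye, leftEar, rightEar, leftShoulder, rightShoulder,
// leftElbow, rightElbow, leftWrist, rightWrist, leftHip, rightHip,
// leftKnee, rightKnee, leftAnkle, rightAnkle.
poseNet.on('pose', results => {
  const first = results[0];
  if (!first) return;
  const { pose, skeleton } = first;       // skeleton: pairs of connected keypoints
  console.log(pose.score);                // overall confidence, 0..1
  console.log(pose.keypoints[0]);         // e.g. { part: 'nose', score: 0.99, position: { x, y } }
  console.log(pose.nose.x, pose.nose.y);  // ml5 also exposes each body part by name
});
```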
Starter Templates
Use these to get started:
Selecting the Camera
If you have more than one camera, the system may select the wrong one. (This can happen if you have installed a virtual camera, such as Snap Camera or OBS Link.) In Chrome, follow these steps to fix this (or select the camera from code, as in the sketch after these steps):
- Open Chrome's Settings (Chrome > Settings on macOS, or enter chrome://settings in the address bar)
- In the “Privacy and security” section of the Settings page, click “Site Settings”
- In the “Permissions” section of the page, click Camera
- At the top of the page, there is a popup menu that lists the available cameras. Select the correct camera. (For example, on a Macintosh this is “FaceTime HD Camera (Built-in)”.)
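If you would rather select the camera from code than from Chrome's settings, you can enumerate the video inputs and pass a deviceId constraint to p5's createCapture(). This is a minimal sketch; the label test used to find the built-in camera is only an example and may need adjusting for your machine.

```js
// Alternative to the Chrome settings above: pick the camera in code.
// List the available video inputs, then pass the chosen deviceId to
// createCapture() as a constraints object. (Device labels may be empty
// until the page has been granted camera permission.)
let video;

function setup() {
  createCanvas(640, 480);
  navigator.mediaDevices.enumerateDevices().then(devices => {
    const cameras = devices.filter(d => d.kind === 'videoinput');
    cameras.forEach(c => console.log(c.label, c.deviceId));
    if (cameras.length === 0) return;

    // Prefer the built-in camera; fall back to the first one found.
    const builtIn = cameras.find(c => /facetime|built-in/i.test(c.label)) || cameras[0];
    video = createCapture({
      audio: false,
      video: { deviceId: { exact: builtIn.deviceId } },
    });
    video.size(width, height);
    video.hide();
  });
}

function draw() {
  if (video) image(video, 0, 0, width, height);
}
```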
Oliver's Tools
Here are some things I've created for use with PoseNet:
- p5pose-recorder (online version) records PoseNet data into a JSON file (or set of files). Before saving the file, the user can use a built-in timeline editor to trim the beginning and end, which tend to include poses from when the user backed away from the webcam after starting the program, and from when they approached the webcam again after creating the pose. (A sketch of the basic recording idea appears after this list.)
- p5pose-playback (online demo) adds a menu to (my version of) the ml5.js PoseNet starter. Use the menu to switch between the webcam and PoseNet JSON datasets that were recorded with p5pose-recorder (above).
- p5pose-optitrack presents data from an OptiTrack motion capture setup as though it were PoseNet data. Students who have written a sketch to work with PoseNet data can run it on OptiTrack data by changing a line of code.
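The basic recording idea behind p5pose-recorder looks roughly like the sketch below: collect timestamped pose frames while the sketch runs, then save them with p5's saveJSON(). This is an illustration only, not p5pose-recorder's actual code or file format.

```js
// Illustration of recording PoseNet frames to JSON in p5.js.
// Not p5pose-recorder's actual code or file format.
let video;
let frames = [];

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  const poseNet = ml5.poseNet(video);
  poseNet.on('pose', results => {
    frames.push({ t: millis(), poses: results });
  });
}

function draw() {
  image(video, 0, 0, width, height);
}

function keyPressed() {
  // Press 's' to save the recording (trimming would happen before this step).
  if (key === 's') saveJSON({ frames }, 'poses.json');
}
```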