MachineGaze (2023)

Classifier as Drawing Tool

For: Quantitative Aesthetics
Project by: Kevin Tang, Kai Zhang, Skye Gao, Randong Yu


Introduction

This project is an exploration into the “directionality of gaze” in algorithms, specifically through a pre-trained classifier model.


We train the model on datasets of what is considered informal and formal apparel. The concept is to create an interactive feedback loop in which the user can manipulate what they are wearing to exaggerate the qualities the classifier deems “formal” or “informal.”

Dataset + Training the Model with PyTorch
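The training setup can be sketched as a standard PyTorch loop. This is a minimal stand-in, not the project's actual code: `ApparelClassifier` is a hypothetical tiny CNN substituting for the pre-trained model, and the formal/informal dataset is assumed to arrive as labeled image tensors.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class ApparelClassifier(nn.Module):
    """Hypothetical stand-in for the pre-trained classifier.
    Two output classes: formal (0) / informal (1)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, 2),
        )

    def forward(self, x):
        return self.net(x)

def train_one_epoch(model, loader, lr=1e-3):
    """One pass over the labeled formal/informal images."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
    return model
```

In the real project the backbone would be a pre-trained network with a replaced final layer; the loop itself is unchanged.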




0. Gradient Descent
By generating a noisy mask and overlaying it on the original image, we are essentially able to “remove” a specific part of the image and see how this removal influences the activation value of the classifier model. With this logic, we can find the location that most negatively influences the activation value, and thus the location that is most important to the classifier.
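The idea above is an occlusion-sensitivity map. A minimal sketch, assuming a single-channel image and an `activation` function that stands in for the classifier's score:

```python
import numpy as np

def occlusion_map(image, activation, patch=8, noise_std=1.0, seed=0):
    """Slide a noisy patch over the image and record the activation drop
    at each location. Larger drops mark regions the classifier relies on."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    base = activation(image)
    drops = np.zeros((h // patch, w // patch))
    for i in range(h // patch):
        for j in range(w // patch):
            occluded = image.copy()
            # "remove" this region by replacing it with noise
            occluded[i*patch:(i+1)*patch, j*patch:(j+1)*patch] = \
                rng.normal(0.0, noise_std, (patch, patch))
            drops[i, j] = base - activation(occluded)
    return drops  # argmax = patch most important to the classifier
```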



1. Initial Training
Using low activation values to find anti-categories
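One way to read this step: rank the dataset by classifier activation and treat the lowest scorers as the anti-category. A sketch under that assumption, again with `activation` as a stand-in for the model's score:

```python
import numpy as np

def anti_category(images, activation, k=3):
    """Score each image with the classifier and return the indices (and
    scores) of the k lowest-activation examples -- the images the model
    treats as least like the target category."""
    scores = np.array([activation(img) for img in images])
    order = np.argsort(scores)  # ascending: lowest activation first
    return order[:k], scores[order[:k]]
```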

2. Rudimentary Pipeline
1. Camera Capture
2. Grid Search
3. Gradient Descent
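The three pipeline stages above can be sketched end to end. The camera stage is a placeholder (the project used a live feed), and the grid search is a coarse occlusion pass that hands its best cell to the finer step:

```python
import numpy as np

def capture_frame(shape=(32, 32)):
    """Stand-in for camera capture; a real build would grab a webcam frame."""
    return np.zeros(shape)

def grid_search(image, activation, patch=8):
    """Coarse pass: zero out each grid cell and return the cell whose
    removal drops the activation the most."""
    h, w = image.shape
    base = activation(image)
    best, best_drop = (0, 0), -np.inf
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occ = image.copy()
            occ[i:i+patch, j:j+patch] = 0.0
            drop = base - activation(occ)
            if drop > best_drop:
                best, best_drop = (i, j), drop
    return best

def pipeline(activation):
    """Capture -> grid search; the winning cell would then seed the
    finer gradient-descent step."""
    frame = capture_frame()
    return grid_search(frame, activation)
```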


3. How the classifier sees us
1. Random Walk
2. Threshold drawing
3. Re-ordering sub-frames based on activation value
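The sub-frame re-ordering step can be sketched as: tile the image, score each tile with the classifier, and reassemble the tiles sorted by activation. `activation` is again a stand-in for the model's score:

```python
import numpy as np

def reorder_subframes(image, activation, patch=8):
    """Cut the image into patch x patch sub-frames, score each with the
    classifier, and reassemble them sorted low-to-high by activation."""
    h, w = image.shape
    tiles = [image[i:i+patch, j:j+patch]
             for i in range(0, h, patch)
             for j in range(0, w, patch)]
    tiles.sort(key=activation)
    per_row = w // patch
    rows = [np.hstack(tiles[r*per_row:(r+1)*per_row])
            for r in range(h // patch)]
    return np.vstack(rows)
```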



4. How we interact with the classifier
1. Real-time Gradient Descent Drawing
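The real-time drawing loop can be sketched as iterative ascent on the activation: each frame, estimate how a small change to the drawn canvas moves the classifier's score and nudge the canvas that way. This uses a one-pixel finite-difference estimate rather than autograd, purely for illustration:

```python
import numpy as np

def descent_draw(canvas, activation, steps=20, eps=0.5, lr=0.5, seed=0):
    """Treat the drawn canvas as the variable: each 'frame', pick a pixel,
    estimate the activation gradient there by finite differences, and
    update the canvas to push the activation higher."""
    rng = np.random.default_rng(seed)
    canvas = canvas.copy()
    h, w = canvas.shape
    for _ in range(steps):
        i, j = rng.integers(h), rng.integers(w)  # one pixel per frame
        bumped = canvas.copy()
        bumped[i, j] += eps
        grad = (activation(bumped) - activation(canvas)) / eps
        canvas[i, j] += lr * grad  # ascend the activation
    return canvas
```

In the installation the same loop runs against the live classifier, so the drawing visibly drifts toward whatever the model reads as “formal.”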