What is it?
GauGAN2 is an image-generation “AI” trained by NVIDIA data scientists exclusively to create landscapes. If you try to get it to produce images of people, the best it can do is a strange, gleaming blob. I suspect this is by design, given other high-profile AI-bias incidents.
Naturally, I was curious whether any residual biases could still be teased out of the network.
One interesting property of GauGAN2 is that you can give it text alone, a rough outline drawing, or a source photo. If you enter text alone and then change the text, it keeps the same geometry and re-renders it to match the new language, so it is easy to compare its output for different phrases. I have done that here with the following phrases:
- where people live
- where white americans live
- where african-american people live
- where indigenous people live
The first two look almost the same.
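A claim like “almost the same” can also be checked numerically. As a minimal sketch (the filenames are hypothetical; GauGAN2’s demo outputs would first be saved locally and loaded with something like `PIL.Image.open`), one could compute a normalized mean absolute per-pixel difference between two generated images:

```python
import numpy as np

def mean_abs_diff(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Mean absolute per-pixel difference between two same-shaped RGB arrays,
    normalized to [0, 1]: 0.0 means identical, 1.0 maximally different."""
    a = img_a.astype(np.float64) / 255.0
    b = img_b.astype(np.float64) / 255.0
    return float(np.mean(np.abs(a - b)))

# Toy stand-ins for saved GauGAN2 outputs; in practice these would be
# loaded images, e.g. np.asarray(PIL.Image.open("where_people_live.png")).
base = np.full((4, 4, 3), 128, dtype=np.uint8)
slightly_off = base + 10  # a faintly different image

print(mean_abs_diff(base, base))          # 0.0
print(mean_abs_diff(base, slightly_off))  # 10/255 ≈ 0.039
```

A low score between the “where people live” and “where white americans live” outputs, versus higher scores for the other phrases, would quantify the visual impression described above.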
About the Series
This is part of my “Algorithm” body of work:
- “Illegal in Illinois”
- “Morale is Mandatory (Algorithm Livery)”
- “Probing GauGAN2”
- “Feedback Loop”
- “Probing DALL-E Mini”
- “Probing ImageNet”
- “Snap Judgment”
- “Data Chains”
- “Print/Shred”
More details at “About Algorithms,” the companion site for this work.