I’m using Dan Shiffman’s flocking library (here) with Processing to display images that have varying relevance to each other. This combines ideas from remix theory and intertextuality, in the hope of revealing insightful information about user-generated content, artworks, and their influences/originals. I will give each ‘piece’ an attraction value so that similar works either form flocks with each other or flock towards a specific point.
I started with a previous experiment in which I changed the flocking example to form two distinct flocks:
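One way the two-flock change can work is to tag each boid with a group id and have the steering rules only consider neighbours from the same group. This is a minimal sketch of that idea in plain Java (not the actual Processing sketch, and the names `Boid`, `group`, and `cohesionTarget` are my own, not from Shiffman's library); it shows just the cohesion rule, which steers a boid towards the average position of its same-group neighbours.

```java
import java.util.*;

// Sketch of group-restricted flocking: each boid carries a group id,
// and cohesion only averages over boids in the same group, so the
// population splits into two distinct flocks.
class Boid {
    double x, y;   // position
    int group;     // 0 or 1: which flock this boid belongs to

    Boid(double x, double y, int group) {
        this.x = x; this.y = y; this.group = group;
    }

    // Cohesion target: average position of same-group neighbours only.
    double[] cohesionTarget(List<Boid> all) {
        double sx = 0, sy = 0;
        int n = 0;
        for (Boid b : all) {
            if (b != this && b.group == this.group) {
                sx += b.x; sy += b.y; n++;
            }
        }
        // With no same-group neighbours, stay put.
        return n == 0 ? new double[]{x, y} : new double[]{sx / n, sy / n};
    }
}

public class TwoFlocks {
    public static void main(String[] args) {
        List<Boid> boids = List.of(
            new Boid(0, 0, 0), new Boid(2, 2, 0),     // group 0
            new Boid(10, 10, 1), new Boid(12, 12, 1)  // group 1
        );
        double[] t = boids.get(0).cohesionTarget(boids);
        // The group-0 boid steers toward (2, 2), ignoring group 1 entirely.
        System.out.println(t[0] + "," + t[1]);
    }
}
```

The same filter would apply to the separation and alignment rules, so the two groups never influence each other's steering.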
Then I added images to see how it would look and to start developing the code:
It currently works for only two images, but eventually each boid will display its own image, and boids will form their own flocks once appropriate attraction values are assigned. I’m using example images from lolbender, but the final work will draw on a much more varied set of sources, not just Avatar: The Last Airbender.
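The planned attraction-value mechanic could look something like this: each boid carries a reference to its own image plus a relevance score, and it only flocks with boids whose scores fall within some similarity threshold. This is a hypothetical sketch in plain Java; the names (`Piece`, `imagePath`, `relevance`, `THRESHOLD`) and the threshold rule are my own assumptions, not taken from the actual sketch.

```java
import java.util.*;
import java.util.stream.*;

// Hypothetical sketch: pieces flock only with pieces of similar relevance,
// so similar works cluster together on screen.
public class RelevanceFlock {
    // How close two relevance scores must be for the pieces to flock together.
    static final double THRESHOLD = 0.2;

    // Each piece carries its own image file and a relevance score.
    record Piece(String imagePath, double relevance) {}

    // Return the pieces this one should treat as flockmates.
    static List<Piece> flockmates(Piece self, List<Piece> all) {
        return all.stream()
                  .filter(p -> p != self
                            && Math.abs(p.relevance() - self.relevance()) <= THRESHOLD)
                  .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Piece aang = new Piece("aang.png", 0.9);
        List<Piece> pieces = List.of(
            aang,
            new Piece("katara.png", 0.85),    // similar score: same flock
            new Piece("unrelated.png", 0.1)); // dissimilar: ignored
        // Only katara.png qualifies as a flockmate of aang.png.
        System.out.println(flockmates(aang, pieces).size());
    }
}
```

In the Processing sketch, each boid would additionally load and draw its `imagePath` at its position each frame, and the flockmate list would feed the usual separation/alignment/cohesion rules.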