With the recent emergence of image-generating AI systems such as DALL-E, Stable Diffusion and Midjourney, the task of generating images with computers has been remodelled as a text-based process known as 'prompting': instructing a pre-trained AI model to hallucinate images in response to the specifics of a written input. Prompts dictate image style, composition and genre, to the extent that images generated using these systems often appear as weird pastiches, with giveaway aesthetic signals.
Workshop participants will experiment with finding the cracks in these systems, using creative prompting to explore whether they really are the dawn of a new horizon, offering the potential for breakthroughs in digital image-making beyond the offsetting of human labour.
What will be covered?
Session 1: Introduction to the interface of Stable Diffusion’s Automatic 1111 software. You will explore the impact of the software’s wide variety of features on images generated through text-based prompts.
Session 2: Building on the first session, you will learn how Stable Diffusion's img2img functionality can be used as a collaborative tool to reimagine image-based practice.
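The workshop itself runs entirely through the Automatic 1111 web interface, but for the curious, the sketch below shows roughly what the two sessions do under the hood, using Hugging Face's diffusers library. The model checkpoint, prompts and parameter values are illustrative assumptions, not the workshop's own materials.

```python
# A minimal sketch of text-to-image (Session 1) and image-to-image (Session 2)
# generation with Hugging Face's diffusers library. Automatic 1111 exposes the
# same ideas through its web UI; the checkpoint and settings here are assumed.
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
model_id = "runwayml/stable-diffusion-v1-5"  # assumed checkpoint

# Session 1 idea: text-to-image. The prompt steers style, composition and genre.
txt2img = StableDiffusionPipeline.from_pretrained(model_id).to(device)
image = txt2img(
    prompt="a seaside town as a weird pastiche, oil painting, wide shot",
    num_inference_steps=30,
    guidance_scale=7.5,  # how strongly the prompt constrains the output
).images[0]

# Session 2 idea: img2img. An existing image is partially noised and then
# re-denoised under a new prompt; `strength` controls how far it drifts
# from the starting image.
img2img = StableDiffusionImg2ImgPipeline.from_pretrained(model_id).to(device)
reimagined = img2img(
    prompt="the same town reimagined as a circuit board",
    image=image,
    strength=0.6,
    guidance_scale=7.5,
).images[0]

reimagined.save("reimagined.png")
```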
What will I achieve?
- Gain proficiency in using Stable Diffusion's Automatic 1111 software for generating images from text prompts.
- Develop a critical understanding of the potential of AI-driven image-making.
Do I need any particular skills?
The workshop is designed for beginners; no prior experience of AI image generation or Stable Diffusion is required.
Dates: Sat 11th & 18th May 2024
Time: 10am-12pm
Location: arebyte Gallery
Price: £25 for 2 sessions + $5 RunDiffusion credit
Prerequisites: Please bring your own laptop. Before the first session, create an account at rundiffusion.com and add $5 of credit, which provides 10 hours of RunDiffusion time. The credit will cover both sessions and leave you additional time for your personal projects.
James Irwin
James Irwin is an Artist, PhD researcher at Kingston School of Art, Lecturer at UAL and Digital Media Tutor at the Royal Academy Schools.
He works with web technologies, AI systems and digital sound and image to investigate the notion of a vital life force inherent within digital media.
His cognitive assemblages - made from a combination of networked digital hardware, software and human wetware - build on new materialist ideas around decentering the human, undoing our role as autonomous individuals and pointing to the ways in which the production of subjectivity is offset to forces outside of our bodies; the posthuman is biological, but also networked and dispersed through machines.