With the rise of image-generating AI systems like DALL-E, Stable Diffusion, and Midjourney, creating images with computers has evolved into a text-driven approach known as 'prompting.' This process involves guiding a pre-trained AI model, using text-based inputs that shape an image's style, composition, and genre, to generate pastiches of the datasets it was trained on.
In this workshop series, participants will learn how to harness new levels of control over text-to-image (T2I) AI systems and build a small dataset of stylised images and accompanying alt-text descriptions for fine-tuning a Stable Diffusion LoRA (Low-Rank Adaptation) model. Throughout this process, and by using the fine-tuned model to generate images, participants will explore the renewed notions of agency and authorship that exist at the edges of these systems. The workshops will provide an opportunity to interrogate what happens when we insert a slice of local human decision-making back into the process of creating imagery with AI.
What will be covered?
Session 1: Participants will build a dataset of AI-generated images and accompanying text descriptions (written within the workshop) for fine-tuning a Stable Diffusion LoRA (Low-Rank Adaptation) model in the interim period between Session 1 and Session 2.
Session 2: Participants will put their fine-tuned Stable Diffusion LoRA model to use and generate images in a particular style.
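The dataset built in Session 1 pairs each image with a short text description. As a flavour of what that looks like in practice, here is a minimal sketch, assuming the common convention used by many LoRA training tools, in which each image file sits next to a same-named .txt caption file. The directory name and filenames are hypothetical examples, not part of the workshop materials.

```python
from pathlib import Path

def write_caption_pairs(dataset_dir, captions):
    """Write one .txt caption file per image, following the common
    image.png + image.txt pairing that many LoRA trainers expect."""
    dataset_dir = Path(dataset_dir)
    dataset_dir.mkdir(parents=True, exist_ok=True)
    for image_name, caption in captions.items():
        # Caption file shares the image's stem: sample_01.png -> sample_01.txt
        caption_path = dataset_dir / (Path(image_name).stem + ".txt")
        caption_path.write_text(caption.strip() + "\n", encoding="utf-8")
    return sorted(p.name for p in dataset_dir.glob("*.txt"))

# Hypothetical captions keyed by image filename.
files = write_caption_pairs(
    "lora_dataset",
    {
        "sample_01.png": "a riso-print style portrait, muted pastel palette",
        "sample_02.png": "a riso-print style cityscape at dusk",
    },
)
```

In the workshops themselves, the captioning is done with the tools provided on RunDiffusion; this sketch only illustrates the underlying image-to-caption pairing.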
What will I achieve?
- Gain proficiency in generating AI images in a specific style by fine-tuning a Stable Diffusion model.
- Develop a critical understanding of the potential of AI-driven image-making.
Do I need any particular skills?
The workshop is designed for beginners; no or limited experience of AI image generation and Stable Diffusion is expected.
Dates: Sat 21st & 28th September 2024
Time: 10am-12pm
Location: arebyte Gallery
Price: £25 for 2 sessions, plus $5 RunDiffusion credit
Prerequisites: Please bring your own laptop. Before the session, create an account at rundiffusion.com and add $5 of credit, which buys 10 hours of RunDiffusion use. The credit will cover both sessions and leave you additional time for your personal projects.
James Irwin
James Irwin is an artist, a PhD researcher at Kingston School of Art, a Lecturer at UAL, and a Digital Media Tutor at the Royal Academy Schools.
He works with web technologies, AI systems, and digital sound and image to investigate the notion of a vital life force inherent within digital media. By creating cognitive assemblages, made from a combination of networked digital hardware, software, and human wetware, his work builds on new materialist ideas around decentering the human: undoing our role as autonomous individuals and pointing to the ways in which the production of subjectivity is offset to forces outside of our bodies. The posthuman is biological, but also networked and dispersed through machines.