Getty Images has banned the uploading and sale of any AI-generated images, in an effort to shield itself from the legal issues that might arise from what is effectively the Wild West of today's AI art generation.
"There are real copyright concerns regarding the output of these models and unaddressed rights issues regarding the images, the image metadata, and the individuals in the images," Getty Images CEO Craig Peters told The Verge.
With the advent of AI art tools like DALL-E, Stable Diffusion, and Midjourney, among others, there has been a sudden influx of AI-generated images on the web. For the most part, we've seen these images come and go as amusing curiosities on Twitter and other social media platforms, but as these AI models become more sophisticated and efficient at creating images, we'll see them used in many more places.
And that’s a business that Getty, one of the leading providers of curated image libraries, wants to stay away from entirely.
Getty's CEO declined to say whether the company had actually received legal challenges regarding AI-generated images, though he asserted that there was "extremely limited" AI-generated content in its library.
All AI image generation models require training, and huge image sets are needed to do this effectively. As The Verge reports, Stable Diffusion is trained on images scraped from the web via a dataset from the German nonprofit LAION. Stability AI, the company behind Stable Diffusion, states that the dataset was created in compliance with German law, although it acknowledges that the legality of copyright in images created with its tool "will vary from jurisdiction to jurisdiction."
As such, it will likely become increasingly difficult to tell if an artwork is derived from another copyrighted image.
There are other concerns with image datasets and scraping techniques, too: a California-based artist discovered pictures from their private medical records, taken by their doctor, in the LAION-5B image set. The artist, Lapine, found that their images had been used via Have I Been Trained?, a website designed specifically to tell artists whether their work has been included in these kinds of collections.
The images were confirmed by Ars Technica in an interview with Lapine, who has kept their identity confidential for privacy reasons. It is unclear exactly how these supposedly confidential medical records, held by the artist's doctor until the doctor's death in 2018, found their way into a very public dataset, but it is deeply disturbing that they did so without permission.
Lapine isn't the only one affected, apparently, as Ars also reported that while searching for images of Lapine it discovered other images that may have been obtained through similar means.
"🚩 My face is in the #LAION dataset. In 2013 a doctor photographed my face as part of clinical documentation. He died in 2018 and somehow that image ended up somewhere online and then ended up in the dataset – the image that I signed a consent form for my doctor – not a dataset." pic.twitter.com/TrvjdZtyjD – Lapine, September 16, 2022
When asked about the images, the CEO of Stability AI, the company behind Stable Diffusion, said he could not speak for LAION, but noted that it might be possible to retrain Stable Diffusion to remove certain images from its model, and that the end result is, in any case, not a direct copy of any image in a given training set.
Privacy and legal concerns around the production and distribution of AI-generated images will undoubtedly keep surfacing in the coming months and years. What is today a fun, and sometimes even useful, tool is likely to become a thorny subject for lawmakers, rights holders, and ordinary citizens alike.
It's hard to blame established photo libraries for steering clear of the technology for now.