This is a great website, but not in the way the authors intended. Based on some of the examples they explicitly provided, it is clear to me that Stable Diffusion creates novel art. Here's a random example https://www.stableattribution.com/?image=a2666aee-0a1a-411b-...
I will admit this is a nice tool for verifying the creations of SD aren't pure copies, so I think it will be useful for a time. But as AI-generated images start to taint future datasets, attribution is going to be significantly more complicated.
The discussion over novelty is useless. All these legal structures (copyrights, patents, royalties, licenses) are about creating business arrangements that allow artists or inventors to be compensated.
They were already imperfect at that, and now a new technology suddenly drags artists' work into the wilderness, with no structures for compensation.
The images created by the AI are indeed novel, but they feed on the work of people who spent decades building a style. Of course artists themselves feed off each other, but they usually don't interfere with each other's business. Say an artist developed a particular style and someone wants to hire them for a business project like a game: the client can't feasibly just learn that style and use it, so they hire the artist. Later, other people catch on to the style and develop it further. This only works because monetising by copying the style is not very feasible.
Suddenly you have a machine that makes it feasible. Instead of hiring the artist or licensing their works, you train your machine on them and start generating any number of images in that style, or in combination with other styles, without paying the people who came up with it all.
How is the artist supposed to be compensated for spending years developing that style/method?
I'm fine with getting rid of all that copyright and license stuff but let's not pretend that what's happening now is a fair endeavour.
The core of your concern (& argument) seems to be the problem of existing business models becoming disrupted by this technology.
But the point of good law isn’t to protect an established business model. If that were the case, we would have outlawed the loom because it displaced weavers, and the camera (sorry portrait painters) and the iPhone. (How many telegraph operators are left? None!)
When an artist learns to draw, they copy almost all their ideas from other art they’ve seen that they like. That’s how humans learn. I’m learning to compose music at the moment and everything that sounds good to me is probably a remix of musical ideas I’ve heard before in other songs. If I grew up in a different music tradition (say, ancient India) then all of my ideas about musicality would be different.
It seems to me that the only difference between me and stable diffusion is that stable diffusion copies less than I do, but from more sources.
Also, the idea that artists can’t learn each other’s styles is totally wrong. I heard an interview years ago with an art director at Blizzard. The interviewer pointed out that Blizzard’s games all have different art styles, and asked how they swap between styles. “Do you have different art teams?” The art director laughed and said that was the difference between amateur artists and professionals. Professional artists can be given a style and a brief, and they can draw in that style. When they move between games, he said, the whole art team spends about 2 weeks practicing concept art for the game they’re moving to and critiquing each other’s work to align their styles. Then it’s smooth sailing. That’s how they hire artists too - they ask them to draw some concept art in the style of one of their games.
Sounds far-fetched? That’s what we do when we hire and onboard programmers. Art for hire isn’t any different.
> But the point of good law isn’t to protect an established business model.
Really? The Constitution specifically includes a bit about: "to promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries". That was quickly followed by the first copyright act.
As I understand it, the copyright portion of this was exactly to protect the established business model for writers, which was publishing and selling copies of their work. In particular, protect it against other people publishing editions of their work.
Exactly. This usage of works for _training_ is not part of that exclusive right, as far as I can tell. Otherwise, it would be a copyright violation for a human to read and learn from an existing work.
This technology was clearly not one anticipated when the Constitution or the first Copyright Act was written. Or any of the later ones.
This will not fit in existing laws, so we have to go back to the purpose, "promote the progress of science and useful arts" via "securing for limited times to authors and inventors" particular rights.
What specific rights we'll need to secure here will take a while to work out. But new technologies have forced updates to copyright law many times, and I'm sure this won't be the last time.
If people use the AI to replicate an existing work, trying to find a loophole in the copyright act, then they will just get sued.
> release the AI model or a product that uses it
I argue that the model itself cannot be construed as a copyright violation. After all, the model is information. What if I released a table of all the word frequencies from books, and published that table? The table of word frequencies does not violate the copyright of the books from which it was derived.
Just because you _could_ re-derive the original books from this dataset doesn't mean the dataset violates copyright. It _could_, if the dataset can do nothing else (e.g., if I just zipped up the text of the books and released that). But the AI model does not _only_ output the originals; it can also generate new works.
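To make the word-frequency analogy concrete, here is a minimal Python sketch (the sample "book" text is made up for illustration): the published table contains counts, not the text, and the original word order is not recoverable from it.

```python
# Minimal sketch of the word-frequency analogy: derived statistics
# are not the text they were computed from. The sample "book" text
# is invented, purely for illustration.
from collections import Counter

book = "the boy who lived the boy wizard"  # stand-in for a copyrighted text

# The "published table": word -> frequency
freq_table = Counter(book.split())

print(freq_table["the"])  # 2
print(freq_table["boy"])  # 2

# The table alone cannot reproduce the original word order,
# so the source text is not recoverable from it.
```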
But they're not. Works derived stylistically are not copies.
Were they publishing copies, your Harry Potter analogy would hold, but that is categorically not what is happening.
Diffusion is not compression, or copying. It's stylistic synthesis, which is more like someone reading all the Harry Potter books, then doing a podcast about Harry Potter fan fiction.
The question is how you did the training. If in the process you 'copied' the image (e.g. from the network into memory), that copy itself implicated copyright.
Reading by a human is not copying, but reading by machine is: there are several cases where that has been enforced. This is covered by reproduction rights.
The stated goal is "to promote the progress of science and useful arts". Protecting an established business model via copyright was merely a means to that end, not the end in itself.
In this case, diffusion models are actually a perfect example of "the progress of science and useful arts". The law should be structured in such a way as to promote such progress, not hinder it.
"business model" feels like a very narrow lens to view this through. This isn't weavers or telegraph operators, this is coming for all human art forms.
We are potentially creating an all-seeing instant plagiarism machine that saps away not just monetary compensation but even credit and recognition.
Spend 2 years writing your novel? You'll barely sell a single copy, but people will happily pay 2 cents to hear GPTx's paraphrase of it.
Spend 5 months in the jungle with wild animals to snap that perfect picture?
The magazine covers will be Dalle 5 rendering "nature photograph of orangutan cradling its new born baby in the style of X", your royalties won't cover buying a single lens.
> This isn't weavers or telegraph operators, this is coming for all human art forms.
So somehow the arts are above the telegraph operators or weavers?
> but people will happily pay 2 cents to hear GPTx's paraphrase of it.
Except if you can tell that the novel GPTx "paraphrased" is a derivative work of yours, rather than transformative, you can either sue for royalties or take other action. If you cannot tell whether GPTx is sourcing from your works, even if it is, then there's no case to stand on. The AI wrote a better novel than the human.
> Spend 5 months in the jungle
and there's a lot of people doing "organic", "handcrafted" produce/foods. They tend to be very expensive compared to mass produced factory foods, and the market for it is relatively small. Just because something is difficult to do, doesn't automatically mean that people need to be paying for it to allow it to continue to exist. If it isn't profitable, stop doing it, rather than demand the world be changed to allow for it to continue.
>except if you can tell that the novel GPTx "paraphrased" is a derivative work from yours, rather than transformative, you can either sue for royalties, or take action.
Respectfully, no.
The top comment in this thread is saurik asking Stable Diffusion to generate "an avatar of saurik" and getting a veritable likeness of themselves back.
It would be laughable to think that this is feasible without the model having labeled photos of saurik, which were all posted by (go figure) saurik.
The AI also generated a "better" photo than the one saurik posted. The AI saurik has more hair.
That doesn't change the fact that without the source material, this output would not exist. Nor that it would be really hard for saurik to "take action" on that.
----
In the end, the experiment to do would be very simple. Exclude certain data from the training set, retrain the model, give it the same prompt, and see whether the output changes significantly.
If it does, then that data was indispensable for generating the output for that prompt.
Something tells me we won't see such experiments done by OpenAI any time soon.
A paraphrased novel is clearly a copyright violation under existing laws. If you use GPT to produce one, then you will be sued.
This does not imply that the underlying model is a copyright violation. There is no case law on the copyright status of models but it is generally believed that training a model is fair use.
In the human analogy: it is legal to read Harry Potter and to learn Harry Potter from memory such that a copy of it is stored within your head. But if you reproduce and publish or perform substantial parts of Harry Potter from memory you are violating copyright (subject to the various exceptions to copyright: quotation, parody, etc).
"If it does, then that data was indispensable for generating the output for that prompt."
It gets murkier than this, by far. The diffusion models are not just trained on images, but on text. It varies by implementation but Stable Diffusion for example used a pre-trained CLIP transformer network from OpenAI (and subsequently OpenCLIP). CLIP can have internal associations between words that in turn steer the diffusion image generation.
To give a simple example of how this could work (I'm not saying this particular example does work): the model as a whole could understand that a "Swedish flag" consists of a "yellow cross on a blue background", and it could have yellow crosses in its image training set but no images of Swedish flags, and it would still know how to draw a "Swedish flag" based on the language semantics.
Just as a human would, actually.
So, this won't work. A style can be inferred and described and it doesn't necessarily need to be in the image training set at all.
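A toy sketch of that compositionality argument, with hand-made 3-d "embeddings" standing in for a real text encoder like CLIP (everything here is illustrative; this is not how CLIP is actually parameterized):

```python
# Toy illustration of the compositionality point: a model can place
# "swedish flag" near its visual ingredients purely from word
# semantics, with no flag image ever seen. Embeddings are hand-made
# 3-d vectors, purely for illustration.
import numpy as np

emb = {
    "yellow cross":    np.array([1.0, 0.0, 0.0]),
    "blue background": np.array([0.0, 1.0, 0.0]),
    "red circle":      np.array([0.0, 0.0, 1.0]),
}

# Suppose the text encoder composes the phrase from its parts:
swedish_flag = emb["yellow cross"] + emb["blue background"]

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# The composed concept is close to its visual ingredients and far
# from an unrelated one, so an image model conditioned on it can
# draw a plausible flag without a flag in its training set.
print(cosine(swedish_flag, emb["yellow cross"]))  # ~0.707
print(cosine(swedish_flag, emb["red circle"]))    # 0.0
```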
> so somehow the arts is above the telegraph operators or weavers?
Yes, obviously. We haven't found 40,000 year old telegraph stations in caves. No one thinks weaving is an intrinsic part of being human.
The cat is out of the bag, but we should think very carefully about what we're leaving behind before we embrace an artless society.
> Just because something is difficult to do, doesn't automatically mean that people need to be paying for it to allow it to continue to exist. If it isn't profitable, stop doing it, rather than demand the world be changed to allow for it to continue.
This I find positively pro-dystopian. Who cares about reality when we can get regurgitated fakes generated much cheaper? Just write "crying woman bleeds from head in front of ruined building, warzone (pulitzer) [trending on Flickr]" in the little text box, click generate, ship it, and go for coffee.
> But the point of good law isn’t to protect an established business model
But there are plenty of bad laws that protect business models. Hell, copyright - the law in question - is intended to protect certain models. Regulatory Capture is a term for a reason.
I agree with other points of yours though. Copying in art is for sure a thing, and same with style changing.
Sure, it's disruptive and that's fine. The problem is that there's no mechanism to compensate the people who enable your new disruptive machine.
As I said, humans learn from each other too, but the pace at which humans do it still allows for people to be compensated.
I'm not anti-AI at all. I think it's great, and I think artists who can leverage it can benefit hugely from it, but it is not fair to just take something away from people.
I don't get the assumption that there should be. The machine was trained on publicly available art that was already free. Why do people think they need to be compensated for something they put up online for free?
They don't have the right to not allow people to learn from it, that's just never been a part of copyright.
It is expected of any professional artist to have an online portfolio if they are serious about generating work. Scraping the portfolio I put out because I want to generate revenue is a shitty thing to do.

Also, 'trained' is a complete misnomer. AI is fed images made by humans, labelled and tagged by humans, categorized by styles defined by humans, then plotted, copied, and traced by a program written by humans. Along comes a human who inputs keywords, and the AI uses statistics and algorithms, also written by humans, to amalgamate a resulting 'work of art' for the human with no physical art-making effort. If you were to erase all tags and labelling from the source images, you would find zero learning happened.

Art is made with physical motor skills, time, and effort. AI is a human-made program that uses human-made imagery, classified by more humans, to generate a shopping list written by another human who wants art without the effort of acquiring the physical skill needed to make it. With no physical product there is no AI art, and that art costs the maker.

Public and free are not synonymous. If you want to physically copy my art, go for it; I will applaud your skill. But taking my art and shoving it into a program with a bunch of other art to dilute the provenance doesn't change that you are utilizing the effort and skill of artists for your own gain. If you can live with that, then I am very sad for you.
My AI art generator works right now, on my PC. If everyone stopped making art, the "normal" way, my AI art generator would still work.
This is a false equivalence to your example, because if farmers stopped farming, then it would not be possible to make burgers.
But that's usually what happens when someone comes up with a pithy, one-off meme response, like you just did, instead of actually responding to the substance of the argument. (I expect any future response from you to likewise not engage with the substance, and instead come up with reasons why the silly analogy still works.)
> Art is made with physical motor skills, time and effort.
It hasn’t been about motor skills, time, and effort for a long, long time. Art has been about the idea rather than a fetishization of how much time and skill something takes to make.
This here shows that you are speaking without understanding what you are speaking about.
AI is absolutely trained. It's a process that is quite literally inspired by the way we understood neurons to work in the 1970s.
An AI starts with a big batch of random numbers. A big, well-studied procedure is used to adjust those numbers so that the system learns to do some task.
The process creates genuine and novel understanding of the problem space at hand. An AI trained to do something simple, like add two numbers, will contain a real solution for adding two numbers once you have finished training it.
With bigger problems, similar understanding certainly exists and in some cases has been proven to exist. However, once you get to difficult problems with billions of parameters, it's very difficult for us to check, because if we could, we would have just written the solution ourselves without needing an AI in the first place.
There are lots of researchers putting study and effort into ensuring that AIs are actually producing real outputs and have genuine understanding of the problem spaces they are working in. Do not insist that AI has no understanding if you have not taken the time to learn about those techniques.
Stable Diffusion has genuine understanding of different images and how to produce them. It is not a simple system that reproduces what already exists. It is not a collage tool. It is not incapable of producing novel outputs.
Diffusion models are perfectly capable of producing any image that can possibly exist. It is only a matter of time before someone invents a new style that hasn't been seen before, and someone else is able to find a combination of descriptive words that causes stable diffusion to produce that output.
Or that someone produces a novel combination of words, chucks it into stable diffusion (or some other AI model), and produces a new style of art.
> Art is made with physical motor skills, time and effort. AI is a human made program that uses human made imagery, classified by more humans to generate a shopping list written by another human who want art that requires no effort of acquiring the physical skill needed to make the art. With no physical product, there is no AI art
This is simply not true. Art is incredibly cultural, vaguely defined, and often involves little to no work at all. If someone prompts an AI, gets a result, and sticks it to a wall, calling it art, it can probably be considered art.
> If you want to physically copy my art, go for it, I will applaud your skill, but taking my art and shoving it in a program
In short, it appears that you are threatened by the existence of AI, and the copyright argument is not actually about copyright; it's about ensuring that the competition does not exist and that people do not have access to tools that might make artists less valuable.
I personally care very little for copyright or copycats. I have great disdain for those who would profit off the backs of the labor of others. Saying that art often involves little to no effort just shows how ignorant you are of the subject. I am not threatened by AI and frankly I don't see it as competition, it's not really that good. Besides that the majority of my art is three dimensional. I am just saddened by how it will be misused to disrespect the effort and labor of the artists that it is feeding off of. No matter how complex AI might be, it is not sentient and it relies on human work and direction and is therefore not actually creating.
Available for free online is not a valid justification for copying under copyright law, though, right? You can’t distribute something just because you can see it. True for museums and magazines as it is for online content.
> They don’t have the right to not allow people to learn from it, that’s just never been a part of copyright.
Yeah, this is true. Stable Diffusion and other neural networks are not “learning” from it the way humans do, though; they are remembering, remixing, and interpolating pixels (fixed expression), which is a part of copyright.
BTW compensation is not a very accurate summary of this problem. Machines that borrow someone’s style and copies and remixes without attribution is inherently problematic far beyond artists who live off selling their art, it dilutes both creativity and credit, in addition to undermining people who have to work much harder than the computer to produce images. These AI completely depend on being given human-created art to begin with, and the companies that are making them and using them are already making handsome profits, so it’s reasonable to expect some kind of return in addition to proper credit.
>> Why do people think they need to be compensated for something they put up online for free?
> Available for free online is not a valid justification for copying under copyright law, though, right?
Legal and attribution and licensed are all terms that are usually involved, but the core assertion is more or less correct. eg Images on billboards are protected by copyright in a similar manner. Something displayed on a website does not invalidate the copyright of the author or indemnify the owner of the site (or downstream users) from licensing conditions.
It is exactly remembering the pixels. Just not all of them, and it obviously fills in gaps (more hair, as mentioned in another post). You can consider the way it stores those pixels a lossy compression format. If I copy a music sample but store a compressed version of it (mp3, for example), you will not find the original bits in my database at all. I am still violating copyright.
But it's really not, though. It's remembering something related to the pixels, yeah, but that's like remembering the shape a line can take or the color of the sky.
To extend your musical analogy, it's remembering that many songs are in 4/4 time, and that major chords sound appealing.
Also, were you to compress anything, an mp3 or a picture, in a lossy fashion, to that degree of compression (~10^-5), you would no longer have anything resembling the original. The audio would be glitchy noise, and the image would be a scattering of apparently random pixels a few pixels wide.
Here's the thing - I empathize that this is disruptive in a very similar fashion to a tool that does store compressed copies of the work in question. It is capable of doing the same kind of damage. There's a conversation to be had there - but it's just not compression. That's not how the thing works.
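A rough back-of-the-envelope check of that ~10^-5 ratio (the figures are assumptions for illustration: a checkpoint of roughly 4 GB and a training set of roughly 2 billion images; the thread elsewhere cites 5 billion for the full dataset):

```python
# Back-of-the-envelope: how many bytes of model weight per training
# image? Both figures below are rough assumptions, not exact specs.
model_bytes = 4 * 10**9          # ~4 GB checkpoint
training_images = 2 * 10**9      # ~2 billion training images

bytes_per_image = model_bytes / training_images
print(bytes_per_image)  # 2.0 -- about two bytes per image

# A typical training image re-encoded as even a small JPEG runs to
# tens of kilobytes, so a "compressor" at this ratio would retain
# well under 0.01% of each image's data on average.
```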
In the case of an overfit image, which is the thing Stable Diffusion is being sued over, it is just compression, literally. The image data is stored in the network weights, and the image can be reconstructed. You’re drawing a distinction without a difference.
'cause those images are not the same. Sports events are just easy to fake, because they're boring - all sports pictures look roughly the same.
Edited to add: There's another lawsuit (a class action - 2), and after a little light reading, I came across section 5: 'Do diffusion models copy?', and my stomach jumped.
What they're doing, to make a point at trial that stable diffusion copies images, is _training images into the model, then using that trained model to prove that stable diffusion is a compression algorithm_.
This is a patent fabrication. If you train a model hard enough, yeah, it will produce the image you trained it on. And become useless for all other images. Congrats, you've just compressed your 7kb image to a 7gb diffusion model.
What scares me about this, is that the average court in the US is absolutely dumb enough to fall for it.
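The overtraining trick described above can be sketched in a few lines (a toy linear "model" and synthetic data, nothing like an actual diffusion model): hammer the model on a single example until it reproduces that example exactly, and it ends up knowing nothing about any other image.

```python
# Sketch of the overfitting trick: train a tiny "model" on a single
# example until it regurgitates it verbatim. All data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
target = rng.random(16)          # stand-in for one "image"
weights = np.zeros(16)           # the "model": one weight per pixel

# "Train" by gradient descent on just this one example
# (gradient of the loss 0.5 * ||weights - target||^2).
for _ in range(1000):
    grad = weights - target
    weights -= 0.1 * grad

# The model now reproduces its single training example exactly...
assert np.allclose(weights, target)

# ...but it has no capacity left for any other image:
other = rng.random(16)
print(np.allclose(weights, other))  # False -- it only "knows" one image
```

Calling such a deliberately memorized model a "compression algorithm" says nothing about how the same architecture behaves when trained normally on billions of images.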
This is dismissive in the face of increasing evidence that a bunch of NN models have already been caught reproducing accidentally overfit data. Many examples have popped up with Stable Diffusion, not just one you disagree with. Same goes for ChatGPT, for GitHub Copilot, for Imagen, and a bunch of models.
Calling people dumb is to be willfully ignorant of the fact that neural networks actually can and really do remember images, not just when overfitting, but also when examples are in a low-density area of the latent space, when an example doesn’t have enough neighbors to average with. The machine really is technically a machine intentionally and specifically built to reproduce a weighted combination of its inputs, and it really is possible for that weight vector to spike on some specific training examples. This won’t go away by pretending it doesn’t happen. It will go away when people curate training data that is legal to use, and/or when people write software that detects and rejects outputs that are too similar to a training sample, or otherwise guarantee no individual examples can be reconstructed. This is precisely why the project we’re commenting on is interesting, because it takes a step in that direction.
I agree with you that they have the capacity to remember an image - but they're not compressing them. That's a fundamentally different thing. The argument being made by that class action lawsuit is that "this thing can reproduce image X so it's a compression algorithm and nothing more", which they are predicating on an exercise that is sneaky and dishonest, and only likely to hold water with someone who has a limited understanding of the tech and isn't paying very close attention.
I think it does go without saying that our legal system has made some pretty dumb decisions regarding tech in the past - we read here all the time about the patent system, which is damn close in spirit to copyright.
Again, yes, they can remember an image, but they are not remembering pixels, and it's not compression. The vectors you're referring to are not a smaller version of the data, nor are they a pixel representation or even a close derivative thereof. Sure, there's a connection between the latent space and the pixels, but I don't see how that's the same thing.
For those following along, (1) is the best paper I could find talking about extracting images from SD. I'm open to more resources, and I'm even open to being convinced I'm wrong, but not by intentionally overtraining a model and calling it 'compression'. That's a lie.
To take a step back here, is it really the incidental occasional regurgitating of an existing image that's got everyone on edge, or is that just an easier target than "this is disruptive so I want to make it go away"? I'm not saying it doesn't suck that this is gonna put a ton of people out of jobs; both my parents were professional photographers in the 80s. I get it. But like, let's talk about that. Not some orthogonal strawman.
And hey, just to get it out there. We might disagree but I'm not calling you dumb. I do appreciate your willingness to engage an opposing view - it's part of what keeps me coming back to HN.
Compression (especially a lossy one) means storing a smaller sample of the original data in whatever form you desire and then using some algorithm to reconstruct the original data up to some acceptable approximation. I would argue that in the situation we are discussing the network does just that and it is obvious to everyone involved.
This is the crux of my issue with the term "compression" in this context: is it a smaller version of the data?
Yes, the model is smaller than the total input data. But when it comes to recreating a single image, how many of the weights must be configured just so to recreate an image closely enough to call it the same image? I'll admit ignorance here, but I also don't think this is something anyone knows for sure. We can only just extract Othello piece colors from a simplified, Othello-specialized model designed to recognize two colors.
How much of the information from other images must be present to perform this task?
My instinct, given my understanding of how these things work, is that to replicate an image with any recognizable fidelity, you have to overtrain the model enough that you've affected a set of weights much, much larger than the pixel data. The internal representation of these images is concerned with much more visual information than just 'this pixel is this color' - by looking at layered outputs from the inverse type of system (image recognition, which is the core component of these models), you can see that they're encoding layers of shading, lines that map to brushstrokes or object boundaries, foreground, background, all kinds of stuff. A direct representation of an image with all of these would be necessarily huge - and we know this because we have them. Artists use layers in all kinds of image-creation software, and they're always way bigger than the JPEG itself.
I get that this may sound pedantic, but the term 'compression' doesn't seem to me to fit here. Compression, by definition, makes stuff smaller.
Fair enough, maybe compression is too specific a term to apply here, but it does not matter whether it's compression or not for it to violate copyright. Compression was a good example to mention because it is already familiar to laypeople and established in law. The main point is that it stores some sample of the original data (or data derived from the original, as in your strokes example) and applies some algorithm to reconstruct it to an approximation that we humans might find indistinguishable.
It is effectively remembering pixels, and we can prove it because it can regenerate some of the training images verbatim, close enough to violate copyright law. It doesn’t matter that it’s compressed.
They put it online for free for human consumption. Because the tech is very new, old concepts and laws don't cover it and that's not the point.
The assumption that there must be compensation comes from the capitalist society that we live in. Switching to Communism or something else can be a solution to not directly pay people who do works and still have them around.
The Google bot has been consuming their art for many years, even copying it verbatim into Google's site and yet they didn't complain.
This seems much more about fear of competition than about violation of copyright. And yes, that's scary, but unavoidable as tech progresses. I don't think anything good could come from trying to tighten copyright here. The AI is obviously not copying directly; at best it takes a bit of inspiration from other works, something artists themselves do plenty of.
Something like UBI might help with the monetary needs eventually. Though the fear part might start getting really interesting in the near future. What do you do with your life when AI is better at it than you? When everything you can create, can be created by the AI faster and better?
Artists shouldn’t have to compete against themselves. If these AI companies didn’t use the work of artists, the output would suck! I’m sure many artists would be happy to compete against the artistic talents of software developers. But they’re not, they’re having to compete against their own work.
I listened to an interview with an artist who referenced the “three C’s”:
Consent
Credit
Compensation
These seem reasonable to me. The request isn’t to eliminate the technology. It’s that artists should be able to consent to their work being used. Works derived from their work should include credit. And, they should be compensated.
The request is to eliminate the technology. The training data set for stable diffusion has 5 billion images. Even if it was a single dollar per image, the data set as a whole would cost $5 billion.
Getting the consent of all the authors of those 5 billion images, and managing the infrastructure for paying them, would be a Herculean task, and the cost of that task would far outrun the $5 billion spent even at a dollar per image.
That would kill any and all possibility of an open source AI model. The future where this happens would be a very dystopian one.
This. I see and understand the FUD on behalf of the artists. It's real; this is going to change things in a way that makes their lives harder, and that sucks.
What I see in the discussion is a results first, truth second thing. This is understandable - in an existential fight, damn the consequences - I'm swinging for my own survival.
What's missing from that analysis though, is that by constricting open source models, you're not by any measure stopping the development or deployment of these models. Google will make one. Microsoft will make one. Apple will make one.
Adobe will make one. Then, in order to use these technologies, you will have to pay. A lot - and it won't be paying the artists. Sure, the initial training might send out a few cents for each work included, a small price for Google to pay to prevent competition. But you're smoking the wrong shit if you think for a second that won't change as soon as the lead is cemented.
So now you're still looking for another job, and you don't even get to play with this new tech and continue making art, because you don't work for Google.
It’s insane how much of a gift to artists Stable Diffusion is, this tech could have been wrapped up in extremely expensive subscriptions by any of the current creative tool rent barons but it was handed to us all for free.
Many are too angry and small minded to see how lucky we are. A tool beyond anything Adobe currently ships today, for free. It could easily have been an AutoCAD-level subscription ($10k+ a year, IIRC) that would have left you behind if you hadn't paid.
I think that some of the friction here is about mindset. I have a lot of friends who are artists, and many of them consider this stuff 'tech shit I'll never understand'; I think, because we've culturally set ourselves up in a strata of 'professionals' (who are assumed to be better / smarter / uniquely capable) and non-professionals (who are born without whatever it takes to grok x). This is along many lines - my artist friends in question both believe they understand something about the creative process that the art-uninitiated will never get, and that 'tech shit' is leagues beyond what they'll ever get, because they did bad in math.
But here's the thing; I have never 'done art' as a part of my identity, and yet I can listen to them talk about it and understand where they're coming from enough to contribute.
Also, I work in tech and I didn't even make it through basic trig. I have a GED and a set of scattered community college credits that are never going anywhere because of a GPA < 2.
The whole idea that there are 'this type of person' and 'that type of person' and that those are immutable is a horrific and dangerous lie.
The future of creativity and artistry isn't about being an expert in just one thing anyway; we're already seeing, in a lot of fields, top-level creatives managing to excel across several.
These sort of tools are only going to accelerate this trend in my eyes.
> This seems much more about the fear of competitions, than about violation of copyright
Yep, it's all about being compensated, and yes, copyright is already a very problematic and creativity-limiting concept.
People in these industries happen to be the first to face the AI revolution, but it will come for us all, eventually. UBI might be something, but I don't know; it seems like hard times are ahead of us.
People and companies have complained an enormous amount about Google's usage of images, particularly their inclusion in Google's site, and legal action or the threat thereof has caused Google to change how Google Images works before.
I think what we are seeing here is a proxy argument.
Artists do not actually care, copyright-wise, about people being able to learn from their art and reproduce it.
It has not been an issue for decades, because it is a long-established standard of copyright: you can't copy a work directly, but you can learn from it and produce your own sort of stuff.
What's different now is that this is a tool which artists can be replaced by. They are not angry that their art has been learned from. Artists are angry that there is a tool which can replace them. And artists are looking for a way to make sure that tool is hampered as much as physically possible.
If it weren't copyright over learning, it would be something else, and whatever that something else was, it would probably also be a proxy for that core issue.
Imagine we had created an AI which was perfect, it could take your exact description of anything you asked it for and drew that thing.
And then I went and asked it for a picture of Mickey Mouse in the style of Disney. Because it's a perfect AI, it perfectly reproduces it.
Who is in the wrong? I think it's absurd to think the creators of the AI would be in the wrong, because they did their job perfectly. It's a perfect app; it is a tool capable of literally anything.
At the end of the day, it's the person using the tool to violate copyright who is in the wrong, assuming they distribute those images.
And I don't believe the answer here is communism, I believe the answer is the same answer we've had for decades for the same situation. People need to find new jobs and learn new skills and do new things.
And as programmers, it's probably going to happen to us too, about as soon as it happens to artists. Time to consider learning to weld or build homes or something like that.
I don't think we can look at this from a copyright standpoint. It was already not working very well in the digital age; now it's completely useless.
I don't dispute that people should learn new skills and find new jobs. My concern is that the people who actually put huge effort into producing that "training data" are now left dry. They should be compensated and move on; I'm completely against hampering the abilities of AI in order to preserve the current business structures.
The problem is, the net contribution of most artists to these systems is next to zero.
It's greater than zero, otherwise the AI wouldn't exist, but these AI are trained off of 5 billion plus examples.
Examples that required work to collect and sort through. Examples that required millions of dollars worth of compute time to make into something useful.
What is there to compensate them for? These machines are not actually going to make these companies that much money either. The tools are open source. Competition is going to reduce profit on these systems to a very slim margin.
You have to account not only for the fact that they weren't involved in actually creating the AI, so they'd only be getting a fraction of what this thing is worth in the first place. Then you have to consider that there are five billion images in the training set.
I agree, the situation is not ideal. These machines shouldn't have been trained on their work in the first place.
Currently, compensating the artists would be like the Pirate Bay compensating the studios for their production costs through the gambling ads they run. No new movie would ever have been made if that were the case.
Isn't this machine never being invented a net loss for society?
I want my computer to be able to summon images out of the ether if I ask it for a picture of anything I could imagine. I could not imagine a positive system that would value such a small group of individuals over the need of the whole in a situation like this.
Actually, you are wrong. The capital of the artist is to physically create art. AI has to be fed physically created art and then be told by humans what it is being fed. AI is not 'inspired'; it is statistically driven by a human-written program, using mappings of human-made work, labeled, tagged and defined by more humans, to render a shopping list input by another human. Humans are using programming technology, written by humans, to exploit other humans' physical work.
"Capitalism" is selling the right itself to someone else, not earning royalties from it. IP rights are government creations; it's not like you signed a contract with each royalty payer.
It would seem that it opens you up to the problem of other people deciding whether you're an artist or not and if so whether you're a good artist.
The same of course happens with capitalism, where it's your customers doing it. (Or in the case of books, which are almost never profitable, the VC-like publishers deciding to give you advances.)
So of course the reason modern mixed economies are good is there's more than one set of people that you may be able to convince to fund you.
So the parent post to yours said he’s taking piano lessons. Let’s suppose he becomes great. And let’s suppose he writes his own music. How is Taylor Swift supposed to get paid by him from his early lessons which have undoubtedly played a key role in not only his love for music but the song he writes today?
She doesn't because it's not a business issue for her. She is not getting paid when you sing her songs with your friends too and this is also not a problem.
I'm not thinking from the current definitions of rights and compensation structures, and I'm not calculating fees here. What I say is that if you build your thing on top of other people's work and put them out of business, this needs to be addressed, because you will cause huge problems for the people who made your machine possible in the first place, and you will dry up your source of "inspiration" because you will no longer have people making a living off it.
> You think, maybe, it’s fair for the people being replaced to feel a bit upset about it?
I think it makes all sorts of sense that people are upset about stable diffusion.
And, people being upset isn't a good reason to change the law or outlaw the new technology.
Today the word "luddite" is a slur. But luddites were real people who made the same argument you're making. In their case, they were a secret group of textile workers so upset by the introduction of the mechanical loom that they went around sabotaging equipment.
Suppose we wound the clock back and the luddites won a legal battle and successfully outlawed the mechanical loom in the UK. Knowing what we know now, would you support a law like that to protect the jobs of textile workers in the 19th century? I sure wouldn't - it would have decimated the economy of the UK and stunted innovation.
That's the danger of using the law to protect the status quo. Sometimes the status quo needs to change to make room for what comes next, regardless of how painful that change is. The law doesn't exist to protect your business model.
We're at the verge of the second industrial revolution. I have no idea how it shakes out, but I don't think clinging desperately to the old ways of doing things will be a winning strategy in the long run.
The true story of the Luddites is just one more chapter in the long history of state repression of the working people any time they organize together to improve their lot.
The law shouldn't protect the status quo, it should protect human beings. Unfortunately, the law usually protects the wealthy first and only.
You are fundamentally not understanding the issue.
People don’t have an inherent right to be an artist just because they are an artist.
They can get upset, smash the machines in a Luddite frenzy or try to use the law to stifle competition but the simple fact is nobody owes them anything. Maybe they win a lawsuit or two but the AIs will just get trained on out of copyright work and art styles (for those who now pay for graphic artists and whatnot) will change like they do every generation.
I, for one, look forward to an AI generated animated Matisse dancer selling me milk.
Use of AI models is far from "free". Even given their prior existence, ignoring their research and training cost, AI models like Stable Diffusion require energy and computing resources to execute, and require iteration and selection labor for raw image content to be created. Once raw content has been created, the content still requires processing (e.g., color correction) and integration (e.g., editing, scaling, etc.) labor. If analog reproductions are desired, they also require capital to produce.
The "problem" is only a problem if you assume that everything creates a right to be compensated.
The busker doesn't get compensated if I listen to him as I walk by (unless I choose to). He doesn't get compensated even if he has a really cool, kinda-unique idea, I get inspired by it, and start doing something very similar (without outright copying him in the sense of copyright). He doesn't get compensated even if I get my friends to join me and put him out of business.
Copyright is already an artificial piece of compensation that was added, and that is debatable (copyright can be seen as "theft from the commons"). It intentionally covers certain things and doesn't cover others, and this seems to fall under "others", as long as the training data was obtained legally.
I expect that to be the real problem, especially where content was reposted without permission.
There is no compensation for any usage save for duplication. No works are being taken away. Style is not and has not been copyrightable. If the disrupted lobbies are powerful enough, the law may change in this regard. To do so, would be a tragic corruption of fair use.
> So let's say, if an artist developed a particular style and someone wants to hire them for a business project like a game they can't feasibly just learn that style and use it so they hire the artist. Later other people catch on this style and develop over it. It only works because monetising through copying the style is not very feasible.
I don't think copying an art style is that hard. Professional artists in your example of the game industry can absolutely imitate each other upon request. If using the same style as another artist were so difficult games with multiple artists wouldn't have a coherent art style, but they do because the studio will develop a style guide and use their senior artists to guide the juniors.
> How is the artist supposed to be compensated for spending years of developing that style?
They haven't been in the past and they shouldn't be. They're compensated for the specific works they produce which are granted copyright. This is a good thing because otherwise the artistic domain would be horrendously polluted with claims, and producing original art would be like navigating a legal minefield.
Food and housing should be a human right, and it is only because of greed and fear that society allows someone to become homeless.
Not only should we provide housing and food for the artists. We should do it for everyone. And then they can spend time doing what they want, and earn more money that way but never have to worry about housing or basic food for survival.
And medical care! Nobody should have to worry about food, housing, or medical care.
Our greatest failing as a society is viewing those three things as individual problems. It was not an accident, it is a very profitable social failure.
I guess you first should do your revolution, establish your new order where food and housing is a given right and later go after the work of the artist.
> do your revolution, establish your new order [...] and later go after the work of the artist
I don't think that's what's going to happen.
But if things get as bad as you guys are saying, maybe after enough people lose their jobs to AI society will change somehow to take care of the less fortunate among us in better ways, providing housing, food and healthcare to the people.
If you want to reason under the current legal framework that makes the works they create "their" work for economic purposes... then you also need to accept that under that same framework they are not deemed to be the authors.
Nothing. Continue as we currently are where artistic style is not protected, but output is. Artists get work making art as they always have, but they're competing against AI that can pump out shit quality work very quickly. If you can't do better than that as an artist then you should find another line of work.
This is no different than professional calligraphers losing their jobs because of the printing press and then later due to customizable fonts on printers.
I have a ton of respect for calligraphers and believe they are artists, but at the same time I don't think that the millions of people who create custom fonts or use custom fonts are doing a bad thing.
The artist's function is not limited to the act of producing the creation. Their value comes from creating the methods of exploring ideas or feelings, not from the output itself. Guernica is not invaluable because no one else can draw it, and Picasso is not famous because he drew the best lines.
Just to be clear, I think art made with AI can also have value, because it's a tool after all. My concern is that it breaks the business for the people it feeds on, and that's not OK.
> My concern is that it breaks the business for the people it feeds on, and that's not OK.
Why not? This is a website that's filled with programmers, almost all of our wealth is built on the skulls of jobs that once existed, and those programs were built without the consent of the people whose jobs were automated away, often through observing and rewriting the processes they used to do.
This is the standard; it has been going on for decades. The only reason it's getting backlash now is that this is a group of people who never expected to have it happen to them.
Automating people's jobs is not the same as taking their intellectual property and building machines that can churn out endless versions of it. The people put out of work by computers no longer do that work, but they were paid in full for the work they did, unlike the people who developed intellectual property on their own dime hoping to be compensated later, only to find out that their work was copied and distributed by the computer people.
What do you guys think Excel spreadsheets did? Back in the day people had pages with lots of boxes and did the calculations manually. The processes, terms, techniques, were all human inventions and shamelessly replicated by machines.
Almost everything computers do was once done by a person, and the people who did those jobs laid the framework for which the processes were automated through.
This has happened dozens of times. My entire job is writing software that was written in the '80s, which replaced customer service reps who were needed in the '70s, and the computer does what they used to do. It took what they invented, their processes, and made the computer do it.
This is what automation is. It's always built on top of, and replaced, human beings who used to do those jobs.
Automated jobs do the physical work. Here the computer is redistributing the physical work made by the artists, however much diluted. AI cannot make art on its own, it needs to be fed physical art and then be told what it is and what to do with it. It is not automation, it is simply a data manipulating tool.
to me it's less about the process and jobs lost and more about some feeling loss related to the excitement about removing expression and experience from a domain for honestly very little real benefit
continuing to transform things which in part centered around exploration, discovery and experimentation into something cold and kind of dumb
like we will make some interesting things but it's the general trend of modern society that upsets people everything just getting easier/worse faster
On the other hand, paints and canvases are very very expensive, making art a domain of the rich. You can use the ai tools at the library, making art more proletarian.
I think the pushback is around status and elitism, and that people with certain backgrounds are societally expected to be not making art.
I picked up some nice canvases recently at the dollar tree. Masonite, wood and paper can be painted on. Masonite is actually preferable for acrylic paints and it is quite affordable, as are acrylic paints themselves. You can paint with coffee grounds and beet juice...the exploration is endless. I come from a family of seven living in the backwoods and most of my artist friends would not nearly qualify as rich. I had a teacher once who made the most beautiful art out of entirely recycled metal junk. Art is a reflection of culture, top to bottom, and ingenuity plays a big role. Ingenuity is accessible to everybody.
That is also my feeling. There is nothing like having a pile of raw materials, or a set of new inks or water color pencils, etc. and to explore the boundaries of what you can do with them. Art being made from a menu seems rather sad.
Technically we do have examples of stable diffusion violating copyright, it will generate some exact clone images if you give it the right prompt and that image exists a few hundred times in the training data.
For something like 11 images which were accidentally repeated hundreds of times in the training data this is true. They're more the exception that proves the rule.
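Duplicates like these are usually found with perceptual hashing rather than exact byte comparison, since re-encoded copies of the same image differ at the byte level. A toy average-hash sketch of the general idea (my own illustration, not the deduplication pipeline any real dataset actually used):

```python
# Toy perceptual "average hash": each bit records whether a pixel is
# brighter than the image's mean. Near-duplicate images produce the
# same (or nearly the same) bit pattern even if pixel values differ.

def average_hash(img):
    """img: 2D list of grayscale values; returns an int bitmask."""
    flat = [p for row in img for p in row]
    mean = sum(flat) / len(flat)
    return sum(1 << i for i, p in enumerate(flat) if p > mean)

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

original  = [[10, 200], [10, 200]]
reupload  = [[12, 198], [11, 201]]   # slightly re-encoded copy
different = [[200, 10], [200, 10]]   # mirrored image

assert hamming(average_hash(original), average_hash(reupload)) == 0
assert hamming(average_hash(original), average_hash(different)) > 0
```

Scanning billions of images for hash collisions like this is how accidentally repeated entries get flagged after the fact.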
Copyright already covers characters and stories. Just because an ai (or a human) comes out with a copyrighted character in a novel pose, it doesn't mean it's not copyrighted.
Style itself is not copyrighted, and that's a good thing.
I can create copyrighted content privately. I cannot share it. The law doesn't forbid imagination, whether human or AI, and that's a good thing. But I already cannot distribute these private renditions without violating the content owner's copyright.
The law seems pretty complete and well defined to me.
Are there cases where an ai and a human can generate the same media but which result in the media having different legality?
> The law seems pretty complete and well defined to me.
I don't think lawyers would agree with you at all.
> Just because an ai (or a human) comes out with a copyrighted character in a novel pose, it doesn't mean it's not copyrighted
My understanding is that there isn't a legal consensus on whether or not that's true. Copyright law wasn't written with AI-generated art in mind. For a work to be copyrightable, my understanding is that it requires that you can make a "sweat of the brow" argument. I.e., you had to work to create something.
Does an AI count? We don't know. Or to put it in other terms, we haven't (collectively) decided as a society whether it should be copyrightable.
On the other side, "fair use" arguments are also at play here. If I train a 1bn parameter model on 5bn images, I could probably argue that 1/5th of an f16 per image constitutes fair use of copyrighted work. If that argument holds, I can train my model with impunity on any amount of copyrighted work so long as my model is small and the number of training examples is large.
Will that argument sway a court room? I have no idea. Is that fair? I don't know!
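For what it's worth, the "1/5th of an f16" figure checks out as back-of-the-envelope arithmetic (using the round numbers from the comment above, which are assumptions, not exact model specs):

```python
# How much model capacity does each training image get, on average,
# if a 1-billion-parameter fp16 model is trained on 5 billion images?
params = 1_000_000_000      # model parameters (assumed round number)
bits_per_param = 16         # fp16 storage
images = 5_000_000_000      # training images (assumed round number)

bits_per_image = params * bits_per_param / images
print(bits_per_image)       # 3.2 bits, i.e. 1/5 of one 16-bit float
```

Whether "only 3.2 bits per work" is a legally meaningful framing is, of course, exactly the open question.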
The law is decided by people. And we haven't had AIs like stable diffusion and ChatGPT before, so the laws haven't been written with this stuff in mind. There isn't even a legal precedent yet for how the current laws should apply to AI art. Speculating is fun, but speculating on how a judge will apply old laws to a totally novel problem is a fool's errand.
If you want to read about fun edge cases to copyright law, look up the history of copyright law for maps (can facts be copyrighted?) and how that interacts with trap streets.
This stuff is hairy and complicated even for lawyers. As an outsider, boldly claiming that copyright law is simple only demonstrates that you don't understand law. It'd be like a lawyer boldly arguing (with no knowledge) that compilers are simple.
This doesn't seem too complicated to me. An AI trained on images that produces a new image, even if in the same style as the source database, is not violating copyright. It sucks for artists, but that's not an argument that current law bans it. If current law banned it, every artist who was inspired in their work by other styles or images would be violating copyright. E.g., all comic book artists study and are inspired by other comic book artists.

I don't doubt there will be attempts by courts and/or legislatures hostile to AI to make up new law to impose some sort of penalty/license fee on AI-generated images. One approach would be to make an artist's style a trademark (for all I know that is already established law, but I doubt it). I doubt the effort will be successful, as there will be a gazillion ways around it, or it will result in a relatively small number of monopolies of protected styles, which would seem an even worse outcome than a flood of AI-generated art.

Ultimately I think artists will need to be even more careful about branding, and probably will insist on prominent displays of their signatures and on promoting the idea that a premium should be placed on human-generated art, probably through some sort of "certified human art" label. It definitely will increase the supply of art and likely decrease the number of artists that can survive financially off their work.
> This doesn't seem too complicated to me. An AI trained on images that produces a new image even if in same style as the source database is not violating copyright
Yeah, software "doesn't seem too complicated" to non-programmers too.
No offense, but if you aren't a lawyer, then your opinion on legal matters carries about as much weight as a dentist explaining how software is made.
I have a certain amount of scorn for non-engineers telling me how their app idea is a weekend job and I should do it for free. I'm sure lawyers feel the same way about us when we claim there are easy answers around stable diffusion and copyright law.
The AI was trained without permission on copyrighted data. If it were as cut and dried as you claim, then why do both parties think it's worth going to court?
I don't know much about the law, but I know enough to recognise when it's a job for the lawyers to figure out.
Training method and recall are of no consequence, because copyright law doesn't deal with technology for the most part. There is a test as to whether the image is stored or not, which is interesting for the topic at hand, but it's always in the context of distribution.
The worst case is that the weights may be considered storage, but the point is: the law already covers that. Because it doesn't concern itself with technology, but with the results of actions.
That is the whole problem with the argumentation. The law is technology agnostic; just adding a layer of redirection doesn't matter, because it cares about the input and the output, not what happens in between.
> law is technology agnostic, just adding a layer of redirection doesn't matter, because it cares about the input and the output not what happens in between.
What makes you think that?
The law cares about whatever lawyers decide to care about. There was a case a few years ago where (if memory serves) a black woman sued an insurance company for discrimination after the insurance company refused to provide her cover. The company was using a neural net to decide whether to cover someone. The court demanded they explain the neural networks' decision - and of course, they couldn't. The insurance company lost the case.
In the aftermath they moved from a neural net to a decision tree based ML system. The decision tree made slightly worse decisions, but they figured if it lowered their legal exposure, it was worth it. With a decision tree, they can print out the decision tree if they were ever sued again and hand it to a judge.
> law is technology agnostic
Clearly not in this case.
There's plenty of other examples if you go looking. In criminal law, they care a great deal about the technology used in forensic analysis - both in its strengths and weaknesses.
If you don't know much about law, being humble and wrong will serve you better than being confident and wrong.
Insurance is not copyright and the case is not even the same subject matter.
And again, that case is technology agnostic. Discrimination law requires you to be able to provide proof that results are non-discriminatory; the law itself doesn't care that it was specifically a neural network, it only cares about the end result. The firm lost because it failed to provide required data about its decision process, not because it was using neural networks. That they used a neural network was irrelevant on its own, and it could have been fine if they had baked explainability into it.
It's worth noting that "worse decisions" is from the point of view of the insurance company, which would prefer to act racist if only that pesky law didn't stop them, and will continue to do so to the extent they can get away with it.
Incidentally, copyright law etc on fonts is very interesting. Many places license their fonts via in-page JavaScript. However, I believe that such a protection against the font designer doesn’t necessarily apply if for instance you use the font to create a logo. ( I’m sure I’ve missed some nuance here.)
Stable Diffusion isn't violating copyright for the same reason artists aren't violating copyright when they look at a bunch of images and make something similar.
Sounds like fair use to me, unless I’m mistaken… what could be wrong with replicating a style that someone else introduced? It’s entirely possible someone else came up with the same style hundreds of years before and it just never became well known. People copy various anime styles all the time, why can’t they use a computer program to help them do it?
Clearly the argument can’t just be “because they should hire someone to do it”, heck what if that person is totally booked? It’s not taking away from them in the slightest to produce additional artwork using a similar style if they weren’t available to take the job, so that seems irrelevant.
As long as you’re not taking their work and saying it’s yours or selling it I don’t see why you can’t mimic the style.
If you have the skill to physically mimic my art, I see no problem. I am not a huge fan of copyright. AI does not have the physical skill to make art. It is entirely directed by human input and programming and relies on such to generate images that correlate to a human-directed request. Computers are directed by code written for them by humans and rely on humans to input data. In this instance, programmers have written a program that feeds off of physically made art and generates statistical reiterations of the data that has been entered. Learning requires conscious intellect. Actual physical art is being used, however much diluted, and capitalized on without the consent of the artist making the physical effort to create it.
I use computers to do things I don’t have the innate ability to do all the time, that’s what tools are for. If someone can present that an artifact I used in a commercial product is an asset directly taken from their portfolio then I’d graciously remove it.
IMHO the current laws and concepts don't work anymore, at all. Copyright laws were already very problematic and caused all kinds of troubles but what's happening now is well beyond all that.
I also think that the use of AI is a fair game, the problem is that the thing itself is made possible by the hard work of thousands of people. By hard work, I mean years or even decades of development. They didn't learn to draw perfect photorealistic paintings, they developed styles and methods that capture something about humans and the machine took it from them.
True, and they all got a salary and royalties for their work. They were well compensated and they continue being compensated. Have you heard of ARM Holdings? It's the British CPU design firm that designed all the CPUs in your phone and now in your MacBooks. Apple and others continue paying them for the work they did in the '80s.
> How is the artist supposed to be compensated for spending years of developing that style/method?
I’m not sure. How are they providing compensation to every other artist whose work they’ve seen? To all of the people who gave them feedback and criticism?
The arts have existed for far, far longer than IP law and copyright and royalties. This idea that artists have no influences and that human art is not derivative is just weird.
This presumes the "style" is unique. It probably is, but it's also influenced by what was already out there.
The idea of a specific artist having a unique style they learn is dying right now: the faster and cheaper way to develop a style will be to train an AI model on public domain work, and tweak till you get something you like. Then use it as part of a process to go from sketch and composition to output. If you must, play with your own style and make a custom model - that technology has gotten very accessible very quickly.
This is a gnashing of teeth over an unexpected price shock: creatives have been feeling pretty assured that automation wasn't coming for their revenue because what they did was a unique human endeavor that a soulless machine could not possibly threaten. It was meant to be factory workers and retail staff first.
People aren't worried about copyright. People are worried that a skillset acquired at substantial sacrifice is about to become irrelevant and are desperately latching on to any perceived lifeline.
> How is the artist supposed to be compensated for spending years of developing that style/method?
By claiming that style and all future works that look like it, barring humans and AI from ever drawing anything else that has a similar style. Copyright used to cover just the expression, now we want it to cover all possible expressions of a style.
Don't worry about it too much.
From now on, nobody will release their art for public access anymore. Even portfolios will be paywalled to prevent copyright theft being laundered through "AI".
Which means there will not ever be a Stable Diffusion 2.0 with new art styles. In a year or two the instantly recognizable style of Stable Diffusion will be seen as extremely lame and crass.
Won't this be self-correcting, as all artists start to add visible and invisible fingerprints to their work with a license that says, in effect, "this image may only be viewed with human eyeballs; any other consumption is forbidden"?
> Based on some of the examples they explicitly provided, it is clear to me Stable Diffusion creates novel art.
You're jumping to a conclusion that the data doesn't warrant, I suspect because it's a conclusion that suits you.
They're doing something like a "reverse image search" from an AI generated image over the original dataset and returning a few examples that have a high degree of similarity. There's no guarantee that the images they return are actually the ones combined to create the AI image. In fact their results are dubious - see Saurik's comment:
I don't have an opinion regarding the question of whether SD creates art or not. To be honest, as someone who enjoys art or kind-of-art in many forms, it's not important to me (although, of course, I can see why it is important to someone who makes a living from art or AI generated images).
However, this website doesn't add anything except noise to either side of the debate, from what I can see.
Using pixel-space or latent-space distances to measure whether a generative model has simply memorized its training data is a common evaluation metric in the ML literature. If the website is searching the full LAION-5B training set used to train Stable Diffusion, then I find it extremely convincing that SD has generalized to the data distribution it was trained on, and does not simply regurgitate other people's artwork. If there is a way to find more similar images, I have yet to see it.
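The latent-space check described above boils down to a nearest-neighbor search: embed the generated image, embed every training image, and look at the cosine similarity of the closest matches. Here is a minimal sketch of that idea, using random NumPy vectors as stand-ins for real image embeddings (a real system would use something like CLIP embeddings over LAION-5B; the function names here are mine, not from the site):

```python
import numpy as np

def nearest_neighbors(query, training_embeddings, k=5):
    """Return indices and cosine similarities of the k training
    embeddings most similar to the query embedding."""
    q = query / np.linalg.norm(query)
    t = training_embeddings / np.linalg.norm(
        training_embeddings, axis=1, keepdims=True
    )
    sims = t @ q                      # cosine similarity to every row
    top = np.argsort(-sims)[:k]      # indices of the k best matches
    return top, sims[top]

# Toy data: 1000 fake "training image" embeddings, one fake "generated" one.
rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 512))
generated = rng.normal(size=512)

idx, sims = nearest_neighbors(generated, train)
# A top similarity near 1.0 would suggest the output is a near-copy of a
# training image; middling values suggest the model generalized.
```

The caveat, as others note in this thread, is that high-similarity neighbors are not proof of causation: they show what the output *resembles*, not which datapoints actually produced it.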
I think this website is evidence SD is a novel image generator, and rarely creates infringing images (at least in the way we thought of infringing before these kinds of AI).
Until somebody comes out with a better way to trace back output images to training data, this is the best "data" we have so far.
Before this, there were anecdotal examples of SD outputting an image with some Shutterstock watermark or very similar to some artist's work, but the prompts also seemed highly specific or were asking for something in that artist's style.
This tool at least lets us start to trace back the average image, and so far it does seem SD is adding something novel to its outputs.
> Until somebody comes out with a better way to trace back output images to training data, this is the best "data" we have so far.
There are many better ways, in the sense that they actually estimate the causal effect of a specific training datapoint: leave-one-out cross-validation training, surrogates for the Shapley value, or nonparametric models you can trace backwards. This is a whole subfield of ML research.* (The primary summary is: "it's hard and the easy approaches don't work." Which is why he's not doing any of those, but an easy, incorrect thing.)
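To make the leave-one-out idea concrete, here is a toy sketch (my own illustration, not from any of the papers alluded to): retrain a tiny model with each training point removed and measure how much the error on a query point changes. This is exactly why the approach doesn't scale to something like Stable Diffusion — it requires one full retraining per datapoint:

```python
import numpy as np

def loo_influence(X, y, x_query, y_query):
    """Leave-one-out influence: how much does dropping each training
    point change the model's squared error on a query point?
    Uses ordinary least squares as the stand-in 'model'."""
    def fit_predict(Xs, ys):
        w, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
        return x_query @ w

    base_err = (fit_predict(X, y) - y_query) ** 2
    influences = []
    for i in range(len(X)):
        mask = np.arange(len(X)) != i            # drop point i
        err = (fit_predict(X[mask], y[mask]) - y_query) ** 2
        influences.append(err - base_err)        # > 0: point i was helping
    return np.array(influences)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)
scores = loo_influence(X, y, X[0], y[0])
```

Shapley-value methods generalize this by averaging a point's marginal contribution over many subsets, which is even more expensive — hence the surrogate methods the research literature focuses on.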
* I'm not entirely sure why anyone cared so much... Research topics can be kinda arbitrary. But in this case, I think there was something of a fad around 2017 that there were going to be 'data marketplaces' where you would be trying to estimate the value of each datapoint to price it. This turned out to not exist as a business model: you either used big public data for free for generic model capabilities, or you had small proprietary data you'd die rather than sell to a competitor.
> Based on some of the examples they explicitly provided, it is clear to me Stable Diffusion creates novel art
The fiasco of a tool meant to detect the source of plagiarism does not mean that Stable Diffusion creates novel art. When you need a character with dyed hair tips, SD will occasionally spit out a copyrighted Harley Quinn portrait from DC Comics; Stable Attribution won't mention Paul Dini and Bruce Timm as the character's authors, but may show you some other pictures of people with colored hair. Good luck with the lawyers, is all I can say.
This is my hang-up about Luddite hand-wringing and claims that the existence of Stable Diffusion and tools like it constitutes some violation of the rights of the artists whose data was included in the training set. Existing laws on copyright infringement and whatnot don't prosecute the paintbrushes; they go after the artist who published infringing material. Why should this be any different?
Yeah, this is probably the future. It seems like the vast majority of the time the output is quite unique, but if there is too much source material for a particular prompt, it copies. Say, a prompt like "Mona Lisa".
So now you can just use this tool to verify your output is safe.
Exactly. The tool is incomplete, in a sense, as while it shows the work of original art produced by the AI and the human images it was trained on, it has no way to show the imagery on which the human artists were trained to produce their images. (The model behind a human mind is vastly more complex than that of a deep learning model, but the principle is analogous.)
I'm sure it's very random, but why is it that 99% of the time someone says this about some AI artwork, it's an image of a beautiful woman?
I am pretty convinced at this point that in a lot of people's heads, the fight to claim pictures like this as "real" or novel art is unconsciously just an urge to claim the beautiful woman as their own! Just thousands of men every day, generating beautiful ladies, yelling at people on the internet that they are real and they need to be protected...
Maybe it's all just coincidental or something, but if you step back and look at it, it's very... interesting.