I've been finding the strangest part of discussions around art AI among technical people to be the complete lack of identification or empathy: it seems to me that most computer programmers should be just as afraid as artists in the face of technology like this!!! I am a failed artist (read: I studied painting in school, tried to make a go of being a commercial artist in animation, and couldn't make the cut), and so I decided to do something easier and became a computer programmer, working for FAANG and other large companies and making absurd (to me!!) amounts of cash. In my humble estimation, making art is vastly more difficult than the huge majority of computer programming that is done. Art AI is terrifying if you want to make art for a living; and if AI is able to do these astonishingly difficult things, why shouldn't it, with some finagling, also be able to do the dumb, simple things most programmers do for their jobs?
Artists have all my sympathy. I'm also a hobbyist painter. But I have very little sympathy for those perpetuating this tiresome moral panic (a small number of actual artists, whatever the word "artist" means), because I think that:
a) the panic is entirely misguided and based on two wrong assumptions. The first is that textual input and treating the model as a function (command in -> result out) are sufficient for anything. No, this is a fundamentally deficient way to give artistic directions, which is further handicapped by primitive models and weak compute. Text alone is a toy; the field will just become more and more complex and technically involved, just like 3D CGI did, because if you don't use every trick available, you're missing out. The second wrong assumption is that it's going to replace anyone, instead of making many people learn a new tool and produce what was previously unfeasible due to the amount of mechanistic work involved. This second assumption stems from a fundamental misunderstanding of the value artists provide, which is conceptualization, even in seemingly routine jobs.
b) the panic is entirely blown out of proportion by social media. Most people have neither the time nor the desire to actually dive into this tech and find out what works and what doesn't. They just believe that a magical machine steals their work in order to replace them, because that's what gets reposted endlessly on Twitter.
That's exactly the lack of empathy the OP was on about: if you don't see that there is something wrong with a bunch of programmers feeding everybody's work into the meat grinder and then spitting out stuff they claim is original work, when they probably couldn't draw a stick figure themselves, then it is clear that something isn't quite right. At least, to me.
As someone who has always had a huge gap between what I can imagine and what I can manifest, outside of text anyway, I find the whole thing amazing and massively enabling. And I think it is possible to come up with original images, even though the styles are usually derivative.
At the same time I recognise that this is a massive threat to artists, both low-visibility folks who throw out concepts and logos for companies, and people who may sell their art to the public. Because I can spend a couple of dollars and half an hour to come up with an image I’d be happy to put on my wall.
I’m not sure what the answer is here, but I don’t think a sort of “human origin art” Puritanism is going to hold back the flood, though it may secure a niche like handmade craft goods and organic food…
What will happen is exactly the same thing that happened when email made mass mailings possible: a torrent of very low-quality art will begin to drown out the better stuff, because there is no skill required to produce a flood of trash, whereas producing original work takes talent and time.
As the price of a bit dropped, the quality of the communications dropped. It is inevitable that the price of creating (crappy) art will do the same thing, if only because it will drag down the average.
> This trust is broken when users copy and paste information into answers without validating that the answer provided by GPT is correct, ensuring that the sources used in the answer are properly cited (a service GPT does not provide), and verifying that the answer provided by GPT clearly and concisely answers the question asked.
The goal of programming as a discipline is to create tools that allow problems to be solved. Art is a problem - how do I express myself to others? The entire industry is designed for moments like this.
Unlike many other professions, I don't think there is much critical thought from the tech community as to what tech and programming are and aren't for.
A few people engaged in "hand-wringing", but there is no deep, regular discourse on the evolving nature of what we want "tech" and "programming" to be going forward.
Despite tech delivering transformative social shifts, even in just this last decade, where is the collective reflection?
> "The first is that textual input and treating the model as a function (command in -> result out) are sufficient for anything. No, this is a fundamentally deficient way to give artistic directions, which is further handicapped by primitive models and weak compute."
This is the first wave of half decent AI.
But more importantly, you are vastly underestimating the millions of small jobs out there that artists use as a stepping stone.
Think of the millions of managers who would happily be presented with a choice of 10 artistic interpretations, and pick one for the sake of getting a quick job done.
No way on earth this isn't going to make a major impact. Empathy absolutely required.
If small pointless jobs that machines can do are so great, then let's get rid of computers, power tools, and automation so we can get those unemployment numbers down… why can't we find a solution that doesn't hamper progress? At the end of the day, progress saves lives.
Actually, all progress will have a huge impact on a lot of lives; otherwise it is not progress. By definition it will impact many, displacing those who were doing it the old way by doing it better and faster. The trouble is when people hold back progress just to prevent the impact. No one should disagree that the impact ought to be softened, but not at the cost of progress.
I very much agree, and I feel the campaigns to stop AI image generation in its tracks are misguided.
I do wonder what happens as the market for the “old way” dries up, because it implies that there is no career path to lead to doing things better - any fool (I include myself) can be an AI jockey, but without people that need the skills of average designers, from what pool will the greats spring?
The gun made it so that even a dainty person could kill a strong person. However, some people are better shooters than others. It will just shift the goalposts so that a new skill is required. Being strong is still a thing… just maybe not the most important one in a gun fight.
I don’t see this situation as analogous or even particularly useful - we’re not talking about gun fights, we’re talking about art and design, and whether we will see fewer great artists and designers as the market for moderate or learner artists and designers dries up.
It doesn’t really matter to humanity if strong people can still win fights, but it might matter if we stop producing artists and designers who do great, original work. It probably even matters to the AI models, because that work forms part of their input.
So empathy as in being considerate that they are losing their jobs, right? Not that AI art generation is inherently a bad thing? Or that they or I can do anything about it?
You are demonstrating that lack of empathy. Artists' works are being stolen and used to train AI that then produces work that will affect those artists' careers. The advancement of this tech in the past 6 months, if it maintains this trajectory, demonstrates this.
It has been fascinating to watch “copyright infringement is not theft” morph into “actually yes it’s stealing” over the last few years.
It used to be incredibly rare to find copyright maximalists on HackerNews, but GitHub Copilot and Stable Diffusion seem to have created a new generation of them.
But it's not even copyright. Copyright does not protect general styles. It protects specific works, or specific designs (e.g. Mickey Mouse). It doesn't allow someone to claim ownership over a general concept like "a painting of a knight with a castle and a dragon in the background".
Are there any documented cases where copyright law didn't seem to offer sufficient protection against something that really did seem like copyright infringement but done using AI tooling? I started looking for some a few weeks ago because of this debate and still haven't seen anything conclusive.
"copyright infringement is not theft" is not an especially common view among artists or musicians, since copyright infringement threatens their livelihood. I don't think there's anything inconsistent about this. Yes, techies tend to hold the opposite view.
Personally, I think "copyright infringement is not theft" but I also think that using artists' work without their permission for profit is never OK, and that's what's happening here.
> I don't think there's anything inconsistent about this.
It amounts to saying that anything that benefits me is good and anything to my detriment is bad. Sure, there's a consistency to that. However, if that's the foundation of one's positions, it leads to all manner of other logical inconsistencies and hypocrisies.
Individual humans copying corporate products vs corporations copying the work of individual humans they didn't pay.
The confusion is that “copyright infringement is not theft” really was about being against corporate abuse of individuals. It's still the same situation here.
So it's okay to infringe on the copyright of a group of people getting paid by a corporation, but not on individual artists', and you should definitely not break open-source licenses?
I think we miscommunicated somewhere. I was being sarcastic when I said corporations were people. If we had a model of capitalism dominated by collective employee ownership, I think your ethical argument might work. We don't.
Copyright should not exist, but artists do need support somehow, and doing away with copyright without other radical changes to the economy and society leaves them high and dry. Abolishing copyright should be paired with other forms of support, such as UBI or worker councils, instead of scrapping it while clutching capitalist pearls and ultimately only accelerating capitalism at artists' expense.
Is it though? What if I were to look at your art style and replicate that style manually in my own works? I see no difference whether it's done by a machine or done by hand. The reality is that all art is derivative of some other art. Interestingly, the music industry has been doing this for years. Ever since samplers became a thing, musicians have spliced and diced loops into their own tracks for donkey's years, creating an explosion of new genres and sounds. Hip-hop, techno, dark ambient, EDM, ..., all fall into the same category. Machine learning is just another new tool to create something.
It’s not stolen. If I create a work mimicking the style of whomever, I’ve not taken anything from them besides an idea. Ideas are not protected. Ideas are the point. If you don’t want to share your ideas, feel free not to.
Most people do not understand the purpose of copyright. Copyright is a bargain between society and the creator. The creator receives limited protection of the work for a limited time. Why is this the deal?
The purpose of copyright is to advance the progress of science and the useful arts. It is to benefit humanity as a whole.
AI takes nothing more than an idea. It does not take a “creative expression fixed in a tangible media”.
I'd say it's more similar to an artist drawing influence from another artist, and there is a difference in that the machines can do it much more efficiently.
Personally, I'm all for AI training on and using human artwork. I think forbidding it prevents progress/innovation, and that innovation is going to happen somewhere regardless.
If it happens somewhere, humans who live in that somewhere will just use those tools to launder the AI-generated artwork, and companies will hire those offshore humans and reap the benefits, all the while, the effect on local artists' wages is even more negative because now they don't have access to the tools to compete in this ar(tificial intelligence)ms race.
That's a false analogy. Variable renames do not change anything; it's still an exact replica of the algorithm in question. Also, in engineering and computer science circles, cloning designs or code is often regarded as an acceptable practice, even encouraged (within the bounds of licensing). And for good reason: if there is a good solution to a problem, then why reinvent the wheel?
The last time this happened with a human, people were very angry; the guy who copied another's artwork even got dropped by his company. But you're right that it doesn't work that way in music.
As someone who's shifted careers twice because disruptive technologies made some other options impractical, I can definitely appreciate that some artists are very upset about the idea of maybe having to change their plans for the future (or maybe not, depending on the kind of art they make), but all art is built on art that came before.
How is training AI on imagery from the internet without permission different than decades of film and game artists borrowing H. R. Giger's style for alien technology?[1]
How is it different from decades of professional and amateur artists using the characteristic big-eyed manga/anime look without getting permission from Osamu Tezuka?
Copyright law doesn't cover general "style". Try to imagine the minefield that would exist if it were changed to work that way.
[1] No, I don't mean Alien, or other works that actually involved Giger himself.
> Copyright law doesn't cover general "style". Try to imagine the minefield that would exist if it were changed to work that way.
We don’t need to “try to imagine”, we just need to wait a bit and watch Walt’s reanimated corpse and army of undead lawyers come out swinging for those “mice in the general style of Mickey Mouse”.
Intellectual property and copyright are entirely different, and Disney would come after you for making those kinds of images with or without AI. I wish people in the fight against AI would stop trotting this argument out; it muddies stronger arguments against it.
Intellectual property generally includes copyright, patents, trademark, and trade secrets, though there are broader claims such as likeness, celebrity rights, moral rights (e.g., droit d'auteur in French/EU law), and probably a few others since I began writing this comment (the scope seems to be increasing, generally).
I suspect you intended to distinguish trademark and copyright.
So who's that mythical artist that hasn't seen and learned from the works of other artists? After all, these works will have left an imprint in their neural connections, so by the same argument their works are just as derivative, or "stolen".
These are not artists being inspired by the works of other artists, these are programmers taking the work of artists and then claiming to create original works when in fact they are automatically generated derivatives.
Try telling one of the programmers to produce a work of art based on a review of all of the works that went into training the models and see how it works out.
Modern artists use Photoshop and benefit from a lot of computational tools already. There isn't much difference between a computational or AI-assisted tool such as a "paint style digital brush" or "inpainting" and a tool such as a physical brush, paint knife, or toothbrush when used by the artist to achieve an effect. There is no universal rule that says only non-mechanically made art is real art. Collage artists who literally copy and paste other people's photos are also making art. In fact, Photoshop already incorporates many AI-assisted tools in the artist's repertoire, and being able to generate unique images from a statistical merging of all the art styles online is just another tool in this fashion. Automation is the foundation of all our progress: it is just the enhancement of a tool that replaces our hands and (metaphorically) makes them bigger, so that we can build bigger and better things.
OK, so now many more people can generate cool-looking images in an automatic fashion. So what? It just means we've raised the bar… for what can be considered cool.
> that something is not created in a mechanical fashion.
I wonder if the nerds have shot themselves in the foot here with terminology? I suspect the nerd’s lawyers would have been much happier if the entire field was named “automated mechanical creativity” instead of “artificial intelligence”. It’d be kinda amusing to see the whole field of study lose in court because of their own persistent claims that what they’re doing is not just “creating in a mechanical fashion” but creating “intelligence” which can therefore be held to account for copyright infringement. Shades of Al Capone getting busted for taxes…
So I employ quite a few artists, and I don't see the problem. This whole thing basically seems more like a filter in Photoshop than something that will take a person's job.
If artists I employ want to incorporate this stuff into their workflow, that sounds great. They can get more done. There won't be fewer artists on payroll; just more and better art will be produced. I don't even think it is at the point of being incorporated into a workflow yet though, so this really seems like a nothingburger to me.
At least GitHub Copilot is useful. This stuff is really not useful in a professional context, and the idea that it is going to take artists' jobs really doesn't make any sense to me. I mean, if there aren't any artists, then who exactly do I have that is using these AI tools to make new designs? If you think the answer to that is just some intern, then you really don't know what you're talking about.
With respect, you need to pay more attention to how and why these networks are used. People write complex prompts containing things like "trending on artstation" or "<skilled artist's name>" then use unmodified AI output in places like blog articles, profile headers, etc where you normally would have put art made by an artist.
Yes, artists can also utilize AI as a photoshop filter, and some artists have started using it to fill in backgrounds in drawings, etc. Inpainting can also be used to do unimportant textures for 3d models. But that doesn't mean that AI art is no threat to artists' livelihoods, especially for scenarios like "I need a dozen illustrations to go with these articles" where quality isn't so important to the commissioner that they are willing to spend an extra few hundred bucks instead of spending 15 minutes in midjourney or stable diffusion.
As long as these networks continue being trained on artists' work without permission or compensation, they will continue to improve in output quality and muscle the actual artists out of work.
That's only one side of a coin. If a tool is so advanced that it takes away the easy applications, then it's also advanced enough to create novel fields.
Take for example video games. They distracted many people from movies, but also created a huge new field, hungry for talent. Or another example: quite a few genres calcified into distinctive boring styles over the years (see anything related to manga/anime) simply because those styles require less mechanical work and are cheaper to produce. They could use a deep refresh. This tech will also lead to novel applications, created by those who embraced it and are willing to learn the increasingly complex toolset. That's what's been happening for the last several decades, which have seen several tech revolutions.
>As long as these networks continue being trained on artists' work
This misses the point. The real power of those things is not in the collection of styles baked into it. It's in the ability to learn new stuff. Finetuning and style transfer is what all the wizards do. Construct your own visual style by hand, make it produce more of that. And that's not just about static 2D images; neither do 2D illustrators represent all artists in the broad sense. Everyone who types "blah blah in the style of Ilya Kuvshinov" or is using img2img or whatever is just missing out, because the same stuff is going to be everywhere real soon.
If you are looking for a bunch of low quality art there are tons of free sources for that already. If this is what you mean when you say "putting artists out of work" you are really talking about less than 1% of where artist money is spent.
OK, so your argument here is "it doesn't matter because the art being replaced by AI is cheap and/or mass-produced"? What happens once the quality of the network-generated art goes up and it's able to displace more expensive works? What is the basis for your argument that this is "less than 1%"?
Art will get better and we will have artists that use AI tools to produce a lot more of it faster and entirely new professions will emerge as an evolution in art occurs and the world gets better.
This is like saying that photoshop is going to put all the artists out of work because one artist can now do the work of a team of people drawing by hand. So far these AIs are just tools. Tools help humans to produce more and the economy keeps chugging ever upwards.
There is no upper limit of how much art we need. Marvel movies and videogames will just keep looking better and better as our artists increase their capabilities using AI tools to assist them.
Daz3d didn't put modelers and artists out of work, and what Daz and iClone can do is way, way more impressive (and useful in a professional setting) than AI art.
Is 'looking at something' equivalent to stealing it? The use by all these diffusion networks is pretty much the definition of transformative. If a person was doing this it wouldn't even be interesting enough to talk about it. When a machine does it somehow that is morally distinct?
Humans have my sympathy. We are literally on the brink of multiple major industries being wiped out. What was only theoretical for the last 10-15 years is starting to happen right now.
In a few short years, most humans will not be able to find any employment, because machines will be more efficient and cheaper. Society will transform beyond any previous transformation in history. Most likely it's going to be very rough. But we just keep arguing that of course our specific jobs are going to stay.
That is the point of my comment :) I argue that the coming changes are underestimated, that there is not enough awareness, and thus not enough discussion of or preparedness for them. I would rather have a stable societal transition than hunger, riots, or civil or world war.
Honestly, I don't know. I spent the last few days thinking about all this more seriously than in the last 20 years.
Essentially we are going to move away from the market economy, money, and private property. The problem is that once these things go, personal freedom goes as well. So either accept the inevitable totalitarian society, or something else? But what?
I have no idea how well it holds up to modern reading, but I found it interesting at the time.
He posits two outcomes - in the fictionalised US the ownership class owns more and more of everything, because automation and intelligence remove the need for workers and even most technicians over time. Everyone else is basically a prisoner given the minimum needed to maintain life.
Or we can become “socialist” in a sort of techno-utopian way, realising that the economy and our laws should work for us and that a post-labor society should be one in which humans are free from dependence on work rather than defined by it.
Does this latter one imply a total lack of freedom? It certainly implies dependence on the state, but for most people (more or less by definition) an equal share would be a better share than they can get now, and they would be free to pursue art or learning or just leisure.
> But I have very little sympathy for those perpetuating this tiresome moral panic (a small number of actual artists, whatever the word "artist" means)
> A small number of actual artists
It's extremely funny that you say this, because taking a look at the Trending on Artstation page tells a different story.
And ironically, the overwhelming majority of the knowledge these models use to produce pictures that superficially look like artists' work (usually not at all) does not come from artworks at all. It's as simple as that. They are mostly trained on photos, which constitute the bulk of the models' knowledge about the real world and are the main source of coherency. Artist names and keywords like "trending on artstation" are just easily discoverable and very rough handles for pieces of the models' memory.
I don't think the fact that photos are making up the vast majority of the training set is of any particular significance.
Can SD create artistic renderings without actual art being incorporated? Just from photos alone? I don't believe so, unless someone shows me evidence to the contrary.
Hence, SD necessitates having artwork in its training corpus in order to emulate style, no matter how little it's represented in the training data.
SD has several separate parts. In the most simplistic sense (not entirely accurate to how it functions), one part translates English into a semantic address inside the "main memory", and another extracts the contents of the memory that the address refers to. If you prevent the first part (CLIP) from understanding artists' names by removing the correspondence between names and addresses, the data will still be there and can be addressed in other ways, for example via custom-trained embeddings. Even if you remove artworks from the dataset entirely, you can easily finetune it on anything you want using various techniques, because the bulk of the training ($$$!) has already been done for you, and the coherency (the knowledge of how things look in general: shapes, lighting, poses, etc.) is already there. You only need to skew it towards your desired style a bit.
Style transfer combined with the overall coherency of pre-trained models is the real power of these. "Country house in the style of Picasso" is generally not how you use this at full power, because "Picasso" is a poor descriptor for particular memory coordinates. You type "Country house" (a generic descriptor it knows very well) and provide your own embedding or any kind of finetuned addon to precisely lean the result towards the desired style, whether constructed by you or anyone else; see the sketch below.
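To make that concrete, here's a minimal sketch of that workflow using the open-source diffusers library. The model name, embedding file, and placeholder token are illustrative assumptions, not something from this thread:

    # Hypothetical sketch: a generic prompt plus a custom style embedding,
    # instead of "in the style of <famous artist>".
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # A textual-inversion embedding finetuned on a handful of reference images.
    pipe.load_textual_inversion("./my_style.bin", token="<my-style>")

    # Generic descriptor the model knows well, plus the custom style handle.
    image = pipe("country house, <my-style>").images[0]
    image.save("house.png")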
So, if anyone believes that this thing would drive the artists out of their jobs, then removing their works from the training set will change very little as it will still be able to generate anything given a few examples, on a consumer GPU. And that's only the current generation of such models and tools. (which admittedly doesn't pass the quality/controllability threshold required for serious work, just yet)
> The first is that textual input and treating the model as a function (command in -> result out) are sufficient for anything. No, this is a fundamentally deficient way to give artistic directions, which is further handicapped by primitive models and weak compute. Text alone is a toy
Some artists just do the descriptive part though, right? The name I can think of is Sol LeWitt, but I'm sure there are others. A lot of it looks like it could be programmed, but might be tricky.
I'm mostly seeing software developers looking at the textual equivalent, GPT-3, and giving a spectrum of responses from "This is fantastic! Take my money so I can use it to help me with my work!" to "Meh, buggy code, worse than dealing with a junior dev."
I think the two biggest differences between art AI and code AI are that (a) code that's only 95% right is just wrong, whereas art can be very wrong before a client even notices [0]; and (b) we've been expecting this for ages already, to the extent that many of us are cynical and jaded about what the newest AI can do.
[0] for example, I was recently in the Cambridge University Press Bookshop, and they sell gift maps of the city. The background of the poster advertising these is pixelated and has JPEG artefacts.
It's highly regarded, and the shop has existed since 1581, and yet they have what I think is an amateur-hour advert on their walls.
I know what you mean, but thinking about it critically, this is just wrong. All software has bugs in it. Small bugs, big bugs, critical bugs, security bugs, everything. No code is immune. The largest software used by millions every day has bugs. Library code that has existed and been in use for 30 years has bugs.
I don't think you were actually thinking of this in your comparison, but I think it's actually a great analogy: code, like art, can be 95% complete, and that's usually enough. (For art, "looks good and is what I wanted" is enough; for code, "does what I want right now, never mind edge cases" is enough.)
This depends entirely on _how_ the code is wrong. I asked ChatGPT to write me code in Python that would calculate SHAP values when given a sklearn model the other day. It returned code that ran, and even _looked_ like it did the right thing at a cursory glance. But I've written a SHAP package before, and there were several manipulations it got wrong. I mean completely wrong. You would never have known the code was wrong unless you knew how to write the code in the first place.
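For comparison, the intended result is only a few lines when you lean on the shap package itself. A rough sketch (the model and dataset here are illustrative), including the additivity check that hand-rolled implementations most often get wrong:

    # Sketch: SHAP values for a tree-based sklearn model via the shap package.
    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor

    X, y = load_diabetes(return_X_y=True, as_frame=True)
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    explainer = shap.TreeExplainer(model)   # exact algorithm for tree models
    shap_values = explainer.shap_values(X)  # one attribution per feature per row

    # Additivity: attributions plus the expected value should equal the prediction.
    pred = model.predict(X.iloc[[0]])[0]
    assert abs(shap_values[0].sum() + explainer.expected_value - pred) < 1e-6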
To me, code that is 95% correct will either fail catastrophically or give very wrong results. Imagine if the code you wrote was off by 5% for every number it was supposed to generate. Code that is 99.99% correct will introduce subtle bugs.
No shade to ChatGPT; writing a function that calculates SHAP values is tough lol, I just wanted to see what it could do. I do think that, given time, it'll be able to write a day's worth of high-quality code in a few seconds.
The thing about ChatGPT is that it is a warning shot. And all these people I see talking about it are laughing about how the shooter missed them.
Clearly ChatGPT is going to improve, and AI development is moving at a breakneck pace and accelerating. Dinging it for totally fumbling 5% or 10% of written code is completely missing the forest for the trees.
Yeah, but people were also saying this about self-driving cars, and guess what: that long tail is super long, and it's also far fatter than we expected. 10 years ago people were saying AI was coming for taxi drivers, and as far as I can tell we're still 10 years away.
I'm nonplussed by ChatGPT because the hype around it is largely the same as it was for GitHub Copilot, and Copilot fizzled badly. (Full disclosure: I pay for Copilot because it is somewhat useful).
I wonder if some of this is the 80/20 rule. We're seeing the easy 80% of the solutions, which took 20% of the time. We still have the hard 20%, which takes 80% of the time (or most of it), to go for some of these new techs.
Tesla makes self-driving cars that drive better than humans. The reason you have to touch the steering wheel periodically is political/social, not technical. An acquaintance of mine reads books while he commutes 90 minutes from Chattanooga to work in Atlanta once or twice a week. He's sitting in the driver's seat, but he's certainly not driving.
The political/social factors which apply to the life-and-death decisions made driving a car, don't apply to whether one of the websites I work on works perfectly.
I'm 35, and I've paid to write code for about 15 years. To be honest, ChatGPT probably writes better code than I did at my first paid internship. It's got a ways to go to catch up with even a junior developer in my opinion, but it's only a matter of time.
And how much time? The expectation in the US is that my career will last until I'm 65ish. That's 30 years from now. Tesla has only been around 19 years and now makes self-driving cars.
So yeah, I'm not immediately worried that I'm going to lose my job to ChatGPT in the next year, but I am quite confident that my role will either cease existing or drastically change because of AI before the end of my career. The idea that we won't see AI replacing professional coders in the next 30 years strains credulity.
Luckily for me, I have already considered some career changes I'd want to make even if I weren't forced to by AI. But if folks my age were planning to finish out their careers in this field, they should come up with an alternative plan. And people entering this field are already in direct competition to stay ahead of AI.
I was of the impression that Tesla's self driving is still not fully reliable yet. For example a recent video shows a famous youtuber having to take manual control 3 times in a 20 min drive to work [0]. He mentioned how stressful it was compared to normal driving as well.
If you watch the video you linked, he admits he's not taking manual control because it's unsafe, but because he's embarrassed. It's hard to tell from the video, but it seems like the choices he makes out of embarrassment are actually more risky than what the Tesla was going to do.
It makes sense. My own experience, nearly always driving a non-Tesla car at the speed limit, is that other drivers will try to pressure you into doing dangerous stuff so they can get where they're going a few seconds faster. I sometimes give in to that pressure, but the AI doesn't feel it at all. So if you're paying attention and see the AI not giving in, the tendency is to take manual control so you can. But that's not safer; quite the opposite. That's an example of the AI driving better than the human.
On the opposite end of the social anxiety spectrum, there's a genre of pornography where people are having sex in the driver's seats of Teslas while the AI is driving. They certainly aren't intervening 3 times in 20 minutes, and so far I don't know of any of these people getting in car accidents.
I'm doubtful. There's a pretty big difference between writing a basic function and even a small program, and that's all I've seen out of these kinds of AIs thus far; they still get those wrong regularly, because they don't really understand what they're doing, just mixing and matching their training set.
Roads are extremely regular, as things go, and as soon as you are off the beaten path, those AIs start having trouble too.
It seems that, in general, the long tail will be problematic for a while yet.
> [...] Copilot fizzled badly. (Full disclosure: I pay for Copilot because it is somewhat useful).
In what sense did Copilot fizzle badly? It's a tool that you incorporated into your workflow and that you pay money for.
Does it solve all of programming? No, of course not, and it's far from there. I think even if it improves a lot, it will not come close to replacing a programmer.
But a tool that lets you write code 10x or 100x faster is a big deal. I don't think we're far from a world in which every programmer has to use AI to be considered proficient at their job.
Sure, it will improve, but I think a lot of people think "Hey, it almost looks human quality now! Just a bit more tweaking and it will be human quality or better!". But a more likely case is that the relatively simple statistical modeling tools (which are very different from how our brains work, not that we fully understand how our brains work) that chatGPT uses have a limit to how well they work and they will hit a plateau (and are probably near it now). I'm not one of those people who believe strong AI is impossible, but I have a feeling that strong AI will take more than that just manipulating a text corpus.
I'd be surprised if it did only take text (or even language in general), but if it does only need that, then given how few parameters even big GPT-3 models have compared to humans, it will strongly imply that PETA was right all along.
Excellent summation. The majority of software developers work on CRUD-based frontend or backend development. Once this thing's attention goes beyond the 4k tokens it's limited to, far fewer developers will be needed in general. In the same way, fewer artists or illustrators will be needed for making run-of-the-mill marketing brochures.
I think the majority won't know what hit them when the time comes. My experience with ChatGPT has been highly positive, changing me from a skeptic to a believer. It takes a bit of skill to tune the prompts, but I got it to write frontend code, backend code, unit tests, automation tests, and generated test data flawlessly. I have seen and worked with much worse developers than this current iteration.
The thing is, though, it's trained on human text. And most humans are, by definition, very fallible. Unless someone makes it so that it can never be trained on subtly wrong code, how will it ever improve? IMHO, AI can be great for suggestions as to which method to use (Visual Studio has this, and I think there is an extension for Visual Studio Code for a couple of languages). I think fine-grained things like this are very useful, but code snippets are just too coarse to actually be helpful.
Anyone who has doubts has to look at the price. It's free for now, and will be cheap enough when OpenAI starts monetizing. Price wins over quality. It's been demonstrated time and time again.
Depends on the details. Skip all the boring health and safety steps, you can make very cheap skyscrapers. They might fall down in a strong wind, but they'll be cheap.
After watching lots of videos from 3rd world countries where skyscrapers are built and then torn down a few years later, I think I know exactly how this is going to go.
It does depend on the details. In special fields, like medical software, regulation might alter the market—although code even there is often revealed to be of poor quality.
But of all the examples of cheap and convenient beating quality: photography, film, music, et al, the many industries that digital technology has disrupted, newspapers are more analogous than builders. Software companies are publishers, like newspapers. And newspapers had entire building floors occupied by highly skilled mechanical typesetters, who have long been replaced. A handful of employees on a couple computers could do the job faster, more easily, and of good enough quality.
Software has already disrupted everything else, eventually it would disrupt the process of making software.
I experienced ChatGPT confidently giving incorrect answers about the Schwarzschild radius of the black hole at the center of our galaxy, Sagittarius A*. Both when asked about "the Schwarzschild radius of a black hole with 4 million solar masses" (a calculation) and "the Schwarzschild radius of Sagittarius A*" (a simple lookup).
Both answers were orders of magnitude wrong, and vastly different from each other.
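For reference, the calculation being asked for is a one-liner, r_s = 2GM/c^2; a quick sketch to check the numbers:

    # Schwarzschild radius of a ~4-million-solar-mass black hole (Sagittarius A*).
    G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8           # speed of light, m/s
    M = 4.0e6 * 1.989e30  # 4 million solar masses, in kg

    r_s = 2 * G * M / c**2
    print(f"{r_s:.2e} m")  # ~1.2e10 m, i.e. roughly 0.08 AU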
JS code suggested for a simple database connection had glaring SQL injection vulnerabilities.
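That vulnerability class is easy to illustrate. A minimal sketch in Python's sqlite3 (the same principle applies to the suggested JS code): never splice user input into the SQL string; pass it as a parameter instead.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    name = "Robert'); DROP TABLE users;--"  # hostile user input

    # Vulnerable (what the suggested code did, in spirit):
    #   conn.execute(f"SELECT role FROM users WHERE name = '{name}'")

    # Safe: a parameterized query; the driver handles quoting.
    rows = conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()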
I think it's an ok tool for discovering new libraries and getting oriented quickly to languages and coding domains you're unfamiliar with. But it's more like a forum post from a novice who read a tutorial and otherwise has little experience.
My understanding is that ChatGPT (and similar things) are purely language models; they do not have any kind of "understanding" of anything like reality. Basically, they have a complex statistical model of how words are related.
I'm a bit surprised that it got a lookup wrong, but for any other domain, describing it as a "novice" is understating the situation a lot.
Over the weekend I tried to tease a sed command out of ChatGPT that would fix an uber-simple compiler error [0]. I gave up after 4 or 5 tries: while it got the root cause correct ("." instead of "->" because the property was a pointer), it just couldn't figure out the right sed command. That's such a simple task that its failure doesn't inspire confidence in it getting more complicated things correct.
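For what it's worth, the command being fished for is a one-liner; a sketch assuming the offending pointer variable is named foo (a hypothetical name):

    # Rewrite "foo." to "foo->" in place, since foo is a pointer (GNU sed).
    sed -i 's/\bfoo\./foo->/g' src/main.cpp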
This is the main reason I haven't actually incorporated any AI tools into my daily programming yet - I'm mindful that I might end up spending more time tracking down issues in the auto-generated code than I saved using it in the first place.
Whether 95% or 99.9% correct, when there is a serious bug, you're still going to need people that can fix the gap between almost correct and actually correct.
Sure, but how much of the total work time in software development is writing relatively straightforward, boilerplate type code that could reasonably be copied from the top answer from stackoverflow with variable names changed? Now maybe instead of 5 FTE equivalents doing that work, you just need the 1 guy to debug the AI's shot at it. Now 4 people are out of work, or applying to be the 1 guy at some other company.
> Sure, but how much of the total work time in software development is writing relatively straightforward, boilerplate type code that could reasonably be copied from the top answer from stackoverflow with variable names changed?
It may be a significant chunk of the butt-in-seat time under our archaic 40-hour/week paradigm, but it's not a significant chunk of the programmer's actual mental effort. You're not going to get people to work 5x more intensely by automating the boring stuff; that was never the limiting factor.
Does anyone remember the old maxim, "Don't write code as cleverly as you can because it's harder to debug than it is to write and you won't be clever enough"?
Two issues. First, when a human gets something 5% wrong, it's more likely to be a corner case or similar "right most of the time" scenario, whereas when AI gets something 5% wrong, it's likely to look almost right but never produce correct output. Second, when a human writes something wrong they have familiarity with the code and can more easily identify the problem and fix it, whereas fixing AI code (either via human or AI) is more likely to be fraught.
You (and everyone else) seem to be making the classic "mistake" of looking at an early version and not appreciating that things improve. Ten years ago, AI-generated art was at 50%. 2 years ago, 80%. Now it's at 95% and winning competitions.
I have no idea if the AI that's getting code 80% right today will get it 95% right in two years, but given current progress, I wouldn't bet against it. I don't think there's any fundamental reason it can't produce better code than I can, at least not at the "write a function that does X" level.
Whole systems are a way harder problem that I wouldn't even think of making guesses about.
It might improve like Go AI and shock everyone by beating the world expert at everything, or it might improve like Tesla FSD which is annoyingly harder than "make creative artwork".
There's no fundamental reason it can't be the world expert at everything, but that's not a reason to assume we know how to get there from here.
What scares me is a death of progress situation. Maybe it cant be an expert, but it can be good enough, and now the supply pipeline of people who could be experts basically gets shut off, because to become an expert you needed to do the work and gain the experiences that are now completely owned by AI.
The problem of a vengeful god who demands the slaughter of infidels lies not in his existence or nonexistence, but in people's belief in such a god.

Similarly, it does not matter whether AI works or it doesn't. It's irrelevant how good it actually is. What matters is whether people "believe" in it.

AI is not a technology, it's an ideology. Given time it will fulfil its own prophecy, as "we who believe" steer the world toward that.

That's what's changing now. It's in the air.

The ruling classes (those who own capital and industry) are looking at this. The workers are looking too. Both of them see a new world approaching, and actually everyone is worried. What is under attack is not the jobs of the current generation, but the value of human skill itself, for all generations to come. And, yes, it's the tail of a trajectory we have been on for a long time.

It isn't the only way computers can be. There is IA instead of AI. But intelligence amplification goes against the principles of capital at this stage. Our trajectory has been to make people dumber in service of profit.
What's under attack is the notion that humans are special - that there's some kind of magic to them that is fundamentally impossible to replicate. No wonder there's a full-blown moral panic about this.
Agreed, but that train left the station in the late 1800s, driven by Darwin and Nietzsche. The intervening one and a half centuries haven't dislodged the "human spirit" in its secular form. We thought we'd overcome "gods". Now, out of discontent and self-loathing, we're going to do what Freud warned against, and find a new external something to subjugate ourselves to. We simply refuse to shoulder the burden of being free.
Maybe AI can replicate everything humans can do. But this technology isn't that. It just mass-reads and replicates what humans have already done, and actual novel implementations seem out of its grasp (for now). The art scene is freaking out because a lot of art is basically derivative already, but everyone pretended it was not. Coders already knew, and admitted they stole all the time.
The other patterns of AI that seem able to arrive at novel solutions basically use a brute-force approach of predicting every outcome when they have perfect information, or a brute-force process of trying everything until they find the thing that "works". Both of those approaches seem problematic in the "real world". (Though I would find convincing the argument that billions of people all trying things act as a de facto brute-force approach in practice.)
For someone to be able to do a novel implementation in a field dominated by AI might be impossible, because humans can no longer develop the core foundational skills needed to reach heights the AI hasn't reached yet. We are then stuck; things can't really get "better", we just get iterative improvements on how the AI implements the already-arrived-at solutions.
TL;DR: let's sic the AI on making a new JavaScript framework and see what happens :)
> What is under attack is not the jobs of the current generation, but the value of human skill itself, for all generations to come. And, yes, it's the tail of a trajectory we have been on for a long time.
Wow, yes. This is exactly what I've been thinking but you summed it up more eloquently.
Tesla is limited by the processing power contained in the chip of each car. That's not the case for language models; they can get arbitrarily large without much problem with latency. If Tesla could train just one huge model in a data center and deliver it by API to every car I bet self driving cars would have already been a reality.
To be fair to those assumptions, there've been a lot of cases of machine-learning (among other tech) looking very promising, and advancing so quickly that a huge revolution seems imminent—then stalling out at a local maximum for a really long time.
The architecture behind ChatGPT and the other AIs that are making the news won't ever improve to the point where it can correctly write non-trivial code. There is a fundamental reason for that.
Other architectures exist, but you can notice from the lack of people talking about them that they don't produce any output nearly as developed as the chatGPT kind. They will get there eventually, but that's not what we are seeing here.
> The architecture behind ChatGPT and the other AIs that are making the news won't ever improve to the point where it can correctly write non-trivial code. There is a fundamental reason for that.
Probably because it doesn't maintain long-term cohesion. Transformer models are great at producing things that look right over short distances, but as the output length increases, it often becomes contradictory or nonsensical.

To get good output at larger scales, we're going to need a model that is hierarchical, with longer-term self-attention.
Whole systems from a single prompt are probably a ways away, but I was able to get further than I expected by asking it what classes would make up the task I was trying to do and then having it write those classes.
And GPT can't fix a bug; it can only generate new text that will have a different collection of bugs. The catch is that programming isn't text generation. But AI should be able to make good, actually intelligent fuzzers; that seems realistic and useful.
It can't? I could've sworn I've seen (cherry-picked) examples of it doing exactly that, when prompted. It even explains what the bug is and why the fix works.
Those are cherry picked, and most importantly, all of the examples where it can fix a bug are examples where it's working with a stack trace, or with an extremely small section of code (<200 lines). At what point will it be able to fix a bug in a 20,000 line codebase, with only "When the user does X, Y unintended consequence happens" to go off of?
It's obvious how an expert at regurgitating StackOverflow would be able to correct an NPE or an off-by-one error when given the exact line of code that error is on. Going any deeper, and actually being able to find a bug, requires understanding of the codebase as a whole and the ability to map the code to what the code actually does in real life. GPT has shown none of this.
"But it will get better over time" arguments fail for this because the thing that's needed is a fundamentally new ability, not just "the same but better." Understanding a codebase is a different thing from regurgitating StackOverflow. It's the same thing as saying in 1980, "We have bipedal robots that can hobble, so if we just improve on that enough we'll eventually have bipedal robots that beat humans at football."
It is only a matter of time. It can understand an error stack trace and suggest a fix. Somebody just has to plug it into an IDE, and then it will start converting requirements to code.
Yes it can; I've been using it for exactly that: "This code is supposed to do X but does Y / has Z error; fix the code."
Sure you can't stick an entire project in there, but if you know the problem is in class Baz, just toss in the relevant code and it does a pretty damn good job.
Sure, but now you only need testers and one coder to fix bugs, where you used to need testers and 20 coders. AI code generators are force multipliers, maybe not strict replacements. And the level of creativity needed to fix a bug, relative to programming something wholly original, is worlds apart.
Maybe for certain domains it's okay to fail 5% of the time but a lot of code really does need to be perfect. You wouldn't be able to work with a filesystem that loses 5% of your files.
> I know what you mean, but thinking about it critically, this is just wrong. All software has bugs in it. Small bugs, big bugs, critical bugs, security bugs, everything. No code is immune. The largest software used by millions every day has bugs. Library code that has existed and been in use for 30 years has bugs.
All software has bugs, but it's usually far better than "95% right." Code that's only 95% right probably wouldn't pass half-assed testing or a couple of days of actual use.
Fixing the last 5% requires that you understand 100% of it. And understanding is the main value added by a programmer, not typing characters into a text editor.
I agree with you. Even software that had no bugs today (if that is possible) could start having bugs tomorrow, as the environment changes (new law, new hardware, etc.)
EDIT: I posted this comment twice by accident! This comment has more details but the other has more answers, so please check the other one!
> code that's only 95% right is just wrong,
I know what you mean, but thinking about it critically, this is just wrong. All software has bugs in it. Small bugs, big bugs, critical bugs, security bugs, everything. No code is immune. The largest software used by millions every day has bugs. Library code that has existed and been in use for 30 years has bugs.
I don't think you were actually thinking of this in your comparison, but I think it's actually a great analogy: code, like art, can be 95% complete, and that's usually enough. (For art, "looks good and is what I wanted" is enough; for code, "does what I want right now, never mind edge cases" is enough.)
The reason ChatGPT isn't threatening programmers is for other reasons. Firstly, its code isn't 95% good, it's like 80% good.
Secondly, we do a lot more than write one-off pieces of code. We write much, much larger systems, and the connections between different pieces of code, even on a function-to-function level, are very complex.
> The reason ChatGPT isn't threatening programmers is for other reasons. Firstly, its code isn't 95% good, it's like 80% good.
The role that could plausibly be highly streamlined by a near-future ChatGPT/Copilot is the requirements-gathering business analyst, but for developers at Staff level on up, it sits closer to requiring AGI to even become 30% good. We'll likely see a bifurcation/barbell: Moravec's Paradox on one end, AGI on the other.
An LLM that can transcribe a verbal discussion with a domain expert about a particular business process with high fidelity, give a précis of domain jargon to a developer in a sidebar, extract further jargon created by the conversation, summarize the discussion into documentation, and capture the hows and whys like a judicious editor might at 80% fidelity, then put out semi-working code at even 50% fidelity; one that works 24x7x365 and automatically incorporates everything it created for you before and that your team polished into working code and final documentation?
I have clients who would pay for an initial deployment of that as an appliance/container head end that transits the processing through the vendor SaaS's GPU farm but holds the model data at rest within their own network / cloud account boundary. Being able to condense weeks or even months of work by a team into several hours, plus a handful of developers to tighten and polish the result, would be interesting to explore as a new way to work.
>> "I think the two biggest differences between art AI and code AI are that (a) code that's only 95% right is just wrong, whereas art can be very wrong before a client even notices [0];"
Art can also be extremely wrong in a way everyone notices and still be highly successful. For example: Rob Liefeld.
Artists are, necessarily, perfectionists about their work — it's the only way to get better than the crude line drawings and wildly wrong anatomy that most people can do.
Frustratingly, most people don't fully appreciate the art, and are quite happy for artists to put in only 20% of the effort. Heck, I'm old enough to remember people who regarded Quake as "photorealistic": some in a negative way, saying this made it a terrible threat to the minds of children who might see the violence it depicted, and others in a positive way, saying it was so good that Riven should've used that engine instead of being pre-rendered.
Bugs like this are easy to fix: `x = x – 4;` which should be `x = x - 4;` (an en-dash where a minus sign belongs).
Bugs like this, much harder:
    #include <string.h>

    #define TOBYTE(x) (x) & 255
    #define SWAP(x,y) do { x^=y; y^=x; x^=y; } while (0)

    static unsigned char A[256];
    static int i=0, j=0;

    void init(char *passphrase) {
        int passlen = strlen(passphrase);
        for (i=0; i<256; i++)
            A[i] = i;
        for (i=0; i<256; i++) {
            /* the subtle bug: RC4's standard key schedule indexes the
               passphrase with i (passphrase[i % passlen]), not j */
            j = TOBYTE(j + A[TOBYTE(i)] + passphrase[j % passlen]);
            SWAP(A[TOBYTE(i)], A[j]);
        }
        i = 0; j = 0;
    }

    unsigned char encrypt_one_byte(unsigned char c) {
        int k;
        i = TOBYTE(i+1);
        j = TOBYTE(j + A[i]);
        SWAP(A[i], A[j]);
        k = TOBYTE(A[i] + A[j]);
        return c ^ A[k];
    }
I do appreciate that the way in which a piece of code "works" and the way in which a piece of art "works" are in some ways totally different. But I also think that in many cases, notably automated systems that create reports or dashboards, they aren't so far apart. In the end, the result just has to seem right. Even in computer programming, amateur-hour levels of correctness aren't so uncommon, I would say.
I would personally be astonished if any of the distributed systems I've worked on in my career were even close to 95% correct, haha.
Understanding what you are plotting and displaying in the dashboard is the complicated part, not writing the dashboard. Programmers are not very afraid of AI because it is still just a glorified frontend to Stack Overflow, and SO has not destroyed the demand for programmers so far. Also, understanding the subtle logical bugs and errors introduced by such boilerplate AI tools requires no less expertise than knowing how to write the code upfront. Debugging is not a very popular activity among programmers for a reason.
It may be that one day AI will also make their creators obsolete. But at that point so many professions will be replaced by it already, that we will live in a massively changed society where talking about the "job" has no meaning anymore.
A misleading dashboard is really, really bad. This is absolutely not something I would be happy to hand to an AI just because "no one will notice". The fact that no one will notice errors until it's too late is exactly why dashboards need extra effort from their author to actually test the thing.
If you want to give programming work to an AI, give it the things where incorrect behaviour is going to be really obvious, so that it can be fixed. Don't give it the stuff where everyone will just naively trust the computer without thinking about it.
The other day I copied a question from leetcode and asked GPT to solve it.
The solution had the correct structure to be interpreted by leetcode (a Solution class with the correct method name and signature, and with the same linked-list implementation leetcode would use). It made me feel like GPT was not actually solving anything, just copying and pasting code it had read on the internet.
Setting aside questions of whether there is copyright infringement going on, I think this is an unprecedented case in the history of automation replacing human labor.
Jobs have been automated since the industrial revolution, but this usually takes the form of someone inventing a widget that makes human labor unnecessary. From a worker's perspective, the automation is coming from "the outside". What's novel with AI models is that the workers' own work is used to create the thing that replaces them. It's one thing to be automated away, it's another to have your own work used against you like this, and I'm sure it feels extra-shitty as a result.
> From a worker's perspective, the automation is coming from "the outside".
Not if the worker is an engineer or similar. Some engineers built tools that improved the building of tools.
And this started even earlier than the industrial revolution. Think, for example, of Johannes Gutenberg. His most important invention was not the printing press (that already existed), nor even movable type, but a process by which a printer could mold his own set of identical movable types.
I see a certain analogy between what Gutenberg's invention meant for scribes then and what Stable Diffusion means for artists today.
Another thought: in engineering we do not have extremely long-lasting copyright, but much shorter protection periods via patents. I have never understood why software has to be protected for such long copyright terms rather than for much shorter, patent-like periods. Perhaps we should look for something similar for AI and artists: an artist has copyright as usual for close reproductions, but 20 years after publication the work may be used without his or her consent for training AI models.
That was not what was at issue in my comment. It referred to a sentence where the Parent was not talking about Stable Diffusion in particular, but about what he claimed was a general difference from the usual conditions since the industrial revolution. My comment merely referred to the fact that this is not generally true everywhere (in most specific cases, of course, it may very well be true). In this context, the real difference, however, with regard to Stable Diffusion is not the involuntary nature of the artists' "contributions", but the fact that the artists are not usually the developers of the AI software. In this respect, the Parent is right that for them all this comes from "the outside". It is just that I wanted to point out that this does not apply equally to all professional groups.
Some did, others did not. But those who did could still put the entire corpus of engineering knowledge they had studied towards their goal, even if they learned it from those who would not approve.
I don't know why we keep framing artists like they're textile workers or machinists.
The whole point of art is human expression. The idea that artists can be "automated away" is just sad and disgusting and the amount of people who want art but don't want to pay the artist is astounding.
Why are we so eager to rid ourselves of what makes us human to save a buck? This isn't innovation, it's self-destruction.
Most art consumed today isn't about human expression, and it hasn't been for a very long time. Most art is produced for commercial reasons with the intent of making as much profit as possible.
Art-as-human-expression isn't going anywhere because it's intrinsically motivated. It's what people do because they love doing it. Just like people still do woodworking even though it's cheaper to buy a chair from Walmart, people will still paint and draw.
What is going to go away is design work for low-end advertising agencies or for publishers of cheap novels or any of the other dozens of jobs that were never bastions of human creativity to begin with.
It's an important distinction you make and hard to talk about without a vocabulary. The terms I've seen music historians use for this concept were:
- generic expression: commercial/pop/entertainment; audience makes demands on the art
- autonomous expression: artist's vision is paramount; art makes demands on the audience
Obviously these are idealized antipodes. The question of whether it is the art making demands on the audience or the audience making demands on the art is especially insightful, in my opinion. Given this rubric, I'd say AI-generated art must necessarily belong to "generic expression", simply because its output has to meet fitness criteria.
I think fine artists and others who make and sell individual art pieces for a living will probably be fine, yeah. (Or at least won't be struggling much worse than they are already.)
There are a lot of working commercial artists in between the fine art world and the "cheap novels and low-end advertising agencies" you dismiss, and there's no reason to think AI art won't eat a lot of their employment.
Of course it will. Their employment isn't sacred. They have a skill, we're teaching that skill to computers, and their skill will be worth less.
I don't pay someone to run calculations for me, either, also a difficult and sometimes creative process. I use a computer. And when the computer can't, then I either employ my creativity, or hire a creative.
Okay, but that's a different argument from your original. First you said "only bad artists will lose their jobs," now it's "good artists will lose their jobs but I don't care."
I also agree that artist employment isn't sacred, but after extensive use of the generation tools I don't see them replacing anything but the lowest end of the industry, where they just need something to fill a space. The tools can give you something that matches a prompt, but they're only really good if you don't have strong opinions about details, which most middle tier customers will.
Just like AI can't replace programmers completely because most people are terrible at defining their own software requirements, AI won't replace middle-tier commercial artists because most people have no design sense.
Commercial art needs to be eye catching and on brand if it's going to be worth anything, and a random intern isn't going to be able to generate anything with an AI that matches the vision of stakeholders. Artists will still be needed in that middle zone to create things that are on brand, that match stakeholder expectations, and that stand out from every other AI generated piece. These artists will likely start using AI tools, but they're unlikely to be replaced completely any time soon.
That's why I only mentioned the bottom tier of commercial art as being in danger. The only jobs that can be replaced by AI with the technology that we're seeing right now are in the cases where it really doesn't matter exactly what the art looks like, there just has to be something.
Because when people discuss "art" they are really discussing two things.
The first is static 2D images that usually serve a commercial purpose, e.g. logos, clip art, game sprites, web page design and the like.
And the second is pure art whose purpose is more for the enjoyment of the creator or the viewer.
Business wants to fully automate the first case, and most people view it as having nothing to do with the essence of humanity. It's simply dollars for products. But it's also one of the very few ways that artists can actually have paying careers built on their skills.
The second will still exist, although almost nobody in the world can pay bills off of it. And I wouldn't be shocked if ML models start encroaching there as well.
So a lot of what's being referred to is more like textile workers. And anyone who can type a few sentences can now make "art" significantly lowering barriers to entry. Maybe a designer comes and touches it up.
The short-sighted part is people thinking that this will somehow stay specific to art and that their cherished field is immune.
Programming will soon follow. Any PM "soon enough" will be able to write text to generate a fully working app. And maybe a coder comes in to touch it up.
You're defining the word "art" in one sentence and then using a completely different definition in the next sentence. Where are these people who want art, as you've defined it, but don't want to pay? Most of the people you're referring to want visual representations of their fursonas, or D&D characters, or want marketing material for their product. They're not trying to get human expression.
In the sense that art is a 2D visual representation of something, or a marketing tool that evokes a biological response in the viewer, art is easy to automate away. This is no different than when the camera replaced portraitists. We've just invented a camera that shows us things that don't exist.
In the sense that art is human expression, nobody has even tried to automate that yet and I've seen no evidence that expressionary artists are threatened.
It's ironic seeing your earlier comment on ChatGPT coding and then this. If anything is easier to automate, it's programming, which can be rigorous and rule-bound, while art really isn't. Art is only "easy" for those who don't understand it, which is what the person is actually talking about.
You're in for a rude awakening when you get laid off and replaced with a bot that creates garbage code that is slow and buggy but works, so the boss gets to save on your salary. "But it's slow, redundant, and looks like it was made by someone who just copied and pasted endlessly from Stack Overflow", but your boss won't care; he just needs to make a buck.
For someone seeking sound/imagery/etc. resulting from human expression (i.e., art), it makes sense that it can't be automated away.
For someone seeking sound/imagery/etc. without caring whether it's the result of human expression (e.g., AI artifacts that aren't art), it can be automated away.
The idea that artists can be automated away is really just kind of dumb, not because people like AI-created art and can get it cheap, but because it has no real impact on the "whole point" of art: the act of creating it. Pure art, as human expression, has no dependency on money. Anecdotally, I very much enjoy painting and music (and coding) as art forms but have never sold a painting nor a song in my life. Just because someone won't pay you for something doesn't mean it has no value.
As far as money goes... in the long run, artists will still make money just fine, as people will value human-made (artisanal) works. Just as people like hand-made stuff today, even though you can get machine-made stuff way cheaper. You may not have the generic jobs of cranking out stuff for advertisements (and such), but you'll still have artists.
I follow plenty of artists on Elon's hellsite and professional artists of all stripes are upset about it. Jobs are already disappearing, being replaced entirely by AI and "prompt engineers" or people just using AI to copy someone's style for their portfolio. Granted, it isn't endemic yet, but the big Indiana Jones stone ball of progress is definitely rolling in that direction.
That is not what the post I was responding to was about, it was about the art as human expression. Nothing was said about it as a profession and making money creating art makes zero difference as to the worth of the art.
I wouldn't say that the automation coming from the inside is unique to AI art. You very much need a welder's understanding of welding in order to automate it, for example.
I'd just say the scale is different. Old-school automation required one expert to guide the development of an automation. AI art requires the expertise of thousands.
We need a better way to reward the contributing artists who make the diffusion models possible. Might we be able to come up with a royalty model, where the artist who made the original source content used in training gets a fractional royalty based on how heavily their work is drawn on when generating the prompted piece? We want to incentivize artists to feed their works, and original styles, into future AI models.
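The payout bookkeeping itself is the easy half; the genuinely hard, unsolved half is producing per-artist attribution scores for a given generation. A minimal sketch of the easy half, with every name and number below purely hypothetical:

# Hypothetical sketch: split a per-image royalty pool across artists in
# proportion to externally supplied attribution scores (estimating those
# scores is the hard research problem, assumed away here).

def split_royalties(pool_cents: int, attribution: dict) -> dict:
    """Divide pool_cents among artists in proportion to their scores."""
    total = sum(attribution.values())
    if total <= 0:
        return {}
    # Round to whole cents; a real system would also handle the remainder.
    return {artist: round(pool_cents * score / total)
            for artist, score in attribution.items()}

# e.g. a generation with a 20-cent royalty pool:
print(split_royalties(20, {"artist_a": 0.6, "artist_b": 0.3, "artist_c": 0.1}))
# -> {'artist_a': 12, 'artist_b': 6, 'artist_c': 2}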
Absolutely this -- and in many (maybe most cases), there was no consent for the use of the work in training the model, and quite possibly no notice or compensation at all.
That's a huge ethical issue whether or not it's explicitly addressed in copyright/ip law.
It is not a huge ethical issue. The artists have always been at risk of someone learning their style if they make their work available for public viewing.
We've just made "learning style" easier, so a thing that was always a risk is now happening.
Let's shift your risk of immediate assault and death up by a few orders of magnitude. I'm sure you'll see that as "just" something that was always a risk, pretty much status quo, right?
Oh, life & death is different? Don't be so sure; there are good reasons to believe that livelihood (not to mention social credit) and life are closely related -- and also, the fundamental point doesn't depend on the specific example: you can't point to an orders-of-magnitude change and then claim we're dealing with a situation that's qualitatively like it's "always" been.
"Easier" doesn't begin to honestly represent what's happened here: we've crossed a threshold where we have technology for production by automated imitation at scale. And where that tech works primarily because of imitation, the work of those imitated has been a crucial part of that. Where that work has a reasonable claim of ownership, those who own it deserve to be recognized & compensated.
> The 'reasonable claim of ownership' extends to restricting transmission, not use after transmission.
It's not even clear you're correct by the apparent (if limited) support of your own argument. "Transmission" of some sort is certainly occurring when the work is given as input. It's probably even tenable to argue that a copy is created in the representation of the model.
You probably mean to argue something to the effect that dissemination by the model is the key threshold by which we'd recognize something like the current copyright law might fail to apply, the transformative nature of output being a key distinction. But some people have already shown that some outputs are much less transformative than others -- and even that's not the overall point, which is that this is a qualitative change much like those that gave birth to industrial-revolution copyright itself, and calls for a similar kind of renegotiation to protect the underlying ethics.
People should have a say in how the fruits of their labor are bargained for and used. Including into how machines and models that drive them are used. That's part of intentionally creating a society that's built for humans, including artists and poets.
I wasn't speaking about dissemination by the model at all. It's possible for an AI to create an infringing work.
It's not possible for training an AI using data that was obtained legally to be copyright infringement. This is what I was talking about regarding transmission. Copyright provides a legal means for a rights holder to limit the creation of a copy of their image in order to be transmitted to me. If a rights holder has placed their image on the internet for me to view, then copyright does not provide them a means to restrict how I choose to consume that image.
The AI may or may not create outputs that can be considered derivative works, or contain characters protected by copyright.
You seem to be making an argument that we should be changing this somehow. I suppose I'll say "maybe". But it is apparent to me that many people don't know how intellectual property works.
There's a separate question of whether the AI model, once trained on a copyrighted input, constitutes a derived work of that input. In cases where the model can, with the right prompt, produce a near-identical (as far as humans are concerned) image to the input, it's hard to see how it is not just a special case of compression; and, of course, compressed images are still protected by copyright.
A derivative work is a creative expression based on another work that receives its own copyright protection. It's very unlikely that AI weights would be considered a creative expression, and would thus not be considered a derivative work. At this point, you probably can't copyright your AI weights.
An AI might create work that could be considered derivative if it were the creative output of a human, but it's not a human, and thus the outputs are unlikely to be considered derivative works, though they may be infringing.
If the original is a creative expression, then recording it using some different tech is still a creative expression. I don't see the qualitative difference between a bunch of numbers that constitutes weights in a neural net, and a bunch of numbers that constitute bytes in a compressed image file, if both can be used to recreate the original with minor deviations (like compression artifacts in the latter case).
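One crude way to test the compression framing empirically: treat the model like a lossy codec and score its best reconstruction of a training image with PSNR, the standard fidelity metric for JPEG-style compression. A sketch, assuming the original and regenerated images are already loaded as equal-sized arrays:

# Sketch: score a regenerated image against the original with PSNR
# (peak signal-to-noise ratio, in dB); higher means a closer match.
import numpy as np

def psnr(original: np.ndarray, regenerated: np.ndarray,
         max_value: float = 255.0) -> float:
    diff = original.astype(np.float64) - regenerated.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # bit-identical reproduction
    return 10.0 * np.log10(max_value ** 2 / mse)

A heavily compressed JPEG typically lands around 30 dB; if a prompt can pull an image out of a model at comparable fidelity, the "special case of compression" framing is hard to dismiss.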
This is like saying that continuously surveilling people when they are outside of their private property and live-reporting it to the internet is not a huge ethical issue. For you are always at risk of being seen when in public and the rest is merely exercising freedom of speech.
Something being currently legal and possible doesn't mean it is morally right.
Technology enables things and sometimes the change is qualitatively different.
Making open source code publicly available always carries the risk of someone copying it and distributing it in proprietary code. That doesn't make it right or ethical. Stealing an unlocked car is unethical. Raping someone who is weaker than you is unethical. Just because something isn't difficult doesn't make it ethical.
Both personal autonomy and private property are social constructs we agree are valuable. Stealing a car and raping a person are things we've identified as unacceptable and codified into law.
And in stark contrast, intellectual property is something we've identified as being valuable to extend limited protections to in order to incentivize creative and technological development. It is not a sacred right, it's a gambit.
It's us saying, "We identify that if we have no IP protection whatsoever, many people will have no incentive to create, and nobody will ever have an incentive to share. Therefore, we will create some protection in these specific ways in order to spur on creativity and development."
There's no (or very little) ethics to it. We've created a system not out of respect for people's connections to their creations, but in order to entice them to create so we can ultimately expropriate it for society as a whole. And that system affords protection in particular ways. Any usage that is permitted by the system is not only not unethical, it is the system working.
That is a hard fight to have, since it is the same for people. An artist will have watched some Disney movie, and that could influence their art in some small way. Does Disney have a right to take a small amount from every bit of art which they produce from then on? Obviously not.
The real answer is AI are not people, and it is ok to have different rules for them, and that is where the fight would need to be.
I really think there's likely to be gigantic class action lawsuits in the near future, and I support them. People did not consent for their data and work to be used in this way. In many cases people have already demonstrated using custom tailored prompts that these models have been trained on copyrighted works that are not public domain.
I can't copy your GPL code. I might be able to write my own code that does the same thing.
I'm going to defend this statement in advance. A lot of software developers white knight more than they strictly have to; they claim that learning from GPL code unavoidably results in infringing reproduction of that code.
Courts, however, apply a test [1], in an attempt to determine the degree to which the idea is separable from the expression of that idea. Copyright protects particular expression, not idea, and in the case that the idea cannot be separated from the expression, the expression cannot be copyrighted. So either I'm able to produce a non-infringing expression of the idea, or the expression cannot be copyrighted, and the GPL license is redundant.
It's already explicitly legal to train AI on copyrighted data in many countries. You can ignore opt-outs too, especially if you're training AI for non-commercial purposes. Look up TDM (text and data mining) exceptions.
Making money through art is already not a feasible career, as you yourself learned. If you want a job that millions of people do for fun in their free time, you can expect that job to be extremely hard to get and to pay very little.
The solution isn't to halt technological progress to try to defend the few jobs actually available in that sector; the solution is to fight forward to a future where no one has to do dull and boring things just to put food on the table. Fight for a future where people can pursue what they want regardless of whether it's profitable.
Most of that fight is social and political, but progress in ML is an important precursor. We can't free everyone from the dull and repetitive until we have automated all of it.
>The solution isn't to halt technological progress
Technological progress is not a linear, deterministic progression. We decide how to progress every step of the way. The problem is that we are making dogshit decisions for some reason.
Maybe we lack the creativity to envision alternative futures. How does a society become so uncreative, I wonder.
> We decide how to progress every step of the way.
I think the wheels are turning on their own. It's just a resultant movement of thousands of small movements, but nobody is controlling it. If you look at history, not even wars dent the steady progress of science and technology.
But do you know what reducing the progress of generative modeling will do? Because there seems to be this confusion that generative modeling is about art/music/text.
You'll find its nearly impossible to imagine a world without capitalism.
Capitalism is particularly good at weaponizing our own ideas against us. See large corporations co-opting anti-capitalist movements for sales and PR.
PepsiCo was probably mad that they couldn't co-opt "defund the police", "fuck 12", and "ACAB" like they could with "black lives matter".
Anything near and dear to us will be manipulated into a scientific formula to make a profit, and anything that cannot is rejected by any kind of mainstream media.
See: Capitalist Realism, and Manufacturing Consent (for how advertising affects freedom of speech on any media platform).
Perhaps it would be better to say you can't imagine "the future" without capitalism, as history prior to maybe the 1600s offers a less technologically advanced illustration.
Escaping from a purely profit-driven world would take us so far.
Imagine all the good things that aren't done because they just don't make any money. Instead we put resources towards things that make our lives worse because they're profitable.
This is part of the reason why I am disappointed, but not surprised, by all the flippant response to the concerns voiced here.
So AI puts artists out of a job and in some utopian vision, one day puts programmers out of a job, and nobody has jobs and that's what we should want, right, so why are you complaining about your personal suffering on the inevitable march of progress?
There is little to no worthwhile discussion from those same people about if the Puritanical worldview of work-to-live will be addressed, or how billionaires/capitalists/holders-of-the-resources respond to a world where no one has jobs, an income stream, and thus money to buy their products. Because Capitalist Realism has permeated, and we can no longer imagine a plausibly possible future that isn't increasingly technofeudalist. Welcome back to Dune?
It’s pretty easy to imagine a world without capitalism. It’s the one where the government declares you a counterrevolutionary hedonist for wanting to do art and forces you to work for the state owned lithium mine.
Mixed social-democratic economies are nice and better than plutocracies, but they have capitalism; they just have other economic forms alongside it.
(Needing to profit isn’t exclusive to capitalism either. Socialist societies also need productivity and profit, because they need to reinvest.)
If it's so important, we could at least pay the people who create the training set. Otherwise, we're relying on unpaid labor for this important progress and if the unpaid labor disappears, we're screwed. How does it seem sensible to construct a business this way?
My empathy for artists is fighting with my concern for everyone else's future, and losing.
It would be very easy to make training ML models on publicly available data illegal. I think that would be a very bad thing because it would legally enshrine a difference between human learning and machine learning in a broader sense, and I think machine learning has huge potential to improve everyone's lives.
Artists are in a similar position to grooms and farriers demanding the combustion engine be banned from the roads for spooking horses. They have a good point, but could easily screw everyone else over and halt technological progress for decades. I want to help them, but want to unblock ML progress more.
I see this as another step toward having a smaller and smaller space in which to find our own meaning or "point" to life, which is the only option left after the march of secularization. Recording and mass media / reproduction already curtailed that really badly on the "art" side of things. Work is staring at glowing rectangles and tapping clacky plastic boards—almost nobody finds it satisfying or fulfilling or engaging, which is why so many take pills to be able to tolerate it. Work, art... if this tech fulfills its promise and makes major cuts to the role for people in those areas, what's left?
The space in which to find human meaning seems to shrink by the day; the circle in which we can provide personal value and joy to others without it becoming a question of cold economics shrinks by the day, etc.
I don't think that's great for everyone's future. Though admittedly we've already done so much harm to that, that this may hardly matter in the scheme of things.
I'm not sure the direction we're going looks like success, even if it happens to also mean medicine gets really good or whatever.
Then again I'm a bit of a technological-determinist and almost nobody agrees with this take anyway, so it's not like there's anything to be done about it. If we don't do [bad but economically-advantageous-on-a-state-level thing], someone else will, then we'll also have to, because fucking Moloch. It'll turn out how it turns out, and no meaningful part in determining that direction is whether it'll put us somewhere good, except "good" as blind-ass Moloch judges it.
What role exactly is it going to take? The role we currently have, where the vast majority of people do work not because they particularly enjoy it but because they’re forced to in order to survive?
That’s really what we’re protecting here?
I’d rather live in the future where automation does practically everything not for the benefit of some billionaire born into wealth but because the automation is supposed to. Similar to the economy in Factorio.
Then people can derive meaning from themselves rather than from whatever dystopian nightmare we're currently living in.
It’s absurdly depressing that some people want to stifle this progress only because it’s going to remove this god awful and completely made up idea that work is freedom or work is what life is about.
I am happy to write code for a hobby. Who is going to pay for that?
The oligarchs of our time pay their tax to their own 'charities'. Companies with insane profits buy their own shares.
AI powered surveillance and the ongoing destruction of public institutions will make it hard to stand up for the collective interest.
We are not in hell, but the road to it has not been closed.
The ideal situation is that nobody pays for it. Picture a scenario where the vast majority of resource gathering, manufacturing, and production are all automated. Programmers are out of a job, factory workers are out of a job, miners are out of a job, etc.
Basically the current argument of artists being out of a job but taken to its extreme.
Why would these robots get paid? They wouldn’t. They’d just mine, manufacture, and produce on request.
Imagine a world where chatgpt version 3000 is connected to that swarm of robots and you can type “produce a 7 inch phone with an OLED screen, removable battery, 5 physical buttons, a physical shutter, and removable storage” and X days later arrives that phone, delivered by automation, of course.
Same would work with food, where automation plants the seeds, waters the crops, removes pests, harvests the food, and delivers it to your home.
All of these are simply artists going out of a job, except it’s not artists it’s practically every job humans are forced to do today.
There'd be very little need to work for almost every human on earth. Then I could happily spend all day taking shitty photographs (which AI can already replicate far better than anything I could shoot in real life) without feeling like a waste of a life, because I'd be doing it for fun and not because I'm forced to in order to survive.
Look, I like the paradise you created. You only forgot about who we are.
> There’d be very little need to work for almost every human on earth.
When mankind made a pact with the devil, the burden we got was that we had to earn our bread through sweat and hard labor. This story has survived millennia; there is something to it.
Why is the bottom layer of society not automated by robots? There's no need, if humans are cheaper than robots. If you don't care about humans, you can get quite a lot of labor for a little bit of sugar. If you can work one job to pay your rent, you can possibly do two or three, even.
If you don't have those social hobbies like universal healthcare and public education, people will be competitive for a very long time with robots. If people are less valuable, they will be treated as such.
Humans have existed for close to 200,000 years. Who we ‘are’ is nothing close to what we have today. What humans actually are is an invasive species capable of subjugating nature to fit its needs. I want to just push that further and subjugate nature with automation that can feed us and manufacture worthless plastic and metal media consumption devices for us.
Your diatribe about not caring about humans is ironic. I don’t know where you got all that from, but it certainly wasn’t my previous comment.
I also don’t know what pact you’re on about. The idea of working for survival is used to exploit people for their labor. I guess people with disabilities that aren’t able to work just aren’t human? Should we let them starve to death since they can’t work a 9-5 and work for their food?
> Who we ‘are’ is nothing close to what we have today.
I am wondering why you define being in terms of having. Is that a slip, or is that related to this:
> I want to just push that further and subjugate nature with automation that can feed us and manufacture worthless plastic and metal media consumption devices for us.
Because I can hear sadness in these words. I think we can feel thankful for having the opportunity to observe beauty and the universe, and to feel that we belong where we are and with who we are. Those free smartphones are not going to substitute for that.
I do not mean we have to work because it is our fate or something like that.
> Your diatribe about not caring about humans is ironic.
A pity you feel that way. Maybe you interpreted "If you don't care about humans" as literally you, whereas I meant it as "If one doesn't care".
What I meant was the assumption you seem to make: that when a few have plenty of production means without needing the other 'human resources' anymore, those few will spontaneously share their wealth with the world, so the others can have free smartphones and a life of consumption. They will not. Instead, those others will have to double down and start to compete with increasingly cheaper robots.
----
The pact in that old story I was talking about deals with the idea that we as humans know how to be evil. In the story, the consequence is that those first people had to leave paradise and from then on have to work for their survival.
I just mentioned it because of the fact that we exploit not only nature, but other humans too, if we are evil enough.
People that end up controlling the largest amounts of wealth are usually the most ruthless. That's why we need rules.
----
> I guess people with disabilities that aren’t able to work just aren’t human? Should we let them starve to death since they can’t work a 9-5 and work for their food?
On the contrary, I think I have been misunderstood.:)
I hear more sadness in your words, which are stuck on the idea of having to compete. The idea is to escape that and make exploiting people not an option. If you feel evil and competition for survival are what define humans, that's truly sad.
> The idea is to escape that and make exploiting people not an option.
I am in, but I just wanted to let you know many have had this idea before. People in the past thought we would barely work at all by now. What they got wrong is that the productivity gains didn't reach the common man. They were partly lost to mass consumption, fueled by advertising, and to wealth concentration. Instead, people at the bottom of the pyramid have to work harder.
> I like my ideal world a lot better.
Me too, though without being consumption-oriented. Nonetheless, people who turn a blind eye to the weaknesses of humankind often run into unpleasant surprises.
It requires work, lots of work.
IMO it’s impossible with the idea that survival=work. It’s evident here, with people desperately fighting against AI art because it’ll take away people’s jobs. It’s not even just that, though. It’s also the belief that AI art takes away from human art, as if AI chess existing makes Magnus vs. Niemann less exciting.
That same work=survival idea is what incentivizes competitiveness and of course, under that construct, some humans will put on their competitive goggles and exploit others.
There are a lot of human constructs that need to fade away before we can get to a fully automated world. But that’s okay. Humans aren’t the type to get stuck on a problem forever.
I agree with those points; competition especially is an important one. It has been the furnace of our progress too, so it is a double-edged sword.
People will not stop forming social hierarchies, and so competition remains a sticky trait, I think.
> work=survival idea is what incentivizes competitiveness
True, the idea that you can do better than the Joneses through hard work is alluring. Having a job is now a requirement for being considered worthy, and the kind of job defines your social position.
Compare with the days of nobility though, where those nobleman had everything but a job ("what is a weekend?").
>When mankind made a pact with the devil, the burden we got was that we had to earn our bread through sweat and hard labor. This story has survived millennia; there is something to it.
This sounds mystical and mysterious; it would be a mistake to project one mode of production as being the brand all humans must live with until we go extinct.
> it would be a mistake to project one mode of production as being the brand all humans must live with until we go extinct.
Indeed, you should not read it as an imperative. The other commentator was also put on the wrong foot by this.
Maybe I should not have assumed people would know Genesis, https://en.wikipedia.org/wiki/Book_of_Genesis. I should be more explicit: we are not some holy creatures. Don't assume that the few who are gonna reap the rewards will spontaneously share them with others. We are able to let others suffer to gain a personal advantage.
Every other living thing on the planet spends most of its time just fighting to survive. I think that's evidence it's not a 'made up idea', and that it may well be what life is actually about.
What’re you doing on the internet? No other living thing on this planet spends time on the internet. Or maybe we shouldn’t be copying things from nature just because.
Also kinda curious how you deal with people that have disabilities and can’t exactly fight to survive. Me, I’m practically blind without glasses/contacts, so I’ll not be taking life lessons from the local mountain lion, thanks.
Taking a break from my struggle just like a lion takes a nap. I wouldn't agree we are copying nature, rather, we are an inseparable part of it. The fact that we do some things other members don't isn't a convincing argument for me that we're not part of nature.
If you can't support yourself for whatever reason, you rely on others to do that work on your behalf. Social animals, wolves for example, try to provide for their sick and handicapped, but that's only after their own needs are met first.
That fallacy asserts a judgement that natural=good. I'm not claiming that.
We have physical needs just like other members of the natural world - food for example, if we can't provide food for ourselves, we'll starve to death just like an animal. Why bother judging this situation as good or bad when it's not something that can be changed.
Food will be automated. A lot of it is automated even today. Robots that will till the soil, manage nutrients, plant the seeds, water the plants, and pick the crops. It’ll even be done without pesticides, as robots with vision and plant detection can work 24/7 to remove weeds and pests. Or we’ll switch to hydroponics, still fully automated and done on a mass scale. In this world, there’s no purchasing food. You would just request it and that’s it.
Now imagine that automation in food and expand it to everything. A table factory wouldn’t purchase wood from another company. There’s automation to extract wood from trees and the table factory just requests it and automation produces a table. With robots at every step of the process, there are no labor costs. There’s no shift manager, there’s no CEO with a CEO salary, there’s no table factory worker spending 12+ hours a day drilling a table leg to a table for $3 an hour in China.
That former factory worker in China is instead pursuing their passions in life.
> The space in which to find human meaning seems to shrink by the day
I don’t understand this. It reminds me of the Go player who announced he was giving up the game after AlphaGo’s success. To me that’s exactly the same as saying you’re going to give up running, hiking, or walking because horses or cars are faster. That has nothing to do with human meaning, and thinking it does is making a really obvious category error.
A lot of human meaning comes from providing value to others.
The more computers and machines and institutions take that over, the fewer opportunities there are to do that, and the more doing that kind of thing feels forced, or even like an indulgence of the person providing the "service" and an imposition on those served.
Vonnegut wrote quite a bit about this phenomenon in the arts—how recording, broadcast, and mechanical reproduction vastly diminished the social and even economic value of small-time artistic talent. Uncle Bob's storytelling can't compete with Walt Disney Corporation. Grandma's piano playing stopped mattering much when we began turning on the radio instead of having sing-alongs around the upright. Nobody wants your cousin's quite good (but not excellent) sketches of them, or of any other subject—you're doing him a favor if you sit for him, and when you pretend to give a shit about the results. Aunt Gertrude's quilt-making is still kinda cool and you don't mind receiving a quilt from her, but you always feel kinda bad that she spent dozens of hours making something when you could have had a functional equivalent for perhaps $20. It's a nice gesture, and you may appreciate it, but she needed to give it more than you needed to receive it.
Meanwhile, social shifts shrink the set of people for whom any of this might even apply, for most of us. I dunno, maybe online spaces partially replace that, but most of that, especially the creative spaces, seem full of fake-feeling positivity and obligatory engagement, not the same thing at all as meeting another person you know's actual needs or desires.
That's the kind of thing I mean.
The areas where this isn't true are mostly ones that machines and markets are having trouble automating, so they're still expensive relative to the effort to do it yourself. Cooking's a notable one. The last part of our pre-industrial social animal to go extinct may well be meal-focused major holidays.
I'm pretty sure that's about as fundamental as it gets. Help tribe = feel good; tribe values your contributions = feel good; your talents and interests genuinely help the tribe = feel very good.
I don't mean this in a "people love work, actually", hooray-capitalism sense (LOL, god no), but the sense that humans tend to be happier and more content when they're helpful to those around them. It used to be a lot easier to provide that kind of value through creative and self-expressive efforts, than it is now. Any true need for artists and creative work (and, for the most part, craftspeople) at the scale of friend & family circles or towns or whatever, is all but completely gone.
Thank you for that response, it did help me understand.
My probably perverse takeaway is that Barbra Streisand might have been wrong: people who need people (to appreciate their work) may not be the luckiest people in the world. One can enjoy one's accomplishments without needing everyone else to appreciate them. Or you can find other people with similar interests, and enjoy shared appreciation.
In the extreme, the need for external validation seems to lead to people like Trump and Musk. Perhaps a shift in how we view this would be beneficial for society?
I agree and think the same way. The "just make the numbers go up" mentality of happiness is a fallacy. If that were the case, plugging everyone into heroin hibernation machines would be the optimal path. But anyone with an iota of human sensitivity will see that as horrific, unhappy, and a destruction of the human spirit.
Happiness needs loss, fulfillment, pain, hunger, boredom, fear and they need to be experiences backed up by both chemical feelings and experiences and memory and they have to be true.
But here's the thing, already the damage is done beyond just some art. I don't mean to diminish art, but frankly, look at how hostile, ugly and inhuman the world outside is in any regular city. Literal death worlds in fantasy 40k settings look more homey, comfortable, fulfilling, and human.
> My empathy for artists is fighting with my concern for everyone else's future, and losing.
My empathy for artists is aligned with my concern for everyone else's future.
> I want to help them, but want to unblock ML progress more.
But progress towards what end? The ML future looks very bleak to me, the world of "The Machine Stops," with humans perhaps reduced to organic effectors for the few remaining tasks that the machine cannot perform economically on its own: carrying packages upstairs, fixing pipes, etc.
We used to imagine that machines would take up the burden our physical labor, freeing our minds for more creative and interesting pursuits: art, science, the study of history, the study of human society, etc. Now it seems the opposite will happen.
> We used to imagine that machines would take up the burden our physical labor, freeing our minds for more creative and interesting pursuits: art, science, the study of history, the study of human society, etc.
You're like half a step away from the realization that almost everything you do today is already done better, if not by AI then by someone who can do it better than you, and yet you still do it because you enjoy it.
Now just flip that: almost everything you do in the future will be done better by AI, if not by another human.
But that doesn’t remove the fact that you enjoy it.
For example, today I want to spend my day taking photographs and trying to do stupid graphic design in After Effects. I can promise you that there are thousands of humans and even AI that can do a far better job than me at both these things. Yet I have over a terabyte of photographs and failed After Effects experiments. Do I stop enjoying it because I can’t make money from these hobbies? Do I stop enjoying it because there’s some digital artist at corporation X that can take everything I have and do it better, faster, and get paid while doing it?
No. So why would this change things if instead of a human at corporation X, it’s an AI?
1) It'll no longer be possible to work as an artist without being incredibly productive. Output, output, output. The value of each individual thing will be so low that you have to be both excellent at what you do (which will largely be curating and tweaking AI-generated art) and extremely prolific. There will be a very few exceptions to this, but even fewer than today.
2) Art becomes another thing lots of people in the office are expected to do simply as a part of their non-artist job, like a whole bunch of other things that used to be specialized roles but become a little part of everyone's job thanks to computers. It'll be like being semi-OK at using Excel.
I expect a mix of both to happen. It's not gonna be a good thing for artists, in general.
Maybe. But art was already so cheap, and talent so abundant, that it was notoriously difficult to make serious money doing it, so I doubt it'll have that effect in general.
It might in a few areas, though. I think film making is poised to get really weird, for instance, possibly in some interesting and not-terrible ways compared with what we're used to. That's mostly because automation might replace entire teams that had to spend thousands of hours before anyone could see the finished work or pay for it, not just a few hours of one or two artists' time on a more incremental basis. And even that's not quite a revolution: we used to have very-small-crew films, including tons that were big hits, and films with credits lists like the average Summer blockbuster were unheard of. So it's more a return to how things were before computer graphics entered the picture (even 70s and 80s films, after the advent of the spectacle- and FX-heavy Summer blockbuster, had crews so small it's almost hard to believe, when you're used to seeing the hundreds of people who work on, say, a Marvel film).
Art is really not cheap. I think people think about how little artists generate in income and assume that means art is cheap, but non-mass-produced art is pretty much inaccessible for the vast majority of people.
It does do just that, though? Don't tell me nobody is ever surprised while prompting a diffusion model; that can only happen if a significant portion of the creation happens in a way that is non-intuitive for the user, which is what you could describe as 'coming up with something'.
Work like this helps us work towards new approaches for the more difficult issues involved with replacing physical labor. The diffusion techniques that have gained popularity recently will surely enable new ways for machines to learn things that simply weren't possible before. Art is getting a lot of attention first because many people (including the developers working on making this possible) want to be able to create their own artwork and don't have the talent to put their mental images down on paper (or tablet). You worry that this prevents us from following more creative and interesting pursuits, but I feel that this enables us to follow those pursuits without the massive time investment needed to practice a skill. The future you describe is very bleak indeed, but I highly doubt those things won't be automated as well.
> I think that would be a very bad thing because it would legally enshrine a difference between human learning and machine learning in a broader sense, and I think machine learning has huge potential to improve everyone's lives.
How about we legally enshrine a difference between human learning and corporate product learning? If you want to use things others made for free, you should give back for free. Otherwise if you’re profiting off of it, you have to come to some agreement with the people whose work you’re profiting off of.
I’m thinking about the people who use SD commercially. There’s a transitive aspect to this that upsets people. If it’s unacceptable for a company to profit off your work without compensating you or asking for your permission, then it doesn’t become suddenly acceptable if some third party hands your work to the company.
Ideally we’d see something opt-in to decide exactly how much you have to give back, and how much you have to constrain your own downstream users. And in fact we do see that. We have copyleft licenses for tons of code and media released to the public (e.g. GPL, CC-BY-SA NC, etc). It lets you define how someone can use your stuff without talking to you, and lays out the parameters for exactly how/whether you have to give back.
"Giving back" is cute but it doesn't make up for taking without permission in the first place. Taking someone's stuff for your own use and saying "here's some compensation I decided was appropriate" is called Eminent Domain when the government does it and it's not popular.
Many people would probably happily allow use of their work for this if asked first, or would grant it for a small fee. Lots of stuff is in the public domain. But you have to actually go through the trouble of getting permission/verifying PD status, and that's apparently Too Hard
> It would be very easy to make training ML models on publicly available data illegal
This isn't the only option though? You could restrict it to data where permission has been acquired, and many people would probably grant permission for free or for a small fee. Lots of stuff already exists in the public domain.
What ML people seem to want is the ability to just scoop up a billion images off the net with a spider and then feed it into their network, utilizing the unpaid labor of thousands-to-millions for free and turning it into profit. That is transparently unfair, I think. If you're going to enrich yourself, you should also enrich the people who made your success possible.
Because we know it's not going to happen any time soon, and when it does happen it won't matter only to devs, that's the singularity.
You'll find out because you're now an enlightened immortal being, or you won't find out at all because the thermonuclear blast (or the engineered plague, or the terminators...) killed you and everybody else.
Does that mean there won't be some enterprising fellas who will hook up a chat prompt to some website thing? And that you can demo something like "Add a banner. More to the right. Blue button under it" and that works? Sure. And when it's time to fiddle with the details of how the bloody button doesn't do the right thing when clicked, it's back to hiring a professional that knows how to talk to the machine so it does what you want. Not a developer! No, of course not, no, no, we don't do development here, no. We do prompts.
Overall though the sentiment is that AI tools are useful and are a sign of progress. The fact that they are stirring so much contention and controversy is just a sign of how revolutionary they are.
I think I have a fair bit of empathy in this area and, well, like you said, I think my job (software) is likely to be displaced too. Furthermore, I think companies have these data sets regardless of whether we allow public use or not. I.e., if we ban public use, then only massive companies (Google, etc.) will have enough data to train these models. Which... seems worse to me.
At the end of the day, though, I think I'm an oddball in this camp. I just don't think there's that much difference between ML and human learning (HL). I believe we are nearly infinitely more complex, but as time goes on I think the gulf between ML and HL complexity will shrink.
I recently saw some of MKBHD's critiques of ML, and my takeaway was that he believes ML cannot possibly be creative, that it's just inputs and outputs. And, well, isn't that what I am? Would the art I create (I am also trying to get into art) not be entirely influenced by my experiences in life, the memories I retain from it, etc.? Humans also unknowingly reproduce work all the time. "Inspiration" sits in the back of our minds and then we regurgitate it, thinking it original, but often it's not; it's derivative.
Given that all creative work is learned, though, the line between derivative and original seems to be just about how close the result is to pre-existing work. We mash together ideas and try to distance them from other works. It doesn't matter what we take as inspiration, or so we claim, as long as the output doesn't overlap too much with pre-existing work.
ML is coming for many jobs and we need to spend a lot of time and effort thinking about how to adapt. Fighting it seems an uphill battle. One we will lose, eventually. The question is what will we do when that day comes? How will society function? Will we be able to pay rent?
What bothers me personally is just that companies get so much free rein in these scenarios. To me it isn't about ML vs. HL. Rather, it's that companies get to use all our works for their profit.
> We mash together ideas, and try to distance it from other works. It doesn't matter what we take as inspiration, or so we claim, as long as the output doesn't overlap too much with pre-existing work.
I feel a big part of what makes it okay or not okay here is intention and capability. Early in an artistic journey things can be highly derivative, but that's due to the student's capabilities. A beginner may not intend to be derivative but can't do better.
I see pages of applications of ML out there being derivative on purpose (Edit: seemingly trying to 'outperform' specific freelance artists with glee, in their own styles).
But the ML itself doesn't have intention. The author of the ML does, and that, I would think, is no different from an artist who purposefully makes copied/derived work.
TBH, given how derivative humans tend to be, with such a deeper "human learning" model and years and years of experiences, I'm kinda shocked ML is even capable of appearing non-derivative. Throw a child in a room, starve it of any interaction, somehow (lol) feed it only select images, and then ask it to draw something; I'd expect it to perform similarly. A contrived example, but I'm illustrating the depth of our experiences compared to ML's.
I half expect that the "next generation" of ML will be fed a dataset many orders of magnitude larger, more closely matching our own: a video feed of years' worth of data, simulating the complex inputs that human learning gets to benefit from. If/when that day comes, I can't imagine we will seem that much more unique than ML.
I should be clear, though: I am in no way defending how companies are using these products. I just don't agree that we're so unique in how we think, how we create, or whether we're truly unique in any way, shape, or fashion. (Code, Input) => Output is all I think we are, I guess.
Of course it's the intention of the user that matters here, I just see that these models give easy access to make extremely derivative works from existing artist's work - and I feel that's an unethical use of the unethically sourced models.
Anyone finding their own artistic voice with the tools, I respect that; those people are artists. But training with the aim of creating derivative models should be called out.
I feel like I am missing something or holding it wrong. I would personally love it if we had a tool where I could describe problems at a high level and out comes a high-quality, fully functional app. Most software is shit, and if we are honest with ourselves there is a huge amount of inessential complexity in this field, built up over the years. I would gladly never again spend weeks building something someone else already built in a slightly different way because it doesn't meet requirements, and I would gladly not end up in rabbit holes wrestling with some dependency compatibility issue when I am just trying to create value for the business. If the tools get better, the software gets better, and the complexity we can manage gets larger. That said, while these tools are incredibly impressive, having messed with this for a few days to try to do even basic stuff, what am I missing here? It is a nice starting point and can be a productivity boost, but the code produced is often wrong, and it feels a long way away from automating my day-to-day work.
> I would personally love it if we had a tool where I could describe problems at a high level and out comes a high-quality, fully functional app.
I'm sure your employer would love that more than you. That's the issue here.
> That said, while these tools are incredibly impressive, having messed with this for a few days to try to do even basic stuff, what am I missing here? It is a nice starting point and can be a productivity boost, but the code produced is often wrong, and it feels a long way away from automating my day-to-day work.
This is the first iteration of such a tool, and it's already very competent. I'm not even sure I'm better at writing code than GPT; the only thing I can do that it can't is compile and test the code I produce. If you asked me to create a React app from a two-sentence prompt and didn't allow me to search the internet, compile, or test it, I'm sure I'd make more mistakes than GPT, to be honest.
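And that gap looks closable with plumbing rather than intelligence. A minimal sketch of what I mean, where generate_candidate() is a hypothetical stand-in for whatever model API you'd call; everything else is the Python standard library plus pytest:

    import shutil
    import subprocess
    import tempfile
    from pathlib import Path

    def generate_candidate(prompt, feedback=""):
        """Hypothetical stand-in for a code-model call; returns Python source text."""
        raise NotImplementedError

    def generate_until_tests_pass(prompt, test_file, max_tries=5):
        """Generate code, compile/run the tests against it, feed failures back."""
        feedback = ""
        for _ in range(max_tries):
            code = generate_candidate(prompt, feedback)
            with tempfile.TemporaryDirectory() as tmp:
                Path(tmp, "candidate.py").write_text(code)
                shutil.copy(test_file, tmp)          # tests import candidate.py
                result = subprocess.run(
                    ["python", "-m", "pytest", Path(test_file).name],
                    cwd=tmp, capture_output=True, text=True,
                )
            if result.returncode == 0:
                return code                          # all tests passed
            feedback = result.stdout[-2000:]         # failure output as new context
        return None                                  # gave up after max_tries

The point is that "compile and test" is exactly the part that's easy to bolt on.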
Have you actually tried to get an app working using GPT? A lot of the shared stuff is heavily curated. It is no doubt an extremely impressive tool, but I think we always underestimate the last 10% in AI products. We had impressive self-driving demos over a decade ago; we are all still driving, and L5 still seems a ways away.
Exactly. Code has always been a means to an end, not the end itself. Further, our industry has been more than happy to automate inefficiency away from other fields; it feels pretty hypocritical to want it to stop at ours.
If everyone is able to make their own app, then there is no need to advertise their apps, because everyone will just be using their own.
The real battle there would be protocols; how everyone's custom apps communicate. Here, we can fall back to existing protocols such as email, ActivityPub, Matrix, etc.
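As a sketch of what "fall back to existing protocols" could look like in practice, here's a custom app posting into a Matrix room over the standard client-server API (the homeserver, room ID, and token below are placeholders):

    import uuid
    import requests

    HOMESERVER = "https://matrix.example.org"   # placeholder homeserver
    ROOM_ID = "!abc123:example.org"             # placeholder room ID
    ACCESS_TOKEN = "syt_placeholder_token"      # placeholder token

    def send_message(text):
        """Post a plain-text message into a Matrix room via the client-server API."""
        txn_id = uuid.uuid4().hex  # unique transaction ID makes the PUT idempotent
        url = (f"{HOMESERVER}/_matrix/client/v3/rooms/{ROOM_ID}"
               f"/send/m.room.message/{txn_id}")
        resp = requests.put(
            url,
            headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
            json={"msgtype": "m.text", "body": text},
        )
        resp.raise_for_status()
        return resp.json()["event_id"]

Any off-the-shelf Matrix client in that room would interoperate with it, which is the whole point of falling back to a shared protocol.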
I think a lot of people are coming at this with an all-or-nothing approach. Does the current solution do everything we keep expecting ML to do? No. Does it do way more than before and get us x percent closer? Yes.
Think of the 80/20 model: if it gets you 80% there (don't take that literally), then that's huge in and of itself. This tool is getting us closer to the example you mention, and that in and of itself is really cool.
I've played around a bit with Stable Diffusion and as far as I can tell, it's just a new tool, like a much better paintbrush.
It still needs a human to tell it what to paint, and the best outputs generally require hours of refinement and then possibly touch-up in photoshop. It's not generating art on its own.
Artists still have a job in deciding what to make and using their taste to make it look good, that hasn't changed. Maybe the fine-motor skills and hand-eye coordination are not as necessary as they were, but that's it.
Not disagreeing with your comment, but this is not the case with Midjourney. Very little is needed to produce stunning images. But AFAIK they modify/enhance the prompts behind the scenes.
A key difference is that someone with some prompt-writing skills and a tiny amount of aesthetic taste can now compete with trained artists who actually know how to create such images from scratch. Sally in Sales and Tom in Accounting can also do art as part of their job, whenever it calls for art. And copywriting, etc. Or they will be able to in the near future. Fewer dedicated artists, fewer dedicated writers, and so on. One artist can do the work of ten, and almost anyone in the office can pinch-hit to do a little art or writing here and there (by which I mean, tell a computer to make some art, then select which of that art is best).
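That "pinch-hit" workflow is genuinely small. A sketch using the open-source diffusers library (the checkpoint name and prompt are just examples, and this assumes a CUDA GPU):

    import torch
    from diffusers import StableDiffusionPipeline

    # Load a public Stable Diffusion checkpoint (example model name).
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Generate a small batch; the only "skill" left is picking the best one.
    prompt = "flat vector illustration of teamwork, warm corporate palette"
    images = pipe(prompt, num_images_per_prompt=4).images

    for i, image in enumerate(images):
        image.save(f"candidate_{i}.png")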
Tangentially, this is something I think about from time to time: in tech, you can be mediocre and live a very comfortable life. In art (and many other areas), you often have to be extraordinary just to make ends meet.
So I don’t think art is “harder”. It’s just harder for the average practitioner/professional to find “success” (however you like to define it).
I wonder if this is due to existing forms of automation in art. Artists have been competing with reproductions of art, in the form of recordings and prints, for a long time now. That creates a really high floor. How many people who play an instrument have people around them who genuinely want to listen to them play rather than a recording? How much lower would the bar be if recordings didn't exist?
Of course software gets copied all the time, but we have jobs because so much bespoke software is needed. Looking at some of what AI can do now, I wouldn't be surprised if our floor gets raised a lot in the next few years as well.
IMO, the 'why' is down to how mature the industry is; it'll absolutely be the same for every profession, given enough time. It's the natural distribution of wealth in our society: few have too much, most have not enough.
> making art is vastly more difficult than the huge majority of computer programming that is done.
I'd reframe this to: making a living from your art is far more difficult than making money from programming.
> also be able to do the dumb, simple things most programmers do for their jobs?
I'm all for AI automating all the boring shit for me. Just like frameworks have. Just like libraries have. Just like DevOps has. Take all the plumbing and make it automated! I'm all for it!
But. At some point. Someone needs to take business speak and turn it into input for this machine. And wouldn't ya know it, I'm already getting paid for that!
The lack of empathy in general on online forums is incredible. I don't think HN is any worse than other places, but it would be nice if we could be a little better, as it would lead to some more interesting and nuanced topics.
As a developer/manager I am not yet scared of AI, because I have already had to correct multiple people this week who tried to use ChatGPT to figure something out.
It's actually pretty good, but when it's wrong it seems to be really wrong, and when you don't have the background to figure that out, a ton of time is wasted. It's just a better Stack Overflow at the end of the day, IMO.
I think you need to see that there are two types of people:
- those who want to generate results ("get the job done, quickly"), and
- those who enjoy programming for its own sake.
The first group are the ones who can't see what is getting lost. They see programming as an obstacle. Strangely, some of them believe, on the one hand, that many more people will be able to produce much more software because of AI, while simultaneously expecting to remain in demand themselves.
To them, your job of producing pictures might look like just another burden.
I am from the second group. I never chose this profession because of the money, or because I dreamed about some big business I could create. I dread pasting generated code all over the place. The only one happy with that would be the owner of the software. And the AI model overlord, of course.
I hope that technical and artistic skill will gain appreciation again, and that you will have a happy life doing what you like the most.
If you think code generating AI will take your job, you should also never hire junior engineers because one of them might take your job.
Nevertheless, having more engineers around actually makes you more valuable, not less. "Taking your job" isn't a thing; the Fed chairman is the only thing in our economy that can do that.
> If you think code generating AI will take your job,
It might take away the joy of programming, the feeling of ownership and accomplishment.
People who today complain about having to program a bunch of API calls might be in for a rude awakening, tending and debugging the piles of chatbot output that got mashed together. Or do we expect that in the future we will suddenly value quality over speed or #features?
I love coaching juniors. These are humans, I can help them with their struggles and teach them. I try to understand them, we share experiences in life. We laugh. We find meaning by being with each other on this lonely, beautiful planet in the universe.
---
Please do not take offense: observe the language in which we are already conflating human beings with bots. If we do it now, we will collectively do it in the future.
> People who today complain about having to program a bunch of API calls might be in for a rude awakening, tending and debugging the piles of chatbot output that got mashed together. Or do we expect that in the future we will suddenly value quality over speed or #features?
I actually think we will. People are starting to realise where slapping together crap that works 80% of the time gets us, and starting to have second thoughts. If and when we reach a world where leaking people's personal information costs serious money (and the EU in particular is lumbering towards that), the whole way we do programming will change.
I’m empathetic, but my empathy doesn’t overcome my excitement.
This is a moment where individual humans substantially increase their ability to effect change in the world. I'm watching as these tools quickly become commoditized. I'm seeing low-income, first-generation Americans who speak broken English using ChatGPT to translate their messages into "upper middle class business professional" and land contracts that were off limits before. I'm seeing individuals rapidly iterate and explore visual spaces on the scale of 100s to 1000s of designs using Stable Diffusion, a process that was financially infeasible even for well-funded corps this time last year, due to the cost of human labor. These aren't fanciful dreams of how this tech is going to change society; I've observed these outcomes in real life.
I’m empathetic that the entire world is moving out from under all of our feet. But the direction it’s moving is unbelievably exciting. AI isn’t going to replace humans, humans using AI are going to replace humans who don’t.
Making art is not "vastly more difficult" or at least it is (IMO) highly debatable. Some parts of it require decades of experience to do with any kind of excellence, yes. That's also the case with powerlifting, figure skating and raising children and indeed programming. It's just that your boss made a money printer that takes in bullshit and outputs bullshit which gives you your cosy job.
But that is not "programming". That is glueing together bullshit until it works and the results of that "work" are "blessing" us everyday. The gift that keeps on giving. You FAANG people are indeed astronomically, immorally, overpaid and actively harm the world.
But, luckily, the world has more layers than that. Programming for Facebook is not the same as programming for a small chemical startup or programming in any resource-restricted environment where you can't just spin up 1000 AWS instances at your leisure and you actually have to know what you're doing with the metal.
Creative professionals might take the first hit in professional services, but AI is going to come for engineers at a much faster and more furious pace. I would even go so far as to say that some (probably a small number) of the people who have recently been laid off at big tech companies may never see a paycheck as high as they previously had.
The vast majority of software engineering hours that are actually paid are for maintenance, and this is where AI is likely to come in like a tornado. Once AI hits upgrade and migration tools it's going to eliminate entire teams permanently.
> The vast majority of software engineering hours that are actually paid are for maintenance, and this is where AI is likely to come in like a tornado.
I have almost exactly the opposite opinion. Greenfield is where AI is going to shine.
Maintenance is riddled with "gotcha's", business context, and legacy issues that were all handled and negotiated over outside of the development workflow.
By contrast, AI can pretty easily generate a new file based on some form of input.
> The vast majority of software engineering hours that are actually paid are for maintenance, and this is where AI is likely to come in like a tornado. Once AI hits upgrade and migration tools it's going to eliminate entire teams permanently.
There have been huge improvements in automating maintenance, and yet I've never once heard someone blame a layoff on e.g. clang-rename (which has probably made me 100x more productive at refactoring compared to doing it manually).
I'd even say your conclusion is exactly backwards. The implicit assumption is that there's a fixed amount of engineering work to do, so any automation means fewer engineers. In reality there is no such constraint. Firms hire when the marginal benefit of an engineer is larger than the cost. Automation increases productivity, causing firms to hire more, not less.
Just experience, but my definition is pretty broad. Once you get out of the Valley, most of what pays well (banking, finance, telecom, analytics, industrial, etc.) is maintenance code, IMO. Basically anything that doesn't come out of a real R&D budget, even if it is a "new feature", is maintenance to me at this point.
The caveat there is 'paid hours'. The current working model for the industry is that all software engineers leetcodeuberhack on open source repos at night and by day have paying jobs maintaining companies' systems that use open source.
I believe the current generation of AI would be better suited to augmenting human understanding of code (through static analysis tools and the like), rather than generating it.
On an infinite timeline humans will no longer be needed in the generation of code (we hopefully will still study and appreciate it for leisure), but I doubt we're there yet.
The entire history of computer programming is using code-generation tools to increase the level of abstraction most programmers work at. Having yet another one of those doesn't seem to present any realistic chance of replacing all of the development, testing, maintenance, and refinement of that entire stack. If your job is literally just being handed a paragraph-long written requirement for a single function or short script and giving back that function/script, then sure, worry.
But at least every job I've had so far also entailed understanding the entire system, the surrounding ecosystem, upstream and downstream dependencies and interactions, the overall goal being worked toward, and playing some role in coming up with the requirements in the first place.
ChatGPT can't even currently update its fixed-in-time knowledge state, which is entirely based on public information. That means it can't even write a conforming component of a software system that relies on any internal APIs! It won't know your codebase if it wasn't in its training set. You can include the API in the prompt, but then that is still a job for a human with some understanding of how software works, isn't it?
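To spell that last point out: "include the API in the prompt" looks something like the sketch below, and deciding what to include is exactly the part that still needs a human who understands the system. The internal API text and the complete() call are hypothetical placeholders:

    # Hypothetical internal API that no public training set has seen.
    INTERNAL_API = '''
    class BillingClient:
        def charge(self, account_id: str, cents: int) -> "Receipt": ...
        def refund(self, receipt_id: str) -> None: ...
    '''

    def build_prompt(task):
        """Prepend internal API docs so the model can write conforming code."""
        return (
            "You are writing Python against this internal API:\n"
            + INTERNAL_API
            + f"\nTask: {task}\n"
            + "Use only the methods shown above."
        )

    # complete() stands in for whatever model endpoint you call.
    # print(complete(build_prompt("Refund every charge over $100 for account A42")))

The model only ever sees what build_prompt() chose to show it, which is why this remains a job for someone who understands the software.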
Being afraid is the surest way to run away from what's coming. If a computer can easily do some work, simply use that work to your advantage and do something more complicated. If a computer can generate art, use what's generated to your advantage and do something better.
As long as the world is not entirely made of AI, there will always be some expertise to add. So instead of being afraid, you should just evolve with the times.
Exactly! Didn't the rise of abstract art coincide with the ubiquity of photography? Realism in painting was no longer needed by the populace to the previous extent.
I do, because I don't see why it wouldn't be. If it's revolutionary and a lot of people need it, or if it completely changes the way people work, live, or are entertained, it will certainly evolve to be as accessible as possible. A successful product is a product used by many.
> we can choose different professions that are less susceptible to automation
What are those? It seems it's low-margin, physical work that's seeing the least AI progress. Like berry picking. Maybe also work that will be kept AI-free longer by regulators like being a judge?
I was gonna link to this robot (https://www.prnewswire.com/news-releases/worlds-first-roboti...) as a reason cooking is "in danger". Seems like it cannot do the harder motor skills like chopping ingredients. In another video it makes a steak and the cooking time seems to be determined by how incredibly slow the robot is. It also makes a huge mess because it has worse motor skills than a toddler. Cooks seem safe for now. Too bad it's already a terrible area to work in because too many people love it.
Most of us in technology have had to learn new skills. I used to rack up and wire servers in a lab as part of my dev work. I don't do that anymore and instead had to learn aws and terraform. Personally I don't expect any empathy due to my lab racking skills no longer being as relevant to many jobs.
What's going to happen if technologists collectively come to the table and engage in sincere discussion rooted in kindness, compassion, and empathy?
I fully expect there will be zero reciprocation. There will, instead, be a strong expectation that that empathy turns into centering of fear and a resulting series of economic choices. AI systems are now threatening the ability of some artists to get paid and those artists would like that to stop.
I think we're seeing it right now. You shift effortlessly from talking about empathy to talking about the money. You consider the one the way to get the other, so you deplore the horrifying lack of empathy.
Let me put it another way. Would you be happy if you saw an outpouring of empathy, sympathy, and identification with artists coupled with exactly the same decisions about machine learning systems?
I do find it funny that artists are complaining about things like AI-generated art clogging up art sites, reducing commissions, etc., because my particular artistic outlet of choice is writing, and visual art has completely overtaken text-based content online, particularly for anything fandom- or nerd-adjacent. The visual artists are also responsible for the monetization of fandom to begin with, which I'm still pretty salty about. We moved from discussions and fanfic to 500+ daily posts of 'commission me to draw your OTP!' and 'Look at this skimpy character art!'
Shoe's on the other foot now and they don't like it.
Yes, an outpouring of sympathy, empathy, etc. combined with the same unilateral decision-making that technologists do would be terrible. I would call continuing to do that unempathetic.
Technologists acting like technocrats and expecting everyone to give them sympathy, empathy and identification is laughably rude and insulting.
My (admittedly totally non-rigorous) intuition is that the advances in AI might "grow the pot" of the software engineering, IT, and related industries at roughly the same rate that they can "replace professionals" in those industries. If that's the case, then there wouldn't be some existential threat to the industry. Of course, that doesn't mean that certain individuals and entire companies aren't at risk, and I don't want to minimize the potential hardship, but it doesn't seem like a unique or new problem.
As a crude analogy, there are a lot of great free or low-cost tools to create websites that didn't exist 15 years ago, which can easily replace what would have been a much more expensive web-developer contract back then. And yet, in those 15 years, the "size of the web pot" has increased enough that I don't think many professional web developers are worried about site-builder tools threatening the entire industry. There seem to be a lot more web developers now than there were 15 years ago, and they seem to be paid as well as or better than they were then. And again, that doesn't mean that certain individuals or firms didn't on occasion experience financial hardship due to pressure from cheaper alternatives, and I don't want to minimize that. It just seems like the industry is still thriving.
To be clear, I really have no idea if this will turn out to be true. I also have no idea if this same thing might happen in other fields like art, music, writing, etc.
I'm both a digital artist and a programmer. I never thought it would happen, but I accept that this technology can easily replace some aspects of my professional value. But I don't let it take away from my experience and capacity to be creative, so I still think I have an advantage when leveraging these tools, and I've started to use them every day.
Rendering was only ever a small part of the visual arts process anyway. And you can still manually add pixel perfect details to these images by hand that you wouldn't know how to create an AI prompt for. And further, you can mash together AI outputs in beautifully unique and highly controlled ways to produce original compositions that still take work to reproduce.
To me, these AI's are just a tool for increased speed, like copy and paste.
>> making art is vastly more difficult than the huge majority of computer programming that is done.
I completely agree with this. Take a contemporary pianist, for example: the amount of dedication to both theory and practice, posture, mastering the instrument and what not, plus networking skills, technology skills, video recording, music recording, social media management, etc.
You think music theory is more demanding than CS? I've dedicated decades and probably 75% of my youth to mastering this instrument called a computing device. It has numerous layers, each completely different and each significant enough to build a standalone career out of (OS, networking, etc). I feel insulted if you think playing and mastering a piano is the same thing.
Extreme specialists are found everywhere. Mastering skateboarding at world level will eat your life too, but it's not "harder" than programming. At least, for any commonsensical interpretation of "harder".
All the rest, we do too. Except I don't record videos and I'm sure it is not childishly easy, but it will not eat my life.
I'm literally speechless. What an arrogant and egotistical comment. This is why we tech workers have such a bad rep as a culturally ignorant, bubbled community. Do a bit of research into jazz theory and counterpoint before you make this kind of blatant overgeneralization.
This exact comment could be made by a jazz soloist with a few words changed and be just as valid. I think you're underestimating how deep other fields, including artistic fields, are. Anything as competitive as an artistic field will always result in amounts of mastery needed at the top level that are barely noticeable to outside observers.
> This exact comment could be made by a jazz soloist with a few words changed and be just as valid.
It's not that uncommon for professional programmers to be pro-level musical soloists on the side, or for retired programmers to play top-level music. The reverse is far less common. I do think that says something.
> Anything as competitive as an artistic field will always result in amounts of mastery needed at the top level that are barely noticeable to outside observers.
Sure. Top-level artistic fields are well into the diminishing returns level, whereas programming is still at the level where even a lot of professional programmers are not just bad, but obviously bad in a way that even non-programmers can understand.
Even in the easiest fields, you can always find something to compete on (e.g. the existence of serious competitive rubik's cube doesn't mean solving a rubik's cube is hard). A difficult field is one where the difference between the top and the middle is obvious to an outsider.
I think today he/she learned an important lesson for his/her career: there are things more difficult than the epitome, the apogee, the quintessence of professions, called computer science.
I am a classical clarinet player, a physicist, and a programmer. Music theory is ridiculously easy compared to everything STEM I've studied. It isn't even a fair comparison: the counterintuitiveness of physics, the abstractions, the rigor of thinking needed were really stressful to my mind, while music theory was underwhelming, to say the least.
I couldn't, but I also couldn't study many other things, and not because of what you call difficulty. Quite simply, different people are good at some things and less good at others.
Maybe you are better at CS than at music, and therefore perceive the one as easy and the other as hard.
Again, it depends on the level. Maybe you took trivial CS courses. Many parts of CS are indistinguishable from mathematics, is that so easy as well? What about the various open problems that have remained unsolved for decades now in theoretical CS? You think these are simpler than music? Really?
Come on, at which level did you study them? I studied both at University level and was a classical clarinet player and anything STEM was much more difficult than anything music theory.
It isn't harder to be an artist or pianist, it's just that the cutoff of employability for these professions is much higher. It's like saying playing baseball is harder than programming because only a few thousand people are good enough to play baseball for a living.
I find it weird that they're considered separate talents. Programming is a creative task for me, and one reason I never took it up as a full time job is that I learned I hate trying to do creative work on demand. (I've been paid for both fiction writing and dev work and they produce very similar feelings in me.)
Programming is definitely easier to make a living from. I'm a very mediocre artist and developer and I'm never making enough off of art to live on, but I could get a programming job at a boring company and it would pay a living wage. In that sense, it's definitely 'easier'.
This has been going on for 250 years, and humanity still hasn't quite grasped it.
The steady progress of the Industrial Revolution that has made the average person unimaginably richer and healthier several times over, looks in the moment just like this:
"Oh no, entire industries of people are being made obsolete, and will have to beg on the streets now".
And yet, as jobs and industries are automated away, we keep getting richer and healthier.
Collectively, sure. How did that go for the people whose livelihoods got replaced, though? I've had family members be forced to change careers from white-collar work after being laid off and unable to find engineering jobs, because people decades younger had taken them all nearby. I saw firsthand the unbelievable amount of stress and depression they went through, and it took them years to accept that their previous life and career were gone.
"It'll massively suck for you, but don't worry, it'll be better for everyone else" is little comfort for most of us
Especially when promises and plans to use some of those windfalls of progress to help those harmed by it, seem never to see much follow-through.
Progress is cool if you're on the side of the wheel that's going up. It's the worst fucking thing in the world if you're on the side that's going down and are about to get smashed into the mud.
> Especially when promises and plans to use some of those windfalls of progress to help those harmed by it, seem never to see much follow-through.
The poor are economically better off than at almost any point in history; actual food poverty is almost unknown, objectively people are living in better houses than ever before, and so on. It just doesn't seem like any of that makes poor people any happier or poverty any less wretched, somehow.
I don't mean the poor, broadly, I mean people who were doing OK, but then aren't, after some major advance in technology or some change in economic policy. For the older ones, especially, "here's some money to retrain" (which they might get if they're lucky) doesn't compensate them for the harm they're suffering so that the overall pie, if you will, can grow.
For people caught in that kind of situation, progress sucks.
If I can just ask for a certain arbitrary machine state (with some yet unrealized future version of AI), who needs programmers?
We’ll need to vet AI output so there will still be knowledge work; we’re not going to let the AI decide to launch nukes, or inject whatever level of morphine it wants.
Data-entry work at a computer (programming being specialized data entry; code is a data model of primitives for a compiler/interpreter) is not long for this world, but analysis will endure.
This is absolutely a problem, but it's also very clear that, as a problem, it has nothing to do with the technology and everything to do with society. If we are wary of new tech that improves productivity because someone might starve as a result of its deployment, that alone shows just how fucked up things are.
I don't think that's a road to empathy, because if we're talking about the matter of empathy, i.e. "emotional shoulds" instead of nuances of current legal policy, then I'd expect a nontrivial share of technical people to say that a morally reasonable answer to both these scenarios could (or should) be "Yes, and whatever you want, not treated as derivative work bound by the license of the training data", which is probably the opposite of what artists would want.
While technically both artists and developers make their living by producing copyrighted works, our relationship to copyright is very different; while artists rely on copyright and overwhelmingly support its enforcement as-is, many developers (including myself) would argue for a significant reduction of its length or scale.
For tech workers (tech company owners could have a different perspective) copyright is just an accidental fact of life, and since most of paid development work is done as work-for-hire for custom stuff needed by one company, that model would work just as well even if copyright didn't exist or didn't extend to software. While in many cases copyright benefits our profession, in many other cases it harms our profession, and while things like GPL rely on copyright, they are also in large part a reaction to copyright that wouldn't be needed if copyright for code didn't exist or was significantly restricted.
It depends a lot on the type of software you are making. If it's custom software for a single client, then copyright is probably not important. (Anyway, I think a lot of custom software is shipped without the source code, or with obfuscated code, so the client has to hire the developer again.)
Part of my job is something like that. I make custom programs for my department at the university. I don't care how long the copyright lasts. Anyway, I like to milk the work for a few years. There are some programs I made 5 or 10 years ago that we are still using, which keep saving my coworkers time, and I like to use that leverage to get more freedom with my time. (How many 20% projects can I have?) Besides, most of them need some updating because the requirements change or the environment changes, so it's not zero work to keep them going.
There are very few projects that have long-term value. Games sell a lot of copies in a short time. MS Office gets an update every other year (Hello, Clippy! Bye, Clippy!), and the online version is eating it. I think it's very hard to think of programs that will have a lot of value in 50 years, but I'm still running some code in Classic VB6.
Your example is like saying we should have empathy for people who can whittle when a 3D printer can now extrude the same design in bulk. Or like empathy for London cabbies having to learn roads when "anyone" can A-to-B now with a phone.
Code should not need to be done by humans at all. There's no reason coding as it exists today should exist as a job in the future.
Whenever a colleague or I am "debugging" something, I'm just sad that we are so "dark ages" that the IDE isn't saying "THERE, humans, the bug is THERE!" in flashing red. The IDE has the potential to have perfect information, so "where is the bug" is solvable.
The job of coding today should continue to rise up the stack tomorrow to where modules and libraries and frameworks are just things machines generate in response to a dialog about “the job to be done”.
The primary problem space of software is in the business domain, today requiring people who speak barely abstracted machine language to implement -- still such painfully early days.
We're cavemen chipping at rocks to make fire still amazed at the trick. No empathy, just, self-awareness sufficient to provoke us into researching fusion.
We can and should have empathy for all those people.
The question is perhaps not if we should have empathy for them. The question is what we should do with it once we have it. I have empathy for the cabbies with the Knowledge of London, but I don't think making any policy based on or around that empathy is wise.
This is tricky in practice. A surprising number of people regard prioritizing the internal emotional experience of empathy in policy as experiencing empathy.
> Some people cannot contribute to Wine because of potential copyright violation. This would be anyone who has seen Microsoft Windows source code (stolen, under an NDA, disassembled, or otherwise). There are some exceptions for the source code of add-on components (ATL, MFC, msvcrt); see the next question.
I've seen a few MIT/BSD projects that ask people not to contribute if they have seen the equivalent GPL project. It's a problem because Copilot has seen "all" GPL projects.
Humans have rights, machines don't. Copyright is a system for protecting human intellectual property rights. You can't copyright things created by a monkey[1] for example. Thus it's not a contradiction to say that an action performed by a human is "transformative" while the same action performed by a machine is not.
But that is giving AI too much credit. As advanced as modern AI models are, they are not AGIs comparable to human cognition. I don't get the impulse to elevate/equate the output of trained AI models to that of human beings.
The AI did not create anything. It responded to a prompt given by a human to generate an output. Just like photoshop responds to someone moving the mouse and clicking or a paintbrush responds to being dragged across a canvas.
So any transformativity of the action should be attributed to the human and the same copyright laws would apply.
But under this model, the comparisons to human learning don't apply either. What matters is whether the output is transformative - so it's fair to compare the outputs of AI systems to one of the many inputs and say "these are too similar, therefore infringement occurred". It doesn't matter what kind of mixing happened between inputs and outputs, just like it doesn't matter how many Photoshop filters I apply to an image if the result resembles what I started with "too much".
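As a toy illustration of that output-comparison framing (a sketch, not how a court decides anything), a perceptual hash is one way to operationalize "resembles too much": it survives filters and re-encoding far better than byte-level comparison. The imagehash library is real; the threshold below is an arbitrary assumption:

    from PIL import Image
    import imagehash

    def too_similar(path_a, path_b, threshold=8):
        """Crude resemblance test via perceptual hashing.
        The threshold is an arbitrary choice; infringement is a legal judgment."""
        hash_a = imagehash.phash(Image.open(path_a))
        hash_b = imagehash.phash(Image.open(path_b))
        return (hash_a - hash_b) <= threshold  # Hamming distance of 64-bit hashes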
Sure, just like a human can manually draw something that infringes copyright, they can use the AI to draw something that infringes copyright. It's the human infringing the copyright, not the AI.
But the fact that the human looked at a bunch of Mickey Mouse pictures and gained the ability to draw Mickey Mouse does not infringe copyright because that's just potential inside their brain.
I don't think the potential inside a learning model should infringe copyright either. It's a matter of how it's used.
A language model is not a human. With a human, you at least have the possibility that they learned something. The language model is a parrot with a large memory.
That said, Microsoft didn't allow their kernel developers to look at Linux code for a reason.
> In my humble estimation, making art is vastly more difficult than the huge majority of computer programming that is done.
The value of work is not measured by its difficulty. There is a small number of people who make a living doing contract work that may be replaced by an AI, but these people were in a precarious position in the first place. The well-to-do artists are not threatened by AI art. The value of their work is derived from them having put their name on it.
If you assume that most programming work could be done by an AI "soon", then we really have to question what sort of dumb programming work people are doing today, and whether it wouldn't have disappeared anyway once funding runs dry. Mindlessly assembling snippets from Stack Overflow may well be threatened by AI very soon, so if that's your job, consider the alternatives.
This has happened to developers multiple times. Frankly it’s happened so many times that it’s become mundane. These programs are tools, and after a while you realize having a new tool in the bag doesn’t displace people. What it does is make the old job easy and new job has a higher bar for excellence. Everyone who has been writing software longer than a few years can name several things that used to take them a long time and a lot of specialization, and now take any amateur 5 minutes. It might seem scary, but it’s really not. It just means that talented artists will be able to use these tools to create even more even cooler art, because they don’t need to waste their time on the common and mechanical portions.
The empathy you imply might also require that the artists (or programmer's) jobs be preserved, for the sake of giving them purpose and a way to make a living.
I don't think that is something a society must absolutely guarantee. People are made obsolete all the time.
What needs to happen is the emergence of new needs that cannot yet be serviced by the new AIs. I'm sure it will come, as it has for the past hundred years whenever technology supplanted an existing corpus of workers. A society can make this transition smoother with things such as a good social safety net and low-cost or free education for retraining into a different field.
In fact, these things are all sorely needed today, without having the AIs' disruptions.
Tools happen, folks get automated away and need to retool to make themselves useful. It will happen in computing, as a matter of fact, it has happened in computing.
What do you think cloud computing did? A lot of sysadmins, networking, backups, ops went the way of dinosaurs. A lot of programmers have also fallen on the side by being replaced with tech and need to catch up.
Wallowing in pity is not going to help; we saw a glimpse of this with GitHub Copilot. Some people built the hardware and the software behind these AIs; others are constructing the models and applying them to distinct domains. There's work to be done for those who wish to find their place in the new world.
But people aren't being automated away - their work is input, and for the AI generated art to remain fresh and relevant instead of rehashing old stuff it would need artists to continue creating new art. It's not a tool that exists independently of people's creative work (although this is true of most AI, though it seems particularly terrible with art).
> why shouldn't it, with some finagling, also be able to do the dumb, simple things most programmers do for their jobs?
Because those things, while dumb and simple, are not continuous in the way that visual art is. Subtle perturbations to a piece of visual art stay subtle; there is room for error. By contrast, subtle changes to source code can have drastic implications for the output of a program. In some domains this might be tolerable, but in any domain where you're dealing with significant sums of money, it won't be.
There's nothing to be depressed about. It's not a lack of empathy; it's recognition of the inevitable. Developers realize that there is no going back. AI art is here to stay. You can't ban or regulate it; it would be extremely hard to police. All there is left to do is adapt to the market, like you did, even if it's extremely difficult. It's not as if AI made it significantly harder anyway: the supply of artists far surpassed the demand for them before the advent of AI art.
> if AI is able to do these astonishingly difficult things, why shouldn't it, with some finagling, also be able to do the dumb, simple things most programmers do for their jobs?
Art is more difficult than programming for people with talent in programming but not in art. Art is easier than programming for people with talent in art but not in programming. Granted, those two sentences are tautologies, but they are nonetheless a reminder that the difficulty of art and programming does not form a total order.
It's quite typical of devs, in my experience. I remember during the MegaUpload/Pirate Bay arrests, devs were quite up in arms about big media going after pirates, but when it came to devs going after app pirates with everything they've got, they were real quiet.
> making art is vastly more difficult than the huge majority of computer programming that is done
Art and programming are hard for different reasons.
The difference in the AI context is that a computer program has to do just about exactly what's asked of it to be useful, whereas a piece of art can go many ways and still be a piece of art. If you know what you want, it's quite hard to get DALL-E to produce that exactly (or it has been for me), but it still generates something that is very good looking.
If you were transported back to the 19th century, would you have empathy for the loom operators smashing mechanical looms?
Art currently requires two skills - technical rendering ability, and creative vision/composition. AI tools have basically destroyed the former, but the latter is still necessary. Professional artists will have to adjust their skillset, much like they had to adjust their skillset when photography killed portrait painting as a profession.
Do you people think art is relegated to digital images only? No video? No paintings, sculptures, mixed media, performance art, lighting, woodwork, etc.? How is it possible that everyone seems to ignore that we still need massive leaps in AI and robotics to match the technical ability of 99% of artists?
Not being afraid of AI is not necessarily due to the lack of empathy. It could be due to acceptance: perhaps AI will make programmers obsolete. That is fine, programming is really boring most of the time, when it's just cobbling things together. Even if it will be to the short term disadvantage of some people (including the speaker), AI taking over tedious programming tasks will make humanity richer.
I think the issue is that our laws and economy are not structured in a way that makes it likely for those gains to be distributed back to anyone other than the ultra wealthy. Not that I expect AI to take over most programming jobs anytime soon (or ever), but if it does, it would almost certainly happen long before society manages to agree on a system to distribute those gains back in a way that benefits the average person.
I mean sure things will get harder for some artists but what is to be done about it? What will feeling sorry for them accomplish?
The job market will always keep changing; you have to adapt to it to a certain degree.
Now, we can talk about supporting art as a public good, and I am all for that, but I don't see how artists are owed a corporate job. Many of my current programming skills will be obsolete one day; that's part of the game.
You're projecting your own fears on everyone else. I'm a programmer, too, among other things. I write code in order to get other things done. (Don't you?) It's fucking awesome if this thing can do that part of my job. It means I can spend my time doing something even more interesting.
What we call "programming" isn't defined as "writing code," as you seem to think. It's defined as "getting a machine to do what we (or our bosses/customers) want." That part will never change. But if you expect the tools and methodologies to remain the same, it's time to start thinking about a third career, because this one was never a good fit for you.
This argument has come up many times in history, and your perspective has never come out on top. Not once. What do you expect to be different this time?
Sidenote, you don't sound like a failed artist to me man. You sound like someone who survived the art machine and worked hard to make a smart career transition capable of supporting whatever kind of art you want to make. PS I did the same thing, painting MFA --> software development. Wish I was making FAANG money tho...
Yeah, no: the difficulty in programming as a career is interaction with other humans. I would like AI to reach the stage where it can comprehend solutions that stakeholders don't know themselves.
Because in my time, the stakeholders in companies have never actually been decisive when scoping features.
Copilot is indeed the endgame for AI-assisted programming. So I would say, for art, someone mindful could train an AI on their own dataset and use that to accelerate their workflow. Imagine it drawing outlines instead of the full picture.
> the difficulty in programming as a career is interaction with other humans
It would be great if there was an AI that could be a liaison between developers and stakeholders, translating the languages of each side for mutual understanding.
It's been very frustrating to see how much ignorance and incuriosity is held by what I assume to be otherwise very worldly, intelligent, and technical people in regards to what working artists actually do.
In art, you can afford a few mistakes. Like, on many photorealistic pictures generated by Midjourney, if you look closely you'll see a thing or two that are odd, often in the characters' eyes. In an AI-generated novel, you can accept a typo here and there, or not even notice it if it's really subtle.
In a program, you can't really afford that. A small mistake can have dramatic consequences. Now, maybe in the next few years you'll only need one human supervisor fixing AI bugs where you used to need 10 high-end developers, but you probably won't be able to make reliable programs just by typing a prompt, the way you can currently generate a cover for an e-book just by asking midjourney.
As for the political consequences of all of this, this is yet another issue.
I'm not sure that humans are going to beat AI on defect rate in software, especially given that with AI you produce code at a fast enough rate that the corner-cutting often done by human developers (like skipping TDD) is off the table.
I don't think this is going to put developers out of work, however. Instead, lots of small businesses that couldn't afford to be small software companies suddenly will be able to. They'll build 'free puppies,' new applications that are easy to start building, but that require ongoing development and maintenance. As the cambrian explosion of new software happens we'll only end up with more work on our hands.
Couldn't the bot curate its own output? It has been shown that feeding outputs back into the model results in improvement; I got the idea that better results come from increments. The AI overlords (model owners) will make sure they learn from all the curating you might do, too, making your job even less skilled. Read: you are more replaceable.
Please prove me wrong! I hope I am just anxious. History has proven that increases in productivity tend to go to capital owners, unless workers have bargaining power.
Mine workers were paid relatively well here, back in the day. Complete villages and cities thrived around this business. When those workers were no longer needed the government had to implement social programs to prevent a societal collapse there.
Look around, Musk wants you to work 10 hours per day already. Don't expect an early retirement or a more relaxed job..
I don't think it's a matter of blindly curating bot output.
I think it's more a matter of enlarging the scope of what one person can manage. Think of moving from the pure manual-labor era, limited by how much weight a human body could move from point A to point B, to the steam-engine era. Railroads totally wrecked the industry of people moving things on their backs or in mule trains, and that wasn't a bad thing.
> Don't expect an early retirement or a more relaxed job..
That's kinda my point, I don't think this is going to make less work, it'll turbocharge productivity. When has an industry ever found a way to increase productivity and just said cool, now we'll keep the status quo with our output and work less?
You describe stuff that is harmful or boring. In another comment I touched upon this: there seems to be a clear distinction between people who love programming and those who just want to get results. The former do not enjoy being managers of something larger per se, if they lose what they love.
I can see a (short-term?) increase in the demand for software, but it is not infinite.
So when productivity increases and demand does not grow at least as fast, you will see jobless people and you will face competition.
What no one has touched on yet is that the nature of programming might change too. We try to optimize for the dev experience now, but it is not unreasonable to expect that we will have to bend towards being AI-friendly. Maybe human-friendly becomes less of a concern (there are enough desperate people out there); AI-friendliness and performance might be more important metrics to the owner.
> You describe stuff that is harmful or boring. In another comment I touched upon this: there seems to be a clear distinction between people who love programming and those who just want to get results.
There's nothing stopping anyone from coding for fun, but we get paid for delivering value, and the amount of value that you can create is hugely increased with these new tools. I think for a lot of people their job satisfaction comes from having autonomy and seeing their work make an impact, and these tools will actually provide them with even more autonomy and satisfaction from increased impact as they're able to take on bigger challenges than they were able to in the past.
"having autonomy and seeing their work make an impact"
I think we are talking about different jobs. I mentioned it somewhere else, but strapping together piles of bot-generated code and having to debug it will feel more like a burden to most, I fear.
If a programmer wanted to operate on a level where "delivering value" and "impact" are the most critical criteria for job satisfaction, they would be better off in a product management or even project management role. A good programmer will care a lot about her product, but she still might derive the most joy from having built it mostly by herself.
I think that most passionate programmers want to build something by themselves. If api mashups are already not fun enough for them, I doubt that herding a bunch of code generators will bring that spark of joy.
> I think that most passionate programmers want to build something by themselves
Most programmers are working in business-focused jobs. I don't think many of us, in grade school, said "I sure hope I can program business logic all day when I grow up." So I think the passion for 90% of people writing code is really about getting a paycheck. Then they use that paycheck to do what they're really passionate about in their personal life.
So I completely agree that people passionate about coding might want to write that code by hand, I just don't think that group accounts for most people writing code professionally.
I have to add that software is also something that can be copied without effort. If we can have 2,000 drawing apps instead of 20, the chances that none of those 2,000 will fit the bill get close to zero.
Industries have traditionally solved this with planned obsolescence. Maybe JavaScript might be our saviour here for a while. :)
There is also a natural plateau in the amount of choice we can handle. Of those 2,000, only a few will be winners with reach. It might soon be that the AI model becomes more valuable than any of those apps.
Case in point: try to make a profitable app on Android these days.
The future of work will not be decided by today's 60+ year olds in another 10-15 years; Millennials and Gen Z are not growing conservative as they age into and through their 30s, the way Gen X and Boomers did. Generational churn is a huge wildcard.
> making art is vastly more difficult than the huge majority of computer programming that is done
Creating art is not that much harder than programming; creating good art is much harder than programming. That's the reason a large majority of art isn't very good, and why a large majority of artists don't make a living by creating art.
Just like the camera didn't kill the artist, neither will AI. For as long as art is about the ideas behind the piece as opposed to the technical skills required to make it (which I would argue has been true since the rise of impressionism) then AI doesn't change much. The good ideas are still required, AI only makes creating art (especially bad art) more accessible.
While it was far from all of them, lots of the people who are decrying AI art were recently gleefully cheering the destruction of blue-collar jobs held by people with what they view as unacceptable value systems. "Learn to code" was a middle finger both to the people losing their jobs and to those who already code and don't want to see the value of their skills diluted. There's been plenty of "lack of empathy" going around lately, mostly because of ideological fault lines. Perhaps this will be a wake-up call that monsters rarely obey their masters for very long before turning on them.
>lots of the people who are decrying AI art were recently gleefully cheering the destruction of blue-collar job
I hear these sorts of statements a lot, and always wonder how people come to the conclusion that "people who said A were the ones who were saying B". Barring survey data, how would you know that it isn't just the case that it seems that way?
The idea that people who would tell someone else to learn to code are now luddites seems super counter-intuitive to me. Wouldn't people opposing automation now likely be the same ones opposing it in the past? Why would you assume they're the same group without data showing it?
I know a bunch of artists personally and none of them seem to oppose blue-collar work
Many of the artists I know personally are blue collar workers. One is a house painter, one an airport luggage handler, one works in a restaurant (which is perhaps working class rather than blue collar per se).
Honestly, I think “learn to code” is mostly used sarcastically?
These are society-wide problems, not a failure of empathy on the part of "technical people."
The lack you find depressing is natural defensiveness in the face of hostility rooted in fear and, in most cases, broad ignorance of both the legal and technical context and the operation of these systems.
We might look at this and say, "there should have been a roll out with education and appropriate framing, they should have managed this better."
This may be true but of course, there is no "they"; so here we are.
I understand the fear, but my own empathy is blocked by hostility in specific interactions.
I have zero empathy for “artists”. Art produced for commercial purposes is no art at all, a more apt title for such a job is “asset creator”, and these people are by no means banned from using AI generation tools to make their work easier. Already artists will generate some logo off a prompt that takes a few minutes and charge full price for it. Why cry about it?
I would argue because most AI imagery right now is made for fun and not monetary gains, so it is actually a purer form of art.
No one is in programming to "do programming". They're in it to get things done. I didn't learn C++ in high school to learn C++, I learned it to make games (then C++ changed and became new and scary to me and so I no longer say I know C++, possibly I never did).
If an AI will take care of most of the finicky details for me and let me focus on defining what I want and how I want it to work, then that is nothing but an improvement for everyone.
What we're talking about here is the imminent arrival of it being impossible for a very large number of people to get a career in something they enjoy (making images by hand).
It's fair to suppose (albeit based on a very small sample size, i.e., the last couple hundred, abnormal years of history) that all sorts of new jobs will arise as a result of these changes- but it seems to me unreasonable to suppose that these new jobs of the future will necessarily be more interesting or enjoyable than the ones they destroyed. I think it's easy to imagine a case in which the jobs are all much less pleasant (even supposing we all are wealthier, which also isn't necessarily going to be true)- imagine a future where the remaining jobs are either managerial/ownership based in nature or manual labor. To me at least, it's a bleak prospect.
At the risk of demonstrating a total lack of empathy and failure to identify, we long ago passed the arrival of it being impossible for a very large number of people to get a career in something they enjoy (making images by hand). Art has been a famously difficult career path for quite a long time now. This does not really seem like a dramatic shift in the character of the market.
Now, I have empathy. I paused a moment before writing this comment to identify with artists, art students, and those who have been unable to reach their dreams for financial reasons. I emphatically empathize with them. I understand their emotional experiences and the pain of having their dreams crushed by cold and unfeeling machines and the engineers who ignore whom they crush.
Yet I must confess I am uncertain how this is supposed to change things for me. I have no doubt that there used to be a lot of people who deeply enjoyed making carriages, too.
Yes, and there will be many fewer of those jobs, and they might not pay.
Ultimately, though, this isn't a technical problem but an economic one, about how we as a society decide to share our resources. AI grows the pie but removes the leverage some people have to claim their slice. Automation is why we'll inevitably need UBI at some point.
Much of the history of programming has been programmers making other jobs obsolete, and indeed there is a saying that a good programmer makes themselves obsolete.
For me at least, Stable Diffusion has been this great tool for personal expression in a medium that was previously inaccessible to me: images. Now I could communicate with people in this new, accessible way! I've learned more about art history and techniques in the last 3 months than in my entire life up to that point.
So I came up with a few ideas about making some paintings for my mother, and children's books for my nieces and nephew. The anger I received from my artistically inclined colleagues over this saddened me greatly, so I tried to talk to more people to see if this was an anomaly. There was more anger, and arguments for censorship! I have to admit I struggled to maintain any empathy after receiving that reception.
I'm personally really excited about a future where we don't have to suffer to create art, whether it's code, an image, or music. Isn't more art and less suffering in our lives a good thing? If there are economic structures we've set up that make that a bad thing, maybe it would be fruitful to take a critical look at those.
Presently I'm looking at creating a few small B2B products out of various fine-tuned public AI models. The first thing I realized is that I'd be addressing niches that were just not possible to tackle before (cost, scale, latency). The second thing I noticed is I'd need to hire designers, copywriters, etc. for their judgement -- at least as quality control. So at least in my limited scope of activity, the use of AI permits me to hire creative professionals, to tackle jobs that previously employed zero creative professionals (because previously they weren't done at all, or just done very poorly, e.g. English website copy for small business in non-English-speaking developing economies).
I do feel for people who have decided that they need to retool because they feel AI threatens their job. I do that every couple of years when some new thing threatens an old thing that I do; it's a chunk of work, and not always fun. To show better empathy, I think I'm going to reach out to more artists and show them what the current AI tools can and cannot do, to help them along this path. So thank you for your post, because it gave me the idea to take this approach!
...and on the weekends, I can still write code in hand-optimized assembly, because that's the brush I love painting with.
I think that programmers are safe for now - because of the law of leaky abstractions. And there is hardly a bigger or leakier abstraction than AI-generated code.
The lack of empathy is because we are discussing systems, not feelings.
At the dawn of mechanization, these same arguments were being made by the Luddites. I'd recommend reading them; it was quite an interesting situation, much the same as now.
The reality is that advances such as these can't be stopped. Even if you banned ML by legislation in the US, there are hundreds of other countries that won't care, the same as happens with piracy.
Remember, the Luddites largely weren't against technology.
What they were against, however, was companies using that technology to slash their wages while forcing them into significantly more dangerous jobs.
In less than a decade, textile work went from a safe job with respectable pay for artisans and craftsmen into one of the most dangerous jobs of the industrialised era, often with less than a third of the pay, and with the workers primarily being children.
That's what the luddites were afraid of. And the government response was military/police intervention, breaking of any and all strikes, and harsh punishments such as execution for damaging company property.
I don't disagree, except I don't get what you mean by "because we are discussing systems, not feelings."
I think artists feeling like shit in this situation is totally understandable. I'm just a dilettante painter and amateur hentai sketcher, but some of the real artists I know are practically in the middle of an existential crisis. Feeling empathy for them is not the same as thinking that we should make futile efforts to halt the progress of this technology.
I agree, but we should pay attention when we are asked for empathy. In this very thread we have an excellent demonstration of how easy it is for an appeal to feel empathy for people's position to change into an appeal to protect the same people's financial position.
I'll go so far as to say that in many cases, displaying empathy for the artists without also advocating for futile efforts to halt the progress of this technology will be regarded as a lack of empathy.
I am not so much conflating them by accident as expressing my belief that the two are the same. I am not convinced that we can make sure the people whose jobs were taken by an AI will be able to live off its proceeds.
There's a very real chance that adding these costs on top will drive development away from the sort that pays the people who lose out. For example, attempting to require licensing for images may simply push model training towards public domain materials. Then the models still work and the usable commercial art is still generated cheaply, but there are no living artists getting paid.
We should not blithely assume an ideal option that makes everyone happy is readily available, or even exists at all. The core incentive of a lot of users is to spend less on commercial imagery. The core incentive of artists is to get paid at least as much as before. We should take seriously the possibility that there is no middle ground that satisfies everyone.
If the advances create catastrophic consequences there will be a stop by definition. Death of art(ists) and coders may not be a catastrophe, but it could be coincident with one. From OP, "Art AI is terrifying if you want to make art for a living". Empathize a little with that to see coding AI making coding not a way of life. Empathize even more and see few people having productive ways of life due to general purpose AI. The call to empathize is not about "feelings" necessarily, it is a cognitive exercise to imagine future consequences that aren't obvious yet.
Perhaps we should learn some of the lessons from that time. A large group of people's lives became markedly worse while a small group profited from their labors.
Well, of course AI is going to take the programmer jobs. As a developer, this has been practically the focal point of all the discussions I've had with colleagues over the past month; the consensus seems to be that within a ten-year window, software developer will be an almost niche occupation.
And yes, it will transform art completely: initially by lowering the barrier to producing quality art, and then by raising the bar in terms of quality. It's coming for every artistic field - 3D, film, music, etc.
If you want a career in these fields, you will need to ride this AI wave from the get-go, but even that career will eventually succumb to automation. This is the inevitable end point. As an example, eventually you will be able to give a brief synopsis to an AI and it will flesh it out into a full movie with the actors you choose.
One of the things that I find problematic is that we enjoy so many conveniences and efficiencies that taking a step back feels unimaginable. We used to have human computers; going back to that to rescue an old profession would seem unimaginable. Paying individual taxes is very easy in many nations; adopting what the US has just to rescue accounting jobs seems absurd.
Now imagine a future where AI can assist in law. Or should we not have that because lawyers pay so much for education and they work so bitterly? Should we do away with farm equipment as well? Should we destroy musical synths so that we can have more musicians?
It’s one thing to say we should have a government program to ease transitions in industry. It’s something else to say that we should hold back technological progress because jobs will be destroyed.
How do we develop a coherent moral framework to address this matter?
What empathy were you expecting in a nation that refuses to pass easy access to healthcare for all?
When was making living through art guaranteed? Society has mocked artists for centuries.
Let them make art and give them a UBI.
AI will replace programmers too. If a user can ask a future AI to organize their machine's state into an arbitrary video game or photorealistic movie, or to generate reports from sources abc, weighted by xyz, on a bar chart, then only the AI codebase (whatever bootstrapping and runtime it needs) remains necessary.
Why would I ask an AI that can produce the end result to produce code? Code is just minimized ideal machine state.
You’re correct; there is a lot of empathy lacking in our culture, but it’s not just when it comes to art.
I don't think there is no empathy here, but there are clear divisions on whether this tech will help advance humankind or further destabilize society as a whole.
To be perfectly honest, I absolutely love that particular attempt by artists, because it will likely force 'some' restrictions on how AI is used, and maybe even limit the amount of 'blackboxiness' it entails (disclosure of the model, the data set used, the parameters - I might be dreaming though).
I disagree with your statement in general. HN has empathy and not just because it could affect their future world. It is a relatively big shift in tech and we should weigh it carefully.
I think it's more a lack of historical perspective on the part of artists. I remember when Photoshop and other digital art tools became available and many artists were of the opinion "Feh! Digital art isn't really art. Real artists work with pens, brushes, and paper!". Fast forward a couple of decades and you won't find many artists still saying that. Instead they've embraced the tools. I expect the future won't be AI art vs human art but rather a hybrid as art tools incorporate the technique and artists won't think it is any less art than using other digital tools.
The issue at hand has nothing to do with gatekeeping, elitism, or any kind of pseudo-debate about what constitutes real art.
People are mad because job and portfolio sites are being flooded with AI shit, which is making them unusable for both artists and clients.
People are mad because their copyrighted work is being scraped and resold for profit by third parties without their consent.
Whether AI is the future is an utterly meaningless distraction until these concerns are addressed. As an aside, AI evangelists telling working professionals that they 'simply don't get' their field of expertise has been an incredibly poor tack for generating goodwill towards this technology, or towards the operations attempting to extract massive profit from its implementation.
It's a bit tiresome having people demand you demonstrate empathy in every single post. Do you truly want everyone typing up a paragraph of how sad they are in every comment? It won't actually help anything.
To be honest, I have been forced to choose a side during all those debates about copyright and advertising/adblocking. And it was artists who forced me to make that choice. It's hard not to see this as just another way in which artists are trying to limit how people use their own computing devices in a way that provides the most value to them.
All these talking points about lack of empathy for poor suffering artists have already been made a million times in those other debates. They just don't pack much of a punch anymore.
If "making art is vastly more difficult than the huge majority of computer programming that is done" - then I'm sorry, you must not be doing very difficult computer programming.
I have a strong background in both and I think creating good art is worlds more difficult than writing good code. It's both technically difficult and intellectually challenging to create something that people actually want to look at. Learning technical skills like draughtsmanship is harder than learning programming because you can't just log onto a free website and start getting instant & accurate feedback on your work. I do agree that it's very apples and oranges though - creating art requires a level of intuition and emotion that's mostly absent from technical pursuits like programming, and this very distinction is both the reason technical people can be so dismissive of the arts AND the reason why I think making art is ultimately more difficult.
I was raised a lit and music nerd, then staggered into CS in college because it was a creative discipline that could pay bills.
Twenty years down the pike I've gotten pretty solid at programming, certainly not genius-level but competent.
I agree strongly that making art anyone cares about is massively harder than being a competent programmer. In both you need strong technical abilities to be effective, but in art, intuition and a deep grasp of human psychology are really crucial - almost table stakes.
Because software dev is usually practical, a craft, you can get paid decently with far less brilliance and fire than it takes to make an artist profitable.
...though perhaps the DNN code assist tools will change that soon.
This is a very strange thing to say since great art is often not technically difficult at all. Much of modern and contemporary art is like that, nevertheless the art is superb.
> Learning technical skills like draughtsmanship is harder than learning programming because you can't just log onto a free website and start getting instant & accurate feedback on your work.
Really? I sometimes wonder what people think programming really is. Not what you describe, obviously.
I actually think a lot of modern and contemporary art is more technically difficult than it appears (though certainly not as technically difficult as making a marble sculpture or something). But fair point.
Not sure I fully understand your second point: are you implying that I don't really know what programming is?
The majority of my career has been in back end web development so lots of work on APIs and related microservices. Judging by the tone of this comment I've probably never worked on anything that you'd consider computationally difficult. That being said, I've never sculpted something out of a block of marble or done a photorealistic painting or composed a symphony either. I'd say my technical skills are middling in both domains. I have, however, been paid for my code for the better part of a decade and I've produced decent enough work to be hired by major companies and consistently promoted. I've added useful features to platforms that many people on this forum probably use. Getting to this point in my career certainly wasn't trivial but it was much easier than getting to the point where I could produce any art that other people actually found compelling, and I still think I've only created one or two things in my life that were truly "good" art that meant something to anyone other than myself.
I'm not judging, since I don't know you. I see programming as the profession, grounded in CS, where the coding itself is usually not the problem (designing the solution is the problem).
Right, and why is that? Because there is often no budget to solve the interesting parts, because of a lack of skills, and because of terrible management - all of these mutually reinforcing.
The same is true, by the way, for writing. So? That doesn't mean writing well is easy.
The arithmetic a computer can do instantly is much more difficult to me than writing this sentence.
Point being: we can't compare human and computer skills.
As if I'm worried. I'm not, because, barring government intervention to ruin things, even if I lose my job as a programmer, society becomes richer and I can always move on to doing something else while having access to cheaper goods.
People should stop giving work all this meaning, and they should study economics so they can chill.
> it seems to me that most computer programmers should be just as afraid as artists, in the face of technology like this!!!
I'm just as excited for myself as I am for artists. The current crop of these tools look like they could be powerful enablers for productivity and new creativity in their respective spaces.
I happen to also welcome being fully replaced, which is another conversation and isn't really where I see these current tools going, though it's hard to extrapolate.
AI art and reactions to it remind me of open source software. Yes, it's chipping into someone's profits. But it's a benefit for the vast majority and for society as a whole. It's democratizing what was scarce and expensive. I really can't be mad at that.
Ultimately, those who are able to integrate it into their creative process will be the winners. There will always be a small niche for those who oppose it out of principle.
I don't care. After decades of having no TV, film, books or video games aimed at me, they might finally be generated instead of the bullshit written by committees.
Oh yeah I’m sure the AI that was trained on decades of tv, movies, and books that didn’t appeal to you will do a great job of creating things that appeal to you.
Do you understand how AI works? It can’t just pop out things that are completely different from what it was trained on. See threads about how Dall-e couldn’t draw a hedgehog without glasses.
I think that programmers here have a lot riding on the naive belief that all new tools are neutral, and there is no pile of bodies under these advances.
Basically, the argument is that you should not have ever charged for your art, since its viewing and utility are increased when more people see it.
The lack of empathy comes from our love of open source. That's why. These engineers have been pirating books, movies, and games for a long time. Artists crying for copyright sound the same as the MPAA suing grandma 20 years ago.
This could easily be flipped on its head. Artists wanting more control over their creations ensures that bad actors can't use or misuse them as easily. Freely creating tools for any bad actor to use or misuse appears incredibly naive in this light.
Now, was Aaron Swartz (whom I view as an ultimate example of this open-source idea you cite) naive? No. Maybe he knew in his heart the greater good would outweigh anything.
But I don't think we should judge anyone too harshly for merely falling on one side of this issue or the other. Perhaps it comes down to a debate about what creation/truth/knowledge actually are. Maybe some creators (artists and computer scientists both among them) view creations as something they bring into the world, not something they reveal about the world.
I don't know; happily letting someone starve seems like a very obvious lack of empathy.
When I read Japanese artists talking about how to deal with AI now, it is difficult not to feel sad. They don't know that it is impossible to fight it. Almost ten years ago, someone in Silicon Valley decided that all of them should starve and vanish. No one can stop technology.
It’s not about empathy but about the fundamental nature of the job.
Developers will be fine because software engineering is an arms race - a rather unique position to be in as a professional. I saw this play out during the 2000s offshoring scare when many of us thought we'd get outsourced to India. Instead of getting outsourced, the industry exploded in size globally and everything that made engineers more productive also made them a bigger threat to competitors, forcing everyone to hire or die.
Businesses only need so much copy or graphic design, but the second a competitor gains a competitive advantage via software, they have to respond in kind - even if it's a marginal advantage - because software costs so little to scale out. As the tech debt, and the revenue that depends on it, grows, the baseline number of staff required for maintenance and upkeep grows, because our job is to manage the complexity.
I think software is going to continue eating the world at an accelerated pace because AI opens up the uncanny valley: software that is too difficult to implement using human developers writing heuristics, but not so difficult that it requires artificial general intelligence. Unlike with artists, improvements in AI don't threaten us; instead they open up entire classes of problems for us to tackle.
Technically I'd imagine AI threatens developers (https://singularityhub.com/2022/12/13/deepminds-alphacode-co...) a lot more than artists because there's a tangible (or 'objectively correct') problem being solved by the AI. Whereas art is an entirely subjective endeavor, and ultimately the success of what is being made is left up to how someone is feeling. I also imagine humans will begin to look at AI generated art very cynically. Maybe we all collectively agree we hate AI art, and it becomes as cliché as terrible stock photography. Or, we just choose not to appreciate anything that doesn't come with a 'Made By Humans' authentication... Pretty simple solution for the artists.
Obviously a lot of money will be lost for artists in a variety of commercial fields, but the ultimate "success of art" will be unapproachable by AI given its subjective nature.
Developers, though, will be struggling to compete from both a speed and a technical point of view, and those hurdles can't simply be overcome by a shift in how someone feels. And you're right about the arms race; it just won't be happening with humans. It'll be computing power, AIs, and the people capable of programming those AIs.
If there’s a “tangible problem” people solve it with a SaaS subscription. That’s not new.
We developers are hired because our coworkers can’t express what they really want. No one pays six figures to solve glorified Advent of Code prompts. The prompts are much more complex, ever changing as more information comes in, and in someone’s head, to be coaxed out by another human and iterated on together. They are no more going to be prompt engineers than they were backend engineers.
I say this as someone who used TabNine for over a year before Copilot came out and now uses ChatGPT for architectural explorations and code scaffolding/testing. I’m bullish on AI but I just don’t see the threat.
I'm just arguing that it's a lot easier for AI to replace something that has objectively or technically correct solutions vs something as subjective as art (where we can just decide we don't like it on a whim).
I’m arguing that there are no objectively or technically correct solutions to the work engineers are hired to do. You don’t “solve” a startup CEO or corp VP who changes their mind about the direction of the business every week. Ditto for consumers and whatever the latest fad they’re chasing is. They are agents of chaos and we are the ones stuck trying to wrangle technology to do their bidding. As long as they are human, we’ll need the general intelligence of humans (or equivalent) to figure out what to code or prompt or install.
In the sense that someone asks "I need a program that takes x and does y" and the AI is able to solve that problem satisfactorily, it's an objectively correct solution. There will be nuance to the problem and to how it's solved, but the end results are always objectively correct answers of "it either works, or it doesn't."
I think in both domains there are parts which are purely technical (wrong or right) and others which are well ... an art.
In art these parts are often overlooked, but they are significant nonetheless. E.g., getting the proportions right is an objective metric, and it is really off-putting when they are wrong.
And in programming, the "art" parts are often overlooked, and that is precisely the reason I feel that most software of today is horrible. It is just made to barely "work" and get the technical parts right up to spec, and that's it. Beyond that, nobody cares about resource efficiency, performance, security, or maintainability, let alone elegance.
Computer programmers have a general aversion to copyright, for a few reasons:
1. Proprietary software is harmful and immoral in ways that proprietary books or movies are not.
2. The creative industry has historically used copyright as a tool to tell computer programmers to stop having fun.
So the lack of empathy is actually pretty predictable. Artists - or at least, the people who claim to represent their economic interests - have consistently used copyright as a cudgel to smack programmers about. If you've been marinading in Free Software culture and Cory Doctorow-grade ressentiment for half a century, you're going to be more interested in taking revenge against the people who have been telling you "No, shut up, that's communism" than mere first-order self-preservation[1].
This isn't just "programmers don't have fucks to give", though. In fact, your actual statements about computer programmers are wrong, because there's already an active lawsuit against OpenAI and Microsoft over GitHub Copilot and its use of FOSS code.
You see, AI actually breaks the copyright and ethical norms of programmers, too. Most public code happens to be licensed under terms that permit reuse (we hate copyright), but only if derivatives and modifications are also shared in the same manner (because we really hate copyright). Artists are worried about being paid, but programmers are worried about keeping the commons open. The former is easy: OpenAI can offer a rev share for people whose images were in the training set. The latter is far harder, because OpenAI's business model is charging people for access to the AI. We don't want to be paid, we want OpenAI to not be paid.
Also, the assumption that "art is more difficult than computer programming" is itself hilariously devoid of empathy. For every junior programmer crudely duct-taping code together, you have a person drawing MS Paint fanart on their DeviantART page. The two fields test different skills and you cannot just say one is harder than the other. Furthermore, the consequences are different here. If art is bad, it's bad[0] and people potentially lose money; but if code is bad, it gets hacked or kills people.
[0] I am intentionally not going to mention the concerns Stability AI has with people generating CSAM with AI art generators. That's an entirely different can of worms.
[1] Revenge can itself be thought of as a second-order self-preservation strategy (i.e. you hurt me, so I'd better hurt you so that you can't hurt me twice).
The thing with programming is that it either works or does not work, but there is a huge window of what can be called art.
With no training, I, or even a 1 year old, could make something and call it art. I wouldn't claim it's very good but I think most people would accept it as art. The same cannot be said for programming.
Coders have been using "AI" for ages. You used to write assembly by hand; then you got a compiler that you could just instruct to generate the code for you. I don't worry about my job, even though a single prompt to a REPL can now replace thousands of hand-crafted machine instructions.
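A minimal sketch of what I mean (a made-up function, but the division of labor is exactly this): the few lines of C below stand in for the register allocation, addressing modes, and loop bookkeeping an assembly programmer once wrote out by hand.

    /* The compiler, not the programmer, decides the registers,
       the addressing modes, and whether to unroll or vectorize
       this loop -- work that used to be done by hand. */
    long sum(const long *xs, int n) {
        long total = 0;
        for (int i = 0; i < n; i++)
            total += xs[i];
        return total;
    }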
The thing is, empathy doesn't really do anything. Pandora's Box is open and there's no effective way of shutting it that is more than a hopeful dream. Stopping technology is like every doomed effort that has existed to stop capitalism.
"so I decided to do something easier and became a computer programmer"
Get a grip, me old fruit. You've basically described "growing up". The world is a pretty wild place and you need to find your niche or not (rinse/repeat). You are not a failed artist at all. You probed at something, "had a dabble" if you like, and it didn't work out. Never mind. Move on and try something else, but keep your interest in mind.
There are loads of professions that I'd like to have done, but as it turns out I'm me and that's who I am. Personally speaking, I'm the MD of a little IT firm in the UK who can fiddle up a decent 3-2-1 concrete mix and do fairly decent first- and second-fix woodwork. I studied Civ Eng.
"The lack of empathy" - really?
If you fancy your chances as an artist then go for it. At worst you will fulfill your ambition and create some daubs. At best, you will traverse reality and be a wealthy living artist.
> If you fancy your chances as an artist then go for it. At worst you will fulfill your ambition and create some daubs. At best, you will traverse reality and be a wealthy living artist.
With AI art gradually improving, I think that line of reasoning will convince fewer and fewer people who would otherwise have second thoughts. They'll spend a couple of hours on Midjourney and decide that's as far as they want to take their "art" hobby. The power of instant gratification will convince many faster than spending hundreds of hours honing a craft would.
I think in the future a lot of people's gut reaction to failing as a manual artist will be to retreat to Midjourney or similar to satisfy their remaining desire to have creative work they can call their own instead of trying again. I personally find the near-instant feedback loop very addicting, and I think it will have a similar effect to social platforms in normalizing a desire for quick results over the patience needed to hone a craft.
But as opposed to scrolling newsfeeds for hours, at least the user obtains a creative output through generative art, and it doesn't carry the same type of guilt for me. This kind of thing is unprecedented and I don't look forward to how it will polarize the various communities involved in the coming years.
My primary empathy is with end users, who could be empowered with AI-based tools to express their dreams and create graphics or software without needing to pay professional artists or programmers.
What I find interesting is how people literally cannot see any alternative besides, "This is just the way capitalism works", which implicitly acknowledges "capitalism is the only way it can work".
"Observing humans under capitalism and concluding it's only in our nature to be greedy is like observing humans under water and concluding it's only in our nature to drown."
Neither artists nor programmers should be afraid. "AI's" are nice tools, but I think they are very far off from being able to navigate office politics, client relationships, etc.
Sorry, I have no reason to be afraid of AI taking my job, not now, not ever. You seem to have a condescending idea of what programming is, given how you describe it as simple and dumb, but I can assure you, programming would be one of the last jobs to be deprecated by AI. If you think ChatGPT is enough to put programmers on the street, I would question what kind of programming you do.
I would turn this around on you: if a braindead AI can produce this astonishingly difficult art, maybe art was never difficult to begin with, and artists are merely finagling dumb, simple things into their work. Sounds annoying and condescending, right? If you disagree with what I said about art, maybe you ought to be more aware of your own lack of empathy.
>it seems to me that most computer programmers should be just as afraid as artists
That is absurd. Sure, some basic AI tools have been helpful, like Copilot, and it's sometimes really impressive how it can help me autofill some code instead of typing it out... but come on, there is no way we are anywhere close to AI replacing 99.99% of developers.
>making art is vastly more difficult than the huge majority of computer programming that is done
I don't know... art is "easy" in the sense that we all know what art looks like. You want a picture of a man holding a cup with a baby raven in it? I can picture that in my head to some degree right away, and then it's just "doing the process" to draw it in some way using shapes we know.
How in the heck can you correlate that to 99% of business applications? Most of the time no one even knows exactly what they want out of a project, so first there is the massive amount of constant change just from using the thing. Then there is the way the code itself is created. Let's even say you could tell it "Make me an Angular website with two pages and live chat functionality" and it worked. Well, OK, great, it got you a starting template. But first, maybe the code is so weird or unintuitive that it's almost impossible to keep building upon - not helpful. Now let's say it is "decent enough" - well, fine, then it's almost like an advanced co-pilot at this point. It helps with boring boilerplate templates.
But comparing this all to art is still just ridiculous. Again, everyone can look at a picture and say "this is what I wanted" or "this is not what I wanted at all". Development is so crazy intricate that it's nothing like art. I could look at two websites (similar to art) and say "these look the same", but under the hood they could be a million times different in functionality, how they work, how well they're structured to evolve over time, etc. But if I look at two pictures that look exactly the same, I don't care how they got there or how they were created - they're done and exactly the same. Not true of development in 99% of cases.
This comment is downvoted, but it makes an important point. AI systems that produce an outcome that can be easily verified by non-experts are far more practical. If my mom can get an illustration she wants out of the AI, she is done. Not so for software, where she cannot really verify that it's going to reliably do what was specified.
This is especially true for complex pieces.
If an AI could produce a world-class, totally amazing illustration, or even a book, I can easily view or read it afterwards.
On the other hand, real-world software systems consist of hundreds of thousands of lines across distributed services. How would a layman really judge whether they work?
Nevertheless, I also expect AI to have a big impact, since fewer engineers will be able to do much more.
Let's just start a “no AI” movement. Humans appreciate effort: when everyone could become a DJ, it just got boring, so some people (the exact people you want at your show) will start looking for human effort once there is so much AI art that it all becomes dull.
Societies that choose to do this kind of thing would very quickly fall behind those that do not. One may argue that it doesn't matter so long as it leads to a comfortable life, but the problem is that one day you'll wake up to the sound of Commodore Perry's guns.
It's not that terrifying. The way these models work, they aren't really creating new works, just taking existing ones and basically copying them. Honestly, new copyright laws will have to be made; I wonder when that will happen, and how the judicial systems of the world will deal with it. Or whether big tech has deep enough pockets to pretend it isn't an issue.
Here are just a few of thousands in the vein of number 2:
> No talent or passion whatsoever
> He thinks he created something
> Why don't you subscribe to writing and art classes?
> This so ugly and shows real disrespect for people who have made stuff by themselves for years.
> Men will literally sell AI trash and call it "art" instead of go to therapy
> Can’t write or draw but wants to do both
> This is nothing but a HUGE disrespect to all the writers and artists around the world, and all it does is belittle their REAL work and effort.
>
> This is not art.
> Nothing to be proud of.
> I just spent 8 months illustrating a children’s book by hand—working, not “playing”—after a lifetime of training.
>
> FUCK OFF!
There are also plenty of people complaining about "theft", but honestly, re-reading through it now, they feel like a minority. If this had been done using fully public-domain content, does it sound like any of the people I quoted above would have been okay with it?
There's a clear disdain for "non-artists" creating art in a new way. I very much feel for the people who see their careers going away, and I can also empathize with people who spent a long time acquiring a creative skill that's now "unnecessary". Programming has this too - those darn kids programming in Python rather than Assembly, or doing bootcamps that don't teach big-O notation. This is a normal, human way to feel, and I feel it too from time to time. BUT, I also resist that feeling. I choose not to express disdain for newcomers using new technology, or skipping the old ways.
A large (or at least loud) part of the art community seen here is expressing absolute disdain for those of us who are "cheating" not because "copyright infringement" but because we're using new technology that bypasses years of learning and that's very much eating into my empathy for the community in general. I find it toxic in the programming community and I find it toxic in the art community. Right now, it's exploding in the art community in a way far beyond what I've witnessed in programming.
I want to apologize in advance if my response here seems callous considering your personal experience as an artist. I'm trying to talk about AI and labor in general here, and don't mean to minimize your personal experience.
That said, I don't think AI's ability to generate art is a major milestone in the progress of things; I think it's more of the same: automating low-value-add processes.
I agree that AI is/will-be an incredibly disruptive technology. And that automation in general is putting more and more people out of jobs, and extrapolated forward you end up in a world where most humans don't have any practical work to do other than breed and consume resources at ever increasing rates.
As much as I'm impressed by AI art (it's gorgeous), at the end of the day it's mainly just copying/pasting/smoothing objects it has seen before (the training set). We don't think of it as clipart, but that's essentially what it is underneath it all, just a new form of clipart. Amazing in its ability to reposition, adjust, and smooth images, with some sense of artistic placement, etc. It's lightyears beyond where clipart started (small vector and bitmap libraries). But at the end of the day it's just automating the creation of images using clipart. Rearranging images you've seen before is not going to make anyone big $$$. At the end of the day the quality of the output is entirely subjective; just about anything reasonable will do.
This reminds me a lot of GPT-3... looks like it has substance but not really. GPT-3 is great at making low value clickbait articles of cut-and-paste information on your favorite band or celebrity. GPT-3 will never be able to do the job of a real journalist, pulling pieces together to identify and expose deeper truths, to say, uncover the Theranos fraud. It's just Eliza [1] on steroids.
The AI parlor tricks started with Eliza, and have gotten quite elaborate as of late. But they're still just parlor tricks.
Comparing it to the challenges of programming, well yes I agree AI will automate portions of it, but with major caveats.
A lot of what people call "programming" today is really just plumbing. I'm a career embedded real-time firmware engineer, and it continues to astonish me that there's an entire generation of young "programmers" who don't understand basic computing principles: stacks, interrupts, I/O operations. At the end of the day, their knowledge base seems to consist of knowing which tool to use where in an orchestration, and how to plumb it all together. And if they don't know the answer, they simply google it and Stack Overflow will tell them. Low code, no code, etc. (Python is perfect for quickly plumbing two systems together). This skill set is very limited and wouldn't even have gotten you a junior dev position when I started out. I'm not surprised it's easy to automate, as it will generally produce the same quality of code (and make the same mistakes) as a human dev who simply copies and pastes Stack Overflow solutions.
This is in stark contrast to the types of problems that most programmers used to solve in the old days (and a smaller number still do). Stuff that needed an engineering degree and complex problem solving skills. But when I started out 30 years ago, "programmers" and "software engineers" were essentially the same thing. They aren't now, there is a world of difference between your average programmer and a true software engineer today.
Not saying plumbers aren't valuable. They absolutely are, as more and more of the modern world is built on plumbing things together. Highly skilled software engineers are needed less and less, and that's a net-good thing for humanity. No one needs to write operating systems anymore; let's add value building on top of them. Those are the people making the big $$$; their skillset is quite valuable. We're in the middle of a bifurcation of software engineering careers. More and more positions will only require limited skills, and fewer and fewer (as a percentage) will continue to be highly skilled.
So is AI going to come in and help automate the plumbing? Heck yes, and rightly so... They've automated call centers, warehouse logistics, click-bait article writing, carry-out order taking, the list goes on and on. I'd love to have an AI plumber I could trust to do most of the low-level work right (and in CI/CD world you can just push out a fix if you missed something).
I don't believe for a second that today's latest and greatest "cutting edge" AI will ever be able to solve the hard problems that keep highly skilled people employed. New breakthroughs are needed, but I'm extremely skeptical. Like fusion promises, general purpose AI always seems just a decade or two away. Skilled labor is safe, for now.. maybe for a while yet.
The real problem, as I see it, is that AI automation is on course to eliminate most low-skilled jobs in the next century, which puts it on a collision course with the fact that most humans aren't capable of performing highly skilled work (half are below average by definition). A single parent working the GM line in the '50s could afford an average family a decent life. Not so much where technology is going. At the end of the day, the average human will have little to contribute to civilization, but will still expect to eat and breed.
Universal basic income has been touted as a solution to the coming crisis, but all that does is kick the can down the road. It leads to a world of too much idle time (and the devil will find work for idle hands) and ever-growing resource consumption. A perfect storm... At the end of the day, what's the point of existing when all you do is consume everything around you and don't add any value? Maybe that's someone's idea of utopia, but not mine.
This has been coming for a long time, AI art is just a small step on the current journey, not a big breakthrough but a new application in automation.
> entire generation of young "programmers" who don't understand basic computing principles, stacks, interrupts, I/O operations
Why would software engineers who work on web apps, Kubernetes, and the Internet in general need to understand interrupts? Not only will they never deal with any of that, they aren't supposed to. All of that has been automated away so that what we call the Internet can be possible.
All of that stuff turned into specializations as the tech world progressed and the ecosystem grew. A software engineer specializing in hardware needs to know interrupts but doesn't need to know how to do devops; for the software engineer who works on Internet apps, it's the opposite.
I'm not dissing cloud engineering. I've learned enough to really respect the architects behind these large-scale systems.
My point was about skill level, not specialization. Specialization is great.. we can build bigger and bigger things not having to engineer/understand what's beneath everything. We stand on the shoulders of giants as they say.
And I agree, there is no one job specialization that's more valuable than another. It's contextual. If you have a legal problem, a specialized lawyer is more valuable than a specialized doctor. So yeah, I agree that if you have a cloud problem, you want a cloud engineer and not a firmware engineer. Although I should add that things like interrupts/events/synchronization and I/O operations are fairly universal computing concepts, even in the cloud world. If you're a cloud programmer and you don't know how long an operation takes (its big-O complexity), how much storage it uses, its persistence, etc., you're probably going to have some explaining to do when your company gets next month's AWS bill.
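To make that concrete, here's a hypothetical sketch (the function names are made up): both versions "work" and return the same answer, and a test suite can't tell them apart, but only one of them shows up on the bill once the input grows.

    #include <string.h>

    /* O(n^2): strlen() rescans the whole buffer on every
       iteration. Harmless in a unit test, expensive on a
       multi-gigabyte export. */
    size_t count_commas_slow(const char *s) {
        size_t count = 0;
        for (size_t i = 0; i < strlen(s); i++)
            if (s[i] == ',') count++;
        return count;
    }

    /* O(n): a single pass, same answer, a fraction of the
       CPU time. */
    size_t count_commas_fast(const char *s) {
        size_t count = 0;
        for (; *s; s++)
            if (*s == ',') count++;
        return count;
    }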
And yes plumbing is useful! Someone has to hook stuff up that needs hooking up! But which task requires more skill; the person that designs a good water flow valve, or the person hooking one up? I'd argue the person designing the valve needs to be more skilled (they certainly need more schooling). The average plumber can't design a good flow valve, while the average non-plumber can fix a leaky sink.
AI is eating unskilled/low-skill work. In the '80s, production-line workers were afraid of robots. Well, here we are. No more pools of typists; automated call centers handling huge volumes of people; dark factories.
It's a terrible time to be an artist if AI can clipart compose images of the same quality much faster than you can draw by hand.
Back to the original comment: I'm merely suggesting that some programming jobs require a lot more skill than others. If software plumbing is easy, then it can and will be automated. If that were the only skill I possessed, I'd be worried about my job.
Like fusion, I just don't see general purpose AI being a thing in my lifetime. For highly skilled programmers, it's going to be a lot longer before they're replaced.
Welcome to our digital future. It's very stressful for the average skilled human.
> My point was about skill level, not specialization
I fail to see the skill in someone working on the web knowing about interrupts, or in a firmware engineer knowing about devops, integrations, or React.
> Although I should add that things like interrupts/events/synchronization and I/O operations are fairly universal computing concepts even in the cloud world
Not really. I/O has nothing to do with the cloud, and likewise interrupts. Those remain buried way, way down in the hardware that runs the cloud, at a level that not even datacenter engineers touch.
> If you're a cloud programmer and you don't know how long an operation takes / its big-O complexity
That still has nothing to do with interrupts or hardware I/O.
> I fail to see the skill level in someone working on the web knowing about interrupts
As a firmware/app guy, I'm not qualified to talk about the relative skill sets between different areas of cloud development. I agree that interrupts/threads aren't important at all to the person writing a web interface; I should have found a better example. I'm not here to argue; for sure there are talented people up and down the stack.
What I can tell you is that I'm amazed at the mistakes I see this new generation of junior programmers making, the kind of stuff indicating they have little understanding of how computers actually work.
As an example, I continue to run into young devs who don't have any idea what numeric over/underflow is. We do a lot of IoT and edge computing, so the ranges, limits, and sizes of the data being passed around matter a lot. Attempting to explain the concept reveals that a great many of them have no mental concept of how a computer even holds a number (let alone different variable sizes, types, signed/unsigned, etc.). When you explain that variables are a fixed size and don't have unlimited range, it's a revelation to many of them.
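A minimal illustration of the concept (hypothetical values, but representative of the kind of sensor code we deal with):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* A 16-bit unsigned counter, typical of small MCUs and
           compact wire formats: it can only hold 0..65535. */
        uint16_t packet_count = 65530;

        for (int i = 0; i < 10; i++)
            packet_count++;   /* silently wraps past 65535 to 0 */

        /* Prints 4, not 65540 -- the value wrapped around. */
        printf("%u\n", (unsigned)packet_count);
        return 0;
    }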
Sometimes they'll argue that this stuff doesn't matter, even as you're showing them the error in their code. They feel the problem is that the other devs built it wrong, chose the wrong language or tool for the problem at hand, etc. We had a dev (he wrote test scripts) who would argue with his boss that everyone (including the app and firmware teams) should ditch their languages and write everything in Python, where mistakes can't be made. He was dead serious, and ended up quitting out of frustration. I'm sure that was a personality problem, but still, the lack of basic understanding astounded us, and the phrase "knows enough to be dangerous" comes to mind.
I find it strange that there is a new type of programmer that knows very little about how computers actually work. I find it stranger that they are even a bit productive in their careers, although I suspect it's because the problem domains they work in are much more tolerant to these kinds of errors. CI/CD system is setup to catch/fix their problems, and hence the job positions can tolerate what used to be considered a below average programmer. Efficient? No. Good enough? Sure.
I suspect some of these positions can be automated before the others can.
> What I can tell you is that I'm amazed at the mistakes I see this new generation of junior programmers making, the kind of stuff indicating they have little understanding of how computers actually work.
> As an example, I continue to run into young devs who don't have any idea what numeric over/underflow is
That doesn't happen in web application development either. You don't write code at a low enough level to cause an overflow or underflow. There are a zillion layers between your code and anything that could cause one.
> they have little understanding of how computers actually work.
'The computer' has been abstracted away at the level of the Internet. Not even the experts who tend datacenters would ever come near anything related to a numeric overflow. That stuff is hidden deep inside the hardware, or deep in the software stack near the OS level of any given system. If anything causes an overflow in such a machine, they replace the machine instead of debugging it; it's the hardware manufacturers' and OS developers' responsibility to deal with that. No company that does cloud or develops apps on the Internet needs to know about interrupts, numeric overflows, and whatnot.
> I find it stranger that they are even a bit productive in their careers, although I suspect it's because the problem domains they work in are much more tolerant to these kinds of errors
Interrupt errors don't happen in web development. You have no idea of the level of abstraction that has been built between the layers where that could happen and modern Internet apps. We are even abstracting away servers and databases at this point.
You are applying a hardware perspective to the Internet. That's not applicable.
I agree with everything you're saying. It surprises me that people can call themselves programmers and not know the basics of computer computation, but it seems that just means I have an older, narrower definition of what "programming" is compared to what it has become.
I still stand behind my main point, which is that some of these jobs will be automated before others. Apparently the skill-set differences between different kinds of programmers are even wider than I thought. So instead of talking about whether AI will or won't automate programming in general, it's more productive to discuss which kinds of programming AI will automate first.
> narrow definition of what "programming" is compared to what it has become.
Isn't that the case in every field of technology? Way back, engineers used to know how circuits worked; now network engineers never deal with actual circuits themselves. Way back, programmers had to do a lot of things manually; now the underlying stack automates much of that. On top of TCP/IP we laid the WWW, then we laid web apps, then we laid CMSes, and then we reached the point where CMSes like WordPress have their own plugins, and the very INDIVIDUAL plugins themselves became fields of expertise. When looking for someone to work on a WooCommerce store, people don't look for WordPress developers or plugin developers; they look for 'WooCommerce developers'. WP became so big that every facet of it became a specialization in itself.
Same for everything else in tech: we create a technology, which enables people to build stuff on it; then people build so much stuff that each of those things becomes an individual world in itself. Then people standardize that layer and move on to building the next level up. It goes infinitely upwards.
I have crossed over the other direction from coding to drawing and suspect that neither side understands their craft well enough to assess what'll happen.
Most of coding is routine patterns that are only perceived as complex because of the presence of other coders and the need to "talk" with them, which creates a need for reference materials (common protocols, documentation, etc.).
Likewise, most of painting is routine patterns complicated by a mix of human intent(what's actually communicated) and the need for reference materials to make the image representational.
Advancements in Western painting between the Renaissance and the invention of photography track with developments in optics; the Hockney-Falco thesis is the "strong" version of this, asserting that specific elements in historical paintings had to have come through the use of optical projections, not through the artist's eyes. A weaker form of this would say that the optics were tools for study and development of the artist's eye, but not always the go-to tool, especially not early on when their quality was not good.
Coding has been around for a much shorter time, but mostly operates on the assumptions of bureaucracy: that information is something that can be modelled, sorted, and searched. And the need for more code exists relative to having more categories of modelled data.
Art already faced its first crisis of purpose with the combination of photography and mass reproduction. Photos produced a high level of realism, and as it became cheaper to copy and print them, the artist moved from a necessary role towards a specialist one - an "illustrator" or "fine artist".
What an AI can do - given appropriate training, prompt interfaces, and a supplementary ability to test and validate its output - is produce a routine result in a fraction of the time. This means it can sidestep the bureaucratic mode entirely in many circumstances and be instructed "more of this, less of that" - which produces features like spam filters and engagement-based algorithms, but also means that entire protocols are reduced to output data if the AI is a sufficiently good compiler. If you can tell the AI what you want the layout to look like and it produces the necessary CSS, then CSS becomes more of a commodity. You can just draw a thing, possibly add some tagging structure, and use that as the compiler's input. Visual coding.
But that makes the role a specialized one; nobody needs a "code monkey" for such a task, they need a graphic designer...which is an arts job.
That is, the counterpoint to "structured, symbolic prompts generating visual data" is "visual prompts generating structured, symbolic data". ML can be structured in either direction, it just takes thoughtful engineering. And if the result is a slightly glitchy web site, it's an acceptable tradeoff.
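To make the "CSS as commodity output" idea concrete, here's a toy sketch (every name here is made up, not any real product): pretend the tagging structure extracted from a drawn layout already exists as structured data. "Compiling" that down to CSS is then the trivial half; the hard ML part is producing the spec from the drawing in the first place.

    # Toy sketch (hypothetical spec format): a tagged layout, as if
    # extracted from a drawing by some model, compiled straight to CSS.
    layout = {
        "header":  {"height": "4rem", "background": "#222"},
        "sidebar": {"width": "16rem", "float": "left"},
        "main":    {"margin-left": "17rem"},
    }

    def to_css(spec):
        rules = []
        for selector, props in spec.items():
            body = "; ".join(f"{key}: {value}" for key, value in props.items())
            rules.append(f".{selector} {{ {body}; }}")
        return "\n".join(rules)

    print(to_css(layout))  # .header { height: 4rem; background: #222; } ...

Once the front half is reliable, the back half really is just output data, which is the whole point above.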
Either way, we've got a pile of old careers on their way out and new careers replacing them.
Forgive me but I would be lucky to have artists saying anything, positive or negative, about my way of life. Being knowledgeable in something critically studied is very rewarding. You are forsaking opportunity if I dare say so.
I don't understand your response; maybe I should clarify my comment. What I'm saying is that there has historically been a fair amount of animosity and mean-hearted banter between engineer types and artistic types. Particularly, artists sharing and promoting negative stereotypes about engineers: claims that engineers are antisocial, can't design interfaces for 'real people', etc. Now that the fruit of engineering labor has threatened artists, it doesn't surprise me that engineers have little sympathy for the artists.
Indeed. If the AI had agency in this matter, I'd say it's doing divide and conquer against those who might oppose it. But really it's just an unfortunate circumstance.
I don't see the point. There is copyright (and in that regard most of these images are fine), and then there is trademark, which they might violate.
Regardless, the human generating and publishing these images is obviously responsible for ensuring they are not violating any IP. So they might get sued by Disney. I don't get why the AI companies would be affected in any way. Disney is not suing Blender if I render an image of Mickey Mouse with it.
Though I am sure that artists might find a likely ally in Disney against the "AI"s when they tell them about their idea of making art styles copyrightable. Being able to monopolize art styles would indeed be a dream come true for those huge corporations.
If those mouse images are generated, that implies that Disney content is already part of the training data and models.
So in effect, they are pitting Disney's understanding of copyright (maximally strict) against that of the AI companies (maximally loose).
Even if it's technically the responsibility of the user not to publish generated images that contain copyrighted content, I can't imagine that Disney is very happy with a situation where everyone can download Stable Diffusion and generate their own arbitrary artwork of Disney characters in a few minutes.
So that strategy might actually work. I wish them good luck and will restock my popcorn reserves just in case :)
The problem I see though is that both sides are billion dollar companies - and there is probably a lot of interest in AI tech within Disney themselves. So it might just as well happen that both sides find some kind of agreement that's beneficial for both of them and leaves the artists holding the bag.
This is a bit silly, though? Search Google Images for Mickey Mouse: is the results page a possible liability for Google? Why not?
Go to a baker and commission a Mickey Mouse cake. Is that a violation if the bakery didn't advertise it? (To note, a bakery can't advertise it due to trademark, not copyright. Right?)
For that matter, any privately commissioned art? Is that really what artists want to lock away?
I mean, isn't most of that "It's trademark infringement, but it is both financially tedious and a PR disaster to go after any but the most prominent cases"?
Which is why e.g. Bethesda is not going to slap you for your Mr House or Pip-Boy fanart, but will slap the projects that recreate Fallout 3 in engine X.
The tables turn when it's not just some fans doing it, which takes time and effort. AI generated images can be pumped out by the thousands, and big companies are behind these services. See the problem?
>is the results page a possible liability for Google?
That's actually a tricky question, and lengthy court battles were fought over this in both the US and Europe. In the end, all courts decided that the image results page is questionable when it comes to copyright, but generally covered by fair use. The question is how far fair use goes when people are using the data in derivative work. Google specifically added licensing info about images to further cover their back, but this whole fair use stuff gets really murky when you have automatic scrapers using Google Images to train AIs, which in turn create art for sale eventually. There are a lot of actors in that process who profit indirectly from the provided images. This will probably once again fall back to the courts sooner or later.
Not a lawyer, but from how I understand it the German courts argued that if you don't use any technology to prevent web crawlers from accessing the pictures on your website you need to accept that they are used for preview images (what the Google picture search technically is) as this is a usual use case.
Fair use is just a limitation of copyright in case of public interest. Europe has very similar exclusions, even though they are spelled out more concretely. But they don't make this particular issue any less opaque.
The right to citation is already part of the 1886 Berne Convention, a precedent that enables services like Google images.
The matters of the baker and the privately commissioned art are more complicated. The artist and baker hold copyright for their creation, but their products are also derived from copyrighted work, so Disney also has rights here [1]. This is just usually not enforced by copyright holders, because who in their right mind would punish free marketing.
There's nothing wrong with the model knowing what Mickey Mouse looks like.
There are non-infringing use cases for generating images containing Mickey Mouse - not least, Disney themselves produce thousands of images containing the mouse's likeness every year; but parody use cases exist too.
But even if you are just using SD to generate images, if we want to make sure to avoid treading on Disney's toes, the AI would need to know what Mickey Mouse looks like in order to avoid infringing trademark, too. You can already feed it negative weights if you want to get 'cartoon mouse' but not have it look like Mickey.
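For anyone who hasn't tried it, that negative-weight steering is already a one-liner in the open-source tooling. A minimal sketch with the Hugging Face diffusers library (the checkpoint name is just the common SD 1.5 one; adjust to taste, and note the default pipeline is slow without a GPU):

    # Ask for a generic cartoon mouse while pushing the sampler
    # away from the trademarked look via the negative prompt.
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    pipe = pipe.to("cuda")  # assumes a CUDA GPU is available

    image = pipe(
        prompt="a cheerful cartoon mouse with big ears, red shorts, 2D animation style",
        negative_prompt="Mickey Mouse, Disney",  # steer away from the specific IP
    ).images[0]
    image.save("generic_mouse.png")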
The AI draws what you tell it to draw. You get to choose whether or not to publish the result (the AI doesn't automatically share its results with the world). You have the ultimate liability and credit for any images so produced.
Not a lawyer (and certainly no disney lawyer), but my understanding was that copyright is specifically concerned with how an image is created, less so that it is created. Which is why you can copyright certain recordings that only consist of silence. It just prevents you from using this record to base your own record of silence on, it doesn't generally block you from recording silence.
In the same way, making the model deliberately unable to generate Mickey Mouse images would be much more far-reaching than just removing Mickey imagery from the training set.
Most Mickey Mouse image usage problems will be trademark infringement not copyright.
Copyright infringement does generally require you to have been aware of the work you were copying. So there's certainly an issue with using AI to generate art: you could use the tool to generate an image that you think looks original, because you are unaware of a similar original work, and so you could not be guilty of copyright infringement - but if the AI model was trained on a dataset that includes a similar original copyrighted work, it obviously seems like someone has infringed something there.
But that's not what we're talking about in the case of mickey mouse imagery, is it? You're not asking for images of 'utterly original uncopyrighted untrademarked cartoon mouse with big ears' and then unknowingly publishing a mouse picture that the evil AI copied from Disney without your knowledge.
> But that's not what we're talking about in the case of mickey mouse imagery, is it? You're not asking for images of 'utterly original uncopyrighted untrademarked cartoon mouse with big ears' and then unknowingly publishing a mouse picture that the evil AI copied from Disney without your knowledge.
I think this is exactly the problem that many artists have with image generators. Yes, we could all easily identify whether a generated artwork contained popular Disney characters - but that's because it's Disney, owner of some of the most well-known IP in the world. The same isn't true for small artists: there is a real risk that a model reproduces parts of a lesser-known copyrighted work and the user doesn't realise it.
I think this is what artists are protesting: Their works have been used as training data and will now be parts of countless generated images, all with no permission and no compensation.
You can search the LAION-5B CLIP space and you find a lot of Mickey in it, lots of fan art in between photos of actual merch. If you search with a high aesthetic score, you'll find lots of actual Disney illustrations etc. in the neighbourhood. [0]
> If those mouse images are generated, that implies that Disney content is already part of the training data and models.
It doesn't mean that. You could "find" Mickey in the latent space of any model using textual inversion and an hour of GPU time. He's just a few shapes.
(Main example: the most popular artist StableDiffusion 1 users like to imitate is not in the StableDiffusion training images. His name just happens to work in prompts by coincidence.)
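For the curious, textual inversion really is surprisingly little machinery. A conceptual sketch (the helper names like add_noise, encode_prompt_with, and frozen_unet are made up; a real run would use something like the diffusers training script):

    # Conceptual textual inversion: the diffusion model stays frozen and
    # only a single new token embedding is optimized, until prompts that
    # contain the token reproduce the target concept.
    import torch

    embedding = torch.randn(768, requires_grad=True)   # the new "<concept>" token
    opt = torch.optim.Adam([embedding], lr=5e-3)

    for step in range(3000):
        img = sample_target_image()            # e.g. a few pictures of the mouse
        t = torch.randint(0, 1000, (1,))       # random diffusion timestep
        noise = torch.randn_like(img)
        noisy = add_noise(img, noise, t)       # standard forward diffusion
        cond = encode_prompt_with(embedding)   # splice the new token into a prompt
        pred = frozen_unet(noisy, t, cond)     # frozen model predicts the noise
        loss = torch.nn.functional.mse_loss(pred, noise)
        opt.zero_grad(); loss.backward(); opt.step()

The point being: nothing about the base model changes, so "he's in the model" and "he's in the training data" are not the same claim.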
How do you get that coincidence? To be able to accurately respond to the cue of an artist's name, it has to know the artist, doesn't it?
In any case, in the example images here, the AI clearly knew who Mickey is and used that to generate Mickey Mouse images. Mickey has got to be in the training data.
For other artist cases, the corpus can include many images whose descriptions contain phrases like "inspired by Banksy". Then the model can learn to generate images in the style of Banksy without having any copyrighted images by Banksy in the training set.
The Mickey Mouse case though is obviously bs, the training data definitely does just have tons of infringing examples of Mickey Mouse, it didn't somehow reinvent the exact image of him from first principles.
If you can find a copyrighted work in that model that wasn't put there with permission, then why would that model and its output not violate the copyright?
Sure, but this isn't philosophy. An AI model that contains every image is a copyright derivative of all those images and so is the output generated from it. It's not an abstract concept or a human brain. It's a pile of real binary data generated from real input.
StableDiffusion is 4GB which is approximately two bytes per training image. That's not very derivative, it's actual generalization.
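Back-of-envelope, for anyone who wants to check it (the dataset size is a round LAION-scale figure, so treat it as order of magnitude only):

    # Bytes of checkpoint per training image.
    model_bytes = 4 * 1024**3             # ~4 GB checkpoint
    training_images = 2_000_000_000       # LAION-scale, order of magnitude
    print(model_bytes / training_images)  # ~2.1 bytes per image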
"Mickey" does work as a prompt, but if they took that word out of the text encoder he'd still be there in the latent space, and it's not hard to find a way to construct him out of a few circles and a pair of red shorts.
The idea behind that is probably that any artist learns from seeing other artists' copyrighted art, even if they're not allowed to reproduce it. This is easily seen from the fact that art goes through fashions; artists copy styles and ideas from each other and expand on that.
Of course that probably means that those copyrighted images exist in some encoded form in the data or neural network of the AI, and also in our brain. Is that legal? With humans it's unavoidable, but that doesn't have to mean that it's also legal for AI. But even if those copyrighted images exist in some form in our brains, we know not to reproduce them and pass them off as original. The AI does that. Maybe it needs a feedback mechanism to ensure its generated images don't look too much like copyrighted images from its data set. Maybe art-AI necessarily also has to become a bit of a legal-AI.
Among the goals seems to be a bit of well-poisoning. Artists have done this previously by creating art saying, say, "This site sells STOLEN artwork, do NOT buy from them", and encouraging followers to reply with "I want this on a t-shirt", which had previously been used by rip-off sites to pirate artwork. See:
If art streams are tree-spiked with copyrighted or trademarked works, then AI generators might be a bit more gun-shy about training with abandon on such streams.
Not sure about Stable Diffusion / Metawhatsit, but OpenAI's training set is already curated to make sure it avoids violence and pornography; and in any case, the whole thing relies on humans to come up with descriptions. Not clear how this sort of thing would "spike the well" in that sense.
These AI models are closer to Google in that regard: yes, you can instruct them to generate a Mickey Mouse image, but you can instruct them to generate any kind of image, just like you can search for anything on Google, including Mickey Mouse. When using these models you are essentially performing a search in the model weights.
Ehhh that’s like saying an artist who studies other art pieces and then creates something using combined techniques and styles from those set pieces is what ???? Now liable ???
An AI is not a person. Automated transformation does not remove the original copyright, otherwise decompilers would as well. That the process is similar to a real person is not actually important, because it's still an automated transformation by a computer program.
We might be able to argue that the computer program taking art as input and automatically generating art as output is the exact same as an artist some time after general intelligence is reached, until then, it's still a machine transformation and should be treated as such.
AI shouldn't be a legal avenue for copyright laundering.
Now we are in Ship of Theseus territory. If I downsample an image and convert it into a tiny delta in the model weights, from which the original image can never be recovered, is that infringement?
Except the machine is not automatically generating the output.
> automatically generating art as output
The user is navigating the latent space to obtain said output. I don't know if that's transformative or not, but it is an important distinction.
If the program were wholly automated - as in, it had a random number/word generator added to it and no navigation of the latent space by users happened - then yeah, I would agree, but that's not the case, at least so far as ML systems like Midjourney or Stable Diffusion are concerned.
That's still automated in the same way that a compiler is automated. A compiler doesn't remove the copyright, neither does a decompiler. This isn't different enough to have different copyright rules. There are more layers to the transformation, but it's still a program with input and output.
I'm not sure what you mean by "navigation of latent space". It's generating a model from copyrighted input and then using that model and more input to generate output. It's a machine transformation in more steps.
The output is probably irrelevant here, the model itself is a derivative work from a copyright standpoint.
Going painting > raw photo (derivative work), raw photo > jpg (derivative work), jpg > model (derivative work), model > image (derivative work). At best you can make a fair use argument at that last step, but that falls apart if the resulting images harm the market for the original work.
It's not clear at all whether the model is a derivative work from a copyright standpoint. Maybe it is, maybe it isn't - it's definitely not settled, the law isn't very explicit, and as far as I know there is no reasonable precedent yet - and arguably that would be one of the key issues decided (and set as precedent) in these first court battles. I also wouldn't be surprised if it eventually doesn't matter what current law says, as the major tech companies may lobby to pass a law that explicitly defines the rules of the game; I mean, if Disney could lobby multiple copyright laws into existence to protect their interests, then the ML-heavy tech companies, being much larger and wealthier than Disney, can do it as well.
But currently: first, there is a reasonable argument that the model weights may not be copyrightable at all - they don't really fit the criteria of what copyright law protects, no creativity was used in making them, etc. - in which case they can't be a derivative work and are effectively outside the scope of copyright law. Second, there is a reasonable argument that the model is a collection of facts about copyrighted works, equivalent to early (pre-computer) statistical n-gram language models of copyrighted books used in e.g. lexicography - for which we have solid old legal precedent that such models are not derivative works (again, a collection of facts isn't copyrightable) and thus can be made against the wishes of the authors.
Fair use criteria come into play as conditions under which it is permissible to violate the exclusive rights of the authors. However, if the model is not legally considered a derivative work according to copyright law criteria, then fair use conditions don't matter, because in that case copyright law does not assert that making it is somehow restricted.
Note that in this case the resulting image might still be considered derivative work of an original image, even if the "tool-in-the-middle" is not derivative work.
You seem to be confused as to nomenclature: transformative works are still derivative works. Being sufficiently transformative can allow for a fair use exception; the distinction is important because you can't tell if something is sufficiently transformative without a court case.
Also, a jpg seemingly fits your definition, as "no creativity was used in making them, etc.", but jpgs clearly embody the original work's creativity. Similarly, a model can't be trained on random data; it needs to extract information from its training data to be useful.
The specific choice of algorithm used to extract information doesn’t change if something is derivative.
> Automated transformation does not remove the original copyright
Automated transformation is not guaranteed to remove the original copyright, and for simple transformations it won't, but it's an open question (no legal precedent, different lawyers interpreting the law differently) whether what these models are doing is so transformative that their output (when used normally, not trying to reproduce a specific input image) passes the fair use criteria.
1) the artist is not literally copying the copyrighted pixel data into their "system" for training
2) An individual artist is not a multi-billion-dollar company with a computer system that spits out art rapidly using copyrighted pixel data. A categorical difference.
On 1, human artists are copying copyrighted pixel data into their system for training. That system is the brain. It's organic RAM.
On 2, money shouldn't make a difference. Jim Carrey should still be allowed to paint even though he's rich.
If Jim uses Photoshop instead of brushes, he can spit out the style ideas he's copied and transformed in his brain more rapidly - but he should still be allowed to do it.
A human can grow and learn based on their own experiences separate from their art image input. They'll sometimes get creative and develop their own unique style. Through all analogies, the AI is still a program with input and output. Point 1 doesn't fit for the same reason it doesn't work for any compiler. Until AI can innovate itself and hold its own copyright, it's still a machine transformation.
> On 1, human artists are copying copyrighted pixel data into their system for training. That system is the brain. It's organic RAM.
They probably aren't doing that. Studying the production methods and WIPs is more useful for a human. (ML models basically guess how to make images until they produce one that "looks like" something you show it.)
They do sometimes, or at least they used to. I have some (very limited) visual art training, and one of the things I/we did in class was manually mash up already existing works. In my case I smushed together the Persistence of Memory and the Arnolfini Portrait. It was pretty clearly copycat work; the piece was divided into squares and I poorly replicated the Arnolfini Portrait from square to square.
I think the parent's point about (2) wasn't about money, but category. A human is a human and has rights, an AI model is a tool and does not have rights. The two would not be treated equally under the law in any other circumstances, so why would you equate them when discussing copyright?
Have to disagree with point 1; often this is exactly what artists are doing. More strictly in music (literally playing others' songs), less strictly in drawing. But copying, incorporating, and developing are some of the core foundations of art.
As a human, I can use whatever I want for reference for my drawings. Including copyrighted material.
Now, as for training "AI" models, who knows. You can argue it is the same thing a human is doing or you could argue it a new, different quality and should be under different rules. Regardless, the current copyright laws were written before "AI" models were in widespread use so whatever is allowed or not is more of a historic accident.
So the discussion needs to be about the intention of copyright laws and what SHOULD be.
So, as a human, the individual(s) training the AI or using the AI to reproduce copyrighted material, are responsible for the copyright infringement, unless explicitly authorized by the author(s).
This would be a fairly novel law as it would legislate not just the release of an AI but the training as well? That would imply legislating what linear algebra is legal and illegal to do, no?
And practically speaking, putting aside whether a government should even be able to legislate such things, enforcing such a law would be near impossible without wild privacy violations.
> That would imply legislating what linear algebra is legal and illegal to do, no?
No, it would just legislate which images are and are not in the training data to be parsed; artists want a copyright regime that makes their images unusable for machine-learning derivative works.
The trick here is that eventually the algorithms will get good enough that it won't be necessary for said images to even be in the training data in the first place, but we can imagine that artists would be OK with that.
> but we can imagine that artists would be OK with that
No they won't. If AI art was just as good as it is today, but didn't use copyrighted images in the training set, people would absolutely still be finding some other thing to complain about.
Artists just don't want the tech to exist entirely.
> The trick here is that eventually the algorithms will get good enough that it won't be necessary for said images to even be on the training data in the first place, but we can imagine that artists would be OK with that
They shouldn't be OK with that, and they probably aren't. That's a much worse problem for them!
The complaining about copyright is most likely coping; being obsoleted anyway is what they're actually concerned about.
I am not allowed to print $100 bills with my general-purpose printer. Many printing and copy machines come with built-in safeguards to prevent users from even trying.
It's quite possible to apply the same kind of protections to generative models. (I hope this does not happen, but it is fully possible.)
Entirely different scales apply here. You can hardcode into a printer the 7 different bills each country puts out, no problem, but you cannot hardcode the billions of "original" art pieces that the model is supposed to check against during training; it's just infeasible.
Not exactly true. Given an image, you can find the closest point in the latent space that the image corresponds to. It is totally feasible to do this with every image in the training set, and if that point in the latent space is too close to the training image, just add it to a set of "disallowed" latent points. This wouldn't fly for local generation, as the process would take a long time and generate a multi-gigabyte (maybe even terabyte) "disallowed" database, but for online image generators it's not insane.
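A sketch of what that server-side filter could look like (everything here is hypothetical: encode_to_latent and protected_training_images stand in for whatever encoder and dataset the service actually uses):

    # Hypothetical filter: refuse outputs whose latent is too close to
    # the latent of any training image flagged as protected.
    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Built once, offline: encode every protected training image.
    disallowed = [encode_to_latent(img) for img in protected_training_images]

    def is_blocked(candidate_image, threshold=0.95):
        z = encode_to_latent(candidate_image)
        return any(cosine(z, d) > threshold for d in disallowed)

The naive linear scan over billions of latents is exactly the cost mentioned above; a real deployment would want an approximate nearest-neighbour index instead.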
If you do need permission, is PageRank a copyright-infringing AI, or just a sparkling matrix multiplication derived entirely from everyone else's work?
A bulldozer destroys the park and other people's ability to enjoy it -- active, destructive.
Passively training a model on an artwork does not change the art in the slightest -- passive, non-destructive
Mind you, this is not talking about the usage rights of images generated from such a model, that's a completely different story and a legal one.
Bad argument. Being allowed to see art and being allowed to copy art are two different things. Being allowed to _copy_ is a reserved _right_, that's the root of the word copyright.
Bad argument. Copying art is not the crime, distributing the copied art is the crime. The Disney Gestapo can't send storm troopers to your house if your kid draws a perfect rendition of Mickey, but they can if your kid draws a bunch of perfect renditions and sells them online.
This falls apart for 2 reasons. First, I don't think there's any technical definition of "inspiration" that applies to a deeply nested model of numerical weights. It's a machine. A hammer does not draw inspiration from nails that have been hammered in before. Second an AI is not a human under the law and there's no reason to think that an activity that would be considered "transformative" (e.g. learning then painting something similar) when done by a human would still be considered such if performed by an AI.
It’s blatantly obvious that, regardless of whether it will work or not, they’re trying to get companies with enough money to file lawsuits to make a move and do so.
> I don’t see the point.
…or you don’t agree with the intent?
I’m fine with that, if so, but you’d have to be deliberately trying very hard not to understand what they’re trying to do.
Quite obviously they’re hoping to establish - as happened with software that lets you download videos from YouTube - that tools which enable infringement are treated as bad, not neutral.
Agree / disagree? Who cares. I can’t believe anyone who “doesn’t get it” is being earnest in their response.
Will it make any difference? Well, it may or may not, but there’s fair precedent for it happening, and bluntly, no one is immune to lawsuits.
> I don't get why the AI companies would be affected in any way.
It doesn't necessarily matter if they're affected. My thought when seeing this is that they want some legal precedent to be set which determines that this is not fair use.
Surely, if the next Stable Diffusion had to be trained from a dataset that has been purged of images that were not under a permissive license, this would at most be a minor setback on AI's road to obsoleting painting that is more craft than art. Do artists not realise this (perhaps because they have some kind of conceit along the lines of "it only can produce good-looking images because it is rearranging pieces of some Real Artists' works it was trained on"), are they hoping to inspire overshoot legislation (perhaps something following the music industry model in several countries: AI-generated images assumed pirated until proven otherwise, with protection money to be paid to an artists' guild?), or is this just a desperate rearguard action?
I think this drastically overestimates what current AI algorithms are actually capable of; there is little to no hint of genuine creativity in them. They are currently severely limited by the amount of high-quality training data, not the model size. They are really mostly copying whatever they were trained on, but on a scale where it appears indistinguishable from intelligent creation. As humans, we don't have to agree that our collective creative output can be harvested and used to train our replacements. The benefits of allowing this will be had by a very small group of corporations and individuals, while everyone else will lose out if this continues as is. This can and will turn into an existential threat to humanity, so it is different from workers destroying mechanical looms during the industrial revolution. Our existence is at stake here.
And everything else is just copying with either small tweaks or combinations. There’s a reason art went through large jumps in understanding from cave paintings to where we are today.
I was the first photographer I knew of that combined astrophotography with wedding portraiture. That was new. Now lots of people do it - far better than me (I rarely get the chance)!
I’m a small fry so they almost assuredly didn’t get the idea from me, before anyone says I claim otherwise. There were probably a few photographers who thought to do it and now everybody has seen it and emulates it. The true artists put just a little spin on it, from which others will learn. So it goes.
This has been a line of argument from every Luddite since the start of the industrial revolution. But it is not true. Almost all the productivity gains of the last 250 years have been dispersed into the population. A few early movers have managed to capture some fraction of the value created by new technology, the vast majority has gone to improve people's quality of life, which is why we live longer and richer lives than any generation before us. Some will lose their jobs and that is fine because human demand for goods and services is infinite, there will always be jobs to do.
I really doubt that AI will somehow be our successors. Machines and AI need microprocessors so complex that it took us 70 years of exponential growth and multiple trillion-dollar tech companies to train even these frankly quite unimpressive models. These AI are entirely dependent on our globalized value chains with capital costs so high that there are multiple points of failure.
A human needs just food, clean water, a warm environment and some books to carry civilization forward.
There is a significant contingent of influential people that disagree. "Why the future doesn't need us" (https://www.wired.com/2000/04/joy-2/), Ray Kurzweil etc.
This is qualitatively different than what the Luddites faced, it concerns all of us and touches the essence of what makes us human. This isn't the kind of technology that has the potential to make our lives better in the long run, it will almost surely be used for more harm than good. Not only are these models trained on the collectively created output of humanity, the key application areas are to subjugate, control and manipulate us. I agree with you that this will not happen immediately, because of the very real complexities of physical manufacturing, but if this part of the process isn't stopped in its tracks, the resulting progress is unlikely to be curtailed. I at least fundamentally think that the use of all of our data and output to train these models is unethical, especially if the output is not freely shared and made available.
It seems we are running out of ways to reinvent ourselves as machines and automation replace us. At some point, perhaps approaching, the stated goal of improving quality of life and reducing human suffering rings false. What is a human being if we have nothing to do? Where are the vast majority of people supposed to find meaning?
I don't see why machines automatically producing art takes away the meaning of making art. There's already a million people much better at art than you or I will ever be producing it for free online. Now computers can do it too. Is that supposed to take away my desire to make art?
I've been lucky enough to build and make things and work in jobs where I can see the product of my work - real, tangible, creative, and extremely satisfying. I can only do this work as long as people want and need the work to be done.
> They are really mostly copying whatever they were trained on
People keep saying this without defining what exactly they mean. This is a technical topic, and it requires technical explanations. What do you think "mostly copying" means when you say it?
Because there isn't a shred of original pixel data reproduced from training data through to output data by any of the diffusion models. In fact there isn't enough data in the model weights to reproduce any images at all, without adding a random noise field.
> The benefits of allowing this will be had by a very small group of corporations and individuals
You are also grossly mistaken here. The benefits of heavily restricting this, will be had by a very small group of corporations and individuals. See, everyone currently comes around to "you should be able to copyright a style" as the solution to the "problem".
Okay - let's game this out. US Copyright lasts for the life of author plus 70 years. No copyright work today will enter public domain until I am dead, my children are dead, and probably my grandchildren as well. But copyright can be traded and sold. And unlike individuals, who do die, corporations as legal entities do not. And corporations can own copyright.
What is the probability that any particular artistic "style" - however you might define that (a whole other topic, really) - is truly unique? I mean, people don't generally invent a style on their own - they build it up from studying other sources and come up with a mix. Whatever originality is in there is more a function of mutation of their ability to imitate styles than anything else - art students, for example, regularly do studies of famous artists and intentionally try to copy their style as best they can. A huge amount of content tagged "Van Gogh" in Stable Diffusion is actually Van Gogh look-alikes, or content literally labelled "X in the style of Van Gogh". It had nothing to do with the original man at all.
I mean, zero - by example - it's zero. There are no truly original art styles. Which means that in a world with copyrightable art styles, all art styles eventually end up as part of corporate-owned styles. Or the opposite is also possible - maybe they all end up as public domain. But in both cases the answer is the same: if "style" becomes a copyrightable term, and AIs can reproduce it in some way which you can prove, then literal "prior art" for any particular style will invariably be an existing part of an AI dataset. Any new artist with a unique style will invariably be found to simply be 95% a blend of other known styles, by an AI which has existed for centuries and been producing output constantly.
In the public-domain world, we wind up approximately where we are now: every few decades, old styles get new words keyed onto them as people want to keep up with the times of some new rising artist who's captured a unique blend in the zeitgeist. In the corporate world, though - the more likely one - Disney turns up with its lawyers and says "we're taking 70% or we're taking it all".
Ok, let me try to be technical. These models fundamentally can be understood as containing a parametrised model of an intractable probability distribution ("human created images", "human created text"), which can be conditioned on a user provided input ("show me three cats doing a tango", "give me a summary of the main achievements of Richard Feynman") and sampled from. The way they achieve their impressive performance is by being exposed to as much of human created content as possible, once that has happened they have limited to no ways of self-improvement.
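In symbols, that's roughly the standard maximum-likelihood picture (a generic formulation, not specific to any one product; diffusion models actually optimize a tractable bound on the log-likelihood, and sampling is iterative denoising rather than a direct draw):

    \theta^{*} = \arg\min_{\theta} \; \mathbb{E}_{(x,\,c) \sim \text{data}} \big[ -\log p_{\theta}(x \mid c) \big],
    \qquad x_{\text{new}} \sim p_{\theta^{*}}(\,\cdot \mid c\,)

Here x is an image, c is the conditioning text, and everything the model "knows" comes from that expectation over human-created data.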
I disagree that there is no originality in art styles; human creativity amounts to more than just copying other people. There is no way a current-gen AI model would be able to create truly original mathematics or physics; it is just able to reproduce facsimiles and convincing bullshit that looks like it. Before long the models will probably be able to do formal reasoning in a system like Lean 4, but that is a long way off from truly inventive mathematics or physics.
Art is more subtle, but what these models produce is mostly "kitsch". It is telling that their idea of "aesthetics" involves anime fan art and other commercial work. Anyways, I don't like the commercial aspects of copyright all that much, but what I like is humans over machines. I believe in freely reusing and building on the work of others, but not on machines doing the same. Our interests are simply not aligned at this point.
Trying to be exact about "mostly copying", I want to contrast Large Language Models (LLM) with Alpha Go learning to play super human Go through self play.
When Alpha Go adds one of its own self-vs-self games to its training database, it is adding a genuine game. The rules are followed. One side wins. The winning side did something right.
Perhaps the standard of play is low. One side makes some bad moves, the other side makes a fatal blunder, the first side pounces and wins. I was surprised that they got training through self play to work; in the earlier stages the player who wins is only playing a little better than the player who loses and it is hard to work out what to learn. But the truth of Go is present in the games and not diluted beyond recovery.
But an LLM is playing a post-modern game of intertextuality. It doesn't know that there is a world beyond language to which language sometimes refers. Is what an LLM writes true or false? It is unaware of either possibility. If its own output is added to the training data, that creates a fascinating dynamic. But where does it go? Without Alpha Go's crutch of the "truth" of which player won the game according to the hard-coded rules, I think the dynamics have no anchorage in reality and would drift, first into surrealism and then psychosis.
One sees that Alpha Go is copying the moves that it was trained on, and an LLM is also copying the moves that it was trained on, and that these two things are not the same.
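A sketch of why the anchor matters (all names here are hypothetical placeholders, not any real API): in the self-play loop, every training target is derived from a hard-coded arbiter, not from imitating prior moves.

    # Conceptual self-play loop: score_by_rules() is the hard-coded
    # arbiter, so self-generated games cannot drift from the rules.
    num_games = 100_000

    for _ in range(num_games):
        game = play_game(policy, policy)       # the model plays itself
        winner = score_by_rules(game)          # objective outcome, by the rules
        for position, move in game:
            target = 1.0 if to_move(position) == winner else -1.0
            train_step(policy, position, move, target)

An LLM trained on its own output has no analogue of score_by_rules(), which is exactly the missing anchorage described above.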
Exactly this, and it was clear from the backlash SD 2.0 got after they removed artist labels and it became 'less creative'. Most people are not interested in the creative aspect, just looking for an easy way to copy art from people they admire.
There's only one way to figure it out - train on properly licensed content and show them that.
Your line of reasoning sounds like “ah, we already won, so your protest doesn’t matter anyway” - but did you actually win already? Do you really not need all their development to draw on the same level? Just show that.
I'm not in AI and my GPU barely runs games from 10 years ago, so I'll pass. To be more precise, though, I think that it _seems_ that their protest won't matter, but the one way in which I see that it may (the second out of three options) leads to an outcome that I would just consider bad in the short term (for society, and for artists that are not established enough to benefit from any emerging redistribution system; we observe cases in Germany every so often where pseudonymous musicians are essentially forced to charge for their own performances and redirect proceeds to rent-seekers and musicians that are not them, because they can't prove ownership of their own work to GEMA's satisfaction).
Also, if a theoretical purged-dataset SD were released, it would still be easy and cheap for users to extend it to imitate any art style they want. As they wouldn't be redistributing the model, and presumably they would use art they have already licensed, the copyright issue would be further muddled.
I think attempting to prevent this is a losing battle.
To quote another comment but "Instead of replacing crappy jobs and freeing up peoples time to enjoy their life, we’re actually automating enjoyable pursuits."
I think this isn't just a simple discussion on competition and copyright, I think it's a much larger question on humanity. It just seems like potentially a bleak future if enjoyable and creative pursuits are buried and even surpassed by automation.
If the pursuit is enjoyable, it should continue to be enjoyable as a hobby, no?
Meanwhile, where is my levy of custom artists willing to do free commission work for me? It’s enjoyable, right?
I see a lot of discussion about money and copyright, and little to no discussion about the individual whose life is enriched by access to these tools and technologies.
As for your bleak future… will that even come to pass? I don’t know. Maybe it depends on your notion of “surpass”, and what that looks like.
> If the pursuit is enjoyable, it should continue to be enjoyable as a hobby, no?
I think for most people the enjoyable and fulfilling part of life is feeling useful or having some expression and connection through their work. There's definitely some people who can create in a vacuum with no witness and be fulfilled, but I think there's a deep need for human appreciation for most people.
> As for your bleak future… will that even come to pass? I don’t know. Maybe it depends on your notion of “surpass”, and what that looks like.
I don't know either, maybe it will be fine. Maybe this will pass like the transition from traditional to digital. But something about this feels different...like it's actually stealing the creative process rather than just a paradigm shift.
Yeah, maybe, but I think we already have a problem with overconsumption of media. I am not sure this is helping.
It seems inevitable and I don't think we can stop it, but I just am kind of worried about the collective mental health of humanity. What does a world look like where people have no jobs and even creative outlets are dominated by AI? Are people really just happy only consuming? What even is the point of humanity existing at that point?
An artist's signature and branding matter more when they solicit commissions and clients for future work than when selling completed paintings (unless they're dead).
No, the appeal of the artist is the artist. The art does offer a means to connect with the artist. It does not follow that the art may not offer its own appeal besides.
If they competed with me by throwing my product through a decompiler, feeding it into an AI model, and selling the generated output, I'd be pretty upset about it.
Which is pretty close to the actual issue here, that artists did not give their permission to use their own work to generate their competition.
The big issue is precisely this, yeah: living* artists are upset that an AI can take their own names as input and output their artistic styles. That's the big thorn with these ML systems.
There is a secondary issue in that other people are now able to craft high-quality images with strong compositions without spending the "effort/training" that artists had to put in over years to produce them, so they are bitter about that too. But that's generally a minor cross-section of the public outcry, though they are quite vitriolic.
Photobashing, tracing, etc.: there has always been a layer of purists who look down on anyone who doesn't "put the effort in" yet gets great results in a timely manner. These purists will always exist, just like when digital painting was starting and digital painters were looked down on by oil painters for not putting the effort in - even when oil painters themselves used tricks like projecting onto the blank canvas to get perspective-perfect images. That's just human nature to a degree: trying to put down other people while yourself using tricks to speed up the process.
> Would you mind if AI starts creating art like yours?
The law isn't there to protect my feelings, so whether I mind or not is irrelevant. Artists have had to deal with shifting art markets for as long as art has been a profession.
> What if your clients tell you they bought the AI generated art instead of yours?
I'd be sad and out of a source of income. Much the same way I would be if my clients hired another similar but cheaper artist. The law doesn't guarantee me a livelihood.
I still don't see how this isn't the "Realistic Portrait/Scenic Painters vs Photography" argument rehashed.
Imagine you are a painter and you have developed your expertise in photorealistic painting over your entire lifetime.
Would you mind if someone snaps a photograph of the same subject you just painted?
What if your commissioners tell you they decided to buy a photograph instead of your painting because it looked more realistic?
Every argument I've seen against AI art is an appeal to (human) ego or an appeal to humanity. I don't find either argument compelling. Take this video [0] for example and half of the counterarguments are an appeal to ego - and one argument tries to paint the "capped profit" as a shady dealing of circumventing laws without realizing (1) it's been done before, OpenAI just tried slapping a label on it and (2) nonprofits owning for-profit subdivisions is commonplace. Mozilla is both a nonprofit organization (the Foundation) and a for-profit company (the Corporation).
E:
I'm going to start a series of photographs that are intentionally bad and poorly taken. Poor framing, poor lighting, poor composition. Boring to look at, poor white balance, and undersaturated photos like the kind taken on overcast days. With no discernable subjects or points of interest. I will call the photos art - things captured solely with the press of a button by pointing my camera in a direction seemingly at random. I'm afraid many won't understand the point I am making but if I am making a point it does make the photographs art - does it not? I'm pretty sure that is how modern art works. I will call the collection "Hypocrisy".
Chosen because it is grey and boring. The light is not captured by the fabric in any sort of interesting manner - the fabric itself is quite boring. There is no pattern or design - just a bland color. There is nothing to frame - a section of the curtain was taken at random. The photo isn't even aligned with the curtain - being tilted some 40 odd degrees. Nor is the curtain ever properly in focus. A perfect start for a collection of boring, bland photos.
A second photo has been added to the collection - for anyone who thought I might be joking about doing this.
Photos will periodically be added to the collection - not that I expect anyone whatsoever to ever be interested in following a collection of photos that is meant to be boring and uninspired. However - feel free to use this collection of photos as a counterargument to the argument that "art requires some effort". I promise that I will put far less thought and effort into the photos of this collection than I have in any writing of prompts for AI generated art that I've done.
Art is little more than a statement and sometimes a small statement can carry a large message.
Tomorrow I will work on setting up a domain and gallery for the images - to facilitate easier discussion and sharing. Is the real artistic statement the story behind the collection and not the collection itself? How can the two be separated? Can one exist without the other?
“We” decide on today’s issues, not on all future possibilities. The reason for that decision in the past was to allow many creators to create without being too held back by “private property” signs everywhere. The current situation allows AI to create but demotivates creators. Now it’s time to think about what we will do when AI can’t pick up a new style and there are not enough creators left who can or want to invent one - whether that is a near-future problem or maybe not a problem at all - and to decide again.
Simply hiding behind an obsolete technicality is surely the wrong way to handle it.
Style is entirely subjective and impossible to define. Van Gogh had a style. Are we going to say that we would want a society where only Van Gogh is allowed to make Impressionist paintings? Who decides if your painting is similar enough to Van Gogh that it’s illegal? What if your style is simplistic. Are you going to need to compare your art to all published art to make sure a court couldn’t find it “too similar”? What if we make a painting with AI that is a mix of Picasso and Van Gogh? Style?
It’s a stupid concept. It would never work. Even the visualizations we see that are explicitly attempting to copy another artist’s style are often still clearly not exactly the same.
I don’t think style will be a subject here at all. Maybe we’ll settle on the rule that an AI user must get explicit permission before training on someone’s content, while humans don’t have to.
This is true, and many bitter wars are fought over OSS licensing. I’m not sure it derails his point - there’s an awful lot of BSD, MIT, etc. licensed code out there.
Would they mind if another artist would create the same art-style independent of them? Or something 99% alike? 95%? How many art-styles are even possible without overlapping too much?
The idea that the AI will compete with you by copying your unique style seems like exactly the sort of short-sighted conceit that I alluded to in my post above. As an artist, would you be much happier if, rather than the AI copying your style, the AI generated infinitudes of pictures in a style that the overwhelming majority of humans prefers to yours, so that you couldn't hope to ever create anything that people outside of a handful of hipsters and personal friends will value?
> The idea that the AI will compete with you by copying your unique style seems like exactly the sort of short-sighted
Could you please elaborate on why it's "short-sighted"?
> As an artist, would you be much happier if, rather than the AI copying your style, the AI generated infinitudes of pictures in a style that the overwhelming majority of humans prefers to yours, so that you couldn't hope to ever create anything that people outside of a handful of hipsters and personal friends will value?
You mean that any artist should be just happy that his work is used by other people / rich corporation / AI without consent? Cool, cool.
> Could you please elaborate, why its "short-sighted"?
Because it's barely been a year since we've gone from people confidently asserting that AI won't be able to produce visual art on the level of human professionals at all to the current situation. Predictions on ways in which AI performance will not catch up to or overtake human performance have a bad track record at the moment, and it has not been long enough to even suspect that the current increase in performance might be plateauing. Cutting-edge image generation AI appears to often imitate human artists in obvious ways now, but it seems quite plausible that the gap between this and being "original"/as non-obvious in your imitation of other humans as those high-performing human artists that are considered to be original is merely quantitative and will be closed soon enough.
> You mean that any artist should be just happy that his work is used by other people / rich corporation / AI without consent? Cool, cool.
I don't know how you get that out of what I said. Rather, I'm claiming that artists will have enough to be unhappy about being obsoleted, and the current direction of their ire at being "copied" by AI may be a misdirection of effort, much as if makers of horse-drawn carriages had tried to forestall the demise of their profession by complaining that the design of the Ford Model T was ripped off of theirs (instead of, I don't know, lobbying to ban combustion engines altogether, or sponsoring Amish proselytism).
I’m sure artists realise that. They also realise the power of these things, and I see this more as a fight for survival. They’re up against the wall and they know it, and they’re incredibly well connected and have invested their lives up to now into this, so they won’t just lie down without a fight (trying anything).
I sympathize with artists on this matter, but they're really bad at protesting.
AI Mickey Mouse is a possible copyright as well as trademark violation, which would likely be enforced in the exact same way if you were to hand-draw it. This type of violation is not AI-specific.
The main threat that AI poses is not that it outputs copyrighted characters, but that it outputs brand-new works that are either totally new (an idea never drawn before, though the style is derived) or different enough from a known character to be considered derived work.
Another way to put it: artists' current job is not to draw Mickey. It is to draw new works, which is the part AI is threatening to replace. Sure, Disney may chase the AI companies to remove Mickey from the training set, and then we lose AI Mickey. That doesn't solve any problem, because there are no artist jobs that consist of drawing Mickey.
Even in the case of extreme success, where it becomes illegal to train on a copyrighted image without explicit consent, the AI problem doesn't go away. They'll just use public domain images. Or sneak in consent without you knowing it - as was the case with your "free and unlimited" Google Photos.
Finally, if there's any player interested in AI art, it has to be Disney. Imagine the insane productivity gains they can make. It's not reasonable to expect that they would fight AI art very hard. Maybe a little, for the optics.
I think you are giving the AI too much credit in being able to pull out the trademarked bits. Artists can introduce trademarked iconography into their work as a poison pill. Sort of like the GPL, but with more powerful allies.
I don't really believe that. An example: "Robot owl in the style of van Gogh".
This will closely mimic van Gogh's style but nobody cares because style cannot be copyrighted in itself. So it draws a robot owl, which for the sake of this example, is a new character.
Zero copyright violations.
My point remains that AI users aren't going to aim for output that directly looks like an existing character. These artists are now intentionally doing that for the sake of the protest but this is not how AI is used. It's used to create new works or far-derived works.
With this line of reasoning, I'd bet on a cat-and-mouse game between "poison pill"-removing AI generators and new tools and techniques for introducing the pill.
So they're protesting alleged copyright violations in the form of AI copying artistic styles (presuming an artistic style alone rises to the level of copyright protection) by committing trademark violations? Yeah, I don't get it.
I can appreciate that there are all kinds of potential "intellectual property" issues with the current glut of AI models, but the level of misunderstanding in some affected communities is concerning.
Outside of lawyers, what communities do you think should have an "understanding" of intellectual property law, and to what degree? Or, maybe the fact that it takes a lawyer to truly understand it indicates that the complexity of applicable laws and regulations isn't beneficial to the communities they're ostensibly meant to protect?
Communities that generate and/or profit off of "intellectual property" ought to have a rudimentary understanding of the laws involved. Doubly so if they're protesting what they see as violations of those laws. It honestly does not take a lawyer to understand the distinctions at play here.
When I took my graphic design class in college, there was a big chunk about copyright and trademark. We had to be very cautious about images we were using and the difference between the two was drilled into our heads.
Challenging to navigate. These demonstrations are technically copyright infringement if done for financial gain (selling a T-shirt with a Mickey Mouse icon). The same would be true if you were to hand-draw Mickey Mouse with a gun and sell it on a T-shirt. The only exceptions would be a clear derivative, satire, parody, or personal use, of course.
The challenging part is that these artists are protesting the use of 'style' in AI-synthesized media. That is, an artist's style is being targeted (or, even, multiple artists' styles are combined in a prompt to create a new AI-original work). This is not protected by copyright - if you draw a new scene in another artist's style, it would perhaps be unethical, but legally it is merely derivative work.
If the artists who are challenging these AI systems do get their way, and they are able to legally copy-protect their "style" (like a certain way of brush strokes), this would inevitably backfire against them. To give an example: any artist whose work now too closely resembles the "style" of Studio Ghibli might be liable for copyright infringement, where before the work would be clearly derivative, or just influenced by another work, as is the case with most art over time.
Challenging legally, and challenging philosophically. I would think an artist doing it for the art would embrace the fleeting nature of all things, including art. Some artists demonstrate this by creating temporary art, or even throwing their own art away after making it. The desire to make money from art is certainly reasonable, but accepting a world where all art styles are immediately mimicked, where art is trivialized and commoditized, and where there's no recognition to be had let alone money... that's going to be a tough philosophical pill to swallow.
We already have a great example of a group that has fought technological development of a synthetic alternative to their product -- the diamond industry.
For years DeBeers and other diamond moguls have run extensive propaganda campaigns to try to convince people that lab-grown diamonds are physically, emotionally, and morally inferior. They had a lot of success at first. Based on lobbying, the US FTC banned referring to lab-grown diamonds as "real", "genuine", or even "stone". It required the word "diamond" be prefixed with "lab-grown" or "synthetic" in any marketing materials.
Technology kept improving, economies of scale applied, and consumer demand eventually changed the balance. The FTC reversed its rulings and in 2022 demand for lab-grown stones (at small fractions of equivalent natural prices) is at an all-time high.
Artists (and writers, and programmers) can fight against this all they like, and may win battles in the short term. In the end, the economic benefits accruing to humankind as a result of these technologies are inexorably going to normalize them.
I'm still organising my thoughts on the subject so please feel free to push back.
This ongoing discussion feels classist. I've never seen such strong emotions about AI (and automation) taking blue-collar jobs; at most, some shrugs. It's considered an unavoidable given, even though it has been happening for decades. The only difference now is that AI is threatening middle- and upper-class jobs, which nobody saw coming.
I do not see the difference between the two. Can somebody who does explain to me why now is "critical" and before was not?
I have to wonder whether there's a relationship between the stage you're at in your career and your level of panic over AI. There's a lot of hope among students that a job in the technology sector will pull a person into the middle class -- and with significantly less weird antiquated institutional classism than in other lines of work like law.
Personally, I'm new in my career, and I'd like to not have the rug pulled out from under me. If I were a student again, I would have to consider whether the university debt was going to be worth it in the long term or if I should look at a more traditional field to be in.
My job as a dev is literally to automate human work.
My first job was to write C code for industrial machines that replaced humans doing manual work. Sometimes I even had to go watch them work so I could fully understand what they were doing.
In my second job as a developer, I wrote a Django application that automated away a whole department in the company. I saw 100 people getting fired due to a script that I wrote.
That was all happening in the third-world country where I come from. These were real people getting fired, with families that depended on them. Most of them were already in poverty even before being fired.
These artists' complaints sound like a very first-world problem to me. I doubt that anyone has actually "lost a job" because of this technology so far.
It probably has to do with the desirability of the replaced jobs. There's definitely an element of classism, but also people /really/ want to be artists and make a living doing that specific thing. There's the "starving artist" who will take little pay just as long as they can do art, but I don't think we have a similar idea for lots of blue collar work. How much do factory workers have passion for their work, vs it being the best paying option for them? Not to say there aren't any, but there's for sure a desirability difference.
Also I'm not sure most artist jobs are middle-upper class.
That is true: that's why certain industries endure more mistreatment than others (acting, fashion, design in general). Passion can be a pain in the ass when it gets in the way of just making a living.
However, these are individual reactions, not the behaviour of a community/society. If you read comments around HN or some other liberal circles, you get the feeling that our human-ness is being threatened, one of our core defining traits. It seems like "artistic creativity" is being enshrined via a circular argument (also, I'm wary of calling startup-landing-page illustrators "artists"; more like craftspeople, although this distinction might hurt the conversation).
My broader point is that ChatGPT is not "the beginning of the end", but another chapter in a history of automation and replacement that will pose serious challenges for humankind. Treating it as more critical than factory automation is demeaning to blue-collar workers, and also untrue. Everything we do is what defines us as people: cherry-picking some skills is a relic of the Enlightenment that we should get rid of.
> Also I'm not sure most artist jobs are middle-upper class.
I do not have any data at hand, only my circle of friends and former colleagues (I was formerly a graphic designer). Few people endure being a "starving artist" without a little financial safety coming from above. Also, it is a profession that only provides status to a certain socio-economic milieu.
Context, too long to fit into the HN title: "In order to protest AI image generators stealing artists' work to train AI models, the artists are deliberately generating AI art based on the IP of the corporations most sensitive about protecting it."
But the premise is just bad law. Disney does, in fact, hold a copyright on the Mickey Mouse character (at least until the end of 2023) [1]. It doesn't matter where the art comes from. Anyone making copies of something with Mickey Mouse in it -- whether drawn by a Disney artist, or drawn by someone else, or "drawn" by an AI -- is violating their copyright (at least for another year).
On the other hand, nobody owns a copyright on a specific style. If I go study how to make art in the style of my favorite artist, that artist has no standing to sue me for making art in their style. So why would they have standing to sue for art generated by an AI which is capable of making art in their style?
Interesting approach, but is drawing fan art illegal?
I would think that generating those images is okay by Disney, the same as if I painted them. The moment Disney would object is when I start selling them on merch, at which point it is irrelevant how they were created.
Artists have a complicated ethical system where: 1. reposting/tracing a solo artist's images without "citing the artist" is "stealing" (copyright violation); 2. imitating their style is also "stealing"; but 3. drawing fanart of any series without asking is fine; and 4. any amount of copyright violation is not only fine but encouraged, as long as it's from a corporation.
The punishment for breaking any of these rules is a lot of people yell at you on Twitter. Unfortunately, they've been at it so long that they now think these are actual laws of the universe, although of course they have pretty much nothing to do with the actual copyright law.
The actual law doesn't care whether you're selling it or not, either; at least, that's not a bright-line test.
(Japanese fanartists have a lot more rules, like they won't produce fan merch of a series if there is official merch that's the same kind of object, or they'll only sell fan comics once on a specific weekend, and the really legally iffy ones have text in the back telling you to burn after reading or at least not resell it. Some more popular series like Touhou have explicit copyright grants for making fanart as long as you follow a few rules. Western fanartists don't read or respect any of these rules.)
Japan doesn't have fair use, so the only thing ensuring that copyright owners don't go after fanartists is that fanart is generally either beneficial to them or not worth going after. However, that would change if the artist were attempting to directly interfere with their revenue, which is why they won't do things like produce imitations of official merch.
Copying an artist's style isn't in and of itself looked down upon; any artist will tell you that doing so is an important part of figuring out which aspects of it one likes for one's own style. The problem with AI copying it is that the vast majority of users aren't using it for artistic expression. Most of them are simply spamming images out in an attempt to gain a popularity "high" from social media, without regard for any of the features of typical creative pursuits (an enjoyment of the process, an appreciation for others' effort, a desire to express something through their creativity, having some unique intentional and unintentional identifying features).
Honestly, maybe the West messed up by having such broad fair-use protections, since it seems people really have no respect for any creative effort, judging by all the AI art spam and all the shortsighted people acting smug about it. The questions around it are pretty important to have a serious conversation about, especially for pro-AI folk.
The AI art issue has several difficult problems that we are seemingly too immature to deal with; it makes clear how screwed we'd be as a society if anything approaching true AGI were stumbled upon anytime soon.
I'm not saying that because I think all derivative creativity is lesser than 'original' creativity. Rather, we've gotten so used to such broad protections on all creativity that a good chunk of us genuinely think that dozens of minor variations on a popular prompt, entirely spat out by a tool and published to a site every hour, are at the same level of creativity as something even just partially drawn by a person (e.g. characters drawn onto an AI-generated background, or AI-generated character designs that are then further fixed up).
The vast majority of AI art I've seen on sites like Pixiv has been 'generic' to the level of the 'artist' being completely indistinguishable from any other AI-using 'artist'. There has been very little of the sort where the AI seemed to truly just be a tool and there was enough uniqueness to the result that it was easy to guess who the creator was. The former is definitely less creative than the latter.
Understood. I was mostly making a defense of collages, remixes, mashups, and other legally derivative works that are as creative as, if not more creative than, the original sources.
Fan art is actually pretty much illegal, or infringing; it's just not really enforced by most companies. There are some caveats for fair use, but generally most fan art could be successfully taken down if a company were motivated enough, in my opinion. Nintendo is pretty notorious for this, but it has rarely gone to court, as most people are too scared to fight takedown requests.
Copyright isn't a matter of legal vs. illegal; it's infringing vs. non-infringing. Fan art very often could be argued to be infringing, but no company has any reason to pursue it in the vast majority of cases, so they just don't.
It's very confusing, especially when you have to consider trademark as related but separate.
Chill out, people. Humans are still great generalists. We are quite capable of leveraging these tools to amplify our productivity. It's only the specialists among us who are going to suffer a lot from these new developments. All this AI innovation is truly showing us how pathetic our ability to deeply understand and specialize in something is. We are always going to lose to computers, be it in chess, Go, or art. Therefore, we should cultivate our generalist skills and stop fighting AI progress.
This will be a losing battle for artists. Anyone can train on any data they want. It's the equivalent of a human learning to draw in someone else's art style, or taking photos the same way a famous photographer does. There is no stopping it now, and it's only going to get better and easier. Video is getting close to being just as accessible as image or text generation. Regardless of how you feel about all this, there's no stopping it. It's the future.
A fascinating angle I heard recently is that when the new tech of photography swept the world, it made tons of painters unemployed.
And that was the main reason for "modern art". A camera can do a portrait or landscape instantly and more precisely than a painter, but it can't compete on abstract or imagined pictures.
Will something analogous happen when AI takes over other industries? I have no clue, but it will, as always, be interesting to see what happens.
I'm going to be a broken record here: both of the words "artificial" and "intelligent" are hellaciously difficult to define; put them together and you've got a real epistemological quandary on your hands.
What we’re actually always talking about is “applied computational statistics”, otherwise known as ML.
And if an artist wants to sample from the distribution of beautiful images and paintings and photographs as a source of inspiration, why not? We do it in other fields.
But using a computer to sample from that same distribution and adding nothing will be rightly rewarded by nothing.
My memory of this is really fuzzy, so I'm probably getting the details wrong.
I watched a documentary in roughly the early oughts about AI. The presenter might have been Alan Alda.
In one segment, he visited some military researchers who were trying to get a vehicle to drive itself. It would move only a few inches or feet at a time as it had to stop to recalculate.
In another segment, he visited some university researchers who set up a large plotter printer to make AI-generated art. It was decent. He saw it could depict things like a person and a pot, so he asked if it would ever do something silly to us like put a person in a pot. The professor said not to be silly.
To jokingly answer the title question: everyone who saw that one specific documentary 20 years ago knew that AI art was way ahead of AI machines.
Art is useful when someone subjectively finds it enjoyable or meaningful. While it might not achieve all of what humans can, the barrier to entry is relatively lower.
Wow, good call. The car part was probably from season 7, episode 5, which first aired in 1997. I skimmed the video and didn't see the art part, so maybe that was a different show. Apparently it was 25 years ago, which explains my fuzzy memory of it.
I find it weird that I don't see any mention of the TDM ("Text and Data Mining") exceptions that already explicitly allow AI companies to train on copyrighted data, in some cases even allowing them to ignore opt-outs (such as at research institutions). This is already implemented in the UK, EU, Japan, and Singapore.
It seems to me that the online discourse is very US-centric, thinking that the AI regulatory battles are in the future, when in some other countries it’s already over.
If a human wrote the prompt, how is AI different from a paintbrush or any other tool of the trade?
Every tool makes some of the 'decisions' about how the artwork turns out by adding constraints and unexpected results. If anything, I'd argue that AI art allows for more direct human expression: going from mental image to a shareable manifestation has the potential to be less lossy this way than with paint.
This feels like a bunch of misplaced Luddism. We need to implement a UBI, because 99.9% of human labor is going to be valued below the cost of survival in the next 50-100 years. Always fun to see people thumbing their noses at Disney, though.
I think it is different, because you don't need any pictures to create a paintbrush or a pencil. You could still have the AI code as a tool, but without the dataset (the images), it won't go anywhere.
The slice I'm curious about is what happens when you let loose your AI art generator and start copyrighting/trademarking everything it creates, to basically ensure that all kinds of art that could ever be created would potentially infringe on you.
The art equivalent of patent trolling or domain squatting, basically. Is that legally possible?
As an artist, I already realized that the war is lost, without a fight.
There is no way to stop the removal of human labor. At first, A.I. tools will need supervision and optimization, but soon they will do this by themselves.
I moved all of my art related work into a real medium.
If someone in the future finds value in owning actual, physical art, I will provide it.
If people are happy with metaverse A.I. generated images, projected in their minds, so be it.
It is over. The rest is just an echo of human civilization. Transhumanist clones are coming to town :)
The war is not lost. The goal isn't to force people to never be able to use AI to generate art, but to force them to only use input they have permission to use.
AI replacing artists functionally is just the surface fear. The real problem is using AI as an automated method of copyright laundering. There's only so much hand-waving one can do to excuse dumping tons of art you didn't make into a program, transforming it into similar art, and pretending you own it. People like to pretend that it's like a person learning and replicating a style, but it's not. It's a computer program, and it's automated. That the process is similar is immaterial.
I'm thinking the same way; plein air painting is a nice activity. You get something nothing can take away from you: any kind of mark you make with your own body is yours. At the moment, at least, using prompt- or inpainting-based tools feels like talking through Microsoft Sam (the voice synth).
To me, it's an art vs. craft issue and there are many shades of gray to the discussion, because the root is really based in the question that every first-year art student is tasked with answering for themselves "What is art?"
If art for you is primarily centered on fidelity of implementation (i.e. "craft") then you will be very threatened by AI, particularly if you've made it your livelihood. However, if your art is more about communication/concepts, then you might even feel empowered by having such a toolset and not having to slog through a bunch of rote implementation when developing your ideas/projects. Not to mention that a single person will be able to achieve much much more.
I feel like it's possibly a good thing for art/humanity overall to stop conflating craft with art, because new ideas will rise above all of the AI-generated images. i.e. splashiness alone will no longer be rewarded.
In an ideal future when we all live in the Star Trek universe, none of it will matter and whoever loves crafting stuff can do it all day long. Until then of course, it's tragic and lots of people will be out of jobs.
To be honest, I'm not generally a luddite, but in this case I think we should nip this in the bud. I can see where this is going. You can argue back and forth about whether this will make the economy grow, but that's not the point. In the absence of concerted, organized resistance, the profits from increased productivity accrue not to the workforce but to the owners of the capital, so I would not expect the quality of life of the majority of people to improve because of this.
The question is: do you like human beings? Because there is really no job that can't be replaced, if the technology goes far enough. And then the majority of the population, or all of the population, becomes dead weight. I'm a musician; how long before an AI can write better songs than I can in a few seconds?
This is fundamentally different than past instances of technology replacing human labor, because in the past, there was always something else that humans could do that the machines still could not. Now- that may not be the case.
There is only one choice: I think we should outlaw all machine learning software, worldwide.
Was there ever any doubt about that? There are literally entire graduate studies on it.
However, art isn't solely interpolation. The critical part is that art styles shift around due to innovations or new viewpoints, often driven by societal developments. AI might be able to make a new Mondriaan when trained on pre-existing Mondriaans, but it won't suddenly generate a Mondriaan out of a Van Gogh training set; and yet that's roughly what happened historically.
Lots of people in these comments are trying to reduce art in a way that is pretty hilarious. You hit the nail on the head. Art is only interpolation if you remove the human that created it, in which case you would not call the image art. AI "art" is computational output; to imply otherwise is to mistakenly imply a family resemblance to human (and, I would argue, uniquely human) creation.
The human brain is just a model with weights and a lifelong training step. Seems like a distinction without a difference - even more so as ML models advance further.
This is giving ML models more credit than they are due. They are unable to imagine; they might convincingly seem to produce novel outputs, but those outputs are ultimately circumscribed by their inputs, datasets, and programming. They're machines. Humans can learn like machines, but humans are also able to imagine, as agents. "AI" "art" is neither of its namesakes. That doesn't mean it isn't impressive, but implying they are the same is granting ML more powers and abilities than it is capable of.
You're oversimplifying imagination. It could be related to something one has seen before, or it could not be. It could be entirely invented and novel, in a way that has no antecedent in the senses. Nor is it mere randomness added in. Imagining is something an agent does and is capable of. The fly in the ointment is still that ML models simply do not have agency in any fundamental way; they are programmed, and they are limited by that programming. That's what makes them and computers so effective as tools: they do exactly as they are programmed, which can't be said of humans. We, as humans, might find the output imaginative or novel or even surprising, but the ML model hasn't done anything more than follow through on its programming. The ML programmer simply didn't expect the output (or can't explain the programming) and is anthropomorphizing their own creation as a means of explanation.
But you know. Everything you said can easily be imagined to apply to humans as well. You can’t see your own programming, and so can’t fully understand it, and so you imagine it to be something more than what it is.
The problem you run into with that is that saying "humans are programmed" in the identical sense that "computers are programmed" is nonsensical. We have powers that computers simply do not: agency, imagination, the capacity for understanding, etc. So the concept of programming a computer and "programming a human" mean different things, as they do in our language. You either end up fundamentally redefining what programming means, placing sentient, agential humans on the same plane as non-sentient, non-agential machines, or you end up in a situation where it makes no sense to say "humans are programmed identically to computers."
But if you say "humans are programmed" in a metaphorical sense, then yeah sure that's an interesting thought experiment. But it's still a thought experiment.
Art is now low-value; it offers little value-add. Technology in the palm of our hands and a higher quality of life are also part of the reason.
Let's not forget the very impressive population explosion of the past century. Every 'job' is a skill that has been streamlined, so that the needs of the population are satisfied by the skills of the population and resources are distributed evenly.
Art is no longer a need, and there are simply too many artists in proportion to the population.
Further, a lot of the 'art' that is taught is technique, not creativity. Can creativity be taught? I don't think so.
Culture played a part in preserving artists and honoring their skills. But as ‘culture’ becomes global, mainstream is adopted more as it’s more accessible. And mainstream is subject to the vagaries of market as well as vulnerable to market manipulation.
Contrary to popular notions, our world is very homogeneous. Somehow the promotion of diversity has ended up in a tyranny of conformity. How did this happen? This is the biggest puzzle of the past few decades.
I hope that AI companies don't end up implementing another system like YouTube's DMCA system, where rights holders and trolls alike can scrub these "black boxes" of whatever content they want, adding more garbage and uncertainty to their output.
Then again, there should be some sort of solution so that this can coexist with artists rather than replace them.
People who are not techies but who have a clue about Stable Diffusion and DALL-E being trained on copyrighted images without permission, attribution, or credit knew this. It was absolutely unsurprising [0][1].
Stability AI knew they would be sued into the ground if they trained their music-generating equivalent, the 'Dance Diffusion' model, on thousands of musicians' works without their permission, so they used public-domain music instead.
So of course they think it is fine to do it to artists' copyrighted images without permission or attribution, as many AI grifters continue to drive everything digital to zero. That also includes Copilot being trained on AGPL code.
Anything that weakens copyright should be supported. Copyright has expanded well beyond its original goals, to the point of in fact harming those goals.
Copyright (in the US) was NOT in fact created to protect creators; it was meant to encourage creation and advance science. Today, copyright is being used to curb and monopolize creation and prevent advancement (case in point: this very story).
On the other hand, copyleft licenses are being used to protect creators. Without copyright protection, what is stopping companies from blatantly violating even more open-source licenses?
The problem is copyright laws, not the models (which are inevitable and impossible to stop anyway). The sketch of a mouse should not be protected any more than the artistic style of some random person. IP law is an ancient concept, and it's a mystery why people still cling to it so tightly.
Copyright infringement is a backbone of the digital world. People remix others’ music, write fanfiction of others’ works, draw fan art of others’ characters, write and distribute clones of others’ games, etc. Much of internet culture involves drawing captions on others’ art and spreading it around, i.e. memes.
Twitch pulls in multiple billions of dollars of revenue from video game streaming, which hasn’t been tested in court and may very well be copyright infringement. People regularly pirate games, movies, television shows, music, books, software, research papers, etc.
I believe that the culture benefits tremendously from this. My question is, why should we draw the line exactly here, at AI generated images, code, and writing?
You can still copyright characters separately. He's feigning ignorance of how copyright works to make a sensationalistic point, which pretty much invalidates and poisons what is otherwise an interesting argument at the boundary between derivative work and generative art.
Probably because they disdain AI being used to copy their IP and distribute it at "machine" scale? I'm not an artist myself, but I can imagine I'd be pissed off that a bot is replicating my art with random changes.
HOWEVER, if a person were to ask for permission to feed my pictures into an AI to generate a number of images, and that person _selected_ a few and decided to sell them, I wouldn't have a problem with that. Something about the permission granted by the artist, and an editing/filtering criterion being applied by a human, makes me feel OK with such use.
What you're describing is basically copyright, which is exactly what artists are demanding: the legal protection to which they are entitled.
Edit: Silicon Valley exceptionalism seems to keep some thought leaders in the field from remembering the full definition of copyright: it's an artist's exclusive right to copy, distribute, adapt, display, and perform a creative work.
A number of additional provisions, like fair use, are meant to balance artists' rights against public interest. Private commercial interest is not meant to be covered by fair use.
No one is disputing that everyone, including companies in the private sector, is entitled to use artists' images for AI research. But things like using AI-generated images for promotional purposes are not research, and not covered by fair use. You want to use the images for that? Great: ask for permission, and pay royalties. Don't mooch.
Copyright (in the US) also includes fair-use provisions, under which education and research are fair uses of copyrighted work for which no permission from the artist is needed.
> fair-use provisions, under which education and research are fair uses
I don't think people are debating fair use for education and research. It's the obvious corporate and for profit use which many see coming that is the issue. Typically, licensing structures were a solution for artists, but "AI" images seem to enable for-profit use by skirting around who created the image by implying the "AI" did, a willful ignorance of the way that the image was generated/outputted.
>>I don't think people are debating fair use for education and research. It's the obvious corporate and for profit use
Sounds like you are, because in copyright law there is no carve-out for only non-profit education/research. Research and education can be either for-profit or non-profit; copyright law does not distinguish between the two. But it sounds like your claim is that research can only ever be non-profit, and given that the entire computing sector in large part owes itself to commercial research (e.g. Bell Labs), I find that a bit odd.
Doesn't fair use make a distinction based on the use, though? Fair use in terms of commentary on something, for instance, is not the same as a company presenting marketing images as its own in the selling of a product. If someone has legally protected their artwork, you can't just apply a Photoshop layer to it and claim it is yours as fair use, right? The issue seems to become almost more about provenance.
> If someone has legally protected their artwork, you can't just apply a Photoshop layer to it and claim it is yours as fair use, right?
That depends on what the layer was, and there are current cases heading to the Supreme Court that involve something similar, so we may see.
However, commentary is just one type of fair use and would not be a factor here; nor is anyone claiming the AI is reselling the original work. The claim is that copyright law prevents unauthorized use of a work in the training of an AI; but AI training could (and likely would) be treated as research, and the result of the research is a derivative work wholly separate from the original, created under fair use.
The copyright to what, exactly, though? Imagine you're an artist who draws abstract paintings of trees. If an AI is trained on those, the results it produces will be generic abstract trees in your style. And since I doubt you can copyright trees, you would have to copyright your specific style. But is that possible?
If someone builds an AI self-driving car and feeds it images of Honda cars, should the company be required, under threat of legal action, to remove the Honda from the model? What if this makes the model less accurate and causes more accidents?
In other words, I am wondering if the current issue here is the model being trained or the model being able to generate images.
Coming back to my example: if the car displayed the closest vehicle on the HUD, would Honda ask the car company to replace the likeness of their car with a generic car icon, or would they ask for the model to be scrubbed?
In our game studio, engineers are creating lots of developer art on their own. But the real productivity boost is coming from artists using language models to generate entire art-pipeline scripts: Python scripts that automate Blender and offline asset post-processing. Many artists are also changing shaders by asking language models to modify existing code.
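For what it's worth, here is a minimal sketch of the kind of pipeline script meant here, written against Blender's bundled Python API (bpy): it batch-decimates every mesh in the open scene and exports the result to glTF for downstream post-processing. The 0.5 ratio and the output path are illustrative assumptions, not recommendations.

    import bpy

    # Reduce polygon counts on every mesh in the scene
    for obj in bpy.data.objects:
        if obj.type == 'MESH':
            mod = obj.modifiers.new(name="AutoDecimate", type='DECIMATE')
            mod.ratio = 0.5  # keep roughly half the polygons
            bpy.context.view_layer.objects.active = obj
            bpy.ops.object.modifier_apply(modifier=mod.name)  # bake it in

    # Hand the simplified scene to offline post-processing
    bpy.ops.export_scene.gltf(filepath="/tmp/assets.glb")

Something like this runs headlessly with: blender --background scene.blend --python decimate.py. That is the shape of script the artists now get a language model to draft and then tweak.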
If regulation is found to be necessary, here are some options:
- The government could treat OpenAI like an electricity utility, with regulated profits.
- OpenAI could be forced to come up with compensation schemes for the human source images: the more the weights get used, the higher the payout (a toy sketch of such a split follows this list).
- The users of the system could be licensed, to ensure proper use and that royalties are paid to the source creators. We issue driving licenses, gun licenses, factory permits, etc. Licenses are for potentially dangerous activities and powers; this could be one of those.
- A special taxation class for industries like this that are more parasitic and less egalitarian than small businesses or manufacturing.
- An outright ban on using copyrighted work in AI training.
- An outright ban on what can be considered an existential technology. This has been the case for some of the most important technologies of the last 100 years, including nuclear weapons.
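To illustrate the compensation idea from the second bullet, a toy sketch in Python: split a royalty pool across source artists in proportion to attribution counts. The pool size, names, and counts are all invented, and the genuinely hard part (computing attributions from model usage in the first place) is an open problem this sketch simply assumes away.

    from collections import Counter

    royalty_pool = 10_000.00  # hypothetical: dollars collected from fees this month
    attributions = Counter({"artist_a": 600, "artist_b": 300, "artist_c": 100})

    total = sum(attributions.values())
    payouts = {artist: royalty_pool * n / total
               for artist, n in attributions.items()}

    for artist, amount in sorted(payouts.items()):
        print(f"{artist}: ${amount:,.2f}")  # artist_a: $6,000.00, etc.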
The title is an erasure of the minoritized workers who've been exploited in labeling, curation, and moderation and who've been raising concerns; it's an erasure of the many who've been raising concerns about the misogyny and predation involved in the construction of the datasets (e.g. https://www.image-net.org/) which make these models possible: https://arxiv.org/abs/2006.16923.
Marx makes the case in the Grundrisse (https://thenewobjectivity.com/pdf/marx.pdf) that the automation of work could improve the lives of workers -- to "free everyone's time for their own development". Ruth Wilson Gilmore observes that capital's answer is to build complexes of mass incarceration and policing to deal with the workers rendered jobless by automation (https://inquest.org/ruth-wilson-gilmore-the-problem-with-inn...) -- that is, to deal with those who have too much "free" time. In such a world, Marx speculates, "wealth is not command over surplus labour time" (real wealth), "but rather, disposable time outside that needed in direct production"; Gilmore, though, reminds us that capital's apparent answer to date has been fascism.
As a thought experiment, let's say that the next version of stable diffusion is able to integrate large text datasets into the training set and can generate an accurate Mickey Mouse without ever having to be trained on an image of Mickey Mouse since it's integrated enough information from the text.
What then? Certainly an individual artist can't go and sell images of Mickey Mouse since it's still copyright infringement, but what claim would Disney have against the AI company?
I wrote in another comment that if you make the training of such models illegal regardless of distribution, you are essentially making certain mathematics illegal. That poses some very interesting questions around rights, whether others will do it anyway, and the practicality of enforcing such a rule in the first place.
One bone to pick: this says "artists" are fighting this and mentions Disney, Nintendo and Marvel. "Corporations" would be more accurate than "artists".
Training a model with artists' work seems completely fine to me. If something is out in the world and you can see it, you can't really control how that affects a person or a model or whatever.
The actual issue is reproduction of trademarked and copyrighted material. There are already restrictions on how you can use Mickey Mouse's likeness in any derivative work. That's not an AI issue. It's an IP issue. The derivative works are no different than if I, a person, produced the same derivative work.
It would be funny to me if we ended up having to turn our attention to training AIs on IP law.
No one WANTED to pay artists to begin with, if they could have brought their own ideas to life themselves.
Artists should realize they were just a necessary evil for many other people's creative endeavors.
Every single human being alive has the creative spark.
As an artist who makes money from selling art, I find the panic ridiculous. The tide doesn't wait for you. Find new ways of understanding and creating art; move with the flow; don't stand in one place shouting at the gods to "stop this damn technological advancement!" That serves no purpose. It's not protesting, it's fear of the unknown.
More specifically, certain works featuring Mickey Mouse may actually be hitting the public domain. Mickey Mouse himself is trademarked, and there's no limited duration on trademarks.
Yes, but a trademark only prohibits you from putting the word on a box or a store listing page. And only in source-identifying contexts - purely descriptive or nominative ones don't count[0]. This is the sort of thing where you would have to think ahead of time about the context of certain uses. But generally speaking, after January 1st, 2024, the balance is in the favor of the public domain:
- If I draw an animation and post it to YouTube, and one of the characters happens to be Mickey Mouse, that will be legal. But I still can't name my channel "Mickey Mouse Official" or put the character's face in my channel profile, since that's source-identifying material.
- If I just flat-out reupload Steamboat Willie to YouTube, with the (possibly incorrect) title "Walt Disney's FIRST EVER CARTOON", that also will be legal - because the title is purely nominative and does not imply that I'm licensed by Disney.
- If I release STL or STEP files on Thingiverse for printing Mickey Mouse Christmas ornaments, that will be legal - but I have to make sure that nobody thinks they're actually made by Disney.
- Mass-produced merchandise sold in stores will be very difficult to sell legally, since generally speaking the whole object is considered source-identifying when you put it on a store shelf. About the only thing you could do is sell figurine blind-bags with no indication that there's public-domain Disney stuff in there.
That last one is probably why Disney isn't trying to, say, push Mexican life+100 terms[1] on everyone. Mickey Mouse is more valuable as a branding and merchandising tool than as a creative work.
Copyright law itself also has a preemption clause[2] which prohibits making copyright-shaped claims under other laws. This is usually mentioned in the context of state right-of-publicity laws[3], but the text of the clause would also apply to trying to "trademark a copyright" to keep the mouse in his cage.
[0] This is part of "trademark fair use", which is an entirely different concept to the copyright fair use one.
[1] Oh, yeah, I forgot - in all those YouTube examples you need to convince YouTube to block your upload in Mexico, which they are unwilling to do. The stated reason is that pirates could be harder to catch if they geoblocked their uploads. However, this already causes problems for, say, people reviewing anime - which is actually illegal in Japan! So I suspect that YouTube might have to change their policies on this at some point as more large publishers' work hits the public domain in certain countries but not others.
Art has historically always been about copying and improving.
This whole copyright / intellectual-property idea is something that unfortunately cropped up in recent centuries and was expanded into its modern form in the 20th century, and the fact that it was codified into law is certainly not something humanity should be proud of or regard as progress.
If you create a trademark-violating image using an AI model, does that demonstrate anything more than that that particular image is violating? It's also violating if I hand-draw those images; the fact that they're AI-generated doesn't enter into it.
The way these image-generating neural nets are trained is illegal. They copy and use other artists' work without asking them or paying them. There's a lot of legal exposure here; why hasn't anyone taken advantage of that yet?
In the US we have fair use, and it's not at all clear to me that this wouldn't count. If I took every image on ArtStation and averaged all of them (creating a muddy mess), I think I would be legally able to distribute the result without compensating or crediting the original artists.
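That averaging thought experiment is easy to make concrete; a sketch assuming Python with Pillow and NumPy, where the folder path and output name are placeholders:

    from pathlib import Path
    import numpy as np
    from PIL import Image

    SIZE = (512, 512)  # normalize dimensions before averaging
    paths = sorted(Path("scraped_images").glob("*.jpg"))

    # Accumulate a pixel-wise sum in float to avoid uint8 overflow
    acc = np.zeros((SIZE[1], SIZE[0], 3), dtype=np.float64)
    for p in paths:
        img = Image.open(p).convert("RGB").resize(SIZE)
        acc += np.asarray(img, dtype=np.float64)

    mean = (acc / len(paths)).astype(np.uint8)
    Image.fromarray(mean).save("average.png")  # the promised muddy mess

And the muddy mess really is the point: the output contains a contribution from every input, yet resembles none of them.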
In the EU, UK, Japan and Singapore, it is explicitly legal to train AI on copyrighted work. I saw another comment say that AI companies train in those countries.
In the actually interesting projects that I worked on, I always ran out of time. So much more could have been imagined and done, but there was no time or budget for it. Looking forward to AI making a bit of a dent in this.
One traditional way of learning to make art was to go to the museum and copy the works of the masters ... What's the difference, in principle, if one trains an AI on them?
It seems like the least-regulated professions will be the front lines, due of course to the friction created by getting AI operating in regulated environments.
I feel for artists who feel like they’re losing their livelihood. Art has always been a tough profession, and this doesn’t help because late-stage capitalism all but guarantees that a lot of potential customers will just skip the human-made article in favor of the “good-enough” mechanical production.
That said, automation is coming for all of us. The problem is not "we need to stop these AIs/robots from replacing humans." It's "we need to figure out the rules for taking care of the humans when their work is automated."
Does anybody else find the whole AI art generation thing both amazing and incredibly depressing at the same time? I’ve played around with it and it’s lots of fun. But I can also see a deluge of mediocre “content” taking over the internet in the near future. “Real art” will become a niche underground discipline. Most popular music will be AI generated and will have fake performers also generated to go along with it. And most people will be fine with that.
I don't think "real art" will disappear. People will always want to create (although monetising that will now be exceedingly difficult).
It feels like we are ripping the humanity out of life on a greater and greater scale with tech. Instead of replacing crappy jobs and freeing up people's time to enjoy their lives, we're actually automating enjoyable pursuits.
NB: when I refer to art, I mean art of all types, as that's where we are heading.
I've been complaining about this with AI generated content in general as well, especially Twitter and blog posts. I worry that we're in a sort of downward spiral, creating a feedback loop of bad content. Eventually models will get trained on this badly generated content, and it will reduce the overall vocabulary of the Internet. Take this to the extreme, and we'll keep going until everything is just regurgitated nonsense. Essentially sucking the soul out of humanity (not that tweets and blog posts are high art or anything). I know that sounds a little drastic but I really think there's a lurking evil that we don't have our eye on here, in terms of humanity and AI. We've already seen glimpses of it even with basic ad targeting and various social media "algorithms".
I've been thinking the same thing. I wonder if this might give rise to some kind of analog renaissance as people get sick of all the digitally regurgitated garbage. There has to be a point of diminishing returns for this kind of content, right? Maybe there will be some kind of Made By Humans verification that will make certain content much more valuable again simply by differentiating it from all the AI-generated simulacra.
This would be very cool. I think we're starting to see some hints of this. Maybe we'll see publishing houses and presses return to their former glory because they're the only ones not putting out AI generated recycled nonsense.
>we'll keep going until everything is just regurgitated nonsense.
I feel like this about mostly-human-created fashion. In my not-so-long lifetime I've seen everything from the 90s make a comeback. Ultimately, I guess, in terms of clothing that is practical with the materials available, we've already cycled through every style there is, such that the cycle time is now under 30 years.
If you think about how much content we're already getting from mediocre artists and writers, how many tv shows are complete garbage, how much governments and corporations are promoting and trolling in online discussions, how many search results are already ruined by lazy copied content, it's difficult to see things getting orders of magnitude worse.
Good stuff will still be good stuff, and it will keep being rare. The biggest change will be that producing mediocre content will be cheaper and more accessible, but we're already drowning in it, so... meh?
> Instead of replacing crappy jobs and freeing up people's time to enjoy their lives, we're actually automating enjoyable pursuits.
Fair assessment, and I agree with much of your premise, though as regards "it's difficult to see things getting orders of magnitude worse": please don't challenge them.
I totally agree that there is a lot of low-effort and consequently low-quality stuff out there in the world already. However, it still costs something to make. With this form of automation getting better, it will simply become a lot cheaper to produce, and is thus going to happen a lot more. So I expect the ratio to become worse, maybe even "orders of magnitude" worse.
Yeah, I agree. I was generally pretty pro AI art, and I still agree with a lot of the pro-AI sentiments here on a logical basis, but as the tech develops I drift more and more towards thinking this may be a bleak path for humanity.
> Instead of replacing crappy jobs and freeing up people's time to enjoy their lives, we're actually automating enjoyable pursuits.
Yeah, you really hit the nail on the head here. I thought a lot of the backlash against AI was due to workers not really reaping the benefits of automation, and that's a solvable problem. But I've seen a lot of artists who are retired or don't need to work dive into despair over this all the same. It's taking their passion away, not just their job.
I don't really know how we could stop it though without doing some sweeping Dune-level "Thou shalt not make a machine in the likeness of the human mind" type laws.
> But I can also see a deluge of mediocre “content” taking over the internet in the near future.
This has always been the case. Most entertainment, regardless of form (music, art, TV, games...), is mediocre or below, with the occasional good, or even rarer exceptional, piece that we all buzz about.
AI image generation is only allowing a wider range of people to express their creativity, just like every other tool that came before it lowered the barrier to entry for new people to get in on the medium (computer graphics, for example, allowed those who had no talent for pen and paper to flourish).
Yes, there will be a lot of bad content, but that's nothing out of the ordinary.
> Instead of replacing crappy jobs and freeing up people's time to enjoy their lives, we're actually automating enjoyable pursuits.
This feels like the natural outcome of Moravec's paradox[1]. I can imagine a grim future where most intellectually stimulating activities are done by machines and most of the work that's left for humans is building, cleaning, and maintaining the physical infrastructure that keeps these machines running. Basically all the physical grunt work that has proven hard to find a general technological solution for.
Oh, so there's a term for this -- TIL. I've heard something along these lines. I think AI diagnostics are a good thing, but when I expressed worry to somebody about the medical field going away, I was unironically told "you can just be emotional support for sick people". Now, that's a fulfilling volunteer activity, and everyone who has the inclination should do it, but as a matter of practicality: does it come with a salary?
We are a long political fight away from people in industries affected by AI not feeling like their livelihoods are under attack. It would be better received, at least by me, if the AI guys would admit that, under the system we have, they're playing with a big heaping flamethrower in a vast ocean of gasoline.
If humans end up getting expelled from virtual reality back into the physical one, not forcibly but by a lack of meaningful virtual pursuits... is that really a grim future?
Not to distract too much from your point, because I agree that the obviously imminent explosion of AI generated work will probably lead to a generation of stylistic stagnation, but...
We already live in a time of artistic stagnation. With how much audio engineers manipulate pop music in Pro Tools, "fake" singers have been a practical reality for 20 years. Look at Marvel movies. Go to any craft fair on a warm day, or any artists' co-op, in a major city and try, try to find one booth that is not exactly like 5 other booths on display.
People have been arguing about what is "real art" for centuries. Rap music wasn't real because it didn't follow traditional, European modes and patterns. Photography wasn't real because it didn't take the skill of a painter. Digital photography wasn't real because it didn't take laboring in a dark room. 3D rendering wasn't real. Digital painting wasn't real. Fractal imagery wasn't real. Hell, anything sold to the mass market instead of one-off to a collector still isn't "real art" to a lot of people.
Marcel Duchamp would like to have a word.
If anything, I think AI tools are one of the only chances we have of seeing anything interesting break out. I mean, 99% of the time it's just going to be used to make some flat-ui, corporate-memphis, milquetoast creative for a cheap-ass startup in a second rate co-working space funded by a podunk city's delusions they could ever compete with Silicon Valley.
But if even just one person uses the tool to stick out their neck and try to question norms, how can that not be art?
Near future? The internet is a cesspool of mediocre and terrible content already. AI is going to have an impact on art and everything else in general. Artists may (and likely will) be forced to adapt to and adopt its use.
> Instead of replacing crappy jobs and freeing up people's time to enjoy their lives, we're actually automating enjoyable pursuits.
But in my case, I don't happen to find drawing or painting enjoyable. I simply don't, for nature- or nurture-based reasons. I also don't believe that everyone can become a trained manual artist, because not everyone is interested in doing so, even if they still (rightly or wrongly) cling to the idea of having instant creative output and gratification.
I think this lack of interest is what makes me and many other people a prime target for addiction to AI-generated art. Due to my interest in programming I can tweak the experience using my skills without worrying about the baggage people of three years ago had to deal with if they wanted a similar result.
So without any sort of generation, how does one solve the problem of not wanting to draw, but still wanting one's own high-quality visual product to enjoy? I guess it would be learning to be interested in something one is not. And that probably requires virtuosity and integrity, a willingness to move past mistakes, and a positive mindset. The sorts of things that have little to do with the specific mechanics of writing code in an IDE to provoke a dopamine response. Also, the ability to stop focusing so hard on the end result, a detriment to creativity that so many (manual) art classes have pointed out for decades.
I sometimes feel I lack some of those kinds of qualities, and yet I can somehow still generate interesting results with Stable Diffusion. It feels like a contradiction, or an invalidation of a set of ideas many people have held as sacred for so long, a path to the advancement of one's own inner being.
I will relish the day when an AI is capable of convincing me that drawing with my own two hands is more interesting than using its own ability to generate a finished piece in seconds.
So I agree that, on a bigger scale beyond the improvement of automated art, this line of thinking will do more harm to humanity than good. An AI can take the fall for people who can't or don't want to fight the difficult battles needed to grow into better people, and that in turn validates that kind of mindset. It gives even the people who detest the artistic process a way to have the end result, and a decent one at that.
I think this is part of the reason why the anti-AI-art movement has pushed back so loudly. AI art teaches us the wrong lessons of what it means to be human. People could become convinced to not want to go outside and walk amongst the trees and experience the world if an AI can hallucinate a convincing replacement from the comfort of their own rooms.
I will say, the kind of art intended for corporate needs (much of which in the last decade in particular has been a deluge of bland vector art with weird blob people) is not the same as the art that many artists make in their own time, or would regard as good.
The through line for a lot of mediocre stuff is the intention of the artist/creator to appeal to as broad a demographic/audience as possible so as to dissolve away anything that makes the art interesting, challenging, and good.
If this generated art merely replaces human-created art, then yes, that can be construed as depressing.
But what if AI generates art where humans don't scale?
For example, what if the AAA game you are expecting gets done in half the time, or has ten times the explorable area, because it is cheap and fast to generate much of the needed art with AI?
Or what if people who are excellent at storytelling but mediocre at drawing can now produce world-class manga with the assistance of AI?
We've seen this before, when CGI first came out, and then with the proliferation of Photoshop and other cheap editors. Now fake garbage is everywhere on the internet. Did that make human life substantially different? Nope. Everyone just ignores most of it and only believes stuff that comes from "reputable sources." That will be the end game here too: a flight to quality.
But also, the explosion in interest means there was a latent interest in instantly generating pictures to begin with.
I think this situation says a lot about the nature of human desire, not just about the fact that a few people were ingenious enough to come up with the idea of diffusion models. A lot of ingenious inventions turn out to be relatively boring when exposed to the broader populace, because they don't hit on such an appealing latent desire.
What will this say about the limitless yet-to-be-invented ideas that humanity is just raring to give itself, if only someone would hit on the correct chain of breakthroughs? Would even a single person today be interested in building a backyard nuclear warhead in an afternoon, and would they attempt it if the barrier of difficulty were removed?
To me it's terrifying, and playing with it gives me a bit of panic. This is still early stuff, like dial-up or 100 MHz processors. We all know the trajectory tech takes nowadays, and the writing on the wall here is an event horizon beyond which it's impossible to see the full scope of how this tech will change the world.
We're like people getting the very first electric light bulbs in their homes, trying to speculate about how electricity will change the world. The pace of change, however, will be orders of magnitude faster this time.
The majority of everything is always mediocre at best. There is no absolute value in these things; they always get pitted against each other. Something mediocre today could have been a masterpiece some decades ago, and a masterpiece from decades ago could be hot garbage today. These things are a constantly moving target and will always shift. People will just adapt their taste and figure out some new, arbitrary rules for why yesterday's masterpiece became today's mediocrity, and so on.
> But I can also see a deluge of mediocre “content”
Have you been to the internet?
In all seriousness, the cream will rise to the top. The mediocre "content" will get generated, and we will get better at filtering it out, which will decrease the value of generating mediocre content, and so on. The tools being produced just further level the playing field for humanity and allow more people to get "in the arena" more easily.
Humans are still the final judge of the value being produced, and the world/internet will respond accordingly.
For a thought exercise, take your argument and apply it to the internet as a whole, from the perspective of a book or newspaper publisher in the 1990s.
High-quality content rarely rises to the top. The internet as of 2022 optimizes for mediocrity: the most popular content is whatever manipulates psychology best, using things like shock value and sexuality. Just take a look at Twitter, Facebook, or Reddit: it is extremely rare to see genuine masterpieces there. Everything is posted to farm as many shares and likes as possible.
If anything, this will result in the cream getting drowned in shit. Not to mention that artists will not get the space to develop from mediocre to excellent, as the mediocre market will have been replaced with practically free AI.
Abolish copyright. Entirely. Unrestricted exchange boosts societies' learning curves and benefits everyone in the long run, except that a few won't become quite as rich in the process. There are several downsides attached to that, but I am willing to accept them.
I wonder if that could be a solution to this: anything AI-generated is public domain, and no one can own the IP to it. That would allow it to be used for research, education, and hobbyists, but hinder how large corporations could use it.
Maybe even have it work like the GPL: anything built using AI-generated material must also be public domain.
The lack of empathy is incredibly depressing...