This article seems to be three to six months past due; as in, the insights are late.
>One animator who asked to remain anonymous described a costume designer generating concept images with AI, then hiring an illustrator to redraw them — cleaning the fingerprints, so to speak. “They’ll functionally launder the AI-generated content through an artist,” the animator said.
This seems obvious to me.
I’ve drawn birthday cards for kids where I first use gen AI to establish concepts based on the person’s interests and age.
I’ll get several takes quickly but my reproduction is still an original and appreciated work.
If the source of the idea cheapens the work I put into it with pencils and time, I’m not sure what to say.
> “If you’re a storyboard artist,” one studio executive said, “you’re out of business. That’s over. Because the director can say to AI, ‘Here’s the script. Storyboard this for me. Now change the angle and give me another storyboard.’ Within an hour, you’ve got 12 different versions of it.” He added, however, if that same artist became proficient at prompting generative-AI tools, “he’s got a big job.”
This sounds eerily similar to the messaging around SWE.
I do not see a way past this: one must rise past prompting and into orchestration.
The Lord of the Rings: The Return of the King back in 2003 used early AI VFX software MASSIVE to animate thousands of soldiers in battle: https://en.wikipedia.org/wiki/MASSIVE_(software) - I don't think that was controversial at the time.
According to that Wikipedia page MASSIVE was used for Avengers: Endgame, so it's had about a 20 year run at this point.
The problem is not AI per se (which is only a mix of algorithms). The problem is that this new wave of AI is trained on proprietary content, and the owners/creators never gave permission in the first place.
If this AI worked without training, no one would say anything.
> If this AI worked without training, no one would say anything.
I don’t believe that for one second.
People are rightfully scared of professional and economic disruption. The "OMG, training!" outrage is just a convenient bit of rhetoric to establish the moral high ground. If and when AIs appear that are trained entirely on public-domain and synthetic data, there will be some other moral argument.
Yeah, it definitely is just a convenient argument for people who feel threatened. I don't for a second buy that the same internet that has so consistently disregarded copyright law with such reckless abandon is now sincerely clutching its pearls about this.
petabytes of training data are transformed into mere gigabytes of model weights. no existing copyright laws are violated. until new laws declare that permission is required, this is a non-argument.
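To put rough numbers on that compression (every figure below is my own illustrative assumption, not a measurement of any particular model):

    # Back-of-envelope for the "petabytes in, gigabytes out" claim.
    # All numbers are assumptions chosen for illustration.
    TRAINING_DATA_BYTES = 10 * 1024**5   # assume ~10 PiB of raw training data
    PARAMS = 70e9                        # assume a 70B-parameter model
    BYTES_PER_PARAM = 2                  # fp16/bf16 weight storage

    weight_bytes = PARAMS * BYTES_PER_PARAM
    print(f"weights: {weight_bytes / 1024**3:.0f} GiB")             # ~130 GiB
    print(f"ratio: ~{TRAINING_DATA_BYTES / weight_bytes:,.0f}:1")   # ~80,000:1

At roughly five orders of magnitude of compression, verbatim retention of most inputs is physically impossible, which is the gist of the argument above.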
>If this AI worked without training, no one would say anything.
Adobe Firefly was trained on licensed content, and rest assured, the anti-AI zealots don't give it a pass.
Copyright is just one of the many angles they use to decry the thing that threatens their jobs.
People would still be griping about how it devalues the hard work artists have put in, how it "isn't real art", and all the other things. The only difference is the public at large would be telling them to put a sock in it, rather than having some sympathy because of deceptive articles about how big tech is stealing from hardworking artists.
Two distinct framings are getting conflated here:
- LLMs were trained on copyright-protected content, devaluing the input a worker puts into creating original work.
- LLMs are a tool for generating statistical variations and refinements of work; this doesn't devalue the input but makes generating output easier.
These are form-vs-function issues. So it would be preferable to give people a legal pathway to keep making money from and owning their work, instead of allowing it to be vacuumed up by corporations looking to automate them away. The functional issue would still exist, but it wouldn't put your personal work at risk of theft or abuse outside of its economic intent. Then the social stigma doesn't really matter, because "an LLM is just a tool" becomes a solid argument that doesn't cause abuse or erosion of existing legal protections.
I'd say the comparison points at a misunderstanding of the current controversy, though I realize you are doing so deliberately to ask "Is it really that different if you think about it?"
But I'll bite. MASSIVE is a crowd-simulation solution; the assets that go into the sim are still artist-created. Even in 2003, people were already used to this sort of division of labor. What the new AI tools do is shift the boundary between artists providing input parameters and assets vs. the computer doing what it's good at, massively and as a big step change. It's the magnitude of the step change that's causing the upset.
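For the tech folks who never touched this class of tooling, the split works roughly like this: artists author the assets and tune per-agent parameters, and the software runs an agent simulation whose group behavior emerges. A toy sketch of that division (in no way MASSIVE's actual design, just an illustration of parameters-in, behavior-out):

    # Toy agent-based crowd sim: the "artist" supplies per-agent parameters,
    # the computer produces emergent group movement. Purely illustrative.
    import random

    class Soldier:
        def __init__(self, x, y, aggression):
            self.x, self.y = x, y
            self.aggression = aggression  # an artist-tuned input parameter

        def step(self, enemies):
            # Advance toward or shy away from the nearest enemy, depending on
            # how this soldier's aggression compares to a random draw.
            nearest = min(enemies, key=lambda e: abs(e.x - self.x) + abs(e.y - self.y))
            sign = 1 if self.aggression > random.random() else -1
            self.x += sign * (1 if nearest.x > self.x else -1)
            self.y += sign * (1 if nearest.y > self.y else -1)

    # "Artist" input: two armies whose aggression ranges are hand-tunable.
    armies = [
        [Soldier(random.uniform(0, 40), random.uniform(0, 100), random.uniform(0.6, 0.9))
         for _ in range(50)],
        [Soldier(random.uniform(60, 100), random.uniform(0, 100), random.uniform(0.3, 0.6))
         for _ in range(50)],
    ]

    # The computer's job: run the ticks and let the crowd behavior emerge.
    for tick in range(20):
        for side, foe in ((0, 1), (1, 0)):
            for s in armies[side]:
                s.step(armies[foe])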
But there's also another reason that artists are upset, which I think is the one most tech people don't really understand. Of course industrial-scale art does lean on priors (sample and texture banks, stock images, etc.), but by and large productions still take a sort of point of pride in re-doing things from scratch for a given production where possible rather than re-using existing elements, partly because it's understood that the work has so many variables it will come out a little different and add unique flavor to the end product. Artists see generative AI as a regurgitation machine that interrupts that ethic of "this was custom-made anew for this work".
This is typically not an idea that software engineers share. We are comfortable, and even advised, to re-use existing code as-is. At most we consider "I rewrote this myself though I didn't need to" a valuable learning exercise, not good professional practice (cf. the ridicule reserved for NIH syndrome).
This is one of the largest differences between the engineering method and the artist's method. If an artist says "we went out there and re-recorded all this foley by hand ourselves for this movie", it's considered better art for it. If a programmer says "I rolled my own crypto for my password manager SaaS", that's considered incredibly poor judgment.
It's a little like trying to convince someone that a lab-grown gemstone is identical to a dug-up one, even at the molecular level: yes, but the particular atoms, functionally identical or not, have a different history to them. To some that matters, and to artists the particulars of the act of creation matter a lot.
I don't think the genie can be put back in the bottle, and most likely we'll all just get used to things, but I think capturing this moment and what it did to communities and trades, purely as a form of historical record, is somehow valuable. I hope the future history books do the artists' lament justice, because there is certainly something happening to the human condition here.
I really like your comparison there between reused footage and reused code, where rolling your own password crypto is seen as a mistake.
There's plenty of reuse culture in movies and entertainment too - the Wilhelm scream, sampling in music - but it's all very carefully licensed and the financial patterns for that are well understood.
This is just shifting the goalposts, though. I remember people making similar arguments in the early days of Photoshop, digital cameras (and what constitutes a "real" photographer), CGI, etc.
I agree the magnitude of the step change is upsetting, though.
Right, I agree the sentiment isn't new, I'm mostly just trying to explain that way of thinking.
But yeah, the tension between placing a value on doing things just in time vs. reducing the labor by using tools or assets has surely always been there in commercial art.
Tech companies love to show off that they are using AI, how they are embracing it, etc. Among engineers, there is also a growing community of folks who embrace tools like Cursor, ChatGPT, Gemini, v0, etc.
When it comes to artists, I have less insight but what I see is that they are extremely critical of it and don't like it at all.
It's interesting to see that gap in reactions to AI between artists and tech companies.
Tech people like it because it isn't good enough to completely replace them yet. The sophisticated, coherent architecture of a well designed system is (for now) still beyond the LLMs, so for tech people, it's still just a wonderful tool. But give it another year, and the worm will turn.
lumberjacks didn't go away when chainsaws were invented; demand for wood rose to meet the falling cost of wood, and lumberjacks kept cutting down trees. I don't see why it would be any different for programmers.
I’m an artist and also work in tech. Enjoy using AI for work, no interest in using it for my art.
Using AI for art is an idiotic proposition for me. If I was going to use AI to write my novel, I would literally be robbing myself of the pleasure of making it myself. If you don’t enjoy perfecting the sentence, maybe don’t be a writer?
That’s why there’s a disconnect. I make art for personal fulfillment and the joy of the creative act. To offload that to something that helps me do it “faster” has exactly zero appeal.
AI will probably enable new workflows and forms of expression. “Old” ways will still likely be around in some form. Photography didn't kill portrait painting, and movies didn't kill theater.
> If I was going to use AI to write my novel, I would literally be robbing myself of the pleasure of making it myself.
The same would be true if I were going to use AI to read it. If we just wanted to trade CliffsNotes around, why bother with novels at all?
Cyber-Leo-Tolstoy types a three-page summary of "War and Peace" into ChatGPT and tells it to generate an 800-page novel. Millions of TikTok-addled students ask ChatGPT to summarize the 800-page novel into three pages (or a five-paragraph essay). What is the point of any of this?
I would imagine that if it is shameful among the established players to use AI, what will happen is that entirely new players will come in. For me, it's the story that matters, and if they can tell a better story with AI, then many people will naturally flock to them.
I suggest starting with their tutorial, with a decent LLM to help. The workflows can be represented in a written syntax, which you can export repeatedly and paste into a chat for feedback from the LLM based on your goal.
For example, I wanted help setting up the use of a LoRA and batch iteration. The LLM can figure out where you've hooked things up incorrectly. The UI is funky, and the terms and blocks require a familiarity you won't have at the start.
I think learning the basics this way is useful because you'll get a positive feedback loop going before trying to make use of someone's shared, complex workflow.
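For what it's worth, if the tool being discussed is a node-graph system along the lines of ComfyUI (an assumption on my part; the parent comment isn't quoted here), the "written syntax" is just exported JSON, and the same file can be queued headlessly over the local HTTP endpoint. A minimal sketch; the workflow filename is hypothetical:

    # Queue a workflow exported via "Save (API Format)" against a local
    # ComfyUI server. Assumes the default address of 127.0.0.1:8188.
    import json
    import urllib.request

    def queue_workflow(path: str, server: str = "http://127.0.0.1:8188") -> dict:
        with open(path) as f:
            workflow = json.load(f)  # same JSON you'd paste into an LLM chat

        req = urllib.request.Request(
            f"{server}/prompt",
            data=json.dumps({"prompt": workflow}).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())  # includes a prompt_id to poll

    if __name__ == "__main__":
        print(queue_workflow("my_lora_batch_workflow.json"))  # hypothetical file

The nice part of the loop described above is that the JSON is the single source of truth: export, paste into the chat, fix the wiring it points out, export again.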
IMHO, before the end of the decade you will absolutely be able to generate entire long form movies just by writing a paragraph in a prompt. And people will not be able to tell the difference.
Hollywood might save money in the short run, but they are doomed to irrelevance in the long run, because you'll have access to the exact same tools as they do.
> It might cost only $10 million, but it would look closer to a $100 million movie. “We’re going to blow stuff up so it looks bigger and more cinematic,” he said.
Hiding it? Are they supposed to slap on a disclaimer? Feels like one could safely assume they've been using it.
Anyways, AI-generated media is gonna lead to hyper-personalized, on-demand, generated media for people to consume. Sure, Hollywood will still be around, but once consumer computing power and the models catch up, there are gonna be a ton of people choosing their own worlds over the ones curated by an industry.
I think the really interesting part will be the illusion of choice presented by these systems. You'll think you've got the reins, but really it's a choose-your-own-adventure book that's effectively constraining you to the experience They want you to have.
The only way out of this will be HN types who roll their own, and those will probably suck in comparison to the commercial systems filled with product placement and mindblowing amounts of information harvesting.
We watched the live-action Lilo & Stitch reboot yesterday. One thing that struck me was that almost every shot was like 2 seconds or less. That's a lot of camera work for a kids' movie... or is that all they could manage to generate?
It's a race to the (AI-slop) bottom. But most of the inhabitants of the world will barely notice.
I have to wonder if movies will improve or not with AI, because some really stupid franchises have seen stupid amounts of money, while most people barely watch the actually good, creative stuff. We're already swamped with unwatchable schlock, and I'm not sure it will improve if we automate it. It's the same people spending the money to make, promote, and distribute movies; the AI has neither the money nor the impetus to make one. But if most people cared about art, creativity, and good storytelling, there probably wouldn't be a race to the bottom in the entertainment industry.
AI will make professional-level movies cheaper and easier to make, which will make the medium more accessible. The AAA movies probably won't look any better, but indie stuff that previously would have been suggestive and bare due to budgetary constraints can now be more direct and lavish. In many cases that's going to be the difference between an indie project being a viable film and not.
>AI will make professional level movies cheaper and easier to make
If you're talking about the kind of movies with big-budget explosions and violence, then no thanks. That isn't what I'm talking about at all. Sure, AI will make that schlock cheaper. A lot of the "indie" stuff is garbage, too.
I use my instant pot to make risotto in 20 minutes with very little effort that is about 90% as good (in my very stubborn opinion) as making it the hard way. I very much appreciate an amazing risotto, but when I make it myself I'll usually choose the instant pot versus the extra work.
I don't know how accurate that storyboard-obsolescence claim is.
Same goes for music: if you need AI and autotune, find another way to earn a living.
We don’t want any of this and are working to build around it.
It’s being really pushed by a lot of the same people who were pushing Web3 and NFTs and blockchain grifts.
I wonder who is making the OSS version of these tools, so you can specify all the hundreds of parts needed to compose a decent framework?
The heavy emphasis there is on making cutting-edge models work with limited local compute.
Is it good or bad? I don't know, it just is...
It's bad. Look at what social media and cellphones have done to society and human attention spans.
A lot of bad shit will come out of this that won't truly be appreciated until it's already too late to reverse course.
... and produced the worst AI-upscales of True Lies and Aliens, to universal scorn from audiences.
No comment needed.
https://m.youtube.com/watch?v=7ttG90raCNo
Takeaway: Maybe AI good. Maybe AI bad. Scary. But possibility. Everybody try.
Idiocracy was a documentary, and "ASS" https://www.youtube.com/shorts/kJZjU2k5abs is what the AI will calculate we want to see, and it will win awards.
I feel Hollywood might be the same way as the instant-pot risotto: 90% as good for a fraction of the effort.