Generative AI: A New Tool, Same As The Old Tool
By mrkiouak@gmail.com on 2025-05-24

Generative AI: A New Tool, Yet Familiar Concerns
I recently encountered a personal blog with a disclaimer—complete with emojis—stating its content wasn't for Large Language Models (LLMs). This has started to become common, and I find it odd and a little silly.
Here is some context. A recent Atlantic article by Alex Reisner, "ChatGPT Turned Into a Studio Ghibli Machine. How Is That Legal?", highlighted a representative intellectual property (IP) issue. The general concern is that the AI has committed a kind of theft, and while copyright law is highly imperfect, let's set aside copyright as a topic here[1]. Nothing stops someone from drawing a Studio Ghibli-style animation and selling it as their own. Copyright law, per the article (though it was news to me), protects specific characters, not artistic styles, in the same way I could copy Hemingway's style (I wish) and sell that writing without breaking the law.
So why do I think this moral theft concern is silly?
I haven't credited or linked to the blog I read that had this disclaimer and emoji. Normally I would, but I haven't, and I'm not sure the disclaimer was terribly noticeable anyway. I expect the only people who gave it a second thought were folks who find it implausible someone would include such a thing.
So we all commit the kind of theft associated with GenAI. Unless you communicate in nothing but the "googoos and gah gahs" of a newborn, you too are imitating and stealing. If you follow the train of thought, there is no difference between using the alphabet, repeating someone's story or joke as your own, and a GenAI model training on data to then generate something in a similar style. The idea that a publicly available blog post shouldn't be used for "training"—whether of an LLM or a student in a classroom—seems radical to me.
This objection seems comparable to a college requiring students to sign an NDA preventing them from discussing course material in perpetuity outside of school. What then is the point of the college or the blog post?
So, what's truly driving the GenAI backlash?
There are parallels to past technological anxieties. People resisted ATMs out of fear of bank teller job losses, and barcodes raised similar concerns for cashiers. More recently, self-checkout initiatives aimed at reducing cashier roles have seen mixed results: Amazon pivoted to selling the technology after facing higher costs and theft, while Walmart and Costco cut back and hired more staff for self-checkout areas.
To be clear, there are job losses caused by technology. Balancing this, the overall profit increases and cost reductions (sometimes passed to consumers) made the U.S. wealthier. It's worth noting the gains have been unequal (as discussed in pieces like ["The Important Thing in 2025"](https://musings-mr.net/post/5xSBXOGQ0yAMtyH0KVN1)), but we should distinguish between a tool's benefits and how society allocates the benefits.
For the past 30-40 years, software engineering has been a skill in high demand relative to its supply. This underlying demand isn't likely to change. Demand for pure implementation skills might decrease relative to the 2000s-2010s, but demand for senior engineering skills—system architecture, management, and reliability—will increase for at least the next 5-10 years. There's not enough public data on these complex areas for models to be trained effectively and generalize from. Building systems is hard.
I think most white-collar jobs will be impacted in this same way. Approximately 40% of current work will be automated, taking only 5-10% of the time it once did. But this will increase demand for the more complex and valuable skills that constitute the other 60% of the work.
So why the resistance to LLMs accessing data? Charitably, many people understand generative AI to be more magical and capable than it is. See the editors in "At Least Two Newspapers Syndicated AI Garbage." Software code generation was useless two years ago, a productivity sink given how bad, dangerous, and buggy generated code was. But today it's an incredible productivity boost, if you're an experienced and capable software engineer. A larger group might be able to fake it with AI-generated code, but in real-world software systems, security holes, inefficiency, and inflexibility will wreck those efforts. As any experienced software engineer who uses LLMs today will tell you, once a model starts behaving badly when prompted with a particular codebase context, you're often forced into one of two options: prompt for a full refactor, guiding the LLM through splitting things into small pieces that make sense and allow more targeted prompting; or discard the codebase from the LLM's context entirely, prompt for what you want as net-new code, and integrate it yourself.
LLMs are a huge productivity boost for these experienced engineers. I save many hours a week with the advent of GenAI code models. This doesn't mean I'm X% more productive overall. Since at least the days of the dot-com bubble, writing code has never been the bottleneck (outside tiny startups). Figuring out what to build and how to build it has always been 80% of the work—even as corporate practices often tried to shift this strategic work to roles like Product Managers or Business Analysts (BAs in software development have largely gone the way of the dodo).
I suspect most other white-collar professions aren't fundamentally different. Their day-to-day tasks will also dramatically change, just as I now spend, say, a few hours a week writing and updating implementation and unit tests, down from many hours.
I think most people just don't want to deal with change, and so are trying to freeze time by stopping the technology's growth. But this is like plugging a leak in a dike with your finger. This outlook is often mistakenly conflated with the "Luddites". The Luddites weren't anti-machine; they were protesting the conditions and lack of societal support for displaced workers. As Malcolm I. Thomis argued in his 1970 history, The Luddites, and as historian Eric Hobsbawm noted:
"machine-breaking was one of the very few tactics that workers could use to increase pressure on employers, undermine lower-paid competing workers, and create solidarity among workers. 'These attacks on machines did not imply any necessary hostility to machinery as such; machinery was just a conveniently exposed target against which an attack could be made.'[10] Historian Eric Hobsbawm has called their machine wrecking 'collective bargaining by riot'..." (From Wikipedia, Luddite)
The productivity and efficiency gains from AI will ultimately benefit society, and there will still be meaningful work. However, we should enact laws, policies, and regulations that ensure a more equitable distribution of these benefits than we've seen with technological advancements over the past ~60-80 years (roughly since the mid-20th century).
I like the phrase "New boss, same as the old boss" here. I understand a lot of the AI resentment to be the failure of each of us and the rest of society to look out for one another, but at the end of the day, GenAI is no different from electricity or the lightbulb, at least for now and probably for at least the next 5-10 years. But even The Who's "Won't Get Fooled Again" knew that it's not the new particulars that are the problem, it's our old habits and failures that we keep repeating.
[1] While there's a moral and philosophical debate to be had about the fundamental sense of copyright law itself, that's not a focus of this article. Within our current American capitalist system, copyright remains a crucial, albeit imperfect, mechanism for wealth distribution. It neither covers everything we might deem valuable (Studio Ghibli's artistic style, for example, is arguably more valuable than any single character), nor is it the perfect solution for rewarding effort. However, I agree it's the best tool we currently have within our system. I’ve read countless times about inventions by one person being monetized by another, often decades later (e.g., steam engines, computers). Similarly, many inventions arise simultaneously from multiple independent creators (e.g., telephone, calculus). If copyright wasn't a clear positive force in these historical instances, when exactly is it unambiguously beneficial?