Avoiding certain LLMs' output
Re: Avoiding certain LLMs' output
The difference is this stuff is pooping out media; chatbots have existed for ages. Whether they're driven by an LLM or some other model makes no difference, it's not some new bespoke technology. Frankly, I think the chat situation has decreased in quality despite the gains in media generation.
I think what we will see out of this generation of AI is maturation of an artistic design tool from what is getting a bunch of media as this general purpose stuff. Programming has its own necessities that I don't see it fully replacing. The danger in the algorithm learning like a human is now you just have twice the likelihood a wrong decision will be made "just because". Bad ideas and advice outnumber the good, something drawing from statistics and not true learned experience can never have the same perspective.
Re: Avoiding certain LLMs' output
The US Copyright Office has published a report hinting that training commercial LLMs and diffusion image generators on mass quantities of random copyrighted works is not fair use.
"When a model is deployed for purposes such as analysis or research — the types of uses that are critical to international competitiveness — the outputs are unlikely to substitute for expressive works used in training," the office said. "But making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries."

Further reading: Report by the US Copyright Office; article by Lauren Edmonds of Business Insider.
Re: Avoiding certain LLMs' output
They need to just rip off the bandaid and prosecute someone already. We live in a broken precedent-based legal system; if it's going to be that way, we can at least use the lack of precedent to start out on the right foot. Sue OpenAI or whoever into the ground, and then there'll be an established case future disputes can just point to and say "nope, it's illegal". Until they stop jacking around and hold someone to account, it's just gonna keep being this "we don't know what to do" situation. I swear, the legal system in this world: people will get abducted and disappeared to foreign countries with no due process, but we can't fine a company that is demonstrably stealing and flaunting that fact out in public? Law is dead, we killed it.
Re: Avoiding certain LLMs' output
https://arstechnica.com/tech-policy/202 ... -fair-use/
wtf... A day after the US Copyright Office dropped a bombshell pre-publication report challenging artificial intelligence firms' argument that all AI training should be considered fair use, the Trump administration fired the head of the Copyright Office, Shira Perlmutter, sparking speculation that the controversial report hastened her removal.
Re: Avoiding certain LLMs' output
Smell that? That's them shit-winds Mr. Lahey keeps talking about.
Re: Avoiding certain LLMs' output
It seems there is a strong consensus in the NESdev community against LLMs.
I have no problem spotting an image that is AI-generated; usually those are very recognisable. However, I must admit I am much less skilled at recognizing AI-generated text. I can sort of recognize the "encyclopedic" tone that is the default in the most popular LLMs, but the presence of an encyclopedic tone is not proof that something was AI-generated - and it's easy to ask the LLM to use a tone other than the default one.
AI detectors can give false positives as well as false negatives, so they're not very useful. As they are themselves based on AI, I suspect that passing every message you receive through them to make sure you're not being fooled by AI would use the same controversial massive amount of energy and resources as embracing AI and using it all over the place.
The problem is that it's already too late: AI is everywhere, most people seem enthusiastic about it, and my current employer seems to actively encourage their employees to use it for various things.
The most probable outcome in the short term is AI eventually being lightly regulated, such as being obliged to disclose its sources and training material, but still being (mis)used widely. This is concerning, both for the predictable intellectual decline that might occur in the following decades and for the major waste of energy this represents.
Useless, lumbering half-wits don't scare us.
Re: Avoiding certain LLMs' output
Bregalad wrote: Tue May 13, 2025 1:59 pm The most probable outcome in the short term is AI being eventually lightly regulated, such as being obligated to give it's sources and learning material, but still be (mis)used widly. This is concerning, both for the predictable intelectual decline that might occur in the following decades, and the major waste of energy this represents.

That, or the Internet eventually turns into The Library of Babel. Though some could probably argue that it's already happened in certain areas.
Quietust, QMT Productions
P.S. If you don't get this note, let me know and I'll write you another.
Re: Avoiding certain LLMs' output
Well, then you get into dead internet theory in overdrive: content designed to appeal to agents first, who then decide the consensus is to appeal to agents. Soon enough the majority of web traffic and content isn't for us; it's for a self-sustaining reaction...
Re: Avoiding certain LLMs' output
segaloco wrote: Mon May 12, 2025 1:39 pm They need to just rip off the bandaid and prosecute someone already.

Saveri Law Firm has been representing authors in civil court for the past few years, suing the generative tech giants behind major LLMs, GitHub Copilot, and image generators, and seeking class-action certification. It just takes a while.
Re: Avoiding certain LLMs' output
These things need to stop taking a while. I swear our entire society is so high from huffing bureaucratic fumes that it forgets you can just... do things. The entire structure of our legal system is a farce, and we severely limit ourselves in our ability to actually get *ANYTHING* done with endless procedural red tape. Nature does no such thing and survives blissfully in spite of the hell our species regularly unleashes on earth. I'll be damned if artistry dies as a matter of procedure. It just eats me alive how slow this all moves when it doesn't have to; it's all so incredibly arbitrary and serves no one. The environmental decimation from data centers doesn't wait on our sluggish legal processes. The violation of countless artists' livelihoods is not standing at a turnstile waiting patiently for some leech in a legal office to flick their pen across the correct paper. We need to stop depending so much on load-bearing ink and take action in our physical world, with our physical bodies, to actually correct the things bearing down on us. I'm so sick of paperwork and bureaucracy being an excuse for things not getting done in the here and now.
No living thing artificially limits itself as much as ours, advanced species my ass.
Not mad at you personally, by the way, but I've just gotten this line of thought from a lot of people on a lot of things lately, and Jesus Christ, it's getting to the point where I need to wait in a line for a week for a notarized form in triplicate to approve my request to pick a cast-off tissue up off the floor. There's a speeding train headed towards us and we're sitting on the tracks agonizing over whether to use a 2B or 2HB pencil to plead with the conductor to consider pulling the brakes. Every other living thing in the world would step out of the way or, if physically possible, fight the train.
Re: Avoiding certain LLMs' output
The paper trail is important because it reinforces the decision made later, and also allows that decision to be revisited/reevaluated later if need be. If everything were just a verbal contract, we'd have no cohesion, and it'd be easy to constantly flip-flop the decision.
The slow process is also what preliminary injunctions are for: they force you to stop what you're doing until the courts decide whether it's legal or not.
They benefited from it anyway while it was stalled in court? That's what damages are for.
If the courts can rule that APIs are non-copyrightable, then they can rule on this too.
Re: Avoiding certain LLMs' output
I wish I could be that optimistic, but we've seen throughout our history examples of the horse being out of the barn and those with the power to make a difference just throwing their hands up and saying, oh well, we can't do anything. No amount of injunctions or damages collected later changes what is happening literally in the here and now. There are damages being done, on a social and philosophical level, to our relationships with technology and each other, that no amount of money extracted from these corporations is going to fix in the future. When something is this culturally impactful, manipulating the economics of it all isn't going to reach into people's brains and rip out the damage it has already done to our perspective on what is and isn't just. That's the stuff our legal system can never seem to fix, because it is entirely focused on precedent and money. Damage to the human social condition is a real thing too, and it doesn't get fixed with money.
Re: Avoiding certain LLMs' output
It seems that, for now, you can detect some AI outputs by looking for "other forms of space" (unusual whitespace characters): https://youtube.com/shorts/qt4r_Y3uz74
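If the tip above refers to non-standard Unicode whitespace (characters like no-break or thin spaces that sometimes slip into generated text), a quick check is easy to script. A minimal sketch in Python; the heuristic itself is an assumption on my part, not something the linked video confirms, and human-written text copied from word processors can trip it too:

```python
import unicodedata

def unusual_spaces(text):
    """Return (index, description) for each whitespace character
    that is not a plain ASCII space, tab, or newline."""
    ordinary = {" ", "\t", "\n", "\r"}
    hits = []
    for i, ch in enumerate(text):
        # str.isspace() covers all Unicode whitespace, including
        # U+00A0 NO-BREAK SPACE and U+2009 THIN SPACE.
        if ch.isspace() and ch not in ordinary:
            hits.append((i, "U+%04X %s" % (ord(ch), unicodedata.name(ch, "UNKNOWN"))))
    return hits

print(unusual_spaces("plain ASCII text"))   # []
print(unusual_spaces("odd\u00a0space"))     # [(3, 'U+00A0 NO-BREAK SPACE')]
```

Treat a hit as a reason to look closer, not as proof of anything.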