Hi Ellie, I'm one of those Clojure users who have vocally favored the use of LLMs in the Clojure community, so I've been looking forward to contributing to a conversation about this. Thanks for bringing it up.
I'd like to address each of your points.
LLM use appears to have a bunch of ethical and practical problems
Information technology has a bunch of ethical and practical problems. That isn't new. One of the major ethical and practical problems with information technology today is copyright and patent law.
I don't know anything about law, but the lawyer at one point says: "This is a copyright infringement."
I'm okay with laws that prevent merchants from lying about the provenance of their goods, but I take offense at the notion that people are not free to transact honestly with one another using the information they already possess. And I reject the notion that people are not sovereign owners of the information they possess. I take offense at those who try to impose constraints on my sovereignty in that regard.
Also this field study, which appears to put the plagiarism rate at a minimum of 2-5%
Impersonation will increase with AI. And people who pretend to accomplish things they didn't actually accomplish deserve to be found out. Fraud is illegal, and cheating is punishable under policy in most environments. But that should not be construed as restricting the reuse of publicly available information about how the world works. Reusing public information and repurposing it in some honest way that is useful in a transaction of mutual interest between people - that freedom must not be curtailed.
I don't know what this means legally, but at least morally and ethically this seems sad.
The legality of copyright and patent law under a liberal system, one where the subjects of a state are taken to be sovereign agents with inalienable rights, has always been on a precarious crash course. Human individual sovereignty is simply not compatible with artificially imposed information embargoes for the sake of (supposedly) temporary economic monopolies.
And that crash course was always destined to come to a head when AI arrived. These two worlds cannot coexist, and we've known this for decades: all byte streams can be reinterpreted; you can't actually own bytes; and a universally coherent patent system across time and space is not mathematically definable without letting one attacker patent everything.
An AI future was never going to be compatible with this artificial information monopoly system that we greedily invented a few hundred years ago, under the newfound powers of the panoptical state.
Apparently, some people in the Clojure space already spoke out against LLMs, but I wasn't able to find any anti-LLM policy.
I would encourage people in the technology community to debate these topics further before jumping to conclusions.
But if we look at where things are going with AI, I think it's fair to say that some percentage of LLM outputs will be BS for a very long time.
But there's a silver lining to that headline - people will still be needed to filter that BS for a very long time.
So I think we'll develop human filtration systems to channel and filter out the noise coming out of these generators.
And the larger and more important a software project is, and the more risk there is to changing a given piece of code, the more human filtration we'll want in that pipeline.
For the Linux kernel, one would hope, so many human eyeballs have reviewed a given piece of code before it's committed that it shouldn't even matter whether a human wrote it - many humans agree the code is the right direction. That's what matters.
So for projects like Clojure, you can have rules like "humans have to see this first," but that's already so obvious - everybody knows Rich would never let a branch of code enter core without his full agreement, even if an alien came from space and handed it to him.
For projects with very distributed control, where the direction of the project needs to be some principled philosophy that can survive any future leader's opinion on the project's direction, I can see the point in creating more abstract rules of engagement for how human filtration systems will limit the rate of BS leaking into a codebase.
Clojure isn't one of those distributed control projects. It's a collection of cool technology bits from a guy we trust won't let slop in, whether it comes from humans or computers. That's already the value proposition. If I were to propose that Rich let more LLM-generated content into Clojure, do we all not already know what the answer would be? Are these anti-LLM policy documents symbols of political solidarity around some group grievances regarding climate, plagiarism, and slop? Or are you really worried Clojure might end up with slop in it?
Ultimately, going forward, given the avalanche of code that LLMs are about to create for us, I think human-oriented slop filtration systems are going to be a necessary component of most open source ecosystems - so I'm not against Clojure having slop-prevention systems. But Clojure IMO is one of the least likely to ever need them anyway. It already has the strongest possible slop filtration system: all Clojure core code changes must transact through the mind of a single person, Rich Hickey. That's already the contract.
As a proponent of using more LLMs to help us explore the boundaries of what is possible, I'm also in favor of communities like Clojure adopting tools and policies and procedures to constrain the rate of change. So I wouldn't be mad at seeing policies from Clojure around it. I would just caution everyone against producing policy documents "Against AI" that won't even mean anything in two years, when everyone has moved on to it being normal. I would frame it as preserving Clojure code quality, as opposed to some sense in which we can turn back time and somehow not have code being generated by LLMs. That's not going to happen, folks.
Anyway. I have some strong opinions on these topics, so I very much appreciate it when folks bring them up, giving us all the opportunity to think about these things a little more deeply - so thank you for posting this. I think it would be unhelpful for the Clojure community to "go to war with AI," but I'm totally in favor of arguing and having debates about pushing this stuff in the right direction, and some of that will be policy docs and guidelines and whatnot. Just my 2 cents, take it with a grain of salt.