FOSS projects are built on trust. The developer removed the co-author attribution after backlash, then seemingly taunted people by telling them good luck identifying which code is LLM-written and which is human-written. That's just plain bad behavior.
Own what you do. Be transparent with the community. The backlash isn't going to kill you. But you dig yourself a deeper grave by openly admitting to obfuscating the development process of a FOSS project.
My personal issue is his choice of model. He chose Anthropic, a company complicit in a war, whose AI is being used by the military to further military interests. Of the many more ethical models out there, why go with that one specifically?
I wish you had addressed the first two paragraphs I wrote, as I feel they're more relevant and tie into the developer's behavior more than his choice of AI helper.
What is the standard?
Many platforms make active efforts to suppress such propaganda. But I concede that people do need to choose a platform that reaches the widest possible audience, especially for a project that needs broader attention.
But an LLM isn't that. An LLM isn't a platform; it's a utility tool, one for creation. A previous commenter pointed out that the developer tried to pick a model that isn't helping the military, which suggests the developer does have an ethical stance. Maybe that choice was made before Anthropic began aiding the military.
I wonder whether his choice has been, or will be, changed.