I wonder if we'll reach a breaking point with public forges, where they'll simply refuse to host a repo unless it comes from someone with a vetted background, or where they reject anything bearing hallmarks of LLM slop (e.g., many commits over a short period of time, or other LLM tells).
GitHub recently added new repository settings to turn off pull requests or limit them to approved contributors. The announcement doesn't mention AI agents, but that's certainly relevant.
GH also needs to find a way to stop AI scraping of IP.
(Or not. It might be lucrative to host some novel algorithm on GH under a license permitting its use in generative LLM results, at a reasonable per-impression fee.)
I think there'll be space for curated forges at some point, but they're going to live on the margins, like most self-hosted repos do.
You could attack it with tech, borrowing ideas from radicle and tangled, but slop is ultimately a social problem. The social fix is invite-only forges where the source of the invite is also held accountable (lobsters style).
If you want a high quality internet experience these days you have to step out of the mainstream.
I think AI will end up doing the vetting of repos, just as humans do now. Perhaps AI will even do a better job. The only way we're gonna fight AI slop is with AI.