The cognitive load of a code review tends to be higher when it's submitted by someone who hasn't been onboarded well enough, and it doesn't matter whether they used an AI or not. A lot of the mistakes are trivial or don't align with the status quo, so the code review turns into a way of explaining how things should be done.
This is in contrast to reviewing the code of someone who has built up their own context (most likely on the back of those previous reviews, by learning). The feedback is much more constructive and gets into other details, because you can trust the author to understand what you're getting at and they're not just gonna copy/paste your reply into a prompt and be like "make this make sense."
It's just offloading the burden to me because I have the knowledge in my head. I know at least one or two people who will end up being forever-juniors because of this and they can't be talked out of it because their colleague is the LLM now.
I think there'll be space for curated forges at some point but they're going to live on the margins like most self-hosted repos do.
You could attack it with tech by using ideas from radicle and tangled, but the slop is ultimately a social problem, so you end up with invite-only forges where the source of the invite is also held accountable (lobsters-style).
If you want a high quality internet experience these days you have to step out of the mainstream.
Getting your Google Workspace account nuked because an employee hooked their company Gemini account to OpenClaw would certainly be a novel business risk.
Isn't that pretty much par for the course for these megacorps? Account gets banned as a disproportionate response to something minor, or in many cases for no explicable reason at all, and anyone without enough of a platform to do "bad PR escalation" via social media or traditional media gets to learn the hard way that their "customer service" is just a brick wall that can't or won't do anything about it.
Adopting a massive dependency on a single company is generally a mistake.
You're not wrong but Google in particular paved the way for not doing support, or doing as little support as possible, and oftentimes things only get actioned if you generate enough clout on social media to attract a Google engineer's attention.
It's hard to avoid the massive dependencies, especially if you're starting small and moving fast, because something like Google Workspace or Microsoft 365 or Slack is cost-competitive compared to spinning up your own internal stack of tooling. At least until it isn't, but hopefully your startup has grown enough by then that it can afford to address these concerns.
Google services are banned at the very large company I work at and that's not because they are technically poor.
It's just that the last time we had to deal with their customer support, they were so bad that someone at the exec level said they were banned from then on. It's to the point that we have to explicitly schedule high-level meetings and carve out exceptions when Google happens to buy products we use.
We work with nearly everyone in the cloud space except Google. That should tell you everything you need to know.
Google has gigantic power over its users. Consider that, for some reason, Google bans your Gmail account, which you use for a large number of logins to different essential services.
Yes, I notice that too. I hide my last name now, because at my company it's just firstname.lastname, so it's easy to guess.
It helps a lot, but I still get a lot of sales goons. A lot of them follow up constantly too: "hey, what about that meeting invite I sent you, why did you not attend?" My deleted-email folder is full of them (I instantly block them the minute I get an invite to anything from someone I don't know, and I wish Outlook could ban the entire origin domain too, but it doesn't).
Put an emoji after your name in LinkedIn. Something that obviously isn’t part of your name. All the bots that scrape LinkedIn and guess your email address will include the emoji when addressing you in an email; no humans will. You can then use this in a spam filter.
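The canary trick above can be sketched as a simple filter rule. This is a hypothetical example, not any real mail client's API: the emoji, the greeting format, and the function name are all assumptions; in practice you'd wire the same check into whatever filtering your mail setup supports.

```python
# Hypothetical sketch of the emoji-canary spam check. The canary emoji
# below is an assumption; use whatever you actually appended to your
# LinkedIn name. Scrapers copy the display name verbatim, so the emoji
# shows up in their salutation; a human typing your name won't include it.
CANARY = "\U0001F9ED"  # an arbitrary emoji appended to the LinkedIn name

def is_scraped_spam(email_body: str) -> bool:
    """Return True if the salutation line echoes the canary emoji."""
    first_line = (email_body.strip().splitlines() or [""])[0]
    return CANARY in first_line

print(is_scraped_spam("Hi Jane \U0001F9ED,\n\nQuick question..."))  # True
print(is_scraped_spam("Hi Jane,\n\nRe: our call..."))               # False
```

Checking only the first line keeps false positives down, since a legitimate reply quoting the spam further down wouldn't trip the filter.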
I'm a bit on the fence with this one. Sure, spam is bad, but these datasets also enable you to reach out to somebody outside of LinkedIn's walled garden (personally, without automation).
If it enables a tiny startup trying to solve the exact problem I have to reach out to me – I’d say it’s a net positive (but not by a huge margin), and having to blacklist @mongodb.com with their certifications bullshit is a price I’m ready to pay. If more spammers get their hands on this kind of dataset though it’ll probably be a disaster.
I think your example goes out of the scope of an expensive platform like Apollo that exists to maintain a shadow profile based off of your online presence, though.
Maybe the thought occurs that one is only reachable on LinkedIn on purpose, and that just because a recruiter from eight years ago has your number, it's not up for grabs?
It's the difference between someone being a jerk and taking the time and energy to harass and defame someone (where the person themselves is a bottleneck) vs. running an unsupervised agent to carpet bomb the target.
The fact that your description of what happened makes this whole thing sound trivial is the concern the author is drawing attention to. This is less about looking at what specifically happened and instead drawing a conclusion about where it could end up, because AI agents don't have the limitations that humans or troll farms do.
The simple fact that the owner of this bot wanted to remain anonymous and completely unaccountable for their harassment of the author, says everything about the validity of their 'social experiment' and the quality of their character. I'm sure that if the bot was better behaved they would be more than happy to reveal themselves to take credit for a remarkable achievement.
Something like OpenClaw is a WMD for people like this.
I've seen the internet mob in action many times. I'm sympathetic to the operator not outing themself, especially given how far this story spread. A hundred thousand angry strangers with pitchforks isn't the accountability we're looking for.
I found the book So You've Been Publicly Shamed enlightening on this topic.
I would never advocate for torches and pitchforks, I've been close to victims of that in the past.
It is, however, concerning that the owner of that bot could passively absolve themselves of any responsibility. The anonymity in that sense is irrelevant, except that it is used as a shield for failure.
There is a class of YouTube "content creators" who like to point out "cringe" individuals online for others to laugh at. They will often add a disclaimer to their videos saying "hey, please don't go and harass this person, pinky promise!" But it never works. A horde of internet randos will descend on the individual to say the nastiest things. When pressed, the YouTuber will just say "I would never do that!", even though they knew their video would lead to the harassment, or there would not be a disclaimer in the first place.
Not accusing you of trying to stir up harassment, but please consider the second order effect of the things you advocate for, in this case the disclosure of the identity of this AI guy.
Then there's the next level of content creators that only post videos about the original content creators who are behaving badly. They will report on their behavior and any repercussions. Some do it like they are reporting the news. It stokes the fire when these people should be ignored.
But in this case, isn't Rathbun's owner the YouTube guy in this scenario?
I totally understand why they're trying to stay anonymous; it's a very rational thing to do, because people will shit on them. But they or their creation is the one that started trying to play the name-and-shame game.
It's hard to stir up too many feelings of sympathy here.
Exactly. I'm not saying this person should disclose their identity, but they are very conveniently using anonymity and the passive voice to make themselves unaccountable for the 'social experiment' they conducted. And we all know that if it had gone differently, they'd have put their name all over it.
In as many words I'm just calling this person a complete asshole and if I were to ever know this person offline I would be quite clear in explaining that.
A "social experiment", but the guy wasn't even keeping track of changes to the model's configuration:
> What is particularly interesting are the lines “Don’t stand down” and “Champion Free Speech.” I unfortunately cannot tell you which specific model iteration introduced or modified some of these lines. Early on I connected MJ Rathbun to Moltbook, and I assume that is where some configuration drift occurred across the markdown seed files.
It definitely sounds like an excuse they came up with after the fact. I would really like to believe they had good intentions overall, but there are so many red flags in all of this, from start to end.
I'm building a new hardware drum machine that is powered by voltage derived from fluctuations in the stock market, and I'm getting a clean triangle wave from the prediction markets.