Hacker News

The models used for apps like Codex: are they designed to mimic human behaviour, as in they deliberately create errors in code that you then have to spend time debugging and fixing? Or is it a natural flaw, and is it just a coincidence that humans do the same?

This keeps bothering me: why do they need several iterations to arrive at a correct solution instead of getting it right the first time? Prompts like "repeat solving it until it is correct" don't help.




> as in they deliberately create errors in code that then you have to spend time debugging and fixing

No. All the models are designed to be "helpful", but different companies interpret that differently.

If you're seeing the model deliberately create errors so you have something to fix, then something is fundamentally wrong in your prompt.

Besides that, I'm guessing "repeat solving it until it is correct" is a condensed version of your actual prompt. Or is that verbatim what you prompt the model with? If so, you need to give it more detail to actually be able to execute something like that.


> then that sounds like something is fundamentally wrong in your prompt.

I am holding it wrong?


> If you're seeing the model deliberately creating errors so you have something to fix, then that sounds like something is fundamentally wrong in your prompt.

No, all these models are just bad at anything they weren't RLed for, and decent at the things they were. Decent, because the people who evaluate them aren't experts.


> No, all these models are just bad for anything that they weren't RLed for, and decent for things they were

Are you claiming that the models are RLed to intentionally add errors to our programs when we use them, or what's the argument you're trying to make here? Otherwise I don't see how it's relevant to what I said.


No, I am making the argument that models have poor capabilities outside of the tasks they are RLed for, and that their capabilities inside those tasks are only as good as the capabilities of the people evaluating their responses, i.e. not great. Even if you instruct the model "don't do X" or "do X this way", you cannot rely on the model following that instruction. This means there is nothing you can do if the model makes "errors."

Not necessarily relevant, but fun: I had the ChatGPT model correct itself mid-response while checking my math work. It started by saying that I was wrong, then it proceeded to solve the problem, and at the end it realized that I was correct.


> Even if you instruct the model "don't do X" or "do X this way"—you cannot rely on the model following that instruction.

Why not? I can definitely fire off two prompts to the same model and harness, one including "don't do X" and the other not, and I get what I expect: the one with the instruction avoided doing X, and the other didn't. Is that not your experience using LLMs?


It depends on the instruction, and on how many other instructions there are. Models converge on doing things the way that emerged from their training, and with every turn the model cares less and less about your instructions. In practice, this means that after you have the model plan and then execute that plan, you almost always end up having to iterate on the result, because while producing the output the model begins to derail and ignore instructions. You get things like "In a real app, we would do X; for now, just return null", or various subtle bugs.

It makes sense if you remember that it's just predicting: what should the next piece of text probably be?
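To make that concrete, here is a toy sketch of greedy next-token prediction using a hypothetical bigram frequency table (nothing like a real LLM, just an illustration of "pick the most likely continuation, one token at a time"):

```python
from collections import Counter, defaultdict

# "Train" a toy bigram model: count which word follows which in a tiny corpus.
corpus = "the model predicts the next word and the next word follows the last".split()
follow = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev][nxt] += 1

def predict_next(word):
    """Greedy prediction: return the most frequent follower seen in training."""
    return follow[word].most_common(1)[0][0]

# Generate by repeatedly predicting the next token from the last one.
text = ["the"]
for _ in range(4):
    text.append(predict_next(text[-1]))
print(" ".join(text))  # prints "the next word and the"
```

The model has no notion of "correct"; it only continues text in the statistically likely way, which is why an instruction can be outweighed by patterns from training.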


I understand how they work; I work with them every day and have been doing so for two years or so. What I don't understand is how what you're saying relates to the "deliberately create errors in code" part, which is where I jumped into the discussion.

Maybe I'm missing some bigger picture you're trying to paint here? I understand (and see) them making "mistakes" all the time, and I suppose you could argue it's deliberate in some sense, because it's simply how they work, and adjusting the prompt and retrying usually solves the problem. But I'm afraid I don't see the connection, at least yet.



