
Well, I am on the provocative side: as AI tooling matures, current programming languages will slowly become irrelevant.

I am already using low-code tooling with agents for some projects, in iPaaS products.




> Well, I am on the provocative side: as AI tooling matures, current programming languages will slowly become irrelevant.

I have the opposite opinion. As LLMs become ubiquitous and code generation becomes cheap, the choice of language becomes more important.

The problem for me is that with LLMs it is now possible to write anything using only assembly. While technically possible, who can possibly read and understand the mountain of code they are going to generate?

I use LLMs at work in Python. They can, and will, easily pile hacks upon hacks to get around things.

Thus I maintain that precisely because code generation is cheap, it is more important to constrain that code generation.

All of this assumes that you care even a tiny bit about what is happening in your code. If you don't, I suppose you can keep prodding the LLM to fix that binary blob for you.


> The problem for me is that with LLMs it is now possible to write anything using only assembly. While technically possible, who can possibly read and understand the mountain of code they are going to generate?

As a very practical problem, assembly would consume the context window like nothing else. Another issue is static guardrails: LLMs sometimes make mistakes, and without guardrails, debugging some of them becomes quite a big workload.

So to keep things efficient, an LLM would first need to create its own programming language. I think we'll actually see some proposals for a token-efficient language with good abstraction abilities for exactly this use.


Let's say years of offshoring projects have helped me reach that opinion.

I would say that current programming languages have a better chance due to the huge amount of code that AI can train on. New languages do not have that leverage. Moreover, current languages have large ecosystems that still matter.

I see the opposite. New languages have more difficulty breaking into popularity due to the lack of existing code and ecosystems.


I don't agree. For one thing, the language directly impacts things like iteration speed, runtime performance, and portability. For another, there's a trade-off between "verbose, eats context" and "implicit, hard to reason about".

IMO Rust will strike a very strong balance here for LLMs.


Formal specifications and automated testing will beat any language-specific tooling.

Hardly much different from dealing with the output of traditional offshoring projects.


> Formal specifications and automated testing will beat any language-specific tooling.

I don't understand what you mean. Beat any language at what? Correctness? I don't think that's true at all, but I also don't see how that's relevant; it definitely doesn't address the fact that Rust will virtually always produce faster code than the majority of other languages.

> Hardly much different from dealing with the output of traditional offshoring projects.

I don't know what you mean here either.


Any tool that can plug into MLIR and use LLVM can potentially produce fast code.

There is also the alternative path of executing code via agent orchestration, just like low-code tooling works.

I see you never had the fortune of reviewing code provided by cheap offshoring teams.


What is a programming language used for if not the most formal specification possible? Of course it doesn't matter what language you use if you perfectly describe the behavior of the program. Of course, there's also no point in using LLMs (or outsourcing!) at that point.

If the offshore company provides me a Rust crate that compiles, that already gives a lot of guarantees. It does not solve the logic issues, though, and you still need testing.

But testing in Python is so easy for an LLM to abuse. It will create mocks upon mocks of classes and dynamically patch functions to get things going. It's hell to review.


I'm already using models to reason about and summarize parts of the code, from programming language to prose. They are good at that. I can see the process becoming something like English to machine language, and machine language back to English when a human needs to understand. However, another truism is that compilers are a great guardrail against badly generated code. More deterministic guardrails are good for LLMs. So no, I'm not yet at the point where I trust binaries to the statistical text generators.


