Hacker News | abelanger's comments

Author here - I'm also generally skeptical of coding agents, but with the right problem domain and approach they can produce quality output when paired with humans. There was a point in time in the chess world where computer + human was stronger than computer or human alone. I think we're in that era for a handful of applications. Not for things like kernels, browsers, or databases.

> Besides, who is going to maintain that code?

I maintain the code. If Claude gets sunset tomorrow, I'll still be able to maintain and write it - I've already rewritten parts of it.

You could make the same argument for a team member leading a project that you've worked on. Is that code forever required to be maintained by one team member?

Previously, the overhead of ensuring code quality when Claude Code drove the development process was greater than just writing the code myself. But that was different for this project.


Hi, thanks! To be clear, the demo there is merely a WASM-based Ghostty build which is rendering the TUI on a web page, just so people could try it out without needing to install anything. The actual TUI runs in your terminal. I'm guessing it's the WASM side of things causing the fans to spin, which you wouldn't see locally.

Hi everyone, I enjoyed building this TUI for myself and wanted to write down how I did it. I appreciate all the thoughts and feedback! The web app is our main investment, but I think there's a slice of developers who really like to interact with TUIs, so I'm going to keep working on it.

For the demo at https://tui.hatchet.run, to answer some messages asking about it: I built this with the fantastic ghostty-web project (https://github.com/coder/ghostty-web). It's been a while since I've used WASM for anything and this made it really easy. I deployed the demo across six Fly.io regions (hooray stateless apps) to try to minimize the impact of keystroke latency, but I imagine it's still felt by quite a few people.


You're right - I'll remove that now until we can get it more performant or drop it altogether. This wasn't something we caught during testing. I appreciate the feedback!

While you're at it, it would be good if the post were readable at all without having to run JS on the page.

It rendered perfectly, without JavaScript, in Emacs EWW.

I think perhaps Emacs does not support the `hidden` attribute?

https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/...

If you check the source (not the DOM), the actual content is loaded in `<div hidden="" id="S:0"> ...`, which is then moved/copied into the proper main content div in the DOM by a JS event, it seems.


It must have sent it differently if the browser reports it can’t do JavaScript.

I used to try EWW sometimes, but it occasionally made all of Emacs crash at unpredictable times, so I stopped using it. But good to know - maybe I'll try again in the future, hoping it has become more stable/safe.

I don’t think EWW has ever made my Emacs crash. I wanna say I’ve been using it regularly since Emacs 27.


Sounds like the CodeRabbit controversy: https://xcancel.com/harjotsgill/status/2004050004785484172


Perfect! Thanks so much!


Drawing the boundary at high throughput, huge fan-out, and ultra-low latency is correct - I'd also add that MQs are often used for pub/sub and signaling.

MQs are heavily optimized for reducing E2E latency between publishers and consumers in a way that DE engines are not, since DE engines usually rely on an ACID-compliant database. Under load I've seen an order of magnitude difference in enqueue times (low single-digit milliseconds for the MQ p95 vs 10ms p95 for Postgres commit times). And AMQP has a number of routing features built in (e.g. different exchange types) that you won't see in DE engines.

Another way to think about it is that message queues usually provide an optional message durability layer alongside signaling and pub/sub. So if you need a very simple queue with retries _and_ you need pub/sub, I'd be eyeing an MQ (or a DE engine that supports basic pub/sub, like Hatchet).

I wrote about our perspective on this here: https://hatchet.run/blog/durable-execution

(disclaimer: I'm one of the people behind https://github.com/hatchet-dev/hatchet)


This is seriously cool - it's exactly the DX and API I've been waiting for from sandboxed execution providers.

I'd love to be able to configure the base image/VM in a way that doesn't bundle coding tools or anything else I don't need, and comes with some other binaries installed (I'm more interested in using this as an API for a sandbox use-case I have). Is there a way to do this at the moment / is this on the roadmap?

Another option would be configuring the sprite via checkpoint and then cloning the checkpoint from a base sprite, but I don't see this option anywhere either.


This is on the roadmap. The open question right now is whether we can just do "fork from checkpoint" for customized template environments, or whether we need all the Docker infrastructure.

Is the fat bundled environment harmful for you, or is it just extra stuff you don't care about?


Not harmful for now - "fork from checkpoint" would be perfectly fine for me at the moment. The main issue (as flagged in the post) is that setting up additional tooling can take a while!

In the longer term, Docker is nice from a reproducibility + CI perspective, and a Docker build is already something I can easily work with and track in my system.

One thing I've heard (but not verified) about other sandboxed execution providers is that startup times for custom images can be quite slow, so this could be a differentiator given Fly's existing infra.


Yes! It would be kinda cool to have the ability to docker-deploy (think the fly method even -- just to get your sprite on its feet the way YOU want it) a base sprite image and then just go from there in the normal sprite way from then on.


Hatchet | Founding Engineer | NYC or REMOTE (US and EU) | https://hatchet.run

Hey HN! I'm Alexander, one of the founders of Hatchet. Hatchet is an open-source platform for running background jobs at scale.

We're hiring engineers who are excited to build the next class of engineering primitives, starting with queues, background tasks and durable execution. We started in early 2024 after launching our distributed task queue (https://news.ycombinator.com/item?id=39643136).

Hatchet is currently used by thousands of engineers for all kinds of workloads: log ingestion pipelines, code review agents, video encoding, GPU scheduling, etc. Our target customer is fast-growing startups with a strong need for a background jobs system. These days, that tends to be AI companies, though we're general-purpose and not exclusively targeted at AI workloads.

Stack: Postgres, Go, Typescript, React, Kubernetes

Apply here: https://www.ycombinator.com/companies/hatchet-run/jobs/SNpCm...

Or email me at alexander [at] hatchet [dot] run


Your website is INCREDIBLY slow, and I have a very good laptop with dedicated GPU. Somehow some JS is killing the whole thing.

Anyways, the concept looks cool, but I'm failing to see a real value add to something like Temporal. What is it?


Oh no, haven't gotten reports about the website being slow, thanks for flagging! Which browser are you using?

Regarding Temporal, our goal is to create the best developer experience possible, and we started Hatchet because we felt that Temporal misses the mark (I used Temporal for years before starting Hatchet).

The primary difference is that we're not solely focused on durable workflows; we're a general-purpose background jobs platform that offers durable workflows as a feature. In our view there's a set of equally important primitives: tasks, events, streaming/pub-sub, concurrency, priority, rate limiting, scheduling, and yes, durable workflows.

Making tasks the entrypoint to the platform, rather than requiring you to take on the overhead of durable workflows immediately, generally makes Hatchet easier for engineering teams to adopt. I wrote a little more about how task queues relate to durable execution here: https://hatchet.run/blog/durable-execution

We've also invested quite heavily in platform features like logging, observability, alerting, and our UI which either aren't offered or are underdeveloped in Temporal.

But ultimately I'd encourage people to give both a try - we're both MIT-licensed and can easily be run locally.


I’m on Chrome. Maybe it was a hiccup on my end.

Looks interesting.

We are in the process of looking for some alternatives to temporal/prefect. Would you mind sharing your email so I can send a few questions along with my cofounder?


Yep! Feel free to email me at alexander [at] hatchet [dot] run


The website seems fine to me. I'm using Chrome on Linux with an X1 Carbon (so integrated graphics, no fancy GPU).


I didn't notice the website being slow (Firefox on Android, midrange device)


I mentioned this towards the bottom of the post, but to reiterate: we're extremely grateful to Laurenz for helping us out here, and his post on this is more than worth checking out: https://www.cybertec-postgresql.com/en/partitioned-table-sta...

(plus an interesting discussion in the comments of that post on how the query planner chose a certain row estimate in the specific case that Laurenz shared!)

The other thing I'll add is that we still haven't figured out:

1. An optimal ANALYZE schedule on the parent partitions; we're opting to over-analyze rather than under-analyze at the moment, because it seems like our query distribution might change quite often.

2. Whether double-partitioned tables (we have some tables partitioned first by time and second by an enum value) need ANALYZE on the intermediate tables, or whether the top-level parent and bottom-level child tables are enough. So far just the top-level and leaf tables seem good enough.
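For concreteness, here's a minimal sketch of the double-partitioned layout described above (all table and column names are invented for illustration; our real schema differs):

```sql
-- Top level: partitioned by time. Second level: partitioned by an enum-like column.
CREATE TABLE tasks (
    inserted_at timestamptz NOT NULL,
    status      text        NOT NULL,
    payload     jsonb
) PARTITION BY RANGE (inserted_at);

-- Intermediate table: itself partitioned, holds no rows directly.
CREATE TABLE tasks_2025_06 PARTITION OF tasks
    FOR VALUES FROM ('2025-06-01') TO ('2025-07-01')
    PARTITION BY LIST (status);

-- Leaf partition: this is where rows actually live.
CREATE TABLE tasks_2025_06_queued PARTITION OF tasks_2025_06
    FOR VALUES IN ('QUEUED');

-- Autovacuum never processes partitioned parents (they hold no rows),
-- so parent-level statistics have to be refreshed explicitly on a schedule:
ANALYZE tasks;  -- also recurses into the intermediate and leaf partitions
```

The last line is the crux of both open questions: a plain `ANALYZE` on the top-level parent recurses the whole tree, so the question is how often to run it and whether the intermediate level's statistics matter for our query shapes.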


I'd consider myself pretty familiar with Postgres partitioning, and I've even worked with systems that emulated partitioning via complex dynamic SQL in stored procs before it was supported natively.

But TIL - I didn't realize you could do multiple levels of partitioning in modern Postgres. I found this old blog post that touches on it: https://joaodlf.com/postgresql-10-partitions-of-partitions.h...

Something that stresses me is the number of partitions - we have some weekly partitions that have a long retention period, and whilst it hasn't become a problem yet, it feels like a ticking time bomb as the years go on.

Would a multi-level partitioning scheme of, say, year -> week be a feasible way to sidestep the issues of growing partition counts?
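Concretely, I'm imagining something like this (a hypothetical sketch; names invented, and sub-partitioning on the same range key):

```sql
CREATE TABLE metrics (
    bucket_at timestamptz NOT NULL,
    value     bigint      NOT NULL
) PARTITION BY RANGE (bucket_at);

-- Year-level intermediate partition, itself range-partitioned by week.
CREATE TABLE metrics_2026 PARTITION OF metrics
    FOR VALUES FROM ('2026-01-01') TO ('2027-01-01')
    PARTITION BY RANGE (bucket_at);

-- Weekly leaf partition inside the year.
CREATE TABLE metrics_2026_w01 PARTITION OF metrics_2026
    FOR VALUES FROM ('2026-01-01') TO ('2026-01-08');
```

I realize the total leaf count stays the same either way - the hope is that top-level pruning can eliminate entire year subtrees early, rather than the planner considering every weekly partition.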

