We have in fact known this for years; the difficulty is finding a way to do it that maintainers agree comes at a reasonable maintenance burden.
I’m not a compiler developer by trade, although I’ve done all sorts of development over the years. I’m approaching this strictly as a user, perhaps a power user. I used to look at my needs and wants, and sulk because they were not addressed.
Damn, I can’t debug OCaml on my Mac because there’s no DWARF info.
Oh, wow, Jane St released OxCaml! Yay, native debugging on the Mac! Darn, all kinds of package hell is breaking loose. Let me offer my help… Tarides takes care of maintenance of these bits? I’ll talk to them, maybe they can hire me to fix it.
Alas, I’m still here and my needs are not addressed. But, hey, there’s AI and it seems to one-shot fairly complex stuff in different languages, from just a Github issue. Maybe I can try it…
Wow, oh wow! My needs are finally taken care of!
The code seems clean and well-written. I can understand what the AI is doing and why. All tests pass, and documentation and comments are in order. I can definitely use this!
I think that it is a case of different-to-the-point-of-being-incompatible software development processes (rather than a given process being fundamentally right or wrong), and I think that the uncertainty here is in part caused by our lack, on the upstream side, of a clear policy for what we expect regarding AI-assisted code contributions.
That is something I’ve been pondering myself. I tried approaching several projects this way, trying to take care of things that bother me. The reaction is similar across the board. Folks want a nuanced and thorough discussion, as well as buy-in, before an implementation is submitted.
This is incompatible with what I found to be the most efficient way of using AI, though. It doesn’t need that much input. I can kick off a project just by telling it to add DWARF debugging information to OCaml, such that breakpoints, source code listings, and variable printing work in lldb and gdb. I tell it to follow the practices of the OCaml code base, and to make sure new tests are compatible with existing test infrastructure. Then I add that I want all new code to be thoroughly tested.
AI goes away and does the work, asking me for input along the way. I review what it's doing and make sure it doesn't take shortcuts. This is more art than science at this point. Different models produce different work, so you need to know which model to use: some are great at code review, others are better at coding. You also need to carefully steer the AI and make sure it stays on track.
I don’t know of a single project that’s ready for this kind of development process. It often gets emotions running high (is artisanal coding dead? will AI take my job?) and creates a lot of friction.
This is why I decided to have AI write me a new Lisp compiler, targeting small binaries and bare metal targets. I’ve been cooking it for the last few days and the results are beyond awesome! Next, I will have AI write me a graph database and a Lisp version of Slint. I’m 100% sure it will work and work well, and I will have it in the next few weeks. The best part is that I won’t have to bother anyone with my forced contributions and can just showcase the end results.
To summarize, I love the new AI sausage and, having visited the sausage factory and done a thorough investigation, I’m not concerned with how the sausage is made. I won’t be forcing my sausage-making processes on anyone and will go make my own sausage!
