Mold 2.40.1 Delivers More Performance, Including New ChatGPT-Generated Algorithm

3 hours ago

PROGRAMMING

Though only a point release, Mold 2.40.1 is another notable step forward for this high-speed linker alternative to GNU ld/gold and LLVM lld, bringing yet more performance improvements.

Mold 2.40.1 eliminates unnecessary memory zero-initialization when using the "--compress-debug-sections" option. This change makes debug section compression faster, and thanks to the reduced file I/O, linking with --compress-debug-sections can now even be faster than linking without it.
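The idea behind the change can be illustrated in C++: value-initializing a large output buffer zero-fills every byte, which is pure overhead when a compressor is about to overwrite the whole buffer anyway. A minimal sketch of the general technique, with hypothetical helper names (this is not mold's actual API):

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <memory>

// Hypothetical helper: allocate an output buffer WITHOUT the O(n)
// zero-fill, for cases where every byte will be overwritten anyway
// (e.g. by a compressor writing its output into it).
std::unique_ptr<char[]> alloc_uninitialized(std::size_t n) {
    return std::unique_ptr<char[]>(new char[n]);   // default-init: no memset
}

// For comparison: value-initialization zero-fills the whole buffer.
std::unique_ptr<char[]> alloc_zeroed(std::size_t n) {
    return std::unique_ptr<char[]>(new char[n]()); // value-init: zero-filled
}
```

Skipping the zero-fill is safe only when the producer is guaranteed to write the entire buffer before anyone reads it, which is exactly the situation when compressing a debug section into a freshly allocated output region.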

Mold lead developer Rui Ueyama confirmed this change can cut link time by about 1.2 seconds on an AMD Ryzen Threadripper 7980X system when linking an executable with around 5GB of debug info sections.

(Image: AMD Ryzen Threadripper 7980X)

Mold 2.40.1 also speeds things up by using a linear-time algorithm for glob pattern matching rather than the previous exponential-time one. The new algorithm should be faster for any glob pattern.

Interestingly, Rui turned to ChatGPT to come up with this new algorithm. He noted in the glob algorithm patch:

"Improve multi-glob pattern matcher so that it's linear-time

I asked ChatGPT how to match multiple glob patterns simultaneously with a given input string, while avoiding the textbook-style NFA-to-DFA conversion. Then it suggested that I implement a bitvector-based NFA simulation algorithm that I wasn't aware. I don't think I could have come up with it myself easily. This is impressive. ChatGPT is so good at programming and sometimes much better than me! It may not be to long before AI writes all the code for me."
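The bitvector-based NFA simulation Rui describes can be sketched as follows: each NFA state (a position in the glob pattern) maps to one bit, and consuming an input character updates all live states in parallel with word-wide operations, giving linear time in the input length. The sketch below is an illustrative reimplementation of the idea for '*' and '?' patterns with at most 63 states, not mold's actual code; the same simulation extends to matching many patterns simultaneously by packing all their states into one wide bitvector.

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Bitvector NFA simulation for a single glob pattern ('*' and '?').
// Bit i of `states` means "pattern[0..i) has been matched so far".
// Assumes pat.size() < 64 so the states fit in one machine word.
bool glob_match(const std::string& pat, const std::string& str) {
    // Epsilon closure: a live state sitting on '*' may skip it for free.
    auto closure = [&](uint64_t s) {
        for (std::size_t i = 0; i < pat.size(); i++)
            if (pat[i] == '*' && (s >> i & 1))
                s |= uint64_t(1) << (i + 1);
        return s;
    };

    uint64_t states = closure(1); // start in state 0
    for (char c : str) {
        uint64_t next = 0;
        for (std::size_t i = 0; i < pat.size(); i++) {
            if (!(states >> i & 1)) continue;
            if (pat[i] == '*')
                next |= uint64_t(1) << i;        // '*' consumes c, stays put
            else if (pat[i] == '?' || pat[i] == c)
                next |= uint64_t(1) << (i + 1);  // advance by one state
        }
        states = closure(next);
    }
    // Accept if the final state (whole pattern consumed) is live.
    return states >> pat.size() & 1;
}
```

Because every input character is processed once and each step is a few bit operations per word of states, the running time is linear in the input, with no backtracking regardless of how many '*' wildcards the pattern contains.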

More details on these Mold 2.40.1 changes via GitHub.
