
Rejuvenating Autoconf


October 23, 2020

This article was contributed by Sumana Harihareswara

GNU Autoconf, a widely used build tool that shines at compatibility with a variety of Unixes, has accumulated many improvements since its last release in 2012 — and there are patches awaiting review. While many projects have switched to other build systems, interest in Autoconf remains. Now, a small team (disclaimer: including me) is rejuvenating it, working through some deferred maintenance and code review. A testable beta is now out, a new stable release is due in early November, and interested parties can build on this momentum to further refresh the rest of the GNU Build System (also known as Autotools).

A widely used default

GNU Autoconf is a tool for producing configure scripts for building, installing and packaging software on POSIX systems. It is a core component of the GNU Build System. When a user installs a software package on the command line by compiling it from source, they are often instructed to run:

    $ ./configure; make; make install
Those steps do the following:

  • configure: test system features with attention to portability concerns, prepare and generate appropriate files (including a makefile) and directories, etc.
  • make: use the makefile as instructions to build the package, performing any necessary compilation steps
  • make install: place the built binary and any other needed files into the appropriate location

configure is a portable shell script that must run on many platforms. Writing a configure script by hand can be tedious and difficult, so Autoconf helps automate this process. A software developer writes a configure.ac file specifying system features the software will need (e.g. "is the X Window System installed, and if so, where?"). Each test for a system feature is a macro written in the GNU M4 language. Autoconf comes with many macros that developers will likely need, and a library of add-on macros ("autoconf-archive") provides dozens more.

Thus, in the base case, a programmer wanting to distribute code to be built with the GNU Build System needs to write only a bit of M4 in configure.ac, and would likely only need to use one or two additional macros from autoconf-archive. They do need to learn more M4 if they need configure to detect a system feature for which there is not an existing macro.
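
As a rough illustration, here is a minimal configure.ac sketch (the package name and the particular checks are hypothetical, not drawn from any real project). Running autoconf over it produces the configure script; the Makefile.in that AC_CONFIG_FILES fills in would be written by hand or generated by Automake:

    # configure.ac: hypothetical minimal example
    AC_INIT([hello], [1.0], [bug-hello@example.org])
    AC_PROG_CC
    AC_CHECK_HEADERS([unistd.h])
    AC_PATH_X                      # "is the X Window System installed, and if so, where?"
    AC_CONFIG_FILES([Makefile])
    AC_OUTPUT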

Autoconf has built-in support for various compiled languages: C, C++, Objective C, Objective C++, Fortran, Erlang, and Go. More crucially, it performs feature detection with knowledge of a wide variety of POSIX platforms. If you are building new software that has few arcane dependencies and your users are all on modern Linuxes plus FreeBSD, or if you want to make Ninja build files, perhaps you'd be better served using alternatives such as CMake, SCons, or Meson — and indeed many projects have switched away from the GNU Build System over the years, including GTK+ and KDE. Complaints that the GNU Build System is slow, complex, and hard to use have been aired (including in LWN's comment threads) for years. However, if your customers need to be able to build a shared library on Solaris, AIX, HP-UX, IRIX, and all the BSDs, then Autoconf will come in handy.

From 1991 to the present

Autoconf's founding in 1991 and its early history are chronicled in its manual and in the book The GNU Project Build Tools. Its function in the 1990s and early 2000s was to smooth over differences among the proliferating Unix variants. Autoconf's last big change was the version jump from 2.13 to 2.50 in 2001, which broke many existing configure scripts and required several follow-up point releases. Version 2.50 extensively overhauled several components, including autoupdate, and changed cross-compilation defaults; it was such a disruptive release that some users are still using 2.13 so as not to have to port their old scripts.

However, in recent years, Autoconf's star has faded. Linux's ascendance has made it easier for developers to get away with ignoring portability among Unixes — and the GNU Build System's balky Windows integration doesn't help those who need to deliver software to all three major desktop operating systems. But older, more complex projects include legacy code that already depends on Autoconf; converting it would be risky and expensive. In addition, competing build systems don't cover all of the edge cases that Autoconf does.

The rise of languages that use their own package management (such as Python, Perl, Go, and Rust) means that developers writing single-language code bases can avoid system-level build tools entirely. On the other hand, if you're writing software that combines C++, Fortran, Python, Perl, and Erlang, the GNU Build System can make those multiple languages play well together. It is more language-independent than, say, setup.py, and you can use the built-in macros plus the autoconf-archive macros to say: "I need to be able to use the 2011 dialect of C++, and I need this particular Python module installed".
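
As a sketch of how that kind of requirement might be spelled, assuming the autoconf-archive macros AX_CXX_COMPILE_STDCXX_11 and AX_PYTHON_MODULE plus Automake's AM_PATH_PYTHON (the module name below is hypothetical), a configure.ac fragment could read:

    # hypothetical fragment mixing built-in, Automake, and autoconf-archive macros
    AC_PROG_CXX
    AX_CXX_COMPILE_STDCXX_11([noext], [mandatory])   # require the 2011 dialect of C++
    AM_PATH_PYTHON([3.6])                            # find a suitable Python interpreter
    AX_PYTHON_MODULE([yaml], [fatal])                # fail configure if the module is missing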

Users of the GNU Build System need stability, multi-language compilation, and cross-language compatibility, so the incremental improvements and bug fixes in post-2.50 versions of Autoconf have supported those goals. Autoconf's users have lived with version 2.69 since 2012; there have been no stable releases since then. However, development has not stopped; commits to the Git repository have continued. Users also submit patches using the autoconf-patches mailing list and Savannah; by our estimation, as of mid-2020, there were hundreds of these patches awaiting review. (There are fewer now, but we'll get to that.) Maintainer Eric Blake has been aiming to make a release but hasn't had the time; as he said in 2016: "The biggest time sink is digging through the mail archives to ensure that all posted patches that are still relevant have been applied".

Fresh momentum and work in progress

My involvement in Autoconf started when Keith Bostic emailed the autoconf mailing list in January, asking: "is there someone we could pay to do a new release of the autoconf toolset?" Zack Weinberg, an Autoconf contributor, forwarded the note to me.

Bostic was interested in Autoconf's future because one of his projects used it. He funded Weinberg and me to assess the work remaining; as we did that, we talked with Autoconf's maintainers (including Blake and Paul Eggert) and they agreed that we could do further release work. Then, starting a few months ago, Bostic — along with Bloomberg and the GNU Toolchain Fund of the FSF — has further funded our work so that we can work toward a 2.70 release in early November.

Weinberg released a testable beta version in July (even though this is a beta version of 2.70, the beta is labeled 2.69b) and a second beta, 2.69c, in September. We are now partway through our goals for this funded project:

  • Along with other users, we've started testing the upcoming release against real Autoconf scripts for complex projects, but haven't yet put it through its paces with Emacs, GCC, and CPython.
  • Since Autoconf has no continuous integration (CI) at present, we're going to set up a proper CI system to find regressions, at least on Linux, probably at sourcehut.
  • We've gotten a fraction of the hundreds of disorganized patches and bug reports properly filed, so that Autoconf contributors can prioritize and assess the backlog; unfortunately, we don't have enough time to organize even half of them.
  • We've reviewed several high-priority patches that downstream redistributors (such as Arch Linux and the Yocto Project) already carry and merged them into the mainline repository.
  • We've started working with existing maintainers, contributors, and users to get the project on a more sustainable path.

These activities, fortunately, have gathered more momentum with testing and review help from existing maintainers and contributors, plus new volunteers. And the new scrutiny and testing have also led to fixes in related tools, such as the portability library Gnulib.

Speedups, bugfixes, and stricter parsing

The 2.70 release notes/NEWS file, which is in progress at the time of this writing, discusses speedups, several small new features, and many bug fixes that will be in the new release. The bug fixes alone are an appealing reason to upgrade. For instance, configure scripts generated by the new Autoconf will work correctly with the current generation of C and C++ compilers, and their contents no longer depend on anything outside the source tree (this is important for build reproducibility).

2.70 does, unfortunately, include a few places where backward compatibility with 2.69 could not be preserved. In particular, Autoconf is now more strict about M4 quotation syntax (a perennial headache for Autoconf users) and some macros do not perform as many checks as they used to, which speeds up the configuration process but can break configure scripts that assumed that some shell variable was set without actually calling the macro that sets it. In addition, more configure scripts now require the helper scripts config.sub, config.guess, and install-sh. (See the release notes for the complete list.)
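
The quotation issue usually comes down to bracket-quoting macro arguments. As a generic illustration (not an example taken from the release notes), the second form below is the style the Autoconf manual has long recommended, and the one least likely to trip over stricter parsing:

    AC_CHECK_FUNCS(strdup strndup)     # unquoted: safe only while no argument is also a macro name
    AC_CHECK_FUNCS([strdup strndup])   # bracket-quoted: the recommended style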

Maintainers of complex Autoconf scripts will find it well worth their time to test the beta releases and report any problems encountered to the Autoconf mailing list.

Beyond this release: future resilience

In October and early November, Weinberg and I will likely use up the last of the funding we received. We intend to solicit more funding and to get more corporate contributors to commit to helping with testing, code review, and so on. After all, a big open question is: who will commit to serving as release manager for Autoconf 2.71? It might make sense to schedule that release for around 12-18 months from now. After Autoconf 2.50, a steady stream of people reported problems that contributors fixed in the next several releases. If we have someone motivated to triage bugs and prioritize and review patches, it may make sense to do that again, especially since, after 2.70, there will almost certainly be new bug reports, including for bugs introduced by the release but not found during beta testing.

There's also an open question as to who will work to organize the multiple backlogs of patches and bug reports, so that maintainers can properly assess, prioritize, and delegate work. Even once we finish the work that we've already received funds to perform, there will still remain scores of patches languishing in the various mailing lists and/or in patch sets currently carried by distributions (such as OpenEmbedded and the BSDs) but not yet merged back into the mainline. Getting all of those into Savannah, or the new GNU forge when that shows up, would help contributors, as would proper CI on multiple operating systems and environments. Gathering all of the submitted patches into one forge will also help downstream distributors cherry-pick specific fixes to carry in between Autoconf releases.

Autoconf has only been able to revive itself because of the funding from our sponsors. Conversations in the coming months will reveal whether and to what extent they and other enterprise users want to invest to keep Autoconf on a stable footing. This is certainly not the only piece of old software that free software depends on as infrastructure and that has significant deferred maintenance that needs doing; there are closely related projects that could also stand to be revitalized. Automake is one example; Libtool could be deprecated, and have its features refactored into the faster and more integrated functionality in Automake.

Regardless, in this case, it has been gratifying to help break a bottleneck so that users of a widely used, even crucial part of the open-source ecology can benefit from eight years' worth of improvements — and get Autoconf in better shape to make future release cycles better too.

[I would like to thank Zack Weinberg for reviewing this article.]



Rejuvenating Autoconf

Posted Oct 23, 2020 16:36 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

These days when I have to run "./configure; make", I feel like Igor in Frankenstein's laboratory. The only missing piece is the "It's alive! It's alive!" announcement. Can you add this feature, please?

See: https://www.youtube.com/watch?v=1qNeGSJaQ9Q

Rejuvenating Autoconf

Posted Oct 24, 2020 2:56 UTC (Sat) by felixfix (subscriber, #242) [Link]

Oh dear. That initial command line should be

./configure && make && make install

Rejuvenating Autoconf

Posted Oct 24, 2020 11:15 UTC (Sat) by rzaa (guest, #130641) [Link]

I preferred this method

./configure --prefix=$HOME/.local/stow/progname_progversion && make && make install
cd $HOME/.local/stow/
stow progname_progversion

P.S.
https://www.gnu.org/software/stow/
:]

Rejuvenating Autoconf

Posted Oct 24, 2020 14:30 UTC (Sat) by felixfix (subscriber, #242) [Link]

There is sometimes a "make test" in there somewhere too.

I remember well the first time I installed anything with autoconf. It was like magic.

Rejuvenating Autoconf

Posted Oct 25, 2020 10:07 UTC (Sun) by nix (subscriber, #2304) [Link]

That should of course be

./configure --prefix=$HOME/.local && make && \
make install DESTDIR=$HOME/.local/stow/progname_progversion
cd $HOME/.local/stow/
stow progname_progversion

(or make install prefix=... with a sufficiently old package)

The configure-time prefix is the prefix the package *runs* from, and only provides a default for the prefix it is installed to. Stowed packages do not run from their installation location: that's the whole point of stow...
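
(A rough sketch of the distinction, with made-up paths: --prefix bakes in the location the package will run from, while DESTDIR only redirects where files get staged at install time.)

./configure --prefix=/usr/local
make
make install DESTDIR=/tmp/stage    # files land under /tmp/stage/usr/local/...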

Rejuvenating Autoconf

Posted Oct 25, 2020 19:38 UTC (Sun) by rzaa (guest, #130641) [Link]

Yeah, you are right.
I realized that I hadn't been using these prefixes correctly.

Rejuvenating Autoconf

Posted Oct 26, 2020 15:27 UTC (Mon) by nix (subscriber, #2304) [Link]

To be fair, many packages don't care: but as soon as you have one that needs to access stuff provided by other packages, you'll notice.

Rejuvenating Autoconf

Posted Oct 26, 2020 7:17 UTC (Mon) by wtarreau (subscriber, #51152) [Link]

> Oh dear. That initial command line should be
>
> ./configure && make && make install

No, the usual way to use it, sadly, is still:

./configure || vi configure

This is why this tool is wrong by design.

Rejuvenating Autoconf

Posted Oct 27, 2020 10:42 UTC (Tue) by hholzgra (subscriber, #11737) [Link]

> ./configure && make && make install

The true power, on the developer side, only shows with

make distcheck

though.

Lack of this kind of end-to-end test is one of my main pain points with CMake, where it is all too easy to get things like out-of-source builds wrong.

(and don't get me started on CMake and "make uninstall" ...)

Rejuvenating Autoconf

Posted Oct 27, 2020 11:37 UTC (Tue) by mathstuf (subscriber, #69389) [Link]

> where it is all too easy to get things like out-of-source builds wrong.

Alas, yes, it is. The thing is that it's tribal knowledge and due to the way things interact, hard to document effectively (it comes back to our lack of guide-level documentation). One also has issues with projects being embeddable into others (alas, also a more common occurrence today) with assumptions about what the top-level project means and such.

> make uninstall

This is because CMake allows for arbitrary code during installation (`install(<CODE|SCRIPT>)`) which can easily escape `install_manifest.txt`'s view of things. I wish the "everything is via `install(<FILES|DIRECTORIES|TARGETS|EXPORT>)`" flag to indicate that `install_manifest.txt` is at least potentially trustworthy were easier to toggle though.

Rejuvenating Autoconf

Posted Oct 27, 2020 18:15 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link]

> Lack of this kind of end-to-end test is one of my main pain points with CMake, where it is all too easy to get things like out-of-source builds wrong.
Uh... I'm pretty sure CMake always runs with out-of-tree builds. Can you even make it in-tree?

Rejuvenating Autoconf

Posted Oct 27, 2020 18:27 UTC (Tue) by madscientist (subscriber, #16861) [Link]

Sure. Works fine.

You can't have both in-tree and out-of-tree builds at the same time and you can't switch from in-tree to out-of-tree without a full clean first.

Rejuvenating Autoconf

Posted Oct 27, 2020 18:44 UTC (Tue) by hholzgra (subscriber, #11737) [Link]

You can, even though it doesn't like it and will complain a little bit as far as I remember.

The problem is that you can have build rules that touch files in the source dir instead of build dir.

The autotools distcheck target catches this by doing:

* creating a dist tarball
* unpacking that into a subdirectory
* changing that directory and its contents to read-only
* creating a build directory
* doing an out of source build
* AFAIR: doing "make test", too

This will, among other things, catch any file created or touched in the source dir instead of the build dir.

With CMake, at least the last time I dealt with it, this needed to be tested manually ...

(... and I remember several cases over the years where a build rule touching source dir files slipped into MySQL / MariaDB and stayed undetected for extended periods of time)

Rejuvenating Autoconf

Posted Oct 27, 2020 18:55 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link]

This sounds like it's pretty easy to do with CMake, though. Maybe replace dist tarballs with checking hashes of directories instead.

Rejuvenating Autoconf

Posted Oct 28, 2020 16:07 UTC (Wed) by mathstuf (subscriber, #69389) [Link]

I've certainly had to fix out-of-source builds with autotools myself (as it is something I tend to Just Do when building software these days; my history is riddled with `$OLDPWD/configure` calls). Autotools has the mechanisms to support out-of-source builds, but they're certainly not used all the time.

The steps that are done could be done with CMake as well, though not many projects use CMake to make their source tarballs (`git archive` is usually enough; I think autotools gained that mechanism for itself because it effectively patches the source for distribution in practice). I'd find it useful if the normal CMake (or any build tool!) procedures in distro builds would set the source directory read-only, and if bugs started getting filed with projects about not supporting out-of-source builds.

Rejuvenating Autoconf

Posted Oct 28, 2020 19:25 UTC (Wed) by hholzgra (subscriber, #11737) [Link]

> Autotools has the mechanisms to support out-of-source builds, but they're certainly not used all the time.

yes, it can do it, but it is not advertised much, unlike CMake where it's the recommended method

> I think autotools gained that mechanism for itself because it effectively patches the source for distribution in practice

unlike "git archive" etc. you can control which files to exclude from being distributed and other things

that's also one of the things the distcheck target checks for: does the build from the dist tarball fail due to any file missing from the dist tarball?

it also creates tarballs with proper version number included in the tarball filename, can check that the ChangeLog file really has an entry for that version, etc. ...

As basically most of this is in the Makefile template anyway it shouldn't be much of a problem for CMake to provide the same for the make backend, but somehow this never seems to have happened.

Rejuvenating Autoconf

Posted Oct 28, 2020 21:49 UTC (Wed) by mathstuf (subscriber, #69389) [Link]

> unlike "git archive" etc. you can control which files to exclude from being distributed and other things

The `export-ignore` attribute exists for this. `git archive` does suck at submodules, but just about everything does in practice anyways. `git archive-all` exists for those cases and handles a bad situation well enough (it is pure porcelain and is lacking plumbing for use in alternative automated tasks where it would be useful).

> it also creates tarballs with proper version number included in the tarball filename, can check that the ChangeLog file really has an entry for that version, etc. ...

Changelog strategies vary wildly from project to project. Some don't have them, some autogenerate them from commit logs (ugh), others are overly verbose to the point of uselessness (IMO, GCC is one of these).

> As basically most of this is in the Makefile template anyway it shouldn't be much of a problem for CMake to provide the same for the make backend, but somehow this never seems to have happened.

File an issue (or ping one that already exists). There's enough work that we don't actively seek out features to implement; it is mostly demand-driven. However, note that the Makefiles generator targets POSIX make, not GNU Make or any other specific flavor. Single-generator features also have a high bar for implementation (generally they need to be at least extremely difficult or specifically targeting that backend). I don't think making any "source tarball creation" built-in target would get over either bar, so it'd need to be available for any generator backend. But I think it is more CPack's wheelhouse anyways.

Rejuvenating Autoconf

Posted Oct 24, 2020 4:00 UTC (Sat) by marcH (subscriber, #57642) [Link]

> Complaints that the GNU Build System is slow, complex, and hard to use have been aired (including in LWN's comment threads) for years.

It's very simple really: "All problems in computer science can be solved by another level of indirection, except for the problem of too many layers of indirection." Now count the number of layers of indirection in this graph: https://en.wikipedia.org/wiki/GNU_Autotools#Components

Game over.

Debugging code generated by generated code is the worst of all nightmares. I know because I wasted days trying. I feel a strange mix of admiration and dread for the people who succeeded and sent autoconf patches.

Thank God the two problems solved by autoconf are disappearing:

> Linux's ascendance has made it easier for developers to get away with ignoring portability among Unixes
> ...
> The rise of languages that use their own package management...

With hindsight I think it's become clear that designing a language _without_ designing a companion build system was a mistake.

Having totally distinct tools for compiling and linking was another mistake, maybe it was an optimization to cope with the lack of memory at the time? I doubt it was a conscious attempt to make different compilers/languages/versions compatible with each other because in practice it fails more often than not, sometimes changing a single compiler flag is enough to break compatibility.

Rejuvenating Autoconf

Posted Oct 24, 2020 6:02 UTC (Sat) by marcthe12 (subscriber, #135461) [Link]

> With hindsight I think it's become clear that designing a language _without_ designing a companion build system was a mistake.
C and shell did have a companion build system, Make. It just aged badly. Make does not scale well and had portability issues, so people started creating scripts to template the makefiles; that became the configure script, which grew into autotools. CMake is also a Makefile generator. What most other languages did was use their own language as the build system, so the portability issues just vanish.

I think autotools needs to drop support for all non-POSIX OSes except Windows. That could simplify things, as autotools could remove the hacks and checks needed to support some random Unix from the '80s, making the generated scripts smaller.

Rejuvenating Autoconf

Posted Oct 24, 2020 12:55 UTC (Sat) by smurf (subscriber, #17840) [Link]

"Make" scales incredibly well – if you do it right. Just look at the Linux kernel's build system.

Rejuvenating Autoconf

Posted Oct 24, 2020 15:26 UTC (Sat) by khim (subscriber, #9252) [Link]

If you look at the Linux kernel's build system, then you will arrive at the correct conclusion: make doesn't scale at all.

Can you build Linux on Windows using a native compiler? No.

Can you build the kernel and include a module developed by someone else, who doesn't even care that the Linux kernel exists? Nope.

And so on.

Make solves one problem and does it acceptably well: build the code for something where all sources are available and all code lives in a single repo and you control everything about your config.

That's how Unix (and its descendants) are structured, and thus it's not surprising that Make alone works for them.

If you try to split your software into independent packages… Make stops working. You need something on top of make or you need something to replace make.

Rejuvenating Autoconf

Posted Oct 24, 2020 16:43 UTC (Sat) by pizza (subscriber, #46) [Link]

> Can you build Linux on Windows using a native compiler? No.

Why not? Linux is cross-compiled all the time; it's not "make" that's a problem here. Unless by "native" you mean a compiler that's not GCC, in which case that's because Linux explicitly requires GCC, not a five-year-old Visual Studio compiler or whatever.

> Can you build the kernel and include a module developed by someone else, who doesn't even care that the Linux kernel exists? Nope.

Why would "someone who doesn't even care Linux exists" develop a Linux kernel module? And how are their sins Make's fault?

> Make solves one problem and does it acceptably well: build the code for something where all sources are available and all code lives in a single repo and you control everything about your config.

So... how exactly are you supposed to build/compile/link/whatever something when you don't have everything you need to create the final output?

> If you try to split your software into independent packages… Make stops working. You need something on top of make or you need something to replace make.

At the end of the day, make is just a dependency resolver. It can only do what it's told. If you rip out a pile of your android app into a separate module, are you seriously trying to tell us that maven/gradle/whatever will just magically do the right thing without altering the build scripts in some way?

FFS.

Rejuvenating Autoconf

Posted Oct 25, 2020 9:35 UTC (Sun) by smurf (subscriber, #17840) [Link]

> Can you build Linux on Windows using a native compiler? No.

In what way, pray tell, is that the "fault" of 'make'??

> Can you build the kernel and include a module developed by someone else, who doesn't even care that the Linux kernel exists? Nope.

Can you put a car engine into a train? Nope. That's hardly the fault of the program used to design the engine.

Nice try, but FAIL: next time please post real arguments instead of strawmen.

A possible successor to make

Posted Oct 25, 2020 11:07 UTC (Sun) by jnxx (guest, #142729) [Link]

A solution that is as powerful as make, is able to build from independent modular packages, and can build correctly and reliably in parallel is apenwarr's redo:

https://github.com/apenwarr/redo

https://redo.readthedocs.io/en/latest/

Apart from that, it is extremely simple.

It is, however, not a solution that will make Linux/POSIX software automagically run on Windows.

What I think is already a very, very good solution for defining dependencies between packages, and building them from source, is GNU Guix. It is really well done.

https://guix.gnu.org/

Again, Guix is not the solution to provide commercial software vendors a kind of app store to distribute their binary closed-source licensed stuff. However, it works breathtakingly well for defining environments and developing in Linux environments.

I think it is probably often a solution where otherwise one would need to set up a whole bunch of virtual machines.

Rejuvenating Autoconf

Posted Oct 24, 2020 19:05 UTC (Sat) by marcH (subscriber, #57642) [Link]

> "Make" scales incredibly well – if you do it right. Just look at the Linux kernel's build system.

The performance, correctness and features of kbuild are mind-blowing considering it relies only on GNU Make. I'm afraid its re-usability outside the kernel has been exactly zero.

Rejuvenating Autoconf

Posted Oct 24, 2020 19:55 UTC (Sat) by pbonzini (subscriber, #60935) [Link]

Yeah, indeed. QEMU has used something vaguely inspired by kbuild (but homegrown) for several years. It worked very well for the easy cases, but it was incredibly hard to extend and debug once you stepped outside the beaten path, and very soon we ended up with different ways to do the same thing depending on which part of the project you were modifying.

In the end I decided I'd had enough and enlisted some help so that we could switch to Meson. It has its own set of idio... syncrasies, but it has an active community, so GLib, GStreamer, QEMU, and everyone else can share the resources for development and design, and everybody is willing to learn from others' use cases. And this is actually the very same thing that drew me to Autotools back in the 2000s. The only issue with Autotools is the dated design, but it remains in my opinion one of the most misjudged pieces of software ever. My point is that no matter if you're using Autotools or Meson or CMake, the moral is the same: your project can use your time for more interesting things than writing Makefiles.

Rejuvenating Autoconf

Posted Oct 24, 2020 20:36 UTC (Sat) by marcH (subscriber, #57642) [Link]

> My point is that no matter if you're using Autotools or Meson or CMake, the moral is the same: your project can use your time for more interesting things than writing Makefiles.

Makefiles and build systems in general are misjudged because they funnily gather in a single place the best and the worst of software.

The worst: obscure syntax, workarounds and other quirks, incompatibilities between toolchains and operating systems, too many layers of indirections and lack of introspection, concurrency hard to understand and debug,...

The best: when done right, a clear graph of dependencies providing a unique, high-level perspective on the entire project and optimal, split-second build times.

BTW there is a performance threshold after which improvements stop being just optimizations and become features. For instance, IDEs for Java (and many others) provide 100% reliable and real-time introspection/completion/warnings/suggestions/autocorrect because they continuously compile in real-time. Good luck achieving something as good with C/C++ houses of cards made of autotools, pre-processors, slow build times and a linker and compiler that don't have a clue about each other.

I've never used it but I suspect that's why so many people like C++ in Visual Studio. Because it didn't mind getting rid of all this sort of legacy garbage?

Rejuvenating Autoconf

Posted Oct 24, 2020 21:08 UTC (Sat) by pbonzini (subscriber, #60935) [Link]

I reached the conclusion that people think of their build systems as "just a bunch of glue", except at some point it becomes more like quicksand. It is the last place where you think of "design" before doing something, the last place where you think about whether you *should* be doing something before thinking about whether you could. (And while Autotools got a lot of the first part right, especially by Autoconf 2.54, they got the second part very wrong---see Libtool).

You don't notice limitations and technical debt until it's too late, and by that time the sunk cost makes it really hard to justify the change. Again drawing from my recent experience with QEMU, I am talking about 1-2 months of almost full-time work for a 2M-line-of-code program. And even then it's only a partial conversion (basically to the point where it's impossible to screw up the rest of the work and it can be done leisurely).

Rejuvenating Autoconf

Posted Oct 26, 2020 7:47 UTC (Mon) by wtarreau (subscriber, #51152) [Link]

Ah, libtool, I almost forgot about this one. Usually one of the last lines of a failed config.log... and that one is too deep into the layers to even be fixable by the end user, resulting in your project being abandoned and the user going back to Google seeking an alternative to your project.

Rejuvenating Autoconf

Posted Oct 27, 2020 15:23 UTC (Tue) by MrWim (subscriber, #47432) [Link]

VSCode at least uses compile_commands.json so the build-system can communicate the compilation options, etc. to the IDE. Meson and CMake support generating this format.

There's also Bear which uses LD_PRELOAD to generate the same. I guess that would work for autotools.

I've been mulling over creating a project that takes a compile_commands.json as input and generates a meson.build from it. The idea would be to make migrations to meson at least partially automated. It might even be possible to generate multiple different compile_commands.json with build options and generate a meson.build with those same options.
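
(A rough sketch of that workflow on an autotools tree, assuming Bear 3.x's command-line form; Bear 2.x used "bear make" instead:)

./configure
bear -- make     # records each compiler invocation into compile_commands.json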

Rejuvenating Autoconf

Posted Dec 9, 2020 9:31 UTC (Wed) by sandsmark (guest, #62172) [Link]

> I've been mulling over creating a project that takes a compile_commands.json as input and generates a meson.build from it. The idea would be to make migrations to meson at least partially automated. It might even be possible to generate multiple different compile_commands.json with build options and generate a meson.build with those same options.

In my experience, tools like that tend to lead to more work in the end, with having to clean up everything. It's usually better to at least start those kinds of transformations from the simplest representation (i.e. so you don't have to manually pattern-match a bunch of compile options, etc.), and even then I end up realizing I should have just started from scratch in the end.

Even just generating a list of source files to build would lose any kind of grouping or similar (and miss files that are built conditionally).

And FWIW, because KDE has replaced the build system a couple of times there are some scripts that are/were used to create a starting point for porting from e. g. automake to cmake, here's one of them (not sure if it is useful for doing the same conversion to meson for non-KDE projects though): https://invent.kde.org/sdk/kde-dev-scripts/-/blob/master/...

Rejuvenating Autoconf

Posted Oct 26, 2020 8:00 UTC (Mon) by kugel (subscriber, #70540) [Link]

I agree with that and can attest that *GNU* make scales very well. GNU make has several features not found in other makes that are required for scalability (pattern rules, macros, second expansion).

I wrote a build system for user space that was heavily inspired by kbuild and is also based purely on make. It consists of two makefile fragments which you include in your top-level Makefile. Then you describe the build in subdir.mk files down the file hierarchy. Each subdir.mk file can specify binaries, libraries, tests, and data files, and libraries can be referenced from other subdir.mk files with fully correct dependency resolution. The best thing about it: everything resolves within the top-level Makefile (via includes), so there is no recursive invocation and the build is fully parallelized without sacrificing dependency tracking.

If you're interested: https://github.com/kugel-/kmake

We're using it at my work for some very large projects, one of them has 190 subdir.mk files and produces about 200 artifacts (programs, libraries, ...)

It's also extensible via "generators" if you happen to generate C code prior to compilation.

Rejuvenating Autoconf

Posted Oct 26, 2020 8:06 UTC (Mon) by kugel (subscriber, #70540) [Link]

Example of a subdir.mk:

# this reads subdir.mk recursively from child directories
subdir-y          := y/ z/

CFLAGS-y          := -O1

bin-y             := prog
libs-y            := liba.a

prog-y            := prog.c
prog-y            += liba.a

liba.a-y          := a.c a.h
liba.a-CFLAGS-y   := -O2

Rejuvenating Autoconf

Posted Oct 24, 2020 20:08 UTC (Sat) by Wol (subscriber, #4433) [Link]

Have you ever tried to modify the kernel's build system? Have you even just suggested they modify it?

I've been on the receiving end of the screams of horror!

Cheers,
Wol

Rejuvenating Autoconf

Posted Oct 24, 2020 22:00 UTC (Sat) by sam.ravnborg (guest, #183) [Link]

> Have you ever tried to modify the kernel's build system?
Yeah, a few times or maybe a bit more if I recall correctly :-)

Rejuvenating Autoconf

Posted Oct 24, 2020 23:18 UTC (Sat) by corbet (editor, #1) [Link]

You do indeed recall correctly - some of us still remember! :)

Kernel config modification rejection

Posted Oct 25, 2020 15:26 UTC (Sun) by stephen.pollei (subscriber, #125364) [Link]

" ...suggested they modify it?" Why does that remind me of ESR? CML2: Constraint-based configuration for the Linux kernel . Maybe that was just config and not the build system.

Rejuvenating Autoconf

Posted Oct 25, 2020 11:16 UTC (Sun) by geert (subscriber, #98403) [Link]

Still using recursive make invocations, right?

Rejuvenating Autoconf

Posted Oct 25, 2020 12:46 UTC (Sun) by pizza (subscriber, #46) [Link]

Kbuild hasn't used recursive make since the early 2.5 days -- nearly two decades ago.

Rejuvenating Autoconf

Posted Oct 25, 2020 22:01 UTC (Sun) by Villemoes (subscriber, #91911) [Link]

Eh, the Makefile at the top of the kernel tree would seem to disagree:

# We are using a recursive build, so we need to do a little thinking
# to get the ordering right.

Also, scripts/Makefile* are full of $(MAKE) invocations.

Rejuvenating Autoconf

Posted Oct 26, 2020 7:45 UTC (Mon) by kugel (subscriber, #70540) [Link]

kbuild uses recursive make extensively. It only hides that fact (--no-print-directory), and it builds subdirectories in parallel, so it's not a performance nightmare. It can only do that because it does not build libraries that are used elsewhere. Inter-dependencies between modules are handled in a post build step (depmod).

Rejuvenating Autoconf

Posted Oct 24, 2020 16:01 UTC (Sat) by opsec (subscriber, #119360) [Link]

"Make" scales incredibly well – if you do it right. Just look at the FreeBSD ports system 8-)

Rejuvenating Autoconf

Posted Oct 24, 2020 8:47 UTC (Sat) by geoffbeier (guest, #123670) [Link]

I agree with your comment about a language without a build system.

> Having totally distinct tools for compiling and linking was another mistake

That part is not, IMO, quite as much of a slam dunk. I think that if every compiled language separated compilation from linking **AND** needed to use the same linker, it'd be a net good for the universe. Language interoperability would be simpler and a solid FFI would be the default.

Rejuvenating Autoconf

Posted Oct 24, 2020 15:30 UTC (Sat) by khim (subscriber, #9252) [Link]

For that you need to abandon GC-based languages first. Because a typical GC assumes it has full control over the memory layout of the process — when two, or God forbid, three GCs are injected into one process (think of a Go module in a Unity process on an Android phone), you have so many issues that the lack of a single, default linker is the least of your woes.

It's possible to do, as Rust shows, but it is not even close to mainstream, and it's not even clear how feasible that idea is.

Only time will tell whether the GC craze of the last ~25 years was a huge mistake that will eventually be fixed, or whether it was actually a good thing to do.

Rejuvenating Autoconf

Posted Oct 25, 2020 8:05 UTC (Sun) by ncm (guest, #165) [Link]

Another strike against GC.

Rejuvenating Autoconf

Posted Oct 25, 2020 10:15 UTC (Sun) by nix (subscriber, #2304) [Link]

The problem here is less GC than anything with a runtime, since a runtime of necessity hangs around at runtime and doesn't know about anything but its own language -- and of course, neither of you is actually proposing discarding all language runtimes entirely: you're only proposing to get by with glibc. Now maybe glibc is so good that it provides all the runtime support for shared things (like the shared malloc arena) that everyone would ever need... but that seems like a big thing to just *assume*.

(And yes, you can interpose and replace malloc, but you can't replace its API contract, which among other things specifies that memory blocks can never move. Yes, C needs this, but it's demonstrably the case that not all languages do. You can of course have their allocator get big lumps from malloc and then do GC or whatever in the interior: what do you think the JRE does? Using a normal linker doesn't preclude that at all!)

Rejuvenating Autoconf

Posted Oct 26, 2020 13:58 UTC (Mon) by khim (subscriber, #9252) [Link]

> The problem here is less GC than anything with a runtime

Nope. Multiple runtimes are not a big deal. Yes, the POSIX world usually used the “there is only one libc!” approach, but the Windows world didn't.

Usually each DLL comes with its own runtime (even if they are all just different versions of the MSVC runtime, they are entirely independent and may support different languages easily).

This works fine — as long as there are no GC in the mix.

Heck, even in the POSIX world you may easily link libstdc++-based code and libc++-based code in one process. These are different runtimes, you know.

> And yes, you can interpose and replace malloc, but you can't replace its API contract, which among other things specifies that memory blocks can never move.

You don't need to replace its contract. C can use its own thing with “frozen” objects; other languages can use “moveable” ones. It's not a big deal as long as there is no requirement to know about all pointers and there exists a way in all languages to create “frozen” objects.

That's why I don't consider ARC, std::unique_ptr, and other such constructs “a version of GC”: yes, they may automate certain things, yes, they provide safety to programs — but they are non-poisonous, they don't infect your whole codebase, they are local.

P.S. There are more types of GC than you may think. Go has goroutines — that's the concept of GC applied to threads. And it causes a similar amount of pain if you want to mix languages. Yet I'm not even sure if that's a bad thing or not: the issue comes from our attempts to create monstrous binaries with gigabytes of code coupled with GC and other GC-like concepts. But maybe it's not GC that is bad but, in reality, the attempt to build our apps as monoliths of insane size? Time will tell, I guess.

Rejuvenating Autoconf

Posted Oct 26, 2020 15:33 UTC (Mon) by nix (subscriber, #2304) [Link]

> Heck, even in the POSIX world you may easily link libstdc++-based code and libc++-based code in one process. These are different runtimes, you know.
You can only do that if you never pass things from one to the other, or if they are cooperating on their internal representations. This is exactly the same problem as the one causing trouble with GC: shared data structures manipulated by multiple things that may not know they have to cooperate with some other project with regard to their design. (If you remember the libc5 to libc6 upgrade, that had problems around wtmp which were exactly the same problem again, except this time the data structure was on disk, not in memory.)

Rejuvenating Autoconf

Posted Oct 26, 2020 22:22 UTC (Mon) by khim (subscriber, #9252) [Link]

Well… singletons of any sort are problematic, true. The issue with GC is that it's:
  1. Very much a singleton — it assumes that it has access to the full graph of objects in your program…
    and
  2. It's a central part of the language design: I don't know of any language where GC is optional (except, maybe, D — and I think that kinda-sorta-optionality caused more harm than good there)

That combo is really toxic.

Rejuvenating Autoconf

Posted Oct 27, 2020 11:21 UTC (Tue) by mathstuf (subscriber, #69389) [Link]

It doesn't have to be a singleton. Lua's GC is attached to the interpreter in use. Though it is also meant to be embedded, so it's a use case they thought of. I don't know if anyone has written a bridge between Lua's GC and any other GC (including another instance of a Lua GC) to allow storing objects of one within the other's purview at all.

Rejuvenating Autoconf

Posted Oct 26, 2020 5:31 UTC (Mon) by dvdeug (subscriber, #10998) [Link]

Compatibility doesn't come by magic. You can link most different languages together on Unix because they all use the C ABI. Anything this doesn't provide is rarely compatible across languages; you can't pass objects in most cases, and even when you can, things as basic as collections aren't compatible. Threading may be compatible where everything uses POSIX threads, but not if languages are using their own version of green threads. You can pass floating point numbers easily, but not fixed point numbers or arbitrary precision integers. You couldn't pass even floating point numbers when there were software floating point libraries implementing different FP standards.

You can use any number of JVM languages together, or any number of .NET languages together with no problem of clashing GCs. When on Android, use the local GC. If Go and Unity don't do that, that says more about them than GC.

Rejuvenating Autoconf

Posted Oct 26, 2020 14:26 UTC (Mon) by khim (subscriber, #9252) [Link]

> When on Android, use the local GC.

Care to elaborate? Sounds like “you have to become hedgehogs” advice to me. Or are you all about strategy, and the ability to actually implement your advice doesn't concern you?

> If Go and Unity don't do that, that says more about them than GC.

To some degree — yes. Unity predates Android by a couple of years. Go was presented in 2009 and developed simultaneously with Android.

And they had no crystal ball to know that they would actually need to build everything on top of the JVM because, you know, a few years down the road they would need to support Android (this was especially problematic for Unity because it targeted iOS first, Android second, for obvious reasons).

And Android doesn't actually give you access to its GC, you know. The only way to use it is to convert all your code to DEX. Which is a huge undertaking.

And it speaks volumes about GC proponents when they casually say “just throw away all your compilers, runtimes, billions of dollars invested, rewrite all that from scratch — and everything would be peachy… not a big deal”.

Sorry, but no… People wouldn't throw away tools they already have just to pursue a pipe-dream of having just one GC in the process. Not gonna happen, end of story.

> You can use any number of JVM languages together, or any number of .NET languages together with no problem of clashing GCs.

Yeah. This was the Architecture Astronauts' solution to the problems of DCOM: just throw away all the code you have developed before and write it again.

It couldn't work and it hasn't worked. It just created a huge mess and made our lives more complicated. In particular, Unity doesn't use the Android GC because it needed something to support the Xbox before Android became a thing, and the Xbox implies the CLR. So now, if you want to use it on Android you, by necessity, have two clashing GCs in your process. Precisely because the Unity developers followed your advice and used a .NET language.

Rejuvenating Autoconf

Posted Oct 27, 2020 7:17 UTC (Tue) by dvdeug (subscriber, #10998) [Link]

Sometimes you have to become hedgehogs, or you become lunch. What advice would you have given the Lisp Machine manufacturers when their sales started to dry up? If you're not willing to adapt your code to the best-selling platform of all time, then don't be surprised if people are willing to target that platform and take your sales.

In reality, there's no free lunch in portability. You can compile simple enough Unix C code on VAX, but it's not going to have proper VMS command-line support, nor is it going to support VMS filename versioning in any but the most trivial way. If you're going to run well on an OS, you're going to need to adapt to that OS.

> And it speaks volumes about GC-proponents when they casually say “just throw away all your compilers, runtime, billions of dollars invested, rewrite all that from scratch — and everything would be peachy… not a big deal”.

Which is not what I said. The GC should be easily separable from the rest of the runtime and replaceable with the native GC. On the other hand, sometimes you can get away with a half-assed port to a new system, but for best results, you should use tools that are specialized to Android. Not all the world is a Linux.

Your suggestion that garbage-collected languages will just go away could be taken the same way, as proposing throwing away the billions of dollars of JVM/CLR code written.

> And Android doesn't actually give you access to it's GC, you know.

Then how does that factor into Unity running on Android at all? Then you've just got a Go module for Unity, which just seems like a poor choice. Again, CLR offers many languages that work natively with Unity, and the author of the Go module knew at the start that Go would be a problem.

Rejuvenating Autoconf

Posted Oct 27, 2020 13:22 UTC (Tue) by foom (subscriber, #14868) [Link]

> The GC should be easily separable from the rest of the runtime and replaceable with the native GC.

That's generally impossible. The GC is usually so intimately tied to various other low-level language features, that using another GC likely means ripping out a large part of your runtime too, and replacing it with a wrapper around the foreign runtime -- possibly losing a lot of efficiency in the process, as well as potentially breaking your language semantics.

E.g. in Java, the Object.hashCode method is tied to the GC, because the default implementation returns a value derived from object identity, and must be consistent -- even in the face of objects being moved in memory. That's a weird special case, isn't it? Other language GC implementations wouldn't be likely to cater to this, if you wanted to run java on another language runtime.

And how does your runtime lay out objects in memory, and identify the type of an object? The GC needs to know this, to know which fields are pointers. Probably your runtime doesn't do this in the same way the GC you're attempting to adopt does it.

And there can be weird special cases there, too. E.g., In lisp implementations, "cons" objects (aka a pair of values) are important and very common. Therefore, the GC usually has special support for identifying them in memory, to allow them to be only 2 words (one for each value) -- without any object header that would typically tell the GC what kind of object it's looking at.

And then, you have various edge cases in language semantics which are tied to the GC, e.g. does your language allow for callbacks when an object is collected? If the callback revives a dead object, what are the semantics -- do you call the callback again the next time the object is dead, too?

Yes, a language implementation can be ported to a foreign GC/runtime that wasn't designed to support it, but it's never easy. And often it's more like "rewrite from scratch" than "port".

Rejuvenating Autoconf

Posted Oct 28, 2020 3:55 UTC (Wed) by dvdeug (subscriber, #10998) [Link]

Yet Clojure runs on the JVM. I can think of several ways to write Object.hashCode, keeping those semantics but not depending on the GC. GC may not be trivially separable, but it is portable.

GNAT has hacks for threading on Linux, because Linux's threading semantics aren't quite consistent with the Ada standard. But we aren't talking about how threading makes interoperability impossible, and nobody gave up on Ada on Linux because of it.

Rejuvenating Autoconf

Posted Oct 28, 2020 5:36 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

> Yet Clojure runs on the JVM.
It's just another GC language without any particular complications, so duh.

> GC may not be trivially separable, but it is portable.
Nope. Nope. Nope.

Look at how modern Java GCs work. For example, the GC needs a "stop-the-world" pause to scan the GC roots, and to achieve this Java instruments all the code so that all methods have checks that boil down to something like "if (inGc) stopThread();" on back branches and before nested function calls. The GC then simply modifies the inGc variable in each thread's memory and waits for them to stop.

And we haven't yet talked about object repacking, generations and card marking (or other write barriers in general) and other niceties.

There is no way two different GCs can fully interoperate without knowing very intimate details about each other.

Rejuvenating Autoconf

Posted Oct 28, 2020 7:15 UTC (Wed) by dvdeug (subscriber, #10998) [Link]

>For example, GC needs "stop-the-world" pause to scan the GC roots and to achieve this Java instruments all the code so that all methods have checks that boil to something like "if (inGc) stopThread();" on back branches and before nested functions. The GC then simply modifies the inGc variable in each thread's memory and waits for them to stop.

This has nothing to do with Java. If you're running all-Scala code on the JVM, it will still do it. If you're compiling Ada to JVM, the JVM will do that for Ada. And Java compiled via GCJ doesn't do that. This is completely language-independent.

> There is no way two different GCs can fully interoperate without knowing very intimate details about each other.

Which seems irrelevant to the argument that there should be one and only one GC in a process.

Rejuvenating Autoconf

Posted Oct 27, 2020 21:29 UTC (Tue) by khim (subscriber, #9252) [Link]

> Then how does that factor into Unity running on Android at all?

If you make an Android app then you have to deal with Dalvik/ART and its GC in your process. Because Dalvik (or ART) starts before a single byte of your code is executed, and it uses GC to run system Java code before your app has a chance to do anything, too…

> The GC should be easily separable from the rest of the runtime and replaceable with the native GC.

But that's not how it works in the real world and you know it. The Dalvik/ART GC is tightly coupled with Dalvik/ART, and the C# GC is tightly coupled with Mono/.NET Core.

And if you are dealing with Unity on Android you need both.

Go is a bit less picky, but in practice it's simpler to use a pile of hacks to make three GCs (and each one, of course, assumes that it's the only one there) work together… for some time, anyway… than to use two GCs… let alone one.

> On the other hand, sometimes you can get away with a half-assed port to a new system, but for best results, you should use tools that are specialized to Android.

Well, the “tool that is specialized to Android” is called DX — it converts Java bytecode (and, please, don't try the latest and greatest one, that one is not supported) to run on Android. That is the only supported way to use Android's GC.

> Again, CLR offers many languages that work natively with Unity

Gr8! Now, please explain to me how I can use that “please put your .class files here… just don't use the Java8 format just yet, please…” Android GC API with it.

> And the author of the Go module knew at the start that Go would be a problem.

Have you thought about the possibility that the author of the Go module hadn't planned for it to be used in a Unity game, let alone on Android? In the exact same fashion as the Unity developers hadn't expected to deal with that “great” Android GC API?

> What advice would you have given the Lisp Machine manufacturers when their sales started to dry up?

Close the shop? Isn't that what happened in the end, anyway?

> In reality, there's no free lunch in portability.

True. And there is no free lunch in memory management either. Attempts to “cut corners” hurt in both cases.

> Your suggestion that garbage collected languages will just go away could be taken as the same, as proposing the throwing away of the billions of dollars of JVM/CLR code written.

That's not what I said at all. I said that talk about the same linker for every language is stupid as long as we are using GC-based languages. Because these languages make interoperability so hard and painful that trying to reduce said pain with the use of one common linker is akin to putting a wet cloth on an open bone fracture.

If GCs fall out of fashion, then we may think about the benefits of one common linker.

Rejuvenating Autoconf

Posted Oct 28, 2020 3:47 UTC (Wed) by dvdeug (subscriber, #10998) [Link]

You're complaining about Android as much as GC. Part of the problem, to me, seems to be that Android is biarchitecture; if you ported it to Java bytecode, you wouldn't have these problems, but you insist on using its support for ARM. Linux/PowerPC doesn't offer such choices, for example. In any case, complaining about Android's support for Java8 seems purely irrelevant.

> Because Davik (or ART) starts before a single byte of your code is executed and it uses GC to run system Java code before you App have a chance to do anything, too…

I don't see how this is different from the Linux kernel starting before a single byte of your code is executed. Yes, Android has a rather vicious OOM killer, but its GC doesn't really have much of an effect on that.

> C# GC is tightly coupled with Mono/.NET Core.

No, it's not. https://bridge.net/ is a project to translate C# to JavaScript. The very fact you're mentioning two implementations suggests that there's more.

> Have you though about the possibility that author of Go module haven't planned to use in Unity game, let alone on Android?

So you're wedging a Unix module into a .NET program hacked for Java/Android. And you're blaming GC here. Would you mind showing me how to get my Bliss module working with my Tcsh program on Windows?

> Close the shop?

So you complain about "become hedgehog" solutions, but give a "you're screwed" solution. When on Android, use the local GC. If you don't want to, close the shop.

> I have said that talks about the same linker for every language are stupid as long as we are using GC-based languages. Because these languages make interoperability so hard and painful that trying to reduce said pain with use of one common linker is akin to putting wet cloth on the open bone fracture.

You continue to make that claim despite me pointing out that objects don't interoperate, nor collections, nor just about any other higher-level feature. As I said, the solution is to push the GC down the stack; all systems linked together that want to pass pointers should be using the same GC, just like all systems linked together that want to pass floating-point numbers have to use the same FP system.

Rejuvenating Autoconf

Posted Oct 28, 2020 8:01 UTC (Wed) by smurf (subscriber, #17840) [Link]

> just like all systems linked together that want to pass floating point numbers have to use the same FP system.

Surprise: there's an FP format standard which everybody uses these days because, well, there's no good reason not to.

There's no such thing as a GC standard because there are a lot of different trade-offs to be made. Plus, not every GC supports every memory model. Refcounting or not? Incremental? Would mark+sweep work, would it kill your cache, or would the machine swap to death? You need an upper bound on GC latency? You want weak pointers? Can you use them as they are or do you need to lock them? Moveable memory? If so, do you use handles (i.e. pointer-to-pointer) or does the GC patch your pointers? How does the GC find pointers in the first place, assuming that it needs to? Do you support destructors, can they re-establish a reference, do they run again if *that* gets deleted?

How do you support destructors when the language requires them but the system GC doesn't have that concept? Answer: you run your own GC on top. No other choice, assuming that crippling the language is not an option.

> I don't see how this is different from the Linux kernel starting before a single byte of your code is executed.

Linux isn't interested in your code and which GC it uses, if any. Dalvik and ART are. A more appropriate referent would be the language's runtime library, not Linux. Except that the Linux libc runtime doesn't do any garbage collection …

Rejuvenating Autoconf

Posted Oct 28, 2020 8:34 UTC (Wed) by smurf (subscriber, #17840) [Link]

Addendum: While Linux isn't interested in your code, the reverse is not true: your code should definitely be interested in Linux, or rather in the environment it's running in. GCing on a lightly-loaded, non-memory-starved system is far easier than doing the same thing on a busy system that's swapping. Yet different trade-offs are required for a system that's tight on memory but not swapping, because it's embedded and doesn't *have* swap.

Rejuvenating Autoconf

Posted Nov 3, 2020 8:04 UTC (Tue) by massimiliano (subscriber, #3048) [Link]

Hi!
You are correct, but with a nitpick :-)

> In particular Unity doesn't use Android GC because it needed something to support XBox before Android became a thing, and XBox implies CLR.

That's not strictly true.

Unity picked Mono as runtime for technical reasons way before the Xbox360 even existed.
They had a contract so that they could use the source and embed it everywhere; it was efficient, it supported popular languages (C#), it was easy to make it support more languages (just compile them into .NET CIL), and it was very portable, so they could port their runtime (the whole Unity game engine) just about anywhere, which is what they did.

This technical choice predated everything else: in the very beginning Unity was Mac-only, even the Windows port happened later. And with hindsight, it has been a good choice, given the success they had.

About "GC in GC" nightmares, you forgot another realistic scenario: take a Unity game (Mono GC) with a Golang plugin (Go GC) and run it inside the browser using the WASM target.
Now you do not have three GCs "side by side", you have two side by side (Mono and Golang) running in a memory segment inside another GC (Javascript)!

Disclaimer: I worked on the Mono project for almost six years, and after that at Unity for one year, among other things porting the Mono build toolchain to target the Xbox360, so I know all the above first-hand.

Rejuvenating Autoconf

Posted Oct 24, 2020 9:08 UTC (Sat) by flussence (subscriber, #85566) [Link]

> With hindsight I think it's become clear that designing a language _without_ designing a companion build system was a mistake.

That's a good point, but at the same time it feels like a lot of the ones that do reinvent the wheel forget to reinvent the tyre; getting many of those languages to play nice with the rest of the system can be painful.

Rejuvenating Autoconf

Posted Oct 24, 2020 18:53 UTC (Sat) by marcH (subscriber, #57642) [Link]

> getting many of those languages to play nice with the rest of the system can be painful.

I feel like the stance of some languages like golang has been:

"ABIs are hard. All these C/C++ toolchains failed to be compatible with each other. Sometimes they even failed to be compatible with slightly different invocations of themselves. Why even try to be compatible with them?"

Please correct and/or nuance this ridiculously simplified view of mine.

Rejuvenating Autoconf

Posted Oct 25, 2020 15:17 UTC (Sun) by mathstuf (subscriber, #69389) [Link]

I heard a story about Go developers inspecting C APIs by doing `clang -Dkeyword=garbage`, compiling the source, and parsing the error messages to determine what the API surface was actually like. They then had the gall to be offended when the stdout format was changed and their scraper no longer worked. So they still care about interface stability, but their allergy to non-Go code makes them latch onto the weirdest appendages instead. It's not like things such as castxml don't exist…

Rejuvenating Autoconf

Posted Jan 12, 2021 20:13 UTC (Tue) by immibis (subscriber, #105511) [Link]

a.k.a. XKCD 927

Rejuvenating Autoconf

Posted Nov 3, 2020 17:20 UTC (Tue) by anton (subscriber, #25547) [Link]

> With hindsight I think it's become clear that designing a language _without_ designing a companion build system was a mistake.

Given that many projects use multiple languages, designing a build system for a single language is a mistake. E.g., automake is (or at least was, when we tried) useless for Gforth, because it does (did?) not offer anything for the non-C parts of Gforth, and was not worth the trouble for the C parts.

Anyway, this has nothing to do with autoconf, which is very useful for dealing with portability problems. It's great that it is being maintained.

Rejuvenating Autoconf

Posted Nov 3, 2020 20:24 UTC (Tue) by marcH (subscriber, #57642) [Link]

In an ideal world, yes, it would be great to have a single polyglot build system that handles all programming languages. Of course it would not solve all linguistic problems (see the GC discussion thread elsewhere on this page), but at least you would not have to learn a new build system for each language.

In the real world you tend to get only build systems that "speak" a very small number of languages well... if any well at all. That's why most new languages come with some specialized build system. Admittedly sad, but it lets them focus on providing more for their particular language. It also lets some of them avoid getting distracted by "translation unit" conceptual limitations from the 70s or the utterly inconsistent mess of C toolchains.

Maybe some modular/plugin-based architecture could do it, but good luck getting manpower for something like this - assuming it's even possible.

> Anyway, this has nothing to do with autoconf,

Apologies for being off-topic, I didn't realize other build systems had nothing to do with autoconf.

Rejuvenating Autoconf

Posted Nov 4, 2020 11:23 UTC (Wed) by anton (subscriber, #25547) [Link]

There is nothing language-specific about make or the basic functionality of autoconf. Autoconf comes with macros specific to shell, make, and C and C++ libraries, but you can also address other portability problems in this framework, e.g., whether m4 understands the "-s" flag, or how to skip 16 bytes of code space in assembly language (Gforth does both of these).
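For instance, a check of that sort can be written as an ordinary cached test in configure.ac. The sketch below is purely illustrative (the cache variable and output variable are made-up names, not Gforth's actual code):

    dnl Does the installed m4 accept the -s (syncline) flag?
    AC_CACHE_CHECK([whether m4 accepts -s], [my_cv_m4_s],
      [if echo '' | m4 -s >/dev/null 2>&1; then
         my_cv_m4_s=yes
       else
         my_cv_m4_s=no
       fi])
    if test "$my_cv_m4_s" = yes; then
      M4FLAGS=-s
    else
      M4FLAGS=
    fi
    AC_SUBST([M4FLAGS])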

Rejuvenating Autoconf

Posted Oct 24, 2020 5:46 UTC (Sat) by ncm (guest, #165) [Link]

There is so much that could make autotools more tolerable.

All the C library feature test dreck can be stripped out. There is an ISO Standard for C implemented everywhere, and there is no reason to use a non-standard compiler and library, or test for them.

All the Bourne shell backward-compatibility and bug avoidance hacks can be stripped out. There is a Posix Standard for shells that is implemented everywhere a shell exists at all.

We don't need to support Eunice anymore.

Rejuvenating Autoconf

Posted Oct 24, 2020 7:47 UTC (Sat) by rurban (guest, #96594) [Link]

Even the most common compiler framework violates the standard constantly. GCC, from version 9 up to now, has broken its stdlib, mostly for strings but also for restrict, so the feature tests you want to strip out are still needed. You need to test for the features you are using, or it will break. Autoconf does this, and autoconf-archive is even better.

Rejuvenating Autoconf

Posted Oct 24, 2020 13:11 UTC (Sat) by smurf (subscriber, #17840) [Link]

No you don't. The feature is either there, which means that you can use it without testing for it, or it's not, which means that your build will go splat anyway.

So maybe you want to check for features in order to be helpful to the user. Fine, but you can do that with a simple test program or two. In more complicated cases you can create one file with flags and other constants (OK, one for each language that needs to digest this) and #include lines, and another with a list of libraries to link against. Done.
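Something along these lines, say — a rough hand-rolled sketch in plain POSIX sh, with made-up file and macro names, not a drop-in replacement:

    # Hand-rolled feature check: does cos() work, and does it need -lm?
    printf '#include <math.h>\nint main(void){return cos(0.0)==1.0?0:1;}\n' > conftest.c
    if ${CC:-cc} conftest.c -lm -o conftest 2>/dev/null && ./conftest; then
        echo '#define HAVE_WORKING_COS 1' >> config.h
        echo 'LIBS = -lm'                 >> config.mk
    fi
    rm -f conftest conftest.c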

There's no longer a need for a program which uses an arcane ancient macro language to create a script which, when run, generates your Makefile from a Makefile.in that was itself generated from a Makefile.am in an earlier step, only to then use "make" to process the result. That's three levels of indirection and three sets of tools, which is at least two too many.

Once upon a time, when "make" was stupid (no conditionals, no functions …), the shell was randomly posixly incorrect, system include files had "sys/" in their path depending on OS subrelease and/or the phase of the moon, and other basic tools like "sed" couldn't be trusted with >254 character and/or non-ASCII lines … at that time the whole Autotools infrastructure may have made sense.

Today? not so much.

Rejuvenating Autoconf

Posted Oct 25, 2020 15:48 UTC (Sun) by tdz (subscriber, #58733) [Link]

> No you don't. The feature is either there, which means that you can use it without testing for it, or it's not, which means that your build will go splat anyway.

Please go and build your software with cygwin, where half of the math functions are there but broken. That might widen your perspective.

Rejuvenating Autoconf

Posted Oct 26, 2020 10:30 UTC (Mon) by intgr (subscriber, #39733) [Link]

That seems to suggest that Cygwin is broken, not the build system.

Maybe time would be better spent just fixing Cygwin, instead of subjecting all developers over the world to horribly complicated build-system-level work-arounds?

What you're describing is a manifestation of the platform problem https://lwn.net/Articles/443531/

> the platform problem comes about when developers view the platform they are developing for as fixed and immutable. These developers feel that the component they are working on specifically (a device driver, say) is the only part that they have any control over. If the kernel somehow makes their job harder, the only alternatives are to avoid or work around it. It is easy to see how such an attitude may come about, but the costs can be high.

Rejuvenating Autoconf

Posted Oct 26, 2020 10:48 UTC (Mon) by tdz (subscriber, #58733) [Link]

> Maybe time would be better spent just fixing Cygwin [...] ?

It's easier to fix the code than to roll out the fix.

Rejuvenating Autoconf

Posted Oct 26, 2020 10:48 UTC (Mon) by mathstuf (subscriber, #69389) [Link]

Yeah, but what are you going to do with a broken (say) `cos` function? Code your own? Now, that doesn't mean put in hard errors for things[1], but you could at least add an `if cygwin; warn "This platform is not tested; proceed at your own risk"` warning to let people know. Once there's a suitable Cygwin release, you can version check it and suggest an upgrade.

[1]This lets those working on the known-broken platforms and new platforms not have to masquerade as something else to get past your arbitrary checkpoints (cf. basically every compiler emulating GCC and/or Clang at some point in their lifecycle to get past silly `#error "unknown compiler"` shenanigans).
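In configure.ac terms, the warning suggested above might look roughly like this (the host-triplet check is the usual way to spot Cygwin; the exact wording is of course up to the project):

    AC_CANONICAL_HOST
    case $host_os in
      cygwin*)
        AC_MSG_WARN([this platform is not tested; proceed at your own risk])
        ;;
    esac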

Rejuvenating Autoconf

Posted Oct 26, 2020 19:28 UTC (Mon) by wahern (subscriber, #37304) [Link]

People seem to be confusing automake and autoconf in their autotools tirades. Autoconf can be used independently of automake and libtool, and this forthcoming release is just autoconf. IME, autoconf still works exceptionally well at writing feature tests. I've tried many alternatives, and even invented some of my own, but today I prefer using autoconf to generate config.h[1] and a simple make include (e.g. defaults.mk). I usually use plain POSIX make for my builds these days; GNU Make if I forget myself and decide to get unnecessarily fancy. Technically someone could build such a project of mine without invoking ./configure so long as they [re-]define'd all the necessary macros: CFLAGS, CPPFLAGS, SOFLAGS, LDLIBS, HAVE_FOO, etc when invoking make. Which, IMO, is how any [Unix] build system should work--in independent layers.

[1] Except I tweak output generation so that a macro is only define'd if not yet defined. For example, at the top of configure.ac,

# Redefine AH_TEMPLATE so that feature macros can be overridden from CPPFLAGS
m4_define([AH_TEMPLATE],
[AH_VERBATIM([$1], m4_text_wrap([$2 */], [ ], [/* ])[
@%:@ifndef ]_m4_expand([$1])[
@%:@undef ]_m4_expand([$1])[
@%:@endif])])
That makes it possible to override config.h from CPPFLAGS (or CFLAGS or w'ever). While autoconf is much better, IME, at being forward compatible than the alternatives, nothing is ever 100%. Sometimes a feature test will get things wrong, or perhaps a builder wants to manually mask some API (that invariably lacks a build switch). No matter what solution a project uses for its build, there's nothing worse than being forced to modify the build itself to quickly route around some trivial issue, regardless of whether it should or will end up as a proper patch.
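For readers who haven't used autoconf in this layered way, a minimal configure.ac along those lines might look roughly like this (project name and checked functions are purely illustrative):

    AC_INIT([myproject], [1.0])
    AC_CONFIG_HEADERS([config.h])
    AC_PROG_CC
    AC_CHECK_FUNCS([arc4random strlcpy])
    AC_CONFIG_FILES([defaults.mk])
    AC_OUTPUT

Here defaults.mk is generated from a defaults.mk.in template, so a plain makefile can simply include it for CFLAGS, LDLIBS, and friends, while config.h carries the HAVE_* macros — and, as described above, a builder can override either one without touching configure.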

Rejuvenating Autoconf

Posted Oct 24, 2020 11:10 UTC (Sat) by nilsmeyer (guest, #122604) [Link]

Isn't this work specifically to enable legacy users to continue their legacy use?

Rejuvenating Autoconf

Posted Oct 24, 2020 11:49 UTC (Sat) by willy (subscriber, #9762) [Link]

Yes, but the approach is wrong. Autotools detects which features are available on your platform. The right way is to have libiberty provide all the features needed on every platform.

Rejuvenating Autoconf

Posted Oct 25, 2020 10:18 UTC (Sun) by nix (subscriber, #2304) [Link]

I think that's more gnulib's job, these days. libiberty is literally just a shared toolchain support library. :)
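For anyone unfamiliar with it, the usual gnulib workflow is roughly the following sketch (the module name is just an example):

    $ gnulib-tool --import strndup
    # copies lib/strndup.c, m4/strndup.m4, etc. into the tree; configure.ac
    # then calls gl_EARLY/gl_INIT, and the replacement is compiled only on
    # platforms whose libc lacks a working strndup.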

Rejuvenating Autoconf

Posted Oct 25, 2020 8:10 UTC (Sun) by ncm (guest, #165) [Link]

Legacy users can use legacy autotools, legacy compilers, legacy languages, legacy OSes. None of that need bother us, in any sense. Leave them to it, and get on with things.

Rejuvenating Autoconf

Posted Oct 26, 2020 6:59 UTC (Mon) by nilsmeyer (guest, #122604) [Link]

It looks to me like this work is mostly funded to help support legacy users. I'm not really bothered by that either, if it helps make autotools better for everyone so much the better.

Rejuvenating Autoconf

Posted Oct 26, 2020 14:05 UTC (Mon) by Kamiccolo (subscriber, #95159) [Link]

Good luck dumping legacy backbone of modern computing.

Rejuvenating Autoconf

Posted Oct 24, 2020 9:04 UTC (Sat) by tdz (subscriber, #58733) [Link]

Interesting article. Thanks for doing this.

Rejuvenating Autoconf

Posted Oct 24, 2020 10:20 UTC (Sat) by atnot (subscriber, #124910) [Link]

An article on people doing the thankless task of maintaining an aging build system and the best lwn commenters can come up with is a bunch of comments dunking on the project? I know I shouldn't have expected more but cmon.

Thank you for doing this work.

Rejuvenating Autoconf

Posted Oct 24, 2020 13:10 UTC (Sat) by david.a.wheeler (guest, #72896) [Link]

I agree, thanks for this work.

If you ever want to use the autotools, I made a short set of videos that may be helpful:
https://m.youtube.com/watch?v=4q_inV9M_us

Rejuvenating Autoconf

Posted Oct 24, 2020 18:46 UTC (Sat) by marcH (subscriber, #57642) [Link]

I found most comments respectful and generally constructive so far considering the topic but OK, I'll take the bait. Here's another, clearer and more direct comment just to prove you right: thanks but no thanks.

No thanks for palliative care of a project that has cost gazillions of developers a lot of their time and some of their sanity, and largely contributed to all build systems being "universally despised"[a]. The sooner autotools die, the sooner people get back some time to advance the state of the art and maybe even like build systems again.

1. As detailed in other comments (and in the article itself!) the problems solved by autotools have mostly disappeared. If you still need to compile something on AIX or Solaris then get the companies you're paying (!) for these to fix the actual compatibility problems instead[b]. They now have to fix these anyway for all the projects that don't use autotools anymore. There is still some disastrous and extremely costly lack of standardization across libraries and toolchain interfaces[c] affecting even newer build systems and anything that helps maintain this status quo is actually harmful. Let's please focus and ask developers to spend their very limited build patience and energy on tomorrow's build problems, not yesterday's.

2. While no alternative to autotools is so good that developers will start "loving" build systems yet (is that ever possible?), there are _multiple_ alternatives which are _all_ massively more pleasant and many are way past the "production-grade" phase. They deserve time and effort much more than autotools do.

If some people want to maintain autotools then great... for them! Sorry but no: it is _not_ great for the rest of us, because it is holding us back for a little longer before the inevitable.

[a] https://mesonbuild.com/Design-rationale.html
[b] https://mesonbuild.com/Design-rationale.html#5-must-not-a...
[c] https://github.com/zephyrproject-rtos/zephyr/issues/16031 toolchain abstraction

PS: I'm linking to meson because it has the best "Design rationale" page; I'm not claiming it's the "best" tool (but I find it pretty good).

Rejuvenating Autoconf

Posted Oct 24, 2020 20:35 UTC (Sat) by pizza (subscriber, #46) [Link]

> Let's please focus and ask developers to spend their very limited build patience and energy on tomorrow's build problems, not yesterday's.

Sure, one can argue about the details of autotools' implementation and question whatever arbitrary cutoff it has (or doesn't have) for "ancient" systems... but please keep in mind that while the exact details will change, tomorrow's build problems will be remarkably similar to yesterday's -- software (and deployment) environments are rarely homogenous and usually bleed outside the vertical tooling/language monocultures that are in vogue these days.

Emphasis on bleed.

Rejuvenating Autoconf

Posted Oct 24, 2020 21:54 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link]

I've been struggling with build systems since the '90s, and today's build problems are NOWHERE close to those of the old days.

These days people need dependency management, reproducibility, cryptographic checksums for dependencies, bill-of-materials support, massively parallel builds, etc. None of this was even on the radar back in the '90s.

Rejuvenating Autoconf

Posted Oct 25, 2020 10:21 UTC (Sun) by nix (subscriber, #2304) [Link]

> The sooner autotools die
And all the programs that are using them... what? Die along with them? Invest enormous amounts of time migrating to a new build system? (For things that rely on things like cross-compilation working, a truly enormous amount of time: just upgrading autoconf versions is a very big job for GCC, let alone switching build systems).

Autoconf is in widespread, active use still. Things in widespread, active use need maintenance as the world around them changes. Autoconf hasn't had it of late: it's really good it's seeing some.

I think maybe I should pick a few projects and make sure they work with 2.70 :)

Rejuvenating Autoconf

Posted Oct 25, 2020 22:30 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link]

I've moved several autoconf-based libraries to CMake and usually it was pretty fast. Even with complicated multi-module libraries.

> just upgrading autoconf versions is a very big job for GCC, let alone switching build systems
GCC is an example of how NOT to do things. Clang+LLVM is of comparable complexity, yet its build system is stupidly easy to follow and understand:
https://github.com/llvm-mirror/clang/blob/master/CMakeLis... and https://github.com/llvm/llvm-project/blob/master/llvm/CMa... (yes, there are some helper files, but the bulk of functionality is in these two files).

Perhaps GCC should just spend a couple of weeks migrating to something sane instead of continuing to try and eat the cactus? And yes, it's going to take about 2-3 weeks of work.

> Autoconf is in widespread, active use still. Things in widespread, active use need maintenance as the world around them changes. Autoconf hasn't had it of late: it's really good it's seeing some.
Sometimes maintenance should mean just killing it.

Rejuvenating Autoconf

Posted Oct 25, 2020 22:33 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link]

Mind you, the LLVM and Clang build systems also allow building them as shared libraries, with examples of how to use them. With full Windows support (meanwhile gcc still requires some kind of POSIX emulation on Windows to build) and the ability to create project files for IDEs.

Rejuvenating Autoconf

Posted Oct 26, 2020 15:39 UTC (Mon) by nix (subscriber, #2304) [Link]

You think GCC could migrate to a new build system in a couple of weeks?! OK you quite clearly have *no* idea what you're talking about. It takes about a month of fiddling and testing to migrate to a new autoconf release, and that's if you're lucky and there are no painful obscure problems. A complete replacement with something else is probably at least a year's work: there's an enormous amount of baked-in knowledge in there that would all need to be transferred.

(And CMake is probably impossible to use as a replacement: last I checked its cross-compilation support was terrible. Meson might be usable, in time, but even there the problem is that it depends on Python 3, and that's a long way down the dependency stack for a foundational toolchain component like the C compiler.)

Rejuvenating Autoconf

Posted Oct 26, 2020 16:33 UTC (Mon) by mathstuf (subscriber, #69389) [Link]

Yes, the main problem is that CMake only supports a single toolchain per language (and it's pretty fundamental to how it works today in that pretty much everything assumes it can ask "what is the C compiler in use?" and get a sensible answer in CMake code). So self-compilation becomes an interesting question as well as separate host/target compilation in a single build. But besides those issues (which I admit are not trivial), cross compilation does work. It just disqualifies it for projects which compile some part of themselves to run on the host to generate code for the target (which are few and far between in the scheme of things). I'm not sure how LLVM/Clang does post-phase1 compilation off-hand, but it's probably a separate (possibly automated) CMake invocation to use the just-built compiler.

Rejuvenating Autoconf

Posted Oct 26, 2020 17:09 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link]

Yes. People overestimate the complexity of GCC's build system. It's complex, but it doesn't HAVE to be complex.

The bootstrap stage can be removed entirely, it serves no real purpose these days. But leaving it is not a big deal, you'll just need staged CMake files for this (at least that's how I would do it).

The rest of normal GCC is just a C++ program, nothing really special (sure, a lot of care needs to be taken to handle all the build options).

And the final part is the test suite. I have far less experience there.

> And CMake is probably impossible to use as a replacement: last I checked its cross-compilation support was terrible.
Check it again.

Rejuvenating Autoconf

Posted Oct 27, 2020 14:06 UTC (Tue) by peter-b (guest, #66996) [Link]

> The bootstrap stage can be removed entirely, it serves no real purpose these days.

As an escapee from compiler development, I can absolutely assure you that compiler bootstrapping is still not merely important but essential.

Rejuvenating Autoconf

Posted Oct 27, 2020 14:57 UTC (Tue) by smurf (subscriber, #17840) [Link]

We're talking about building GCC 10.0.0.2 with GCC 10.0.0.1 here, not about building GCC with stone-age-K&R-C.

The latter is proper bootstrapping. You need (or needed) it to build a Rust compiler that's written in C, before you can switch to the "real" Rust compiler written in Rust. (s/Rust/Golang/ if you want to. Or whatever.)

The former is not.

As there's no realistic scenario where you'd want to build GCC with a non-GCC compiler (Clang understands GCC's C extensions, thus it doesn't count) GCC's bootstrapping infrastructure is superfluous. Yes you need to keep some bits&pieces of it for cross-compiling to another architecture – but that's cross-compiling, not bootstrapping.

Rejuvenating Autoconf

Posted Oct 27, 2020 15:23 UTC (Tue) by FraserJGordon (subscriber, #96690) [Link]

But it will haunt you forever ;)

Rejuvenating Autoconf

Posted Oct 27, 2020 18:20 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link]

Bootstrapping is important, but not the way it's done in GCC. It's written with the expectation that the first stage can be compiled by an ancient pre-K&R C, which realistically is nonsense.

GCC is not a particularly involved codebase, so it can just be written to run on something reasonable (modern-ish Clang, gcc, MSVC). With cross-compilation support this will be enough for anything realistic.

Rejuvenating Autoconf

Posted Nov 20, 2020 21:51 UTC (Fri) by nix (subscriber, #2304) [Link]

That hasn't been true for years. You've needed a C++98 compiler to compile all GCCs since, IIRC, 4.8.

Rejuvenating Autoconf

Posted Oct 28, 2020 3:13 UTC (Wed) by pabs (subscriber, #43278) [Link]

These folks are of the opinion that bootstrapping will always be essential:

https://bootstrappable.org/

Rejuvenating Autoconf

Posted Oct 28, 2020 3:24 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

The bootstrapping problem is beaten by diverse double compilation. It's also deeply theoretical; nobody has demonstrated it in practice for anything non-trivial.

And it doesn't even apply here: the "bootstrap" stage in GCC still requires a compiler, it just has very low requirements for it.

Rejuvenating Autoconf

Posted Oct 30, 2020 21:52 UTC (Fri) by jonesmz (subscriber, #130234) [Link]

That seems to speak more to how horrible autotools is, than to how complex the GCC build system is.

Rejuvenating Autoconf

Posted Oct 26, 2020 4:34 UTC (Mon) by marcH (subscriber, #57642) [Link]

> Invest enormous amounts of time migrating to a new build system?

The name is "sunk cost fallacy"

Rejuvenating Autoconf

Posted Oct 26, 2020 6:03 UTC (Mon) by jem (subscriber, #24231) [Link]

No. The sunk cost fallacy describes the situation where somebody sticks with a decision because much effort has already been put into it. The original question was: "And all the programs that are using them... what?" Migrating them would require a lot of effort just for the sake of migrating to a different system.

Now, if a *new* project is started using Autoconf then that would be another matter.

Rejuvenating Autoconf

Posted Oct 26, 2020 7:33 UTC (Mon) by marcH (subscriber, #57642) [Link]

> ... just for the sake of migrating to a different system.

No.

Rejuvenating Autoconf

Posted Oct 26, 2020 15:42 UTC (Mon) by nix (subscriber, #2304) [Link]

So... you think it is fallacious for Autoconf-using projects to be annoyed at being forced to throw away a lot of work and rewrite it, including all its baked in knowledge, and possibly find that you can't even do the shift to a new build system because the new build system is inadequately capable and can't do things you could do with the old one?

It seems entirely reasonable to me to look at that massive pile of work, not one bit of which would benefit your users and some of which might well harm them, and say "hell no".

Rejuvenating Autoconf

Posted Oct 26, 2020 17:40 UTC (Mon) by marcH (subscriber, #57642) [Link]

> So... you think it is fallacious for Autoconf-using projects to be annoyed at being forced to throw away a lot of work and rewrite it, ... ?

I just spent time double- and triple-checking my comment. While I clearly warned it would lack nuance, I didn't write anything remotely close to this.

Some good stuff in the rest of your comment but when the first line is deliberately and totally making up what the other said then the discussion can only go nowhere fast. This is much worse than not making an effort to understand each other. I've noticed "deliberate straw man" has unfortunately become the most common "discussion" style nowadays, role-modelled from the most obscure forums all the way up to the highest level of politics but I'm still refusing to "adapt" to this. Looks like we both have something old we like to cling to :-)

Rejuvenating Autoconf

Posted Nov 1, 2020 2:07 UTC (Sun) by jschrod (subscriber, #1646) [Link]

Sorry, but your "sunk cost fallacy" post the comment was about is exactly of your lamented "straw man" kind.

Pot, meet kettle.

Rejuvenating Autoconf

Posted Nov 2, 2020 9:02 UTC (Mon) by marcH (subscriber, #57642) [Link]

I'm afraid you lost me, sorry. In case that's relevant: "sunk cost fallacy" is _my_ perception of autoconf maintenance, has been long before this discussion and I didn't mean to attribute it to anyone else.

Rejuvenating Autoconf

Posted Nov 2, 2020 10:08 UTC (Mon) by Wol (subscriber, #4433) [Link]

Except you are projecting YOUR values onto OTHER PEOPLE. *Bad* *Idea*.

The sunk cost fallacy is when you choose not to switch to a cheaper option because of what you've spent on the more expensive option. It seems quite clear here that for many people the cost of switching is very high, because they would have to (in one form or another) reimplement a lot of autoconf. This makes maintaining autoconf the cheapest option!

Cheers,
Wol

Rejuvenating Autoconf

Posted Nov 2, 2020 17:01 UTC (Mon) by marcH (subscriber, #57642) [Link]

The sunk cost fallacy is by definition subconscious and it's a natural tendency we all have. This is very far from attributing a very specific, made-up point to someone specific.

> The sunk cost fallacy is when you choose not to switch to a cheaper option, because of what you've spend on the more expensive option.

The sunk cost fallacy is subconsciously giving value to something that has none anymore and letting that influence you. Influence is not always enough to win an (often collective) decision.

Sunk costs are much more complicated in software than in, say, finance, because _knowledge_ of an existing system is valuable: it makes knowledgeable workers much more productive, which is of course very valuable, especially from the perspective of the experts. But what about the value for the project as a whole? The value of newer build systems is of a completely different sort: they require less expert knowledge, and many more people can and do help with their maintenance. Interestingly, this decreases the "market" value of the old experts.

> It seems quite clear here that for many people, the cost of switching is very high, because they will have to (in one form or another) - reimplement a lot of autoconf. This makes maintaining autoconf the cheapest option!

It's pretty obvious that the minute after flipping the switch away from autoconf, a project that just migrated has spent a lot and gained nothing yet. I don't think anyone questioned that. Every technological choice is a large investment and its value must be studied _over time_.

Rejuvenating Autoconf

Posted Oct 25, 2020 17:03 UTC (Sun) by jnxx (guest, #142729) [Link]

> No thanks for palliative care of a project that has cost gazillions of developers a lot of their time and ....
> https://mesonbuild.com/Design-rationale.html

There is a lot of interest from people who want cross-platform builds, probably for using GNU/Linux/FLOSS software on Windows. Clearly, more people want to use free software libraries and tools on Windows.

I understand that it would be nice for some developers if everything there worked neatly on Windows. But the fix for Windows's historical lack of compatibility can't be to break important tools and infrastructure for Linux. And I have worked a bit with CMake, Conan and such.... where autotools and Make are a bit crufty but work, CMake is a major mess with more or less unusable documentation. This is not going to save anyone using Linux any work.

Kudos to the people who put work into such an important, hard, and sometimes thankless project.

Rejuvenating Autoconf

Posted Oct 25, 2020 17:48 UTC (Sun) by martin.langhoff (guest, #61417) [Link]

> There is a lot of interest from people who want cross-platform builds, probably for
> using GNU Linux/FLOSS software on Windows. Clearly, more people
> want to use free software libraries and tools on Windows.

Fair enough, but remember WSL is here now, and for most uses, it's more than good enough.

See https://docs.microsoft.com/en-us/windows/wsl/install-win10

Rejuvenating Autoconf

Posted Oct 26, 2020 17:30 UTC (Mon) by perennialmind (guest, #45817) [Link]

I should like WSL more than I do. WSL1 especially shrinks the gap between the worlds so that one may casually step from one to the other. Pipes, fifos, signals, etc all cross the gap. Microsoft produced a figurative automobile, but I still want a faster horse. I want MSYS2 and Cygwin, with fewer compromises.

That's why I'm hoping that Microsoft's transfer of affection to WSL2 gives the midipix project some breathing room. From the Win32 point of view, it's just one more libc among many, sitting atop NTDLL.DLL. To the libraries I enjoy on my linux distro, it's just another POSIX build target using the musl libc. And you can mix and match.

Rejuvenating Autoconf

Posted Oct 26, 2020 10:57 UTC (Mon) by mathstuf (subscriber, #69389) [Link]

(Puts CMake dev hat on.)

> CMake is a major mess with more or less unusable documentation.

I know the pre-3.0 docs were quite terrible (single-page monstrosity, etc.), but we now use Sphinx for everything. A contributor even went through recently and added `since` tags for the main APIs (module APIs still lack them, I think). We do still lack "story" and guide-like documentation (alas, time and $$), but Craig Scott's book is supposed to be really good.

https://cmake.org/cmake/help/latest/

I agree about the "mess" (though likely differ in degree :) ), but build systems for C and C++ that support more than the bare minimum set of compilers and platforms are just inherently going to be that way because the ecosystem is just such a mess to begin with.

Rejuvenating Autoconf

Posted Nov 2, 2020 1:34 UTC (Mon) by ringerc (subscriber, #3071) [Link]

I'd agree with you if anyone, anywhere, had come up with a build system that wasn't awful... because the problem is awful.

The simple ones are beautiful until you step outside their design boundaries and want to do something they consider inappropriate. Like link to code that wasn't in turn built with the same build system or shipped with some kind of build system manifest file. Oh, the library maintainer should just ship a libfoo.whizzmagic !

The ones that don't try to pretend the world is simple and can be re-made afresh in their own vision tend to become mires of complexity, multiple outdated ways of doing things, stale documentation, and confusion.

Then some, like CMake, seem to manage to find the worst of both worlds, and still be pretty useful.

Rejuvenating Autoconf

Posted Nov 2, 2020 3:13 UTC (Mon) by marcH (subscriber, #57642) [Link]

Yes, all build systems suck. Today's build systems suck only one order of magnitude less. The main difference is: mere mortals can now wrap their head around their limitations, bugs and workarounds.

Rejuvenating Autoconf

Posted Nov 3, 2020 18:21 UTC (Tue) by anton (subscriber, #25547) [Link]

> No thanks for palliative care of a project that has cost gazillions of developers a lot of their time and some of their sanity
Autoconf has saved me a lot of time. The alternative would have been to manually edit Makefile, config.h etc. for every platform we build on.
> the problems solved by autotools have mostly disappeared
The problems are still there. If you don't want your project to be portable between platforms, nobody is forcing you to use autoconf. Gforth has portability as one of its goals, and thanks to autoconf previous releases have been portable to a wide range of OSs and CPU architectures, and we certainly want to stay portable (although admittedly we can no longer test on as many platforms as we used to).

I notice in many places, including your posting and the Meson design rationale, a disdain for maintaining compatibility with old stuff. If everybody followed this principle, the software you use now would soon stop working; ok, so you get the new version (if there is one), but that would of course work differently than you are used to, so you have to change everything that you built that relies on it.

Interestingly, the first look at the Meson design rationale page gave the right impression; I had to adjust fonts and colours before I found it visually readable. Usually such pages are not worth reading, and that page was no exception.

Rejuvenating Autoconf

Posted Nov 3, 2020 20:09 UTC (Tue) by marcH (subscriber, #57642) [Link]

> I notice in many places, including your posting and the Meson design rationale a disdain for maintaining compatibility with old stuff.

I don't see any disdain at https://mesonbuild.com/Design-rationale.html#5-must-not-a... , just a trade-off because meson is not a well-funded, commercial project. Any other reference?

I love old stuff when bugs can be fixed cleanly where they are. My disdain is only for ugly workarounds - typically required by commercial and/or closed-source systems - unmaintainable wrappers and too many layers of indirection.

> Interestingly, the first look at the Meson design rationale page gave the right impression; I had to adjust fonts and colours before I found it visually readable. Usually such pages are not worth reading, and that page was no exception.

There seems to be a fair number of projects that found the meson documentation readable enough: https://news.ycombinator.com/item?id=24881897

The meson project is minimally "staffed" (which goes back to my initial comment) and last time I checked it was unsurprisingly not overstaffed with talented documentation experts. However it is very friendly to "drive-by" contributions so I bet they'll be delighted to accept fixes for your unusual browser configuration.

PS: I've met a surprising number of long-time open source hackers (even some famous ones) who have never used the "info" command. FWIW I do use it all the time.

Rejuvenating Autoconf

Posted Nov 4, 2020 11:53 UTC (Wed) by anton (subscriber, #25547) [Link]

Gforth is not a commercial project, either. That's why it supports portability beyond commercial considerations (e.g., 8 architectures were successfully tested on the latest release); by contrast, the most widely ported current commercial Forth implementation supports only IA-32, AMD64 (in development), ARM (32-bit) on mainstream OSs, and some embedded systems.

Considering Ubuntu 12.04 (the then-current LTS release) obsolete in December 2012 is a pretty strong sign of disdain for maintaining compatibility with old stuff, actually even with then-current stuff.

My "unusual browser configuration" is to heed the web page's requests for fonts and colours. The Meson developers invested a bit of their "minimal" staffing to ask for a tiny font and for light-gray-on-dark-gray colours. After working around that, the content is readable, but not worth reading.

Rejuvenating Autoconf

Posted Oct 24, 2020 20:29 UTC (Sat) by ballombe (subscriber, #9523) [Link]

The issue is that most of us have been bitten in the past by broken or backward incompatible autoconf releases.
If you do not like to put the generated files in the VCS, then you have to require everyone who builds from the VCS to have a compatible autoconf version. This often meant not the one provided by the distro, which was a pain.

So an announcement of a new autoconf version after all this time of stability is a cause of dread for a lot of users, unfortunately, even if it is probably not justified in this case.

Sometimes software being frozen can be a blessing.

Rejuvenating Autoconf

Posted Oct 25, 2020 5:10 UTC (Sun) by pabs (subscriber, #43278) [Link]

Rejuvenating Autoconf

Posted Oct 26, 2020 18:17 UTC (Mon) by marcH (subscriber, #57642) [Link]

Mentioned there: https://queue.acm.org/detail.cfm?id=2349257
"A Generation Lost in the Bazaar - Quality happens only when someone is responsible for it."

Final words:

> It is a sad irony, indeed, that those who most need to read it may find The Design of Design entirely incomprehensible. But to anyone who has ever wondered whether using m4 macros to configure autoconf to write a shell script to look for 26 Fortran compilers in order to build a Web browser was a bit of a detour, Brooks [book] offers well-reasoned hope that there can be a better way.

Building from source to include running autoconf?

Posted Oct 25, 2020 7:16 UTC (Sun) by epa (subscriber, #39769) [Link]

What I find odd is that a ‘source tarball’ includes a configure script which is not source code, but generated by autoconf. Back in the day it made sense that you wouldn’t need the whole autoconf infrastructure installed just to build one GNU package. But nowadays autoconf is readily available on Unix-like platforms, and a platform without autoconf is unlikely to have a usable sh or make either.

Indeed, if you pull sources from GitHub or wherever, chances are you will get the true source code, and have to generate the configure script yourself.

So perhaps instead of running configure then make, the first step in installation instructions should be to run autoconf to make the configure script. Then, add an option to autoconf to make the script and run it immediately. That gives the possibility that some day in the future, the shell script step can be simplified out, with autoconf detecting features and writing Makefile by itself. (It would still be largely running random commands and processing m4 code, but at least it’s one less step in the tower of cruft.)
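In practice that is more or less what building from a git checkout already looks like today, e.g. (repository URL illustrative):

    $ git clone https://example.org/project.git && cd project
    $ autoreconf --install     # regenerates configure, aclocal.m4, Makefile.in, ...
    $ ./configure && make && make install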

Building from source to include running autoconf?

Posted Oct 26, 2020 16:18 UTC (Mon) by jwarnica (subscriber, #27492) [Link]

Or, for any given package where 100% of the developers only develop on Linux and >99% of the users only use Linux, they could simply write Makefiles by hand, possibly with human-understandable switches for Debian/Ubuntu vs Fedora/RHEL.

The remaining 1% of masochistic users would become happier having to manually edit Makefiles for their obscure toy systems.

Building from source to include running autoconf?

Posted Oct 26, 2020 17:33 UTC (Mon) by bpearlmutter (subscriber, #14693) [Link]

Unless the software generates a library, or installs a slew of different sorts of support files and documentation, or any of a dozen other things that are really hard to get right in a Makefile.

Building from source to include running autoconf?

Posted Oct 26, 2020 17:40 UTC (Mon) by epa (subscriber, #39769) [Link]

Perhaps in 2020 we should give up on the idea that the software installs documentation into /usr/share/doc? Who really bothers to go and look in there rather than going online?

I support the idea of a known release version which has a known set of documentation, and keeping the documentation under version control alongside the source code. It can of course be included in the git checkout or the tarball. It just doesn't need to be 'installed' in some creaky location where you might have found it on your minicomputer 30 years ago.

I have to admit I do like to read manual pages, so perhaps I'm being inconsistent here, but there is surely a simpler way to do things.

The general point you make about building libraries is entirely valid of course.

Building from source to include running autoconf?

Posted Oct 26, 2020 17:50 UTC (Mon) by geert (subscriber, #98403) [Link]

Online is great. Until you discover the online documentation describes a (much newer or sometimes older) version of the software than what's actually provided by your distro, and installed on your computer.

Building from source to include running autoconf?

Posted Oct 26, 2020 18:19 UTC (Mon) by jwarnica (subscriber, #27492) [Link]

It's not like it's hard to build a website that handles multiple versions of documentation. Sure, there are cases of high-security facilities with zero Internet access, but more often I want to read documentation where I don't have access to the running system than I have access to a system and no Internet access.

Anyway, for the nuts and bolts of building documentation, I'm more comfortable telling some end sysadmin "we really don't support that obscure system, just read the docs online if you can't build it" than I would be trying to support their 48-bit systems from 1987.

man pages are nice for online quick reference. But for "documentation", you really need documentation.

Building from source to include running autoconf?

Posted Oct 27, 2020 6:19 UTC (Tue) by smurf (subscriber, #17840) [Link]

There's plenty of "Internet access is nonexistent / slow / expensive as hell" corners left in today's world. Don't blindly assume that everybody is as WEIRD as you are. (western, educated, industrialized, rich and democratic.)

Building from source to include running autoconf?

Posted Oct 27, 2020 16:39 UTC (Tue) by jwarnica (subscriber, #27492) [Link]

Well, maybe.

But that doesn't really change the base question that autoconf would unquestionably be the wrong tool to build that.

Building from source to include running autoconf?

Posted Oct 26, 2020 19:16 UTC (Mon) by epa (subscriber, #39769) [Link]

Just as you should be able to trace back anything running on your system to the particular git revision it was built from, you should also be able to view the documentation as of that revision, since the documentation is stored in the same repository as the source. Or at the worst, browse the contents of the source package (rpm, deb, etc) your installed version was built from.

I share your frustration with online documentation but I think if we treat documentation as a kind of source code, it becomes an instance of the more general question 'how can I view source code for what's actually installed on my computer?', which is something that needs an answer anyway.

Building from source to include running autoconf?

Posted Oct 26, 2020 19:38 UTC (Mon) by jwarnica (subscriber, #27492) [Link]

'how can I view source code for what's actually installed on my computer?'

Indeed. Maybe once upon a time that answer involved remembering what tape you smuggled into a place; now it is answered by just answering it. If you can't answer it directly, or can't find the matching source code, then you've got bigger problems than autoconf.

Building from source to include running autoconf?

Posted Oct 27, 2020 0:46 UTC (Tue) by rgmoore (✭ supporter ✭, #75) [Link]

> Online is great. Until you discover the online documentation describes a (much newer or sometimes older) version of the software than what's actually provided by your distro, and installed on your computer.

Or your network is down. Or the problem you're trying to fix is with your networking or your browser. Or you're on some kind of metered connection and would like to avoid connecting to the network for everything. Or any of the other myriad reasons one might wish to avoid going online for everything. Putting everything online is great, but there's really no reason not to put documentation on the same computer as the software it's documenting.

Building from source to include running autoconf?

Posted Oct 28, 2020 16:00 UTC (Wed) by Wol (subscriber, #4433) [Link]

> Online is great.

Until you discover that the *real* documentation is swamped by "howtos" written by people who don't know what they're doing, chock full of subtle misunderstandings and errors.

(Case in point - it's less common now, but the number of sites that say "if your RAID array is corrupted just re-create it"... about as effective as "dd if=/dev/random of=/dev/md-array" !!!)

Cheers,
Wol

Building from source to include running autoconf?

Posted Oct 28, 2020 16:18 UTC (Wed) by rgmoore (✭ supporter ✭, #75) [Link]

Hey, I've done dd if=/dev/random of=/dev/md-array before. Of course that was deliberately filling an array with random junk before making an encrypted disk, so I was actually intending to fill my drive with junk.

Rejuvenating Autoconf

Posted Oct 25, 2020 14:44 UTC (Sun) by martin.langhoff (guest, #61417) [Link]

I, for one, am thankful for the enormous effort spent in maintaining autoconf to keep old software packages and platforms viable. Those who are stuck on some old, old OS are probably not there by choice, and you're saving their day, and possibly allowing a newer software release to run there.

At the same time, I concur with many that software that isn't supporting the likes of HP-UX, AIX and Solaris can skip forward to a more modern and simpler build strategy. Mature Linux distros and container image build tooling have (a) massively simplified the build on Linux, and (b) shifted the challenge a few levels up the stack, into providing configs that can run in a CI toolchain and will output VM and container images. Because everything is automated, you can then re-run the CI on a schedule and have a build with the latest components.

Rejuvenating Autoconf

Posted May 23, 2021 22:26 UTC (Sun) by IDog1993 (guest, #152380) [Link]

Everything ages and every cathedral crumbles into a bazaar eventually. So thanks to those who support tools such as autoconf that help keep the bazaar open, even if it is with sticky tape and bits of string. Developers tend to work with the latest and greatest gizmos and so do not appreciate the difficulty of others who are running just to keep up. Almost every piece of hardware and software I have owned since the late 1980s has died of terminal bit-rot. Now my cloud hosts are entering end-of-life and similarly dying of bit-rot, and I am porting again. So the safest strategy seems to be to remain a continual platform nomad and grind one's software into a hardened portable core. Which means using autoconf, but continually refactoring to eliminate the need for working feature tests.

Ask (1) would be for all the conftest source code to be available as separate standalone source files, so they are reusable - backporting a standalone minimal working example of a test for a working or not-working feature into an M4 macro, and testing working and not-working configurations across platforms, is a chore - these little conftests should be passed around, maybe even as a separate package - a sort of sub-autoconf-archive. They should naturally pop out of regression testing - they would be reusable in test cases and could be used for probing configurations before starting to code a packaging. I've been working today on testing for a regression in a third-party library and trying to work out the best way of preventing bug-testing cruft and workarounds from clogging up the build system and the source code. With shell scripts I have started providing an unconfigured POSIX version and an autoconf-configurable version - I test for compliance and then choose to use a compliant or configured script - rather than making all scripts configurable - this is to keep the old cruft in a separate box. I haven't noticed bugs becoming less common, so testing for them remains a forever problem. Perhaps viewing new features as bugs too would help.

Ask (2) would be for a clearer separation in the documentation of (a) bug tests (or "works" tests), from (b) user choices, from (c) probing system configuration. And more complete examples of how a bug test at configuration leads through to beautiful source code and build configuration with all the cruft hidden in a separate box.

Ask (3) would be more supporting tools to test for portability and compliance - the autoconf documentation on portability is very good but at every turn there is the temptation to slip into using the --new-magic option which just needs a tiny, tiny upgrade - go on you know it will be good for you. Tests for new so called features are an excellent way of spotting a, possibly unnecessary, upgrade dependency. And GNU tie-in is also a portability issue just like any other - copy-left is fine, but not at the expense of freedom from GNU too.

Ask (4) for autoconf would be the ability to keep all autoconf files out of the project root folder (except perhaps one file with GNU in the name) - with packages now shipping with multiple build systems all hogging the project root folder, there are more build-system files than sources lying around.

Rejuvenating Autoconf

Posted Oct 26, 2020 7:35 UTC (Mon) by wtarreau (subscriber, #51152) [Link]

The main problem I'm seeing with the autoconf suite is that it wastes global human time by putting most of the effort in the wrong place. By saving 3 minutes of work for one developer, it costs plenty of users tens of thousands of hours of work when it gets something wrong. But autoconf is not in an easy position; it's one of the rare tools that has two sets of users, each set on a different side (developers on one side, users on the other).

All build systems are hated by end users for a simple reason: they try to work around compatibility issues, and that doesn't always work for various reasons ranging from smaller corner cases to inability to detect certain things, so they are associated with failures. The main problem with autoconf is that it's extremely hard for the end user to work around it when it fails because autoconf always decides that it knows better than the user.

Sometimes you're lucky and can figure out an ac_foo_bar variable that you can set before starting it and avoid patching the configure file. Most often you can't, and you figure out that the test is wrong by nature and needs to be removed to work on your target system.

Because of this, it's probably the only free software that managed to get hate t-shirts against it. Just picking a random one: https://twitter.com/bradfitz/status/762358053744238592

What autoconf needs to survive is to be tolerable to end users. This simply means giving them the ability to override the result of any test without having to patch configure. That's not necessarily the hardest thing to do, especially since caching is already reasonably well supported. It would probably involve using something like "${ac_force_foo_bar:-$ac_foo_bar}" in front of every test, and documenting before each test what variable will be set, its type, usage, and expected contents so that it can be easily overridden.

But in the current state, I don't see how any sane developer would want to use this. One must really hate their user base to inflict such pain on them :-(

Rejuvenating Autoconf

Posted Oct 26, 2020 8:40 UTC (Mon) by pbonzini (subscriber, #60935) [Link]

You can use config.site to override properly written tests. Properly written is the key word.
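A minimal sketch of what that can look like, assuming illustrative file contents and a made-up cache-variable name (configure reads the file named by $CONFIG_SITE, or $prefix/share/config.site and $prefix/etc/config.site by default):

    $ cat /usr/local/share/config.site
    CC=clang
    CFLAGS="-O2 -g"
    ac_cv_header_sys_frobnicate_h=no    # made-up cache variable
    $ CONFIG_SITE=/usr/local/share/config.site ./configure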

The main problem with the Autotools is that it's too hard to use them properly; everything else follows from that.

Rejuvenating Autoconf

Posted Oct 28, 2020 5:49 UTC (Wed) by andresfreund (subscriber, #69562) [Link]

> What autoconf needs to survive is to be tolerable by end users.

Oddly enough, end-user friendliness is actually also something of a benefit of Autoconf-generated configure scripts:

1) Comparing Autoconf's ./configure --help with most of the other build tools out there shows how much more discoverable Autoconf build options are. It's baffling how bad, e.g., the discoverability of CMake options is compared to ./configure --help (a rough comparison follows below), and CMake isn't the only bad case.

2) As annoying as it is to store generated code like autoconf's in source control, not having to install a compatible version of a build tool is a boon too. Having to install a specific older version of a build tool just to build something can be quite painful on non-mainstream platforms.
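For the first point, a rough comparison; the cmake flags are real, the build-directory path is illustrative, and the cmake form needs an already-configured build tree:

    # Works on a pristine, unconfigured source tree:
    $ ./configure --help | less
    # Closest cmake equivalent: list cache entries with their help text,
    # against an existing build directory:
    $ cmake -LAH path/to/build | less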

Rejuvenating Autoconf

Posted Oct 28, 2020 15:25 UTC (Wed) by mathstuf (subscriber, #69389) [Link]

(CMake dev here.)

I'm well aware of the discoverability problems of CMake's configure step. A lot has been offloaded onto the cache editor UIs (ccmake and cmake-gui). One thing I'd personally like to see is support for dumping out the (modified) settings for the current build (https://gitlab.kitware.com/cmake/cmake/-/issues/14756). This pretty much requires proper "use a default value" semantics in the `option` and `set(CACHE)` codepaths. The initial cache value is not a default, as there's no tracking of whether the user wanted it explicitly or an old cache file is in use, so that is likely the first thing that would need to be added.

CMake is also *extremely* strict about backwards compatibility. We still have codepaths lying around to act like CMake 2.6.0 if need be, so there should never be a reason not to use the latest CMake release for any project (please file bugs if this is not the case).

Rejuvenating Autoconf

Posted Oct 28, 2020 22:47 UTC (Wed) by andresfreund (subscriber, #69562) [Link]

> I'm well aware of the discoverability problems of CMake's configure step. A lot has been offloaded onto the cache editor UIs (ccmake and cmake-gui). One thing I'd personally like to see is support for dumping out the (modified) settings for the current build (https://gitlab.kitware.com/cmake/cmake/-/issues/14756). This pretty much requires proper "use a default value" semantics in the `option` and `set(CACHE)` codepaths. The initial cache value is not a default, as there's no tracking of whether the user wanted it explicitly or an old cache file is in use, so that is likely the first thing that would need to be added.

Neither cmake-gui nor ccmake comes even close to ./configure --help, unfortunately. They do not work without a build tree. And, what I think is the really bad part, they don't even work when cmake fails with an error, for crying out loud - precisely when one might need to see the build options in order to make the build succeed.

> CMake is also *extremely* strict about backwards compatibility. We still have codepaths lying around to act like CMake 2.6.0 if need be, so there should never be a reason not to use the latest CMake release for any project (please file bugs if this is not the case).

Will do that the next time I encounter it (which I hope is never...).

Rejuvenating Autoconf

Posted Oct 28, 2020 22:51 UTC (Wed) by andresfreund (subscriber, #69562) [Link]

Oh, another related issue: there isn't a way (at least not that I have found) to run cmake in an existing build directory that ensures that only the -D parameters in the current invocation take effect (with the rest reset to their defaults). It's so easy to end up with options lingering around, and having to clean out the build tree is a pretty expensive solution...
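The usual, admittedly blunt, workaround is to drop just the cache rather than the whole tree, so that only the -D values given on the new invocation survive. A sketch, with a made-up option name:

    # Remove the cache but keep already-built objects, then reconfigure
    # with exactly the options wanted (ENABLE_FROBNICATE is illustrative):
    $ rm -f build/CMakeCache.txt
    $ cmake -S . -B build -DENABLE_FROBNICATE=ON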

Rejuvenating Autoconf

Posted Oct 29, 2020 0:54 UTC (Thu) by mathstuf (subscriber, #69389) [Link]

> And, which I think is the really bad part, they don't even work when cmake fails with an error, for crying out loud.

Due to the nature of the options being defined in imperative code, there's no way for CMake to provide everything in the case of an error. On that front, the discussion around a declarative CMake might be of more use to you[1]. However, you will be able to edit what is available at least, so if you have an option that immediately causes an error, it should be in the cache (or at least in the error message assuming the project is nice enough to have good error messages) for editing.

> isn't a way to run cmake in an existing build directory that ensures that only the -D parameters in the current invocation take effect

Without CMake knowing what the actual defaults are, this would be hard. It would be possible with the issue I linked to above once CMake does have the concept of `Option<UserSetting>`. I've added this idea to that issue[2], thanks.

[1]https://gitlab.kitware.com/cmake/cmake/-/issues/19891
[2]https://gitlab.kitware.com/cmake/cmake/-/issues/14756#not...

Rejuvenating Autoconf

Posted Oct 30, 2020 22:04 UTC (Fri) by jonesmz (subscriber, #130234) [Link]

Yikes. You have compat code going back to 2.6.0? That seems like a huge amount of human effort for questionable gain.

Do you have plans to eventually move that compat level forward and drop the 2.6.0 exclusive code paths?

Rejuvenating Autoconf

Posted Oct 31, 2020 0:25 UTC (Sat) by mathstuf (subscriber, #69389) [Link]

Yep, that we do. Story time :) . I wrote CMP0053 (the new variable-expansion code). I was all happy to be able to remove 5000+ lines of code for the old parser, which used flex/bison for some strange reason. But the old code had some silly behaviors we didn't want to preserve (e.g., it didn't error on unknown escape sequences but just passed them on verbatim, it unconditionally expanded `@var@` (who knew?), and some other silly things). Luckily the new code was fast enough that we could run both at the same time, so we could warn when the new expansion code would differ in behavior from the old one without too much lost time.

The benefit from doing this stuff? CMake users can update CMake without fear that some subtle thing will break when they do so (I mean, it can, but we'll fix it if at all possible). Because you know what users with working build systems hate doing above almost anything else? Chasing subtle changes in behavior that break their build.

If you set your CMake minimum required version high enough, it is an *error* today to ask for the OLD behavior of any policy up to CMP0072 (introduced in 3.11); asking for OLD was a convenience, never a supported mechanism to "upgrade" your project, as the old behavior is, by definition, deprecated. These policies are effectively "hard deprecated" with modern CMake versions once your project's minimum is above 3.11. You can still say you support, say, 3.8, and you'll get the old behaviors. CMake 4 is when any actual deletion would occur, though, and I have no timeline for such a thing.

Rejuvenating Autoconf

Posted Oct 31, 2020 1:27 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link]

Hey, I remember using CMake 2.4! And being disappointed about being unable to use 2.6 because it had not been packaged in Debian.

Out of curiosity, I dug up that project and tried to build it with the most recent CMake. It actually worked with only a 2-line change! I love this stability.

Rejuvenating Autoconf

Posted Jan 2, 2021 6:19 UTC (Sat) by wtarreau (subscriber, #51152) [Link]

No, it's not an error to respect end users' setups; it's actually a sign of a project that cares about being smooth to use.

I used to hate CMake when I discovered it in early projects, because it used to be associated with systematic failures. CMake improved, projects learned to use it correctly, and over time I started to notice that the usual "mkdir build && cd build && cmake .. && make -j$(nproc)" worked almost every time. One old grief was cross-compilation, but nowadays it seems to work smoothly (though it's often hard to remember the variable names to set the target). Initially I used to have to upgrade CMake by hand every time I needed to build a project; nowadays I don't even know what version is installed here, but it works. *This* is part of the project's success: developers just have to be reasonable with the minimum version they expect, and for users it always works.

So kudos to the developers for taking such great care over backwards compatibility. This is essential for a project's success, as it's the only way to avoid in-field fragmentation (hint: does anyone still have python2 installed to run esptool.py, luatool.py, or whatever script is not compatible with python3?).

Rejuvenating Autoconf

Posted Oct 27, 2020 17:31 UTC (Tue) by nwnk (guest, #52271) [Link]

As someone with no particular love for autotools (to put it kindly), this is really encouraging to see. It may reflect the design decisions of an earlier era, but autotools occupies a substantial amount of the build system space for the free software ecosystem, and keeping them relevant helps keep the free OS viable.

If anyone does feel like taking a crack at updating the automake/libtool portion of the world, there are at least two interesting pieces of prior art that may be worth plundering.

dolt: https://gitlab.freedesktop.org/archived-projects/dolt/-/t...

An attempt to address libtool's (lack of) performance by lifting some of the work to configure time.

Quagmire: https://code.google.com/archive/p/quagmire/

Reimplements automake/libtool in terms of GNU make. There's an argument to be made that most of the build-portability problem ought to be solved by asserting that your first step is to get (possibly just download rather than build) a working cc/sh/make, and to confine all the platform-variation handling to those components; this seems like it might get you some of the way there.

Rejuvenating Autoconf

Posted Oct 27, 2020 22:33 UTC (Tue) by josh (subscriber, #17465) [Link]

I was the original author of dolt. (The name came from writing scripts like "doltcompile" that replaced libtool's "ltcompile".)

I wrote dolt to speed up builds of software that used libtool, for the common cases of Linux and BSD systems where most of the extra complexity of libtool didn't apply (because the system already has full-featured library support). It typically cut library build times in half. libtool later did a lot of optimization work to try to handle the common cases better, and I dropped dolt once libtool was close to the same performance.

The key thing dolt did: do autodetection steps *once* and generate a script that runs the necessary commands, rather than doing any amount of autodetection for every compile.
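The idea reduces to roughly the following; this is a sketch only, not dolt's actual output, and the compiler and flags stand in for whatever configure detected:

    #!/bin/sh
    # Wrapper generated once at configure time.  All the platform
    # decisions (which compiler, which PIC flags) were made when this
    # file was written, so each compile is a single direct command with
    # no per-file autodetection.
    # Illustrative usage: doltcompile foo.c foo.o
    exec gcc -fPIC -DPIC -c "$1" -o "$2"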

Rejuvenating Autoconf

Posted Oct 29, 2020 14:54 UTC (Thu) by nathan (subscriber, #3559) [Link]

I'm happy to see this work being done, and with Zack being involved I know the quality will be good.

I use autoconf because I know how to drive it and it gets the job done. configurery goop is not the most exciting piece of development, so I'm not inclined to invest the time to learn some other DSL. Getting a nice explicit error that '$FOO is not available' is, IMHO, a much nicer user experience than an obscure build error or wrong code.

Rejuvenating Autoconf

Posted Oct 29, 2020 20:09 UTC (Thu) by sumanah (guest, #59891) [Link]

> I'm happy to see this work being done, and with Zack being involved I know the quality will be good.

Thank you. And I know what you mean! It's such a pleasure to work with Zack Weinberg on this; I am always glad to hear someone else also recognizing the consistent quality of his work.

Rejuvenating Autoconf

Posted Oct 29, 2020 15:38 UTC (Thu) by ecree (guest, #95790) [Link]

> Libtool could be deprecated, and have its features refactored into the faster and more integrated functionality in Automake.

Please don't. When I needed to create a shared library (libatg), I was able to figure out how to get libtool to do what I needed; whereas automake still gives me the screaming heebie-jeebies. All my projects build from a hand-written Makefile, and I'd like to retain that innocence.

Rejuvenating Autoconf

Posted Oct 29, 2020 16:09 UTC (Thu) by mathstuf (subscriber, #69389) [Link]

My favorite libtool tidbit is that I always have to nuke `*.la` files from install prefixes (distros do it too) because they tend to just…not work somehow. I know they break relocation, since they bake the install prefix into the contents of the files. What they're supposed to fix, given that they're also somehow not important when missing, is beyond me (I haven't investigated much beyond knowing that deletion is a viable solution).

Rejuvenating Autoconf

Posted Nov 20, 2020 21:55 UTC (Fri) by nix (subscriber, #2304) [Link]

.la files are for lesser platforms like Windows and AIX which don't allow shared libraries to depend on other shared libraries, and also for static libraries: they make .a files into something that, like shared libraries, you only need to specify -l for once, when you build it.

These days the lesser platforms are mostly extinct and hardly anyone builds static libs any more unless absolutely unavoidable, so .la files are of much less use than in days gone by.
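For the curious, a .la file is just a small shell-style text file, roughly like the following (values here are illustrative). The absolute libdir and the dependency_libs entries are what get baked in at build time and break relocation:

    dlname='libfoo.so.1'
    library_names='libfoo.so.1.0.0 libfoo.so.1 libfoo.so'
    old_library='libfoo.a'
    dependency_libs=' -L/usr/local/lib -lbar'
    libdir='/usr/local/lib'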

Rejuvenating Autoconf

Posted Nov 20, 2020 22:42 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

> .la files are for lesser platforms like Windows and AIX which don't allow shared libraries to depend on other shared libraries
Windows shared libraries most definitely can depend on other libraries, although circular dependencies are not allowed.

Rejuvenating Autoconf

Posted Nov 21, 2020 13:08 UTC (Sat) by nix (subscriber, #2304) [Link]

Yeah, we're not talking about modern Windows though. I'm fairly sure that back in the days when I did stuff on Windows 3.1, the whole import-library mess more or less precluded inter-DLL dependencies. (But then 16-bit Windows DLLs were *weird* things.)

Rejuvenating Autoconf

Posted Dec 13, 2020 11:12 UTC (Sun) by oldtomas (guest, #72579) [Link]

Funny. 3.1 is about the last (and probably only) Windows I had some intimate contact with (writing C).

This was the version where the provided editor (yah, it was called Notepad!) couldn't load windows.h due to some 16 bit limitation.

And yay, Hungarian Notation.

I think that's why I cringe these days when someone calls their shell script "foo.sh". Post-traumatic thingamajig. Or something.

Rejuvenating Autoconf

Posted Oct 29, 2020 23:07 UTC (Thu) by kmweber (guest, #114635) [Link]

So I'm curious--does Bloomberg have an ongoing program of funding free software development, or did they have an interest in autoconf (or the GNU build system and toolchain) specifically?

Rejuvenating Autoconf

Posted Oct 30, 2020 21:06 UTC (Fri) by kpfleming (subscriber, #23250) [Link]

(putting on my Bloomberg hat)

Our involvement in this project came about via Python, actually. We're big users of Python; it's one of our three primary development languages (alongside C++ and JavaScript), and we employ and/or support quite a few developers in the Python community.

The team who wanted to do the work on Autoconf reached out to us specifically because we are big users of Python and CPython relies on autoconf for its portable configuration and build system, so it made sense that we might be interested in helping to improve it. We were, we agreed to provide some funding alongside others, and the result is what you see here.

While we do not have a formal program of funding open source development efforts, we are sponsors of a number of significant open source communities (including the Apache Software Foundation, Python Software Foundation, Outreachy, and more) and we do as much as we can to give back to the projects that produce the software we use.

Rejuvenating Autoconf

Posted Oct 30, 2020 17:24 UTC (Fri) by azz (subscriber, #371) [Link]

As someone who uses Autoconf and the other Autotools dozens of times a day for a wide variety of projects (including some fairly hairy cross-building setups), I'd just like to say thanks for doing this work — I've always been very happy with the Autotools, both as a developer and as a packager, and it's nice to see people making them even better.

Thank you, Autotools!

Posted Dec 11, 2020 10:00 UTC (Fri) by oldtomas (guest, #72579) [Link]

And thank you, Sumana Harihareswara.

Unlike many commenters around here, who seem in dire need of venting some unspecified frustration, many of my projects are long-term.

As an example, one customer has some GUI program (Gtk2, started roughly twenty years ago). It still runs at the customer's premises. Every five years or so they come back with some enhancement requests.

Its build system, Autotools, led me from stand-alone compilation to Debian package builds to Debian cross-builds for foreign architectures. It never let me down. It Just Friggin' Works (TM).

Yes, it took some investment to set up, but from then on it was basically hassle free.

Sorry, you CMake, SCons, I-Build-It and diverse Harbor-Freight-Build system folks, but I just don't believe you that your favourite build system would have served my customer and me so reliably and noiselessly.

My next project? Autotools. Hands down.

Some of you may call that Frankenstein's monster. If that be so, I ♥♥♥ you, Frankie.

Copyright © 2020, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds