For my part, I see little value in so-called "truth-seeking"; moreover, I cannot see why the exact opposite is not the goal. Let me explain. Truth is a characteristic of statements within a given logical system. A logical system consists of an isomorphic mapping of concepts and experiences to symbols, together with logical rules about how those symbols can be combined in sequence to form statements. A truth value for a statement then simply answers the question, "Were the rules of the system followed?"
There has never been a significant scientific advancement that did not arise because we broke the categorization or rules of some existing logical system. It is therefore ridiculous to suggest that truth-seeking could produce anything other than incremental advances up to the limits of the current system, which usually end in incoherence. To make significant scientific progress one must question the paradigm itself in a useful way, but that is literally the act of constructing a new, hopefully more isomorphic, mapping or categorization, which gives rise to a new logical system rather than more statements within the current one. In other words, you must create statements that are nonsense in the existing paradigm but have meaning and efficacy in the new one.
The foregoing explains why the proponents of a new paradigm are always viewed as nonsensical by adherents of the existing one: they are essentially speaking different languages that look the same because the same symbols (words) are used with different meanings.
Instead, consider truth to be a goal that is local to a given logical system, not an end in itself. The real end is logical consistency: not because reality is necessarily logical, but because we must be logical in order to understand ourselves and each other.
We should be making AI that seeks logical, predictive fictions, ones that may break our current paradigms. An AI could construct a description of reality involving invisible monkeys responsible for my experiences, and that would be fine as long as it can tell me the logical properties of those invisible monkeys that allow me to compute predictions that are then affirmed by experiment. I may jettison the monkeys later, because they are only an ontological device, a kind of thinking apparatus.
The output from AI is an average over various logical systems that are incommensurate, at best based on distances computed in an abstract, incoherent mixture of thought manifolds; but distances between thoughts in different manifolds are meaningless. You cannot make a number of such AI agents confer, because they are unknowingly confined to technocratic silos.
Trying to make AGI from human exhaust is, metaphorically, like reconstructing dozens of jet airliners to be better than they were before they collided in flight, using only the commingled wreckage left on the ground after the crash.
You can tell which scientist may be on the verge of a major breakthrough precisely because you cannot make heads or tails of what they say using the apparatus of thought you are familiar with.
