For the past few months I have had a love-hate relationship with the rationalist movement. Their capacity to produce extremely insightful analyses on very diverse subjects is exciting. But I am bothered by their hubris. It is clear that they can accumulate, assimilate, and analyze new ideas, but I am not convinced that “reason can be the only source of knowledge”. Since it is impossible for one person to fully understand most facets of most problems, a healthy dose of (reasoned) trust is necessary. Also, I am bored by their fascination with the AI alignment problem.

In short, AI alignment deals with the problem of building artificial intelligence that will not kill us all. The urgency of this problem stems from the assumption that any real AI will be very fast and capable of improving itself exponentially, giving rise to a singularity. Any non-zero chance that this results in the total extermination of humanity is too much, so we must strive to control such an entity, or at least ensure that it is aligned with our interests.

To me, it seems that concentrating too many resources on this problem violates the rationalists’ “effective altruism” goal. There are quite a few individuals and groups (group = artificial intelligence?) capable of exterminating humanity, or a significant part of it, in a relatively short time. Just think of (bad) people controlling nuclear arsenals, bioweapons, or significant space capabilities, or pushing for more fossil-fuel use. If most rationalists are concentrated on AI alignment, who is going to solve these other problems?