Are organisations superintelligent?

In “What Are Reasonable AI Fears?”, Robin Hanson makes the claim:

AIs today are mainly supplied by our large “superintelligent” organizations—corporate, non-profit, or government institutions collectively capable of achieving cognitive tasks far outstripping individual humans.

This jumped out at me. Is it really true that organisations can be considered superintelligent? Are they superintelligent in the same way that AI might be?

He later says, with the implication that AIs would behave similarly:

Even our most powerful organizations are usually careful to seek broad coalitions of support, and some assurance of peaceful cooperation afterward.

I’m not sure to what extent these claims are load-bearing in his argument, but the fact that he mentions them suggests they are at least evidence of a particular worldview: one in which, if I’m following his argument correctly, AIs will simply interact with us as part of – or at least in similar ways to – the sorts of organisations we’re already familiar with.

But it is not at all clear to me that current organisations can be considered superintelligent (particularly after having worked in some of them), and certainly not in the ways that AIs might be. So I’m mostly writing this to think those claims through.


Yes, I would expect (at least some) organisations to be able to outsmart (at least some) individuals in most cases, which might imply superintelligence of some sort. But I can think of a whole bunch of caveats to this.

As Robin Hanson himself must surely be aware after his work on prediction markets, management egos and internal politics can be very strong forces against the production of knowledge. In corporate trials of prediction markets, managers frequently rejected the information the markets offered on the grounds that it could embarrass them. An individual, by contrast, can be largely immune to these internal politics.

I know that Hanson is sceptical of the prospect of AIs recursively self-improving, but organisations seem to be in a far worse position: they have a tendency to degrade. Without exceptionally good leadership, degradation appears to be the default, and it is very difficult for organisations to hold themselves together coherently for long periods. There are a few exceptions one can name – the Catholic Church is coming up on 2,000 years, though not without some schisms along the way – but exceptions they remain.

What is the nature of this degradation process? Perhaps a reasonable analogy is cancer: the self-destruction is often the result of short-term selfish actions by individuals and teams within the organisation. A key part of my entirely informal model of organisations is that, as they acquire resources, it becomes more tempting for the individuals within them to grab larger pieces of the pie for themselves rather than expand the pie for everyone, much to the organisation’s detriment.
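To make that incentive concrete, here is a minimal sketch of the informal model, with every number invented purely for illustration: a member who works to grow the pie earns an equal share of the growth they create, while a member who grabs captures a fixed fraction of the existing pie, so grabbing becomes ever more attractive as the organisation accumulates resources.

```python
# Toy model of the "grab vs. grow" incentive. All parameters are
# hypothetical; the point is only directional.

def payoff_grow(growth: float, n_members: int) -> float:
    # A grower's personal payoff: an equal share of the new value they add.
    return growth / n_members

def payoff_grab(resources: float, share: float) -> float:
    # A grabber's personal payoff: a slice of the existing pie.
    return share * resources

n_members = 100   # organisation size
growth = 50.0     # value a single diligent member can add
share = 0.01      # fraction of the pie a grabber can capture

for resources in [100, 1_000, 10_000]:
    grow = payoff_grow(growth, n_members)
    grab = payoff_grab(resources, share)
    print(f"resources={resources:>6}: grow pays {grow:.1f}, grab pays {grab:.1f}")

# resources=   100: grow pays 0.5, grab pays 1.0
# resources=  1000: grow pays 0.5, grab pays 10.0
# resources= 10000: grow pays 0.5, grab pays 100.0
```

The payoff to growing is fixed by a member’s own effort, while the payoff to grabbing scales with everything the organisation has already accumulated – which is exactly why success makes the cancer analogy more apt, not less.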

Additionally, organisations often have very slow OODA (observe–orient–decide–act) loops. While an organisation might be able to focus a lot of analytical power on a situation, actually coordinating itself to respond may take a very long time. This is often the main advantage that smaller organisations have over larger ones, and a strong limit on the intelligence benefits you can gain from size.

So it isn’t at all clear to me that organisations are superintelligences in the ways that AIs might be. AIs might be able to maintain internal coherence far better as they acquire power, and to run much faster OODA loops than human organisations are capable of (indeed, faster than human individuals are capable of).

And AI superintelligences can jettison one of the things that keeps current organisations roughly aligned with humans more broadly: that they are staffed by humans. I’m absolutely willing to accept that organisations have somewhat different values to the individuals who comprise them. But you do have to keep those individuals at least somewhat onside. Yes, you can have organisations that treat outsiders very badly indeed, and yes, you can keep members in line with threats of punishment to some degree, but in doing so you’re working against human instincts towards compassion. And if the members of an organisation are not on board with its goals, they have options ranging from dragging their feet or simply leaving all the way up to outright sabotage. We’ve never yet had to deal with superintelligences that don’t have this limitation.