Humans are like legacy code
The organisation has deployed billions of (mostly) self-organising copies of the system, each running slightly different code. These instances often conflict with each other, even though completing a single task can require hundreds of them working together.
There’s code left over from early versions that probably does nothing, but everyone is afraid to take it out, just in case.
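In codebase terms, it tends to look something like this (a hypothetical fragment, invented for illustration, not from any real system):

```c
/* legacy_init.c -- hypothetical fragment, invented for illustration. */

static int frobnicate_enabled = 0;   /* written once, read nowhere */

/* Added in an early version. Probably does nothing, but the one time
 * someone deleted it, the nightly build failed. Do not remove. */
void legacy_init(void)
{
    frobnicate_enabled = 1;
}
```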
Nobody knows what it all does. Some code was copied and pasted in from an entirely unrelated system.
Some processes crash so frequently that other processes exist solely to delete them. This does not always work, which can result in zombie processes that replicate while refusing the kill command. Whole departments are tasked with fighting them.
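For anyone who hasn’t met one: in Unix, a zombie is a child process whose parent never collected its exit status. The metaphor stretches a little, since real zombies don’t replicate, but they genuinely do ignore kill, because there is nothing left to kill. A minimal sketch:

```c
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        _exit(0);  /* the child dies immediately */
    }
    /* The parent never calls wait(), so the kernel keeps the child's
     * exit status around and it appears as <defunct> in ps. Sending
     * it SIGKILL does nothing: it is already dead. */
    printf("child %d is now a zombie; try: ps -o pid,stat,comm\n", (int)pid);
    sleep(30);     /* the zombie persists as long as the parent runs */
    return 0;
}
```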
There are functions that are no longer used except when a bug happens.
A single change can impact multiple unrelated areas.
Some classes have multiple unrelated responsibilities. Some features rely on undefined behaviour in other sections.
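“Undefined behaviour” is not an exaggeration, either. In C, for instance, signed integer overflow is undefined, so a feature can appear to work purely because of how one particular compiler happens to handle it (a hypothetical example, assuming a typical 32-bit int):

```c
#include <stdio.h>

/* "Works" only by accident: n * 2 is undefined behaviour in C
 * whenever the result overflows int. */
int times_two(int n)
{
    return n * 2;
}

int main(void)
{
    /* Many builds print a large negative number here; a different
     * compiler or optimisation level is free to do anything else. */
    printf("%d\n", times_two(2000000000));
    return 0;
}
```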
There is a redundant copy of the code, which differs from the original substantially. Both are in regular use. The code makes copies of itself frequently, and with less than stellar accuracy.
The whole thing might be an old AI experiment. It works best with regular data input and takes years to fully train. It is okay at a range of tasks but, if not carefully trained, sometimes outputs entirely fictional nonsense such as a flat earth, lizard people, and democracy.
The whole system exists primarily to generate a replacement codebase for when the current one inevitably stops working. This can introduce new bugs.