There has been a lot of talk about AIs taking over the world.
Much of that focuses on Yudkowsky’s description of a hypothetical Bayesian intelligence integrating sensory data to form an accurate model of the world much more quickly than a human would.
But what about problem domains that don’t require sensory data?
Examples: Chess! All of the game state is visible, and the only hidden information is the algorithm being followed by the opponent. Poker! You know how many cards there are, but not which ones your opponents hold, so a probabilistic approach is required. Rubik’s cube! Go! The prisoner’s dilemma!
All of these problems have a compact formal description that can be provided to the agent, so it has a perfect understanding of the laws of physics.
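To make “compact formal description” concrete, here is one possible sketch of handing an agent the complete physics of a game. Nim is used purely as an illustration (it is not mentioned above), and the `Puzzle` structure is an invented convention, not a standard interface:

```python
# A hedged sketch: one way to give an agent a complete, compact "physics".
# Nim and the Puzzle tuple are illustrative inventions, not a standard API.
from typing import NamedTuple, Callable

class Puzzle(NamedTuple):
    initial: object          # starting state
    moves: Callable          # state -> list of legal moves
    apply: Callable          # (state, move) -> next state
    is_terminal: Callable    # state -> bool

# Nim: heaps of stones; a move removes one or more stones from one heap.
nim = Puzzle(
    initial=(3, 5, 7),
    moves=lambda s: [(i, n) for i, h in enumerate(s) for n in range(1, h + 1)],
    apply=lambda s, m: tuple(h - m[1] if i == m[0] else h
                             for i, h in enumerate(s)),
    is_terminal=lambda s: all(h == 0 for h in s),
)

# Every legal move follows from the rules alone; nothing is hidden.
print(nim.moves((1, 0, 2)))  # [(0, 1), (2, 1), (2, 2)]
```

Nothing about the world is left for the agent to discover empirically: the whole environment fits in a dozen lines.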
Alternatively, these problems can be formulated as utility functions, where the physics is a formal description of a virtual machine that runs the algorithm meant to solve the problem. That formulation is isomorphic to the first, but it may be easier to think about.
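One way to picture the utility-function framing: instead of handing the agent the rules, we score candidate programs by running them in a controlled environment. The guessing game below is an invented stand-in for “the problem”; the names and scoring scheme are illustrative assumptions, not anything from the original:

```python
# A hedged sketch of the utility-function framing. The environment here is
# an invented number-guessing game: the "virtual machine" runs a candidate
# strategy and the utility function scores its performance.

def utility(candidate, secret=37, budget=20):
    """Score a strategy: higher is better, unused guesses are rewarded."""
    low, high, guesses = 0, 100, 0
    guess = candidate(low, high, None)
    while guesses < budget:
        guesses += 1
        if guess == secret:
            return budget - guesses   # reward the unspent budget
        if guess > secret:
            high = guess - 1
            guess = candidate(low, high, "lower")
        else:
            low = guess + 1
            guess = candidate(low, high, "higher")
    return 0                          # ran out of budget

def binary_search_strategy(low, high, hint):
    # One candidate algorithm among many the agent might propose.
    return (low + high) // 2

print(utility(binary_search_strategy))  # 17 (found in 3 guesses)
```

The agent never needs to be told what the game is; it only needs to find a program that the scorer rates highly.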
This eliminates a huge amount of complexity while remaining very difficult: can you write an algorithm that solves arbitrary logic puzzles and works better than the specialized algorithms that have been written for each puzzle?
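A domain-general solver does exist in a weak sense: brute-force search over any formal description. The sketch below is breadth-first search over an arbitrary (initial state, legal moves, transition, goal test) tuple, applied to a water-jug puzzle chosen for illustration. It solves small puzzles with zero domain knowledge, which is exactly why it scales so much worse than specialized algorithms:

```python
# A hedged sketch of a fully general (and fully naive) puzzle solver:
# breadth-first search over any formally described state space. The
# water-jug instance is illustrative; real puzzles blow up exponentially.
from collections import deque

def solve(initial, moves, apply_move, is_goal):
    """Return a shortest list of moves from initial to a goal state."""
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path
        for m in moves(state):
            nxt = apply_move(state, m)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [m]))
    return None  # no solution exists

# Water jugs: measure exactly 4 liters using a 3- and a 5-liter jug.
CAP = (3, 5)

def jug_moves(s):
    return [(op, i) for i in range(2) for op in ("fill", "empty", "pour")]

def jug_apply(s, m):
    op, i = m
    j = 1 - i
    a = list(s)
    if op == "fill":
        a[i] = CAP[i]
    elif op == "empty":
        a[i] = 0
    else:  # pour jug i into jug j until i is empty or j is full
        amount = min(a[i], CAP[j] - a[j])
        a[i] -= amount
        a[j] += amount
    return tuple(a)

plan = solve((0, 0), jug_moves, jug_apply, lambda s: 4 in s)
print(plan)  # a shortest sequence of fill/empty/pour moves
```

The solver knows nothing about jugs; it only knows how to enumerate states. That generality is cheap, and the performance gap against specialized algorithms is the hard part of the question above.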
(Hint: no, you can’t. Not yet, anyway; maybe one day.)
It may seem tempting at this point to say: well, maybe we can’t write this algorithm, but we can write a simpler algorithm that can recursively self-improve to the point that it can write this algorithm!
I contest this assertion.