There is a notion that AI will eventually replace human brainpower. The field of emergence has been expanding into new technologies, from Selfridge's Pandemonium architecture to SimCity and Nano Quadrotors. The jump from thinking linearly to thinking relationally might be made with an emergent programming language (we're already halfway there with Erlang) that also copies itself imperfectly. We could even eventually get AI that evolves much like living forms do.
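The "copies imperfectly" idea is essentially the engine of evolution: replication plus occasional error plus selection. As a toy sketch (not anyone's real system — the names `imperfect_copy` and `evolve` and the target string are my own illustration), here is how imperfect copying alone can evolve a population toward a goal:

```python
import random

random.seed(42)  # for reproducibility

TARGET = "emergence"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate):
    # Count positions where the candidate matches the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def imperfect_copy(parent, mutation_rate=0.1):
    # Copy each character, occasionally introducing a random error.
    return "".join(
        random.choice(ALPHABET) if random.random() < mutation_rate else c
        for c in parent
    )

def evolve(pop_size=200, generations=1000):
    # Start from random strings of the target's length.
    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(pop_size)]
    for _ in range(generations):
        best = max(population, key=fitness)
        if best == TARGET:
            return best
        # Next generation: keep the best (elitism) plus imperfect copies of it.
        population = [best] + [imperfect_copy(best) for _ in range(pop_size - 1)]
    return max(population, key=fitness)
```

No single copy "knows" the target; the combination of noisy replication and selection finds it anyway. That gap between simple local rules and surprising global behaviour is what the emergence examples above share.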
Rather than doomsaying, we're going to have to find our new place in the market. We'll have to find what we can do that AI can't. At the moment the prospect seems bleak: AIs have gone from beating chess masters to scoring as more human than humans in some tests. How is this possible? Is human behaviour really that easy to simulate? Maybe it's time we reconceptualise the Turing test to better distinguish us from AI.