In the current AI hype, the press discusses super-human AI and the ethical implications of machine learning. But a very basic question, one that bothered philosophers even before computers were invented, is no longer asked: Is AI even possible?
Warning: I am not going to answer this question in this article. Rather, I use the writing process to clear my own thoughts and hope to entice others to brood over it.
The first question we would have to answer is: What is Artificial Intelligence? An unanswerable question, which I discussed in an earlier blog series, "What is AI". For now, let's take AI to be something beyond what computers do today: (fully) driving a car, doing our groceries, or educating children, all the things that would seriously make people redundant (or more relaxed, depending on your viewpoint). If you believe we already have AI, you can stop reading.
Take a moment to consider what computers are: they are machines built for doing calculations, i.e. performing mathematical operations on numbers. That was the whole idea: doing math. Have you ever tried to have an in-depth discussion on politics with a pocket calculator? Or asked your vacuum cleaner to take out the trash? Why do we assume that a machine that has been built for doing math can behave like a human being?
Humans as Machines
The idea of rebuilding humans is older than computers; just consider Mary Shelley's Frankenstein. But the idea of intelligent computers has a famous advocate: Alan Turing. In his 1950 paper "Computing Machinery and Intelligence" he defines his famous Imitation Game (now known as the Turing Test), both to give the notion of machine intelligence a well-defined basis and to argue for the intrinsic capability of computers to show intelligent behavior.
Turing assumes that humans are a kind of machine. Most scientists today would agree in the sense that human thinking is a result of physical, chemical and biological processes (the same processes that apply to rocks, plants and electronic devices). If we rule out any mystical soul that works differently than the rest of nature, we have to assume that humans are a kind of machine.
But Turing goes one step further. He considers humans to be computing machines:
The idea behind digital computers may be explained by saying that these machines are intended to carry out any operations which could be done by a human computer. 
And then he goes on as if the imitation game were to be played by a human computer. Remember: "computer" was a job description at that time, referring to a person who carried out computations by hand. Turing was not a psychologist; he was a mathematical genius. To my knowledge he conducted no experiments or observations on humans and did not base his assumption on any psychological theory or empirical findings. He may have been generalizing from himself, but he was not an average human: he spent a great deal of his time doing math.
By the way, Turing's theory of computation, now known as the Turing Machine, is also based on this idea of human thinking:
We may compare a man in the process of computing a real number to a machine which is only capable of a finite number of conditions [...]

In the same paper, he dedicates a whole section to making the connection between numbers, symbols and human computation.
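Turing's abstraction is concrete enough to be sketched in a few lines. The following is a hypothetical minimal simulator of my own (the function and rule names are illustrative, not Turing's): a finite set of states (his "conditions"), a tape of symbols, and a transition table, here programmed to increment a binary number.

```python
# A minimal Turing machine simulator: finitely many states ("conditions"),
# a tape of symbols, and a transition table mapping (state, symbol) to
# (new state, symbol to write, head movement).

def run_turing_machine(tape, rules, state="start", pos=0):
    """Run until the machine enters the 'halt' state; return the tape."""
    tape = dict(enumerate(tape))            # sparse tape, blank = ' '
    while state != "halt":
        symbol = tape.get(pos, " ")
        state, write, move = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip()

# Rules for binary increment: scan to the right end of the number, then
# walk back left, turning carry 1s into 0s until a 0 (or blank) absorbs the carry.
rules = {
    ("start", "0"): ("start", "0", "R"),   # scan right over the digits
    ("start", "1"): ("start", "1", "R"),
    ("start", " "): ("carry", " ", "L"),   # hit the end, go back and add 1
    ("carry", "1"): ("carry", "0", "L"),   # 1 + carry = 0, carry continues
    ("carry", "0"): ("halt",  "1", "L"),   # 0 + carry = 1, done
    ("carry", " "): ("halt",  "1", "L"),   # overflow: prepend a 1
}

print(run_turing_machine("1011", rules))   # 1011 + 1 = 1100
```

The point of the exercise is how little machinery the model needs: everything a digital computer does can, in principle, be reduced to a table like this.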
The Right Machine?
The main point Turing misses, in my opinion, is that computation is a very special type of activity for humans. Most of the time we are doing other things. Why should all those other things be a special case of computation?
If we agree that humans are a kind of machine (in the sense that they obey natural laws), we can expect some machine to be able to imitate humans, mentally and physically. But why should this machine be one that was constructed to do computations? And even if the computer is the right type of machine, we do not know whether it is the right tool with which to rebuild intelligence. Let me illustrate this point with an analogy from programming languages: there is mathematical proof that our typical programming languages are all equivalent in their power to express computations. Any program I can write in Clojure could, in principle, also be written in Python, C, or assembly language. But in practice I can write programs much faster and more concisely in Clojure than in Python, let alone in C or assembly language.
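A toy sketch of that distinction (my own illustrative example; I use Python for both "levels", since equivalence-in-power versus difference-in-effort shows up even within one language):

```python
# The same computation at two levels of abstraction: both halt with the
# same answer, but one is far more direct to write and to read. A toy
# illustration of Turing-equivalence vs expressive convenience.

data = [3, 1, 4, 1, 5, 9, 2, 6]

# High-level: one declarative expression (sum of squares of the even numbers).
concise = sum(x * x for x in data if x % 2 == 0)

# Low-level: the same result spelled out step by step, the way an
# assembly-language program would have to express it.
verbose = 0
i = 0
while i < len(data):
    if data[i] % 2 == 0:
        verbose = verbose + data[i] * data[i]
    i = i + 1

assert concise == verbose  # equivalent in power, not in effort
print(concise)             # 56
```

Equivalence proofs tell us only that both versions exist; they say nothing about which one a human (or a research program) can realistically produce.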
The question is whether the best, highest-level programming languages we know give us enough expressive power to encode intelligence, or indeed whether the whole architecture of digital computers is suited to do so.
The architecture of computers is certainly very different from the architecture of our brain (and artificial neural networks share only their name with natural neural structures; their architecture is a far cry from what is going on in the brain). The hope of AI researchers (and myself) is that we do not have to mimic the whole neural architecture, but can "jump in" at some intermediate level of intelligence-producing processes. But whether these processes can be mapped to mathematical operations... I am not sure.
- Turing, A. M.: Computing Machinery and Intelligence. Mind, 59(236), pp. 433–460, 1950.
- Petzold, C.: The Annotated Turing. Wiley Publishing, 2008.
- Turing, A. M.: On Computable Numbers, with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 2nd series, 42, pp. 230–265, 1936.