Every so often a new neural network makes headlines for solving a computational problem. It is sometimes hard for me to judge how impressive these achievements are without diving into the details of the models, but my criteria are always the same, and anyone familiar with a given model should find it easy to evaluate it against them. For this purpose I have made a flowchart for how impressed I would be by a neural network. If you know of a new neural net that reaches “wow”, please let me know about it, and if it reaches “mind-blown” you have permission to wake me up in the middle of the night, since I know of no such examples.
This flowchart is meant to evaluate a specific network after learning; it is not intended to evaluate how impressive a learning rule or algorithm is. (I may one day expand it to include learning rules, but for now I am focused on claims about trained neural nets solving difficult computational problems.) I have ignored some of the contemporary challenges of neural nets (e.g. adversarial examples), partly because I don’t fully understand them, but also because I want to focus on a deeper challenge with roots in the classical theory of computation. I cannot fathom how we will achieve what is today called “artificial general intelligence” in a system that does not rise to the level of lambda calculus, Turing machines, or modern-day programming languages. The problem of whether reasoning and intelligence can be mechanically implemented traces back to Leibniz and was arguably solved in the 1930s by Church, Turing, and Gödel (see The Universal Computer by Martin Davis). This nearly 90-year-old insight should not be overlooked by AI researchers today.
I may update this blog post in the coming days with some example papers and how well I think they fare in this flowchart. In the meantime, I hope you find it useful!