Ah yes, that favorite topic so close to my heart, one I've spent ages explaining as well as debating: Artificial Intelligence, or AI. The debate I'm referring to is not about whether AI will take over the world one day and suppress humanity, but rather about whether AI is even possible. Before this goes any further, AI here refers to absolute intelligence: an entity that performs similarly to the human mind, capable of making all decisions without human intervention.
While there are plenty of machines around that display pseudo-intelligence (Honda's ASIMO, for one), true AI is decades away unless a huge breakthrough occurs in our understanding of how the human mind works. You cannot replicate what you don't completely understand, and the human brain is one of the most complex entities in the world. The split in the AI community occurs at this understanding: random vs. non-random. That breakdown is really not an oversimplification at all. Since I always end up having the same debate over and over (one as recently as two weekends ago) every time I mention that my Master's degree falls under the robotics stream, I decided it might be best to lay out the two viewpoints on AI here. My intent is to argue against both schools of thought, to show that AI isn't going to be possible anytime soon.
The Random theory: This school of thought believes that, as part of nature, humans are random and possess free will. The argument for why randomness can't be built into a system is pretty easy. If nature inherently possesses randomness, how would an entity with just two states (on or off, 0 or 1) ever be random? Existing random number generators are all pseudo-random: part of a very large sequence, but part of a sequence nevertheless, and so they need a fixed seed point. This lack of randomness means that every action taken by an AI would not be "intelligent", because it would depend on an extensive set of rules, and not every event obeys those rules. In a situation where the "AI" finds no rules to define its next step, the system fails. When humans are given a set of rules in various experiments, singularities do not cause a catastrophic failure, because, it could be argued, our randomness lets us keep functioning despite a flawed situation. While pseudo-randomness works for current algorithms, it would fail catastrophically in real-life situations, purely due to complexity and the inability to break that complexity down. Once again, I'm steadfastly refusing to touch the "free will" argument.
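The determinism of pseudo-random generators is easy to demonstrate. Here's a minimal Python sketch (the seed value 42 is arbitrary, chosen only for illustration): two generators seeded with the same value produce exactly the same "random" stream, which is the point above about everything being part of a fixed sequence.

```python
import random

# Two independent generators seeded with the same value.
a = random.Random(42)
b = random.Random(42)

seq_a = [a.randint(0, 9) for _ in range(5)]
seq_b = [b.randint(0, 9) for _ in range(5)]

# The streams are identical: the entire sequence is determined by the seed.
print(seq_a == seq_b)  # True
```

Sources of "true" entropy (hardware noise, OS entropy pools) exist, but the generators used by ordinary algorithms are deterministic in exactly this way.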
The Non-Random theory: This school of thought considers humans non-random creatures whose actions are always described by set rules: every action follows a pattern, and all random instances are merely pseudo-random, i.e., part of a larger defined sequence. The biggest obstacle here is simply our lack of understanding of how the human mind, and nature itself, works. Even at the current level of understanding, some 90% of brain activity falls into a gray area (my stats are a little old; corrections welcome). Ergo, the statement that one cannot replicate what one doesn't understand holds.
One more rather large obstacle standing in the way of AI is the sheer computing power required for day-to-day mundane activities. Even with various technological advances, the effort required to process the data involved in such scenarios is insurmountable. Add in multiple real-world scenarios and the scale of the problem increases exponentially. My favorite example, though related more to robotics than to AI proper, is Honda's bipedal robot, ASIMO. The next time you view a picture of it, look for the big box it carries around. The fascinating thing is that the computation required to keep it on its feet, plus path planning, takes up the majority of its available computational capacity. Contrast that with a human child, who walks without pausing to think about it or to pick a path to a target point.
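To get a feel for that exponential scaling, here's a toy Python sketch. The numbers are illustrative assumptions, not ASIMO's actual specifications: I assume each joint is discretized into just 10 positions. Even so, the configuration space a naive planner would have to search grows as 10 raised to the number of joints, which gets astronomical for a many-jointed humanoid.

```python
# Toy illustration of combinatorial explosion in motion planning.
# Assumption (for illustration only): each joint has 10 discrete positions.
positions_per_joint = 10

for joints in (2, 6, 12, 24):
    configurations = positions_per_joint ** joints
    print(f"{joints:2d} joints -> {configurations:.0e} possible configurations")
```

This is why real planners rely on sampling and heuristics rather than exhaustive search, and why even then, balancing and path planning saturate a robot's onboard computer.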
While solving the computational problem is not an impossibility, unless a major breakthrough occurs, the goal of true AI is decades away, even accounting for the jump in computational power over the past two decades. Nanotechnology might be the way, but advances in computational efficiency alone cannot carry us over the hill. Understanding either the randomness or the non-randomness in nature, combined with a complete mapping of brain function in relation to every possible scenario, would definitely be a prerequisite to any model that leads us to true AI.
One regret I have about this post is that it might not be the most lucid I've written, especially since the topic is so close to my heart and I'm still rusty. Long absences tend to do that, I guess, but I've really tried to simplify my usual points so that someone new to AI can appreciate its complexity. (An older article I'd written was much, much longer and more in-depth, but that was aimed at a fellow lab-mate.) That said, if this doesn't provide some insight into how difficult it is to get a computer to do even menial tasks, much less be capable of intelligence itself, then I've sort of failed to get the point across.
The very fact that our minds can do effortlessly what takes so much effort to replicate in a machine (basic stuff, even, like path planning or grasping an object) always blows me away. Never let a mind go to waste, because there are few things as valuable as the human mind.