Artificial Intelligence Students on The Wrong Track

There have certainly been incredible strides made in artificial intelligence programming and in how these systems interact with humans. Unfortunately, the results, as good as they are, do not work as well as they need to. In fact, a human asked to judge whether they are chatting online with a machine or another human can almost always tell which is which. Is that the only litmus test? No, but if we are going to have computers and robots interact with people, we must do better than we are currently doing.

In fact, what I see as a major problem right now is that many artificial intelligence grad students and their professors are making errors in their assumptions about how best to design these artificially intelligent systems. As a result, their programming efforts at places like Carnegie Mellon, MIT, and Berkeley are not succeeding as well as they should. Is this to say I am criticizing their efforts entirely? No, but their mirroring theories, in which the machine mimics the human it is interacting with, are not enough. Humans do not always want to interact with minds just like their own; as we know, opposites attract.

It is for this reason that I am calling for interactive personality programs, which run over one another, to be integrated into these AI systems, while keeping some of the mirroring that has already been developed. With that combination we will be better able to simulate human behavior in the machines that will serve as interactive robotic assistants. Consider this in 2006.
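To make the idea a little more concrete, here is a minimal sketch in Python, purely illustrative and not from the original article, of what layering a contrasting personality program over an existing mirroring layer might look like. All class names, parameters, and phrasing rules are hypothetical assumptions introduced only for this example.

```python
import random


class MirroringModule:
    """Roughly matches the user's own wording -- the existing mirroring approach."""

    def respond(self, user_text: str) -> str:
        # Echo the user's point back in similar terms.
        return f"I see what you mean about {user_text.lower().rstrip('.?!')}."


class ContrastingPersonalityModule:
    """Injects a distinct viewpoint so the machine is not a pure mirror."""

    def __init__(self, stance: str):
        self.stance = stance

    def respond(self, user_text: str) -> str:
        return f"Here is another angle, though: {self.stance}"


class LayeredConversationalAgent:
    """Runs a contrasting personality 'over' the mirroring layer and blends
    the two, so the reply is neither robotic nor an exact copy of the user."""

    def __init__(self, mirror: MirroringModule,
                 personality: ContrastingPersonalityModule,
                 mirror_weight: float = 0.6):
        self.mirror = mirror
        self.personality = personality
        self.mirror_weight = mirror_weight  # how often to lead with mirroring

    def respond(self, user_text: str) -> str:
        mirrored = self.mirror.respond(user_text)
        contrast = self.personality.respond(user_text)
        # Sometimes lead with agreement, sometimes with contrast,
        # so the interaction is not pure mimicry.
        if random.random() < self.mirror_weight:
            return f"{mirrored} {contrast}"
        return f"{contrast} {mirrored}"


if __name__ == "__main__":
    agent = LayeredConversationalAgent(
        MirroringModule(),
        ContrastingPersonalityModule(
            "a different pace or opinion can keep the exchange engaging."),
    )
    print(agent.respond("Robotic assistants should always agree with their owners"))
```

The point of the sketch is only the design choice: the mirroring behavior stays in place, but a second personality layer runs alongside it so the assistant does not simply reflect the person back at themselves.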


"Lance Winslow" - Online Think Tank forum board. If you have innovative thoughts and unique perspectives, come think with Lance; http://www.WorldThinkTank.net/wttbbs/