Posted by shannonclark on February 26, 2003
Part one of a series (linked to printable version – but do support Salon by looking at their ads, I may not always agree with them on all issues, but they publish a lot of great stuff)
This article covers something that I, as a researcher in AI (albeit in obscure branches of the field, and in a non-academic setting), am very interested in: the odd disconnect between the “AI establishment” and reality and practicality.
While personally I am not sure that the Turing Test is the be-all and end-all of tests (there are plenty of humans with whom a conversation might be rather stilted), I do not think that dismissing practical attempts at it reflects a very professional approach on the part of the “establishment”.
Further, I personally don’t think that LISP-based approaches to AI are the way to go – rather, I tend to view “AI” as needing far more complex approaches than the rules-based approaches typical of LISP systems.
In my own work, which is very narrowly focused, I look at building systems that are self-modifying – that is, they are literally built in reaction to the information and task at hand (literally, in many cases, in terms of the code that is run). I would argue that this approach, combined with some “learning/memory” based on the past, as well as a focus on what needs to be done, is a highly productive approach to AI development.
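To make the idea concrete, here is a minimal sketch (in Python, and emphatically not the author’s actual system – the function names and the record-extraction task are invented for illustration) of code that is generated at runtime in reaction to the data at hand, rather than written in advance:

```python
def build_extractor(sample_record):
    """Generate a parsing function specialized to the fields actually
    present in a sample record - the code that runs is built on the fly."""
    fields = sorted(sample_record.keys())
    # Construct Python source text tailored to this record shape.
    src = "def extract(record):\n"
    src += "    return {" + ", ".join(
        f"'{f}': record.get('{f}')" for f in fields) + "}\n"
    namespace = {}
    exec(src, namespace)  # compile and load the generated code
    return namespace["extract"]

# The system "reacts" to a sample of the input and builds its own extractor.
extract = build_extractor({"name": "Ada", "task": "review"})
print(extract({"name": "Ada", "task": "review", "extra": "ignored"}))
# -> {'name': 'Ada', 'task': 'review'}
```

A real system of the kind described would of course couple this code generation with stored “memory” of past runs; the sketch only shows the self-building step.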
Here the goal is not fully autonomous systems, but rather to build systems that automate repetitive human tasks – my usual target being 80-90% of a given task. This allows the systems to speed humans up on their daily tasks and/or free up time for more important work (so either more work of a given type can be performed, or attention can be shifted to more value-added tasks).
If you are reading this and are interested in learning more about my research and software, feel free to contact me – I would be happy to point you to some demos and/or discuss what I am working on.