{"id":66,"date":"2002-11-08T00:52:10","date_gmt":"2002-11-08T05:52:10","guid":{"rendered":"http:\/\/www.alevin.com\/?p=66"},"modified":"2002-11-08T00:52:10","modified_gmt":"2002-11-08T05:52:10","slug":"computers-wont-be-reading-plato-any-time-soon-part-2","status":"publish","type":"post","link":"https:\/\/www.alevin.com\/?p=66","title":{"rendered":"Computers won&#8217;t be reading Plato any time soon, Part 2"},"content":{"rendered":"<p>Thanks to <a href=\"http:\/\/lynnparkplace.com\">Ed Nixon<\/a> for a link to an <a href=\"http:\/\/www.kurzweilai.net\/meme\/frame.html?main=\/articles\/art0499.html\">interesting article<\/a> by philosopher<br \/>\nJohn Searle, arguing against Ray Kurzweil&#8217;s contention that computers<br \/>\nwill soon be smarter than people.<br \/>\nThe strong part of Searle&#8217;s article is the argument that &#8220;syntax is not<br \/>\nsemantics&#8221; &#8212; a computer that can calculate chess moves based on<br \/>\npre-defined algorithms does not actually understand chess. Searle argues<br \/>\nsuccessfully that Deep Blue is unintelligent in the same way that a<br \/>\npocket calculator is unintelligent; it is simply manipulating symbols,<br \/>\njust as a human who recites Chinese phrases from a transliteration is<br \/>\nmanipulating symbols but does not understand Chinese.<br \/>\nSearle is right that Deep Blue is very far from being conscious. The<br \/>\nfact that a computer can beat a human at chess means about as much as<br \/>\nthe fact that an automobile can move faster than a runner.  
Humans<br \/>\ndesigned the automobile; and human programmers chose the heuristics that<br \/>\ndrive Deep Blue&#8217;s decisions.<br \/>\nSearle is less successful with the argument that a computer cannot have<br \/>\nintelligence, since a computer contains a mere model of intelligent<br \/>\nprocesses; and models are different from the physical things that they<br \/>\nrepresent.<br \/>\nSearle acknowledges that human intelligence is an emergent property of<br \/>\nneurons firing in the brain.  This means, though, that intelligence is<br \/>\nbased on circuitry, a pattern of information.  Similarly, scientists are<br \/>\ngradually deciphering the informational patterns of genes and gene<br \/>\nexpression. The lines between information and reality are not so<br \/>\nclear-cut; it may be possible to develop living, even intelligent patterns in<br \/>\nsome other medium.<br \/>\nHuman intelligence probably has subtle dependencies on the biochemical<br \/>\nnature of the brain and the organism. Tom Ray makes this point<br \/>\n<a href=\"http:\/\/levin.blogspot.com\/2002_10_01_levin_archive.html#85511581\">beautifully<\/a>.  But it does not follow that the only possible kind of<br \/>\nintelligence requires a body; it certainly does not follow that the only<br \/>\nkind of intelligence requires this sort of body.<br \/>\nIt may be theoretically possible for intelligence to develop in some<br \/>\nother medium. But despite Kurzweil&#8217;s optimism, there is little evidence<br \/>\nthat we have any idea how to do this.  Searle is right that just because<br \/>\nwe can program computers to play chess does not mean we are anywhere<br \/>\nnear creating computers with conscious minds.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Thanks to Ed Nixon for a link to an interesting article by philosopher John Searle, arguing against Ray Kurzweil&#8217;s contention that computers will soon be smarter than people. 
The strong part of Searle&#8217;s article is the argument that &#8220;syntax is not semantics&#8221; &#8212; a computer that can calculate chess moves based on pre-defined algorithms does &hellip; <a href=\"https:\/\/www.alevin.com\/?p=66\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Computers won&#8217;t be reading Plato any time soon, Part 2&#8221;<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[3],"tags":[],"class_list":["post-66","post","type-post","status-publish","format-standard","hentry","category-science"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/prDRq-14","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/www.alevin.com\/index.php?rest_route=\/wp\/v2\/posts\/66","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.alevin.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.alevin.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.alevin.com\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.alevin.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=66"}],"version-history":[{"count":0,"href":"https:\/\/www.alevin.com\/index.php?rest_route=\/wp\/v2\/posts\/66\/r
evisions"}],"wp:attachment":[{"href":"https:\/\/www.alevin.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=66"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.alevin.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=66"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.alevin.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=66"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}