Tuesday, May 18, 2010

Questioning the Answers

[Image: Archimedes]
Why would computers deprive us of insight? It's not like it means anything to them...

Surreal story time! The setting: Cornell University. Fellow scientists Hod Lipson and Steve Strogatz find themselves thinking about our scientific future very differently in the final story of WNYC Radiolab's recent Limits episode. In the relatively short concluding segment, "Limits of Science", Dr. Strogatz voices concern about the implications of automated science as we learn about Dr. Lipson's jaw-dropping robotic scientist project, Eureqa.



I can relate to Steve Strogatz's concern about our seemingly imminent scientific uselessness. But is there actually anything imminent here? Science is the language we use to describe the universe for ourselves. Scientific meaning originates with us, the humans who cooperate to create the shared language of science. What is human language, or 'meaning', to the Eureqa bot but an extra step that repackages the formula into a less precise, linguistically bound representation? If one considers mathematics to be the most concise scientific description of phenomena, hasn't the robot already had the purest insight?

Given the sentiments expressed by Dr. Strogatz and Radiolab's hosts Jad and Robert, it's easy to draw comparisons between Eureqa and Deep Thought (the computer that famously answered "42" in The Hitchhiker's Guide to the Galaxy). Author Douglas Adams was as much a brilliant satirist as a prescient predictor of our eventual technological capacity (insofar as Deep Thought is like Eureqa). The unfathomably simplistic answer of "42", and the resulting quandary facing the receivers of the Answer to Life, the Universe, and Everything in HHGTTG, is partially intended to make us aware that our capacity for comprehension is limited.

More importantly, it shows that meaning is not inherent in an answer. 42 is the answer to countless questions (e.g. "What is six times seven?"), and Douglas Adams perhaps chose it with this fact in mind. Consider that if the answer Deep Thought gave were a calculus equation 50,000 pages long, the full insight of his satire might be lost on us; it's easy to assume an answer so complicated is accordingly meaningful, when in fact the complex answer is no more inherently accurate or useful in application than the answer of 42.
[Image: Deep Thought]
The Eureqa software doesn't think about how human understanding is affected by the discovery of the formulas that best describe correlations in the data set. When Newton observed natural phenomena and eventually discovered his now eponymous law "F = ma", he reached the same conclusion as the robot; the difference is that Newton was a human-concerned machine as well as a physical observer. He ascribed broader meaning to the formula by associating the observed correlation with systems that are important to human minds: the scientific language of physics, and consequently engineering and technology. A robotic scientist doesn't interface with these other complex language systems, and therefore does not consider the potential applications of its discoveries (for the moment, at least).
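To make the contrast concrete, here is a minimal sketch (plain Python with numpy, and emphatically not Eureqa's actual algorithm) of what it looks like for a program to "rediscover" F = ma from data: it fits a coefficient, reports the formula, and stops, with no notion of why the result matters.

```python
import numpy as np

# Toy illustration only: given noisy observations of mass, acceleration,
# and force, recover the relationship F = m * a by fitting a single
# coefficient k in the candidate model F = k * (m * a).
rng = np.random.default_rng(0)
mass = rng.uniform(1.0, 10.0, size=200)            # kg
accel = rng.uniform(0.5, 5.0, size=200)            # m/s^2
force = mass * accel + rng.normal(0, 0.05, 200)    # noisy "measurements"

# Least-squares estimate of k; a value near 1.0 "rediscovers" F = ma.
x = mass * accel
k = np.dot(x, force) / np.dot(x, x)
print(f"fitted coefficient k = {k:.4f}")  # approximately 1.0

# The program ends here: it has the formula that fits the data,
# but nothing in it asks what the formula is for.
```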

Eureqa doesn't experience "Eureka!" insight because it isn't like Archimedes, a man so thrilled by his bathtub discovery of water displacement that legend remembers him running naked through the streets of Syracuse. He realized that his discovery could be of incalculable importance to human understanding. It is from this kind of associative realization that the overwhelming sense of profound insight emerges. When Eureqa reaches a conclusion about the phenomena it is observing, it displays the final formula and quietly rests, having already discovered everything that is inherently meaningful. It does not think to ask why the conclusion matters, nor can it tell its human partners as much.

"Why?" is a tough question; the right answer depends on context. Physicist Richard Feynman, in his 1983 interview series on BBC "Fun to Imagine", takes time for an aside during a question on magnetism. When asked "Why do magnets repel each other?", Feynman stops to remind the interviewer and the audience of a critical distinction in scientific or philosophical thinking: why is always relative.
 
"I really can't do a good job, any job, of explaining magnetic force in terms of something else that you're more familiar with, because I don't understand it in terms of anything else that you're more familiar with." - Dr. Feynman

Meaning is not inherent or discoverable; meaning is learned.