Wednesday, October 20, 2010

Review: The Moral Landscape: How Science Can Determine Human Values

The Moral Landscape: How Science Can Determine Human Values by Sam Harris

Harris says a lot of things that need to be said. Doesn't mean I always like the way he says it.

The Moral Landscape is a look at morality as a landscape, with peaks and valleys of "well-being", essentially 'goodness', happiness, or fulfilled life. The conceptual premise is that humanity should use science to navigate the landscape, with the goal of moving people to higher moral ground through objective consideration. Through this metaphor Harris advocates a Science of Morality, where values can and should be determined strictly via scientific endeavor, rather than via purportedly causally distinct domains such as religious belief or abstracted philosophy. Harris describes this moral question as an illusory fact-value distinction that can be reduced to fact only, and as such determined scientifically. This is in opposition to David Hume's well-known "is-ought" distinction, and it seems there's been plenty of controversy over this type of philosophical supposition ever since. I'm inclined to side with Harris overall, but I can still logically at least entertain some of these philosophical challenges, making it difficult for me to endorse Harris wholeheartedly from the outset.

I find Harris' vitriolic rejection of competing ideologies off-putting, but I can understand why he would mix invectives into his logical argument if trying to garner support for an inchoate field of inquiry against sometimes even more zealous opponents. The parts of the book I found most interesting were where Harris takes on the gentler logical tone and analyzes contemporary academics in Positive Psychology, a branch of psychology focused on the concept of happiness, primarily through the lens of clinical studies, statistical analysis, and neuro-imaging (i.e. scientifically). I've enjoyed a number of the prominent books in this field recently and I was initially surprised when Harris directly contradicts the authors at times throughout the book (e.g. Jonathan Haidt in particular), making a point of exposing problems with certain sociological conclusions. I did not find these to be refutations of these psychologists' findings, but rather a compelling challenge to rethink conclusions that may be unwarranted.

In total I find it hard to argue against the proposition that science will influence mental states and understanding given the society we are living in today. Modern use of non-invasive technology (e.g. transcranial magnetic stimulation of spirituality) and manipulation of biochemistry provide convincing examples of how physical action is causally linked to belief and states of being, and therefore a science of morality could be used to determine effective action, whatever that may be. That is to say, while a science of morality may be continually revising the ends, hypotheses about what constitutes goodness, as the "is" we discover physically redefines the "ought" we perceive philosophically, we can be sure it will refine the means to further moral understanding. As Harris so forcefully asserts, we do ourselves disservice by objecting to scientific inquiry. Perhaps "ought" is merely the best possible "is" we can envision, and moral science a powerful tool to widen our metaphysical gaze.

(Additional note: Much of the material is a general condemnation of religiosity, and "New Atheism" seems to be the term for Harris' particular brand of it. To me New Atheism doesn't really seem to be a different kind of atheism so much as a category to address modern outspoken religious critics like Harris, Dawkins, Hitchens, etc.)

View book list on Goodreads

Wednesday, June 30, 2010

The Adumbration

The Adumbration lumbered in the distance, outline clouded by horizon's haze.

"What is it?" the countryside cried, "What will it mean?" the elders worried.

Its dusty gray obscuring more and more into the clear blue sky, the Adumbration loomed.

"What should we feel?" the townspeople asked, "Every man think for himself!" the scattering elite replied.

Indiscernible, nearly there but not yet here, almost not yet a thing unto itself, the Adumbration rust the land.

"We must know if we are to go on!" the people plead, "We can't really tell." duly unspoken.

Coming still, roiling yet becalmed, ephemeral but always, the Adumbration was unseemly.

"We will distract ourselves with seeking" some said, "We will distract ourselves with providing." said some.

Almost invisible the Adumbration stayed.

"Now we are certain!" proclaimed the everyman. "Now we are content." thought his mind.

The Adumbration continued.

Friday, June 25, 2010

What would happen to me if I fell into a Black Hole?

A Black Hole
It's safe to say you wouldn't survive the trip, so stay on this side of the Event Horizon if you ever want to be seen again.

Black Holes are extremely massive objects occupying a tiny volume in space. When a massive star (many times the mass of the Sun) can no longer sustain enough nuclear fusion at its core to support its own weight, it may collapse in on itself and form a Black Hole, pulling in everything around it. Our Milky Way galaxy spirals around a super-massive Black Hole at its center.

Since nothing can escape the incredible gravitational pull of a Black Hole (not even light itself), scientists can only speculate on what would happen to a person falling into one. However, it does seem evident that the crushing gravity would not be kind to your body. Soon after passing the Event Horizon, the point of no return, your body's atomic structure would be ripped apart. The parts of your body closer to the singularity would experience a stronger gravitational pull than the parts further away. This "tidal gravity" creates a differential pull that literally stretches you out as you fall in. Alas, the rack of space-time is unforgiving to even the most pliant mind, and ultimately it's impossible to keep yourself together.
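Just how unkind is that stretching? Here's a rough back-of-the-envelope sketch using the Newtonian tidal approximation; the particular black hole (10 solar masses) and the 2-meter astronaut are my own illustrative assumptions, not anything from the physics above:

```python
# Rough estimate of "tidal gravity" near a black hole, using the
# Newtonian approximation: the difference in pull across a body of
# height L at distance r is roughly 2*G*M*L / r^3.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

M = 10 * M_sun                  # a hypothetical stellar-mass black hole
r_s = 2 * G * M / c**2          # Schwarzschild radius (the Event Horizon)
L = 2.0                         # height of an infalling astronaut, m

tidal = 2 * G * M * L / r_s**3  # head-to-feet acceleration difference
                                # evaluated right at the horizon, m/s^2

print(f"Event Horizon radius: {r_s / 1000:.1f} km")
print(f"Head-to-feet tidal acceleration: {tidal:.2e} m/s^2")
```

For this 10-solar-mass case the head-to-feet difference works out to hundreds of millions of m/s², which is why "ripped apart" is no exaggeration. Curiously, because tidal force falls off as 1/r³ while the horizon radius grows with mass, the same calculation for a super-massive Black Hole gives a gentle tug at the horizon; there, the ripping happens further in.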

Interestingly, getting sucked into a Black Hole doesn't necessarily mean all trace of you is permanently erased. Physicist Stephen Hawking theorized that Black Holes emit minute amounts of radiation energy carrying something like quantum information signatures of the stuff pulled in. So-called "Hawking Radiation" retains the information characteristics of whatever the Black Hole gobbled up; if a carbon atom gets sucked in, eventually the energy equivalent of a carbon atom will spew back out as Hawking Radiation. The information of the universe is conserved. So, at least in quantum theory, you could be reconstituted bit-by-bit if an outside observer were able to interpret Hawking Radiation and piece you together.

Cosmologist Ted Bunn's Black Hole FAQ (Frequently Asked Questions) offers many expanded answers:

Wednesday, June 2, 2010

Simpson's Paradox

Consider the following cartoonish thought experiment:

Homer and Lenny get into a week-long grudge match at the nuclear power plant over which of the two can eat more donuts. They decide to settle it once and for all in a weekend donut-eating contest: each of them gets 100 donuts, whoever eats more by the end of the weekend wins.

Lenny secretly knows Homer will be able to out-eat him, but he also knows something about statistics that he's hoping Homer doesn't. Lenny suggests, and Homer agrees, that Lisa will be in charge of moderating the match to keep things fair.

Lisa will buy 100 donuts each morning and divide the 100 donuts into two boxes, one for Homer and one for Lenny. After setting the two boxes out for the day, Lisa will return periodically to mark the percentage of each box's donuts that have been eaten. In this way Homer and Lenny know how they're faring against each other and each can adjust their eating behavior over the day to try and keep up with the other.

Here's how it goes down:
    1. On Saturday, Homer eats more of his box of donuts than Lenny eats of his box of donuts.
    2. On Sunday, Homer eats more of his box of donuts than Lenny eats of his box of donuts.
    3. On Monday, Homer is shocked to find that Lenny has won in the final tally by over a dozen donuts!

Wait, how did that happen? Lenny didn't cheat and Lisa didn't divide them unfairly; each got the opportunity to eat 100 donuts. So why didn't Homer win when the percentages always showed him in the lead? Lisa divided each daily allotment of 100 donuts into a box of 90 donuts and a box of 10 donuts. Homer's winning percentages came mostly on the day he held the small box, while Lenny ate at a slightly lower rate out of the big one. Lenny won by weighting.
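The post doesn't give the exact counts, so here is one hypothetical set of numbers, consistent with the story, that makes the reversal concrete:

```python
# Hypothetical counts: Lisa splits each day's 100 donuts into a 90-box
# and a 10-box. Homer posts the higher percentage on BOTH days, yet
# Lenny wins the overall tally. Format: day -> (donuts eaten, box size)
homer = {"Sat": (9, 10),  "Sun": (30, 90)}   # 90% Sat, ~33% Sun
lenny = {"Sat": (72, 90), "Sun": (3, 10)}    # 80% Sat,  30% Sun

for day in ("Sat", "Sun"):
    h_eaten, h_box = homer[day]
    l_eaten, l_box = lenny[day]
    print(f"{day}: Homer {h_eaten / h_box:.0%} vs Lenny {l_eaten / l_box:.0%}")

homer_total = sum(eaten for eaten, _ in homer.values())   # 39 donuts
lenny_total = sum(eaten for eaten, _ in lenny.values())   # 75 donuts
print(f"Totals: Homer {homer_total}, Lenny {lenny_total}")
# Homer leads every daily percentage, but Lenny wins the weekend by 36.
```

The percentages hide the weights: Homer's 90% day was 9 donuts out of a 10-box, while Lenny's "losing" 80% day was 72 donuts out of a 90-box.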

Simpson's Paradox, as explained by singingbanana:

Why is Simpson's Paradox important to remember in the real world? As mentioned in the video above, direct percentage comparisons of weighted data are a risk in any field using statistical analysis, especially the social sciences. Failing to consider what statistical percentage comparisons actually mean can lead to less favorable outcomes despite seemingly favorable supporting data. If you want your doctor to pick the best medicine (Drug A) for you, you'd better hope he gets the proper recommendation from the groups running the statistical analysis first. Otherwise you may get worse medication despite the availability of a more effective alternative, and neither you nor your doctor will be the wiser.

    Statistical analysis is an important way to get a holistic understanding of phenomena, but interpreting test results isn't as easy as it looks; sometimes you miss the holes staring you right in the face.

    Tuesday, May 18, 2010

    Questioning the Answers

    Why would computers deprive us of insight? It's not like it means anything to them...

Surreal story time! The setting: Cornell University. Colleagues Hod Lipson and Steve Strogatz find themselves thinking about our scientific future very differently in the final story of WNYC Radiolab's recent Limits episode. In the relatively short concluding segment, "Limits of Science", Dr. Strogatz voices concern about the implications of automated science as we learn about Dr. Lipson's jaw-dropping robotic scientist project, Eureqa.

I can relate to Steve Strogatz' concern about our seemingly imminent scientific uselessness. But is there actually anything imminent here? Science is the language we use to describe the universe for ourselves. Scientific meaning originates with us, the humans who cooperate to create the shared language of science. What is human language, or 'meaning', to the Eureqa bot but an extra step that repackages the formula into a less precise, linguistically bound representation? If one considers mathematics to be the most concise scientific description of phenomena, hasn't the robot already had the purest insight?

Given the sentiments expressed by Dr. Strogatz and Radiolab's hosts Jad and Robert, it's easy to draw comparisons between Eureqa and Deep Thought (the computer that famously answered "42" in The Hitchhiker's Guide to the Galaxy). Author Douglas Adams was a brilliant satirist as much as a prescient predictor of our eventual technological capacity (insofar as Deep Thought is like Eureqa). The unfathomably simplistic answer of "42", and the resulting quandary that faced the receivers of the Answer to Life, the Universe, and Everything in HHGTTG, is partially intended to make us aware that we are limited in our abilities of comprehension.

    More importantly, it shows that meaning is not inherent in an answer. 42 is the answer to uncountable questions (e.g. "What is six times seven?") and Douglas Adams perhaps chose it bearing this fact in mind. Consider that if the answer Deep Thought gave was a calculus equation 50,000 pages long, the full insight of his satire might be lost on us; it's easy to assume an answer so complicated is likewise accordingly meaningful, when in fact the complex answer is no more inherently accurate or useful in application than the answer of 42.
    Deep Thought
The Eureqa software doesn't think about how human understanding is affected by the discovery of the formulas that best describe correlations in the data set. When Newton observed natural phenomena and eventually discovered his now eponymous "F = ma" law, he reached the same conclusion as the robot; the difference is that Newton was a human-concerned machine as well as a physical observer. He ascribed broader meaning to the formula by associating the observed correlation with systems that are important to human minds: the scientific language of physics, and consequently engineering and technology. A robotic scientist doesn't interface with these other complex language systems, and therefore does not consider the potential applications of its discoveries (for the moment, at least).

Eureqa doesn't experience "Eureka!" insight because it isn't like Archimedes, the man so thrilled by his bathtub discovery of water displacement that legend remembers him running naked through the streets of Syracuse. Archimedes realized that his discovery could be of incalculable importance to human understanding. It is from this kind of associative realization that the overwhelming sense of profound insight emerges. When Eureqa reaches a conclusion about the phenomena it is observing, it displays the final formula and quietly rests, having already discovered everything that is inherently meaningful. It does not think to ask why the conclusion matters, nor can it tell as much to its human partners.

    "Why?" is a tough question; the right answer depends on context. Physicist Richard Feynman, in his 1983 interview series on BBC "Fun to Imagine", takes time for an aside during a question on magnetism. When asked "Why do magnets repel each other?", Feynman stops to remind the interviewer and the audience of a critical distinction in scientific or philosophical thinking: why is always relative.
    "I really can't do a good job, any job, of explaining magnetic force in terms of something else that you're more familiar with, because I don't understand it in terms of anything else that you're more familiar with." - Dr. Feynman

    Meaning is not inherent or discoverable; meaning is learned.

    Saturday, May 15, 2010

    Making Virtual Sense of the Physical World

    You'll remember everything. Not just the kind of memory you're used to; you'll remember life in a sense you never thought possible.

    Wearable technology is already accessible and available to augment anyone's memory. By recording sensory data we would otherwise forget, digital devices enhance memory somewhat like the neurological condition synesthesia does: automatic, passive gathering of contextual 'sense data' about our everyday life experiences. During recollection, having the extra contextual information stimulates significantly more brain activity, and accordingly yields significant improvements in accuracy.

    This week, Britain's BBC2 Eyewitness showed off research by Martin Conway [Leeds University]: MRI brain scan images of patients using Girton Labs Cambridge UK's "SenseCam", a passive accessory that takes pictures when triggered by changes in the environment, capturing momentary memory aids.

    The BBC2 Eyewitness TV segment on the SenseCam as a memory aid:

The scientists' interpretation of the brain imaging studies seems to indicate that the vividness and clarity of recollection are significantly enhanced for device users, even with only the fragmentary visual snapshots from the SenseCam. One can easily imagine how a device that can also record smells, sounds, humidity, temperature, bio-statistics, and so on could drastically alter the way we remember everyday life!

Given this seemingly inevitable technological destiny, we may feel the limits of human memory changing dramatically in the near future. Data scientists are uniquely positioned to see this coming; a recent book by former Microsoft researchers Gordon Bell and Jim Gemmell, Total Recall: How the E-Memory Revolution Will Change Everything, begins its hook with "What if you could remember everything? Soon, if you choose, you will be able to conveniently and affordably record your whole life in minute detail."

    When improvements in digital interfacing allow us to use the feedback from our data-collecting devices effortlessly and in real-time, we might even develop new senses.

    A hypothetical example: my SkipSenser device can passively detect infrared radiation from my environment and relay this information, immediately and unobtrusively, to my brain (perhaps first imagine a visual gauge in a retinal display). By simply going through my day to day life and experiencing the fluctuations in the infrared radiation of my familiar environments, I will naturally begin to develop a sense for the infrared radiation being picked up by the device. In this hypothetical I might develop over time an acute sense of "heat awareness", fostered by the unceasing and incredibly precise measurements of the SkipSenser.

    Of course I'm not limited to infrared radiation for my SkipSenser; hypothetically anything detectable can stimulate a new sense. The digital device acts as an aid or a proxy for the body's limited analog sense detectors (eyes, ears, skin, i.e. our evolutionary legacy hardware) and also adds new sense detectors, allowing the plastic brain to adapt itself to new sensory input. I could specialize my auditory cortex, subtly sensing the characteristics of sound waves as they pass through the air, discovering patterns and insights previously thought too complex for normal human awareness. I could even allow all of my human senses to slowly atrophy in favor of fomenting a set of entirely unfamiliar senses, literally changing my perception to fit some future paradigm.

    NASA Interferometer Images

    Augmenting our sensory systems isn't new, it's what humans are naturally selected for. Generally speaking, 'tool' or 'technology' implies augmentation. If you drive a car, your brain has to adapt to the feel of the steering wheel, the pressure needed to push the pedals, the spatial dimensions of the vehicle, the gauges in the dashboard. While you learned how to drive a car (or ride a bike), your brain was building a neural network by associatively structuring neurons, working hard to find a system good enough to both A) accurately handle these new arbitrary input parameters and B) process the information at a rate that allows you to respond in a timely fashion (i.e. drive without crashing). That ability to restructure based on sensory feedback is the essence of neuroplasticity; it's how humans specialize, how humanity shows such diverse talent as a species.

    That diversity of talent seems set to explode because here's what is new: digital sensors that are easy to use, increasingly accessible, and surpassing human faculty. Integrated devices like the SenseCam continue to add functionality and shrink in size and effort required, now encompassing a sensory cost-benefit solution that appeals not only to the disabled, but to the everyman.

    There may be no limits to the range of possible perception. Depending on your metaphysical standpoint, this might also mean there may be no limits to the range of possible realities.

    Wednesday, May 5, 2010

    Vanishing Words Tell Illuminating Tales

The Library of Congress set up a deal a few weeks ago to acquire Twitter's complete archive of public messages. It's not a particularly impressive number of bytes by itself, but it's a goldmine for computational analysis. That academic potential is why the government wants to obtain what might otherwise seem like a vast cacophony of meaningless chatter.

    In the WNYC Radiolab podcast released today, "Vanishing Words", Jad and Robert look at linguistic computation. Specifically, the idea that you can identify and predict dementia using word analysis of personal history, say a collection of letters or diary entries. Or if you're Agatha Christie, crime novels. If you've got a minute let Jad Abumrad & Robert Krulwich tell you about this:

    Working with Jad's mention of "the age of Twitter": online services like Twitter, Facebook, Google, and so on are quite earnestly working with words as scientific data; it's a core element of staying competitive in their business. Computational language analysis is a fascinating field, and luckily it also seems to have powerful economic incentive.

Word data is probably still the easiest way to directly get highly personalized information about a person (e.g. a status update, a tweet). Facebook Data Scientists, for example, work primarily to teach computer models to translate the words used in Facebook status updates into meaningful demographic data. The computers gather information and the scientists pick out interesting patterns so that better, more personalized advertising can be served. Better targeted ads translate to actual interest in ads, which translates to business.

    Computational research and analysis (like the studies mentioned in this Radiolab podcast) is exploding commercially and academically, like a virtual internet gold rush. Supply is growing exponentially as hundreds of millions of people use online services to communicate publicly. Demand is blowing up too, because we're realizing, like these scientists discovering something deeply personal about Agatha Christie, just how much we can learn from a simple collection of words.
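To give a flavor of how a "simple collection of words" can carry a signal: one crude measure of vocabulary richness is the type-token ratio, distinct words divided by total words. I'm choosing this metric purely for illustration (the actual Christie analyses used far more careful statistics), and the two sample sentences below are mine, not hers:

```python
import re

def type_token_ratio(text: str) -> float:
    """Distinct words divided by total words: a crude proxy for
    vocabulary richness (it falls as word choice grows repetitive)."""
    words = re.findall(r"[a-z']+", text.lower())
    return len(set(words)) / len(words)

# Invented samples: a varied sentence vs. a repetitive one.
varied = "the village slept while the detective paced, weighing motive against alibi"
repetitive = "it was a nice day and it was a nice place and it was nice"

print(f"varied:     {type_token_ratio(varied):.2f}")
print(f"repetitive: {type_token_ratio(repetitive):.2f}")
```

Run over a lifetime of letters or novels, even a toy statistic like this can trace a trajectory no single sentence reveals.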

    It's exciting to consider how much we may be able to learn about ourselves using non-contextual information. Words unrelated to each other in everyday usage still form patterns unseen on a larger scale. Everything you do leaves a mark on the world, and soon we may be able to better understand our markings and appreciate our histories holistically.

    I imagine the future like learning the answers to questions we never thought to ask.

Edit 5/11/10: Agatha Christie also wrote dozens of diary entries and notes about her books that may have shown signs of dementia. (via @JadAbumrad: "Agatha Christie's deranged notebooks (interesting to read after the latest @wnycradiolab podcast)")

    Edit 5/14/10: For an interesting exemplar of Facebook linguistic data-mining, see their Gross National Happiness trend index. The study describing the methodology used is cited below the chart.

    Wednesday, April 14, 2010

    Needs Less Cowbell (Trololo Explained)

    In early March I thought about Eduard Khil and his new claim to fame as the Trololo Man thanks to viral internet sharing of his 1976 vocalization performance. The song, "I Am So Glad To, Finally, Be Returning Home", was originally composed with lyrics, but composer Arkadiy Ostrovskiy and Eduard Khil together decided before showtime to strip the song of its content and replace the lyrics with vocalization singing, substituting vowel sounds for the words and resulting in the video you see today.

    So what happened to cause Arky and Edik to scrap the lyrics before showtime? Russian news org Life News asked Eduard:
"Originally, we had lyrics written for this song but they were poor. I mean, they were good, but we couldn't publish them at that time. They contained words like these: "I'm riding my stallion on a prairie, so-and-so mustang, and my beloved Mary is a thousand miles away knitting a stocking for me". Of course, we failed to publish it at that time, and we, Arkady Ostrovsky and I, decided to make it a vocalisation. But the essence remained in the title. The song is very naughty – it has no lyrics, so we had to make up something so that people would listen to it, and so this was an interesting arrangement." - Eduard Khil 14.3.2010

    Soviet Coat of Arms
The Trololo video was filmed in 1976, a time when Russian media was widely and routinely censored by the controlling USSR regime. Though Eastern bloc censorship in the 60's and 70's had declined comparatively after the Khrushchev Thaw that followed the end of Stalin's oppressive rule in the first half of the 20th century, the communist state-controlled media still felt strong pressure to reinforce a Socialist Realism narrative and repress contrary narratives in the shadow of the lingering Cold War.

    Arkady and Eduard were no strangers to the arts in this environment. They knew that lyrics about a cowboy and his pioneer wife evoking vivid landscapes of the American West would immediately raise a red flag (sans hammer & sickle) for the television broadcaster. Knowing that censorship officials would almost certainly reject the song with seemingly pro-Western lyrics, the change-up to vocalization was the only viable option. A meme was born that day, but it would be decades before anyone knew it.

    In the 1980's, about a decade after filming (and about two decades before the internet would remind the world of the video's existence by dubbing it with the onomatopoetic sobriquet "trololo"), Mikhail Gorbachev, the last General Secretary of the Soviet Union, began implementing his policies of Perestroika and Glasnost. Glasnost, a policy of "maximal publicity, openness, and transparency in government", brought with it freedom of information and new cultural freedoms for citizens of the Soviet states. This time of sweeping change under Gorbachev's leadership eventually led to the collapse of the USSR and the emergence of modern Russia and the independent nations out of the former bloc states of eastern Europe.

    A Soviet stamp propagandizing Perestroika and Glasnost

    So time passes and along comes the Интернет, where some nostalgic chap casually drops the clip onto YouTube under its original Russian title, "Я очень рад, ведь я, наконец, возвращаюсь домой". It sits for a little while as an esoteric example of bygone Russian entertainment, until earlier this year. In February and March 2010, the video shot upwards in popularity when it was 'discovered' for its quirkiness and re-purposed as a 'bait-and-switch' comedic device (see Rickroll). Hundreds of spoofs soon spawned around the original video, and with the help of large audience propagators (e.g. The Colbert Report) the trololo internet meme was well on its way. The original clip on YouTube alone has quintupled to five million views since last month.

    Eduard Khil welcomed the sudden flood of attention after apparently first learning about the phenomenon from his 13 year old grandson, who purportedly came home from school one day whistling the tune and had to explain to his grandfather why this old song was popular now because of the Internet (it's a series of YouTubes*).

    Partly due to the rush of Russian media attention, Eduard began making a handful of public appearances in mid-March, just weeks into the meme's upswing. In a broad response to the often-asked question of lyrics, Eduard published a video address in which he suggests that his fans collaborate to write new lyrics for the song. His earnest proposition is testament to our global society's relatively modern freedom to create and share with impunity. No governing body can truly censor media that they can't predict or intercept. Cultural memes like trololo are exemplary of the explosions of creativity that happen when it is both easy to create and easy to share.

    So... be creative and prolific! Don't forget how much power the Internet as a communicative medium grants you; even if there exist those who would censor you, you need not alter the fruits of your labor to fit another's narrative.

    Sunday, April 4, 2010

    Why it feels like Easter time

    Two quick Easterly follow-ups to the thought a few days ago on April Fools' Day as a holiday in celebration of the vernal equinox (i.e. spring).

• The vernal equinox, I've since learned, can be considered either the 'first day of spring' or the 'middle of spring' for the northern hemisphere depending on your perspective (the seasonal change in ground temperature versus the astronomical equinox, when sunlight strikes the earth's equator most directly, respectively).
• It's Easter! Why is it "Easter"? Easter is a critically important religious holiday for Christian faiths. So why not call it Resurrection Day (a few do), or the Festival of the Ascendance, or Jesus April Fools Day? According to the Oxford English Dictionary: "Etymologically, 'Easter' is derived from Old English. Germanic in origin, it is related to the German Ostern and the English east. [Bede] describes the word as being derived from Eastre, the name of a goddess associated with spring." So, at least in name if not spirit, Easter has strong ties to the season of spring.

    Ok, one more:
• Easter Bunnies and Easter eggs came into the picture about a millennium and a half after the holiday got its roots, around the 1600's in Germany (then the Holy Roman Empire). Originally, the German tradition of bringing eggs was not linked to Easter, nor were the eggs edible. America especially liked the tradition and adopted it from German immigrants (similar to the idea of Kris Kringle), and in the modern era the Easter bunny and colorful eggs are the ubiquitous symbols of a secularized Easter. This linking of imagery was not threatening to the Christian churches because bunnies and eggs are ancient symbols of fertility. From Wikipedia: "Eggs, like rabbits and hares, are fertility symbols of extreme antiquity. Since birds lay eggs and rabbits and hares give birth to large litters in the early spring, these became symbols of the rising fertility of the earth at the Vernal Equinox."
    I'll close with an intriguingly opposed perspective (so to speak) from an Australian social researcher, Hugh Mackay, on Easter:
    "A strangely reflective, even melancholy day. Is that because, unlike our cousins in the northern hemisphere, Easter is not associated with the energy and vitality of spring but with the more subdued spirit of autumn?" - Hugh Mackay

    Thursday, April 1, 2010

    All Fools Today

    Jester Mask
    Did you know that Americans originally created the Joker for playing cards in the 1800's as the highest card for the game of Euchre? Juke, but no joke.

    It's April 1st, and that means you've been made a fool. Not by me of course; the tidbit above is not prevarication despite your uncertainty. Nevertheless, you are being foolish. You look foolish right now and I can't even see you! You act downright medieval you're such a fool!

    (Please excuse the jester.)

    The exact founding of this "All Fools Day" on the first of April isn't known, though it is known that the practice has a history going back hundreds of years with about as many theories as to its origin. Owing to its long history (and reasons I'll detail), April Fools' Day is very popular in Westernized countries around the world. Interestingly, in the traditional culture of some countries such as the UK, Australia, and South Africa, you're only supposed to 'fool' before noon. If you prank someone after noon you're considered an April Fool.

April Fools' Day thrives because we're particularly ready for a goofy time of year, so to speak. With a physiological basis in enjoying the anticipation of unexpected thrill (à la dopamine), a mood of lighthearted puckishness meshes well with the time of year: moving from the cold Winter indoors to the onset of sunny Spring outdoors. The months following the winter holidays are perceived as particularly dreary, so by the time April arrives people are anxious to celebrate the seasonal change. Thinking sociologically, it's how societies celebrate the vernal equinox on a day that is easier to remember.

    Even though the jocundity we experience personally on this day isn't always memorable, we are always eager to hear about clever pranks. Lists of both the well-known and best recent April Fools circulate widely every year as testament to this desire, and you'll probably read at least one by the end of the day. Each communicative medium has its own class of hoaxes, from print to radio to TV. And now of course there's the Internet, the most likely source of your prank news in the modern era.

    The desire for and expectation of deception is problematic, however. That's probably why we indulge this hoax holiday only once a year. Important factual news announced on April 1st is automatically doubted by a public wary of being caught unawares. We keep ourselves at a distance lest we fall into some emotional trap and look silly (even as we quietly desire the silliness). Distancing is normal behavior we employ all the time, but it's especially pronounced when you're keeping yourself constantly alert to trickery. Problems occur when this heightened skepticism affects our perception of serious stories that would otherwise receive appropriately serious consideration. We restrict receptiveness and compliance, which can incapacitate systems that rely on precise communication or timely cooperation.

    To see the effects of this profound shift in how we approach news on April Fools' Day, one need only look at the stories that emerged the last time it came around. On April 1st, 2009, a school in the town of Albertslund, Denmark was nearly burned to the ground because the fire department refused to believe the first two callers who reported the fire. Naturally the firemen, being normally helpful people, rushed to extinguish the flames once repeated calls convinced them of their mistake, and the school was fortunately not a total loss. Nonetheless, the anecdote shows that losing response time in a time-critical situation can have catastrophic consequences.

    Terrorizing your lighthearted day of puckishness a little more personally, one can easily imagine that the psychological caution we employ on April Fools' Day acts as convenient cover for malicious pranksters. In another story from last year, on and around April 1st, 2009, American mainstream media paid a great deal of attention to a rapidly spreading computer worm (often cited as a variant of Conficker). Without knowing enough to assess the immediate danger of the virus, news outlets warned the public at the speed of panic, as news is wont to do.

    Unlike the Danish school fire, the danger posed by the worm was relatively minimal, especially in contrast to the slew of new viruses unleashed on the web every day. Yet as in the Danish incident, the ambiguity of the purported threat still led to overreaction. In the case of the fire department it was inattentiveness; in the case of the news outlets it was over-attentiveness, drowning out other, more relevant news. Either extreme leads to neglect.

    Now that you know you're playing the fool whether you like it or not, bear in mind the distinction between rational response and irrational response as you take in the day's news this year. After all, it's April 1st and everyone's a bit foolish. So keep your jokes... practical.

    Wednesday, March 31, 2010

    Why Is Philosophy

    When someone asks what philosophy is, ask them why until they get it.

    Tuesday, March 23, 2010

    So what if the cake is a lie?

    The scientastic storytellers over at WNYC Radiolab released a new short today, "The Bus Stop". In ten minutes they weave a beautifully bittersweet tale about a simple way of coping with the 'necessary evils' involved in helping dementia and Alzheimer's sufferers.

    Elderly patients often wander away into dangerous scenarios due to temporary confusion or disorientation. In serious cases, it becomes enough of a problem that patients have to be locked in for their own safety, a decision hard on patients and caretakers alike. The Benrath Senior Center in Düsseldorf, Germany came up with "an idea so simple you almost think it wouldn't work."

    Here's the full Radiolab short:

    The bus stop idea is a beautiful method of mitigating a widespread fear of assisted living. I love this story, and I especially like the idea Radiolab is conveying here about natural, personal ways of assisting the confused or delirious.

    In that spirit of support, however, I would object to Lulu and Jad’s stated assumption that the bus stop is a "lie". No one deliberately deceives the patients into waiting at the bench or tells them that something is going to happen if they go there. We think of it as a lie because, to us, the important part is our expectations about the bus: when it comes and where it goes. It’s a lie to have a bench and a sign marking a bus stop if you deceive me into not getting home from work on time.

    For the seniors, the important part is only that a bus will (probably) come. They have a sense of urgency or deep anxiety that needs to be resolved, and the bus stop is a symbolic destination signaling that they’ve begun to solve the problem by taking action. They see a familiar roadside bench that symbolizes “going places” without actually going anyplace. While you’re waiting for the bus, that’s as fast and as far as you can go; you have to wait because you can’t control the bus and nothing you do matters until it arrives, so you might as well relax. That's why there's a bench.

    The bench and the sign are first and foremost physical truths, and lies only second, insofar as they play on our superficial expectations. This bench relaxes anxious minds as much as bodies. A burden is lifted because there's no sense worrying until the bus comes, so a forgetful mind easily ambles along to enjoy the beautiful outdoors for a while...

    Putting it another way, the bus stop is at least a “truth” in the patient’s dream-world as Lulu and the nurse Regine describe. I hope this kind of remedial mental treatment catches on and more people learn about the success of simple, non-invasive solutions like the bus stop.

    Lab notes: "The Bus Stop" and "Do I Know You?" podcast shorts follow the most recent Radiolab episode, "Lucy".

    Wednesday, March 3, 2010

    What do you mean he's not singing? Just look!

    Эдуард Анатольевич Хиль (Edward Anatolevich Hill) is gaining fame again for a once-forgotten performance in Soviet Russia over thirty years ago in 1976. Back then it was considered genuine pop TV entertainment, but in today's culture it has resurfaced as the "trololo" internet meme because of its strangeness more than its catchy tune. Why it is strange to modern viewers isn't hard to see once you start watching:

    I realize many of you don't speak Russian, so I've transcribed the complete lyrics here so you can follow along:

    Ahhhhh ya ya yaaaah, ya ya yaaah, yaaah, ya yah.
    Ohohohoooo! Oh ya yaaah, ya ya yaaah, yaaah, ya yah.
    Ye-ye-ye-ye-yeh ye-ye-yeh ye-ye-yeh, oh hohohoh.
    Ye-ye-ye-ye-yeh ye-ye-yeh ye-ye-yeh, oh hohohooooooooooo!
    -aaaaoooooh, aaaooo hooo haha

    Nah-nah-nah-nah-nuh-nuh, nah nuh-nuh, nah nuh-nuh, nah nuh-nuh, nuh-nah.
    Nah-nah-nah-nun, nun-ah-nah, nun-ah-nah, nah-nah-nah-nah-nah!
    Nah-nah-nah-nah-naaaaaaaaaaaaaaaaaaaaaaaaaah! Dah dah daaaaaaaaah...
    Da-da-daaah, daaah, daa-daah.

    Lololololoooooooo! La la-laaaaaah, la la laah, lol, haha.
    Oh-ho-ho-ho-ho, ho-ho-ho, ho-ho-ho, oh-ho-ho-ho-ho!
    Oh-ho-ho-ho, ho-ho-ho, ho-ho-ho, lo-lo-loooo!

    Luh luh lah, lah, lah-lah.
    Da-da-daaah, daaah, daa-daah.

    Lololololo, lololo, lololol, la la la la yaah!
    Trolololo la, la-la-la, la-la-la-
    Oh hahahaho! Hahaheheho! Hohohoheho! Hahahaheho!
    Lolololololololo, lololololololol, lololololololol, lololo LOL!

    Ahhhhh! La-la-laaah! La la-laaah, laaah, la-la.
    Oh-ho-ho-ho-hoooooo! La, la-laaaah, lalala, lol, haha.
    Lolololo-lololo-lololo, oh-ho-ho-ho-ho!
    Lolololo-lololo-lololo, oh-ho-ho-ho hooooooooooooooooooooo!
    (Wave goodbye)
    Note: Transcribing these lyrics took longer than you might think.
    Note: You can download Trololo Sing Along with the lyrics from Vimeo for free (see About This Video).

    Getting serious now, why does a song like this, with no discernible words (vocalise style), still work as a music video? Body language! Eduard isn't using words, but he's a recognizable performer singing a story about a feeling, Ostrovsky's "I Am So Happy to Finally Be Back Home" (Russian: Я очень рад, ведь я, наконец, возвращаюсь домой), using his facial expression, posture, and tonality.

    It's a strange sight in contrast to modern Western norms, but considering that human communication is more non-verbal than verbal, a singer lip-syncing to non-words is actually saying a lot.

    Edit 3/7/10: Looks like since this writing the meme has picked up enough momentum to generate an English Wikipedia page for Eduard Khil' (in addition to its Russian counterpart). There's an interesting quote from Khil, now living in St. Petersburg, Russia, who was recently asked about his new-found internet fame by a Russian news outlet. Here's his reply:

    I haven't heard anything about it. It's nice, of course! ...
    Thereby hangs a tale about this song. Lyrics were written for it, but they were poor. I mean, they were good, but one couldn't publish them at that time. They contained words like these: "I'm riding my stallion, such-and-such a mustang, and my beloved Mary is a thousand miles away knitting a stocking for me". Of course, we failed to publish it at that time, and we, Arkady Ostrovsky and I, decided to make it a vocalise. But the essence remained in the title. Yes, it's a little prankish – it has no lyrics, so we had to make up something so that people would listen to it, and so there was an interesting arrangement.
    Eduard Khil, Life News (Russian)

    Edit 3/15/10: Eduard has been further pressed by Russian media and he seems to be gladly embracing the new popularity trend. He's even posted a video address to the world and recently sat down to watch YouTube parodies on live TV.

    Addendum 4/14/10: Read more about trololo and the reasoning behind the vocal lyrics in the new thought posted here.
