Grants are the scientist’s homework

I can’t believe I did not realize this before. Thanks to Mickey Kosloff for enlightening me by posting this on his Facebook.

 

Of course grants are like homework. You don’t want to do them; anything else is better, really; there are multiple excuses not to do them right now; suddenly, everything else has higher priority.

BUT if you do them (grants/homework) badly, you get a bad grade. Actually, even if you do them well, you can get a bad grade. You will get left behind (no promotion) or kicked out of school (no tenure). And if you keep not doing them, or not doing them well, you won’t get to play (do research).

OK, I messed around enough. Back to grant writing…


Announcement: a Competition to Improve Wikipedia Entries in Computational Biology

Improve Wikipedia entries in computational biology, and you too can win cash prizes, a free membership to the International Society for Computational Biology, or a dinner date with an ISCB officer of your choice!

OK, maybe not the last one, but definitely the first two.

The ISCB is announcing a competition to improve Wikipedia entries that have to do with computational biology. This is a project for students and trainees only, so all you professors can go back to writing grants & papers. From the competition announcement page:

The International Society for Computational Biology (ISCB) announces an international competition to improve the coverage on Wikipedia of any aspect of computational biology. A key component of the ISCB’s mission to further the scientific understanding of living systems through computation is to communicate this knowledge to the public at large. Wikipedia has become an important way to communicate all types of science to the public. The ISCB aims to further its mission by increasing the quality of Wikipedia articles about computational biology, and by improving accessibility to this information via Wikipedia. The competition is open to students and trainees at any level either as individuals or as groups.

The prize for the best article will be an award of $500 (US) provided by the ISCB and a year’s membership to the ISCB. A second prize of $200 and a year’s membership to the ISCB will also be awarded.

 

So read the rules carefully, and then go ahead and contribute to Wikipedia. Instructors: see how to engage your class through the Wikipedia education program.


Short note on getting students busy

I recently read this post about lacunae in  Bioinformatics.  One complaint was:

I know that documentation is a thankless task. But some parts of the Bio[Java|Perl|Python] libraries are described only as an API? This became apparent to me when I had to teach the libraries to students. What does this module do and why does it do it that way? Uh …

Hey, when life gives you students, you make projects! How about having them document the missing bits in the Bio* projects? After all, they have to know the functionality of those libraries anyway. That way, they learn how to write documentation, engage with open-source developers, and give back to the community. They can also come up with cookbook examples (a good documentation and coding project). At least for Biopython, it’s as easy as getting on the GitHub repository. Note: contact the respective project’s mailing list first to coordinate this; they’ll know best how to guide you technically.
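To give a flavor of what such a cookbook entry might look like, here is a minimal sketch using Biopython’s Bio.SeqIO module. The task and the file name (“example.fasta”) are just placeholders of my choosing; the point is the format a student could aim for: a few lines of working code, a docstring, and comments a newcomer can follow.

```python
from Bio import SeqIO  # Biopython's unified sequence I/O module


def summarize_fasta(path):
    """Print the ID, length, and GC content of every record in a FASTA file."""
    for record in SeqIO.parse(path, "fasta"):
        seq = record.seq.upper()
        gc = 100.0 * (seq.count("G") + seq.count("C")) / len(seq)
        print(f"{record.id}\tlength={len(seq)}\tGC={gc:.1f}%")


if __name__ == "__main__":
    summarize_fasta("example.fasta")  # placeholder path; any FASTA file will do
```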

Credit: Jake von Slatt http://steampunkworkshop.com/lcd.shtml

 


Should research code be released as part of the peer review process?

So there have been a few reactions to my latest post on accountable research software, including a Twitter kerfuffle (again). Ever notice how people come out really aggressive on Twitter? Must be the necessity to compress ideas into 140 characters. You can’t just write “Interesting point you make there, sir. Don’t you think that your laudable goal would be better served by adopting the following methodolo…” Oops, ran out of characters. OK, let’s just call him an asshole: seven characters used. Move on.

What I will try to do here is compile the various opinions expressed about research software, its manner of publication, and accountability. I will also attempt to explain my own opinion on the matter; I do not think mine is the only acceptable one. As this particular subject is based on values, my take is shaped by my experiential baggage, as it were.

Back to business.

Continue reading Should research code be released as part of the peer review process? →


Can we make accountable research software?

Preamble: this post is inspired by a series of tweets that took place over the past couple of days. I am indebted to Luis Pedro Coelho (@LuisPedroCoelho) and to Robert Buels (@rbuels) for a stimulating, 140-char-at-a-time discussion. Finally, my thanks (and yours, hopefully) to Ben Temperton for initiating the Bioinformatics Testing Consortium.

Science is messing around with things you don’t know. Contrary to what most high school and college textbooks say, the reality of day-to-day science is not a methodical hypothesis -> experiment -> conclusions, rinse, repeat. It’s a lot messier than that. If there is any kind of process in science (method in the madness), it is something like this:

1. What don’t I know that is interesting? E.g. how many teeth does a Piranha fish have.

2. How can I get to know that? That’s where things become messy. First comes devising a method to catch a Piranha without losing a limb, so you need to build special equipment. Then you may want more than one fish, because the number of teeth may vary between individuals. It may be gender dependent, so there’s a whole subproject of identifying boy Piranha and girl Piranha. It may also be age dependent, so how do you know how old a fish is? Etc. etc.

3. Collect lots of data on gender, age, diet, location, and of course, number of teeth.

4. Try to make sense of it all. So you may find that boy Piranha have 70 teeth and girls have 80 teeth, but with juveniles this may be reversed, but not always, and it differs between the two rivers you visited. And in River A they mostly eat possums that fall in, but in River B they eat fledgling bats who were too ambitious in their attempt to fly over the river, so there’s a whole slew of correlations you do not understand… Also, along the way you discover that there is a new species of pacifist, vegetarian Piranha that live off algae and have a special attraction to a species of kelp whose effect is not unlike that of Cannabis on humans. Suddenly, investigating the Piranha stonerious becomes a much more interesting endeavor.

Continue reading Can we make accountable research software? →


ZomBee Watch, a new citizen-scientist project

“If the bee disappeared off the surface of the globe, man would have only four years to live”.

— Albert Einstein

No, there are no typos in the title. And, no, there is no zombie outbreak towards which people are being recruited to fight. (Well, yeah, that’s what “they” always say, isn’t it?) The ZomBee Watch project is concerned with honeybees, and with a probable contributing cause of colony collapse disorder (CCD), which has decimated beehives across North America and Europe. Many possible causes have been investigated, including loss of genetic diversity leading to vulnerability, parasitic mites that carry a lethal virus, fungi, pesticides, and various combinations of these and other factors. Since honeybees pollinate so many of our crops, there is a real concern for food security if worldwide bee populations drop. Wild plant diversity can also be threatened, opening the way to ecosystem changes.

One possible contributor to CCD is this little guy (or rather gal):

Photo: Christopher Quock, San Francisco State University

See the little fly riding the bee’s back? That’s the female Apocephalus borealis laying its eggs in the bee. A. borealis larvae hatch within the bee, causing it to leave the hive at night and be attracted to lights. The bee dies, while the larvae leave it and pupate next to it. New flies then emerge, which go on to infect more bees.

So the citizen-scientist project ZomBee Watch, from San Francisco State University and the Natural History Museum of Los Angeles County, aims to find out how widespread Zombie Fly (as they call A. borealis) infections are. They would like people to help them establish a baseline for the number of bees that leave the hive at night, and for how many are infected by the Zombie Fly. From their website:

There are many ways you can get involved. It can be as easy as collecting honey bees that are under your porch light in the morning, under a street light or stranded on sidewalks. If you are a beekeeper, setting up a light trap near one of your hives is the most effective way to detect ZomBees. It’s easy to make a simple, inexpensive light trap from materials available at your local hardware store. To test for the presence of Zombie Fly infection all you need to do is put honey bees you collect in a container and observe them periodically. Infected honey bees give rise to brown pill-like fly pupae in about a week and to adult flies a few weeks later.

 

And a slide tutorial:

ZomBee Watch Tutorial from asimsfsu

 

Go forth and stop the zombee apocalypse! The ZomBee Watch site has more information on how to join. Also, read their paper in PLoS ONE.

 

Andrew Core, Charles Runckel, Jonathan Ivers, Christopher Quock, Travis Siapno, Seraphina DeNault, Brian Brown, Joseph DeRisi, Christopher D. Smith, & John Hafernik (2012). A New Threat to Honey Bees, the Parasitic Phorid Fly Apocephalus borealis. PLoS ONE, 7(1), e29639. DOI: 10.1371/journal.pone.0029639


Postdocs in Genome-Scale Proteomics, Imaging-based Screening and Cancer Biology

 

Three postdoc positions are available at a great lab in Denmark. Read on:

 

We are seeking three Postdoctoral Fellows to The Cellular Signal Integration Group (C-SIG). The group is a network biology research group located at the Department of Systems Biology at the Technical University of Denmark (DTU). Our department represents one of the largest network biology centers in academia and has a highly multi-disciplinary profile.
In the lab, we explore biological systems by both generating high-throughput data and developing and deploying algorithms aimed at predicting cell behavior with accuracy similar to that of weather or aircraft models. Our focus is on studying cellular signal processing and decision-making.

Job description
The positions are available in the Linding lab. Prof. Dr. Linding is a world-leading network biologist whose laboratory is interested in the mechanisms by which cells use signaling networks to respond and adapt to changes in their environment. We are seeking highly motivated, bright Researchers to join our highly dynamic, productive and stimulating lab.

In these positions, the successful applicant will work in a fast-paced dynamic environment, performing large-scale integrative studies to advance research into biological and complex systems in order to develop new understanding of evolution and therapies for human diseases.

We are seeking experienced and motivated Postdocs to work on projects related to our recent studies on phosphorylation networks and network medicine (Erler & Linding, Cell 2012; Bakal, Linding et al., Science 2008; Tan et al., Science 2009; Tan et al., Science 2011; and Jørgensen et al., Science 2009).

The projects involve different combinations of RNAi screening, mass-spectrometry, NGS/genomics and large-scale cell culturing to systematically investigate dysregulation of signaling networks in cancer. By combining computational modelling with genomics, quantitative phospho-proteomics and high-throughput imaging we aim to perform global modeling of biological systems and cancer progression.

Continue reading Postdocs in Genome-Scale Proteomics, Imaging-based Screening and Cancer Biology →


Taming the Impact Factor

Quite a bit has been written about how the journal impact factor (JIF) is a bad metric. The JIF is supposed to measure a journal’s impact using a formula that normalizes the number of citations a journal’s articles receive over a given time frame (typically a year). It is calculated exclusively by Thomson-Reuters, and is trademarked by that company.

Reminder: a journal’s impact factor (JIF) is calculated as follows:

Journal X’s 2008 impact factor =
Citations in 2008 (in journals indexed by Thomson-Reuters Scientific) to all articles published by Journal X in 2007–2008
divided by
Number of articles deemed to be “citable” by Thomson-Reuters Scientific that were published in Journal X in 2007–2008.

Among the criticisms leveled at the JIF are that it is subject to editorial manipulation (mainly by lowering the denominator of “citable articles”) and that it is irreproducible. The JIF is also an arithmetic mean of a non-Gaussian distribution and, as such, is the wrong measure of central tendency to use: if forced to pick one, a median would be more appropriate. The reason the distribution is non-Gaussian is that, inevitably, a few papers are cited a large number of times while most papers are cited only a few times, giving a very skewed, roughly exponential distribution. The JIF also lacks robustness, as the JIF of a journal in any given year can be dramatically skewed by a single paper.
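To see what that means in practice, here is a toy sketch (the citation counts below are invented for illustration): the JIF-style arithmetic mean over a journal’s citable items is dragged far upward by a single heavily cited paper, while the median barely notices it.

```python
import statistics

# Hypothetical citation counts for the "citable" items a journal published in the
# two-year JIF window; one heavily cited review dominates the total.
citations = [0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 5, 5, 6, 8, 10, 12, 350]

jif_style_mean = sum(citations) / len(citations)  # what the JIF formula reports
median = statistics.median(citations)             # a more robust central tendency

print(f"mean (JIF-style): {jif_style_mean:.1f}")  # ~24.3, driven by one paper
print(f"median:           {median:.1f}")          # 3.0, what a 'typical' paper gets
```

Swap in any heavy-tailed set of counts and the gap between mean and median persists, which is the core of the robustness criticism.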

Moreover, the JIF is misused to evaluate individual researchers’ achievements, when in fact this is a completely wrong application of the metric, as even Thomson-Reuters state:

In the case of academic evaluation for tenure it is sometimes inappropriate to use the impact of the source journal to estimate the expected frequency of a recently published article.

The European Association of Science Editors has issued a statement recommending that “journal impact factors are used only – and cautiously – for measuring and comparing the influence of entire journals, but not for the assessment of single papers, and certainly not for the assessment of researchers or research programmes”.

Yet journals continue to trumpet their impact factors as a major selling point. Every year around this time, the new JIFs are published. Editors are quick to herald new rises in their JIFs, no matter how small, without pausing to consider that what they are celebrating may very well be transient noise (1.5%? Really?).

Continue reading Taming the Impact Factor →


ISMB 2012 Vignettes Pt. 3: Swag

Promotional materials are part of any conference. At scientific meetings, the swag usually comes from the booths of product promoters, science publishers, and scientific societies. It was a nice surprise to see a Federal funding agency, the US Department of Energy, give away decks of cards. I’m a sucker for cards, so I took a deck. (Full disclosure: I received a grant from the DOE which partially funded a meeting I held last year.) The cards show organisms and microbial communities whose genome sequences are of interest to the DOE’s Biological and Environmental Research (BER) program, and which are relevant to BER’s mission of funding research into bioenergy, carbon cycling, and biogeochemistry. More details and some sample cards in the gallery.

 

FASEB, the Federation of American Societies for Experimental Biology, also gave out promotional materials. They had bumper stickers and buttons advocating teaching evolution. The intentions behind this bumper sticker are good, but the result is somewhat ironic.

 

You see, depicting human evolution using the iconic “march of progress” from monkeys through apes to humans is wrong on several levels. First, there is the implicit idea that evolution makes organisms “better”, and that humans are “better” than monkeys. Here is the answer to that misconception from the excellent site Understanding Evolution at UC Berkeley:

MISCONCEPTION: Evolution results in progress; organisms are always getting better through evolution.

CORRECTION: One important mechanism of evolution, natural selection, does result in the evolution of improved abilities to survive and reproduce; however, this does not mean that evolution is progressive — for several reasons. First, as described in a misconception below (link to “Natural selection produces organisms perfectly suited to their environments”), natural selection does not produce organisms perfectly suited to their environments. It often allows the survival of individuals with a range of traits — individuals that are “good enough” to survive. Hence, evolutionary change is not always necessary for species to persist. Many taxa (like some mosses, fungi, sharks, opossums, and crayfish) have changed little physically over great expanses of time. Second, there are other mechanisms of evolution that don’t cause adaptive change. Mutation, migration, and genetic drift may cause populations to evolve in ways that are actually harmful overall or make them less suitable for their environments. For example, the Afrikaner population of South Africa has an unusually high frequency of the gene responsible for Huntington’s disease because the gene version drifted to high frequency as the population grew from a small starting population. Finally, the whole idea of “progress” doesn’t make sense when it comes to evolution. Climates change, rivers shift course, new competitors invade — and an organism with traits that are beneficial in one situation may be poorly equipped for survival when the environment changes. And even if we focus on a single environment and habitat, the idea of how to measure “progress” is skewed by the perspective of the observer. From a plant’s perspective, the best measure of progress might be photosynthetic ability; from a spider’s it might be the efficiency of a venom delivery system; from a human’s, cognitive ability. It is tempting to see evolution as a grand progressive ladder with Homo sapiens emerging at the top. But evolution produces a tree, not a ladder — and we are just one of many twigs on the tree.

Second, the “march of progress” also shows species replacement, leading to the mistaken concept (related to the “progress” concept) that later, more “progressive” species replace the “less progressive” ones. That this is wrong is evident in the fact that the lion’s share of life on earth consists of microbes. The species replacement misconception is also seized upon as a strawman argument by creationists, most famously in the argument “If evolution is true, why are there still monkeys?” The answer is threefold: first, humans did not evolve from monkeys; rather, humans and monkeys evolved from a common ancestor. Second, humans are not “more advanced” than monkeys (see above), at least not in light of evolution. Finally, the concept that “more advanced” species replace “less advanced” ones is patently wrong (and is itself based on the flawed premise of the “progress” concept).

So while FASEB does have thousands of members who understand evolution, it is unfortunate none of them reviewed the design of this sticker.

 

 


ISMB 2012 Vignettes Pt. 2: Phylogenomic Approaches to Function Prediction

I chaired the Automated Function Prediction meeting at ISMB this year. The meeting, held (almost) every year, deals with the latest approaches to predicting protein function from genetic and genomic data, and also discusses the Critical Assessment of Function Annotation. This year we were fortunate to have Jonathan Eisen as our keynote speaker. Ever wondered whether we actually need all those genomes that are being sequenced on a daily basis? Jonathan claims that we do, since it is through a gene’s history and evolution that we can properly understand — and predict — a gene’s function.

 

Here is Jonathan’s talk:


ISMB 2012 Vignettes Pt. 1: Grant Writing Tips

ISMB 2012 was an excellent meeting. The organizers were celebrating the 20th anniversary of ISMB meetings, and had carefully chosen the keynote speakers not only to reflect the latest advances in bioinformatics, but also to talk about past accomplishments, how they led us to where we are now, and what the future may hold. Larry Hunter’s retrospective talk was very good in that respect: the slides, going from punch cards through the Apple II to today, together with his hilarious narrative, were reason enough to listen to him speak.

Over the next few days, I will post bits and pieces from my experience at ISMB. To start off, here are Russ Altman’s (@rbaltman) slides from the Grant Writing Workshop that Yana Bromberg organized. Russ is a great science communicator, and here he was at his best: teaching scientists how to effectively communicate their research ideas to other scientists, the goal being to get money for said ideas. His slides are simple and self-contained, so I will just jot down a few points Russ made at the meeting, after the jump.

 

Ismb grant-writing-2012 from Iddo
1. Follow the rules: this is not the time to get creative with fonts, margins, etc. Your grant will get trashed by an autocheck system if you do not follow the exact rules. Also, talk to the program officer: get on their radar, and make sure you are sending the grant to the correct program.
2. The reviewer is in a bad mood: they took on a large pile of grants to review, waited too long to start, and are mad at themselves, the agency, and their travel. They may have read a few grants before yours, and if those were not very good, they are mourning the future of science by now. They are looking for a reason not to read your grant through and to move on to the next one in the pile. Therefore, 1) do not give them a reason to trash your grant; 2) try to brighten their day and restore their faith in scientists.
3. Make it beautiful: you are a reviewer. You just worked your way through a page that is one solid block of text, and nothing is more disheartening than turning the page and getting more of the same. As a proposal writer, make sure you break the monotony: use bullet points, headings, etc. to guide the reviewer’s eyes to the important stuff (bold is good for the latter, but don’t overdo it). And, of course, an image is worth 1,000 words. Literally, sometimes.
4. Good ideas are hard to come by.  Russ: “I can’t help you with that one, but see bullet point 2”.
5. Abstract structure is critical: address all the points made in the slides. Why is this research area important, what unsolved problem is there, and how are you going to solve it?
6. Specific Aims: allow yourself maximum flexibility after the award. List what you are going to do, as the “how” is likely to change.
7. Literature review: 1) show that you are smart enough to be in the field; 2) cite your potential reviewers, or known ones. The NIH gives a roster of who is on your study section.
8. Prior results: you may not want to present the proposed work as “done”, so you may want to place some prior results in the Methods section, as appropriate.  Don’t lie though: if a Specific Aim has already been completed, you should design another Specific Aim.

9. Methods: pretty much a self-contained slide. My own take: work hard to make the methodology accessible to an audience outside your field. It is easy to get caught up in jargon and verbal shortcuts known to you and to the 10 other people worldwide performing these methods. Don’t yield to the temptation to make your grant shorter by obfuscating the description of the methods.

10. Other things: fairly self-explanatory. I might add that naming a person (postdoc or grad student) on a grant rather than a “TBA” seemed to do some good, judging by the comments on a grant of mine that got funded.

Thanks,  Russ!

 


Spaghetti Western Blot

This is simply brilliant. The best thing since Bad Project.


Intermission: two vids

Too busy with grant deadlines and preparations for the looming ISMB 2012 (including, of course, the Automated Function Prediction meeting). So here are two nice vids to pass the time.

Jennifer Gardy and Tom Scott made this great A-Z of bacteria video.  Guaranteed to freak out your kids, or yourself.

So how many of those did you have?

Here’s a nice animation to accompany Tom Lehrer’s (and Gilbert & Sullivan’s) “The Elements” song, set by TimwiTerby.

OK. I didn’t realize he covered almost the whole table in that one.

Back to work.


The Evolution of Music

A collaboration between a group at Imperial College and the Media Interaction group in Japan yielded a really cool website: darwintunes.org. The idea is to apply Darwinian-like selection to music: starting from a garble, after several generations you get something that is actually melodic and listenable. Or a Katy Perry tune. Whatever. The selective force is the appeal of the tune to the listener. From the paper published yesterday in PNAS:

The processes underlying a single DarwinTunes population are shown in Fig. 1A. At any given time, a DarwinTunes population has 100 loops, each of which is 8 s long. Consumers rate them on a five-point scale (“I can’t stand it” to “I love it”) as they are streamed in random order. When 20 loops have been rated, truncation selection is applied whereby the best 10 loops are paired, recombine, and have two daughters each.

 

 

Fig. 1. (A) Evolutionary processes in DarwinTunes. Songs are represented as tree-like structures of code. Each generation starts with 100 songs; however, for clarity, we only follow one-fifth of them. Twenty songs are randomly presented to listeners for rating, and the remaining 80 survive until the next generation; thus, at any time, the population contains songs of varying age. Of the 20 rated songs, the 10 best reproduce and the 10 worst die. Reproductives are paired and produce four progeny to replace themselves and the dead in the next generation. The daughters’ genomes are formed from their parents’ genomes, subject to recombination and mutation. (B) Evolution of musical appeal. During the evolution of our populations, listeners could only listen to, and rate, songs that belonged to one or, at best, consecutive generations. Here, they were asked to listen to, and rate, a random sample of all the songs that had previously evolved in the public population, EP1. Thus, these ratings can be used to estimate the mean absolute musical appeal, M, of the population at any time. To describe the evolution of M, we fitted an exponential function. Because the parameter that describes the rate of increase of M is significantly greater than zero, M increases over the course of the experiment
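For the algorithmically inclined, here is a minimal sketch of the truncation-selection scheme described in the quote and caption above. The population sizes (100 loops, 20 rated, best 10 reproduce, four progeny per pair) come from the paper; everything else — the list-of-numbers “genome”, the toy stand-in for listener ratings, and the recombination/mutation details — is my own simplification for illustration, since the real DarwinTunes genomes are tree-like structures of code rated by humans.

```python
import random

POP_SIZE, RATED, GENOME_LEN = 100, 20, 16  # 100 loops, 20 rated per round (paper); toy genome length


def random_genome():
    return [random.random() for _ in range(GENOME_LEN)]


def rate(genome):
    """Stand-in for a listener's 1-5 rating; here, how close the genes are to 0.5."""
    return 5 - 4 * sum(abs(g - 0.5) for g in genome) / len(genome)


def breed(mum, dad):
    """Uniform recombination with a small per-gene chance of mutation."""
    child = [random.choice(pair) for pair in zip(mum, dad)]
    return [g + random.gauss(0, 0.05) if random.random() < 0.1 else g for g in child]


def next_generation(pop):
    # 20 loops go out for rating, ranked best to worst by the (toy) listener score
    rated = sorted(random.sample(range(len(pop)), RATED),
                   key=lambda i: rate(pop[i]), reverse=True)
    parents = rated[:10]                                      # best 10 reproduce; worst 10 die
    progeny = [breed(pop[a], pop[b])
               for a, b in zip(parents[0::2], parents[1::2])  # pair the reproductives
               for _ in range(4)]                             # four progeny per pair
    rated_set = set(rated)
    survivors = [g for i, g in enumerate(pop) if i not in rated_set]  # 80 unrated loops survive
    return survivors + progeny                                        # population back to 100


population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(50):
    population = next_generation(population)
print(f"mean toy rating after 50 generations: {sum(map(rate, population)) / POP_SIZE:.2f}")
```

Run repeatedly and the mean rating drifts upward over the generations, which is the qualitative result the paper reports for its M statistic (with real listeners and real song genomes, of course).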

 

Great. This blog is about biology and music; I think this is the first post in which I’ve actually had both together. Nothing more to say, really, except that DarwinTunes is seriously going to ruin my productivity today.

 

Here are the evolving tunes. From the atonal generation 0 to the rather palatable, if somewhat dull, last generation (3630 as of this writing):


Robert M. MacCallum, Matthias Mauch, Austin Burt, & Armand M. Leroi (2012). Evolution of music by public choice. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.1203182109


Annotating Proteins in the Uncanny Valley

The Uncanny Valley

Every day, software appears to do more things that we thought were exclusively in the human realm, like beating a grandmaster in chess or carrying out a conversation. I say “appears” because there is obviously no self-aware intelligence involved, as this rather bizarre conversation between Cleverbots demonstrates. For humans, playing chess and carrying out a conversation are products of a self-aware intelligence, one that gives rise to symbolic representations of information that can be conveyed in speech or in writing, to the enjoyment of an abstract game, and to many, many other good things.


In machines, by contrast, playing chess and conversing are an imitation, playacting if you will, of human activity. When Cleverbot talks, it is not fueled by sentience, but rather by many prior examples of scenarios fed into a learning algorithm. Chess and conversation are not produced by a real intelligence, any more than the actor playing Hamlet really dies at the end of the play (sorry for the spoiler).

So we can relax: we are still not HAL 9000 or Skynet ready, nor is the human/machine merger singularity any nearer than it was. Still, certain upshots of human intelligence seem to be emulated, with some success, by computers. The Cleverbot conversation is a good example: it is a mostly comical but occasionally eerie caricature of human conversation, and the eeriness stems from it being uncomfortably close to what we perceive to be a human-only activity, yet not quite close enough. There is a name for the phenomenon of getting weirded out by computer and robotic activities that are too similar to our own: the Uncanny Valley. At a certain place on the graph between familiarity and human likeness, familiarity drops precipitously, and we freak out. In the Uncanny Valley, when certain things become almost, but not quite, like humans (zombies, bunraku puppets, actroids, and the post-multi-surgical Michael Jackson), the likeness makes us feel uncomfortable.

 

Source: spectrum.ieee.org http://is.gd/0w7Xim

Continue reading Annotating Proteins in the Uncanny Valley →
