Towards Evaluating Social Media for Scholarly Communication

There are A LOT of people and organizations looking at ways of using modern web technologies (web2.0, social media, collaboration, and other buzzwords as well) to enhance the creation, modification, and dissemination of research and other scholarly work.  There's even a conference going on right now discussing the matter (see some of the conference discussion on FriendFeed).  And it seems like every day brings a host of new tools, startups, and websites to enhance collaboration, sharing, and communication between scientists.

With so many sites evolving so quickly, there has been a call to figure out how to critically evaluate, compare, and contrast these tools.  One way to look at them is to examine how well they achieve the core goals of scientific communication * :

  • Registration of a new idea or claim to an individual or group of collaborators
  • Certification / peer-review of a claim
  • Awareness / access to the details of the claim
  • Archival of the claim
  • Reward for the registrant(s)

Imagine we could assign each site or tool a score for each of these goals.  We could then plot the scores on a radar graph like this:

Radar Graph Visualization of Social Media for Scholarly Communication
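For the curious, here's a minimal sketch of how such a radar graph could be generated with matplotlib.  The tool names and scores below are hypothetical placeholders, not the estimates from the image above:

```python
import numpy as np
import matplotlib.pyplot as plt

goals = ["Registration", "Certification", "Awareness/Access", "Archival", "Reward"]

# Hypothetical 0-5 scores for illustration only; replace with real assessments.
scores = {
    "High-impact journal": [5, 5, 4, 5, 5],
    "Preprint server":     [4, 1, 3, 4, 2],
    "Science blog":        [3, 2, 2, 2, 1],
}

# One angle per goal, repeating the first point to close each polygon.
angles = np.linspace(0, 2 * np.pi, len(goals), endpoint=False).tolist()
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
for name, vals in scores.items():
    vals = vals + vals[:1]  # close the loop
    ax.plot(angles, vals, label=name)
    ax.fill(angles, vals, alpha=0.1)

ax.set_xticks(angles[:-1])
ax.set_xticklabels(goals)
ax.set_ylim(0, 5)
ax.legend(loc="upper right", bbox_to_anchor=(1.3, 1.1))
plt.show()
```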

This type of graph can help decision makers visualize how well different systems fulfill the goals of scholarly communication, where they are lacking, and what the opportunities are for developing future tools.

Note that the scores in the image are not at all rigorously determined.  I made quick estimates for a few sites and compared them to equally rough estimates for publication in a high-impact journal such as Nature, based on the following loose criteria:

Registration: A contribution or claim can be attributed to an individual or a set of contributors, with a creation timestamp and revision history.

Certification: A contribution can be rated by others.  There can be a few influential raters (e.g. an editorial board) or many (e.g. crowdsourcing/collaborative filtering).  Ratings can be anonymous or attributed, can be meta-rated (e.g. the Slashdot moderation system), and can range from a simple thumbs up/thumbs down to full comments and feedback.

Awareness/Access: Users can identify new contributions, as well as contributions relevant to their interests.  Awareness tools can range from passive (the user must browse or search) to active (system recommendations based on clustering or collaborative filtering).  Entries and metadata for contributions are queryable and accessible through a publicly documented API, open standard, or format.

Archival: A contribution can be identified and accessed by a single URI (possibly with multiple resource URLs; see Fielding's thesis on REST).  Associations to similar data via metadata are present.  Contributions are exportable into a documented open standard or format.  Contributions will remain available at that URI for the foreseeable future.

Reward: A contribution counts toward professional career advancement or standing within the academic discipline.  (Obviously, social media is currently lacking in this area.)
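To make the rubric concrete, here's a rough sketch of the five criteria as code.  The per-goal ratings and the example tool are my own invention, not a validated instrument:

```python
GOALS = ("registration", "certification", "awareness_access", "archival", "reward")

def total_score(ratings: dict) -> int:
    """Sum a tool's 0-5 ratings across the five goals."""
    return sum(ratings[g] for g in GOALS)

# Hypothetical ratings for a generic wiki-style science tool.
wiki_tool = {
    "registration": 4,      # attributed edits, timestamps, revision history
    "certification": 2,     # informal review by other editors only
    "awareness_access": 3,  # RSS feeds and an open API
    "archival": 3,          # stable URIs, exportable markup
    "reward": 1,            # little career credit today
}

print(total_score(wiki_tool))  # -> 13 out of a possible 25
```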

These criteria attempt to combine the traditional requirements for scholarly communication with the modern needs and expectations of web2.0 technologies and open science.  There is certainly room for refinement, and I'd welcome comments from the peanut gallery.

Hmm…maybe there's room for a collaborative-filtering-type tool for science web2.0 tools.  People could rate sites on each of these (and other interesting) criteria…
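As a toy illustration of that idea, here's a hedged sketch of user-based collaborative filtering over tool ratings: find the rater most similar to you and recommend what they liked that you haven't rated yet.  Every name and number here is made up:

```python
import math

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity over the tools two users have both rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[t] * b[t] for t in shared)
    na = math.sqrt(sum(a[t] ** 2 for t in shared))
    nb = math.sqrt(sum(b[t] ** 2 for t in shared))
    return dot / (na * nb)

# Hypothetical users rating hypothetical tools on a 1-5 scale.
ratings = {
    "alice": {"tool_a": 5, "tool_b": 2, "tool_c": 4},
    "bob":   {"tool_a": 4, "tool_b": 1},
    "carol": {"tool_a": 1, "tool_b": 5, "tool_c": 1},
}

# Bob's tastes line up with Alice's, so recommend Alice's unrated picks.
sims = {u: cosine(ratings["bob"], r) for u, r in ratings.items() if u != "bob"}
best = max(sims, key=sims.get)
unseen = set(ratings[best]) - set(ratings["bob"])
print(best, unseen)  # -> alice {'tool_c'}
```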

* See Roosendaal, H., & Geurts, P. A. T. M. (1998). Forces and functions in scientific communication. Retrieved July 25, 2008, from http://www.physik.uni-oldenburg.de/conferences/crisp97/roosendaal.html.  Also Van de Sompel, H., Payette, S., Erickson, J., Lagoze, C., & Warner, S. (2004). Rethinking Scholarly Communication: Building the System that Scholars Deserve. D-Lib Magazine, 10(9). Retrieved August 12, 2008, from http://www.dlib.org/dlib/september04/vandesompel/09vandesompel.html.
