[rfc-i] Can the web be archived?

Heather Flanagan (RFC Series Editor) rse at rfc-editor.org
Tue Jan 20 17:41:04 PST 2015


On 1/19/15 9:41 PM, Eliot Lear wrote:
> The New Yorker reports[1] about how links go stale on the web, and
> how this is impacting journals and the like.  I thought it might
> be interesting to this lot in terms of how and when we cite URLs.
> You'll note, I'm only posting a URL about the article, but in a
> spate of cruelty, the New Yorker article doesn't have a link to the
> study they quote, and so I can't claim to be posting about a URL to
> a URL ;-)
> Here's the relevant quote:
>> But a 2013 survey of law- and policy-related publications found
>> that, at the end of six years, nearly fifty per cent of the URLs
>> cited in those publications no longer worked. According to a 2014
>> study conducted at Harvard Law School, “more than 70% of the URLs
>> within the Harvard Law Review and other journals, and 50% of the
>> URLs within United States Supreme Court opinions, do not link to
>> the originally cited information.”
> Eliot
> [1] http://www.newyorker.com/?p=2960285&mbid=social_tablet_f

Indeed.  This problem is extremely frustrating to STEM (scientific,
technical, engineering, and medical) publishers, too.  Different
models try to "fix" it, but all the models I'm aware of require a
publisher to Do Something, where "something" means registering their
documents with a service or archiving a copy of everything they
reference in their books/journals/articles.

So far this hasn't proven to be a tractable problem, particularly not
when anyone and everyone can be a publisher on the web.
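For what it's worth, the "archive a copy" approach is the model behind
services like the Internet Archive's Wayback Machine (and Perma.cc, which
grew out of the same Harvard Law School work the article cites).  A
minimal sketch of what pinning a citation to a dated snapshot looks like,
assuming only the Wayback Machine's public URL convention
(https://web.archive.org/web/<YYYYMMDD>/<url>):

```python
from datetime import date

def wayback_snapshot_url(url: str, when: date) -> str:
    """Build a Wayback Machine lookup URL for a cited link.

    Follows the public https://web.archive.org/web/<YYYYMMDD>/<url>
    convention; the archive redirects to the capture closest to the
    requested date, if one exists.
    """
    timestamp = when.strftime("%Y%m%d")
    return f"https://web.archive.org/web/{timestamp}/{url}"

# Example: pin the New Yorker link above to roughly its citation date.
print(wayback_snapshot_url("http://www.newyorker.com/?p=2960285",
                           date(2015, 1, 19)))
```

Of course, this only helps if a capture exists at all, which is exactly
the "someone must Do Something" problem: the snapshot has to be taken
(and hosted) before the original link rots.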

-Heather


