[Illustration caption: Digital libraries containing millions of out-of-print and public domain works would vastly expand the scope of research and education worldwide, extending access to millions of people in undeveloped countries who don't have it now. Illustration by Ken Orvidas / For The Times / April 30, 2012]

Since 2002, at first in secret and later with great fanfare, Google has been working to create a digital collection of all the world's books, a library that it hopes will last forever and make knowledge far more universally accessible.
But from the beginning, there has been an obstacle even more daunting than the project's many technical challenges: copyright law.
Ideally, a digital library would provide access not only to books free from copyright constraints (those published before 1923), but also to the tens of millions of books that are still in copyright but no longer in print.
Copyright law makes it risky to digitize these books without permission from copyright owners, and clearing the rights can be prohibitively expensive, costing an estimated $1,000 per book on average. Even if the money weren't a problem, hundreds of thousands, and probably millions, of books are likely to be "orphan works" whose rights-holders are unknown or cannot be found.
Google bumped up against copyright law in 2005, when the Authors Guild and a group of five publishers filed lawsuits alleging that Google's scanning of books from major research library collections constituted copyright infringement. Google argued that scanning books to index their contents and make snippets available online was fair use, not infringement. But with its potential liability running into the billions or even trillions of dollars, Google was understandably receptive to overtures from the Authors Guild and the publishers to settle rather than litigate.
Full story at the Los Angeles Times.