Open Science
CC, along with SPARC and EIFL, completed year one of our four-year Arcadia-funded Open Climate Campaign focused on promoting Open Access to research on climate science and biodiversity. We invite you to read “A Year in the Open Climate Campaign,” detailing our progress engaging national governments and funders of climate change research.
Open Climate Campaign
In 2023, with support from the Patrick J. McGovern Foundation, we launched a new project to help open up access to large climate datasets. We successfully conducted a landscape analysis of 30 major global sources of climate data and published our “Recommendations for Better Sharing of Climate Data.”
Open Climate Data Project
Project to Openly License Life Sciences Preprints
CC secured new funding from the Chan Zuckerberg Initiative to help make openly licensed preprints the standard for sharing scientific knowledge.
We co-launched a new project with Norway to help implement open licensing policies to ensure that climate research, educational resources, data, and software publicly funded by the Norwegian Agency for Development Cooperation are open.
Open Earth Platform Initiative
Our Impact
We expanded our work in biodiversity, climate, and life sciences focused on ensuring that science research and data are open.
"Coral Reef at Palmyra Atoll National Wildlife Refuge" by USFWS Pacific is licensed under CC BY-NC 2.0. | 7 | 7 | 2023-Creative-Commons-Annual-Report-2-1.pdf |
In 2023, we convened hundreds via roundtables, community conferences (e.g. MozFest, Wikimania), and public events (e.g. symposium on Generative AI & Creativity) to debate copyright law, the ethics of open sharing, and other relevant areas that touch AI.
At our CC Global Summit, participants drafted community-driven principles on AI that are a valuable input and will help inform the organization’s thinking as we determine CC’s exact role in the AI space.
“The Pillars of Creation” by James Webb Space Telescope is licensed under CC BY 2.0.
Areas of Exploration
Support for Creators in the Time of Artificial Intelligence
2023 Financial Health
Income
Foundations $4,402,663
Corporate Sponsors $413,349
Major Donors $103,215
Small Dollar Donors $144,217
Program Income $169,980
Consulting $173,939
In-Kind $30,358
Other $38,792
Total: $5,496,708
Expenses
CC Licenses & Training $763,196
Programs $2,248,091
Events $395,600
Operations $1,654,225
Total: $5,061,112
"bird flock in vedanthangal" by VinothChandar is licensed under CC BY 2.0. | 9 | 9 | 2023-Creative-Commons-Annual-Report-2-1.pdf |
Our Supporters
We thank you all for your steadfast support of our work. In 2023, we received contributions from 30 foundations and companies and over 2,700 individual donors.
Thank You!
Amateur Radio Digital Communications
Andrew Gass
Ben Adida
Brewster & Mary Kahle
Bruno Hannud
Colin Sullivan
Douglas Jaffe
Douglas Van Houweling
Esther Wojcicki
Garrett Camp
Gabriel Levin
James Grimmelmann
John Seely Brown
Lawrence Lessig
Marta Belcher
Mary Shaw & Roy Weil
Molly Van Houweling
Mustafa Üstündağ
Paul and Iris Brest
Reid Borsuk
Tassanee Ponlakarn
Ted and Michele Wang
Zahavah Levine and Jeff Meyer
This is a frame from “Twenty Years of Creative Commons (in Sixty Seconds)” by Ryan Junell and Glenn Otis Brown for Creative Commons licensed under CC BY 4.0. It includes adaptations of multiple open and public domain works. View full licensing and attribution information about all works included in the video on Flickr.
Creative Commons
PO Box 1866 Mountain View CA 94042 USA
+1 415 429 [email protected]
Towards a Books Data Commons for AI Training
April 2024
1. Introduction 1
While the field of artificial intelligence research and technology has a long history, broad
public attention grew over the last year in light of the wide availability of new generative AI
systems, including large language models (LLMs) like GPT-4, Claude, and LLaMA-2. These
tools are developed using machine learning and other techniques that analyze large datasets
of written text, and they are capable of generating text in response to a user’s prompts.
While many large language models rely on website text for training, books have also played
an important role in developing and improving AI systems. Despite the widespread use of e-
books and growth of sales in that market, books remain difficult for researchers and
entrepreneurs to access at scale in digital form for the purposes of training AI.
In 2023, multiple news publications reported on the availability and use of a dataset of books
called “Books3” to train LLMs. The Books3 dataset contains text from over 170,000 books, 2
which are a mix of in-copyright and out-of-copyright works. It is believed to have been
originally sourced from a website that was not authorized to distribute all of the works
contained in the dataset. In lawsuits brought against OpenAI, Microsoft, Meta, and
Bloomberg related to their LLMs, the use of Books3 as training data was specifically cited. 3
The Books3 controversy highlights a critical question at the heart of generative AI: what role
do books play in training AI models, and how might digitized books be made widely
accessible for the purposes of training AI? What dataset of books could be constructed and
under what circumstances?
In February 2024, Creative Commons, Open Future and Proteus Strategies convened a series
of workshops to investigate the concept of a responsibly designed, broadly accessible
dataset of digitized books to be used in training AI models. Conducted under the Chatham
House Rule, we set out to ask if there is a possible future in which a “books data commons
for AI training” might exist, and what such a commons might look like. The workshops
brought together practitioners on the front lines of building next-generation AI models, as
well as legal and policy scholars with expertise in the copyright and licensing challenges
surrounding digitized books. Our goal was also to bridge the perspective of stewards of
Authored by Alek Tarkowski and Paul Keller (Open Future), Derek Slater and Betsy Masiello (Proteus 1
Strategies) in collaboration with Creative Commons. We are grateful to participants in the workshops,
including Luis Villa, Tidelift and openml.fyi; Jonathan Band; Peter Brantley, UC Davis; Aaron Gokaslan,
Cornell; Lila Bailey, Internet Archive; Jennifer Vinopal, HathiTrust Digital Library; Jennie Rose Halperin,
Library Futures/NYU Engelberg Center; Nicholas P. Garcia, Public Knowledge; Sayeed Choudhury; Erik
Stallman, UC Berkeley School of Law. The paper represents the views of the authors, however, and
should not be attributed to the workshop as a whole. All mistakes or errors are the authors’.
See e.g. Knibbs, Kate. “The Battle over Books3 Could Change AI Forever.” Wired, 4 Sept. 2023, 2
www.wired.com/story/battle-over-books3/.
For key documents in these cases, see the helpful compendium at “Master List of Lawsuits v. AI, 3
ChatGPT, OpenAI, Microsoft, Meta, Midjourney & Other AI Cos.” Chat GPT Is Eating the World, 27 Dec.
2023, chatgptiseatingtheworld.com/2023/12/27/master-list-of-lawsuits-v-ai-chatgpt-openai-microsoft-
meta-midjourney-other-ai-cos. See also “Fair Use Week 2024: Day Two with Guest Expert Brandon
Butler.” Fair Use Week, sites.harvard.edu/fair-use-week/2024/02/26/fair-use-week-2024-day-two-with-
guest-expert-brandon-butler/. Accessed 20 Mar. 2024 (arguing that use of this dataset is not
consequential for the fair use analysis).
content repositories, like libraries, with that of AI developers. A “books data commons” needs
to be both responsibly managed, and useful for developers of AI models.
We use “commons” here in the sense of a resource that is broadly shared and accessible,
and thus obviates the need for each individual actor to acquire, digitize, and format their own
corpus of books for AI training. This resource could be collectively and intentionally
managed, though we do not mean to select a particular form of governance in this paper. 4
This paper is descriptive, rather than prescriptive, mapping possible paths to building a
books data commons as defined above and key questions relevant to developers,
repositories, and other stakeholders, building on our workshop discussions. We first explain
why books matter for AI training and how broader access could be beneficial. We then
summarize two tracks that might be considered for developing such a resource, highlighting
existing projects that help foreground both the potential and challenges. Finally, we present
several key design choices, and next steps that could advance further development of this
approach. 5
In this way, we do not use “commons” in the narrow sense of permissively licensed. What’s more, this 4
resource could also be governed as more of a data “trust,” and, indeed, we discuss extensively the work
of HathiTrust as a relevant project in this domain. However, our use of the word “commons” is not
meant to preclude this or other arrangements.
There are, of course, a range of other types of texts that are not on the web and/or not digital at all - 5
e.g., periodicals, journals, government documents. These are out of scope for this paper, but also worthy
of further analysis.
2. Basics of AI Training and Technical Challenges
of Including Books
It’s critical to understand that LLMs are not trained on text “as is” – meaning that the model
is not digesting the text in a way humans would, front to back. The text does not represent a
copy of the original text in its original form. Instead, the text is processed in smaller chunks
of text, which are then shuffled and “tokenized,” as we explain further below.
One way to conceptualize the chunking, shuffling and tokenizing process is to imagine a 900
page book, which has 400,000 words. To feed into an AI model, the book will first be cut into
manageable chunks of text that represent up to several thousand tokens; such a process
might result in around 50 “chunks” of text. Each of those chunks will contain long sections of
narrative content; however, the chunks themselves will then be randomized, and fed into the
AI model out of sequence from each other; the first chunk may be text from Chapters 9 and
10, while the initial text in Chapter 1 may be in the 30th chunk. Within these chunks, the text
itself will be understood by the model as tokens.
In the example below, 252 characters of human-readable text are shown in tokenized form as
57 distinct tokens, the relationships between which then form the basis of building an AI
model. The illustration shows a block of human-readable text as it would be tokenized for AI
training; different colors are used in this visualization merely to differentiate one token from
another within the string of text. As the visualization makes clear, not all of the tokens
directly correspond to a single word; tokens merely represent characters that often appear
together in the training data. 6
OpenAI’s Tokenizer tool at https://platform.openai.com/tokenizer explains how ChatGPT uses tokens 6
and provides a tool to visualize examples. As noted on their site, the tokenization process is different for
every model; this is merely an illustrative example. The visual below represents an example of how
OpenAI’s ChatGPT creates tokens from English text.
Tokens do not typically represent words, but instead often represent subword tokens. For
example, the word “incompetence” may be broken into three tokens: “in-,” “competent,” and “-
ence.” This approach to tokenization enables representation of grammar and word variations,
effectively allowing a high degree of language generalizability. 7
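To make the chunking and tokenization steps described above more concrete, the following minimal sketch illustrates them in Python. It assumes the openly available tiktoken tokenizer, a 2,000-token chunk size, and a hypothetical input file; it also tokenizes the full text before splitting it into chunks, a simplification of the process described above. These are illustrative choices, not details drawn from any particular training pipeline discussed in the workshops.

# Illustrative sketch (not from the paper): chunking and tokenizing a book's text.
# Assumes the open source "tiktoken" library and a hypothetical input file.
import random
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")  # one example tokenizer encoding

def chunk_book(text, max_tokens=2000):
    """Tokenize a book and split the token stream into fixed-size chunks."""
    tokens = encoding.encode(text)
    return [tokens[i:i + max_tokens] for i in range(0, len(tokens), max_tokens)]

book_text = open("example_book.txt", encoding="utf-8").read()  # hypothetical file
chunks = chunk_book(book_text)
random.shuffle(chunks)  # chunks are fed to the model out of their original order

# Tokens are usually subword units; the exact split depends on the tokenizer.
for token_id in encoding.encode("incompetence"):
    print(token_id, repr(encoding.decode([token_id])))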
In recent years, LLM research has successfully been able to scale up models by pre-training
on a large number of tokens. In turn, this has allowed a higher degree of language
generalizability in the resulting model. For example, OpenAI’s ChatGPT trained on hundreds
of billions of tokens, allowing it to model language in a very general way. The resulting
models can then be fine-tuned for specific tasks using training data representing a particular
corpus, such as software code. 8
McKinsey provides an overview of the different types of tokens that may be used by AI models. 7
McKinsey. “What Is Tokenization? | McKinsey.” Mckinsey.com, 2023, www.mckinsey.com/featured-
insights/mckinsey-explainers/what-is-tokenization.
There are certain technical challenges in using books in AI training as well, given the nature of the 8
format. First, one must address whether a book is already in digital form. For the vast majority of books,
that is not the case. One first needs to digitize the book, and convert it to a digital text file using optical
character recognition (OCR), or use a born-digital version (although we return to specific limitations on
that approach below). Second, once a book is in digital text form, it must be converted into a text format
that is suitable for AI training. Text conversion tools transfer the content of books into complete text
files, which is akin to the type of conversion that must be done between a Microsoft Word or Adobe PDF
file format and a simple .txt format. This conversion is generally not adequate for the purpose of AI
training; researchers have found that post-processing is required to ensure these text files are properly
formatted for the purposes of tokenization. For example, when building the dataset known as The Pile,
researchers had to modify an existing epub-to-text converter tool to ensure that document structure
across chapters was preserved to match the table of contents, that tables of data were correctly
rendered, to convert numbered lists from digitally legible lists of “1\.” to “1.”, and to replace unicode
punctuation with ascii punctuation. See Discussion in 4.3.2 in Bandy, Jack, and Nicholas Vincent.
Addressing “Documentation Debt” in Machine Learning Research: A Retrospective Datasheet for
BookCorpus. 2021, https://arxiv.org/pdf/2105.05241.pdf. and C.16 of The Pile documentation in Gao,
Leo, et al. The Pile: An 800GB Dataset of Diverse Text for Language Modeling, https://arxiv.org/pdf/
2101.00027.pdf.
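As a rough illustration of the post-processing described in the note above, the short sketch below normalizes escaped numbered-list markers and common unicode punctuation in extracted text. The specific rules and function name are hypothetical simplifications, not the actual tooling used for The Pile or BookCorpus.

# Hypothetical, simplified cleanup of extracted book text ahead of tokenization.
import re

UNICODE_TO_ASCII = {
    "\u2018": "'", "\u2019": "'",   # curly single quotes
    "\u201c": '"', "\u201d": '"',   # curly double quotes
    "\u2013": "-", "\u2014": "-",   # en and em dashes
    "\u2026": "...",                # ellipsis
}

def clean_extracted_text(text):
    # Convert escaped numbered-list markers such as "1\." back to "1."
    text = re.sub(r"(\d+)\\\.", r"\1.", text)
    # Replace common unicode punctuation with ASCII equivalents.
    for unicode_char, ascii_char in UNICODE_TO_ASCII.items():
        text = text.replace(unicode_char, ascii_char)
    return text

print(clean_extracted_text("1\\. First item \u2014 \u201cquoted\u201d"))
# prints: 1. First item - "quoted"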
3. Why Books are Important to Training AI
Despite the proliferation of online content and some speculating that books would simply die
out with the advent of the Internet, books remain a critical vehicle for disseminating 9
knowledge. The more scientists study how books can impact people, the less surprising this
is. Our brains have been shown to interact with longform books in meaningful ways: we
develop bigger vocabularies when we read books; we develop more empathy when we read
literary fiction; and connectivity between different regions of our brain increases when we
read. 10
In that light, it might be unsurprising that books are important for training AI models. A
broadly accessible books dataset could be useful not only for building LLMs, but also for
many other types of AI research and development.
Performance and Quality
The performance and versatility of an AI model can significantly depend on whether the
training corpus includes books or not. Books are uniquely valuable for AI training due to
several characteristics.
• Length: Books tend to represent longer-form content, and fiction books, in particular,
represent long-form narrative. An AI trained on this longer-form, narrative type of
content is able to make connections over a longer context, so instead of putting
words together to form a single sentence, the AI becomes more able to string
concepts together into a coherent whole; even after a book is divided into many
“chunks” before the process of tokenization, that will still provide long stretches of
text that are longer than the average web page. While Web documents, for instance,
tend to be longer than a single sentence, they are not typically hundreds of pages long
like a book.
• Quality: The qualities of the training data impact the outputs a tool can produce.
Consider an LLM trained on gibberish; it can learn the patterns of that gibberish and,
in turn, produce related gibberish, but will not be very useful for writing an argument
or a story, for instance. In contrast, training an LLM on books with well-constructed
arguments or crafted stories could serve those purposes. While “well-constructed”
and “crafted” are necessarily subjective, the traditional role of editors and the
publishing process can provide a useful indicator for the quality of writing inside of
books. What’s more, metadata for books — information such as the title, author and
year of publication — is often more comprehensive than metadata for information
“the novel, too, as we know it, has come to its end” — “The End of Books.” Archive.nytimes.com, 21 June 9
1992, archive.nytimes.com/www.nytimes.com/books/98/09/27/specials/coover-end.html. Accessed
27 Aug. 2021.
Stanborough, Rebecca Joy. “Benefits of Reading Books: For Your Physical and Mental Health.” 10
Healthline, 15 Oct. 2019, www.healthline.com/health/benefits-of-reading-books#prevents-cognitive-
decline.
found on the web, and this additional information can help contextualize the
provenance and veracity of information.
• Breadth, Diversity, and Mitigating Bias: Books can serve a critical role in ensuring AI
models are inclusive of a broad range of topics and categories that may be under-
represented in other content. For all that the Internet has generated an explosion in
human creativity and information sharing, it generally represents only a few decades
of information and a small portion of the world’s creative population. A books
dataset, by comparison, is capable of representing centuries of human knowledge. As a result, such a dataset can help ensure that AI systems’ behavior is based on centuries of historical information as well as modern books. It can help ensure broad geographic and
linguistic diversity. What’s more, the greater breadth and diversity of high-quality
content help mitigate challenges around bias and misinformation. Using a more
diverse pool of training data can help support the production of a model and outputs
of the model that are more representative of that diversity. Books can be useful in
evaluation datasets to test existing models for memorization capabilities, which can
help prevent unintended reproduction of existing works. Of course, this is all
contingent on the actual composition of the corpus; in order to have the benefits described, the books would need to be curated and included with characteristics like temporal, geographic, and linguistic diversity in mind.
• Other Modalities: Finally, books do not just contain text, they often contain images
and captions of those images. As such, they can be an important training source for
multi-modal LLMs, which can receive and generate data in media other than text.
Lowering Barriers to Entry & Facilitating Competition
Broad access to books for AI training is critical to ensure powerful AI models are not
concentrated in the hands of only a few companies. Access to training data, in general, has
been cited as a potential competitive concern in the AI field because of the performance 11
benefits to be gained by training on larger and larger datasets. But this competitive wedge is
even more acute when we look specifically at access to book datasets.
The largest technology companies building commercial AI models have the resources and
capacity to mass digitize books for AI training. Google has scanned 40 million books, many
of which came from digitization partnerships they formed with libraries. They may already
use some or all of these books to train their AI systems. It’s unclear to what extent other 12
companies already have acquired books for AI training (for instance, whether Amazon’s
existing licenses with publishers or self-published authors may permit such uses);
See e.g. Trendacosta, Katherine and Doctorow, Cory. “AI Art Generators and the Online Image Market.” 11
Electronic Frontier Foundation, 3 Apr. 2023, www.eff.org/deeplinks/2023/04/ai-art-generators-and-
online-image-market; Narechania, Tejas N., and Sitaraman, Ganesh. “An Antimonopoly Approach to
Governing Artificial Intelligence.” SSRN Electronic Journal, 2023, cdn.vanderbilt.edu/vu-URL/wp-content/
uploads/sites/412/2023/10/09151452/Policy-Brief-2023.10.08-.pdf, https://doi.org/10.2139/
ssrn.4597080. Accessed 25 Feb. 2024.
See white paper for Google’s Gemini models https://arxiv.org/pdf/2312.11805.pdf — “Gemini models 12
are trained on a dataset that is both multimodal and multilingual. Our pretraining dataset uses data from
web documents, books, and code, and includes image, audio, and video data.”
regardless, comparable efforts to Google’s would cost many hundreds of millions of
dollars. 13
Independent researchers, entrepreneurs, and most other businesses and organizations are unlikely to have the resources required to digitally scan millions of books or to purchase licenses to digitized books in ways that could unlock the benefits described above. Ensuring
greater competition and innovation in this space will require making this type of data
available to upstarts and other entities with limited resources. A well-designed and
appropriately governed digital books commons is one way to do that.
“By 2004, Google had started scanning. In just over a decade, after making deals with Michigan, 13
Harvard, Stanford, Oxford, the New York Public Library, and dozens of other library systems, the
company, outpacing Page’s prediction, had scanned about 25 million books. It cost them an estimated
$400 million. It was a feat not just of technology but of logistics.” Somers, James. “Torching the Modern-
Day Library of Alexandria.” The Atlantic, 20 Apr. 2017, www.theatlantic.com/technology/archive/
2017/04/the-tragedy-of-google-books/523320/.
4. Copyright, Licensing, & Access to Books for
Training
Even if books can be acquired, digitized, and made technically useful for AI training, the
development of a books data commons would necessarily need to navigate and comply with
copyright law.
Out-of-Copyright Books: A minority of books are old enough to be in the public domain and
out of copyright, and an AI developer could use them in training without securing any
copyright permission. In the United States, all books published or released before 1929 are in
the public domain. While use of these books provides maximal certainty for the AI developer
to train on, it is worth noting that whether a book is in the public domain can be
difficult to determine. For instance, books released between 1929 and 1963 in the U.S. are 14
out of copyright if they were not subject to a copyright renewal; however, data on copyright
renewals is not easily accessible.
What’s more, copyright definitions and term lengths vary among countries. Even if a work is
in the public domain in the US, it may not be in other countries. Countries generally use the 15
life of the last living author + “x” years to determine the term of copyright protection. For
most countries, “x” is either 50 years (the minimum required by the Berne Convention) or 70
years (this is the case for all member states of the European Union and for all works
published in the U.S. after 1978). This approach makes it difficult to determine copyright
terms with certainty because it requires information about the date of death of each author,
which is often not readily available.
In-Copyright Books: The vast majority of books are in copyright, and, insofar as the training
process requires making a copy of the book, the use in AI training may implicate copyright
law. Our workshop covered three possible paths for incorporating such works.
Direct licensing
One could directly license books from rightsholders. There may be some publishers who are
willing to license their works for this purpose, but it is hard to determine the scale of such
access, and, in any event, there are significant limits on this approach. Along with the
challenge (and expense) of reaching agreements with relevant rightsholders, there is also the
practical difficulty of simply identifying and finding the rightsholder that one must negotiate
For a sense of the complexity, see e.g. Melissa Levine, Richard C. Adler. Finding the Public Domain: 14
Copyright Review Management System Toolkit. 2016, quod.lib.umich.edu/c/crmstoolkit/
14616082.0001.001. Accessed 20 Mar. 2024.; Kopel, Matthew. “LibGuides: Copyright at Cornell Libraries:
Copyright Term and the Public Domain.” guides.library.cornell.edu/copyright/publicdomain;
Mannapperuma, Menesha, et al. Is It in the Public Domain? A HANDBOOK for EVALUATING the
COPYRIGHT STATUS of a WORK CREATED in the UNITED STATES. 1923.
See e.g. Moody, Glyn. “Project Gutenberg Blocks Access in Germany to All Its Public Domain Books 15
because of Local Copyright Claim on 18 of Them.” Techdirt, 7 Mar. 2018, www.techdirt.com/
2018/03/07/project-gutenberg-blocks-access-germany-to-all-public-domain-books-because-local-
copyright-claim-18-them/. Accessed 20 Mar. 2024.
with. The vast majority of in-copyright books are out-of-print or out-of-commerce, and most
are not actively managed by their rightsholders. There is no official registry of copyrighted
works and their owners, and existing datasets can be incomplete or erroneous. 16
As a result, there may be no way to license the vast majority of in-copyright books, especially
those that have or have had limited commercial value. Put differently, the barrier to using 17
most books is not simply to pay publishers; even if one had significant financial resources,
licensing would not enable access to most works.
Permissively licensed works
There are books that have been permissively licensed in an easily identifiable way, such as
works placed under Creative Commons (CC) licenses. Such works explicitly allow particular
uses of works subject to various responsibilities (e.g., requiring attribution by the user in their
follow-on use).
While such works could be candidates for inclusion in a books data commons, their inclusion
depends on whether the license’s terms can be complied with in the context of AI training.
For instance, in the context of CC licensed works, there are requirements for proper
attribution across all licenses (the CC tools Public Domain Dedication (CC0) and Public
Domain Mark (PDM) are not licenses and do not require attribution). 18
See e.g. Heald, Paul J. “How Copyright Makes Books and Music Disappear (and How Secondary 16
Liability Rules Help Resurrect Old Songs).” Illinois Program in Law, Behavior and Social Science Paper
No. LBSS14-07 Illinois Public Law Research Paper No. 13-54 https://doi.org/10.2139/ssrn.2290181.
Accessed 4 Jan. 2020, at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2290181; Rosen,
Rebecca J. “Why Are so Few Books from the 20th Century Available as Ebooks?” The Atlantic, 18 Mar.
2014, www.theatlantic.com/business/archive/2014/03/why-are-so-few-books-from-the-20th-century-
available-as-ebooks/284486/. See also “Google Book Search Settlement and Access to Out of Print
Books.” Google Public Policy Blog, publicpolicy.googleblog.com/2009/06/google-book-search-
settlement-and.html. Accessed 20 Mar. 2024 (discussing this issue in the context of the failed class-
action settlement between Google, the Authors Guild, and the Association of American Publishers).
Google’s final brief in the settlement proceedings notes the “prohibitive transaction costs of identifying
and locating individual Rightsholders of these largely older, out-of-print books” — see this brief at https://
web.archive.org/web/20130112060651/http://thepublicindex.org/docs/amended_settlement/
google_final_approval_support.pdf. The Authors Guild and Association of American Publishers also
justified the settlement’s terms in light of the fact that “the transaction costs involved in finding
copyright owners and clearing the rights are too high”; while they argued that most works are not truly
“orphans,” they note that total transaction costs as a whole (including, for example, determining whether
the author or publisher holds the rights and then negotiating rates) are so high as to block uses of out-
of-print works anyway — see this brief at https://web.archive.org/web/20130112060213/http://
thepublicindex.org/docs/amended_settlement/Supplemental_memorandum_of_law.pdf.
In the EU, the 2019 Copyright Directive introduced specific provisions on the "use of out-of-commerce 17
works and other subject matter by cultural heritage institutions" (Articles 8-11 CDSMD). These
provisions allow cultural heritage institutions to "make available, for non-commercial purposes, out-of-
commerce works or other subject matter permanently in their collections". The limitation to non-
commercial purposes means that works made available under these provisions would be of limited use
in building a books data commons.
For one assessment of the difficulties of complying with the CC licenses in this context, to the extent 18
they are applicable, see Lee, K., A. Feder Cooper, & Grimmelmann, J. (2023). Talkin’ ‘Bout AI Generation:
Copyright and the Generative AI Supply Chain. Forthcoming, Journal of the Copyright Society 2024.
https://doi.org/10.2139/ssrn.4523551.
Reliance on Copyright Limitations and Exceptions
Even if a book is in copyright, it’s possible that copying books for AI training may be covered
by existing limitations and exceptions to copyright law in particular jurisdictions. For
example:
• In the United States, many argue using existing works to train generative AI is “fair
use,” consistent with existing law and legal precedents. This is the subject of a 19
number of currently active court cases, and different actors and tools may yield
different results, as fair use is applied case-by-case using a flexible balancing test.
• In the European Union, there are explicit exceptions in the law for “text and data
mining” uses of in-copyright works, both for non-commercial research and for
commercial purposes. However, for commercial uses and for users outside of
research and heritage institutions, they must respect the rights of rightsholders who
choose to “reserve their rights” (i.e., opt-out of allowing text and data mining) via
machine readable mechanisms. The exception also requires that users have “lawful 20
access” to the works.
• Finally, Japan provides a specific text and data mining exception, without any
comparable opt-out requirement for commercial uses as is embedded in EU law. 21
While exceptions that allow AI training exist in several other countries, such as Singapore and
Israel, most countries do not provide exceptions that appear to permit AI training. Even where
potentially available, as in the United States, legal uncertainty and risk create a hurdle for
anyone building a books commons. 22
See e.g. Comments from Sprigman, Samuelson, Sag to Copyright Office, October 2023, at 19
https://www.regulations.gov/comment/COLC-2023-0006-10299 as well as many other submissions to the US
copyright office; see also Advocacy, Katherine Klosek, Director of Information Policy and Federal
Relations, Association of Research Libraries (ARL), and Marjory S. Blumenthal, Senior Policy Fellow,
American Library Association (ALA) Office of Public Policy and. “Training Generative AI Models on
Copyrighted Works Is Fair Use.” Association of Research Libraries, 23 Jan. 2024, www.arl.org/blog/
training-generative-ai-models-on-copyrighted-works-is-fair-use/.
See Articles 3 and 4 of the EU’s Directive on Copyright and Related Rights in the Digital Single Market 20
— https://eur-lex.europa.eu/eli/dir/2019/790/oj.
Japan clarified its laws in 2018 to make clear that this type of use is permitted — see discussion in 21
Testimony of Matthew Sag, July 2023, https://www.judiciary.senate.gov/imo/media/doc/
2023-07-12_pm_-_testimony_-_sag.pdf, see also Fiil-Flynn, S. et al. (2022) Legal reform to enhance global
text and Data Mining Research, Science. Available at: https://www.science.org/doi/10.1126/
science.add6124 (Accessed: 28 Sept. 2023).
See supra note 22. See also Jonathan Band, Copyright Implications of the Relationship between 22
Generative Artificial Intelligence and Text and Data Mining | Infojustice. infojustice.org/archives/45509. In
addition, for an in-depth look at the cross-border legal challenges involved see: Wrapping up Our NEH-
Funded Project to Help Text and Data Mining Researchers Navigate Cross-Border Legal and Ethical
Issues. 2 Oct. 2023, buildinglltdm.org/2023/10/02/wrapping-up-our-neh-funded-project-to-help-text-and-
data-mining-researchers-navigate-cross-border-legal-and-ethical-issues/. Accessed 20 Mar. 2024.
It is also important to note two other issues that can affect the application of limitations and
exceptions, in particular, their application to e-books.
The first important limitation is that almost every digital book published today comes with a
set of contractual terms that restrict what users can do with it. In many cases, those terms
will explicitly restrict text data mining or AI uses of the content, meaning that even where
copyright law allows for reuse (for example, under fair use), publishers by contract can
impose restrictions anyway. In the United States, those contract terms are generally thought
to override the applicability of fair use or other limitations and exceptions. Other 23
jurisdictions, such as those in the EU, provide that certain limitations and exceptions cannot
be contractually overridden, though experience to date varies with how those anti-contractual
override protections work in practice. 24
The second limitation is the widespread adoption of “anti-circumvention” rules in copyright
laws and the interplay of these with a choice to rely on copyright limitations and exceptions.
Digital books sold by major publishers are generally encumbered with “digital rights
management” (DRM) that limits how someone can use the digital file. For instance, DRM can
limit the ability to make a copy of the book, or even screenshot or excerpt from it, among
other things. Anti-circumvention laws restrict someone's ability to evade these technical
restrictions, even if it is for an ultimately lawful use.
What this means for our purposes is that even if one acquires a digital book from, for
example, Amazon, and it is lawful under copyright law to use that book in AI training, it can
still generally be unlawful to circumvent the DRM to do so, outside narrow exceptions. 25
Thus, the ability to use in-copyright books encumbered by DRM — that is, almost all books sold by major publishers — is generally limited. 26
Practically, using in-copyright books to build a books commons for AI training — while relying
on copyright’s limitations and exceptions — requires turning a physical book into digital form,
or otherwise engaging in the laborious process of manually re-creating a book’s text (i.e., re-
typing the full text of the book) without circumventing the technical restrictions themselves.
See Hansen, Dave. “Fair Use Week 2023: How to Evade Fair Use in Two Easy Steps.” Authors Alliance, 23
23 Feb. 2023, www.authorsalliance.org/2023/02/23/fair-use-week-2023-how-to-evade-fair-use-in-two-
easy-steps/. Accessed 20 Mar. 2024.
See Band, Jonathan. “Protecting User Rights against Contract Override.” Joint PIJIP/TLS Research 24
Paper Series, 1 May 2023, digitalcommons.wcl.american.edu/research/97/. Accessed 20 Mar. 2024.
In the U.S. the Copyright Office has recognized the importance of allowing particular exceptions for 25
researchers engaged in text and data mining. See their rulemaking in 2021 https://
www.federalregister.gov/documents/2021/10/28/2021-23311/exemption-to-prohibition-on-
circumvention-of-copyright-protection-systems-for-access-control. These rules are reviewed triennially
and are currently under review, with submissions suggesting both contraction and expansion; see the
Authors’ Alliance comments in January 2024 https://www.authorsalliance.org/2024/01/29/authors-
alliance-submits-long-form-comment-to-copyright-office-in-support-of-petition-to-expand-existing-text-
and-data-mining-exemption/. It is possible that one could argue for these exceptions to be expanded,
and then work to renew that exception every three years. The EU’s text and data mining exception may
also limit use of DRM to impede data mining, but only for particular covered research and heritage
institutions; commercial and other users are not covered, however.
Note that CC licenses forbid use of DRM — but that doesn’t address the vast majority of books sold by publishers.26
5. Examining approaches to building a books data
commons
There are many possible permutations for building a books data commons. To structure our
exploration, we focused on two particular tracks, discussed below. We chose these tracks
mindful of the above legal issues, and because there are already existence proofs that help
to illuminate tradeoffs, challenges and potential paths forward for each.
5a. Public domain and permissively licensed books
Existing Project Example: The Pile v2 27
In 2020, the nonprofit research group EleutherAI constructed and released The Pile — a large,
diverse, open dataset for AI training. EleutherAI developed it not only to support their own
training of LLMs, but also to lower the barriers for others. 28
Along with data drawn from the web at large, The Pile included books from three datasets.
The first dataset was the Books3 corpus referenced at the outset of this paper. The second
and third books datasets were smaller: BookCorpus2, which is a collection of 17,868 books by otherwise unpublished authors; and a collection of 28,752 public domain books published prior to 1919, drawn from a volunteer effort to digitize public domain works called Project
Gutenberg.
As the awareness about The Pile dataset grew, certain rightsholders began sending copyright
notices to have the dataset taken down from various websites.
Despite the takedown requests, the importance of books to EleutherAI and the broader community’s AI research remained. Hoping to forge a path forward, EleutherAI announced in 2024 that they would create a new version of the dataset, which they will call The Pile v2. 29
Among other things, v2 would “have many more books than the original Pile had, for
example, and more diverse representation of non-academic non- fiction domains.” At the
same time, it would only seek to include public domain books and permissively licensed
content. As before, this corpus focuses on English language books.
This is an illustrative example, and there are also other projects of this ilk. For instance, see the 27
Common Corpus project, which includes an array of public domain books from a number of countries,
at https://huggingface.co/blog/Pclanglais/common-corpus; see also https://huggingface.co/datasets/
storytracer/internet_archive_books_en (“This dataset contains more than 650,000 English public domain
books (~ 61 billion words) which were digitized by the Internet Archive and cataloged as part of the
Open Library project.”)
See Gao et al, supra note 8.28
Goldman, Sharon. “One of the World’s Largest AI Training Datasets Is About to Get Bigger and 29
“Substantially Better.” VentureBeat, 11 Jan. 2024, venturebeat.com/ai/one-of-the-worlds-largest-ai-
training-datasets-is-about-to-get-bigger-and-substantially-better/. Accessed 20 Mar. 2024.
Implications of the Overall Approach
Stepping back from The Pile v2 specifically, or any particular existing collection of books or
dataset built on their basis, we want to understand the implications of relying on public
domain works and expressly licensed works in building a books commons.
The benefits are relatively straightforward. Both categories, by definition, come with express permission to use the books in AI training. The cost of acquiring the books for this use may
be effectively zero or close to it, when considering public domain and “openly” licensed
books that allow redistribution and that have already been digitized.
But this approach comes with some clear limitations. First, as noted above, for many books
in the public domain, their status as such is not always clear. And with respect to
permissively licensed books, it is not always clear whether and how to comply with the
license obligations in this context.
Setting aside those challenges, the simple fact is that relying on public domain and existing
permissively licensed books would limit the quantity and diversity of data available for
training, impacting performance along different dimensions. Only a small fraction of books
ever published fall into this category, and the corpus of books in this category is likely to be
skewed heavily towards older public domain books. This skew would, in turn, impact the
content available for AI training. For instance, relying on books from before 1929 would not 30
only incorporate outdated language patterns, but also a range of biases and misconceptions
about race and gender, among other things. Efforts could be made to get people to
permissively license more material — a book drive for permissive licensing, so to speak; this
approach would still not encompass most books, at least when it comes to past works. 31
5b. Limitations & Exceptions
Existing Project Example: HathiTrust Research Center (HTRC)
The HathiTrust Research Center provides researchers with the ability to perform
computational analysis across millions of books. While it is not suited specifically for AI
training, it is an existence proof for what such a resource might look like.
For instance, AI researchers note that the recently released Common Corpus dataset is an “invaluable 30
resource” but “comes with limitations. A lot of public domain data is antiquated—in the US, for example,
copyright protection usually lasts over seventy years from the death of the author—so this type of
dataset won’t be able to ground an AI model in current affairs or, say, how to spin up a blog post using
current slang” and the “dataset is tiny.” Thus, while it is possible to train an AI model on the data, those
models will have more limited utility on some dimensions than current frontier models trained on a
broader array of data. See Knibbs, Kate, Here’s Proof You Can Train an AI Model Without Slurping
Copyrighted Content | WIRED. (2024, March 20), at https://www.wired.com/story/proof-you-can-train-ai-
without-slurping-copyrighted-content/.
Our workshop discussion did note that some widely available datasets for AI training have also 31
pursued more direct licensing agreements. For instance, the SILO LLM was created by working with
scientific journal publishers to make works available for both download and AI training. While this might
be viable in the context of particular, narrow classes of works, the barriers to efficient licensing
mentioned above would remain a problem for any broader efforts. See Min, Sewon, et al. “SILO
Language Models: Isolating Legal Risk in a Nonparametric Datastore.” ArXiv (Cornell University), 8 Aug.
2023, https://doi.org/10.48550/arxiv.2308.04430. Accessed 14 Dec. 2023.
It is also an example predicated on copyright’s limitations and exceptions — in this case, on
U.S. fair use. While the Authors Guild filed a copyright infringement suit against HathiTrust,
federal courts in 2012 and 2014 ruled that HathiTrust’s use of books was fair use. 32
A nonprofit founded in 2008, HathiTrust grew out of a partnership among major US university
libraries and today is “an international community of research libraries committed to the
long-term curation and availability of the cultural record.” It started in what it calls the “early 33
days of mass digitization” — that is, at a time when it started to become economical to take
existing physical artifacts in libraries and turn them into digital files at a large scale.
The founding members of HathiTrust were among the initial partners for Google’s Book
Search product, which allows people to search across and view small snippets of text from
in-copyright books and read full copies of public domain books scanned from libraries’ 34
collections. The libraries provided Google with books from their collections, Google would
then scan the books for use in Book Search, and return to the libraries a digital copy for their
own uses. These uses included setting up HathiTrust not only to ensure long-term
preservation of the digital books and their metadata, but also to facilitate other uses,
including full text search of books and accessibility for people with print disabilities. In
separate court cases, both Google and HathiTrust’s uses of the books were deemed
consistent with copyright law.
The uses most relevant to this paper are those enabled by what HathiTrust refers to today as
the Research Center. The Center grew in part out of a research discipline called “digital
humanities,” which, among other things, seeks to use computational resources or other
digital technologies to analyze information and contribute to the study of literature, media,
history, and other areas. For instance, imagine you want to understand how a given term
(e.g., “war on drugs”) became used; one might seek to analyze when the term was first used
and how often it was used over time by analyzing a vast quantity of sources, searching out
the term’s use. The insight here is that there is much to be learned not just from reading or
otherwise consuming specific material, but also from “non-consumptive research,” or
"research in which computational analysis is performed on one or more volumes (textual or
image objects)" to derive other sorts of insights. AI training is a type of non-consumptive use.
Today, the Center “[s]upports large-scale computational analysis of the works in the
HathiTrust Digital Library to facilitate non-profit and educational research.” It includes over 18
million books in over 400 languages from the HathiTrust Digital Library collection. Roughly
58% of the corpus is in copyright. HathiTrust notes that, while this corpus is large, it has
limitations in terms of its representation across subject matter, language, geography, and
other dimensions. In terms of subject matter, the corpus is skewed towards humanities
(64.9%) and social sciences (14.3%). In terms of language, 51% of the books are in English,
Authors Guild v. HathiTrust, 902 F.Supp.2d 445 (SDNY October 10, 2012) and Authors Guild v. 32
HathiTrust, 755 F.3d 87 (2d Cir. 2014).
See https://www.hathitrust.org/member-libraries/member-list/ — the membership is principally US 33
institutions, and most of the non-US members are from English speaking countries or institutions that
use English as the primary language of operations.
This functionality is limited to scanned books provided by library partners in the US.34
German is the next-largest language represented at 9%, and is followed by a long-tail of
languages by representation.
In order to enable these uses, HathiTrust has invested in technical solutions to prevent
possible misuse. To some extent, they manage this by limiting who gets access to the
Center, and limiting access to specific features to researchers at member institutions.
HathiTrust has also put in place various security controls on both the physical storage of the
digitized books and the network access to those files. The primary uses of the data through
the Research Center includes access to an extracted features set and access to the
complete corpus “data capsule,” which is a virtual machine running on the Center’s servers.
The data capsule allows users to conduct non-consumptive research with the data, but it
limits the types of outputs allowed in order to prevent users from obtaining full content of in-
copyright works. The measures taken include physical security controls on the data centers
housing the information, as well as restrictions via network access and encryption of backup
tapes. In finding that HathiTrust’s use was a fair use, and thus rejecting a lawsuit brought by the Authors Guild, the Court noted the importance of these controls. 35
Today, the Center’s tools are not suitable for AI training, in that they don’t allow the specific
types of technical manipulation of underlying text necessary to train an AI. Nevertheless, the
Center demonstrates that building a books data commons for computational analysis is
possible, and in turn points to the possibility of creating such a resource for AI training. 36
Implications of the Overall Approach
By relying on existing limitations and exceptions in copyright law, the number of books one
could include in the corpus of a books data commons is far greater and more diverse. Of
course, a bigger dataset doesn’t necessarily mean a higher quality dataset for all uses of AI
models; as HathiTrust shows, even a multimillion book corpus can skew in various
directions. Still, dataset size generally remains significant to an LLM’s performance – the
more text one can train on, or rather the more tokens for training the model, the better, at
least along a number of performance metrics. 37
While holding the potential for a broader and more diverse dataset, a key limitation in
pursuing this approach is that it is only feasible where relevant copyright limitations and
exceptions exist. Even then, legal uncertainty means that going down this path is likely to
generate, at a minimum, expensive and time-consuming litigation and regulatory
This is explained explicitly in the appeals court’s decision: Authors Guild v. HathiTrust, 755 F.3d 87 (2d 35
Cir. 2014).
HathiTrust has also made available some data derived from books, such as the Extracted Features 36
set: “HTRC releases research datasets to facilitate text analysis using the HathiTrust Digital Library.
While copyright-protected texts are not available for download from HathiTrust, fruitful research can still
be performed on the basis of non-consumptive analysis of transformative datasets, such as in HTRC's
flagship Extracted Features Dataset, which includes features extracted from full-text volumes. These
features include volume-level metadata, page-level metadata, part-of-speech-tagged tokens, and token
counts:” https://analytics.hathitrust.org/datasets#top.
See Testimony of Chris Callison-Burch, July 2023, https://docs.house.gov/meetings/JU/37
JU03/20230517/115951/HHRG-118-JU03-Wstate-Callison-BurchC-20230517.pdf (“As the amount of
training data increases, AI systems’ capabilities for language understanding and their other skills
improve.”); Brown, Tom, et al. Language Models Are Few-Shot Learners. 22 July 2020, at https://arxiv.org/
pdf/2005.14165.pdf (“we find that performance scales very smoothly with model size”).
engagement. And, at least in the U.S., it could generate billions of dollars in damages if the
specific design choices and technical constraints are not adequate to justify a finding of fair
use.
This sort of books dataset could be built by expanding use of in-copyright books that have
already been digitized from existing libraries and other sources. Specifically, workshop participants mentioned the Internet Archive, HathiTrust, and Google as entities that have digitized books and could repurpose their use to build a books commons, although
challenges with using these datasets were noted. The Internet Archive is in the midst of
litigation brought by book publishers for its program for lending digital books; while not
directly relevant to the issue of AI training using their corpus of books, this sort of litigation
creates a chilling effect on organizations seeking to make new uses of these digitized books.
Meanwhile, Google encumbered HathiTrust’s digital copies with certain contractual
restrictions, which would need to be addressed to develop a books dataset for AI training,
and Google itself is unlikely to share its own copies while they provide it a competitive advantage.
Perhaps as a matter of public policy, these existing copies could be made more freely
available. For instance, to ensure robust competition around AI and advance other public
interests, policymakers could remove legal obstacles to the sharing of digitized book files for
use in AI training. Alternatively, policymakers could go further and affirmatively compel
sharing access to these digital book files for AI training.
It's possible that there could be a new mass digitization initiative, turning physical books into
new digital scans. At least in theory, one could try to replicate the existing corpora of
HathiTrust, for example, without Google’s contractual limitations. At the same time, such an
effort would take many years, and it seems unlikely that many libraries would want to go to
the trouble to have their collections digitized a second time. Moreover, while new scans may
provide some incremental benefit over use of existing ones (e.g., by using the most modern
digitization and OCR tools and thus improving accuracy), there is no inherent social value to
making every entity that wants to do or allow AI training invest in their own redundant
scanning.
A new digitization effort could target works that have not been yet digitized. This may be
particularly useful given that previous book digitization efforts, and the Google Books project
in particular, have focused heavily (though not exclusively) on libraries in English-speaking
countries. Additional digitization efforts might make more sense for books in those
languages that have not yet been digitized at a meaningful scale. Any new digitization effort
might therefore start with a mapping of the extent to which a books corpus in a given
language has been digitized.
6. Cross-cutting design questions
The workshops briefly touched on several cross-cutting design questions. While most
relevant for approaches that depend on limitations and exceptions, considerations of these
questions may be relevant across both tracks.
Would authors, publishers, and other relevant rightsholders
and creators have any ability to exclude their works?
One of the greatest sources of controversy in this area is the extent to which rightsholders of
copyrighted works, as well as the original creators of such works (e.g., book authors in this
context), should be able to prevent use of their works for AI training.
While a system that required affirmative “opt-in” consent would limit utility significantly (as
discussed above in the context of directly licensing works), a system that allowed some
forms of “opt-out” could still be quite useful to some types of AI development. In the context
of use cases like development of LLMs, the performance impact may not be so significant.
Since most in-copyright books are not actively managed, the majority of books would remain
in the corpus by default. The performance of LLMs can still be improved across various
dimensions without including, for example, the most famous writers or those who continue
to commercially exploit their works and may choose to exercise an opt-out. Perhaps the
potential for licensing relationships (and revenue) may induce some rightsholders to come
forward and begin actively managing their works. In such a case, uses that do require a
license may once again become more feasible once the rightsholder can be reached.
Workshop participants discussed different types of opt-outs that could be built. For example,
opt-outs could be thought of not in blanket terms, but only as applied to certain uses, for
example to commercial uses of the corpus, but not research uses. This could build on or
mirror the approach that the EU has taken in its text and data mining exceptions to
copyright.[38] Opt-outs might be more granular, by focusing on allowing or forbidding particular
uses or other categories of users, given that rights holders have many different sets of
preferences.
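As a purely illustrative sketch of what use-specific opt-outs might look like in practice, the snippet below filters a corpus for a given intended use. The record format, field names, and use categories are hypothetical, not drawn from any existing system.

```python
# A minimal sketch of per-use (rather than blanket) opt-outs. All names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class OptOutRecord:
    work_id: str                                       # identifier of the book
    excluded_uses: set = field(default_factory=set)    # e.g. {"commercial"}


def filter_corpus(work_ids, opt_outs, intended_use):
    """Return the works usable for `intended_use` ("commercial" or "research").

    Works with no opt-out record remain in the corpus by default, mirroring the
    observation that most in-copyright books are not actively managed.
    """
    excluded = {
        record.work_id
        for record in opt_outs
        if intended_use in record.excluded_uses
    }
    return [w for w in work_ids if w not in excluded]


# Example: one author opts out of commercial training only.
opt_outs = [OptOutRecord("book-123", {"commercial"})]
print(filter_corpus(["book-123", "book-456"], opt_outs, "commercial"))  # ['book-456']
print(filter_corpus(["book-123", "book-456"], opt_outs, "research"))    # both books
```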
Another question is who can opt particular works out of the dataset. This could
solely be an option for copyright holders, although authors might be allowed to exercise an
opt-out for their books even if they don’t hold the copyrights. This might create challenges if
the author and rightsholder disagree about whether to opt a particular book out of the
corpus. Another related issue is that individual books, such as anthologies, may comprise
works created (and rights held) by many different entities. The images in a book may have
come from third-party sources, for instance, or a compendium of poetry might involve many
[38] In fact, as noted above, to the extent an AI model developer intends for their model to abide by the
EU’s legal regime, they will have to abide by such opt-outs, at least if they are engaged in text and data
mining for commercial uses and/or are users outside of the covered set of research and heritage
institutions. A books data commons may incorporate opt-outs in particular to serve such EU-focused AI
developers.
different rightsholders and authors. Managing opt-outs for so many different interests within
one book may get overly complicated very fast.
In any event, creating an opt-out system will need some ways of authenticating whether
someone has the relevant authority to make choices about inclusion of a work.
Who would get to use the books data commons? For what?
A commons might be made publicly available to all, as has been done with datasets like The
Pile. Another possible design choice is to restrict access only to authorized users and to
enforce particular responsibilities or obligations in return for authorization. Three particular
dimensions of permitted uses and users came up in our discussions:
• Defining and ensuring acceptable and ethical use: Participants discussed to what
extent restrictions should be put on use of the resource. In the case of HathiTrust,
acceptable use is implicitly ensured by limiting access to researchers from member
institutions; other forms of “gated access” are possible, allowing access only to
certain types of users and for certain uses.[39] One can imagine more fine-grained
mechanisms, based on a review of the purpose for which datasets are used (a minimal sketch of such gating appears below, after this list). This
imagined resource could become a useful lever to demand responsible development
and use of AI; alongside “sticks” like legal penalties, this would be a “carrot” that
could incentivize good behavior. At the same time, drawing the lines around, let alone
enforcing, “good behavior” would constitute a significant challenge.
• Charging for use to support sustainability of the training corpus itself: While wanting
to ensure broad access to this resource, it is important to consider economic
sustainability, including support for continuing to update the resource with new works
and appropriate tooling for AI training. Requiring some form of payment to use the
resource could support sustainability, perhaps with different requirements for
different types of users (e.g., differentiating between non-commercial and
commercial users, or high-volume, well-resourced users and others).[40]
• Ensuring benefits of AI are broadly shared, including with book authors or
publishers: The creation of a training resource might lower barriers to the
development of AI tools, and in that way support broadly shared benefits by
facilitating greater competition and mitigating concentration of power. On the other
hand, just as concentration of technology industries is already a significant challenge,
AI might not look much different, and the benefits of this resource may still simply go
to a few large firms in “winner takes all-or-most” markets. The workshops discussed
how, for instance, large commercial users might be expected to contribute to a fund
that supported contributors of training data, or more generally to fund writers, to
ensure everyone contributing to the development of AI benefits.
[39] For examples of gated access to AI models, see https://huggingface.co/docs/hub/en/models-gated.
[40] As an analogy, consider for instance Wikimedia Enterprise, which “build[s] services for high-volume commercial reusers of Wikimedia content” and charges for that access. https://meta.wikimedia.org/wiki/Wikimedia_Enterprise.
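A minimal sketch of such gating is shown below, as referenced in the first bullet above: a registry of authorized users, each with a category and fee level, and a check that refuses access when the stated purpose doesn't match the account type. The registry, categories, and fee figures are all hypothetical.

```python
# Hypothetical registry of authorized users; categories and fees are illustrative only.
AUTHORIZED_USERS = {
    "univ-lab-01": {"category": "research", "annual_fee": 0},
    "startup-42":  {"category": "commercial", "annual_fee": 5_000},
    "bigco-ai":    {"category": "commercial-high-volume", "annual_fee": 250_000},
}


def grant_access(user_id: str, stated_purpose: str) -> bool:
    """Grant corpus access only to registered users whose stated purpose is allowed."""
    user = AUTHORIZED_USERS.get(user_id)
    if user is None:
        return False  # not an authorized user at all
    if user["category"] == "research" and stated_purpose != "research":
        return False  # research accounts cannot be used for commercial training
    return True


print(grant_access("univ-lab-01", "research"))    # True
print(grant_access("univ-lab-01", "commercial"))  # False
print(grant_access("unknown", "research"))        # False
```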
What dataset management practices are necessary?
No matter how a books data commons gets built, it will be important to consider broader
aspects of data governance. For example:
• Dataset documentation and transparency: Transparent documentation is important
for any dataset used for AI training. A datasheet is a standardized form of
documentation that covers the provenance and composition of the data, as well as its
management practices, recommended uses, and collection process (a minimal sketch of
such fields appears after this list).
• Quality assurance: Above, we note the many features that make books useful for AI
training, as compared with web data, for example. That said, the institution managing
a books commons dataset may still want to collect and curate the collection to meet
the particular purposes of its users. For instance, it may want to take steps to
mitigate biases inherent in the dataset, by ensuring books are representative of a
variety of languages and geographies.
• Understanding uses: The institution managing a books commons dataset could
measure and study how the dataset is used, to inform future improvements. Such
monitoring may also enable accountability measures with respect to uses of the
dataset. Introducing community norms for disclosing datasets used in AI training and
other forms of AI research would facilitate such monitoring.
• Governance mechanisms: In determining matters like acceptable and ethical use, the
fundamental question is “who decides.” While this might be settled simply by whoever
sets up and operates the dataset and related infrastructure, participatory
mechanisms — such as advisory bodies bringing together a broad range of users and
stakeholders of a collection — could also be incorporated.
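As a rough illustration of the documentation point in the first bullet above, a datasheet might be kept as machine-readable metadata along the following lines. The fields and figures here are placeholders; a real datasheet would follow an agreed template in far more detail.

```python
# Hypothetical, machine-readable datasheet fields for a books corpus (all values are placeholders).
books_commons_datasheet = {
    "name": "books-data-commons (hypothetical example)",
    "provenance": {
        "sources": ["library digitization partners", "public domain scans"],
        "collection_process": "scans + OCR, deduplicated by ISBN/title",
    },
    "composition": {
        "num_works": 1_250_000,
        "languages": {"en": 0.62, "fr": 0.08, "de": 0.07, "other": 0.23},
        "date_range": [1500, 2024],
    },
    "management": {
        "update_cadence": "quarterly",
        "opt_out_policy": "per-use opt-outs honored at each release",
    },
    "recommended_uses": ["LLM pretraining", "non-consumptive research"],
    "known_limitations": ["OCR errors in older scans", "English overrepresented"],
}
```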
7. Conclusion
This paper is a snapshot of an idea that is as underexplored as it is rooted in decades of
existing work. The concept of mass digitization of books, including to support text and data
mining, of which AI is a subset, is not new. But AI training is newly of the zeitgeist, and its
transformative use makes questions about how we digitize, preserve, and make accessible
knowledge and cultural heritage salient in a distinct way.
As such, efforts to build a books data commons need not start from scratch; there is much
to glean from studying and engaging existing and previous efforts. Those learnings might
inform substantive decisions about how to build a books data commons for AI training. For
instance, looking at the design decisions of HathiTrust may inform how the technical
infrastructure and data management practices for AI training might be designed, as well as
how to address challenges to building a comprehensive, diverse, and useful corpus. In
addition, learnings might inform the process by which we get to a books data commons —
for example, illustrating ways to attend to the interests of those likely to be impacted by the
dataset’s development.[41]
While this paper does not prescribe a particular path forward, we do think finding a path (or
paths) to extend access to books for AI training is critical. In the status quo, large swaths of
knowledge contained in books are effectively locked up and inaccessible to almost everyone.
Google is an exception — it can reap the benefits of its 40 million books dataset for
research, development, and deployment of AI models. Large, well-resourced entities could
theoretically try to replicate Google’s digitization efforts, although it would be incredibly
expensive, impractical, and largely duplicative for each entity to individually pursue their own
efforts. Even then, it isn’t clear how everyone else — independent researchers, entrepreneurs,
and smaller entities — will have access. The controversy around the Books3 dataset
discussed at the outset should not, then, be an argument in favor of preserving the status
quo. Instead, it should highlight the urgency of building a books data commons to support an
AI ecosystem that provides broad benefits beyond the privileged few.
[41] For other existing and past examples, one might look to the work of Europeana, https://www.europeana.eu/en, as well as the mountain of commentary on the failed class action settlement
between Google, the Authors Guild, and the Association of American Publishers — see e.g. the excellent
collection of court filings created by James Grimmelmann and colleagues (now archived at the Internet
Archive) — https://web.archive.org/web/20140425012526/http://thepublicindex.org/. The Settlement
expressly would have set up a “Research Corpus” for non-consumptive research. HathiTrust created a
Research Center, with the intention of becoming one of the hosts for the “Research Corpus.” The
Settlement was criticized and was ultimately rejected by the district court for both substantive reasons
(that is, what the settlement would specifically do) and procedural (in the sense of violating class-action
law, but also in a broader sense of representing a “backroom deal” without sufficient participation from
impacted interests). The Research Corpus was not a core locus of critique, though it did receive concern
in terms of providing too much control to Google, for example. Our purpose in mentioning this is not to
relitigate the issue, but rather to call out that design decisions of this sort have been considered in the
past.
Acknowledgements
Authored by Alek Tarkowski and Paul Keller ( Open Future), Derek Slater and Betsy Masiello
(Proteus Strategies) in collaboration with Creative Commons.
We are grateful to participants in the workshops, including Luis Villa, Tidelift and openml.fyi;
Jonathan Band; Peter Brantley, UC Davis; Aaron Gokaslan, Cornell; Lila Bailey, Internet
Archive; Jennifer Vinopal, HathiTrust Digital Library; Jennie Rose Halperin, Library Futures/
NYU Engelberg Center; Nicholas P. Garcia, Public Knowledge; Sayeed Choudhury; Erik
Stallman, UC Berkeley School of Law. The paper represents the views of the authors,
however, and should not be attributed to the workshop as a whole. All mistakes or errors are
the authors’.
This report is published under the terms of the Creative Commons Attribution
License.
Open Data:
Emerging trends, issues
and best practices
a research project about openness of
public data in EU local
administration
by Marco Fioretti
for the
Laboratory of Economics and Management
of
Scuola Superiore Sant'Anna, Pisa
This report is part of the “Open Data, Open Society” Project financed through the DIME network
(Dynamics of Institutions and Markets in Europe, www.dime-eu.org) as part of DIME Work
Package 6.8, coordinated by Professor Giulio Bottazzi
Copyright 2011 LEM, Scuola Superiore Sant'Anna. This work is released under a Creative Commons attribution license (http://creativecommons.org/licenses/by/3.0/)
Table of Contents
1. Introduction
2. Social and political landscape
   2.1. Wikileaks and the Open Data movement
   2.2. Data Openness in EU
   2.3. Open Data in Latin America, Asia and Africa
3. Emerging trends and issues related to Open Data
   3.1. Cost of not opening PSI is increasing
   3.2. Creative, unforeseen uses of local Open Data increase
   3.3. Legal issues remain crucial
   3.4. The price of digitization
   3.5. The nature of Open Government and the relationship between citizens and Government
   3.6. Clearer vision of the real risks and limits of Open Data
      3.6.1. Data alterations and financial sustainability
      3.6.2. Real impact of data manipulation or misunderstanding
      3.6.3. Unequal access
      3.6.4. Lack of education to data
      3.6.5. Lack of public interest
      3.6.6. Unprepared Public Administrators
   3.7. The privacy problem
   3.8. Need to better define what is Public Data
4. Conclusion: seven Open Data strategy and best practices suggestions
   4.1. Properly define and explain both Open Data and Public Data
   4.2. Keep political issues separated by economics ones
   4.3. Keep past and future separate
   4.4. Impose proper licensing and streamline procurement
   4.5. Educate citizens to understand and use data
   4.6. Focus on local, specific issues to raise interest for Open Data
   4.7. Involve NGOs, charities and business associations
5. Bibliography
1. Introduction
This report is the final deliverable of the Open Data, Open Society research project. It follows the
publication of the Open Data, Open Society report, finished in late October 2010 and published in
early January 2011. That first report focused on explaining the critical importance of digital data in
contemporary society and business activities; defining Open Data; giving examples of their
potential, especially at the local level, for transparency and economic activities; and, finally,
summarizing some general best practices.
This second report looks at what happened in the Open Data arena after October 2010. After some
considerations on the general social and political background in late 2010/early 2011, it is divided
into two main parts. The first describes some emerging trends and issues related to Open Data that
got little or no coverage in the first report. The second part discusses some practices and actions to
follow to deal with those trends and issues.
2. Social and political landscape
It is worthwhile to begin by mentioning several events, which happened between the end of 2010 and
the first months of 2011, that can help us understand what the place and role of Open Data will be in
the future, as well as the challenges faced by its advocates.
The first two are the Spanish "Indignados" and the Arab Spring. The first movement has among its
goals "a change in society and an increase in social awareness" . The Arab Spring, as L. Millar put
it on the New Zealand Computer Society website , "demonstrated the potency of technology to
reflect citizens' views of government systems that are not transparent." As a consequence, noted the
Afrinnovator blog, "we have seen from the civil disobedience in the North of Africa and the Middle
East, the appetite for more accountable and transparent government will only grow from here on".
From this analysis it looks like, in a way, both the Indignados and the participants in the Arab
Spring are (also) asking for Open Data, even if they aren't using the term, and many participants in
these grassroots movements may not even know its definition, which was born inside hacker and Public
Administration circles.
Two other important events that, in different ways and at different levels, prove the importance of
Open Data are the Fukushima nuclear accident and the Cablegate, which we'll analyze in the next
paragraph. Whatever one may think about nuclear power, Fukushima reminded us how important
total transparency and accountability are in the management and maintenance of all power sources,
and in the decision-making processes that create the corresponding public policies.
For now, we'll note how all these events seem to hint that structural need and bottom-up
demand for Open Data are mounting everywhere, even in cultural contexts very different than those
in which Open Data was born, and even if sometimes they are not mentioned explicitly or
consciously. Even in Western countries, high-level endorsements of the transparency and
governance models that inspire Open Data are increasingly coming from positions very different
from those in which the movement started. In 1931 Pope Pius XI wrote, in the Encyclical Quadragesimo Anno
that:
80. The supreme authority of the State ought, therefore, to let subordinate groups
handle matters and concerns of lesser importance, which would otherwise dissipate its
efforts greatly. Thereby the State will more freely, powerfully, and effectively do all
those things that belong to it alone because it alone can do them: directing, watching,
urging, restraining, as occasion requires and necessity demands. Therefore, those in
power should be sure that the more perfectly a graduated order is kept among the
various associations, in observance of the principle of "subsidiary function," the
stronger social authority and effectiveness will be the happier and more prosperous the
condition of the State.
This is the principle of subsidiarity, often summarized in a way that may sound familiar to many
Open Data advocates: "What men can do by themselves with their own resources can't be taken
away from them and assigned as a task to society". In March 2011, journalist Guido Gentili made
just this connection. After noting that the principle was also introduced in the Italian Constitution by
the 2001 reform of article 118, he concluded that subsidiarity as a strategy for development isn't an
English invention, and that the "Big Society" vision (a proposal in which Open Data is key) would do
good to Italy too.
At a more practical and economic level, digital information continues to increase. In spite of
mounting cost pressures, large public and private organizations have to maintain massive amounts
of structured and unstructured data that keep growing, both for their own internal needs and simply
to comply with government regulations. At the same time, signals continue to arrive that traditional
public services and the whole welfare state won't remain sustainable for long with traditional means,
strengthening the search for radical, innovative and cost-effective
solutions.
Besides costs, another practical driver and justification for Open Data that is becoming more and
more concrete over time is damage control. In a world that produces digital data without
interruption, uncontrolled and unpredictable data releases are facts of life that are very hard to
predict, practically impossible to avoid, and increasingly common. Opening public government data,
that is, providing plenty of officially verified information, therefore also becomes a damage control
solution, preventing or at least minimizing damage from such uncontrolled releases. Without official
Open Public Data, individual citizens, political parties or other organizations will start to process
and compare (if they aren't doing so already...) data from unofficial sources anyway, maybe from different
countries. In such cases it will be unavoidable that, sometimes and even in good faith, they reach wrong
conclusions. This is not some theoretical possibility far in the future, as this real-world example
(from a comment to an Open Data discussion in an Italian blog) proves:
"on the [non italian] Geonames website you can download geo-referenced data
about... 47000 Italian municipalities. That worries me, because there are only 8094 of
them. Besides, I grabbed a few random data about population, and I can guarantee you
that not one was right. What should be done in such cases?"
From an Open Data perspective, all these recent stories have (at least) one thing in common: they
suggest that, considering their current needs and problems, current societies want and need more Open
Data than they already have.
2.1. Wikileaks and the Open Data movement
During the 2010/2011 winter the discussions around the Cablegate and other documents published
by Wikileaks have, on some occasions, included hostility towards Open Data. This is a consequence
of a more or less conscious mixing of the two themes, because in a very general sense, both Open
Data and Wikileaks are about transparency, accountability and democracy.
As far as this study is concerned, two conclusions can be drawn from the Cablegate/Wikileaks
scandal.
The first is that, in practice, it is necessary to find an equilibrium between secrecy and
transparency whenever government activities are concerned. Citizens must be able to know what
the state is actually doing but sometimes, be it for careful evaluation of all the alternatives or
because of security, it must be possible to work behind closed doors, at least temporarily. We'll
come back to this point later in this report.
The second conclusion is that, while certainly both Open Data and Wikileaks are about openness
and transparency in politics, not only are there deep differences between the two ideas but, in our
opinion, the Wikileaks experience proves the advantages of Open Data.
Was Wikileaks right to publish the cables? Were the specific facts and behaviors uncovered by
Cablegate right or wrong? The answers to these questions are outside the scope of this document.
Here we only wish to point out that Cablegate and Wikileaks, at least in the form we've known them
so far, have been about:
• reacting to problems after they occurred
• without any intervention and involvement of the parties and organizations that may have
behaved improperly
Open Data, instead, is about prevention of errors, abuses and inefficiencies, through conscious and
continuous collaboration of citizens and government officials during day-to-day operations, if not
before their beginning.
Of course, citizens must always check that they aren't getting incomplete or biased data. But in any
case, Open Data means that the involved government officials aren't just prepared to see that data
published; they know and accept it from the start. In such a context, some risks associated with
Wikileaks, like the fact that the leaker lacks the means to influence the downstream use of the
information, and therefore may harm anybody connected to the leaked information, are almost non-
existent.
Above all, unlike the content of most Wikileaks documents (wartime military reports, for example),
Open Data are almost always data that should surely be open, and that almost never contain any personal
information. In summary, whatever the conclusions about Wikileaks are, they could not be
conclusions against Open Data, because there are too many differences between the two
movements.
2.2. Data Openness in EU
Both the interest and the need for data openness at the European Union level remain high. Here,
without making any complete analysis, we'll only report and comment on a few relevant episodes.
While studies continue to point to the political and economic advantages of Open Data, great
inefficiencies and delays still keep the achievable time and cost savings a distant goal for
the European Union.
All the principles of the Open Declaration (collaboration, transparency, empowerment) have been
declared key areas of action of the new EC eGov action plan. Particularly important, as explained
by David Osimo in EU eGov action plan published: the good, the bad and the unknown, are the
actions on Open Data (an EU portal and a revision of the EU PSI directive) and on citizens' control
over their data. However, the Action Plan contains no reference to the need for a more open and
collaborative governance.
In the case of European Structural Funds, as Luigi Reggi reported in March 2011:
there is no single point of access to the data. Hundreds of Managing Authorities are
following different paths and implementing different information strategies when
opening up their data.
Many databases (often simple PDF lists) [...show...] huge variation not only in
the way they can be accessed but also in content and quality of data provided.
... [...The results of...] an independent web-based survey on the overall
quality of data published by each Managing Authority responsible for the 434
Operational Programmes approved in July 2009... can be summarized as follows:
The use of open, machine-processable and linked-data formats have unexpected
advantages in terms of transparency and re-use of the data by the public and private
sector. The application of these technical principles does not need extra budget or major
changes in government organization and information management; nor does it require
the update of existing software and infrastructures. What is needed today is the
promotion among national and local authorities of the culture of transparency and the
raising of awareness of the benefits that could derive from opening up existing data and
information in a re-usable way.
The European Cohesion Policy is only halfway to accomplishing a paradigm shift to
open data, with differences in performance both between and - in some cases - within
European Countries.
Things don't go much better for the European Union in the energy field. Carlo Stagnaro wrote in
EU Energy Orwellianism: Ignorance Is Strength:
Energy is an active area of EU public policy. Yet authorities are not revealing
information (data it surely has) that is crucial to determine whether its policies are
distorting the market and come at too high a cost to society. This is a major fault in
Europe's credibility in advancing its policy goals, as well as a serious limitation to the
accountability of the policy making process
We realized that, while strongly supporting green investments the EU does not know, or
does not make it public, how much is spent every year on green subsidies... With regard
to green jobs, several estimates exist, but no official figure is provided.
More recently... I discovered that Eurostat does not tell how much coal capacity is
installed - as opposed to natural gas- or oil-fueled generation plants. It is possible to
know how much coal is used, but not the amount of fixed capital which is invested in
coal plants. If data are not available, every conclusion is questionable because it relies
on assumptions or estimates.
2.3. Open Data in Latin America, Asia and Africa
Several countries in Latin America are studying and experimenting with Open Data both at
the government and at the grassroots level. The same is happening, on a much smaller scale, in a
few parts of Asia and Africa. On average, the volume of these Open Data experiments and the level
of local interest and awareness around them are still lower than in Europe and
North America. In spite of this, we suggest that it is important for public officials and civic activists
in Western countries to follow these developments closely. The reason is that they may turn into
very useful test beds for all the strengths and limits of Open Data, especially those not yet
encountered where the movement was born.
In fact, the original discourse and arguments around Open Data are heavily Western-centric. The
problem they want to solve is how to make democracy work better in countries where it already
exists and which share a great amount of history and cultural/philosophical values.
Other countries face very different challenges, from the philosophical level to the practical one. A
common issue in developing countries, for example, is that there is very little to open simply
because much PSI (Public Sector Information) doesn't exist in digital format yet. Therefore, the first
thing to do is to create data, normally through outsourcing and crowdsourcing.
Other issues, which will be discussed in detail in other sections of the report because they are also
present in Europe in different forms, are related to the lack of equal opportunities for access to data and
serious fears (sometimes concrete, sometimes caused by confusion about what should be open and
how) that data will be used against citizens. A commenter to Gurstein's Open Data: Empowering
the Empowered or Effective Data Use for Everyone? said:
in Delhi and Mumbai, mobs and rioters managed to get information about particular
identity groups through voter rolls: openness is, in certain situations, a precarious
virtue. It is almost certain that Open Data would be used to rig election but here again
openness is not the issue, they would find it anyway...
So far, the main interest in Open Data in Asian countries seems limited, so to speak, to its
effects on transparency in politics. At a two-week programming contest held at the end of 2010 in
Thailand, for example, one of the most appreciated entries was a software scraper of Thailand's
Member of House of Representatives website, which made it possible for everybody to create
applications using those data.
Right now, one of the most active Asian countries in the Open Data arena is India, which also
signed an Open Government partnership with the USA in November 2010. In January 2011 the
Indian Congress Party announced plans for a new law to fight corruption among public servants and
politicians. Anti-corruption websites (including ones in local dialects) like
Indiaagainstcorruption.org already existed, including one, Ipaidabribe.com, that collected more
than 3,000 reports of graft from citizens in its first four months.
As happens in Asia, Latin America too is currently focused, at least outside Public
Administration circles, on how to open public data to achieve actual transparency. This appears
even from the way many projects are labeled, that is, "Civic Information" instead of Open Data
(which is an idea starting from data reuse) or Open Government.
The reason is that even where good Freedom of Information laws exist in Latin America, they still
have too little practical effect. Mexico, for example, already has a digital system to manage
Freedom of Information requests, but there are reports of complaints filed against municipal
officials that either have no effect at all, or aren't possible in the first place, because relevant
information has not been updated in years, or omits key data like (in the case of budget reports)
"descriptions of how the money was spent".
Even with these difficulties, the Latin America Open Data/Civic Information landscape is active
and definitely worth following. The list of interesting Civic Information projects in Latin
America includes (from Sasaki's Access to Information: Is Mexico a Model for the Rest of the
World?):
• Mexico
• Mexican Farm Subsidies - an online tool to analyze how the federal government
allocates those subsidies
• Compare Your School: compares aggregate test results from any school with the
municipal, regional, and national averages
• Rebellion of the Sick: built for patients with chronic diseases whose expenses are not
covered by the government subsidized health coverage.
• Argentina: Public Spending in Bahía analyzes how public funds are used.
• Colombia: Visible Congress monitors the actions of the Colombian congress
• Brazil
• Eleitor 2010: a website to submit reports of electoral fraud during the Brazil 2010
elections
• Open Congress: a tool for political scientists to track the work and effectiveness of
the Brazilian congress
• Paraguay: Who Do We Choose?: lists profiles of all candidates for many public posts.
In Brazil, the principle that "what is not confidential should be available on the Internet in the open
data format" is already discussed and, in principle, accepted, by some departments of the Brazilian
federal government. However, the preferred practice for now is (if there are no other obstacles) to
only publish data that have been explicitly requested by some citizens.
A report presented in May 2011 at the First Global Conference on Transparency Research
mentioned a couple of Open Data issues in Latin America that are worth noting, because they're
present even in Europe and North America, in spite of the different historical and social
background:
• "Better coordination is needed between right to information campaigners and open data
activists."
• "If activist manage to target particular topics to add "value" to the discussion, demand for
open data could eventually increase in the region."
In Africa, mobile phones are much more available, and more essential, than computers with Internet
access, often bypassing the need for real desktop PCs with many applications. Therefore, from a
purely technical point of view, transparency, accountability and efficiency in government are
quickly becoming accessible to most African citizens through mobile networks rather than through
the "traditional" Internet. However, there are still too few public departments and procedures that
use digital documents and procedures on a scale large enough to generate meaningful volumes of
digital data that could then be published online.
While we write, Kenya is laying the legal groundwork to support Open Data, which the Permanent Secretary
for Information and Communications, Dr. Bitange Ndemo, is reported to have been championing
for quite some time. In practice, big challenges remain for Open Data usage in Kenya. The easiest
one to solve is the technical one, that is, finding skilled people who can package the data in ways that the
public can consume (even on mobile phones...). The real problem, however, is the fact that
(summarizing from Thinking About Africa's Open Data):
There is a lot of Kenya data but it isn't accessible. The entities that hold the most public
and infrastructure data are always government institutions. Getting information from
them can be very hard indeed. We don't know who to go to to get the data we need, and
there is no mandate to support one group to centralize it.
Kenya's own OpenData.go.ke website has only ever seen a small handful of data sets,
none of which are now (early April 2011) available anymore. Groups like the Ministry
of Education might publish some information on schools, but they won't give anyone
the location data.
3. Emerging trends and issues related to Open
Data
One of the most common activities for Open Data activists at this moment is the creation of
country-wide catalogs of all data sources, to facilitate the discovery and correlation of independent
data sets. Normally, all initiatives of this type are announced on the Open Knowledge Foundation
blog and/or its data hub CKAN. Another relevant development is the publication of an Open Data
Manual that "can be used by anyone but is especially designed for those seeking to open up data,
since it discusses why to go open, what open is, and the how to 'Open' Data." Activists in several
European countries have already published local versions of the manual, or equivalent documents.
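Because CKAN exposes a standard web API, such catalogs can be queried programmatically. The sketch below searches a CKAN instance for datasets matching a keyword; the base URL is a placeholder and the snippet assumes nothing beyond CKAN's documented package_search action.

```python
# A minimal sketch of querying a CKAN-based catalog; the instance URL is a placeholder.
import json
import urllib.parse
import urllib.request

CKAN_BASE = "https://demo.ckan.org"  # any CKAN instance exposes the same endpoint


def search_datasets(query: str, rows: int = 5):
    """Return the titles of catalog datasets matching `query`."""
    url = (
        f"{CKAN_BASE}/api/3/action/package_search"
        f"?q={urllib.parse.quote(query)}&rows={rows}"
    )
    with urllib.request.urlopen(url) as response:
        payload = json.load(response)
    return [result["title"] for result in payload["result"]["results"]]


print(search_datasets("transport"))
```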
Against this background, several interesting issues, some of which were anticipated in the Open Data,
Open Society report, are coming to full light. They are presented, one at a time, in the following
sections of this chapter.
3.1. Cost of not opening PSI is increasing
Much has been said on the economic benefits of opening public sector information, and much more
remains to be said and studied. One part of this issue that is becoming more evident over time is that
Open Data are the simplest, if not the only, way to save Public Administrations from the costs that
they have already (and rightfully!) forced themselves to bear, through assorted laws and official
regulations. This is explained well in the report from LinkedGov about the economic impact of
open data:
(p. 2) "As the costs of disseminating and accessing information have declined, the
transactions costs associated with charging for access to information, and controlling
subsequent redistribution have come to constitute a major barrier to access in
themselves. As a result, the case for free (gratis) provision of Public Sector Information
is stronger than has already been recognized.
Eaves provides a practical example from Canada in Access to Information is Fatally Broken… You
Just Don't Know it Yet: the number of Access to Information Requests (ATIP) has almost tripled
since 1996. Such growth might be manageable if the cost of handling each request were dropping
rapidly, but it has more than quadrupled.
Unfortunately, alternatives like charging for access to data or cutting the budget for providing them
to citizens remain very common in spite of their negative effects. According to Eaves, the first
practice has already caused a reduction in the number of freedom of information requests filed by
citizens, while budget cuts invariably and greatly delay processing times.
3.2. Creative, unforeseen uses of local Open Data increase
Proof continues to arrive that, as cited in the Open Data, Open Society report, "Data is like soil",
that is, valuable not in itself but because of what grows on it, often in ways that the landowner
couldn't imagine. Here is an example from Day Two: Follow the Data, Iterating and the $1200
problem:
Ed Reiskin noticed a problem with street cleaning. Some trucks would go out, coming
back with little or no trash depending on the day and route they took. After getting the
tonnage logs, his team quickly realized that changing certain routes and reducing
service on others would save money (less gas, parts, labor) and the environment (less
pollution, gas consumption, water). A year later, the department realized a little over a
million dollars in savings. The point? Follow the data.
The value embedded in data isn't only economic or political, but also social. Here are a few
examples.
At the Amsterdam fire brigade, once a fire alarm starts, all sorts of data about the location and the
route to the emergency is collected, to maximize the probability of saving lives and property:
constructions on the way, latest updates from OpenStreetMap, the type of house and if possible
more data such as construction dates, materials, people living there and so on.
Using the geographical coordinates embedded in online photo databases like Flickr, digital
cartographer Eric Fischer creates maps that highlight people's behavior. For example, he documented
how, in Berlin, most locals tend to stay in the same neighborhoods and don't go to West Berlin or to
the outskirts of the city. This information has economic value, journalist Kayser-Bril noted: "You
can then sell this for instance to businessmen who want to open a shop in Berlin for tourists, and
telling them where to go and where not to go."
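The kind of aggregation behind such maps can be sketched very simply: bin geotagged photo coordinates into a grid and count photos per cell. The coordinates below are made-up placeholders, not real Flickr data.

```python
# A minimal sketch of grid-based density counting for geotagged photos (placeholder data).
from collections import Counter

photos = [
    (52.5200, 13.4050),  # (latitude, longitude) of each geotagged photo
    (52.5205, 13.4049),
    (52.5301, 13.3900),
]


def density_grid(points, cell_size=0.01):
    """Count photos per grid cell of roughly `cell_size` degrees."""
    cells = Counter(
        (round(lat / cell_size), round(lon / cell_size)) for lat, lon in points
    )
    return cells.most_common()


print(density_grid(photos))  # busiest cells first
```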
Norwegian transport company Kolumbus has equipped 1,200 bus stops with barcodes in the square
QR format, which can encode text or URLs. Scanning those codes with a free software application for
smartphones loads a website that lists upcoming bus departure times. Later, Kolumbus partnered
with a project called "Tales of Things" to allow people to leave messages for each other (or just for
the world) at the bus stops. Scanning the QR code now allows people to see not just the bus
timetable, but also the notes other travelers have left on that stop, including "what's nearby, who's
waiting for whom, what number can you call for a good time. It's a cross between bus stop
Facebook and digital graffiti", that happened thanks to the openness of the original bus stop data.
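Generating such a QR code is itself a small exercise in data reuse. The sketch below, which assumes the third-party qrcode Python package (with Pillow) is installed, encodes a bus stop's timetable URL as a printable image; the URL scheme is a made-up placeholder, not Kolumbus's actual one.

```python
# A minimal sketch of producing a QR code for a bus stop's timetable page.
# Assumes `pip install qrcode[pil]`; the URL below is a placeholder.
import qrcode

stop_id = "11030471"
url = f"https://example.org/stops/{stop_id}/departures"

img = qrcode.make(url)           # returns an image of the QR code
img.save(f"stop_{stop_id}.png")  # printable sticker for the bus stop
```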
The Social Life of Data Project will study instead how particular datasets have been used, who used
them, how those people are connected and what conversations happen around Open Data.
3.3. Legal issues remain crucial
Proper licensing of Public data is essential. The more Open Data activities continue, the clearer this
rule becomes. What distinguishes Open Data from "mere" transparency is reuse. Paraphrasing
Eaves, until a government gets the licensing issue right, Open Data cannot bring all the possible
benefits in that country. If there are no guarantees that public data can be used without restriction,
very little happens in practice, and when it happens it may be something against the public interest.
Canadian company Public Engines Inc, which is paid by local police departments to collect, process
and analyze official crime data, also publishes online, with a proprietary license, anonymized
summaries of those data. When in 2010 another company, Report See Inc, scraped those data from
their website to reuse them, Public Engines sued.
Reporting this, D. Eaves rightly points out that both companies are right: one is trying to protect its
investment, the other is simply trying to reuse what IS public data, by getting it from the ONLY
place where it's available. This is what happens when public officials leave the ownership of public
data to the third parties hired to collect them. Please note that, in practice, it makes very little
difference whether those third parties are private, for-profit corporations or even other Public
Administrations. Unless, of course, there are national laws already in place that define in advance
the license of all present and future Public Data, no matter how and by whom they were generated,
those data can be lost to society at any moment. In all other cases, the legal status of data
will be either officially closed and locked, or uncertain enough to prevent most or all reuses. In
February 2011, the news came that, even if they weren't the original copyright holders, Public
Engines had been able to put together enough legal claims to convince Report See to give up.
Disputes like this should not happen and would not happen if all contracts regarding collection and
management of PSI clearly specified that all the resulting data either go directly into the public
domain (after being anonymized if necessary, of course) or remain exclusive property of the
government. Even ignoring data openness, this is essential for at least three other reasons. The first
is to protect a public administration from having to pay twice for those data, if it needs them again in
the future for some other internal activity not explicitly mentioned in the initial contract. The
second reason is to not spend more than what is absolutely necessary to respond to public records
requests, that is to comply with Freedom of Information laws.
The final reason is to guarantee quality assurance and detection of abuses at the smallest cost, that is,
by sharing that cost with all the citizens who use the public services based on those data. A real-world example
of this point comes from the "Where's My Villo?" service in Brussels. Villo! is a city-wide bike-
sharing scheme started in May 2009 through a partnership with a private company: JCDecaux
finances the infrastructure and operates it, in exchange for advertising space on the bikes
themselves and on billboards at the bike sharing stations. The availability of bikes and parking
spaces at each station is published online in real time on the official Villo! website.
When the quality of service decreased, some citizens started "Where's My Villo?", another website
that reuses those data to measure where and how often there aren't enough available bikes and
parking spaces, in a way that made it impossible for JCDecaux to deny the problems and stimulated
it to fix them. Both this happy ending and the fact that it came at almost no cost to the city, because
citizens could monitor the service by themselves, were possible just because the data from the
official website were legally and automatically reusable.
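A monitoring site of this kind can be sketched in a few lines: periodically fetch the published station-availability data and record every station with no bikes or no free slots. The feed URL and field names below are placeholders, since the actual Villo! data format is not described here.

```python
# A minimal sketch of shortage monitoring over an open bike-sharing feed (placeholder endpoint).
import json
import time
import urllib.request

FEED_URL = "https://example.org/villo/stations.json"  # placeholder feed URL


def record_shortages():
    with urllib.request.urlopen(FEED_URL) as response:
        stations = json.load(response)
    now = time.strftime("%Y-%m-%d %H:%M")
    for station in stations:
        if station["available_bikes"] == 0:
            print(f"{now}: no bikes at {station['name']}")
        if station["free_slots"] == 0:
            print(f"{now}: no free slots at {station['name']}")


# Called periodically (e.g. every few minutes) to build up shortage statistics.
record_shortages()
```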
3.4. The price of digitization
In practice, public data can be opened at affordable costs, in a useful and easily usable way, only if
it is in digital format. As a consequence of this fact, demand for Open Data exposes a problem that
already existed and must be fixed anyway, regardless (again) of openness. Any substantial increase
of efficiency and reduction of the costs of Public Administrations can only happen when data and
procedures are digitized. The problem is that such digitization (which, obviously, must happen
anyway sooner or later) can be very expensive and we are only now starting to really realize how
much. Actual, material costs are not the worst problem here. Activities like semi-automatic
scanning of paper documents, or typing their content again into some database, are relatively low,
one-time expenses that are also very easy to calculate and budget in advance with great precision.
The real costs are those at the social, cultural, historical and workflow reorganization level. What is
really difficult, and expensive in ways that are hard to predict, is to fit into digital, more or less
automatic procedures and file templates the formats, habits and customs developed, maybe over
several centuries, in the analog, pre-computer world. Developing countries are good case studies
from this point of view, because they are often leapfrogging from oral tradition straight to
computers in all fields, not just e-government.
Land ownership in India, discussed by Gurstein in 2010, is a perfect example of the problems
carried by digitization, problems that requests for Open Data only expose without creating them. Digitization
can certainly increase efficiency, transparency and economic activities, but fully achieves these
goals only by:
• standardizing as much as possible all concepts, formats and procedures.
• replacing completely, at least in standard day to day procedures, whatever other records and
ways of working existed before
Gurstein wrote:
"The problem of open access in the case of land records in India is... the manner in
which the data tends to get encoded. Typically, digitization of land records would mean
either scanning the record as it is, or inputting all the data on the record as it is,
without changing any fields. But ways of maintaining land records are highly diverse...
Private ownership is not the only means of holding a land parcel. When it comes to
land ownership, for example, it may eliminate the history of land, how were sub-
divisions and usufruct rights negotiated and enforced."
Another risk of digitization and e-government (without openness, that is) is lack of contact between
citizens and institutions:
"Prior to digitization, land records in India were available to people who made
requests with village accountants for them. .. after digitization of several services,
village accountants no longer personally visit the villages they are in charge of... What
has happened with digitization is a reorganization of earlier forms of social and
political relations. Accountability has moved from the immediate village level"
Of course, all these problems existed well before computers and return every time the political or
social order changes. The demand for Open Data is only increasing, by orders of magnitude, the
number of times we encounter them.
3.5. The nature of Open Government and the relationship
between citizens and Government
Open Data are an essential part of Open Government. Almost everybody agrees with this.
Agreement on what exactly defines Open Government is, however, less universal. In January 2011
Lucas Cioffi, replying to Alex Howard, wrote:
The biggest difference between Gov 2.0 and OpenGov seems to be how they approach
transparency. Gov 2.0 is about transparency through open data and the "government as a
platform" idea. "Open Government" is about Transparency for the sake of
accountability, but not necessarily interaction, cooperation and reuse of data outside the
government.
[who advocates] Open Data does so in order to make it accessible to citizens
rather than to hold government accountable. This is not to say that one approach is
better than another, but this is to say that there seem to be two very different
motivations for advocating for transparency, and they do seem to correlate to whether
people label themselves as part of Gov 2.0 or part of OpenGov.
In general, reflection and debate on this point is accelerating. At the moment, some characteristics
of Open Government on which there is more or less agreement are that Open Government is about:
• deliberation, choice, influence on decisions and participation as a common citizen
• letting all citizens use technology to participate, monitor and define government activities.
In other words, Government is really Open when it's based on interaction, not only on some
set of infrastructures and methods imposed top-down
• diffused, seamless conversations, that are only possible with digital technologies, online
social networks and so on, between public employees and citizens.
The obvious potential limit of these definitions is that they rely on a big, still largely unknown
factor, that is, actual citizen participation. When data are opened, the problem becomes how to have
everybody use them, in order to actually realize Open Government as defined above. This issue will
be explored in detail in the next paragraphs, but we can already say that Open Data are highlighting
the critical, weak points in the present and future relationship between citizens and governments.
While citizen participation is essential, especially in times of social and economic crisis, achieving
it on a large scale won't be easy. Frustration and lack of trust in institutions in many countries are
high, so it's no surprise when people express doubts that opening government data will help much
in fixing things.
3.6. Clearer vision of the real risks and limits of Open Data
Open Data, we already said, is about reuse. The point is, at least when the goal is Open Government
and transparency in politics, reuse by whom? There is no automatic cause-effect relationship
between Open Data and real transparency and democracy. On the contrary, several problems may
occur, if administrators and citizens don't pay close attention.
3.6.1. Data alterations and financial sustainability
Some concerns about the limits of Open Data are about what may happen, or stop happening, before
the data are published online. The most common concerns of this type are (from Open Public Data:
Then What? - Part 1):
1. Opening up PSI causes those data to not be produced anymore, or to be only produced as
private property by private corporations, because the public agencies whose job was to
produce those data can't sell them anymore.
2. Total accessibility of data provides more incentives to tinker with them, at the risk of
reducing trust in institutions and inhibiting decision-making even more than today.
Data manipulation is the topic of the next paragraph. Speaking of costs, a point to take into account
is that, once data are open and routinely used and monitored by as many independent users as possible,
even the cost of keeping them up to date may be reduced considerably: in other words, in the
medium/long term Open Data may reduce the need to periodically perform complete, that is very
expensive, studies and surveys to update a whole corpus of data in one run.
Besides, and above all, even if opening data always destroyed any source of income for the public
office that used to create and maintain them, this problem would only exist for the PSI datasets that
are already sold today. Such data, even if of strategic importance as is the case with digital
cartography, are only a minimal fraction of all the PSI that could and should be opened to increase
transparency, reduce the costs of Government and stimulate the economy. In all these other cases:
• the money to generate the data already arrives from some source other than sales and
licensing (but even with those data it may be possible to generate them by crowdsourcing,
thereby reducing those costs!)
• the only extra expense caused by publishing those data online (assuming they're already
available in some digital format, of course!) would be the hosting and bandwidth costs, which
may be greatly reduced by mirroring and other technical solutions like torrents, already
widely used to distribute Free/Open Source Software (FOSS) through the Internet.
3.6.2. Real impact of data manipulation or misunderstanding
The fix for the risk that data are manipulated is not only to open government data and procedures, but
to simplify the latter (which eventually also greatly reduces costs) as much as possible. An abundance
of occasions to secretly play with data and with how they are managed is a symptom of excessive, or
peak, complexity: again, problems and risks with Open Data are a symptom of a [pre-existing] problem
that is somewhere else.
Regardless of the real probability of data alterations before they are published, the major problem
happens afterwards. We already mentioned in the first report that, while correct interpretation of
public data by the majority of average citizens is absolutely critical, the current situation, even in
countries with (theoretically) high literacy and Internet access rates, is one in which most
people still lack the skills needed for such analysis. Therefore, there surely is room for both
intentional manipulation of PSI and for misunderstanding it. After the publication of the first report,
we've encountered several examples of this danger, which are reported in the rest of this paragraph.
Before describing those cases, and in spite of them, it is necessary to point out one thing. While the
impact of the Open Data activity of 2010 on the general public (in terms of raising interest and
enhancing participation) has been, in many cases and as of today, still minimal, it is also true that there
has been no big increase in demagogy, more or less manipulated scandals or conflictual discussion
caused by Open Data. There has certainly been something of this in the Cablegate case, but that's not
really relevant because, as we've already explained, what Wikileaks did is intrinsically different
from Open Data. So far, negative or at least controversial reactions caused by manipulation and
misunderstanding of Open Data haven't happened on such a scale as to justify not opening PSI.
This said, let's look at some recent examples of misunderstanding and/or manipulation based on
(sometimes open) public digital data.
Nicolas Kayser-Bril mentioned a digital map of all the religious places in Russia, that shows
[also] "mosques that are no longer in use, so as to convey the idea that Muslims were invading
Russia."
In September 2010 the Italian National Institute of Geophysics and Vulcanology officially declared
that it was evaluating whether to stop publishing Italy's seismic data online,
as it had been doing for years. The reason was that, following the March 2009 earthquake in
Italy, the data were being used to "come to conclusions without any basis at all", both by the press,
to sell more, and by local politicians trying to hide the lack of preventive measures, like enforcing
anti-seismic construction codes.
Still in Italy, Daniele Belleri runs a Milan crime mapping blog called "Il giro della Nera", making a
big effort to explain to his readers the limits of the maps he publishes, and the potential for
misunderstanding if they are used without preparation, or with the wrong expectations. This is a
synthesis of Belleri's explanation, also covered on other websites, that is applicable to any
map-based PSI analysis and presentation, not just to crime mapping:
In general, a map is just a map, not reality. It doesn't always and necessarily provide
scientific evidence. Crime maps, for example, are NOT safety maps, as most citizens
would, more or less consciously, like them to be: a tool that tells them where to buy their
house according to the level of criminality in the district.
When used in that way, crime maps can give unprepared users two false impressions:
the first, obvious one, is that certain areas are only criminal spaces, exclusively
inhabited by criminals. The other is to encourage a purely egoistic vision of the city,
where the need for safety becomes paranoia and intolerance and all that matters is to be
inside some gated community. This doesn't lower crime levels at all: the only result is to
increase urban segregation.
To make things worse, crime data that are not analyzed and explained properly don't just contribute to
strengthening egoistic attitudes and to locking the urban areas that are actually the most plagued by crime
into their current difficult state indefinitely. Sometimes, they may even perpetuate beliefs that are,
at least in part, simply false. Of course, when such beliefs not grounded in facts already exist,
open crime data can help, by finding and proving the gaps between the perception of criminality and
reality. Belleri, for example, notes that residents of Milan consider the outskirts of their city more
dangerous than downtown Milan, while Londoners think the opposite about London... but in both
cities the truth emerging from the data is exactly the opposite (at least for certain categories of crime) of
what their residents believe.
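To make that last point concrete, here is a minimal sketch, in Python and with entirely invented district names and figures, of how open crime data and survey data could be compared to expose the gap between perceived and recorded criminality; it illustrates the idea only, not Belleri's or anyone else's actual tooling.

    # Minimal sketch with hypothetical numbers: comparing recorded incident rates
    # with how dangerous residents *believe* each district is. All values invented.
    districts = {
        # name: (residents, recorded_incidents, share_calling_it_most_dangerous)
        "Downtown": (180_000, 2_700, 0.18),
        "Outskirts": (240_000, 2_400, 0.55),
    }

    for name, (residents, incidents, perceived) in districts.items():
        rate = incidents / residents * 1_000  # incidents per 1,000 residents
        print(f"{name}: {rate:.1f} incidents per 1,000 residents; "
              f"considered the most dangerous area by {perceived:.0%} of respondents")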
3.6.3. Unequal access
Even ignoring crime mapping, in some worst case scenarios, data openness may not only be
hindered by social divisions, but may also create or deepen them. If citizens can't find and recognize
real, relevant meaning and practical value in data, as well as ways to use them to make change
happen, there won't be any widespread, long lasting benefit from openness. How can we guarantee,
instead, that such meaning and value will be evident and usable? What are the ingredients for
success here?
Enhancing access to PSI is harder than it may seem because it isn't just a matter of physical
infrastructure. It is necessary that those who access Open Data are in a position to actually
understand them and use them in their own interest.
This is far from granted also because, sometimes, the citizens who would benefit the most from
certain data are precisely those, already poor, marginalized and/or without the right education, who have
the least chance to actually discover and use them. This is what G. Frydman was
speaking about when, in September 2010, he wrote about the great divide caused by Open Health
Data:
[in the USA] "statistically speaking, chronic disease is associated with being
older, African American, less educated, and living in a lower-income household. By
contrast, Internet use is statistically associated with being younger, white, college-
educated, and living in a higher-income household. Thus, it is not surprising that the
chronically ill report lower rates of Internet access.
Starting from this, and commenting on a study of the performance of several medical centers with
respect to coronary artery bypass grafting, Frydman expressed his concern that:
the empowered will have access to [this data] and will act upon it,
while many of the people suffering from chronic diseases (the same
population that would benefit most from access to this information) won't.
Over time it is therefore probable that the current centers of excellence will
treat an ever growing number of empowered while the centers that
currently experience high mortality rates will get worse and worse results,
simply because they will treat an ever growing number of digital outliers
who haven't the possibility to obtain health data and apply filters.
Since one of the topics of this project is the economic value of Open Data, it is necessary to add a
somewhat obvious observation to Frydman's concerns (regardless of their probability). Even if it is
difficult now to make accurate estimates, such negative developments would surely also impact the
costs of health services and insurance, not to mention healthcare-related jobs, both in the
communities hosting centers of excellence and in those with the worst ones.
3.6.4. Lack of education about data
Boris Müller, professor for interface and interaction design at the University of Applied Sciences in
Potsdam, said in an April 2011 interview: "I think that really a citizen needs to know how
visualizations work in order to really evaluate the quality of the data and the quality of the
evaluation." As data visualization and analysis become more popular and easier to use (even as a tool
for manipulating public opinion), it's important for the public to:
• understand that, before becoming digital, information was coded, stored and used in many
ways, through social norms and human interactions more complex than computer ones (cf.
the digitization of Indian land ownership records), therefore making an exact, one-to-one
equivalence between analog and digital procedures hard or impossible in many cases
• think critically about where data comes from
• remember to always follow the development of data-based stories, or accusations, over time.
Here's an example of why the last two things are important. In April 2011, during a prime time TV
talk-show, Italian MP Enrico Letta asked Education Minister Gelmini to justify further cuts to
Public Schools declared in the new State budget. Gelmini knew nothing about such cuts to the
budget of her own Ministry, so all she could reply at the moment was that Letta's assertions were
inconsistent.
Two days later, two bloggers "proved" that Gelmini was right and Letta's analysis wrong, because he
had cited gross figures instead of net ones and ignored that school budget cuts from 2012 onwards
were not new at all, but had already been approved in 2008. Right after this debunking, a third blog
asserted that everybody was wrong: Letta, Gelmini and also the first two bloggers who, for
unknown reasons, had attributed to the Education budget alone all the cuts to the whole public
sector, and had then based all their calculations on a different (and wrong) summary table, not the one
used (still wrongly, but for other reasons) by Letta.
As far as we're concerned, the real issue here is not who was right and why, exactly, all the others
made certain mistakes. The actual problem is: how many of the people who saw Gelmini
unprepared on TV the day this case started also followed the story over the next days and found out
that things weren't exactly as they had looked in that talk show, even if Letta had "proved" his case
with actual, exact "data"? How many citizens are educated to follow the analysis of some data over
time?
3.6.5. Lack of public interest
After the October 2010 Government Open Source Conference in Portland, John Moore reported the
surprise, among participants, that people were not demanding more open data, that the push had
not yet come from the public. If Open Data is about empowerment, transparency and saving public
money, why aren't more common citizens already very excited about Open Data? Part of the answer
is the already mentioned cynicism and lack of trust in institutions and in the possibility for
individuals to participate effectively in politics and administration. Too many citizens still don't feel
that it is their right to seek public information from their representatives and administrators, or that
doing so will make any practical difference.
Another part of the problem is poor marketing from data activists and Public Administrations, which
should start to act more like product developers, that is, measure the outcome of their activity in
terms of what has more appeal for the general public. One way to achieve this, especially at the
local level, may be to highlight (only) the concrete cost savings and local jobs directly created by
the availability of Open Data. Of course, this isn't always possible.
3.6.6. Unprepared Public Administrators
It is undeniable that today, especially at the local level, most Public Administrators who should or
may contribute to opening the public data held by their organizations still ignore, and sometimes
disdain, Open Data proposals, principles and practices. This happens for many reasons. We'll only
mention two of them that are quite common. They are interesting because, while being somewhat
related and sharing common origins, one is very hard to fix while the other is, at least in comparison,
very easy.
To begin with, most of these administrators are people who, albeit very competent and committed to
their work, were never really trained to live with so much of what they perceive as "their" documents
and daily activities regularly exposed to the public, as Open Data implies. This is true even among
administrators who are already well acquainted with mainstream "Web 2.0" practices. Many
officers who already have a regular presence on Facebook, Twitter or other social networks and
regularly use those platforms to discuss their work with their constituents are as diffident about Open
Data as their colleagues who don't even use computers yet. A cultural barrier
like this requires both strong demand from citizens and detailed examples of how Open Data can be
good for the local budget in order to be overcome in acceptable time frames.
Another factor that may keep administrators away from Open Data is the more or less unconscious
assumption that, in order to use them, a City Mayor or Region Governor should be very skilled,
if not with actual programming, at least with "Web 2.0" tools, modern online services and/or
general software engineering principles. This is simply not true. Surely, Open Data is something
that is made possible only by modern digital technologies and the Internet, but at the end of the day
it's "simply" a way to increase transparency, efficiency and cost reductions inside Public
Administration, and to create local jobs. If these prospects are as concrete as this and many other
studies explain, a Mayor has no need to have programming skills, like social networks or
have any other personal "2.0" skill or training in order to see the advantages of Open Data and delegate
their implementation to his or her IT staff.
3.7. The privacy problem
Being perceived as a lethal attack on privacy remains one of the biggest misunderstandings that
prevent adoption of Open Data. On one hand, there is no doubt that in an increasingly digital world
it becomes harder and harder to protect privacy. But, exactly because the whole world is going
digital, attacks on privacy and on civil rights in general can and do come from so many other sides
that those from (properly done) Open Data are a really tiny percentage of the total.
This is a consequence of the fact that data about us end up online from the most diverse sources
(including ourselves and our acquaintances), and that often it would be very hard to discover, never
mind prove, that they've been used against our interest. There have been concerns, for example, that
insurance companies may charge higher fees for life insurance to those among their customers
who... put online a family tree which shows that they come from families with an average
life expectancy lower than usual.
Assuming such concerns were real, would it always be possible to spot and prove such abuses of
data, that weren't even published by any Public Administration? Of course, publishing online
complete, official Census data of several generations, in a way that would make such automatic
analysis possible would be a totally different matter.
Getting rid of all the unjustified concerns about privacy is very simple, at least in theory. All that is
needed to dismiss for good the idea that Open Data is a generalized attack on privacy is to always
remember and explain that:
1. Most Open Data have nothing personal to begin with (examples: digital maps, budgets, air
pollution measurements....)
2. The majority of data that are directly related to individuals (e.g. things like the names and
addresses of people with specific diseases, or who were victims of some crime) have no reason
to be published, nor is there any actual demand for them by Open Data advocates
3. Exceptions that limit privacy for specific cases and categories of people (e.g. candidates to
public offices, Government and Parliament members etc...) already exist in many countries
4. Very often, in practice, Open Data struggles happen only about when and how to make
information that was already recognized as public available to society in the most effective
way. What to declare public, hence open, is indeed a serious issue (more on this in the next
paragraph) but it is a separate one.
3.8. Need to better define what is Public Data
Together with citizen education, there is a huge challenge that Governments and the Open Data
movement will have to face (hopefully together) in 2011 and beyond. This challenge is to update
and expand the definition of Public Data and to have it accepted by lawmakers and public
administrators.
What is, exactly, Public Data? A definition that is accepted almost implicitly is "data that is of
public interest, that belongs to the whole community, data that every citizen is surely entitled to
know and use". This definition is so generic that accepting it together with the assumption that all
such data should be open as preached by the Open Data movement (online, as soon as possible, in
machine readable format with an open license etc...) doesn't create any particular problem or
conflict.
Real problems, however, start, as has happened all too often so far, whenever we assume more or
less consciously that "Public Data" in the sense defined above and data directly produced by
Governments and Public Administrations, that is, what's normally called PSI (Public Sector
Information), are the same thing.
There is no doubt that Governments and Public Administrations produce huge quantities of Public
Data. But this is an age of privatization of many public services, from transportation to healthcare,
energy and water management. This is an age in which many activities with potentially very serious
impacts on whole communities, like processing of hazardous substances or toxic waste, happen
outside Public Administrations. The paradox is that, as Sasaki put it, this increased privatization is
happening in the very same period in which "we are observing a worldwide diffusion of access to
information laws that empower citizens to hold government agencies accountable."
In such a context, "Public Data" is critical precisely because it is a much bigger set of data than what
constitutes traditional, official PSI. "Public Data" includes all that information plus the much bigger
amount of data describing and measuring all the activities of private companies, from bus
timetables to packaged food ingredients, aqueduct performance and the composition of fumes
released into the atmosphere, that have a direct impact on the health and rights of all citizens of the
communities affected by the activities of those companies.
Are such data "Public" today, in the sense defined at the beginning of this paragraph, that is
something every citizen has the right to know without intermediaries or delegates, or not? Should
they be public? If yes, shouldn't law mandate that all such data be Open (that is, published online as
soon as possible, in machine readable format with an open license etc...) just like, for example, the
budget of some Ministry? Answering these questions may be one of the biggest challenges for the
Open Data community, and for society as a whole, in the next years.
Here are, in order to facilitate reflection on this issue, a few recent, real world examples of "Public
Data" that are not PSI, and of the impacts of their lack of openness.
In April 2011, John Farrell wrote:
solar and other renewable energy developers must find the best places to plug in to the
grid, e.g. where demand is high or infrastructure is stressed. The cost to connect
distributed generation may also be lower in these areas. Unfortunately, data about a
utility's grid system is rarely public.
California utilities are changing the game. Southern California Edison (SCE) rolled out
a map of its grid system, highlighting (in red) areas that "could potentially minimize
your costs of interconnection to the SCE system." Since as much as a third of the cost of
PV can be recaptured via its benefits to the electric grid when properly placed in the
distribution system, having this information is crucial for solar developers. Public data
also levels the playing field between independent power producers and the utilities,
since the latter can use federal tax credits and their proprietary knowledge of the
electric grid to build their own distributed renewable energy at the most attractive
locations.
Having public data on distribution grid hot spots can make renewable energy
development more cost effective and more democratic. Tell your utility to publish its
map.
This, instead, is an excerpt from "This Data Isn't Dull. It Improves Lives" (March 2011, New York Times),
which looks at public transportation and consumer safety:
The USA Department of Transportation is considering a new rule requiring airlines to
make all of their prices public and immediately available online. The postings would
include both ticket prices and the fees for "extras" like baggage, movies, food and
beverages. The data would then be accessible to travel Web sites, and thus to all
shoppers.
The airlines would retain the right to decide how and where to sell their products and
services. But many of them are insisting that they should be able to decide where and
how to display these extra fees. The issue is likely to grow in importance as airlines
expand their lists of possible extras, from seats with more legroom to business-class
meals served in coach.
Electronic disclosure of all fees can make it much easier for consumers to figure out
what a trip really costs, and thus make markets more efficient, without requiring new
rules and regulations.
Another initiative has been proposed by the Consumer Product Safety Commission. In
2008, Congress overwhelmingly passed and President George W. Bush signed
legislation mandating an online database of reported safety issues in products, at
saferproducts.gov. The Web site ran for a few months in a "soft launch" and went into
full operation on Friday.
Thirteen years ago, two parents were told that their 18-month-old son had died in an
accident in a model of crib in which other children had died, yet there was no easy way
for any parent or child-care provider to know that.
What about food? Here is what Christian Kreutz said in January 2011:
Nutrition is another interesting sector to use open data, which I discovered lately. A last
example for food is the whole potential behind bar code scanning - you take your
mobile phone to the supermarket and scan products to get the information behind the
fair trade certificate or behind the company. In the recent dioxin scandal in Germany,
the company Barcoo took information from the ministry of agriculture in Germany, of
which farms have intoxicated eggs and offer the info in their app. So, you can check in
the supermarket the eggs that are fine and not with your mobile phone.
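As a purely illustrative aside, the barcode scenario above becomes a very small piece of software once the underlying dataset is open. The following Python sketch assumes a hypothetical open recall file, recalled_products.csv, with "ean" and "reason" columns; the file name, columns and example barcode are invented for this report, not an actual feed from any ministry or from Barcoo.

    # Minimal sketch: look up a scanned barcode in an openly published recall dataset.
    import csv

    def load_recalls(path):
        """Read an open recall dataset (CSV) into a dict keyed by barcode (EAN)."""
        recalls = {}
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                recalls[row["ean"]] = row["reason"]
        return recalls

    def check_product(ean, recalls):
        """Return a human-readable verdict for a scanned barcode."""
        if ean in recalls:
            return f"WARNING: product {ean} has been recalled ({recalls[ean]})"
        return f"No recall found for product {ean}"

    if __name__ == "__main__":
        recalls = load_recalls("recalled_products.csv")
        print(check_product("4006381333931", recalls))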
Food in supermarkets is only one of thousands of cases of "Public Data" from a strategic sector of the
economy that is huge, essential for the creation of local jobs and in deep crisis in many countries in this
period: traditional, brick and mortar retail and service businesses.
Consider this explanation by venture capital firm Greylock about why they invested in Groupon:
The Power of Data
Groupon is targeting a market that is huge and broken. Local advertising is a $100
billion annual business in the U.S. and consumers spend something like 80% of their
disposable income within a couple miles of their homes. Many local businesses still try
to attract new customers through that heavy yellow book that gets dropped on your front
doorstep until it rots or gets tossed in the recycling bin.
We think the technologies visible to consumers will be increasingly commoditized,
while the data used to understand consumers better will become increasingly proprietary
and valuable.
Offers to consumers can be intelligently served up based on a person's demographics,
buying history and location. The merchant side of the equation is just as interesting.
Local businesses need to be able to do more than just run a sale once or twice a year.
The theater on Main Street or the children's museum across town should have the ability
to revenue optimize, like United Airlines or Hilton, by appropriately pricing and
marketing unsold capacity. We started really leaning forward in our chairs when the
discussion turned to strategy, including the ways to use data to power Groupon's future
consumer- and merchant-facing products.
We believe Groupon is the break-out leader in the massive local commerce space and its
investment in data will be a critical ingredient in its long term march to build a
meaningful and foundational company.
Groupon is the clear market leader in the local deals market in 2011. However, complaints from
merchants about the money they can lose by offering deals via Groupon already exist. Now,
couldn't all the "local deals" raw information be considered Public Data that merchants could (be
trained to) publish directly themselves online, in ways that would allow everybody, not just
Groupon, to present the deals to customers in ways more profitable for merchants? The point is,
how many merchants, merchant associations and mayors (whose budgets always and immediately
benefit when local businesses make more money) are aware of this opportunity?
4. Conclusion: seven Open Data strategy and
best practice suggestions
Starting from the trends and conclusions described in the previous chapter, this section lists, as
concisely as possible, some strategic actions and best practices for 2011 that we consider
important in making Open Data succeed and bring the greatest possible benefits to all citizens and
businesses.
4.1. Properly define and explain both Open Data and Public
Data
Precisely because Open Data is becoming more popular (and, we may say, more and more necessary
every year), it is essential to intensify efforts to explain, both to the general public and to public
administrators, that:
1. Privacy issues are almost always a non-issue. Quoting from What "open data" means -
and what it doesn't: Privacy and/or security concerns with putting all the government's data
out there are a separate issue that shouldn't be confused with Open Data. Whether data
should be made publicly available is where privacy concerns come into play. Once it has
been determined that government data should be made public, then it should be done
openly.
2. Defining as Public, and consequently opening in the right way, much more data than
those born and stored inside Public Administrations is an urgent task that is in the best
interest of all citizens and businesses
4.2. Keep political issues separate from economic ones
Open Data can reduce the costs of Public Administrations and generate (or at least protect, as in the
case of deals from local merchants) local jobs in all sectors of the economy, not just high-tech ones.
There seems to be enough evidence for these two assertions to justify going for more Open Data even if
it had no effect at all on participation in politics. This should always be kept in mind, also because
some data that can directly stimulate business are not the same as those that would be useful for
transparency.
4.3. Keep past and future separate
For the same reason why it is important to always distinguish between the political and economic
advantages (or disadvantages) of Open Data, it is necessary to keep decisions about future data
(those that will arrive in the future, due to new contracts, public services and so on) separate from
those about data that already exist. At the end of 2010, T. Steinberg wrote that the idea that
Government should publish everything non-private it can now is "rather dangerous", and that it
would be much better to release nothing until someone actually asked for it, and at that point to do
it right, that is with an open license and so on. The first reason for Steinberg's concern is that
asking for everything as soon as possible would "stress the system too much, by spreading thin the
finite amount of good will, money and political capital". The second is that many existing old data
and data archival systems are, in practice, so uninteresting that it wouldn't make sense to spend
resources on opening them.
Even if these concerns were always true, it is important to realize that they apply (especially the
second) to already existing data, not to future ones. The two classes of data have, or can have, very
different constraints. Existing data may still exist only in paper format and/or be locked by closed or
unclear licenses, or may no longer be relevant for future decisions.
Opening future data, instead, is almost always more important, useful, urgent, easier and cheaper
than digitizing, or even only reformatting, material that in many cases is already too old to make an
immediate, concrete difference. While this argument is probably not always true when we look at
Open Data for transparency, it probably is when it comes to economic development.
Therefore, features and guidelines that should be present in all future data generation and
management processes include:
• standardization: the fewer (obviously open) formats are used for data of the same type, the
easier it is to merge and correlate them (a minimal illustrative record is sketched after this list).
The formats that have to be standardized are not only those at the pure software level. Even more
important is, for example, to adopt by law standard identifiers for government suppliers, and names
and machine-readable identifiers for budget items, and so on
• preparation for future digitization: new digital systems should explicitly be designed from
the beginning so that it will be possible, when non-digital records are digitized, to add
them to the databases without modifications or losses.
• Open licenses
• better procurement
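As a purely illustrative aside to the standardization point above, here is a minimal sketch, in Python, of what a single machine-readable open spending record could look like once standard supplier identifiers, budget item codes and an explicit open license are imposed at data-generation time. The field names and example values are assumptions made up for this report, not an existing official schema.

    # Minimal sketch: an illustrative, machine-readable open spending record.
    import json

    record = {
        "budget_item_code": "05.02.117",      # machine-readable budget item identifier (invented)
        "supplier_id": "IT-VAT-01234567890",  # standard, reusable supplier identifier (invented)
        "amount_eur": 12500.00,
        "payment_date": "2011-03-15",
        "license": "http://creativecommons.org/licenses/by/3.0/",
    }

    # Publishing records like this as JSON (or CSV) keeps them trivial to merge with
    # datasets from other administrations that adopt the same identifiers.
    print(json.dumps(record, indent=2))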
The first two features have obvious technical advantages regardless of data openness. The last two,
being critical, are discussed separately in the next paragraph.
4.4. Impose proper licensing and streamline procurement
As with the first report prepared for this project, we will not delve into the details of how to license
data, because this topic continues to be followed and debated in all its details by LAPSI and other
projects or researchers. We will simply confirm the importance of establishing a proper license, at
the national level, for all Public Data, one that makes them Open in the right way, makes sure that
what is opened stays open, and doesn't demand what isn't possible to enforce (e.g. attribution),
because, quoting again Eaves, "no government should waste precious resources by paying someone
to scour the Internet to find websites and apps that don't attribute".
We want, however, to spend a few words about another legal/administrative side of the issue, that is
procurement. Traditional procurement laws are very likely not flexible enough, in most countries, to
handle the implementation of data-based public services. Here's why.
We know that if Public Data are Open, everybody, from volunteer activists to hired professionals,
can very quickly write or maintain simple software applications that help to visualize and use them
in all possible ways. Paradoxically, this is a problem when an Administration either wants to set up
an Open Data programming contest (that besides being inexpensive, it's much simpler to organize
and join than traditional tenders or grants) or needs to just pay somebody to write from scratch and
maintain some new program of this type, or customize existing ones.
The reason is that, just because this type of software development is so quick, even hiring a
professional to do it, or setting up a contest would be... too inexpensive to be handled with default
procurement procedures. Quoting from Day Two: Follow the Data, Iterating and the $1200
problem:
A big problem for cities is procuring products under $10,000. How does a city pay for
an awesome application like SeeClickFix when it doesn't fit the normal year-long
planning and two-year implementation in the millions of dollars? In Tucson, Andrew
Greenhill tapped the Mayor's general budget for it, instead of trying to get the IT
department to shell out. In San Francisco, Ed Reiskin uses discretionary spending. But
every time, procurement gets messy. In reference to nepotism laws, Ed worries that he'll
appear "like I'm giving my buddies dollars." Building great products for cities has to
include finding great strategies to pay for them. In San Francisco, Jay Nath doesn't even
have a budget…which, he says is 'liberating' because he doesn't need to go through
procurement.
The same issue is denounced as an obstacle to innovation and cost savings in New
recommendations for improving local open government and creating online hubs:
John Grant focused on a major pain point for government at all levels for tapping into
the innovation economy: procurement issues, which civic entrepreneurs run into in
cities, statehouses and Washington. "It is time to look at these procurement rules more
closely," he said, and promote higher levels of innovation. "There are a lot of ideas are
happening but a lot of rules restrict vendors from interacting in government," said
Grant. Turner-Lee observed that traditional procurement laws may also not be flexible
enough to bring more mobile apps into government.
Current procurement laws are partially incompatible with an Open Data world not only at this level,
that is, when it's time to procure the software that makes the data useful. Even bigger problems and
inefficiencies can be introduced at the beginning of the data's life, that is, when data collection and
processing services are procured.
processing services are procured. We've already explained that forgetting to impose the right license
is one of the problems, but it's not the only one. Even future organization of all the foreseeable data
management activities should take advantage of the flexibility provided by data openness. Here is
how Tim Davies summarizes this point:
Right now [public] bodies often procure data collection, data publishing and data
interfaces all in one block (as seems to be the case with Oxfordshire's real-time bus
information - leading to a roadblock on innovation) - and so without these layers being
separated in procurement, some of the benefits here stand to be lost.
Changing the procurement of information/data-rich public services would be, of course, only the first
step of a general reform of procurement laws and regulations. After management of Open Data has
been simplified, it becomes time to implement similar simplifications in the procurement of everything
else. In fact, in such a scenario, there would be far fewer possibilities for the loopholes, frauds and
inefficiencies that forced local procurement procedures to become so slow and complicated: since
the public budget and other relevant public data would already be fully open, errors and other
problems would surface and be fixed much more quickly and reliably than today, even assuming
that they would continue to appear with the same frequency.
4.5. Educate citizens to understand and use data
It is necessary to guarantee the widest possible availability of all the pre-requisites for effective use
of Open Data. In other words, it is necessary to provide free and widely accessible training, oriented
to average citizens, on how and why to visualize Public Data and use them to make informed
decisions. Ideally, this training should be provided at a local level with local programs, in a way that
makes it possible to use it on local issues, for the reasons and in the ways discussed in the next
paragraph. For example, visualization techniques like those used by ABC News to show the effects
of the March 2011 Japan Earthquake, in which all the user has to do to compare scenes from before
and after the earthquake is to move a slider, should be routinely used to explain proposals about
urban planning, zoning and related topics.
4.6. Focus on local, specific issues to raise interest for Open
Data
Considering the continuous evidence of, and concerns about, citizens' scarce interest in and preparation
for using Open Data in their political, economic and professional decisions, one of the final
recommendations of the Open Data, Open Society report confirms its importance and needs to be
repeated: it is very effective, if not simply necessary if the goal is to generate a critical mass of
citizens that demand and use Open Data in the shortest possible time, to practice all the
recommendations of this report at the local level.
Most people encounter their local governments much more often than their national ones. When
working within a single city or region it is much easier to inform citizens, raise their interest and
involve them, because they would be searching for local solutions to improve local services and/or
save local money. There may also be many more opportunities to do so, especially in this period of
financial crisis that will see substantial decreases both in credit from financial institutions and in
subsidies from central governments. Concreteness and, as they say in marketing, "customer focus"
must be the keys for local activists and public employees working on local Open Data:
• work on specific issues and with precise objectives
• focus on immediate usefulness
• work on demand, on the services that people want. Required services define what data must
be open, not the contrary
This is the most effective, if not the only, strategy to solve one of the biggest debates in open data:
"how do we get people to use the data that we publish?". The right question, instead, is "what data
do people want?", even if citizens don't realize yet that what they actually want is more Open Data,
or that what they need can be done more quickly and cheaply by releasing some information in that
way.
A great example of what all this means is the Great British Public Toilet Map: a public participation
website that tracks which councils have published public toilet open data, and which have not. A
map like this solves one specific, concrete problem in the ordinary, daily life of many people:
"Many older people have continence concerns and only go to places where they know there is a
toilet. "
It is also possible and useful to pass on the message that, when it comes to participation, activism and
transparency in politics, Open Data are a concrete and peaceful weapon that is both very effective and
very easy for everybody to use. Dino Amenduni explained the first point well at the end of 2010
with words and arguments that, while tightly bound to the current situation in Italy, apply, in spirit,
also to other countries:
in order to have your voice heard, it is necessary to threaten the private interests of
politicians. The ways to achieve this goal are, in my opinion... Communication
guerrilla: physical violence doesn't generate change anymore. Power is in the hands of
those who have data. But those data must be communicated, made usable, fun to use,
shareable, in order to give the feeling that knowledge brings a concrete (economic or
intangible) personal advantage
Proof that participating in the generation and usage of Open Data is easy is provided, instead, by
examples like electionleaflets. All citizens who can use a computer scanner and have Internet access
can upload to that website the leaflets distributed by the candidates during a campaign, making it
much easier (after other, more skilled volunteers have inserted the content of the leaflets into
searchable databases) to compare the candidates' positions or to make public some
disrespectful material (racist, insulting…).
4.7. Involve NGOs, charities and business associations
As a final note and recommendation of this report, we'll point out that, in comparison with hackers and
public officers, there are other parties that could and should play a much bigger role in Open Data
adoption than they have had so far.
NGOs and charities, as well as professional or business associations, all have lots to gain from
Open Data but don't seem, in many cases, to have realized this yet. Members of the first category
should routinely ask Open Data civic hackers directly for support in gathering (either from
government or from citizens) more up-to-date information that is specifically relevant for their
campaigns.
The other associations, instead, should be much more active both in publishing Open Data about
their activities, to gain better access to customers and guarantee fair market competition, and in
officially lobbying Public Administrations to get the PSI they could use for the same purposes. As
with other suggestions made here, these are activities that should start at the city and regional level, first
with custom-made education initiatives, then with specific data-based services. Engaging all these
actors in the adoption of (local) Open Data will be one of the big challenges of the next years.
5. Bibliography
Besides those explicitly linked from the text, this report has drawn inspiration from many other
resources. The most important ones are listed here, but the complete list would be much longer. We
wish to thank first the authors of the works listed below and, immediately after, all the activists,
inside and outside governments worldwide, who are working on this topic.
1. Are you prepared for the pitfalls of Gov 2.0?
2. Can we use Mobile Tribes to pay for the costs of Open Data?
3. Canada launches data.gc.ca - what works and what is broken
4. Creative Commons and data bases: huge in 2011, what you can do
5. Defining Gov 2.0 and Open Government
6. How Government Data Can Improve Lives
7. If you like solar, tell your utility to publish this map
8. Indian corruption backlash builds after "year of the treasure hunters"
9. Información Cívica / Just What is Civic Information?
10.Is open government just about information?
11.LSDI : In un click la mappa del crimine
12.La casta è online: dategli la caccia!
13.Linee guida UK sull'opendata
14.MSc dissertation on Open Government Data in the UK
15.Open Data (2): Effective Data Use
16.Open Data: quali prospettive per la pianificazione?
17.Open Knowledge Foundation Blog » Blog Archive » Keeping Open Government Data
Open?
18.Open data, democracy and public sector reform
19.Pubblicato Camere Aperte 2011 - blog - OpenParlamento
20.Reasons for not releasing data in government
21.The impact of open data: first evidence
22.Thinking About Africa's Open Data
23.Towards EU Benchmarking 2.0 - Transparency and Open Data on Structural Funds in
Europe
24.UK Open Government Licence removes barriers to re-use of public sector information
25.Western Europe: A journey through tech for transparency projects
26.What open data means to marginalized communities
27.What's in a Name? Open Gov and Good Gov
28.WikiLeaks Relationship With the Media
29.WikiLeaks, Open Information and Effective Use: Exploring the Limits of Open Government
Sports smart watch
User Manual
DT3 Mate
Thank you for choosing our smart watch. You can fully understand
the use and operation of the equipment by reading this manual.
The company reserves the right to modify the contents of this manual
without any prior notice.
The product contains: a packing box, a manual, a watch body, and a
charging cable.
A. Watch function description
Button description:
Up button:
Short press to light up or turn off the screen; one press to go back to the dial interface; long press to
reactivate the watch.
Down button:
Short press to enter multi-sport mode.
In addition, when the watch is in the off-screen state, you can light up the screen by pressing any
button.
Charging instructions:
Wireless charging, as shown in the picture below.
1.1 Shortcut function:
1) Swipe to the left until you find the "+" icon, then tap the icon to add some of the functions to the
shortcuts.
2) Scroll down the screen when the watch is in the dial interface to find the Bluetooth
connection status, time, battery level, brightness adjustment and other functions.
3) Swipe to the right when the watch is in the dial interface to find the time/date/week/the latest
message (enter to view multiple messages)/some of the recently used menu functions, and to turn
audio Bluetooth for calls on or off.
4) Swipe up the screen when the watch is in the dial interface to enter the menu interface, and
scroll up and down to find the corresponding function.
5) Long press the watch face interface and swipe right or left to switch the watch face; select
one of them and set it with one click.
1.2 App notification
1) When the watch is bound to the APP and you allow notifications to be displayed on the
watch, new messages received on your mobile phone will be pushed to the watch; a total of
10 messages can be saved, after which older messages will be overwritten one by
one.
2) Swipe to the bottom and tap the delete icon to clear all message records.
1.3 Drop-down menu
Scroll down the screen when the watch is in the dial interface to enter the drop-down menu
interface.
1) Bluetooth connection status; time; remaining battery;
2) About, where you can check the firmware version of the watch and the Bluetooth address;
3) Settings, where you can configure some of the functions;
4) Brightness adjustment, where you can adjust the brightness of the screen;
5) Alipay. Download the Alipay app on your mobile phone and bind it with your watch to enable
offline payment.
1.4 Phone/Call History
1. Swipe to the left when the watch is on the dial interface and tap the calling icon to turn the
calling Bluetooth on/off. Turn on the calling Bluetooth and you will see its name; then go to the
Bluetooth settings of your mobile phone and pair with the calling Bluetooth name of your watch.
You can use the watch to make phone calls once they are successfully paired.
2. Call records, which save the records of incoming and dialed calls. (More than 50
call records can be saved, and they are automatically overwritten once 128 records are reached. Tap
any call record to call back.)
3. Dial keypad: you can enter a phone number to make a call.
1.5 message
When the watch is successfully bound to the app, you approve notifications for the corresponding
apps in your mobile phone system, and you switch on these apps' or calls' notification functions on
your watch, the notifications on your mobile phone will be synchronized to your watch.
1.5.1. Incoming call notification:
Turn on the incoming call reminder in the app. When the phone has an incoming call, the watch
will light up or vibrate.
1.5.2. SMS notification:
Enable the SMS notification in the app. When one or more SMS messages are received on the
mobile phone, the watch will receive one or more SMS reminders at the same time.
1.5.3. Other application message notifications:
Turn on the corresponding application message notification in the app, such as WeChat, QQ,
Outlook, Facebook and other applications. When the mobile phone receives one/multiple
application message notifications, the watch will receive one/multiple corresponding message
reminders at the same time.
1.6 Frequently used contacts
Once the watch is bound to the app and you allow the watch to access the phone book of your mobile
phone, you can synchronize your mobile phone contacts to the smartwatch.
1.7 Fitness data
Fitness data is turned on by default. When you enter the fitness data interface and scroll up the
screen, the smartwatch will display the current steps, distance, and calories. The data is
reset at 00:00 every morning.
1.8 Sports modes (walking, running, cycling, rope skipping, badminton,
basketball, football)
1.8.1 Select the corresponding exercise mode, tap the "Start" button on the screen to start the
exercise; tap the "Start" button again to pause the recording of the exercise; tap the "End"
button to end the recording and save the data.
1.8.2 The data can only be saved when the recording of the exercise is longer than 1 minute; if the
recording time is less than 1 minute, the smartwatch will remind you that there is too little data to be
saved.
1.9 Heart rate
After putting on the smartwatch correctly, you can measure your heart rate by entering the
heart rate function. If you don't wear the smartwatch properly, it will remind you to wear it firmly
for the measurement.
1.10 ECG
After putting on the smartwatch correctly, enter the ECG function (you need to turn on the
ECG interface in the app); you can take a single measurement at a time. The ECG data will be
saved on the mobile phone. This function must be used together with the app.
2.0 My QR code
Connect the watch to the APP, find My QR Code in the APP, and select the WeChat/QQ/Alipay or other
"receive money" QR code to sync it to the watch (please follow the instructions of the app to
operate this function).
2.1 Remote control music
After binding the smartwatch to the app WearPro, you can control your phone's music:
start/pause/previous song/next song.
If you also bind the audio/calling Bluetooth of the smartwatch, the music will be played through the
smartwatch.
2.2 Sleep
Sleep monitoring time period: from 18:00 at night to 10:00 the next day, the watch generates sleep
data. After connecting to the APP, the sleep data on the watch can be synchronized to the APP for
review.
2.3 Stopwatch
Click the stopwatch to enter the timing interface, and you can record the time once.
2.4 Weather
After the smartwatch is connected to the app and the data is synchronized, tap Weather on the
watch to display the weather information for the day.
2.5 Find mobile phone
After the watch is bound to the app WearPro, tap this function to find the mobile phone, and the
mobile phone will vibrate or emit a ringtone.
2.6 Meteorology
Click on “Meteorology” on the watch to display the ultraviolet (UV) and air pressure conditions of
the day.
2.7 Massager
Tap the green button to start the massage; the watch will be in a vibrating state. Tap the red button
to end the massage.
3.0 Menu style
There are a variety of menu styles for users to choose.
3.1 Settings
1) You can select the watch language on the settings of the watch, or the watch language can be
synchronized with your mobile phone language after the watch successfully binds to the APP.
2) Switch the watch face, swipe to the right to view the next watch face, select a watch face, and
click it to set the watch face.
3) Set the screen-on time; a variety of screen-on durations can be selected.
4) Vibration intensity; set reminder vibration intensity.
5) Password; a 4-digit password can be set (if you forget the password, please enter 8762 to
decrypt the previous password).
6) Restore factory settings; click √ to enable the factory reset, and click X to cancel the factory
reset.
B.Bind to the APP
1. APP download method
1.1 Scan the QR code to download
1.2 Search for the application in the app market and download it
For Android users:
Search for "WearPro" in the Google Play app store or any customized Android store to download,
remember to check the pop-up box on your phone when installing, and agree to the permission.
For iOS users:
Search for "WearPro" in the APP Store to download, remember to check the pop-up box on your
phone when installing, and agree to the permission.
After WearPro is installed, the app icon appears on your phone.
2.Bind Bluetooth
2.1 Unconnected to the APP state:
After the watch is turned on, its Bluetooth is discoverable. Open the APK/APP, go to Devices >
Add Device, tap to start searching, then select and tap the corresponding watch device name; the
watch will be successfully bound to the app.
2.2 Connected to the APP state:
Watch time synchronization: the time shown on the smartwatch will be synchronized with your
mobile phone after the smartwatch is successfully bound to the APP.
2.3 Binding the audio/calling Bluetooth
When the smartwatch is in the dial interface, you can find the audio/calling Bluetooth icon, and
click it to turn it on, then go to the Bluetooth settings of your mobile phone and click the name of
the audio/calling Bluetooth of the smartwatch to bind it.
3. Find Watch
After the smartwatch is bound to the APP, tap "Find Watch" in the APP; the smartwatch will light
up and vibrate once.
4. Camera | 5 | 5 | 6126797.pdf |
Tap "Camera" in the WearPro app to activate the camera mode of the watch. Tap the camera
button on the watch to take photos; the photos are automatically saved to the phone
album.
5. Data synchronization
After the watch is successfully bound to the application, the data in the smartwatch can be
synchronized to the application.
6. Tilt to wake the screen
Wear the smartwatch correctly on your wrist (left or right hand). When this feature is switched
on, the screen lights up when you raise your wrist.
7. Do not disturb mode
In the APP, tap “Device” > “More” > “Do not disturb mode” and set the start and end times, for
example 12:00 to 14:00; you will then not receive phone calls or app notifications on the watch
during this period (an illustrative sketch of this time-window logic follows below).
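For illustration only, here is a minimal sketch (not taken from the watch firmware or the WearPro app) of the kind of time-window check a do-not-disturb feature performs, including windows that cross midnight:

```python
from datetime import time

def in_dnd_window(now: time, start: time, end: time) -> bool:
    """Return True if `now` falls inside the do-not-disturb window.

    Handles same-day windows (e.g. 12:00-14:00) as well as windows
    that cross midnight (e.g. 22:00-07:00).
    """
    if start <= end:
        return start <= now <= end
    return now >= start or now <= end

# With a 12:00-14:00 window, 13:30 is muted and 15:00 is not.
print(in_dnd_window(time(13, 30), time(12, 0), time(14, 0)))  # True
print(in_dnd_window(time(15, 0), time(12, 0), time(14, 0)))   # False
```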
8. Daily alarm clock
In the APP, go to Device > More and set the alarm time. The alarm can be set to ring only once or
to repeat on selected days of the week, and it can be turned on or off.
9. Sedentary reminder
Set the start and end times of the sedentary reminder, and the time interval (in minutes), in the
APP. You can set the reminder to trigger once or to repeat regularly via the repeat setting.
When the sedentary time is reached, the watch will vibrate and display a sedentary icon on the
screen.
10. Drink water reminder
Set the reminder frequency (in minutes) and the daily start and end times in the APP. You can
set the reminder to trigger once or to repeat regularly by entering the repeat setting and
selecting the days of the week for the water reminder. When the reminder time is reached, the
watch will vibrate and display a water icon on the screen.
11. Dial push
11.1 Push an existing watch face
With the watch bound to the app, open the app and tap Device > Watch face push. After the watch
face is synchronized, the watch restarts and re-binds to the APP automatically.
11.2 Customize the watch face
With the watch bound to the app, open the app and tap Device > Watch face push. The first several
watch faces, marked as "custom watch faces", are customizable. After the watch face is
synchronized, the watch restarts and re-binds to the APP automatically.
12. Firmware version | 6 | 6 | 6126797.pdf |
The watch's firmware version is displayed under "Firmware upgrade" in the "Device" section, and
users can decide whether to upgrade the firmware.
13. Unbind
In the "Device" column of WearPro, scroll down to the "Unbind" and click to unbind the APP. The
iSO users need to go to the Bluetooth settings of the phone, select the Bluetooth name of the
smart watch, and click "Forget this device". The “About”
of the watch has an “Unbind”
button, click it to unbind or do it in the APP. For the safety of users’ data, the watch will implement a
factory reset after that.
●Frequently asked questions and answers
*Please avoid exposing the device to extreme temperatures that are
too cold or too hot for a long time, which may cause permanent
damage.
*Why can't I take a hot bath with my watch?
Bath water changes temperature and produces a lot of water vapor. Because the vapor is in
the gas phase, its molecules are small enough to seep into gaps in the watch case. This can
short-circuit the internal circuitry, damaging the watch's circuit board and the watch
itself.
*No power on, no charging
If the watch does not turn on when you receive it, the battery protection board may have been
triggered by an impact during transportation, so plug in the charging cable
to activate it. | 7 | 7 | 6126797.pdf |
If the battery is too low, or the watch has not been used for a long period of time and does not
turn on, plug in the charging cable and charge it for more than half an hour to activate it.
Warranty description:
1. For quality problems caused by manufacturing, materials, or design during normal use, the
watch motherboard is covered by free repair for one year from the date of purchase, and the
battery and charger for half a year.
2. No warranty is provided for failures caused by the user, including:
1) Failure caused by unauthorized disassembly or modification of the watch.
2) Failure caused by accidentally dropping the watch during use.
3) Any man-made damage, damage caused by a third party, or misuse (such as water getting into
the device, cracking from external force, scratches or other damage to the case, etc.) is not
covered by the warranty.
3. When requesting the warranty service, please provide a warranty
card with the date of purchase and the stamp of the place of purchase
on it. | 8 | 8 | 6126797.pdf |
4. If the device needs repair, please take it to our company or one of our company's dealers.
5. For all functions of the device, please refer to the actual product.
Purchase date:
IMEI code:
Where to buy:
Customer Signature:
Signature of Store Clerk:
Stamp of Store:
FCC Caution:
This device complies with part 15 of the FCC Rules. Operation is subject to the following two conditions:
(1) This device may not cause harmful interference, and (2) this device must accept any interference received,
including interference that may cause undesired operation.
Any changes or modifications not expressly approved by the party responsible for compliance could void
the user's authority to operate the equipment.
NOTE: This equipment has been tested and found to comply with the limits for a Class B digital device,
pursuant to Part 15 of the FCC Rules. These limits are designed to provide reasonable protection against
harmful interference in a residential installation. This equipment generates, uses and can radiate radio
frequency energy and, if not installed and used in accordance with the instructions, may cause harmful
interference to radio communications. However, there is no guarantee that interference will not occur in a
particular installation.
If this equipment does cause harmful interference to radio or television reception,
which can be determined by turning the equipment off and on, the user is encouraged to try to correct the
interference by one or more of the following measures:
-- Reorient or relocate the receiving antenna.
-- Increase the separation between the equipment and receiver.
-- Connect the equipment into an outlet on a circuit different
from that to which the receiver is connected.
-- Consult the dealer or an experienced radio/TV technician for help.
The device has been evaluated to meet general RF exposure requirement. The device can be used in
portable exposure condition without restriction.
FCC ID:2A54U-DT3MATE | 9 | 9 | 6126797.pdf |
A USER'S GUIDE TO
COMPOST
The Beauty of Your Lawn & Garden
Blossoms from the Soil
Compost adds organic material and nutrients to the soil,
increases water-holding capacity and biological activity,
and improves plant growth and health.
Revised 2009 | 0 | 0 | CompostGuide.pdf |
A project of the Washington Organic Recycling Council, with
support from the Washington State Department of Ecology’s
Public Participation Grant program.
This product was partly funded through a grant from the
Washington Department of Ecology. While these materials
were reviewed for grant consistency, this does not necessarily
constitute endorsement by the department.
Special thanks: the original version of this brochure in 2003
was created by the Washington County, Oregon Solid Waste and
Recycling Program in cooperation with the Washington Organic
Recycling Council and the Composting Council of Oregon.
Tips to Remember:
• Don’t put plants into 100% compost. Mix
compost thoroughly into existing soil before
planting.
• When transplanting, it’s better to amend the
whole bed, not just planting holes, to promote
root growth.
• Ask your compost supplier which compost
product is best for your intended use.
• Use compost at the recommended application
rate.
• To maintain healthy soil, reapply compost or
mulch every 1-2 years.
• Many composts are rich in plant nutrients, so
you may be able to reduce fertilizer use after
applying compost.
• Compost can also reduce your lawn and garden’s
summer irrigation needs.
• Compost-amended soil and mulching slow runoff,
reduce erosion, and break down pollutants.
When you use compost, you’re helping to
protect our precious streams, rivers, lakes, and
marine waters.
original artwork provided by:
www.compostwashington.org www.ecy.wa.gov www.soilsforsalmon.org | 1 | 1 | CompostGuide.pdf |
Resources
Compost Organizations
Washington Organic Recycling Council
Find a compost producer in your area
www.compostwashington.org
US Composting Council
Seal of Testing Assurance (STA) program
www.compostingcouncil.org/programs/sta/
Restoring the Soil to Protect our Waterways
www.soilsforsalmon.org
Compost amendment and erosion control
during construction: information for builders
www.buildingsoil.org
Natural Lawn & Garden Care, Soils, and Home
Composting
City of Seattle
www.seattle.gov/util/services/yard
King County
www.kingcounty.gov/soils
Washington State University
www.puyallup.wsu.edu/soilmgmt/
The Beauty of Your Lawn and Garden
Blossoms from the Soil
Thank you for your interest in compost.
Compost is a versatile product with many benefits. It enhances
soil quality, helps save water, and supports your community’s
efforts to recycle organic debris. All this helps to conserve our
natural resources and reduces the amount of material sent to the
landfill.
Compost-amended soil also helps break down pollutants and
absorb stormwater runoff. By making nutrients slowly available
to plants and enhancing plant health, compost can reduce the
need for chemical fertilizers and pesticides. All these benefits
help protect our lakes, rivers, and marine waters from pollution
and excessive runoff.
Compost is a natural amendment for your lawn or garden, and
can be used regularly to enrich your soil. This guide is designed
to help you get the most from the compost that you buy.
| 2 | 2 | CompostGuide.pdf |
Compost: A Natural Cycle
Composting is a natural process in which micro-
organisms and macro-organisms break down organic
material (leaves, twigs, grass, etc.) into a dark, crumbly
soil amendment. Modern compost facilities use
the same natural biological composting process.
Their controlled-temperature process works faster,
breaks down pesticide residues, and also kills weed
seeds and plant diseases.
Compost improves soil structure and plant
growth by
• Replenishing soil organic matter, and storing
nutrients in plant-available forms
• Supporting beneficial soil life
• Reducing erosion and water run-off
• Loosening clay soils for better root
development (increasing soil pore space)
• Retaining moisture in sandy soils so
plants need less watering.
Comparing Landscape Products
A variety of soil and landscape products are sold. Here’s a
comparison:
Compost is stable, decomposed organic matter, excellent for
improving soil structure, fertility, moisture holding capacity, and
plant growth.
Mulch is any material applied to the soil surface. Woody mulches
(high in carbon, low in nitrogen) like wood chips, bark and woody
composts are great for woody plants. Annual plants should be
mulched with nutrient-balanced mulches like compost, grass
clippings, or leaves.
Peat Moss is partially decayed sphagnum moss from peat bogs. It
provides soil porosity, but not the nutrients or biological diversity for
healthy soil that compost provides.
Fertilizers are concentrated sources of plant nutrients, used in small
amounts to supplement natural soil fertility.
Topsoil that is sold is usually not native topsoil. Quality
manufactured topsoils are a blend of native sandy sub-soils with
composted organic matter to support soil life.
Ask Your Compost Supplier
Whether you’re buying direct from the composting facility, or from a local
vendor, here are some good questions to ask:
• What ingredients go into your compost?
• What compost products or blends do you sell?
• Are there quality control or testing results available for these
products? (These may be on the manufacturer’s website.)
• Which product is best for my intended use?
• What application rate do you recommend?
• How much do I need for my area? (Or see pages 4-6.)
| 3 | 3 | CompostGuide.pdf |
Compost Questions and Answers
What is compost?
Compost is a natural humus-like soil amendment that results from
the controlled aerobic (with oxygen) decomposition of organic
materials. Compost is not soil – it should be mixed with soil. It is
not fertilizer, although it contains many slowly released nutrients.
What materials (“feedstocks”) are used to make compost?
Compost facilities in Washington recycle a variety of organic
materials, including yard debris, food scraps, manure, biosolids,
forest residuals like sawdust and bark, construction wood, and
agricultural residues. All of these materials can be used to produce
high quality compost. Your supplier can tell you which materials
they compost.
How do I know I’m getting safe, quality compost?
Fortunately, in Washington we have strict permitting and production
standards for compost facilities that include both time and
temperature requirements and contaminant limits.
What about weed seeds, plant diseases or pesticide residues?
The controlled time, aeration, and temperature process required in
Washington has been shown to kill weed seeds and plant diseases.
That same process breaks down most pesticide residues. There are
a few agricultural pesticides that are not easily broken down, and
permitted Washington compost manufacturers carefully watch their
feedstocks to keep those materials out of the composting process.
Compost Beginnings
The yard debris or food scraps* that you
place into your home compost bin, take to
a drop-off site, or set out for curbside
collection could become the compost that
you later use on your garden, lawn, and
flowerbeds.
It is essential to place only quality organic
material into the composting process. Here
are some tips:
• The products you use or spray in your
yard can end up in the compost process.
Carefully read the labels of pesticide and
herbicide products you use. (See page 9.)
• Please keep yard debris free of:
  - Garbage
  - Plastic of any sort
    (plastic plant pots, plastic plant tabs,
    and plastic bags - if you want to bag
    your yard debris, use paper garden bags,
    available at most garden centers)
  - Rock, brick, or masonry
  - Glass or metal
  - Pet waste.
* Many localities now collect food scraps and
food-soiled paper along with yard debris for
composting. Call your local collection service
to find out what is collected in your area.
| 4 | 4 | CompostGuide.pdf |
Building Rich and Healthy Soil
With Compost
To grow healthy plants you need healthy soil.
Healthy Soil:
• Is teeming with life! Healthy soil is a miniature ecosystem.
A teaspoon of healthy soil will have upwards of four billion
tiny organisms which recycle nutrients, suppress disease, and
discourage pests.
• Retains moisture but allows drainage. Healthy soil has
structure that allows water to drain through, retains moisture,
and promotes strong root growth.
• Is full of organic nutrients. Plants depend on the micro-
organisms found in healthy organic-rich soil to provide
nutrients to their roots, and help them thrive.
A healthy garden and landscape is naturally resistant to pests,
drought, weeds, and diseases. Maintaining healthy soil may allow
you to reduce use of chemical fertilizers and pesticides.
Soil is a planting medium. Compost is a soil amendment.
Do not place plants directly into 100% compost.
Ask your supplier or see next page for mixes for different uses.
Washington State Encourages the Use of Compost,
to Protect Our Water Quality
The Washington State Department of Ecology recommends that soils
on construction sites be restored with compost before planting, and also
encourages the use of compost for construction site erosion control, to reduce
stormwater runoff and help keep our rivers, lakes, and Puget Sound clean.
Learn more at www.SoilsforSalmon.org or www.BuildingSoil.org.
Selecting Quality Compost
Compost is available in many product types and blends that may be
used for different gardening applications. The type of feedstock,
the composting process, and any supplementary additives determine
the end product.
Many facilities offer a variety of blends based on compost, such as
garden mix, potting soil, planting mix, mulches, turf top-dressing
and soil blends.
What to Look for in Compost
For most compost applications you will want a finished product that
has matured and stabilized. Look for material
• with a dark, crumbly texture
• with a mild odor
For most compost applications you will not want compost that is
extremely dry or wet, or extremely hot. (Note that it is okay for
compost to be warm and to give off some steam and mild odor.)
Quality Testing at Composting Facilities
Feel free to ask your compost provider if they have a quality control
program, and ask for test results. Compost facilities in Washington
are permitted by the Department of Ecology and must meet
standards for both the composting process and contaminants,
ensuring a quality product. Some facilities also participate in the
“Seal of Testing Assurance” (STA) testing program. See
“Resources” on page 11 to learn more.
Remember:
Your compost provider can help you pick the best compost mix
for your needs.
| 5 | 5 | CompostGuide.pdf |
The Composting Process
Even though there are a variety of composting methods, most
composting follows a similar process:
1. Grinding Organic Materials:
Depending on the facility, the feedstock (material) available, and
the desired compost product, different combinations of materials
are added together and ground into small pieces:
• Nitrogen-rich materials (such as grass, fresh plant
cuttings, biosolids, and manures)
• Carbon-rich materials (such as dried leaves, woody
materials, and straw).
2. Heating Up:
The material is placed into piles where it begins to heat up from
the biological activity of the compost microbes. Typically, compost
temperatures are required to reach at least 131 degrees F in a
specified time period in order to destroy weed seeds and pathogens.
The compost is turned or aerated, allowing the composting
microbes to breathe. After a period of time, the nitrogen-rich
material is depleted, the biological process slows, and the hot
compost begins to cool.
3. Finishing:
Typically “finished” compost has undergone a series of steps to
ensure maturity and stability. The cooling compost is aged, which
allows the decomposition process to slow down and the finished
compost to stabilize.
The end products you purchase may be entirely compost, or a
combination of compost blended with uncomposted additives
(such as peat, bark, minerals, or soil).
Applications for Compost
Planting New Garden Beds or Lawns
Spread a 2-4 inch layer of compost and mix into the upper 6-12
inches of existing soil: use more in sandy soils, and less in heavy clay.
Reapply ½-1 inch annually on garden beds.
Mulch (surface applications on landscape beds)
Spread a 1-2 inch layer of coarse, woody compost. To allow proper
airflow, it is best not to pile mulch around the stems of trees and
shrubs. Pull mulch 1-2 inches away from stems.
Top Dressing for Lawns
Spread a ¼ to ½ inch layer of fine screened compost, and rake it into
the lawn. For best results, plug-aerate the lawn before top-dressing.
Overseeding at the same time will thicken thin patches in lawns.
Blended (Manufactured) Topsoils
Good quality “topsoil” products usually include 10-40% compost by
volume, mixed with a sandy loam soil that allows good drainage.
These compost-soil blends help establish healthy lawns and gardens.
When to Use Compost?
• Any time you’re preparing soil for planting
• Mulching beds and gardens in spring, summer, or fall
• Top-dressing lawns in spring or fall.
| 6 | 6 | CompostGuide.pdf |
How Much Compost to Use
• Estimate the planting area (Math hint: square feet = length x width)
• Decide upon the appropriate application depth of the compost (page 4)
• Use the charts below to estimate your compost needs, and see the worked example after the charts. (Abbreviations: ft = foot; yd = yard; sq = square; cu = cubic.)
• Conversions: 9 square feet = 1 square yard; 27 cubic feet = 1 cubic yard.
Question: I have a plot about this big, how much compost do I buy?
Plot Size        # of Sq Feet    1/2" Deep - Mulching or Top-dressing    2" Deep - Amending new lawns or gardens
5' x 10' plot    50 sq ft        2.08 cu ft of compost                   8.33 cu ft of compost (0.31 cu yd)
10' x 10' plot   100 sq ft       4.17 cu ft of compost                   16.66 cu ft of compost (0.62 cu yd)
20' x 50' plot   1000 sq ft      41.7 cu ft of compost                   166.7 cu ft of compost (6.2 cu yd)
1 acre           43,560 sq ft    1,815 cu ft of compost (67 cu yd)       7,257 cu ft of compost (268 cu yd)
Question: If I buy this much compost, how many square feet will it cover?
Compost Quantity            1/2" Deep - Mulching or Top-dressing    2" Deep - Amending new lawns or gardens
1 cu ft bag of compost      24 sq foot area                         6 sq foot area
1.5 cu ft bag of compost    36 sq foot area                         9 sq foot area
2.2 cu ft bag of compost    53 sq foot area                         13 sq foot area
2.5 cu ft bag of compost    60 sq foot area                         15 sq foot area
1 cubic yard of compost     648 sq foot area                        162 sq foot area
Compost Works! Soil blending trials conducted in 2008 by the Washington Organic Recycling Council, with funding from the Washington Department of Ecology,
demonstrated that compost improves soil structure (lowers bulk density), nutrient availability (increases cation exchange capacity), and moisture-holding
capacity, and supplies both the nutrients that plants need and the organic matter that supports soil life. See the 2008 Soil Blending Trial report at
www.compostwashington.org.
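As a companion to the charts, here is a minimal worked sketch (not part of the original guide) that applies the same arithmetic: area in square feet times depth in feet gives cubic feet of compost, and 27 cubic feet make one cubic yard.

```python
def compost_needed(length_ft: float, width_ft: float, depth_inches: float):
    """Return the compost volume (cubic feet, cubic yards) for a rectangular plot."""
    area_sq_ft = length_ft * width_ft   # square feet = length x width
    depth_ft = depth_inches / 12.0      # convert the application depth from inches to feet
    cubic_feet = area_sq_ft * depth_ft
    cubic_yards = cubic_feet / 27.0     # 27 cubic feet = 1 cubic yard
    return cubic_feet, cubic_yards

# A 5' x 10' plot amended 2 inches deep needs about 8.33 cu ft (0.31 cu yd),
# matching the first row of the chart above.
print(compost_needed(5, 10, 2))
```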
| 7 | 7 | CompostGuide.pdf |
Portal Version 4.3 - User Manual
V1.0
October 2019
| 0 | 0 | edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf |
European Data Portal Version 4.3 – User Manual Page 2 of 57
Portal Version 4.3 – User Manual
V1.0
October 2019
Table of Contents
1 Introduction ..................................................................................................................................... 4
1.1 Purpose of the Document ....................................................................................................... 4
1.2 Reference Documents ............................................................................................................. 4
1.3 Terminology ............................................................................................................................. 4
2 Approach ......................................................................................................................................... 6
3 Main User Functions of the Portal .................................................................................................. 6
3.1 Portal Home Page .................................................................................................................... 8
3.1.1 How to browse through the Editorial Content of the Portal ......................................... 10
3.1.2 How to view / search for “Latest News” ....................................................................... 17
3.1.3 How to view / search for “Open Data Events” .............................................................. 18
3.1.4 How to subscribe to the EDP Newsletter ...................................................................... 19
3.1.5 How to view “Tweets” on the EDP ................................................................................ 20
3.1.6 How to switch to another User Language ..................................................................... 21
3.1.7 How to search for EDP Site Content .............................................................................. 22
3.1.8 How to Search for Datasets by Data Category .............................................................. 23
3.1.9 How to Search for Datasets by Keyword ....................................................................... 25
3.2 Datasets (Data Platform) ....................................................................................................... 26
3.2.1 Entering the Datasets-View ........................................................................................... 27
3.2.2 How to filter datasets by using “Faceted Search” ......................................................... 27
3.2.3 How to store personal queries ...................................................................................... 29
3.2.4 How to filter datasets by geographical area ................................................................. 31
3.2.5 How to download dataset distributions ........................................................................ 33
3.2.6 How to view licensing information ................................................................................ 34
3.2.7 How to switch to another user language ...................................................................... 36
3.2.8 How to browse by data catalogues ............................................................................... 37
3.3 Visualization of Geo-Spatial Data (map.apps) ....................................................................... 38
3.3.1 How to visualize geo-spatial data from a dataset resource .......................................... 38
3.4 Graphical Data Visualisation Tool .......................................................................................... 43
3.4.1 How to visualize graphical data from a dataset resource ............................................. 43 | 1 | 1 | edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf |
European Data Portal Version 4.3 – User Manual Page 3 of 57
3.5 Help Desk ............................................................................................................................... 48
3.5.1 How to contact the Portal’s Help Desk .......................................................................... 48
3.6 Metadata Quality Assurance (MQA) ..................................................................................... 50
3.6.1 The Global Dashboard View .......................................................................................... 50
3.6.2 The Catalogue details view ............................................................................................ 51
3.7 SPARQL Manager ................................................................................................................... 54
3.7.1 SPARQL Search .............................................................................................................. 54
3.7.2 SPARQL Assistant ........................................................................................................... 55
3.7.3 SPARQL Saving/Modifying a Query ............................................................................... 56
3.7.4 SPARQL Queries ............................................................................................................. 57
List of Figures
Figure 1: EDP Home Page (upper part) ................................................................................................... 8
Figure 2: EDP Home Page (lower part) .................................................................................................... 9
Figure 3 – Dataset Resource Page with Link to Geo-Spatial Visualisation. ........................................... 38
Figure 4 – Selection of layers................................................................................................................. 39
Figure 5 – Feature Info tool. .................................................................................................................. 40
Figure 6 – Legend tool. .......................................................................................................................... 40
Figure 7 – Disclaimer and tutorial buttons. ........................................................................................... 41
Figure 8 – Error message dialog. ........................................................................................................... 42
| 2 | 2 | edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf |
European Data Portal Version 4.3 – User Manual Page 4 of 57
1 Introduction
1.1 Purpose of the Document
The main purpose of this document is to present a User Manual for the main user functionalities of
the Portal Version 4.3, launched in production in May 2019. This document is an update of
the User Manual for the Portal Version 3.0 published in November 2017 [4].
1.2 Reference Documents
Id Reference Title Version
[1] EDP_S1_MAN EDP_S1_MAN_Portal-Version1-UserManual_v1.0 1.0
[2] EDP_S1_MAN EDP_S1_MAN_Portal-Version1.3-UserManual_v1.2 1.3
[3] EDP_S1_MAN EDP_S1_MAN_Portal-Version2.0-UserManual_v1.0 2.0
[4] EDP_S1_MAN EDP_S1_MAN_Portal-Version3.0-UserManual_v1.0 3.0
Table 1-1: Reference Documents
1.3 Terminology
Acronym Description
API Application Programmer Interface
CKAN (replaced by the “Data Platform”)
CSV Comma separated values
Data Platform Single page web app for managing and displaying datasets
DCAT-AP DCAT Application Profile - Metadata specification based on the Data
Catalogue vocabulary (DCAT)
DRUPAL Content Management System
ECAS / EU-Login EU user login page
EDP European Data Portal
FME Feature Manipulation Engine
GUI Graphical User Interface
HTTP Hypertext Transfer Protocol
JSON JavaScript Object Notation (a lightweight data-interchange format)
maps.app Geo-spatial data visualization application
MQA Metadata Quality Assistant
RDF Resource Description Framework
SOLR Search engine used for portal content search and dataset search | 3 | 3 | edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf |
European Data Portal Version 4.3 – User Manual Page 5 of 57
Acronym Description
SPARQL Query language for linked data (RDF)
SSL Secure Socket Layer
URL Uniform Resource Locator
XML Extensible Markup Language
Table 1-2: Abbreviations and Acronyms | 4 | 4 | edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf |
European Data Portal Version 4.3 – User Manual Page 6 of 57
2 Approach
The approach used for this User Manual was based on the identification of the main user functions of
the Portal and the description of each function from the user’s perspective in terms of “How to…”.
The documentation of each main function consists of a screen snapshot, the steps required to execute the
function, and optionally a screenshot of the results.
3 Main User Functions of the Portal
This section describes all of the main user functions supported by the Portal Version 4.3.
Table 1-3 below lists the described functions by module.
Module Name Function
1 Portal HomePage
- How to browse through the Editorial Content
(how to access Resources on Open Data: eLearning
modules, Training Companion, Reports about Open
Data)
- How to view / search for “Latest News”
- How to view / search for “Open Data Events”
- How to subscribe to the EDP Newsletter
- How to view “Tweets” on the EDP
- How to switch to another User Language
- How to search for EDP Site Content
- How to search for Datasets by Data Category
- How to search for Datasets by Keyword
2 Datasets (Data Platform) Entering the Datasets-View
How to filter datasets by using “Faceted Search”
How to store personal queries
How to filter datasets by geographical area
How to download dataset distributions
How to view licensing information
How to switch to another user language
How to browse by data catalogues
3 Visualization of Geo-Spatial
Data (map.apps)
How to visualize geo-spatial data from a dataset resource
4 Graphical Data Visualisation
Tool
How to visualize graphical data from a dataset resource
5 Help Desk How to contact The Portal’s Help Desk
6 Metadata Quality Assurance
(MQA)
Monitoring tool for the metadata quality:
‐ The Global Dashboard View
‐ The Catalogue details view
7 SPARQL Manager How to run SPARQL Queries using:
- SPARQL Search | 5 | 5 | edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf |
European Data Portal Version 4.3 – User Manual Page 7 of 57
Module Name Function
- SPARQL Assistant
- SPARQL Saving/Modifying a Query
- SPARQL Queries
Table 1-3: Main functions of the Portal Version 4.3 | 6 | 6 | edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf
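The SPARQL Manager (module 7, detailed in section 3.7) accepts SPARQL queries against the Portal's linked-data (RDF/DCAT-AP) metadata. As a hedged illustration of how such a query could also be issued programmatically, the sketch below sends a simple DCAT query over HTTP from Python; the endpoint URL is a placeholder assumption rather than an address confirmed by this manual, and the query uses only standard DCAT/Dublin Core properties.

```python
import requests

# Placeholder endpoint: substitute the Portal's actual SPARQL endpoint.
SPARQL_ENDPOINT = "https://example.org/sparql"

# List a few dataset titles using standard DCAT / Dublin Core terms.
QUERY = """
PREFIX dcat: <http://www.w3.org/ns/dcat#>
PREFIX dct:  <http://purl.org/dc/terms/>
SELECT ?dataset ?title WHERE {
  ?dataset a dcat:Dataset ;
           dct:title ?title .
} LIMIT 10
"""

# Standard SPARQL-over-HTTP: send the query and request JSON results.
response = requests.get(
    SPARQL_ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
)
response.raise_for_status()
for row in response.json()["results"]["bindings"]:
    print(row["dataset"]["value"], "-", row["title"]["value"])
```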
European Data Portal Version 4.3 – User Manual Page 8 of 57
3.1 Portal Home Page
[Screenshot annotations: Header links; Main menu; Searching for Datasets by Keyword; Searching for Datasets by Data Category; News section; Portal search of Site content; Language selection]
Figure 1: EDP Home Page (upper part)
| 7 | 7 | edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf |
European Data Portal Version 4.3 – User Manual Page 9 of 57
[Screenshot annotations: Landscaping section; Event Calendar section; EDP Tweets section; Featured Articles section; Newsletter section; EDP Help Desk; Footer links; Social Media links]
Figure 2: EDP Home Page (lower part) | 8 | 8 | edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf |
European Data Portal Version 4.3 – User Manual Page 10 of 57
3.1.1 How to browse through the Editorial Content of the Portal
The editorial content of the Portal is organized into 4 main menu items:
1. What we do
2. Providing Data
3. Using Data
4. Resources
1. Click on “What we do”, then on sub-menu “Our Activities”
The system displays a separate page with information on what is done in the Portal.
| 9 | 9 | edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf |
European Data Portal Version 4.3 – User Manual Page 11 of 57
2. Click on “Providing Data”, then on sub-menu “Practical Guide”
The system displays a separate page with information on how to provide data to the Portal. This page
mainly addresses the suppliers (harvested portals) of the data and metadata.
| 10 | 10 | edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf |
European Data Portal Version 4.3 – User Manual Page 12 of 57
3. Click on “Using Data”
The system displays a separate page with information on how the Portal data/metadata can be
(re-)used. This page mainly addresses the users of the data and metadata.
3a. Benefits of Using Open Data
By clicking on the sub-menu “Benefits of Using Data”, the system displays a page with potential
benefits from the (re-)usage of Open Data.
| 11 | 11 | edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf |
European Data Portal Version 4.3 – User Manual Page 13 of 57
3b. Use Cases of Open Data
By clicking on the sub-menu “Use Cases”, the system displays a list of success stories (use cases)
from users having successfully (re-)used Open Data for an app, website, etc.
The list can be filtered by keyword, country of origin, region, sector (data category) and type of
use case.
| 12 | 12 | edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf |
European Data Portal Version 4.3 – User Manual Page 14 of 57
4. Click on “Resources”
The system displays several sub-menu items that lead to eLearning and training material as well as to
a library of downloadable reports and documents about Open Data.
4a. eLearning
By clicking on the “eLearning” sub-menu item and then on the button on the
subsequent page, the system switches to the training platform, from which 16 training lessons can
be taken directly online.
| 13 | 13 | edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf |
European Data Portal Version 4.3 – User Manual Page 15 of 57
4b. Training Companion
By clicking on the “Training Companion” sub-menu item, the system provides detailed
information on how to deliver training on the basics of Open Data, as well as the corresponding
supporting materials.
| 14 | 14 | edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf |
European Data Portal Version 4.3 – User Manual Page 16 of 57
4c. Reports about Open Data
By clicking on the “Reports about Open Data” sub-menu item, the system provides a list of
available reports on open data. The list can be filtered by keyword, year of publication, country of
origin, and type of report.
| 15 | 15 | edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf |
European Data Portal Version 4.3 – User Manual Page 17 of 57
3.1.2 How to view / search for “Latest News”
The Home Page displays the latest 4 news items in the “Latest News” panel on the left hand side.
‐ Click on any of the 4 news items to display the complete news article (here: item #1).
‐ Or click on “More news” to find previously published news articles in the news
archive.
| 16 | 16 | edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf |
European Data Portal Version 4.3 – User Manual Page 18 of 57
3.1.3 How to view / search for “Open Data Events”
The Home Page displays the latest 4 Open Data events in the “Open Data Events in Europe” panel on
the right hand side.
‐ Click on any of the 4 events to display the event article (here: item #1).
‐ Or click on “View calendar” to find current and future events on the events
calendar.
| 17 | 17 | edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf |
European Data Portal Version 4.3 – User Manual Page 19 of 57
3.1.4 How to subscribe to the EDP Newsletter
On the Portal Home Page:
‐ Either click on the “Newsletter” item in the page header:
Then, on the “Newsletter subscriptions” page:
• Enter your E-Mail address
• Click on the button “Subscribe”
The system will display a notification message after successful subscription.
Or
‐ Enter your email address directly in the footer and click on the “Subscribe” button.
The system will display a notification message after successful subscription.
| 18 | 18 | edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf |
European Data Portal Version 4.3 – User Manual Page 20 of 57
3.1.5 How to view “Tweets” on the EDP
The Home Page displays the latest tweets on the European Data Portal in the “Tweets” panel on the
right-hand side.
‐ Click on any of the tweets to display the complete tweet on twitter.
‐ Scroll vertically to see previous tweets.
| 19 | 19 | edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf |
European Data Portal Version 4.3 – User Manual Page 21 of 57
3.1.6 How to switch to another User Language
Select another language from the language selection box located on the upper right corner of the
home page.
The User Interface as well as the main editorial content is displayed in the selected language.
The EDP currently supports all 24 official EU languages + Norwegian:
English (en), Bulgarian (bg), Spanish (es), Czech (cs), Danish (da), German (de), Estonian
(et), Greek (el), French (fr), Irish (ga), Croatian (hr), Italian (it), Latvian (lv), Lithuanian (lt),
Hungarian (hu), Maltese (mt), Dutch (nl), Polish (pl), Portuguese (pt), Romanian (ro), Slovak
(sk), Slovenian (sl), Finnish (fi), Swedish (sv), Norwegian (no).
Note:
The following detailed editorial content – apart from the landing pages - is only available in English /
French and some additional languages:
‐ Practical Guide (formerly “Goldbook”): (en)
‐ eLearning Modules: (en, fr, de, it, es, sv)
‐ Training Companion: (en)
‐ More Training Material: (en)
‐ Reports about Open Data: (en)
‐ Use Cases (en)
| 20 | 20 | edp_s1_man_portal-version_4.3-user-manual_v1.0.pdf |