Wednesday, December 22, 2010

Intro to Microformats

Microformats: Digging Deeper into the Web by Ben Ward is a short introduction to the topic. It would have been nice to see some mention of COinS, since this is a publication for information workers.
Extracting, repurposing and combining information is a core activity for info pros, so it's good to know that we are being helped with this on the Web, even if we are not aware of it. Ben Ward describes how microformats - vocabularies which enable recurring information to be described and then reused - are making content available in a richer form and facilitating the combining of data.
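
To make that concrete, here is a minimal sketch (Python, standard library only) of pulling a name out of an hCard, the microformat for contact information. The sample markup uses the published hCard class names (vcard, fn, url); the parsing logic is my own illustration, not code from Ward's article.

from html.parser import HTMLParser

# Minimal hCard name extractor: collect text inside any element
# whose class list includes "fn" (formatted name).
class HCardNames(HTMLParser):
    def __init__(self):
        super().__init__()
        self.grab = False
        self.names = []

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        if "fn" in classes:
            self.grab = True

    def handle_data(self, data):
        if self.grab and data.strip():
            self.names.append(data.strip())
            self.grab = False

p = HCardNames()
p.feed('<div class="vcard"><span class="fn">Ben Ward</span>, '
       '<a class="url" href="http://benward.me">benward.me</a></div>')
print(p.names)  # ['Ben Ward']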

Tuesday, December 21, 2010

Dept. Stays

Here is a Christmas present from LOC. Lots of useless work avoided.
The Library of Congress will not undertake changing headings with the abbreviation “Dept.” to the fuller form at this time. Between August 20 and October 1, 2010, the Library requested comments from the library community on changing “Dept.” to “Department” to follow the longstanding AACR2 provision (which is also incorporated into RDA: Resource Description and Access) of not abbreviating "department" in headings unless it is abbreviated by the body on the resource from which the name has been taken.
The few comments received by the Policy and Standards Division, Library of Congress, via email showed a clear preference for making this change, but the limited response did not constitute a mandate. In addition, those opposed to the change had solid reasons for not undertaking it at this time. Consequently, the Library’s Policy and Standards Division will NOT proceed with implementing the change now. The issue will be reviewed again, following a decision regarding implementation of RDA.

Monday, December 20, 2010

ePub Metadata

It seems those ePub books I've been downloading from Project Gutenberg have metadata. It is based on Dublin Core but does provide more specific terms for the creator. Talat Chaudhri at UKOLN writes about why ePub is of interest from the point of view of metadata and application profiles, in What is ePub?

Is anyone harvesting ePub metadata? It seems it would be trivial to provide an OAI-PMH interface to ePubs.
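
For the curious, here is roughly what reading that metadata looks like. An ePub is a ZIP file; META-INF/container.xml points at the OPF package file, whose metadata section holds the Dublin Core elements. A sketch using only the Python standard library (the filename is made up):

import zipfile
import xml.etree.ElementTree as ET

NS = {
    "c": "urn:oasis:names:tc:opendocument:xmlns:container",
    "opf": "http://www.idpf.org/2007/opf",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def epub_dc(path):
    with zipfile.ZipFile(path) as z:
        # container.xml names the package (.opf) file
        container = ET.fromstring(z.read("META-INF/container.xml"))
        opf_path = container.find(".//c:rootfile", NS).attrib["full-path"]
        meta = ET.fromstring(z.read(opf_path)).find("opf:metadata", NS)
        return {tag: [e.text for e in meta.findall("dc:" + tag, NS)]
                for tag in ("title", "creator", "language")}

print(epub_dc("some-gutenberg-book.epub"))  # hypothetical filename

From there, re-serializing those elements as oai_dc for an OAI-PMH data provider would be mostly plumbing.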

Tuesday, December 14, 2010

MARBI Papers Available for Review

Proposal 2011-01: Coding for Original Language in Field 041 (Language Code) of the MARC 21 Bibliographic Format

Discussion Paper No. 2011-DP01: Changes to the MARC 21 Bibliographic Format to Accommodate RDA Production, Publication, Distribution and Manufacture Statements

Discussion Paper No. 2011-DP02: Additional Elements to Support RDA in the MARC 21 Format

Discussion Paper No. 2011-DP03: Identifying Work, Expression, and Manifestation records in the MARC 21 Bibliographic, Authority, and Holdings Formats

Discussion Paper No. 2011-DP04: Treatment of Controlled Lists of Terms for Carrier Attributes in RDA and the MARC 21 Bibliographic Format

The MARBI ALA Midwinter Conference 2011 agenda is available.

Monday, December 13, 2010

Metadata Interoperability

On 2010-12-15, between 13:00 and 17:00 CET, Mikael Nilsson will defend his PhD thesis on metadata interoperability. The defense will be webcast.

From Interoperability to Harmonization in Metadata Standardization - Designing an Evolvable Framework for Metadata Harmonization
Metadata is an increasingly central tool in the current web environment, enabling large-scale, distributed management of resources. Recent years have seen a growth in interaction between previously relatively isolated metadata communities, driven by a need for cross-domain collaboration and exchange. However, metadata standards have not been able to meet the needs of interoperability between independent standardization communities. For this reason the notion of metadata harmonization, defined as interoperability of combinations of metadata specifications, has risen as a core issue for the future of web-based metadata.

This thesis presents a solution-oriented analysis of current issues in metadata harmonization. A set of widely used metadata specifications in the domains of learning technology, libraries and the general web environment have been chosen as targets for the analysis, with a special focus on Dublin Core, IEEE LOM and RDF. Through active participation in several metadata standardization communities, a body of knowledge of harmonization issues has been developed.
Webcasting a Ph.D. defense is new to me. Is this a trending practice?

OLAC

OLAC's membership drive is now underway; now is the time to renew or join. At only $20.00 for a year or $55.00 for three years, it is a best buy.

Tuesday, November 30, 2010

Dates in MADS

The MODS Editorial Committee is considering changes to the Metadata Authority Description Schema (MADS). Comments are welcomed.
The MODS Editorial Committee is considering changes to MADS. Some of these will be changes to bring it up to date with changes in MODS 3.4 (and a few things in earlier 3.X versions). Other changes involve accommodating RDA elements, many of which have been added to MARC. We are currently discussing how to express dates in MADS and would like some feedback from the MODS community.

It is desirable to have dates associated with the entity being described in MADS, e.g., a person or organization. There are several dates in RDA data elements: Date associated with the person (date of birth, date of death, period of activity), Date associated with a family, Date associated with a corporate body (date of conference, establishment, and termination), and Date associated with an event. In adding dates associated with the entity to MADS we need to consider how dates are handled in MODS as well as how they would be mapped from MARC.

In MARC authorities we added a field 046 for dates associated with the entity described in the authority record; this field was previously in the bibliographic format. We added separate data elements for:
  • birth date
  • death date
  • date created (start and end)
  • start period
  • end period
The last two could be used for the period of activity of a person or the date of establishment/termination of an organization. The dates are in a structured form with the encoding scheme identified in a separate data element. This field is considered part of the "general information" about the heading.

In MODS, most of the dates are in originInfo (other than subject dates, which are in subject/temporal) and would not apply to authorities. In MODS and MADS birth and death dates may be included as part of the name heading (to be consistent with headings in MARC) in name/namePart type=”date”. There are some specific types of dates that give additional information about the entity described in the record, and it is desirable to include them in a separate element whether or not they are given as part of the name.

However, dates in MODS do follow a common structure that may be useful to adopt in MADS. MODS date elements use an “encoding” attribute to specify the scheme used to form the date and a “point” attribute with values start and end to indicate date ranges. An element “dateOther” is used for dates not accommodated otherwise with a type attribute to allow the kind of date to be specified, since we could never predict all the possibilities for all applications.

Proposal: For MADS we suggest adding a container element. This would be more or less equivalent to what was added to MARC. The proposed list of date subelements for MADS is as follows:

<birthDate>
<deathDate>
<activePeriod> (with point=start,end)
<dateCreated> (with point=start,end)
<dateOther> (with type and point=start,end)

All dates would have an encoding attribute.
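
To picture the proposal, here is a hypothetical instance. The container element's name was lost from the announcement text above (an XML tag eaten somewhere in transmission), so <dates> is my placeholder; the subelements, point attribute, and encoding attribute are as proposed:

<dates>
  <birthDate encoding="w3cdtf">1835-11-30</birthDate>
  <deathDate encoding="w3cdtf">1910-04-21</deathDate>
  <activePeriod point="start" encoding="w3cdtf">1863</activePeriod>
  <activePeriod point="end" encoding="w3cdtf">1910</activePeriod>
</dates>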

There are many other ways to do this of course (e.g. separate data elements for start and end rather than point attribute).

Comments are welcome.

Saturday, November 27, 2010

eReaders

eReaders are being pushed as one of the gifts this holiday season. I bought one last summer and here are my personal impressions and thoughts about the device. They may help you in making a decision.

I bought a nook. I wanted e-ink; I spend all day looking at a screen and need to rest my eyes. This limited my choices to the Kindle, Kobo, nook and Sony readers. I wanted something open, something I could use to download books from my local library. That eliminated the Kindle. I picked the nook because it just felt better, it was really just a toss up. I did not get the 3G model. I think I made the right decision there, for me. I have plenty of books unread, no chance of being stuck without anything to read. Connecting it to my computer once a week or so works for me.

I've mostly been reading books freely available on the Internet. Project Gutenberg and eBooks@Adelaide have supplied many of my texts. Kipling, H. Rider Haggard, and Jules Verne are the authors I've been reading the most. I have downloaded a book from the library and it worked just fine. I also bought a book by Haggard that was also on Project Gutenberg, to compare them. Not much difference; I could have saved the two or three dollars. The texts do sometimes have odd white space. Long blanks might have been illustrations, and they may be disconcerting for some readers.

On my computer I use Calibre to manage the texts. For the library book I had to use Adobe Digital Editions.

The nook comes with chess and sudoku. I tried them and that was it; I'd rather play on the DS. Maybe if I was stuck in an airport for a very long time I'd try them again. There is a very poor Web browser. Once again, I'd rather browse using a DS. The nook can also play audio files. I have yet to try this; my mp3 player works just fine. (The DS is not a bad player either.) These features are fluff. The wi-fi works fine. Setting up a new connection is straightforward.


I have enjoyed my nook. It works for me. However, it does require downloading and installing programs that do not come with the unit. Also, unless you want to purchase all your books, searching for texts and downloading them in the right format is necessary. They are not simple devices; they assume either deep pockets or a fair bit of tech savvy.

Phone Apps

Since I asked for suggestions for a contact app for my Android phone, it is only fair that I share the apps I use and find useful.
  • Key Ring. Scan your membership cards and have them in your phone. Prevents the Costanza Syndrome.
  • Light. Turns the whole screen white. Helpful for old tired eyes in dark pubs.
  • Memo. Just a very basic note taker.
  • Tripit. Pulls your confirmation notices from your email and presents the info in a timeline. It adds maps, weather. If you only travel a few times a year this is worth a download.
  • Ultimate Stopwatch. It has the countdown feature that I need for physical therapy.
OK, those are the ones I find very useful that are not obvious. I do have Twitter, Four Square, Flickr, Facebook and a slew of library apps.

I'd like to find a contact app as nice as my old Palm had. I'm also looking for a good backup app, something that would copy my apps and data from the phone/SIM card to the SD card.

Friday, November 26, 2010

Contacts

Anyone know of a good Android phone app that includes street address info? Thanks.

Wednesday, November 24, 2010

HTTPS Everywhere

The EFF has a new version of HTTPS Everywhere.
This week, EFF launched a new version of HTTPS Everywhere, a free security tool that provides enhanced privacy protection for Firefox browser users. EFF built HTTPS Everywhere to automatically switch many of the websites you visit from insecure HTTP to secure HTTPS.

EFF and the Tor Project originally built the HTTPS Everywhere software to help users take advantage of secure web searching on Google and a few other sites. Browsers normally prefer HTTP, unless site operators explicitly redirect browsers to HTTPS. HTTPS Everywhere changes the browser to prefer HTTPS wherever it's known to work.

After researchers demonstrated major web security flaws on social networking sites, webmail and search engines, EFF was inspired to expand HTTPS Everywhere to include Facebook, Twitter, Hotmail, Bit.ly, Cisco, Dropbox, Evernote, and GitHub. In addition to making HTTPS Everywhere open-source and available for free, EFF has released a technical guide to help website operators implement HTTPS properly, which will improve security and privacy across the web.
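
The core idea is a set of per-site rewrite rules applied before the request goes out. This toy Python version captures the concept only; it is not EFF's actual ruleset format (the real rules ship as XML files bundled with the extension):

import re

# Toy per-site rules: rewrite hosts known to support HTTPS.
RULES = [
    (re.compile(r"^http://(www\.)?twitter\.com/"), "https://twitter.com/"),
    (re.compile(r"^http://www\.facebook\.com/"), "https://www.facebook.com/"),
]

def prefer_https(url):
    for pattern, replacement in RULES:
        if pattern.match(url):
            return pattern.sub(replacement, url, count=1)
    return url  # no rule known to work: leave the URL alone

print(prefer_https("http://www.twitter.com/eff"))  # https://twitter.com/eff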

Friday, November 19, 2010

MADS/RDF Ontology for Review

The Library of Congress is looking for feedback on the MADS/RDF ontology.
A MADS/RDF ontology developed at the Library of Congress is available for a public review period until Jan. 14, 2011. The MADS/RDF (Metadata Authority Description Schema in RDF) vocabulary is a data model for authority and vocabulary data used within the library and information science (LIS) community, which is inclusive of museums, archives, and other cultural institutions. It is presented as an OWL ontology.

Documentation and the ontology are available at: http://www.loc.gov/standards/mads/rdf/

Based on the MADS/XML schema, MADS/RDF provides a means to record data from the Machine Readable Cataloging (MARC) Authorities format in RDF for use in semantic applications and Linked Data projects. MADS/RDF is a knowledge organization system designed for use with controlled values for names (personal, corporate, geographic, etc.), thesauri, taxonomies, subject heading systems, and other controlled value lists. It is closely related to SKOS, the Simple Knowledge Organization System and a widely supported and adopted RDF vocabulary. Unlike SKOS, however, which is very broad in its application, MADS/RDF is designed specifically to support authority data as used by and needed in the LIS community and its technology systems. Given the close relationship between the aim of MADS/RDF and the aim of SKOS, the MADS ontology has been fully mapped to SKOS.
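
As a taste of what such data might look like, here is a small hand-written sample in the spirit of MADS/RDF alongside its SKOS mapping, loaded with the rdflib Python package. The class and property names (madsrdf:PersonalName, madsrdf:authoritativeLabel) are my recollection of the draft ontology; check the documentation above before relying on them.

from rdflib import Graph

# Illustrative authority description; names per the draft MADS/RDF ontology.
data = """
@prefix madsrdf: <http://www.loc.gov/mads/rdf/v1#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .

<http://id.loc.gov/authorities/names/n79021164>
    a madsrdf:PersonalName ;
    madsrdf:authoritativeLabel "Twain, Mark, 1835-1910" ;
    skos:prefLabel "Twain, Mark, 1835-1910" .
"""

g = Graph()
g.parse(data=data, format="turtle")
print(len(g), "triples")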

Thursday, November 18, 2010

Google Refine

Anyone played with or used Google Refine on their data yet? Since it is part of Google, it is likely to get more use than the MARC tools we use in the library community. Might be good to be aware of what the rest of the world is doing with their data. The description sounds interesting: "Google Refine is a power tool for working with messy data, cleaning it up, transforming it from one format into another, extending it with web services, and linking it to databases like Freebase." I'm going to have to find some time to explore this tool.

Wednesday, November 17, 2010

New NACO Node

Very exciting news: SkyRiver is now a NACO node.
The PCC welcomes a new NACO Node* member, SkyRiver Technology Solutions. Name authority records contributed through SkyRiver will carry the prefix "ns." The Library of Congress has been working with all NACO nodes to prepare for the new prefix in LC NACO Authority File (LCNAF) records. The first NACO records from SkyRiver will enter the shared database on November 17, 2010.

SkyRiver will use MARC organization codes as identifiers for PCC partners who contribute through their service. SkyRiver's own records will appear under the MARC code CaEvSKY.
I'll have to investigate pricing. If inexpensive enough this could widen NACO participation. We, and I'd guess some other small research libraries, just can't afford OCLC start-up fees. Yet, we have close contact with authors in our subject areas. Our connections and expertise could be valuable to the community. I'm hoping this new NACO node will be priced so we can share without too much pain.

Thursday, November 11, 2010

VRA Core Schemas now Hosted by Library of Congress

News from LC.
The VRA Core is a data standard for the description of works of visual culture as well as the images that document them. The standard is now being hosted by the Network Development and MARC Standards Office of the Library of Congress (LC) in partnership with the Visual Resources Association. VRA Core’s schemas and documentation are now accessible at http://www.loc.gov/standards/vracore/ while user support materials, such as VRA Core examples, FAQs and presentations, will continue to be accessible at http://www.vraweb.org/projects/vracore4/

In addition, a new listserv has been created called The Core List (vracore@loc.gov). The Core List is an unmoderated computer forum that allows users of the VRA Core community to engage in a mutually supportive environment where questions, ideas, and tools can be shared. The Core List is operated by the Library of Congress Network Development and MARC Standards Office. Users may subscribe to this list by filling out the subscription form at the VRACORE Listserv site.

Tuesday, November 09, 2010

Space Science Workshop for Nebraska and Kansas Librarians

Public library staff in Nebraska and Kansas who serve 9-13-year-olds are invited to join us for a free NASA-supported workshop. The two-day training features our Explore! module about Jupiter and NASA's upcoming Juno mission.

Space is limited. Register by December 17, 2010.

NASA Space Science Workshop: Explore! Jupiter's Family Secrets
Grand Island, Nebraska
February 16-17, 2011

Explore! Jupiter’s Family Secrets will acquaint you with Jupiter and NASA’s upcoming Juno mission to discover clues about our solar system’s common history. Scientists and educators from NASA and the Lunar and Planetary Institute will share space science information, resources, hands-on activities, and demonstrations developed specifically for you to infuse into your programs with youth ages 9 to 13 and their families.

The workshop is free. A $200 stipend is available for 20 participants to offset travel costs for those traveling from beyond a 50-mile radius of the workshop location. A stipend request form will be emailed to participants on a first-come-first-accepted basis.

Friday, November 05, 2010

IFLA Cataloguing Section

The IFLA Cataloguing Section's meeting report from the conference in Gothenburg, Sweden, in August is now available online.

Questionnaire for U.S. Individuals/Libraries Who Want to Comment on RDA

LC wants your comments on RDA.
The U.S. RDA Test Coordinating Committee would welcome comments from individuals or libraries in the U.S. who are not formal or informal Test participants, whether they did or did not create RDA records.

The Committee has designed an online questionnaire available at URL http://www.surveymonkey.com/s/Q5968DB. Note that the questionnaire is designed primarily to accept comments about the experiences of creating catalog records using the RDA instructions and of using RDA records in a catalog but record creation is not a requirement for filling out the survey.

If you are a formal US RDA Test participant and have submitted other surveys for the Test, please do not use the Informal US RDA Testers Questionnaire.

MARC Tools

PTFS has released some Open Source MARC Utilities Tools.
PTFS has released its MARC Utilities - Metadata Converter, for the conversion of MARC records in batch mode to other metadata formats including Dublin Core and XML.

The MARC Utilities is a self-contained executable program which offers an array of catalog tools to manage, convert and analyze MARC-based data. It is written in Perl-TK, and is a great add-on to any MARC cataloger's (or database administrator's) toolset.

Tuesday, October 26, 2010

Open Conference Systems

The more tools that make it easy to create metadata, the better it is for catalogers and the people they serve. Open Conference Systems makes creating OAI metadata for harvesting as easy as selecting an option; a harvesting sketch follows the feature list below.
Open Conference Systems (OCS) is a free Web publishing tool that will create a complete Web presence for your scholarly conference. OCS will allow you to:
  • create a conference Web site
  • compose and send a call for papers
  • electronically accept paper and abstract submissions
  • allow paper submitters to edit their work
  • post conference proceedings and papers in a searchable format
  • post, if you wish, the original data sets
  • register participants
  • integrate post-conference online discussions
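
If an OCS site has that OAI option switched on, harvesting its records takes a few lines of standard-library Python. The base URL below is hypothetical; the OAI-PMH verbs and namespaces are standard:

import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical OCS endpoint; any OAI-PMH data provider works the same way.
url = ("http://conf.example.org/ocs/index.php/index/oai"
       "?verb=ListRecords&metadataPrefix=oai_dc")

NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

root = ET.fromstring(urllib.request.urlopen(url).read())
for record in root.findall(".//oai:record", NS):
    title = record.find(".//dc:title", NS)
    print(title.text if title is not None else "(no title)")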

Monday, October 25, 2010

Genre and Form Terms for Law

News from LC. Since early 2007, the Library of Congress has created over 600 genre/form terms for moving images, sound recordings, and cartographic materials. In November 2010 the Policy and Standards Division (PSD) will approve approximately 80 genre/form terms for law, the culmination of a successful partnership with the American Association of Law Libraries (AALL), whose members developed a thesaurus of law genre/form terms and presented it to PSD. (For AALL's thesaurus see http://www.aallnet.org/sis/tssis/committees/cataloging/classification/genreterms/lawgenreformterms2010final.pdf.)

The law genre/form terms will appear on LC's Tentative Weekly List 44 and be approved on November 3, 2010. The Library of Congress plans to implement the terms in new cataloging in early 2011; a separate announcement will be made when the specific date has been determined.

Additional information on this and other genre/form projects can be found on LC's genre/form web page, http://www.loc.gov/catdir/cpso/genreformgeneral.html. The page includes a timeline, an extensive FAQ, reports, discussion papers, and announcements.

Classification of Named Entities In Wikipedia

Fine Grained Classification of Named Entities In Wikipedia by Maksim Tkachenko, Alexander Ulanov, and Andrey Simanovsky has been published as HPL-2010-166.
This report describes a study on classifying Wikipedia articles into an extended set of named entity classes. We employed a semi-automatic method to extend Wikipedia class annotation and created a training set for 15 named entity classes. We implemented two classifiers. A binary named-entity classifier decides between articles about named entities and other articles. A support vector machine (SVM) classifier trained on a variety of Wikipedia features determines the class of a named entity. Combining the two classifiers helped us boost classification quality beyond the state of the art.
Pretty technical, but anything that helps disambiguation sounds fine to me.
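
For those who want to see the shape of the approach, here is a toy two-stage classifier in Python with scikit-learn. This is my sketch of the architecture the abstract describes, not the authors' code, and four training sentences are obviously far too few for real use:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["Albert Einstein was a physicist.",
         "Paris is the capital of France.",
         "Chess is a strategy board game.",
         "IBM is a technology company."]
is_entity = [1, 1, 0, 1]
entity_class = ["PERSON", "LOCATION", None, "ORGANIZATION"]

# Stage 1: is this article about a named entity at all?
stage1 = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(texts, is_entity)

# Stage 2: for entity articles, which class? (an SVM, as in the report)
ent_texts = [t for t, c in zip(texts, entity_class) if c]
ent_labels = [c for c in entity_class if c]
stage2 = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(ent_texts, ent_labels)

doc = "Marie Curie was a physicist and chemist."
if stage1.predict([doc])[0]:
    print(stage2.predict([doc])[0])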

Friday, October 22, 2010

Greater Houston Texas Area

Time to register for the Treasures of the Gulf Coast TLA District 8 Fall Conference. It takes place next weekend: October 30, 8:00 am to 2:00 pm, at Clear Brook High School, located at 4607 FM 2351, Friendswood, TX 77546. The Conference Luncheon Speaker is Gwendolyn Zepeda.

Vendor registration is still open.

I'm sorry I'll have to miss it; I do enjoy the meeting every year. However, this year I'll be at a Tweetup for the shuttle launch at Cape Canaveral. A once-in-a-lifetime opportunity.

Thursday, October 21, 2010

DC-2010 (Pittsburgh) Proceedings

The DCMI conference proceedings are open access and are now published. Almost 200 pages of interesting reading. Some papers are:
  • Building blocks of metadata: What can we learn from Lego™? / Emma Tonkin, Andrew Hewson
  • Visualizing Metadata for Environmental Datasets / Sherry Koshman
  • FRBR: A Generalized Approach to Dublin Core Application Profiles / Maja Zumer, Marcia Lei Zeng, Athena Salaba
  • Enhancing Interoperability of FRBR-Based Metadata / Jenn Riley
  • Moving Library Metadata Toward Linked Data: Opportunities Provided by the eXtensible Catalog / Jennifer B. Bowen
  • Celebrating 10 Years of Government of Canada Metadata Standards / Margaret Devey, Marie-Claude Côté, Leigh Bain, Lynne McAvoy
  • Extending RSS to Meet Central Bank Needs / Paul Asman, San Cannon, Christine Sommo
Good quote from Twitter, via @anarchivist (Mark Matienzo): "Data that cannot speak for itself will be more vulnerable to becoming irrelevant" - Tom Baker.

Tuesday, October 19, 2010

Hosted Library Systems

What issues are important in a hosted library system? Beyond price and up-time, that is. Ones that come to my mind are:
  1. Ability to handle MARC authority records
  2. Z39.50 or SRU/SRW searching, and the ability for a searcher to download a MARC record
  3. Stable, short URLs for each record
  4. RDA compliant
What am I overlooking? Any suggestions about good, bad or ugly systems? (For point 2, a quick SRU smoke test is sketched below.)
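
SRU is just an HTTP GET, so a candidate system's support can be checked from a few lines of Python. The base URL is hypothetical; the parameters are standard SRU 1.1:

import urllib.parse
import urllib.request

base = "http://catalog.example.edu/sru"  # hypothetical endpoint
params = urllib.parse.urlencode({
    "operation": "searchRetrieve",
    "version": "1.1",
    "query": 'dc.title = "cataloging"',
    "recordSchema": "marcxml",
    "maximumRecords": "1",
})
response = urllib.request.urlopen(base + "?" + params).read()
print(response[:400])  # should be an SRU searchRetrieveResponse in XML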

Monday, October 18, 2010

Addition to Source Codes for Vocabularies, Rules, and Schemes

The source codes listed below have been recently approved. The codes will be added to the applicable Source Codes for Vocabularies, Rules, and Schemes lists. See the specific source code list for current usage in MARC fields and MODS/MADS elements.

The source codes should not be used in exchange records until 60 days after the date of this notice to provide implementers time to include newly-defined codes in any validation tables.

Classification Scheme Source Codes
The following source code has been added to the Classification Scheme Source Codes list for usage in appropriate fields and elements.

Addition:
jelc
Journal of economic literature (JEL) classification

Standard Identifier Source Codes
The following source code has been added to the Standard Identifier Source Codes list for usage in appropriate fields and elements.

Addition:
isni
International standard name identifier (ISNI) (Geneva: International Organization for Standardization)

Subject Heading and Term Source Codes
The following source code has been added to the Subject Heading and Term Source Codes list for usage in appropriate fields and elements.

Addition:
gnd
Gemeinsame Normdatei (Leipzig: Deutsche Nationalbibliothek)

Wednesday, October 13, 2010

Unshelved

The cataloger for Mallville Public comes out of the basement this week in Unshelved.

Tuesday, October 12, 2010

Maintenance release of DCMI Terms Recommendation

"The DCMI Usage Board has published a maintenance release of DCMI Metadata Terms. Changes include a formal range for dcterms:title, a new datatype dcterms:RFC5646, and a declaration that the property foaf:maker is equivalent to dcterms:creator." From DCMI News.

Friday, October 08, 2010

Taxonomy Evaluation

Monte Carlo Study of Taxonomy Evaluation by Alexander Ulanov, Georgy Shevlyakov, Nikolay Lyubomishchenko, Pankaj Mehra, and Vladimir Polutin might be of interest.
Ontologies are increasingly used in various fields such as knowledge management, information extraction, and the semantic web. However, it is useful to know the quality of a particular ontology before deployment, especially in the case when there are numbers of similar ones. Ontology evaluation is the problem of assessing a given ontology from the point of view of a particular criterion of application, typically in order to determine which of several ontologies would better suit a particular purpose. An ontology contains both taxonomic and factual information. Taxonomic information includes information about concepts and their association usually organized into a hierarchical structure. This paper addresses the evaluation of such taxonomies.

Thursday, October 07, 2010

Cataloging Correctly for Kids

Cataloging Correctly for Kids: An Introduction to the Tools has a new edition, the 5th.

Introduction, by Sheila S. Intner

  1. Guidelines for Standardized Cataloging for Children
    Joanna F. Fountain for the Association for Library Collections & Technical Services, Cataloging and Classification Section, Cataloging of Children’s Materials Committee
  2. How Children Search
    Lynne A. Jacobsen
  3. Cataloging Correctly Using AACR2 and MARC 21
    Deborah A. Fritz
  4. Copy Cataloging Correctly
    Deborah A. Fritz
  5. Cataloging Correctly (Someday) Using RDA
    Deborah A. Fritz with Lynnette Fields
  6. Authority Control and Kids’ Cataloging
    Kay E. Lowell
  7. Using LC’s Children’s Subject Headings in Catalogs for Children and Young Adults: Why and How
    Joanna F. Fountain
  8. Sears List of Subject Headings
    Joseph Miller
  9. Dewey Decimal Classification
    Julianne Beall
  10. Cataloging Nonbook Materials
    Sheila S. Intner and Jean Weihs
  11. How the CIP Program Helps Children’s Librarians
    Joanna F. Fountain and Michele Zwierski
  12. Cataloging for Kids in the Academic Library
    Gabriele I. Kupitz
  13. Cataloging for Non-English-Speaking and Preliterate Children
    Pamela J. Newberg
  14. Automating the Children’s Catalog
    Judith Yurczyk
  15. Vendors of Cataloging for Children’s Materials
    Pamela J. Newberg and Jennifer Allen

Monday, October 04, 2010

Children's and Young Adults' Cataloging Program

News from LC.
The Library of Congress, U.S. and Publisher Liaison Division is pleased to announce that as of September 2010 the Annotated Card Program is officially renamed the Children's and Young Adults' Cataloging Program. The Library of Congress initiated the Annotated Card Program in the fall of 1965 and it is one of its oldest programs. Though renamed, the program will continue to provide the same services. The new name, which now contains the word "cataloging," better defines the activity of the program. The inclusion of "children" and "young adult" in the name specifically identifies the audience for the types of materials handled by the program.

The Children's Literature Section, under the U.S. and Publisher Liaison Division, is responsible for the Children's and Young Adults' Cataloging Program. It catalogs the wide range of fiction material published for children and young adults. The records created, which include an objective and succinct summary of the book, are primarily used by publishers, school libraries, and public libraries. The section also develops new children's subject headings, proposes changes to existing headings, monitors the policies and practices of children's cataloging, keeps abreast of trends in children's publishing, and responds to queries related to the cataloging of children's and young adults' material. The Children's Literature Section actively participates in the American Library Association Committee on Cataloging of Children's Materials and solicits its advice and feedback when developing policy for children's cataloging. The section will continue the services of the program under its new name, Children's and Young Adults' Cataloging Program.

Monday, September 27, 2010

Addition to Source Codes for Vocabularies, Rules, and Schemes

The source code listed below has been recently approved. The code will be added to applicable Source Codes for Vocabularies, Rules, and Schemes lists. See the specific source code list for current usage in MARC fields and MODS/MADS elements.

The code should not be used in exchange records until 60 days after the date of this notice to provide implementers time to include newly-defined codes in any validation tables.

Cartographic Data Source Codes
The following source code has been added to the Cartographic Data Source Codes list for usage in appropriate fields and elements.

Addition:
wpntpl
Washington Place Names (Tacoma Public Library)

Thursday, September 23, 2010

Multilingual Dictionary of Cataloguing Terms and Concepts

News from IFLA. The first version of the Multilingual Dictionary of Cataloguing Terms and Concepts (MulDiCat) has been released. It is planned to make a later version available as SKOS. "The Multilingual dictionary of cataloguing terms and concepts contains definitions for many terms and concepts used by the library cataloguing community. Terms and definitions are available in English and a variety of other languages."

Hackfest Seeks Suggestions

Hackfest, a prelude to Access 2010, is looking for suggestions for projects. Have a tech need, maybe here is the place to get something done.
On October 13th, a very special event is happening: the Access Hackfest. A tradition since Access 2002, the Hackfest brings together library practitioners of all kinds to tackle challenges and problems from the mundane to the sublime to the ridiculous. If you can imagine a spectrum with three axes (Axes Hackfest? forget I mentioned that), you might be just the person to pose those challenges to the Hackfest participants!

What we're saying here, folks, is that we need suggestions for Hackfest projects. There are no limitations to these challenges: oh sure, you're bringing together complete strangers and asking them to accomplish in eight hours or less what the rest of the library ecosphere is incapable of or uninterested in solving during the rest of the year - but that's what makes the Hackfest MAGICAL. Example results of previous Access Hackfests are available from http://library.acadiau.ca/access2004/hackfest.html and a podcast by Hackfest alumnus Dan Chudnov that captures the spirit of the Hackfest is available from http://ur1.ca/1qtj4

We plan to keep the Hackfest suggestions secret until the moment of their unveiling on the morning of October 13th - can you feel the anticipation mounting? - so please submit your challenges in one clearly written paragraph or less via the Hackfest submission form at http://access2010.lib.umanitoba.ca/node/45

Wednesday, September 22, 2010

Anachronistic Assumptions About MARC

Interpreting MARC: Where’s the Bibliographic Data? by Jason Thomale appears in the latest Code4Lib Journal.
The MARC data format was created early in the history of digital computers. In this article, the author entertains the notion that viewing MARC from a modern technological perspective leads to interpretive problems such as a confusion of “bibliographic data” with “catalog records.” He explores this idea through examining a specific MARC interpretation task that he undertook early in his career and then revisited nearly four years later. Revising the code that performed the task confronted him with his own misconceptions about MARC that were rooted in his worldview about what he thought “structured data” should be and helped him to place MARC in a more appropriate context.
Lots of other good reading in the issue too.
I've just been reminded that there was a book covering this same topic written back in 1989: MARC for Library Use by Walt Crawford. Thomale does not reference the work. Crawford's book is an excellent read, well written. I need to get my copy autographed some day.

Monday, September 20, 2010

Not All the Loonies Are From Texas

Sometimes it seems that Texas has more than its share of crackpots. A governor who does not accept evolution, a state board of education that removed Thomas Jefferson from a list of important Enlightenment thinkers but then inserted Thomas Aquinas, book burnings, etc. So I'm glad this news comes from Oklahoma: there is a legislative referendum that forbids courts from considering or using Sharia law. Has this been a problem in OK? Have people gone to court to resolve differences and then been stoned or caned? Or is everything so fine in OK that they have time and resources to spend on imaginary problems rather than ones in the real world?

Monday, September 13, 2010

Improving OpenURLs Through Analytics

NISO Teleconference on Improving OpenURLs Through Analytics.
Please mark your calendars to attend today's NISO Open Teleconference call, to be held from 3-4 p.m. (eastern), focusing on the NISO IOTA (Improving OpenURLs Through Analytics) Project. The IOTA Working Group -- originally known as the OpenURL Quality Metrics Working Group -- was begun in early 2010 as a two-year project to look at the feasibility of creating industry-wide, transparent and scalable metrics for evaluating and comparing the quality of OpenURL implementations across content providers. A website hosting the analytics to date (nearly 9 million OpenURLs analyzed), with support information, can be found at http://openurlquality.niso.org/; in addition, the IOTA project can be tracked via Twitter at http://twitter.com/nisoiota and via the blog at http://openurlquality.blogspot.com/. In this call, learn more about the project and the current work from working group member Elizabeth Winter, who will be joined by Adam Chandler, the IOTA group chair, for further discussion.
This Open Teleconference conversation is part of an ongoing series of monthly calls held on the second Monday of each month (barring holidays) as a way to keep the community apprised of NISO's activities. It also provides an opportunity for you to provide feedback to NISO on our activities or make suggestions about new activities we should be engaging in.
The call is free and anyone is welcome to participate in the conversation. To join, simply dial 877-375-2160 and enter the code: 17800743.

Thursday, September 09, 2010

Houston Area

The TLA District 8 Fall Conference is fast approaching. Always a good meeting.
This fall's conference, "Treasures of the Gulf Coast," will take place at Clear Brook High School in Clear Creek ISD on Saturday, October 30, 2010. The high school is located on FM 2351 on the edge of Friendswood, Texas, off I-45. Please mark your calendars now. More information will be forthcoming.

There are many of you out there doing great and brilliant things that you need to share with your peers. The application for program proposals will go out tomorrow. Please think about sharing some of the great things you do to make your library a "treasure" to your patrons and fill out a conference proposal form.

Applications for the vendor's exhibit area will also be up in the next week.
Submit proposals.

Wednesday, September 08, 2010

Addition to Source Codes for Vocabularies, Rules, and Schemes

The source code listed below has been recently approved. The code will be added to applicable Source Codes for Vocabularies, Rules, and Schemes lists. See the specific source code list for current usage in MARC fields and MODS/MADS elements.

The code should not be used in exchange records until 60 days after the date of this notice to provide implementers time to include newly-defined codes in any validation tables.

Cartographic Data Source Codes
The following source code has been added to the Cartographic Data Source Codes list for usage in appropriate fields and elements.

Addition:
nzpnd
New Zealand Place Names Database (New Zealand Geographic Board Nga Pou Taunaha o Aotearoa (NZGB))

Additions to the MARC Code List for Relators

The codes listed below have been recently approved. The codes will be added to the MARC Code List for Relators.

The codes should not be used in exchange records until 60 days after the date of this notice to provide implementers time to include newly-defined codes in any validation tables.

Additions:
mfp
Manufacture place
prp
Production place

Friday, September 03, 2010

CORE: Cost of Resource Exchange Protocol

NISO has announced the publication of its latest Recommended Practice, CORE: Cost of Resource Exchange Protocol (NISO RP-10-2010).
This Recommended Practice defines an XML schema to facilitate the exchange of financial information related to the acquisition of library resources between systems, such as an ILS and an ERMS. CORE identifies a compact yet useful structure for query and delivery of relevant acquisitions data. "Sharing acquisitions information between systems has always been a difficult problem," said Ted Koppel, AGent Verso (ILS) Product Manager, Auto-Graphics, Inc. and co-chair of the CORE Working Group. "The rise of ERM systems made this problem even more acute. I'm glad that we, through the CORE Recommended Practice, have created a mechanism for data sharing, reuse, and delivery." Co-chair Ed Riding, Catalog Program Manager at the LDS Church History Library, added, "The CORE Recommended Practice provides a solution for libraries attempting to avoid duplicate entry and for systems developers intent on not reinventing the wheel. I look forward to the development of systems that can easily pull cost information from one another and believe CORE can help facilitate that."
CORE was originally intended for publication as a NISO standard. However, following a draft period of trial use that ended March 2010, the CORE Working Group and NISO's Business Information Topic Committee voted to approve the document as a Recommended Practice. This decision was in part based on the lack of uptake during the trial period as a result of recent economic conditions, and was motivated by the high interest in having CORE available for both current and future development as demand for the exchange of cost information increases. Making the CORE protocol available as a Recommended Practice allows ILS and ERM vendors, subscription agents, open-source providers, and other system developers to now implement the XML framework for exchanging cost information between systems. "I am pleased that CORE is now available for systems developers to begin using in order to facilitate the exchange of cost information between systems in a library environment," commented Todd Carpenter, NISO's Managing Director.

NASATweetup

I'm going to see a shuttle launch as part of the NASATweetup. So excited. This map shows just how awesome it will be. It will be hard to think about RDA issues for a while.

Tuesday, August 31, 2010

Cataloging Matters

I should have already mentioned this, but better late than never. Cataloging Matters is a podcast by Jim Weinheimer (who is already well known and respected from his participation in AUTOCAT and his weblog First Thus). He has already released the third episode.

Monday, August 30, 2010

Open Publication Distribution System (OPDS) Catalog Format for Digital Content

Version 1.0 of the Open Publication Distribution System (OPDS) Catalog format for digital content has been released.
The open ebook community and the Internet Archive are pleased to announce the release of the first production version of the Open Publication Distribution System (OPDS) Catalog format for digital content. OPDS Catalogs are an open standard designed to enable the discovery of digital content from any location, on any device, and for any application.

The specification is available at: http://opds-spec.org/specs/opds-catalog-1-0.

Based on the widely implemented Atom Syndication Format, OPDS Catalogs have been developed since 2009 by a group of ebook developers, publishers, librarians, and booksellers interested in providing a lightweight, simple, and easy to use format for developing catalogs of digital books, magazines, and other content.

OPDS Catalogs are the first component of the Internet Archive’s BookServer Project, a framework supporting open standards for discovering, lending, and vending books and other digital content on the web.

Additions to Source Codes for Vocabularies, Rules, and Schemes

News from LC.
The source codes listed below have been recently approved. The codes will be added to applicable Source Codes for Vocabularies, Rules, and Schemes lists. See the specific source code list for current usage in MARC fields and MODS/MADS elements.

The codes should not be used in exchange records until 60 days after the date of this notice to provide implementers time to include newly-defined codes in any validation tables.

Description Convention Source Codes
The following source code has been added to the Description Convention Source Codes list for usage in appropriate fields and elements.

Addition:
ncr
Nippon cataloging rules (Tokyo: National Diet Library)

Cartographic Data Source Codes
The following source code has been added to the Cartographic Data Source Codes list for usage in appropriate fields and elements.

Addition:
erpn
Scott, Andrew. The encyclopedia of raincoast place names: a complete reference to coastal British Columbia (Madeira Park, BC: Harbour Publishing)

Saturday, August 28, 2010

Library of Congress Changed Subject Heading Subdivisions

On September 1 there will be a new edition of the Library of Congress Changed Subject Heading Subdivisions.
Each August I review the previous years' changes from Library of Congress's "Weekly List" of new headings and cross-check them with their annual "Free-Floating Subdivisions". Questionable entries are referred to the Library of Congress Cataloging Distribution Service for resolution. Changes are then added to my master file, which is then totally cumulated. Official publication date of each year's new edition is September 1.
Joyce T. Ogden, the author, sent me a very nice note asking that I announce the newest version of her work.

Wednesday, August 25, 2010

British Library Catalog

The British Library has made its catalog freely available for research.
As part of its work to open its metadata to wider use beyond the traditional library community, the British Library is making copies of its main catalogue and British National Bibliography datasets available for research purposes. Files are initially being made available in XML and structured in an RDF/DC format (see sample). Files are distributed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
The British Library is currently investigating options for structuring its catalogue information as linked data and is collaborating with a number of organisations in examining the issues associated with making bibliographic metadata available in this way.

Friday, August 20, 2010

Dept. or Department?

LC is seeking comments on changing Dept. to Department.
The Library of Congress proposes to adopt the AACR2 provision (which is also incorporated into RDA: Resource Description and Access) of not abbreviating "department" in headings unless it is abbreviated by the body on the resource from which the name has been taken. OCLC has agreed to change the approximately 48,000 1XX fields in name authority records, and the Library of Congress would change its approximately 200,000 bibliographic records and re-distribute them, beginning no earlier than March 2, 2011. The former 1XX heading would be retained as a 4XX field in the authority record (with $w nne), and existing references would be adjusted as necessary (e.g., for higher bodies with "Dept." in their names). Fields 110, 130, 151, 410, 430, 451, 510, 530, 551, and 781 are all candidates for change, except 4XX fields where $w is present. A very small number of changes may be erroneous because the resource actually used an abbreviated form. Such conversion errors may be corrected by NACO participants as encountered after the batch process has been run. The Library of Congress is seeking comments on this proposal by Oct. 1, 2010. Comments should be sent to policy@loc.gov.
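
Out of curiosity, the batch job described is straightforward to sketch with pymarc. This is an illustration of the announced logic (expand "Dept.", keep the old heading as a 4XX with $w nne), not LC's or OCLC's actual conversion program, and it assumes the classic pymarc flat-subfield API (newer releases use Subfield objects):

from pymarc import MARCReader, MARCWriter, Field

with open("lcnaf.mrc", "rb") as inp, open("lcnaf-fixed.mrc", "wb") as out:
    writer = MARCWriter(out)
    for record in MARCReader(inp):
        field = record["110"]  # corporate name heading; 151 etc. analogous
        if field is not None and "Dept." in str(field):
            old = list(field.subfields)  # flat [code, value, code, value, ...]
            # keep the former heading as a see-from reference
            record.add_field(Field(tag="410", indicators=field.indicators,
                                   subfields=["w", "nne"] + old))
            # expand the abbreviation in the authorized form
            field.subfields = [s.replace("Dept.", "Department") for s in old]
        writer.write(record)
    writer.close()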

Friday, August 13, 2010

OAI @ OCLC

Some good news from OCLC about their OAI database: now you can submit your own records for harvesting, using the WorldCat Digital Collection Gateway.
Repository managers from libraries, museums, archives and other cultural heritage and research institutions can now contribute metadata records for digital materials to WorldCat using the new, enhanced WorldCat Digital Collection Gateway, increasing visibility and accessibility of special collections, institutional repositories, and other unique digital content to Web searchers worldwide....

Designed for self-service use, the WorldCat Digital Collection Gateway is a Web-based tool that enables repository managers to customize how their metadata displays in WorldCat.org and determine their metadata harvesting schedule—monthly, quarterly, semi-annually or annually. Additionally, it applies their institution's "holdings symbol" to their records, thereby highlighting the unique information resources their institution is contributing to WorldCat.

Monday, August 09, 2010

Question Answering on the Web

Simplifying Question Answering on the Web by Raghu Anantharangachar and Srinivasan Ramani has been published as HPL-2010-92.
The growth of World Wide Web (WWW) has created a huge repository of information. However, this information repository can only be used by human beings. A person searching for a specific answer to a question can use the web to gather information from various web sites and other information repositories, and manually find out the answer to his question. While doing this, the user also has to understand the contents of each web site or repository, and interpret the data appropriately, as they might be in different formats and representations. Thirdly, the data on the web might be coming from various sources, and security and authenticity is an issue. The user needs to manually check the validity of the data sources, and use the data appropriately. Lastly, to derive answers to questions which need some factual data and some logical reasoning, the user needs to perform the reasoning himself. All these problems make it difficult for the naïve user to easily obtain answers to his questions and use them for his purpose. We describe a solution to this problem of question answering in the yellow pages or e-commerce context. While we believe that the technology can be applied to other domains, we do not aim at a universal question answering system. We have explored a solution using semantic web techniques which support the creation of a huge machine processable information repository, through a Web 2.0 process, with the users contributing the effort and editors ensuring quality. The system has a simple interface through which vendors can create structured data in the form. Using ontologies standardizes the terminology and eliminates errors in interpretation and usage of terms. We have explored the use of simple and shallow reasoning where possible, to give inferred information in addition to explicitly stored information. Lastly, we describe a simple and intuitive user interface that is interactive. This tool enables the user to express his question in an unambiguous way. A very similar tool enables information contributors to enter information.

MARBI Meeting Minutes

The 2010 Annual MARBI Meeting minutes are now available online.

Saturday, August 07, 2010

App Inventor for Android

Have an idea for an Android app? App Inventor for Android can help make it reality.

Friday, July 30, 2010

PURL Community Site Moves

I've always had a soft spot for PURLs. They never caught on like I thought they should have. However, they are still around and you can keep up with community news at Google Code now.
The PURL Open Source Software community site at purlz.org has been migrated from Zepheira's servers to Google Code. Accordingly, the mailing lists hosted at purlz.org are also migrating to a Google Group.

Please read this message carefully to avoid missing mail!
  1. The purlz.org DNS domain will redirect to the Google Code site, but you may want the direct URL in case of any DNS migration issues. It is: http://sites.google.com/site/persistenturls/
  2. The direct URL for the new Google Group is: http://groups.google.com/group/persistenturls
-> Please subscribe! All new messages will go there. No new messages will be sent to the purlz.org mailing lists!

Thursday, July 29, 2010

TechKNOW

The new issue of TechKNOW is now available. July 2010, Volume 16, Issue 2
  • AutoIt for Technical Services Workflow / Becky Yoose, Miami University Libraries
  • Coordinator's Corner / Fred Gaieck, Ohio Reformatory for Women
  • Book Review: Introducing RDA: A Guide to the Basics / Chris Oliver
  • BarTender Software Allows Dayton Metro to Eliminate Stickers, Streamline Workflow / Andrea Christman, Dayton Metro Library
  • US RDA Testing Period
  • Book Review: Acquisitions in the New Information Universe: Core Competencies and Ethical Practices / Jesse Holden
  • LCSH Headings for Cooking and Cookbooks Have Been Changed

OCLC a Monopoly?

In a move that could have far-reaching implications for competition in the library software and technology services industry, SkyRiver Technology Solutions, LLC has filed suit in federal court in San Francisco against OCLC Online Computer Library Center, Inc. The suit alleges that OCLC, a purported non-profit with a membership of 72,000 libraries worldwide, is unlawfully monopolizing the markets for cataloging services, interlibrary lending, and bibliographic data, and attempting to monopolize the market for integrated library systems, by anticompetitive and exclusionary practices.
Seen on the Library Technology Guides site.

New Greek Romanization Table

News from ALA.
The ALA Committee on Cataloging: Description and Access (CC:DA) has approved the consolidated Greek romanization table of April 2010. This revised table differs from the existing table only in the inclusion in the consolidated table of two archaic letters and additional examples. The consolidated table also does not include Coptic for which a separate table will be developed. The approved consolidated table has replaced the existing table on the Cataloging and Acquisition Web site (http://www.loc.gov/catdir/cpso/roman.html) and will be published in the Summer 2010 issue of Cataloging Service Bulletin. The Policy and Standards Division wishes to express its gratitude to those who commented on the draft table.

Wednesday, July 21, 2010

Genre/Form Headings for Cartographic Resources

LC continues to create cartographic genre/form terms.
The Policy and Standards Division (PSD) of the Library of Congress continues to develop genre/form headings on a discipline-by-discipline basis, and will implement genre/form headings for cartographic resources on September 1, 2010....
On September 1, 2010 the Library will implement cartographic genre/form headings and the revised form subdivisions in new cataloging. PSD will work to update existing bibliographic records to change the form subdivisions and add genre/form headings, and expects to complete the process within a year.

Friday, July 16, 2010

Google Buys Metaweb

Google has bought Metaweb, the folks behind Freebase. This could be an important step on the way to a more semantic web.
In addition to our ideas for search, we’re also excited about the possibilities for Freebase, Metaweb’s free and open database of over 12 million things, including movies, books, TV shows, celebrities, locations, companies and more. Google and Metaweb plan to maintain Freebase as a free and open database for the world. Better yet, we plan to contribute to and further develop Freebase and would be delighted if other web companies use and contribute to the data. We believe that by improving Freebase, it will be a tremendous resource to make the web richer for everyone. And to the extent the web becomes a better place, this is good for webmasters and good for users.

Thursday, July 15, 2010

Cataloging Atlases

I'm wondering, now that Atlases is a genre term (as it should be), what will the subject headings for world atlases be? Earth $v Atlases, maybe?

I also just noticed the difference between topographic maps and physical maps. The latter do not include any vegetation or man-made structures. Many maps with topographic in the title should get the genre term Physical maps.

Romanization Considered

The final report of the ALCTS Non-English Access Working Group on Romanization has been released.
The Working Group was established by the ALCTS Non-English Access Steering Committee in June 2009 to examine the current use of romanized data in bibliographic and authority records and to recommend whether romanization is still needed in bibliographic records. The Working Group would like to thank all who submitted comments on our draft report released in November. This input from the wider cataloging community was invaluable, and many of the comments received have been incorporated directly into the final report.
The committee considered two options: A) transliteration and vernacular in records, and B) vernacular only in records.
A majority of the Working Group believes that the factors discussed in this report are significant enough to make a general shift to Model B in bibliographic records premature at this point. Some members of the Working Group feel that having romanized access points in records provides enough added value that their use should be continued indefinitely. Others believe that in an environment of shrinking staffs and production pressures we should anticipate future developments in making our decision and recommend a move to Model B sooner rather than later. However, most believe that although a gradual move towards the use of Model B for current cataloging is probable, we should continue use of Model A for now as we prepare for a potential transition.
The committee had other recommendations as well.

NISO News

The latest issue of the NISO Newsline supplement Working Group Connection is now available.

VuFind 1.0 Released

Today, VuFind 1.0 has been released.
In addition to improved stability, the new release includes several features missing from the previous release candidate:
  • Flexible support for non-MARC metadata formats
  • A mobile interface
  • Dewey Decimal support
  • Integration with Serials Solutions' Summon
  • Dynamic "recommendations modules" to complement search results with relevant tips
Here is the description of VuFind from their home page.
VuFind is a library resource portal designed and developed for libraries by libraries. The goal of VuFind is to enable your users to search and browse through all of your library's resources by replacing the traditional OPAC to include:
  • Catalog Records
  • Locally Cached Journals
  • Digital Library Items
  • Institutional Repository
  • Institutional Bibliography
  • Other Library Collections and Resources
VuFind is completely modular so you can implement just the basic system, or all of the components. And since it's open source, you can modify the modules to best fit your need or you can add new modules to extend your resource offerings.
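If you want to poke at a VuFind installation from a script, its search screens are driven by plain URL parameters. A minimal sketch; the base URL is a made-up placeholder, and the lookfor/type parameter names are my assumption from demo installations rather than official documentation.

    import urllib.parse

    # Hypothetical VuFind install; 'lookfor' and 'type' are assumed
    # parameter names, not taken from official documentation.
    base = "http://vufind.example.edu/Search/Results"
    params = {"lookfor": "metadata interoperability", "type": "AllFields"}
    print(base + "?" + urllib.parse.urlencode(params))
    # http://vufind.example.edu/Search/Results?lookfor=metadata+interoperability&type=AllFields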

Wednesday, July 14, 2010

Changing LC Subject Headings

A post on AUTOCAT alerted me to this: suggestions for changes to LC subject headings can be made from the Authorities & Vocabularies service. There is a "Suggest Terminology" tab at the top of the page. How long have I been missing that?
The LC Authorities and Vocabularies service welcomes any suggestions you might have about terminology used for a given heading or concept.

Would you like to suggest a change to this heading?
No need to be a SACO member, or even a librarian.

Monday, July 12, 2010

Institutional Identifier

The NISO Institutional Identifier (I2) Working Group (WG) has released a midterm report and is looking for comments.
The NISO I2 WG is soliciting feedback on the report and guidance for the next steps in developing this standard from individuals and groups involved in digital information transactions. Stakeholders include publishers/distributors, libraries, archives, museums, licensing agencies, standards bodies, and service providers, such as library workflow management system vendors and copyright clearance agencies. Anyone involved at any level in the distribution, licensing, sharing or management of information is invited to participate.

Please read the information below and participate in the evaluation of our midterm work by reading the midterm release document and answering a few questions about each development area. You are the stakeholders for this information standard. We must work to ensure that it meets your needs, so your input is very valuable and important to us.

Thursday, July 08, 2010

Accurate Metadata Sells Books

Accurate Metadata Sells Books by Calvin Reid appears in a recent Publishers Weekly.
Now, Dawson said, accurate metadata has become a marketing tool for publishers, a shopping guide for consumers, and an absolute necessity for distributors and retailers. The growth of the importance of metadata, Dawson said, led to the creation of ONIX, or Online Information Exchange, an XML-based standardized format for transmitting information electronically. (XML, or extensible markup language, is a digital format that allows data to be easily reused in other forms.) XML software is said to be easy to use, inexpensive, and its tags or descriptions are easy for people, as well as machines, to read.
Metadata for ebooks is also important: Calibre, a popular ebook tool, uses metadata from LibraryThing. Libraries are part of this picture, even if not directly. OCLC has a service to enrich ONIX metadata with WorldCat information and authorities, and LibraryThing's metadata can come from libraries.
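To make the ONIX point concrete, here is a toy sketch that pulls a title and ISBN out of an ONIX-style fragment. The element names follow ONIX 2.1 as best I recall, and the fragment is nowhere near a complete or valid record; it only shows why the XML approach makes the data easy to reuse.

    import xml.etree.ElementTree as ET

    # Toy ONIX-style fragment; element names are illustrative,
    # not a complete or valid ONIX 2.1 record.
    onix = """
    <Product>
      <RecordReference>example-0001</RecordReference>
      <ProductIdentifier>
        <ProductIDType>15</ProductIDType>
        <IDValue>9780000000002</IDValue>
      </ProductIdentifier>
      <Title>
        <TitleType>01</TitleType>
        <TitleText>An Example Book</TitleText>
      </Title>
    </Product>
    """

    product = ET.fromstring(onix)
    print(product.findtext("Title/TitleText"),
          product.findtext("ProductIdentifier/IDValue"))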

Wednesday, July 07, 2010

OLAC Conference

Macon, Georgia will host the next OnLine AudioVisual Catalogers (OLAC) conference, from Friday, October 15, through Sunday, October 17, 2010, at the new Macon Marriott City Center. Registration will be available through September 20, and afterward if space allows.

Dr. Robert Ellett and Mac Elrod will be among our speakers.

The standard registration fee is $150.00 ($100.00 for LIS students), which includes three continental breakfasts and two lunches.

Macon is in central Georgia, approximately 75 miles south of Atlanta; it is easily accessible by shuttle bus from Hartsfield-Jackson International Airport.

A preconference on NACO funnel training will be held on Thursday, October 14; the registration deadline is July 15.

The deadline for poster session applications has been extended to July 15.

To register, or for more information, visit the official conference website: http://www.olacinc.org/drupal/conference/2010/index.html

or the conference blog:

http://macon2010.wordpress.com/


Hotel reservations should be made directly through Marriott: http://www.marriott.com/hotels/travel/mcnfs-macon-marriott-city-center/. Our group code is OLAOLAA.

In other OLAC news, the June 2010 OLAC Newsletter is now available. Always good reading.

Friday, July 02, 2010

Creating Linked Data

The Nodalities blog has the post The Data Publishing Three-Step. I agree completely: our cataloging already meets common standards, and the resources for linking are becoming more common. For example, Ross Singer made this announcement yesterday.
I just wanted to let people know I've made the MARC codes for forms of musical compositions (http://www.loc.gov/standards/valuelist/marcmuscomp.html) available as http://purl.org/ontology/mo/Genres.

http://purl.org/NET/marccodes/muscomp/sy#genre

They follow the same naming convention as they would in the MARC 008 or 047, so it's easy to map (that is, no lookup needed) from your MARC data:

http://purl.org/NET/marccodes/muscomp/sy#genre

etc.

The RDF is available as well:
http://purl.org/NET/marccodes/muscomp/sy.rdf
Now all that's left is the last step: just make it available.
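Since the URIs follow the MARC code values directly, the mapping really is trivial; a couple of lines, assuming the two-character composition codes from 008/18-19 (or each code in an 047):

    def muscomp_uri(code):
        """Map a MARC form-of-musical-composition code (008/18-19
        or 047) to its marccodes URI -- no lookup table needed."""
        return "http://purl.org/NET/marccodes/muscomp/%s#genre" % code

    print(muscomp_uri("sy"))
    # http://purl.org/NET/marccodes/muscomp/sy#genre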

One thing the Nodalities blog post neglects to mention is using tools that make it simple. Drupal now makes it very easy to create RDF. The lead article in Nodalities magazine, Drupal: Semantic Web for the Masses, covers that very nicely.

Connecting People and Their Work

BibApp is a tool to connect people and their research interests at an institution.
BibApp is a campus research gateway and expert finder. It matches researchers on your campus or research center with their publication data and mines that data to see collaborations, create visualizations of areas of research, and find experts in research areas. With BibApp, it is easy to see what publications can be placed on the Web for greater access and impact. BibApp can push those publications directly into an institutional repository.
BibApp allows researchers and research groups to promote research, find collaborators on campus, and make research more accessible. It also allows libraries to better understand research happening in local departments, facilitate conversations about author rights with researchers, and ease the population of the institutional repository. Finally, BibApp allows campus administrators to achieve a clearer picture of collaboration and scholarly publishing trends on campus.
BibApp is the result of a collaboration between the University of Wisconsin-Madison and the University of Illinois at Urbana-Champaign. The Illinois Informatics Institute at the University of Illinois (https://www.informatics.illinois.edu/icubed/) provided generous funding for the development of the 1.0 release of BibApp.
BibApp is a Ruby on Rails application, coupled with the Solr/Lucene search engine, and either MySQL or PostgreSQL as its datastore. It uses open standards and protocols such as OpenURL and SWORD and automatically pulls in data from third party sources such as Google Books and the Sherpa/Romeo publisher policy database. BibApp imports publication data in RIS, MEDLINE and Refworks XML bibliography formats and exports data in several citation formats (APA, Chicago, IEEE, MLA, more) via CiteProc. BibApp also provides a web services API for delivering data as XML, YML, JSON, and RDF. BibApp is released under a University of Illinois/NCSA Open Source License (http://www.opensource.org/licenses/UoI-NCSA.php).
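That web services API means the data should be scriptable, too. A rough sketch; the /works.json path and the response shape are pure guesses on my part, since I haven't dug into the API documentation.

    import json
    import urllib.request

    # Hypothetical endpoint and field names: BibApp advertises XML, YML,
    # JSON, and RDF output, but the actual paths are not verified here.
    url = "http://bibapp.example.edu/works.json"
    works = json.load(urllib.request.urlopen(url))
    for work in works:
        print(work.get("title"))
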
Live installations of BibApp can be found at:

Wednesday, June 30, 2010

New York Times Publishing More Subject Headings

Research Buzz notes that the New York Times has published more subject headings.
This release includes 498 of the most commonly-used subject headers, which, like the names, are mapped to DBPedia and/or Freebase. The NYT hopes to eventually release all 3,500 of its subject descriptors.
Update (Thursday, July 1, 2010): Gary Price over at Resource Shelf also covered this and offers some more details.
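Because the headings are published as linked data, following the mappings is scriptable. A sketch with rdflib; the document URL is a made-up placeholder, so substitute a real identifier from data.nytimes.com.

    from rdflib import Graph, Namespace

    SKOS = Namespace("http://www.w3.org/2004/02/skos/core#")
    OWL = Namespace("http://www.w3.org/2002/07/owl#")

    g = Graph()
    # Placeholder URL -- use a real identifier from data.nytimes.com.
    g.parse("http://data.nytimes.com/EXAMPLE.rdf")

    for concept, label in g.subject_objects(SKOS.prefLabel):
        print("Heading:", label)
        for same in g.objects(concept, OWL.sameAs):
            print("  same as:", same)  # DBpedia and/or Freebase URIs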

Changes to Value Lists for Codes and Controlled Vocabularies

The code listed below has been recently approved. The code will be added to applicable Value Lists for Codes and Controlled Vocabularies lists. See the specific value list for current usage in MARC fields and MODS/MADS elements.

The code should not be used in exchange records until 60 days after the date of this notice to provide implementers time to include newly-defined codes in any validation tables.

MARC Authentication Action Code
The following code has been added to the MARC Authentication Action Code List for usage in appropriate fields and elements.

Addition:
croatica
Croatian National Bibliography
The code croatica signifies that the descriptive elements have been edited and all headings were verified against the relevant authority file to prepare the record for inclusion in the Croatian National Bibliography.

Additions to Source Codes for Vocabularies, Rules, and Schemes

The source codes listed below have been recently approved. The codes will be added to applicable Source Codes for Vocabularies, Rules, and Schemes lists. See the specific source code list for current usage in MARC fields and MODS/MADS elements.

The codes should not be used in exchange records until 60 days after the date of this notice to provide implementers time to include newly-defined codes in any validation tables.

Standard Identifier Source Codes
The following source code has been added to the Standard Identifier Source Codes list for usage in appropriate fields and elements.

Addition:
itar
ITAR (Importtjeneste og autoritetsregistre)

Subject Heading and Term Source Codes
The following source code has been added to the Subject Heading and Term Source Codes list for usage in appropriate fields and elements.

Addition:
gccst
Government of Canada core subject thesaurus (Gatineau [Quebec]: Library and Archives Canada)
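In bibliographic records, a heading from a vocabulary like this is coded with second indicator 7 and the source code in subfield $2. A quick sketch using pymarc's classic flat-subfield API (pre-5.x); the heading text itself is invented for illustration.

    from pymarc import Field

    # 650 second indicator 7 = source specified in $2.
    # The heading text is invented; only the coding pattern matters.
    field = Field(
        tag="650",
        indicators=[" ", "7"],
        subfields=["a", "Information management", "2", "gccst"],
    )
    print(field)  # =650  \7$aInformation management$2gccst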

Thursday, June 24, 2010

WorldCat and Twitter

Query WorldCat from Twitter.
#Ask4Stuff is a new, Twitter-based service that returns a WorldCat search when you send a tweet with the tag #Ask4Stuff. So if you send the following tweet:

#Ask4Stuff lake erie shipwreck

You'll get a tweet back that says something like:

@YOURNAME A few things about lake erie shipwreck in #Ask4Stuff, check out http://is.gd/cY7gi

Wednesday, June 23, 2010

Procedural Guidelines for Proposed New or Revised Romanization Tables

The Library of Congress, Policy and Standards Division has developed Procedural Guidelines for Proposed New or Revised Romanization Tables. They are looking for comments.
These guidelines apply to the creation of new tables and the revision of existing tables.

Principle/Goals:
  • The ALA/LC Romanization Tables should be transliteration schemes rather than replicating pronunciation. Pronunciation is variable around the world. Another goal of this principle is to enable machine-transliteration whenever possible and preferably reversible transliteration.
  • The ALA/LC Romanization Tables should be in line with internationally accepted standards and/or standards officially sanctioned by the home country when possible.
Guidelines:
  1. Examine any existing national and international standards before beginning the process of creating a new or revising an existing romanization table.
  2. Mapping characters to the Latin script
    1. Take the equivalent characters used from the MARC Basic Latin script repertoire as much as possible.
    2. Choose a Latin script equivalent for a non-Latin letter, not necessarily based on pronunciation of the letter, but so as to maximize clarity and minimize confusion with the transliteration of other letters. The resulting Latin script equivalents should allow for the reversal of romanization as systematically as possible, without the application of special algorithms or contextual tests.
    3. Avoid special Latin script alphabetic characters as they are not always widely supported in display and printing.
  3. Modifiers
    1. Use modifier characters (diacritical marks) in conjunction with the basic Latin script characters, but take care to avoid modifier characters that are not widely supported (e.g., ligature marks), or whose positioning over or under a Latin script base letter may interfere with the printing and/or display of that letter.
    2. Above. It is recommended that the acute (´), grave (`) and dieresis (¨) be preferred to other modifying characters over base letters. Use the tilde (˜), macron (¯), circumflex (ˆ), and dot above (˙) characters if needed.
    3. Below. Avoid modifiers below characters, since they often interfere with portions of Latin letters that descend and when underlining is present. If a modifier below is desired, prefer the dot below (.) or the cedilla (¸).
  4. Marks used as guides to pronunciation should not be rendered as Latin alphabet characters, but rather as diacritics or punctuation marks to facilitate reversibility.
  5. Non-alphabetic languages
    1. In dealing with non-alphabetic scripts, e.g., syllabic scripts, the above guidelines should be applied to the extent that they can.
    2. Any provisions for aggregation should be based on such factors as international agreement, convenience of use, promotion of consistent application, and ease of computer access.
  6. Other factors. The impact of file maintenance on legacy records should be considered in revising tables in relation to the ease or difficulty of accomplishing it, the benefits provided by the revisions, and the obligations of and impact on various organizations and institutions.
Process:
  1. Forwarding proposed new or revised Romanization tables. Submit all draft tables (new and revised) to the Policy and Standards Division, Library of Congress, preferably as an attachment to an electronic mail message sent to policy@loc.gov. Submit all draft proposals as complete tables in an electronic format, e.g., Microsoft Word, so that the resultant file may be updated during the review process. Submit revisions to existing tables as part of a complete table for the language. If only a part is being revised, clearly note the proposed revisions either 1) within the table itself or 2) as a separate document indicating what the proposed revisions are and the justification for them. Provide pertinent justification, e.g., experts consulted, sources consulted, for any proposed new or revised table.
  2. Library of Congress review. The Policy and Standards Division and other Library staff with knowledge of the language or script will review draft tables (both new and revised).
  3. Other review. After reaching consensus within the Library of Congress, the Library will seek comments from the community at large, including the appropriate committee within the American Library Association. This is done in several ways:
    • the draft will be posted on the Cataloging and Acquisitions Web site (http://www.loc.gov/aba/) with a request for comments usually within 90 days of the posting;
    • the draft table will be published in Cataloging Service Bulletin with a request for comments within 90 days;
    • the draft will be sent to identified stakeholders with the same 90-day request for comments; and
    • the availability of the draft will be noted in a posting to various electronic lists according to the language. See list below.
  4. Receipt of comments. The requests for comments specify that such comments are to be sent to policy@loc.gov by a specified date. The Policy and Standards Division and other Library of Congress staff will evaluate the comments as they are received. Once the Library reaches consensus, the division will revise the draft table as appropriate. The Policy and Standards Division will acknowledge the receipt of comments.
  5. Approval process. The Library of Congress will forward draft tables that have been completed to the chair of the appropriate committee within the American Library Association. Draft tables for languages of Africa and Asia go to the chair of the Committee on Cataloging: African and Asian Materials (CC:AAM). Drafts for languages in other parts of the world go to the chair of the Committee on Cataloging: Description and Access (CC:DA). If the appropriate ALA committee has disagreements with the submitted draft table, it may be necessary to return to one of the steps above.
  6. The Library of Congress will issue status reports to the stakeholders and electronic lists noted above.
  7. Approved tables. Once the appropriate committee has approved the draft table, the Policy and Standards Division will make any changes to the table as the result of this process, post the approved table to the Cataloging and Acquisitions/ALA-LC Romanization Tables Web page (http://www.loc.gov/catdir/cpso/roman.html), and publish the approved table in Cataloging Service Bulletin.
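The reversibility goal in the guidelines is easy to illustrate: if the table maps characters one-to-one, romanization can be undone mechanically, with no special algorithms or contextual tests. A toy sketch (the three Cyrillic letters and their Latin equivalents happen to match ALA-LC Russian, but the table is deliberately tiny):

    # A strictly one-to-one table can be inverted mechanically.
    TO_LATIN = {"\u0431": "b", "\u0433": "g", "\u0434": "d"}  # б г д
    FROM_LATIN = {latin: orig for orig, latin in TO_LATIN.items()}

    def romanize(text):
        return "".join(TO_LATIN.get(ch, ch) for ch in text)

    def unromanize(text):
        return "".join(FROM_LATIN.get(ch, ch) for ch in text)

    word = "\u0431\u0433\u0434"
    assert unromanize(romanize(word)) == word  # round-trips exactly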

Data Portability Policy Statements

The DataPortability Project has announced a data portability policy statement and released a tool to create a statement for your organization.
The heart of the Portability Policy is a set of plain language questions that we hope will become a common vocabulary between software users and providers. Through these questions, a provider can disclose what they do or do not do to enable data portability. Eventually, we intend to release a machine-readable version of these policies.

Data portability applies to a much broader set of software products than just social networks. The promise of data portability is that everyone benefits when work can be repurposed – by yourself with other tools or by other people. Any tool that lets people enter or organize their digital “stuff” should let them control how that stuff can be reused. Text documents, music play lists, pictures, and research data are just as valuable to share as “friend lists” and address books.

We do not promote any particular technology or approach; there are no right or wrong answers. While a social network might want to illustrate the myriad ways that they connect people and allow for data portability, a service focused on deeply personal medical or financial issues might want to highlight the fact that they allow no portability at all. Our intent is simply to increase communication and ensure that both parties — visitors and the service itself — each know what they should expect from the other.
Whether or not your site provides APIs, this is a nice, easy way to let people know what to expect.
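And once a machine-readable version exists, checking a site's policy could even be automated. A purely speculative sketch: no schema has been published, so every field name below is invented.

    import json

    # Entirely invented structure -- the project has not yet published a
    # machine-readable schema; this only illustrates answering their
    # plain-language questions in a parseable form.
    policy = {
        "service": "example.org",
        "export_formats": ["csv", "json"],
        "api_available": True,
        "deletes_data_on_request": True,
    }
    print(json.dumps(policy, indent=2))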

Tuesday, June 22, 2010

Subject Headings for Cooking and Cookbooks

This statement was issued by LC concerning the change from "cookery" to "cooking." For some libraries this is going to be a major change.
The Library of Congress issued the list of the new and revised subject headings for materials on cooking and cookbooks on June 22, 2010 (http://www.loc.gov/aba/cataloging/subject/weeklylists/). These new and revised headings will be distributed beginning with the CDS distribution file vol. 25, issue 24 dated June 14 and will continue until completed. The revision of Subject Headings Manual (SHM) H 1475, "Cooking and Cookbooks," is forthcoming and will be posted as a PDF file on the public Cataloging and Acquisitions Web site (http://www.loc.gov/aba/). It will also be included in SHM Update Number 2 of 2010, which will be distributed in the fall.

The word "cookery" has been changed to "cooking" in approximately 800 subject headings (e.g., Cooking, Cooking (Butter), Cooking for the sick, Aztec cooking, Cooking, American--Southwestern style).

A topical subject heading for Cookbooks and a genre/form heading for Cookbooks have also been approved, and are available for use.

Most of the Children's Subject Headings in the form Cookery--[Ingredient] have been cancelled in favor of the adult heading Cooking ([Ingredient]). However, three of those headings have been retained and revised: Cooking (Buffets), Cooking (Garnishes), and Cooking (Natural foods).

In cases where reference structure for a heading has been changed but the heading itself has not, the heading was omitted from the list. For example, the headings Brunches, Comfort food, and Tortillas had the broader term Cookery, which has been changed to Cooking. None of these three headings appear on the Weekly List. The references on approximately 500 headings have been changed.

Every effort will be made to change the old form of the subject headings in bibliographic records to the new form expeditiously during the next few months.
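For libraries making the change locally, a batch pass over bibliographic records is the obvious approach. A rough sketch with pymarc's classic flat-subfield API (pre-5.x); note that a blind string swap will miss the exceptions (such as the retained Children's Subject Headings), so a real migration should validate against the Weekly Lists.

    from pymarc import MARCReader

    # Rough sketch only: validate results against LC's Weekly Lists.
    with open("bibs.mrc", "rb") as in_f, open("bibs-fixed.mrc", "wb") as out_f:
        for record in MARCReader(in_f):
            for field in record.get_fields("650"):
                # classic pymarc: subfields is a flat [code, value, ...]
                # list, so heading text sits at the odd indexes
                field.subfields = [
                    s.replace("Cookery", "Cooking") if i % 2 else s
                    for i, s in enumerate(field.subfields)
                ]
            out_f.write(record.as_marc())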

Genre/Form Headings for Moving Images

The OLAC LC Genre/Form Headings for Moving Images Best Practices Task Force has released a draft of the guidelines that it has been developing for public comment.
The guidelines are intended to supplement and be compliant with existing practices as well as provide examples of usage. In a few cases, notably the "nationality/language" genre section, we offer alternative options for access (with appropriate local coding) that we believe some libraries might find helpful. We also acknowledge that the LC Moving Image Genre/Form Heading world has been shifting under our feet as we have worked on these guidelines, and that a number of other groups are working on similar guidelines in other areas. Thus, these guidelines must be regarded as somewhat in flux. Most notable is LC's recent decision to separate out (and re-code in MARC) the genre/form terms from LCSH. Our examples as they currently stand do NOT reflect the new MARC coding of 655 7 $2. They will be edited to do so during the draft revision, barring any reversals from LC. We are opening the draft for comments until July 23rd.

Standards Poster

What's the term for a lot of standards? A mess? A gaggle? A flock? Whatever it is, Seeing Standards: A Visualization of the Metadata Universe is a poster of standards in the memory community.
The sheer number of metadata standards in the cultural heritage sector is overwhelming, and their inter-relationships further complicate the situation. This visual map of the metadata landscape is intended to assist planners with the selection and implementation of metadata standards.

Each of the 105 standards listed here is evaluated on its strength of application to defined categories in each of four axes: community, domain, function, and purpose. The strength of a standard in a given category is determined by a mixture of its adoption in that category, its design intent, and its overall appropriateness for use in that category.

The standards represented here are among those most heavily used or publicized in the cultural heritage community, though certainly not all standards that might be relevant are included. A small set of the metadata standards plotted on the main visualization also appear as highlights above the graphic. These represent the most commonly known or discussed standards for cultural heritage metadata.