
VALA 2016 – Day Two – Valentine Charles, Kevin Ford and so much more


Building a Framework for Semantic Cultural Heritage Data – Valentine Charles

Valentine works with the Europeana Foundation, which is the central portal for cultural heritage in Europe.

Europeana has a huge range of items from European countries, including content from and related to Australia.

The European Library was the model on which Europeana was based, but then the reality of different languages, metadata standards and more had to be resolved. They had one year to find a common language to bring it all together.

Their first attempt was the Europeana Semantic Elements, which was based on Dublin Core. However, it was limited in linking and mixing data, and caused a lot of mapping quality problems. They used it for two years and launched the website based on it.

They then looked at a new model based on linked data, allowing cross-community reuse of data models through a collaborative approach – making it usable, mutual and reliable.

The Europeana Data Model was created, which reused several semantic based models but could easily absorb the existing data. When the model was released to their community, they were able to get valuable feedback and make more refinements.

Europeana now aggregates, processes and enriches metadata using EDM and provides a portal for users to access data and objects under a Creative Commons Zero licence. The model is community driven, with the input from different communities making it stronger and better suited to partner organisations' requirements. This has also raised complications, as some areas have not worked well yet, e.g. event data.

Many projects and partners have been able to either map their data directly or create specialised cases of EDM so that none of their metadata is lost. Its reuse is being encouraged by anchoring it to other, more granular models, as well as by using existing vocabularies and defining mappings to existing standards.
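To make the shape of an EDM record a little more concrete, here is a minimal sketch in Python using rdflib. The EDM and ORE namespace URIs are the published ones, but the identifiers, titles and choice of properties are invented for illustration and are not drawn from Europeana's own pipeline:

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import DC, RDF

    EDM = Namespace("http://www.europeana.eu/schemas/edm/")
    ORE = Namespace("http://www.openarchives.org/ore/terms/")

    g = Graph()
    cho = URIRef("http://example.org/item/123")          # hypothetical cultural heritage object
    agg = URIRef("http://example.org/aggregation/123")   # hypothetical aggregation wrapping it

    # The object itself, described as an edm:ProvidedCHO.
    g.add((cho, RDF.type, EDM.ProvidedCHO))
    g.add((cho, DC.title, Literal("Example photograph", lang="en")))

    # The aggregation carries provenance and a rights statement (values illustrative).
    g.add((agg, RDF.type, ORE.Aggregation))
    g.add((agg, EDM.aggregatedCHO, cho))
    g.add((agg, EDM.dataProvider, Literal("Example Museum")))
    g.add((agg, EDM.rights, URIRef("http://creativecommons.org/publicdomain/zero/1.0/")))

    print(g.serialize(format="turtle"))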

Now working on sharing rights information with the Digital Public Library of America, supported by Creative Commons. They are also exploring the option of allowing users to annotate entries and include this content in their data. Will likely use an existing standard for this.

It raises challenges such as defining criteria for extending the model, and balancing a centralised reference model against extensions.

Interestingly, although it was designed as a simple core, there are now complaints that with all the extensions, it has now become complex. This can impact on data quality.

For now they will continue to develop EDM iteratively, developing and adopting best practices and encouraging reuse.

Further developments will come in data validation (e.g. of the RDF), data checking, and investigation of crowdsourcing – including getting a better idea of what users need in terms of data quality.
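As a rough illustration of the kind of automated data checking mentioned above, a sketch like the following could test harvested RDF for required fields. The rules, input file name and property choices are assumptions made for illustration, not Europeana's actual validation suite:

    from rdflib import Graph, Namespace
    from rdflib.namespace import DC, RDF

    EDM = Namespace("http://www.europeana.eu/schemas/edm/")

    g = Graph()
    g.parse("harvested_records.ttl", format="turtle")  # hypothetical input file

    problems = []

    # Every provided object should have a title.
    for cho in g.subjects(RDF.type, EDM.ProvidedCHO):
        if g.value(cho, DC.title) is None:
            problems.append(f"{cho}: missing dc:title")

    # Every aggregation should carry a rights statement.
    for agg, _cho in g.subject_objects(EDM.aggregatedCHO):
        if g.value(agg, EDM.rights) is None:
            problems.append(f"{agg}: missing edm:rights")

    print("\n".join(problems) or "No problems found")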

Advice – develop data models that are interoperable and reusable, involve stakeholders throughout, rely on technology to help address complex problems, and if you can't do it yourself, provide use cases.

———————————-

Linked open data and Australian GLAMs – Ingrid Mason

Study tour of overseas GLAMs in October 2015 – VALA Travel Scholarship.

Visited a number of projects, all were either hosted by libraries or had library partners.

4 phases of change were common to all initiatives.

Assessment – review previous projects, immerse yourself gradually in the technical information, share, investigate your data deeply, and identify benefits and stakeholders.

Experimentation – implement new technology as a pilot, test and clean data, outside current library infrastructure to limit impact, e.g. NYPL Labs.

Production – agreement to go ahead, cleaning and enriching data at scale, developing a new discovery layer, publishing data as a service, e.g. RDF, XML.

Evaluation – assessing gains, identifying challenges and capacity, assessing stakeholder investment, and considering whether to continue or extend linked open data.

They investigated broad practice change and metadata management. Three brief case studies were presented: the Bibliothèque nationale de France (BnF), Europeana, and the Linked Data for Libraries project involving Cornell, Stanford and Harvard Universities.

—————————————-

Building a richly featured library management platform – Hugh Rundle

Hughrundle.net

We have a responsibility to protect the privacy of library members.

The biggest threat to patron privacy is librarians. There are many levels of security, including secure logins, HTTPS, PGP and more.

It is possible to build a system which protects privacy, but shares data.

  1. Put patrons in control – although we hold a copy of their data, it is not ours.
  2. Move from just in case to just in time knowledge.
  3. Maintain control over library assets.
  4. Encrypt by default.
  5. Maintain fast performance for search and transactions.
  6. Enable search over encrypted data to reduce data load (see the sketch after this list).
  7. Store historical data.
  8. Use data for enriched search.
  9. Zero knowledge database is the goal – or as close to it as we can get.
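On point 6, one common way to allow equality search over encrypted fields is a keyed "blind index": a keyed hash of the search term is stored alongside the ciphertext, so the server can match searches without ever seeing the plaintext. This is a different technique from the one described in the talk and is sketched here only as an illustration, with all keys and values invented:

    import hashlib
    import hmac

    index_key = b"illustrative-index-key"   # in practice, key management is the hard part

    def blind_index(term: str) -> str:
        """Keyed hash of a search term: supports equality search without exposing the term."""
        return hmac.new(index_key, term.lower().encode(), hashlib.sha256).hexdigest()

    # Stored alongside the encrypted record instead of the plaintext title.
    stored_index = blind_index("The Dispossessed")

    # A search for the same title matches the index without decrypting anything.
    print(blind_index("The Dispossessed") == stored_index)      # True
    print(blind_index("Parable of the Sower") == stored_index)  # False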

They would use the Mylar software, which runs on the server-side JavaScript framework Meteor.

Potential system – Tinfoil

All data would be encrypted, so that staff could only see that a patron has a certain number of items on loan, but not which ones, and staff could see that a title is on loan, but not to whom.
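As a rough sketch of that idea (not Mylar or Tinfoil itself), the following Python uses the third-party cryptography package to show how loan details could be stored as ciphertext that only the patron can decrypt, while staff see just a count. All identifiers and titles are invented:

    import json
    from cryptography.fernet import Fernet

    patron_key = Fernet.generate_key()   # held by the patron's client, never by library staff
    f = Fernet(patron_key)

    loans = ["The Dispossessed", "Parable of the Sower"]   # made-up titles

    stored_record = {
        "patron_id": "P-1042",                           # visible to staff
        "loan_count": len(loans),                        # visible to staff: how many, not what
        "loans": f.encrypt(json.dumps(loans).encode()),  # opaque ciphertext to staff
    }

    # Staff-side view: the count only.
    print(stored_record["loan_count"])

    # Patron-side view: decrypt with their own key to see the titles.
    print(json.loads(f.decrypt(stored_record["loans"])))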

Encryption could be lifted momentarily for notices to be generated and other exceptions would need to be investigated.

Patrons could continue to access their own details, including loans and when they are due, via their own login.

———————————-

GLAMMapping Trove – Arianna Betti and Kevin Verbeek

GLAMMapping is visualisation software and this paper discusses improvements since the last VALA Conference.  The software is used to visualise large data sets.

Visual analytics for users is very under- or badly utilised due to sociological factors, organisational factors and public policy related factors. Their investigation involved research, user testing, evaluations and more over five years.

Libraries are interested in presenting their data visually and users want access. An extra factor that comes into play is technology – particularly ease of use, which at present it does not have.

For users, adaptation can be difficult and for libraries, staff and funding for such innovation can be scarce.

GLAMMapping is currently working in strategic partnerships, giving priority to customised products, regular releases of updates to the free site, and continual innovation.

They showed a demo of a sample of 7000 Trove records on a world map based on place of publication, then by place overlaid by year. You can then also limit by author or keyword.

But how can it scale up to Trove's 60 million records? Two issues are performance and …

First they had to import all of Trove's records. They had to sort out access for the FRBR hierarchy – for speed they had to flatten the table. The data then had to be geocoded, and the individual 'squares' had to be prepared for many sizes. For extra-large sizes, they created a different visual code to express that a square should be larger than it is drawn.
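A minimal sketch of that flatten-and-cluster idea might look like the following Python, where each record is reduced to a point for its place of publication, points are snapped to grid cells, and each cell's count drives the size of the square drawn. The records, coordinates and cell size are invented, and this is not GLAMMap's actual code:

    from collections import Counter

    # Stand-in geocoder: place of publication -> (lat, lon).
    GEOCODES = {"Melbourne": (-37.81, 144.96), "London": (51.51, -0.13)}

    records = [
        {"title": "Example record 1", "place": "Melbourne"},
        {"title": "Example record 2", "place": "Melbourne"},
        {"title": "Example record 3", "place": "London"},
    ]

    CELL = 5.0  # grid cell size in degrees

    def cell_for(place):
        """Snap a place's coordinates to the corner of its grid cell."""
        lat, lon = GEOCODES[place]
        return (int(lat // CELL) * CELL, int(lon // CELL) * CELL)

    counts = Counter(cell_for(r["place"]) for r in records if r["place"] in GEOCODES)

    # Each cell's count would drive the size of the square drawn on the map.
    for (lat, lon), n in counts.items():
        print(f"cell at ({lat}, {lon}): {n} records")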

Future challenges: filtering on big data sets, faster clustering, improved geocoding.

Explore your own data – sign up and you can import your own sets, as long as they are not too big and meet their format (details on the website).

http://Glammap.net

——————————-

Change, capability and culture: building a confident workforce for the future – Fiona Salisbury

Why change – institutional drive, funding, and the Future Ready program; the greatest change in the library's history, La Trobe University wide.

Change process – define, design, consult, implement. The structure provided was well resourced and the library embraced it.

A change team was created to drive this process in early 2014, with implementation beginning in late 2014 and elements still continuing now.

The design phase resulted in a new structure informed by strategic documents from the library and its parent institution. One result was One Library, with teams across the five campuses; another was making everything client-centric.

Resulted in a big change to their functional teams.

Implementation was conducted in a short, tight timeframe – leading to both difficulties and opportunities. A lot of other changes were happening at the same time, including a new LMS. This highlighted the need for staff to be ready for ongoing change at all levels.

A comprehensive program of staff development and training, both internal and external, was vital. There was particular value in training developed for staff by staff. This has been developed into a training database for staff to review, refresh and start afresh.

Each role in the new structure had a success profile developed, to enable staff to truly understand what their new job involves. Although mostly done individually, these were discussed with colleagues, managers and sometimes in a workshop setting.

They recognised that important factors were informal – the way things really got done – so emphasis was put on identifying and working through them. On an ongoing basis, they listen, gather evidence, gather feedback and respond. The response is gradual, as time needs to be taken to unravel what was really being said.

A stop/start/continue exercise was run across all services – what could stop, what could be new and what would continue.

Building a new culture and new ways of working – common understanding of goals, values and expectations, effective and transparent communication, ongoing staff development, reviewing and challenging attitudes and assumptions, building a culture of accountability.

—————————————–

Content Round-Table – Professional development

Four things that are core to our profession – access to information, literacy, protecting our culture and access to ICT.  Passion is also necessary, which just can’t be taught.

Anyone who enters the profession must have a commitment to ongoing professional development. But how do we encourage those who don't have that interest and don't want to invest? Get your undecideds involved as a starting point, and make sure you provide timely content that is enjoyable.

Informal social media groups can also be helpful as a personal learning network.

Librarians are great at change, we have been changing for a hundred years.

—————————–

There’s an ambiguous road sign that reads Bibframe and a fork in the road. Do you take it? – Kevin Ford

Lots of libraries are considering Bibframe, but are unsure whether to head down that path yet. It is intended to replace MARC.

Bibframe recognises and embraces that we are in a networked world, both on and of the web. The challenge is getting our content out where web tools can find it. To achieve this we need to model, design and publish our data differently.

It will replace MARC and its two functions: data representation and communication format.

Bibframe core classes include Work, Instance, Authority and Annotation. It has adopted the principles of linked data and is expressed as RDF.
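As a rough illustration (not the Library of Congress's own tooling), a minimal description using those core classes could be expressed as RDF with rdflib like this. The namespace URI is an assumption based on the published Bibframe vocabulary, and the identifiers and labels are invented:

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF, RDFS

    # Assumed namespace for the original Bibframe vocabulary discussed in the talk.
    BF = Namespace("http://bibframe.org/vocab/")

    g = Graph()
    work = URIRef("http://example.org/work/42")            # hypothetical Work identifier
    instance = URIRef("http://example.org/instance/42-1")  # hypothetical Instance identifier

    g.add((work, RDF.type, BF.Work))
    g.add((work, RDFS.label, Literal("An example work")))

    g.add((instance, RDF.type, BF.Instance))
    g.add((instance, BF.instanceOf, work))  # ties the Instance to the Work it embodies

    print(g.serialize(format="turtle"))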

As the conversation is ongoing, are linked data and RDF massive distractions? Basic issues include the single namespace, terminology, the use of blank nodes and RDF-specific modelling patterns. These are integration-related issues, not bibliographic description, which is what Bibframe is about.

Presently, the Library of Congress is running a full-blown pilot (since August 2015). There are concerns, as the vocabulary has not changed since 2014. Version 2.0 has been announced but has not yet appeared.

Concerns include a lack of transparency – a product is only seen, and feedback allowed, once LC is happy with it – and a lack of balance between bibliographic expertise and linked data/RDF expertise.

Even though it is not finalised, spin-offs/forks have already been created: the National Library of Medicine's Bibframe variant, LD4L and Bibfra.me. But this passion and work is not being harnessed for Bibframe.

