Wikipedia and "produsers"

A while back at work I suggested we look at moving our entire Encyclopedia to Wikipedia.org. My position has not changed. I still think we should do just that.

Putting the content up on Wikipedia.org gives it MUCH wider exposure than our website ever could, and it therefore has the potential to bring new users to our website who may not even know we exist (via links in to our own web content). With a wikipedia.org user account, we can maintain an appropriate amount of control over the content (more than we have at present over wikipedia content that started as ours, already put up there by others).

Another point is that putting it up on Wikipedia allows us to engage the assistance of various volunteers who’d like to help us but don’t live locally. Only recently I’ve been approached by a few keen volunteers in exactly this position, and I think they’d do a good job for us in both maintaining content and generating new content (which we could edit when needed).

It isn’t urgent, but I think we could make some progress on at least a trial.

Adam, our web developer, has suggested a few things that we need to do before we start using wikipedia.org:

Some colleagues here said they liked the idea of hosting our own wiki like the (UK) National Archives’ Your Archives wiki, but they are also supportive of moving our encyclopedia to Wikipedia. One person has started working through existing entries and tidying them up to make sure links are up to date and the sources and references are included for the entries. The motivation for doing so was that Wikipedians can challenge and/or remove unsourced material.
In a lot of cases we don’t currently list the sources for entries, so we are going back to the background material we have for each entry and, if that doesn’t exist, we may recreate the research. This has made the process slower than we had expected.

As we look further into this and begin to examine some of the issues of “ownership”, reliability and “endorsed” wikipedia entries, a read of a very recent post about such matters on the ABC Digital Futures blog would seem advisable. At first glance one might think this is a bit of a long bow to draw, because it is focused on the digital future of the national broadcaster and discusses a model of participation regarding travel advisory websites. But when you think about it, many of the principles apply in a much broader sense, and to us as we look at moving our encyclopedia to a more open and participatory environment.

The whole Lonely Planet model is very similar to our situation. Indeed, our existing top-down model of the publication (in various forms) of Australian military history guides, magazines and books could well be undermined in much the same way as the environment in our own small world shifts from military history for the people to one of military history by the people.
The author/presenter is Axel Bruns and you can read his full text online here.

In recognising that everybody has a valuable contribution to make, and in encouraging us not to be afraid of it, Axel says there are four preconditions that are needed:

  1. the replacement of a hierarchy with a more open participatory structure;
  2. recognising the power of the COMMUNITY to distinguish between constructive and destructive contributions;
  3. allowing for random (granular, simple) acts of participation (like ratings); and
  4. the development of shared rather than owned content that is able to be re-used, re-mixed or mashed up.

So, throughout his article he uses the term “produser” to describe the participants in such a community. It is all about true collaboration, engagement, and the shared development of content.

Finally, he suggests these four principles for anyone seeking to build successful and sustainable participatory environments (mind the big words):

  1. Open Participation, Communal Evaluation – inclusive, not exclusive
  2. Fluid Heterarchy, Ad Hoc Meritocracy – from a hierarchy to leadership based on accumulated merit that is recognised by the whole community
  3. Unfinished Artefacts, Continuing Process – evolutionary development of articles; nothing is ever truly “finished”
  4. Common Property, Individual Rewards – tangible outcomes for individual contributors.

The reasons we need to get involved in the broader wikipedia community are basically two: first, it is inevitable that it will grow as a community, and if we are to have any influence at all we need to be involved; and second, we do not have the resources to be involved in two communities by managing one on our own site as well. Wikipedia is the pick (at least in my mind) because it has much more potential reach and exposure than we ever will. I think it is overly pessimistic to dwell on the worst possible case scenario (of extensive and malicious damage to entries) in this instance. Moving our encyclopedia to wikipedia should not be looked at as a surrender.

Recently in D-Lib there was a good example of an institution (University of Washington Libraries) using wikipedia to promote digital collections by using deep links back into their site. See Using Wikipedia to Extend Digital Collections.

Not all new ideas are good ideas . . .

And IMHO, this initiative from ICOM is a truly awful idea. Here is an excerpt from an email that I saw this morning. It starts like this:
ICOM cordially invites all members of the global museum community to participate in IMD on 18 May 2008 with activities in their museums based on our theme “Museums as agents of social change and development”.
Sounds good so far. IMD is International Museum Day. They invite us all to participate in both the real and virtual worlds with activities consistent with their theme. And here is where they lose the plot completely:
The highlight of the suggested online activities on http://icom.museum is hosted by The Tech Museum of Innovation on 18 May in the replica of its Silicon Valley museum of technology on SECOND LIFE, the virtual 3-D platform created by Linden Lab. From real-world museums, museum professionals and the public will be able to communicate with colleagues, artists and “residents” in the virtual world. They will therefore be able to participate in the collective development of exhibits in The Tech in SECOND LIFE.
I am not at all sorry to say that this is simply one of the worst ideas I’ve heard of recently.

What will people pay for online?

As a former economist, I have long been interested in the new economic or commercial models that are emerging on the web. Many of us will be familiar with Chris Anderson’s “Long Tail” description of the niche marketing of online stores like Amazon. Well, here is another theory. It comes from another respected web pioneer: Kevin Kelly (who helped launch Wired Magazine and is still a board member of the Long Now Foundation).

Kelly has recently written a post called Better Than Free in which he offers us “eight generatives” that people will still be willing to pay for in the new web environment (“a copy machine”) where so many copies of everything are now available somewhere for free (eg. peer-to-peer networks, not that I’d have any idea what they are for!).

What he says is that even when some product or service is available for free, we are probably still willing to pay for it elsewhere when it is surrounded by or within an environment characterised by these qualities (which can’t be copied, cloned, faked, replicated, counterfeited, or reproduced). Here is a quick and dirty summary of Kelly’s article with my comments about how each one might apply in the museum world, in italics:

  • Immediacy – Eventually you will be able to find a free version of just about everything somewhere, but it could take some time. People still pay a premium for air-delivered imported magazines, and in much the same way we value getting a copy delivered to our inbox as soon as it is released, requested or created. Digital downloads by subscription where applicable and possible.
  • Personalization — Generic versions may well be free, but getting something bespoke will always be something that some people want. Offering products like hand-crafted digital prints, very high resolution objects, or rare special copies/facsimile editions may be well received.
  • Interpretation — “As the old joke goes: software, free. The manual, $10,000. But it’s no joke.” I’m not sure how this applies to us because for many museums, particularly in Australia, although we have bucket loads of interpretation, the general expectation is that we provide it, as well as most quick reference services, for free online. Perhaps we need to look at paid subscriptions for well-written online publications?
  • Authenticity — “You’ll pay for authenticity.” Again, we can offer very authentic material and already have this advantage. More, better branding?
  • Accessibility – “Ownership often sucks. You have to keep your things tidy, up-to-date, and in the case of digital material, backed up. Many people, me included, will be happy to have others tend our “possessions” by subscribing to them.” I’m still not so sure about this generative: in some ways, we can get web services like del.icio.us and Google Reader to do such things for us, like looking after our bookmarks/favourites and blogs (respectively) for free. It also doesn’t seem to be named that well.
  • Embodiment — “At its core the digital copy is without a body. . . . The music is free; the bodily performance expensive.” For museums I think this is about what else we can offer in terms of paid programs or experiences. Generally, major museums and galleries in Oz charge only for special/imported exhibitions or “blockbusters” (except us). Perhaps it means charging for curatorial talks on the talks circuit. I do a few of those in relation to our Lawrence exhibition and a few other things, and so far they are all free!
  • Patronage — Audiences probably want or at least don’t mind paying creators. “But they will only pay if it is very easy to do, a reasonable amount, and they feel certain the money will directly benefit the creators.” This applies universally for creators, including us, but we probably need to pay more attention to making payment easier and reasonable.
  • Findability — “No matter what its price, a work has no value unless it is seen; unfound masterpieces are worthless.” I like this one a lot and it is probably one of the most relevant to our cultural world, where such a large percentage of our collections is not on permanent display. Our products and services should not be too hard to highlight, find and get – not too many gates or complicated registration.

Some of these eight qualities apply to us more than others and a few could be better described or have a different descriptor applied to them (like accessibility?). To the list I’d probably add trust and, like one of his comments says, usability. Most cultural institutions are trusted and we can take advantage of that, but usability isn’t really a major focus – think of most of our unfriendly catalogues and systems.

OK, so I might occasionally use a peer-to-peer network for some music and films that are either impossibly hard to get or far too expensive in Oz, but I also download a lot of material for a fee from iTunes and I’d agree that the reasons I do this are pretty well mapped out above. If we are to come up with a decent model to make money or even recover costs for certain products and services on our museum websites, we need to look at this article very carefully.

Shift happens: how the network effect, two-sided markets, and the wisdom of crowds are impacting libraries and scholarly communication

Bruce Heterick, JSTOR, New York, USA

Abstract: This session will discuss the changing nature of library services and scholarly research in the networked world. Our affiliated group of not-for-profit digital initiatives – JSTOR, ARTstor, Portico, and Aluka – has a unique perspective on this shifting environment. There is ongoing discussion about the evolving Web (or Web 2.0): the migration of the Internet from a platform to a service; the network effect that encourages (and values) contributions and collaborations; and a shift in software and services to a participatory model. This evolution is changing libraries, publishing, and scholarship. In particular, it is fundamentally changing the paradigm of scholarly communication, and this presentation will examine this change.

I thought this was yet another good paper from the final day. Bruce knew his stuff and was an engaging and stimulating speaker. Fabulously, you can download the slides he used from this link: http://www.jstor.org/about/forum/ShiftHappens.pdf (1.1 MB PDF file)

Bruce opened by quoting Neil Postman: “Technology doesn’t add or subtract something. It changes everything.” It does, however, have a short half-life. He then argued that Apple’s introduction of the iPod (bringing us portable media) in 2001 was as important an advance as Tim Berners-Lee’s World Wide Web in 1989.

Next he told us of John Seely Brown’s “Four exponentials” (regarding the pace of change as it applies to working together):

  • Moore’s Law: the power of computing doubles every 18 months.
  • The Law of Fibre: the capacity of the bandwidth of fibre doubles every 9 months.
  • The Law of Storage: digital storage doubles for the same cost every 12 months.
  • The Law of Community (Metcalfe’s Law): the power of the network increases with the square of the number of networked people interacting with it (more people = more power).
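The quadratic shape of Metcalfe’s Law is easy to illustrate with a toy calculation (a sketch only: the n² form is the classic statement, and real networks may well scale differently):

```python
def metcalfe_value(n: int) -> int:
    # Metcalfe's Law: a network of n participants supports roughly
    # n * (n - 1) pairwise connections, conventionally approximated
    # as n squared.
    return n * n

# Doubling the community quadruples the potential value:
print(metcalfe_value(10))  # 100
print(metcalfe_value(20))  # 400
```

The point for us is that value grows much faster than membership, which is one argument for joining the biggest network (Wikipedia) rather than running a small one of our own.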

This increasing pace of change becomes unsettling for some, but he said that when things are in control, you are probably moving too slowly.

The Transition from the Information Age to the Age of Participation

  • Active, not passive
  • Multilateral, not unilateral (If your federated search has a problem, who do you call? It could be with any one of 12 repositories.)
  • Communities, not silos
  • Contribution as well as consumption.

An Environment with New Dynamics

  • The network effect. It increases in value the more people use it, eg. Open Source software (Linux, Open Office), Communication (email, SMS), Social Networking software (MySpace, Facebook), Scholarly Resources (arXiv.org, JSTOR). Its growth can be extraordinarily fast (“viral”) and without control. Eventually the power of the network moves down.
  • Two-sided markets. In Web 2.0 people can contribute as easily as they consume. These new networks have two groups that provide benefits to each other and enjoy intermediary platforms that balance their interests, eg. Flickr, eBay and OCLC’s WorldCat.
  • The “Wisdom of Crowds”. In the right circumstances groups are often smarter than the best people in them. Their decisions work best when the crowd is: diverse, decentralized, has a mechanism for summarising the answer and acts independently, eg. Wikipedia (this applies particularly to our situation and our Encyclopedia!), Google’s page ranking algorithm.
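The “wisdom of crowds” conditions above (diversity, independence, a mechanism for summarising the answer) can be demonstrated with a small simulation. This is just an illustrative sketch with made-up numbers, not anything from Bruce’s talk:

```python
import random

def crowd_estimate(true_value: float, n_guessers: int, noise: float, seed: int = 42):
    # Each guesser makes an independent, noisy estimate of the true
    # value; the crowd's answer is the simple mean. With diverse,
    # independent errors, the mean tends to land closer to the truth
    # than most individual guesses do.
    rng = random.Random(seed)
    guesses = [true_value + rng.uniform(-noise, noise) for _ in range(n_guessers)]
    mean = sum(guesses) / len(guesses)
    crowd_error = abs(mean - true_value)
    # Count how many individuals the aggregated answer beats.
    beaten = sum(1 for g in guesses if abs(g - true_value) > crowd_error)
    return crowd_error, beaten

error, beaten = crowd_estimate(true_value=100.0, n_guessers=500, noise=30.0)
print(f"crowd error: {error:.2f}, individuals beaten: {beaten}/500")
```

The aggregation mechanism is doing the work here, which is exactly why Wikipedia needs both lots of contributors and a way of resolving their contributions.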

So, what does this mean for us?

  • Libraries (and we may read here “museums” or “cultural institutions” I think) have to manage access and preservation for system wide and local resources (wikis, blogs, repositories).
  • We need to take advantage of economies of scale (OMG, I think I’ve said this meself before and nobody believed me!) so that we can reduce costs by sharing core services.
  • We must reconfigure our services for the networked environment (which means they aren’t actually configured that way now).
  • We need to learn how to engage proactively with our constituents – see the OCLC report Sharing, Privacy and Trust in Our Networked World.
  • Free-standing publishers will need to share the commodity layers of their activities, eg. HighWire Press. There is tremendous pressure to move from print to electronic publishing.
  • Publishers that harness the network effects and which are able to build self-sustaining communities will grow faster than others, eg. arXiv.org
  • (There are also implications for the academic world, but I’m not going into those here. Sorry, call me selfish and self-centred.)

Conclusion
Libraries (and other cultural institutions) are small systems in a much larger one and we must learn to move with it! Bruce then briefly touched on the “Gorbachev Syndrome” in which change agents are swept aside by the tide of change they initiated because of their continued commitment to legacy systems/products/services. And I’m afraid that in my view, most libraries and archives that I know about are still well anchored in their old ways and processes. The world has changed around us and we need to move on. Some of our much loved standards and ways need to be left behind, not continually patched up and brought with us.

Key messages from VALA 2008


Some people don’t have the time to plough through all this text, so I’ve been asked to put together some of the main messages that I picked up at VALA. I reserve the right to adjust these as I complete posting all of my notes. So, to date, I think the key messages that come to mind are as follows:

  • The importance of pro-active engagement and interaction with the relatively new social networks that have emerged on Web 2.0. That is where the future will evolve from (very rapidly) and we need to be aware and involved to keep up. It is relatively risk- and cost-free. We should start making more use of engagement/interactive tools like wikis to develop and grow our own community (utilising the wisdom of crowds).
  • Systems (on the web) need to be engaging and intuitive (not “must do”) or they’ll be avoided by users.
  • We need to look at the ways we catalogue and who we are cataloguing for (ourselves). If you think of the Collection-Cataloguing continuum it is something like Acquisition>Arrangement>Store>Keep – we are good at all of that, but we are not so good when it comes to the “providing public access via the web” (assisting our users to find and get) part. Our catalogues need to be fully optimised for search engines like Google, Yahoo and MSN. If we are maintaining systems that do not get the data out to the web because of some facility or capability that only we need, we should consider using a mash-up to account for those needs, and simpler, more open web standards for the essential needs of the public users. The use of persistent identifiers (particularly “canonical” URLs that in many ways are brief catalogue entries themselves) was a plenary topic that attracted much interest.
  • Web services are increasingly being used and can provide almost anything. Slideshare is a good online example of an online repository in the Web 2.0 world. Much of the useful cataloguing (or tagging) is done by the extended community or network. Library Thing for Libraries was also mentioned quite a bit as being used by many libraries around our size (mainly to augment their conventional cataloguing systems).
  • We must stay in touch with developments in Copyright and we should consider making use of the Exceptions in the Act to ensure they stay with us (this will be relevant to the WW1 non-OR digitisation project that we are just beginning – many orphaned and unpublished works).
  • Regarding digital repositories – much of the experience so far has been in the universities storing research material. From them we learn that a one-size-fits-all approach (from the outset) should not be applied too rigorously. Needs and different requirements will evolve as the repositories are used, and certain assets may have vastly different needs to others (eg. storage, metadata, etc.). Otherwise, the ECM itself may become a victim of “Gorbachev Syndrome” – swept away by a tide of change that it started and could not keep up with, through its own inflexibility and resistance to changing with the times and new technological trends.

Making identifiers concrete


Here is a link to LukeW’s good summary of Stuart Weibel’s plenary that closed VALA 2008. I liked what Stuart had to say and my notes probably differ a tad from Luke’s notes, but maybe I just misunderstood what Stuart had to say?

Stuart comes from OCLC and presented really well, leaving most of us with a new perspective on what could have been a dull and dry topic. I found his message easy to follow and quite inspiring.

Branding & Web 2.0
OCLC have released a report on the perceptions of libraries and information resources.
Libraries and search engines are trusted about the same.
People care about the quantity and quality of information.
They do not view paid information or free information differently.
Branding can be achieved by building on trust and by making things look free. Scale is represented by libraries and their presence everywhere. They have global scope and reach (via networks?). BUT people need more awareness. We must be part of the new online environments that dominate our lives.
Social networking software. Only technical manifestation is new (we’ve always networked). Motivate people to tag, participate. Wired said 40% of those they interviewed contributed in one way or another. (Higher than Yahoo’s figures.)
Re social consumer environments. Facebook, etc. are not just for games. But they are probably not the right models. There are lessons to learn though (OCLC has just put an application into Facebook). They are flawed – closed gardens, rudimentary features, but offer an experience as well as a service.
Libraries must compete and compare favourably with popular models (in Seattle books with coffee is the law!). But can we compete and should we? What can we do to fit in and how to distinguish between the trends and the trendy?
Catalogues – how can they change, morph and grow? Networking. Collections linked to people, organisations, concepts, context, metadata, etc. (So I’ve just started an account on Library Thing to learn how this works.)
Do we need a web or scaffolding – do we want more – coherence, durability, etc.
Mentioned FRBR – works, expression, manifestation and item. But with other dimensions.
He said that for discovery on the web, a book review is more useful than a MARC record (I agree – how many people truly understand MARC records?). They are a social bibliography. He also cited: lists, services, commentary, etc.

Infusing bibliographic ideas into the web & vice versa?
First class objects need: persistent identifiers; access to all; stand-alone status (identification & clear IP); and curation (not left lying around untended – bit-rot!). Allow users to enter and traverse the catalog from any point!
Establishing a canonical identity on the web is very important.
See WorldCat identities. This should have been done ages ago. Tag cloud into popular Ids. Has stuff by/on author, works, links, encourages serendipitous discovery, associated subjects. All from bibliographic data.

Identities on the web
What characteristics are best in identifiers? There are no hard/fast rules – just suggestions. He thinks URLs need to reflect something about what they are. Make them meaningful.

Design criteria for identifiers
Characteristics:

  • persistence (function of organisational commitment);
  • universal accessibility & global scoping (work everywhere, open to all, WorldCat provides architecture for library assets mapping global surrogate to the local);
  • optimised for search engines and canonical (raises search engine ranking);
  • branding via URIs (mini-billboards);
  • usability by people and machines – speakable, short, predictable (hackable).

This amounts to gluing the pieces together with identifiers.
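Those design criteria can be made concrete with a small sketch. Everything here is illustrative: the base URL, collection name and record IDs are invented, not any institution’s real scheme, but the shape follows the characteristics above (short, predictable, hackable paths; a stable record number for persistence; a human-readable slug for branding and search engines):

```python
import re

def canonical_url(base: str, collection: str, record_id: str, slug: str = "") -> str:
    # Build a short, predictable ("hackable") path: stripping the
    # slug or the record_id off the end still yields a meaningful URL.
    parts = [base.rstrip("/"), collection, record_id]
    if slug:
        # Normalise the slug: lowercase, hyphen-separated words only,
        # so the URL doubles as a mini-billboard and is speakable.
        slug = re.sub(r"[^a-z0-9]+", "-", slug.lower()).strip("-")
        parts.append(slug)
    return "/".join(parts)

print(canonical_url("https://example.org", "encyclopedia",
                    "E1234", "Gallipoli landing 1915"))
```

Persistence, of course, is not something code can provide; as Stuart noted, it is a function of organisational commitment to keep such URLs resolving.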

WorldCat identifiers – are they good enough?
Unique, free, citable, resolvable, linked, canonical (no, not really). Some functional duplicates (multiple records pointing to the same thing).

A GLIMIR of the future?
A global manifestation identifier. Global, business neutral, canonical, provides URL equity, fits with FRBR model.
There are other identifier schemes. So, OCLC is cautiously exploring this territory.

Summary:
IDs are the key; they are needed for mission, to compete, brand, to bring bibliographic values to web, to provide services and access to digital tribe. Books not done yet.

See particularly his blog posts on related subjects (persistent identifiers).

Andy Powell asked whether he was talking about the semantic web. Stu supports it, but is skeptical about the technologies involved. He spoke of middleware as the plywood of the internet. It needs to become the plywood of our arena. He said the abstract model has fundamental importance on the web.

Designing for today’s Web


Luke Wroblewski, Senior Principal of Product Ideation & Design, Yahoo! Inc. and Principal of LukeW Interface Designs, USA
I really enjoyed this plenary and got a lot out of it. He may have initially been a bit biased towards Yahoo and anti-Google, but eventually he got over that and had excellent points to make. Again these notes are pretty rough. It was a great start to the final day and really got us in the mood and opened our minds.

It isn’t that people don’t read – they’ll read when they find something they want to read. So, there are three key considerations in designing websites: Presentation – voice, where interaction happens; Interaction – responding to users; & Organisation – structure.
Luke strives for usefulness, usability and desirability (why do I care, why should I use it?)
He referred to videoegg – as a good example.

What is different about today’s web? What are the recent shifts?

A. From locomotion to services
We interact through locomotion, conversation and manipulation. It is all now much easier and more widely evident on the web. He showed us the huge use of Yahoo Answers (5 mins to an answer in the US; 0 to 90 million users in 1.5 years) and the spread and use of word processors online. It is all part of the web transition – first digital representations of physical entities, then digital manipulation of physical goods (e-commerce), and now purely digital services (no physical presence, just display services: aggregation, flickr, MySpace, blogging tools, video editing online, entertainment sites). Many of these services have popped up in the last few years with hosting for around US$6 per month. Instantly you can reach a huge audience; there are few barriers to entry; and you can use free open source platforms. BUT, you have about 1.6 secs per month per person to convince them your site is interesting, unique, worthwhile. Therefore you need to know your core – define, focus & build outwards. Some examples of those who do:

  • Eg. eBay – global economic democracy. 30th largest economy in the world. US$1,800 sold per second. 520,000 stores hosted worldwide. (Sorry, no link to eBay, I was the one who shouted out “greed” when Luke asked what makes it work so well.) Luke said it was held together by democratic feedback. Any search or browse is defined by democratic comments & feedback; they are not sorted. Interaction on a level playing field is the core underpinning element that makes it tick.
  • Also digg with 3 million people online filtering the news. Interaction on all news items. All can express opinions. One click interaction made the site go – the core element.
  • flickr – builds outwards, can be shared, embedded, favourited, etc., but core is a picture of a subject.

In packaging design for the web, Luke saw three key tools:

  1. “Meaningful shouting” through: differentiation (distinct and appropriate), attraction, and embodying the brand. He looked at three well known wiki tools – how are they distinguished meaningfully? Is it coherent story-wise?
  2. “Back of pack” – supporting the story & outlining benefits/features. The new Yahoo home page calls up elements and gives you benefits in three bullets on a pop-up window. It helps people use the product. Yahoo also provided a 2 min video on how to use Yahoo Bookmarks.
  3. The unpacking experience – eg. the Apple experience, which culminates with the personal photo taken with your new laptop. Google video just gives you a form. It is an interrogation room. That happens with 90% of web services. Jumpcut first asks you to make a movie, so the first thing seen is the movie and online editor. After that, you are asked for an email address. He also showed pbwiki – it does the email messaging thing and then three more steps, re passwords, access, terms of service, more services, etc., and then you need to get through a barrage of marketing material, all before you can even start! But he compared that to geni (which creates a family tree) – it starts with a name to make a tree. You jump straight in right out of the gate. It is what people want to do! They got 5 million profiles in 5 months.

B. From pages to rich interactions
It is all about design considerations. Ajax interface design. Pages become more dynamic, updated and rich (although these same pages become more difficult for those who are print handicapped). Examples of inline micro-actions (within previously flat web pages): ratings, online indication, fade, transition, status, etc. You manage three spaces by design: the invitation (to vote/drag); transition (when voting/dragging); and feedback (when the vote/drag is done). All are then encapsulated in repeatable design patterns (they catalogue different states and interfaces – eg. Yahoo’s pattern library) – the use of search assistance layers after some user hesitation, which deliver more meaningful results and more conversational information retrieval.

C. From sites to content experiences
Sites used to be structured in hierarchies – closed and rather negative. The emerging networks, like clouds, etc., are not as accurate, but maybe they are accurate enough? Content is not treated as part of a structure; it is treated as part of an experience. See his article on primary and secondary actions in web forms. The new experiences are delivered in the form of: content creation tools (eg. search, blogs like ajaxian, wikis), aggregators (like digg, del.icio.us), display surfaces (eg. Facebook, MySpace), and entertainment services (eg. YouTube).
Design considerations again. When readers come to his page it delivers primary content, related content and a bit of context. How much of a site is dedicated to overhead? People really just want what they came to find, and maybe a bit of related stuff and context. Why hamper the user experience with what they don’t want? Do you need to get everything into the whole page? (The long tail phenomenon again – for most web sites, only a few pages get most of the attention/use.)
Eg. personalised search like ROLLYO and a party planner/arranger like RENKOO
If expectations are met . . . people will look around and may take up relevant invitations.
Distributed or re-mixed content. These experiences are not just about distribution, but bringing content in context (and core design still matters!). Eg. blog posts with rich metadata in it, say from Yahoo Shortcuts (interestingly, I found it easier to find this page online via a Google search than a Yahoo search). Things can be added in with a single click on your blog. Context can be king.

D. From webmaster creation to everyone creates
Community on the web comes from features like tags, ratings, reviews, trackbacks, blogs, wikis, RSS subscriptions, etc. Unity comes through shared interests and goals. Something gets them all there around something. Social behaviours – reputation & identity; communications; sequences, etc. Implications: GOOD – filtering, content creation, increased engagement (Yahoo Answers), invested consumers and collaborative innovation. BAD – blurred focus, spam and poor quality, power laws (abuse), factions and tribes, privacy and exposure issues.

  • Enable identity for communities: welcome, anonymity can be a death sentence, profiles.
  • Provide for creators, synthesizers and consumers, not just one or two. Who creates? Only 1% create, 10% are synthesizers and 100% are consumers – they read, engage and benefit from content. Value comes from the reaction of people with each other.
  • It all depends on the tools. How do you get people to contribute, and how do you encourage quality? MySpace is kinda ugly, but it is possible to create good stuff there – it is just hard to create good profiles on MySpace and easy to create ugly ones.
  • Quality content is based on the level of effort needed to put something in. Burying the submit button encourages fewer but better posts! So some barriers to entry can help QA. The best check on bad behaviour is identity (Facebook founder, Mark Z.). This has implications for comments on our blogs.

See his blog – Functioning Form

Luke’s responses to questions:
Re government websites – any trends? There is an enormous opportunity for us to build on these principles, and many different ways to engage. Eg. the initial (adverse) reactions from digital media, film and music, and their attitude now.
Getting around crappy content – don’t just go for the quick fix, quick dollar, think about the long term (what Liz usually urges us to do!). There are ways to make it good.
Redeveloping sites from scratch – it is all about knowing your core. Start with that, not with the amalgam that you have that hides the great original idea. What is really working and what is the core essence?

Unlocking Access: In support of a hands-on Internet Policy

(Keynote)
Michael Geist
Professor Geist holds the Canada Research Chair in Internet and E-commerce Law at the University of Ottawa.
He blogs on the net and IP. See also the Fair Copyright for Canada Facebook Group.
[No paper on CD or the web just yet, so for now you’ll need to rely on my rough notes. It was a good keynote!]

History.
Initially there was a push for governments to be hands-off, but they never really were. They always wanted a hand in regulating the net, at least at a domestic level and in some international agreements. The Canadians used the Australian anti-spam legislation as their model. (I didn't know we had such a law – it certainly isn't effective.) There was always a role for public policy and government.

Internet 2008.
  • The blogosphere – >100 million blogs, including some genuine subject-matter experts in some fields.
  • The power of social networks – Facebook/MySpace. Eg. his Canada Fair Copyright group had many thousands of members within a week or so, indicating opposition to new legislation – now 40,000 members.
  • Podcasts – he usually uses his iPhone to record talks and then podcasts the MP3 file; people don't want to read, but will listen or even re-listen; wider audience.
  • PostSecret – people post secrets to the world in an artistic/creative way (within a veil of anonymity). Many are sad, but there are now >250k of them, many in galleries and museums in the US.
  • Online video sharing – eg. YouTube; Star Wreck (free download, incl. English subtitles – people could download it and still they bought the DVD, and it was licensed for broadcast); Elephants Dream (an open movie made using free tools).
  • Public broadcasting (like our ABC).
  • Flickr & other photo-sharing sites like Facebook (many using CC licences).
  • The rise of Creative Commons (some rights reserved).
  • Free online publications that can also be purchased, eg. In the Public Interest.
  • Collaborative internet growth – eg. Wikipedia.org (it doesn't have a monopoly on making mistakes, but has a remarkable panel of expertise) and EOL, the Encyclopedia of Life.
  • The rise of citizen journalism – eg. OhmyNews, written by everyone.
  • Project Gutenberg (public domain digitised books, like SPW).
  • LibriVox (audio versions of books).
  • Educational content online, like MIT OpenCourseWare – a decade-long time-frame finished within four years, or four years early, assisted by advances in technology and the rise of support for such initiatives.
  • The move towards Open Access – eg. PLoS, the Public Library of Science (some people have gone on to win a Nobel Prize from that online journal); see also Open Medicine.
  • The Internet Archive (Wayback Machine) – public domain material hosted for free, forever.
  • Digitisation projects like Google Books (whole works and snippets), bringing books to life; Canada has a National Digitisation Strategy, including photos made freely available.
  • Open Source software – browsers, web services, etc.
BoingBoing (originally a zine) has a larger readership than any newspaper in Canada. There is a lot of concern about copyright in Canada, much of it starting from that Facebook group, and many other groups are being used to voice opposition to public policies.

Internet 2018.
(Not a prediction, but public policies and potential.)

  1. Connectivity. Broadband for all (or you cannot participate – so there is a public sector policy role there); muni wifi; net neutrality (a notion of a two tiered internet – fast for the rich and slower for the rest – treating all content in an equal fashion); spam; spyware.
  2. Enhancing participation. Intermediary liability issues (eg. things posted on your blog by others and not taken down fast enough); domain names; privacy (still struggling with issues, eg. Facebook and its privacy settings – 70–80% of users don't use them); trust; transparency.
  3. Copyright. Anti-circumvention legislation; fair use (Canada has no exceptions like time shifting, three-step test, loss of gift if not used, etc.); term extension (70 years+?); orphaned works; WIPO (an agenda that has moved much further than anyone expected).
  4. Content. Open Access; digitisation; Crown copyright (could affect us – people asking for permission to copy the Copyright Act!; military denying screenshots of equipment if it thought it critical!); public broadcasting.

He finished by saying: “It isn’t about a hands-off approach, the future of the net is in our hands.”

I liked this good round-up of issues relevant to public policy and the net. It wasn’t too heavy and highlighted many possibly obscure and not obvious connections.
Responses to questions:
The interests of public institutions are sometimes undermined by aiming at the lowest common denominator, with strategies limited to baby steps that are so conservative and aimed mostly at not offending! There are too many stakeholders in the room making decisions. Institutions need to take a stronger line with what they are doing and with the restrictions they use. Too much time is spent telling people what to do, rather than what they can do.
Social networks may be skewed towards the younger demographics.
Expectations of privacy on things like Facebook – people are not expecting that everyone can see it. See Danah Boyd’s work regarding the reaction of youth to parents looking at their profiles. Some governments ban the use of Facebook by employees, but all of their potential hires are on Facebook. It needs a re-think about the content posted on Facebook.
He was against the introduction of filtering systems as they are highly problematic and of unknown length/application.

Australia’s new ‘flexible’ copyright exception: open-ended in name only

Emily Hudson, University of Melbourne
Exceptions under s 200AB of the Copyright Amendment Act 2006
The new Flexible Exceptions for Cultural Institutions are intended to be open-ended and more flexible than previous exceptions. They enable us to make use of copyright material where that use doesn’t infringe the copyright holder’s interests.
“Fair Use” – under US law. Libraries and Archives exceptions exist but don’t cover museums and galleries. (Our law has “Fair Dealing”.) Emily said flexibility and uncertainty move along together.
s 200AB(2)
Must be by/on behalf of a library or archive.
For the purpose of maintaining or operating the library or archive (onsite or online!); but
Can’t be for commercial advantage (cost recovery is OK) – any kind of profit is NOT OK; and
s 200AB(1)
Work is not infringed by a use where the use amounts to a special case; doesn’t conflict with a normal exploitation of the work/subject matter; and doesn’t unreasonably prejudice the legitimate interests of the owner. (With some terms having the same meaning as TRIPS Art. 13 – international law.)
Whole of sector behaviour could have an effect on how the law is applied.
There are no fixed answers, so therefore, for us a risk management strategy is wise.
Emily says there is some exciting potential for the sector to act collectively and in unison.
[Her papers are available online at IPRIA.]
Responses to questions:
Institutions are not used to uncertainty and flexibility, so are inherently conservative, whereas the US environment has seen more activity.
External legal advice tends to be more risk averse because they don’t really understand our circumstances or the law as it applies to us. What is the worst that will happen? Taking online works down usually works. If someone does go off and use litigation, the remedy will likely be rather limited against a cultural digitisation program. S 200AB gives a potentially powerful defence.
People seem to be looking to us to test the law on behalf of others.
ALCC is currently drafting guidelines for the use of s 200AB under a "use it or lose it" principle. Laura Symes is supposed to be talking to us now – Sophie??? We should let them know what we are about to do.
(I have to say that Emily delivered an amazing paper – and I say that because I know she'll probably be reading this blog soon. Hopefully she'll correct the errors I no doubt made above!)

Going Virtual for Enhanced Library Experience: a Case Study of the National Library of Singapore

(Keynote)
Schubert Foo, Professor, Division of Information Studies, Wee Kim Wee School of Communication & Information, Nanyang Technological University
Abstract
Amidst changing lifestyles, Internet-savvy users, and the availability of large amounts of information on the Web, libraries are faced with the main challenge of remaining relevant and continuing to develop innovative products and services to serve the needs of users. This paper proposes a number of roles that libraries can play in such a future: as info-concierges; as a network of inter-connected info-concierges; and as a network of true collaborations. Using a case study of the National Library Singapore (NLS), a number of initiatives currently undertaken by the library to move forward in such a direction are outlined. These include the introduction of an SMS reference service, enhanced accessibility of NLS's content through deliberate availability in users' search and social networking spaces, and the development and use of a platform that uses the principles of the "wiki" to support the formation and use of a collaborative reference network for reference enquiries.
This paper is directly relevant to our references services in the Research Centre. I’ll make the full paper available to all RC staff (and anyone else interested) when I get back. My notes here are really pretty rough, but give you a taste of the content.
He was impressed by what he had seen and heard at the conference and encouraged us to spread our wings and not always expect the US to be the leaders in information management and technology.
He said that his students in Singapore have been very keen on using Wikipedia as a reference source for nearly everything! Many public reference enquiries are received from parents on behalf of their children for study purposes. They use an acceptable complaint-to-compliment ratio of 1:24. Collaboration in teams is big in Singapore.
Libraries: brick vs click; collect-organise-store-access; mediator (source-user); authoritative, trusted content. Most library users don't come to the library – they are net users. They use search engines, and sometimes they believe that that is the only place to find information. Instant gratification is expected and content must be downloadable; they are not interested in browsing. They also like exceptional user experiences ("memorable, unique, exceptional" were the words he used), but are not interested in help files. Only 1% of users go to an OPAC – they prefer Google (55%), Yahoo (21%) and then MSN (9.6%), in Singapore (I expect that the % in favour of Google is higher in Australia).
So what do we do as librarians? (Well, not me, as I'm not a librarian.) We delve into their net world. Singapore has high saturation of broadband, PCs and mobile phone use. SMS is very heavily used too, and SMS plans are much cheaper there. (I think cost has a lot to do with the usage rates of new IT services and the web.) Users want to connect anywhere, anytime, from any device.
The Info-Concierge – information as a commodity. Each object is self-contained, but must be connected and available across multiple platforms. Let users continue on the pathway of discovery – "what is next?". Connectivity comes through links, different platforms and pushing/suggesting further exploration (like Amazon does). They use push for simple alerts, but it could be pushed at a much finer granularity. The concern is spamming users or intruding on their private spaces – they want to deliver information to users, not bother them. Basic encouragement ideas: taxonomies (browsing); formats; relational search; events; share & join in.
Promotion of discovery is very important. A good example is bookjetty.com, where formal MARC records are augmented by user tags/comments, like LibraryThing for Libraries. Bookjetty recognises where you are and presents you with options relevant to you. It brings users back to the library.
Libraries need to harvest, select & authenticate, meta-tag, create/maintain/grow taxonomies (they must be download-able!), and organise information content.
He encouraged connections (facilitated by libraries using Web 2.0): content-content; content-people; and people-people. Using tools like wikis, blogs & social spaces.
They also curate exhibitions relevant to topical and current events and to highlight their collections. All are eventually moved online in a virtual sense.
Reference services are provided within reach of everyone – wherever, whenever – by SMS as well as email and mobile phone. An SMS request is constrained to 160 characters. Answers are usually sent back as a URL within a template. If they provide a book's catalogue entry, they include a comment field for value-adding "Librarian's notes". It finishes with a feedback sheet that attempts to get to know the user better via three key questions – something like usefulness, whether the enquiry is finished, and other comments (I could not read them on the screen). He said user feedback is overwhelmingly positive.
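He didn't show the mechanics, but the 160-character SMS constraint implies something like the sketch below: truncate the answer text so that it and a link to the full answer fit in one message. This is purely my illustration – the template wording, function name and URL are invented, not the NLS service's actual format.

```python
# Sketch: fit a reference answer into one 160-character SMS by sending
# a short preamble plus a URL to the full answer.
SMS_LIMIT = 160

def make_sms_reply(preamble, url):
    """Truncate the preamble so that preamble + space + url fits in one SMS."""
    room = SMS_LIMIT - len(url) - 1  # one space between text and link
    if len(preamble) > room:
        preamble = preamble[:room - 3] + "..."
    return preamble + " " + url

reply = make_sms_reply(
    "Re your enquiry on early Singapore maps: see the catalogue entry at",
    "http://example.org/ref/12345",  # hypothetical answer URL
)
print(reply)
```

A real service would also need to handle multi-part messages and non-GSM characters, which reduce the per-message limit.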
They have an Infopedia, like our Encyclopedia, that was once buried in their website and can now be reached via Google, Yahoo and MSN (he calls them the "GYM space"). They've used a microsite to expose it to Google, so content can be found more easily on Google Maps, Google Earth, a Yahoo search, etc. Content usage has increased exponentially (160-fold). I wasn't sure how they managed to push the content to these search engines – maybe it's in the paper.
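I don't know how NLS actually did it, but one standard way to get deep content like an encyclopedia in front of search-engine crawlers is to publish an XML sitemap listing every entry's URL. A minimal sketch (the domain and entry URLs are hypothetical placeholders):

```python
# Sketch: generate a sitemap.xml so crawlers can discover encyclopedia
# entries that are otherwise buried behind a search interface.
from xml.etree import ElementTree as ET

def build_sitemap(urls):
    """Return a sitemaps.org-format <urlset> document as a string."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for url in urls:
        entry = ET.SubElement(urlset, "url")
        ET.SubElement(entry, "loc").text = url
    return ET.tostring(urlset, encoding="unicode")

entries = [
    "http://example.org/infopedia/entry/raffles-hotel",
    "http://example.org/infopedia/entry/merlion",
]
print(build_sitemap(entries))
```

The generated file is then referenced from the site's robots.txt (or submitted via the search engines' webmaster tools) so Google, Yahoo and MSN can crawl every entry directly.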
Collaborative research responses. Making wider use of librarians and even other users. There are multiple entry points and a network of specialists (community) that power it and moderate it. Based on a wiki. Community alerted by SMS/email to them and can come in and assist to make the full answer. Multi-user collaboration.
Conclusions
He urges support for librarians to initiate new projects, but says that we should not push too hard and allow for some experiments to fail. We also need to get to know users better and encourage information literacy. The basics are still needed.
Question responses:
He referred to the recent JISC/British Library report The Google generation is a myth – they are not that information (web) literate. One-stop shops don't work. He said that we need to be much more e-consumer friendly and connect via Facebook, etc. [The significance of this for research libraries is threefold: (1) they need to make their sites more highly visible in cyberspace by opening them up to search engines; (2) they should abandon any hope of being a one-stop shop; and (3) they should accept that much content will seldom or never be used, other than perhaps as a place from which to bounce.]
He was asked about Second Life and said that he thought it was something that a lot of users went into once and came out, then never returned. He suspects users are not serious about using it. (Apparently it has a huge “churn rate”.)