
Context Data in Geo-Referenced Digital Photo Collections

Mor Naaman, Susumu Harada, QianYing Wang†, Hector Garcia-Molina, Andreas Paepcke
Stanford University
mor, harada, hector, [email protected], †[email protected]

ABSTRACT
Given time and location information about digital photographs we can automatically generate an abundance of related contextual metadata, using off-the-shelf and Web-based data sources. Among these are the local daylight status and weather conditions at the time and place a photo was taken. This metadata has the potential of serving as memory cues and filters when browsing photo collections, especially as these collections grow into the tens of thousands and span dozens of years.

We describe the contextual metadata that we automatically assemble for a photograph, given time and location, as well as a browser interface that utilizes that metadata. We then present the results of a user study and a survey that together expose which categories of contextual metadata are most useful for recalling and finding photographs. We identify, among still-unavailable metadata categories, those that are most promising to develop next.

Categories and Subject Descriptors
H.5.1 [Information Systems Applications]: Information Interfaces and Presentation—Multimedia Information Systems

General Terms
Human Factors

Keywords
geo-referenced digital photos, photo collections, context

1. INTRODUCTION
Managing personal collections of digital photos is an increasingly difficult task. As the rate of digital acquisition rises, storage becomes cheaper, and "snapping" new pictures gets easier, we are inching closer to Vannevar Bush's 1945 Memex vision [3] of storing a lifetime's worth of documents and photographs.
At the same time, the usefulness of the collected photos is in doubt, as the methods of access and retrieval are still limited.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. MM'04, October 10-16, 2004, New York, New York, USA. Copyright 2004 ACM 1-58113-893-8/04/0010 ...$5.00.

The existing approaches to the photo collection management problem can be categorized into three main thrusts. First, there are tools to enable and ease annotation [16]. However, annotation is still cumbersome and time-consuming for consumers and professionals alike. Second, methods have been developed for fast visual scanning of the images, like zoom and pan operations [2]. These tools may not scale to manage tens of thousands of images. Finally, image-content-based tools [18] are not yet, and will not be in the near future, practical for meaningful organization of photo collections. While low-level features can be extracted, the semantic gap between those and recognizing objects (and, furthermore, topics) in photographs is still wide.

The good news is that automatically collected metadata has been shown to be helpful in the organization of photo collections.
In [6, 7] we demonstrate how the timestamp that digital cameras embed in every photo is effective in the construction of browsers over photo collections that have not been annotated manually.

Beyond time-based automatic organization, technology advances have made it feasible to add location information to digital photographs, namely the exact coordinates where each photo was taken.¹ As location is one of the stronger memory cues when people recall past events [19], location information can be extremely helpful in organizing and presenting personal photo collections.

We have implemented PhotoCompas [13], a photo browser that exploits such geo-referenced photographs. PhotoCompas uses the time and location information to automatically group photos into hierarchies of location- and time-based events. The system was proven effective [11, 13] for users browsing their personal collections.

We have now extended PhotoCompas: in addition to employing the time and location metadata to automatically organize photo collections, the system deploys time and location as generators of context. We extract additional metadata about each photo from various sources, and then integrate that metadata into the browser's interface. The metadata provides context information about each photo. For example, we obtain weather information about each photo: the time and place where the photo was taken allow us to retrieve archival data from weather stations that are local to the photo's exposure location. Similarly, given time and location, we automatically obtain the time of sunrise and sunset on the day the photo was taken.

¹There are a number of ways to produce "geo-referenced photos" using today's off-the-shelf products. For a summary, see [17].

Figure 1: The metadata categories generated by our system, as shown in the interface opening screen.

Figure 2: A subset of the "Sri Lanka dusk photos" from the first author's collection, detected using contextual metadata.
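The daylight-status lookup described above reduces to comparing a photo's timestamp against the local sunrise and sunset times at the exposure location. The sketch below is only an illustration of the idea, not the authors' implementation; the category names, the 30-minute twilight window, and the example times are assumptions for the example.

```python
from datetime import datetime, timedelta

def light_status(taken, sunrise, sunset, twilight=timedelta(minutes=30)):
    """Classify a photo's local light status from its timestamp and the
    sunrise/sunset times at the exposure location (illustrative categories)."""
    if sunrise <= taken < sunset:
        return "day"
    if sunset <= taken < sunset + twilight:
        return "dusk"
    if sunrise - twilight <= taken < sunrise:
        return "dawn"
    return "night"

# Example: a photo taken ten minutes after sunset falls in the dusk window.
sunrise = datetime(2004, 3, 14, 6, 2)
sunset = datetime(2004, 3, 14, 18, 21)
print(light_status(datetime(2004, 3, 14, 18, 31), sunrise, sunset))  # dusk
```

In practice the sunrise and sunset times themselves would come from an almanac service or a solar-position calculation for the photo's coordinates and date.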
PhotoCompas integrates this contextual information in its user interface (see Figure 1). In Section 2 we briefly describe the metadata we thus assemble for each photograph.

While the contextual metadata can serve well as memory cues, it can also imply the content of the image. For example, by clicking on the Dusk entry in the Light Status section of Figure 1, a photographer requests to see only photos that were taken when dusk had fallen at the exposure location. Figure 2 shows the result of this action, further restricted to show only photos from Sri Lanka.

Our focus here is on (i) measuring how effective users believe our particular set of contextual metadata to be for photo retrieval, (ii) observing which of this metadata they actually take advantage of when searching through our interface, and (iii) exploring what other contextual metadata would be profitable to capture in the future.

To this end we gathered and analyzed several data sets by means of a user study and a separate survey. In the user study we had subjects find photographs from their own collection by interacting with our metadata-enriched PhotoCompas browser. We recorded their paths through the interface. Also in the user study, we asked each participant to grade 24 potential categories of contextual information by how useful they
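The Figure 2 interaction, restricting the view to dusk photos from Sri Lanka, amounts to a conjunctive filter over each photo's contextual metadata record. A minimal sketch of that idea follows; the field names and sample records are illustrative assumptions, not the paper's actual data model.

```python
# Each photo carries its automatically derived contextual metadata
# (illustrative fields; the real system derives these from time + location).
photos = [
    {"id": 1, "country": "Sri Lanka", "light": "dusk"},
    {"id": 2, "country": "Sri Lanka", "light": "day"},
    {"id": 3, "country": "India", "light": "dusk"},
]

def filter_photos(photos, **criteria):
    """Return the photos matching every metadata criterion (AND semantics)."""
    return [p for p in photos
            if all(p.get(key) == value for key, value in criteria.items())]

# Clicking "Dusk" under Light Status, then restricting to Sri Lanka:
print(filter_photos(photos, country="Sri Lanka", light="dusk"))
# [{'id': 1, 'country': 'Sri Lanka', 'light': 'dusk'}]
```

Each click in the interface simply tightens the criteria, so successive selections compose naturally.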

