
Contents
  Introduction
  The Huzzah Method
  Appendix
  System Overview
  Design Process
  Evaluation
  Two Pilots
  Methodology
  Results
  Comparing User Populations
  Group Formation and Evolution
  User Interface
  References

"Huzzah for my thing:" Evaluating a Pilot of a Mobile Service in Kenya

Jonathan Ledlie
Nokia Research
Cambridge, Mass., USA
Email: [email protected]

Abstract—Deploying and evaluating a new technology is a major challenge in ICTD research. Introducing new technologies can be hampered by a lack of cultural insight, poor or delayed feedback, and limited evaluation procedures, among other factors. This short paper offers a model for introducing technology in developing regions that mitigates these factors. We call these steps the "Huzzah method," inspired by a quotation that rightly derides technology that is introduced from afar and poorly evaluated. The paper also includes selected portions from other work on Tangaza, whose design, implementation, and analysis followed the Huzzah method.

I. INTRODUCTION

In describing the conflicting views on designing and evaluating new ICTD systems, Michael Best succinctly captured how quantitative and qualitative researchers often view each other's work, as observed at ICTD 2009 in Doha:

    An additional tension emerged when those coming from the CS community criticized the social scientific work as lacking rigor or importance. More interestingly . . . was the opposite viewpoint of social scientists finding the work of computer scientists immature. A number of people in Doha described the technology papers to me thusly: "I wanted to build a technology to do this thing. So I started to build it. I did this. Then I did that. Then I did a bit more. Then it was built. Then I asked 10 people from Ghana if they liked my thing. Nine of them did.
    Huzzah for my thing." [1]

This paper asks: can we, as systems builders and as qualitative and quantitative researchers, really do any better than this? When introducing a new technology, the answer may be no. So what can be changed to turn this imposition of foreign technology and weak evaluation into a sound methodology? I suggest the following approaches, which I will illustrate with our Tangaza project:

• Use local team members
• Act on early feedback
• Acquire honest criticism
• Complement quantitative log data with qualitative surveys

Combining this list with a tongue-in-cheek parsing of the quotation, I suggest calling this methodology Huzzah.

In discussing the Huzzah method, I show how its methods were applied in the introduction of Tangaza, a "voice Twitter," in urban Kenya. The appendix includes a series of excerpts from a longer research paper on Tangaza [7]. In particular, the excerpts show how we used local members of our team to design and improve upon Tangaza throughout its piloting phase, and how our mixture of quantitative and qualitative methods complemented each other and provided a richer picture than would have been possible with either alone.

II. THE HUZZAH METHOD

"I wanted to build a technology to do this thing." A danger for any new technology is that the creator simply wants to build it, or to see if it can be built, or to apply some particular algorithm, regardless of the user need. While this may succeed in creating a new and useful artifact, it often results in a "hammer in search of a nail," i.e., a solution in search of a problem.

A particular challenge in an ICTD context is when the creator believes some yet-to-be-built technology will be useful, but has limited means to estimate its usefulness without a prototype. That is, walk-throughs, wizards, and surveys can hint at applicability, but nothing can replace having real people actually try the new piece of technology in their real context: the more radical it is, the more people will actually need to try it.
Where this becomes fuzzier is with underlying technologies, such as improvements to DTNs (e.g., [5]), where the direct impact on people often cannot be observed within the time frame of the research (or at least not until the end of the research). A particularly problematic area here is when the technology is entirely new. However, companies, even small ones, do take this kind of risk quite often.

[Fig. 1. Interviewing Tangaza trial users from the Huruma Slum in Nairobi, Kenya. We incorporated their feedback into the pilot's later stages.]

In building Tangaza, we found one way to mitigate this problem was to have our team include several people from the locale where the technology was (initially) targeted and piloted; in fact, 4/5 of our team are Kenyan. The original idea came from one of the Kenyan members, and he and the other Kenyan members helped guide the project toward a culturally appropriate solution. While the Kenyan members are from a different socioeconomic class than Tangaza's target group of lower-income users, these team members were a source of constant insight; for this reason, we did not fret over imposing a culturally inappropriate technology. Thus, when "building your thing," it is extremely helpful to have many if not most team members come from the place where it will be deployed. In cases where this is not possible (and it probably is possible), partnering with a local university or company may be the best alternative.

"So I started to build it." As with any project, this step must include an analysis of prior work: have others (not just academics) built anything similar in a similar context? Or are there qualitative analyses that can be used to make certain design decisions?

An additional step that we found particularly useful in Tangaza's development was user interviews performed directly by the team of designers and implementers.
While the initial presentation of the technology occurred at our lab, the interviews took place in the home of one of the trial participants, near where most of the participants lived. The users themselves made several suggestions for new features, which we could debate on the spot and have implemented and made part of the live pilot a few days later. Because members of our team spoke the same local languages and overlapped in age with many of the participants, it was easy to gather unstructured and open early feedback. In addition, many participants appeared to find this process empowering. This early feedback helped us improve the system at a much faster rate than if people had only been surveyed at the end of the three-month trial.

A related problem is balancing (a) acquiring feedback from end-users against (b) biasing their eventual opinion of the technology. By showing and