GW's 2016 Research Day featured a new award: the Nashman Prize for Community-Based Participatory Research (CBPR). CBPR is research on significant social issues conducted in collaboration with local residents, with the aim of identifying potential solutions and contributing to long-term, sustainable change in the community.
Eighteen studies were submitted to compete for the inaugural Nashman Prize. We offer our congratulations to the winners:
First Place: Shanna Helf, an undergraduate in the Human Services and Social Justice program, for her study, "Aging through Change: Gentrification, Social Capital, and Senior Citizens of Washington DC's Wards 1 and 6."
Second Place: Katherine Stasaki and Elsbeth Turcan, undergraduates in Engineering, for their study, "CAPITAL Words: Algorithmic Generation of Reading and Spelling Exercises for Low-Literacy Users."
See below for abstracts of each study.
Shanna Helf, "Aging through Change: Gentrification, Social Capital, and Senior Citizens of Washington DC's Wards 1 and 6."
This study investigated the social wellbeing of senior citizens in Wards 1 and 6 of Washington, DC, as affected by elements of gentrification and rapid urban change. Informed by literature from the fields of gerontology, human services, and urban studies, preliminary research shows that gentrification acts as a lifestyle barrier, inhibiting seniors’ interactions with their neighborhoods and their ability to age in place with familiar social support. To locate participants and identify areas of highest need, the researcher partnered with Age-Friendly DC and We Are Family, two prominent local organizations working toward inclusivity of seniors and intergenerational activity in DC. A mixed methods research design first utilized quantitative data from 600 responses to the Age-Friendly DC 2015 Livability Survey, identifying needs across all eight wards of the city. Second, qualitative data collected during focus groups with seniors from Wards 1 and 6 provided deeper understanding of the first-person experience of aging through gentrification. Initial themes include affordability, respect and inclusion, interracial and intercultural relations, and the deep desire for independent, purposeful, and supported aging. In an era of unprecedented growth of the senior demographic, the results yielded by this study may inform policymakers and direct service providers in Washington, DC; in addition, the questions raised about the role of seniors in changing urban contexts will have implications for cities nationwide.
Katherine Stasaki and Elsbeth Turcan, "CAPITAL Words: Algorithmic Generation of Reading and Spelling Exercises for Low-Literacy Users."
According to the American Library Association, 14% of adults in the United States cannot “search, comprehend, and use continuous texts” [1]. There is a significant opportunity for the development of technology to help improve literacy rates.
The goal of the CAPITAL project is to make high-quality learning resources accessible to users of all literacy levels. The project aims to automatically create exercises that will help users improve their reading skills. CAPITAL Words is a mobile application designed to deliver these exercises and evaluate a learner’s responses, with the aim of improving a novice reader’s phonemic awareness. Three types of exercises can be automatically generated:
Phoneme Swap is an exercise that takes a word and generates answer choices based on real words that differ from the base word by one phoneme. The student must either choose the correct spelling for a spoken word or choose the correct pronunciation for a word they read. For both variants, two types of questions are generated: vowel questions and consonant questions. Vowel questions find all single-syllable words that differ only by the vowel phoneme; consonant questions swap commonly confused letters (e.g., b/d/p, m/n, t/d).
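The abstract does not include implementation details, but the core lookup behind a Phoneme Swap question can be illustrated with a short sketch. The toy pronunciation dictionary, vowel set, and function names below are stand-ins for the example only; a real system would typically draw pronunciations from a resource such as the CMU Pronouncing Dictionary.

```python
# Toy ARPAbet-style pronunciations for a handful of words (illustrative only).
PRONUNCIATIONS = {
    "cat": ["K", "AE", "T"],
    "cut": ["K", "AH", "T"],
    "cot": ["K", "AA", "T"],
    "kit": ["K", "IH", "T"],
    "bat": ["B", "AE", "T"],
    "pat": ["P", "AE", "T"],
    "mat": ["M", "AE", "T"],
}

VOWEL_PHONEMES = {"AE", "AH", "AA", "IH"}  # small subset, for the example


def single_phoneme_diff(p1, p2):
    """Return the index where two equal-length pronunciations differ, or None."""
    if len(p1) != len(p2):
        return None
    diffs = [i for i, (a, b) in enumerate(zip(p1, p2)) if a != b]
    return diffs[0] if len(diffs) == 1 else None


def phoneme_swap_choices(base_word, vowel_question):
    """Collect real words differing from base_word by exactly one phoneme,
    keeping vowel swaps (vowel_question=True) or consonant swaps (False)."""
    base = PRONUNCIATIONS[base_word]
    choices = []
    for word, phones in PRONUNCIATIONS.items():
        if word == base_word:
            continue
        idx = single_phoneme_diff(base, phones)
        if idx is None:
            continue
        vowel_swap = base[idx] in VOWEL_PHONEMES and phones[idx] in VOWEL_PHONEMES
        if vowel_swap == vowel_question:
            choices.append(word)
    return choices


print(phoneme_swap_choices("cat", vowel_question=True))   # ['cut', 'cot', 'kit']
print(phoneme_swap_choices("cat", vowel_question=False))  # ['bat', 'pat', 'mat']
```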
Pick the Misspelling presents students with four words and their pronunciations. Students listen to each pronunciation and decide which word is misspelled. To ensure the questions are challenging, we developed an intelligent system for misspelling words.
Spell the Word is an exercise that shows students a word with one of its syllables replaced by blanks. Students hear the word spoken and must select letters from a given pool to spell the missing syllable correctly. When creating a question, we intelligently select a syllable to remove and then choose appropriate distractor letters, taking possible homophones into account.
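As a rough illustration of the Spell the Word step, the sketch below blanks out one syllable and assembles a letter pool that mixes the correct letters with distractors. The syllable split, distractor letters, and function name are assumptions made for the example; the abstract notes that the real system selects the syllable and distractors intelligently, including handling of possible homophones, which this toy version does not attempt.

```python
import random


def make_spell_the_word(word, syllables, blank_index, distractors, seed=0):
    """Blank out one syllable of the word and build a shuffled letter pool
    containing the correct letters plus distractor letters."""
    target = syllables[blank_index]
    display = "".join(
        "_" * len(s) if i == blank_index else s
        for i, s in enumerate(syllables)
    )
    pool = list(target) + list(distractors)
    random.Random(seed).shuffle(pool)
    return {"word": word, "display": display, "answer": target, "letter_pool": pool}


question = make_spell_the_word(
    word="basket",
    syllables=["bas", "ket"],   # hand-supplied syllable split for illustration
    blank_index=1,              # hide the second syllable: "bas___"
    distractors="cdp",          # letters meant to resemble plausible confusions
)
print(question["display"], question["letter_pool"])
```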
Truly effective algorithms would generate questions indistinguishable from human-created ones, which raises the question: can people tell the difference between human-made and algorithm-generated exercises? To test this, a survey was sent to sixteen participants asking them to decide whether a computer or a human had generated each question.
Results strongly suggest that our algorithms generate questions comparable to human-generated exercises. On average, participants did worse than chance in guessing whether a human or the algorithm generated a question (43% accuracy for the misspelling exercises and 36% for the spell-the-word exercises). This indicates that people were unable to clearly differentiate between the computer-generated exercises and those created by humans.
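To make clear what the reported percentages measure, here is a minimal sketch of how discrimination accuracy could be scored against the 50% chance baseline. The response data below are made up for the example and are not the study's data.

```python
def discrimination_accuracy(judgments, labels):
    """Fraction of responses where the participant correctly identified
    whether an item was 'human'- or 'computer'-generated."""
    correct = sum(j == t for j, t in zip(judgments, labels))
    return correct / len(labels)


# Hypothetical responses for one participant on five items (not real study data).
labels    = ["human", "computer", "computer", "human", "computer"]
judgments = ["computer", "computer", "human", "human", "human"]
print(discrimination_accuracy(judgments, labels))  # 0.4, i.e. below the 0.5 chance level
```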