The computing devices with which we interact daily continue to become smaller, more intelligent, and more pervasive. Some are becoming aware of our affective state, even understanding how we might feel at any given moment. Soon, a wrist-worn device might modify numerous aspects of its wearer's environment to suit that state. Affective computing, though research in the field has burgeoned over the last two decades, is still a young field, one that remains limited by the need for large sets of diverse, naturalistic, and multimodal affect data.
This talk first considers effective strategies for designing psychophysiological studies of human response to media, strategies that permit the collection and dissemination of very large samples crossing numerous demographic boundaries, data collection in naturalistic environments, distributed study locations, rapid iteration on study designs, and the simultaneous investigation of multiple research questions. Only through such a flexible framework for developing and executing studies at this scale will we begin to fill affective computing's need for these kinds of data. As a concrete evaluation of our proposed strategies for collecting and disseminating data at this scale, we present a new dataset from our large-scale study of human psychophysiological response to musical affective stimuli.
Next, because music is an excellent tool for investigating response to affective stimuli, we use these data to explore how to design more effective affective computing systems. Prior work has demonstrated that certain musical selections elicit reliable affective responses in listeners, but comparatively little work has explored how these relationships behave continuously over time. The remainder of this presentation covers our early work in identifying and characterizing the most significant of these relationships, with a focus on music that previous literature has shown to be calming or relaxing and on the physiological responses it elicits.