Sessions home

About

Listening to music isn’t always a solitary activity, nor is it always done through headphones. Music can be a social process, and we want to build a service tailored to that reality: whether you are throwing a dance party in your living room or hosting a dinner with coworkers, Sessions is designed to curate playlists and play music that meets the group’s needs.

For more information, I’ve compiled a report detailing the design process and a number of the UX design elements that will be addressed in the final product. The full PDF report is available here, as well as a presentation on the project.

Background

Sessions is the manifestation of a research project between Jared Bauer and myself. An early prototype explored a mood-detection algorithm for groups of people, but with numerous issues left unresolved, those early mood-detection algorithms were discarded. Incidentally, through the research project, we uncovered a void in available music streaming services: most systems focus on streaming songs based on an individual’s preferences, treating music listening as an individualistic endeavor and not enabling input or feedback in group settings.

Jared and I built a proof-of-concept prototype. It was a hacked-together system that used a Processing script to extract audio features from a crowd (used to detect a mood) and a Python script to associate tags (pulled from Last.FM) with mood values. The Python script would then use the current mood to play appropriate music via Spotify. At the time, we were exploring the possibility of using audio to detect the mood of a group. While initial results were promising, the project never gained traction. However, through the process, we discovered that we had a unique solution to another question: how do you determine the musical preferences of a group? Using tag frequency and cosine similarity, with mood as an underlying framework for tag comparison, we uncovered a unique and promising method for answering that exact question.
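To make the tag-frequency idea concrete, here is a minimal sketch of how cosine similarity can score a candidate song against a combined group profile. The tag names and counts are invented for illustration; a real system would pull them from Last.FM, and the way the group profile is built (here, simply summing each listener’s tag counts) is one of several possible choices.

```python
from collections import Counter
from math import sqrt

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse tag-frequency vectors."""
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Hypothetical Last.FM-style tag counts for two listeners' libraries.
alice = Counter({"upbeat": 12, "electronic": 8, "dance": 5})
bob = Counter({"upbeat": 4, "folk": 9, "mellow": 6})

# Build a group profile by summing tag counts, then score a candidate
# song's tags against that combined profile.
group = alice + bob
song_tags = Counter({"upbeat": 3, "dance": 2})
score = cosine_similarity(group, song_tags)
```

Ranking a pool of candidate songs by this score gives a playlist ordering that reflects the whole group’s tags rather than any one listener’s.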

Discarding the initial research focus, I have re-framed Sessions as a design project aimed at building a system that supports playlist curation and real-time feedback for music listening in group environments.

Design Process

User Research

Early user research emerged from the initial research agenda. Our proof-of-concept was demonstrated several times and once ran as an art installation in a gallery during an open house. It was during these demonstrations that Jared and I began to understand the significance that a system enabling music curation for groups could have.

Participant Observations: To understand music listening behavior in group situations, I initially gathered information as a participant observer in several relevant situations.

  • The Dinner Party: To celebrate a birthday, a friend hosted a dinner party. During the meal, music was largely an ambient part of the evening, played through speakers via iTunes on one of the housemates’ laptops. The setup was located in the living room and off to the side, with the laptop placed precariously on top of a record player and an old tuner. After dinner, two people decided they wanted to have a dance party. They entered the living room, changed the song, and turned up the speakers (quite a bit). This worked as intended: people began to filter into the living room and start dancing. Throughout the evening, people took turns walking up to the laptop and adding songs to a playlist that was quickly being populated for the party. The music spanned a number of genres, including EDM, bluegrass, rap, and pop. Occasionally, someone would try to change the song, only to discover that others were still enjoying it, though most of the time the change was welcomed by the group. Perhaps it was the setting, but there always seemed to be one or two people around the laptop adding songs or re-ordering the playlist.
  • The Car Ride: Cars present an interesting use case. The dynamics of who gets to play music shift because of the social dynamics of the driver-passenger relationship. A simpler example is temperature control: as a passenger, you let the driver set the temperature. You may comment and suggest that it is warm or cold, but it is still within the driver’s control to change that setting. (As a passenger, you can add or remove layers to mediate the driver’s preferences.) In extreme circumstances, you can override the driver, but that tends to be a last-ditch effort. Music listening happens in a similar manner. I witnessed on numerous occasions the driver staring at their phone (and not the road, as they should) trying to determine what music to play, be it a playlist on their phone or a radio station on Pandora. It was rare to get input from the passengers on the music, even if a passenger was not particularly fond of a song. On the other hand, the driver was much more willing to comment on the music being played. In some circumstances, a passenger would take control of the music with the driver’s permission, yet the driver’s perceived interests would still greatly influence the passenger’s decision of what music to play.
  • At the Cafe: While a slightly different situation, I was interested in music at cafes and other public places. Here, the music is controlled by an individual (or a subgroup, such as the staff). The group has little say in the music being played, but the controller seems to be acutely aware of the environment and adjusts the music accordingly. For instance, during busy times the music would get louder, faster, and much more lively; during slow periods, it would shift to follow suit. Ultimately, while an interesting use case, I’ve decided that supporting public settings such as a cafe is outside the scope of this design project.

Survey: I was starting to dig into early prototypes when I had a realization: I was largely depending on the mood framework that Jared and I had started working with. The goal was to create a system that would enable users to influence the mood (and, by extension, the upcoming songs). I think it is a fair assumption that individuals use mood as a language for thinking about music. However, in my observations, I did not hear anyone discuss the current mood or the mood of a particular song. I was interested to see what language people actually use to talk about music, so I threw together a quick survey. I posted it on Facebook to elicit as many responses as possible in a short window of time. It was a quick-and-dirty method, and I wasn’t too interested in deeply analyzing the responses; I wanted the data to confirm my initial hunches and give me a perspective on the type of language people use to describe a song. The survey was short: I simply asked participants to list 5 songs they remembered listening to recently, then, on the following page, to list the first 5 words that came to mind for each song. There were a number of flaws in the survey, and if done again, I would definitely change the way the content was presented. However, the results still proved useful. I now have a set of descriptors for a set of songs, and it is interesting to see just how varied the responses are. While some participants did use terms that can be associated with affect, many did not.

The survey led to a shift in the overall design. To this point, I had been assuming that participants would be interested in influencing the music by controlling the mood. But that is not the language people use to describe music, especially in a group. Think about the last time someone asked you what you wanted to listen to in the car. How did you respond? I highly doubt you said, “I want to listen to something happy!” As a result, I am currently ideating new methods for users to influence the playlist curation.

Sketching

I am continually updating my sketches; eventually I will upload some samples here to better document the process. For now, I am focusing on user input: what will it entail and how will it look? At this point, I have nailed down a general form for the system. Music will be played via a web server, and users will be able to connect to the server via a mobile application on a local network (limiting participation to collocated individuals). However, that’s about all I’ve got at this point. Lots of ideas, though!

Prototypes

Sessions overview (prototype diagram)

References

Bauer, J.S., Jansen, A., and Cirimele, J. (2011). MoodMusic: A Method for Cooperative, Generative Music Playlist Creation. Adjunct Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, pp. 214-218. [acm]