To launch my blog, I’d like to turn first to studies in neuroscience, which recently suggested that direct measurement of neural activity in the superior temporal gyrus (STG) can allow scientists to correlate brain activity with continuous speech — or to put it in clearer terms, to “read” someone’s thoughts. In a recent post, philosopher and NPR blogger Alva Noë discussed the implications: “If we can ‘read off’ experienced words by direct measurement of the brain, then it may be just a matter of time before we can determine by similar measurements what speech forms are being entertained in voluntary thought.” Undoubtedly, medical and psychiatric advances would follow.
But Noë attaches an important disclaimer to this research — “You are not your brain,” he writes. Our thoughts are much more than a lumpy pile of synapses; they are the composite of our bodily movements, environmental situation, and sensations. Neuroscientists, then, don’t claim to understand a person’s thoughts based on these scans — instead, they might simply try to help someone who has trouble communicating those thoughts.
That is all very well and good, but what might happen when you apply this kind of data-centric thinking to music? One of the most difficult hurdles classical musicians face is the need to distance oneself, to an extent, from the emotional affect of the music. In other words, a performer cannot get so lost in the beauty of a Rachmaninoff symphony that she fails to remember the technical side of her playing — how to move the bow in the precise way that produces the right tone, or how to articulate a note with delicacy on a temperamental woodwind instrument. Those technical aspects are crucial to effective communication.
So can we never actually fuse thought and emotion with perfection of form? Must we always sacrifice something in trying to communicate? We would do well to remember that, just as speech sounds reconstructed from acoustic events don’t reveal everything about thought, music (as well as our reception of it) does not confide every detail, either. While music as a “speech act” conveys more about a person’s emotional state than language typically can, that added “information” actually widens the gaps in our understanding. In other words, factors like pronunciation, facial expression, and inflection govern linguistic communication beyond the mere words — but in musical communication, we’re asked to look even further than that. The more we’re given, the less we know.
Could we ever use brain scans to determine what musical sounds a person is hearing? And would that even be worthwhile? Music therapists might benefit from research along those lines if scientists were able to determine the ways in which hearing music “in your head” (i.e., in the absence of actual sound) can or cannot produce the same affect as playing it or listening to it. The brain responds to imagined activity much as it does to real activity, but it would be another thing entirely if we could learn to analyze brain scans for mentally reconstructed sound as opposed to live sound. Something like that might even lead to a culture surrounding aural imagery — specifically, how playing mental “games” with sound and music (replaying it, changing keys, imagining new or different timbres) might foster a new kind of communication with oneself and with others. Very cool indeed!