cognition, language, neuroscience

Learning nouns activates separate brain region from learning verbs

Another MRI study, this time investigating how we learn parts of speech:

The test consisted of working out the meaning of a new term based on the context provided in two sentences. For example, from “The girl got a jat for Christmas” and “The best man was so nervous he forgot the jat,” the noun jat means “ring.” Similarly, with “The student is nising noodles for breakfast” and “The man nised a delicious meal for her,” the hidden verb is “cook.”

“This task simulates, at an experimental level, how we acquire part of our vocabulary over the course of our lives, by discovering the meaning of new words in written contexts,” explains Rodríguez-Fornells. “This kind of vocabulary acquisition based on verbal contexts is one of the most important mechanisms for learning new words during childhood and later as adults, because we are constantly learning new terms.”

The participants had to learn 80 new nouns and 80 new verbs. As they did, brain imaging showed that the new nouns primarily activated the left fusiform gyrus (the underside of the temporal lobe, associated with visual and object processing), while the new verbs activated part of the left posterior medial temporal gyrus (associated with semantic and conceptual aspects) and the left inferior frontal gyrus (involved in processing grammar).

This last bit was unexpected, at first. I would have guessed that verbs would be learned in regions of the brain associated with motor action. But according to this study, verbs seem to be learned only as grammatical concepts. In other words, knowledge of what it means to run is quite different from knowing how to run. Which makes sense if verb meaning is accessed by declarative memory rather than procedural memory.

Standard
language

Audio analysis of baby cries can differentiate "normal" fussiness from pain


The team employed a sound-pattern-recognition approach that uses statistical analysis of cry frequencies and of the power function of the audio spectrum to classify different types of crying. They were then able to correlate the different recorded audio spectra with a baby’s emotional state, as confirmed by the child’s parents. In their tests, recordings of crying babies with a painful genetic disorder were used to make the difference between the babies’ pained cries and other types of crying more obvious. They achieved a 100% success rate in validation when classifying pained versus “normal” cries.
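The article doesn’t give the team’s actual algorithm, but the general idea — summarize each cry by its power spectrum, then match it against known classes — can be sketched. A minimal toy version, assuming simple spectral features (dominant frequency and spectral centroid) and nearest-centroid matching; the function names, sample rate, and pure-tone “cries” are all mine:

```python
import numpy as np

def power_spectrum_features(signal, rate=8000):
    """Summarize a recording as two spectral statistics:
    dominant frequency and spectral centroid of the power spectrum."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
    dominant = freqs[np.argmax(spectrum)]
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)
    return np.array([dominant, centroid])

def nearest_centroid_classify(features, centroids):
    """Assign the feature vector to the class with the closest centroid."""
    labels = list(centroids)
    dists = [np.linalg.norm(features - centroids[label]) for label in labels]
    return labels[int(np.argmin(dists))]

# Toy data: model a "pain" cry as a higher-pitched tone than ordinary fussing.
rate = 8000
t = np.arange(rate) / rate
fussy = np.sin(2 * np.pi * 400 * t)
pain = np.sin(2 * np.pi * 900 * t)

centroids = {
    "fussy": power_spectrum_features(fussy, rate),
    "pain": power_spectrum_features(pain, rate),
}

# An unlabeled cry whose pitch sits near the "pain" prototype.
unknown = np.sin(2 * np.pi * 880 * t)
print(nearest_centroid_classify(power_spectrum_features(unknown, rate), centroids))
# prints "pain"
```

Real cries are of course broadband and noisy, so an actual system would use many more spectral features and a trained statistical classifier — but the pipeline shape (spectrum, features, match) would look much like this.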

I’m a new parent of twin boys, and I could really use something like this. But it would be even better if the algorithm could break down the “normal” cries into specific needs. Mr. Nagashima, you are doing God’s work; faster, please.

Standard
language, neuroscience, perception, physiology

Skin receptors may contribute to emotion

Interoception, the perception of internal feelings, is a funny thing. From our point of view as feeling beings, it seems entirely distinct from exteroceptive channels (sight, hearing, and so on). Interoception is also thought to be how we feel emotions, in addition to bodily functions. When you feel either hungry or lovesick, you are perceiving the state of your internal body, organs, and metabolism. A few years ago it was discovered that there are neural pathways for interoception distinct from ones used to perceive the outside world.

Interesting new research suggests that mechanical skin disturbances caused by pulsating blood vessels may significantly contribute to your perception of your own heartbeat. This is important because it means that skin may play a larger role in emotion than has been previously thought.

The researchers found that, in addition to the pathway involving the insular cortex of the brain — the target of most recent research on interoception — there is a second pathway that contributes to feeling your own heartbeat. It runs from fibers in the skin, most likely to the somatosensory cortex, a part of the brain involved in mapping the outside of the body and the sense of posture.

This sounds surprising at first, but it makes perfect sense. There have been other instances where the functionality of perceptual systems overlaps. For example, it’s been found that skin receptors contribute to kinesthesia: as the joints bend, sensations of skin stretch are used to perceive joint angles. This was also somewhat surprising at the time, because it was thought that perception of one’s joint angles arose exclusively from the receptors in the joints themselves. The same phenomenon, of skin movement being incidentally involved in some other primary action, is at work here. We might be able to say that any time the skin is moved perceptibly, cutaneous signals are bound up with the percept itself.

In fact, I think this may be a good object lesson in how words about feelings can be very confusing. A few years ago, before the recent considerable progress in mapping the neural signature of interoception, the word ‘interoception’ was used to describe a class of perceptions—ones whose object was the perceiver. Interoception meant the perception of bodily processes: heartbeat, metabolic functioning, and so on. When scientists discovered a neural pathway that serves only this purpose, the word suddenly began to refer not to the perceptual modality, but exclusively to that neural pathway. Now that multiple pathways have been identified, the word will go back to its original meaning: a class of percepts, rather than a particular neural conduit.

Standard
art, language, tactility

"And reaching up my hand to try, I screamed to feel it touch the sky."

Check out this beautiful kinetic typography piece by Heebok Lee:

It’s based on an excerpt of the poem “Renascence” by Edna St. Vincent Millay.

renascence
noun
1. the revival of something that has been dormant.
2. another term for ‘renaissance.’
(Oxford English Dictionary)

Millay, who wrote the poem when she was only 20 years old, originally called it “Renaissance.” It’s interesting that the two words are so close in meaning and are pronounced almost the same way, but they’re not considered alternate spellings of the same word.

Edna St. Vincent Millay on a terrace.

Click below to read the poem in its entirety. I highly recommend reading the whole thing.
Continue reading

Standard
language

The meaning of 'most'

William Shakespeare, who knew a thing or two about words, advised that “An honest tale speeds best, being plainly told.” But the exact meaning of plain language isn’t always easy to pin down. Even simple words like “most” and “least” can vary greatly in definition and interpretation, and are difficult to put into precise numbers.

Until now.

Thrilling!

In a groundbreaking new linguistic study, Prof. Mira Ariel of Tel Aviv University’s Department of Linguistics has quantified the meaning of the common word “most.” [The study] “is quite shocking for the linguistics world,” she says.

“I’m looking at the nature of language and communication and the boundaries that exist in our conventional linguistic codes,” says Prof. Ariel. “If I say to someone, ‘I’ve told you 100 times not to do that,’ what does ’100 times’ really mean? I intend to convey ‘a lot,’ not literally ’100 times.’ Such interpretations are contextually determined and can change over time.”

I’ve noticed that I exaggerate modally—I choose a number and run with it for a while. Currently it’s 5, as in, “I’ve told you 50 times; I had to wait for five hours.” I don’t mean some specific number, I just mean to use it as a placeholder for exaggeration purposes. There must be a term for this. Linguists?

When people use the word “most,” the study found, they don’t usually mean the whole range of 51-99%. The common interpretation is much narrower, understood as a measurement of 80 to 95% of a sample — whether that sample is of people in a room, cookies in a jar, or witnesses to an accident.
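The gap between the logical reading of “most” and the empirical one can be made concrete. A minimal sketch — the 80–95% band is the study’s reported range, and the function names are mine:

```python
def is_most_logical(k, n):
    """'Most' as bare majority: strictly more than half of the sample."""
    return k / n > 0.5

def is_most_empirical(k, n):
    """'Most' as hearers reportedly interpret it: roughly 80-95% of the sample."""
    return 0.80 <= k / n <= 0.95

# 52 of 100 cookies is a majority, but not what a hearer pictures as "most".
print(is_most_logical(52, 100), is_most_empirical(52, 100))  # True False
# 90 of 100 satisfies both readings.
print(is_most_logical(90, 100), is_most_empirical(90, 100))  # True True
```

So a speaker who says “most of the cookies are gone” when 52 of 100 are gone is telling the truth under the dictionary definition while still misleading the listener — exactly the kind of mismatch the study quantifies.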

So many problems are caused when we try to communicate with words about whose meaning we think we agree when actually we don’t agree at all. But Professor Mira Ariel is helping sort it out by empirically determining what it is that we mean. Wittgenstein showed that the meaning of words cannot extend beyond how they’re used. So empirical studies like this one can help us immensely. I’m betting this kind of research will also help artificial intelligence research.

“‘Most’ as a word came to mean ‘majority’ only recently. Before democracy, we had feudal lords, kings and tribes, and the notion of ‘most’ referred to who had the lion’s share of a given resource — 40%, 30% or even 20%,” she explains. “Today, ‘most’ clearly has come to signify a majority — any number over 50 out of a hundred. But it wasn’t always that way. A two-party democracy could have introduced the new idea that ‘most’ is something more than 50%.”

I can’t tell from this short article whether Professor Ariel has done research to support her assertion that modern democracy really is the source for the lexical definition of “most” as meaning between 51% and 100%. But if true it’s pretty interesting because it shows that the word “most” may be political—that is, an expression of power or authority—rather than geometrical or mathematical, which is what I had always assumed.

Here’s the full article.

Standard
language

Questions

Over dinner last night my friends and I got into a heated discussion about how to respond when someone asks a question that entails a rhetorical agenda. When a person on the street says, “Do you have any change?”, it poses a problem because you don’t want to lie as a matter of principle (because you have some) and you don’t want to tell the truth out of consideration for politeness and personal safety (because you have no intention of handing it over). Needless to say, we didn’t figure anything out.

The conversation reminded me of this scene from one of my favorite movies, Rosencrantz & Guildenstern Are Dead.

Standard
gesture, language, neuroscience

Gestures and words are neurologically similar


Two types of gestures were considered for the study: pantomimes, which mimic objects or actions, such as unscrewing a jar or juggling balls, and emblems, which are commonly used in social interactions and which signify abstract, usually more emotionally charged concepts than pantomimes. Examples include a hand sweeping across the forehead to indicate “it’s hot in here!” or a finger to the lips to signify “be quiet.”

Current thinking in the study of language is that, like a smart search engine that pops up the most suitable Web site at the top of its search results, the posterior temporal region serves as a storehouse of words from which the inferior frontal gyrus selects the most appropriate match. The researchers suggest that, rather than being limited to deciphering words alone, these regions may be able to apply meaning to any incoming symbols, be they words, gestures, images, sounds, or objects.

It doesn’t surprise me that a widely held theory of language is based on our understanding of how search engines work, because we tend to conceptualize our world with metaphors based on technology. But this suggests that many of our abstract theories might be pinned to planned obsolescence schedules, which is kind of amusing.

Standard