Why stress hormones and the fight-or-flight response are part of ADHD “teaching”
Here’s something that happens to ADHD children a lot: Getting pushed beyond their limits by accident. Here’s how it works and why it’s so bad.
The child says, “I can’t do this.” Adult (teacher or parent) does not believe it, because Adult has seen Child do things that Adult considers more difficult, and Child is too young to properly articulate why the task is difficult.
Adult decides that the problem is something other than true inabilities, like laziness, lack of self-confidence, stubbornness, or lack of motivation.
Adult applies motivation in the form of harsher and harsher scoldings and punishments. The child becomes horribly distressed by these punishments. Finally, the negative emotions produce a wave of adrenaline that temporarily repairs the neurotransmitter deficits caused by ADHD, and the Child manages to do the task, nearly dropping from relief when it’s finally done.
The lesson the Adult takes away is that Child was able to do it all along, the task was quite reasonable, and Child just wasn’t trying hard enough. Now, surely Child has mastered the task and learned the value of simply following instructions the first time.
The lessons Child takes away? Well, it varies, but it might be:
- How to do the task while in a state of extreme panic, which does NOT easily translate into doing the task when calm.
- Using emergency fight-or-flight overdrive to deal with normal daily problems is reasonable and even expected.
- It’s not acceptable to refuse tasks, no matter how difficult or potentially harmful.
- Asking for help does not result in getting useful help.
Not mine, source:
A lot of us Fucking Love Science, and I’m glad about that, but overly romantic thinking about Science leaves us naive in a world that exploits naivety. This (happily) working scientist explains the rat-race aspects of everyday vocational science, with special insight into the perverse role of journalists as sloppy, inaccurate promoters of studies in search of public attention. Journalists shake dollars loose with exciting stories that are sometimes even partly true. This cycle points to the danger of scientists getting sloppy too: rushing to publish, making headlines sound sexy, and emphasizing results that are rarely replicable, just to keep playing the (now degraded) game. We should think about this stuff. Over a long enough period, that’s a death spiral for what we fucking love about science.*
People think my job is to search for deep truths, understand the meaning of life and how the world and the universe works.
In reality, my main job is to write papers and get grants so my institution can build its reputation and get money. Most research that is published is wrong in some way (except for analytical work on theory), and not because we are being dishonest. It’s because we need to publish a lot, and there really isn’t time to zero in on some ultimate truth; we just need to get things right enough to publish. And even if we had all the time in the world, there is the issue of experimental design. For one thing, experimental design is very subtle and difficult, and most of the papers I read didn’t really do the right experiment to support their point. As a reviewer, you can ask the authors to do more experiments, but as you can imagine, that is not your favorite thing to read when your own paper is reviewed, because it means more time and money you probably don’t have.
That brings me to the other problem, even if you do have the time, do you have the money? Enough people? The right equipment? An infinite number of just the right test subjects? You see where this is going…
So we write our paper and try to make as big a splash as we can so we can get promoted in our jobs, get tenure, and get more grants. (Also because we want to get the information out there for others to use; research that is unpublished is just a hobby.) The institution wants to peddle this work for more clout, so they write up a press release that over-simplifies the research and over-extrapolates the possible importance of the findings. This is sent to journalists who don’t read the actual paper but instead report, based on the press release, an even more simplified version of it, now with wildly speculative implications for humanity, the earth, and/or understanding the universe.
This is how you can publish a paper that shows that mice react kind of funny in a statistically significant way to being exposed to a chemical that is commonly used in ink, and then you read in the New York Times about how using ballpoint pens will kill you.
EDIT: My main point here is that people should be skeptical about science, particularly what they read that is written by journalists – whether news articles or actual books. Popular science books were once written by the scientists themselves; now they are more often written by journalists. And there are people who are motivated more by telling a good story than by showing what science is really like – full of mistakes, uncertainty, and the normal pressures of any career; it is not just the pure search for truth.
But I don’t mean at all to say that being a scientist is terrible (I feel the opposite); I just don’t like the over-simplified way people view science as some pure source of distilled, perfect truth. It’s not, but that’s not a bad thing. Understanding life is not simple, nor is understanding how the universe works. The fact that it is so complex is what makes being a scientist fascinating – and what makes writing a best-selling book that tells the whole story a bit harder.
*- Like so many other things that depend on stockholders for sufficient funding, the important work becomes hollowed out inside and drifts further from its purpose.
Psycho comes from the Greek word psykho, which means mental. The Greek root word path can mean either “feeling” or “disease.” So psychopath is a word meaning “mental illness.” “Sociopath” is not a clinical term, and it is a no-no for mental health professionals to use it. However, I am NOT a mental health professional, and the name is rather on point about the issue: sick towards society, towards people. In the 1830s this disorder was called “moral insanity.” By 1900 it was changed to “psychopathic personality.” More recently it has been termed “antisocial personality disorder” in the DSM-III and DSM-IV.
DSM-IV Definition: Antisocial personality disorder is characterized by a lack of regard for the moral or legal standards in the local culture. There is a marked inability to get along with others or abide by societal rules.
It’s easy to take the DSM on faith, at face value, as sufficient authority to settle the issue of who is or isn’t a psychopath, but the needs of the mental health community and of others who have to deal with psychopaths don’t line up perfectly. The DSM criteria depend heavily on observed behaviors, while law enforcement and criminal justice must often predict behavior based on personality characteristics.
To love somebody
who doesn’t love you
is like going to a temple
and worshipping the ass
of a wooden statue
of a hungry devil.
– Lady Kasa
At your service!
The term “grey goo problem” was coined by nanotechnology pioneer K. Eric Drexler in his 1986 book Engines of Creation. It supposed a self-replicating nanobot going out of control and, in “sorcerer’s apprentice” fashion, recognizing NO stopping point for self-duplication. The Earth is left as a lifeless desert of “grey goo,” composed of the bodies of the nanobots.
This scenario joined the library of science fiction plots, where it continues to appear. In 2004 Drexler stated, “I wish I had never used the term ‘gray goo’.” He was probably conducting a form of due diligence by considering bad outcomes as well as good. As time has gone by, grey goo has been debunked as a concern in various ways (look it up if you are interested; I’m not here to explore all that).
But there are other environments and other bots.
The Internet became a primary human environment at lightning speed, filling up with websites that represented more and more real societal institutions. Initially, they were mostly billboards for providing information. Gradually these online presences became interactive and even took over as the “real world,” the business end of everything. The isolation of solitary individuals running errands on the web protected society up to this point. That collapsed when we embraced social media and were reborn as mobs composed of socially isolated individuals.
Today is the first day of Fall.
My 3-year-old son and I went to the beach and ate hamburgers in the car.
Then crossed the railroad bridge to the saw grass and sand.
The air balanced gently between warm and cold. In the sunshine, the last of summer’s heat warms our skin like a loving farewell.
We dug soft sand and threw rocks and wandered as you only wander with a child.
Nothing to accomplish. No hurry.
A stream comes out of the forest, clear and cold as when it melted into a torrent a hundred miles away, up a mountain from here. Red and yellow leaves ride the stream to its end where sweet water joins salt. Salmon fingerlings pass through to the sea.
We lie on the sand watching dark blue waves and the patchwork sky of scudding clouds like massive billowed sails.
Hundreds of migrating crows come to drink from the stream and caper between sky and ground like flowing ink, written too fast to read.
They tease and flirt like teenagers in the park.
We play with toy cars, dwarfed beside the grey bones of a giant tree that drank sun for hundreds of years before it fell and drifted here; and Isaac repeats the question we all ask, waking to this world:
The truth will set you free. But not until it is finished with you.
– David Foster Wallace
Why facts don’t change minds
(Mostly not my writing, two interviews full of interesting evidence about our structural intransigence)
Hugo Mercier and Dan Sperber are the authors of “The Enigma of Reason,” a new book from Harvard University Press. Their arguments about human reasoning have potentially profound implications for how we understand the ways human beings think and argue, and for the social sciences.
“Even after the evidence for their beliefs has been totally refuted, people fail to make appropriate revisions in those beliefs,” the researchers noted. In this case, the failure was “particularly impressive,” since two data points would never have been enough information to generalize from…
If reason is designed to generate sound judgments, then it’s hard to conceive of a more serious design flaw than confirmation bias. Imagine, Mercier and Sperber suggest, a mouse that thinks the way we do. Such a mouse, “bent on confirming its belief that there are no cats around,” would soon be dinner. To the extent that confirmation bias leads people to dismiss evidence of new or underappreciated threats—the human equivalent of the cat around the corner—it’s a trait that should have been selected against. The fact that both we and it survive, Mercier and Sperber argue, proves that it must have some adaptive function, and that function, they maintain, is related to our “hyper-sociability.”
Mercier and Sperber prefer the term “myside bias.” Humans, they point out, aren’t randomly credulous. Presented with someone else’s argument, we’re quite adept at spotting the weaknesses. Almost invariably, the positions we’re blind about are our own…”
“…Stripped of a lot of what might be called cognitive-science-ese, Mercier and Sperber’s argument runs, more or less, as follows: Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.
“Reason is an adaptation to the hypersocial niche humans have evolved for themselves,” Mercier and Sperber write. Habits of mind that seem weird or goofy or just plain dumb from an “intellectualist” point of view prove shrewd when seen from a social “interactionist” perspective.
In a new book, “The Enigma of Reason” (Harvard), the cognitive scientists Hugo Mercier and Dan Sperber take a stab at answering this question. Mercier, who works at a French research institute in Lyon, and Sperber, now based at the Central European University, in Budapest, point out that reason is an evolved trait, like bipedalism or three-color vision. It emerged on the savannas of Africa, and has to be understood in that context.”
Elizabeth Kolbert – Why Facts Don’t Change Our Minds – The New Yorker February 27, 2017, issue.
Henry Farrell: So, many people think of reasoning as a faculty for achieving better knowledge and making better decisions. You disagree. Why is the standard account of reasoning implausible?
HM: By and large, reasoning doesn’t fulfill this function very well. In many experiments — and countless real-life examples — reasoning does not drive people towards better knowledge or decisions. If people start out with the wrong intuitive idea, and then start reasoning, it rarely does them any good. They’re stuck on their initial wrong idea.
What makes reasoning fail is even more damning. Reasoning fails because it has a so-called ‘myside bias.’ This is what psychologists often call confirmation bias — that people mostly reason to find arguments that whatever they were already thinking is a good idea. Given this bias, it’s not surprising that people typically get stuck on their initial idea.
More or less everybody takes the existence of the myside bias for granted. Few readers will be surprised that it exists. And yet it should be deeply puzzling. Objectively, a reasoning mechanism that aims at sounder knowledge and better decisions should focus on reasons why we might be wrong and reasons why other options than our initial hunch might be correct. Such a mechanism should also critically evaluate whether the reasons supporting our initial hunch are strong. But reasoning does the opposite. It mostly looks for reasons that support our initial hunches and deems even weak, superficial reasons to be sufficient.
HF: So why did the capacity to reason evolve among human beings?
HM: We suggest that the capacity to reason evolved because it serves two main functions:
The first is to help people solve disagreements. Compared to other primates, humans cooperate a lot, and they evolved abilities to communicate in order to make cooperation more efficient. However, communication is a risky business: There’s always a risk that one might be lied to, manipulated or cheated. Hence, we carefully evaluate what people tell us. Indeed, we even tend to be overly cautious, rejecting messages that don’t fit well with our preconceptions.
Reasoning would have evolved in part to help us overcome these limitations and to make communication more powerful. Thanks to reasoning, we can try to convince others of things they would never have accepted purely on trust. And those who receive the arguments benefit by being given a much better way of deciding whether they should change their mind or not.
The second function is related but still distinct: It is to exchange justifications. Another consequence of human cooperativeness is that we care a lot about whether other people are competent and moral: We constantly evaluate others to see who would make the best cooperators. Unfortunately, evaluating others is tricky, since it can be very difficult to understand why people do the things they do. If you see your colleague George being rude with a waiter, do you infer that he’s generally rude, or that the waiter somehow deserved his treatment? In this situation, you have an interest in assessing George accurately and George has an interest in being seen positively. If George can’t explain his behavior, it will be very difficult for you to know how to interpret it, and you might be inclined to be uncharitable. But if George can give you a good reason to explain his rudeness, then you’re both better off: You judge him more accurately, and he maintains his reputation.
If we couldn’t attempt to justify our behavior to others and convince them when they disagree with us, our social lives would be immensely poorer and more complicated.
HF: So, if reasoning is mostly about finding arguments for whatever we were thinking in the first place, how can it be useful?
HM: Because this is only one aspect of reasoning: the production of reasons and arguments. Reasoning has another aspect, which comes into play when we evaluate other people’s arguments. When we do this, we are, on the whole, both objective and demanding. We are demanding in that we require the arguments to be strong before changing our minds — this makes obvious sense. But we are also objective: If we encounter a good argument that challenges our beliefs, we will take it into account. In most cases, we will change our mind — even if only by a little.
This might come as a surprise to those who have heard of phenomena like the “backfire effect,” under which people react to contrary arguments by becoming even more entrenched in their views. In fact, backfire effects seem to be extremely rare. In most cases, people change their minds — sometimes a little bit, sometimes completely — when exposed to challenging but strong arguments.
When we consider these two aspects of reasoning together, it is obvious why it is useful. Reasoning allows people who disagree to exchange arguments with each other, so they are in a better position to figure out who’s right. Thanks to reasoning, both those who offer arguments (and, hence, are more likely to get their message across) — and those who receive arguments (and, hence, are more likely to change their mind for the better) — stand to win. Without reasoning, disagreements would be immensely harder to resolve.
HF: Despite reason’s flaws, your book argues that it “in the right interactive context, works.” How can group interaction harness reason for beneficial ends?
HM: Reasoning should work best when a small number of people (fewer than six, say) who disagree about a particular point but share some overarching goal engage in discussion.
Group size matters for two reasons. Larger groups are less conducive to efficient argumentation because the normal back and forth of discussion breaks down when you have more than about five people talking together. You’ll see that at dinner parties: Four or five people can have a conversation, but larger groups either split into smaller ones, or end up in a succession of short “speeches.” On the other hand, smaller groups will necessarily encompass fewer ideas and points of view, lowering both the odds of disagreement and the richness of the discussion.
Disagreement is crucial because if people all agree and yet exchange arguments on a given topic, arguments supporting the consensus will pile up, and the group members are likely to become even more entrenched in their acceptance of the consensus view.
Finally, there has to be some commonality of interest among the group members. You’re not going to convince your fellow poker player to fold when she has a straight flush. However, it’s often relatively easy to find such a commonality of interest. For example, we all stand to gain from having more accurate beliefs.
This article is one in a series supported by the MacArthur Foundation Research Network on Opening Governance, which seeks to work collaboratively to increase our understanding of how to design more effective and legitimate democratic institutions using new technologies and new methods. Neither the MacArthur Foundation nor the Network is responsible for the article’s specific content.
Henry Farrell, washingtonpost.com © 1996-2020 The Washington Post