Wednesday, September 19, 2018

Can Christian Adoption Agencies Be Defended? - Part II

In a [previous post], I began looking at a defense for Christian adoption agencies which exclusively prefer Mother+Father homes.  These adoption agencies have found themselves under increasingly regular attacks over the past decade - the latest being in New York.

The first essay focused on the logic and implications of the debate.  Namely, if you say same-sex parenting is just as good as Mother+Father parenting, you're saying motherhood and fatherhood are indistinct, interchangeable, and (of themselves) unnecessary.  Plus, it means any belief in the uniqueness of motherhood and fatherhood is a form of superstition and bigotry.

Common sense rebels against that conclusion.  But some would have you believe it has been rigorously proven by modern social science.  That's what I want to begin considering today.

Taking a Step Back:

One thing which often comes up in these conversations is... studies.  Folks will say that even if we think motherhood and fatherhood are distinct and important, recent sociological data has disproved the idea.  One article from the UK Guardian states there have been nearly 80 studies which prove it.

But think about the scope of that claim for a moment...

Motherhood and fatherhood have been distinguished and valued in every society in human history.  The distinct value of these vocations has been the subject of art, song, ritual, and stories.  It is part of the lived experience of most people today.


In other words, the witness of human history says motherhood and fatherhood are different and important for a child.  Yet apparently it was all recently disproved.  As it turns out, motherhood and fatherhood are indistinct and a child is no better off with just one or the other.

...Really?


Tilted Scales:

One thing to keep in mind is the influence of politics upon the sphere of research.

For instance, researchers at Brown University recently conducted a study on the effects of social networks on those who suddenly begin identifying as transgender in adolescence.  The findings suggested that peer influence, and not just biological factors, can lead to a person suddenly identifying as the opposite sex. 

However, this finding flies in the face of progressive orthodoxy, which would have us believe that transgenderism is biologically baked into a person from conception and has no voluntary components.  As a result, Brown University received a slew of complaints.  In response, the school removed the study from their website and gave the following Orwellian explanation:
"The spirit of free inquiry and scholarly debate is central to academic excellence. At the same time, we believe firmly that it is also incumbent on public health researchers to listen to multiple perspectives and to recognize and articulate the limitations of their work."
Similarly, a recent study on the differences between men and women was produced by a team of mathematicians.  The study was going to be published by the Mathematical Intelligencer journal, but was yanked last minute.  The study then went to the New York Journal of Mathematics.  Once again it was accepted… then rejected.  The rejection wasn’t based on any flaw in their methodology, but because these publications feared the “very real possibility that the right-wing media may pick this up and hype it internationally.”

In the world of academic research there is professional pressure for researchers not to ask the wrong questions or come to the wrong conclusions.  The scale is tilted in the direction of progressive leftism.

This doesn't mean one can reject studies out of hand when they confirm the tenets of progressive orthodoxy.  But knowing that a study was produced in an environment where certain results would be severely punished means I can't accept results without close examination.


Peering Inside the Black Box:

So before we even look at the "studies" themselves, we need to cover some groundwork.  Every study has two things:
1) The Findings: The thing which is supposedly being proved.
2) The Methodology:  How it was supposedly proved.
Both of those could warrant some questions.


Inflated Findings:

One common occurrence in science reporting is that a study proves something relatively minor, but then a reporter gets hold of it and writes an article as if some major discovery has been made.

Take this study on hookworms, which The Washington Post covered with the headline:
"Bloodsucking parasitic hookworms could help make millions of people healthier"
But when you look into it, the study found the following:
"Scientists report that a protein produced by hookworms eases the symptoms of asthma in mice."
The lesson is that headlines are primarily meant to grab your attention.  Whether or not they reflect the reality of the story is a different question.

Suppose for a moment that a study found that children with same-sex parents graduate high school at the same rate as those with Mom+Dad parenting.  Does that upend the notion of motherhood and fatherhood?  No... it just means a kid can get through high school lacking one or the other.


Bogus Methods:

In terms of methodology, you could find a study which purports to prove something, but has absolutely nonsensical methods of proving it.  Take the following instance of bad studies being used to support "Accountable Care Organizations" in the US.  The studies were used to trumpet the success of ACOs, but relied on a biased data sample:
"Programs like these are propped up by poor studies that gain prominent media headlines. A case study in the extreme was reported by 63 newspapers, wire services, and TV and radio networks, all celebrating the 'success' of a well-known Blue Cross-Blue Shield payment model similar to ACOs. 
But the Health Affairs study on which all of these exaggerated news stories were based has deep design flaws. To measure the impacts of the program, for example, the study simply compared physicians volunteering for it to those who did not. We have known for decades that physicians who volunteer for studies are the ones who are already meeting quality standards. In this study, the doctors participating in the program had higher quality ratings than non-participants before joining the payment program. 
The worst violation of research design in the study was its lack of a baseline measure of health. This means we can’t know the participants’ health before the program, so measurement of 'change' was impossible." - Washington Post  [Ironically, the same folks who gave us the hookworm article]
In other words, the studies ensured a biased sample by allowing doctors to volunteer themselves.  And for understandable reasons, the only ones who volunteered were those who already had their quality numbers in good shape.  So the study wasn't worth the paper it was printed on.


Weighing Certainty:

Thus, when someone comes to me and says:
"This study I just Googled disproves everything humanity has ever believed about the value of motherhood and fatherhood!"
My first two questions for that person are:
"OK.  What exactly were their findings and could you explain their methodology?"
Many times the response is silence... because the person took the study on faith.  But even before sifting through a study's methodology, your first reaction should be to weigh certainty.


Next time we'll be opening up a few studies and seeing what's under the hood.
