Rational Thought and Weighing Evidence



Hawk

Peyronies Disease Treatments
Weighing the Evidence

Reaching accurate conclusions about Peyronies Disease is like reaching accurate conclusions about any other mystery or question.  It requires rational thinking.  To build on a sound foundation, it is first essential that we all understand and attach the same meaning to the words we use.  Even then, human error can undermine any of the following elements.

Necessary Terms – with Definitions

Fact – A bit of information that is true or based on a real occurrence.  Many facts have nothing to do with the question we are trying to answer, and some facts cannot be proven.

Evidence – Facts that are helpful in pointing a person toward drawing a conclusion.  Evidence is very different from proof.  Whether in a medical mystery or a murder mystery, there is often evidence that tends to support one conclusion while other evidence supports a different, contradictory conclusion.  All evidence must be evaluated.  It is also common for evidence to point toward a certain conclusion but still not be compelling: the amount or quality of the evidence is lacking, so no accurate conclusion can be drawn.

Proof – Evidence or information that is so convincing that it compels the mind to accept a claim as true.  It is very common in conversation on all topics to hear this word misused.  People often mistakenly say, "It's been proven," when they actually mean the much less convincing "there is evidence."  As indicated above, evidence often proves nothing.

Conclusions – A conclusion is our final determination based on our assessment of: which facts are relevant as evidence, where the evidence points, and whether it is compelling enough to give a convincing answer to the question.  Even with many facts, scientists and investigators often incorrectly assess those facts and reach false conclusions because of an error somewhere in the process.

Specific barriers to evaluating Peyronies Disease:

Many factors have limited objective scientific research on Peyronies Disease.  It is not the purpose of this page to address the reasons behind them, but low numbers of participants and insufficient funding are two factors that limit research.

In brief, one of the main barriers to evaluating Peyronies Disease treatments is the inconsistent way the disease progresses.  For instance, some percentage of cases resolve spontaneously (5-15%), and pain very commonly resolves on its own.  Both of these occur at given rates with absolutely no treatment, so in small studies they can lead to false conclusions: maybe the patients who improved would have improved with no treatment at all.
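
To see how spontaneous resolution can masquerade as a treatment effect, here is a minimal simulation sketch in Python.  The 10% spontaneous-resolution rate, the 12-patient uncontrolled study, and the "2 responders" threshold are assumptions chosen only for illustration, not figures from any real trial.

```python
import random

random.seed(1)

SPONTANEOUS_RATE = 0.10   # assumed rate of improvement with no treatment at all
PATIENTS = 12             # hypothetical small study with no control group
TRIALS = 10_000           # number of simulated studies

# Count how often a completely useless treatment still appears to "work"
# because a few patients improved on their own.
lucky_studies = 0
for _ in range(TRIALS):
    improved = sum(random.random() < SPONTANEOUS_RATE for _ in range(PATIENTS))
    if improved >= 2:     # 2 of 12 improving might be reported as a "response"
        lucky_studies += 1

print(f"Simulated studies with 2+ 'responders' despite a useless treatment: "
      f"{100 * lucky_studies / TRIALS:.0f}%")
```

Under these assumptions, roughly a third of the simulated studies show two or more "responders" even though the treatment did nothing, which is exactly the kind of false conclusion a control group is meant to catch.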

The Bottom Line
There is also the question of what to assess: the reduction of pain, the change in size or hardness of plaque, the degree of deformity, or the quality of erections.  While all of these might indicate a treatment could be working, a change in deformity or the improvement of erections are the only goals that matter to patients.  Objective measurement of improvement is often difficult.  Subjective assessment is marred by everything from wishful thinking to variations in the degree of erection at the time deformity is assessed.  Another complication is a possible reduction in plaque without a resulting change in deformity.  Again, reduction of deformity is the real bottom line of any treatment.

Methods
The most reliable evidence comes from assessing objective measurements in randomized, double-blinded, placebo-controlled studies, where neither the patient nor the treating physician knows which patient is receiving the treatment and which is receiving a placebo.  Further, patients are not "pre-selected" for one treatment or another based on conscious or unconscious bias.  This eliminates the "wishful thinking" factor from influencing the outcome.  Since the results are based on comparing two groups with no chance for bias, they tend to be much more objective.  Most of the studies we see are missing some of these components, and as a result, we must be correspondingly more skeptical when we read or hear the results.

A study must also have enough "power" to be valid.  To determine power, the researchers must first decide how large a difference they want to detect.  This gives them a starting point for choosing how many patients to enroll.  For instance, if you think that a given treatment will work very well all the time (in other words, the result will show a great difference from the baseline or control group), then only a few subjects are needed.  If you are looking for a small difference (e.g., lowering blood pressure by an average of 5 points), you will need many patients to accurately show a difference between a treatment group and a control group.
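
As a rough illustration of that sample-size arithmetic, here is a sketch using the standard two-sample normal-approximation formula.  The effect sizes, the 10-point standard deviation, and the sample_size_per_group helper are made-up values for this example, not figures from any Peyronies study.

```python
import math
from scipy.stats import norm

def sample_size_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate patients needed per group to detect a difference in means.

    delta : smallest difference worth detecting (e.g., 5 points of blood pressure)
    sigma : assumed standard deviation of the outcome in each group
    """
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = norm.ppf(power)            # desired power (chance of detecting delta)
    n = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
    return math.ceil(n)                 # round up to whole patients

# Large expected effect: only a handful of patients per group.
print(sample_size_per_group(delta=15, sigma=10))   # about 7 per group
# Small expected effect (the 5-point blood-pressure example): far more.
print(sample_size_per_group(delta=5, sigma=10))    # about 63 per group
```

The point is simply that the smaller the effect you hope to detect, the larger the study must be before its results mean anything.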

Now let's look at these terms one at a time and see how these methods provide increasing levels of reliable evidence.

Randomization - each patient is assigned to the drug or the placebo randomly.  Otherwise, a researcher may unconsciously give the drug to those he thinks will benefit and the placebo to those unlikely to benefit.

Blind or blinded - A party does not know whether a patient is getting the treatment, a placebo, or a control substance (another treatment).  Single-blinded means the patient does not know whether he is getting the treatment under study or the placebo.  Double-blinded means neither the patient nor the researcher knows.

Placebo - A known inert substitute.  Just two examples could be a sugar pill or a saline solution. Sometimes, a placebo cannot be used, or its use would be unethical, so an alternative treatment is used.  For example, there is no "placebo" for evaluating a surgical intervention.  It is also unethical to give a placebo when there is a known effective treatment.

Control group - the group receiving an alternative treatment, or no treatment, to be compared to the study group receiving the investigational treatment.

Study group - the group of patients treated with an investigational drug, to be compared to the "Control Group."

Objective measurements - measurements that are reproducible and not subject to bias, often obtained from equipment readings.  Accuracy requires that objective measurements follow strict, uniform standards and that before-and-after measurements be taken for all groups.  Such measures may be as simple as a tape measure or as complex as imaging.

Subjective measures - measurements that are based on feelings or interpretation.  Such measures might include, for instance, "pain with erection" or "erectile dysfunction."  Subjective measures are more difficult to compare between groups, but surveys can be made more "objective" and reproducible through careful validation techniques.  Validated scores of this kind are commonly used for erectile dysfunction, for example.

Things to Ponder as We Look at Evidence

Ask, "Was there a study"?  As we look at a study, we need to ask if any of these methods (double-blinded, randomized, control group, objective measurement, etc.) are missing, and if so, how many of these components are missing from the study.  That will give us a true sense of whether the study offers strong evidence or much weaker evidence.  The size of the sample group can also be important since small studies can be more easily thrown off by a few non-typical responses.  If the group is small, these rare responses (positive or negative) become a relatively high percentage of that tiny group.  This can lead to false conclusions about expected results in the general patient population.  Obviously, if those funding or conducting the study stand to gain either financially or in prestige, their bias may well be reflected in any study that is not double-blinded.

One last term, Anecdotal Evidence - evidence that is not based on a study but rather on a story or account that cannot be scientifically investigated.  While this may be better than no evidence, it is the very weakest form of evidence and is usually of little value.  Such accounts also often assume that because one thing follows another, there is some cause-and-effect relationship between the two.  This is the thinking that concludes that since "A" precedes "B," "A" must have caused "B."  Example: The number of churches in our county doubled between 1850 and 1900 (A).  Alcohol consumption doubled during this same period (B).  Therefore, churches contribute to alcohol consumption.  In this example, the population doubled over the same period; a third factor produced the correlation, and there was no causation between A and B.

Conclusion

We all benefit when we understand how to evaluate medical claims and studies.  If we understand how to evaluate evidence logically and rationally, we can encourage and even demand properly conducted studies.  We can also help each other sort through various claims.  This does not mean that we should always shun treatments solely because they have never been put to the test of a well-conducted study.  Sometimes such a treatment may be our only option.  Therefore, we weigh what evidence we have, weigh the possible risks, and make an informed choice.
Prostatectomy 2004, radiation 2009, currently 74 yrs old
After pills, injections, VED - Dr Eid, Titan 22cm implant 8/7/18
Hawk - Updated 10/27/18 - Peyronies Society Forums

Hawk

I am a logic-driven person by nature.  I may often seem disagreeable or repeatedly take a contrary view.  I want to be convinced by actual facts that lead to some degree of certainty or proof.

A pet peeve is when I see nonsense creep into our discussions.  When one falsehood is accepted as fact, it undermines our quest for progress and sets back our efforts to understand the challenges we face and the viable solutions to those problems.  These falsehoods then spawn spin-off falsehoods.  Before long, we start dealing with more deception than truth.

Whenever I suspect this could be happening, I confront the concept to see where the evidence really points. If no one does this, our topics and understanding all start to drift into a hodgepodge of confusion and almost superstition-like ideas and treatments.

I encourage all of us to DEMAND evidence.  To do so is not being disagreeable.  To refrain from doing so is to be irresponsible.
