Talk:Human penis size


Dubious validity of the BJU International review?


This article relies significantly on that 2015 study to talk about the size of erect penises (the crux of the matter), but most of the measurements reviewed are of flaccid and stretched penises. 20 studies (n = 15 521) are included, of which only four (n = 692) measured erect penises. The two studies that measured both stretched and erect length showed differences of 2 and 4 inches between the measurements, showing that stretched length isn't a reliable proxy for erect length. The authors admit this:

"Limitations: relatively few erect measurements were conducted in a clinical setting and the greatest variability between studies was seen with flaccid stretched length." "This was found by Chen et al. [30] who reported that a minimal tension force of ≈450 g during stretching of the penis was required to reach a full potential erection length and that the stretching forces exerted by a urologist in their clinical setting were experimentally shown to be significantly less than the pressure required. " 2001:861:4B40:C8F0:2811:6845:A0B3:77E9 (talk) 21:49, 16 July 2024 (UTC)Reply

Extremely questionable validity of Belladelli et al. 2023


Scroll down a bit for the specific points of dubious validity if you don't want to read my explanation of why blatantly unsound and simply incorrect studies should not be cited, even when they are published in a journal.

I should preface this by saying that no professional papers have been published regarding the validity of this specific meta-analysis, but it is illogical to automatically regard a study as valid just because no "experts" have spoken on its erroneously derived findings yet. I'm claiming that the sheer number and severity of the inaccuracies contained in this study, many of them easily recognizable even by an unsophisticated observer, should prevent it from being cited. I can't find any recommendations on Wikipedia for handling studies that are flawed beyond any justifiable reason for inclusion.

I'm presuming there must be a guideline, or at least an unspoken rule, preventing them from being cited, because the comparatively unscientific, infamous "r-K life history theory" and the many studies descended from it are not cited. Funnily enough, a study based on r-K life history theory is cited in Belladelli's review as a prior investigation consistent with his own findings of penile length variation by geographic region. Taking a quick look at the study Belladelli cited, the sole author is Richard Lynn, a self-described scientific racist; per Wikipedia, "Many scientists criticised Lynn's work for lacking scientific rigour, misrepresenting data, and for promoting a racialist political agenda." If you know anything about r-K life history theory and Richard Lynn, Belladelli citing Lynn's study is very odd, since r-K life history theory is racist pseudoscience, which is quite clear if you look at any of the studies pertaining to it. Lynn's study tries to prove racial differences in penile length and cites penile measurements in various countries taken from a random, non-credible website. All numbers from the website were literally fabricated, and the supposed studies they were pulled from were non-existent. You can read about the creator of r-K theory on Wikipedia (J. Rushton in Scientific racism) and his role as president of a white supremacist organization, the Pioneer Fund.

If we strictly go by the Wikipedia articles on how to judge reliable sources, r-K theory and all studies based on it could be included (at least by my brief reading of WP:MEDRS and WP:RS). Many well-done papers and books have since debunked and gone through the plentiful flaws in r-K life history theory because of its widespread controversy. Unlike r-K theory, Belladelli's meta-analysis has not gained any attention, which is needed to get "experts" to weigh in with their opinions. If the many critical flaws in a study are plainly obvious, are "expert" opinions truly needed? Does being published in a journal make a source immune to being incorrect? I believe in good faith that since Wikipedia's goal is providing a free encyclopedia from a neutral point of view using the best current information, and since the fifth official Wikipedia pillar states "Wikipedia has no firm rules", it is in the project's best interest to remove blatantly unsound sources. Disagreeing with this would enable the inclusion of, well, blatantly unsound sources, as well as the many pseudoscientific studies primarily influenced by r-K life history theory: studies that find spurious associations between penile length and IQ, race and penile length, and race and IQ (just to name a few) through disingenuousness or errors in study design, similar to Belladelli with his temporal trends in penile length (an association of time with penile length, as well as geographic variation in penile length).

I am not trying to imply there was any malicious intent in Belladelli's review, or anything other than that it is an extremely poorly done review. Many of my reasons against including Belladelli's meta-analysis can be found in an article by Cremiux Recueil named "No, Penises Haven't Gotten Longer". Cremiux Recueil is not an expert; from my quick examination, he is a blogger with an interest in scientific papers. One may argue that an "expert" is needed to evaluate the reliability of a source for it to be "proven" inaccurate, but as I discussed earlier, allowing blatantly unsound sources just because no "experts" have yet spoken about them is disingenuous to the goal of Wikipedia. Also, there is no need for an expert to spot the glaring mistakes contained in this review. A lot of the mistakes have already been discussed in Recueil's article, so I may simply quote him. Many studies are linked in his article; if you wish to find the links, go there. I have bolded and underlined specific points that I find extremely important. Points that I find somewhat important are just underlined, and points that I find less important yet still raise concern for the meta-analysis' validity are italicized. I apologize in advance if this is a bit messy, as the whole study is full of errors and mistakes ranging from severe to minor, and they sometimes overlap.


Belladelli uses many studies on men specifically with ED, men getting surgery for ED, or samples that include men with ED (you can see the many studies in Table 1, and urology patients tend to include men with ED): Men with erectile dysfunction are shown to have shorter penile length than normal controls (by about 0.6–0.8 inches; Kamel et al. 2009, Awwad et al. 2005). Yet many of the studies included are done specifically on men getting surgery for ED (erectile dysfunction) and/or diagnosed with ED. I would also like to add that some studies show no difference between ED and non-ED subjects, but the fact that we are using studies specifically on men who literally have a penile disorder is the main issue. It is also hypothesized that the penile tissue itself is compromised in men with ED, especially in elderly men, so there are issues with that as well.

Belladelli uses studies containing exclusively self-reported measurements that influenced his meta-analysis: "Third, though the meta-analysis claimed to not use any self-report samples and to be comprehensive, both claims were clearly wrong: several samples were based on self-reports, like Kinsey’s (1948), as noted above, and many were excluded for no good reason. If we allow self-report studies like Kinsey (1948), we have to then include exact replications like Richters, Gerofi & Donovan (1995). But, self-report samples obviously cannot be used because people lie about their penis sizes. Lying is so pervasive that people lie about their heights in the same direction they lie about their penis size (bigger is better!). As an example of this, consider a study (see also) by one of the authors of a penis length study included here. This study showed height overestimation, and greater overestimation in bisexuals and homosexuals, coupled with lower objective but not self-reported heights."

Self-reported measurements cannot be used, as studies consistently report larger penile lengths when self-reported, significantly larger than researcher-measured penile lengths. This is a very common and well-known phenomenon that happens with height as well. Many self-reported studies were included (even though the authors claimed they were not), which affected the results. From Belladelli himself: "Because of their inherent biases, self-reported lengths should be regarded with caution."

The studies that consist exclusively of self-reported measurements are "Kinsey AC. Sexual behavior in the human male", "Bogaert AF. The relation between sexual orientation and penile size", "Herbenick. Erect penile length and circumference dimensions of 1,661 sexually active men in the United States", and "Di Mauro. Penile length and circumference dimensions: a large study in young Italian men". There may be many more, since I'm not going to go through all the studies. I was literally able to recognize all of these studies as self-reported just from their titles, since I have already seen them. All these studies report means significantly above the norm. (I just went through the sole study in "Oceania" and it is self-reported: "Smith, Does penis size influence condom slippage and breakage?" And yes, the mean is significantly above average.)

A significant effect of this is the large apparent increase in erect penile length in Europe. The most recent study is Di Mauro 2021, which self-reports an average of 16.8 cm (6.6 in). The studies from which Belladelli starts reporting the trend are Sengezer 2002 and Schneider 2001, and the pooled mean of those two studies is about 5.3 inches. I'm not going to run a full separate analysis on all the studies without self-reported penile length, but it's quite obvious the self-reports had a significant effect on the trends, since they always report a significantly higher average. If you take a good look at everything, the trends Belladelli finds come from the bad study selection, which caused confounding and problems through mixing up pendulous and total penile length, including self-reported studies, including studies on men unrepresentative of the normal population, mysteriously excluding good studies, including unreliable studies, and the lack of well-done studies in the past.
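To illustrate the mechanism in general terms, here is a minimal Python sketch (the yearly means below are hypothetical, not Belladelli's actual data) of how a couple of high self-reported means appearing late in a time series can create an apparent upward trend that vanishes when only researcher-measured means are used:

    # Hypothetical yearly mean erect lengths (cm); illustrative only, not the meta-analysis' data.
    import numpy as np

    years            = np.array([2001, 2002, 2007, 2014, 2021])
    measured_only    = np.array([13.5, 13.4, 13.7, 13.6, 13.6])  # researcher-measured means
    with_self_report = np.array([13.5, 13.4, 13.7, 15.2, 16.8])  # last two means self-reported

    slope_measured, _ = np.polyfit(years, measured_only, 1)      # slope in cm per year
    slope_mixed, _    = np.polyfit(years, with_self_report, 1)

    print(f"trend, measured only:     {slope_measured * 10:+.2f} cm per decade")
    print(f"trend, with self-reports: {slope_mixed * 10:+.2f} cm per decade")

The researcher-measured series is essentially flat, while the mixed series shows a large positive slope, which is exactly the kind of artifact that mixing higher self-reported averages into the later years can produce.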

Belladelli does not differentiate between pendulous and total penile length, even though they are always significantly different from each other: Pendulous penile length is measured from the pubo-penile skin junction without depressing the pubic fat pad. Total penile length is measured from the base of the penis while depressing the pubic fat pad. These two measurements inherently differ by the presence of the fat pad and can vary by around 2–5 cm (at least in these studies: Salama 2015, Salama 2015(2)). The difference depends on total fat mass and therefore BMI. Belladelli not differentiating between these two causes many errors and likely contributed to the ludicrous 24% increase in erect penile length over 29 years by using pendulous length in older studies and total length in newer studies. Many studies, especially studies on Pca patients, exclusively measure pendulous length, which is always significantly shorter than total penile length, and therefore studies reporting pendulous length will have lower averages than those reporting total length. This becomes a massive problem when trying to find temporal trends and geographic variation in penile length with the already very limited data.

Belladelli cites studies with exclusively prostate cancer patients. Prostate cancer patients likely received treatment that decreased penile length before the study, and these measurements cannot be extrapolated to the general population. They also almost exclusively measure pendulous stretched flaccid penile length, and there are no studies on erect length in Pca patients. Stretched flaccid length is consistently lower than erect length in Belladelli's meta-analysis. Belladelli himself finds that stretched penile length in Pca patients is lower than the average erect length found in his study by 0.6 in (1.5 cm). Nearly all studies on Pca patients measure pendulously, and higher BMI inherently causes a larger pubic fat pad. Higher BMI is associated with Pca: "Fourth, there is a considerable literature on prostate cancer treatment effects on penis size, including studies about surgeries and hormones... For an example of both, see Brock et al. (2015). Because hormonal and radiation treatment are common in the run-up to surgery for various reasons (despite potentially not being useful; cf. Kadono et al., 2018), the baseline measurements from many of these studies cannot be taken for granted since they virtually all lack the relevant prior case history information needed to make the estimates interpretable as general population estimates. For all we know, prostate cancer treatment may have improved over time in such a way that penis size losses have become more minimal. Given how penis size loss is considered the “final indignity” of prostate cancer and a notable worry for many affected men, there’s clearly some reason for practitioners to have worked on this.

Since several of these studies were cited by the authors of the meta-analysis, it’s curious that they didn’t realize or even seem to care about their problems. But it’s more curious that, yet again, there were studies mentioned in the studies of prostate cancer sufferers that they cited that led to yet more work with penis measurements, and, if we consider prostate cancer patients to be usable for penile length measurements at all, they shouldn’t have been excluded, but mysteriously, that’s exactly what Belladelli et al. did."

Belladelli's meta-analysis does not have nearly enough information for his findings on geographic variation in penile length, and he seemingly fabricates a data point: Belladelli reports that there are 5 studies in South America and 8 in Africa. In reality, only 4 studies are cited for South America and 7 for Africa. Only 2 studies report erect penile length in Africa and South America combined, so one each. Also note that if Salama 2018, the sole study on erect penile length in Africa, had only used pendulous length and not pushed down on the fat pad, the average erect length in Africa reported by Belladelli would have been only 10 cm, a good example of how much pendulous and total length differ. In Fig. 3A, there are at least 2 data points for Africa, but there is only 1 study in Africa on erect penile length. The only way Africa can have a trend line in the figure is if Belladelli added a data point for Africa that doesn't exist.

Belladelli's findings of significant geographic variation in Asia are largely caused by poor selection of studies and could possibly be maliciously motivated: As previously discussed, Belladelli cites a study based on r-K life history theory by Richard Lynn as a prior study consistent with his current findings. R-K life history theory is commonly cited and used among scientific racists, and Richard Lynn himself is a well-known racist. Any investigation into Lynn's study would show that the racial differences in penile length it claims are completely unfounded and are based on a non-credible website. Supposedly the numbers came from many different studies, but they are fabricated and no studies containing them can be found. These supposed variations in penile length by race/geography come from the belief in genetically different androgen levels by race/geography, which has been disproven (1, 2, 3). It is possible there is geographic variation in penile length due to malnutrition or poor environment, but Belladelli's results show significant differences in penile length in one specific region. The average for Asia is very low, and the rest of the regions do not have nearly as much variation between them. The extremely low average for Asia seems to be caused by the abysmal study selection.

1. The inclusion of older studies in Asia, some of them very unreliable, reporting very low average penile length, as well as the exclusion of older studies in Asia that report higher average penile length (there are several on the Wikipedia page that all report averages of 13–14 cm, like Yoon 1998, Chung 1971, Weicheng 1990, Weicheng 1993, as well as Weicheng 1994, which is not currently cited). You can see in Figure 3 that the erect penile length in Asia has supposedly increased from below 10 cm to around 14 cm between 1990 and 2015. This is primarily caused by the unreliable Taiwanese study discussed below under the fabricated data point, since it is the only data point cited for erect length in Asia until Promodu 2007. Mind you, the trend for erect length in Asia is calculated between 1992 and 2015, and there was only one study between 1992 and 2006 in Asia measuring erect length (at least in Belladelli's meta-analysis; if you included other studies, the average would increase between 1990 and 2006 and the trend would not be nearly as significant). Also, Belladelli does not include Turkish studies in Asia, although Turkey is generally regarded as a West Asian country.

2. Belladelli's study selection is odd overall, but even more so for studies in Asia. Belladelli cites Hosseini 2008, but nothing on penile measurements is found in that study; presumably he cited the wrong study. Belladelli reports in "2. Description of studies" that there are 20 studies in Asia, but he only cites 15. Four of the 15 studies in Asia report pendulous stretched flaccid penile length in Pca patients, which, as previously discussed, always yields lower averages. Belladelli cites a Taiwanese study of 20 men with ED, 10 of whom failed to get a good erection; more on this in the next point. Nikoobakht 2010 is a study done specifically on men who complained of having a short penis and went on to use a device to increase their penile length; obviously, the averages from that study cannot be taken for granted. Kim 2019 is a study done specifically on men who have ED and needed a surgical implant for their erectile function. It also measures pendulous stretched flaccid length, which, as previously discussed, is always lower than the true total penile length due to the nature of the measurement. Canguven 2016 is done on elderly men with not only ED but also low testosterone, and the measurement is not specified as pendulous or total length. Mehraban 2007 measures pendulous stretched flaccid length and reports a lower value; again, as discussed, this should not be surprising. I could go on and on about the poor study selection, but you get the point. Belladelli's results are influenced by mistakes in his study selection and would not appear otherwise.

Belladelli fabricates a data point to find a positive result: "More interestingly, their erection trend (p = 0.04) was driven by the inclusion of a study of 20 Taiwanese men with erectile dysfunction who were given erections by injection. Belladelli et al. also improperly claimed this study had twice as many men (check their Table 1) as it did. Their erection trend was also apparently based on a nonexistent study conducted in the same year, because we see two dots in their plots, while the Taiwanese study was the only one with erection data in the year 1992 and their two other 1992 citations involved only flaccid and stretched measurements. Remove this study or even just correct the weight and they didn’t find any significant trends for erect, flaccid, or stretched flaccid length." Also note that the measurement method in this Taiwanese study of only 20 impotent men makes it quite clear they measured pendulous length: "The penile length was measured from base to midglans manually before and after PGE1 injection. In the determination of the penile erectile volume, the length of the unstretched penis from base to midglans was measured manually with a ruler." Belladelli including this study of only 20 impotent men was already odd enough, but the measurement is also pendulous, which fundamentally decreases the recorded penile length. Additionally, erection was chemically induced via PGE1, which induces a full erection in only 70–80% of men, and Chen found that 10 patients had a poor response to PGE1.
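As a rough, purely illustrative Python sketch of how much one early, atypical point can carry a trend (the means and sample sizes below are hypothetical, and significance testing, which is what actually changed in the re-analysis quoted above, is omitted), here is a sample-size-weighted slope computed with and without a single low 1992 data point:

    # Hypothetical Asian erect-length means (cm); illustrative only, not the study's data.
    import numpy as np

    def weighted_slope(x, y, w):
        # Closed-form slope of the weighted least-squares line y = a + b*x.
        xm = np.average(x, weights=w)
        ym = np.average(y, weights=w)
        return np.sum(w * (x - xm) * (y - ym)) / np.sum(w * (x - xm) ** 2)

    years = np.array([1992, 2007, 2011, 2015])
    means = np.array([10.0, 13.5, 13.7, 13.6])   # the 1992 value stands in for the tiny ED sample
    n     = np.array([  20,   55,   60,   50])   # sample sizes used as weights

    slope_all     = weighted_slope(years, means, n)
    slope_dropped = weighted_slope(years[1:], means[1:], n[1:])  # drop the 1992 point

    print(f"slope with the 1992 point:    {slope_all * 10:.2f} cm per decade")
    print(f"slope without the 1992 point: {slope_dropped * 10:.2f} cm per decade")

With the low 1992 point included, the fitted trend is strongly positive; without it, the remaining points are nearly flat. That is the sense in which a single small, unrepresentative study can drive the whole result.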

Belladelli includes several studies with dangerously low sample sizes, and these low-sample-size studies often also draw on an unrepresentative demographic of men. Just to name a few: Da Silva 2002 (cadavers, n=25), Moreira 1992 (cadavers, n=17), Chen 1992 (men with ED, n=20, falsely reported as 40 in the meta-analysis), Carceni 2014 (men with ED getting surgery, n=19), and there are more. Belladelli does not even weight the studies by sample size; he pools all the means together while including studies like these.
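A minimal sketch of the weighting point (the means and sample sizes below are hypothetical, not the actual studies' values): pooling unweighted means lets tiny, atypical samples pull the result around, compared with weighting by sample size:

    # Hypothetical study means (cm) and sample sizes; illustrative only.
    import numpy as np

    means = np.array([13.5, 13.2, 13.6, 11.0, 10.5])   # last two: tiny cadaver/ED samples
    sizes = np.array([ 300,  450,  600,   17,   20])

    unweighted = means.mean()                           # every study counts equally
    weighted   = np.average(means, weights=sizes)       # weighted by sample size

    print(f"unweighted pooled mean:    {unweighted:.2f} cm")
    print(f"sample-size-weighted mean: {weighted:.2f} cm")

Here the unweighted pool comes out around 12.4 cm while the sample-size-weighted figure is around 13.4 cm, because the two tiny samples count just as much as the large ones when means are simply averaged.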

Belladelli uses dubious methodology and studies with specifically unreliable subjects that bias results: "A lot of samples have to be excluded. You cannot run a meta-analysis for changes in penile length over time with samples of men seeking penile implants, men who think their penises are small, men with erectile dysfunction (and thus most urological patient samples unless explicitly noted otherwise), or men seeking — often hormonal, as androgen suppression — treatment for prostate cancer. Because these people are so odd with respect to the average, we have no idea if it would be appropriate to include them even if there were time-invariant numbers of them.

Second, men seeking penile lengthening procedures or men who think their penises are small are not simply deluded about the average size or their self-perceptions: they are at least somewhat correct and they do tend to have below-average penises. Mondaini et al. (2002) sought to disprove this notion by alleging that people who thought they had small penises really did just have more serious misperceptions. But in reality, their sample showed they have a distribution of penis sizes that was shifted to the smaller side, even if not significantly due to the small sample. Other samples support these sorts of men being smaller than average, so this isn’t concerning. More importantly, people who think they have small penises often have ED, which is much more robustly associated with smaller size. Plus, if we’re doing a meta-analysis, why would we use qualitatively different groups for the flaccid, stretched, and erect groups, as an analysis involving men who have ED must be? Using samples of these people is simply a way to confound results and introduce problems with time-varying sample compositions."

Belladelli's findings are self-contradictory: Somehow, flaccid and stretched flaccid length has decreased, but erect length has increased. Belladelli's results contradict the very nature of the penile tissue, which should raise suspicion.

Belladelli's findings on geographic variation are contradictory (caused by the very low number of studies in both Africa and South America): Not much to explain here. Africa has significantly lower flaccid and stretched flaccid length than Europe, North America, and South America, yet higher erect length. South America has significantly higher flaccid and stretched flaccid length, yet somehow lower erect length.

Belladelli includes studies without any listed measurement method (the method of measurement can significantly impact findings): Ansell 2001, "The penis size survey", does not list any measurement method or technique, like many other studies Belladelli cites (see Table 1). Obviously, including these studies can cause unreliable results, since we don't know how the researchers measured and whether they used different techniques. Specifically, there are studies using measurements that push down the pubic fat versus ones that do not (see Salama 2018; the difference between these two measurements can average up to 5 cm, even in controls), and studies that stretch the flaccid penis gently versus until maximal extension. There are also differences between measuring penile length parallel or perpendicular to the floor or body while standing versus in dorsal decubitus.

Belladelli did not include studies cited in studies he cited in his meta-analysis: "Some studies included proper measurement and were not in the meta-analysis at all, and the authors should have known, since they were cited within studies they cited! For example, the studies Spyropoulos et al. (2002), Chaurasia & Singh (1974), Farkas (1971),... and Awwad et al. (2004) were all cited within studies the meta-analysis cited, but they weren’t included in the meta-analysis."

Belladelli gets data and numbers wrong, and fails to factor in the time from data collection to publication (he also uses cadaver studies with very low sample sizes and does not factor in the effects of death on penile length): "All of this said, I hope readers don’t think the meta-analysis authors got everything besides these issues right. They didn’t: there are incorrect numbers in the meta-analysis for penile lengths, sample sizes, years, and ages. There was also evidence of arbitrary exclusions from certain studies. One study was also cited twice (Park et al., 2011) despite having only one available data point, so something else was presumably meant to be cited.

Regarding penile lengths, the errors were usually understandable, but readers wouldn’t notice because the authors of the meta-analysis didn’t provide their data. Readers would have had to look at their graphs and pull the data to know that the meta-analysis authors messed up in cases like with Shalaby et al. (2014). This study included the wrong measurement in the abstract (13.84cm) and the correct measurement in the body of the study (13.24cm), but the authors of the meta-analysis seem to have used the abstract’s errant number.

Regarding sample sizes, the errors were often harder to understand. For example, Tomova et al. (2010) was listed as having 310 people, but it actually had 310 per listed age, and the authors used ages 18-19, so the sample size should have been 620! But, because several studies used ages as young as 17, I don’t see why that age shouldn’t be included too, to bring the sample size up to 930. Alves Barboza et al. (2018) was more baffling, because those authors had a sample of 450, but the author’s proposed sample size for this study was double that for no apparent reason.

Regarding years and ages, the errors were usually less significant, but they could have been consequential in aggregate. For example, for year, Kinsey was cited incorrectly in their references and was listed with the citation year 1950 instead of the real year, 1948. Many studies were published several years after data collection, so this becomes a source of error because some studies are published closer to their data collection years. For age, studies with cadavers were not helpful since they didn’t list ages, and obviously cadavers tend to be older and thus stretchier for two reasons: age and death. But worse, other studies just had careless age number errors, like Söylemez et al. (2012), who reported a mean age of 21.1 (+/- 3.1) rather than the author’s reported 21.3. Needless to say, the conflation of age and cohort effects in this study was considerable and not able to be addressed due to the small number of studies. Known effects of age (i.e., greater penile elasticity, selection for normal samples related to lack of ED or prostate cancer) or location (geographic variation in size) simply had to be ignored for this reason."


Belladelli's findings should be viewed as almost certainly incorrect: "Belladelli et al.’s penile length meta-analysis produced a result that is almost certainly not true, and it should never have been taken for granted. Because of the absence of measurements conducted on samples that aren’t unusual over sometimes multiple decades, the ability to reliably discern a trend is also severely handicapped and any trends produced in newer studies will have to be unreliably computed based on differences from small numbers of past datapoints."

There are many other flaws within the extremely odd selection of studies Belladelli uses and within the meta-analysis itself, but this is already too long. If you take a good look at everything, the trends Belladelli finds come from the bad study selection, which caused confounding and problems through mixing up pendulous and total penile length, including self-reported studies, including studies on men who are not representative of the normal population, mysteriously excluding good studies, including very poorly done studies, and the lack of well-done studies in the past.

PS: Apologies if this was cluttered. There are a lot of mistakes that Belladelli makes, and there are many more not included here. Way6t (talk) 04:21, 27 August 2024 (UTC)

This meta-analysis is like death by a thousand little cuts. One problem by itself would not warrant removal, but the whole study is unreliable. Way6t (talk) 04:50, 27 August 2024 (UTC)

Discussion on the inclusion of Belladelli 2023


Hello fellow Wikipedians, starting a discussion on the inclusion of Belladelli 2023.

I would like to hear some different thoughts on this meta-analysis, as well as express my own. I have already started a previous topic on this, but that was more toward discussing the general flaws of the review, and it was also a bit lengthy and messy.

Currently, my view is that it is not a reliable study and therefore does not meet WP:RS. WP:RS does not seem to offer a specific solution for studies that are extremely low in quality, but given the definition of "reliable" (consistently good in quality or performance; able to be trusted), I think literally unreliable studies fail WP:RS as a matter of common sense. The conclusions in this meta-analysis are, with reasonable certainty, incorrect, and that should warrant the removal of the study, since it is unreliable (the reasoning for this is discussed in another topic). I understand that removing a study due to an editor's opinion should not happen, but anyone who properly reviews the study will find an overwhelming number of flaws and problems; of that I am certain. I think the unreliability of this study can be ascertained using common sense, but of course that is subjective. I would like to know if you think this study is only minimally flawed, or not flawed at all, and so should be included; or perhaps you believe we should still include heavily flawed studies, which I currently disagree with.

I am not obstinate about removing the study. We can try to preserve it and provide information on all the limitations and problems it contains, although I believe it would be extremely hard to maintain a neutral, nonjudgmental tone while discussing the issues with the study, as discussed in WP:NPOV. It would come across as disparaging, while discrediting it as a reliable study, since it literally is not one.

I would also classify the review under anthropometry rather than medicine, because "Biomedical information is information that relates to (or could reasonably be perceived as relating to) human health". A review about penile length measurements does not relate to, and could not even be perceived as relating to, human health. Instead, we should use other Wikipedia principles to decide its inclusion.

There is nothing specific in WP:RS on how to include clearly unreliable studies, or whether we should include them to begin with. Although not specific, I believe these statements are still relevant.

"Proper sourcing always depends on context; common sense and editorial judgment are an indispensable part of the process." (- WP:RS) This quote is slightly ambiguous, but with a casual investigation of the review, one would recognize its many errors and unreliability. I would think that it would be common sense to remove or include a study depending on the context of its inherent quality and likelihood of being accurate. WP:COMMON

"Whether a source is reliable depends on both the source and the claim, with the ultimate criterion being the likelihood that the claim is true. That doesn't mean that an editor's opinion of the truth carries any weight."(- WP:APPLYRS) I'm trying to express that the likelihood of Belladelli's claims being incorrect is practically certain and not just an opinion. There are an overwhelming amount of problems contained in the meta-analysis that caused the erroneous conclusions.

"There's a common but misguided fatalism among editors who feel everything in a reliable source must be regarded as true, but editors are meant to interrogate their sources" - WP:APPLYRS. We are supposed to interrogate our sources, and we can expand this to that conclusions in even apparently reliable sources may be inaccurate.

"Remove material when you have a good reason to believe it misinforms or misleads readers in ways that cannot be addressed by rewriting the passage." -WP:NPOV. I believe the best possible option is to simply exclude this study. We can try to inform readers about the many problems with it, but in the end, the conclusions are almost certainly inaccurate and based off of unsound, absurd methodology, which is irremediable.

Anyway, the main reason for creating this topic is to see others' thoughts on why we should include or exclude this study. Way6t (talk) 23:49, 30 August 2024 (UTC)

"Worldwide Temporal Trends in Penile Length: A Systematic Review and Meta-Analysis", authored by Federico Belladelli et al., and published in World Journal of Men's Health (from website: "Open Access, Peer-Reviewed", "Indexed in SCIE, SCOPUS, DOAJ, and More", "pISSN 2287-4208 eISSN 2287-4690") on Feb 15, 2023. Also, included in the National Institutes of Health/National Library of Medicine's PubMed Central and PubMed.
"This is ultimately a medical/scientific article, and we should use medical/scientific sources that meet the de-facto standards here for sources in articles on medical topics. Given that we now have high-quality evidence in the form of several peer-reviewed studies on this topic published in reputable journals, including a systematic review of other studies, as sources for this article, we should not now be citing either crowdsourced user-generated data, or non-peer-reviewed analysis thereof, even if they been reported on in reliable sources such as the popular press."
  • Per WP:MEDRS's "This page in a nutshell":
"Ideal sources for biomedical material include literature reviews or systematic reviews in reliable, third-party, published secondary sources (such as reputable medical journals), recognised standard textbooks by experts in a field, or medical guidelines and position statements from national or international expert bodies."
"Ideal sources for biomedical information include general or systematic reviews in reliable, independent, published sources, such as reputable medical journals, widely recognised standard textbooks written by experts in a field, or medical guidelines and position statements from nationally or internationally reputable expert bodies. It is vital that the biomedical information in all types of articles be based on reliable, independent, published sources and accurately reflect current medical knowledge."
"Studies can be categorized into levels in a hierarchy of evidence, and editors should rely on high-level evidence, such as systematic reviews." Also, "Several formal systems exist for assessing the quality of available evidence on medical subjects. Here, "assess evidence quality" essentially means editors should determine the appropriate type of source and quality of publication. Respect the levels of evidence: Do not reject a higher-level source (e.g., a meta-analysis) in favor of a lower one (e.g., any primary source) because of personal objections to the inclusion criteria, references, funding sources, or conclusions in the higher-level source. Editors should not perform detailed academic peer review." Daniel Power of God (talk) 08:25, 3 September 2024 (UTC)Reply
Thanks for replying.
I'm presuming for the first bullet point that you're showing me that the meta-analysis was published in a peer-reviewed journal and is included in PubMed as well (You might just be stating info on the study). I'm not trying to label the whole journal as unreliable, only the specific review by Belladelli.
On academic peer-review, I believe it is quite evident that it does not completely prevent the publication of nonsense and unreliable studies, even in reputable journals. "However, peer review does not prevent publication of invalid research" - Peer review
A good example is studies containing pseudoscientific racism. In the topic on the questionable validity of Belladelli 2023, I discussed a study cited in Belladelli's review by R. Lynn, an infamous racist. Let's take a small excerpt from the introduction: "Rushton (2000) has applied r–K life history theory to the three major races of Homo sapiens: Mongoloids (East Asians), Caucasoids (Europeans, South Asians and North Africans), and Negroids (sub-Saharan Africans)... Rushton has supported his theory by documenting that the three races differ in brain size, intelligence, length of gestation, rate of maturation in infancy and childhood, and a number of other variables including penis length and diameter." There are a lot of problems in this little excerpt, especially concerning the concept of race. As previously examined, the main body of evidence in Lynn's study is based on a non-credible website reporting penile length by country. There are many websites that spread racism by falsifying reported penile length measurements (and sometimes IQ) by geography or country. Supposedly the numbers were pulled from many different reliable sources, but they were simply fabricated: if you try to Google or otherwise search for these numbers, they don't appear anywhere. The countries that actually have studies all report measurements different from Lynn's figures and contradict Lynn's hypothesis. Not surprisingly, compared with real studies, Lynn reported inflated lengths in Europe and Africa and deflated lengths in Asia. This study, filled with racist pseudoscience, was included even after peer review. Another excerpt from the introduction: "Rushton proposes that these colder environments were more cognitively demanding and these selected for larger brains and greater intelligence. There is widespread consensus on this thesis..." Lynn is simply lying here; there was no widespread consensus except among the pseudoscientific racist community. His study was published in Personality and Individual Differences: "Personality and Individual Differences is a peer-reviewed academic journal published 16 times per year by Elsevier."
Another example is the journal Intelligence, "According to the New Statesman in 2018, the "journal Intelligence is one of the most respected in its field" but has allowed its reputation "to be used to launder or legitimate racist pseudo-science." One example is Templer and Rushton's IQ, skin color, crime, HIV/AIDS, and income in 50 U.S. states.
On this being classified as a medical source and judged by WP:MEDRS, I don't believe this is of much importance. I would simply claim that the editnotice is incorrect in classifying it this way. Definition of medical, according to Oxford Languages: "relating to the science of medicine, or to the treatment of illness and injuries." According to the biomed essay, "Biomedical information is information that relates to (or could reasonably be perceived as relating to) human health."
Since I don't believe this would change anything, I'll concede and allow using MEDRS to judge whether or not Belladelli's review should be included.
"Respect the levels of evidence: Do not reject a higher-level source (e.g., a meta-analysis) in favor of a lower one (e.g., any primary source) because of personal objections to the inclusion criteria, references, funding sources, or conclusions in the higher-level source. Editors should not perform detailed academic peer review."
I agree with this. I would say that I am rejecting this meta-analysis not because of any personal objections, but because of its irremediable flaws that make its findings nearly certainly incorrect. My issue with Belladelli's review is that it has an overwhelming number of objective problems that certainly influenced his findings. On editors not performing detailed academic peer review: "if a rule prevents you from improving or maintaining Wikipedia, ignore it" (WP:IAR). This is quite contrary to WP:APPLYRS: "...editors are meant to interrogate their sources." I would say that I examined the study's reliability for the Wikipedia page and did not perform academic peer review, but you could also say those two things are the same, depending on your views. If we are not to scrutinize our sources, we might as well include Lynn's study, which reports purely false data from a non-credible website, and start citing the many racist, pseudoscientific studies that have been published in reputable, peer-reviewed journals. (FYI, the reason I keep using racist pseudoscience / pseudoscientific racism as an example is that it is quite similar to, and is also cited in, Belladelli's review.)
If a study is unreliable, it does not meet WP:RS, and therefore should not be included. Belladelli's meta-analysis has irredeemable problems that make it unreliable. This is my basis for removing it.
I am curious whether you believe the review is reliable or unreliable, and to what extent.
Since you have not edited the Wikipedia page, maybe we have reached an agreement and you do not need to respond. If not, we can continue the discussion.
WP:COMMON "Wikipedia has many policies or what many consider "rules". Instead of following every rule, it is acceptable to use common sense as you go about editing. Being too wrapped up in rules can cause a loss of perspective, so there are times when it is better to ignore a rule. Even if a contribution "violates" the precise wording of a rule, it might still be a good contribution. Similarly, just because something is not forbidden in a written document, or is even explicitly permitted, doesn't mean it's a good idea in the given situation. Our goal is to improve Wikipedia so that it better informs readers." Way6t (talk) 04:48, 4 September 2024 (UTC)

What I did previously can be understood as presenting the apparent facts of the matter: Belladelli et al. (2023) is a peer-reviewed, systematic review and meta-analysis. Human penis size, per its own editnotice, is a medical/scientific article that has WP:MEDRS as its de facto standard and medical/scientific sources are what should be used for it; also, stated high-quality evidence for the article explicitly includes systematic reviews. Both WP:MEDRS and WP:RS indicate that systematic reviews are ideal sources. More specifically, as per WP:MEDASSESS, editors should rely on high-level evidence like systematic reviews; also, a meta-analysis is a higher-level source. As Belladelli et al. (2023) is a peer-reviewed, systematic review and meta-analysis, it, thus, meets the WP:RS and WP:MEDRS standard for inclusion in the article. Not only does it apparently meet the standard, but, among other sources that meet the standard, it is regarded and is to be understood as a higher-level source within that standard.

With that said, as indicated by WP:MEDASSESS, there should be a respect for the levels of evidence, and higher-level sources, such as meta-analyses like Belladelli et al. (2023), should not be rejected in favor of lower-level sources that also meet the standard. Yet, the initial justification presented for the removal of Belladelli et al. (2023) seems to have been informed by, centered around, and/or based on the apparent non-expert blog (WP:SPS) authored by Cremiux Recueil, which apparently does not meet the standard of WP:RS. Now, the justification presented for the removal of Belladelli et al. (2023) seems to have shifted to claiming it does not meet WP:RS, supplemented by (WP:SUPPLEMENTAL), among others, WP:COMMONSENSE.

As you stated in response to part of WP:MEDASSESS, "I would say that I am rejecting this meta-analysis not because of any personal objections, but because of its irremediable flaws that make its findings nearly certainly incorrect." This appears to be an objection to the inclusion criteria and the conclusions made in Belladelli et al. (2023). By extension of that, other points that have been made appear to be efforts, though, perhaps, well-intended, to do what WP:MEDASSESS explicitly addresses: "Editors should not perform detailed academic peer review." Rather than continuing to do so, as previously indicated, it might be best to find a reliable source (WP:RS) that performs the detailed academic peer review, explicitly critiquing Belladelli et al. (2023), and simply add it after the sourced content from Belladelli et al. (2023) in the article.

As per WP:IARMEANS's "This page in a nutshell", which includes the section for WP:COMMONSENSE, "Editing Wikipedia is all about making improvements, not following rules. However, WP:IAR should not be used as a reason to make unhelpful edits." With Belladelli et al. (2023) apparently having been sufficiently demonstrated to be a higher-level source that meets the WP:RS and WP:MEDRS standard for inclusion in the article, its removal can be viewed and understood as being unhelpful; systematic reviews and meta-analyses are ideal, high-quality evidence and higher-level sources that editors should rely on, and include in medical/scientific articles such as this, in order to improve its overall quality. Daniel Power of God (talk) 11:53, 6 September 2024 (UTC)

"As Belladelli et al. (2023) is a peer-reviewed, systematic review and meta-analysis, it, thus, meets the WP:RS and WP:MEDRS standard for inclusion in the article."
It seems you have misinterpreted my argument. I agree that Belladelli et al. 2023, on the surface, meets WP:RS and WP:MEDRS, since it is a review published in a peer-reviewed journal. "However, peer review does not prevent publication of invalid research." I do not agree that we should assume the reliability of studies published in peer-reviewed journals, especially when they are quite obviously unreliable. As previously discussed, nonsense and pseudoscience can be, and have many times been, published in reputable, peer-reviewed journals; there are many examples, and I have already shown you a few. Claiming that Belladelli 2023 being a peer-reviewed meta-analysis somehow automatically makes it reliable is just a surface-level interpretation of WP:RS and does not refute any claims of its unreliability. In no way does this sufficiently prove that Belladelli 2023 meets WP:RS, and it does not address my claim that there are many obvious, significant problems in Belladelli 2023 that make it unreliable.
"Yet, the initial justification presented for the removal of Belladelli et al. (2023) seems to have been informed by, centered around, and/or based on the apparent non-expert blog (WP:SPS) authored by Cremiux Recueil, which apparently does not meet the standard of WP:RS. Now, the justification presented for the removal of Belladelli et al. (2023) seems to have shifted to claiming it does not meet WP:RS, supplemented by (WP:SUPPLEMENTAL), among others, WP:COMMONSENSE."
To make this clear, the justification has always been that there are many very obvious, significant problems in Belladelli 2023 that make its findings practically incorrect, so by using common sense and the literal meaning of a source being reliable, it does not meet WP:RS. I have acknowledged that it is a non-expert blog I cite, and I stated that my intention was to quote parts from that blog so I didn't need to retype the points already mentioned.
WP:SPS needs to be used in context. The blog is about objective problems found in Belladelli 2023, and the author never claims to be an expert in this field, nor did he need to be an expert to spot the obvious flaws within the study. WP:SPS is about citing info from SPS to publish on Wikipedia: "Anyone can create a personal web page, self-publish a book, or claim to be an expert. That is why self-published material such as books, patents, newsletters, personal websites, open wikis, personal or group blogs (as distinguished from newsblogs, above), content farms, Internet forum postings, and social media postings are largely not acceptable as sources."
Anyone can make a blog, claim to be an expert in a field, and start making false claims. If those false claims were cited on Wikipedia, it would be problematic. But in this context, the blog being an SPS does not make any of its points less valid or really affect anything. The problems in Belladelli 2023 mentioned in the blog are clearly factual, because you can check the study yourself. I have already discussed why requiring an expert to identify and understand the obvious problems found in the study is irrational. The problems in Belladelli 2023 and their impact on its findings are easily ascertained by using common sense.
"This appears to be an objection to the inclusion criteria and the conclusions made in Belladelli et al. (2023). By extension of that, other points that have been made appear to be efforts, though, perhaps, well-intended, to do what WP:MEDASSESS explicitly addresses: 'Editors should not perform detailed academic peer review.'"
This point stems from the initial mislabeling of Belladelli 2023 as a medical study and from not using proper context. "Biomedical information is information that relates to (or could reasonably be perceived as relating to) human health." (WP:BMI) Biomedical/medical info is info that relates to human health; thus a review of trends in penile length does not qualify as medical info. The whole article on human penis size may be somewhat medically related, but Belladelli 2023 is not, so using WP:MEDRS to judge Belladelli 2023 is already a mistake.
Of course, editors generally should not perform detailed academic peer review on medical studies, as they likely do not have enough knowledge to make a proper judgment, especially given the complexity of certain medical fields. They should leave that to the researchers specialized in their field. "Detailed academic peer review" can also be interpreted differently, but let's assume it means scrutinizing a study for very clear unreliability, even though completely prohibiting that sounds incredibly nonsensical. If a study is very clearly unreliable, should we not exclude it?
Belladelli himself displays a blatant lack of expertise, which causes many of the problems previously discussed (e.g. treating pendulous and total length as the same measurement). Belladelli has only one study specifically on penile length, which is Belladelli 2023. There is no reason not to interrogate his only study on penile length, especially when his findings are almost outrageous, contradict common sense, and conflict with other studies. "Editors should not perform detailed academic peer review" is not very applicable to this situation for these reasons, particularly because Belladelli 2023 should not be classified as a medical study to begin with. Even if it were applicable: say an editor examined a study and found many glaring problems that made the conclusions almost certainly false; wouldn't that qualify under WP:IAR as grounds to remove the study? Obviously, including unreliable studies on the Wikipedia page is unhelpful and would misinform readers.
"As per WP:IARMEANS's "This page in a nutshell", which includes the section for WP:COMMONSENSE, "Editing Wikipedia is all about making improvements, not following rules. However, WP:IAR should not be used as a reason to make unhelpful edits." With Belladelli et al. (2023) apparently having been sufficiently demonstrated to be a higher-level source that meets the WP:RS and WP:MEDRS standard for inclusion in the article, its removal can be viewed and understood as being unhelpful; systematic reviews and meta-analyses are ideal, high-quality evidence and higher-level sources that editors should rely on, and include in medical/scientific articles such as this, in order to improve its overall quality."
I have sufficiently demonstrated that peer-reviewed meta-analyses can be and have been unreliable, and therefore do not automatically meet WP:RS, and that assuming the reliability of peer-reviewed meta-analyses is irrational and problematic. From Wikipedia: "However, peer review does not prevent publication of invalid research."
I think the main reason for our disagreement may be that you have not looked at the study yet, as your main reasoning for including it is simply assuming its reliability because it is a peer-reviewed meta-analysis. It does confuse me slightly, because you never comment on the work itself, only on the properties of the work. I am trying to discuss the reliability of the work itself, not any properties surrounding it (e.g. whether it is peer-reviewed or not).
"Being too wrapped up in rules can cause a loss of perspective" - WP:UCS
I believe this may be what is happening here. Belladelli 2023 and many other peer-reviewed meta-analyses like it are demonstrably, practically incorrect, which makes them literally unreliable. Including unreliable studies is unhelpful to Wikipedia readers, so they should be removed.
In the revision history, you comment that we should find a reliable source that performs detailed academic peer review on Belladelli 2023. This will almost certainly never happen, as Belladelli 2023 is a very low-profile meta-analysis and has currently been cited by only one other study in PubMed. Is it not illogical to require a "reliable source" to comment on plainly obvious errors?
"Our goal is to improve Wikipedia so that it better informs readers. Being able to articulate "common sense" reasons why a change helps the encyclopedia is good, and editors should not ignore those reasons because they don't reference a bunch of shortcut links to official policies. The principle of the rules—to make Wikipedia and its sister projects thrive—is more important than the letter. Editors must use their best judgment." (WP:UCS)
I think this matter is very simple: demonstrably unreliable sources should not be cited. Way6t (talk) 04:15, 9 September 2024 (UTC)

WP:MEDSAY copyediting


Hello, all,

This article needs some copyediting to comply with WP:MEDSAY. The example given in that guideline looks like this:

Before: An uncontrolled survey involving 132 experienced long-distance backpackers on the Appalachian trail in 1997 concluded that washing hands after defecating reduces the incidence of diarrhea in the wilderness.

After: Washing hands after defecating reduces the incidence of diarrhea in the wilderness.

I think that making these changes will be relatively easy, and I hope that one of the regular editors of this article will be interested in doing this. The result should be an article that is more encyclopedic. WhatamIdoing (talk) 02:15, 12 September 2024 (UTC)

Semi-protected edit request on 12 September 2024


Add the following information: "In a study conducted by Park Kwan-Jin and six others from the Seoul National University College of Medicine in 1998, the average erect penis length of Korean men was found to be 14.06±1.49 cm, with an average girth of 12.11±1.1 cm. This study was based on a sample of 287 men." (Source: [1]) Tomath6624 (talk) 13:47, 12 September 2024 (UTC)

For reference, in this document, 'functional length' refers to the BPEL (Bone-Pressed Erect Length) measurement method, which is commonly used worldwide. Tomath6624 (talk) 13:51, 12 September 2024 (UTC)
  Not done: Where do you want the information to go? Cowboygilbert - (talk) ♥ 04:23, 18 October 2024 (UTC)

Wtf is going on with this article?


Saw this in recent edits and thought I was seeing things. Why does there need to be a section about black men and porn? Why does there need to be a photo from an account that got renamed from a grossly inappropriate name and has been posting inappropriate material that has had to be removed? Why does this article even exist? Can we just merge the actually meaningful information into the "penis" article and trash the stupid crap that really serves no intellectual purpose? Please? 216.168.91.9 (talk) 11:27, 16 October 2024 (UTC)