
Wikipedia talk:Identifying reliable sources (medicine)

From Wikipedia, the free encyclopedia

Feedback for sum-up diagram

classification of the different types of scientific literature

I am creating a new section to collect the feedback you may have about this diagram. That way, you don't have to read the previous conversation (in the section "Proposal of sum-up diagram"), where we discussed earlier versions of the diagram. Please only provide comments on how the diagram could better illustrate and sum up the MEDRS guidelines; this section is not the right place to suggest changes to the guideline itself. I'm looking forward to reading your suggestions.

Proposal of sum-up diagram

classification of the different types of scientific literature

Hello, I am trying to build a diagram that visually sums up this page. Can you please provide your feedback and suggestions for changes?

Note A: I know that there is no mention of letters to the editor on this page, but I took the liberty of adding them to the diagram, as they have been used multiple times for disinformation (e.g. 1, 2).

Note B: all images either come from Wikimedia or ChatGPT. Galeop (talk) 09:49, 1 February 2025 (UTC)[reply]

@Galeop, how do you envision using this?
I think that this is a bit too one-size-fits-most, because Wikipedia:Biomedical information#The best type of source depends on the claim that the source is supposed to be supporting. "Some research has been done on ____" needs a different kind of source than "Wonderpam cures cancer".
Also, this guideline is about medical information. Wikipedia:Identifying reliable sources (science) was an attempt to broaden these principles to non-medical scientific content, and it was not accepted. WhatamIdoing (talk) 01:58, 3 February 2025 (UTC)[reply]
While I agree with your assessment that this seems a little too "one size fits all", I'd like everyone to imagine their first time editing a medical page and how daunting WP:MEDRS can look. I personally found it very hard to wrap my head around the whole "use tertiary sources, but also those don't exist for some topics" when starting out. I think not having more basic, easier-to-understand versions of MEDRS does Wikipedia a disservice (yes, even at the risk of leaving out some important details).
That's my 2 cents. However I do not know the ins and outs of MEDRS to comment much on the infographic unfortunately. IntentionallyDense (Contribs) 02:46, 3 February 2025 (UTC)[reply]


Thank you @WhatamIdoing and @IntentionallyDense.
I understand the one-size-fits-all problem. I have narrowed the scope of my annotations down to medical claims only, and I have added a note that sources which are invalid under MEDRS may still be acceptable for non-medical claims.
I also share @IntentionallyDense's opinion that the lack of a more basic, easier-to-understand version of MEDRS does Wikipedia a disservice. It's better to give a nutshell diagram that communicates the broad lines and makes readers immediately understand that the MEDRS guidelines are not just "obvious common sense".
Do you have comments on the new version? Do you still think it's too one-size-fits-all? Galeop (talk) 03:45, 6 February 2025 (UTC)[reply]
MEDRS scientific information flow
I find that a lot of the information is not in the same location at each step (e.g. information about peer review appears in the title of the grey-literature box, but as a subtitle in the other boxes). I think you should standardise each box. I also think you would be better off separating the iconic/diagrammatic elements into a separate layer below the list of literature types. Daphne Morrow (talk) 05:39, 6 February 2025 (UTC)[reply]
@Daphne Morrow you're an artist! Your diagram is indeed much clearer.
I also like what you did for the Popular Science category; with the arrow pointing to it from all categories. It's a good reminder that popular press often prematurely cites pre-prints or working papers.
I've uploaded the LibreOffice file of my diagram HERE, so that you can take from it all the icons you might need.
A few comments:
- Grey literature: in my original diagram I was only talking about "non-peer-reviewed grey literature", not all grey literature. Indeed, some grey literature is released by institutions with an internal peer-reviewing process. On second thought, it's probably better to avoid the term "grey literature" and simply name this category "non-peer-reviewed writings" instead. In that case, it's also better to remove the mention of conference proceedings published as supplements, and of animal and petri-dish studies.
- Regarding self-published books, as pointed out by @CFCF, my annotation was unclear. I should have mentioned that the publisher is NOT a recognized scholarly publisher. I've updated my image to reflect this.
- For the arrow from "non-peer-reviewed writings" (a.k.a. grey literature): only pre-prints make it to the "primary literature" category, so the arrow should originate from pre-prints.
- I don't understand what you mean by "grey lit and early stage research informs, study focus and methodological design".
- It's a detail, but I meant the funnel icon to symbolize "synthesis". As there is no synthesis from "non-peer-reviewed writings" to "primary studies", it's better not to put the funnel there.
Aside from those comments, I think it's really great. Galeop (talk) 13:50, 8 February 2025 (UTC)[reply]
I would do something more like this: Daphne Morrow (talk) 07:06, 3 February 2025 (UTC)[reply]


I love it! The only drawback is that tertiary literature seems to be preferred over secondary literature (even though it's the opposite), as it sits on top. But maybe a simple comment on the diagram could correct that perception. Galeop (talk) 07:17, 3 February 2025 (UTC)[reply]
Thanks! I have an idea for how I could switch them, I might have another go tomorrow. Daphne Morrow (talk) 07:43, 3 February 2025 (UTC)[reply]



MEDRS summary diagram
New version here.
I would like feedback on whether I should include more literature types (eg clinical practice guidelines), whether animal studies / in vitro belong in the bottom section, and whether there are any other kinds of information I should add. Daphne Morrow (talk) 13:02, 4 February 2025 (UTC)[reply]
Thank you so much for your contribution @Daphne Morrow
I think both our diagrams complement each other. Your diagram ranks the sources for medical claims on Wikipedia. My diagram represents the flow of scientific literature, and mentions what kind of literature is preferred for medical claims.
There may be a need for both:
1) I am convinced there's a need for an illustration of the flow, as most people have never heard of the categories of scientific literature. But maybe my diagram should be lighter?
2) There may well also be a need for a ranking of sources.
Suggestions for your diagram:
- Ranking sources within "grey literature" and "tertiary literature" is quite difficult, however, as those categories are not codified/standardized. I think it would be less risky to lump their respective items together in one big box, without attempting to rank them. So I suggest a "grey literature" box with an unsorted list of items, and the same thing for "tertiary literature".
- Also, I suggest naming it "pyramid of sources for medical claims" rather than "pyramid of evidence". Galeop (talk) 08:42, 5 February 2025 (UTC)[reply]
Hi Galeop. Thanks for this. I will keep this in mind and wait to see if others have input too. Daphne Morrow (talk) 09:27, 6 February 2025 (UTC)[reply]
I think stylistically the pyramid is fine, however the content needs to be reworked before it can have any chance of being included.
Just as Galeop says, grey literature is a very broad category, which contains basically all literature that lacks a PMID, DOI, or ISBN (and, depending on the definition, some that have DOIs, such as preprints). This includes some of the highest-quality reports, be they HTAs, meta-analyses, or reviews by government agencies, or major reports by the WHO, CDC, FDA, etc. These do not run through academic peer review, although they often employ many other types of peer review. These are among the best sources out there - both from a scientific vantage point, and even more so for building Wikipedia content.
This causes issues when you rank sources in a pyramid. It isn't always as clean as we try to make it. There is also the issue of a low-quality meta-analysis being far worse than a high-quality RCT. The current guideline includes two pyramids to show that there are different rankings, and one of them places clinical practice guidelines at the top. Sometimes clinical practice guidelines can not only be the best evidence, they can define the condition. To state that a meta-analysis is better in those cases is ... how should I put it - nonsensical.
I think you could probably get quite some guidance by reading the section two spots up on this very page WT:MEDRS#Improving the "referencing a guideline" illustration. CFCF (talk) 23:13, 6 February 2025 (UTC)[reply]
Also, in vitro studies are not grey literature, and if you want to be that nit-picky you're missing in silico studies below in vitro, and umbrella reviews above meta-analyses. And what you call "literature reviews" are often referred to as "narrative reviews" or "narrative literature reviews". You also have scoping reviews, which should be placed above narrative reviews but below systematic reviews. And "other reviews" is to me not a useful category.
And what do you mean by researcher's book - do you mean self-published? Or just any book? There are biomedically tangential topics where a book is the best resource, for instance psychological, sociological, or anthropological books that are directly linked to medical outcomes. For instance, you have Goffman's Stigma: Notes on the Management of Spoiled Identity, which is probably the most cited work extrapolated to HIV-related stigma, a field where MEDRS would apply. What differentiates a medical handbook from a researcher's book? Is it just that it has handbook in the name? CFCF (talk) 23:20, 6 February 2025 (UTC)[reply]
And what about outbreak reporting and mortality data? I realize the COVID-19 pandemic article is completely non-compliant with MEDRS where it reports on deaths. However, I don't think one should insist on only academic and government sources there either. CIDRAP does excellent reporting, ... I need to take a look at that as well. Things change when you're gone from Wikipedia. CFCF (talk) 00:02, 7 February 2025 (UTC)[reply]
>I realize the COVID-19 Pandemic article is completely non-compliant to MEDRS when it reports on Deaths.
Ironically, I feel like everything about covid is entirely non-compliant with any logical views of validity and what cause and effect are.
That's not the reason I'm writing here, though. I wanted to ask about RCTs, and this is the only chain of comments on this page mentioning them, so I suppose it goes here.
The main point I wanted to make is that I don't think it is exactly undisputed that RCTs are the end-all be-all for the validity of medical literature. I am no expert, of course, and this is only my gist of it, but it seems that even if that is the accepted SoP in medical literature, great minds outside of medicine have looked things over and are asking a lot of questions for valid reasons.
See:
https://par.nsf.gov/servlets/purl/10059631
https://www.sciencedirect.com/science/article/pii/S0277953617307359
I don't think any of the authors are specifically in medical fields, but last I checked, medicine and healthcare do not have any special interpretation of what cause and effect mean. The authors of those papers are highly respected and well known, and qualify as experts if experts do indeed exist.
I don't exactly have a question or any specific suggested change here; I just thought I would mention that just because something was the accepted "fact" twenty years ago, the thing about science is that it is usually updated over time as more things are understood and studied and more people input their thoughts. Just my .02 Relevantusername2020 (talk) 04:06, 8 February 2025 (UTC)[reply]
I could be wrong, but I believe these weaknesses of RCTs in specific circumstances are part of why MEDRS prefers meta-analyses and systematic reviews as sources. They tend to evaluate the strength of RCTs against the strength of other studies. Daphne Morrow (talk) 06:53, 8 February 2025 (UTC)[reply]
Thank you for your comments @CFCF
About researchers' books: Daphne and I indeed meant books that are self-published or published by non-scholarly publishers. I am thinking, for instance, of star scientists who write books for the general public and mix peer-reviewed results with their own results never published anywhere else. Typically such books are published by publishing houses that have nothing to do with academia (but everything to do with selling lots of books).
About covid mortality data: although I do agree with your point, I think it's okay if this pyramid doesn't feature a category for them. Indeed, such data is more "raw data" than "evidence" (i.e. results from analysis). Galeop (talk) 14:25, 8 February 2025 (UTC)[reply]
I agree, Galeop - we do not need to include specifics on Covid-19 mortality data here. As for the comment by Daphne Morrow on RCTs and MEDRS - I think you are precisely right. Relevantusername2020: there are also other issues with RCTs, in part because they are very expensive, and this steers which topics are explored. I would suggest that anyone with an interest in the topic read Justin Parkhurst's The Politics of Evidence [1], which is OA.
As for the points on high-quality grey literature and "other reviews" - I think those must be addressed before we can suggest including any infographic. CFCF (talk) 14:34, 8 February 2025 (UTC)[reply]
@CFCF, from your experience, would you say that a clinical practice guideline could be considered tertiary literature? I know it's not published by publishing houses such as university presses or reference-work publishers; but aren't those clinical practice guidelines mostly based on published primary and secondary studies? (It's an honest question; I really don't know the answer.) Galeop (talk) 07:57, 11 February 2025 (UTC)[reply]
Coming back to this, I am not so sure it isn't tertiary literature. It depends, and I'm not sure it matters that much - but rather it points to the somewhat arbitrary and artificial divide between secondary and tertiary literature in highly technical fields such as medicine. CFCF (talk) 11:39, 20 February 2025 (UTC)[reply]
Probably not. Generally speaking, in wikijargon, tertiary sources are encyclopedias, dictionaries, and other sources that provide brief, general information summarizing pre-existing knowledge without adding anything of their own. This includes textbooks for children but not necessarily at the university level (and rarely at the graduate level). It sometimes includes bibliographies, directories, lists, timelines, and databases that provide bare facts, but not something like OMIM (whose entries usually include multiple paragraphs of custom description).
A clinical practice guideline adds 'something of its own', namely a recommendation for/against something. That makes it a secondary source. WhatamIdoing (talk) 05:00, 14 February 2025 (UTC)[reply]
Thank you for this clarification @WhatamIdoing
I've added Clinical Practice Guidelines as a separate box in my attempt to illustrate the flow of scientific literature (which is a different diagram from the pyramid currently being debated, which attempts to create a hierarchy). Any comments?
The flow of scientific literature
Galeop (talk) 07:05, 16 February 2025 (UTC)[reply]
Overall, I think I'm not the best person to tell you what's useful to a newer editor.
I suspect that what's useful to a newcomer is going to depend partly on their background. For example, med students get some explicit training on these things, so they already know some of this. Other people, even with equal or more academic accomplishments, don't know what some of these words mean. WhatamIdoing (talk) 00:50, 18 February 2025 (UTC)[reply]
Hopefully all involved and interested get pinged from this. I'm glad to see there is a lot of care involved in getting this right and keeping things high quality.
I have a lot of links I could share here, and will share examples if needed, but I guess my overall point is that, due to the nature of Wikipedia combined with the reality of healthcare, health research, and publishing incentives (i.e. publish or perish), there are a lot of retractions, and I suspect even more papers that should be retracted. There are a lot of questions about things that have been established fact for a long time, and even more questions about things that never reached a conclusion satisfactory to anyone involved. If this were any other encyclopedia, it would be a simple answer: wait till the experts establish the official textbook definitions and views and whatnot. Obviously that is not the case. This is much less of an issue in things that are strictly physical in nature, but when it comes to brain things, the facts are on shaky ground, to put it mildly.
I am not a professional. I am not always confident about the things I say (though you would never know that, lol), but on this topic I am 100% confident.
Read this for a nice breakdown of some easy fallacies that we have all fallen victim to: https://people.well.com/user/doctorow/metacrap.htm
I guess I don't have an overall point or a nice conclusion, which is kind of appropriate for the overall content of this message, I suppose. Overall, I think it is going to have to rely on base logic. Barring any major change in policy on behalf of Wikimedia: if the power that comes from knowing how to "set the story" for whatever it may be hasn't quite hit you yet - not sure how it hasn't after 2020 - I suggest taking a step back and realizing that what is posted here on whatever issue has a lot of validity to a lot of people, so make sure what you add holds up under scrutiny - scrutiny the cited sources may not have received.
On that note, and specifically in regard to that last message about med students receiving explicit training in these things: not necessarily. I mean, I am entirely self-taught in the things I know, but there are a lot of experts in a lot of fields whom I would consider numerically illiterate. That is, they are terrible at thinking critically about the underlying information that statistics are communicating. This is the source of a lot of issues with a lot of health studies. It doesn't take an expert to figure out where the fault lies a lot of the time, though; usually it is as simple as seeing an angle the original authors did not. A poor example, but here is a reddit comment* I made a while back doing just that. For a much better written explanation of statistics and how they (might) lie (as the old saying goes): https://unherd.com/2024/05/the-danger-of-trial-by-statistics/
  • Forgive my language, attitude/excessive sarcasm/etc and weird punctuation/grammar/etc. I am trying to do better :D
Also that account is gone. This is my new one if you want to message me for whatever reason, I'm always happy to chat and I have a lot of links: irrelevantusername2024 Relevantusername2020 (talk) 15:57, 18 April 2025 (UTC)[reply]
Cory Doctorow is occasionally around and might be interested in knowing that you read his 2001 post. WhatamIdoing (talk) 19:40, 19 April 2025 (UTC)[reply]
Thank you for your comment @Relevantusername2020. I agree with your overall message, and I think the problems you point out highlight the importance of an evidence-based academic publishing process. Namely, they stress the importance of meta-science studies in guiding the policies of academic journals (as is already the case). I think the metascience study here articulates quite well some of the problems you pointed out: https://doi.org/10.1007/s11192-018-2969-2
About the perverse incentives of publish or perish: I think that, ironically, high-impact-factor journals are best placed to fight them ("ironically" because such journals are a main source of those perverse incentives). Indeed, only such journals have the clout to "punish" the breaches of scientific ethics that the publish-or-perish system incentivizes. By temporarily blacklisting authors who lied in their disclosures or findings, they create a strong deterrent to lying. Galeop (talk) 15:30, 27 April 2025 (UTC)[reply]
Interesting! I am not at all surprised he is a Wikipedian. I just might have to follow up on your suggestion, thank you Relevantusername2020 (talk) 01:07, 20 May 2025 (UTC)[reply]
Different blocks on the same row
About the pyramid with lots of blue lines: It would probably be interpreted as "this is slightly better than that". If that's not wanted, perhaps each main row should be split horizontally, like this stack of blocks? WhatamIdoing (talk) 04:37, 14 February 2025 (UTC)[reply]
Hi all,
I'd like your final opinion on my latest edit of the diagram here. The goal of this diagram is not to rank literature, but to illustrate the flow of scientific literature.
Any comments or objections?
classification of the different types of scientific literature
Galeop (talk) 18:44, 12 March 2025 (UTC)[reply]
@Galeop, I apologize for being so slow to respond. I have two questions:
  • Where in MEDRS did you get the idea for "Please prioritize reviews without COI"? Or is this your own idea?
  • Why does "Popular science" have its own box on the right, instead of being part of the first "Non peer-reviewed writings" column?
WhatamIdoing (talk) 21:11, 28 March 2025 (UTC)[reply]
Hi @WhatamIdoing, apologies for my slow response too. And thank you so much for your response.
- You're making a good point about COIs. I felt this was an overall principle on Wikipedia, but I am biased by my own readings*. I will gladly remove this recommendation if I've over-interpreted the rules on COIs.
- About popular science: I would not categorize it as belonging to "scientific literature", so that's why I've put it in its own box, separated from the rest by a vertical line. But your comment makes me realize that my annotation of clinical practice guidelines as "internal peer review" makes them sound better than they actually are (it's usually not proper independent peer review). I am thinking of replacing it with "self-organized peer-review only"; what do you think?
_________*Why I am biased by my own readings________
I am influenced by two umbrella reviews. In short:
- They found that COIs did NOT strongly influence the conclusions of clinical trials (risk ratio: 1.34; which has to be put in perspective with the fact that efficacy results and harm results were also more likely to be positive (RR 1.27 and 1.37 respectively), so there's not a big misalignment between results and conclusions). Lundh 2017
- But they found that COIs do strongly influence the conclusions of systematic reviews (RR: 1.98), even though COIs don't seem to influence efficacy results. This suggests that the results of systematic reviews with COIs are reliable, but that their conclusions have to be treated with caution (probably because of an excessive use of spin). Hence their conclusion: "We suggest that patients, clinicians, developers of clinical guidelines, and planners of further research could primarily use systematic reviews without financial conflicts of interest. If only systematic reviews with financial conflicts of interest are available, we suggest that users read the review conclusions with skepticism, critically appraise the methods applied, and interpret the review results with caution." Hansen 2019 Galeop (talk) 10:56, 13 April 2025 (UTC)[reply]
The problem with COI in MEDRS-related sources is that you typically can't do any research on an as-yet-unapproved drug without the manufacturer agreeing to give you the drug, so all the available sources have some level of COI.
The perception of COI also varies. We have occasionally had new editors suggest, for example, that experts have a COI against anything they do professionally, e.g., that the only non-COI sources about knee surgery are those written by knee surgeons.
Because of this, COI has not been a functional model for identifying reliable sources. Of course, even though we don't use COI to invalidate a source (even for non-MEDRS subjects, sources are allowed to be WP:RSBIASED), it is helpful to keep an eye out for COI. You want to be careful about what you write with a source. For example, if all the mammogram device manufacturers say that the latest greatest expensive mammogram machine is much better and everyone must buy millions of dollars' worth of new devices right away, you might decide to write a very soft sentence – perhaps "incremental improvements over time" instead of "dramatic advancements". WhatamIdoing (talk) 21:27, 21 April 2025 (UTC)[reply]
classification of the different types of scientific literature
Thank you for your response @WhatamIdoing
I have removed the mention of COIs.
Any other comments? Galeop (talk) 13:59, 25 April 2025 (UTC)[reply]
I have no other thoughts on this. WhatamIdoing (talk) 17:25, 25 April 2025 (UTC)[reply]
Thanks @WhatamIdoing. I'm going to post it in the main page, and wait for the new suggestions that it will surely spur :-) Galeop (talk) 08:20, 26 April 2025 (UTC)[reply]
I don't know what main page you're referring to, but I still have thoughts, lol. Apologies for my lateness.
From the main article: I'll let you review the section yourself, but what is written almost contradicts itself, or at least leaves room for interpretation. I would say that as long as it is a higher-quality "popular publisher", it is at least equivalent to a lot of academic publishers, and many times better, since - similar to Wikipedia - if they are publishing an article on a topic, that topic has reached a point of notoriety. Contrast that with journals incentivized to publish "new" findings, and I would conclude Wikipedia should be closer to a popular publisher than to a health/medicine journal. WP:NOR, WP:TECHNICAL, WP:NOTTEXTBOOK
>[P]opular science magazines such as New Scientist and Scientific American are not peer reviewed, but sometimes feature articles that explain medical subjects in plain English. As the quality of press coverage of medicine ranges from excellent to irresponsible, use common sense, and see how well the source fits the verifiability policy and general reliable sources guidelines. Sources for evaluating health-care media coverage include specialized academic journals such as the Journal of Health Communication. Reviews can also appear in the American Journal of Public Health, the Columbia Journalism Review, and others.
Pop science doesn't mean junk science. Rather than copy-and-pasting a wall of text and making this wall of text more wallier and textier, I'll instead link the page and end my turn. Relevantusername2020 (talk) 17:00, 26 April 2025 (UTC)[reply]
If you want to propose changing WP:MEDPOP, then I suggest doing it in another section.
(For the avoidance of doubt, and speaking as someone who has been editing Wikipedia's medical content for almost 20 years now: I think you have a very, very low chance of getting MEDPOP changed. But if you want to try, then please start a separate discussion in a new section.) WhatamIdoing (talk) 19:00, 26 April 2025 (UTC)[reply]
I hope it's okay if I just say a few words about this here. @Relevantusername2020 Although I understand your point, and I agree that some popular science journals are very good, I think that when editing a Wikipedia article it's best to go straight to the secondary source (I mean review articles). I personally read more popular science than academic journals, but when editing a Wikipedia article it is our "job" to follow a common methodology (i.e. the MEDRS guidelines). Going straight to review articles is therefore our "job", even though we wouldn't do that if we were just browsing the news during breakfast ;-)
About the preference for "new findings" that some academic journals indeed have (but not all; e.g. PLOS, BMJ Open, etc.), I would say that this preference is not as strong when it comes to review articles. Galeop (talk) 14:39, 27 April 2025 (UTC)[reply]
@Relevantusername2020, I've updated the picture and replaced "Beware of popular science that cites non-MEDRS sources" with "Do not cite popular science that relies on non-MEDRS sources". The previous version could have been understood as suggesting that such popular science is predatory, but that's not what I meant. Let me know if you have suggestions on how to phrase it better. Galeop (talk) 14:55, 27 April 2025 (UTC)[reply]
I apologize for the late response, I have no further suggestions at this time.
I'm simply trying to do what little I can to help slow the spread of incorrect health information since, as with everything, and regardless of the factuality of the underlying information, it is much more difficult to change belief in an idea than it is for the idea to become widely agreed upon.
I'll respond to your other reply here too:
This article from science.org says a lot of the same things, which is appropriate since science.org is more 'pop science', but kind of an in-between source.
As for your points about large journals, it reminds me of the debate about open source vs proprietary and the differences between centralized vs decentralized - and, appropriately, a back-and-forth I recently had on reddit about that topic. I think really there is no clear "better way", and there will always be a bit of a pendulum effect in the way things are done, in all of the relevant topics of discussion. Relevantusername2020 (talk) 01:06, 20 May 2025 (UTC)[reply]

WP:MEDRS is over-conservative and needs changing


Wiki medical articles would be better if they could include content sourced from published, peer-reviewed research.

Insisting on review sources only leads to over-safe articles that (a) are less valuable, up-to-date, and interesting than they could be (readers are adults), and thus (b) position Wikipedia weakly in the coming survival battle vs AI (e.g. Grok DeepSearch).

A middle way would be to include content sourced only from published content with a 'B' grade quality label, with content based on reviews labelled as 'A' grade quality.

Asto77 (talk) 00:00, 21 March 2025 (UTC)[reply]

I'm not commenting upon the rest, since I'm under editing restrictions, but using LLMs for medical information should be considered suicidal. tgeorgescu (talk) 00:39, 21 March 2025 (UTC)[reply]
The problem is that primary research does not reliably reflect the "accepted knowledge" Wikipedia should be summarising, and a good proportion of it is wrong or fraudulent, See WP:WHYMEDRS for more. Bon courage (talk) 02:36, 21 March 2025 (UTC)[reply]
A better middle ground, in my opinion, is to include important but unverified/primary content only in "Research" or similar sections, and also with proper attribution (e.g. "A study showed ..."). Bendegúz Ács (talk) 12:35, 19 April 2025 (UTC)[reply]
It would be "claimed" rather than "showed". Allowing this would open the door to a boatload of fraud and quackery (as well as just bad science). If something is "important", then that becomes apparent via secondary WP:MEDRS sourcing. Bon courage (talk) 12:48, 19 April 2025 (UTC)[reply]

fraud and quackery (as well as just bad science)

Obviously, these should still be filtered out but my impression has been that there is a lot of actually valuable research that cannot be reported on Wikipedia because there is only one study about the particular question and it is also unlikely to be ever repeated or reported in a meta-analysis or similar review article. Here are two examples: [2], [3].

If something is "important", then that becomes apparent via secondary WP:MEDRS sourcing

"Important" is relative here because as I mentioned above, some studies are important to gain insight into particular questions, but not important enough to be repeated just for replication. And even if they are, it could take a really long time so I feel like currently Wikipedia remains very outdated in many cases (which I think is the main criticism in this post). Bendegúz Ács (talk) 13:08, 19 April 2025 (UTC)[reply]
The problem is of course that "feeling outdated" is better than carrying statements that MMR vaccines cause autism or that ivermectin prevents COVID-19, as would have happened if MEDRS didn't enforce high standards. The purpose of this Project is only to be up-to-date with respect to accepted knowledge, and not to be cutting-edge at all. Bon courage (talk) 14:40, 19 April 2025 (UTC)[reply]
I do understand that the main point of having this policy is to prevent such content. And I agree that it serves that purpose very well. In any case, the current wording does not fully exclude what I described in my previous reply here, as it says: "If conclusions are worth mentioning (such as large randomized clinical trials with surprising results), they should be described appropriately as from a single study:".
What I feel is somewhat missing here is a distinction between heavily researched areas and more niche topics. An image caption in the policy says "A lightweight source may be acceptable for a lightweight claim, but never for an extraordinary claim.", perhaps this idea could be included somehow in the part discussing primary vs secondary sources too?
In many cases, Wikipedia is cutting-edge or at least up-to-date already though, for example, newly approved drugs. This policy does not really go into this, but the practice I've seen for those is to report the results of the phase III trial very precisely and not just a conclusion based on review sources ([4] is such an example).
Now of course the fact that it has been approved allows for this content, but then the question is could we somehow bring content about health effects of things other than pharmaceutical drugs closer to this up-to-dateness of approved drugs? Bendegúz Ács (talk) 11:33, 20 April 2025 (UTC)[reply]
It would open the very gates of Hell wrt fringe medical content and POV-pushing. Honestly, when the quality of medical content is one of the most conspicuous successes of this Project I fail to see why the foundations for that success come under such frequent attack. If people want to see what the current research on a topic is, they can jolly well use a search engine. Bon courage (talk) 11:39, 20 April 2025 (UTC)[reply]
I am not attacking it, I am just trying to further improve this (already great!) policy. Perhaps my first idea for an improvement would not even be about making it less conservative or more up-to-date, but about allowing research results and proofs to be shown more precisely, because in general, I think that's how you convince people, rather than just declaring something to be true without showing the proof.
And of course, using a search engine or an LLM is an option, but isn't that true for every other content type as well? For many types of content, I turn to Wikipedia rather than those, and that's partially because it is up-to-date. Bendegúz Ács (talk) 12:05, 20 April 2025 (UTC)[reply]
No, Wikipedia is meant to deal in knowledge, i.e. by diligently selecting only the best sources and summarising what they say. Search engines don't do this and LLMs make a poor job of it usually. We don't show results and proofs in detail because of this need for summary for a lay audience without dwelling on the minutiae of sources: WP:MEDSAY. Bon courage (talk) 05:35, 21 April 2025 (UTC)[reply]
Yes, they are inferior usually, but then my point is, how can one get a reliable summary of cutting-edge research? I think Wikipedia could fulfill that need while still not losing the high quality of medical content in general.
I agree that dwelling on the minutiae of sources is not useful in general, but I think presenting the concrete results is. So for example, the recommended text in WP:MEDSAY could be extended like this:
"washing hands after defecating reduces the incidence of diarrhoea by 89% in the wilderness".
Another good example is a statement like "alcohol is carcinogenic" and all the details presented in Alcohol and cancer (even though it may actually be too verbose currently). Bendegúz Ács (talk) 15:23, 24 April 2025 (UTC)[reply]
You're putting the cart before the horse, here. I don't think that the Wikipedia community sees 'a reliable summary of cutting-edge research' as on mission for the encyclopedia. You need to convince people that this should be done at all before jumping to how we might do it. And given the ongoing replication crisis in medicine, I think that will be difficult. MrOllie (talk) 15:32, 24 April 2025 (UTC)[reply]
Especially since cutting-edge research has become even more of a fools' playground than before with the advent of MAHA. Medical research is rife with fraud, grift, incompetence and irrationality - and this is about to get even worse. The last thing we want to be doing is empowering the antiknowledge movement on Wikipedia. Bon courage (talk) 04:42, 25 April 2025 (UTC)[reply]
“a reliable summary of cutting-edge research” is called a systematic review. (A less reliable summary is science & medicine news reporting.) In order to be reliable, there’s methodological processes required. Wikipedia editors can summarise the results of the systematic review, but wikipedia is not the place to do a systematic review. Daphne Morrow (talk) 10:33, 25 April 2025 (UTC)[reply]
See my other comments throughout this thread for more info, or feel free to send me a message/email/whatever - I am always happy to chat - but I disagree completely.
Wikipedia should not be anywhere near the first place for new research, in any field, to be communicated. Prior to the internet, encyclopedias - and for that matter various diagnostic textbooks, or even dictionaries - were not frequently updated.
That being said, I also see the other side of this: the benefit of the internet and freely accessible knowledge, and the specific point that conflicts with my previous reasoning, which is that more people having access to information to poke at the questionable bits is a good thing. Experts are not infallible. Experts also have conflicts of interest. So does literally anyone, including whoever might be writing this message to you. There are plenty of examples, but the most obvious is the simple phrase in academic publishing: "publish or perish". This isn't even getting into the self-fulfilling prophecy fallacy, or the one (which probably has a name) describing how, when a group of experts get together, they are prone to affirming each other's ideas whether or not those ideas are any good.
Anyway, I don't think any major policies need to change or whatever (not that I've read them, yet, but they are open in another tab, believe it or not), but the important thing is to question conclusions and not to shut down dissenting voices, because it doesn't take a rocket surgeon to pull out the bottom jenga stick if that stick is just some other angle of seeing things. It happens. If you are in a big crowd and all are going one way and nobody is seriously questioning things, I would suggest turning around, or at least taking a seat to see if they're heading for a cliff. Admitting mistakes sucks, and the amount of suck directly correlates with the size of the mistake. On that note, correlation does not equal causation, and there is a worrying number of ideas where the chronological relationship between those is switched, completely unbeknownst to the person who is doing the big think.
---
On that note, and so this is not another wall of text wherein I do a big think, I happened across a message about an upcoming vote on the Universal Code of Conduct for WMF and that seems on topic. See I told you I had things open in another tab! I have a lot of tabs.
Here's a link Relevantusername2020 (talk) 14:43, 19 April 2025 (UTC)[reply]
WP:MEDRS insists on having content from published, peer-reviewed research. And, for the reasons outlined in WP:WHYMEDRS, being published in a peer-reviewed venue is the very bare minimum, and usually not sufficient. Meta-analysis is when you can start to make definite statements, rather than "studies by Smith,[1] Appleton,[2] and Binnington[3] say chocolate is evil, while studies by Jones[4] and Dillinger[5] say chocolate is a healthy food". Headbomb {t · c · p · b} 14:36, 19 April 2025 (UTC)[reply]
I am very hesitant… first, saying “X study shows…” opens the door to original research (analysis and interpretation) by Wikipedians. If we are to pull back on MedRS, it is important that we keep it at “X study states…” and directly quote the conclusions from the study.
Secondly, DUE is a factor. If all we have is a single study that reaches a particular conclusion, that is not enough to say that the conclusion is DUE. We need to have additional studies that corroborate those conclusions. Blueboar (talk) 14:56, 19 April 2025 (UTC)[reply]
This is a good first-principles articulation of a principal reason why MEDRS exists. Bon courage (talk) 17:08, 19 April 2025 (UTC)[reply]
Two thoughts:
  • The level of sourcing for an article depends on what's available. For an ultra-rare condition like Oculodentodigital dysplasia, even a textbook or a systematic review is not actually very different from a primary source. On the other hand, if "what's available" is significant (e.g., for a common or heavily researched condition, like Hypertension), then the latest, greatest primary source should basically never be mentioned, or even hinted at.
  • A ==Research directions== section is supposed to be forward looking. The contents should sound like "In 2022, The Medical Organization called for research in X, Y, and Z" and not at all like "Dr I.M. Portant did a cool little pilot study".
WhatamIdoing (talk) 19:49, 19 April 2025 (UTC)[reply]
The first point is very important, I think; do you feel like the wording of the current policy explains that adequately?
As for ==Research directions== sections, why not have both? Don't you think it feels strange to say it calls for research without mentioning why it does so? Bendegúz Ács (talk) 11:39, 20 April 2025 (UTC)[reply]
  • I wonder if something like
                   Scale                                  Message       Audience           Transparency
    Appropriate    Limited posting                   AND  Neutral  AND  Nonpartisan   AND  Open
    Inappropriate  Mass posting                      OR   Biased   OR   Partisan      OR   Secret
    Term           Excessive cross-posting ("spamming")   Campaigning   Votestacking       Stealth canvassing
  • (from Wikipedia:Canvassing) could be adapted to MEDRS's approach to primary sources. The usual approach is that primary sources are more acceptable in rare diseases (best source you've got...), in sections/contexts that have little immediate bearing on real-life human health decisions (e.g., an explanation of drug mechanism, a famous historical paper, veterinary information), or to support an "ideal" secondary source by providing a fun fact or an expanded detail (e.g., the drug is safe,[review][review] even during pregnancy and breastfeeding[primary]).
  • I think that the reason for a proposed direction is often pretty obvious. You don't really need an explanation when the recommended direction is something like "treatments with fewer dangerous side effects" or "prevention". If it's obscure, of course, then an explanation is desirable, but it needs to be "recommended more basic research, because this will provide necessary background knowledge for future drug design", not "recommended more basic research, because I.M. Portant's lab just published this cool little pilot study WP:IN MICE, which gave us experts some hope that it's actually possible to do something more practical than just counting the number of people who get diagnosed each year".
  • Whether to include individual past studies (i.e., things that can't be a recommended direction for research to go in the future, because they've already happened), our experience is that the studies tend to be cherry-picked for contradicting the medical consensus (One small study found the opposite of all 172 other clinical trials ever run!) or the editor is engaging in self-promotion (Our lab published a paper!).
WhatamIdoing (talk) 01:45, 21 April 2025 (UTC)[reply]
TLDR: Attempting to be "on the cutting edge" of things, presenting "new" discoveries, is literally guaranteeing lower credibility because new discoveries are inherently based upon less evidence.
The thing with health studies - with any science that wants to be valid - is that it takes time and effort to do studies, for those studies to be made available for peer review, for that peer review to return to the original researchers, and for their correspondence to be further reviewed for flaws in logic; ultimately, most things are not ever really fully 100% settled. And, to my point, attempting to be "on the cutting edge" of these things, presenting "new" discoveries, is literally asking to have lower credibility, because new discoveries are inherently based upon less evidence. This actually causes many problems, and it extends beyond Wikipedia and health studies: once 69420 places have repeated what once appeared to be true, then in order for the one dude who noticed the flaw in the logic to disprove it, they must now deconstruct what 69420 different entities repeated and staked their institutional reputation on. So, point being, not only does attempting to "be on the cutting edge" lower the credibility of Wikipedia, but the more places - especially ones as widely known, frequently accessed and trusted as Wikipedia - repeat things that have not passed sufficiently through the gauntlet of the scientific method, the more it quite literally makes all of us dumber and causes a spiraling number of problems that are nearly incomprehensible.
>"our experience is that the studies tend to be cherry-picked for contradicting the medical consensus"
You have a mouse in your pocket? My experience is exactly the opposite.
Frankly, I don't think I have ever found a single publication - whether in professional, academic, or "for public consumption" sources - that actually calls discoveries into question. The closest I can think of are ones that do so in indirect ways that don't actually address the fundamental issues with the evidence, e.g. questioning the study on ethical grounds - which is valid, but you can bypass the debate over ethics if you simply invalidate the evidence entirely. If the evidence is false, there is no debate.
On that note, and as I believe I have stated elsewhere on this talk page, there is a monumental increase in studies being retracted for being based upon false evidence, so it must happen; yet from what I have read, which is a lot, the number of studies being retracted is still a drop in the ocean compared to the number of studies based upon questionable grounds.
The problem is, the people doing the research, are incredibly financially incentivized to continue it and to not invalidate similar research - since "healthcare" is a kajillion dollar industry (yet somehow the actual front line providers are underpaid and overstressed...) - and further, the general public is incentivized to believe in new discoveries for old problems because, "surely there must be some fix?!" - and additionally, the powers that be, whether they be government, industry, or academia, are further incentivized beyond the financial reasons because beneath all of the nonsense the important thing is to have hope for better.
The problem is when the sunk cost spent on "hope" is the cause of so many of the problems to begin with. Relevantusername2020 (talk) 12:41, 22 April 2025 (UTC)[reply]
The problems you describe are not Wikipedia's problems. Wikipedia's problems look more like this:
  • A hundred studies demonstrate that vaccines do not cause autism. All the meta analyses agree. All the systematic reviews agree. All the medical textbooks agree.
  • POV pusher says: But I wanna cite this one weird outlier study to say that everyone else is wrong!
WhatamIdoing (talk) 23:35, 22 April 2025 (UTC)[reply]
With a side argument of "this is the only study to have received this much media attention, so is obviously WP:DUE!". Bon courage (talk) 07:14, 23 April 2025 (UTC)[reply]
Yes. In those cases, I find it more effective to mention that the study exists (just that it exists, sometimes as distant as "On [date], Big University issued a press release about research done by the I.M. Portant lab which claimed a cure for cancer"). If necessary, editors make a note on their calendars to remove it once the media attention has died down. WhatamIdoing (talk) 18:47, 23 April 2025 (UTC)[reply]
What "coming survival battle vs AI"? That sounds like a very hypothesised, not proven, problem. Daphne Morrow (talk) 06:18, 21 April 2025 (UTC)[reply]

RSN discussion on Emergency Care BC

[edit]

Could editors with good MEDRS knowledge have a look at WP:RSN#Is Emergency Care BC an acceptable medical source? The question relates to expanding the Bupropion#Overdose section. -- LCU ActivelyDisinterested «@» °∆t° 21:42, 28 March 2025 (UTC)[reply]

So you aren't left on read, and since answering this reiterates what I am saying in the other messages I wrote on this page:
I don't know. I have read a lot of medical studies, a lot of studies in general, and just a lot of everything. There are plenty of mostly trustworthy sources that look sketchy, and plenty of terrible sources - with mostly biased takes or blatant lies - that have a shiny, professional-appearing website. Some of both groups, and every other possible configuration of what might or might not look to be a source you can trust, also have credentials that reinforce that you should trust them. If this is hard to follow, the point is that even with credentials, money, websites, friends with credentials in high places, and majority public support/belief, you still have to rely on logic and critical thinking. No human is infallible. None. Nobody. All humans dislike admitting they were wrong, especially after highly visible, expensive, or otherwise notable events.
Long story short is if you don't know if that source has information that checks out you probably should either not be editing pages where you need to cite that source (something I too am guilty of, no offense) or, what you really actually probably should do in that case is . . . read some more and make sure everything checks out.
This may not be surprising to you, but it was somewhat surprising to me: for all of the endless words about bad information coming from "mainstream media", "social media", or, as I stated, "professional publications" - which is absolutely true - Wikipedia is smack dab in the middle. It's amazing things work as well as they do. Relevantusername2020 (talk) 14:22, 19 April 2025 (UTC)[reply]

Anti-cholesterol food fad text and citation at Oat

[edit]

An IP editor (see User talk:2601:642:4F84:1590:9D68:3412:E33:7827) has just made a series of edits to Oat about the food fad for oat bran in the 1980s concerning the belief that oats lowered cholesterol. I reverted their first attempt, adding a note "do not add individual primary research studies, see WP:MEDRS", but this was ignored with their later series of edits and their edit comment "I didn't make a health claim. I noted a study's historical significance as the basis of a fad." There is some truth in their claim, but the edits have inserted a 1986 study, a piece of primary medical research: Van Horn, Linda; Liu, Kiang; Parker, Donna; Emidy, Linda; Liao, You-lian; Pan, Wen Harn; Giumetti, Dante; Hewitt, John; Stamler, Jeremiah (June 1986). "Serum lipid response to oat product intake with a fat-modified diet". Journal of the American Dietetic Association. 86 (6): 759–764. I would be grateful for the opinion of MEDRS editors on whether the inserted material is compliant with policy, and whether any changes need to be made to the article text. Note that the new text spans both the "Health effects" and the "As food" sections of the article, with a "(described below)" textual cross-reference between the two. Thank you for your time. Chiswick Chap (talk) 06:35, 22 April 2025 (UTC)[reply]

Not looked at our article (yet), but a relevant MEDRS would be PMID:33762150. Bon courage (talk) 07:10, 22 April 2025 (UTC)[reply]
Please do not post about individual sources here. For faster answers, ask WikiProject Medicine or WikiProject Pharmacology about the suitability of specific sources. WhatamIdoing (talk) 20:58, 22 April 2025 (UTC)[reply]
Also, we should focus on clinical outcomes, not contentious biomarkers. RememberOrwell (talk) 02:02, 27 April 2025 (UTC)[reply]

Does MEDRS apply to "further reading" on a medical topic?

[edit]

There is a [now very] long discussion at Wikipedia talk:External links#Proposal: expand the scope of ELNO to include "Further reading", which I opened last week but in which I have since taken a bit of a back seat. In essence, it observes that we have quite firm rules about WP:external links but essentially none about wp:further reading – illustrated by the fact that the former is a formal guideline but the latter is only an essay. So I'm curious to know whether I could add, to the further reading section of an article about a health condition, a book which makes wild assertions about the condition's cause and which proposes a unique blend of snake oil and homeopathy to cure it. On what policy basis could I be sent packing?

The debate has crystallised around one or other of two opposing positions:

  1. it is not only OK but a positive benefit to readers for Wikipedia editors to add to a Further Reading section whatever sources they personally consider useful
  2. Wikipedia editors are also "some random person on the internet" and should not be recommending anything; at best, the entries in such a list should have some kind of RS support.

Full disclosure: I am in Camp 2, but this is not intended as a canvass for either perspective.

Does this debate raise any particular issues for MEDRS? And if so, is it best handled by a specific local guideline? (To be clear, I don't contribute to medical articles but felt it appropriate to make the debate known. If this provokes another debate here, I wouldn't have any meaningful contribution to it.) 𝕁𝕄𝔽 (talk) 21:27, 13 May 2025 (UTC)[reply]

I'd say "not necessarily", but that there would need to be an exceptionally good argument (and a consensus) for such a source to stay if editors were objecting to it and it was a source discussing WP:BMI. Bon courage (talk) 21:38, 13 May 2025 (UTC)[reply]
I'm in the other discussion, but I should add that three types of not-exactly-MEDRS sources have been discussed in the past for ==Further reading== sections. They are:
I have advocated for the first of these to be listed under ==Further reading== instead of listed as reliable sources under ==References==. Other editors prefer to cite them as reliable primary sources. WhatamIdoing (talk) 00:17, 14 May 2025 (UTC)[reply]
For consistency with the ethical motivation behind the MEDRS self-restraint, I would like to think that the entries for any famous altmed/pseudoscience sources include a clear denunciation. We should not expect or require readers to have read the entire article beforehand. We should never underestimate the ability of pseudo-scientists and conspiracy theorists to cherry-pick for sources that appear to endorse their lunacies. Or indeed their ability to plant "evidence", which is why we have a particular responsibility to verify the quality of anything recommended in Further Reading of BMI articles at least. 𝕁𝕄𝔽 (talk) 09:46, 14 May 2025 (UTC)[reply]
Yes, historical and inaccurate sources should be labeled, though perhaps "denunciation" is the wrong word for it. One wants to write something like "First book promoting this idea" rather than "Lunatic nonsense".
And lest anyone be concerned, telling people facts about a book is 100% consistent with NPOV and also does not constitute censorship as understood by librarians. Librarians are opposed to saying things like "this book is not suitable for children" or "only healthcare professionals should be allowed to read this"; they have no concerns about accurately labeling porn as "pornography" or hoaxes as hoaxes, and IMO we shouldn't, either. WhatamIdoing (talk) 18:24, 14 May 2025 (UTC)[reply]
Nor am I, but when we recommend a particular set of sources, we are in effect saying that we endorse these sources unless we also give a caveat that says otherwise. There is a big difference between stocking a wide variety of sources in an academic library, where visitors are expected to inform themselves (on the one hand), versus (on the other) advising high schoolers to read Uncle Tom's Cabin without also advising them clearly that, although the literary merit of the work still stands, the values of the time are not acceptable today. 𝕁𝕄𝔽 (talk) 19:24, 14 May 2025 (UTC)[reply]
I don't think that we're "endorsing" sources. This has come up, just in the last couple of years, in discussions about refs, with a few editors claiming that we can't use this or that source, because if we cite it, we're not just saying it's reliable to support the claim that the article is making, but also publicly endorsing it as being somehow morally acceptable or otherwise desirable. I have assumed that this is one of the inevitable consequences of real-world acceptance of Deplatforming: "How dare you invite that wrong-headed politician to my university!" becomes "How dare you cite that wrong-headed political author in Wikipedia!" – even if the actual facts themselves aren't in contention. WhatamIdoing (talk) 21:17, 14 May 2025 (UTC)[reply]
Straw man argument, I expected more from you. The only proposal is that the basis be shown for putting an entry in the FR list, so that readers don't have to rely on the word of some anonymous person on the internet that this source is worth reading – despite the fact that it wasn't [good enough to be?] cited and so didn't have to pass the RS test (and in this context, the even higher MEDRS test). 𝕁𝕄𝔽 (talk) 22:16, 14 May 2025 (UTC)[reply]
You said: when we recommend a particular set of sources, we are in effect saying that we endorse these sources
I said: I don't think that we're "endorsing" sources
Where do I say anything that sounds like a weakened version of your "argument"? (It's not really an argument; you present this, sans evidence, as a given.) WhatamIdoing (talk) 01:35, 15 May 2025 (UTC)[reply]
if we cite it, we're not just saying it's reliable to support the claim that the article is making, but also publicly endorsing it as being somehow morally acceptable or otherwise desirable ← not quite, but Wikipedia supposedly publishes "only the analysis, views, and opinions of reliable authors", so using a source implies quite a lot about its (authorial?) worthiness. Bon courage (talk) 05:08, 15 May 2025 (UTC)[reply]
True but we have clear guidelines (in the case of BMI, very clear guidelines) on what is a valid source. For Further Reading we have none. 𝕁𝕄𝔽 (talk) 09:35, 15 May 2025 (UTC)[reply]
I don't think that's true. Consider that our usual guidelines generally prefer:
  • independent sources
  • non-self-published sources
  • sources with visible editorial structures (e.g., a newspaper editor, a peer review process)
  • sources with reputations for fact checking
  • secondary sources rather than primary sources
Now contrast that against the fact that tweets have zero of those qualities, but {{cite tweet}} is used in tens of thousands of articles. We're not "endorsing" these tweets, and our rules aren't "clear" that non-independent, self-published, un-reviewed, un-fact-checked, primary-source tweets constitute "a valid source". But we're using them anyway.
It is true that we have only one officially tagged {{guideline}} for what belongs in the FR section (NB: one ≠ none). You might think that MOS:FURTHER, which provides only three rules about which sources can be listed – namely:
  • "publications that would help interested readers learn more about the article subject"
  • avoid duplicating sources used to support article content, except in unusual cases
  • never duplicate sources in the External links section
– does not provide "clear" guidance, but it does provide three rules' more guidance than "none". Additionally, Wikipedia:Further reading provides significantly more advice. WhatamIdoing (talk) 17:41, 15 May 2025 (UTC)[reply]
In reply to WhatamIdoing, examples 1 and 3 appear to be cases where one is pointing the reader to primary sources. That is not, I suggest, the job of an encyclopaedia. We are writing a tertiary source and we generally use secondary or tertiary sources as citations. A "Further reading" section should be to other secondary or tertiary sources. Bondegezou (talk) 08:32, 15 May 2025 (UTC)[reply]
@Bondegezou, I don't think that's true. Consider an article such as A Midsummer Night's Dream: A copy of the play is a primary source. WP:ELYES #2 says that if a legal copy of the work can be read online, then we should add such an ==External link==. But you are saying here that we shouldn't ever include a link to that play under the ==Further reading== heading. That sounds silly to me. I bet it sounds silly to you, too. WhatamIdoing (talk) 17:45, 15 May 2025 (UTC)[reply]
That is a link to the actual thing itself. That’s different to providing primary source links about a disease. Bondegezou (talk) 07:43, 16 May 2025 (UTC)[reply]
"The actual thing itself" is a primary source.
When we take a disease-related article to FAC, editors usually want to provide a citation to an original description of the disease. It's a primary source, but remember that WP:PRIMARYNOTBAD. Sometimes this is added as a ref. Sometimes this is added as a Further reading link. Since the ref would usually be put after a sentence that it can't fully verify (e.g., "The first description of ____ was published in 1888 by John Smith" – it can't usually verify "the first"), I usually prefer a Further reading link. What would you do?
To give another example, if there's a famous memoir about a disease (e.g., by someone who clears the high bar at Wikipedia:Manual of Style/Medicine-related articles#Notable cases), then Further reading is IMO the best place for it. Why would you write "Alice Activist wrote a memoir[1]" in the article text, when you could just write "Activist, Alice My Life and Hard Things (memoir)" in the Further reading section? WhatamIdoing (talk) 16:14, 16 May 2025 (UTC)[reply]

Time to close this discussion

[edit]

As this discussion has degraded into a cfork of Wikipedia talk:External links#Proposal: expand the scope of ELNO to include "Further reading", I suggest we draw a line under it at this point. It seems vanishingly unlikely that there will be any meeting of minds. --𝕁𝕄𝔽 (talk) 08:54, 17 May 2025 (UTC)[reply]

Scientific letters

[edit]

This page says when determining how to treat a source, see how the source is classified in PubMed and that a page that is tagged as "Comment" or "Letter" is a letter to the editor (often not peer-reviewed). However, some sources classified "Letter" in PubMed aren't "letters to the editor" - they are "scientific letters". These are short research articles, peer-reviewed, with a fast turnaround, like this.

See the distinctions:

  • Here Scientific Letter: Manuscripts that aim to announce scientific discoveries and data or preliminary reports that are of clinical significance are accepted for publication as scientific letter. Scientific letters do not contain subheadings and should not exceed 900 words. Number of references should be limited to 10 and the number of tables and figures should be limited to 2. - note "letter to editor" is described separately.
  • Here 5.4. SCIENTIFIC LETTERS Articles that include original data and describe the experience of the authors will be included in this typology. - note that 5.5 is "letters to the editor".
  • A paper on the difference - There is however an other type of academic, scientific and technical document, the titles of which have never been studied. We are referring to “scientific letters”, also known as “scientific communications”. Scientific letters (SLs), which may be categorized as a “primary source” like the research article, are short descriptions (4-5 pages) of important current research findings which allow researchers to rapidly publish (4-6 weeks) the results of their investigation. Like research papers (RPs), SLs are peer-reviewed and must meet the same high standard of quality with the addition of timeliness and brevity

I suggest that this should be clarified such that, if in doubt, the article classification should be determined from the article itself and the publishing standards of the journal it is in, e.g. by adding a sentence such as:

  • If in doubt, refer to the journal's classification of the original publication alongside the journal's own submission guidelines to determine what type of article it is and whether it has been peer-reviewed.

Void if removed (talk) 11:35, 27 May 2025 (UTC)[reply]

I don't see the example paper you give on PubMed, so I can't determine what category it would be. It may be useful for this discussion to look at several examples of letters to the editor and scientific letters from journals that accept them, to see which article types are classified as "comment", which as "letter", and which as something else.

Earlier we say "Every rigorous scientific journal is peer reviewed" but fail to caveat this with "but not every article in a peer-reviewed journal is peer reviewed". That gotcha has come up so often that I don't know why we don't say this. Maybe we did and it got lost, or I can't find it. I think we should say that. And then in the bottom section on using PubMed, we could give advice on which sorts of articles are always peer reviewed (in a peer-reviewed journal), which are never, and for which one might need to refer to the editorial/submission guidelines of each journal.

We may also need some words to say that "peer review" is just one form of review, peculiar to academic journal publications, and is neither perfect nor inherently superior to in-house review by a respected publication's or organisation's own experts. It doesn't indicate scientific consensus, doesn't indicate that the research would meet the quality standards of a systematic review selection process, and doesn't analyse all aspects of an article with the same rigour. Maybe we should find some sources on this. -- Colin°Talk 16:39, 27 May 2025 (UTC)[reply]

So for example, here is a scientific letter in the South African Journal of Psychiatry, titled "Changes in schizophrenia symptoms, tryptophan metabolism, neuroinflammation and the GABA-glutamate loop: A pilot study". You can see at the top it is classed as a "scientific letter", and if you look at the whole journal issue, it is listed as a "scientific letter" separate from the "letters to the editor".
However, here it is in PubMed and if you scroll to the bottom you can see the type is "Letter".
The journal guidelines are here, where a "scientific letter" is described as "Original research that is limited in scope can be submitted as a scientific letter rather than a full original research article." Meanwhile, "letters to the editor" are in a different section.
On further investigation, I find an older scientific letter here, which, however, is not categorized as a letter in PubMed here, but as a journal article. I'm wondering if this is some quirk of PubMed that will eventually be corrected for the newer paper? Void if removed (talk) 17:07, 27 May 2025 (UTC)[reply]
AIUI the publishers send their classifications to PubMed, which accepts whatever they tell them. I would not expect it to be updated.
Labeling proper research as "letters" has a long history. It does make things confusing for us. But mostly we're able to sort this out in the individual case. The trend towards journals posting article history (e.g., dates for milestones) is particularly helpful, because if they say that the article was sent for peer review on a particular date, then you know that it was peer reviewed. WhatamIdoing (talk) 17:34, 27 May 2025 (UTC)[reply]
It should be obvious though that the classification in the journal is accurate, and PubMed can be wrong, right? Void if removed (talk) 19:48, 27 May 2025 (UTC)[reply]
It should be obvious that editors ultimately need to use their best judgment and all the known facts to determine what something actually is, instead of relying on any label on any website. Typos and misclassifications can happen anywhere. WhatamIdoing (talk) 02:00, 28 May 2025 (UTC)[reply]
Maybe we should just remove "For example, a page that is tagged as "Comment" or "Letter" is a letter to the editor (often not peer-reviewed)." as it would appear to be an unhelpful example. Then a more general comment can be made somewhere that not every article is peer-reviewed, for example, editorials, letters to the editor, news and current affairs reporting. Any other/better examples? Void's proposed text could fit somewhere, though I'm not keen on "If in doubt". Alternative? -- Colin°Talk 07:38, 28 May 2025 (UTC)[reply]
I think it's important to say somewhere that "comments" and "letters to the editor" are not usually (i.e., basically never) peer-reviewed, but your wording is IMO an acceptable way to say it.
Editorials and news are occasionally peer reviewed, though most of the few I've seen have been internal peer review rather than the traditional external peer review. WhatamIdoing (talk) 22:11, 28 May 2025 (UTC)[reply]