Wikipedia talk:Notability (academics)
You can help! A current list of open edit requests involving conflicts of interest on biographies about academics and scientists is available.
This project page was nominated for deletion on 7 February 2006. The result of the discussion was keep.
This discussion was begun at Wikipedia:Votes for deletion/Nicholas J. Hopper, where the early history of the discussion can be found.
This page has archives. Sections older than 30 days may be automatically archived by Lowercase sigmabot III when more than 4 sections are present.
See Wikipedia:Notability (academics)/Precedents for a collection of related AfD debates and related information from the early and pre-history of this guideline (2005-2006), and Wikipedia:WikiProject_Deletion_sorting/Academics_and_educators/archive and Wikipedia:WikiProject_Deletion_sorting/Academics_and_educators/archive 2 for lists of all sorted deletions regarding academics since 2007.
Provers of a well-known conjecture?
People who prove the Riemann hypothesis or the Collatz conjecture: are they automatically notable? Parid321 (talk) 10:11, 8 July 2025 (UTC)
- Only if their proof is accepted and cited by many reliable sources. Xxanthippe (talk) 10:14, 8 July 2025 (UTC).
- For example if they get the $1000000 does that count? Parid321 (talk) 10:16, 8 July 2025 (UTC)
- While winning the money is an indicator, independent peer recognition is far stronger as already mentioned. Ldm1954 (talk) 10:22, 8 July 2025 (UTC)
- I can't imagine anyone solving such a conjecture without gaining enough coverage to pass the general notability guideline, or enough peer-recognition to pass WP:ACADEMIC, but it will be because of those that they are notable, not because they solved the conjecture. Phil Bridger (talk) 12:42, 8 July 2025 (UTC)
- In particular, anyone winning a million dollars for solving a mathematics problem would _surely_ get plenty of coverage for GNG. Russ Woodroofe (talk) 14:06, 8 July 2025 (UTC)
Soft suggestion for h-index?
Citation measures such as the h-index, g-index, etc., are of limited usefulness in evaluating whether Criterion 1 is satisfied. They should be approached with caution because their validity is not, at present, completely accepted, and they may depend substantially on the citation database used. They are also discipline-dependent; some disciplines have higher average citation rates than others.
is probably accurate. However, I think a quick note regarding "An h-index of Y is indicative of fulfilling Criterion 1" might be useful. Does someone have a good idea regarding the number? Maybe the classic 40? FortunateSons (talk) 09:54, 15 July 2025 (UTC)
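For editors less familiar with the metric under discussion: the h-index has a simple definition, the largest h such that the author has at least h papers each cited at least h times. A minimal sketch (the citation counts here are made up for illustration only):

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least h papers
    have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar at position `rank`
        else:
            break
    return h

# Hypothetical citation counts for one author's five papers:
print(h_index([10, 8, 5, 4, 3]))  # 4: four papers each have >= 4 citations
```

Nothing about the computation itself is field-aware, which is exactly why the same number means very different things in different disciplines, as the replies below discuss.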
- No, that would not be a good idea. An h-index of 40 would be stellar for a philosopher, but nowhere near good enough for a computer scientist. Phil Bridger (talk) 10:09, 15 July 2025 (UTC)
- Ah, that's unfortunate. And creating discipline-specific guidelines is out of the question? FortunateSons (talk) 10:15, 15 July 2025 (UTC)
- I can't see a way to do that without substantial original research and creation of arbitrary thresholds on our part. The academic world is also slowly moving away from the h-index specifically and, arguably, citation metrics in general, so us moving from "limited usefulness" to even a 'soft' suggested threshold seems like a step in the wrong direction. – Joe (talk) 11:00, 15 July 2025 (UTC)
- If there was a reliable third-party data source grouping professors by field, I could see us using a per-field threshold defined by the 80th percentile in that field or similar as a rough guide, but to my knowledge no such dataset exists and building it would be practically impossible. I do think h-index is a questionable proxy for the thing we care about here anyway (notability among academics), so we probably shouldn't do that anyway. Suriname0 (talk) 16:45, 15 July 2025 (UTC)
- Yes, that makes sense. I sometimes use it as a shorthand to decide who to add to my "might be notable" list, but acknowledge that this isn't ideal. FortunateSons (talk) 09:04, 16 July 2025 (UTC)
- 40 is a bit high. I think most professors pass with 40. I personally use 20 as one of my quick tests. –Novem Linguae (talk) 16:09, 15 July 2025 (UTC)
- Good to know, thanks FortunateSons (talk) 09:04, 16 July 2025 (UTC)
- I definitely disagree with 20 in most of the sciences (as mentioned above), that is the level of a typical assistant to junior associate professor at a strong university. I would say that 40 is marginal, particularly if they are one of many on team papers.
- A common approach is to compare to peers using both the coauthors and their topics. Also look at whether they are first or last author, and whether they have a decent number of papers with just a few authors and high citation numbers, ideally > 1k. I have Sandbox notes with my opinion, for which suggestions are welcome (on my talk page).
- Just saying "she passes/fails with an h-factor of XX" is not useful. However, with context and analysis I argue the numbers are useful. Ldm1954 (talk) 09:55, 16 July 2025 (UTC)
- That's very interesting, thanks! FortunateSons (talk) 10:01, 16 July 2025 (UTC)
- A "look at h-index to approximate notability" system could be really useful if we can get it more accurate. Can you or someone else help me make it more accurate? Maybe we can fill in something similar to the below:
- Fields A, B, C - probably notable if h-index greater than X
- Fields D, E, F - probably notable if h-index greater than Y
- Fields G, H, I - probably notable if h-index greater than Z
- Of course, the more complicated cases will end up going to AFD where the NPROF experts can do a deep dive and hash things out, but a simple system that a non-professor patroller can use to do basic checks would, in my opinion, be really helpful. I think a lot of NPPs and AFCers get confused by WP:NPROF#C1. –Novem Linguae (talk) 01:31, 17 July 2025 (UTC)
- The suggestion made is very reasonable, but very difficult to apply. You cannot directly compare h-indexes for scientists across different scientific fields because the dynamics of publishing and citation practices vary dramatically between disciplines. The h-index is influenced not only by the quality of research but also by field-specific factors, such as: (1) Publication volume: Some fields (like medicine or life sciences) tend to produce a far higher number of papers per researcher than others (like mathematics or theoretical physics), simply due to differences in research methods, collaboration sizes, and data availability; (2) Citation behavior: Citation rates differ widely. In biomedical sciences, articles often receive many more citations because of larger research communities and faster turnover of literature, while in fields like engineering or social sciences, citation rates are generally lower; (3) Co-authorship norms: In particle physics or genomics, for example, papers often have hundreds of authors, artificially inflating citation counts for all contributors. In contrast, other fields may favor single-author or small-team publications.
- These inherent differences mean that a physicist with an h-index of 40 is not necessarily “less impactful” than a medical researcher with an h-index of 60. Comparing their raw h-indices would conflate field effects with individual performance.
- This is precisely why tools like the Science-wide author databases of standardized citation indicators (by John Ioannidis et al.) were created. These databases normalize citation metrics by accounting for field-specific patterns, career stage, and co-authorship to provide field-adjusted metrics (e.g., a field-weighted citation impact). Such approaches enable more equitable comparisons across disciplines, avoiding unfair bias against researchers in fields with lower publication and citation rates. In sum, raw h-indices are not comparable across disciplines, and field-normalized indicators are essential for fair and meaningful evaluation. So, please read the c-score article, which we created last year. G-Lignum (talk) 11:21, 20 July 2025 (UTC)
- I think we're in agreement about not using the same h-index threshold for all fields. I'm just asking for someone who knows which fields have a lot of citations and which fields don't to help me put the common fields into buckets, and also estimate what h-index would be the threshold for passing NPROF#C1 on Wikipedia. A rough draft might look something like...
- High citation fields - Fields A, B, C - probably notable if h-index greater than 40
- Medium citation fields - Fields D, E, F - probably notable if h-index greater than 30
- Low citation fields - Fields G, H, I - probably notable if h-index greater than 20
- Then we look at what some of the most common fields are, and start plugging those into A, B, C, D, etc. –Novem Linguae (talk) 14:12, 20 July 2025 (UTC)
- I don't think you will get much support for this. Even within one field, different subareas get vastly different citations. Many academics deliberately chase this by going after "hot" topics; some don't. Which group is more notable?
- I have an alternate suggestion which might be useful. Some form of script which will pull GS, Scopus, ResearchGate and maybe even database ranks just for review purposes. Doing the same for coauthors and placement within their GS field might also be useful, albeit I suspect harder to code. Ldm1954 (talk) 14:20, 20 July 2025 (UTC)
- I think the best that can be done is some sort of list with a wide gap between "definitely notable" and "definitely not notable". For example there could be a field for which a researcher with an h-index of 40 is almost certainly notable, but one with an h-index of 10 is almost certainly not notable. Many people would fall in the middle, and other measures need to be used for them. Phil Bridger (talk) 16:05, 20 July 2025 (UTC)
- You’d have to go higher than 40 in certain fields, like AI, where I understand that 70+ is what’s looked for. Using 10 as a lower limit might not work for the humanities and some of the social sciences. Qflib (talk) 23:13, 20 July 2025 (UTC)
- Yes, my suggestion would still be by field. Phil Bridger (talk) 07:17, 21 July 2025 (UTC)
Rudin Salinger
I've added a red-link for (Malaysia-based academic, brother of Pierre Salinger) Rudin Salinger in my article about his award-winning house (Salinger House) and thought about turning it blue. However, Salinger seems not to have as much coverage as you might expect. Since he was 75 in 2006 it seems likely that an obituary would have been published by now, or at least some kind of valedictory statement on his retirement, but I wasn't able to find any. Does anyone want to take a look at him from a NPROF point of view? FOARP (talk) 10:14, 20 July 2025 (UTC)
Five-year Rutherford Discovery Fellowship of the Royal Society Te Apārangi
There are quite a few new STEM pages on academics who have recently won this grant. The specific description is "The Rutherford Discovery Fellowships was set up to support the development of future research leaders, and assist with the retention and repatriation of New Zealand's talented early- to mid-career researchers." While this is an important grant, my read is that it is too junior to be an automatic pass for #C2. I want to get a little consensus, as some of the recent awardees do not pass WP:NPROF IMO. Ldm1954 (talk) 14:05, 20 July 2025 (UTC)
- I agree that this doesn't automatically pass C2. Also from their site: "The Rutherford Discovery Fellowships was set up to support the development of future research leaders, and assist with the retention and repatriation of New Zealand's talented early- to mid-career researchers." This might be one indicator of academic notability but it doesn't stand alone. Qflib (talk) 23:28, 20 July 2025 (UTC)
- "Support the development of future research leaders" suggests that the society does not believe the recipients to yet be notable, but wishes to assist them on the journey to eventually becoming notable. I don't think that receiving this award contributes much to notability. Russ Woodroofe (talk) 18:26, 21 July 2025 (UTC)
- To me it's suggestive that they are likely to eventually pass PROF, but not passing itself. It appears to be a grant for future research, rather than an award for past performance, and we have never had a criterion for which that fits. —David Eppstein (talk) 19:03, 21 July 2025 (UTC)
Criterion #3 (notability by award), does this include meta-scientific activities?
This request for discussion is a result of Wikipedia:Articles for deletion/Fariborz Maseeh. As background, Maseeh has had a stellar business career as an engineer, entrepreneur in engineering businesses, and has become a significant philanthropist and donor to academic organisations, endowing a chair, and funding projects that will create the next generation of engineers. He has education to PhD level, but does not appear to have taught since his PhD, nor to have carried out academic research work since then. He was recognised by election to the NAE, the citation being "For leadership and advances in efficient design, development, and manufacturing of microelectromechanical systems, and empowering engineering talent through public service", and one of the organisations he's greatly helped, Portland State University, has praised him as a "venture philanthropist".
My question is this: when someone with a background in science and engineering wins an award from a science/engineering organisation, does the award actually have to be for academic work in the field of science and engineering, if we're to count it towards NPROF#3? Or is it sufficient that the award is in recognition of outstanding positive benefits to academia? And if we make a distinction between these two situations, how do we draw the line? In Maseeh's case, do we think his award was primarily for "leadership" and "empowering engineering talent through public service", which are not academic activities, or "efficient design, development... of microelectromechanical systems" which might well be?
That's the end of my question. To be clear, I'll add my own view: I believe that NPROF should be restricted to academics' activities as an academic, their research and publication. If they are elected to a learned society because of their outstanding research, they are judged by NPROF. If they are honoured for their wider contribution in a field that they reached via a scientific education and by running tech companies, they should be judged by GNG. This is in line with our policy that a person can be notable via NPROF for sitting in a distinguished chair, but not necessarily for endowing the chair. In Maseeh's case I'd be happy to see his article retained with him as a generally notable person, but I think he's moved far enough from pure academia that he no longer needs the (rather generous) prop of NPROF. I am biased towards a science context, but of course this applies equally to any business that grows from an academic root, be it geography, law or economics too.
Pinging Ldm1954 who suggested I raise this here. Elemimele (talk) 10:46, 24 July 2025 (UTC)
- @Elemimele, thanks for posting this.
- My view is different, and I think we should ignore this specific case and discuss the general principle. I know that there is at least one other case of an industry-based NAE who has minimal publications, and there are also FRS. (I cannot remember names, let's leave aside controversial ones.)
- While I am biased towards academics, we should not ignore industry/technology. Often their results do not get published, and/or they only publish what fails or years later. However, their contributions are sometimes as or more significant for technology or science. Therefore if reputable peer societies such as NAE/NAS/RS etc decide that industry people with low citations merit inclusion, we should too. Ldm1954 (talk) 11:02, 24 July 2025 (UTC)
- In my opinion, WP:PROF, and any of its criteria, should be used to determine notability when the notability results from scholarly distinction. In cases like this, the awards, even if given by scholarly societies, are made in recognition of contributions that are not really about scholarship. As such, instead of looking to Criterion 3, one should judge notability by WP:ANYBIO and WP:GNG. --Tryptofish (talk) 20:14, 24 July 2025 (UTC)
- The NAE and other engineering societies tend to have more of a mix of businesspeople than less-applied scholarly societies, for obvious reasons. There is a big gray area here between businesspeople whose recognized contributions are in business leadership and people who have a strong record of research publication but who do so while working in industry rather than academia. I would not want to shut off what is currently one of the main ways of recognizing strong researchers in industry (or for that matter independent researchers) by imposing arbitrary credential-based rules that we can only apply this to professors, but on the other hand I can see the argument that people elected to scholarly societies without scholarly accomplishments shouldn't be considered notable as scholars.
- My overall preference would be to keep this criterion as is, without new qualifications. The reasons are that (1) the people who it lets through but are "undeserving" by scholarly accomplishments are probably mostly notable anyway, so it does little harm while new qualifications could do more harm by imposing arbitrary credentialism on our standards, and (2) notability is not really about being deserving, anyway, but about recognition, and this is a high level of recognition regardless. —David Eppstein (talk) 20:51, 24 July 2025 (UTC)
- I can also see Ldm1954's point about recognising industrial scientists. I have great sympathy with people like those chemists who went into industry and were instrumental in developing pharmaceuticals that have changed the world, but who slip under the radar because they weren't publishing (except in the form of patents), they didn't occupy a chair, and modern media mostly doesn't care very much about how ibuprofen/mobile-phone-screens/circle-drawing-algorithms came to exist. This may be one of those situations where GNG needs to be used with some flexibility. How do we feel about patents as a notability criterion? I'd say being on a patent for an idea no one ever used is pretty irrelevant, but if you're named on the patent for aspirin or the transistor, that makes you quite encyclopedia-worthy? Elemimele (talk) 08:44, 25 July 2025 (UTC)
- @Elemimele, a page that is relevant to your comment above is Magid Abou-Gharbia. Just based upon his h-factor he would be marginal as almost all his papers are large team efforts. What (IMHO) makes him notable is that major scientific societies recognized his contributions to pharmaceuticals. Ldm1954 (talk) 10:21, 25 July 2025 (UTC)
- On one level, WP:ANYBIO suggests that a stand-alone page could be created for anyone who "received a well-known and significant award or honor." This is similar to NPROF#2 and NPROF#3. So, a case could theoretically be made that the award is well-known. That said, as David Eppstein says above, "notability is not really about being deserving, anyway, but about recognition". For me, though, the more we lean into NPROF#2, NPROF#3, and NPROF#4, the more I want to see independent, reliably sourced coverage of the honor or recognition. --Enos733 (talk) 17:08, 25 July 2025 (UTC)