Wikipedia talk:Notability (academic journals)/Archive 6


Some clarifications

  • I have noticed several misunderstandings in the above discussion and will give some clarifications here. As the discussion is getting unwieldy, I am putting this in a separate section. I hope that this will make the discussion better informed. --Randykitty (talk) 09:06, 7 July 2023 (UTC)
    Your implication that people who disagree with you are uninformed is simply insulting. I am a professional physicist with plenty of experience in refereeing and editing, and much more knowledge about these bibliographic databases than I ever wanted to have.
    To briefly correct two falsehoods you've written below: being included in SCIE or Scopus does not make a journal influential. It is the bare minimum for a journal to be taken seriously. There are plenty of indexed journals with laughable impact factors. They are not influential.
    The other falsehood is about peer review. The process at Physics Essays is not normal, contrary to your claims. For the editor to declare in advance that the author is free to disregard the referee's comments is completely unheard of. There's also no hint that a paper can be ever rejected from the journal.
    In a normal peer-review process, the editor chooses referees they can trust and does listen to their advice. Only on rare occasions, where the referees have clearly made a mistake or are being plainly corrupt, does the editor allow their comments to be disregarded. Tercer (talk) 09:41, 7 July 2023 (UTC)
  • No insult was intended. If you look at the discussion above, comparing inclusion in SCIE with lists of recent books, for example, then it is clear that some editors are misinformed. There's a gazillion subjects where I am misinformed; if somebody points that out in a calm and reasonable manner, I'm not insulted at all. Being included in SCIE does indeed not make a journal influential. It's the other way around: a journal has to be shown to be influential (among many other things) to be included in SCIE. And I did not say that an editor routinely ignores reviewers (otherwise very soon nobody would be willing to review for your journal any more), I just wanted to point out that it is the responsibility of the editor to make a final decision, not the reviewers (who often enough disagree among themselves); they only provide advice. --Randykitty (talk) 10:25, 7 July 2023 (UTC)
    A calm and polite statement of a falsehood is still a falsehood. A journal most emphatically does not need to be influential to be indexed in SCIE. It's an index with literally thousands of journals! Do I need to give you a list of journals that are in SCIE but are nevertheless irrelevant? Tercer (talk) 12:06, 7 July 2023 (UTC)
  • It's absolutely not a falsehood. There exist well over 100,000 academic journals and only a fraction of those are in SCIE. And read the inclusion criteria that I have linked directly below. If a journal is not influential, it doesn't get in SCIE. --Randykitty (talk) 12:26, 7 July 2023 (UTC)
    Sigh. I do have to give you a list then. Of course I don't take Clarivate's word for it. Let's start with Physica Scripta and Entropy, crackpot journals well-known for having no standards. Both in SCIE. Also in SCIE is Scientific Reports, a borderline scam journal that managed to make a lot of money from the "Nature" name before the community realized that it had terrible quality. And these are just notoriously bad journals. The vast majority are serious but low-impact and little-known journals like Laser Physics, Physical Review Accelerators and Beams, Chinese Physics B, or Acta Physica Polonica.
    Do you seriously maintain that these are influential journals? Tercer (talk) 13:56, 7 July 2023 (UTC)
    I do, yes. They're not necessarily at the very top, but they are all impactful journals. That some are shit does not make them unimpactful. Entropy is a terrible journal, but it nonetheless has an h-index of ~91, for instance (Google lists an h-index of 60ish, ranking 15th in General Physics). Headbomb {t · c · p · b} 14:06, 7 July 2023 (UTC)
    An impactful journal? With an impact factor below 1? I'm sorry, I can't believe you are arguing in good faith anymore. Tercer (talk) 14:12, 7 July 2023 (UTC)
Entropy has an IF of around 3, not below 1. Headbomb {t · c · p · b} 14:14, 7 July 2023 (UTC)
I'm talking about Acta Physica Polonica. Tercer (talk) 14:19, 7 July 2023 (UTC)
APP's history goes back to 1920 and has a rich history of fruitful publications. That its current standing is not what it once was is irrelevant. Headbomb {t · c · p · b} 14:22, 7 July 2023 (UTC)
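For readers unfamiliar with the metric being traded in the exchange above: the h-index is the largest h such that h publications each have at least h citations. A minimal illustrative sketch of that definition (the function name and sample data are hypothetical, not tied to Google Scholar or any particular database's computation):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this many papers still clear the bar
        else:
            break
    return h

# e.g. five papers cited [10, 8, 5, 4, 3] times give an h-index of 4
```

Note that, as the debate illustrates, a large h-index mostly reflects volume and age of a venue, not per-article quality.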

If it is a "rich history", then where are the sources that indicate that? The article right now is a stub that contains essentially no independent sources written about the journal. How does the standalone article as it is written help the reader? And is there any chance that we will find sources that can improve it to the point where a reader may be able to actually learn that it has a rich history?

Look, I'm all in favor of stub-culture when it looks like there are ways to expand articles. But in example after example I see articles about obscure journals that look like they have no chance to go anywhere. What is the point of an inclusion criterion that essentially creates standalone articles functioning exactly as a WP:DIRECTORY?

jps (talk) 13:27, 11 July 2023 (UTC)

Bibliometric databases

In the discussion above, inclusion in databases like the Science Citation Index Expanded (SCIE) and Scopus has been compared to sports databases and even indiscriminate lists of recently-published books. These comparisons are based on an apparent misunderstanding of these bibliometric databases. Inclusion into these databases is not automatic, as is the case with sports databases or lists of recently-published books. The procedure to get included in SCIE or Scopus is not easy, and publishers and editors have to jump through several hoops before they get accepted. What they do not have to do (as suggested above or on one of the other pages where this discussion is raging) is pay a fee. Inclusion in these databases is absolutely free for a journal, so getting money from publishers is not a motivation for the database providers to include a journal.

To get included, a journal must have published a certain minimum number of issues, usually one or two years' worth. When it has been shown in this way that a journal has "staying power", a journal's application goes to a commission of specialists, who evaluate the journal on contents and geographical spread of editors, editorial board, and authors, among other criteria. This evaluation is very detailed and very stringent: many journals fail to get included the first time they apply, or even ever... And if they get rejected, they will have to wait several years before they can apply again.

There are more than a hundred thousand journals in existence, and only a small proportion of them are included in SCIE/Scopus/etc. It is databases like Google Scholar that more closely resemble the "recently-published books list", as GScholar strives to include everything (even predatory journals). DOAJ is not a selective database in this sense either. It's selective only in the sense that it tries to keep out overtly predatory journals, but apart from that it aims to include every open-access journal around.

As the late lamented DGG argued: do we WP editors know better than a committee of specialists? Only the best journals get included in these databases. And staying included is not automatic: if a journal turns bad, it will be dropped from coverage (as, indeed, happened to Physics Essays).

Once a journal is included in the SCIE (or similar databases like the Social Sciences Citation Index), it gets evaluated in-depth in the Journal Citation Reports (JCR). While part of that evaluation is automatic, it is important to note that the final results are hand-curated. The result is a lot of interesting data, varying from the impact factor to which journals get cited most by a certain journal and the other way around.
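For context on the impact factor figures argued over above: the classic two-year impact factor is the number of citations received in a given year to items the journal published in the two preceding years, divided by the number of citable items it published in those two years. A sketch of that arithmetic (the function name and data layout are illustrative assumptions, not the JCR's actual pipeline):

```python
def two_year_impact_factor(year, cites, citable_items):
    """Classic two-year impact factor.

    cites: dict mapping (citing_year, cited_year) -> citation count
    citable_items: dict mapping publication_year -> number of citable items
    """
    prev = (year - 1, year - 2)
    numerator = sum(cites.get((year, y), 0) for y in prev)
    denominator = sum(citable_items.get(y, 0) for y in prev)
    return numerator / denominator if denominator else 0.0

# e.g. 300 citations in 2023 to 100 items from 2021-2022 -> IF of 3.0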

The roles of editors and referees

The description of the editorial procedures used by Physics Essays, specifically that the editor may deviate from what is suggested by referees, has been interpreted as meaning that the journal is not peer-reviewed. This is incorrect. An editor has the final responsibility for what gets published in a journal or not. Editors who simply count referees' "votes" are lazy bums that don't do the job they're supposed to do. And referees do not accept or reject a manuscript: they give advice to the editor, nothing more and nothing less. It is up to the editor to interpret their comments. Most of the time, an editor will follow the suggestions of the reviewers, but for all kinds of reasons they may deviate from that. This can go both ways: reviewers may recommend "major revisions", but an editor may find the issues raised too serious and reject the manuscript for publication. Or a reviewer may recommend rejection, but the editor judges the objections raised as more minor issues and asks for a revision before accepting the manuscript for publication. In all cases, this is normal peer review.

What is peer review

Peer review is a way of handling submissions to a journal. It's a procedure, nothing more, nothing less. It's not a badge of honor, nor is it a guarantee that a journal practicing peer review will be high quality. Bad peer review is peer review all the same. So just as we accept it if a journal says that John Doe is their editor-in-chief (unless we have evidence to the contrary, as happens occasionally with the more blatant predatory journals), we should accept a journal's self-description of whether it is peer reviewed or not.

  • One can say "Flat earth is a scientific theory supported by astronomers and experts." Technically, Flat Earth is science, and "astronomers" support this claim. However, this is a violation of Wikipedia:FRINGE. To parrot your logic, the label 'science' is not a guarantee that the science is valid. The label 'astronomer' is not a guarantee that they are qualified. Therefore, this lead sentence must be perfectly fine. Ca talk to me! 14:07, 7 July 2023 (UTC)
Luckily, we have sources describing Flat Earth shit as nonsense, so you are well justified in adding the label "fringe" on there. Headbomb {t · c · p · b} 14:20, 7 July 2023 (UTC)


Misconceptions in RandyKitty's essay

Inclusion into these databases is not automatic, as is the case with sports databases or lists of recently-published books. Inclusion in sports or recently-published books databases isn't "automatic" either. There are strict criteria for inclusion, but that still allows for a gigantic list of players and books. This is absolutely the same as bibliometric indices. Academia is not somehow special by comparison. It is the same gatekeeping that goes on anywhere. If you think it is easy to get included in a sports statistics database, I encourage you to try to get yourself into one!

Inclusion in these databases is absolutely free for a journal, so getting money from publishers is not a motivation for the database providers to include a journal. While this is true of some indexes, it is not true of a few that were uncritically being cited on Physics Essays. It would make sense to disparage those indexes that charge for inclusion. We are silent on this fact in this guideline.

While part of that evaluation is automatic, it is important to note that the final results are hand-curated. The result is a lot of interesting data, varying from the impact factor to which journals get cited most by a certain journal and the other way around. The reports, however, mention essentially nothing about the subject matter of the journal, for example. It's instead a lot of curated data without any analysis. Not something upon which we would be able to write meaningful prose, that I can see.

an editor may find the issues raised too serious and reject the manuscript for publication. Or a reviewer may recommend rejection, but the editor judges the objections raised as more minor issues and asks for a revision before accepting the manuscript for publication. In all cases, this is normal peer review. Corrupt pocket journals are a big problem in academia that we are not going to solve here. In the case where an EiC starts publishing garbage, the community responds by ignoring the journal. Wikipedia is ill-equipped to notice this. The correct approach is to be really stringent in sourcing. If there are sources which indicate something about the quality of the journal, then it is safe to have an article. If not (as in the case of many journals included here), I question the wisdom of an inclusive philosophy.

Bad peer review is peer review all the same. So just as we accept it if a journal says that John Doe is their editor-in-chief (unless we have evidence to the contrary, as happens occasionally with the more blatant predatory journals), we should accept a journal's self-description of whether it is peer reviewed or not. In no other part of Wikipedia do we take people at their word when there is controversy. This idealization of peer review is your own invention. It is not reflected in the literature on the subject. It is not the public understanding. It is a kind of approach which flourishes in libraries where there is a kind of enforced agnosticism when it comes to publication. They are not in the business of enforcing epistemic closure, etc. However, Wikipedia is not a library. We are tasked with looking at what experts evaluate various claims to be. If no expert has evaluated that peer review has taken place (for some value of peer review meaning "responsible peer review") then it is the height of arrogance for us to parrot a journal's self proclamation. It would also be the height of arrogance to say that the journal is not peer reviewed. The responsible thing for Wikipedia to do is to get an independent, reliable source to verify. Not an index that copies what the journal itself says. Not the journal itself. A proper third party. To do otherwise is to treat academic journals as special flowers in the Wikipedia universe that they simply are not.

jps (talk) 22:49, 9 July 2023 (UTC)

I wonder if you two have different ideas of what constitutes "automatic" inclusion in a database.
On the one hand, someone might say "Inclusion in the famous footy database 'Bundesliga Players' is not automatic. You have to go through years of training, work your way up through the various lower-level teams, and get hired by a professional team in the correct tier of the right association, and only then will you be added to the database." Another person might look at this and say, "Yeah: As soon as you join any of the teams they cover, you're automatically added to the database."
It does not appear that any selective index system is "automatic" in this way. These databases don't have single-criterion automatic rules like "anything published by Elsevier". WhatamIdoing (talk) 11:07, 12 July 2023 (UTC)
That's a fair contrast, but I'm not sure there is a functional difference between the two situations in terms of article writing. jps (talk) 14:15, 13 July 2023 (UTC)
While part of that evaluation is automatic, it is important to note that the final results are hand-curated. The result is a lot of interesting data, varying from the impact factor to which journals get cited most by a certain journal and the other way around.
The final approval for inclusion/review in the index is subject to human judgment, but the actual "coverage" of the journal is all autogenerated.
Citation indices also provide a lot of curated, highly-detailed metrics on authors (like h-index, graphs of their publication history, citation calculations, sometimes even basic network analysis) and papers (field impact score, other rankings). Those metrics are never considered secondary SIGCOV of people or papers, so why should they be considered so important as to be notability-conferring for journals? Simply being published in the highest-ranked journal in the world isn't a notability criterion for first/senior authors; surely that's a higher bar to clear than the Scopus journal inclusion criteria of: having articles cited in Scopus-indexed journals, having editors that have some professional standing (e.g. professors), having a publishing history of 2+ years, and publishing articles that contribute to their academic field and are within the journal's stated scope. Indexing services just want journals that aren't obvious trash or so minor as to be read by only undergrads at one college. A journal of relevant breadth not being indexed in Scopus is a red flag, whereas not being a member of the National Academy of Sciences doesn't mean anything at all for a researcher. JoelleJay (talk) 19:52, 13 July 2023 (UTC)
I have to concur with jps and JoelleJay, at least in this section. This is especially on-point: "If there are sources which indicate something about the quality of the journal, then it is safe to have an article. If not (as in the case of many journals included here), I question the wisdom of an inclusive philosophy."  — SMcCandlish ¢ 😼  00:42, 26 July 2023 (UTC)
You appear to be the first in this entire debate to base your argument on "an inclusive philosophy". Nobody else here is doing that; we are instead debating different criteria for how to judge the relative noteworthiness of journals, but have not addressed whether some of those criteria would allow more journals to be included, or fewer, or even whether that would be a good thing or a bad thing. So, pray tell: how many articles on journals is the right number? Are we above that, or below that, currently? Which of the two positions in this debate (that we should base notability on inclusion in selective indices, or on whether the journal can get some publicity for itself in independent publications) would support "an inclusive philosophy", and what data do you have to support the inclusiveness of that position and the exclusiveness of the opposite position? —David Eppstein (talk) 00:55, 26 July 2023 (UTC)
On your first point, I don't get what you mean, since I'm directly quoting someone else's prior commentary. Anyway, I think (even judging from your own bullet-point summary in the thread at or near the bottom, where you warn of the possibility of being less inclusive of high-quality journals and more inclusive of fringe/controversial ones, depending on how things go in this broader debate), that inclusiveness shifting will necessarily be a consequence, even if people don't want to focus on that as the prime-mover of the discussion. On your seemingly semi-facetious question: There is no specific "right number", of course, but it's clear that some editors think we have too many articles (not just on this topic; AfD is a very, very busy place). There is a general community feeling against "perma-stubs". On your latter question, it's the former criterion ("base notability on inclusion in selective indices") that would lead to an inclusive (i.e. m:Inclusionist) result. I don't have any particular data, and this to me is not about a "right number" or any other tabular measure, but about a) maintainability at the project level (are we wasting our time trying to create and maintain zillions of journal articles, when there are over 100,000 academic journals in the world?), and b) clarity at the editor level (am I wasting my time writing an article on a journal, only to have it deleted on the basis of pseudo-rules from an essay there is no actual clear consensus about?).  — SMcCandlish ¢ 😼  01:06, 26 July 2023 (UTC)

Recommend reinstatement of this edit

[1] After protection expires, I recommend reinstating this edit. Note that there are three editors acting as de facto owners of this essay, but since it is not a user essay, WP editors are free to change it as consensus dictates.

This edit will make it easier to remove the WP:FRINGE-cruft these three editors have been tacitly and, likely, unintentionally, supporting by allowing Wikipedia to function as a WP:DIRECTORY for journals.

jps (talk) 16:28, 21 July 2023 (UTC)

The essay reflects hundreds of deletion discussions. So far, you have three editors who vehemently disagree with it because the community resoundingly endorsed keeping Physics Essays, and instead of writing their own essay, try to undermine this one and change its meaning. And that edit is already covered elsewhere at the very bottom of WP:CRITERIA. Go WP:ABF elsewhere. Headbomb {t · c · p · b} 16:39, 21 July 2023 (UTC)
It's so vaguely worded that I don't see how it could actually help resolve any deletion dispute in practice; because of its vagueness, it could be read as either redundant with § Criteria or in contradiction with the rest of the essay, neither of which is desirable. Regardless of one's take on the overall situation, I don't think this specific addition works. XOR'easter (talk) 17:51, 21 July 2023 (UTC)
I agree (see below) that this addition to this essay is unhelpful, as it describes a completely contradictory view to the rest of the essay. I suspect the point of view of the editors pushing this change is that we should not ever have essays interpreting GNG, we should just use the bare wording of GNG itself everywhere; if so, perhaps this explains their reluctance to write a separate essay setting out their alternative interpretation. However, such a point of view is not an adequate justification for sabotaging others' essays of interpretation by making them say the opposite of what their proponents are using them to mean. —David Eppstein (talk) 19:08, 21 July 2023 (UTC)
Concur with the others. And couldn't dispute FRINGE-cruft more strongly. By Wikipedia standards, our journal articles are well-maintained, mostly by editors with academic experience. They're a public good. When I cite a journal, I check our article on it. I don't have Clarivate access. If an infobox shows a low impact factor, or mass-resignation by the editorial board after pressure to increase acceptance rates, that's helpful info. For Physics Essays, I'd know to look for a better source. (Before you ask: yes, I still dig around, whether or not we have an article). What would deletionism achieve here? Bugger all. DFlhb (talk) 19:22, 21 July 2023 (UTC)
@DFlhb, how can we write a neutral article on a subject sourced only to its own website and a couple unexplained metrics? Probably 98% of readers and the vast majority of editors have no idea what being delisted from Scopus or having a very low impact factor or being indexed by Index Copernicus implies since these attributes are not accompanied by any contextualization whatsoever. We wouldn't accept an article on a nonprofit or school or website where literally all of the prose description--including important details like peer-review--is (and can only be) derived from ABOUTSELF. Why should we do so for an industry with well-documented self-promotion[2], reputation manipulation and gaming [3][4][5][6][7], and other abusive business practices?[8] I also used to use our articles on journals extensively whenever I was at an NPROF AfD or evaluating sources on FRINGE pages. This ended during the height of the lab leak lunacy when I saw our article on BioEssays was exclusively sourced to its own website and an index from 2012. That journal published multiple awful lab leak-apologist papers by wholly unqualified authors (like this one by two DRASTIC affiliates--a mycology/botany postdoc and some guy with only a CS bachelor's who is heavily involved in life-extension woo; and this one by a retired genetic databases curator and his extremely unsavory son who proudly states he hasn't taken biology since high school), yet no one would ever be the wiser from visiting Wikipedia because the journal is too minor to be discussed directly in RS, and we can't coatrack in the bountiful criticism of specific papers either.
Further:
  • Inclusion does not correspond to SIGCOV in IRS; the essay proposes a completely separate route to notability than SIGCOV or secondary sourcing, which historically would need an even higher level of consensus than that expected for GNG-based guidelines. Citation indices are also nowhere near selective enough to ensure inclusion even incidentally predicts significant secondary independent coverage.
  • Citation indices are clearly not selective enough to exclude junk journals. If not being listed on Scopus is a red flag for a journal that otherwise appears eligible, then inclusion clearly does not imply a journal is among the best in the world. It implies it is probably not total garbage. Merely being reliable is not an indication of notability. More problematically, journals that were indexed briefly and then delisted are treated exactly the same as ones that were continuously indexed, which means a journal that was quietly delisted for, e.g., lack of peer review, without those reasons being made public, will forever be entitled to a Wikipedia article mirroring whatever it claims about itself regardless of its current reliability. JoelleJay (talk) 05:13, 22 July 2023 (UTC)
Regarding and we can't coatrack in the bountiful criticism of specific papers either. Why not? Presuming at least some of that criticism is published in somewhere more reliable than forum posts (and/or that the subject-matter expert clause applies), I don't see a fundamental objection to including a journal's most-criticized papers in the article. We can't draw a new conclusion from that criticism, like saying "and therefore you should never publish here", but critiques of what a journal has published are pertinent information about the journal. Half of the Social Text article is about the Sokal hoax; Entropy has a whole section about a controversial paper and its fallout, as does Frontiers in Psychology, while Scientific Reports has a big one. XOR'easter (talk) 19:14, 24 July 2023 (UTC)
There was extensive coverage of the journal's role in the controversy in each of those examples (literally front page NYT pieces for some...). The coverage doesn't merely excoriate an article they published. It would be coatracking to include the reception of individual articles that do not go into any detail on the journal itself. JoelleJay (talk) 02:14, 25 July 2023 (UTC)
Sorry, but I still don't get it. The journal always has a role in the controversy, tautologically: they are the ones who made the choice to publish the article. The publication of an article is an act taken by the journal, so nothing in policy prevents us from writing about it when we write about the journal. The things that do tie our hands are not being allowed to synthesize a conclusion and not being allowed to start the criticism from scratch ourselves.
I am not sure that having a Wikipedia article makes a journal look more respectable to any practical extent. People believe fringe nonsense because they want to, and everything else is reinterpreted to suit. Wikipedia has an article that says nothing much about the journal that stands out in any way? Ah ha, the Truth will out! No Wikipedia article for the journal? Well, they weren't expecting one, and/or Wikipedia is part of the Establishment working to censor the Truth. Wikipedia has an article that calls the journal sketchy or predatory? Obviously, the Establishment is at work, suppressing all those who would speak the Truth. Heads I win, tails you lose; the enemy is strong and weak as the occasion demands.
Suppose we redirected BioEssays and maybe other Wiley journals to a list, as has been a suggested course of action for journals whose articles are too stubby. Would appearing in a list of Biology journals published by Wiley (for example) make it look more questionable to someone wanting to know if some new paper is serious? Maybe, but I'm kind of doubtful. XOR'easter (talk) 02:58, 25 July 2023 (UTC)
We wouldn't include criticism of individual books at the publisher's page unless the criticism actually describes the role of the publisher. We don't do that with every controversial film at the articles of film production companies either. So why would we do the same for a journal?
Having a Wikipedia article lends a ton of legitimacy toward a journal, just as it lends legitimacy to any business. It skyrockets the journal toward the top of search rankings. And since people expect articles on academic subjects in particular to be neutral and accurate, having an article on a journal sourced only to its own website and some uncontextualized numbers is even more misleading. Being in a list where the publisher's reputation in [field] is described provides a much less isolated treatment of a journal, doesn't confer the same degree of authority, and doesn't present ABOUTSELF from the journal as if it's a secondary independent evaluation. JoelleJay (talk) 22:09, 26 July 2023 (UTC)
Since input was requested on the Village Pump, I'll state for the record, as a heretofore uninvolved editor, that I do not support this edit. First, by its literal terms it says nothing at all -- any subject may not be notable, for any number of reasons, so the fact that a journal with only index entries may not be notable provides no new information. Verbiage that adds nothing should not be added. Second, looking at what the edit implies but does not say, it suggests a harder line against databases than is supported by the current language of WP:N, where footnote 1 only states that databases (presumably including indices such as these) are examples of RS coverage that may not actually support notability when examined. Footnote 1 is careful to close no doors, and to leave this fact-sensitive question to be settled based on the particulars of each situation. I suppose that nobody could really object to including the exact language of footnote 1 here, but I'm not really sure why we would do that either. If this essay has merit, its merit comes precisely from saying things that other pages don't. Otherwise, why have it at all?
Having spoken my piece, I will now depart; if for some unlikely reason further input from me is required, please ping. -- Visviva (talk) 05:12, 23 July 2023 (UTC)
@Visviva: You're inconsistently condemning in one place and praising in another the same kind of may language. So, it's not easy to follow your rationale. Regardless, it seems pretty clear to me that what "may not" means in this particular case is that inclusion in an index isn't a notability indicator. Maybe it could be phrased more emphatically (if consensus arrives to include something like this at all, which is an open question).  — SMcCandlish ¢ 😼  08:08, 26 July 2023 (UTC)
I would hate to give the impression that I am praising footnote 1, of which I am the opposite of a fan. But one thing that is true about that footnote is that it adds new information, which this language does not. (That is, the baseline rule would otherwise be that all RSs providing sigcov contribute to notability, so footnote 1's statement that some RSs may not do so adds new information.) The problem with language like the proposed edit, which by the plain meaning of its words adds no new information at all, is that the reader is naturally led to draw a Gricean implication that the language must be trying to say something that it does not actually say, since otherwise the maxims of relevance and quantity would be violated. So at best, it seems to me that this edit would leave us with yet another provision that will be prone to being given an exclusionary reading not supported by its plain language. -- Visviva (talk) 23:05, 26 July 2023 (UTC)
Seems like a surmountable problem, of just needing to be written more clearly (in one direction or the other). And maybe "praising" was too strong a word; more like "relying upon".  — SMcCandlish ¢ 😼  23:32, 26 July 2023 (UTC)
I think that if you want to resolve the ambiguity, you change "may not be notable" to "may or may not be notable", and if you want to solve a bigger problem, you remind editors not to judge notability solely based on the sources that have already been cited in the article. WhatamIdoing (talk) 01:26, 27 July 2023 (UTC)
  • I have to agree with "If the only reliable independent sources about a journal are its inclusion in scholarly indices, then the journal may not be notable." I also don't disagree with the idea in a thread above that perhaps this question should be put to an RfC. It is probably better to get a site-wide consensus on the matter, which is apt to have some actual effect on AfD debates and such, than to rely on whether some disputed essay says it or not.  — SMcCandlish ¢ 😼  00:37, 26 July 2023 (UTC)
    You're missing the point. The question is not whether some Wikipedia editors may consider such a journal to be non-notable (the answer to that is obviously yes), nor whether we have a consensus for considering such journals notable or non-notable (neither position has consensus). The question here is whether this essay, used to encapsulate the point of view of some editors on this position, should be amended to instead reflect the opposite point of view. Is that an appropriate thing to do to essays? Obviously, other essays do not in general reflect consensus views (otherwise they would not just be essays). So why is it that you are stating your agreement with handling an essay in that way? —David Eppstein (talk) 00:47, 26 July 2023 (UTC)
    Well, that's a question. I don't feel that even the existence of this essay at all is particularly important or dispositive of anything; what ultimately matters is forming better consensus on how to determine journal notability. A problem in the debate/point you want to focus on is that this essay is named as if it is a guideline on academic journal notability. It's probably ultimately permissible (though not very helpful) that an essay be named as if it's an NC guideline, but probably only if it's inclusive of significant divergent views on the subject, perhaps clearly denoted as conflicting opinions for the editor-reader to weight. In my own big pile o' essays, I've frequently "permitted" people to inject contrary viewpoints into them for this reason (though often re-edited to make it clearer that there are multiple views to account for).
    But if people want to gate-keep this as an opinion (one particular opinion) piece, it should be moved to a title that encapsulates that opinion. (I say this as someone who spent a lot of time, back when, cleaning up "essay space" by moving confusingly named pages around to better, more identifying names, and I don't think I ever got seriously challenged on any of the moves, other than one, which ended up being rewritten from the top down anyway.) I.e., I'm not looking at how to "WP:WIN", from a particular "side", but how to make the dispute go away one way or another and how to arrive at a more solid consensus. I don't spend all my time in science articles, but I do have an interest in creating some articles on journals and on some academic biography subjects, and it's frankly very uncertain ground; I fear that I would waste a lot of research time, only to have the material targeted for deletion, under principles that are not actual rules and on which there is not widespread agreement.  — SMcCandlish ¢ 😼  00:55, 26 July 2023 (UTC)
    @SMcCandlish, yes, that's a great way of putting it. People expect that frequently-used essays (especially those that are used to influence mainspace outcomes directly, and especially especially those cited by admins) do have some consensus discussion behind them wherein significant dissenting opinions have actually been considered. JoelleJay (talk) 22:26, 26 July 2023 (UTC)
    I think part of the 'problem' is the name of the page. Editors expect that shortcuts beginning with "WP:NOT" will usually link to some part of Wikipedia:What Wikipedia is not, or at least to some policy, even though that is demonstrably false. Editors expect that pages that follow the naming convention for notability will reflect a view that the community in general more or less agrees with. WhatamIdoing (talk) 22:39, 26 July 2023 (UTC)
    Yes, that is why @ජපස was trying to get the essay marked as "historical" or "failed". JoelleJay (talk) 00:40, 27 July 2023 (UTC)
    Of which it is neither. Headbomb {t · c · p · b} 00:56, 27 July 2023 (UTC)
    Except for the fact that it failed when it attempted to gain consensus as a guideline? JoelleJay (talk) 01:24, 27 July 2023 (UTC)
    Failing as a proposal is more nuanced than that. "I think it needs a little work, and then come back later" is not a rejection in the same way that, say, Wikipedia:Identifying reliable sources (history) was rejected on grounds of fundamental problems (like trying to require only scholarly sources and ban textbooks). WhatamIdoing (talk) 01:28, 27 July 2023 (UTC)
    If consensus for broad community support has not developed after a reasonable time, the proposal has failed. If consensus is neutral or unclear on the issue and unlikely to improve, the proposal has likewise failed. JoelleJay (talk) 02:15, 27 July 2023 (UTC)
    The rejection happened in 2009. The part "needs a little work, and then come back later" never happened. Clearly it is a failure then?
    The Wikipedia way would be indeed to work on it, address the problems identified in that discussion, and actually obtain consensus support. But the editors here adamantly refuse to do it. Tercer (talk) 10:10, 27 July 2023 (UTC)
    "needs a little work, and then come back later"
    It had a ton of work done on it. It also aims to reflect how deletion discussions actually go. And this is how deletion discussions actually go. Headbomb {t · c · p · b} 14:07, 27 July 2023 (UTC)
    Don't be disingenuous. I'm talking about working on the problems identified in the guideline discussion. What are they? I counted 13 oppose !votes, of which 9 were complaining about it being indiscriminate. They specifically mentioned the same problems that we are arguing about now: how it contradicts GNG, how it recommends articles without having SIGCOV, how it uses being indexed as a criterion for notability. You have always adamantly refused to fix them. No wonder that 14 years later this essay is still controversial for the same reasons.
    Frankly, Wikipedia is based on working towards consensus, not on retreating to your bunker. Tercer (talk) 19:32, 27 July 2023 (UTC)