Why don't I see publications criticising other publications?
This is about technology research specifically.
Often when reading a publication I'll be thinking to myself: "Yeah, you've got that thing, but what about this and that limitation you brush off, and this and that assumption which seems pretty poorly justified?" After starting my PhD I was pretty shocked by just how common that seems to be.
The only critiques of other publications I ever see are either in the literature review sections of papers suggesting alternative methods, or mentioned in surveys. In either case, those tend not to be very in-depth.
Why is this the case? It seems to me like something that would be pretty damaging to research in general, as then you see a lot of papers with built-in assumptions and limitations propagated by their predecessors.
publications technology
Two or three decades ago a professor used to publish only a few research articles in his entire career; now I have seen people publish tens of articles during a PhD. The quality of published articles is declining day by day. There are multiple reasons, but the most important one, in my opinion, is the publish-or-perish culture imposed on academia.
– MBK
Nov 23 '18 at 11:42
@Trilarion not a very good one imo - there's all sorts of reasons one paper could have fewer citations than another, and I think poor quality is reasonably far down the list
– llama
Nov 23 '18 at 15:28
Maybe that's field-dependent, but I see that a lot in political science. A major way to bolster the "relevance" of a publication is to situate it within a debate. Bonus points if you criticize "conventional wisdom". So I'm not sure about the premise of this question.
– henning
Nov 23 '18 at 21:03
Having written such an article myself: they do exist, but getting the balance between constructive criticism and pure negativity right is tricky. Articles that cite our paper also often uncritically cite the paper we are refuting.
– Konrad Rudolph
Nov 24 '18 at 13:53
The complaint about "publish or perish" is naive. "Publish or perish" dates back to the 1920s, and peaked in the 1970s. The notion that this is something new reflects a lack of historic awareness. Criticisms are far more common today than they were in the mythical good old days, which were all about your (white male non-Jewish) friends in the field.
– iayork
Nov 25 '18 at 14:30
asked Nov 23 '18 at 11:16
KubaFYI
12 Answers
Such criticisms are common in journals like Nature and PNAS. They're often not called "papers" -- PNAS calls them "Letters", Science calls them "Technical Comments", and Nature calls them "Brief Communications Arising" -- but they do get a significant degree of pre-publication scrutiny and often are peer reviewed. Some examples from the past few weeks:
Is “choline and geranate” an ionic liquid or deep eutectic solvent system? (PNAS)
Tagging the musical beat: Neural entrainment or event-related potentials? (PNAS)
Comment on “Predicting reaction performance in C–N cross-coupling using machine learning” (Science)
Comment on “The earliest modern humans outside Africa” (Science)
Assumptions for emergent constraints (Nature)
Emergent constraints on climate sensitivity (Nature)
Climate constraint reflects forced signal (Nature)
Other journals also have systems for responses, though most are not as organized about it.
So the premise is not quite right -- criticisms of high-profile papers are not unusual -- but it's true that the vast majority of papers don't get formal critiques like this -- even though the vast majority of papers probably do have something that could be criticized.
That's part of the reason. Very few papers are completely above reproach (except for my own, of course). If I were to send a critical letter about every issue I see in every paper I read, I'd spend my time doing nothing else, and the journals would be filled with my letters to the editor, with no room for new articles.
And we're all adults here. It's expected that scientists read papers critically; that's part of our job. That means we all find things to criticize. Finding a problem in a published paper isn't a shocking, scandalous thing. It's like a toll collector breaking a $20 bill, a minor technical part of your job. It would be condescending of me to assume that I'm the only person who noticed the issues in question, and that it's up to me to save my colleagues from their stupidity and ignorance.
Finally, when I do respond to publications that need criticizing, I rarely phrase my responses that way. For example, several years ago I read an interesting paper in my field that, I thought, made a significant error in its assumptions. To show that, I repeated some of their experiments, adding in the missing controls, and then extended the work to show where the corrected observations led. I didn't publish it as a criticism of the previous work (which was from researchers whom I know and admire and who have done a lot of great stuff). I published it as a paper that can stand on its own, noting the previous work in passing but not making a big deal of the mistake. Hopefully, if I ever make a mistake (unlikely!), my colleagues will correct me in the same way. Science is hard, and it shouldn't be like being a hockey goalie, where every time you make a mistake a buzzer goes off and thirty thousand people yell at you.
This answer is quite optimistic in some regards, and it seems to assume both that spotting problems while reading publications is easy, and that redoing experiments to verify scientific claims one way or the other isn't a waste of time. However, in the spirit of the answer I will refrain from writing my own answer criticizing this answer. :)
– Trilarion
Nov 23 '18 at 15:53
I don't know if spotting problems is easy per se, but it's such a routine part of the job that novices are, or should be, intensively trained in it. The fact that academia SE and similar sites constantly see questions like this one, from new PhD students, suggests that even people fairly new to the field are able to spot problems quickly.
– iayork
Nov 23 '18 at 15:57
I think this is true for your immediate field of expertise, but if you move a bit further away it might be more complicated. What I think could help would be something similar to the customer reviews on Amazon product pages: in this case, reader reviews of scientific publications. It would save me a lot of time if I could see what problems other readers have spotted without needing to check every publication on my own every time. Problem spotting is much easier if we all do it together instead of everyone for themselves.
– Trilarion
Nov 23 '18 at 21:34
There is also a category "Analysis" in Nature: nature.com/nature/articles?type=analysis
– Norbert
Nov 24 '18 at 12:15
@silvado You mean scientists should pay their friends to leave positive comments about them and then we all agree to only read papers that are 4 stars or more?
– sgf
Nov 27 '18 at 15:03
Many do, just in a polite way. Most of numerical modelling seems to be a variation on nicely stating "previous folks did things wrong and/or inefficiently, and we improve upon that; or at least we offer another approach, without demonstrating their failings." There, criticism of other approaches' failings is often woven throughout the article.
Additionally, you are starting your PhD.
- What you feel might be a serious omission regarding method applicability could be a well-known limitation of the method that everyone in the community is aware of. In this case, your "you are propagating BS" is a pointless and wrong rant; they know their methods better than you. Say everyone already knows X gives about 1% higher error, but a 100x easier simulation/experiment. If you suspect this might be the case, rather write a polite email asking for clarification: whether the obtained results are still valid, how the approach could be improved, and so on.
- On the other hand, everyone could be assuming something that is actually incorrect for the problem, and you know it. Here, bluntly stating "you are propagating BS" isn't technically wrong, but you won't make any new friends that way. Rather, write a nice article showing how and why everyone is approaching the problem the wrong way and how you solve their issues; you would instantly become a highly regarded member of the community.
But yeah, a lot of fairly shaky research is taken as true, and this leads to a severe reproducibility problem across fields. There are journals for publishing reproductions of others' work, but most people prefer doing something original. Can't blame them: invent X and have your name immortalized, versus "yeah, X's results check out".
If you don't see criticism in papers, it's because you are not recognizing it when you see it. It's there. It's just between the lines.
– Ian Sudbery
Nov 23 '18 at 13:44
Generally good answer. Although repeating the premise that "everyone already knows X" alongside the premise that the OP doesn't is clearly contradictory.
– Lightness Races in Orbit
Nov 23 '18 at 23:00
@LightnessRacesinOrbit "Everyone" means "most experts", not literally everyone.
– David Richerby
Nov 24 '18 at 1:18
@DavidRicherby The word "everyone" means "everyone" - say what you mean!
– Lightness Races in Orbit
Nov 24 '18 at 14:02
@LightnessRacesinOrbit if you insist that the word "everyone" must mean literally every single person who is currently alive, the word becomes almost useless. Language is not mathematics. We don't communicate by concatenating dictionary definitions.
– David Richerby
Nov 24 '18 at 14:11
Politics. Criticising peers directly leads to a lack of political support.
This is because science is increasingly based on connections and short-term appraisal, and this is greatly detrimental to the advancement of hardcore science, and intellectualism in general.
We are social animals. This leads to a natural tendency to take others' actions personally and to "return favours". People are also very wary of others' opinions and of their public image, and as a result are naturally sensitive to negative criticism. Usually, when you hurt someone's feelings in public, you've made yourself an enemy.
I firmly believe in criticism and frankness as the shortest path to improvement. That has put me in delicate situations many times. In my early papers I was eager to point out obvious flaws in whatever specific topic was being discussed, as an attempt to contribute. I have the habit of signing my peer reviews, and I am rather picky as a reviewer. This eventually resulted in my being avoided in the citation and collaboration lists of relevant peers. Nowadays I struggle internally with how to discuss faulty literature and how to approach clearly flawed logic.
I have been told that in order to say what you want, you must first reach a certain level of authority. However, reaching that point largely depends on connections and good relationships. There are no Isaac Newtons today.
This eventually resulted in being avoided in the citation... lists of relevant peers. — I'm sorry you have unethical peers.
– JeffE
Nov 24 '18 at 17:05
An important counterpoint: How not to be a crank, by James Heathers (in which he details how it is possible to level some quite severe criticisms at the literature without receiving a career-ending backlash in return).
– E.P.
Nov 28 '18 at 2:39
Simply speaking: the better long-term strategy for staying in academia is to be the nice guy who plays along.
Academia is full of systemic corruption because jobs in a specific field of research are very limited and competitive, owing to the pyramidal structure of its human resources (number of students > number of PhDs > number of professors). Academia is structured around the selection of outstanding and remarkable individuals. However, history has shown that most metrics tend to be overshadowed by character traits in the long run, especially considering that most professors are not doing much research themselves but are rather responsible for managing workgroups and projects. Hence, the nice, non-critical guy has a structural advantage in the system. Letters of recommendation, which tend to be more positive for a friendly person than for a "troublemaker", are a direct indication of this.
The system is the people and the people select their system. People don't like criticism. So the system/people deselected criticism.
most professors are not doing much research themselves — ...in some fields. This is certainly not true in all fields.
– JeffE
Nov 24 '18 at 17:08
I am going to convert my comment into an answer. Two or three decades ago a professor used to publish only a few research articles in his entire career; now I have seen people publish tens of articles during a PhD. The quality of published articles is declining day by day. There are multiple reasons, but the most important one, in my opinion, is the publish-or-perish culture imposed on academia.
On average, writing a good article takes one to two years, but to secure tenure (in my country) an assistant professor has to publish three SCI articles per year. Under that kind of pressure a researcher cannot write a quality research paper. Moreover, a strong critique requires time to analyse the findings of other researchers, and the race to publish doesn't allow much time to be spent on that part of an article, which I believe is one of its most important parts.
Yuval Noah Harari, author of the famous book Sapiens: A Brief History of Humankind, once said in an interview that he started thinking outside the box when he was promoted to an Associate Professor position, because at his university there is no publication requirement once you reach that rank. That's why he was able to write the book within two years.
@scaaahu Critiques require time to analyse the findings of other researchers, and due to the race to publish most researchers avoid spending it.
– MBK
Nov 23 '18 at 12:10
We really need to all sit down and collectively agree that going on like this doesn't make any sense, if we are to be true to the explicit goal of academia: advancing human understanding of the world. Now what was just a metric has become the implicit goal.
– KubaFYI
Nov 23 '18 at 12:36
I'm pretty sure that all (or many) academics do agree with this. Unfortunately, while the explicit goal of academics might be advancing human knowledge, the academy is not run by academics. Mostly it's funding bodies, governments, and university administrators who make the decisions.
– Ian Sudbery
Nov 23 '18 at 13:42
Have you ever encountered a "letter to the editor" article?
Check, for instance, this article in the Turkish Journal of Urology about writing a letter to the editor:
How to write an editorial letter?
It is a kind of "short communication", and it is generally used to criticize previously published articles. If this is uncommon or absent in a research subject, that is more an issue of the researchers in that field.
The key question there, I guess, is whether anyone will ever see such a critique. I mean, I encounter most of the papers I read either via a search engine or by being linked to them by someone. Does anyone still read entire journals, introductions and all?
– KubaFYI
Nov 23 '18 at 12:03
@KubaFYI These responses/letters act just like other journal articles - they get indexed and have their own reference. They can therefore crop up in a Google Scholar search or similar. In addition, journal websites often prominently link to the response from the original article's page. For example, see this article: pnas.org/content/115/32/8221
– user2390246
Nov 23 '18 at 14:07
@KubaFYI: Any time I see a paper that seems wrong to me, the first thing I do is to do a citation search for other papers that cite that paper. If someone has published a letter or comment saying it's wrong, it will pop up, because they will cite the paper they're criticizing.
– Ben Crowell
Nov 23 '18 at 20:54
Some of the available answers allude to the following principle, but none say it directly.
Identifying idiosyncratic flaws of a single article, if published alone, makes your article a satellite of that primary article; the importance of your article cannot possibly exceed the importance of the primary article, but usually the satellite weighs less.
To reach publication-worthiness (in a journal of the same caliber as the primary publication), you should ideally obtain some results of your own, studying the same problem. That buys you a ticket under which you can not only showcase your own work but also compare it to the state of the field (i.e., the flawed article, and any other relevant prior work that you are aware of) in as much detail as you find intellectually stimulating for a "generic" reader of the journal where you publish.
The point of comparing your work to previous work is to make your contribution shine in its relevance and methodological superiority, not to punish your predecessor or to save your colleagues from believing obviously flawed results. And that is an entirely non-political reason to approach the comparison with the utmost detachment and politeness, in a way where many readers (who never encountered the earlier article) will not even realize that the references to it motivated your article. The flawed article isn't enormously important, right? I'd even argue that you should primarily be comparing your new work to the best available results of others, unless certain flaws propagate through most prior work and your own work stands out (in your eyes) by being more sophisticated and free of them.
The reason some high-profile journals pay more attention to "letters to the editor" than solid "average" journals do is that if the primary article is widely seen as a breakthrough, its satellites might still be publication-worthy, provided the critique is deep or adds a relevant multi-disciplinary angle, and its points are really good.
I disagree with your assumption that this does not happen, although I suppose each discipline may be different.
What does happen is that criticism is often presented in a very diplomatic, courteous, and professional way, such that it may not appear as criticism to the "untrained" eye (but a "trained" person looking for the right paper will pick up on the criticism).
For example, you may find yourself writing a paper which corrects a pretty obvious (in your opinion) and damning flaw in a proposed method. It may be tempting to write "this paper had a serious methodological flaw which invalidates the results and should never have been published; here we show you how to do things properly". But this would be a highly inappropriate way to do it, and a good PhD supervisor will probably ask you (possibly to your displeasure) to rephrase it as something like "We build on the novel methods proposed by X et al., but additionally consider adding Y. Adding Y is justified because adjusting for Z would likely lead to more optimal results, for the following reasons, and so on".
Note that the two are effectively saying the same thing. But to the untrained eye, the first reads like "wow the first paper was bad, and we're exposing them", whereas the second reads "ok, the first paper made a significant step in the right direction, and we found ways to improve on it". Which is kinder, and actually closer to the truth, in most cases.
Some might go to one extreme and call this politics; others might go to the other and call it professionalism. It's probably a little bit of both. But I would agree that the latter makes for a much more welcoming environment; if we wanted flamewars, there will always be YouTube for that.
Additionally, don't forget that (hopefully) these papers have been peer reviewed. Reviewers don't always have to be experts in the particular area of the paper, just 'reasonably familiar' to offer a useful, educated review. What is an 'obvious methodological flaw' for someone who is in the process of becoming an expert in the field, may not be so obvious for someone who is otherwise knowledgeable and interested in the paper, but not necessarily an expert in the field.
So before you go out all guns blazing to criticise someone in a paper for having "obvious" flaws, do keep in mind that you may end up sounding like the nerdy guy who argues that the TX5800 calculator's third bolt under the second shelving unit is actually 5.49 mm, NOT 5.5 mm, and how stupid can someone be for getting that wrong, all you had to do was disassemble the flux combobulator and measure the voltage, even a baby can do that.
You do not want to be that guy.
I hate to break it to you, but the TX5800 doesn't have a flux combobulator, just how ignorant are you? Not only that it isn't a calculator, the two most likely potential products are the Timex TX5800 Digital Photo Frame with Temperature, Alarm Clock and Calendar or the Texwipe TexWrite MP-10 TX5800. Texas Instruments uses TI, not TX. Check ya'self, before you wreck ya'self. :P
– ttbek
Nov 26 '18 at 23:33
bwahahah. I'm sorry I'm sorry I'm sorry! xD ... dammit, I just realised, I just missed an amazing opportunity to use TIS-100 instead!
– Tasos Papastylianou
Nov 27 '18 at 6:58
Other answers discuss good reasons (besides the fair point that such critiques do exist): e.g., good criticism takes time, and the authors may know the specifics better than you.
Another reason why criticism is usually very toned down: the reviewers for your article are selected based on familiarity with the field, which means they likely have a (favorite) method or approach of their own. So new articles need to strike a balance between saying "the existing methods are insufficient" (which is why we are publishing our novel one) while at the same time not being too harsh (which would offend the reviewers).
- Criticism is published, as pointed out by iayork. It may also appear in less formal places like blogs. So one reason you don't see it is that you aren't looking in the right places.
- The best place for criticism is in the peer review process. (That is, peer review should catch bad work before it is published, rendering subsequent criticism unnecessary. However, bad work does sometimes get published, so criticism of published work is appropriate then.) Reviews are not usually published, but some venues do publish reviews. E.g., ICLR and NeurIPS reviews are available with the papers (for ICLR, even the rejected papers have reviews published).
- Constructive criticism is better than simply trashing other people's work. Attacks can backfire -- they make the attacker look unprofessional, and things can degenerate into a back-and-forth flame war. Constructive criticism appears in the form of follow-up work that fixes the shortcomings of prior work. This is more polite and less obvious, so you may not notice it.
- Even when bad work gets published, it may simply be best to ignore it. Engaging in criticism is a messy business and some of the mud may stick to the critic. Criticism usually only appears when the work in question is getting a lot of attention.
"The right place for criticism is in the peer review process." -- is this to say published work ought not be criticised? I don't see why.
– Scientist
Nov 23 '18 at 18:48
@Scientist Of course published work should be criticized! However, peer review should be the first line of defense. Most of the time when published work is being criticized, it's because it should not have been published in the first place. I have edited to emphasize this.
– Thomas
Nov 23 '18 at 18:57
2
"it's because it should not have been published in the first place" You are assuming the criticism is because of poor quality or (obviously) wrong conclusions. In that case, yes, it shouldn't have been published. But there are many cases where different views exist: different models, different explanations, or similar. Or both the authors and the reviewers missed something. In those cases, review is the worst place to put your criticism (although some try), since it prevents the exchange of ideas and also doesn't get published.
– DSVA
Nov 24 '18 at 4:25
2
So what would you call it then? Here's a paper clearly criticising the idea of "secondary orbital interactions" pubs.acs.org/doi/abs/10.1021/ar0000152 and here's a response ncbi.nlm.nih.gov/pubmed/17109435 clearly criticising the first paper. "Criticism is the practice of judging the merits and faults of something."
– DSVA
Nov 24 '18 at 5:53
1
I’m sure all published literature is open for criticism, and must be criticized. Science is built on correction of flawed assumptions. We’re always off the mark and missing important factors. It’s a never ending discussion.
– Scientist
Nov 24 '18 at 12:48
After the second year of my Ph.D., I began learning in depth how to write highly regarded publications. The first thing that was hammered into me was how to describe the assumptions and limitations of the given research, and how to give it a framework or scope. My first publication described many of these limitations and much of this scope in detail, but my second and third publications did not, because I had already established these issues in my first one. So it is paramount that you check ALL the citations, especially the ones by the same author: these usually give you the 'background' of the research. In addition, be familiar with 'well known' assumptions and practices as well. They are accepted for a reason.
Second, I have seen an article (peer-reviewed, in a high-ranking journal) that outright attacked an earlier published work by another author. Even though the attacking article may be correct, it did not exactly contribute anything to the field. On the other hand, if you developed a novel methodology that addresses the issue present in that offending publication, you can say, "here's my work on this, and I did this", cite that paper, and then say something like, 'this improves the results because their approach did not address this particular issue'.
Third, if you burn bridges and piss off the wrong people by attacking them outright, you will not get very far. It is a fact of life. Learn how to navigate the world with proper conduct and a delicate touch (people ARE sensitive), and stay true to yourself.
None of the answers given so far covers the aspects that are really relevant here. Most of them are poorly written and only mention one or two arguments, probably only in an effort to gain reputation. My answer will lay out the arguments in a structured form, with unprecedented completeness and clarity, and with nice formatting. It will be enriched by an example which was not yet given in any other answer, and help the reader to properly and thoroughly understand this complex topic.
Well. Let's see how that plays out.
Seriously: The main arguments have already been given, and can roughly be classified into political/interpersonal ones or methodological/technical ones. I think that the details will vary depending on the subject, but the tags indicate that the question refers to the more technical fields.
The political arguments are mainly that the critique might backfire and hurt your reputation. Beyond that, people are usually not funded for criticizing others: a paper that only criticizes another will hardly be published, and publication count is in many cases the only measure of "success in academics". And even if your critique is justified and technically sound, a harsh critique may simply be deemed "unnecessary", and thus cast the author in a bad light.
The technical arguments concern the effort necessary for a profound and (optionally) "constructive" critique. In order to really identify technical flaws, you need a deep familiarity with the topic. Care has to be taken to eliminate the slightest doubts when criticizing others. This is particularly difficult when you are at the beginning of your career. The situation may be different when you really know the related work inside out and backwards.
Basically every non-trivial approach or insight has limitations or (hidden) assumptions. For a critique to be profound, the results often have to be replicated and the flaws have to be "verified", in that sense. The efforts for this are often prohibitively large. Whether a critique is considered to be constructive then mainly depends on whether you can suggest improvements. This was already summarized nicely in the answer by Jirka Hanika.
Therefore, much of the game that is played in the academic world consists of finding flaws and suggesting improvements: Find a paper that shows how to solve a certain problem with red, green and yellow balloons. Write a paper that points out the "serious limitation" of not considering blue balloons. Show that the same problem can be solved with blue balloons. You got a publication there, and maybe another year of funding.
However, there are papers that plainly criticize others. I'd like to refer to one of my favorite papers here, with the ballsy title "Clustering of Time Series Subsequences is Meaningless":
Given the recent explosion of interest in streaming data and online algorithms, clustering of time series subsequences, extracted via a sliding window, has received much attention. In this work we make a surprising claim. Clustering of time series subsequences is meaningless.
[...]
These results may appear surprising, since they invalidate the claims of a highly referenced paper, and many of the dozens of extensions researchers have proposed ([a list of a dozen publications]).
So this paper basically burned down a whole research branch. Reading it can give you a glimpse of how difficult it is to criticize others in a way that cannot be attacked or questioned on a methodological level. And even though the author himself says that the results are "negative", I think that one of the most useful contributions a scientist can make is to put people back on the right track, instead of participating in the game that is essentially a politically and financially motivated waste of time.
So when you're sure that you can profoundly criticize others: Do it.
1
+1 for an answer that is also a practical example. It really captures the sound of many research articles' introductions.
– henning
Nov 27 '18 at 19:48
protected by eykanal♦ Nov 26 '18 at 13:06
12 Answers
Such criticisms are common in journals like Nature and PNAS. They're often not called "papers" -- PNAS calls them "Letters", Science calls them "Technical Comments", and Nature calls them "Brief Communications Arising" -- but they do get a significant degree of pre-publication scrutiny and often are peer reviewed. Some examples from the past few weeks:
Is “choline and geranate” an ionic liquid or deep eutectic solvent system? (PNAS)
Tagging the musical beat: Neural entrainment or event-related potentials? (PNAS)
Comment on “Predicting reaction performance in C–N cross-coupling using machine learning” (Science)
Comment on “The earliest modern humans outside Africa” (Science)
Assumptions for emergent constraints (Nature)
Emergent constraints on climate sensitivity (Nature)
Climate constraint reflects forced signal (Nature)
Other journals also have systems for responses, though most are not as organized about it.
So the premise is not quite right -- criticisms of high-profile papers are not unusual -- but it's true that the vast majority of papers don't get formal critiques like this -- even though the vast majority of papers probably do have something that could be criticized.
That's part of the reason. Very few papers are completely above reproach (except for my own, of course). If I were to send a critical letter about every issue I see in every paper I read, I'd spend my time doing nothing else, and the journals would be filled with my letters to the editor, with no room for new articles.
And we're all adults here. It's expected that scientists read papers critically; that's part of our job. That means we all find things to criticize. Finding a problem in a published paper isn't a shocking, scandalous thing. It's like a toll collector breaking a $20 bill, a minor technical part of your job. It would be condescending of me to assume that I'm the only person who noticed the issues in question, and that it's up to me to save my colleagues from their stupidity and ignorance.
Finally, when I do respond to publications that need criticizing, I rarely phrase my responses that way. For example, several years ago I read an interesting paper in my field that, I thought, made a significant error in its assumptions. To show that, I repeated some of their experiments, adding in the missing controls, and then extended the work to show where the corrected observations led. I didn't publish as a criticism of the previous work (which was from researchers who I know and admire and who have done a lot of great stuff). I published it as a paper that can stand on its own, noting the previous work in passing but not making a big deal of the mistake. Hopefully, if I ever make a mistake (unlikely!) my colleagues will correct me in the same way. Science is hard, and it shouldn't be like a hockey goalie, where every time you make a mistake a buzzer goes off and thirty thousand people yell at you.
29
This answer is quite optimistic in some regards and seems to assume both that spotting problems while reading publications is easy and that redoing experiments to verify scientific claims one way or the other isn't a waste of time. However, in the spirit of the answer I will refrain from writing my own answer criticizing this answer. :)
– Trilarion
Nov 23 '18 at 15:53
2
I don't know if spotting problems is easy per se, but it's such a routine part of the job that novices are, or should be, intensively trained in it. The fact that academia SE and similar sites constantly see questions like this one, from new PhD students, suggests that even people fairly new to the field are able to spot problems quickly.
– iayork
Nov 23 '18 at 15:57
I think this is true for your immediate field of expertise, but if you move a bit further away it might be more complicated. What I think could help would be something similar to the customer reviews on Amazon product pages, in this case reader reviews of scientific publications. It would save me a lot of time if I could see what problems other readers have spotted without me needing to check every publication on my own every time. Problem spotting is much easier if we would all do it together instead of everyone for himself/herself.
– Trilarion
Nov 23 '18 at 21:34
There is also a category "Analysis" in Nature: nature.com/nature/articles?type=analysis
– Norbert
Nov 24 '18 at 12:15
2
@silvado You mean scientists should pay their friends to leave positive comments about them and then we all agree to only read papers that are 4 stars or more?
– sgf
Nov 27 '18 at 15:03
answered Nov 23 '18 at 14:31 (edited Nov 23 '18 at 15:47)
– iayork
Many do, just in a polite way. Most of numerical modelling seems to be a variation on nicely stating "previous folks did things wrong and/or inefficiently; we improve upon that, or at least offer another approach", without spelling out their failings. There, criticism of the other approaches' shortcomings often runs throughout the article.
Additionally, you are starting your PhD.
- What you feel might be a serious omission regarding a method's applicability could be a well-known limitation that everyone in the community is aware of. In this case, your "you are propagating BS" is a pointless and wrong rant; they know their methods better than you. Say everyone already knows X gives about 1% higher error but a 100x easier simulation/experiment. If you suspect this might be the case, write a polite mail instead, asking for clarification: whether the obtained results are still valid, how the approach could be improved, and so on.
- On the other hand, everyone could be assuming something that is actually incorrect for the problem, and you know it. Here, bluntly stating "you are propagating BS" isn't technically wrong, but you won't make any new friends that way. Instead, write a nice article showing how and why everyone is approaching the problem the wrong way and how you solve their issues; you would instantly become a highly regarded member of the community.
But yeah, a lot of fairly shaky research is taken as true, and this leads to a severe reproducibility problem across fields. There are journals for publishing reproductions of others' work, but most people prefer doing something original. Can't blame them: invent X and have your name immortalized, versus "yeah, X's results check out".
34
If you don't see criticism in papers, it's because you are not recognizing it when you see it. It's there. It's just between the lines.
– Ian Sudbery
Nov 23 '18 at 13:44
2
Generally good answer. Although repeating the premise that "everyone already knows X" alongside the premise that the OP doesn't is clearly contradictory.
– Lightness Races in Orbit
Nov 23 '18 at 23:00
6
@LightnessRacesinOrbit "Everyone" means "most experts", not literally everyone.
– David Richerby
Nov 24 '18 at 1:18
2
@DavidRicherby The word "everyone" means "everyone" - say what you mean!
– Lightness Races in Orbit
Nov 24 '18 at 14:02
14
@LightnessRacesinOrbit if you insist that the word "everyone" must mean literally every single person who is currently alive, the word becomes almost useless. Language is not mathematics. We don't communicate by concatenating dictionary definitions.
– David Richerby
Nov 24 '18 at 14:11
Many do, just in a polite way. Most of numerical modelling seems to be a variation on nicely stating "previous folks did things wrong and/or inefficiently; we improve upon that, or at least offer another approach without dwelling on their failings." There, criticism of other approaches often runs throughout the article.
Additionally, you are just starting your PhD.
- What feels to you like a serious omission regarding a method's applicability could be a well-known limitation that everyone in the community is aware of. In that case, your "you are propagating BS" is a pointless and wrong rant - they know their methods better than you do. Say everyone already knows X gives about 1% higher error, but a 100x easier simulation/experiment. If you suspect this might be the case, write a polite email instead, asking for clarification: whether the obtained results are still valid, how the approach could be improved, and so on.
- On the other hand, everyone could be assuming something that is actually incorrect for the problem, and you know it. Here, bluntly stating "you are propagating BS" isn't technically wrong, but you won't make any new friends that way. Instead, write a good article showing how and why everyone is approaching the problem the wrong way and how you solve their issues - you would instantly become a highly regarded member of the community.
But yes, a lot of fairly shaky research is taken as true, and this leads to a severe reproducibility problem across fields. There are journals for publishing reproductions of others' work, but most people prefer doing something original. You can't blame them: invent X and have your name immortalized, versus "yeah, X's results check out".
answered Nov 23 '18 at 13:32
ZizyZizy
51713
34
If you don't see criticism in papers, it's because you are not recognizing it when you see it. It's there. It's just between the lines.
– Ian Sudbery
Nov 23 '18 at 13:44
2
Generally good answer. Although repeating the premise that "everyone already knows X" alongside the premise that the OP doesn't is clearly contradictory.
– Lightness Races in Orbit
Nov 23 '18 at 23:00
6
@LightnessRacesinOrbit "Everyone" means "most experts", not literally everyone.
– David Richerby
Nov 24 '18 at 1:18
2
@DavidRicherby The word "everyone" means "everyone" - say what you mean!
– Lightness Races in Orbit
Nov 24 '18 at 14:02
14
@LightnessRacesinOrbit if you insist that the word "everyone" must mean literally every single person who is currently alive, the word becomes almost useless. Language is not mathematics. We don't communicate by concatenating dictionary definitions.
– David Richerby
Nov 24 '18 at 14:11
|
show 6 more comments
Politics. Criticising peers directly leads to a lack of political support.
This is because science is increasingly based on connections and short-term appraisal, which is greatly detrimental to the advancement of hardcore science, and to intellectualism in general.
We are social animals. This leads to a natural tendency to take others' actions personally and to "return favours". People are also very wary of others' opinions and of their public image, and as a result are naturally sensitive to negative criticism. Usually, when you hurt someone's feelings in public, you have made yourself an enemy.
I firmly believe in criticism and frankness as the shortest path to improvement. That has put me in delicate situations many times. In my early papers I was eager to point out obvious flaws in whatever specific topic was being discussed, in an attempt to contribute. I have the habit of signing my peer reviews, and I am rather picky as a reviewer. This eventually resulted in my being left out of the citation and collaboration lists of relevant peers. Nowadays I struggle internally with how to discuss faulty literature and how to approach clearly flawed logic.
I have been told that in order to say what you want, you must reach a certain level of authority. However, reaching that point largely depends on connections and good relationships. There are no Isaac Newtons today.
answered Nov 23 '18 at 18:07
ScientistScientist
7,05512657
7
This eventually resulted in being avoided in the citation... lists of relevant peers. — I'm sorry you have unethical peers.
– JeffE
Nov 24 '18 at 17:05
1
An important counterpoint: How not to be a crank, by James Heathers (in which he details how it is possible to level some quite severe criticisms at the literature without receiving a career-ending backlash in return).
– E.P.
Nov 28 '18 at 2:39
add a comment |
Simply speaking: the better long-term strategy for staying in academia is to be the nice guy who plays along.
Academia is full of systemic corruption because jobs in a specific field of research are very limited and competitive, due to the pyramid nature of its human resources (number of students > number of PhDs > number of professors). Academia is structured around the selection of outstanding and remarkable individuals. However, history has shown that most metrics tend to be overshadowed by character traits in the long run, especially considering that most professors are not doing much research themselves but are rather responsible for the management of workgroups and projects. Hence, the nice, non-critical guy has a structural advantage in the system. Letters of recommendation, which tend to be more positive for a friendly person than for "troublemakers", are a direct indication of this.
The system is the people, and the people select their system. People don't like criticism. So the system/people deselected criticism.
edited Nov 26 '18 at 23:33
answered Nov 24 '18 at 13:23
imageimage
36227
5
most professors are not doing much research themselves — ...in some fields. This is certainly not true in all fields.
– JeffE
Nov 24 '18 at 17:08
add a comment |
I am going to convert my comment into an answer. Two or three decades ago, a professor might publish only a few research articles in his entire career; now I have seen people publishing tens of articles during their PhD. The quality of published articles is getting lower by the day. There are multiple reasons, but the most important one, in my opinion, is the publish-or-perish culture imposed on academia.
On average, writing a good article takes 1-2 years, but to keep tenure (in my country) an assistant professor has to publish three SCI articles per year. Under that kind of pressure, a researcher cannot write a quality research paper. Moreover, a strong critique requires time to analyse the findings of other researchers, and the race to publish doesn't allow much time to be spent on that part of an article, which I believe is one of its most important parts.
Yuval Noah Harari, author of the famous book Sapiens: A Brief History of Humankind, once said in an interview that he started thinking outside the box when he was promoted to an Associate Professor position, because at his university there is no publication requirement once you reach that rank. That is why he was able to write the book within two years.
edited Nov 27 '18 at 15:59
answered Nov 23 '18 at 11:55
MBKMBK
2,49511628
2
@scaaahu Critiques require time to analyse the findings of other researchers, and due to the race to publish, most researchers avoid it.
– MBK
Nov 23 '18 at 12:10
4
We really need to all sit down and collectively agree that going on like this doesn't make any sense, if we are to be true to academia's explicit goal of advancing human understanding of the world. What is just a metric has now become the implicit goal.
– KubaFYI
Nov 23 '18 at 12:36
4
I'm pretty sure that all (or many) academics do agree with this. Unfortunately, while the explicit goal of academics might be advancing human knowledge, the academy is not run by academics. Mostly funding bodies, governments and university administrators make the decisions.
– Ian Sudbery
Nov 23 '18 at 13:42
add a comment |
Have you ever encountered a "letter to the editor" article?
Check, for instance, this article in the Turkish Journal of Urology about writing a letter to the editor:
How to write an editorial letter?
It is a kind of short communication, generally used to criticize previously published articles. If such letters are uncommon or absent in a given research area, that is more an issue with the researchers in that field.
2
The key question there, I guess, is whether anyone will ever see such a critique. I mean, I encounter most of the papers I read either through a search engine or by being linked to them by someone. Does anyone still read entire journals, introductions and all?
– KubaFYI
Nov 23 '18 at 12:03
@KubaFYI These responses/letters act just like other journal articles - they get indexed and have their own reference. They can therefore crop up in a Google Scholar search or similar. In addition, journals websites often prominently link to the response from the original article's page. For example, see this article: pnas.org/content/115/32/8221
– user2390246
Nov 23 '18 at 14:07
@KubaFYI: Any time I see a paper that seems wrong to me, the first thing I do is to do a citation search for other papers that cite that paper. If someone has published a letter or comment saying it's wrong, it will pop up, because they will cite the paper they're criticizing.
– Ben Crowell
Nov 23 '18 at 20:54
answered Nov 23 '18 at 11:55
user91300
Some of the available answers allude to the following principle, but none say it directly.
Identifying idiosyncratic flaws of a single article, if published alone, makes your article a satellite of that primary article; its importance cannot possibly exceed that of the primary article, and usually the satellite weighs less.
The best way to reach publication-worthiness (in a journal of the same caliber as the primary publication) is to obtain some results of your own, studying the same problem. That buys you a ticket under which you can not only showcase your own work but also compare it to the state of the field (i.e., the flawed article, and any other relevant prior work that you are aware of) in as much detail as a "generic" reader of the journal you publish in will find intellectually stimulating.
The point of comparing your work to previous work is to make your contribution shine in its relevance and methodological superiority, rather than to punish your predecessor or to save your colleagues from believing obviously flawed results. And that is an entirely non-political reason to approach the comparison with utmost detachment and politeness, in a way where many readers (who never encountered the earlier article) will not even realize that the references to it motivated your article. The flawed article isn't enormously important, right? I'd even argue that you should primarily be comparing your new work to the best available results of others, unless certain flaws propagate through most prior work and your own work stands out (in your eyes) by being more sophisticated and free of them.
The reason some high-profile journals pay more attention to "letters to the editor" than solid "average" journals do is that if the primary article is widely seen as a breakthrough, its satellites might still be publication-worthy, provided the critique is deep, or adds a relevant multi-disciplinary angle, and its points are really good.
answered Nov 25 '18 at 14:48
Jirka Hanika
I disagree with your assumption that this does not happen, although I suppose each discipline may be different.
What does happen is that criticism is often presented in a very diplomatic, courteous, and professional way, such that it may not appear as criticism to the "untrained" eye (but a 'trained' person looking for the right paper will pick up on it).
For example, you may find yourself writing a paper which corrects a pretty obvious (in your opinion) and damning flaw in a proposed method. It may be tempting to write "this paper had a serious methodological flaw which invalidates the results and should never have been published; here we show you how to do things properly". But this would be a highly inappropriate way to do it, and a good PhD supervisor will probably ask you (possibly to your dislike) to rephrase it as something like "We build on the novel methods proposed by X et al., but additionally consider adding Y. Adding Y is justified because adjusting for Z would likely lead to better results, for the following reasons, etc."
Note that the two are effectively saying the same thing. But to the untrained eye, the first reads like "wow, the first paper was bad, and we're exposing them", whereas the second reads "ok, the first paper made a significant step in the right direction, and we found ways to improve on it". Which is kinder, and actually closer to the truth, in most cases.
Some might go to one extreme and call this politics; others might go to the other and call it professionalism. It's probably a little bit of both. But I would agree that the latter does make for a much more welcoming environment; if we wanted flamewars, there will always be YouTube for that.
Additionally, don't forget that (hopefully) these papers have been peer reviewed. Reviewers don't always have to be experts in the particular area of the paper, just 'reasonably familiar' with it to offer a useful, educated review. What is an 'obvious methodological flaw' for someone who is in the process of becoming an expert in the field may not be so obvious for someone who is otherwise knowledgeable and interested in the paper, but not necessarily an expert in that field.
So before you go out all guns blazing to criticise someone in a paper for having 'obvious' flaws, do keep in mind you may end up sounding like the nerdy guy who argues that "the TX5800 calculator's third bolt under the second shelving unit is actually 5.49 mm, NOT 5.5 mm, and how stupid can someone be for getting that wrong; all you had to do was disassemble the flux combobulator and measure the voltage, even a baby can do that".
You do not want to be that guy.
I hate to break it to you, but the TX5800 doesn't have a flux combobulator, just how ignorant are you? Not only that it isn't a calculator, the two most likely potential products are the Timex TX5800 Digital Photo Frame with Temperature, Alarm Clock and Calendar or the Texwipe TexWrite MP-10 TX5800. Texas Instruments uses TI, not TX. Check ya'self, before you wreck ya'self. :P
– ttbek
Nov 26 '18 at 23:33
bwahahah. I'm sorry I'm sorry I'm sorry! xD ... dammit, I just realised, I just missed an amazing opportunity to use TIS-100 instead!
– Tasos Papastylianou
Nov 27 '18 at 6:58
answered Nov 26 '18 at 10:34
Tasos Papastylianou
Other answers discuss good reasons (besides the fair point that such critiques do exist): e.g., good criticism takes time, and the authors may know the specifics better than you do.
Another reason why criticism is usually very toned down: the reviewers for your article are selected based on familiarity with the field, which means they may well have a (favorite) method or approach of their own. So new articles need to strike a balance between saying "the existing methods are insufficient" (which is why we are publishing our novel one) and not being too harsh (which would offend the reviewers).
answered Nov 23 '18 at 15:20
– cheersmate
- Criticism is published, as pointed out by iayork. It may also appear in less formal places like blogs. So one reason you don't see it is that you aren't looking in the right places.
- The best place for criticism is in the peer review process. (That is, peer review should catch bad work before it is published, rendering subsequent criticism unnecessary. However, bad work does sometimes get published, so criticism of published work is appropriate then.) Reviews are not usually published, but some venues do publish reviews. E.g., ICLR and NeurIPS reviews are available with the papers (for ICLR, even the rejected papers have reviews published).
- Constructive criticism is better than simply trashing other people's work. Attacks can backfire -- they make the attacker look unprofessional, and things can degenerate into a back-and-forth flame war. Constructive criticism appears in the form of follow-up work that fixes the shortcomings of prior work. This is more polite and less obvious, so you may not notice it.
- Even when bad work gets published, it may simply be best to ignore it. Engaging in criticism is a messy business and some of the mud may stick to the critic. Criticism usually only appears when the work in question is getting a lot of attention.
"The right place for criticism is in the peer review process." -- is this to say published work ought not be criticised? I don't see why.
– Scientist
Nov 23 '18 at 18:48
@Scientist Of course published work should be criticized! However, peer review should be the first line of defense. Most of the time when published work is being criticized, it's because it should not have been published in the first place. I have edited to emphasize this.
– Thomas
Nov 23 '18 at 18:57
"it's because it should not have been published in the first place" You are assuming the criticism is because of poor quality or (obviously) wrong conclusions. In that case, yes, it shouldn't be published. But there are many cases where different views exist: different models, different explanations, or similar. Or both the authors and reviewers missed something. In those cases review is the worst place to put your criticism (although some try), since it prevents the exchange of ideas and also doesn't get published.
– DSVA
Nov 24 '18 at 4:25
So what would you call it then? Here's a paper clearly criticising the idea of "secondary orbital interactions" pubs.acs.org/doi/abs/10.1021/ar0000152 and here's a response ncbi.nlm.nih.gov/pubmed/17109435 clearly criticising the first paper. "Criticism is the practice of judging the merits and faults of something."
– DSVA
Nov 24 '18 at 5:53
I'm sure all published literature is open for criticism, and must be criticized. Science is built on the correction of flawed assumptions. We're always off the mark and missing important factors. It's a never-ending discussion.
– Scientist
Nov 24 '18 at 12:48
edited Nov 23 '18 at 19:02
answered Nov 23 '18 at 18:18
Thomas
After the second year of my Ph.D. study, I started to learn extensively how to write highly regarded publications. The first thing that was hammered into me was how to describe the assumptions and limitations of the given research, and how to give a framework or scope for it. My first publication described many of these limitations and the scope in detail, but my second and third publications did not, because I had already established these issues in my first one. So it is paramount that you check ALL the citations, especially the ones by the same author: these usually give you the 'background' of the research. In addition, be familiar with 'well known' assumptions and practices as well. They are accepted for a reason.
Second, I have seen an article (peer-reviewed, in a high-ranking journal) that outright attacked earlier published work by another author. Even though the attacking article may be correct, it did not exactly contribute anything to the field. On the other hand, if you develop a novel methodology that addresses the issue present in the offending publication, you can say, "here's my work on this, and I did this," cite that paper, and then say something like "this improves the results because their approach did not address this particular issue."
Third, if you burn bridges and piss off the wrong people by attacking them outright, you will not get very far. It is a fact of life. Learn how to navigate the world with proper conduct and a delicate touch (people ARE sensitive), and stay true to yourself.
answered Nov 25 '18 at 3:02
Dr. Paul Kenneth Shreeman
None of the answers given so far covers the aspects that are really relevant here. Most of them are poorly written and only mention one or two arguments, probably only in an effort to gain reputation. My answer will lay out the arguments in a structured form, with unprecedented completeness and clarity, and with nice formatting. It will be enriched by an example which was not yet given in any other answer, and help the reader to properly and thoroughly understand this complex topic.
Well. Let's see how that plays out.
Seriously: The main arguments have already been given, and can roughly be classified into political/interpersonal ones or methodological/technical ones. I think that the details will vary depending on the subject, but the tags indicate that the question refers to the more technical fields.
The political arguments are mainly that the critique might backfire and might hurt your reputation. Beyond that, people are usually not funded for criticizing others: A paper that only criticizes another will hardly be published, and the publication count is in many cases the only measure of "success in academics". And even if your critique is justified and the paper is technically sound, a harsh critique may simply be deemed "unnecessary", and thus shed a bad light on the author.
The technical arguments are related to the effort that is necessary for a profound and (optionally) "constructive" critique. In order to really identify technical flaws, you need a deep familiarity with the topic. Care has to be taken to eliminate the slightest doubts when criticizing others. This is particularly difficult when you are at the beginning of your career. The situation may be different when you really know the related work inside out and backwards.
Basically every non-trivial approach or insight has limitations or (hidden) assumptions. For a critique to be profound, the results often have to be replicated and the flaws have to be "verified", in that sense. The efforts for this are often prohibitively large. Whether a critique is considered to be constructive then mainly depends on whether you can suggest improvements. This was already summarized nicely in the answer by Jirka Hanika.
Therefore, much of the game that is played in the academic world consists of finding flaws and suggesting improvements: Find a paper that shows how to solve a certain problem with red, green and yellow balloons. Write a paper that points out the "serious limitation" of not considering blue balloons. Show that the same problem can be solved with blue balloons. You got a publication there, and maybe another year of funding.
However, there are papers that plainly criticize others. I'd like to refer to one of my favorite papers here, with the ballsy title "Clustering of Time Series Subsequences is Meaningless":
Given the recent explosion of interest in streaming data and online algorithms, clustering of time series subsequences, extracted via a sliding window, has received much attention. In this work we make a surprising claim. Clustering of time series subsequences is meaningless.
[...]
These results may appear surprising, since they invalidate the claims of a highly referenced paper, and many of the dozens of extensions researchers have proposed ([a list of a dozen publications]).
So this paper basically burned down a whole research branch. Reading it can give you a glimpse of how difficult it is to criticize others in a way that cannot be attacked or questioned on a methodological level. And even though the author himself says that the results are "negative", I think that one of the most useful contributions a scientist can make is to put people back on the right track, instead of participating in a game that is essentially a politically and financially motivated waste of time.
So when you're sure that you can profoundly criticize others: Do it.
+1 for an answer that is also a practical example. It really captures the sound of many research articles' introductions.
– henning
Nov 27 '18 at 19:48
add a comment |
None of the answers given so far covers the aspects that are really relevant here. Most of them are poorly written and only mention one or two arguments, probably only in an effort to gain reputation. My answer will lay out the arguments in a structured form, with unprecedented completeness and clarity, and with nice formatting. It will be enriched by an example which was not yet given in any other answer, and help the reader to properly and thoroughly understand this complex topic.
Well. Let's see how that plays out.
Seriously: The main arguments have already been given, and can roughly be classified into political/interpersonal ones or methodological/technical ones. I think that the details will vary depending on the subject, but the tags indicate that the question refers to the more technical fields.
The political arguments are mainly that the critique might backfire and might hurt your reputation. Beyond that, people are usually not funded for criticizing others: A paper that only criticizes another will hardly be published, and the publication count is in many cases the only measure of "success in academics". And even if your critique is justified and the paper is technically sound, a harsh critique may simply be deemed "unnecessary", and thus shed a bad light on the author.
The technical arguments are related to the efforts that are necessary for a profound and (optionally: ) "constructive" critique. In order to really identify technical flaws, you need a deep familiarity with the topic. Care has to be taken in order to eliminate the slightest doubts when criticizing others. This is particularly difficult when you are at the beginning of your career. The situation may be different when you really know the related work inside out and backwards.
Basically every non-trivial approach or insight has limitations or (hidden) assumptions. For a critique to be profound, the results often have to be replicated and the flaws have to be "verified", in that sense. The efforts for this are often prohibitively large. Whether a critique is considered to be constructive then mainly depends on whether you can suggest improvements. This was already summarized nicely in the answer by Jirka Hanika.
Therefore, much of the game that is played in the academic world consists of finding flaws and suggesting improvements: Find a paper that shows how to solve a certain problem with red, green and yellow balloons. Write a paper that points out the "serious limitation" of not considering blue balloons. Show that the same problem can be solved with blue balloons. You got a publication there, and maybe another year of funding.
However, there are papers that plainly criticize others. I'd like to refer to one of my favorite papers here, with the ballsy title "Clustering of Time Series Subsequences is Meaningless" :
Given the recent explosion of interest in streaming data and online algorithms, clustering of time series subsequences, extracted via a sliding window, has received much attention. In this work we make a surprising claim. Clustering of time series subsequences is meaningless.
[...]
These results may appear surprising, since they invalidate the claims of a highly referenced paper, and many of the dozens of extensions researchers have proposed ([a list of a dozen publications]).
So this paper basically burned down a whole research branch. Reading it can give you a glance at how difficult it is to criticize others in a way that can not be attacked or questioned on a methodological level. And even though the author himself says that the results are "negative", I think that one of the most useful contributions that a scientist can make is to put people back on the right track, instead of participating in the game that is essentially a politically and financially motivated waste of time.
So when you're sure that you can profoundly criticize others: Do it.
1
+1 for an answer that is also a practical example. It really captures the sound of many research articles' introductions.
– henning
Nov 27 '18 at 19:48
add a comment |
None of the answers given so far covers the aspects that are really relevant here. Most of them are poorly written and only mention one or two arguments, probably only in an effort to gain reputation. My answer will lay out the arguments in a structured form, with unprecedented completeness and clarity, and with nice formatting. It will be enriched by an example which was not yet given in any other answer, and help the reader to properly and thoroughly understand this complex topic.
Well. Let's see how that plays out.
None of the answers given so far covers the aspects that are really relevant here. Most of them are poorly written and only mention one or two arguments, probably only in an effort to gain reputation. My answer will lay out the arguments in a structured form, with unprecedented completeness and clarity, and with nice formatting. It will be enriched by an example which was not yet given in any other answer, and help the reader to properly and thoroughly understand this complex topic.
Well. Let's see how that plays out.
Seriously: The main arguments have already been given, and can roughly be classified into political/interpersonal and methodological/technical ones. I think that the details will vary depending on the subject, but the tags indicate that the question refers to the more technical fields.
The political arguments are mainly that the critique might backfire and hurt your reputation. Beyond that, people are usually not funded for criticizing others: a paper that only criticizes another will hardly be published, and publication count is in many cases the only measure of "success in academics". And even if your critique is justified and technically sound, a harsh critique may simply be deemed "unnecessary", and thus cast the author in a bad light.
The technical arguments are related to the effort that is necessary for a profound and, ideally, "constructive" critique. In order to really identify technical flaws, you need deep familiarity with the topic. Care has to be taken to eliminate even the slightest doubts when criticizing others. This is particularly difficult when you are at the beginning of your career. The situation may be different when you really know the related work inside out and backwards.
Basically every non-trivial approach or insight has limitations or (hidden) assumptions. For a critique to be profound, the results often have to be replicated and the flaws have to be "verified", in that sense. The efforts for this are often prohibitively large. Whether a critique is considered to be constructive then mainly depends on whether you can suggest improvements. This was already summarized nicely in the answer by Jirka Hanika.
Therefore, much of the game that is played in the academic world consists of finding flaws and suggesting improvements: Find a paper that shows how to solve a certain problem with red, green and yellow balloons. Write a paper that points out the "serious limitation" of not considering blue balloons. Show that the same problem can be solved with blue balloons. You got a publication there, and maybe another year of funding.
However, there are papers that plainly criticize others. I'd like to refer to one of my favorite papers here, with the ballsy title "Clustering of Time Series Subsequences is Meaningless":
Given the recent explosion of interest in streaming data and online algorithms, clustering of time series subsequences, extracted via a sliding window, has received much attention. In this work we make a surprising claim. Clustering of time series subsequences is meaningless.
[...]
These results may appear surprising, since they invalidate the claims of a highly referenced paper, and many of the dozens of extensions researchers have proposed ([a list of a dozen publications]).
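The phenomenon the abstract refers to is easy to reproduce. Below is a minimal, illustrative sketch (numpy only; the window length of 64, k=3, and the random-walk input are arbitrary choices of mine, not taken from the paper): when you cluster all sliding-window subsequences of a series, the cluster centers tend toward smooth, sine-like shapes even for structureless input.

```python
import numpy as np

def sliding_subsequences(x, w):
    # Extract all length-w subsequences of x via a sliding window (stride 1).
    return np.lib.stride_tricks.sliding_window_view(x, w)

def kmeans(data, k, iters=50, seed=0):
    # Minimal Lloyd's-algorithm k-means; enough to observe the effect.
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each subsequence to its nearest center.
        labels = np.argmin(
            ((data[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        # Recompute centers as the mean of their assigned subsequences.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = data[labels == j].mean(axis=0)
    return centers

rng = np.random.default_rng(42)
x = rng.standard_normal(2000).cumsum()      # a random walk: no repeated motifs
subs = sliding_subsequences(x, 64)
subs = subs - subs.mean(axis=1, keepdims=True)  # demean each subsequence
centers = kmeans(subs, k=3)
# Plotting the rows of `centers` shows smooth, roughly sinusoidal shapes,
# despite the input having no repeated structure at all.
print(centers.shape)  # (3, 64)
```

The cluster centers are averages over many heavily overlapping windows, so any actual structure in the data gets smeared out; this is, in rough terms, the effect the paper analyzes.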
So this paper basically burned down a whole research branch. Reading it can give you a glimpse of how difficult it is to criticize others in a way that cannot be attacked or questioned on a methodological level. And even though the author himself says that the results are "negative", I think that one of the most useful contributions a scientist can make is to put people back on the right track, instead of participating in a game that is essentially a politically and financially motivated waste of time.
So when you're sure that you can profoundly criticize others: Do it.
answered Nov 27 '18 at 19:11
Marco13
40519
1
+1 for an answer that is also a practical example. It really captures the sound of many research articles' introductions.
– henning
Nov 27 '18 at 19:48
protected by eykanal♦ Nov 26 '18 at 13:06
11
@Trilarion not a very good one imo - there's all sorts of reasons one paper could have fewer citations than another, and I think poor quality is reasonably far down the list
– llama
Nov 23 '18 at 15:28
4
Maybe that's field-dependent, but I see that a lot in political science. A major way to bolster the "relevance" of a publication is to situate it within a debate. Bonus points if you criticize "conventional wisdom". So I'm not sure about the premise of this question.
– henning
Nov 23 '18 at 21:03
9
Having written such an article myself: they do exist, but getting the balance between constructive criticism and pure negativity right is tricky. Articles that cite our paper also often uncritically cite the paper we are refuting.
– Konrad Rudolph
Nov 24 '18 at 13:53
8
The complaint about "publish or perish" is naive. "Publish or perish" dates back to the 1920s, and peaked in the 1970s. The notion that this is something new reflects a lack of historic awareness. Criticisms are far more common today than they were in the mythical good old days, which were all about your (white male non-Jewish) friends in the field.
– iayork
Nov 25 '18 at 14:30