[IPOL discuss] Handbook for Reviewers: your feedback required! :)

Jean-Michel Morel morel at cmla.ens-cachan.fr
Sun Mar 11 18:45:59 CET 2012


Dear Daniel and all,

Daniel's suggestion of having a list of questions for IPOL reviewers
is absolutely sound; all journals have one.  The standard list
sent by Nicolas, which I reproduce below, is not quite adequate.

I have put an "R" in front of the items to remove.
I have added some comments, starting with "JM", for items that are
tentatively more in line with the journal.

IPOL is a bit special, since we do not require originality; quite the
contrary!

Could you start from this?

Best,
Jean-Michel
ooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
R* Is this an original work that to your knowledge has not been
   published previously?
R* Is the subject matter appropriate to the scope of the
   journal? (If not, suggest journals that might be more appropriate.)
* Title. Does the title give a clear and accurate description
   of the subject of the paper?
* Abstract and key words. Have the authors provided a concise
   abstract or summary that provides sufficient information on the
   rationale, the procedures followed and the main outcomes and
   conclusions? Have the authors provided appropriate key words?
R* Does the paper make a worthwhile contribution to the state of
   knowledge or does it merely repeat existing information? Does it
   have international relevance?
* Has the author provided an Introduction that describes the
   rationale for the work, indicates familiarity with the ‘state of the
   art’ of the subject, with clear objectives and/or hypotheses which
   are followed up in the sections that follow?
R* If the paper reports on an experiment, was the experimental design
   appropriate?

JM COMMENT: THIS NEXT ONE IS THE MOST IMPORTANT; IT SHOULD BE EXPANDED
TO REQUIRE ADEQUACY OF THE ALGORITHM DESCRIPTION AND CODE!:
* Methods. Are the methods and materials described adequately
   (ie at a level of detail that would enable an informed researcher
   to repeat the investigation, but without excessive details that an
   informed reader would be expected to know)?

JM: Is the algorithmic description in the text sufficient: would a good 
programmer with no particular knowledge of the discipline be able to 
produce code from this description and obtain the same numerical results?

JM: In particular, are all parameters of the algorithm sufficiently 
characterized to guarantee exact reproduction? (See the small
illustrative sketch after this list.)

JM: Is the discussion of the choice of parameters sufficient, and are 
rational arguments given for their choice?

JM: Does the paper describe a published method? If yes, is the 
implementation faithful to the original, or are the changes explained 
and justified?

R* Do any of the methods involve regulated procedures or other
   ethical issues (eg the use of live animals) that require approval
   by an ethical review committee? If so, is there clear evidence that
   standards have been fully met?
R* Is there an adequate description of the methods used for data
   analysis and are the data analysis procedures appropriate for
   the work reported?
R* Are the results clearly set out and the key findings described
   accurately?
R* Has the author interpreted non-significant findings as though
   they were significant?
R* Is the order of presentation consistent with that given in the
   objectives and methods sections?
R* Tables and Figures. Are the tables and figures (if
   applicable) clear, with appropriate statistical significances given?
* Are all the tables and figures (graphs etc) provided
   appropriate, and do they have precise headings that describe exactly
   what they are intended to show?
* Is there any evidence of excessive duplication in presenting
   results in tables and figures?
R* Are figures provided at a resolution that will allow for
   adequate reproduction in the printed version?
* Discussion. Does the discussion follow a clear and focused
   structure? Does it address the objectives as set out in the
   Introduction and consider the findings in relation to appropriate
   literature?
R* If the work has public policy relevance, have the authors indicated
   their familiarity with policy objectives?
* Conclusions. Are the conclusions adequately supported by the
   results as given and the intellectual interpretation that the
   authors have applied to them?
* References. Have the authors made appropriate use of published
   literature and presented the references in a format that is
   compatible with the style required by the journal?
* Spelling, grammar and style. Is the paper written in clear English
   that requires only minor editorial corrections, or is there a
   need for more substantial revisions?
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
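
To make the parameter question concrete (see the JM comment on exact
reproduction above), here is a small, purely hypothetical sketch, not
taken from any submission, of the level of specification I have in
mind; it is written in Python only for brevity. Every quantity that
influences the result (the standard deviation, the kernel truncation,
the boundary handling and the arithmetic precision) is pinned down, so
two independent implementations should agree up to rounding error.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def presmooth(image, sigma=1.6, truncate=4.0, mode="reflect"):
        # sigma    : standard deviation of the Gaussian kernel, in pixels.
        # truncate : the kernel is cut at truncate * sigma, i.e. at four
        #            standard deviations by default.
        # mode     : boundary handling; "reflect" mirrors the image at
        #            its border.
        # All computations are performed in double precision.
        image = np.asarray(image, dtype=np.float64)
        return gaussian_filter(image, sigma=sigma, truncate=truncate,
                               mode=mode)

A paper whose text conveys this level of detail, in words or in
pseudo-code, passes the reproduction question; a paper that only says
"the image is first smoothed" does not.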

Daniel Kondermann wrote:
> Dear Jean-Michel,
> 
> thanks for this feedback! I am well aware of the reviewers' freedom in
> regular journals. I think this is passed on from generation to
> generation very well.
> 
> But I also think that for our special case, where the reviewers' role in
> the technical part is very new, the current reviewers should find a
> common denominator and define a set of questions to ask of each paper.
> 
> Psychologically speaking, I think the so-called "anchor effect"[1] is
> very important to ensure a high-quality journal: the first paper will
> pretty much define what the reviewers can expect from the authors, which
> will in turn affect what authors think they can submit. I would actually
> like to bias this effect towards high quality by being very demanding.
> This will only make sense if this is
> a) wanted by the inventors
> b) supported by the other reviewers
> 
> On the one hand, this can be achieved by simply selecting the "right"
> editors. Due to the lack of experience with this type of journal in our
> field, I would (carefully and naively ;) ) guess that this is at least
> difficult.
> On the other hand, it can be achieved by finding at least a list of
> points which should be addressed by every reviewer, so that all quality
> criteria are evaluated.
> 
> I think your suggestion (two or more types of paper) is a great way to
> emphasize that there are many qualities one could think of during the
> review process. To further help reviewers realize the range of
> qualities one might be looking for, I would like to create a set of
> questions to ask of the paper.
> I think this is commonly done in many other journals, where one is asked
> about technical correctness, quality of writing, level of innovation,
> quality of experiments, and so on.
> 
> So I wonder whether the standard list can or should be extended for our
> special case with the demos and attached software and whether we can
> formalize this process a bit by creating an email template which asks
> this list of questions. The reviewer can then answer these questions or
> choose not to answer them. At least (s)he will be inspired :)
> 
> Best,
> Daniel
> 
> [1]: http://en.wikipedia.org/wiki/Anchoring
> 
> On 20.02.2012 at 22:11, Jean-Michel Morel wrote:
>> Dear Daniel,
>>
>> I am rather opposed to piling up a long list of requirements and making
>> them official rules for the journal. First of all, no journal whatsoever
>> does so. All journals trust authors, referees and editors to play a fair
>> game, whose rules may actually be different for each paper.
>>
>> Indeed, each paper fixes its own rules, because it fixes its own
>> claims. If a paper claims that it implements, say, the Mumford-Shah
>> minimization, the referees are entitled to check that it is the real
>> Mumford-Shah minimization. If the paper claims that it implements its
>> own brand of the same minimization, the referees can ask the author to
>> compare to other brands and justify the choices.
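>>
>> (For concreteness, a standard form of that functional, up to the
>> placement of the weights, is
>>
>>    E(u, K) = \int_{\Omega \setminus K} |\nabla u|^2 \,dx
>>              + \lambda \int_{\Omega} (u - g)^2 \,dx + \mu H^1(K),
>>
>> with g the observed image, u the piecewise smooth approximation, K the
>> discontinuity set and H^1(K) its length. The referee's job is then to
>> check that the implemented energy really is this one, or a documented
>> variant of it.)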
>>
>> If a referee feels that a paper on a basic method should be readable
>> by master's students, he may require the authors to give all details of
>> the derivations. But if it is an advanced research paper, not all
>> derivations are required. In short, the evaluation game must be left as
>> free as possible. Which is also to say that the referees have almost all
>> rights, as we observe in good journals, and can therefore impose their
>> own rules on each paper, based on the claims of the paper.
>>
>> Thus, I suggest that you rather think in terms of defining certain
>> types of papers that are encouraged at IPOL, and describe them in as
>> short a form as possible.
>>
>> For example two sorts of paper you seem to have in mind might be:
>>
>> "Introductory papers on classic methods, motivating the method,
>> reasoning on the underlying assumptions on images, justifying the
>> parameter choices, proposing a neat implementation, and discussing the
>> most illustrative examples and counterexamples, flaws and successes of
>> the method". (Here it is the pedagogic aspect that dominates)
>>
>> and:
>>
>> "Implementations of state of the art methods on a given problem, as
>> faithful as possible to the original paper proposing them, and giving
>> the community a benchmark implementation it can refer to when comparing
>> to other methods". (Here it is the faithfulness to the original paper
>> that matters)
>>
>>
>>
>> Best,
>> Jean-Michel
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> Daniel Kondermann wrote:
>>> Hi!
>>>
>>> My main problem during review is currently to understand the aim of the
>>> article - e.g., which audience should be able to understand it?
>>>
>>> From my point of view, it would be great if the audience were master's
>>> students. Therefore, the authors need to either explain all theoretical
>>> derivations as in a tutorial or cite documents which do this job. In
>>> the case of theoretically involved material such as graphical models,
>>> textbook pointers should be a minimum requirement.
>>>
>>> My next point is that, in my opinion, each implementation choice needs
>>> to be thoroughly motivated and/or discussed: assume you want to
>>> interpolate image pixels. Why use linear/bicubic/spline/sinc
>>> interpolation for this specific case? "Our experiments showed..." just
>>> means "we have no clue and we actually don't care (here)!". One answer
>>> might simply be: the authors of the original paper chose it this way,
>>> but they did not explain why. This could make up another minimum
>>> requirement which is special to IPOL. It would help to identify
>>> unexamined choices in existing papers.
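>>>
>>> To make this concrete, here is a purely hypothetical sketch (invented
>>> for illustration, not from any IPOL submission) of what a motivated
>>> choice can look like directly in the code, written with SciPy:
>>>
>>>     import numpy as np
>>>     from scipy.ndimage import map_coordinates
>>>
>>>     def warp(image, coords):
>>>         # Bicubic interpolation (order=3): the reference paper resamples
>>>         # with a smooth kernel, bilinear (order=1) would blur thin
>>>         # structures, and sinc interpolation was rejected because of
>>>         # ringing near the image borders. "mirror" boundary handling
>>>         # avoids introducing a dark frame around the domain.
>>>         image = np.asarray(image, dtype=np.float64)
>>>         return map_coordinates(image, coords, order=3, mode="mirror")
>>>
>>> The point is not this particular choice, but that the reason is written
>>> down where the choice is made, so a reviewer can agree or object to it.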
>>>
>>> Finally, it would be great to carefully list the assumptions an
>>> algorithm makes. Usually, this can be done statistically by giving prior
>>> distributions and independence assumptions. This is a difficult task, as
>>> most publications make their assumptions implicitly, sometimes without
>>> the authors even realizing it, especially when they are not formulated
>>> in a statistical framework. A great example of a clear motivation with
>>> assumptions in a somewhat heuristic paper is:
>>> http://gfx.cs.princeton.edu/pubs/Barnes_2009_PAR/
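>>>
>>> A minimal sketch of what such an explicit statement could look like,
>>> for a hypothetical denoising method (not any particular paper):
>>> minimizing
>>>
>>>    E(u) = \sum_i \frac{(u_i - g_i)^2}{2\sigma^2} + \lambda R(u)
>>>
>>> is the MAP estimate under the assumptions that the noise is additive,
>>> Gaussian with variance \sigma^2, and independent from pixel to pixel,
>>> and that the prior on the image is p(u) \propto \exp(-\lambda R(u)).
>>> Writing these three facts out is what I mean by giving the prior
>>> distributions and independence assumptions.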
>>>
>>>
>>> I think the first step in creating such a Handbook for Reviewers is to
>>> loosely collect all thoughts on this mailing list. I volunteer to
>>> moderate the discussion and then to organize the information into a
>>> keyword list and a rough document structure. Finally, we can jointly
>>> create the actual document.
>>> As the ECCV deadline is drawing near, I have to take care not to do too
>>> much, but I would guess that we should discuss this topic now (any
>>> reviewers with their experiences and thoughts here?) and aim to create
>>> a first draft in about a month or so.
>>>
>>> Cheers,
>>> Daniel
>>>
>>> On 17.02.2012 at 11:13, Nicolas Limare wrote:
>>>> Hi,
>>>>
>>>>> I just noticed there is a IEEE Standard for Software Reviews [1].
>>>>> [1]: http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=5362
>>>>> (Google "1028-1997, IEEE Standard for Software Reviews" for PDF)
>>>> I didn't know of this standard. I found a revised version, named
>>>> IEEE Standard 1028-2008. Look for "1028-2008 BASKENT" in Google for a
>>>> good quality PDF.
>>>>
>>>> I quickly read it (I quote some passages at the end of this
>>>> message). It describes review teams, procedures, meetings,
>>>> documentation and reports, but not how reviewers should perform their
>>>> task. So, as a whole, I think this standard is not suited to the
>>>> peer review performed on software in a research journal.
>>>>
>>>> For the moment, we have no "Reviewer Handbook". Such a document would
>>>> be helpful because reviewers are not used to this kind of task, and
>>>> some of them do not really know what IPOL expects and how to conduct
>>>> their review. The "Software Guidelines" can help them, but they are
>>>> not sufficient.
>>>>
>>>> I suggest that after a few reviews, veteran IPOL reviewers write a
>>>> short "Reviewer Handbook" to guide the new ones. This could include a
>>>> template for the report, with checklists and so on. This handbook
>>>> would be offered to the reviewers, but would not be mandatory. Daniel,
>>>> you are welcome to propose a first draft :)
>>>>
>>>> 8<----------8<----------8<----------8<----------8<----------8<----------
>>>>
>>>> I found these interesting passages in the standard:
>>>>
>>>> * It defines 5 categories of reviews: management reviews, technical
>>>>   reviews, inspections, walk-throughs, audits. IPOL reviews may belong
>>>>   to the audit category:
>>>>       «An independent examination of a software product [...] to assess
>>>>       compliance with specifications [and resulting in] a clear
>>>>       indication of whether the audit criteria have been met.»
>>>>   Technical reviews, inspections and walk-throughs also match to some
>>>>   extent, but IPOL reviews are clearly not management reviews.
>>>>
>>>> * Typical inspection rate is between 100 and 200 lines of code per
>>>>   hour for source code reviews; we have to keep that in mind when we
>>>>   ask for large codes to be reviewed.
>>>>
>>>> * Software anomalies in technical reviews can be ranked as
>>>>   catastrophic, critical, marginal or negligible.
>>>>
>>>> * The output of a technical review can be
>>>>   a) Accept with no verification or with rework verification. The
>>>>      software product is accepted as is or with only minor rework (for
>>>>      example, that would require no further verification).
>>>>   b) Accept with rework verification. The software product is to be
>>>>      accepted after the inspection leader or a designated member of
>>>>      the inspection team (other than the author) verifies rework.
>>>>   c) Reinspect. The software product cannot be accepted. Once
>>>>      anomalies have been resolved a reinspection should be scheduled
>>>>      to verify rework. At a minimum, a reinspection shall examine the
>>>>      software product areas changed to resolve anomalies identified in
>>>>      the last inspection, as well as side effects of those changes.
>>>>
>>>>
>>>>
>>>>