[IPOL discuss] Handbook for Reviewers: your feedback required! :)

Daniel Kondermann daniel.kondermann at iwr.uni-heidelberg.de
Thu Feb 23 14:37:55 CET 2012


Dear Jean-Michel,

thanks for this feedback! I am well aware of the freedom reviewers
have in regular journals; I think that tradition is passed on very
well from generation to generation.

But I also think that for our special case, where the reviewers' role
for the technical part is very new, the current reviewers should find
a common denominator and define some questions to ask of the paper.

Psychologically speaking, I think the so-called "anchoring effect"[1]
is very important for ensuring a high-quality journal: the first paper
will pretty much define what the reviewers can expect from the
authors, which will in turn affect what authors think they can submit.
I would actually like to bias this effect towards high quality by
being very demanding.
This will only make sense if this is
a) wanted by the inventors
b) supported by the other reviewers

On the one hand, this can be achieved by simply selecting the "right"
editors. Given the lack of experience with this type of journal in our
field, I would (carefully and naively ;) ) guess that this is
difficult at best.
On the other hand, it can be achieved by agreeing on at least a list
of points which every reviewer should address, so that all quality
criteria are evaluated.

I think your suggestion (two or more types of paper) is a great way to
emphasize that there are many qualities one could look for during the
review process. To further help reviewers realize the range of
qualities they might be searching for, I would like to create a set of
questions to ask of the paper.
I think this is common practice in many other journals, where one is
asked about technical correctness, quality of writing, level of
innovation, quality of experiments, and so on.

So I wonder whether the standard list can or should be extended for
our special case with the demos and attached software, and whether we
can formalize this process a bit by creating an email template which
asks this list of questions. The reviewer can then answer these
questions or choose not to; at the very least, (s)he will be
inspired :)
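
To make this concrete, here is a minimal sketch of such a template;
the questions are only a first guess, distilled from this thread, and
open for discussion:

    Dear Reviewer,

    besides your overall assessment, please consider the following
    questions; feel free to skip those you find irrelevant.

    1. Correctness: does the code implement the described algorithm,
       and are the derivations technically sound?
    2. Clarity: could a master student follow the paper, or are
       suitable references cited for the involved parts?
    3. Motivation: is each implementation choice (e.g. the
       interpolation scheme) motivated, discussed, or at least
       attributed to the original paper?
    4. Assumptions: are the assumptions of the algorithm stated
       explicitly?
    5. Experiments: do the examples and counterexamples illustrate
       both the flaws and the successes of the method?
    6. Demo and software: do the demo and the attached source code
       match the paper and follow the Software Guidelines?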

Best,
Daniel

[1]: http://en.wikipedia.org/wiki/Anchoring

On 20.02.2012 22:11, Jean-Michel Morel wrote:
> Dear Daniel,
> 
> I am rather opposed to piling up a long list of requirements and
> making them official rules for any journal. First of all, no journal
> whatsoever does so. All journals trust authors, referees and editors
> to play a fair game, whose rules may actually be different for each
> paper.
> 
> Indeed, each paper fixes its own rules, because it fixes its own
> claims. If a paper claims that it implements, say, the Mumford-Shah
> minimization, the referees are entitled to check that it is the real
> Mumford-Shah minimization. If the paper claims that it implements its
> own brand of the same minimization, the referees can ask the authors
> to compare it to other brands and justify the choices.
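> 
> For reference, the classical functional reads, in one common notation
> (weights and normalizations vary between papers),
> 
>   E(u, K) = \int_{\Omega \setminus K} |\nabla u|^2 \, dx
>             + \alpha \int_{\Omega} (u - g)^2 \, dx
>             + \beta \, \mathcal{H}^1(K),
> 
> minimized over piecewise smooth images u and edge sets K, where g is
> the input image; a referee can then check that the implemented energy
> is exactly this one.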
> 
> If a referee feels that a paper on a basic method should be readable
> by master students, he may require the authors to give all details of
> the derivations. But if it is an advanced research paper, not all
> derivations are required. In short, the evaluation game must be left
> as free as possible. Which is also to say that the referees have
> almost all rights, as we observe in good journals, and can therefore
> impose their own rules on each paper, based on the claims of the
> paper.
> 
> Thus, I suggest that you rather think in terms of defining certain
> types of papers that are encouraged at IPOL, and describe them as
> briefly as possible.
> 
> For example, two sorts of paper you seem to have in mind might be:
> 
> "Introductory papers on classic methods, motivating the method,
> reasoning on the underlying assumptions on images, justifying the
> parameter choices, proposing a neat implementation, and discussing the
> most illustrative examples and counterexamples, flaws and successes of
> the method". (Here it is the pedagogic aspect that dominates)
> 
> and:
> 
> "Implementations of state of the art methods on a given problem, as
> faithful as possible to the original paper proposing them, and giving
> the community a benchmark implementation it can refer to when comparing
> to other methods". (Here it is the faithfulness to the original paper
> that matters)
> 
> 
> 
> Best,
> Jean-Michel
> 
> Daniel Kondermann wrote:
>> Hi!
>>
>> Currently, my main problem during a review is to understand the aim
>> of the article - e.g., which audience should be able to understand
>> it?
>>
>> From my point of view it would be great if the audience were master
>> students. Therefore, the authors need to either explain all
>> theoretical derivations as in a tutorial or cite documents which do
>> this job. In the case of theoretically involved material such as
>> graphical models, textbook pointers should be a minimum requirement.
>>
>> My next point is that, in my opinion, each implementation choice
>> needs to be thoroughly motivated and/or discussed: assume you want
>> to interpolate image pixels. Why use linear, bicubic, spline or sinc
>> interpolation in this specific case? "Our experiments showed..."
>> just means "we have no clue and we actually don't care (here)!". One
>> answer might simply be: the authors of the original paper chose it
>> this way, but they did not explain why. This could make up another
>> minimum requirement which is special to IPOL. It would help to
>> identify unexamined choices in existing papers.
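>>
>> As a hypothetical sketch (the function name and image layout are
>> made up, not taken from any IPOL code), documenting such a choice in
>> the source itself could look like this in C:
>>
>>   #include <math.h>
>>
>>   /* Interpolation choice: bilinear, because the original paper uses
>>    * it; the original authors give no justification, so a comparison
>>    * with bicubic is deferred to the experiments section. */
>>   static float interp_bilinear(const float *img, int w, int h,
>>                                float x, float y)
>>   {
>>       /* clamp the sample point to the image domain */
>>       if (x < 0) x = 0;
>>       if (y < 0) y = 0;
>>       if (x > w - 1) x = (float) (w - 1);
>>       if (y > h - 1) y = (float) (h - 1);
>>
>>       int x0 = (int) floorf(x), y0 = (int) floorf(y);
>>       /* right and bottom neighbors, clamped at the border */
>>       int x1 = (x0 + 1 < w) ? x0 + 1 : x0;
>>       int y1 = (y0 + 1 < h) ? y0 + 1 : y0;
>>       float fx = x - (float) x0, fy = y - (float) y0;
>>
>>       /* weighted average of the four neighboring pixels */
>>       return (1 - fy) * ((1 - fx) * img[y0 * w + x0]
>>                          + fx * img[y0 * w + x1])
>>              + fy * ((1 - fx) * img[y1 * w + x0]
>>                      + fx * img[y1 * w + x1]);
>>   }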
>>
>> Finally, it would be great to carefully list the assumptions an
>> algorithm makes. Usually, this can be done statistically by giving
>> prior distributions and independence assumptions. This is a
>> difficult task, as most publications make their assumptions
>> implicitly, sometimes without the authors even being aware of them,
>> especially when they are not formulated in a statistical framework.
>> A great example of a clear motivation with assumptions in a somewhat
>> heuristic paper is:
>> http://gfx.cs.princeton.edu/pubs/Barnes_2009_PAR/
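>>
>> As a toy illustration (a generic denoising model, not the paper
>> above): stating the assumptions explicitly could mean writing that
>> the observation is g = u + n with n i.i.d. Gaussian,
>> n_i ~ N(0, \sigma^2) and independent of u, and that the prior favors
>> sparse gradients, p(u) \propto \exp(-\lambda TV(u)). The maximum a
>> posteriori estimate is then, in LaTeX notation,
>>
>>   \hat{u} = \arg\min_u \frac{1}{2 \sigma^2} \| g - u \|_2^2
>>             + \lambda \, \mathrm{TV}(u),
>>
>> so that every modeling assumption is visible in the energy.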
>>
>>
>> I think the first step towards such a Handbook for Reviewers is to
>> loosely collect all thoughts on this mailing list. I volunteer to
>> moderate the discussion and, in the end, to organize the information
>> into a keyword list and a rough document structure. Then we can
>> jointly create the actual document.
>> As the ECCV deadline is drawing near, I have to take care not to
>> take on too much, but I would suggest that we discuss this topic now
>> (any reviewers with their experiences and thoughts here?) and aim to
>> create a first draft in about a month or so.
>>
>> Cheers,
>> Daniel
>>
>> On 17.02.2012 11:13, Nicolas Limare wrote:
>>> Hi,
>>>
>>>> I just noticed there is an IEEE Standard for Software Reviews [1].
>>>> [1]: http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=5362
>>>> (Google "1028-1997, IEEE Standard for Software Reviews" for PDF)
>>> I didn't know of this standard. I found a revised version, IEEE
>>> Standard 1028-2008. Look for "1028-2008 BASKENT" in Google for a
>>> good-quality PDF.
>>>
>>> I read it quickly (I quote some passages at the end of this
>>> message). It describes review teams, procedures, meetings,
>>> documentation and reports, but not how reviewers should perform
>>> their task. So, as a whole, I think this standard is not suited to
>>> the peer reviews performed on software in a research journal.
>>>
>>> For the moment, we have no "Reviewer Handbook". Such a document
>>> would be helpful because reviewers are not used to this kind of
>>> task, and some of them do not really know what IPOL expects or how
>>> to conduct their review. The "Software Guidelines" can help them,
>>> but they are not sufficient.
>>>
>>> I suggest that after a few reviews, veteran IPOL reviewers write a
>>> short "Reviewer Handbook" to guide the new ones. This could include a
>>> template for the report, with checklists and so on. This handbook
>>> would be proposed to the reviewers, but not mandatory. Daniel, you are
>>> welcome to propose a first draft :)
>>>
>>> 8<----------8<----------8<----------8<----------8<----------8<----------
>>>
>>> I found these interesting passages in the standard:
>>>
>>> * It defines 5 categories of reviews: management reviews, technical
>>>   reviews, inspections, walk-throughs, audits. IPOL reviews may belong
>>>   to the audit category:
>>>       «An independent examination of a software product [...] to assess
>>>       compliance with specifications [and resulting in] a clear
>>>       indication of whether the audit criteria have been met.»
>>>   Technical reviews, inspections and walk-throughs also match to
>>>   some extent, but IPOL reviews are clearly not management reviews.
>>>
>>> * The typical inspection rate is between 100 and 200 lines of code
>>>   per hour for source code reviews; we have to keep that in mind
>>>   when we ask for large codes to be reviewed (a 3000-line code
>>>   would already take 15 to 30 hours).
>>>
>>> * Software anomalies in technical reviews can be ranked as
>>>   catastrophic, critical, marginal or negligible.
>>>
>>> * The output of a technical review can be
>>>   a) Accept with no or minor rework. The software product is
>>>      accepted as is or with only minor rework (for example, rework
>>>      that would require no further verification).
>>>   b) Accept with rework verification. The software product is to be
>>>      accepted after the inspection leader or a designated member of
>>>      the inspection team (other than the author) verifies rework.
>>>   c) Reinspect. The software product cannot be accepted. Once
>>>      anomalies have been resolved a reinspection should be scheduled
>>>      to verify rework. At a minimum, a reinspection shall examine the
>>>      software product areas changed to resolve anomalies identified in
>>>      the last inspection, as well as side effects of those changes.