
MEASURE FOR MEASURE

In Review

The ins and outs of reviewing papers for technical conferences

by Christopher L. Grachanen

Did you ever wonder how papers receive the green light to be presented at a conference? How are the theories and information that authors put forth vetted? What influences the decisions on what exactly gets presented on a particular topic?

Recently, I was asked to become a technical reviewer for papers submitted to an upcoming technical conference, often heralded as one of the industry’s premier technical exchanges for new and emerging computing technologies. On several occasions, I have reviewed submitted conference papers in my field of metrology, but mostly from the standpoint of technical accuracy; that is, ensuring equations are correct or diagrams correctly depict measurement setups.

This recent invitation was the first time I was involved in reviewing papers for a conference that was not specifically tied to my core competencies.

As a prerequisite to receiving papers for review, this technical conference’s overseers required all reviewers to attend a web training session covering the criteria for evaluating papers and a reviewer’s responsibilities. According to the general conference guidelines, papers submitted:

  • Should describe substantive technical work.
  • Should be substantiated (that is, supported by documented evidence).
  • May give insight into results that are not obvious.
  • May offer insight into high-risk, high-return areas.
  • Should not include business or marketing proposals.
  • Should not include intellectual property unauthorized for public disclosure.
  • Should not focus on subject specifics that have been previously published.
  • Should not purposely discredit the work of others in the field.
  • Should not in any way plagiarize the work of others in the field.

In addition, a submitted paper’s subject matter should align with the objectives of the conference in terms of:

  • Being accessible and interesting to a broad technical audience.
  • Promoting the value of collaboration.
  • Containing ideas that have the potential to benefit conference attendees.
  • Revealing important lessons learned from technical work or problem solving.
  • Encouraging interaction and feedback from conference attendees.

The web training session also covered each evaluation criterion, requiring reviewers to grade each submitted paper on a descending scale from most favorable (maximum reviewer concurrence) to least favorable (minimum reviewer concurrence). The following are some of the criteria used for evaluating submitted papers:

  • Business impact—The paper’s subject matter should be linked to real business objectives, not to speculative situations that are not realistic, viable or financially feasible. The subject matter should show a logical linkage between the technical work and the purposeful pursuit of clear business objectives.
  • Technical innovation—The paper’s subject matter should highlight technical work that goes beyond routine research and development to demonstrate real technical innovation. Technical innovations are defined as advances in the state of the art in science and technology, as well as solutions to problems acknowledged as challenging by technical peers.
  • Clarity—Submitted papers should be easily readable by a broad technical audience. Content should be well structured and to the point, avoiding repetition, as well as acronyms and vocabulary that are not commonly used.

Reviewing the reviewer

On the topic of reviewer responsibilities, the web training session focused on two requirements I thought were rather novel. The first called for an honest self-evaluation by each reviewer of his or her own technical prowess on the subject matter being evaluated.

Given the virtual cornucopia of technical subjects covered at the conference, its overseers deemed it prudent and necessary to have each reviewer score his or her own technical knowledge of each paper’s subject matter, providing a qualitative weighting of each evaluation. The overseers obviously recognized:

  • Nobody can be a technical expert on all conference areas.
  • Conference paper reviewers are typically in short supply (that is, volunteer activity with limited participation).

A reviewer can and will be assigned a paper that he or she can evaluate only from the standpoint of a generalist. Because hundreds of papers are typically submitted to the conference (often more than 1,000) and only those receiving high reviewer evaluation scores are accepted for presentation (typically less than 5%), a qualitative ranking of a reviewer’s technical expertise on the subject matter he or she is evaluating is a way of flagging evaluations that warrant additional consideration. In this way, papers that receive low evaluation scores essentially because of the reviewer’s lack of expertise in, or appreciation of, the subject matter can be reevaluated. The same holds true for evaluations scored too high because the subject matter was overly appreciated.
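
To make the flagging idea concrete, the following is a minimal sketch, in Python, of how such a rule might work. It is purely hypothetical; the conference’s actual scoring system, scales and thresholds were not disclosed, so the names and numbers below are assumptions for illustration only.

    # Hypothetical sketch only: flag paper evaluations made by a reviewer
    # who rated his or her own subject-matter expertise low, so those
    # papers can be routed for a second look.
    from dataclasses import dataclass

    @dataclass
    class Evaluation:
        paper_id: str
        paper_score: float  # reviewer's rating of the paper (assumed 1-5 scale)
        expertise: float    # reviewer's self-assessed expertise (assumed 1-5 scale)

    def needs_second_look(ev: Evaluation, expertise_floor: float = 2.0) -> bool:
        # Flag any evaluation made by a generalist reviewer, regardless of
        # whether the paper was scored unusually low or unusually high.
        return ev.expertise <= expertise_floor

    reviews = [
        Evaluation("paper-001", paper_score=2.0, expertise=1.5),
        Evaluation("paper-002", paper_score=4.5, expertise=4.0),
    ]

    for ev in reviews:
        if needs_second_look(ev):
            print(f"{ev.paper_id}: route for reevaluation (reviewer expertise {ev.expertise})")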

The second reviewer requirement relayed in the web training session was to provide constructive criticism, in two to four paragraphs, of each submitted paper to its authors. It was emphasized that authors commit many hours to preparing and writing their papers, and reviewer comments are often the only feedback authors receive for papers not accepted for conference presentation. According to the general guidelines, reviewer comments and feedback should be:

  • Honest and without bias.
  • Constructive.
  • Positive, even in addressing shortcomings.
  • Clear and precise.

At the conclusion of the web training session, I had a good understanding of what it takes to credibly evaluate submitted conference papers outside my core competency, as well as a real appreciation for what it takes to create quality conference content.

Because quality assessment can be somewhat subjective when it comes to reviewing papers, I believe the obligatory reviewer training was most beneficial in helping to identify and explain criteria for evaluating papers congruent with conference objectives.

Given the up-front effort that goes into accepting papers, it is no wonder the conference for which I will be evaluating papers has a reputation for being cutting edge, with superlative technical content. QP


Christopher L. Grachanen is a distinguished technologist and operations manager at Hewlett-Packard Co. in Houston. He earned an MBA from Regis University in Denver. Grachanen is a co-author of The Metrology Handbook (ASQ Quality Press, 2012), an ASQ fellow, an ASQ-certified calibration technician and the treasurer of the Measurement Quality Division.

