Review Process

BUCLD Review Process

  1. We send each abstract to 5 reviewers.
  2. Reviewers rate each abstract independently on a scale of 1-7 (double-blind procedure), and optionally submit comments for the authors.
  3. We calculate two scores for each abstract: (a) a mean raw score and (b) a mean z score.
  4. We rank each abstract by raw score and z score, and calculate a composite rank.
  5. We select 72 abstracts to be presented as papers, 12 as alternates, and 72 as posters.
  6. Acceptance rates for recent BUCLDs

ASSIGNMENT OF ABSTRACTS TO REVIEWERS

Abstracts are individually assigned to reviewers by the BUCLD faculty advisors, with the help of an automated computer program, on the basis of information indicated by authors and reviewers. The submitting author of each abstract selects codes for the content area of the abstract, the types of learners represented, and the languages studied. Each reviewer similarly indicates his/her expertise in content areas, types of learners, and languages. Also taken into account are the following criteria:

  1. Ensure that the reviewer is sufficiently familiar with the content of the abstract.
  2. Ensure that the reviewer is not unfriendly to the theoretical perspective of the abstract.
  3. Don’t assign abstracts to reviewers who are colleagues, students, advisors, close friends, or enemies of the authors (insofar as we know this).
  4. Each reviewer gets between 7 and 20 abstracts.
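For illustration only, here is a minimal sketch of how such an assignment could be automated. The data structures (a set of codes per abstract and per reviewer, an explicit conflict list) and the greedy matching strategy are assumptions made for the example; the actual program used by the faculty advisors is not described here.

# Hypothetical greedy assignment of abstracts to reviewers.
# Assumes each abstract and reviewer is described by a set of
# content/learner/language codes, plus a list of known conflicts.

MAX_LOAD = 20            # upper bound on abstracts per reviewer
REVIEWS_PER_ABSTRACT = 5

def assign(abstracts, reviewers, conflicts):
    """abstracts, reviewers: dicts mapping id -> set of codes.
    conflicts: set of (abstract_id, reviewer_id) pairs to avoid."""
    load = {r_id: 0 for r_id in reviewers}
    assignment = {}
    for a_id, a_codes in abstracts.items():
        # Eligible reviewers: no conflict of interest, not yet at full load.
        eligible = [
            r_id for r_id in reviewers
            if (a_id, r_id) not in conflicts and load[r_id] < MAX_LOAD
        ]
        # Prefer reviewers whose expertise codes overlap most with the abstract.
        eligible.sort(key=lambda r_id: len(a_codes & reviewers[r_id]),
                      reverse=True)
        chosen = eligible[:REVIEWS_PER_ABSTRACT]
        for r_id in chosen:
            load[r_id] += 1
        assignment[a_id] = chosen
    return assignment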

2. Reviewers rate each abstract independently on a scale of 1-7 (double-blind procedure), and optionally submit comments for the authors.

RATING GUIDELINES

Reviewers are asked to use the following criteria, as appropriate, for the abstracts they evaluate. Note that not all criteria will apply equally well to each abstract.

  1. Is the question or issue clearly stated?
  2. Is the significance of the work clearly stated? Is relevant previous work appropriately cited?
  3. If relevant, are the method, data collection, and analysis procedures well-designed and appropriate to the question addressed?
  4. Is the conceptual framework coherent? If relevant, is the theoretical analysis well-argued?
  5. Is the work original? Does it present new data (if relevant), particularly from less-studied languages?
  6. Is the work completed, or does it show very strong promise of being completed in time for the conference?
  7. Are the conclusions justified in relation to the data and/or analyses?
  8. Is the abstract written clearly and organized well?
  9. Is the topic of scientific, methodological or theoretical importance?
  10. Is the paper timely in terms of current issues of interest in the field of language development?
  11. Is the paper likely to be of interest to a reasonable number of attendees at BUCLD?

3. We calculate two scores for each abstract:
a. Mean raw score     b. Mean z score

RAW SCORE

Definition: Score out of 7 from a reviewer.
Assumption: Every reviewer’s use of a particular score category is equivalent.
Problem: May be misleading if a reviewer is particularly lenient or stringent in their ratings.

Z SCORE

Definition: Standard score indicating how far, and in what direction, a given raw score deviates from the mean of all the raw scores assigned by a given reviewer.
Assumption: Every reviewer’s use of a particular score category may NOT be equivalent. Some reviewers may be more demanding or lenient than others, or may use a restricted range.
Problem: May be misleading if a reviewer receives a set of unusually excellent or unusually terrible papers. (The z score effectively forces every reviewer's ratings to have the same mean and spread, regardless of the actual quality of the abstracts that reviewer happened to receive.)
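For illustration, a rating's z score is the raw score minus that reviewer's mean rating, divided by that reviewer's standard deviation. A minimal sketch, assuming the ratings are grouped by reviewer (the variable names are ours, not the actual system's):

from statistics import mean, pstdev

def z_scores(ratings_by_reviewer):
    """ratings_by_reviewer: dict mapping reviewer -> {abstract_id: raw score}.
    Returns dict mapping reviewer -> {abstract_id: z score}."""
    z = {}
    for reviewer, ratings in ratings_by_reviewer.items():
        mu = mean(ratings.values())
        sigma = pstdev(ratings.values()) or 1.0   # guard against zero spread
        z[reviewer] = {a_id: (score - mu) / sigma
                       for a_id, score in ratings.items()}
    return z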

4. We rank each abstract by raw score and z score, and calculate a composite rank.

SAMPLE ABSTRACT RANKING DATABASE
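The sample database itself is not reproduced here. As a stand-in, the sketch below shows how a composite rank could be derived from the two mean scores, assuming the composite is simply the average of the two rank positions (the actual formula may differ):

def composite_ranks(mean_raw, mean_z):
    """mean_raw, mean_z: dicts mapping abstract_id -> mean score.
    Returns abstract ids ordered best-first by the average of their
    raw-score rank and z-score rank (an assumed composite formula)."""
    raw_rank = {a: i for i, a in enumerate(
        sorted(mean_raw, key=mean_raw.get, reverse=True), start=1)}
    z_rank = {a: i for i, a in enumerate(
        sorted(mean_z, key=mean_z.get, reverse=True), start=1)}
    composite = {a: (raw_rank[a] + z_rank[a]) / 2 for a in mean_raw}
    return sorted(composite, key=composite.get)   # lowest composite = best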

5. We select 72 abstracts to be presented as papers, 12 as alternates, and 72 as posters.

PAPER SELECTION PROCESS

From the set of abstracts designated as “paper only” or “either paper or poster”:

  1. We select the top 50 abstracts from the raw score list, and the top 50 abstracts from the z score list. This totals 60-70 abstracts (there is a lot of overlap between the two sets).
  2. We create a pool of the next 40 abstracts based on composite rank.
  3. We select abstracts from the pool to complete the program of 72 papers, based as much as possible on composite rank, with the goal of forming coherent sessions.
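A hedged sketch of these three steps, assuming each input list holds abstract ids ordered best-first and already restricted to "paper only" / "either" submissions; the grouping of papers into coherent sessions is a human judgment and is not modeled here:

def select_papers(raw_ranked, z_ranked, composite_ranked, target=72):
    """raw_ranked, z_ranked, composite_ranked: lists of abstract ids, best first.
    Returns (selected papers, leftover pool)."""
    # Union of the two top-50 lists, preserving order (typically 60-70 abstracts).
    selected = list(dict.fromkeys(raw_ranked[:50] + z_ranked[:50]))
    # Pool of the next 40 abstracts by composite rank.
    pool = [a for a in composite_ranked if a not in selected][:40]
    # Fill the remaining slots from the pool, following composite rank.
    for a in pool:
        if len(selected) >= target:
            break
        selected.append(a)
    leftover = [a for a in pool if a not in selected]
    return selected, leftover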

ALTERNATE SELECTION PROCESS

From the set of abstracts designated as “paper only” or “either paper or poster” that the authors also indicated as possible alternates:

We select 12 alternate abstracts from the remaining abstracts in the pool, based as much as possible on composite rank, with the goal of getting a good distribution of content areas.
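A small sketch under the same assumptions as the paper-selection example (the leftover pool from that step, plus a set of abstracts whose authors agreed to serve as alternates); balancing the distribution of content areas is again left to human judgment:

def select_alternates(leftover_pool, willing_alternates, composite_ranked, n=12):
    """leftover_pool: abstract ids remaining after paper selection.
    willing_alternates: set of ids the authors offered as possible alternates."""
    order = {a: i for i, a in enumerate(composite_ranked)}
    candidates = [a for a in leftover_pool if a in willing_alternates]
    return sorted(candidates, key=order.get)[:n]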

POSTER SELECTION PROCESS

From the set of abstracts designated as “poster only” or “either paper or poster”:

  1. We eliminate all abstracts already selected as papers.
  2. We select the top 72 remaining abstracts based on composite rank.
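A short sketch under the same assumptions as above (abstract ids ordered best-first by composite rank):

def select_posters(composite_ranked, poster_eligible, accepted_papers, n=72):
    """poster_eligible: ids marked 'poster only' or 'either paper or poster'.
    accepted_papers: ids already selected for the paper program."""
    remaining = [a for a in composite_ranked
                 if a in poster_eligible and a not in accepted_papers]
    return remaining[:n]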

6. Acceptance rates for recent BUCLDs (data here not entirely up-to-date)

Year   Abstracts Submitted   Abstracts Accepted*           Acceptance Rate
2001   298                   90 (90 papers, 0 posters)     30%
2002   277                   90 (90 papers, 0 posters)     33%
2003   314                   133 (87 papers, 46 posters)   42%
2004   386                   133 (87 papers, 46 posters)   34%
2005   390                   133 (87 papers, 46 posters)   34%
2006   526                   153 (87 papers, 66 posters)   29%
2007   466                   153 (87 papers, 66 posters)   33%
2008   479                   153 (87 papers, 66 posters)   32%
2009   519                   153 (81 papers, 72 posters)   29%
2010   423                   153 (81 papers, 72 posters)   36%
2011   479                   153 (81 papers, 72 posters)   34%

*Does not include the 12 alternate papers. BUCLD began having posters in 2003.