Sentence journal: Why I started keeping a daily “one-sentence journal” (ok, a not-quite daily journal).


“Journal” in a sentence (including good sentences such as quotes and proverbs)

1. The doctor is reading the Journal of Medical Science.

2. ‘Nature’ was the highest-ranked journal in the survey.

3. He kept a journal of his wanderings across Asia.

4. The journal rubs against the bearing surface.

5. Lady Franklin kept a journal during the voyage.

6. We publish a quarterly journal.

7. The Wall Street Journal uses 220,000 metric tons of newsprint each year.

8. A Wall Street Journal editorial encapsulated the views of many conservatives.

9. The “Journal of Lexicographers” is a bimonthly.

10. He kept a journal of his travels across Asia.

11. His career is profiled in this month’s journal.

12. Lady Franklin kept a daily journal of the voyage.

13. He’d lifted whole passages from a journal.

14. The journal comes out five times a year.

15. I put my journal away and prepared for bed.

16. Please send me two copies of your new journal.

17. The journal accused the professor of plagiarism.

18. The journal is published monthly.

19. She wrote faithfully in her journal every day.

20. He decided to keep a journal.

21. He wrote a journal of his travels.

22. The doctor reads the Journal of Medical Science.

23. He kept a journal during his visit to Japan.

24. The events are all recorded in her journal.

25. Membership entitles you to the monthly journal.

26. My father often contributes to a literary journal.

27. Cooper wrote to the journal immediately, defending himself.

28. It’s the official journal of the Medical Foundation.

29. On New Year’s Day in 1974, I started keeping a journal.

30. Do you wish to take out a full twelve-month subscription to the journal?

31. A report recently published in the American Journal of Epidemiology suggested that smoking increased the risk of developing non-insulin-dependent …

32. The only magazine in the waiting room was a scientific journal full of technical jargon above my head.

33. He got a job as editor of a trade journal.

34. The notice was put up by the editorial board of the school journal.


Happiness Tip Of The Day: Keep A One-Sentence Journal

I’m working on my Happiness Project, and you should have one, too! Everyone’s project will look different, but it’s the rare person who can’t benefit. Join in — no need to catch up, just jump in right now.

Yesterday was the Little Girl’s last day in the “Purple Room,” which is what her nursery school calls the class for the school’s youngest children. She only went twice a week, for less than three hours, but the Purple Room was a very big part of her life.

There’s something so inexpressibly sweet about this age and this first experience of school. I’m having an emotion that I can only describe as preemptive nostalgia for this time. Her last morning there was yesterday, but already, I feel deeply sentimental about it.

The days are long, but the years are short.

For that reason, I’m so happy that I started keeping my one-sentence journal; otherwise I would worry that I wouldn’t remember any of the details about this time – the teeny tiny sinks, the coat hooks in the hallway marked with the children’s photos, the play kitchen and the board books.

Two years ago, I started keeping a one-sentence journal because I knew I would never be able to keep a proper journal with lengthy entries. I just don’t have the time or energy to write a long entry – even two or three times a week.

Instead, each day, I write one sentence (well, actually, I type on the computer) about what happened that day to me, the Big Man and the girls.

I can imagine one-sentence journals dedicated to more specific topics, as well. It might be useful to keep a one-sentence journal about your career – especially if you were starting a new business. It might be helpful to keep one as you were going through a divorce, a cancer treatment, or another kind of catastrophic event. It would be lovely to keep a one-sentence journal when you were falling in love.

I posted about how one reader keeps a journal for his children.

I like keeping a one-sentence journal because it’s a manageable task, so it doesn’t make me feel burdened; it gives me a feeling of accomplishment and progress, the atmosphere of growth so important to happiness; it helps keep happy memories vivid (because I’m much more inclined to write about happy events than unhappy events), which boosts my happiness; and it gives me a reason to pause and think lovingly about the members of my family.

One thing is true: we tend to overestimate what we can do in the short term, and underestimate what we can do in the long term, if we do a little bit at a time. Writing one sentence a day sounds fairly easy, and it is; at the end of the year, it adds up to a marvelous record.

If you’d like to read more about happiness, check out Gretchen’s daily blog, The Happiness Project, or sign up for her monthly newsletter.

Instructions for preparing an initial manuscript | Science

Format and style of main manuscript

Format and style of supplementary material

Preparation of figures

Science Citation Style

Information on manuscript types, including length constraints, can be found on our general information for authors page. The instructions below apply to an initial submission. For a manuscript submitted after peer-review and revision, the same style guidelines apply, but we require slightly different file preparation – see instructions specific to a revised manuscript.

Format and style of main manuscript

For the main manuscript, Science prefers to receive a single complete file that includes all figures and tables in Word’s .docx format (Word 2007 or 2010, or Word 2008 or 2011 for Mac) – download a copy of our Word template here. The Supplementary Material should be submitted as a single separate file in .docx or PDF format. To aid in the organization of Supplementary Materials, we recommend using or following the Microsoft Word template supplied here.

LaTeX users should use our LaTeX template and either convert files to Microsoft Word .docx or submit a PDF file [see our LaTeX instructions here].

Use double spacing throughout the text, tables, figure legends, and References and Notes. Electronic files should be formatted for U.S. letter paper. Technical terms should be defined. Symbols, abbreviations, and acronyms should be defined the first time they are used. All tables and figures should be cited in numerical order. For best results use Times and Symbol fonts only.

Manuscripts should be assembled in the following order:

(For easy accurate assembly, download a copy of our Word template here.)

So that we can easily identify the parts of your paper, even if you do not use our template, please begin each section with the specific key words listed below, some of which are followed by a colon. Several of these headings are optional; for example, not all papers will include tables or supplementary material. Please do not use paragraph breaks in the title, author list, or abstract.

Title:
Authors:
Abstract:
One Sentence Summary:
Main Text:
References and Notes
Acknowledgments:
List of Supplementary materials:
Fig. #: (Begin each figure caption with a label, e.g. “Fig. 1.”, as a new paragraph) (or Scheme #)
Table #: (Begin each table caption with a label, e.g. “Table 1.”, as a new paragraph)
Supplementary Materials:

Titles should be no more than 96 characters (including spaces).

Short titles should be no more than 40 characters (including spaces).

One-sentence summaries capturing the most important point should be submitted for Research Articles, Reports and Reviews. These should be a maximum of 125 characters and should complement rather than repeat the title.
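The three length limits above (96, 40, and 125 characters, all counting spaces) are easy to check mechanically before submission. A minimal sketch in Python; the function and field names are illustrative, not part of any Science tooling:

```python
# Character limits from the instructions above: title <= 96 characters,
# short title <= 40, one-sentence summary <= 125 (all counts include spaces).
LIMITS = {"title": 96, "short_title": 40, "one_sentence_summary": 125}

def check_lengths(fields):
    """Return (field, actual_length, limit) for each field over its limit."""
    violations = []
    for name, text in fields.items():
        limit = LIMITS.get(name)
        if limit is not None and len(text) > limit:
            violations.append((name, len(text), limit))
    return violations

print(check_lengths({
    "title": "A short example title",
    "short_title": "x" * 50,  # 50 characters: over the 40-character limit
}))
# [('short_title', 50, 40)]
```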

Authors and their affiliated institutions, linked by superscript numbers, should be listed beneath the title on the opening page of the manuscript.

Abstracts of Research Articles and Reports should explain to the general reader why the research was done, what was found and why the results are important. They should start with some brief BACKGROUND information: a sentence giving a broad introduction to the field comprehensible to the general reader, and then a sentence of more detailed background specific to your study. This should be followed by an explanation of the OBJECTIVES/METHODS and then the RESULTS. The final sentence should outline the main CONCLUSIONS of the study, in terms that will be comprehensible to all our readers. The Abstract is distinct from the main body of the text, and thus should not be the only source of background information critical to understanding the manuscript. Please do not include citations or abbreviations in the Abstract. The abstract should be 125 words or less. For Perspectives and Policy Forums please include a one-sentence abstract.

Main Text is not divided into subheadings for Reports. Subheadings are used only in Research Articles and Reviews. Use descriptive clauses, not full sentences. Two levels of subheadings may be used if warranted; please distinguish them clearly. The manuscript should start with a brief introduction describing the paper’s significance. The introduction should provide sufficient background information to make the article intelligible to readers in other disciplines, and sufficient context that the significance of the experimental findings is clear. Technical terms should be defined. Symbols, abbreviations, and acronyms should be defined the first time they are used. All tables and figures should be cited in numerical order. All data must be shown either in the main text or in the Supplementary Materials or must be available in an established database with accession details provided in the acknowledgements section. References to unpublished materials are not allowed to substantiate significant conclusions of the paper.

References and Notes are numbered in the order in which they are cited, first through the text, then through the figure and table legends and finally through Supplementary Materials. Place citation numbers for references and notes within parentheses, italicized: (18, 19) (18-20) (18, 20-22). There should be only one reference list covering citations in the paper and Supplementary Materials. We will include the full reference list online, but references found only in the Supplementary Materials will be suppressed in print. Each reference should have a unique number; do not combine references or embed references in notes. Any references to in-press manuscripts at the time of submission should be given a number in the text and placed, in correct sequence, in the references and notes. We do not allow citation to personal communications, and unpublished or “in press” references are not allowed at the time of publication. We do allow citations to papers posted at arXiv or bioRxiv. Do not use op. cit., ibid., or et al. (in place of the complete list of authors’ names). Notes should be used for information aimed at the specialist (e.g., procedures) or to provide definitions or further information to the general reader that are not essential to the data or arguments. Notes can cite other references (by number). Journal article references should be complete, including the full list of authors, the full titles, and the inclusive pagination. Titles are displayed in the online HTML version, but not in the print or the PDF versions of papers. See Science Citation Style below for details of citation style.
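The numbering rule in the paragraph above (first through the text, then through the legends, then through the Supplementary Materials) amounts to assigning numbers by first appearance across an ordered list of sections. A small illustrative sketch, assuming citations are temporarily written as `(@key)` markers – a made-up convention for this example, not Science’s notation:

```python
import re

def assign_reference_numbers(sections):
    """Assign reference numbers by order of first citation.

    `sections` is a list of text blocks in citation-scan order
    (main text, then figure/table legends, then Supplementary Materials).
    """
    numbers = {}
    for block in sections:
        # Each citation is assumed to look like (@key); first sighting wins.
        for key in re.findall(r"\(@(\w+)\)", block):
            if key not in numbers:
                numbers[key] = len(numbers) + 1
    return numbers

order = assign_reference_numbers([
    "Prior work (@smith) showed X; we extend it (@jones).",
    "Fig. 1. Data from (@smith).",
    "Supplementary analysis follows (@lee).",
])
print(order)  # {'smith': 1, 'jones': 2, 'lee': 3}
```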

Acknowledgments should be gathered into a paragraph after the final numbered reference. This section should start by acknowledging non-author contributions and should then provide information under the following headings. Funding: include complete funding information. Author contributions: a complete list of contributions to the paper (we encourage you to follow the CRediT model). Competing interests: competing interests of any of the authors must be listed (all authors must also fill out the Conflict of Interest form); where authors have no competing interests, this should also be declared. Data and materials availability: any restrictions on materials, such as MTAs, and accession numbers for any data relating to the paper and deposited in a public database. If all data are in the paper and supplementary materials, include the sentence “All data are available in the manuscript or the supplementary materials.” (All data, code, and materials used in the analysis must be available to any researcher for purposes of reproducing or extending the analysis.)

List of Supplementary Materials: After the Acknowledgments, list your supplementary items as shown below.

Supplementary Materials
Materials and Methods
Tables S1–S2
Figs. S1–S4
References (26–32)
Movie S1

Tables should be included after the references and should supplement, not duplicate, the text. They should be called out within the text and numbered in the order of their citation in the text. The first sentence of the table legend should be a brief descriptive title. Every vertical column should have a heading, consisting of a title with the unit of measure in parentheses. Units should not change within a column. Footnotes should contain information relevant to specific entries or parts of the table.

Figure legends should be double-spaced in numerical order. A short figure title should be given as the first line of the legend. No single legend should be longer than 200 words. Nomenclature, abbreviations, symbols, and units used in a figure should match those used in the text. Any individually labeled figure parts or panels (A, B, etc.) should be specifically described by part name within the legend.

Figures should be called out within the text. Figures should be numbered in the order of their citation in the text. For initial submission, Figures should be embedded directly in the .docx or PDF manuscript file. See below for detailed instructions on preparation of and preferred formats for your figures. Schemes (e.g., structural chemical formulas) can have very brief legends or no legend at all. Schemes should be sequentially numbered in the same fashion as figures.

Format and Style of Supplementary Materials

Supplementary Materials (SM) are posted permanently at the Science web sites, are linked to the manuscript, and are freely available. Supplementary Materials must be essential to the scientific integrity and excellence of the paper, and their use is restricted to Reports and Research Articles. The material is subject to the same editorial standards and peer-review procedures as the print publication. To aid in the organization of Supplementary Materials, we recommend using or following the Microsoft Word template supplied here.

In general, the Supplementary Materials may comprise

  • Materials and Methods: The materials and methods section should provide sufficient information to allow replication of the study. It should be cited at relevant points in the text using a citation number that refers to a note in the reference list that reads “Materials and methods are available as supplementary materials at the Science website.” Study design should be described in detail, and descriptions of reagents and equipment should facilitate replication (for example, the source and purity of reagents should be specified, there should be evidence that antibodies have been validated, and cell lines should be authenticated). Clinical and preclinical studies should include a section titled Experimental Design at the beginning of materials and methods in which the objectives and design of the study, as well as prespecified components, are described. Statistical methods must be described with enough detail to enable a knowledgeable reader with access to the original data to verify the results. The values for N, P, and the specific statistical test performed for each experiment should be included in the appropriate figure legend or main text. Please see our editorial policies for additional guidelines for specific types of studies as well as further details on reporting of statistical analysis. For papers in the life sciences that involve a method that would benefit from the publication of a step-by-step protocol, we encourage authors to consider submitting a detailed protocol to our collaborative partner Bio-protocol.

  • Supplementary Text: Additional information regarding control or supplemental experiments, field sites, observations, hypotheses, etc., that bear directly on the arguments of the print paper. Further discussion or development of arguments beyond those in the main text is not permitted in supplementary text. This can be referred to in the main text as “supplementary text” with no reference note required.

  • Figures: Figures that cannot be accommodated in the print version but that are integral to the paper’s arguments. Figures should meet the same standards as print figures (see below). These are numbered starting at 1, with the prefix S (e.g., Fig. S1). All figures should be called out in the main text. No reference note is required.

  • Tables: Extensive data tables useful in assessing the arguments of the print paper. Authors wishing to post presentations of data more complex than flat text files or tables that can be converted to PDF format need to consult with their editor.

  • Multimedia files: Online video clips should be in QuickTime (preferred) or AVI format; MPEG movies may also be acceptable. For QuickTime, h.264 compression is preferred. Authors should opt for the minimum frame size and number of images that are consistent with a reasonably effective on-screen presentation. Animated GIFs are not accepted. Authors should submit online videos or movies with accompanying captions. For audio files, WAV, AIFF, or AU formats are accepted.

  • References cited only in the supplementary materials should be included at the end of the reference section of the main text, and the reference numbering should continue as if the Supplementary Materials were a continuation of the main text.

Both at initial submission and at the revision stage, authors should submit the supplementary sections (materials and methods, text, tables, and figures) as a single .docx or PDF file that should not exceed 25 MB. For ease of reading, the text and tables should be single spaced; figures should be individually numbered, and each figure should have its legend on the page on which the figure appears, immediately beneath the figure. Supplementary multimedia or large data files that cannot be included in the Supplementary Materials file should be uploaded as Auxiliary Supplementary Materials or Movies. There is a 25 MB combined size limit on auxiliary or movie files and a limit of 10 auxiliary or movie files. Video clips should be in .mp4 format. QuickTime (.mov) files are acceptable provided the h.264 compression setting is used. Where possible, please use HD frame size (1920×1080 pixels). Animated GIFs are not accepted. For audio files, WAV, AIFF, AU, or .m4a formats are preferred. MP3 or AAC files are acceptable, but a bit rate of at least 160 kb/s must be used. Authors should submit video and audio with clearly identifiable accompanying captions and credit information. If you have files essential to the evaluation of your manuscript that exceed these limits, please contact [email protected]. See Submitting your manuscript for further details on how to submit.
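The size and count limits above can be checked locally before upload. A minimal sketch, assuming local file paths; the limits are taken from the text above, and the function name is illustrative:

```python
import os

# Limits from the instructions above: the single Supplementary Materials
# file <= 25 MB; auxiliary/movie files <= 25 MB combined, at most 10 files.
MB = 1024 * 1024
SM_FILE_LIMIT = 25 * MB
AUX_COMBINED_LIMIT = 25 * MB
AUX_MAX_FILES = 10

def check_supplementary(sm_path, aux_paths):
    """Return a list of human-readable problems; empty if all limits pass."""
    problems = []
    if os.path.getsize(sm_path) > SM_FILE_LIMIT:
        problems.append("Supplementary Materials file exceeds 25 MB")
    if len(aux_paths) > AUX_MAX_FILES:
        problems.append("more than 10 auxiliary/movie files")
    if sum(os.path.getsize(p) for p in aux_paths) > AUX_COMBINED_LIMIT:
        problems.append("auxiliary/movie files exceed 25 MB combined")
    return problems
```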

Preparation of Figures

Creating your figures It is best to create your figures as vector-based files such as those produced by Adobe Illustrator. Vector-based files will give us maximum flexibility for sizing your figures properly without losing resolution, as they can be altered in size while maintaining high print-quality resolution. We cannot accept PowerPoint files or files that are not readable by Adobe Photoshop, Macromedia Freehand, or Adobe Illustrator. To keep file sizes reasonable, please save art at a resolution of 150 to 300 dots per inch (dpi) for initial submission. A higher resolution applies for figures submitted at the revision stage – see instructions for preparing a revised manuscript. Digital color art should be submitted as CMYK (Cyan, Magenta, Yellow, Black) rather than RGB (Red, Green, Blue).

Paper The width of figures, when printed, will usually be 5.5 cm (2.25 inches or 1 column) or 12.0 cm (4.75 inches or 2 columns). Bar graphs, simple line graphs, and gels may be reduced to a smaller width. Symbols and lettering should be large enough to be legible after reduction [a reduced size of about 7 points (2 mm) high, and not smaller than 5 points]. Avoid wide variation in type size within a single figure. In laying out information in a figure, the objective is to maximize the space given to presentation of the data. Avoid wasted white space and clutter.
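The printed widths given above, combined with a target resolution, determine the minimum pixel dimensions a raster figure needs. A quick helper for that arithmetic (300 dpi is the top of the range suggested for initial submission; the function name is illustrative):

```python
def min_pixel_width(width_cm, dpi=300):
    """Minimum pixel width for a raster figure printed at width_cm."""
    inches = width_cm / 2.54  # convert cm to inches
    return round(inches * dpi)

print(min_pixel_width(5.5))   # 1 column:  650
print(min_pixel_width(12.0))  # 2 columns: 1417
```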

  • The figure’s title should be at the beginning of the figure legend, not in the figure itself.

  • Include the figure’s identifying number (e.g., “Fig. 1”) on the same manuscript page that includes the figure.

  • Keys to symbols, if needed, should be kept as simple as possible and be positioned so they do not needlessly enlarge the figure. Details can be put into the captions.

  • Use solid symbols for plotting data if possible (unless data overlap or there are multiple symbols). Size symbols so that they will be distinguishable when the figure is reduced (6 pt minimum). Line widths should be legible upon reduction (minimum of 0.5 pt at the final reduced size).

  • Panels should be set close to each other, and common axis labels should not be repeated.

  • Scales or axes should not extend beyond the range of the data plotted.

  • Use scale bars in place of, or in addition to, magnifications. Do not use minor tick marks in scales or grid lines. Avoid using y-axis labels on the right that repeat those on the left.

Color-mix and contrast considerations
  • Avoid using red and green together. Color-blind individuals will not be able to read the figure.

  • Please do not use colors that are close in hue to identify different parts of a figure.

  • Avoid using grayscale.

  • Use white type and scale bars over darker areas of images.

  • Units should be metric and follow SI convention.

Typefaces and labels
  • Please observe the following guidelines for labels on graphs and figures:

  • Use a sans-serif font whenever possible (we prefer Helvetica).

  • Simple solid or open symbols reduce well.

  • Label graphs on the ordinate and abscissa with the parameter or variable being measured, the units of measure in parentheses, and the scale. Scales with large or small numbers should be presented as powers of 10.

  • Avoid the use of light lines and screen shading. Instead, use black-and-white, hatched, and cross-hatched designs for emphasis.

  • Capitalize only the first letter in a label (and proper nouns, of course), not every word.

  • Units should be included in parentheses. Use SI notation. If there is room, write out variables – e.g., Pressure (MPa), Temperature (K).

  • Variables are always set in italics or as plain Greek letters (e.g., P, T, m). The rest of the text in the figure should be plain or bold text.

  • Type on top of color in a color figure should be in bold face. Avoid using color type.

  • When figures are assembled from multiple gels or micrographs, a line or space should indicate the border between two original images.

  • Use leading zeros on all decimals – e.g., 0.3, 0.55 – and only report significant digits.

  • Use capital letters for part labels in multipart figures – A, B, C, etc. These should be 9 pt and bold in the final figure. When possible, place part labels at the upper left-hand corner of each figure part; if a part is an image, set labels inside the perimeter so as not to waste space.

  • Avoid subpart labels within a figure part; instead, maintain the established sequence of part labels [e.g., use A, B, C, D, E instead of A, B, C(a), C(b), C(c)]. If use of subpart labels is unavoidable, use lowercase letters (a, b, c). Use numbers (1, 2, 3) only to represent a time sequence of images.

  • When reproducing images that include labels with illegible computer-generated type (e.g., units for scale bars), omit such labels and present the information in the legend instead.

  • Sequences may be reduced considerably, so the typeface in the original should be clear. There should be about 130 characters and spaces per line for a sequence occupying the full width of the printed page and about 84 characters and spaces per line for a sequence occupying two columns.

Modification of figures Science does not allow certain electronic enhancements or manipulations of micrographs, gels, or other digital images. Figures assembled from multiple photographs or images, or non-concurrent portions of the same image, must indicate the separate parts with lines between them. Linear adjustment of contrast, brightness, or color must be applied to an entire image or plate equally. Nonlinear adjustments must be specified in the figure legend. Selective enhancement or alteration of one part of an image is not acceptable. In addition, Science may ask authors of papers returned for revision to provide additional documentation of their primary data.

Science Citation Style

For journal articles, list initials first for all authors, separated by a space (e.g., A. B. Opus, B. C. Hobbs). Do not use “and.” Titles of cited articles should be included (lowercase except for the first word and proper nouns), followed by a period (see examples below). Journal titles are in italics; volume numbers follow, in boldface. (If there is no volume number, use the publication year in its place.) Do not place a comma before the volume number or before any parentheses. You may provide the full inclusive pages of the article. If the publication is online only, use the article number (or citation number) instead of the page. Journal years are in parentheses: (1996). End each listing with a period. Do not use “ibid.” or “op. cit.” (these cannot be linked online).
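The journal-article pattern just described can be sketched as a small formatter. This is an illustrative sketch only; `*...*` and `**...**` stand in for the italic journal title and bold volume number, since the output here is plain text, and the function and field names are not part of any Science tooling:

```python
def format_journal_ref(authors, title, journal, volume, pages, year):
    """Format a reference in the Science journal-article pattern.

    `authors` is a list like ["A. B. Opus", "B. C. Hobbs"]; no "and".
    Italics and boldface are marked with *...* and **...** respectively.
    """
    author_str = ", ".join(authors)
    return f"{author_str}, {title}. *{journal}* **{volume}**, {pages} ({year})."

# Reproduces the first journal-article example below (modulo the markers).
print(format_journal_ref(
    ["N. Tang"],
    "On the equilibrium partial pressures of nitric acid and ammonia in the atmosphere",
    "Atmos. Environ.", 14, "819-834", 1980,
))
```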

For whole books, the style for author or editor names is as above; for edited books, insert “Ed.,” or “Eds.,” before the title. Italicize the book title and use “title case” (see examples below). After the title, provide (in parentheses) the publisher name, edition number (if any), and year. If the book is part of a series, indicate this after the title (e.g., vol. 23 of Springer Series in Molecular Biology).

For chapters in edited books, the style is as above, except that “in” appears before the title, and the names of the editors appear after the title. The chapter title may be provided before the book title; enclose chapter titles in quotes and use initial caps. After the information in parentheses, provide the complete page number range (and/or chapter number) of the cited material.

For monographs, memos, or reports, the style for author or editor names is as above. The title should be in quotes and should have initial caps. After the title, provide (in parentheses) the report number (if applicable), publisher name, and year. If these are unavailable, or if the work is unpublished, please provide all information needed for a reader to locate the work; this may include a URL or a Web or FTP address. Monographs in series (such as AGU Monogr.) may be treated as journals.

For unpublished proceedings or symposia, supply the title of meeting, location, inclusive dates, and sponsoring organization. Also include the abstract number (if applicable). There is no need to supply the total page count.

For a thesis, name the school but not the degree; we do not use “dissertation,” “Ph.D.,” “Master’s,” or other specifics. Name the city if the university could be mistaken for another. It is optional to include the thesis title.

For research first published in Science First Release, online journals, and preprints available on the Internet, see the examples below. These are considered published work.



1. N. Tang, On the equilibrium partial pressures of nitric acid and ammonia in the atmosphere. Atmos. Environ. 14, 819-834 (1980).

2. W. R. Harvey, S. Nedergaard, Sodium-independent active transport of potassium in the isolated midgut of the Cecropia silkworm. Proc. Natl. Acad. Sci. U.S.A. 51, 731-735 (1964).

3. N. H. Sleep, Stagnant lid convection and carbonate metasomatism of the deep continental lithosphere. Geochem. Geophys. Geosyst. 10, Q11010 (2009). [online-only paper; use article number instead of page]

4. J. M. Dinning, Am. J. Clin. Nutr. 42 (suppl. 1), 12 (1984). [journal with supplement noted]


1. M. Lister, “[Chapter title goes here]” in Fundamentals of Operating Systems (Springer, New York, ed. 3, 1984), pp. 7-11.

2. J. B. Carroll, Ed., Language, Thought and Reality, Selected Writings of Benjamin Lee Whorf (MIT Press, Cambridge, MA, 1956).

3. R. Davis, J. King, “[Chapter title goes here]” in Machine Intelligence, E. Acock, D. Michie, Eds. (Wiley, 1976), vol. 8, chap. 3. [use short form of publisher name, not “John Wiley & Sons”]

4. J. Sprung, Corals: A Quick Reference Guide (Oceanographic Series, Ricordea, Miami, FL, 1999). [for books in series, include the series title]

5. National Academy of Sciences, Principles and Procedures for Evaluating the Toxicity of Household Substances (National Academy of Sciences, Washington, DC, 1977). [organization as author and publisher]

Technical reports

1. G. B. Shaw, “Practical uses of litmus paper in Möbius strips” (Tech. Rep. CUCS-29-82, Columbia Univ., 1982).

2. F. Press, “A report on the computational needs for physics” (National Science Foundation, 1981). [unpublished or access by title]

3. “Assessment of the carcinogenicity and mutagenicity of chemicals,” WHO Tech. Rep. Ser. No. 556 (1974). [no author]

4. U.S. Environmental Protection Agency (EPA), “White Paper on Bt plant-pesticide resistance management” (Publication 739-S-98-001, EPA, 1998). [the easiest access to this source is via the URL]

Conference proceedings (unpublished)

1. M. Konishi, paper presented at the 14th Annual Meeting of the Society for Neuroscience, Anaheim, CA, 10 October 1984.


1. B. Smith, thesis, Georgetown University (1973).

2. R. White, “[Thesis title goes here],” thesis, University of Illinois, Chicago, IL (1983). [Optional: The title of the thesis may be provided in quotes after the author name.]

Electronic publication before print

1. W. Jones, B. Smith, [Article title goes here]. Science 10.1126/science.1054678 (2005). [published in Science First Release; not yet published in print]

2. J. Moyron-Quiroz et al., Role of inducible bronchus associated lymphoid tissue (iBALT) in respiratory immunity. Nat. Med. 10.1038/nm1091 (2004).

3. After print publication of a Science First Release paper (or any other paper that was initially published online), use the standard format for citing journal articles: W. Jones, B. Smith, [Article title goes here]. Science 311, 496–499 (2006).

Other online publication

1. E. M. Pietras, G. Cheng, A new TRADDition in intracellular antiviral signaling. Sci. Signal. 1, pe36 (2008). [Science Signaling]

2. R. K. Aziz, V. Nizet, Pathogen microevolution in high resolution. Sci. Transl. Med. 2, 16ps4 (2010). [Science Translational Medicine]

3. A. Clauset, S. Arbesman, D. B. Larremore, Systematic inequality and hierarchy in faculty hiring networks. Sci. Adv. 1, e1400005 (2015). [Science Advances]


1. A. Smette et al., (2001).

2. K. Abe et al., (2001). [if now published, omit the URL and provide only a standard reference]

Back to Top

One-Sentence Journal: Short Poems and Essays from the World at Large published • Empty Mirror

Chris La Tray’s One-Sentence Journal: Short Poems and Essays from the World at Large has been published by Riverfeet Books. Chris’ essay, “Notes on the Sacred Art of Dog Walking”, which first appeared here at Empty Mirror, is included in the book, alongside poems and more essays.

Praise for One-Sentence Journal:

Chris La Tray’s One-Sentence Journal achieves the difficult task of creating a narrative out of snapshots. La Tray’s observations of the world around him not only take us into his world, but provide unique insights into our world. This book is proof of the power of language, even at its most spare.
— Russell Rowland, author of Fifty-Six Counties, High and Inside, In Open Spaces, and Arbuckle

An intimate journal of essays interspersed with seasonal American Haiku puts the reader in the center of a man’s introspection and study: nature and people — the stories of his landscape, reflections on family, dogs and work all told in a familiar voice, the voice of a friend, which you can hear clearly in your mind. Like I was riding in his truck looking at the river changing and remaining while he regales me with his language.
— Sheryl Noethe, Poet Laureate of Montana, 2011-2013; author of As Is, Grey Dog Big Sky, The Ghost Openings, and Poetry Everywhere

Reading Chris La Tray’s One-Sentence Journal I find myself wishing all kinds of things: that I went for more walks in the woods, that I spent more time in the company of owls, that I ate more fried chicken, that I woke each day in time to watch the sunrise. For this is a sunrise book, a book of revelations, of creekwalks and roadfood and ordinary sadnesses, ordinary joys—which are, in the end, the only kind. “I have a stake in this,” La Tray writes. And so do you. So do you.
— Joe Wilkins, author of The Mountain and the Fathers, When We Were Birds, and Notes from the Journey Westward

Buy One-Sentence Journal here.

You can also learn more on Chris’ website.

Empty Mirror News

Have something to contribute? Leave a comment below or drop us a line.

Keeping a One-Sentence Journal Can Make You Happier, and Other Things You Can Do to Make Sure You’re Smiling Every Day

The pursuit of happiness can be long, arduous, and full of obstacles — which is why little pieces of advice that can help us find joy are often the best kinds of advice. In a new piece published on Science of Us, writer Melissa Dahl suggests that keeping a one-sentence journal can make you a happier person. Why? Because re-living day-to-day moments that seem ordinary at the time can actually feel extraordinary in the future. As she suggests, you don’t have to write a whole diary entry describing your day, all the colors you saw, all the smells you smelled, and all the people you met. It can just be one line of anything that happened to you. It acts as a fun little brain teaser and a documentation of your life at the same time.

Finding happiness in the little things can be just as important as making yourself big-picture happy. While it’s important to have long-term goals and relationships that you hope will bring warmth into your life, making sure you keep smiling every day will make that journey so much more pleasant. So how do you achieve that day-to-day happy? Different sources will tell you different things, most of which will likely work one way or another; it just depends on who you are and how you find joy.

Some people are able to find happiness in a purely solitary way. Getting in touch with the mind and body through meditation can be a powerful way to experience the joys of existence and understand one’s emotions better. For others, bursts of happiness can be experienced by positively interacting with the world around them. Love the summer? Soak up the sunshine. Love the winter? Put on a warm coat and go for a walk. Love the smell of freshly baked bread? Take a trip to a bakery and treat yourself to something delicious. Love to learn? Watch a TED Talk, read a new book, or watch a short film. Have a hobby? Do it.

A third type of person achieves happiness by interacting with others. While I’m definitely a little bit of the second type, I’d primarily describe myself as this kind of person. When I’m feeling a little down or in a funk, I make time to spend with my roommates, call an old friend who goes to college in a different state, post obnoxious comments to my little brother’s Instagram photos, or even go to the movies so I’m in a room full of people sharing an experience with me.

Whatever it is that brings you happiness, whether it’s a walk, a one-line summary of your day in a journal, or Facebook-stalking a friend you haven’t seen in a long time, the most important thing is to make time for it every day. Oftentimes we get so caught up in the hubbub of everyday life that it’s easy to forget to take a moment and just smile.


e-journal in a sentence

  • How to find e-journal articles
  • The efficiency of searching is the key problem of an e-journal website
  • The application of English e-journal and e-newspaper reading in Chinese college English teaching
  • In an e-journal portal, you can search for an e-journal by its title or browse the titles alphabetically
  • Coordinating efforts to make an African consortium for e-journals and databases. Includes 300 links to open-access math e-journals
  • An aesthetic Zen haven with a humorous touch. Offers inspirational daily quotes, a complimentary monthly e-journal, and an extensive collection of Buddhist-themed e-cards
  • The university’s £13 million Boots Library boasts 531,000 books, 2,800 journals, 9,000 e-journals, silent study areas and group study rooms
  • Fuzhi Zhao has published more than 80 poems in China and abroad. He serves as a poetry editor at several poetry journals and e-journals, including Chinese Poetry
  • Chinese text automatic proofreading, a branch of applied foundational research in natural language processing, has gradually attracted more attention with the development of e-journals and has become an urgent task
  • This paper introduces the German Electronic Journals Library (EZB) in detail, analyzes the status of Chinese libraries in managing and using electronic research journals, and puts forward suggestions on the integrated organization and serving of e-journals
  • The applications of e-business in libraries may include: first, library services, i.e., the BtoC and CtoC modes in combination with e-business make available online reader services, online reference consultation and knowledge navigation, online bibliographic searches, database searching, e-journal reading, e-book reading and lending, remote education, online SDI services, market investigation via the Internet, interlibrary loan, e-document delivery, BBS, etc.; and second, library operation
  • Along with the swift development of ARPANET and the Internet, the TCP/IP protocol has become widely prevalent and has succeeded greatly in many application fields, such as electronic mail, information retrieval and reproduction, advertisement, amusement, e-journals, online education, online services, online client support, etc. However, owing to the quick progress of hardware and software technologies, electronic government affairs, electronic business affairs, online telephony, online games, and online movies are booming worldwide, continuously producing ever more high-speed real-time data

Langsford | Quantifying sentence acceptability measures: Reliability, bias, and variability

1 Introduction

Acceptability judgments have formed a large part of the study of language since at least Chomsky (1965). They are one of many sources of evidence, alongside corpus linguistics (Sampson 2007), psychological experiments (Noveck & Reboul 2008), and neuroscience techniques (Shalom & Poeppel 2007), that each offer distinct and complementary information about language (Arppe & Järvikivi 2007). One major factor in the popularity of acceptability judgments is the way they allow theories to be tested against artificial constructions that passive observation would rarely or never provide (Schütze 1996). For instance, acceptability judgments can differentiate between constructions that are ungrammatical and those that are rare or missing but still grammatical.

Acceptability judgments come in a number of possible forms, each with its own advantages and disadvantages. The main differences among forms lie in the kind of response required from the participant: people can be offered a discrete rating scale or a real-valued scale, or be asked to make a relative comparison between items. The choice of what response options to offer is critical in two important respects: it determines the statistical tests available to researchers, and it may also significantly influence people’s interpretation of the task. For these reasons, the characteristics of different kinds of acceptability measures are well studied. We know that acceptability judgment data are influenced by details such as the selection of participants (Dąbrowska 2010), sample size (Mahowald et al. 2016), task structure (Featherston 2008), participant engagement (Häussler & Juzek 2016), and data processing decisions (Juzek 2015).

Most of the existing literature focuses on the question of to what extent acceptability judgment data can be used to adjudicate about individual phenomena or effects of linguistic interest (e.g., by presenting pairs of sentences that capture a specific contrast relevant to a particular theoretical claim). However, one might be interested in evaluating the range of acceptability measures along other dimensions as well. To what extent do acceptability judgments from different elicitation tasks support claims about larger-scale generalizations across many different sentences and phenomena? To what extent do different measures of acceptability agree with each other about specific items or sentences? To what extent is each measure robust to differences within individuals at different time points? This paper focuses on exploring these questions.

In the work presented here, we attempt to quantify the extent to which acceptability judgment data from a variety of different elicitation tasks support different kinds of claims: claims about the global structure of acceptability across a large set of diverse sentences, claims based on the magnitude of acceptability differences, and claims made at the level of single items or sentences. We accomplish this by quantifying the relative contribution of multiple factors – individual participant differences, sample size, task structure, and response-style mitigation in data processing – to the empirical reliability of acceptability scores over specific items (rather than over specific effects) for different measures. We chose to focus on reliability because reliability places a ceiling on how appropriate acceptability judgments are as a test of linguistic theories. If acceptability judgments for some measure or in the presence of some factor are not reliable, we should be cautious about relying on them. Moreover, understanding what factors influence the reliability of a measure can be informative about exactly what that measure reflects.

Our approach aims to differentiate between possible sources of bias and variance. It is currently unclear what proportion of the variability seen in acceptability judgment data is due to lapses of attention, idiolect differences between participants, differences in interpretation of acceptability scales, or interference from simultaneously presented items.

A standard response to the diversity of potential sources of variability is to give them all equal status as noise independent of the linguistic effect and ask what can be concluded about true linguistic effects (focused on specific phenomena) in the presence of this noise, regardless of its source. An extensive literature explores this question, looking at the chance of identifying an effect where none exists (Sprouse & Almeida 2011; Sprouse et al. 2013), the chance of failing to identify an effect that is truly present (Sprouse & Almeida 2017a), and differences in sensitivity of different measures compared on a particular known effect (Weskott & Fanselow 2011). The consensus of such studies is that acceptability judgments are highly reliable across replications (Sprouse & Almeida 2017b).

As this literature shows, differentiation between different sources of bias and variance is not strictly necessary in order to test specific linguistic effects, which are the primary currency of linguistic research. Many measures of sentence acceptability have good psychometric properties when they are used for such a purpose (e.g., testing whether a set of sentences licensed under some linguistic theory have different acceptability than a set of sentences that are not licensed). If such differentiation is not necessary, why are we attempting to do so here?

The first reason is that such differentiation is important if we want to use acceptability judgments to explore questions that are not focused on hypothesis testing about specific linguistic effects. For instance, it is quite possible that the nature of the elicitation task may impose structure on the overall distribution of acceptability scores across multiple kinds of sentences. Thus, understanding to what extent different tasks do this is important for investigations of the global structure of acceptability in language. Such investigations would include issues like the extent to which clustering structure may be apparent in acceptability judgments (Hofmeister et al. 2013; Lau et al. 2016), whether there are dialect or language differences in global acceptability structure, or whether low acceptability sentences show greater variability than high acceptability ones. Indeed, global acceptability judgments (if they are reliable) may even provide a means to differentiate between dialects or evaluate the knowledge or fluency of individual speakers.

The second reason we are interested in distinguishing between different sources of variability is the expectation that some of these sources fall under an experimenter’s control and can be minimized. Different elicitation tasks may vary in their vulnerability to particular sources of variability, which affects their relative quality as scientific instruments. In general, a task that is more difficult might be expected to incur greater variability due to distraction or mistaken responding. Tasks with a small number of unambiguous response options, such as forced choice tasks, may be less vulnerable to response style variability than tasks with flexible free response options that are open to differences of interpretation, such as magnitude estimation. Conversely, forced choice tasks may be more vulnerable to item neighborhood effects, with sentences potentially processed differently in the context of a contrast rather than in isolation.

How much do these tasks vary and how large are these different sources of variation? Our goal is to provide a quantitative answer to this question.

The many possible sources of bias and variability cannot be completely disentangled, since they are generally all present in some unknown degree in every response. We give quantitative bounds for the distinct contribution of certain sources of variability in two different ways.

First, we contrast between- and within-participant test-retest reliability. Between-participant test-retest reliability is an important metric of measure quality in its own right, since no strong conclusions can be drawn from the results of a measure if it is liable to give different answers to the same question on different occasions (Kline 2013; Porte 2013; Brandt et al. 2014; DeVellis 2016). While distinct in the way it avoids appealing to a ground truth, between-participant test-retest reliability is closely related to error-rate reliability, if the underlying truth is considered stable over the time scales involved. As such, it is widely reported in existing work on the reliability of acceptability judgment data (Sprouse et al. 2013; Sprouse & Almeida 2017a). However, test-retest reliability within the same participant can offer additional information, especially when contrasted with between-participant reliability. This contrast, which has no analogue in error rates, is informative about the composition of the variability: variability inherent to the construct itself and random noise due to inattention or other error can be expected in both, while individual differences in response style and subjective acceptability only contribute to the variability of between-participant replications. As a result, between-participant replications are expected to be less reliable, and the size of the reliability gap quantifies the combined impact of these particular sources of variability. Even further decomposition into the source of this within/between reliability gap is possible as well. For instance, the variability due to response style differences can be estimated by examining the effect of data pre-processing steps (e.g., z-transformation of scores) known to mitigate this particular source of variability.
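The within/between contrast can be sketched numerically. In the toy simulation below (entirely hypothetical data, not the paper’s), each participant has a stable idiosyncratic component on top of a shared latent acceptability; item scores aggregated from the same participants then correlate more strongly across waves than scores from disjoint groups:

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_subj = 50, 20

# Shared latent acceptability per sentence (hypothetical).
latent = rng.normal(0, 1, n_items)
# Stable idiosyncratic preferences: a participant's own deviations,
# identical across test and retest but differing across participants.
idio = rng.normal(0, 0.8, (n_subj, n_items))

def wave():
    """One elicitation wave: shared truth + stable idiolect + fresh noise."""
    return latent + idio + rng.normal(0, 0.5, (n_subj, n_items))

w1, w2 = wave(), wave()

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

# Within-participant: the same people judge the same items twice.
within = corr(w1.mean(axis=0), w2.mean(axis=0))

# Between-participant: two disjoint groups of people, one wave each,
# so idiosyncratic preferences act as extra noise.
half = n_subj // 2
between = corr(w1[:half].mean(axis=0), w2[half:].mean(axis=0))

print(f"within={within:.3f}  between={between:.3f}")
```

The size of the gap between the two correlations plays the role of the reliability gap discussed above.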

Second, we contrast these within- and between-participant test-retest reliability results for measures based on different tasks. The tasks differ primarily in the kind of response options offered, which could potentially impose structure on results. For example, asking people to give responses on a discrete Likert scale might force them to collapse distinct acceptabilities onto one response if there are too few options or encourage them to make spurious distinctions if there are too many (Schütze 1996; Carifio & Perla 2008; Schütze 2011). The comparisons involved in forced choice judgments could also direct people’s attention to specific syntactic details, particularly when the two sentences are related, as is typical of a well-controlled test pair. This might lead to different acceptability ratings than if each sentence was considered in isolation (Cornips & Poletto 2005). Contrasts between measures are therefore useful both in identifying the best-performing measures (Sprouse et al. 2013; Sprouse & Almeida 2017a) and in testing the degree of agreement between them (Schütze 2011; Weskott & Fanselow 2011; Sprouse & Almeida 2012).

However, from the perspective of decomposing sources of bias and variance, distinct tasks may also be differently vulnerable to different sources of variability. As a result, we may be able to use them to cross-check against each other’s potential biases.

The structure of this paper is as follows. We first give a detailed introduction to the measures considered in this paper, the processing steps and statistical tests associated with each, and the series of experiments within which we collect the data. When reporting the results, our primary focus is on test-retest reliability; it is first evaluated in terms of raw score correlation of all sentences in a dataset, then in terms of the decisions yielded by each measure on particular contrasts of interest. For each of these we compare within- and between-participant reliability and examine the impact of sample size. We conclude by examining the mutual agreement between the measures, with reference to expert judgments in the published literature. In the discussion, we address limitations of this work, consider recommendations for researchers interested in measuring sentence acceptability, and discuss future directions.

1.1 The measures

Early work on the reliability of formal measures was prompted by concerns about the practice of “armchair linguistics”, which considered phrases or sentences as the primary unit of evidence on which linguistic theories were built, taking for granted that the acceptability status of these sentences would be immediately obvious to a native speaker. With reference to previously discredited introspective approaches in psychology (Danziger 1980), critics pointed out that the intuitions of a linguist about a sentence they constructed themselves to demonstrate a particular point of syntax might not be the same as those of the broader language community (Spencer 1973; Schütze 1996; Wasow & Arnold 2005; Dąbrowska 2010).

Proponents of informal approaches argued in response that linguists were mainly concerned with phenomena that gave very large effect sizes, making multiple opinions on a particular acceptability difference redundant (Phillips & Lasnik 2003; Featherston 2009; Phillips 2009). This approach defended the legitimacy of the large literature built on such informal tests, but left open the question of how to decide what counts as an obvious case (Linzen & Oseki 2015).

Recent systematic work comparing expert and naive judgments has largely supported the argument that the majority of claims published in the linguistics literature are consistent with the results of formal tests against the judgments of large numbers of naive native speakers (Culbertson & Gross 2009; Sprouse & Almeida 2012; Sprouse et al. 2013). However, the same program of research has shown that even for contrasts with large effect sizes, formal tests offer more information than informal ones. As well as giving an objective measure of whether a test sentence is more or less acceptable than a control to a language community, a formal test can also give an indication of the size of the difference, and the relative acceptability of both sentences on a global acceptability scale (Sprouse & Schütze 2017). It has also been argued that as a result of much productive work on large effects, smaller effects have become increasingly important to further progress (Gibson & Fedorenko 2013; Gibson et al. 2013).

One potential drawback of formal methods is their higher cost in time and participant-hours. However, as Myers (2009) points out, more representative samples and quantitative replicability need not be prohibitively expensive or complicated. Moreover, cost depends in part on the measurement task as well as the question being asked. For instance, many fewer judgments are required for a forced-choice task on an “obvious” effect (Mahowald et al. 2016) than for answering finer-grained questions about statistical power or sensitivity (Sprouse & Almeida 2017a).

Our goal in this paper was to evaluate all of the most commonly used formal measures of sentence acceptability, as well as variants on them, in order to isolate and expose the impact of task-specific assumptions. The primary distinction between existing measures is whether they ask participants to give each sentence a rating on a scale of some sort (a rating task) or make a choice between two sentences (a choice task). The two rating tasks we consider are LIKERT scales and Magnitude Estimation (ME), while the two choice tasks involve either deciding between two related sentences (TARGET PAIRS) or two random sentences (RANDOM PAIRS). This yields four separate tasks, but for two we separately evaluate alternative statistical methods for transforming the raw results, giving six distinct measures. One task for which we consider multiple analyses is magnitude estimation, where scores can be log transformed (ME(LOG)) or both log and z-transformed (ME(z-SCORE)). The other is the judgments involving random sentence pairs, which can either be used directly or input into a THURSTONE model based on a standard measurement approach in psychophysics.
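As a rough illustration of the two ME processing pipelines named above, the following sketch (with made-up magnitude estimates, not data from the study) applies the log transform for ME(LOG) and adds a per-participant z-transform for ME(z-SCORE):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical raw magnitude estimates: each row is one participant's
# positive, ratio-scale responses to the same six sentences, with a
# participant-specific multiplicative response style.
true_log = np.array([0.2, 0.5, 1.0, 1.4, 2.0, 2.3])
raw = rng.lognormal(true_log, 0.3, (4, 6)) * rng.uniform(1, 50, (4, 1))

# ME(LOG): log-transform to tame the multiplicative response scale.
me_log = np.log(raw)

# ME(z-SCORE): additionally z-transform within each participant, so
# individual shift/scale response styles cancel before averaging.
me_z = (me_log - me_log.mean(axis=1, keepdims=True)) \
       / me_log.std(axis=1, keepdims=True)

item_scores = me_z.mean(axis=0)   # comparable item-level scores
print(np.round(item_scores, 2))
```

After z-scoring, every participant’s responses share mean 0 and standard deviation 1, which is what makes them directly comparable before aggregation.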

The six measures, ME(z-SCORE), ME(LOG), LIKERT, THURSTONE, TARGET PAIRS, and RANDOM PAIRS are described in more detail in the Method section. One reason for this choice of tasks is to reflect current practice: LIKERT, TARGET PAIRS, and ME are probably the most common instruments for eliciting acceptability judgments (Podesva & Sharma 2014). However, another consideration is their diversity of assumptions. In particular, LIKERT and ME each supply a particular rating scale, while the choice tasks do not. A key contribution of this paper is the presentation of the THURSTONE model, which allows comparisons between these perspectives by inferring scale structure from choice data (Thurstone 1927). The THURSTONE model is capable of representing a wide range of latent acceptability structures: the degree of consistency between the structure inferred from choice task data and rating task data gives an indication of the extent to which the researcher-supplied scales impose structure on people’s responses.
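A minimal sketch of Thurstonian scaling from choice data may help here. The classical least-squares Case V estimator below (not necessarily how the paper’s THURSTONE model is implemented, and using hypothetical choice counts) recovers latent scores by probit-transforming pairwise preference probabilities:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical pairwise-choice counts: wins[i, j] = trials on which
# sentence i was preferred to sentence j.
wins = np.array([
    [0, 12, 17, 19],
    [8,  0, 14, 18],
    [3,  6,  0, 13],
    [1,  2,  7,  0],
])
n_trials = wins + wins.T

# Empirical preference probabilities, lightly smoothed away from 0 and 1
# so the probit transform stays finite.
p = (wins + 0.5) / (n_trials + 1.0)

# Thurstone Case V (least squares): the latent score of item i is the
# mean probit-transformed probability of i beating each alternative.
z = norm.ppf(p)
scores = z.mean(axis=1)
print(np.round(scores, 3))
```

Because p[i, j] + p[j, i] = 1, the probit matrix is antisymmetric and the recovered scores are centered at zero; only their relative positions are meaningful.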

1.2 Measure evaluation

In this paper we systematically investigate three criteria for evaluating each of the six measures: test-retest reliability, agreement, and robustness to sample size.

Measure agreement is an important check of validity for diverse measures claiming to reflect the same underlying construct. Here we are also interested in the vulnerability of different measures to different sources of noise, with the goal of allowing researchers to minimize the variability in results that are due to controllable properties of the elicitation task rather than the linguistic construct of interest. Although robustness to sample size is not directly related to the decomposition of measure variability and bias that is the main focus of this paper, we include it as important information for readers interested in the implications of this work for study design.

Test-retest reliability can be defined at various levels from responses (when repeating questions within-participants) to items (an aggregation of many responses) to effects (which aggregate over many theoretically-related items). Here we are primarily concerned with the item level, for several reasons. First, effect-level reliability is already well studied. Second, including only one item per effect (as we do) allows us to maximize variability across items and thus creates a much stronger test of each measure. If a measure is highly reliable even across an extremely varied sentence set, this is more informative than finding that it is reliable across a narrower set of stimuli. Finally, item-level reliability is not itself well-studied, yet is theoretically important: if people’s judgments about specific items are reliable for a given measure, a much wider range of theoretical claims about language are open to study with this data type.

The assessment of reliability depends in part on the nature of the hypothesis being tested. Some researchers might be particularly interested in a decision problem: determining whether people make different judgments for two different sentences or kinds of sentences. Others might be interested in an estimation problem, being able to accurately position sentences relative to each other on an acceptability scale. In this paper we evaluate reliability using both kinds of assessment. For a decision problem, we rely on statistical significance testing of the difference between acceptability scores produced by a particular measure for the two sentences. This allows us to precisely characterize our uncertainty in the estimate of the difference for each pair of sentences, and compare that degree of uncertainty across measures in a principled way. For estimation problems, we calculate correlations between scores from different time periods or people. Reliability at this level of detail is relevant to claims about the overall structure of acceptability, for example whether or not it exhibits strong clustering (Sprouse 2007).
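The two assessment styles can be illustrated with simulated data (all numbers hypothetical): a significance test on a single sentence pair for the decision problem, and a correlation across waves of item scores for the estimation problem:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Decision problem: do ratings for a (hypothetical) test/control pair
# differ? Likert ratings on a 1-7 scale, 40 participants per sentence.
test_item = rng.integers(2, 6, 40)       # mid-scale ratings
control_item = rng.integers(5, 8, 40)    # high ratings
t_stat, p_val = stats.ttest_ind(control_item, test_item, equal_var=False)
decision = "differ" if p_val < 0.05 else "no detectable difference"

# Estimation problem: how well do item scores from two elicitation
# waves line up on a common scale?
wave1 = rng.normal(0, 1, 30)             # wave-1 item scores
wave2 = wave1 + rng.normal(0, 0.4, 30)   # retest scores plus noise
r, _ = stats.pearsonr(wave1, wave2)

print(decision, round(r, 2))
```

The Welch test characterizes uncertainty about a single pairwise difference, while the Pearson correlation summarizes agreement across the whole set of items at once.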

A secondary factor we focus on is sensitivity to sample size. We do this by systematically repeating our reliability analyses with the judgments derived from different sample sizes of participants and comparing this to the results from the full sample. This is directly useful in estimating the sample size required for a target level of reliability in studies using these measures. It also gives an indication of how efficiently these measures are able to extract information from responses; this is useful because different methods might take different numbers of trials to produce reliable answers (Li et al. 2016; Sprouse & Almeida 2017a).
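A subsampling analysis of this kind can be sketched as follows, using fabricated ratings; item scores from progressively larger random subsets of participants are correlated against the full-sample scores:

```python
import numpy as np

rng = np.random.default_rng(3)
n_subj, n_items = 40, 60

# Fabricated ratings: shared latent acceptability plus response noise.
latent = rng.normal(0, 1, n_items)
ratings = latent + rng.normal(0, 1.2, (n_subj, n_items))
full_scores = ratings.mean(axis=0)

def reliability_at(k, reps=200):
    """Median correlation between full-sample item scores and scores
    recomputed from k randomly subsampled participants."""
    rs = []
    for _ in range(reps):
        idx = rng.choice(n_subj, size=k, replace=False)
        rs.append(np.corrcoef(ratings[idx].mean(axis=0), full_scores)[0, 1])
    return float(np.median(rs))

for k in (5, 10, 20, 40):
    print(k, round(reliability_at(k), 3))
```

Reading off the smallest k that reaches a target correlation gives a rough sample-size estimate for a planned study using the simulated measure.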

Our final factor of interest is the agreement between measures. This is of interest not only because substantial agreement suggests that the measures reflect genuine acceptability judgments rather than superficial measure-specific behavior, but also because such agreement provides converging evidence about the nature of those judgments. Cross-measure agreement is better studied than reliability (Schütze 2011; Weskott & Fanselow 2011; Sprouse & Almeida 2012), but still has not been investigated within the full array of measures we consider. It is therefore valuable as a replication and extension of previous work.

4 Summary and conclusions

Our main focus in this work is the test-retest reliability survey of the most common tasks used to measure sentence acceptability. All tasks considered here showed high reliability, with even the least reliable measure producing large positive correlations across re-test data sets. By contrasting within-participant reliability with between-participant reliability on the same sentences with the same measures, we estimated what proportion of the variability observed can be attributed to factors unique to the between-participant replication. In all cases between-participant reliability was lower, and this reliability drop was particularly pronounced for ME and RANDOM PAIRS, suggesting these measures are particularly vulnerable to variability across people or how items are paired together. The TARGET PAIRS and LIKERT ratings showed not only the highest within-participant reliability but also the least decrease in reliability when comparing between- to within-participant correlations. This pattern is a hallmark of well-calibrated measurement instruments.

Secondly, we ask to what extent acceptability estimates depend on the particular assumptions of each measurement tool, and whether the conclusions a researcher would reach would change based on the measurement task they used. Here we find high consistency between measures, including near-uniform agreement with expert judgment. The least accurate global score (RANDOM PAIRS) was still highly correlated (r > .9) with the most accurate global score (LIKERT). Where disagreements occurred between the measures, it was usually in the magnitude rather than the direction of the difference, with the less reliable scores more likely to fail to reject the null for closely matched pairs.

This overall consistency is striking given the structural differences between these tasks, especially between the LIKERT and THURSTONE tasks. Both these measurement tasks incorporate strong assumptions, and in different domains have not always agreed with each other (Roberts et al. 1999; Drasgow et al. 2010).

Specifically, the assumptions made by the LIKERT task center around people’s interpretation of the scale, which may impose structure on responses (Schütze 1996; Carifio & Perla 2008) or be vulnerable to differences in response style (Lee et al. 2002; Johnson 2003). The THURSTONE measure avoids these issues by removing the researcher-supplied scale and forcing a discrete choice, but instead assumes transitivity of acceptability, which is known to be violated in similar preference-ranking tasks (Tversky 1969). Such violations have been observed in sentence acceptability judgments (Danks & Glucksberg 1970; Hindle & Ivan 1975).
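Transitivity is also straightforward to audit in raw choice data. The toy check below (hypothetical sentences s1–s4) enumerates intransitive triads of majority preferences:

```python
from itertools import permutations

# Hypothetical majority preferences from a forced-choice task:
# beats[a] holds every sentence that a was preferred to.
beats = {
    "s1": {"s2", "s4"},
    "s2": {"s3", "s4"},
    "s3": {"s1", "s4"},  # s3 > s1 closes a cycle with s1 > s2 > s3
    "s4": set(),
}

def intransitive_triads(beats):
    """Return (a, b, c) triples with a > b, b > c, but c > a.
    Each cycle is reported once per rotation."""
    return [(a, b, c)
            for a, b, c in permutations(beats, 3)
            if b in beats[a] and c in beats[b] and a in beats[c]]

print(intransitive_triads(beats))
```

A high count of such triads in real choice data would flag exactly the kind of transitivity violation that threatens the Thurstonian assumption.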

A core contribution of this paper is that these measures provide converging evidence in the domain of sentence acceptability: theoretically motivated concerns about the restrictions a fixed LIKERT response scale imposes on participants turn out not to matter in practice, with the scale-free THURSTONE measure based on choice task data arriving at essentially identical acceptability estimates. Although the LIKERT and THURSTONE acceptability scores agree, LIKERT scores are marginally more reliable and have the advantage of more easily accepting additional sentences into an existing set of comparisons.

Despite the close agreement between measures, TARGET PAIRS stands out with respect to decision reliability. It showed the highest power, yielding very few null results, but as a consequence it was also the only measure vulnerable to complete reversals of a significant decision. This pattern is characteristic of high-powered tests under high-noise/low-information conditions, where significant differences tend to come with exaggerated estimates of effect size (Loken & Gelman 2017). While TARGET PAIRS is the highest-performing measure in terms of test-retest consistency, and maintains this performance at small sample sizes, the relatively few errors it does produce at low sample sizes can be of a qualitatively different and potentially much more misleading kind. Relatedly, TARGET PAIRS had by far the highest disagreement with the informal expert ratings of any measure, endorsing the informally dispreferred sentence on 14 items (9.3%), whereas the other measures endorsed at most two. Researchers using the TARGET PAIRS measure should therefore include multiple pairs of target sentences within the same construct to increase decision reliability.
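
The significance-filter mechanism behind such reversals can be reproduced in a few lines. This is our own toy simulation of the Loken & Gelman (2017) point, not an analysis from the study: with a small true difference and a small sample, the estimates that reach significance both exaggerate the effect and occasionally carry the wrong sign.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_diff, sigma, n = 0.2, 1.0, 10        # small true difference, small sample
sig_estimates = []
for _ in range(5000):
    a = rng.normal(true_diff, sigma, n)   # ratings for the truly better sentence
    b = rng.normal(0.0, sigma, n)         # ratings for the worse sentence
    t, p = stats.ttest_ind(a, b)
    if p < 0.05:                          # keep only "significant" experiments
        sig_estimates.append(a.mean() - b.mean())

sig = np.array(sig_estimates)
print(f"mean significant estimate: {sig.mean():.2f} (true effect {true_diff})")
print(f"sign errors among significant results: {(sig < 0).mean():.1%}")
```

The significant estimates average several times the true effect, and a nontrivial fraction point in the wrong direction entirely, which is exactly the "qualitatively different" error mode described above.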

We find that ME tasks produce acceptability scores that are consistent with the other measures but somewhat less reliable. Contrasting within- and between-participant test-retest reliability suggests that this greater variability is largely due to variation in participant response styles, which appears as noise in the final measure. This source of variability can be mitigated somewhat by processing the scores with a transformation sensitive to response style, such as the z-transform. However, this is less effective than restricting responses in the task itself, as the LIKERT and THURSTONE measures do. In general, although ME measures performed better than we expected, they were still consistently inferior to most of the alternatives.
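
A minimal sketch of the response-style correction mentioned above (the specific numbers and the linear response-style model are our illustrative assumptions): z-scoring each participant's magnitude estimates before averaging removes idiosyncratic location and spread in scale use.

```python
import numpy as np

rng = np.random.default_rng(2)
truth = np.array([-1.0, -0.3, 0.4, 1.2])   # latent acceptability of 4 sentences
n_participants = 30

# Each participant reports a * truth + b + noise: their own magnitude scale.
a = rng.uniform(0.5, 3.0, n_participants)   # idiosyncratic spread
b = rng.uniform(5.0, 50.0, n_participants)  # idiosyncratic baseline
ratings = a * truth[:, None] + b + rng.normal(0, 0.3, (len(truth), n_participants))

# Per-participant z-transform: subtract each participant's mean, divide by
# their standard deviation, then average across participants.
z = (ratings - ratings.mean(axis=0)) / ratings.std(axis=0)
scores_z = z.mean(axis=1)
print(np.round(scores_z, 2))   # ordering matches `truth`; scale effects removed
```

The correction handles linear differences in scale use, but, as noted above, it cannot remove response-style variance as thoroughly as a task that constrains responses in the first place.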

Although we expect these results to be indicative of the relative test-retest reliability of these measures, the particular reliability results we observed can be expected to depend to some extent on factors such as the specific sentences and the number of trials per participant, which were controlled across measures to ensure the comparisons were fair. For the rating tasks, reliability can be expected to be a function of the number of trials per item, so the analysis over participant sample sizes gives some indication of how reliability might be expected to change with different sentence set sizes. The situation is less clear for the THURSTONE and RANDOM PAIRS measures, which may be sensitive to the diversity of contrasts presented as well as the average number of presentations per sentence. By choosing to hold the set of sentences constant we ensured that each measure was tested on the same range of effect sizes, but this does limit the generalizability of our reliability results. However, we believe these 150 sentences are representative of the kinds of sentences commonly used for sentence grammaticality judgments.

Although individually these measures make a range of assumptions that could be considered strong limitations, the high agreement between them suggests that these measure-specific assumptions do not have a strong impact on acceptability judgments. We find that if multiple items targeting the same contrast are used, none of the methods considered here have an appreciable chance of giving a strongly misleading result (although there are differences in efficiency, with ME measures requiring more trials for any given level of reliability).

While we find that the most common measurement tasks are all reasonably effective, the LIKERT task performed especially well. In addition to achieving relatively high test-retest reliability, our results also suggest that the LIKERT measure admits a stronger interpretation of sentence acceptability scores than is usually attributed to it. Our findings suggest that the interpretation of LIKERT data need not be constrained by concerns that the limited response scale may impose structure on the data, or that the subjective distance between response options is unknown and may vary between people. The structure suggested by the LIKERT data is in high agreement with the structure suggested by the THURSTONE measure. Since the latter is both agnostic about the underlying structure of acceptability and capable of recovering various clustered or gradient but non-linear distributions of acceptability, this high agreement suggests that the nature of the LIKERT scale is not significantly shaping the structure of acceptability judgments it yields. The minimal difference between within-participant test-retest reliability and between-participant test-retest reliability suggests that the z-transformation offers effective protection against potential differences in the interpretation of the scale.

One interesting aspect of our results hinges on the fact that our dataset involved only one item per effect. This was intentional: it made the item set maximally variable and offered a stronger test of each measure. That many of these measures can reliably reflect global acceptability, rather than just effect-level acceptability, is gratifying and reassuring. It is also interesting that our item-level reliability is so high, differing from other work measuring effect-level reliability primarily in yielding slightly higher numbers of null decisions at lower sample sizes (Sprouse et al. 2013; Häussler et al. 2016; Mahowald et al. 2016). Aside from this, we found item-level reliability nearly as good as effect-level reliability incorporating multiple items. Taken together with the high item-level variability observed around effects in other work (Sprouse et al. 2013), this may suggest that people are surprisingly consistent on specific items, but that the effect-level phenomena within any given item can at least sometimes be obscured by lexical choices or other superficial differences between sentences.

In terms of design recommendations for researchers interested in efficiently obtaining results that replicate with high confidence, we replicate previous results pertaining to the reliability of effects defined as ordinal relationships between sentence classes and extend them to include recommendations for ensuring the reliability of distances between individual items. We reproduce here both the general finding that acceptability judgments are highly reliable in between-participant replications (Sprouse & Almeida 2012; Sprouse et al. 2013), and also more detailed claims such as the high power of TARGET PAIRS (Schütze & Sprouse 2014; Sprouse & Almeida 2017), the lack of extra information in the extra variability of ME ratings (Weskott & Fanselow 2011), and the qualitative relationship between decision reliability and sample size (Mahowald et al. 2016). We further show that these reliability results extend to estimation analyses, with a high correlation in the acceptability scores assigned by different tasks to different sentences.

Overall, our work demonstrates that formal acceptability results are even more informative than previously realized. They agree substantially with each other (as well as with informal measures) across the global structure of acceptability, not just on individual targeted sentence pairs. Moreover, the best-performing measures (like LIKERT and THURSTONE) appear not to impose substantial structure of their own onto the pattern of acceptability responses. This licenses us to use acceptability judgments to address a wider variety of questions than we have previously been able to: from identifying dialectal or language differences (or possibly even individual fluency), to investigating the global structure of grammatical knowledge (e.g., is it all-or-none or multi-dimensional?). Not all of these questions may pan out, but it is exciting to think that the formal tools we have developed for evaluating targeted sentence pairs may have something to say about them as well.

Arppe, Antti & Juhani Järvikivi. 2007. Every method counts: Combining corpus-based and experimental evidence in the study of synonymy. Corpus Linguistics and Linguistic Theory 3(2). 131–159. DOI:

Bard, Ellen Gurman, Dan Robertson & Antonella Sorace. 1996. Magnitude estimation of linguistic acceptability. Language, 32–68. DOI:

Basilico, David. 2003. The topic of small clauses. Linguistic Inquiry 34(1). 1–35. DOI:

Borg, Ingwer & Patrick J. F. Groenen. 2005. Modern multidimensional scaling: Theory and applications. Berlin: Springer Science & Business Media.

Brandt, Mark J., Hans IJzerman, Ap Dijksterhuis, Frank J. Farach, Jason Geller, Roger Giner-Sorolla, James A. Grange, Marco Perugini, Jeffrey R. Spies & Anna Van’t Veer. 2014. The replication recipe: What makes for a convincing replication? Journal of Experimental Social Psychology 50. 217–224. DOI:

Carifio, James & Rocco Perla. 2008. Resolving the 50-year debate around using and misusing Likert scales. Medical Education 42(12). 1150–1152. DOI:

Chomsky, Noam. 1965. Aspects of the theory of syntax. Cambridge, MA: MIT Press.

Cornips, Leonie & Cecilia Poletto. 2005. On standardising syntactic elicitation techniques (part 1). Lingua 115(7). 939–957. DOI:

Cowart, Wayne. 1997. Experimental syntax: Applying objective methods to sentence judgments. Thousand Oaks, CA: Sage Publications.

Culbertson, Jennifer & Steven Gross. 2009. Are linguists better subjects? The British Journal for the Philosophy of Science 60(4). 721–736. DOI:

Cumming, Geoff & Robert Maillardet. 2006. Confidence intervals and replication: Where will the next mean fall? Psychological Methods 11(3). 217. DOI:

Dąbrowska, Ewa. 2010. Naive v. expert intuitions: An empirical study of acceptability judgments. The Linguistic Review 27(1). 1–23. DOI:

Danks, Joseph H. & Sam Glucksberg. 1970. Psychological scaling of linguistic properties. Language and Speech 13(2). 118–138. DOI:

Danziger, Kurt. 1980. The history of introspection reconsidered. Journal of the History of the Behavioral Sciences 16(3). 241–262. DOI:

Davison, Anthony C. & David V. Hinkley. 1997. Bootstrap Methods and Their Applications. Cambridge: Cambridge University Press.

DeVellis, Robert F. 2016. Scale development: Theory and applications 26. Thousand Oaks, CA: Sage publications.

Drasgow, Fritz, Oleksandr S. Chernyshenko & Stephen Stark. 2010. 75 years after Likert: Thurstone was right! Industrial and Organizational Psychology 3(4). 465–476. DOI:

Ennis, Daniel M. 2016. Thurstonian Models: Categorical Decision Making in the Presence of Noise. Richmond, VA: The Institute for Perception.

Erlewine, Michael Yoshitaka & Hadas Kotek. 2016. A streamlined approach to online linguistic surveys. Natural Language & Linguistic Theory 34(2). 481–495. DOI:

Fabrigar, Leandre R. & Jung-Eun Shelly Paik. 2007. Thurstone Scales. In Neil Salkind (ed.), Encyclopedia of Measurement and Statistics, 1003–1005. SAGE publications.

Featherston, Sam. 2005. Magnitude estimation and what it can do for your syntax: Some wh-constraints in German. Lingua 115(11). 1525–1550. DOI:

Featherston, Sam. 2007. Data in generative grammar: The stick and the carrot. Theoretical Linguistics 33(3). 269–318. DOI:

Featherston, Sam. 2008. Thermometer judgments as linguistic evidence. Was ist linguistische Evidenz, 69–89.

Featherston, Sam. 2009. Relax, lean back, and be a linguist. Zeitschrift für Sprachwissenschaft 28(1). 127–132. DOI:

Fukuda, Shin, Grant Goodall, Dan Michel & Henry Beecher. 2012. Is Magnitude Estimation worth the trouble? In Jaehoon Choi, E. Allan Hogue, Jeffrey Punske, Deniz Tat, Jessamyn Schertz & Alex Trueman (eds.), Proceedings of the 29th West Coast Conference on Formal Linguistics, 328–336.

Gelman, Andrew & Francis Tuerlinckx. 2000. Type S error rates for classical and Bayesian single and multiple comparison procedures. Computational Statistics 15(3). 373–390. DOI:

Gibson, Edward & Evelina Fedorenko. 2013. The need for quantitative methods in syntax and semantics research. Language and Cognitive Processes 28(1–2). 88–124. DOI:

Gibson, Edward, Steven T. Piantadosi & Evelina Fedorenko. 2013. Quantitative methods in syntax/semantics research: A response to sprouse and almeida (2013). Language and Cognitive Processes 28(3). 229–240. DOI:

Hartley, James. 2014. Some thoughts on Likert-type scales. International Journal of Clinical and Health Psychology 14(1). 83–86. DOI:

Häussler, Jana & Thomas Juzek. 2016. Detecting and discouraging noncooperative behavior in online experiments using an acceptability judgment task. In Hanna Christ, Daniel Klenovšak, Lukas Sönning & Valentin Werner (eds.), A blend of MaLT: Selected contributions from the methods and linguistic theories symposium 2015, 15. 73–100.

Häussler, Jana, Thomas Juzek & Tom Wasow. 2016. Unsupervised prediction of acceptability judgements. In Patrick Farrell (ed.), To be grammatical or not to be grammatical – is that the question. Annual Meeting of the Linguistic Society of America.

Hindle, Donald & Ivan Sag. 1975. Some more on anymore. In Ralph Fasold & Roger Shuy (eds.), Analyzing variation in language: Papers from the second colloquium on new ways of analyzing variation, 89–110.

Hofmeister, Philip, T. Florian Jaeger, Inbal Arnon, Ivan A. Sag & Neal Snider. 2013. The source ambiguity problem: Distinguishing the effects of grammar and processing on acceptability judgments. Language and Cognitive Processes 28(1–2). 48–87. DOI:

Johnson, Keith. 2011. Quantitative methods in linguistics. Manchester, MI: John Wiley & Sons.

Johnson, Timothy R. 2003. On the use of heterogeneous thresholds ordinal regression models to account for individual differences in response style. Psychometrika 68(4). 563–583. DOI:

Juzek, Thomas. 2015. Acceptability judgement tasks and grammatical theory. Oxford: University of Oxford PhD thesis.

Keller, Frank. 2003. A psychophysical law for linguistic judgments. In Richard Alterman & David Kirsh (eds.), Proceedings of the 25th annual conference of the cognitive science society, 652–657.

Keller, Frank & Ash Asudeh. 2001. Constraints on linguistic co-reference: Structural vs. pragmatic factors. In Johanna Moore & Keith Stenning (eds.), Proceedings of the 23rd annual conference of the cognitive science society, 483–488.

Kline, Paul. 2013. Handbook of psychological testing. Routledge.

Lau, Jey Han, Alexander Clark & Shalom Lappin. 2016. Grammaticality, acceptability, and probability: A probabilistic view of linguistic knowledge. Cognitive Science.

Lee, Jerry, Patricia Jones, Yoshimitsu Mineyama & Xinwei Esther Zhang. 2002. Cultural differences in responses to a Likert scale. Research in Nursing & Health 25(4). 295–306. DOI:

Li, Linjie, Vicente Malave, Amanda Song & Angela Yu. 2016. Extracting human face similarity judgments: Pairs or triplets? Journal of Vision 16(12). 719–719. DOI:

Likert, Rensis. 1932. A technique for the measurement of attitudes. Archives of Psychology 140. 44–60.

Linzen, Tal & Yohei Oseki. 2015. The reliability of acceptability judgments across languages. Manuscript, New York University.

Loken, Eric & Andrew Gelman. 2017. Measurement error and the replication crisis. Science 355(6325). 584–585. DOI:

Mahowald, Kyle, Peter Graff, Jeremy Hartman & Edward Gibson. 2016. SNAP judgments: A small N acceptability paradigm (SNAP) for linguistic acceptability judgments. Language 92(3). 619–635. DOI:

Miller, Brent, Pernille Hemmer, Mark Steyvers & Michael D. Lee. 2009. The wisdom of crowds in rank ordering problems. In Andrew Howes, David Peebles & Richard Cooper (eds.), 9th International conference on cognitive modeling, 86–91. Manchester: ICCM.

Munro, Robert, Steven Bethard, Victor Kuperman, Vicky Tzuyin Lai, Robin Melnick, Christopher Potts, Tyler Schnoebelen & Harry Tily. 2010. Crowdsourcing and language studies: The new generation of linguistic data. In Jon Weese (ed.), Proceedings of the NAACL HLT 2010 workshop on creating speech and language data with Amazon’s Mechanical Turk, 122–130.

Murphy, Brian, Carl Vogel & Conny Opitz. 2006. Cross-linguistic empirical analysis of constraints on passive. In Presentation to the symposium on interdisciplinary themes in cognitive language research. Helsinki: Finnish Cognitive Linguistics Association.

Myers, James. 2009. The design and analysis of small-scale syntactic judgment experiments. Lingua 119(3). 425–444. DOI:

Myers, James. 2012. Testing adjunct and conjunct island constraints in Chinese. Language and Linguistics 13(3). 437.

Nosofsky, Robert M. 1992. Similarity scaling and cognitive process models. Annual Review of Psychology 43(1). 25–53. DOI:

Noveck, Ira & Anne Reboul. 2008. Experimental pragmatics: A Gricean turn in the study of language. Trends in Cognitive Sciences 12(11). 425–431. DOI:

O’Mahony, Michael et al. 2003. Discrimination testing: A few ideas, old and new. Food Quality and Preference 14(2). 157–164. DOI:

Phillips, Colin. 2009. Should we impeach armchair linguists? Japanese/Korean Linguistics 17. 49–64.

Phillips, Colin & Howard Lasnik. 2003. Linguistics and empirical evidence: Reply to edelman and christiansen. Trends in Cognitive Sciences 7(2). 61–62. DOI:

Podesva, Robert & Devyani Sharma. 2014. Research methods in linguistics. Cambridge: Cambridge University Press.

Porte, Graeme. 2013. Who needs replication? CALICO Journal 30(1). 10–15. DOI:

Rensink, Ronald, Kevin O’Regan & James Clark. 1997. To see or not to see: The need for attention to perceive changes in scenes. Psychological Science 8(5). 368–373. DOI:

Roberts, James, James Laughlin & Douglas Wedell. 1999. Validity issues in the Likert and Thurstone approaches to attitude measurement. Educational and Psychological Measurement 59(2). 211–233. DOI:

Rosenbach, Anette. 2003. Aspects of iconicity and economy in the choice between the s-genitive and the of-genitive in english. Topics in English Linguistics 43. 379–412. DOI:

Sampson, Geoffrey. 2007. Grammar without grammaticality. Corpus Linguistics and Linguistic Theory 3(1). 1–32. DOI:

Schütze, Carson. 1996. The empirical base of linguistics: Grammaticality judgments and linguistic methodology. Chicago, IL: University of Chicago Press.

Schütze, Carson. 2011. Linguistic evidence and grammatical theory. Wiley Interdisciplinary Reviews: Cognitive Science 2(2). 206–221. DOI:

Schütze, Carson & Jon Sprouse. 2014. Judgment data. In Robert Podesva & Devyani Sharma (eds.), Research methods in linguistics, chap. 3. 27–51. Cambridge: Cambridge University Press.

Selker, Ravi, Michael D. Lee & Ravi Iyer. 2017. Thurstonian cognitive models for aggregating top-n lists. Decision 4(2). 87. DOI:

Shalom, Dorit Ben & David Poeppel. 2007. Functional anatomic models of language: Assembling the pieces. The Neuroscientist.

Simons, Daniel & Ronald Rensink. 2005. Change blindness: Past, present, and future. Trends in Cognitive Sciences 9(1). 16–20. DOI:

Sorace, Antonella. 2010. Using magnitude estimation in developmental linguistic research. In Elma Blom & Sharon Unsworth (eds.), Experimental methods in language acquisition research, 57–72. Amsterdam: John Benjamins. DOI:

Sorace, Antonella & Frank Keller. 2005. Gradience in linguistic data. Lingua 115(11). 1497–1524. DOI:

Spencer, Nancy Jane. 1973. Differences between linguists and nonlinguists in intuitions of grammaticality-acceptability. Journal of Psycholinguistic Research 2(2). 83–98. DOI:

Sprouse, Jon. 2007. Continuous acceptability, categorical grammaticality, and experimental syntax. Biolinguistics 1. 123–134.

Sprouse, Jon. 2008. Magnitude estimation and the non-linearity of acceptability judgments. In Natasha Abner & Jason Bishop (eds.), Proceedings of the 27th West Coast Conference on Formal Linguistics, 397–403. Somerville, MA: Cascadilla Press.

Sprouse, Jon. 2011a. A test of the cognitive assumptions of magnitude estimation: Commutativity does not hold for acceptability judgments. Language 87(2). 274–288. DOI:

Sprouse, Jon. 2011b. A validation of Amazon Mechanical Turk for the collection of acceptability judgments in linguistic theory. Behavior Research Methods 43(1). 155–167. DOI:

Sprouse, Jon & Carson Schütze. 2017. Grammar and the use of data. In Bas Aarts, Jill Bowie & Gergana Popova (eds.), The Oxford handbook of English grammar, chap. 3. Oxford University Press.

Sprouse, Jon, Carson Schütze & Diogo Almeida. 2013. A comparison of informal and formal acceptability judgments using a random sample from Linguistic Inquiry 2001–2010. Lingua 134. 219–248. DOI:

Sprouse, Jon & Diogo Almeida. 2011. Power in acceptability judgment experiments and the reliability of data in syntax. Manuscript, University of California, Irvine & Michigan State University.

Sprouse, Jon & Diogo Almeida. 2012. Assessing the reliability of textbook data in syntax: Adger’s Core Syntax. Journal of Linguistics 48(03). 609–652. DOI:

Sprouse, Jon & Diogo Almeida. 2017a. Design sensitivity and statistical power in acceptability judgment experiments. Glossa: a Journal of General Linguistics 2(1). 14.

Sprouse, Jon & Diogo Almeida. 2017b. Setting the empirical record straight: Acceptability judgments appear to be reliable, robust, and replicable. Behavioral and Brain Sciences 40. DOI:

Stevens, Stanley Smith. 1956. The direct estimation of sensory magnitudes: Loudness. The American Journal of Psychology, 1–25. DOI:

Thurstone, Louis. 1927. A law of comparative judgment. Psychological Review 34(4). 273. DOI:

Tversky, Amos. 1969. Intransitivity of preferences. Psychological Review 76(1). 31. DOI:

Wasow, Thomas & Jennifer Arnold. 2005. Intuitions in linguistic argumentation. Lingua 115(11). 1481–1496. DOI:

Weskott, Thomas & Gisbert Fanselow. 2008. Variance and informativity in different measures of linguistic acceptability. In Natasha Abner & Jason Bishop (eds.), Proceedings of the 27th West Coast Conference on Formal Linguistics, 431–439. Somerville, MA: Cascadilla Press.

Weskott, Thomas & Gisbert Fanselow. 2009. Scaling issues in the measurement of linguistic acceptability. The Fruits of Empirical Linguistics 1. 229–245. DOI:

Weskott, Thomas & Gisbert Fanselow. 2011. On the informativity of different measures of linguistic acceptability. Language 87(2). 249–273. DOI:



Häubl G., Dellaert B., Donkers B. 2010. Tunnel vision: Local behavioral influences on consumer decisions in product search. Marketing Science 29 (3): 438-455.

Hossan C. 2012. Sustainability and growth of low cost airlines: An industry analysis in global perspective. American Journal of Industrial and Business Management 1 (3): 162-171.

Hussain R., Al Nasser A., Hussain Y. 2015. Service quality and customer satisfaction of a UAE-based airline: An empirical investigation. Journal of Air Transport Management 42 (1): 167-175.

Ionides N. 2004. Three Thai carriers sound no-frills fanfare for Asia. Airline Business 20 (1): 19-25.

Iyengar S., Lepper M. 2000. When choice is demotivating: Can one desire too much of a good thing? Journal of Personality and Social Psychology 79 (6): 995-1006.

Johnson M., Christensen C. Kagermann H. 2008. Reinventing your business model. Harvard Business Review 86 (12): 57-60.

Kissling C. 1998. Liberal aviation agreements – New Zealand. Journal of Air Transport Management 4 (3): 177-180.

Kling J., Mullainathan S., Shafir E., Vermeulen L., Wrobel M. 2011. Misprediction in Choosing Medicare Drug Plans .Harvard University Press: Cambridge.

Klophaus R., Conrady R., Fichert F. 2012. Low cost carriers going hybrid: Evidence from Europe. Journal of Air Transport Management 23 : 54-58.

Laming C., Mason K. 2014. Customer experience – An analysis of the concept and its performance in airline brands. Research in Transportation Business & Management 10 : 15–25.

Lampel J., Mintzberg H. 1996. Customizing customization. Sloan Management Review 38 (1): 21-30.

Lawton T. 2002. Cleared for Take OV: Structure and Strategy in Low Fare Airline Business . Ashgate: Aldershot.

Levav J., Heitmann M., Herrmann A., Iyengar S. 2010. Order in product customization decisions: evidence from field experiments. Journal of Political Economy 118 (1): 274-299.

Liou J., Tzeng G.-H. 2007. A non-additive model for evaluating airline service quality. Journal of Air Transport Management 13 (3): 131-138.

Lohmann G., Koo T. 2013. The airline business model spectrum. Journal of Air Transport Management 31 (1): 7-9.

Madrian B., Shea D. 2001. The power of suggestion: inertia in 401 (k) participation and savings behavior. Quarterly Journal of Economics 116 (4): 1149-1187.

Martín-Consuegra D., Molina A., Esteban Á. 2007. An integrated model of price, satisfaction and loyalty: An empirical analysis in the service sector. Journal of Product & Brand Management 16 (7): 459-468.

Mayer R., Ryley T., Gillingwater D. 2015. Eco-positioning of airlines: Perception versus actual performance. Journal of Air Transport Management 44-45 : 82-89.

Murphy P., Pritchard M., Smith B. 2000. The destination product and its impact on traveler perceptions. Tourism Management 21 (1): 43–52.

O’Connell J. 2011. The rise of the Arabian Gulf carriers: An insight into the business model of Emirates Airline. Journal of Air Transport Management 17 (6): 339-346.

Oum T., Zhang A., Zhang Y. 1993. Inter-firm rivalry and firm-specific price elasticities in the deregulated airline markets. Journal of Transport Economics and Policy 27 (2): 171-192.

Pakdil F., Aydın O. 2007. Expectations and perceptions in airline services: An analysis using weighted SERVQUAL scores. Journal of Air Transport Management 13 (4): 229-237.

Parasuraman A., Zeithaml V., Berry L. 1988. SERVQUAL a multiple-item scale for measuring consumer perceptions of service quality. Journal of Retailing 64 (1): 12-40.

Park J.-W., Robertson R., Wu C.-L. 2004. The effect of airline service quality on passengers ’behavioural intentions: A Korean case study. Journal of Air Transport Management 10 (6): 435-439.

Payne J., Bettman J., Johnson E. 1992. Behavioral decision research: A constructive processing perspective. Annual Review of Psychology 43 : 87-131.

Peters E., Västfjäll D., Slovic P., Mertz C., Mazzocco K., Dickert S. 2006. Numeracy and decision making. Psychological Science 17 (5): 407-413.

Redelmeier D., Shafir E. 1995. Medical decision making in situations that offer multiple alternatives. JAMA: The Journal of the American Medical Association 273 (4): 302-305.

Rohwer G. 2010. Qualitative comparative analysis: A discussion of interpretations. European Sociological Review 27 (6): 728-740.

Roswarski T., Murray M. 2006. Supervision of students may protect academic Physicians from cognitive bias: A study of decision making and multiple treatment alternatives in medicine. Medical Decision Making 26 (2): 154-161.

Sagara N. 2009. Consumer Understanding and Use of Numeric Information in Product Claims . Doctoral dissertation. University of Oregon.

Schkade D., Kahneman D. 1998. Does living in California make people happy? A focusing illusion in judgments of life satisfaction. Psychological Science 9 (5): 340-346.

Simon H. 1955. A behavioral model of rational choice. Quarterly Journal of Economics 69 (1): 99-118.

Sweeney J., Soutar G. 2001. Consumer perceived value: The development of a multiple item scale. Journal of Retailing 77 (2): 203-220.

Tanius B., Wood S., Hanoch Y., Rice T. 2009. Aging and choice: Applications to Medicare Part D. Judgment and Decision Making 4 (1): 92-101.

Tversky A., Kahneman D. 1974. Judgment under uncertainty: Heuristics and biases. Science 185 : 1124-1131.

Whyte R., Lohmann G. 2015. Low-cost long-haul carriers: A hypothetical analysis of a “Kangaroo route”. Case Studies on Transport Policy 3 (2): 159-165.

Windle R., Dresner M. 1999. Competitive responses to low cost carrier entry. Transportation Research Part E: Logistics and Transportation Review 35 (1): 59–75.

Yang K., Meho L. 2006. Citation analysis: A comparison of Google Scholar, Scopus, and Web of Science. Proceedings of the American Society for Information Science and Technology 43 (1): 1-15.

Zeithaml V.1988. Consumer perceptions of price, quality, and value: A means-end model and synthesis of evidence. Journal of Marketing 52 (3): 2-22.

Zou L., Chen X. 2017. The effect of code-sharing alliances on airline profitability. Journal of Air Transport Management 58 : 50-57.

Translation of references in Russian into English

Khryseva A.A., Chekalova A. A. 2017. Priority areas of improving airline’s service on the market of air passenger transportation. Izvestija Volgogradskogo Gosudarstvennogo Tehnicheskogo Universiteta 7 : 54–57. (In Russian)

Khudyakov Yu. G., Nikolaykin N. I. 2009. The kinds of risks and feature of their display in the aviatransport service given by airline. Nauchnyj Vestnik Moskovskogo Gosudarstvennogo Tehnicheskogo Universiteta Grazhdanskoj Aviacii 149 (1): 13-17.(In Russian)

Special offer for clients

The essence of the offer is as follows: we take one of your themes, products, or promotions and push it to the target audience through four different channels (mailings, the magazine, the website/listing, and an online conference) in order to generate more leads for you. By adding contextual advertising, digital, and SMM under a single syndicated topic, we achieve the widest possible reach and the maximum number of contacts (touches) with the audience.

Upcoming key topics for July–August
and in the journal “Security Systems” No. 4/2021:

1. Smart home, digital housing and utilities.
2. Intercom systems.
3. ASaaS: cloud technologies in access control systems (ACS).
4. Addressable wired fire alarm systems.

Natalya Matlakhova, Project Manager “Security Systems”
[email protected], +7 (910) 412-7364

* Hereinafter, prices are exclusive of VAT
** Each type of service can be ordered separately; the package provides the maximum effect
*** Offer valid until July 1, 2021

Our advantage is a reliable, professional, and loyal audience, a comprehensive presentation of information via digital channels, and the possibility of lead generation.

Now, more about each service:

1. News on the Secuteck.Ru website
Publication of press releases, news items, event announcements, and information about company promotions on the Secuteck.Ru website (88,488 unique visitors over three months, data for October–December 2020).

News items are automatically included in the weekly Secuteck Weekly newsletter (1,500+ subscribers) and posted to the Facebook and VKontakte social networks.

2. Individual mailing

Information about your company is sent to unique databases: magazine subscribers, Secuteck Weekly subscribers, visitors to the Secuteck.Ru website, and attendees of the Security Technologies and All-over-IP forums.

The core of the audience is 48,000 people: event participants and magazine subscribers. The audience can be segmented and targeted.

3. Online product overview

We identify a topic relevant to customers, conduct research, prepare a set of expert materials and, based on your submissions, compile an online product overview.

We promote the overviews through mailings and contextual advertising
(the site’s annual audience is 303,571 unique visitors).

4. Review of products in the magazine “Security Systems”

We identify a topic relevant to customers, conduct research, prepare a set of expert materials and, based on your requests, compile a product review in the print edition (circulation: 25,000 copies).

The magazine is received personally by the heads of the largest enterprises, who make purchasing decisions, and by technical specialists at enterprises in key sectors of the Russian economy.

5. An article on the site

Our site statistics show that visitors pay the most attention to expert materials whose titles are placed on the banner of the site’s main page.
We keep the title of your article in this position for two weeks.

Annual site audience: 303,571 unique visitors.

6. Online EVENT: speeches and product presentations at online conferences

A new service from Grotek, developed under the conditions of limited mobility, for holding online events and expert presentations in the security systems market.

The format is well suited to positioning a company as an opinion leader; it builds a “warm” audience and feeds the sales funnel with qualified leads.
Events draw from 150 to 500 visitors.

Upcoming key topics

July – August 2021
1. Smart home, digital housing and utilities.
2. Intercom systems.
3. Cloud ACS (access control systems).
4. Addressable wired fire alarm systems.

And more!

In the journal “Security Systems” and on the website we have introduced the “Editorial Advises” section, in which we additionally mention Grotek’s partners in publications and events: on the site with active links, in the magazine in a prominent sidebar. This is another BONUS for our valued partners!

Whatever format of participation you choose, we will assemble for you a high-quality audience that responds specifically to your message.

Journal of comments and suggestions for the conduct of construction and installation works

Download (DOC, 48KB)

Form 1.5

Base: VSN 012-88 (Part II)


Ministry ______________________

Association, trust _________________



Plot ___________________________

Construction: ______________________

Object: ___________________________


Journal
of comments and suggestions on the conduct of
construction and installation works

Commencement of work “___” _________ 20___

Completion of work “___” _________ 20___


Head of section (stream) _____________________ ___________ ____________

(last name, initials) (signature) (date)


No. | Content of comments and suggestions (identified deviations from the design and estimate documentation, violations of the requirements of building codes and regulations for construction and installation works, etc.) | Date of entry | Entry made by (position, organization, surname and initials of the inspecting person) | Acquainted with the entry (date, signature of the person responsible for keeping the journal) | Information on elimination of the remarks | Surname, initials, position and signature of the person checking the journal
1 | 2 | 3 | 4 | 5 | 6 | 7
1 | Not provided for in the PPR | 11.11.2011 | T.N. Tekhnadzorov | Skkashov S.S. | Eliminated 12.11.2011 | Skkashov S.S.

