Themed Sets | ICES Journal of Marine Science
ICES Journal of Marine Science strives to advance marine science by making judicious use of themed article sets. Themed sets are series of coordinated contributions – introduced by a synthetic overview – on a selected topic. Both individually and collectively, themed sets are instrumental in focusing attention, triggering opinions and stimulating ideas, discussion and activity in specific research fields.
If you are interested in submitting papers to be included in the forthcoming themed sets listed below, please review our author guidelines and submit via our submission site.
Impacts of fishing on seabirds
Submissions are now open for the Themed Set on impacts of fishing on seabirds. View the call for papers and submit before 31 March 2022.
Challenges to incentivising avoidance of unwanted catch
Submissions are now open for the Themed Set on challenges to incentivising avoidance of unwanted catch. View the call for papers and submit before 29 November 2021.
Building the knowledge base to support blue growth in small island developing states
Marine zooplankton time series: essential tools to understand variability in productivity-determining processes in the oceans
Exploring adaptation capacity of the world’s oceans and marine resources to climate change
Patterns of biodiversity of marine zooplankton based on molecular analysis
A tribute to the life and accomplishments of Sidney J. Holt
Marine aquaculture in the anthropocene
Marine recreational fisheries – current state and future opportunities
Applications of machine learning and artificial intelligence in marine science
Science in support of a nonlinear non-equilibrium world
Decommissioned offshore man-made installations
Mesopelagic resources – opportunities and risks
Plugging spatial ecology into sustainable fisheries and EBM
Biodiversity Beyond National Jurisdiction
Marine Protected Areas
Billfishes in a Changing World
Beyond Ocean Connectivity
Towards a Broader Perspective on Ocean Acidification Research – Part 2
Case studies in operationalizing ecosystem-based management
Balanced harvest and the ecosystem approach to fisheries
Towards a Broader Perspective on Ocean Acidification Research – Part 1
Marine Mammal Bycatch and Depredation
Revisiting Sverdrup’s Critical Depth Hypothesis
Parameterizing and Operationalizing Zooplankton Production and Trophic Interaction Models
The Value of Coastal Habitats for Exploited Species
Larval Fish Conference
Bycatch and discards: from improved knowledge to mitigation programmes
Fluctuations in the great fisheries of northern Europe – Commemorating 100 years since Hjort’s 1914 treatise.
Marine Harvesting in the Arctic
Using Excel and Benford’s Law to detect fraud
Even if a person fabricates numbers mentally (using his or her brain rather than a computer), there is little reason to believe such a mental exercise would produce results that adhere closely to Benford’s curve. It is more likely that the person would tend to repeat certain patterns, and charting the frequency of the resulting leading digits might reveal those patterns. For example, a person may subconsciously overuse the digits 1, 3, and 4 to produce false data, and underuse the digits 6 and 8. If so, such anomalies would manifest themselves in an erratic bar chart that bears little resemblance to Benford’s curve.
As another example, a bookkeeper writing fictitious checks may intentionally keep the check amounts below the company’s $500 or $1,000 authorization thresholds, and therefore an analysis of those check amounts might show the numbers 4 and 9 occurring more frequently as the leading digits than Benford’s Law would predict.
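The kind of leading-digit comparison described above is straightforward to automate. As a minimal sketch (in Python rather than Excel, using hypothetical check amounts), the following tallies observed leading-digit frequencies and places them alongside the frequencies Benford’s Law predicts:

```python
import math
from collections import Counter

def leading_digit(x):
    """Return the first significant digit of a nonzero number."""
    s = str(abs(x)).lstrip("0.")
    return int(s[0])

def benford_expected(d):
    """Benford's predicted frequency for leading digit d (1-9)."""
    return math.log10(1 + 1 / d)

def digit_frequencies(values):
    """Observed leading-digit frequencies for a list of numbers."""
    counts = Counter(leading_digit(v) for v in values if v != 0)
    total = sum(counts.values())
    return {d: counts.get(d, 0) / total for d in range(1, 10)}

# Hypothetical check amounts kept just under a $500 or $1,000 threshold
checks = [495.00, 487.12, 499.99, 490.50, 410.00, 960.25, 475.00, 930.10]

observed = digit_frequencies(checks)
for d in range(1, 10):
    print(d, round(observed[d], 3), round(benford_expected(d), 3))
```

For these hypothetical amounts, the digits 4 and 9 dominate the observed column, exactly the pattern a sub-threshold check-writing scheme would produce.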
GRADING ON THE CURVE
The results obtained using Benford’s Law analysis should not be considered definitive; the process of counting leading digits will never decidedly prove the absence or presence of fraud. The results obtained from this process are merely an analytical tool that may help the CPA gauge whether additional investigative work is warranted. However, when Benford’s curve fails to materialize, CPAs should step up their efforts to verify the data, as follows.
1. Reconsider the data’s suitability for Benford analysis. Before suspecting fraud, CPAs should reexamine the data set for the possibility of built-in bias toward certain numerals; if bias is found, the results of the Benford analysis should be disregarded.
2. Apply analytical review procedures. If Benford’s predictions do not hold true for a given data set, suspect an anomaly and seek additional assurance that the data set is indeed valid. AU-C Section 520, Analytical Procedures, provides guidance for conducting an analytical review; in brief, the CPA should:
- Consider whether specific unusual transactions or events, accounting changes, business changes, random fluctuations, or misstatements may have impacted the data set.
- Perform a test of transactions to verify the data set. For example, select a sample of data and physically trace the numbers to supporting documentation.
- Compare the data set to prior-year data sets. Investigate significant differences.
- Compare the data set to budgeted or expected amounts, if any. Investigate significant differences.
- Analyze the data set using ratios or relationships, and compare those results to expected ratios or industry averages. For example, financial ratios may be revealing when dealing with financial data sets, while square-footage, per-hour, or per-mileage measurements may be revealing when dealing with statistical data sets.
- Consider using positive, rather than negative, confirmations to verify vendor and customer balances.
- If standard analytical procedures have been applied and produced no evidence of fraud, but the data sets involved still skew significantly from Benford’s expectations, consider expanding the analytical procedures to include larger-than-normal sample sizes and higher confidence levels.
- If the data set involves inventory of any kind, perhaps the physical inspection of a sampling of these inventories is in order. AU-C Section 501, Audit Evidence—Specific Considerations for Selected Items, provides guidance for further investigating inventories.
3. Rethink internal controls. Consider whether reliable controls are in place to detect or prevent improprieties.
4. Consider the source. Reconsider the source from which the data were obtained. Were they produced internally or obtained from an outside source? If from an outside source, inquire about the measures that source used to verify its data.
Another tool for fighting fraud. AU-C Section 240, Consideration of Fraud in a Financial Statement Audit, requires auditors to employ analytical procedures to help detect the existence of unusual transactions or potential fraud. To that end, CPAs are on a constant lookout for new methods and procedures that can help them detect and prevent fraud. As it turns out, fabricating a set of falsified data that conforms to Benford’s Law is a difficult proposition, and many would-be fraudsters are likely unaware of Benford’s Law or of how to construct fraudulent data that abide by its rules. Therefore, this Excel-based Benford’s Law analysis will likely be a handy addition to any CPA’s arsenal of fraud detection tools.
The history of Benford’s Law
The story of Benford’s Law begins in 1881, when astronomer Simon Newcomb noticed that the page numbers in a book of logarithm tables were worn (or smeared) more toward the front of the book and progressively less worn toward the end of the book. Where others would simply dismiss the worn page numbers, Newcomb recognized a distinct pattern related to the occurrence of lower versus higher numbers. He published an article explaining his observations and postulated that the probability of a single number n being the first digit of a number was equal to log(n+1) − log(n). Fifty-seven years later, in 1938, physicist Frank Benford tested Newcomb’s hypothesis against 20 sets of data and published a scholarly paper verifying the law. Despite Newcomb’s groundwork, Benford has garnered much of the credit for the discovery now commonly referred to as Benford’s Law.
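Newcomb’s postulate is easy to verify numerically: the nine first-digit probabilities log(n+1) − log(n) telescope to exactly 1, with the digit 1 expected about 30.1% of the time. A short Python sketch:

```python
import math

# Newcomb's postulate: P(first digit = n) = log10(n + 1) - log10(n)
probs = {n: math.log10(n + 1) - math.log10(n) for n in range(1, 10)}

for n, p in probs.items():
    print(n, f"{p:.4f}")  # digit 1 -> 0.3010, digit 9 -> 0.0458

# The nine probabilities telescope to log10(10) - log10(1) = 1
print(round(sum(probs.values()), 10))  # 1.0
```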
The application of Benford’s Law to spot signs of accounting fraud grew out of an article published in 1972 by economist Hal Varian, who wrote that Benford’s Law might be used to detect the possibility of fraud in socioeconomic data submitted in support of various public planning decisions. Varian’s general idea was that a simple comparison of first-digit frequency distributions ought to reveal anomalous results (if any), per Benford’s Law. In 1999, a JofA article (“I’ve Got Your Number,” May 1999) by Mark J. Nigrini described how forensic accountants and auditors could apply Benford’s Law to search for indicators of potential accounting and expense fraud.
Benford attempted to explain his law by saying that “it’s easier to own one acre than nine acres,” implying (perhaps) that when people purchase land, it is reasonable to assume that more people purchase one acre as a starting point, rather than nine acres as their starting point.
About the author
J. Carlton Collins ([email protected]) is a technology consultant, a conference speaker, and a JofA contributing editor.
To comment on this article or to suggest an idea for another article, contact Jeff Drew, senior editor, at [email protected] or 919-402-4056.
Instructions for Microsoft Excel in this article refer to the 2007 through 2016 versions, unless otherwise specified.
JATS: Journal Publishing Tag Set
The Journal Publishing Tag Set is a moderately prescriptive set, optimized for archives that wish to regularize and control their content rather than accept the sequence and arrangement presented to them by any particular publisher. The Publishing Tag Set is also intended for use by publishers for the initial XML tagging of journal material, usually as converted from an authoring format such as Microsoft Word.
Complete documentation for the Tag Set is available in the Tag Library. Each version has its own Tag Library that documents the rules and usage for that version.
The Tag Library for the most recent release of this Tag Set will always be available at the following URI:
The structure and suggested usage of the Tag Library is described in the How to Use (Read Me First) section of each Tag Library.
The models and constraints in this Tag Set are encoded in several schema languages:
- RELAX NG (RNG), and
- W3C XML Schema (XSD).
These schemas are, to the extent possible, equivalent, and there is no preference for which is used.
Please see the individual Tag Set version for links to that version’s schemas.
Getting the files
All of the schema files are available by anonymous FTP:
Each schema is also available at a stable URI. Please see the individual Tag Set version for those URIs.
We welcome comments on the Tag Suite. If you have comments or suggestions you’d like to share with the NISO Working Group, please visit the NISO website.
JATS Discussion List
The JATS-List, hosted by Mulberry Technologies, Inc., is a mailing list for open discussion of the Journal Article Tag Suite. For more information about the list, visit http://www.mulberrytech.com/JATS/JATS-List/index.html.
The Royal Society sets 75% threshold to ‘flip’ its research journals to Open Access over the next five years
In an exciting new chapter for its scientific publishing, the Royal Society sets out how it will transition its primary research journals to open access and make more of its world-leading research available to all.
Following a review by its Council, the Royal Society has committed to ‘flipping’ the journals Biology Letters, Interface, Proceedings A, and Proceedings B to a fully open access model when 75% of articles are being published open access.
This transition will be driven chiefly by the expansion of Read & Publish agreements with major research institutions, enabling their scientific research output to be published open access in the Society’s journals.
The process is already well underway: the Society launched Royal Society Read & Publish in January 2021 and has pioneered new agreements, including a shared funding arrangement announced this year with the University of California.
“This project is truly a landmark in the history of the Royal Society,” said Dame Wendy Hall DBE FRS FREng, chair of the Royal Society’s Publishing Board. “Just as we pioneered science publishing three and a half centuries ago, I am delighted that we are taking this important step forward to maximise the reach and usefulness of the research we publish.”
To underscore this commitment and to provide an additional compliant route for researchers, the Society will seek “transformative journal” status from cOAlition S, the consortium of research organisations and funders supporting the Plan S open access initiative.
This requires committing to flip the journals to open access at the 75% threshold, to transparent pricing and to an annual increase in the proportion of articles published open access.
The move follows a review of the Society’s publishing strategy involving both Fellows and other experts. It continues the open access journey the Society began in 2006, with the introduction of open access publishing as an option on all articles and the launch of Open Biology in 2011, and Royal Society Open Science in 2014.
“As a Fellowship of some of the world’s leading scientists, the Society supports open access publishing to maximise the dissemination and impact of high-quality scientific research,” said Dr Stuart Taylor, the Royal Society’s Publishing Director.
“Publishing income supports our internationally recognised journals output, as well as the Society’s wider mission to promote scientific excellence for the benefit of all.
“We have seen steady growth in open access publishing and over 40% of our articles are now published open access. With Read & Publish, and other transformative agreements, we expect that trend to accelerate and all our research journals to become fully Open Access within five years.
“The approach set out today ensures we can transition towards a transparent and sustainable open access system while continuing to support our wider work.”
Open access policies across the Society’s journals already comply with all existing funder requirements and the past year has seen further rapid transformation in the Society’s publishing processes.
This includes cross-publisher work to support rapid peer review and publication of research relevant to the COVID-19 pandemic in an open access collection.
In total, the Society publishes 10 journals, including the six which are already open access or part of the OA75 commitment.
The remaining four journals (Philosophical Transactions A and B, the world’s first peer-reviewed journals; Interface Focus; and the history of science journal Notes and Records) will continue to operate on a hybrid model for the time being.
The Society – along with other publishers of journals which commission content directly from authors – recognises that such journals are unlikely to progress to the 75% open access threshold at the same rate, if at all.
The future publishing model for these journals, and the Society’s wider output, will be kept under review as the publishing landscape continues to evolve and new ways of supporting research continue to emerge.
A Citation Analysis of Oceanographic Data Sets
Evaluation of scientific research is becoming increasingly reliant on publication-based bibliometric indicators, which may result in the devaluation of other scientific activities – such as data curation – that do not necessarily result in the production of scientific publications. This issue may undermine the movement to openly share and cite data sets in scientific publications, because researchers are unlikely to devote the effort necessary to curate their research data if they are unlikely to receive credit for doing so. This analysis attempts to demonstrate the bibliometric impact of properly curated and openly accessible data sets by generating citation counts for three data sets archived at the National Oceanographic Data Center. My findings suggest that all three data sets are highly cited, with estimated citation counts in most cases higher than those of 99% of all the journal articles published in Oceanography during the same years. I also find that methods of citing and referring to these data sets in scientific publications are highly inconsistent, despite the fact that a formal citation format is suggested for each data set. These findings have important implications for developing a data citation format, encouraging researchers to properly curate their research data, and evaluating the bibliometric impact of individuals and institutions.
Citation: Belter CW (2014) Measuring the Value of Research Data: A Citation Analysis of Oceanographic Data Sets. PLoS ONE 9(3):
Editor: Howard I. Browman, Institute of Marine Research, Norway
Received: July 24, 2013; Accepted: February 25, 2014; Published: March 26, 2014
This is an open-access article, free of all copyright, and may be freely reproduced, distributed, transmitted, modified, built upon, or otherwise used by anyone for any lawful purpose. The work is made available under the Creative Commons CC0 public domain dedication.
Funding: No funding beyond employment by NODC was received for this work. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The author has declared that no competing interests exist.
In recent years there has been increasing interest in, and use of, bibliometric indicators for the evaluation and ranking of research institutions. Bibliometric indicators feature prominently in global mixed-method ranking schemes such as the Academic Ranking of World Universities and the Times Higher Education ranking. They also feature in national mixed-method research assessment exercises in the UK, Brussels, Italy, and Australia. Other global ranking schemes are based solely on bibliometric indicators. Bibliometric indicators are also often recommended to supplement, or even replace, peer review in evaluating research institutions.
Partially in response to the growing importance of bibliometrics in research evaluation, and partially in response to other factors, there is also a growing movement focused on the development of a standard method of citing data sets in academic publications. Reasons for developing a citation format for data sets include verification of published results, reuse of data sets for additional research purposes, and attribution to data collectors and archivists. Such suggestions have been made in bioinformatics, genetics, the climate sciences, geochemistry, oceanography, the earth sciences, and the multidisciplinary sciences, among others.
Although there is widespread agreement within the movement that a minimum set of information is necessary for a complete data set citation, there seem to be two schools of thought as to how this ought to be accomplished. One school favors a direct citation to the data set as it resides in an established repository. This model was first adopted for nucleotide sequence data sets in the formation of GenBank and adapted for the marine and earth sciences before being more widely recommended and implemented in various subject-specific and general data repositories such as the California Digital Library (http://www.cdlib.org/), DataONE (http://www.dataone.org/), the Dataverse Network (http://thedata.org/), Dryad (http://datadryad.org/), ICPSR (http://www.icpsr.umich.edu/), Pangaea (http://www.pangaea.de/), and NOAA’s climatic (http://www.ncdc.noaa.gov/), geophysical (http://www.ngdc.noaa.gov/), and oceanographic (http://www.nodc.noaa.gov/) data centers. One of the fundamental components of this model is the creation and citation of an identifier that uniquely identifies the data set being cited. This identifier typically takes the form of a DOI assigned through DataCite, although other identifiers may also be used.
This model is beginning to be incorporated into the products of commercial scientific information providers. In 2012, Thomson Reuters launched the Data Citation Index (http://wokinfo.com/products_tools/multidisciplinary/dci/), a database of data sets that provides suggested citation formats for each data set indexed in the database and attempts to generate citation linkages to articles indexed in its other Web of Science databases. More recently, Elsevier, in cooperation with DataCite and numerous data repositories, launched a similar project that attempts to link papers available in ScienceDirect to the data sets that they use or have deposited in repositories (http://www.elsevier.com/about/content-innovation/database-linking) through data set dois or other unique identifiers.
The other school of thought favors the citation of a ‘data paper’ or ‘data publication’ describing the data set. In this model, the metadata necessary for using a data set, along with a link to the data set, is presented in a paper published either in a traditional journal or in a specialized data journal. Data papers differ from more traditional publications in that no analyses or findings resulting from the data set are required. Researchers wishing to cite the data set would then cite the data paper, rather than the data set. This model has been suggested in the neuroscience, genetics, and bioinformatics communities, and implemented in the geosciences community through the formation of data journals such as Earth System Science Data (http://www.earth-system-science-data.net) and Geoscience Data Journal (http://onlinelibrary.wiley.com/journal/10.1002/%28ISSN%292049-6060) and through the publication of data papers in journals such as the Quarterly Journal of the Royal Meteorological Society, Eos, and Oceanography. Examples of recent data papers in the earth sciences include those describing the ERA-40 reanalysis in the atmospheric sciences, the Argo profiling floats, and a database of iron enrichment experiment results in oceanography.
Closely tied to the development of data citation standards is the growing awareness of the need to properly preserve, describe, and provide access to data sets, a collection of activities sometimes referred to as data curation. In order for a data set to be cited, it must first have been deposited in a repository, preserved in an interoperable format, adequately described by a formal set of metadata attached to the data set, and made available to other researchers for reuse. Although technical issues exist at each step in this process, the idea of sharing data sets with other researchers has proven to be the most controversial. In addition to concerns over the idea of freely sharing research data, many researchers are reluctant to devote the time necessary to properly curate their research data, especially since many have not received training on how to do so. Although mandates for data preservation and sharing have been established by the National Science Foundation (http://www.nsf.gov/bfa/dias/policy/dmp.jsp), the American Geophysical Union (http://publications.agu.org/author-resource-center/publication-policies/data-policy/), and the US Office of Science and Technology Policy (http://www.whitehouse.gov/blog/2013/02/22/expanding-public-access-results-federally-funded-research), among others, it is not yet clear whether these mandates will motivate researchers to curate and share their data in the future.
Although bibliometric indicators can be a useful complement to peer review in the evaluation of scientific research, the growing reliance on publication-based indicators for research evaluation could potentially lead to the devaluation of activities that do not typically result in the publication of articles in scientific journals. Participation in workshops, policy formulation, peer review of submitted manuscripts, public education, and mentoring are all critical to the advancement of scientific research and to the translation of that research into societal benefits, but few of these activities ‘count’ in bibliometric evaluations because they rarely result in formal publications. Since the incorporation of bibliometric indicators into research evaluation is known to affect the subsequent behavior of those being evaluated, it seems likely that the growing reliance on bibliometric indicators could create a disincentive to engage in such activities.
One such activity that is likely to be devalued in this context is data curation. Despite its importance to the scientific community, data curation rarely results in the production of scientific journal articles, meaning that scientists and institutions devoting time and effort to data curation are unlikely to be rated favorably by bibliometric indicators in comparison with their more prolific peers. This is likely to undermine support for data curation efforts, since scientists are unlikely to devote the time and effort required to properly curate their data sets if they are unlikely to be rewarded for doing so.
The purpose of this analysis is to combine these trends by attempting to show the value of data curation in bibliometric terms. Specifically, I attempt to generate citation counts for three oceanographic data sets curated by the National Oceanic and Atmospheric Administration (NOAA)’s National Oceanographic Data Center (NODC). In doing so, I hope to demonstrate the utility of data curation to scientific research, since these data sets could not have been cited without the curation activities performed by NODC and its partners. In the process, I also hope to inform the discussion surrounding the development of data citation standards by identifying how these data sets are currently cited and referenced in scientific articles. Such baseline information can be useful in identifying both the metadata that should be included in a data citation format and how such a format ought to be applied.
Although many articles advocating data citation standards mention the usefulness of such standards for bibliometric evaluation, efforts to actually generate citation counts for data sets are fairly rare. Chao measured data set reuse in the earth sciences and found that earth science data sets were primarily cited in physical science and multidisciplinary journals, suggesting that data sets generated in one discipline may also have applications in other disciplines. Parsons et al. used Google Scholar to search for mentions of snow cover data sets archived at the National Snow and Ice Data Center and found between 100 and 600 mentions per year. Piwowar and colleagues used a similar method, searching PubMed Central for mentions of data sets archived in the Gene Expression Omnibus (GEO) database, and estimated that GEO data sets had been cited over 1,150 times by the end of 2010. The Inter-university Consortium for Political and Social Research (ICPSR) maintains a bibliography of several thousand publications that cite one or more data sets archived by ICPSR. Finally, several studies suggest that articles with publicly available data sets are more highly cited than articles that do not make their data publicly available [e.g. 56,57].
In consultation with NODC, I selected three highly used data sets for this analysis: the World Ocean Atlas and World Ocean Database (WOA/WOD), the Pathfinder Sea Surface Temperature (PSST) data set, and the Group for High Resolution Sea Surface Temperature (GHRSST) data set. The World Ocean Atlas is a quality-controlled set of objectively analyzed global in situ observational data published in four volumes focused on the variables of temperature, salinity, oxygen, and nutrients. Although NODC considers the World Ocean Atlas a data product, rather than a raw data set, because it is a compilation of many individual data sets gathered at various times and locations around the world and because of the quality control and analysis done on the underlying data, I consider it a data set for the purposes of this analysis. The World Ocean Database is an interactive database of the data used to create the World Ocean Atlas. Since the Atlas and the Database utilize the same underlying data, I refer to them in combination as the WOA/WOD. The WOA/WOD was initially published in 1982 as the ‘Climatological Atlas of the World Ocean’ and rereleased with updated data in 1994, 1998, 2002, 2006, and 2009–2010. The PSST data set is a long-term set of global sea surface temperature data derived from the Advanced Very High Resolution Radiometer (AVHRR) sensor mounted on NOAA’s polar-orbiting satellites. The GHRSST data set is a global set of combined satellite and in situ sea surface temperature data contributed by a number of institutions from around the world. GHRSST data are initially collected from these institutions by the NASA Jet Propulsion Laboratory and then transferred, 30 days after observation, to NODC for long-term preservation and access.
In the first phase of this analysis, conducted in March 2013, I attempt to generate citation counts for these data sets using three data sources: Web of Science, Science Citation Index Expanded (WoS); the full text search capabilities provided by various journal publishers’ websites (Elsevier, Springer, Wiley, etc.); and Google Scholar. I search these data sources to find citations to, or mentions of, these data sets in scientific publications and compile the number of results retrieved. In this analysis, I count mentions of these data sets as citations, a broader definition of ‘citation’ than is currently used, because scientific articles utilizing or discussing these data sets may or may not formally cite them. This definition is consistent with that employed by previous studies. The search strings used in this phase are listed in Table 1. These search strings are deliberately restrictive to improve the precision of the retrieved results. As a result, although the resulting counts are likely to be fairly, although not entirely, accurate, they are also likely to be undercounts of the actual number of publications citing or mentioning these data sets.
To generate citation counts for each data set in Web of Science, I use the search strings to search the title, abstract, keywords, and funding text (or acknowledgements) fields and add all of the resulting records to my ‘Marked List.’ I then perform cited reference searches to identify citations to publications or reports associated with each data set and add all of the resulting records to my ‘Marked List’. The final number of records in my ‘Marked List’ is then noted as the WoS citation count for each data set. Using the ‘Marked List’ in this way allows me to avoid potentially double counting records retrieved by multiple searches.
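The ‘Marked List’ here functions as a deduplicated union of all the searches. A minimal Python sketch of the same bookkeeping (the record IDs are invented for illustration):

```python
# Each WoS search returns a list of record IDs; a set plays the role
# of the 'Marked List', so records retrieved by several searches are
# counted only once.
search_results = [
    ["WOS:001", "WOS:002", "WOS:003"],  # topic/funding-text search (hypothetical IDs)
    ["WOS:002", "WOS:004"],             # cited reference search 1
    ["WOS:003", "WOS:004", "WOS:005"],  # cited reference search 2
]

marked_list = set()
for results in search_results:
    marked_list.update(results)

citation_count = len(marked_list)
print(citation_count)  # 5 unique records, not the 8 raw hits
```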
To generate citation counts for each data set using publishers’ websites, I use the search strings to search the full text of all records on each site using the site’s internal search engine and note the number of results retrieved for each data set. I then combine these totals across websites to generate a final citation count for each data set. The publishers’ sites searched were: ScienceDirect, SpringerLink, the Wiley Online Library, the American Meteorological Society’s Online Journals page, the Nature Publishing Group website, the Science (AAAS) website, the PNAS website, Taylor and Francis Online, IEEE Xplore, the Public Library of Science website, the American Chemical Society website, and the Ecological Society of America’s online journals site. Searching each of these sites individually was necessary because no formal full text database covering the oceanographic, marine, and geosciences is available.
To generate citation counts for each data set using Google Scholar, I use the search strings to search the database without restriction and then note the number of results retrieved as the final citation count. Due to known indexing and metadata issues with Google Scholar, these counts are likely to be inflated and to include non-peer-reviewed publication types, but they are also likely to provide reasonable estimates of how often these data sets are used overall and accurate rankings of these data sets relative to each other.
In the second phase of this analysis, conducted in January 2014, I attempt a more comprehensive cited reference search in WoS to generate citation counts to all editions of the WOA/WOD over time. Since the WOA/WOD was originally, and is still, distributed as a print publication, formal citations to this data set are likely to be more numerous than for the other data sets analyzed here. To allow for wide variance in citation formats, I search for the author(s) and publication year(s) of each edition of the WOA/WOD and then manually select the relevant search results. The search strategies used in this process are summarized in Table 2.
The process of executing these search strategies for one edition of the WOA/WOD is as follows. First, I perform a Cited Reference Search using the search criteria listed in Table 2. In step 2 of the Cited Reference Search process, I select the relevant citation variants through manual inspection of the step 1 results, and manually count the number of cited reference variants selected. After retrieving the final list of citing articles, I then analyze the results using the tools provided by WoS to obtain citation counts per year, subject category, and country for that edition of the WOA/WOD. I then repeat this process for each subsequent edition of the WOA/WOD.
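The per-edition breakdowns that WoS produces—citations per year, per subject category, and per country—are, in effect, simple tallies over the citing-article records. A minimal sketch with invented records:

```python
from collections import Counter

# Hypothetical citing-article records for one WOA/WOD edition:
# (publication year, WoS subject category, author country)
citing_articles = [
    (1996, "Oceanography", "USA"),
    (1996, "Meteorology & Atmospheric Sciences", "France"),
    (1997, "Oceanography", "USA"),
    (1998, "Paleontology", "China"),
]

# Tally citations per year, subject category, and country.
per_year = Counter(year for year, _, _ in citing_articles)
per_subject = Counter(subject for _, subject, _ in citing_articles)
per_country = Counter(country for _, _, country in citing_articles)

print(per_year[1996], per_subject["Oceanography"], per_country["USA"])  # 2 2 2
```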
The citation counts generated for WOA/WOD, PSST, and GHRSST during the first phase of this analysis are summarized in Figure 1. Two consistent patterns seem to emerge in these counts. First, the total number of citations generated for each data set increases as the coverage of the data source increases. WoS is the most limited of the three data sources, since it only indexes article metadata, acknowledgements, and cited references. Publishers’ sites have wider coverage, since they allow access to articles’ full text, but I only searched a limited number of these sites. Google Scholar has the broadest coverage, in that it offers access to the full text of a broad range of publishers’ websites as well as to conference proceedings, institutional repositories, and other websites. The citation counts generated using these data sources seem to follow this pattern, with citation counts generated from publishers’ sites being nearly four times higher than those generated from WoS and counts generated from Google Scholar being nearly eight times higher than WoS.
Second, the data sets are consistently ranked relative to each other across the three data sources. The citation counts for WOA/WOD are higher than those for PSST, which are higher than those for GHRSST. The magnitude of these differences also seems consistent, with WOA/WOD receiving approximately four times more citations in each data source than PSST and PSST receiving approximately three times more citations than GHRSST. The consistency of these patterns across data sets and data sources suggests that these findings are robust, although much additional work would be necessary to verify their accuracy.
In compiling these citation counts, I also find a wide variety in the methods used to refer to these data sets. Examples of this variety are given in Figure 2. Some articles include formal citations to these data sets, but the format of these citations is highly variable, despite the fact that NODC provides a suggested citation format for each of these data sets. Many other articles simply mention the data set in the text of the article, although the format of such mentions is also highly variable. The data sets are referred to by various names (PSST alone is referred to as ‘Pathfinder Sea Surface Temperature’, ‘Pathfinder SST’, ‘Advanced Very High Resolution Radiometer SST’, ‘AVHRR SST’, etc.) and a URL to the online source of the data is not always included.
In the second phase of this analysis, a more comprehensive Cited Reference Search in WoS for articles citing WOA/WOD, I find a total of 8,412 articles citing all six editions of the WOA/WOD from 1984 to 2013. The 1982 edition has been cited 2,987 times, the 1994 edition has been cited 2,577 times, the 1998 edition has been cited 810 times, the 2001 edition has been cited 842 times, the 2005 edition has been cited 795 times, and the 2009 edition has been cited 401 times. The distribution of articles citing WOA/WOD over editions and years is presented in Figure 2.
These distributions display a number of interesting features. First, versions of the WOA/WOD seem to require at least four, and up to 14, years after their initial release date to reach their peak citation rate. The time necessary for versions to reach their peak rate has declined with each version: the 1982 Climatological Atlas reached its peak 14 years after its initial publication, whereas the 1998 version required six years and the 2005 version required four. The amount of time necessary for the older versions of the WOA/WOD to reach their peak citation rate is longer than the 2–5 years required for most journal articles, although the 2005 version seems to have peaked within that timeframe.
This delay may be due to the media in which each successive version of the WOA/WOD was distributed. The 1982 climatological atlas was distributed via magnetic tape and personal communication, the 1994 and 1998 editions were distributed via CD-ROM, the 2001 version was distributed via DVD, the 2005 version was distributed via DVD and online access, and the 2009 version was distributed online. Each successive version made the data more accessible and usable, possibly leading to quicker incorporation of the data into scientific articles. In addition, updates began to be incorporated into the WOA/WOD every three months starting in 2008, allowing the WOA/WOD to be used for more timely investigations.
Second, although all releases of the WOA/WOD are highly cited, some versions are clearly more highly cited than others. The 1982, 1994, and 2009 versions all received over 200 citations in a single year, whereas the 1998 version never received more than 82 citations in a single year and the 2001 version never received more than 114. Since the 1998 and 2001 versions presented similar data, were prepared using similar methods, and compiled by many of the same authors as the 1994 and 2009 versions, it is unclear what conclusions to draw from these trends. In addition, the 2005 version seems to have been the most rapidly cited version of the data set, accumulating 795 citations in the eight years since it was published, although the 2009 version seems to be following a similar trajectory, having received 401 citations in the five years following its publication.
Finally, all versions of the WOA/WOD continue to be highly cited well beyond their publication date, even when one or several newer versions of the data set are available. The 1982, 1994, 1998, and 2001 versions all received between 50 and 60 citations in 2013. It is unclear whether researchers continue to use the older versions of the WOA/WOD out of habit, convenience, unawareness of newer versions, or other reasons. For whatever reason, researchers continue to cite these data sets well beyond the cited article half-life of 9.1 years recorded for Oceanography in the 2011 edition of Journal Citation Reports, suggesting that all versions of the WOA/WOD continue to be valuable resources for scientific research.
Analysis of the articles citing all versions of the WOA/WOD also reveals some interesting features. An analysis of the WoS subject categories of these citing articles, presented in Figure 3, shows that although the WOA/WOD is predominantly cited by articles in Oceanography, it is also cited by other related fields. The high number of citations from the Meteorology & Atmospheric Sciences category suggests that the WOA/WOD is frequently used by studies examining the effects of the ocean on weather and climate. Its number of citations in Paleontology, primarily by articles published in the journal Paleoceanography, suggests that it is used in studies of the prehistoric ocean as well as those of the modern ocean. Finally, its use in the Environmental Sciences, Marine and Freshwater Biology, and Ecology subject categories suggests that it is being used by studies examining the effects of ocean conditions on marine biota.
Figure 4 presents an analysis of these citing articles by country. Citing articles by authors from multiple countries are counted as whole citations for each country, rather than fractionally. Creation of the WOA/WOD is an international project in that the WOA/WOD consists of data sets contributed by researchers from numerous countries around the world. Figure 4 suggests that the international nature of its creation is reflected by the international scope of its use. The WOA/WOD is not only highly cited by the traditionally prolific scientific countries such as the United States, France, Germany, and the United Kingdom, but also by rising scientific countries such as China, India, and Brazil. This suggests that although NODC is an institution of the US government, its work to archive, quality control, and freely provide the data comprising the WOA/WOD is useful to the global scientific community.
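The whole-counting scheme described above—one full citation per country on a multi-country paper—can be sketched as follows, with fractional counting shown for contrast. The article records are invented.

```python
from collections import Counter

# Hypothetical citing articles, each with the set of author countries.
articles = [
    {"countries": {"USA", "China"}},
    {"countries": {"USA"}},
    {"countries": {"Germany", "France", "USA"}},
]

whole = Counter()
fractional = Counter()
for art in articles:
    n = len(art["countries"])
    for c in art["countries"]:
        whole[c] += 1           # whole counting: one full citation per country
        fractional[c] += 1 / n  # fractional counting, for comparison

print(whole["USA"], round(fractional["USA"], 2))  # 3 1.83
```

Whole counting is simpler and matches how Figure 4 was compiled, at the cost of inflating totals for countries that collaborate frequently.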
Finally, as with phase 1 of this analysis, I find the format of citations to the WOA/WOD to be highly inconsistent. I found 377 variant methods of citing the 1982 version, 305 variants of the 1998 version, 221 variants of the 2001 version, 200 variants of the 2005 version, and 77 variants of the 2009 version, for a total of 1,180 variant methods of citing all versions of the WOA/WOD captured in WoS as of early 2014. See Figure 5 for a sample of the citation variants to the 2005 version. Because I did not attempt to search for erroneous citations (citations to the wrong year of publication, misspelled author names, etc.), these figures are likely to underestimate the actual number of variant methods of citing the WOA/WOD in WoS. In addition, since phase 1 of this analysis suggests that articles are more likely to reference data sets in their text than in their cited references lists, the actual number of methods that articles use to refer to the WOA/WOD is likely to be substantially higher than I estimate here.
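Counting variants of this kind reduces to tallying distinct cited-reference strings. A minimal sketch, assuming a simple case-and-whitespace normalization; the reference strings are invented.

```python
def count_variants(cited_refs):
    """Count distinct cited-reference strings after a light normalization
    (lowercase, collapsed whitespace) so that trivially different strings
    do not inflate the variant count."""
    normalized = {" ".join(ref.lower().split()) for ref in cited_refs}
    return len(normalized)

refs = [
    "Levitus S, 1982, NOAA PROF PAPER, V13",
    "LEVITUS S, 1982, NOAA  PROF PAPER, V13",  # same variant, different casing/spacing
    "Levitus S, 1982, CLIMATOLOGICAL ATLAS",   # a genuinely different variant
]
print(count_variants(refs))  # 2
```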
These results seem to have a number of implications for data curation and data citation initiatives. First, my results indicate that all three of these data sets are highly cited. My phase 1 results suggest that, if they were counted as journal articles in WoS, both the WOA/WOD and the PSST data sets would have citation counts higher than 99% of all articles in Oceanography in WoS from any single publication year from 1995 to the present. Using the more expansive journal full-text method, each of the three data sets would be ranked in the top 1% for citation counts of all articles published in Oceanography during the same year, while the WOA/WOD and PSST data sets would be ranked in the top 0.1%. My phase 2 results indicate that each version of the WOA/WOD would be ranked in the top 0.1% of articles in Oceanography that were published during the same year and the 1982 and 1994 versions have been cited more than twice as often as the most highly cited article in Oceanography published in 1982 and 1994. Percentile values and article citation counts for journal articles in Oceanography were obtained by using the search string “WC = oceanography AND PY = 1995” and sorting the results by “Times Cited – highest to lowest.” This string was then repeated for the other publication years. Because of the limitations of my search methods noted above, these citation counts are likely to be underestimates of the actual totals for each data set.
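The percentile comparison described above—where a data set’s citation count would rank among the citation counts of journal articles published in the same year—can be sketched as follows. The article counts are invented, not values retrieved from WoS.

```python
def citation_percentile(data_set_count, article_counts):
    """Return the percentage of articles whose citation count the data
    set exceeds, i.e. its percentile rank within that publication year."""
    below = sum(1 for c in article_counts if c < data_set_count)
    return 100.0 * below / len(article_counts)

# Hypothetical citation counts for Oceanography articles from one year.
article_counts = [0, 1, 2, 3, 5, 8, 13, 21, 150, 420]
print(citation_percentile(400, article_counts))  # 90.0 -> top 10%
```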
These high citation counts are surprising in light of the fact that previous studies have reported more modest citation counts to individual data sets. I speculate that the high citation counts reported here could result from the unique features of the particular data sets that I analyzed. First, each data set is freely and publicly available and has been described in enough detail to permit its reuse. Second, the GHRSST and WOA/WOD data sets are composites of multiple smaller data sets contributed by multiple researchers to form a more comprehensive, global data set. Third, each data set has been available from a consistent source for an extended period of time. Finally, each data set is quality controlled to ensure the consistency and accuracy of the data contained in each set. Each of these features adds value to the original data sets, making the final data sets more useful to the oceanographic community. It may be that the high citation counts to these data sets, and particularly to the WOA/WOD, reflect the somewhat unique nature of these data sets.
If this is the case, it suggests a potential path forward for data repositories and data sharing in other disciplines. A single data set in isolation may have limited applications because of the methodology and parameters of its collection, but if that data set is quality controlled, adjusted, and merged with other similar data sets, it can be used to create a more comprehensive, overarching data product that can be queried for analysis at local, regional, or global disciplinary scales. The more data incorporated into such a product, the more useful the product becomes. Data repositories creating such products could then become central hubs for disciplinary, and potentially interdisciplinary, research, leveraging the limited research funding available in each discipline to ensure that individual pieces of research performed in that discipline eventually benefit the entire disciplinary community. In a sense, such a model could be considered a quality-controlled Wikipedia of data – the combination of individual pieces of expertise to create a resource larger and more comprehensive than anything that could be achieved individually. Obviously there are significant social, technological, and political barriers to implementing such a model, but the examples in oceanography of NODC and the Intergovernmental Oceanographic Commission show that such barriers can be overcome.
Second, my results suggest that the majority of references to these data sets occur in the full text of articles, rather than in the title, abstract, keywords, acknowledgements, or cited references sections of these articles. The citation counts retrieved from full text sources—publishers’ websites and Google Scholar—are consistently and substantially higher than those retrieved from WoS. The only exception is that the publishers’ website total for WOA/WOD is lower than my phase 2 results, but this may be due to the large number of reference variants I found during phase 2. This pattern suggests that most articles do not refer to these data sets in a section indexed by WoS, calling into question the appropriateness of citation-indexing databases for compiling citation counts for these data sets.
Third, I find wide disparities in the methods used to cite or refer to these data sets, despite the fact that a formal citation format is suggested for each. This suggests that although a suggested citation format exists, researchers are not, for whatever reason, using it consistently to refer to these data sets. This finding is consistent with that of Part 2 of Mooney and Newton and with many anecdotal accounts of citation practices among authors. Multiple points of access to these data sets likely account for some of this inconsistency. PSST, for example, is also available from NASA, leading some to refer to the data set as the ‘NASA Pathfinder SST’ data set. The implication of this trend for the data citation community seems to be that although the development of a standard citation format is necessary, that format by itself is not sufficient to guarantee consistent citation of data sets. It seems that in addition to developing this format, it will be necessary to encourage researchers to use the format and, perhaps more importantly, to obtain commitments from journal editors, reviewers, and publishers to ensure that it is used.
More consistent adoption and use of a DOI to refer to a data set, either by directly assigning a DOI to the data set or by publishing a data paper with a DOI, could considerably reduce the issues resulting from this inconsistency. From a purely bibliometric perspective, the format and content of a reference or citation is irrelevant as long as a DOI is present. Consistent use of a DOI to refer to a data set would enable a researcher to search full-text or citation-indexing databases for that DOI and retrieve a reasonably accurate set of articles citing that data set. Again, however, such consistency requires both that data providers assign DOIs to their data sets and that authors include these DOIs in their papers. NODC has begun to lay the groundwork for such consistency in oceanography, having recently assigned its first DOI to a version of the PSST data set, and is developing a process for assigning DOIs to its other data sets.
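DOI-based retrieval sidesteps the variant-format problem entirely: if every citation to a data set carries the same DOI, matching that string in reference text suffices. A minimal sketch; the DOI and reference strings below are invented for illustration.

```python
import re

# Hypothetical data set DOI (not a real assigned identifier).
DATASET_DOI = "10.7289/v5xxxxxx"

def cites_dataset(reference, doi=DATASET_DOI):
    """Match the DOI case-insensitively anywhere in a reference string."""
    return re.search(re.escape(doi), reference, re.IGNORECASE) is not None

refs = [
    "Casey et al. (2010) Pathfinder SST, doi:10.7289/V5XXXXXX",
    "Levitus (1982) Climatological Atlas of the World Ocean",
]
print([cites_dataset(r) for r in refs])  # [True, False]
```

Escaping the DOI before matching matters because DOIs contain characters (such as `.`) that are regular-expression metacharacters.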
In this analysis, I attempted to generate citation counts for three oceanographic data sets curated by NODC by searching WoS, publishers’ websites, and Google Scholar for mentions of these data sets in the bibliographic information or full text of scientific articles. I found that although there were substantial differences in the citation counts derived from each source, all three data sets were highly cited in all sources. The WOA/WOD was particularly highly cited, with all versions of the data set having received over 8,000 citations since its first release in 1982. My results suggest that scientific articles are more likely to mention these data sets in the text than in the acknowledgements or cited references sections. I also found wide discrepancies in the methods used to refer to these data sets, both in the full text and in the cited references sections. I found 377 variant methods of citing a single version of one data set, the 1982 WOA/WOD, and 1,180 variants across all of its versions, suggesting that researchers are not consistently using the citation formats provided for these data sets.
Although I limited this analysis to oceanographic data sets in an attempt to control for potential differences in citation practices among fields, it seems likely that the findings and issues identified here may be similar for data sets in other disciplines. Previous studies have suggested that data sets in other disciplines are highly cited and that references to data sets retrieved from full-text sources are higher than formal citation counts, both of which are consistent with the findings presented here. In addition, inconsistent referral to data sets in scientific papers is often raised as a motivator for the development of a data citation standard, suggesting that the large number of reference variants identified here is likely to be an issue with data sets in other disciplines as well.
However, although the findings of this analysis may be broadly applicable to data sets in other disciplines, much additional research would be needed to determine what, if any, differences exist in data citation and referencing patterns among disciplines. Since it is known that citation practices for journal articles differ among disciplines, it is likely that such differences also exist for data sets. Such differences may be compounded by differences in disciplinary collaboration rates, the existence and utilization of discipline-specific data repositories, or other factors.
In addition to citation counts, future research on the impact of data sets and data curation activities might focus on alternative metrics such as download counts, social network discussion, or social bookmarking to measure other forms of engagement with these data sets beyond formal citation, or on comparing such altmetric indicators with traditional cited reference counts. Although download counts could be easily obtained, other altmetric indicators might be more problematic to obtain due to the inconsistency with which data sets are cited. Unique identifiers such as DOIs might alleviate this issue somewhat, but only for data sets that have been assigned such identifiers and to the degree that researchers include these identifiers in their publications.
Finally, this analysis demonstrates that individuals and institutions can make substantial contributions to scientific research without producing formal publications. My results suggest that these data sets are often used in the production of original research in oceanography. This use is possible because researchers posted their data sets to oceanographic data repositories and because these data sets were properly archived, described, and made available to the scientific research community. The high citation counts identified here suggest that these data sets – and, by extension, the curation activities necessary for their use in scientific articles – are at least as important to the advancement of oceanographic research as the findings presented in the vast majority of journal articles published in the field. Future evaluations of NODC and other organizations that curate scientific data ought to take such considerations into account.
I thank Kenneth S. Casey and Sydney Levitus of NODC for discussions about data citation and bibliometrics which ultimately led to the analyses presented here. Kenneth Casey and Timothy Boyer of NODC and two anonymous reviewers provided excellent suggestions that improved the manuscript considerably. Opinions presented in this paper are solely my own and do not necessarily represent those of NOAA, the Department of Commerce, or the US Government. Use of products in this analysis does not constitute a recommendation of these products by NOAA or the US Government.
Conceived and designed the experiments: CWB. Performed the experiments: CWB. Analyzed the data: CWB. Wrote the paper: CWB.
ShanghaiRanking Consultancy (2012) Academic Ranking of World Universities. Shanghai Jiao Tong University.
Times Higher Education (2013) The Times Higher Education World University Rankings.
Waltman L, Calero-Medina C, Kosten J, Noyons ECM, Tijssen RJW, et al. (2012) The Leiden ranking 2011/2012: Data collection, indicators, and interpretation. Journal of the American Society for Information Science and Technology 63: 2419–2432
SCImago Research Group (2012) SCImago Institutions Ranking: SIR World Report 2012: Global Ranking. 98 p.
Allen L, Jones C, Dolby K, Lynn D, Walport M (2009) Looking for Landmarks: The Role of Expert Review and Bibliometric Analysis in Evaluating Scientific Publication Outputs. PLoS ONE 4: e5910
Bornmann L, Mutz R, Neuhaus C, Daniel HD (2008) Citation counts for research evaluation: standards of good practice for analyzing bibliometric data and presenting and interpreting results. Ethics in Science and Environmental Politics 8: 93–102
Haeffner-Cavaillon N, Graillot-Gak C (2009) The use of bibliometric indicators to help peer-review assessment. Archivum Immunologiae Et Therapiae Experimentalis 57: 33–38
Moed H (2007) The future of research evaluation rests with an intelligent combination of advanced metrics and transparent peer review. Science and Public Policy 34: 575–583
van Raan AFJ (1996) Advanced bibliometric methods as quantitative core of peer review based evaluation and foresight exercises. Scientometrics 36: 397–420.
Abramo G, Cicero T, D’Angelo CA (2012) National peer-review research assessment exercises for the hard sciences can be a complete waste of money: the Italian case. Scientometrics 95: 311–324
Abramo G, D’Angelo C, Di Costa F (2011) National research assessment exercises: a comparison of peer review and bibliometrics rankings. Scientometrics 89: 929–941
Uhlir PF (2012) For Attribution — Developing Data Attribution and Citation Practices and Standards: Summary of an International Workshop: The National Academies Press. 219 p.
Chavan VS, Ingwersen P (2009) Towards a data publishing framework for primary biodiversity data: challenges and potentials for the biodiversity informatics community. BMC Bioinformatics 10: S2
Costello MJ (2009) Motivating Online Publication of Data. Bioscience 59: 418–427
Moritz T, Krishnan S, Roberts D, Ingwersen P, Agosti D, et al. (2011) Towards mainstreaming of biodiversity data publishing: recommendations of the GBIF Data Publishing Framework Task Group. BMC Bioinformatics 12: S1
Costello MJ, Michener WK, Gahegan M, Zhang Z-Q, Bourne PE (2013) Biodiversity data should be published, cited, and peer reviewed. Trends in Ecology and Evolution 28: 454–461.
Mons B, van Haagen H, Chichester C, t Hoen PB, den Dunnen JT, et al. (2011) The value of data. Nature Genetics 43: 281–283
Chandler RE, Thorne P, Lawrimore J, Willett K (2012) Building trust in climate science: data products for the 21st century. Environmetrics 23: 373–381
Helly J, Staudigel H, Koppers A (2003) Scalable models of data sharing in Earth sciences. Geochemistry Geophysics Geosystems 4: 1010
Staudigel H, Albarede F, Anderson DL, Derry L, McDonough B, et al. (2001) Electronic data publication in geochemistry: A plea for “full disclosure”. Geochemistry Geophysics Geosystems 2: 2001GC000234
Staudigel H, Helly J, Koppers AAP, Shaw HF, McDonough WF, et al. (2003) Electronic data publication in geochemistry. Geochemistry Geophysics Geosystems 4: 8004
Conway EM (2006) Drowning in data: Satellite oceanography and information overload in the Earth sciences. Historical Studies in the Physical and Biological Sciences 37: 127–151
Hofmann EE, Gross E (2010) IOC Contributions to Science Synthesis. Oceanography 23: 152–159
Parsons MA, Duerr R, Minster J-B (2010) Data Citation and Peer Review. Eos, Transactions American Geophysical Union 91: 297–298
Walton DWH (2010) Data Citation – Moving to New Norms. Antarctic Science 22: 333–333
Science editors (2011) Dealing with Data. Science 331: 692–729.
Klump J, Bertelmann R, Brase J, Diepenbroek M, Grobe H, et al. (2006) Data publication in the open access initiative. Data Science Journal 5: 79–83
Mooney H, Newton MP (2012) The Anatomy of a Data Citation: Discovery, Reuse, and Credit. Journal of Librarianship and Scholarly Communication 1: eP1035
Cinkosky MJ, Fickett JW, Gilna P, Burks C (1991) Electronic data publishing and GenBank. Science 252: 1273–1277
Dodge C, Majewski F, Marx B, Pfeiffenberger H, Reinke M (1996) Providing global access to marine data via the World Wide Web. Journal of Visualization and Computer Animation 7: 159–168
Brase J (2004) Using digital library techniques – Registration of scientific primary data. In: Heery R, Lyon L, editors. Research and Advanced Technology for Digital Libraries. pp. 488–494.
Schindler U, Brase J, Diepenbroek M (2005) Webservices infrastructure for the registration of scientific primary data. In: Rauber A, Christodoulakis S, Tjoa AM, editors. Research and Advanced Technology for Digital Libraries. pp. 128–138.
Altman M, King G (2007) A Proposed Standard for the Scholarly Citation of Quantitative Data. D-Lib Magazine 13. doi:10.1045/march2007-altman.
Goodman L, Lawrence R, Ashley K (2012) Data-set visibility: Cite links to data in reference lists. Nature 492: 356–356
Thorisson GA (2009) Accreditation and attribution in data sharing. Nature Biotechnology 27: 984–985
Starr J, Ashton J, Brase J, Bracke P, Gastl A, et al. (2011) DataCite Metadata Schema for the Publication and Citation of Research Data (Version 2.1). DataCite. 27 p. doi:10.5438/0003.
De Schutter E (2010) Data Publishing and Scientific Journals: The Future of the Scientific Paper in a World of Shared Data. Neuroinformatics 8: 151–153
Gorgolewski K, Margulies DS, Milham MP (2013) Making data sharing count: a publication-based solution. Frontiers in Neuroscience 7: 9
Peterson J, Campbell J (2010) Marker papers and data citation. Nature Genetics 42: 919–919
Chavan V, Penev L (2011) The data paper: a mechanism to incentivize data publishing in biodiversity science. BMC Bioinformatics 12: S2
Uppala SM, Kallberg PW, Simmons AJ, Andrae U, Bechtold VD, et al. (2005) The ERA-40 re-analysis. Quarterly Journal of the Royal Meteorological Society 131: 2961–3012
Gould J, Roemmich D, Wijffels S, Freeland H, Ignaszewsky M, et al. (2004) Argo profiling floats bring new era of in situ ocean observations. Eos, Transactions of the American Geophysical Union 85: 185–191
Boyd PW, Bakker DCE, Chandler C (2012) A New Database to Explore the Findings from Large-Scale Ocean Iron Enrichments Experiments. Oceanography 25: 64–71.
Borgman CL (2012) The conundrum of sharing research data. Journal of the American Society for Information Science and Technology 63: 1059–1078
Nature editors (2009) Data Sharing. Nature 461.
Sedransk N, Young LJ, Kelner KL, Moffitt RA, Thakar A, et al. (2010) Make Research Data Public?-Not Always so Simple: A Dialogue for Statisticians and Science Editors. Statistical Science 25: 41–50
Tenopir C, Allard S, Douglass K, Aydinoglu AU, Wu L, et al. (2011) Data Sharing by Scientists: Practices and Perceptions. PLoS ONE 6: e21101
Kostoff RN, Geisler E (2007) The unintended consequences of metrics in technology evaluation. Journal of Informetrics 1: 103–114
Moed H (2008) UK Research Assessment Exercises: Informed judgments on research quality or quantity? Scientometrics 74: 153–161
van Dalen HP, Henkens K (2012) Intended and Unintended Consequences of a Publish-or-Perish Culture: A Worldwide Survey. Journal of the American Society for Information Science and Technology 63: 1282–1293
Weingart P (2005) Impact of bibliometrics upon the science system: Inadvertent consequences? Scientometrics 62: 117–131
Chao TC (2011) Disciplinary reach: Investigating the impact of dataset reuse in the earth sciences. Proceedings of the American Society for Information Science and Technology 48: 1–8
Piwowar HA, Carlson JD, Vision TJ (2011) Beginning to track 1000 datasets from public repositories into the published literature. Proceedings of the American Society for Information Science and Technology 48: 1–4
Piwowar HA, Vision TJ, Whitlock MC (2011) Data archiving is a good investment. Nature 473: 285–285
ICPSR (2011) ICPSR Bibliography of Data-Related Literature. Inter-university Consortium for Political and Social Research: http://www.icpsr.umich.edu/icpsrweb/ICPSR/citations/.
Piwowar HA, Day RS, Fridsma DB (2007) Sharing Detailed Research Data Is Associated with Increased Citation Rate. PLoS ONE 2: e308
Piwowar HA, Vision TJ (2013) Data reuse and the open data citation advantage. PeerJ 1: e175
Locarnini RA, Mishonov AV, Antonov JI, Boyer TP, Garcia HE, et al. (2010) World Ocean Atlas 2009, Volume 1: Temperature. In: Levitus S, editor. NOAA Atlas NESDIS 68. Washington, DC: U.S. Government Printing Office. pp. 184.
Antonov JI, Seidov D, Boyer TP, Locarnini RA, Mishonov AV, et al. (2010) World Ocean Atlas 2009, Volume 2: Salinity. In: Levitus S, editor. NOAA Atlas NESDIS 69. Washington, DC: U.S. Government Printing Office. pp. 184.
Garcia HE, Locarnini RA, Boyer TP, Antonov JI, Baranova OK, et al. (2010) World Ocean Atlas 2009, Volume 3: Dissolved Oxygen, Apparent Oxygen Utilization, and Oxygen Saturation. In: Levitus S, editor. NOAA Atlas NESDIS 70. Washington, DC: U.S. Government Printing Office. pp. 344.
Garcia HE, Locarnini RA, Boyer TP, Antonov JI, Zweng MM, et al. (2010) World Ocean Atlas 2009, Volume 4: Nutrients (phosphate, nitrate, silicate). In: Levitus S, editor. NOAA Atlas NESDIS 71. Washington, DC: U.S. Government Printing Office. pp. 398.
Boyer TP, Antonov JI, Baranova OK, Garcia HE, Johnson DR, et al. (2009) World Ocean Database 2009. In: Levitus S, editor. NOAA Atlas NESDIS 66. Washington, DC: U.S. Government Printing Office. pp. 216.
Levitus S (1982) Climatological Atlas of the World Ocean. Washington DC, US Government Printing Office: NOAA Professional Paper No. 13. 173 p.
Casey KS, Brandon TB, Cornillon P, Evans R (2010) The Past, Present and Future of the AVHRR Pathfinder SST Program. In: Barale V, Gower JFR, Alberotanza L, editors. Oceanography from Space: Revisited: Springer Verlag. doi:10.1007/978-90-481-8681-5_16.
Donlon C, Robinson I, Casey KS, Vazquez-Cuervo J, Armstrong E, et al. (2007) The global ocean data assimilation experiment high-resolution sea surface temperature pilot project. Bulletin of the American Meteorological Society 88: 1197–1213
Donlon CJ, Casey KS, Robinson IS, Gentemann CL, Reynolds RW, et al. (2009) The GODAE high-resolution sea surface temperature pilot project. Oceanography 22: 34–45.
Aguillo IF (2011) Is Google Scholar useful for bibliometrics? A webometric analysis. Scientometrics 91: 343–351
Jacsó P (2010) Metadata mega mess in Google Scholar. Online Information Review 34: 175–191
Franceschet M (2010) A comparison of bibliometric indicators for computer science scholars and journals on Web of Science and Google Scholar. Scientometrics 83: 243–258
Kousha K, Thelwall M, Rezaie S (2010) Using the Web for research evaluation: The Integrated Online Impact indicator. Journal of Informetrics 4: 124–135
Costas R, van Leeuwen TN, van Raan AF (2011) The “Mendel syndrome” in science: durability of scientific literature and its effects on bibliometric analysis of individual scientists. Scientometrics 89: 177–205
Eom YH, Fortunato S (2011) Characterizing and Modeling Citation Dynamics. PLoS ONE 6: e24926
Glover DM, Wiebe PH, Chandler CL, Levitus S (2010) IOC Contributions to International, Interdisciplinary Open Data Sharing. Oceanography 23: 140–151
Garfield E (1979) Citation Indexing. Its theory and application in science, technology and humanities. New York: Wiley.
Priem J, Hemminger BM (2010) Scientometrics 2.0: New metrics of scholarly impact on the social Web. First Monday 15: 2874.
A step-by-step tutorial to set up your bujo for 2021
Setting Up A New Bullet Journal
When people are getting started with a bullet journal, it can be hard to understand exactly how to set one up. This is the easiest bullet journal setup tutorial, with tips for anyone starting a bullet journal from zero, migrating to a new notebook, or changing journals for the new year.
The flexibility of the Bullet Journal method can seem daunting to someone who is just getting started. When you have too many options, it is easy to procrastinate and do nothing at all.
So how do you get beyond that first-page fright and set yourself up for success on your bullet journaling journey?
Keep reading, because in this post I will give you a quick tutorial to set up your planner efficiently while keeping the process simple.
How to set up a bullet journal
After you purchase your notebook, it is good to have some bujo supplies on hand for your journal setup.
To learn more about my favorite tools and supplies you can check out this post here.
If you already have your supplies, gather everything you need and let’s start. Some essential tools are:
There may be other tools you like to use or need, but these are the basics to grab for now.
When you start working on different pages, you can decide then whether you need anything more.
Now let’s follow this guide to bullet journal setup for beginners.
Bullet journal step by step setup
The process I explain below follows the original bullet journal setup. Remember to always do what works for you, though.
As a beginner, you may not yet know what will work best for you, so feel free to follow the steps below.
New Bullet Journal Set up
The general steps to set up a bullet journal are:
- Key and index
- Future planning
- Monthly, weekly, daily spreads
- Custom collections whenever you feel the need
Bullet journal first page
You may start your bullet journal with a theme or a personal page, perhaps the year or even a quote.
There is really no hard rule here.
You can be as creative as you wish or go straight to the bullet journal keys.
Bullet journal index, key and future log setup
Next, it is time to set up your bullet journal keys and index.
- Bullet Journal Index: This will help you find your content over time, some notebooks already come with this page pre-printed. Here is my in-depth post about the bujo index.
- Bullet Journal Key: Bujo keys are essential for rapid logging, check this guide for bullet journal keys here.
- Future Log: This is where your mid and long-term tasks are logged for now. I teach how to set up a future log here!
Bullet Journal Monthly Setup
Now that the standard bullet journal setup pages are ready, we move on to the monthly calendar for the current month.
Here you can log birthdays, anniversaries, and other appointments and events in the near future that you already know about but cannot yet add to a weekly spread.
After the initial pages, I normally start to set up my next month’s layouts.
You can read more about monthly layouts in this post here but I basically use the following spreads:
- Bullet Journal Monthly Cover Page: You don’t necessarily need one but I love the feeling of a new start and I also change monthly themes.
- Monthly calendar spread: for the most important tasks and appointments of the month.
- Goals for the month and to-do list
- Custom bullet journal collections: These are basically anything you want to track this month, such as books to read, a gratitude log, a doodles page, a budget tracker, a habit tracker, etc. I have a comprehensive list of collection ideas here!
Bullet Journal Weekly setup
The monthly pages are followed by the more detailed weekly schedule pages.
The weekly, as it is commonly called, is where all your tasks and rapid logging happen (this can also happen in dailies where you have more space per day to add your tasks).
Here you can plan what needs to be done each week, and on which day. You also get to be totally creative about your layout. Play with your bullet journal grid spacing to find the best weekly layout for you.
The weekly pages usually have larger sections for each day of the week so that you have enough room to write down daily trackers, detailed to-do lists for completing certain tasks or things you need before an appointment or event.
In my weekly setup I normally have:
- Weekly spreads using 2 pages, but you can set it up in fewer or more pages depending on how much space you need. I have some minimalist weekly ideas here!
- Daily tracker section: for instance water intake, sleep tracker or brain dump etc.
- Dailies: some people use weeklies and dailies (the difference between weeklies and dailies is just how you set your pages or what you call them). Dailies are more individual pages for days of the week, which are especially useful if you like to (art) journal in your bullet journal or to write prompts.
Bullet journal setup ideas
Check the options below to find the one that best suits you, with bujo setup tips for whatever situation you are in at the moment.
1 ┃ I am starting my first bujo:
If you are just starting your first notebook, there are a few collections to add before anything else!
These collections are included when setting up a bullet journal for the first time, to help you use the bullet journal method in the best way possible.
Standard Bullet Journal collections to include:
- Bullet Journal Index: This will help you find your content over time, some notebooks already come with this page pre-printed. Here is my in-depth post about the bujo index.
- Bullet Journal Key: Bujo keys are essential for rapid logging, check this guide for bullet journal keys here.
- Future Log: This is where your mid and long-term tasks are logged for now. I teach how to set up a future log here!
The pages above are the normal pages to include in every new book, however, you don’t have to stop here.
2 ┃ I am starting a new bujo for the new year:
If you are starting a brand-new book around New Year’s time then, on top of the pages already mentioned above, you can also include a few extra pages.
Keep reading below, because we have the monthly and weekly setup to add to the journal too.
3 ┃ I have finished my notebook, what now?
If you are setting up your bullet journal after having had previous notebooks, you might not need some of the collections again.
For instance, I don’t need to repeat my bujo key page because I already know exactly what every signifier means.
I will also not add another vision board unless it is the end of the year; instead, I will revisit my old one in the previous notebook.
I might, though, add new custom collection pages if they are relevant to the moment.
I add them here, before the monthly setup, if these collections are permanent and not tied to the month I am in, such as:
- Washi tape swatch
- Konmari checklist
- Meal Plan
After setting up the standard and custom collection pages, it is time to move on to the calendar spreads.
This has been the best bullet journal setup for me. Do you need to do the exact same thing?
You could start totally at random and add the collection pages whenever you remember to add one.
How do you set up your bujo? Let me know in the comments below.
More bullet journal guide posts:
PIN FOR LATER!
New Bullet Journal Setup Ideas
EAI® Elementary Math Journal – Set of 10 – School to Home Hybrid Learning Solutions
Item # 533820
Our math journals are a great way to incorporate writing into your math class. Designed for students in grades 3-5, these journals feature a unique “dual page” design. Left-hand pages feature 3/8″ rules without baselines as well as space for drawings and diagrams. Right-hand pages feature a full page of graph paper with 1cm squares. 60 pages.
Toys are a better investment than securities, gold, or art
According to economists from the Higher School of Economics, investing in toys – for example, Barbie dolls, superhero figures, model cars or trains – can bring good income. For example, discontinued LEGO sets are growing in price by 11% per year – faster than stocks, bonds and gold.
According to a Barclays survey, wealthy people invest roughly 10% of their funds in securities, jewelry, art, antiques, collectible wines, and cars. The demand for such goods is especially high in developing markets: China, Russia, and the Middle East.
“If people buy things as an investment, it is usually jewelry, antiques, or art. However, there are other options, for example, collectible toys,” said Victoria Dobrynskaya, Associate Professor at the Faculty of Economic Sciences at the Higher School of Economics. “There are tens of thousands of transactions every day on the LEGO aftermarket. This is a huge market, little known to traditional investors.” The research results are published in the journal Research in International Business and Finance.
Construction kits grow rapidly in price for several reasons. Firstly, they are produced in limited quantities, especially the special collections dedicated to films, books, and historical events, the magazine Naked Science explains.
Secondly, once sales end, the number of kits available on the secondary market is low. Many owners see no value in them, and pieces are lost or thrown away. Others, on the contrary, attach great importance to them and do not want to sell.
Thirdly, the construction sets have been produced for decades, so they have many admirers among adults. The more time passes after a kit’s release, the more it is appreciated as a collectible or an object of nostalgia, and the fewer copies remain on the market.
HSE researchers compared prices for 2,322 LEGO sets from 1987–2015, using data on primary market sales and transactions in online auctions. Secondary market prices begin to rise two to three years after release, in some cases by up to 600% per year.
Small and very large sets gain value fastest: small sets often contain unique pieces or figurines, while larger sets are produced less frequently and are more attractive to adults. Sets dedicated to famous buildings, films, or holidays also gain value well.
The most expensive include the Millennium Falcon, Corner Cafe, Taj Mahal, Death Star II, and Imperial Star Destroyer. A separate category is rare sets that were sold for a short time or given away at promotions.
On average, LEGO sets gain 10–11% in price per year. That is more than stocks, bonds, gold, and many collectibles provide. In addition, set prices do not depend much on the stock market and are low compared to art, antiques, and cars.
This makes them a reliable and affordable investment option. However, investing in LEGO pays off only in the long term, from three years on, and involves higher shipping and storage costs than securities.
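As a rough sketch of the arithmetic above, here is how an 11% annual appreciation compounds over the minimum three-year holding period the article mentions. The purchase price is a hypothetical example, not a figure from the study:

```python
def future_value(price: float, annual_rate: float, years: int) -> float:
    """Compound a purchase price at a fixed annual appreciation rate."""
    return price * (1 + annual_rate) ** years

# A hypothetical $100 retired set at the quoted ~11% per year, held 3 years:
print(round(future_value(100, 0.11, 3), 2))  # → 136.76
```

At that rate a set roughly doubles in under seven years, which is why the article stresses the long holding horizon.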
“Investors in LEGO make a significant profit from reselling unopened sets, especially rare or limited editions, or sets long gone from store shelves. Sets released 20–30 years ago evoke a feeling of nostalgia among LEGO fans, and their prices are sky-high,” added Victoria Dobrynskaya. “But despite the generally high profitability of LEGO on the secondary market, not all sets are equally successful. You have to be a fan to understand the intricacies of the market and see the investment potential of a particular set.”
MNCs and (de)globalization: New paradigm for emerging markets (BRICS Journal of Economics. 2022. Vol. 3. No. 1.)
Articles in English are accepted until December 1, 2021
Manuscripts prepared in accordance with the requirements of the journal (see author guidelines) should be sent by e-mail to [email protected], with “Submission to the Special Issue” indicated in the subject line.
Special Issue: MNCs and (de)globalization: New paradigm for emerging markets
Dr. Andrei Panibratov, Professor, St. Petersburg State University, Russia. Email: [email protected]
Background to the topic
Trade disputes and political tensions between countries have raised scholars’ concerns about an ongoing deglobalization that has been actively problematized since the end of the 2010s (Witt, 2019; Tung & Stahl, 2018). The impact of the Covid-19 pandemic has only accelerated the pace of this phenomenon, leading to new restrictions on mobility and disruptions in value chains (Delios et al., 2021). While many scholars expect greater risk aversion, protectionism, and nationalism to become a paradigm for national economies and for multinational companies (MNCs) (Fontaine, 2020; Abdelal, 2020; Young, 2020), others argue that the foundations of globalization have not eroded and that the post-pandemic world will need even greater globalization. This point rests on the idea that, now and for a long time to come, the world will be fragmented and unequal, and international firms will exist as bridges connecting the fragmented reality (Contractor, 2021). There is also a third point of view, according to which the consequences of the Covid-19 pandemic will result in both globalization of labor and deglobalization of capital (Brakman et al., 2021).
Deglobalization, political turmoil, and the consequences of Covid-19 lead to disruptive and far-reaching changes in the social, political, and technological environment (Panibratov, 2020). If these changes bring qualitative shifts in international business, companies, as well as institutions and industries, will likely have to adapt (Witt et al., 2021). One possible consequence may be a revision of Buckley’s “global factory” concept, and of the value chain configurations and governance modes that result from this revision. These choices involve relocation, reshoring, and nearshoring, and the strategic response here is to cover the contingencies and time horizons that shape these choices (Witt et al., 2021).
As part of this dynamic reassessment, relocation, and reorganization of activities, divestment is one possible strategic decision for firms responding to (de)globalization-related uncertainty and turmoil (Arte & Larimo, 2019; Dachs et al., 2019). IB scholars have studied companies’ divestment strategy as an essential part of their (de)internationalization strategy in unfavorable environments (Panibratov & Brown, 2018), for example the behavior of Japanese and Korean firms leaving China under the impact of the trade war (Chung et al., 2019; Trencher et al., 2020), or Western MNCs continually divesting from Russia due to economic sanctions applied by the US and EU governments.
The question that remains open is whether in the coming years MNCs will retreat from global markets or simply relocate their international activities to other foreign destinations (Delios et al., 2021). The special issue will be devoted to the phenomena of deglobalization, foreign divestment, and business relocation, which are expected to persist in the new post-Covid reality.
The special issue welcomes conceptual papers, literature reviews, empirical works, and case studies on the phenomenon of deglobalization and on MNCs’ strategies of foreign divestment and relocation.
Topics for submissions
Illustrative, but by no means exhaustive, questions pertaining to the special theme include the following:
- What is the role of environmental uncertainty (caused by sanctions, Covid-19, geopolitical conflicts) for the de-internationalization strategy and for the host country of divesting MNCs?
- How do the institutional factors of domestic and host markets affect the decision of a firm to de-internationalize?
- How is a foreign divestment (FD) decision made? What precedes FD?
- How do companies choose a destination when they change locations? What is the motivation for relocating firms’ operations?
- To what extent is the FD decision conditioned by the image of the home country of the divesting firm? What is the effect of FD on the legitimacy of firms?
- What is the role of location-specific and firm-specific advantages in deciding on FD?
- What does FD mean for divesting firms? Is it a failure or part of a strategy?
- read the Aims & Scope to gain an overview and assess whether your manuscript is suitable for this journal;
Creating tabular datasets
Datasets are a class of repository objects that represent a two-dimensional data array; a database is used to store the data. In the Foresight Analytical Platform, datasets are a prerequisite for creating cubes and the other elements required to perform data analysis in the platform repository. A tabular dataset also includes a log intended to store information about the execution of ETL tasks. The log is essentially a table with a predefined list of fields.
To create a dataset in the object navigator, do one of the following:
- run the “Create > Table” command in the context menu;
- press the “New object” button in the “Create” group on the “Home” tab of the ribbon and select “Table”.
After completing one of these actions, the tabular dataset creation wizard opens. The general wizard pages are described in the “Working with Wizards” section; the remaining pages depend on the object selected on the “Type” page.
After creating the dataset, you can start working with it.
There are the following types of datasets:
Datasets support a parameterization mechanism that allows the conditions for selecting data in the set to be changed dynamically. If a dataset contains parameters, the user must enter parameter values each time the dataset is accessed; default parameter values can also be set. If the dataset is the basis for building a reference book, its parameters also allow managing the composition of the reference book’s elements.
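The parameterization mechanism described above can be sketched in ordinary SQL terms. This is an illustrative sketch, not the platform’s actual API; the table, field, and parameter names are hypothetical:

```python
import sqlite3

# Default parameter values, used when the user does not supply their own.
DEFAULTS = {"region": "EU"}

def query_dataset(conn, params=None):
    """Select rows from a (hypothetical) sales table using bound parameters."""
    bound = {**DEFAULTS, **(params or {})}
    return conn.execute(
        "SELECT region, amount FROM sales WHERE region = :region", bound
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("EU", 10.0), ("US", 20.0)])
print(query_dataset(conn))                    # default parameter selects EU rows
print(query_dataset(conn, {"region": "US"}))  # user-supplied parameter
```

The default value plays the same role as a dataset’s default parameter: it is applied whenever the user does not enter one.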
The log is a physical table with a predefined set of fields, stored in the database. The user can create new fields, but predefined fields cannot be deleted. The log is used to store the working data of the built-in ETL tools; users cannot add data to it manually.
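As a rough illustration of such a log, assuming hypothetical field names (the platform’s actual schema is not given here): the log is a fixed-schema table that the ETL tooling, not the user, writes to.

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical fixed-schema ETL log: the field list is predefined,
# and only the ETL machinery inserts rows.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE etl_log (
    task_name TEXT, started_at TEXT, status TEXT)""")

def record_run(conn, task_name, status):
    """Called by the ETL tooling after each task run."""
    conn.execute("INSERT INTO etl_log VALUES (?, ?, ?)",
                 (task_name, datetime.now(timezone.utc).isoformat(), status))

record_run(conn, "load_sales", "success")
rows = conn.execute("SELECT task_name, status FROM etl_log").fetchall()
print(rows)  # → [('load_sales', 'success')]
```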
To create a log, use the log creation wizard. In the object navigator, do one of the following:
- run the “Create > Other > Log” context menu command;
- press the “New object > Other > Log” button in the “New” group on the “Home” tab of the ribbon.
Alternatively, in an open ETL task:
- run the “Task > Properties” command in the main menu;
- in the wizard that opens, go to the “Task parameters” page;
- click the “Create” button.
After completing one of these actions, the log creation wizard opens. The general wizard pages are described in the “Working with Wizards” section.
After creating the log, you can start working with it.
Vasily Filippov. How I invented a young chemist’s kit and got into Science
Firstly, “serial” products hold the attention of children and adolescents well: they can be shown how a variety of phenomena work from the point of view of chemistry, such as how electricity is generated, what causes global warming, and what processes take place in space.
Secondly, a subscription is an excellent reminder for parents that at regular intervals they should be distracted from an urgent project at work or from household chores in order to spend half an hour or an hour on their child and teach him something useful. A subscription product that comes in regularly by mail disciplines and turns such activities into a system. It actually works even better than an expensive gym membership.
According to our initial calculations, it would be optimal to send three different sets per month over the course of a year, at a fixed price. That is, for the entire cycle of our MEL Chemistry project, we had to invent, test, assemble, and package 36 sets of spectacular experiments covering a wide range of natural phenomena.
For a year and a half, we studied more than fifty books about chemical experiments, read thousands of descriptions on the Internet, and watched tens of thousands of videos. From all this variety, we had to choose experiments that could be carried out using reagents permitted by safety standards. The most stringent requirements are those adopted in the European Union, where a special directive describes in detail which substances, and in what concentrations, may be included in children’s chemistry kits. There are only about 50 such substances in total.
This left little room to maneuver, and in some cases my partner chemists had to come up with new versions of classical experiments. For example, as I already said, it is forbidden to add vinegar to children’s kits, but you can use the sodium salt of acetic acid and another acidic salt, which, when combined, form vinegar.
Realizing that we would most likely not be able to fully compete with smartphones and tablets, we decided not to take them away from the child but, on the contrary, to use the capabilities of gadgets for the benefit of our product (my previous company, SPB Software, developed mobile applications, and I have a lot of experience in this area). In addition to the offline product, we developed a mobile application that uses interactive graphics and video to illustrate how molecules behave during a chemical reaction and to explain why it happens. The app is compatible with Google Cardboard VR glasses; they are included in the MEL Science starter kit and are used for all experiments in our line.
Products for children have a specific feature: they need to be sold twice, first to the child, then to the parents (or in reverse order). And if the motivation of parents is generally clear, the motivation of children and adolescents is a whole science. Children do not know how to make long-term plans, and the motivation to do something now in order to become smarter in five years is too abstract for them. They need to see the result immediately.
All popular video games are built on instant confirmation of success – they award stars, artifacts or points for completed tasks. We decided to apply the same approach in our set and spent quite a lot of time trying to figure out how to help the child quickly get approval for a successful experiment and confirm his progress.
We included a clip-on macro lens in our chemistry starter kit. It attaches to the camera of any smartphone or tablet and allows you to create fairly high-quality close-up photos and videos of what happens during a chemical experiment. The child then willingly shares these with friends on Facebook, VKontakte, or Instagram to receive encouraging comments and likes. This is the best motivation for them.
Before starting full-scale sales of MEL Chemistry, we put out a call among friends and rather quickly recruited about a hundred volunteer testers, asking them to try out several of our experiments. It was a good decision: we found and fixed many bugs. For example, it turned out that when the kits were delivered by plane, the lid of the reagent jar, which had seemed quite reliable to us, leaked due to the pressure difference. We had to introduce a production step of sealing all the jars with a special protective film.
In the fall of 2015, MEL Chemistry was ready for bulk deliveries. We did not want to be limited to Russia, and launched the kits in two more regions: the UK and the USA. These are large markets with good scientific and educational traditions. In addition, in the United States alone, two million school-age children are homeschooled. The quality of home education in the country is very high, but laboratory work at home is a problem. Here our product could come in handy.
At first we decided that the most effective tool for promoting MEL Chemistry would be toy exhibitions. Our product sells itself in a live demonstration at a stand, and it seemed to us that all the opinion leaders, journalists, and plain geeks we needed gather at such exhibitions. However, as it turned out, all major exhibitions are held from January to March, when retail forms its assortment and concludes contracts for the year, while MEL Chemistry was ready to launch in the fall.
Looking ahead, I will say that we did later take part in several exhibitions, but we did not manage to produce the effect we expected. It turned out that toy exhibitions are mainly a meeting place for manufacturers and retailers. If a product is aimed at retail, there is no better place to promote it. But we do not use that channel, and the press we are interested in is extremely reluctant to attend such exhibitions, because the participants, as a rule, do not show anything interesting. After all, no journalist is going to write about a new line of teddy bears or doll houses.
So, as I said above, we did not want to wait long, and we decided to contact the people of interest to us directly. To begin with, we collected a database of about 2,000 journalists and bloggers covering topics close to ours: science, chemistry, education, virtual reality, and mobile applications.