the Creative Commons Attribution 4.0 License.
Planning a geostatistical survey to map soil and crop properties: eliciting sampling densities
Abstract. The communication of uncertainty is a challenge not only once soil information has been produced but also at the planning stage. When planning a survey of soil properties it is necessary to make decisions about the sampling density. Sampling density determines both the quality of predictions and the cost of fieldwork. In this study, we considered four ways in which sampling density can be related to the uncertainty of predictions, based on prior information about the variability of the target quantity. These were offset correlation, prediction intervals, conditional probabilities of interpretation errors and implicit loss functions. Offset correlation is a measure of the consistency of kriging predictions made from sample grids with the same spacing but different origins. Prediction intervals and conditional probabilities are based on the prediction distribution of the variable of interest. All four of these methods were investigated using information on soil pH and Se concentration in grain in Malawi. They were presented to a group of stakeholders, who were asked to use them in turn to select a sampling density. Their responses were evaluated, and they were then asked to rank the methods according to how effectively, in their experience, each helped them find a level of uncertainty they could tolerate when deciding on a sampling grid spacing. Our results show that the approach the stakeholders favoured was offset correlation, and that some approaches were not well understood (conditional probability and implicit loss function). During feedback sessions, the stakeholders highlighted that they were more familiar with the concept of correlation, with a closed interval of [0, 1], and this explains the more consistent responses under this method. 
The offset correlation will likely be more useful to stakeholders with little or no statistical background who are unable to express their requirements for information quality in terms of other measures of uncertainty.
Status: final response (author comments only)
RC1: 'Comment on gc-2023-1', Anonymous Referee #1, 28 Aug 2023
General comments:
This paper aimed to investigate how stakeholders (more like participants of a scenario study in this case) assess uncertainty when planning geostatistical surveys to determine appropriate sampling densities. These sampling densities are in turn used to create maps for soil and crop indicators. Given the significant impact of errors linked to uncertainty, the timeliness and importance of studies like this cannot be overstated. Such endeavors, aiming to understand how uncertainty is communicated and subsequently propagated, have clear practical applications in the development of spatially-derived environmental maps. Despite its valuable aims, I find several significant concerns within this paper related to its context, justification, and overall content. The overarching issue lies in its extensive reliance on mathematical formulations, rendering it inaccessible to scientists and practitioners with limited statistical training. This intense focus on mathematical equations detracts from the paper's primary objective, indicating a lack of clarity throughout the manuscript. The shift from the goal of science communication towards complex mathematical modeling leaves me perplexed regarding several key aspects. For example, the process by which maps "were presented to a group of stakeholders, who were asked to use them in turn to select a sampling density" is vague. Apart from the presentation style to the participants, it's crucial to recognize that the map quality presented to stakeholders is influenced by multiple variables, such as the initial sampling density, data quality, sampling design, and robustness of the geostatistical models themselves. The paper's direction, asking stakeholders to rank methods based on their effectiveness, seems ill-defined and lacking theoretical grounding. 
Furthermore, the framing of the quality of input data solely as a function of sampling density is simplistic and ignores other essential qualitative considerations, such as the type of data collected. While the decision on sampling density is undoubtedly vital, the assumption that it should be taken a priori disregards other vital factors like the scale, sampling strategy (design), and the level of detail of the phenomenon under study. These become paramount when assessing the overall quality of the required output. A particularly concerning aspect of this paper is the scant information provided about the stakeholders' selection, background, and representation. Grouping of soil scientists and agronomists together, for instance, and juxtaposing them with public health experts and nutritionists, lacks clear justification. The omission of machine learning and AI algorithms, especially in an era defined by big data, further complicates the study's scenario of understanding uncertainties. Lastly, the conclusion that stakeholders preferred offset correlation for its simplicity may risk oversimplification of a multifaceted decision-making process. A preference for a method due to its comprehensibility doesn't inherently render it the most effective or universally applicable. Against this backdrop, I find myself unable to endorse this current work for publication. Below, I offer specific comments to guide the authors in revising the paper:
Specific comments:
Abstract:
The abstract begins with an unconventional approach, dedicating over five lines to general statements (L1-5). This choice leads to a lack of specificity in addressing the real problem of communicating uncertainty during the planning stage of a geostatistical survey. While there is no scientific dispute that sampling density correlates with prediction uncertainty, this paper fails to elucidate what sets it apart from existing knowledge. There's an opportunity to articulate unique perspectives or new insights on uncertainty, but the paper does not seize it. The introduction of four different ways in which "the relationship between sample density and the uncertainty of predictions can be related" falls short of justifying this research, as no novel insights or values are identified. The abstract would benefit from a more concise focus on the specific problem at hand and a clear rationale for why the chosen methodology is innovative or necessary. Without these clarifications, the abstract's approach feels redundant and fails to engage the reader in a meaningful way.
L8-9: “All four of these methods were investigated using information on soil pH and Se concentration in grain in Malawi” → Investigated in what sense? Please be specific. Additionally, the term "stakeholders" is used in an overly generic way, particularly in the abstract. This lack of specificity leaves the reader wondering who exactly these stakeholders are. Without understanding their roles, experiences, backgrounds, and locations, it's challenging to gauge the relevance and applicability of their opinions and decisions in the context of the research. The paper would benefit greatly from identifying these stakeholders more precisely. Are they soil scientists, agronomists, public health experts, or nutritionists? What qualifies them to contribute to this particular study? The clarity on these questions would not only strengthen the abstract but also establish a solid foundation for the rest of the paper. My concerns regarding the selection, experience, and location of these stakeholders will be elaborated further in the subsequent sections of this review.
1 Introduction:
The introduction of the paper provides a broad overview of the study's themes, but it appears superficial and lacks a specific focus on the paper's actual subject. Instead of honing in on the unique issue this research aims to address - namely, how stakeholders deal with uncertainty in planning mapping surveys - the section tends to wander through various unrelated topics. For instance, the introduction's first paragraph begins with a discussion of MND in sub-Saharan Africa, a detail that seems incongruent with the non-location-specific context of the research. A considerable portion of the content here is overly generic and fails to pinpoint the problem the study seeks to explore. Furthermore, the paper makes a convoluted attempt to rationalize the methods (such as offset correlation, implicit loss function, kriging variance, conditional probability) presented to the stakeholders. These methods are described in laborious detail, yet the rationale for their comparison remains unclear. The text also fails to address whether there is scientific consensus on the superiority of any one method. Much of this section is bogged down with technical details that may be unengaging for a wider audience. Simplifying some of these concepts would make them more accessible and align better with the journal's target readership. For example, the sentence '… an implicit loss function, conditional on a logistical model (i.e., a function of sampling effort and statistical information about the estimates of the cost of errors) can be modeled as the loss function that makes a particular decision on sampling effort rational (Lark and Knights, 2015)' is impossible to parse. The paper's emphasis on the intuitiveness and simplicity of the offset correlation method is presented as an advantage. However, this approach seems to oversimplify the complexities involved and may even have biased the stakeholders towards this method. 
The way the study was constructed raises concerns that the stakeholders might have been subtly steered towards favoring the offset correlation method. As already indicated, I suggest that a more engaging introduction be written, underlining the theoretical foundations of the study and providing a compelling justification for the study's approach.
2 Theory:
This section, as it currently stands, does not appear to add significant value to the overall paper. Although some readers might find it informative, its current content might be better suited for an appendix. I recommend relocating this material and replacing it with a comprehensive literature review that outlines the current state of knowledge regarding decision-making in soil and plant surveys. This could include case study examples highlighting the tangible costs incurred by stakeholders who failed to adequately plan and make informed decisions prior to their survey efforts. Further, it would be enlightening to specify the types of stakeholders you have in mind for this research. Detailing their background and roles will help readers better understand how their specific attributes might influence their decision-making process. By making these adjustments, you can create a section that not only maintains the reader's interest but also lays a more robust foundation for the arguments and findings presented later in the paper.
3 Materials and methods
3.1 Basic approach
L176: “We used the four methods, described above, to assess uncertainty in relation to sampling density, considering the problem of measuring a soil property relevant to crop management: soil pH, and a property of the crop: Segrain concentration.” → I am not sure what is meant here with assessing uncertainty in relation to sampling density. Also, what “problem” is there when measuring a soil property relevant to crop management? And why specifically soil pH? And Se? Providing this context will not only enhance the reader's understanding but also reinforce the motivation behind the study, making it easier to follow the progression of the research and its significance within the broader scientific landscape.
L177-180: “We used variograms from a national survey in Malawi for each variable (Gashu et al., 2021) to obtain sampling densities for further notional sampling for an administrative district in Malawi, Rumphi District, with an area of 4769 km2. The outputs were presented to participants”. → While the paper draws on the dataset from Gashu et al. (2021), further details on how this dataset was collected, along with the rationale behind its selection, would strengthen the connection between the data and the study's objectives. Specifically, it is essential to explain the methodology used in collecting the dataset, including how the parameters of the variograms were selected to derive the sampling densities. This information will provide readers with a clear understanding of the data's reliability and relevance to the study. The paper should address potential biases that could arise from using variograms of national level data to derive regional sampling densities, especially considering the comparison of four different methods. Are there similar machine learning approaches? This section must articulate the steps taken to minimize biases, ensuring that all four methods were optimal for the input dataset. Providing a context for how the study's scenario would apply to stakeholders needing to understand uncertainty without national-level data will help readers gauge the broader applicability of the findings. Further, clarity on how the output was presented to the participants, whether through PowerPoint, poster format, or other means, and the order of presentation is crucial. These factors could significantly influence participants' understanding and choices, and acknowledging them in the paper will enhance the transparency of the process. By addressing these points, the paper can offer a more comprehensive and clear understanding of the data, methods, and process, enhancing both its scientific rigor and accessibility to a broader audience.
L180: “The participants considered each method in turn and were asked to select a sampling grid density based on the method. After doing this they were asked, for each method: Has the method helped you assess the implication of uncertainty in spatial prediction in as far as it is controlled by sampling? They were then asked: Which of these methods was easiest to interpret? Finally, the participants were asked to rank the method in terms of ease of use. Evaluation of the test methods were done using an online questionnaire on Microsoft Forms” → How? Which aspects of the methods were considered? Was it the quality of the method with regard to the output, or some other specific aspect? On the question of “easier to interpret”, how do the authors define “easier”? This question is loaded with so much subjectivity that, without a clear, unbiased scale of what “easy” means, it is impossible to derive any meaning from the answers.
L187-195: “The invited participants self-identified as (i) agronomist or soil scientist or (ii) public health or nutrition specialists. The participants also self-assessed their statistical/mathematical background and their frequency of use of statistics in their job role (perpetual, regular, occasional use)”. → Given that this information is one of the pillars of your findings in this work, I wonder why there was no attempt to standardize the backgrounds of the participants. For instance, what qualifies one as any of these professions (agronomist, soil scientist, nutritionist, public health specialist)? Is it based on education level, years of practice, specific training, or other criteria? Was there a reason why such experts were chosen? Do these experts typically have training in interpreting uncertainty in maps? Elaborate on why the distinctions among these professionals were used as the basis for the response. Address whether the 26 participants were intended to represent a broader population or if they were selected for specific reasons. Justify the choice of only 26 participants for this study. Explain why this number was deemed sufficient, considering the scope and objectives of the research. If the sample size is indeed small, acknowledging its limitation and potential biases will improve the rigor of the study.
L195-200: “In the exercise, an introductory talk was given to explain the study’s objectives. During the talk, we explained the four test methods (offset correlation, prediction intervals, conditional probabilities and implicit loss function) and how they can be used to assess the implications of uncertainty in spatial predictions to determine appropriate sampling grid space for a geostatistical survey. We explained the structure of the questionnaire to the participants. We emphasized to the participants that we were not testing their mathematical/statistical skills and understanding but rather were testing the accessibility of the methods using their responses” → What drove the participants to engage in the exercise? Understanding their motivations can shed light on the relevance of their input and the validity of their responses. Were they incentivized in any way? Did they have personal or professional interests in the outcome? The term "stakeholder" typically implies an individual or group with a vested interest in the outcome of a particular process or decision. In this context, it remains unclear if the participants indeed stood to gain or lose anything from the exercise. If they did not have a direct stake in the findings or implications of the research, using the term "stakeholder" might be misleading. An explanation or justification for this terminology would enhance the clarity and precision of the paper.
L205-210: “The offset correlation was the first method presented to the participants. This was followed by prediction intervals and conditional probabilities. The implicit loss function was the final method presented to the participants. We started with a measure we thought all our stakeholders would most easily understand and then moved on to the more complex methods.” → The presentation of the offset correlation method within the research design appears to have been conducted in a manner that may have inadvertently favored this approach. Was there any randomization in how the different methods were presented to the participants? If the offset correlation was consistently presented first, or in a way that highlighted it more prominently, this could influence participants' perceptions and evaluations. Were all the methods described with equal clarity and neutrality? Any differences in language, emphasis, or complexity might have created an uneven playing field, leading participants to gravitate toward the offset correlation method. Was there any attempt to control or assess the potential for bias in how the methods were presented and evaluated? Implementing and reporting on measures such as blinding or counterbalancing could strengthen the credibility of the results. Were participants’ preconceived notions or preferences regarding these methods assessed or controlled for? Their prior knowledge or beliefs could also contribute to a bias in their evaluations. Addressing these questions would help to ascertain whether the apparent favoring of the offset correlation method is a genuine reflection of its merits or a product of the research design. A robust examination of these concerns would enhance the rigor and validity of the findings, ensuring that the conclusions drawn are founded on an unbiased assessment of the methods in question.
3.2 Test methods
Most of the information in 3.2.1 largely repeats that in 3.1, so I suggest fusing the two sections. Some of the questions I raised about 3.1 are answered here, which should make it easy to merge them. While it is common to cite previous studies for established methods or data, in this case, where the dataset is central to the analysis, it may be beneficial to provide specific details rather than merely referring to other works. For instance, it is unexplained how soil pH and Segrain are measured. This would give readers a more comprehensive understanding of the methods and the rationale behind the chosen measurements.
L218-220: I wonder how the mean of the soil pH was calculated, since it would be incorrect simply to take the arithmetic mean of a quantity (like pH) that is on a log scale.
L228-230: Any specific reasons for these minimum and maximum grid spacings?
L231: “We considered different prediction for each variable but the prediction interval was fixed, depending only on grid spacing. The three predictions of soil pH were 4.8, 5.5 and 6.0 and those of Segrain were 20, 55 and 90 μg kg−1.” → I can understand the need to keep the same prediction intervals, but considering that soil pH as a soil property and Segrain as a plant property will be subject to different dynamics of spatial change, was there a way to account for this in the predictions?
L233: What kind of chart? Is it Figure 1? If so then please state it.
L234: “From the chart, we asked the participants to select the grid spacing that gives the widest prediction interval that would be acceptable if the mapped predictions were to be used to make decisions about soil management or interventions to address human Se deficiency.” → I find difficulty in embracing the premise upon which this question is constructed. Initially, my understanding was that the inquiries were primarily concerned with the planning of a geostatistical survey. Therefore, it confounds me as to why participants are questioned about employing the maps as a foundation for decision-making. In a theoretically optimal scenario, what would constitute the best choice of a prediction interval for such a decision?
L240-245: If a conditional probability of 1 indicates that the prediction is equivalent to the overall mean of the dataset, does it suggest that a conditional probability of <1 indicates under- or overestimation of the true value at the given location? Also, I have the same issue with the question posed here as with that posed in L234 above.
L245-263: My concerns mirror those I previously expressed regarding L234. Participants are queried about interventions, but their responses are then utilized as a foundation for planning a geostatistical survey. This connection appears incongruent, and it might be worth clarifying how the answers to these questions directly inform the planning process.
Section 3.2.5: I'm grappling with a particular aspect of the offset correlation method, namely its use as a measure of similarity between two grid spaces. For this measure to function meaningfully in decision-making, one grid space must be taken as a reference, representing the closest approximation to reality. Then, higher correlation with this given reference space would indicate an optimal choice among the others. However, in the method's current presentation to participants, an issue arises. Specifically, there's a risk of bias propagation; grid spaces that are closer together are likely to show higher correlation compared to those farther apart. Similarly, coarser grid spaces might exhibit greater correlation across the board. These biases can distort the method's effectiveness. How did the authors address this potential source of error?
Figure 4: Please check: the caption mentions Segrain, but the figure indicates soil pH. Also, it would be meaningful for the reader to know the grid spacings of map 1 and map 2 that give the correlation value of 4. As I have indicated in my comment above, it would be useful to know which of these two is closer to reality.
3.3 Data analysis
Section 3.3.1: “The expected number of responses under the null hypothesis, ei,j in a cell [i, j], is a product of row (ni) and column (nj ) totals dived by the total number of responses (N), and this the null hypothesis of the contingency table which is equivalent to an additive log-linear model of the table” → What is intended by this sentence? Please consider revising it to be more comprehensible for readers who may not be statisticians. For example, instead of stating 'Contingency tables allowed us to test the null hypothesis of random association of responses with the different factors in the columns,' it would be more helpful to specify what the null hypothesis was in relation to the different responses. This clarification would illuminate the process and make the statement more approachable for a broader audience.
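For readers who are not statisticians, the expected-count formula being quoted can be illustrated directly. A small self-contained sketch with hypothetical counts (the 2×2 table below is invented for illustration) computes e_ij = n_i × n_j / N and the associated Pearson chi-squared statistic:

```python
# Hypothetical 2x2 contingency table of responses (invented for illustration),
# e.g. rows = professional group, columns = chosen response category.
table = [[10, 5],
         [20, 15]]

row_totals = [sum(row) for row in table]        # n_i
col_totals = [sum(col) for col in zip(*table)]  # n_j
N = sum(row_totals)                             # total number of responses

# Expected count in cell [i, j] under the null hypothesis of no association:
# e_ij = n_i * n_j / N
expected = [[ri * cj / N for cj in col_totals] for ri in row_totals]

# Pearson chi-squared statistic: large values indicate association between
# the row factor and the column factor (i.e. non-random responses).
chi2 = sum((table[i][j] - expected[i][j]) ** 2 / expected[i][j]
           for i in range(len(table)) for j in range(len(table[0])))
```

Spelling the null hypothesis out this way ("responses are distributed across categories independently of the row factor") is what I would suggest instead of the sentence quoted above.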
Table 2 appears to neither enhance the flow of the paper nor contribute to its content. Consider relocating it to the appendix, where it can be accessed if needed without interrupting the main narrative of the paper.
Section 3.3.2: “However, in our analysis we reversed the order by assigning a score of 4 for the most preferred method and 1 for the least.” → Why was it necessary to do this? Why was the scale not offered to the participants in the same way you analysed the data? Perhaps you could also have tested whether the sequence of the choices offered had an effect on the decision.
4 Results
Section 4.1 ('Test methods') can be removed, as there is no text under this heading.
Section 4.1.1 presents a discrepancy in the order of the methods, with the 'offset correlation' appearing last in the methods section but first in the results. To enhance clarity and consistency, I recommend aligning the order of appearance in both sections.
L338-340: From what I understand so far about offset correlation, the correlation value is computed for a pair of grids, so what does it mean here that the grid spacing for soil pH is 25 km and that for Segrain is 12.5 km? What is the other member of this correlation pair? Also, from Figure 5 it can be seen that, while most people indicated a 0.7 correlation value, a substantial number of people still selected values across the full range of correlations. Given the low number of participants (n), it would be useful not only to report the most frequent choice but also to critically consider the other correlations. I think this is one of the major flaws of this study.
Section 4.1.2: L345, do you mean there were no differences, considering the p-value you reported? While I agree that the reported p-values suggest that the null hypothesis of uniformity in response cannot be rejected, it can be seen from Figure 6 that the percentage of people who selected the grid spacing of 100 km (< 5 %) was substantially lower than for the rest of the population, so what accounts for it?
5 Discussion
L383-385: “In this study, we presented to groups of stakeholders, four methods (offset correlation, prediction intervals, conditional probabilities and implicit loss functions) that can be used to support decisions on sampling grid spacing for a survey of soil pH and Segrain.” → I don’t think you can regard your participants as stakeholders in this case. It still remains to be answered what is at stake for them.
L385-390: “Offset correlation was ranked first as the method the stakeholders found easy to interpret (see Figure 9), and over 70% of the stakeholders specified a correlation of 0.7 or more as a criteria for adequate sampling intensity” → I am unsure where this 70 % is coming from, because from Figure 5 only 30 % chose the 0.7 correlation. Since the 0.7 value was chosen as part of an ordered categorical set of values (from 0.4 to 0.9), it is inconsistent to draw a conclusion like “0.7 or more”. As I have already indicated in an earlier comment on the results, it is equally important to know why people chose 0.4 or 0.9 as their best choice of correlation coefficient for intervention.
“During the feedback session, stakeholders highlighted that they were more familiar with the concept of correlation, with a closed interval of [0,1]. This explains why there more consistent responses under this method.” → This is another major flaw in the whole study. Was it the stakeholders who selected 0.7 who made this declaration, or was it also the case for those who chose 0.4 and 0.9? If the concept of correlation is familiar, then one would strive for a stronger correlation of 0.9, not 0.7. Also, it is wrong to indicate that correlation has a closed interval of 0 to 1, because the interval of correlation is [-1, 1].
L390-393: “Our results are consistent with findings of Hsee (1998), that relative measures of some uncertain quantity (Hsee gives an example of the size of a food serving relative to its container) are more readily evaluated than absolute measures (the size of serving). An easy-to-evaluate attribute, such as the bounded correlation of [0,1], has a greater impact on a person’s judgement of utility. Hsee (1998) describe this as the “relation-to-reference” attribute. It is therefore, not surprising that the offset correlation is highly-ranked.” → As I have already explained above, correlation is not bounded between 0 and 1, and the fact that the authors failed to grasp this clearly indicates that it is not a simple “easy-to-evaluate” attribute. It would be helpful for readers if the greater impact of an easy-to-evaluate attribute on judgement of utility were explained, given that this seems to be one of the main conclusions of this study. It would also be useful if Hsee's (1998) “relation-to-reference” attribute were explained in terms of how it relates to this study.
L394-395: “The offset correlation will be more useful for stakeholders who are not able to express their quality requirement for information in terms of quantities such as kriging variance.” → Was this statement also derived from the feedback session? In which way will it be more useful?
L395-399: “Furthermore, it is an intuitively meaningful measure of uncertainty, it recognises that spatial variation means that maps interpolated from offset grids will differ but that the more robust the sampling strategy the more consistent they will be. There is a paradox here, however, in that the previous study Chagumaira et al. (2021) showed that interpretation of survey outputs in terms of uncertainty was easiest for stakeholders with measures related directly to a decision made with the information. The offset correlation is a general measure, and the absolute magnitude of uncertainty.” → I wonder how offset correlation is an intuitive measure of uncertainty; can you please explain? And can you also explain the paradox you mention?
L403-411: Interesting explanation. I wonder if the authors do not find it strange that the same people (stakeholders/participants) who could understand the bounded attribute [0,1] of the offset correlation seem unable to understand a similar attribute of the conditional probabilities simply because it involves “probabilities”.
L411-418: As I have already indicated in the results section, the response for the grid spacing of 100 km is markedly lower than the rest, so I expected some explanation of why this is so. The explanation given here is too superficial and inadequate to explain such a complex decision-making process.
General comment:
Based on the issues I've highlighted throughout my review, it's apparent that the remainder of the discussion and conclusion sections also warrant similar concerns. I strongly recommend a comprehensive revision of the manuscript to explicitly delineate its unique contributions to this most important field of science communication and particularly on communicating uncertainty.
Citation: https://doi.org/10.5194/gc-2023-1-RC1
- AC2: 'Reply on RC1', Christopher Chagumaira, 11 Mar 2024
RC2: 'Comment on gc-2023-1', Anonymous Referee #2, 03 Feb 2024
This manuscript compares four ways of communicating uncertainty associated with predictions made based on data from a geostatistical survey, so as to determine an appropriate sampling density to meet stakeholders’ expectations. The four methods were presented to groups of stakeholders and their responses to a series of questions (about each method’s effectiveness and the sampling density that they would select based on that method) analysed statistically. Results point to the offset correlation being the most useful.
The work is nicely thought out and well written, and I think this will be a useful addition to the literature on how easy users find it to understand uncertainty when it is presented in different ways. It also nicely demonstrates that different sampling densities would result if the different methods of communicating the uncertainties were used. I have only very minor comments to add, plus some typos.
Minor comments
I think the introduction should mention other mapping methods, which can make use of appropriate environmental covariates to model part of the variation, and then justify the focus of the work on kriging and gridded sample designs. If other mapping methods were used, the optimal sampling scheme would then not be a grid; could the methods be applied to help in this case?
14: correlation is bounded on [-1,1]?
35: Clarify that the different grid spacings are for the data (not the grid of predictions).
42: Should this specifically say “ordinary kriging predictions” (i.e. not simple/regression/universal kriging)?
Eq 2: I think z(x0) on right-hand side should be z(xi), also in the text below the equation.
Eq 3: This is a repeat of Eq 1. Probably better here (ie after presenting formula for prediction), so suggest to delete Eq 1.
103: definition of epsilon here should align with what is given on line 112 (probably line 112 version is better).
134-136: I’m confused by this sentence. If z is 0 in Eq 6, then the true data does not appear in the equation, and the error doesn’t seem to matter, only the value of the prediction?
162: I think it would be clearer to put “made from data collected on a square grid”
230: Was the size of blocks the same for all grids, or was it set to be the same size as the grid? The second option (block size = grid size) doesn’t really make sense to me in this context, as the meaning of the predictions (and their variances) would be different for the different sample designs.
Fig 4: Can brief details of how these maps were produced be added? Were data for a pair of offset sampling grids jointly simulated, then those simulated data (for each of the two offset grids in turn) used to predict on the fine-scale mapping grid (as shown in the figure)?
450: Needs a sentence added here to summarise what you did (before talking about responses in the next sentence).
Tables A2 and A3: these could easily be combined into one table
Typos
Line number: suggested new text
7: “from data on sample grids…”
20: “concentrations”
26: “survey efforts”
33: “is quantified”
53: “tied to particular decisions”
16: delete comma after x0
67 “to an acceptable”
79: “or from a comparable region”
138: “reduces the error”
139: “an additional sample point”
145: “the sampling exercise”
149: “number of samples”
155: “denotes”
158: “loss functions”
230: “three different predictions”
244: “asked the participant what grid”
267: “pair of maps”
289: “partitioned into components corresponding to a pooled table”
291: I think “Figure 3” should be “Table 2”
302: “differences in responses”
311: “no difference”
312: “were uniformly”
329: “responses of the ?” Missing something here.
372: I don’t think this should be “by all respondents” (ie not every single respondent ranked it first?), maybe should be “Amongst all respondents, the offset…most effective”
380: “statistics”
388: “explains why there were”
404: “This suggests”
417: “by the respondents” and “would be of greatest value”
432: “beginning” and “explanation of”
441: “different variables”
Citation: https://doi.org/10.5194/gc-2023-1-RC2
AC1: 'Reply on RC2', Christopher Chagumaira, 11 Mar 2024
Viewed
| HTML | PDF | XML | Total | Supplement | BibTeX | EndNote |
|---|---|---|---|---|---|---|
| 469 | 119 | 39 | 627 | 58 | 23 | 29 |