This paper aims to appraise how public outreach campaigns influence urban resilience through a specific case study of flooding in Paris, France. Whilst this idea is commendable, and metrics to help evaluate how public outreach can influence urban resilience are a good idea, this paper struggles to connect the work done with the broader aims proposed in the introduction. As such I think this paper should be resubmitted after major revisions.
I feel many of the issues of this paper are down to problems of structure - with a more logical structure and careful selection of data, this paper could make its case for using quantitative data as a metric for assessing resilience changes more effectively. Having said that, I think there is also a bigger issue that would need to be addressed, raised by one of the previous reviewers: the usefulness of using only quantitative data in a study like this. The limitations of such approaches need to be seriously addressed, as they are central to any discussion of resilience in urban landscapes.
Finally, I think the data presented in section 5 (5.1, 5.2 and 5.3) was used as the basis for several large leaps of reasoning, and I would question whether the data collection methods presented were actually measuring what the researchers described. This was most striking to me in section 5.2, where several variables were not identified (such as how long it had been since the exhibition, whether the participants engaged with any other sources of information in addition to the exhibition, and whether any of the participants had experience of the topic other than professional experience). Several factors were also not explained in anywhere near enough detail (respondents had heard about the project, but from where?). In section 5.3 many of the problems seemed to be a result of the design of the survey and, as mentioned previously by reviewer two, a reflection on the vast body of work on surveys to establish science literacy and comprehension, in the context of your own work here, would be very useful. Additionally, I think it is dangerous to assume that just because a method provides a numerical metric it is a valuable way to measure the outcome of public outreach.
One of the key structural elements that needs to be addressed is a clear explanation of the RCIs. They appear to be introduced early, before the body of literature, but it is unclear whether these measures are the result of a previous study or a novel concept by the authors. This should be followed by more depth of data presented in the results sections, perhaps in tabular form. If not in the paper itself, then basic demographic data for the participants should be available in a supplement to validate the quality of the data you collected.
Additionally, I would question whether the case study provided in this paper actually does what the title suggests and assesses the impact of the public outreach campaigns in this case in Paris. No data is shown to measure a change in any resilience metric, and no clear measures of 'success' are presented. In fact, this paper seems to be more an assessment of outreach strategies used in a situation where resilience was an issue than an assessment of how those strategies impacted the resilience of participants. A slight reframing of the paper may help clarify this point; alternatively, if the authors have additional supporting data that places the results of the communications study in the context of other local resilience measures, including it would strengthen the paper considerably.
Reflecting on the previous reviewers' comments:
I agree with reviewer one: this paper needs to be reviewed by a native English speaker before publication.
Reviewer one, point 3 - I would like to see more evidence of this: what is the impact of the outreach strategies?
Reviewer two, point 2 - I think the authors could provide more context here - I think there is a very valuable point made by reviewer two on the place of the researcher in the data and I would like to see this addressed.
Reviewer two, point 7 - I agree with this point, but in your answer you state that you draw a line between communications and resilience assessment; is that distinction then supported by your data?
Reviewer two, point 9 - I also wondered about your answer to this point, as I'm not sure you have actually addressed the issue. Moreover, if the third experiment doesn't actually address the RCIs, why is it included in this paper? Reframing the paper may help here.
P2 Line 4: This is too early to introduce the RCIs if they are a novel measure; you should ground them within the literature.
P2 Line 17: A direct quote should have a page number.
P3 Line 31: 'must be considered.' Considered for what?
P3 Section 3.1: I would like more references and more critical engagement in this section.
P4 Section 3.3: The Hyogo Framework is out of date, you should be using the more recent Sendai Framework.
P4 Line 29: Examples of knowledge and communication indicators are listed, but how are these measured? Qualitatively or quantitatively? Do they support your method?
P5 Line 24: what is the principle of subsidiarity?
P6 Lines 6-23: Are these referenced, or are they your own measures? How are they measured, and how can you use them to measure impact? Are they qualitative or quantitative? Why was the comparison factor included, and what value does it bring?
P8 Figure 1: I don't think this is an appropriate format to display this data; it is fairly confusing. Also, the title states that target values were compared with attained values, but how were these targets selected?
P9, Line 5 - Reported data on audience size for printed media distribution is generally not very reliable; additionally, you cannot tell whether readers who received the paper actually saw your article, so the limitations of this need to be acknowledged.
P9, Line 8: What is WebTv?
P10, Figure 3: I don't understand why the number of newspapers distributed is increasing so dramatically; surely this is not related to the project, as newspaper readership is usually fairly consistent?
P11, Line 8: I would like some description of the stimulus material here, what was in the exhibition?
P11, Line 23: Why were so many participants who had not seen the exhibition included in this data set, if it was an attempt to assess the impact of the exhibition?
P13, Line 16: You cannot rely on 'a plausible explanation'; that is just an assumption.
P13, Line 20: Where had people heard about the project?
P13, Line 21: This is also an assumption; without clarifying more of the variables for the reader, I cannot be sure of this.
P14, Section 5.3: This seems to be measuring the success of an individual comms project, rather than contributing to the broader aims of the study.
P15 and 16, Tables 2 and 3: I would like to see some reasoning for how these surveys were designed, as there seem to be several design flaws and leading questions.
P16 Line 13: I would like an acknowledgement of the limitations of using only quantitative data here.