paper_id: stringlengths (9 to 13)
conference_year_track: stringclasses (139 values)
reviewer_id: stringlengths (13 to 13)
messages: listlengths (5 to 21)
5zbRCWR3qQ
CLeaR 2022 Conference
Reviewer_tUp3
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_tUp3, and the following are the paper you need to review:\n```<<5zbRCWR3qQ>>```", "role": "user" }, { "content": "Review Content: Summary: Review of Paper#14\nMain review: \nThis paper develops a category-theoretic criterion for determining the equivalence of causal models having different but homomorphic directed acyclic graphs over discrete variables. One of the contributions of this paper is that it interprets a structural causal model in the context of category theory. This allows one to translate a causal diagram into a corresponding string diagram, which appears to be instrumental in determining the homomorphic transformation between different structural causal models.\n\nThe authors develop a complete condition for an isomorphic transformation between two causal models using causal string diagrams. While I have not checked the details of the proof, the results in Theorem 6 seem reasonable. Since I am not familiar with the related work on this specific topic, I cannot comment on the significance and novelty of these results.\n\nOverall, I think this paper studies an interesting problem and proposes a reasonable characterization. However, I do have some questions, summarized as below:\nThe characterization in Definition 2 and Theorem 7 rely on the detailed parametrization of the underlying causal models. This assumption seems quite strong since these parametrizations are mostly not identifiable in practical applications.\n\nThis paper also assumes that all variables in the causal model are observed. Or equivalently, there exists no unobserved confounder (UCs). This brings me back to the previous point: when UCs are present, the underlying causal mechanisms (i.e., the functors) are not generally identifiable from the observational data. \n\nI think the writing of this paper could be improved. I would appreciate one or two possible applications of the proposed method or some practical challenges that involves identifying the homomorphic causal diagrams. 
This could help the readers to understand the significance of the results.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\n", "role": "assistant" }, { "content": "Title: Reply to Reviewer tUp3 \nThank you for the comment and question. Yes, we assume that models are parametrized, and have no unobserved confounders. This is because we are interested in the nature of causal models themselves, rather than methods for identifying them from data. Please refer to our reply to reviewer 1 (6giK) for the value of such work. \n\nWe admit that the writing could be improved, and agree that some applications might be helpful. But since we have already run out of pages, it is difficult to include more examples without sacrificing the rigor and clarity of the theoretical derivations. \n", "role": "user" }, { "content": "I appreciate the authors' efforts in addressing my concerns. The response covers nearly all of the issues I raised. I reaffirm my positive endorsement for this paper.", "role": "assistant" } ]
5zbRCWR3qQ
CLeaR 2022 Conference
Reviewer_Lzzj
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_Lzzj, and the following are the paper you need to review:\n```<<5zbRCWR3qQ>>```", "role": "user" }, { "content": "Review Content: Summary: This paper discusses equivalence between causal models. Somewhat unually, the causal models is represented in a causal string diagram, after which an equivalent abstraction of a causal model is defined by a isomorphism between functors. The motivation for using string diagrams is the allow for preserving different causal mechanisms in the abstraction. The approach is compared to several other approaches for abstractions and it is shown that it generalises those other proposals.\nMain review: Generally, the paper introduces a very elegant method for both thinking and representing abstractions of causal models. While\nthe mathematical language is somewhat unusual in this field, it is not difficult to understand the main points that the paper\ntries to make. I think it also convincingly shows that the approach provides a systematic criterion for the existing of a\ncausal abstraction. As such, though theoretical, I believe that is a very relevant contribution to this field.\n\nOne thing I would like to suggest is the title of the paper: if I understood correctly, the contribution is limited to providing\nequivalence between models and their abstractions (the 'micro' and 'macro' level), and not equivalence 'in general' (whatever that \nmay mean). While discussed in the introduction, I was expecting a somewhat different in the paper at first.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Reply to Reviewer Lzzj\nThank you for the positive comments. In particular, we are glad to know that the main points of the paper make sense. \n\nWe would like to point out that our criterion covers not just micro/macro translations, but all forms of equivalence in general, so that target models may be of completely different physical natures (it could be, for instance, brain networks of humans and macaques). We have to admit that our writing was misleading in this respect. We will emphasize this fact in revision — thank you very much for the pointer. 
\n", "role": "user" }, { "content": "I appreciate the authors' efforts in addressing my concerns. It satisfactorily addresses the major issues I had. I continue to recommend this work positively.", "role": "assistant" } ]
6GLEuG0d8i
CLeaR 2022 Conference
Reviewer_mkLG
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_mkLG, and the following are the paper you need to review:\n```<<6GLEuG0d8i>>```", "role": "user" }, { "content": "Review Content: Summary: An excellent, though unusual paper\nMain review: Overall, this is an excellent paper. Its findings are relatively modest, but that is because it takes on a very difficult challenge: to characterize the optimality of human causal exploration.\n\nIt has a relatively unusual set of contributions for a paper in a typical AI or ML conference (though any given year of NeurIPS often has at least a few papers of this kind). However, CLeaR should strive to be open to all research seeking to understand causal reasoning and learning. This paper certainly does that.\n\nThe thematic organization of the related work section is great. It makes it easy for readers to understand the diverse set of connections to related work in several disparate areas.\n\nThe study includes human subjects, which requires a great deal of effort to get approval, recruit subjects, and perform the experiment.\n\nThe hypothesis space described in section 5 seems to assume that the identity of the potential blickets (i.e., the unique combination of attributes) are the crucial factor rather than individual attributes of the potential blickets (e.g., shape, color). For example, it seems entirely reasonable to have causal structures such as “green-con” (two or more green objects) or “triangle-cube-con” (a triangle and cube) or even “blue-ish-con” (blue tinted objects). The assumption that concepts must concern identity greatly reduces the size of the hypothesis space. The authors should either account for this in their analysis or explain why they choose to use identity-based rather than attribute-based hypotheses.\nRating: 8 accept, good paper\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Rebuttal\nThank you for your comments and feedback! We are glad to see that you believe our work is useful for ML research, in that it seeks to understand causal reasoning and learning. 
We also appreciate the recognition of the effort required for human studies with children.\n\nYou raise a great point that the children's overhypotheses about causal structure could be based on the attributes of the objects, like shape and color, rather than their identity. We designed the study to try to ensure that identity was more salient than attributes. First, in the testing portion of the experiment, we specifically chose colors and shapes that were different from those used in the demonstration phase, so that children could not directly apply attribute-based overhypotheses from the demonstration phase to the exploration phase. Note also that the objects we used are like those in Figure 1, which have quite distinctive and distinct shapes; the simplified shapes in Figure 2 are just for ease of understanding. Second, for each child we randomized which two of the three objects were blickets, in both the demonstration and exploration phases.\n\nWith that said, we cannot rule out that children could be considering these attribute-based hypotheses, since they also seem to consider other hypotheses that we did not anticipate (such as stochasticity, order, etc., as discussed in Section 5.1). We would very much like to include both these hypotheses and attribute-based hypotheses in our models, but you are correct that they would substantially increase the size of the hypothesis space, making it computationally intractable to compute the exact ideal observer analysis. Understanding exactly how it is that children are exploring in this broader hypothesis space---and how we might similarly implement such a strategy in agents---is a key question for future work, and one we believe our current paper highlights.\n\nWe will clarify the above points in the camera-ready version of the paper, and add a discussion of identity- versus attribute-based hypotheses.\n\n Global/general Response:\n```\nWe would like to thank all the reviewers for their valuable comments and feedback on our paper! While we respond to each reviewer’s specific comments below, we also wanted to address a more general theme across the reviews regarding comparisons between the children and the ideal observer models. In particular, we want to emphasize that our study makes it clear the difficulty in doing this, and that it highlights some of the limitations of existing models and algorithms for causal inference.\n\nFirst, one issue is in ensuring that the models and children have the same prior information about the space of hypotheses. The GIVEN-HYPOTHESIS condition was designed to be the best test for this, because we explicitly give children demonstrations of both conjunctive and disjunctive blicket detectors. Ideally, this would prime them to consider only those two overhypotheses, making the comparison to our ideal observer models (which also only consider two overhypotheses) more fair. In the NOT-GIVEN-HYPOTHESIS condition, we wanted to remove this control to see whether (and if so, how much), children’s exploration would differ from the case where the set of overhypotheses was constrained. This condition is quite challenging to model using conventional causal reasoning techniques, as in this case the space of overhypothesis is potentially very large and underspecified, and we did not expect the ideal observer models to capture behavior in this condition particularly well. 
To our surprise, we found that regardless of being given evidence of the hypothesis space or not, children seemed to be considering and exploring a broad set of hypotheses. We believe this openness to consider so many different possibilities is what underlies humans’ (and especially children's’) powerful and rapid causal inferences. The fact that it is not clear how to incorporate a similar sort of openness in ML models is a question worth investigating, and we believe is an important insight for the causal reasoning community.\n\nSecond, another issue is in ensuring that the models and children are optimizing for the same objective. This is a classic issue in cognitive psychology and developmental science in general: how do you get people to actually do the task you have in mind? In this study, we asked children to “figure out how the machine works”, which could be interpreted in a variety of different ways (e.g., identify which objects are blickets, make the machine go, discover the rule by which blickets make the machine go, etc.). Some of these hypotheses can be encoded more-or-less straightforwardly in an ideal observer model; for example, “make the machine go” can be translated into a reward function of “get 1 reward for turning the machine on”. Similarly, we also tried another straightforward option of maximizing information gain. However, our comparisons make it clear that these are not how children have interpreted the question: they often try many more combinations after successfully making the light go on, and do not always seem to try combinations that are informative (though they could perhaps be informative under a different hypothesis space, as related to the previous point). Thus it is not necessarily straightforward to encode what children are optimizing for into a reward function, highlighting another challenge with formalizing childrens’ causal reasoning in machine learning algorithms.\n\n```\n", "role": "user" }, { "content": "I appreciate the authors' reply. The response covers nearly all of the issues I raised. I remain supportive of my positive recommendation for this paper.", "role": "assistant" } ]
6GLEuG0d8i
CLeaR 2022 Conference
Reviewer_GTJ1
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_GTJ1, and the following are the paper you need to review:\n```<<6GLEuG0d8i>>```", "role": "user" }, { "content": "Review Content: Summary: This paper investigates the role of “overhypotheses” (i.e., structured priors for causal structures in a domain) in causal learning. \nMain review: These overhypotheses have previously been shown to play an important role in human causal learning, and while structured priors are sometimes used in machine causal learning, they are not typically expressed as overhypotheses. The authors develop an online environment to run blicket experiments involving active learning, and then report data about an experiment using that environment. That experiment showed that children’s behavior is not well-modeled by relatively simple (by the authors’ own admission) RL learning models.\n\nThe topics of exploration-based causal discovery and causal overhypotheses are both important ones, and ML work on them can certainly be informed by experimental work with humans. However, the present experiment has a number of issues and potential confounds that make it difficult to evaluate the results. For example, there is no indication about exactly what children were told about the blicket machine, including key information such as whether they were told that it was a deterministic device. The training cases have different numbers of successes between the “Given” and “Not given” conditions. The task is framed as figuring out which of two causal possibilities obtains (in terms of causal structure), rather than learning an overhypothesis. And so on for a number of other issues. I want to emphasize that I think this kind of experiment is potentially quite valuable; the concerns are about this particular realization.\n\nThe paper is generally framed with a focus on the development of novel ML methods, but there does not seem to be any significant ML development here. There are two different relatively simple RL methods that are compared with the developmental data, but no explanation about how the developmental data could actually lead to new methods or algorithms. 
I think that is a significant shortcoming given the focus of this conference.\n\nMinor note: The paper title has a notable typo\nRating: 3 reject, not good enough\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n", "role": "assistant" }, { "content": "Title: Rebuttal and Experimental Script\nThank you for your time in reviewing our paper. We appreciate your comments on experimental clarity, and will take these into account to improve the writing of the paper. However, we disagree that our paper is not an appropriate fit for the conference, and would like to clarify the contributions of this work.\n\nMinor note: We’ve fixed the typo! (It is really embarrassing to have this happen in the title!)\n\n# Contributions\nWhile we acknowledge that this paper does not make a specific algorithmic contribution (e.g. we do not introduce a new RL algorithm or approach to causal learning), we strongly believe that our work provides two important contributions for causal learning and reasoning in ML:\n\n1. We introduce a novel RL environment designed with a controllable causal structure, which allows us to evaluate exploration strategies used by both agents and children in a unified environment. We also provide human baselines for exploration in this environment that could serve as imitation data for ML agents, and provide a challenge for a range of ML agents and algorithms in the community, beyond those we consider in the paper. The environment and the developmental data together allow researchers to ask both whether children’s exploration techniques are effective and to what extent other agents produce similar behaviors. Note, the CLeaR call for papers explicitly invites submissions of benchmarks.\n\n2. Through experimentation on both computational models and children, we demonstrate that there are significant differences between information-gain optimal RL exploration in causal environments and the exploration of children in the same environments (Section 5.1). These differences are non-trivial, and highlight a fundamental difficulty in incorporating large and underspecified hypothesis spaces into causal reasoning systems (Section 6). Admittedly, we do not explore in this work how RL algorithms can operate when given a “full” overhypothesis set, but this is the first paper (that we are aware of) which demonstrates that in practice, children explore a strikingly wide set of overhypotheses instead of optimally exploring for time-to-reward or information gain.\n\nWe also note that the other reviewers believe that our contributions are interesting, impactful, and a good fit for the conference, with Reviewer mkLG writing “CLeaR should strive to be open to all research seeking to understand causal reasoning and learning” and Reviewer NM7t writing that “the development of the web tool is a contribution to the community”.\n\n# Experimental Procedure\nWe agree that our discussion of the precise experimental procedure in the paper could have been clearer (as mentioned both by this review, and by reviewer NM7t). To address this, we will include the full experimental transcript of our work (reproduced below) in the supplementary materials of the final paper.
To briefly address your specific points, we made a significant effort to avoid biasing the children prior to the experiment, thus, we did not tell them any information about the deterministic nature of the device or provide any other evidence about how the device works (except for what is shown in the demonstration phase, as described in Section 4 of the paper).\n\n\n# Overhypotheses\n\nRegarding \"The task is framed as figuring out which of two causal possibilities obtains (in terms of causal structure), rather than learning an overhypothesis”, we believe this is a misunderstanding, and would like to clarify. We will also make sure this is clearer in the camera-ready version of the paper. \n\nFirst, there may be some confusion about the difference between causal structures and causal overhypotheses. A “causal structure” refers to the structure of a specific causal graph (i.e., nodes and edges). As an example, $A\\rightarrow C\\leftarrow B$ and $A\\rightarrow B\\rightarrow C$ are two different causal structures. In contrast, a “causal overhypothesis” is a hypothesis about a family of structures and/or the functional forms of the causal relationships in those structures. For example, one overhypothesis might state that the causal structure is a chain, without specifying *which* chain precisely (i.e., it would allow for $A\\rightarrow B\\rightarrow C$, $B\\rightarrow C\\rightarrow A$, and so on, but not $A\\rightarrow C\\leftarrow B$). Another overhypothesis might state that the true causal structure is a common effect, without specifying the functional form of *how* the effects are determined (i.e., it could be $C = \\textrm{AND}(A, B)$, $C = \\textrm{OR}(A, B)$, $C = \\textrm{NAND}(A, B)$, etc.). Please refer to [1-3] for more background on this.\n\nIn our experiments, we look both to see (1) whether children can identify the correct causal structure (i.e., which objects are blickets, of which there are six possibilities in the case of a disjunctive causal overhypothesis: A, B, C, AB, AC and BC), and three possibilities for the conjunctive causal overhypothesis: AB, AC and BC and (2) whether children can identify the correct overhypothesis (i.e., what is the functional form of the relationship between the blickets and the machine, of which there are two possibilities: conjunctive/AND or disjunctive/OR) \n. Here (https://drive.google.com/file/d/1cPJElSxTpOdnZRiz9BDFl4gdCzvGfvQT/view?usp=sharing) is a figure that visually illustrates the setup. So to summarize, the children are not differentiating between two possible graphs, but rather two possible causal overhypotheses, and within an overhypothesis, between three possible causal graphs. We also note that even if two structures are the same for the disjunctive and the conjunctive cases, the conditionals between these graphs are different.\nNotably, the training data involves different objects and relations than the data in the test phase. Nevertheless children both explore and learn differently depending on condition suggesting that they have abstracted something about the functional form of the relations from the training and generalized this to the objects in the test condition\nWith respect to the framing of the paper, we did not intend to frame the paper as a solution of how, precisely, to learn causal overhypotheses, but an exploration of how children and current methods explore the overhypothesis space. 
Since humans , and certainly children, are unlikely to be able to explicitly articulate their representations of causal overhypotheses, e employ the standard developmental psychology technique of analyzing their simpler actions and responses, and inferring their representations from those responses., using the same method as Lucas et al. 2010 and 2014. \n\n[1] Tenenbaum, J. B., Griffiths, T. L., & Niyogi, S. (2007). Intuitive theories as grammars for causal inference. Causal learning: Psychology, philosophy, and computation, 301-322.\n[2] Kemp, C., Perfors, A., & Tenenbaum, J. B. (2007). Learning overhypotheses with hierarchical Bayesian models. Developmental science, 10(3), 307-321.\n[3] Lucas, C., Gopnik, A., & Griffiths, T. (2010). Developmental differences in learning the forms of causal relationships. In Proceedings of the Annual Meeting of the Cognitive Science Society, 32.\n# Other\n\nWhile we have done our best to address the issues that are discussed in this rebuttal, your review suggests you had other concerns with the work that are not explicitly mentioned. Would you be able to share these concerns in more detail? We would appreciate the opportunity to address or otherwise consider these!\n\n# Experimental Script\n\nDuring the training phases the children see a video demonstration for how two blicket detectors work, and they are pre-recorded for consistency. We have attached the links here, but cropped out the portion where you can see the researcher giving instructions to keep the submission double-blind. We will also include this in the supplementary material for the camera-ready.\n\nGiven hypothesis space condition: \nPart 1: Training: \nhttps://drive.google.com/file/d/1Z8bLzXjHSdXQL-0ap3OcpSq4HDPuhU5D/view?usp=sharing \nAnd this one since the order is counterbalanced \nhttps://drive.google.com/file/d/1CGdxi_VBpVuiq-VNUtU_ogynSqkOwFfm/view?usp=sharing \n\n\nPart 2: Testing: \n“Look, I have a third blicket detector. It could work like the polka dot one, or it could work like the striped one. Can you figure out how it works and make the detector go, and which blocks make it go?” \n\n“Great! Is there something else you want to try?”\n\nRepeat this until they say “no” Stop them after 10 minutes.\n\nThen ask which objects are blickets by pointing to each one, then clicking the star button.\n“Is this object a blicket?” (point to leftmost)\n“Is this object a blicket?” (point to middle)\n“Is this object a blicket?” (point to rightmost) \n\n“Now click the shapes that will make the machine go.” \n\n“Now that you tried it, can you tell me how this machine works?” \n\n\nNot given hypothesis condition: \nPart 1: Training: \n\nhttps://drive.google.com/file/d/1DhuYXZ48BuuT92l2Iq29NAzxjLB6s5dV/view?usp=sharing \n\nPart 2: Testing: script is the same as in the GIVEN-HYPOTHESIS condition\n\n Global/general Response:\n```\nWe would like to thank all the reviewers for their valuable comments and feedback on our paper! While we respond to each reviewer’s specific comments below, we also wanted to address a more general theme across the reviews regarding comparisons between the children and the ideal observer models. In particular, we want to emphasize that our study makes it clear the difficulty in doing this, and that it highlights some of the limitations of existing models and algorithms for causal inference.\n\nFirst, one issue is in ensuring that the models and children have the same prior information about the space of hypotheses. 
The GIVEN-HYPOTHESIS condition was designed to be the best test for this, because we explicitly give children demonstrations of both conjunctive and disjunctive blicket detectors. Ideally, this would prime them to consider only those two overhypotheses, making the comparison to our ideal observer models (which also only consider two overhypotheses) more fair. In the NOT-GIVEN-HYPOTHESIS condition, we wanted to remove this control to see whether (and if so, how much), children’s exploration would differ from the case where the set of overhypotheses was constrained. This condition is quite challenging to model using conventional causal reasoning techniques, as in this case the space of overhypothesis is potentially very large and underspecified, and we did not expect the ideal observer models to capture behavior in this condition particularly well. To our surprise, we found that regardless of being given evidence of the hypothesis space or not, children seemed to be considering and exploring a broad set of hypotheses. We believe this openness to consider so many different possibilities is what underlies humans’ (and especially children's’) powerful and rapid causal inferences. The fact that it is not clear how to incorporate a similar sort of openness in ML models is a question worth investigating, and we believe is an important insight for the causal reasoning community.\n\nSecond, another issue is in ensuring that the models and children are optimizing for the same objective. This is a classic issue in cognitive psychology and developmental science in general: how do you get people to actually do the task you have in mind? In this study, we asked children to “figure out how the machine works”, which could be interpreted in a variety of different ways (e.g., identify which objects are blickets, make the machine go, discover the rule by which blickets make the machine go, etc.). Some of these hypotheses can be encoded more-or-less straightforwardly in an ideal observer model; for example, “make the machine go” can be translated into a reward function of “get 1 reward for turning the machine on”. Similarly, we also tried another straightforward option of maximizing information gain. However, our comparisons make it clear that these are not how children have interpreted the question: they often try many more combinations after successfully making the light go on, and do not always seem to try combinations that are informative (though they could perhaps be informative under a different hypothesis space, as related to the previous point). Thus it is not necessarily straightforward to encode what children are optimizing for into a reward function, highlighting another challenge with formalizing childrens’ causal reasoning in machine learning algorithms.\n\n```\n", "role": "user" }, { "content": "I'm grateful for the authors' response. They have adequately addressed the majority of my concerns, and I may raise the score accordingly.", "role": "assistant" } ]
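The rebuttal in the record above distinguishes causal overhypotheses (conjunctive/AND vs. disjunctive/OR functional forms) from specific causal structures (which objects are blickets), and enumerates six disjunctive structures (A, B, C, AB, AC, BC) and three conjunctive ones (AB, AC, BC). The following is a minimal Python sketch of that hypothesis space and of the detector semantics it implies; the object names, function names, and the pruning helper are illustrative assumptions, not code from the authors' environment.

```python
from itertools import combinations

OBJECTS = ("A", "B", "C")

def machine_lights(placed, form, blickets):
    """Detector semantics: disjunctive (OR) lights if any blicket is placed;
    conjunctive (AND) lights only if every blicket in the set is placed."""
    if form == "disjunctive":
        return len(set(placed) & set(blickets)) >= 1
    if form == "conjunctive":
        return set(blickets) <= set(placed)
    raise ValueError(f"unknown functional form: {form}")

def hypothesis_space():
    """Six disjunctive structures (A, B, C, AB, AC, BC) and three conjunctive ones
    (AB, AC, BC); single-object conjunctive structures are excluded as degenerate."""
    hyps = []
    for r in (1, 2):
        for combo in combinations(OBJECTS, r):
            hyps.append(("disjunctive", frozenset(combo)))
            if r == 2:
                hyps.append(("conjunctive", frozenset(combo)))
    return hyps

def consistent(hyps, observations):
    """Keep hypotheses that reproduce every observed (objects placed, machine lit) pair."""
    return [(form, b) for form, b in hyps
            if all(machine_lights(placed, form, b) == lit for placed, lit in observations)]

if __name__ == "__main__":
    hyps = hypothesis_space()
    print(len(hyps), "hypotheses")                        # 9 = 6 disjunctive + 3 conjunctive
    observations = [({"A", "B"}, True), ({"A"}, False)]   # two example interventions
    for form, blickets in consistent(hyps, observations):
        print(form, sorted(blickets))
```

The two example interventions prune the nine hypotheses down to the few that are consistent with both trials, which is the kind of disambiguation the exploration phase is meant to support.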
6GLEuG0d8i
CLeaR 2022 Conference
Reviewer_NM7t
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_NM7t, and the following are the paper you need to review:\n```<<6GLEuG0d8i>>```", "role": "user" }, { "content": "Review Content: Summary: This study investigate the acquisition of causal knowledge and causal overhypotheses through exploration, and how the overhypotheses influence exploration and acquisition of causal knowledge. The reviewer finds this work interesting and well-motivated by the introduction. Several points in the methods and analysis can use further clarification. The development of the web tool is a contribution to the community, however as the author noted the experiments and analysis presented is on the exploratory end. \nMain review: CLEAR Review: Learning Casual Overhypotheses through Exploration in Children and Computational Models\nThe reviewer has background in statistics and machine learning. The reviewer only have familiarity with reinforcement learning, and about 4 graduate level courses in cognitive science. The reviewer is not up to date on the literature in this domain, therefore the review focuses on the technical and statistical aspects.\n\nSpecific comments:\n\nFigure 3: Are the reported proportions averaged over all children? It might be interesting to show variability among the children. Also, it might be helpful to indicate what is the performance level of random guess. \n\nPage 7-8: It would be interesting to see the proportion of children that fully understand the conjunctive and disjunctive relationships (rather than simply identify what are the blinkets). I think identifying blinkets vs. non-blinkets is a different problem (what are the causal factors) compared to understand the conjunctive and disjunctive relations (what is the more precise the causal mechanism). Although some children collected enough evidence to be able to fully disambiguate the conjunctive vs. disjunctive, it does not mean they actually can disambiguate them.\n\nPage 9: \"We excluded causal structures such as A-con and B-con where no observations can distinguish between them.\" I am not sure if I understand this. Maybe we cannot distinguish A-con vs. A-dis? 
because they are the same thing?\n\nSection 5.1: I think one important question is: were the children specifically told that the goal is to figure out how the machine works? And how was that information communicated? The RL agent has a very specific cost function, whereas the children's cost function is not well defined, or possibly unconstrained. The authors touched on this, but the experimental setting for the children can be described with more detail. It would be interesting to see if the children's exploration/learning strategy changes if the task is described or incentivized differently.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Rebuttal\nThank you for taking the time to review our paper. We appreciate your comments! You raise some valid points and questions, which we address below.\n\nFigure 3: Yes, the reported proportions are averaged over all children and blickets (to compute the true positive rate) and non-blickets (to compute the false positive rate). We agree that variability amongst the children is a useful metric as well, and have updated the plot to include the standard errors here: https://drive.google.com/file/d/1IFWayLhisyDwTUZFTVhDFsjj-sitqRrb/view?usp=sharing. The performance level of random guess would be 50% (as each object is either blicket or not). \n\nPage 7-8: We completely agree that it would be interesting to see the proportion of children that fully understand the conjunctive and disjunctive relationships. We also agree that although some children collected enough evidence to be able to fully disambiguate the conjunctive vs. disjunctive, it does not necessarily mean they actually can disambiguate them. However, in practice it's very challenging to get an actual answer to this question. Unlike the single forced-choice-style question in [1], we collected both the children’s exploration behavior and asked the children questions about the machine in three different ways (see our response to Reviewer GTJ1 for the exact scripts): (1) We asked them verbally which objects are blickets; (2) we asked them to click the objects to activate the machine; and (3) we also asked them how the machine works. As is typical in child development data, the answers to these questions varied even across responses from the same child (e.g. some children knew how to activate the machine, but also said those same objects were not blickets!). Overall, we found the results were clearest when analyzing the simple forced-choice questions, but we will update the supplemental material with analyses of the other judgments, too. \n\nPage 9: Having a single object on the detector in the conjunctive setting is a degenerate case, as it requires at least two objects to light up the detector. Thus, we exclude conjunctive hypotheses with a single object. We apologize for the confusing wording and will clarify this in the revised paper.\n\nSection 5.1: Yes, the children were told to figure out how the machine works (and we have attached the full script that was read to the children during the initial training phase, see our response to Reviewer GTJ1). We agree that despite this prompt, the cost function guiding the children’s behaviors is not well defined like the cost function of an RL agent, and for us this was one of the interesting takeaways of this work! Our hope is that shedding light on this discrepancy can guide future work.
In particular, these data may help machine learning researchers to define cost functions that are more similar to those of children, and that may both help explain children’s impressive learning abilities and allow artificial agents to improve their own learning.. We also agree that demonstrating how children’s exploration behavior changes due to different incentives or prompts (such as, “identify which objects are blickets”, or simply “make the machine go”) would be an interesting experiment. In this work we study this to a small degree - that is, our two conditions GIVEN-HYPOTHESIS and NOT-GIVEN-HYPOTHESIS give slightly different demonstrations (Figure 2) and as a result we see an effect on the children’s behavior (Figure 3 and 4). Further investigation into this question is an interesting avenue of future work one we are currently pursuing.\n\n\n[1] Lucas, C., Gopnik, A., & Griffiths, T. (2010). Developmental differences in learning the forms of causal relationships. In Proceedings of the Annual Meeting of the Cognitive Science Society (Vol. 32, No. 32).\n\n Global/general Response:\n```\nWe would like to thank all the reviewers for their valuable comments and feedback on our paper! While we respond to each reviewer’s specific comments below, we also wanted to address a more general theme across the reviews regarding comparisons between the children and the ideal observer models. In particular, we want to emphasize that our study makes it clear the difficulty in doing this, and that it highlights some of the limitations of existing models and algorithms for causal inference.\n\nFirst, one issue is in ensuring that the models and children have the same prior information about the space of hypotheses. The GIVEN-HYPOTHESIS condition was designed to be the best test for this, because we explicitly give children demonstrations of both conjunctive and disjunctive blicket detectors. Ideally, this would prime them to consider only those two overhypotheses, making the comparison to our ideal observer models (which also only consider two overhypotheses) more fair. In the NOT-GIVEN-HYPOTHESIS condition, we wanted to remove this control to see whether (and if so, how much), children’s exploration would differ from the case where the set of overhypotheses was constrained. This condition is quite challenging to model using conventional causal reasoning techniques, as in this case the space of overhypothesis is potentially very large and underspecified, and we did not expect the ideal observer models to capture behavior in this condition particularly well. To our surprise, we found that regardless of being given evidence of the hypothesis space or not, children seemed to be considering and exploring a broad set of hypotheses. We believe this openness to consider so many different possibilities is what underlies humans’ (and especially children's’) powerful and rapid causal inferences. The fact that it is not clear how to incorporate a similar sort of openness in ML models is a question worth investigating, and we believe is an important insight for the causal reasoning community.\n\nSecond, another issue is in ensuring that the models and children are optimizing for the same objective. This is a classic issue in cognitive psychology and developmental science in general: how do you get people to actually do the task you have in mind? 
In this study, we asked children to “figure out how the machine works”, which could be interpreted in a variety of different ways (e.g., identify which objects are blickets, make the machine go, discover the rule by which blickets make the machine go, etc.). Some of these hypotheses can be encoded more-or-less straightforwardly in an ideal observer model; for example, “make the machine go” can be translated into a reward function of “get 1 reward for turning the machine on”. Similarly, we also tried another straightforward option of maximizing information gain. However, our comparisons make it clear that these are not how children have interpreted the question: they often try many more combinations after successfully making the light go on, and do not always seem to try combinations that are informative (though they could perhaps be informative under a different hypothesis space, as related to the previous point). Thus it is not necessarily straightforward to encode what children are optimizing for into a reward function, highlighting another challenge with formalizing childrens’ causal reasoning in machine learning algorithms.\n\n```\n", "role": "user" }, { "content": "I appreciate the authors' reply. The reply answers almost all of my questions. I reaffirm my positive endorsement for this paper.", "role": "assistant" } ]
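The general response repeated in this record mentions two candidate objectives for the ideal observer models: a reward of 1 for turning the machine on, and maximizing information gain. Below is an illustrative sketch of the second objective over the same toy hypothesis space as in the earlier sketch (uniform prior, deterministic detector); it is a reconstruction for intuition, not the authors' ideal observer code, and every name in it is an assumption.

```python
from itertools import combinations
from math import log2

OBJECTS = ("A", "B", "C")

# Candidate hypotheses: (functional form, blicket set) pairs -- 6 disjunctive + 3 conjunctive.
HYPS = ([("disjunctive", frozenset(c)) for r in (1, 2) for c in combinations(OBJECTS, r)]
        + [("conjunctive", frozenset(c)) for c in combinations(OBJECTS, 2)])

def lights(placed, form, blickets):
    # OR detector: any placed blicket activates it; AND detector: all blickets must be placed.
    return bool(placed & blickets) if form == "disjunctive" else blickets <= placed

def entropy(weights):
    total = sum(weights)
    return -sum(w / total * log2(w / total) for w in weights if w > 0)

def expected_information_gain(placed, prior=None):
    """Expected entropy reduction over HYPS from observing whether `placed` lights the machine."""
    prior = list(prior) if prior is not None else [1.0] * len(HYPS)
    h_before = entropy(prior)
    gain = 0.0
    for outcome in (True, False):
        post = [p for p, (form, b) in zip(prior, HYPS) if lights(placed, form, b) == outcome]
        if post:
            gain += (sum(post) / sum(prior)) * (h_before - entropy(post))
    return gain

if __name__ == "__main__":
    for r in (1, 2, 3):
        for combo in combinations(OBJECTS, r):
            placed = frozenset(combo)
            print(sorted(placed), round(expected_information_gain(placed), 3))
```

Under this toy space, placing all three objects yields zero expected information gain, which illustrates why an information-gain-optimal agent and a child trying "many more combinations" can look very different.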
7UwoSnMDXWE
CLeaR 2022 Conference
Reviewer_Y9XU
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_Y9XU, and the following are the paper you need to review:\n```<<7UwoSnMDXWE>>```", "role": "user" }, { "content": "Review Content: Summary: The paper considers representation identification in the setting where latent factors generate observations and labels for multiple tasks, and the latent factors are mutually independent. Within somewhat additional assumptions (compared with existing works), authors show that disentanglement (up to permutation and scaling) is possible in theory and also by experiment.\nMain review: Overall, the paper achieves the identification goal by placing somewhat different (but possibly strong) conditions on the data generating process, so the identification becomes somewhat easier. Nonetheless, it should be a good addition to the current literature. \n\nI have some minor questions regarding the definition of *tasks*. \n-\tIn particular, what does it mean by “different tasks” and how they would affect the identity proof? For instance, if some tasks are redundant, then $k=d$ tasks may not be enough. There should be an explicit condition on how diverse the tasks are.\n-\tIn the first two results, the theorem says the number of tasks is exactly equal to the number of latent confounders, i.e., $k=d$. What if $k>d$? How do we know the condition in practice? And how do we select $d$ in practical applications?\n.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Clarification regarding the relationship between k and d\nWe thank the reviewer for their encouraging feedback! We address the concerns raised by them below.\n\n- **Task Diversity:** The diversity of tasks is captured by the constraint that $\\Gamma$ is invertible (Assumption 2). Hence, we cannot have redundant tasks for the identification of the latent variables. However, we could have a poorly conditioned invertible matrix, which would lead to slower numerical optimization in linear ICA. We will update our paper with a discussion regarding this.\n\n- **k > d scenario:** When the number of tasks are greater than the data dimension ($k > d$), we could select a subset $S^{'}$ of tasks such that $|S^{'}|=d$. 
Then we can proceed in a similar fashion as in Theorem 1. This question arises commonly in the linear ICA literature; selecting a subset of tasks is standard practice. There are other ways to address this issue as well that involve PCA; we will also elaborate on these in the paper. Also, we have shown in our paper (Theorem 3) that we can identify the latent variables when $k=1$ under certain conditions, which implies that, in general, the case of $k<d$ can be handled. Hence, we aim to be agnostic to the exact relationship between $k$ and $d$. \n\n- **Selecting data dimension d:** Our approach comprises two phases: first, training under the ERM objective, followed by linear ICA on the representation. So we can treat $d$ as a hyperparameter and tune it based on the linear ICA objective value used in the latter part of our procedure. Although assuming the value of $d$ is known is a common practice in previous works (Zimmerman et al.), we thank the reviewer for this suggestion and would explore this further with our validation strategy proposed above.", "role": "user" }, { "content": "I value the authors' response. It adequately addresses the points I raised. I maintain my positive recommendation for this work.", "role": "assistant" } ]
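The reply above describes a two-phase procedure (ERM training, then linear ICA on the learned representation) and ties task diversity to the invertibility of $\Gamma$. The following is a minimal, fully linear and noiseless simulation of that pipeline with $k=d$ tasks and non-Gaussian latents, assuming a recent scikit-learn for the FastICA step; it is a sketch of the idea, not the authors' ERM-ICA implementation, and the variable names are illustrative.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n, d = 20000, 3                      # samples, latent dimension (= number of tasks k)

# Toy data generating process: independent non-Gaussian latents Z,
# observations X = Z A^T, task labels Y = Z Gamma^T with invertible A and Gamma.
Z = rng.laplace(size=(n, d))
A = rng.normal(size=(d, d))          # invertible mixing (with probability 1)
Gamma = rng.normal(size=(d, d))      # invertible task matrix (diverse tasks)
X = Z @ A.T
Y = Z @ Gamma.T

# Phase 1: "ERM" -- here ordinary least squares of Y on X; the fitted d-dimensional
# predictor output plays the role of the learned representation (a mixture of the latents).
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
Phi = X @ W                          # approximately Z Gamma^T

# Phase 2: linear ICA on the representation recovers Z up to permutation and scaling.
Z_hat = FastICA(n_components=d, whiten="unit-variance", random_state=0).fit_transform(Phi)

# Check identification: each recovered component should align with exactly one latent.
corr = np.corrcoef(Z_hat.T, Z.T)[:d, d:]
print(np.round(np.abs(corr), 2))     # close to a permutation matrix
```

The printed absolute correlation matrix should be close to a permutation matrix, which is the "identification up to permutation and scaling" discussed in the thread, here only in a linear toy version of the setting.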
7UwoSnMDXWE
CLeaR 2022 Conference
Reviewer_N1Q2
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_N1Q2, and the following are the paper you need to review:\n```<<7UwoSnMDXWE>>```", "role": "user" }, { "content": "Review Content: Summary: This paper investigate the Nolinear Independent Component Analysis problem in a new setting, where latent factors cause auxiliary information, rather than auxiliary information cause latent factors. The authors prove that the latent factors can be recovered up to permutation and scaling with some excess assumption. When the dimension of label is at least equal to the dimension of factors, the assumption is weak. When the dimension of label is one, the assumption is stronger. To learn the latent factors from the observations and labels, the authors propose IC-ERM which constrains the representaion layer are mutually independent. To implement it, the authors propose ERM-ICA.\nMain review: The Nonlinear ICA problem is interesting. The authors investigate this problem in a natural setting, which is different from the previous. The theoretical result is complete and covers many situations. The experimental results confirm the effectiveness of the proposed ERM-ICA.\nThere are some confusing statement. \nIn Page 8, I think \"If the predictor is \\theta^{\\top}\\Phi(X)=1^{\\top}(\\Phi(X)\\odot\\frac{1}{\\theta})\" should be \"If the predictor is \\theta^{\\top}\\Phi(X)=1^{\\top}(\\Phi(X)\\odot\\theta)\". \nIn Definition 2, I think the \\Phi(X) should be V.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Correcting the typos in equation and definition\nWe thank the reviewer for their encouraging feedback! We provide clarifications about the specific confusing statements raised by the reviewer and will update the paper later regarding any typos in general.\n\n- We will correct the equation on page 8 to component wise mutiplication of $\\Phi(X)$ and $\\theta$; $\\Phi(X) \\odot \\theta = [\\theta_1*\\Phi_1(X), \\cdots, \\theta_d * \\Phi_d(X)] $\n\n- We will update the typo in Definition 2, it should be $V$ instead of $\\Phi(X)$.", "role": "user" }, { "content": "I value the authors' response. The authors have answered almost all of my concerns. 
I continue to recommend this work for acceptance.", "role": "assistant" } ]
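The corrected identity in the reply above, $\theta^{\top}\Phi(X) = 1^{\top}(\Phi(X)\odot\theta)$ with $\Phi(X)\odot\theta = [\theta_1\Phi_1(X), \cdots, \theta_d\Phi_d(X)]$, simply folds the linear head into a component-wise rescaling of the representation. A tiny numeric check with illustrative values:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
theta = rng.normal(size=d)           # linear prediction head theta
phi_x = rng.normal(size=d)           # representation Phi(X) for one example

lhs = theta @ phi_x                  # theta^T Phi(X)
rhs = np.ones(d) @ (phi_x * theta)   # 1^T (Phi(X) "odot" theta), component-wise product
print(np.isclose(lhs, rhs))          # True: the head is absorbed into a rescaled representation
```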
7UwoSnMDXWE
CLeaR 2022 Conference
Reviewer_xwJZ
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_xwJZ, and the following are the paper you need to review:\n```<<7UwoSnMDXWE>>```", "role": "user" }, { "content": "Review Content: Summary: This paper extends ERM paradigm with ICA under the setting of latents-generate-label process. Based on the assumption that all components of the representation layer are mutually independent, the authors provide proofs and experimental evidences to show the effectiveness of their method.\nMain review: This paper is well written and easy to understand. I think the idea of constraining representations with i.i.d assumption would bring insights to the community.\n\n1. It would be interesting to see some label-generate-latent rivals in the experiments. \n2. typos in the Proof sketch of Theorem 3: 'Baseed' -> 'Based'\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\n", "role": "assistant" }, { "content": "Title: Different data generation process in our work and previous works\nWe thank the reviewer for their encouraging feedback! We provide clarification regarding the questions raised by them below.\n\n* We highlight that our data generation process assumes that the latent variables cause the labels. Hence, we cannot compare with the approaches that work under the assumption that the labels cause the latent variables as it would not be a fair comparison. Our approach and the previous approaches are not directly comparable, hence based on the underlying data generation process we should decide which approach is suitable for identifying the latent variables. Inferring what model explains the data better (latent variables $\\rightarrow$ labels versus labels $\\rightarrow$ latent variables), followed by identification using the right approach is an important problem to be addressed in a separate work.\n\n* We will correct the typo in Theorem 3 and any other typos in the paper.", "role": "user" }, { "content": "I acknowledge the authors' response. The reply answers almost all of my questions. I remain supportive of my positive recommendation for this paper.", "role": "assistant" } ]
83mo-LrHhR
CLeaR 2022 Conference
Reviewer_KuP2
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_KuP2, and the following are the paper you need to review:\n```<<83mo-LrHhR>>```", "role": "user" }, { "content": "Review Content: Summary: review\nMain review: The paper studies the hypothesis testing problem in learning\ntreatment-assignment policies. Specifically, they argue that we\nshould optimize for an assignment with maximal expected probability of\npassing the test. The goal is to help a drug company to get a drug\napproved, or help a researcher to find significantly positive\ntreatment effect. The post-selection issue is handled by sample\nsplitting.\n\nThis goal, though seemingly a little eye-brow raising at first, aligns\nwith active learning or Bayesian optimization, where we want to find\nthe data points that are most helpful for reducing the desired\nprediction or optimization loss.\n\nThe optimization objective is straightforward, though optimizing for\nit is challenging. In particular, the target of inference is a\nfunction, which could be hard to handle if it relies on many or\nhigh-dimensional covariates. Thanks to this concern, the paper\ndeveloped a solution based on submodular optimization, which is\ninteresting, though I am not an expert in submodular optimization to\njudge its soundness.\n\n\nThe derivation of the paper seems sound. However, one thing that could\nbe better clarified is its connection to Bayesian optimization.\nBayesian optimization seems to be solving a related and possibly\nharder problem. Striping away the sequential acquisition nature of\nBayesian optimization, I can think of Bayesian optimization as finding\nthe set of data points such that the resulting target (which is the\nprobability of passing the test here) is maximized. I can think of\nfinding treatment policy as a form of selection data points (e.g. we\nhave pool of data points with the counterfactual outcomes of all\npossible treatment configurations.) In this framing, does the proposed\nalgorithm outperform standard Bayesian optimization algorithm (using\nonly one acquisition step of Bayesian optimization)? How is the\nproposed algorithm better? 
I think the paper could benefit from\nclarifying this point.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to KuP2\nThank you for your comments and your suggestion to relate our work to Bayesian optimization. We agree that it may be interesting to consider Bayesian approaches for the target we motivate and optimize for, but believe that a more thorough treatment of Bayesian optimization would be beyond the scope of the current draft.\n\nOne specific Bayesian approach that targets maximizing the power of the (frequentist) test in the hold-out could take the following form: From the training data, form a posterior about the population distribution. Then obtain an optimal assignment given the features in the test data by maximizing the posterior expectation over our objective (that is, rejecting the test).\nWe note that such a procedure would have to form a posterior about the distribution of outcomes conditional on features that goes beyond conditional means, since our objective takes a non-linear form (and depends, in particular, on the conditional variances). We further would then face the challenge of optimizing the posterior expectation over all binary assignments. So while we agree that Bayesian optimization could be an alternative to the regularized greedy optimization we currently perform to find an assignment, we believe that such an implementation is non-trivial.\n\nAs an alternative interpretation, it could also be relevant to discuss the performance of Bayesian treatment assignment strategies, even if they do not directly target the power of a (frequentist) hypothesis test on the hold-out data.\n\nPlease let us know in case you were thinking of a simpler procedure, or if we misunderstood your suggestion.", "role": "user" }, { "content": "I acknowledge the authors' response. The response addresses most of my concerns. I keep my recommendation for this paper to be accepted.", "role": "assistant" } ]
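The remark in this exchange that the objective is non-linear and depends on conditional variances can be made concrete with a generic power approximation; the paper's exact hold-out statistic may differ, so the display below is purely illustrative.

```latex
For a one-sided level-$\alpha$ test of $H_0 : V(a) \le V_0$ on a hold-out
sample of size $m$, a standard normal approximation gives
\[
  \Pr(\text{reject})
  \;\approx\;
  \Phi\!\left(\frac{\sqrt{m}\,\bigl(V(a)-V_0\bigr)}{S(a)} - z_{1-\alpha}\right),
\]
where $V(a)$ is the expected outcome under assignment $a$, $V_0$ the value of
the reference policy, $S^2(a)$ the asymptotic variance of the hold-out
estimate of $V(a)-V_0$, and $z_{1-\alpha}$ the standard normal quantile.
Because $S(a)$ enters the denominator, the power-maximizing assignment trades
off the size of the effect against the outcome variance of the units it
treats, which is what separates this target from plain value maximization.
```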
83mo-LrHhR
CLeaR 2022 Conference
Reviewer_PGmN
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_PGmN, and the following are the paper you need to review:\n```<<83mo-LrHhR>>```", "role": "user" }, { "content": "Review Content: Summary: The paper proposes a heuristic method for finding subgroups of a population for which a treatment is most beneficial, using a decision tree with splits determined by statistical tests (e.g., t tests) applied to a hold-out sample. \nMain review: The originality seems to be in using the tests as split criteria. I am not familiar with the literature on competing methods but this seems well done.\nRating: 8 accept, good paper\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to PGmN\nThank you for your comment.\n\nWe agree that the main innovation in our approach is that model selection itself (here, the splits of the tree) as well as the assignment given the model are driven by the objective of rejecting a hypothesis test on the hold-out data.", "role": "user" }, { "content": "I'm grateful for the authors' response. It satisfactorily addresses the major issues I had. I stand by my favorable recommendation for this paper.", "role": "assistant" } ]
83mo-LrHhR
CLeaR 2022 Conference
Reviewer_EQsk
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_EQsk, and the following are the paper you need to review:\n```<<83mo-LrHhR>>```", "role": "user" }, { "content": "Review Content: Summary: This paper works on the problem of using batch data to estimate a policy that on average has a high probability of being significantly better than a baseline policy. They provide asymptotic and finite sample guarantees for their approach.\nMain review: High level overview of comments:\n-\tI would suggest a more through discussion of relevant literature, especially the literature on policy improvement (e.g. https://proceedings.mlr.press/v37/thomas15.html) and pure exploration (e.g. https://arxiv.org/abs/1507.04523), which are also concerned with ensuring estimation guarantees. Discussing how your work relates to this literature would help the reader better put your work in context.\n-\tAdditionally, the writing can be clearer early on (i) what problem you are approaching (e.g., online vs offline setting, defining training set) and (ii) why an optimal policy for your problem is different from the optimal policy for minimizing regret.\n\nMore detailed comments:\n\n-\tRegarding the abstract: “Yet such algorithms typically optimize expected outcomes without taking into account that treatment assignments are frequently subject to hypothesis testing.” What does hypothesis testing of the treatment assignments mean? I thought in a randomized control trial with binary treatments the treatment assignments are most commonly selected with probability 0.5 each---why are there a hypothesis tests on these treatment assignments?\n-\tWhey you say “we assume that a policy obtained from training data only gets implemented when a hypothesis test…” what do you mean by implemented here? 
How is implementation related to the training data?\n-\tFrom reading your introduction I am confused about whether you are (i) using batch training data to estimate what policy you want to use to collect more data in order to maximize the probability of having a significant hypothesis test or (ii) you are in an online setting in which you are changing treatment assignment probabilities (in reference to the sentence “existing algorithms for the assignment of treatment typically optimize welfare without taking hypothesis testing into account”, which makes me think you in an online setting). It would be helpful for the reader to better understand at a high-level what problem setting you are in.\n-\tIn your related work section, you do not mention literature on best arm identification and algorithms for pure exploration, which seem highly relevant. For example, see https://homes.cs.washington.edu/~jamieson/resources/bestArmSurvey.pdf, https://arxiv.org/abs/1507.04523, https://arxiv.org/abs/0802.2655, https://arxiv.org/pdf/2101.08534.pdf, https://maxkasy.github.io/home/files/papers/adaptiveexperimentspolicy.pdf . Is your method a non-online version of best arm identification? This literature is not concerned just with “optimizing welfare”, but very much take hypothesis testing into account.\n-\tIn your introduction you say that you compare your algorithm to “algorithms that target maximal average outcome by assigning based on positive estimated treatment effects.” I wonder if there are better baselines to compare to because this is a bit like comparing a best arm identification algorithm to a regret minimizing bandit algorithm on the task of best arm identification. Would also comparing to Dwivedi et al. (2020) and/or Leqi and Kennedy (2021) make sense? You say that these works don’t directly target out-of-sample power as an objective but their methods could “lead to better out-of-sample power of statistical tests”, which makes me think they would be reasonable to compare to.\n-\tAs I read into section 2, I want to confirm my understanding of the problem setup. I understand that you have n i.i.d. observations to train on and a hold-out set that you will eventually use to perform a hypothesis test. You are trying to estimate some optimal policy a^* using estimated policy \\hat{a}, which is most likely to be significant if we compare treatment \\hat{a} to the baseline treatment according to the hypothesis test on the held-out data. Based on my understanding, this is different from the problem of estimating an optimal policy regret minimizing policy from batch data, because the optimal policy for your problem will be a function of the noise variance in each state (which relates to the power of a test), while this does not matter for the problem of estimating optimal regret minimizing policies. If this is the case, it would be helpful to provide this intuition to the reader early on to prevent confusion.\n-\tThe presentation of Algorithm2 is hard to read. 
It is not in the typical format of an algorithm, but instead one long paragraph.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to EQsk\nThank you for engaging with our draft and for your extensive comments, suggestions, and questions.\n\n> Additionally, the writing can be clearer early on (i) what problem you are approaching (e.g., online vs offline setting, defining training set) and (ii) why an optimal policy for your problem is different from the optimal policy for minimizing regret.\n\nWe appreciate the suggestion for additional clarity. Would the below formulation in response to your two high-level points be helpful?\n\n(i) We approach an offline learning problem with two datasets drawn from the same distribution. Treatment is assigned randomly in both datasets.\nOn the first dataset (training), we estimate an assignment policy that maps features to treatment.\nOn the second dataset (hold-out), we run a hypothesis test to reject the null hypothesis that this assignment policy is no better than some reference policy (such as assigning everybody to control).\nWe propose a set of algorithms for the first step (training) that maximizes the probability of rejecting the hypothesis test in the second step.\nThis procedure is motivated by a desire to find an assignment policy with significant evidence for its efficacy, for example when regulatory constraints require the rejection of a hypothesis test.\n\n(ii) The optimal policy for maximizing the probability of rejection (power) of the hold-out test differs from optimal policies that maximize the policy value or minimize regret. This is because the power-maximizing test trades off the policy value with the variance in its estimation on the hold-out set, leading us to prefer not to assign units to treatment that have very noisy outcomes, even if their estimated treatment effect is positive. While some other objectives may also involve estimation noise, we are not aware of other procedures that trade off policy value and variance in the same way.\n\n> [...] What does hypothesis testing of the treatment assignments mean? I thought in a randomized control trial with binary treatments the treatment assignments are most commonly selected with probability 0.5 each---why are there a hypothesis tests on these treatment assignments?\n\nThank you for the clarification question. What we mean here (and should have written) is that we frequently test whether the value (expected outcome) achieved by an assignment policy is above some outside option (such as assigning everybody to treatment). So we are not evaluating the random assignment in the data. Instead, we estimate the policy value on the hold-out data and run a hypothesis test there.\n\n> [...] what do you mean by implemented here? How is implementation related to the training data?\n\nThe assignment policy is estimated on the training data. If it passes a hypothesis test on the hold-out data, we imagine that the policy is finally implemented on new data, that is, new units from the population distribution are assigned according to the policy in that case.\n\n> From reading your introduction I am confused about whether you are (i) using batch training data [...] or (ii) you are in an online setting [...]\n\nThanks for this suggestion. 
We are talking about an offline setting where training and hold-out data are given, both are drawn from the same distribution, and treatment is assigned randomly in both. This models a case of existing data from a randomized controlled trial (e.g. in medicine), which is our motivation.\n\n> In your related work section, you do not mention literature on best arm identification and algorithms for pure exploration, which seem highly relevant. [...] Is your method a non-online version of best arm identification? [...]\n\nThank you for suggesting additional references. We differ from best-arm identification not only in that we consider a non-online setting, but also in the specific objective we use, which is maximizing the power of a frequentist hypothesis test on the hold-out. We agree that alternative (especially online) approaches also take uncertainty into account, although to our knowledge they do not directly target the probability of rejection of a hold-out test.\n\n> [...] Would also comparing to Dwivedi et al. (2020) and/or Leqi and Kennedy (2021) make sense? [...]\n\nWe agree, thank you for this suggestion. We chose what we perceived to be the most common methods for treatment assignment as comparisons, and agree that the methods you suggest may be more relevant for the objective at hand.\n\n> As I read into section 2, I want to confirm my understanding of the problem setup. [...] If this is the case, it would be helpful to provide this intuition to the reader early on to prevent confusion.\n\nWe agree with your understanding, and appreciate the suggestion.\n\n> The presentation of Algorithm2 is hard to read. It is not in the typical format of an algorithm, but instead one long paragraph.\n\nThank you for catching this. Both algorithms are missing line breaks that should be there, which seems to be an issue with LaTeX.", "role": "user" }, { "content": "Title: Re: Rebuttal\nThank you for your responses and clarifications! \n\n\"Would the below formulation in response to your two high-level points be helpful?\" Yes, I like the explanations you wrote above.\n\nI have raised my score accordingly.\n\n", "role": "assistant" } ]
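To make the value-versus-variance trade-off described in this thread concrete, the sketch below greedily selects covariate groups to treat so as to maximize a t-statistic-style proxy for the hold-out test, and contrasts it with the plug-in rule "treat every group with a positive estimated effect". This is not the paper's algorithm; the group structure, effect estimates, and variances are invented for illustration.

```python
# Greedy selection of groups to treat under a power-style proxy objective.
import numpy as np

rng = np.random.default_rng(1)
G = 8                                      # number of covariate groups
tau_hat = rng.normal(0.3, 0.5, size=G)     # estimated treatment effect per group
var_hat = rng.uniform(0.5, 4.0, size=G)    # estimated outcome variance per group
n_g = np.full(G, 50)                       # hold-out observations per group

def t_proxy(treated):
    """t-statistic-style proxy for the policy value of treating these groups."""
    if not treated:
        return 0.0
    idx = list(treated)
    effect = np.sum(n_g[idx] * tau_hat[idx]) / n_g.sum()
    se = np.sqrt(np.sum(n_g[idx] * var_hat[idx])) / n_g.sum()
    return effect / se

# Greedy: add the group that most improves the proxy, stop when no gain.
treated, current = set(), 0.0
while True:
    gains = {g: t_proxy(treated | {g}) for g in range(G) if g not in treated}
    if not gains:
        break
    best = max(gains, key=gains.get)
    if gains[best] <= current:
        break
    treated.add(best)
    current = gains[best]

plug_in = {g for g in range(G) if tau_hat[g] > 0}   # "treat if estimate > 0"
print("power-proxy choice:", sorted(treated), "t =", round(current, 2))
print("plug-in choice:    ", sorted(plug_in), "t =", round(t_proxy(plug_in), 2))
```

The point of the contrast is only that the proxy rule can leave out groups whose estimated effect is positive but whose outcomes are noisy, whereas the plug-in rule always includes them.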
8X6cWIvY_2v
CLeaR 2022 Conference
Reviewer_DoYa
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_DoYa, and the following are the paper you need to review:\n```<<8X6cWIvY_2v>>```", "role": "user" }, { "content": "Review Content: Summary: The paper proposes a new method to identify causal direction between two variables exploiting the condition that the conditional distribution of effect given cause in invariant in shape.\nMain review: Originality: The paper provides a new idea and method. Exploiting stability in shape was not exploited before to detect causal relationships, as far as I know (but Fonollosa 2019). The notion of conditional divergence is new.\n\nSignificance: The paper addresses an important and relevant problem for the CLeaR community. It is likely that it will also have an impact outside the CLeaR community.\n\nThe proposed approach is technically sound. However, it should be mentioned that some SCMs may not satisfy the assumption of stable causal mechanism. The possible violations of this assumption should be better highlighted. \n\nTechnical quality: The proposed approach is technically sound.\n\nClarity: The paper is clearly written and well-organised.\nRating: 9 strong accept, excellent work\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Reply to Reviewer DoYa\nWe sincerely thank Reviewer DoYa for your time and insightful comments, as well as a supportive assessment for our work. We agree that possible violations of our assumption should be emphasized and we will consider presenting in the next version a wide range of examples for causal mechanism families that match and do not match our assumption.", "role": "user" }, { "content": "I appreciate the authors' reply. It adequately addresses the points I raised. I keep my recommendation for this paper to be accepted.", "role": "assistant" } ]
8X6cWIvY_2v
CLeaR 2022 Conference
Reviewer_oj7Q
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_oj7Q, and the following are the paper you need to review:\n```<<8X6cWIvY_2v>>```", "role": "user" }, { "content": "Review Content: Summary: The main contribution of the paper is an introduction into a novel criterion, Conditional Divergence based Causal Inference (CDCI), to determine the causal direction of effects, i.e., whether an effect is transmitted from X to Y or vice versa, using (nonexperimental) observational data. The core idea of CDCI is that, under a causal mechanism of the form X → Y, the conditional distribution of the outcome (the effect) given the predictor (the cause), P(Y | X), will be independent of the marginal distribution of X, P(X). From this it follows that the shape of the conditional probability P(Y | X = x) can be expected to be invariant for different values x. Stability of P(Y | X = x) is quantified via divergence measures of the conditional distribution. In essence, the model with the smaller divergence of conditional distributions is selected. Overall, five variations of CDCI (with different distance measures) are proposed and their effectiveness is evaluated using synthetic and real-world data. Further, CDCI variants are contrasted with four state-of-the-art causal discovery methods. Results suggest that, in the majority of cases, CDCI outperformed the four selected state-of-the-art causal discovery methods. \nMain review: The paper introduces a novel approach to discover causal effect directionality in the bivariate case. Overall, the paper is clearly written and the presented approach can be considered of high relevance for the audience of causal machine learning and related areas. However, there are some aspects (summarized below) that require additional attention prior to recommending acceptance of the paper in the Proceeding of Machine Learning Research. \n\np. 2 (end of introduction section): For sake of completeness, I suggest including a class of causal discovery algorithms proposed by Peters and Bühlmann (2014) which identifies causal structures using an equal variance assumption (see also Chen et al., 2019 and Park and Kim, 2020). \n\npp. 
3 & 4: The authors state that the functional causal model under consideration allows the presence of dependence between the cause (X) and the error of the causal model. Therefore, the functional causal model allows the presence of hidden confounders (Z). Here, it is currently unclear whether the presented algorithm is able to detect the true causal direction despite the presence of confounding or whether the algorithm is simply able to distinguish an unconfounded causal model (e.g., X → Y) from a confounder model (similar to previous approaches discussed, e.g., by Wiedermann and Li, 2018 and Maeda and Shimizu, 2020; see also Wiedermann & von Eye, 2016). Formal model definitions presented on p. 3 suggest the former. However, on p. 4, the authors state that only three classes are considered (X → Y, Y → X and a non-causal class X ← Z → Y) which suggests the latter. Further clarifications on the exact nature of the confounder model that can be present in CDCI are needed. \n\np. 4: The introductory section on the asymmetry between cause and effect seems incomplete for two reasons: First, in discussing the historical origins of cause-effect asymmetry modeling, the authors do not mention the work by Dodge and Rousson (2000, 2001) who were, to the best of my knowledge, among the first ones that suggested to determine causation from distributional features of observational data. Second, the section does not discuss copula-based approaches to determine cause-effect asymmetry (see, e.g., Kim & Kim, 2014; Sungur, 2005). In contrast to the presented algorithms, copula-based approaches may have the advantage that one does not have to select between causally competing models (such as X → Y or Y → X). Instead, dependence coefficients are estimated for both causal directions allowing to detect potential reciprocal relations. The present paper would benefit from incorporating these perspectives. \n\nOn a related note, the authors currently focus on the two (unidirectional) causal models, X → Y and Y → X, and a non-causal (confounder) model (X ← Z → Y) to explain the association between X and Y. In practice, however, at least two further explanations exist: The presence of a collider and the presence of a reciprocal relation. Can one expect that CDCI performs acceptably under these causal mechanisms as well? \n\np. 6, In the equation below Eq (1), what does the superscript (k) refer to here? Please specify. \n\npp. 6 & 7: The role of hyperparameters, in particular, the degree of discretization, should be discussed early on when introducing the divergence based causal score and the different variants of CDCI. \n\np. 10: Currently it is unclear why the authors selected ANM, CDS, IGCI, and RECI as competitors to the proposed CDCI approach. The competing methods differ in their underlying assumptions and in the utilized criteria to discern decisions concerning the causal direction of effect. Here, additional clarifications are needed. Also, the authors should discuss whether divergent assumptions of the competing methods may have unfavorably impacted their performance in the synthetic and real-world datasets. \n\nMinor Issues: \n\np. 2, 1st paragraph, 2nd line from below: The authors mention that advances in causal machine learning enable one to detect causal structures even with “…unstructured data”. What do the authors mean with “unstructured”? Non-experimental/observational data? Please clarify. \n\np. 
4: The authors state that the proposed method is able to classify a bivariate relation into three main categories and list four potential outcomes (X → Y, Y → X, non-causal relation, and independence). Please clarify. \n\np. 4, 2nd line from below: The authors mention that RECI is able to detect causal structures in “… the presence of a functional form”. It is unclear whether the authors refer to “knowledge” of the true functional form or the presence of a non-linear functional form. Please clarify. \n\np. 12 (1st line of Conclusion section): Please use “that a causal mechanism” instead of “that causal mechanism”. \n\nReferences: \n\nChen, W., Drton, M., & Wang, Y. S. (2019). On causal discovery with an equal-variance assumption. Biometrika, 106(4), 973–980. https://doi.org/10.1093/biomet/asz049\nDodge, Y., & Rousson, V. (2000). Direction dependence in a regression line. Communications in Statistics-Theory and Methods, 29(9–10), 1957–1972. https://doi.org/10.1080/03610920008832589\nDodge, Y., & Rousson, V. (2001). On asymmetric properties of the correlation coefficient in the regression setting. The American Statistician, 55(1), 51–54. https://doi.org/10.1198/000313001300339932\nKim, D., & Kim, J.-M. (2014). Analysis of directional dependence using asymmetric copula-based regression models. Journal of Statistical Computation and Simulation, 84(9), 1990–2010. https://doi.org/10.1080/00949655.2013.779696\nMaeda, T. N., & Shimizu, S. (2020). Causal discovery of linear non-Gaussian acyclic models in the presence of latent confounders. ArXiv: 2001.04197.\nPark, G., & Kim, Y. (2020). Identifiability of Gaussian linear structural equation models with homogeneous and heterogeneous error variances. Journal of the Korean Statistical Society, 49(1), 276–292. https://doi.org/10.1007/s42952-019-00019-7\nPeters, J., & Bühlmann, P. (2014). Identifiability of Gaussian structural equation models with equal error variances. Biometrika, 101(1), 219–228. https://doi.org/10.1093/biomet/ast043\nSungur, E. A. (2005). A note on directional dependence in regression setting. Communications in Statistics—Theory and Methods, 34(9–10), 1957–1965. https://doi.org/10.1080/03610920500201228\nWiedermann, W., & von Eye, A. (2016). Statistics and causality: Methods for applied empirical research. Hoboken, NJ: John Wiley & Sons.\nWiedermann, W., & Li, X. (2018). Direction dependence analysis: A framework to test the direction of effects in linear models with an implementation in SPSS. Behavior Research Methods, 50(4), 1581–1601. https://doi.org/10.3758/s13428-018-1031-x\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Reply to Reviewer oj7Q\nWe are greatly grateful for your time to review our paper and offer such detailed comments, as well as valuable suggestions. The shortcomings you pointed out are indeed relevant to this study and we will bridge those gaps in later versions, including approaches based on the equal variance assumption and a more thorough review. Below we give clarifications regarding your questions in your listed order:\n\n> Q1: pp. 3 & 4: The authors state that the functional causal model under consideration allows the presence of dependence between the cause (X) and the error of the causal model. Therefore, the functional causal model allows the presence of hidden confounders (Z). 
Here, it is currently unclear whether the presented algorithm is able to detect the true causal direction despite the presence of confounding or whether the algorithm is simply able to distinguish an unconfounded causal model (e.g., X → Y) from a confounder model (similar to previous approaches discussed, e.g., by Wiedermann and Li, 2018 and Maeda and Shimizu, 2020; see also Wiedermann & von Eye, 2016). Formal model definitions presented on p. 3 suggest the former. However, on p. 4, the authors state that only three classes are considered (X → Y, Y → X and a non-causal class X ← Z → Y) which suggests the latter. Further clarifications on the exact nature of the confounder model that can be present in CDCI are needed.\n\n- As for the nature of the confounder model involved in CDCI, we confirm that the algorithm aims to detect the true causal direction despite the presence of confounders. In Section 4, the “non-causal” class includes both cases X ← Z → Y and $X \\perp Y$.\n\n> Q2: On a related note, the authors currently focus on the two (unidirectional) causal models, X → Y and Y → X, and a non-causal (confounder) model (X ← Z → Y) to explain the association between X and Y. In practice, however, at least two further explanations exist: The presence of a collider and the presence of a reciprocal relation. Can one expect that CDCI performs acceptably under these causal mechanisms as well? \n\n- About your concern on the effectiveness of our method in the presence of colliders, since we rely on the i.i.d. assumption, the existence of a collider between X and Y should not affect the independence between them (if they are not confounded), which can still be detected with our method. We will consider mentioning this point in our revised version.\n\n- As for the reciprocal relation, indeed in this study we do not concern bi-directional causal relationships, which can be fundamentally different in assumptions and properties. We agree that this scenario surely presents in practice and is an interesting problem, thus we will highlight this as a future direction for research in the next version.\n\n> Q3: p. 6, In the equation below Eq (1), what does the superscript (k) refer to here? Please specify.\n\n- In the equation below Eq. (1), the superscript ($k$) indicates the number of the sample, i.e., $(x^{(k)},y^{(k)})$ refers to the $k$-th observation.\n\n> Q4: pp. 6 & 7: The role of hyperparameters, in particular, the degree of discretization, should be discussed early on when introducing the divergence based causal score and the different variants of CDCI. \n\n- We agree with your concern about this and will reorganize the manuscript according to your suggestion.\n\n> Q5: p. 10: Currently it is unclear why the authors selected ANM, CDS, IGCI, and RECI as competitors to the proposed CDCI approach. The competing methods differ in their underlying assumptions and in the utilized criteria to discern decisions concerning the causal direction of effect. Here, additional clarifications are needed. Also, the authors should discuss whether divergent assumptions of the competing methods may have unfavorably impacted their performance in the synthetic and real-world datasets. \n\n- On the choice of competitors to our approach, we choose these methods for two main reasons. Firstly, they are all recognized methods in the problem of predicting bivariate causal direction and have also been used as competitors in several closely similar studies (Goudet et al., 2018; Ton et al., 2021; Wu and Fukumizu, 2020). 
Secondly, they all have open source implementations (Kalainathan et al. 2020), which make experiments replicable and consistent.\n\n\n> Q6: p. 2, 1st paragraph, 2nd line from below: The authors mention that advances in causal machine learning enable one to detect causal structures even with “…unstructured data”. What do the authors mean with “unstructured”? Non-experimental/observational data? Please clarify. \n\n- By “unstructured data”, we refer to images, text, audio, etc., which are in contrary with tabular (structured) data.\n\n> Q7: p. 4: The authors state that the proposed method is able to classify a bivariate relation into three main categories and list four potential outcomes (X → Y, Y → X, non-causal relation, and independence). Please clarify.\n\n- Regarding your concern about the categories of causal relationships involved in this study, there are three classes, namely X causes Y, Y causes X (despite the presence of confounders), and “others” (this class includes both X ← Z → Y and $X \\perp Y$ and we do not distinguish between them).\n\n> Q8: p. 4, 2nd line from below: The authors mention that RECI is able to detect causal structures in “… the presence of a functional form”. It is unclear whether the authors refer to “knowledge” of the true functional form or the presence of a non-linear functional form. Please clarify.\n\n- By “… the presence of a functional form”, we refer to the knowledge of a non-linear functional form.\n\nThank you for pointing out these issues and the manuscript will be revised with all the clarifications above.\n\n**References**\n\nOlivier Goudet, Diviyan Kalainathan, Philippe Caillou, Isabelle Guyon, David Lopez-Paz, and Michele Sebag. Learning functional causal models with generative neural networks. In Explainable and Interpretable Models in Computer Vision and Machine Learning, pages 39–80. Springer, 2018.\n\nJean-Francois Ton, Dino Sejdinovic, and Kenji Fukumizu. Meta learning for causal direction. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 9897–9905, 2021.\n\nPengzhou Wu and Kenji Fukumizu. Causal mosaic: Cause-effect inference via nonlinear ICA and ensemble method. In Proceedings of the International Conference on Artificial Intelligence and Statistics, pages 1157–1167, 2020.\n\nDiviyan Kalainathan, Olivier Goudet, and Ritik Dutta. Causal Discovery Toolbox: Uncovering causal relationships in Python. Journal of Machine Learning Research, pages 37–1, 2020.\n", "role": "user" }, { "content": "Thanks for the authors' response. It adequately addresses the points I raised. I continue to recommend this work for acceptance.", "role": "assistant" } ]
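The mechanics described in this exchange — discretize the candidate cause, standardize the conditional samples of the candidate effect, and measure how much their shapes vary — can be sketched roughly as below. The quantile binning and the L1 histogram distance are stand-ins, not the divergence measures or discretization used in the paper.

```python
# Rough sketch of a conditional-divergence-style score for bivariate data.
import numpy as np

def shape_variation(cause, effect, n_cond_bins=10, n_hist_bins=15):
    """Average L1 distance between standardized conditional histograms of
    `effect` across quantile bins of `cause` (smaller = more stable shape)."""
    edges = np.quantile(cause, np.linspace(0, 1, n_cond_bins + 1))
    hists, grid = [], np.linspace(-3, 3, n_hist_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        y = effect[(cause >= lo) & (cause <= hi)]
        if y.size < 20 or y.std() == 0:
            continue
        y = (y - y.mean()) / y.std()            # standardize: compare shapes only
        h, _ = np.histogram(y, bins=grid, density=True)
        hists.append(h)
    hists = np.array(hists)
    ref = hists.mean(axis=0)
    return np.mean(np.abs(hists - ref))         # variation around the mean shape

rng = np.random.default_rng(2)
x = rng.uniform(-2, 2, 5000)
y = np.sin(x) + 0.3 * rng.standard_normal(5000)   # X -> Y with additive noise

c_xy = shape_variation(x, y)    # shape of Y|X=x is stable here
c_yx = shape_variation(y, x)    # typically larger in the anti-causal direction
print("X->Y score:", round(c_xy, 3), " Y->X score:", round(c_yx, 3),
      "-> infer X->Y" if c_xy < c_yx else "-> infer Y->X")
```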
8X6cWIvY_2v
CLeaR 2022 Conference
Reviewer_hk9s
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_hk9s, and the following are the paper you need to review:\n```<<8X6cWIvY_2v>>```", "role": "user" }, { "content": "Review Content: Summary: This paper proposes a bivariate causal discovery method from purely observational data based on the assumption that the conditional distribution of the effect given the cause has the same shape for any value of the cause. It implicitly assumes that, on the other hand, conditioning the cause on the effect would not have this property. By exploiting this condition, they propose the conditional divergence based causal score, which measures how much the shape of the conditional distribution Y|X changes for various values of X, compared to that of X|Y. If X|Y changes more that Y|X, it is then concluded that X causes Y, and vice versa. The shape variation of Y|X is computed by estimating a divergence measures between the distributions of Y and Y|X after standardizing the two distributions, i.e., equating the first and second order moments. The estimation of this quantity is performed using a binning approach on the data. Various divergence measures are tested on synthetic and real world data, and show improved performance against state-of-the-art algorithms. The influence of the choice of divergence measure and other hyper-parameters, is shown to be small.\nMain review: Strenghts:\n- The method is simple to understand, and applies to a wide range of bivariate causal discovery settings.\n- The improve performance against state-of-the-art seems significant.\n\nAs mention in the conclusion, theoretical identifiability results and extension to multivariate causal discovery would be greatly appreciated:\n- In terms of theory, it seems that your assumption in section 3.1 is not sufficient. Indeed, for additive linear Gaussian noise model, which is not identifiable, all X, Y, X|Y and Y|X are Gaussian, hence C_{X->Y} = 0 while there may exist some causal dependence in any direction. 
A deeper analysis of which settings give an actual asymmetry between cause and effect according to this criterion would be interesting.\n- In terms of extension to the multivariate setting, it looks like the current binning approach for computing the divergences would highly suffer from the curse of dimensionality. However, this work builds an interesting basis for future work.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Reply to Reviewer hk9s\nWe greatly thank Reviewer hk9s for offering such thorough and constructive feedback, which we all agree. Regarding your concern about our assumption being insufficient to distinguish the causal direction in the case of linear non-Gaussian additive noise models, we will include it as part of a section discussing trivial identifiable and unidentifiable cases using our method. We also highly appreciate your insightful thoughts on how multivariate causal discovery can benefit from this study.\n", "role": "user" }, { "content": "I appreciate the authors' reply. It adequately addresses the points I raised. I uphold my recommendation to accept this paper.", "role": "assistant" } ]
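The linear-Gaussian example raised by the reviewer can be spelled out in one standard display, included only to make the lack of asymmetry concrete.

```latex
If $Y = \beta X + \varepsilon$ with $X \sim N(0,\sigma_X^2)$ and
$\varepsilon \sim N(0,\sigma^2)$ independent of $X$, then $(X,Y)$ is jointly
Gaussian and
\[
  Y \mid X = x \sim N\!\bigl(\beta x,\ \sigma^2\bigr),
  \qquad
  X \mid Y = y \sim N\!\Bigl(\tfrac{\beta\sigma_X^2}{\beta^2\sigma_X^2+\sigma^2}\,y,\;
  \tfrac{\sigma_X^2\,\sigma^2}{\beta^2\sigma_X^2+\sigma^2}\Bigr).
\]
In both directions the conditional distribution is Gaussian with a variance
that does not depend on the conditioning value, so after standardization every
conditional collapses to $N(0,1)$: the shape-stability criterion holds in both
directions and yields no asymmetry, consistent with the reviewer's observation
that the score vanishes even though a causal effect may be present.
```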
AyZoloYRaZW
CLeaR 2022 Conference
Reviewer_4ts3
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_4ts3, and the following are the paper you need to review:\n```<<AyZoloYRaZW>>```", "role": "user" }, { "content": "Review Content: Summary: The paper proposes to perform clustering before causal structure learning, so as to identify structurally homogenous subsets of samples. To perform such clustering, they propose a distance-covariance based kernel. The main contribution is the proposal of the so-called dependence contribution kernel, so that the resulting kernel space is isometric to the space of causal ancestral graphs.\nMain review: 1. An earlier reference for the distance correlation is \"Feuerverger, A. (1993), “A consistent test for bivariate dependence,” International Statistical Review/Revue Internationale de Statistique, 61, 419–433.\"\n\n2. Why do you choose distance covariance in particular, rather than alternative nonparametric metrics of dependence, such as the mutual information, or the ball covariance?\n\n3. Causal structure learning is notoriously different with a large number of covariates. Performing causal structure learning within subgroups identified by the clustering algorithm appears to reduce the sample size even further. There seems to be a tradeoff between homogeneity within each cluster, and the number of observations left for structure learning within each cluster. So my worry is that maybe the best number of clusters for the purpose of clustering, which only takes homogeneity into account, may not yield the best result in terms of structure learning. \n\nAs a related comment, in Section 1.1, the authors assume there are enough samples for statistical inference. 
This does not sound realistic, especially for structure learning.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer's evaluation is an educated guess\n", "role": "assistant" }, { "content": "Title: There are mathematical/convenience reasons for choosing dCov over other nonlinear measures of dependence; Statistical guarantees for the number of clusters should be possible to derive \nThanks for the earlier reference for distance correlation---we were not aware of that but will add it now to the manuscript.\n \nWe chose the distance covariance because it has mathematical properties that are convenient (if not necessary) for defining the kernel:\nnamely, (1) the pairwise distance correlation matrix can be rewritten in terms of the per-sample covariance (or dependence, if one subtracts a critical value) contribution matrix, which facilitates comparing two samples and thus defining the kernel---this is for example complicated by the logarithms used in defining mutual information;\nand (2) the distance covariance under the null hypothesis provably approaches the chi-square distribution in the sample limit, facilitating the direct computation of critical values corresponding to a given significance level (T(\\alpha) in Definition 2), whereras estimating critical values in the case of mutual information or ball covariance using permutations would be computationally infeasible.\nHowever, we are admittedly more familiar with distance covariance than other measures, so perhaps defining an analogous kernel for other measures is more straightforward than we realize.\nOur hope is, having defined such a kernel using distance covariance, it is now easier for us and other researchers to similarly define kernels based on other measures of dependence.\n\nThanks for pointing out the difficult \"tradeoff between homogeneity within each cluster, and the number of observations left for structure learning within each cluster\".\nThis is an important problem not just in our setting here, but in clustering generally.\nIn cases where the data is known to be heterogeneous, the best that can be done is to find the correct number of clusters and assignment of samples to the clusters, and our method offers an effective way of doing this (for which statistical guarantees should even be possible to derive---see the last paragraph of our response to \"Reviewer trr8\")", "role": "user" }, { "content": "Thanks to the authors for their clarification. It adequately addresses the points I raised. I remain supportive of my positive recommendation for this paper.", "role": "assistant" } ]
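A small sketch of the sample distance covariance, and of a per-sample contribution decomposition of the kind mentioned in this response, is given below. The exact decomposition and the critical-value subtraction (T(alpha) in Definition 2) are not reproduced here; this only shows the standard (biased) estimator.

```python
# Sample distance covariance via double-centered distance matrices, decomposed
# into per-sample contributions whose mean is the usual dCov^2 estimate.
import numpy as np

def dcov2_contributions(x, y):
    """Return per-sample contributions c_k with dCov_n^2(x, y) = mean(c_k)."""
    x = np.asarray(x, float).reshape(len(x), -1)
    y = np.asarray(y, float).reshape(len(y), -1)
    a = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)   # pairwise distances
    b = np.linalg.norm(y[:, None, :] - y[None, :, :], axis=-1)
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()            # double centering
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    return (A * B).mean(axis=1)       # c_k; averaging over k gives dCov^2_n

rng = np.random.default_rng(3)
n = 500
x = rng.standard_normal(n)
y_dep = np.abs(x) + 0.1 * rng.standard_normal(n)    # nonlinearly dependent on x
y_ind = rng.standard_normal(n)                       # independent of x

print("dependent   dCov^2 ~", round(dcov2_contributions(x, y_dep).mean(), 4))
print("independent dCov^2 ~", round(dcov2_contributions(x, y_ind).mean(), 4))
```

The dependent pair has near-zero Pearson correlation but a clearly larger distance covariance, which is the property the response leans on.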
AyZoloYRaZW
CLeaR 2022 Conference
Reviewer_LhTe
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_LhTe, and the following are the paper you need to review:\n```<<AyZoloYRaZW>>```", "role": "user" }, { "content": "Review Content: Summary: This paper propose the dependence contribution kernel to do causal clustering. DEPCON could identify samples with different latent causal structures in heterogeneous pooled data so that actual learning task could get a more accurate result in each cluster. The proposed method could be used in many problems such as invariant learning.\nMain review: reasons to reject\n There are some notion errors in this paper.(e.g section2.1,paragraph2)\n The reason(advantages) for choosing distance covariance could be explained clearly.(There are still some other methods to measure nonlinear dependence)\n\nreasons to accept\n In general, the presented idea seems to be novel and interesting. Identifying heterogeneous causal structures from pooled data is important and could be used in many other applications.\n The motivation of the work is well explained and the pros and cons of different methods are clearly discussed.\n The proof is sufficient.\n The experiment setting is clear and the proposed method performs really good.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: There are mathematical/convenience reasons for choosing dCov over other nonlinear measures of dependence\nWe thank you for the positive feedback!\n\nThanks also for pointing out that our motivation for choosing distance covariance could be more clearly explained---hopefully the second paragraph of our response to \"Reviewer 4ts3\" helps to clear it up.\n", "role": "user" }, { "content": "Thanks for the authors' response. It satisfactorily addresses the major issues I had. I will continue to recommend accepting this work.", "role": "assistant" } ]
AyZoloYRaZW
CLeaR 2022 Conference
Reviewer_trr8
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_trr8, and the following are the paper you need to review:\n```<<AyZoloYRaZW>>```", "role": "user" }, { "content": "Review Content: Summary: This paper proposed a novel kernel method to measure similarities between two causal structures. The validity and theoretical properties of this kernel is established, and it's use in a causal clustering setting is demonstrated through a simulated and real world dataset.\nMain review: Originality: Yes, this paper proposed a novel kernel based method for measuring the similarity between two causal structures. To the best of my knowledge this work is novel and hasn't been solved by existing methods. \n\nSignificance: The problem of getting data generated from heterogenous causal structures arises in various settings such as genomics, economics or any other system that can be thought of as comprising various sub modules. And so, the results are of importance to anyone interested in utilizing causal discovery methods, regardless of domain. This paper proposed a new way of looking at the problem of causal clustering using kernels.\n\nTechnical quality: Yes, the proposed method seems technically sound. The claims are substantiated by both empirical and theoretical results. To the best of my understanding, the proofs in this paper are correct. \nQuestion for authors: For equivalence classes defined in the traditional causal discovery way, how does this kernel differentiate between two causal graphs within the same equivalence class (maybe I missed this in my reading of the paper)? \n\nQuestion for authors (I understand this may be very complicated one): Do you have an statistical guarantees for a method to choose the number of causal clusters?\n\nClarity: Yes - this paper is clear, well written and well explained. The main theme of the paper is conveyed effectively, as well as a intuition behind the method is presented. 
Great paper overall.\nRating: 9 strong accept, excellent work\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: The kernel can distinguish between Markov equivalent models by relying on distance covariance values; Statistical guarantees for the number of clusters should be possible to derive\nWe thank you for the very positive feedback!\n\nFor equivalence classes defined in the traditional causal discovery way (i.e, Markov equivalence), our kernel relies not just on the structural information in the distance covariance matrix (i.e, the binary in/dependence pattern it encodes) but also on the actual covariance values.\nThus, samples from Markov equivalent models may still be distinguished by our kernel according to the actual distance covariance values.\nIn other words, our kernel primarily measures similarity of independence patterns, but it still secondarily measures similarity of distance covariance values.\n\nWe do not have any statistical guarantees for choosing the number of causal clusters, however such guarantees should in principle be possible (though difficult) to compute.\nBecause the kernel is so explicitly related to statistical hypothesis testing, it should be possible to compute error rates, e.g., on the statement given in Theorem 10.\nFor example, consider the element-wise product of two points in the kernel space (note that summing over the resulting matrix would lead to the actual kernel value):\nthe probability of the sign (measuring the similarity of the two points in that dependence dimension) being wrong is $2 \\alpha(1-\\alpha) + 2\\beta(1-\\beta)$, corresponding to two times (because there are two samples) the probability of a type I error in the test for one sample but not the other plus two times the probability of a type II error for one sample but not the other.\nNext, one must figure out how this error rate for one element of the matrix combines with the other elements (the kernel sums these elements) along with the actual values of the element (the whole matrix is normed), and the result should be the error rate for Theorem 10.\nWith these bounds on the error of two samples being incorrectly clustered together, it should be possible to devise a method for picking the number of clusters and finding its corresponding error bounds.", "role": "user" }, { "content": "I appreciate the authors' efforts in addressing my concerns. The authors have answered almost all of my concerns. I keep my recommendation for this paper to be accepted.", "role": "assistant" } ]
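One step of the error-rate argument in this reply can be written out explicitly; the display below is only the per-entry building block under an independence-of-errors assumption, not the full aggregation over the summed and normed matrix.

```latex
Fix one dependence dimension and suppose the two samples' tests err
independently with probabilities $p$ and $q$ (a type I or type II error,
depending on whether that dimension is truly independent or dependent in the
respective sample). The element-wise product gets the wrong sign exactly when
one test errs and the other does not, so
\[
  \Pr(\text{sign wrong}) \;=\; p(1-q) + q(1-p),
\]
which equals $2\alpha(1-\alpha)$ when both samples are truly independent in
that dimension and $2\beta(1-\beta)$ when both are truly dependent, matching
the two terms quoted in the reply. How these per-entry rates combine across
the normed matrix is the part the reply leaves open.
```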
CMvbgEping
CLeaR 2022 Conference
Reviewer_zgzK
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_zgzK, and the following are the paper you need to review:\n```<<CMvbgEping>>```", "role": "user" }, { "content": "Review Content: Summary: The paper presents causal identification theory and testable implications for linear Hawkes' processes with unmeasured confounders. The results are novel, interesting, and present interesting paths forward for possible extensions.\nMain review: Strengths:\n\n- The results are presented clearly and intuitively: the connection and explanations provided via ordinary linear SEMs is much appreciated.\n- Similar to Verma constraints for ADMGs, the results on equality constraints here may be translated into improvements in structure learning algorithms. That is, more informative structures can be learned by algorithms that take advantage of such constraints.\n\nWeaknesses:\n\n- I think some of the theory could have been pushed a bit further -- many of the results stem from applications of existing criteria phrased for ADMGs.\n\nOpen questions for the authors:\n\n- Are there ways to think of these generalized equality constraints in linear Hawkes processes as \\mu-separation in a conditional DMG, just as we are able to some Verma constraints as m-separation in a conditional ADMG. That is, can these be phrased as constraints that appear in post-interventional distributions? At first glance it would seem so -- \\mu-separation between 1 and 4 should hold after cutting incoming edges (intervening) on 3? I wonder if that would lead to an interesting theory in the non-parametric form of these constraints.\n\n- Is there always an equality constraint implied by a missing edge? I think this probably isn't the case, because it's not true for regular ADMGs. Sorry for answering my own question there, but hopefully the follow up is also interesting -- is there a notion of maximality that can be defined here, similar to MAGs for ordinary conditional independence and MArGs for generalized equality constraints? The latter might be quite hard, but it may be worth thinking of a maximality condition for ordinary constraints (if there isn't one already)? 
Because maximal graphs form the natural units for structure learning in some sense.\n\nAdditional related works:\n\nThe paper is very thorough in its literature review. I only have a few suggestions that I think are relevant to the introductory discussion on structure learning algorithms that take advantage of generalized equality (Verma) constraints. Many of the references listed there currently are technically papers that lay out theory for what these equality constraints may look like without proposing specific algorithms to take advantage of them.\n- Structure learning with equality constraints in linear Gaussian SEMs -- https://arxiv.org/pdf/2010.06978.pdf\n- Structure learning with equality constraints in linear non-Gaussian SEMs -- https://arxiv.org/pdf/2007.11131.pdf\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n", "role": "assistant" }, { "content": "Title: Reply\nThank you for your comments and questions. These are partial answers to the open questions:\n\n1) There is a way to think about some of these constraints as induced by m-separation in conditional (A)DMGs. First, define a zero-intervention on process i in the Hawkes process to be an intervention which sets the causal link functions into i to be zero and also sets the baseline intensity (mu_i) to zero (this means that no events occur in process i). Consider also the corresponding linear SEM in which we define an intervention on variable i, X^i, by simply removing the i'th equation and keeping the other equations (setting the random variable X^i equal to some fixed number x^i in the other equations). Then the interventional (under a zero-intervention) Hawkes integrated covariance equals the covariance matrix of the interventional distribution of the linear SEM. This means that whenever there's a constraint on the covariance matrix of the linear SEM which is implied by m-separation in a conditional (A)DMG, this constraint also applies to the Hawkes integrated covariance (the graphs representing the linear SEM and the Hawkes process are the same so we can construct such conditional (A)DMGs from either). One should note that the notion of causality defined in the paper is essentially a type of 'soft intervention' which is different from the zero-intervention above. However, the zero-intervention has this simple connection to the interventional covariances of the linear SEM. The above uses m-separation (not \\mu-separation) and perhaps this is natural as the Hawkes integrated covariance is something similar to a covariance matrix, i.e., there's no explicit time. This argument uses the fact that m-separation in conditional (A)DMGs may imply equality constraints. To our knowledge, this is only well understood in ADMGs, and not in the cyclic case.\n\n2) Yes, there is such a notion in the case of the ordinary conditional independence relation (for these graphs, this is (conditional) local independence, a notion very similar to Granger causality). The paper by Mogensen and Hansen which is cited in the manuscript proves that there's a unique maximal element within each Markov equivalence class (using \\mu-separation instead of m-separation) of DMGs. As argued in that paper, this solves the maximality question as learning based on tests of local independence can output that unique graph. 
The paper also contains a method for visualizing the entire Markov equivalence class starting from the maximal element of the class. For generalized constraints, we agree that this seems difficult; we probably first need to understand such constraints better.\n\nWe'll update the paper with these partial answers. Thank you also for the additional references. It's definitely useful to include references to some papers that tackle the actual learning problem using data.", "role": "user" }, { "content": "Title: Thank you for the response\nThank you for the thoughtful response and the additional insights. I think this is a good paper, and would love to see it accepted.", "role": "assistant" } ]
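The zero-intervention correspondence sketched in the reply can be illustrated with an ordinary linear SEM. This is a toy example only: the chain graph and its coefficients are invented, and no Hawkes process is simulated. Removing the structural equation of the middle variable in a chain and holding it fixed makes the two outer variables uncorrelated, which is the kind of constraint that shows up in the post-interventional distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
b12, b23 = 0.7, 0.9                       # arbitrary edge coefficients for X1 -> X2 -> X3

# Observational distribution: cov(X1, X3) = b12 * b23 * var(X1) != 0.
x1 = rng.normal(size=n)
x2 = b12 * x1 + rng.normal(size=n)
x3 = b23 * x2 + rng.normal(size=n)
print("observational cov(X1, X3):", round(np.cov(x1, x3)[0, 1], 3))

# Intervention on X2: drop its structural equation and hold it at a constant.
x2_do = np.full(n, 1.5)
x3_do = b23 * x2_do + rng.normal(size=n)
print("interventional cov(X1, X3):", round(np.cov(x1, x3_do)[0, 1], 3))   # ~ 0
```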
CMvbgEping
CLeaR 2022 Conference
Reviewer_id5m
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_id5m, and the following are the paper you need to review:\n```<<CMvbgEping>>```", "role": "user" }, { "content": "Review Content: Summary: Summary: The authors prove the existence of equality constraints in graphical models of Hawkes processes, and posit that these constraints can be used for structure-learning tasks.\nMain review: Originality:\n\nThe originality of this work is primarily in noting that graphical models of\nHawkes processes are closely related to linear structural equations models, and\nin demonstrating properties of the parameters of the model. This is an original\ninsight, and might motivate discovery of additional properties of such models.\n\nSignificance:\n\nThe theory of causal inference in continuous time settings is not well\ndeveloped. This work makes some progress on developing this theory, and will\nlikely be of use to analysts working in these settings.\n\nThat said, the main _causal_ results of this paper in Sections 4 and 5 have a somewhat sparse presentation. The approach taken to presenting the results of these sections is that once the\nrelationship between SEMs and Hawkes process models is established, results from\nlinear SEMs can be used in the Hawkes process models under consideration. A\nsimple example of the titular equality constraints is presented illustrating a\nsetting in which the novel equality constraints can help adjudicate between\ncandidate models (Figure 3 B, C), but it is not clear when these constraints\narise in general. It would be helpful to see some discussion of graphical\nconditions under which these constraints arise, and when they are non-trivial,\neven if that takes some recounting of material from Chen et al 2014. Such\nelaborations may be added as supplementary materials, as they will not\nnecessarily be original contributions.\n\nTechnical Quality and Clarity:\n\nThe paper is clearly written and seems technically correct, though I do not have\nthe necessary background to validate all the proofs.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Reply\nThank you for your comments. 
We would be happy to recount some of the results from Chen et al (2014) to make our paper more self-contained, in particular describing how to find equality constraints in a more general fashion. Adding a description of the connection between zero-interventions in Hawkes processes and interventions in linear SEMs (see 1 in the reply to Reviewer zgzK) may also help explain the causal interpretation better.", "role": "user" }, { "content": "Thanks to the authors for their clarification. Most of my concerns have been addressed. I continue to recommend this work for acceptance.", "role": "assistant" } ]
CMvbgEping
CLeaR 2022 Conference
Reviewer_P9hg
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_P9hg, and the following are the paper you need to review:\n```<<CMvbgEping>>```", "role": "user" }, { "content": "Review Content: Summary: This paper considers the problem of identification of the linear Hawkes Processes . The main contributions is that the authors transfer the linear Hawkes processes to a linear SEM under some assumptions and provide the identification results of linear Hawkes processes by using the existing results of linear SEM. \nMain review: Identification of a model is an important topic, especially in causal inference.\\\nThe paper builds on prior work in the field of Hawkes processes and testable constraints (latent projection and algebraic equality constraints). \\\nThe authors study the new identification results for a well-known Hawkes process when there exist some unobserved variables. \\\nThough most of technics have been proposed, their conclusions in Hawkes processes appear both novel and interesting.\n \nIt would be helpful to describe some real-world implications of their conclusions.\n\nI might have some misunderstandings about this paper. So it is very likely that I will change my score if the other reviewers provide some new points.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Reply\nThank you for your comments. We'll add some description of real-world applications. We believe that in many applications of Hawkes processes, analysts only have partial observation (some coordinate processes are unobserved) and due to the nature of the Hawkes process (very explicit interactions modeled by what we call causal link functions), it is often used when analysts are interested in an underlying 'structure' or causal network. Therefore, structure learning in Hawkes processes finds application in the study of, e.g., social networks and interacting neurons.", "role": "user" }, { "content": "Title: Thanks\nThank you for your clarification. I think this is a good paper, and my score remains unchanged.", "role": "assistant" } ]
D3wmPdXOvK
CLeaR 2022 Conference
Reviewer_xSua
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_xSua, and the following are the paper you need to review:\n```<<D3wmPdXOvK>>```", "role": "user" }, { "content": "Review Content: Summary: Estimating causal structure area challenge in causality, especially in presence of unobserved variables. The authors first define a particular latent causal model, latent factor causal models, allowing the edges between observed variables and latent variables. Then, the authors propose a three-step method to estimate the model by making use of Tetrad constraints. Although the work relies on strong assumptions, it appears to be both novel and significant.\nMain review: The paper tackles a non-trivial task and proposes a novel method to estimate the causal structure under their problem setting. \nThe paper builds on prior work in the field of Tetrad-based latent variables models. The authors put some effort into dropping the traditional measurement assumption and estimating the causal structure, including causal direction.\nThe authors have an in-depth understanding of the related works and provide a detailed review.\n\n\nHowever, there are some concerns and suggestions,\n\n1. The authors say that the casual order between clusters can be identified. Are there any theories to support this result? As far as I know, in a linear Gaussian setting, there is no possibility to detect the causal direction between variables, except the V-structure and Meek rules. \n\n\n2. The example of In Section 4: The Phase I of the algorithm firstly selects the cluster \\{ X_1, X_2, X_3 \\}. What is the reason? Why not is the \\{ X_4, X_5, X_6 \\}? It seems that those two clusters have the same Tetrad constraints.\n\nBesides, the authors should firstly give the ground-truth graph. \nIt would be helpful to describe the details of the example in the main graph.\n\n3. In Section 5.2: How do you remove the direct effect of Raf on Mek? Why don't you do the same process for the edge between PLCy and PIP2? 
\n\n\nminor typos:\nFigure 3: there are two variables X_7 subgraph (b) and (c);\n\nIf the authors sufficiently address the mentioned concerns I am happy to change my assessment.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to xSua\nThank you for the helpful review!\n\n## Identifying Causal Order\nThanks for your comment - we realize now that our identifiability result was not clearly emphasized. The short answer is **yes** - our identifiability result is theoretically supported, our proof is in Appendix B. In the final version, we will clarify that when we say that our Algorithm is consistent in Theorem 2, we mean that it **completely** identifies the graph, i.e., the order between the clusters, as well as the edges between latent and observed variables. \n\nAs you noted, this differs from the setting where (1) the structural equations are linear Gaussian and (2) all variables are observed and no special structure is imposed. Our setting differs on point (2). In particular, the assumptions that each observed variable has a single latent parent, and that each latent variable has at least three children, allow us to guarantee being able to identify the bottom-most cluster(s). We then marginalize over the bottom-most cluster and repeat this process to build an order over clusters (Algorithm 2). \n\n## Example in Figure 2\nWe assume that the two clusters you refer to with the same tetrad constraints are $\\\\{ X_1, X_2, X_3 \\\\}$ and $\\\\{ X_7, X_8, X_9 \\\\}$ - as seen in Fig. 2b, the cluster $\\\\{ X_4, X_5, X_6 \\\\}$ does **not** have the same tetrad constraints.\n\nIf this interpretation is correct, then you are right, the algorithm can pick either cluster (e.g., selecting between the two at random), or we could have opted to select all clusters at each step. If we instead picked $\\\\{ X_7, X_8, X_9 \\\\}$, the initial ordering between clusters would be different, but the ordered clustering after Phase 2 - the “merging” step in Algorithm 3 - would be the same, leading to the same final result.\n\nWe will clarify that this “tie-breaking” does not affect the final result. The true graph used in our examples is given in Fig 3c as the final output of the algorithm; we will move this to come before Fig 2.\n\n## Removing the effect of Raf on Mek\nThanks for pointing out this detail which we forgot to include. To remove the effect of Raf on Mek, we perform linear regression of Mek on its parents, then we subtract the value of Raf times its corresponding regression coefficient from Mek. If the data were indeed generated by a linear structural causal model, and we were given the true regression coefficients, this would exactly correspond to “removing the direct effect” of Raf on Mek, i.e., setting the corresponding edge parameter to 0.\n\nWhen we remove the effect of PLCg and on PIP2, all nodes collapse into a single cluster. We intended to note this in the submission and will add a line describing this result.\n\n## Minor typos\nThanks for pointing these out!", "role": "user" }, { "content": "Title: Additional question\nThanks for your clarification! Regarding the identification of causal order: Consider the two structures (1) $\\{X_1,X_2,X_3\\} \\to L_1 \\to \\{X_4,X_5,X_6\\} $ and (2) $ \\{X_4,X_5,X_6\\} \\to L_1 \\to \\{X_1,X_2,X_3\\}$. Does these two structures satisfy your setting? 
If they satisfy, how to distinguish them?", "role": "assistant" }, { "content": "Title: Response to additional question\nHi, thanks for taking the time to go over the response. This is a great question for elucidating what's going on.\n\nFirst, our structures always require each observed variable to have a latent parent. So, I would replace your structures with (1') $L_1 \\to X_1, X_2, X_3 \\to L_2 \\to X_4, X_5, X_6$ and (2') $L_1 \\to X_4, X_5, X_6 \\to L_2 \\to X_1, X_2, X_3$\n\nThese two structures are distinguished by the ranks of certain submatrices: in structure (1') $\\Sigma_{12,3456}$, $\\Sigma_{13,2456}$, $\\Sigma_{23,1456}$ are (generically) rank 2, while in structure (2'), all of these submatrices are rank 1. This is because in structure(1), the sets $A = \\\\{ X_1, X_2 \\\\}$ and $B = \\\\{ X_3, X_4 \\\\}$ are only t-separated by the set $\\\\{ L_1, L_2 \\\\}$ in structure (1'), but they are t-separated by the set $\\\\{ L_2 \\\\}$ in structure (2'). The result then follows from Theorem 1.", "role": "user" }, { "content": "Title: Feedback\nThanks for your additional clarification- this makes a lot more sense.\n\nOverall, though their setting is restricted, this is a good start to recover causal structures without the traditional measurement assumption (allowing observed variables to affect latent variables). I will keep my overall score.", "role": "assistant" } ]
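The rank argument in the reply above is easy to verify numerically. The sketch below is illustrative only: the loadings are arbitrary, all noise variances are set to 1, and the implied covariance is computed from the reduced form Sigma = (I - B)^{-1} Omega (I - B)^{-T} rather than estimated from data. It prints the singular values of the submatrix with rows {X1, X2} and columns {X3, ..., X6}: two non-zero values for structure (1'), a single non-zero value for structure (2').

```python
import numpy as np

# Variable order: [L1, X1, X2, X3, L2, X4, X5, X6]; B[i, j] = weight of parent j
# in the structural equation of variable i; all noise variances are 1.
b = [1.0, 0.8, 1.2]      # latent -> {X1, X2, X3} loadings
c = [2.0, 0.2, 1.1]      # observed cluster -> next latent weights
d = [1.0, 0.6, 1.3]      # latent -> {X4, X5, X6} loadings

def covariance(order):
    B = np.zeros((8, 8))
    if order == 1:                      # (1'): L1 -> X1..X3 -> L2 -> X4..X6
        B[1:4, 0] = b
        B[4, 1:4] = c
        B[5:8, 4] = d
    else:                               # (2'): L1 -> X4..X6 -> L2 -> X1..X3
        B[5:8, 0] = d
        B[4, 5:8] = c
        B[1:4, 4] = b
    A = np.linalg.inv(np.eye(8) - B)
    return A @ A.T                      # Sigma = (I - B)^{-1} Omega (I - B)^{-T}, Omega = I

for order in (1, 2):
    Sigma = covariance(order)
    sub = Sigma[np.ix_([1, 2], [3, 5, 6, 7])]   # rows {X1, X2}, columns {X3, X4, X5, X6}
    print(f"structure ({order}'): singular values of Sigma_12,3456 =",
          np.round(np.linalg.svd(sub, compute_uv=False), 4))
# (1') has two non-zero singular values (rank 2); (2') has one (rank 1).
```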
D3wmPdXOvK
CLeaR 2022 Conference
Reviewer_bvqT
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_bvqT, and the following are the paper you need to review:\n```<<D3wmPdXOvK>>```", "role": "user" }, { "content": "Review Content: Summary: This paper tackles the problem of causal discovery in the presence of latent variables. Motivated by gene regulatory networks, the authors design a three-stage algorithm that is consistent under some conditions. Theoretical analyses are provided for the identifiability of results. Empirical evaluations are provided to verify the effectiveness of the proposed method in this paper. \nMain review: Pros:\n(1)\tThe addressed problem is realistic and significant. \n(2)\tExperiments results are convincing. \nCons/Questions: \n(1)\tThe motivation is still not very clear. Specifically, the authors state that they would address the problem of causal discovery in the presence of unobserved confounders. They carefully discuss the two types of methods for handling this problem. However, why is the proposed method better? Could it address some issues of existing methods? In the current version, the answers are not clear. Therefore, it is a bit hard to understand the motivation and contribution of this paper well. Such explanations are important and should be added. \n(2)\tThis paper argues that one of the weaknesses of the existing methods is the rationality of assumptions. Nevertheless, it seems that the method in this paper needs more assumptions. How to check them? For Definition 1 and Assumption 1, 2, 3, there are not enough analyses, e.g., the application scenario. \n(3)\tCould the authors add more details for Definition 1? For example, the discussions on Definition 1(b), the relationship between edges and the rank of sub-matrix. \n(4)\tIs the complexity of the method in this paper is relatively high?\n(5)\tThe descriptions of experiments can be improved. \n-\tThe descriptions of experimental settings are less logical, especially in Section 5.1. \n-\tDiscussions on experimental results are insufficient. \n\nIn summary, this paper is appreciated. 
However, there are major issues/concerns listed above.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to bvqT\nThank you for the helpful review!\n\n## Motivation and Assumptions\nFirst, we want to clarify that our method is only intended to be better **in certain settings**, such as the genomic setting that we use as motivation. In such settings, observed variables can be parents of latent variables. As we point out at the end of the section “Learning causal models with latent variables”, this breaks the measurement assumption used by a majority of existing methods. We will include an example of what these methods would do in our setting: each true latent variable would be “exogenized” (see “Graphs for margins of Bayesian networks” (Evans 2014)), a process which adds a potentially large number of edges between observed variables.\n\nWe also discuss methods such as FCI which learn an equivalence class of “ancestral graphs”. However, the ancestral graphs for the setting we consider would also lead to dense networks over the observed variables, and the edges between observed variables would not be directed. We will also include an example of what these algorithms do in our setting for further clarity.\n\nAccordingly, we did not intend to argue that the weakness of existing methods was the rationality of the assumptions in all settings. Rather, the different methods, including ours, are tailored to different settings, and to capture different information. Because we do not make the measurement assumption, and we are not satisfied with the information captured by ancestral graphs, we must introduce new assumptions, such as the ones in Definition 1 (Assumptions 1-3 are less significant - faithfulness assumptions such as these are standard necessities for “dotting your i’s and crossing your t’s” in proofs). As discussed in the Limitations section, we are aware of the drawbacks imposed by these assumptions, and suggest avenues for removing these assumptions. We will also add references to possible ways of testing these assumptions, such as by extending recent work (e.g., “Synthetic Interventions” by Agarwal et. al.) which tests for structural assumptions on latent variable models using spectral information.\n\n## Details in Definition 1\nYes, we will add a description after Theorem 1, after the necessary concepts are introduced, to clarify how the bipartite assumption in Definition 2 relates to sub-matrix ranks. In particular, if two observed variables $i$ and $j$ have an edge between them, then Theorem 1 says that the rank of any submatrix which includes those two variables in both the rows and the columns will generically be of rank 2, since any t-separating sets $C_A, C_B$ must include $i$ and $j$. This would prevent the variables from being merged into a single cluster if they are in separate clusters after the first phase of the algorithm. \n\nWe will note that this assumption could therefore be removed and replaced by an assumption that all observed variables in a cluster have directed edges into the next cluster, a stronger form of the double-parent assumption.\n\n## Complexity\nThe complexity of our method is actually relatively low for causal structure discovery. 
If we pessimistically take $K = M = p$, (where $K$ is the number of clusters, $M$ is the maximum cluster size, and $p$ is the number of variables), and we use Sidak adjustment (in which case $f(d, n) = \\mathcal{O}(d)$), then our method is $\\mathcal{O}(p^5)$. This is equal to the complexity of the PC algorithm when only conditioning sets up to size 3 are considered, which is only consistent when 1) there are no latent variables and 2) the maximum in-degree of the true DAG is less than or equal to 3, i.e., when the network is very sparse. We will add these explanations to our paper to describe how the computational complexity of our algorithm compares to that of other causal structure learning methods.\n\n## Descriptions of experiments\n\nWe would be very happy to clarify the descriptions of our experiments. Are there any specific parts of the description that you found unclear? This would help us improve the section.\n", "role": "user" }, { "content": "Thanks for the authors' response. The response covers nearly all of the issues I raised. I remain supportive of my positive recommendation for this paper.", "role": "assistant" } ]
D3wmPdXOvK
CLeaR 2022 Conference
Reviewer_nsAC
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_nsAC, and the following are the paper you need to review:\n```<<D3wmPdXOvK>>```", "role": "user" }, { "content": "Review Content: Summary: The authors proposed a latent factor causal model with some restricted assumptions, which was motivated by gene regulatory networks. And they proposed a consistent three-stage algorithm to identify such a model. \nMain review: The authors proposed a latent factor causal model with some restricted assumptions, which was motivated by gene regulatory networks. And they proposed a consistent three-stage algorithm to identify such a model. Overall, I think the paper is written with some minor mistakes, but the motivation from the biology field is interesting. I have some concerns listed in the following.\n\n1. The identifiability seems not to be clear. What actual identifiability results did they achieve? And to what extent they identify? Did the overall structure including the edges from the observed to the latent could be identified, or only the cluster? It is better to make them clarified. \n\n2. As for the [Double-parent assumption] in Definition 1, what would happen if there is only one observed variable X_i in C_k that points to L_{k’}? May be better to explain it and add this case in the synthetic experiments.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to nsAC\nThank you for the useful review!\n\n## Identifiability\nPlease see our response to Reviewer xSua. Briefly, the fact that our algorithm is consistent constitutes a constructive proof that the graph, including the latent variables, can be exactly identified under the model assumptions.\n\n## Double-parent assumption\nThank you for pointing out an opportunity to better clarify the necessity of our assumptions. We will add an example to the paper of how the algorithm fails if the double-parent assumption is violated. In particular, say that the double parent assumption is violated, with $X_i$ in one cluster being the only node pointing to $L_j$, the latent parent of another cluster. Let $X_j, X_u, X_v$ be children of $L_j$. 
Then $(X_i, X_j)$ may be t-separated from $X_u, X_v$ by $L_j$ alone, so that $t_{ij,uv} = 0$, in which case $X_i$ may be incorrectly put into the cluster belonging to $L_j$.", "role": "user" }, { "content": "Title: Further comment\nThanks for the authors' response.\n\nAs for the double-parent assumption, it might be better to provide synthetic experiments in which the assumption is violated, or at least to add more discussion in Section 6. \n\nOverall, though I think this model has some strong assumptions, its motivation from gene regulatory networks via the latent factor model is still interesting, and the model may find practical application in discovering causal relations between clustered genes in the biological domain. Thus, I am willing to raise my score to 6.\n\n", "role": "assistant" } ]
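The failure mode described in the response, the vanishing tetrad t_{ij,uv} when the double-parent assumption is violated, can be checked exactly on the implied covariance matrix. The coefficients below are arbitrary and all noise variances are 1; this is an illustration, not the authors' code. The second tetrad is printed only as a contrast, to show that not every tetrad on these variables vanishes.

```python
import numpy as np

# Order: [L1, X1, X2, Xi, L2, Xj, Xu, Xv]; Xi is the *only* observed parent of L2,
# which violates the double-parent assumption.
B = np.zeros((8, 8))
B[1:4, 0] = [1.0, 0.7, 1.3]     # L1 -> X1, X2, Xi
B[4, 3] = 0.9                   # Xi -> L2   (single observed parent)
B[5:8, 4] = [1.0, 0.8, 1.2]     # L2 -> Xj, Xu, Xv

A = np.linalg.inv(np.eye(8) - B)
S = A @ A.T                     # implied covariance with unit noise variances

def tetrad(a, b, c, d):
    """t_{ab,cd} = sigma_ac * sigma_bd - sigma_ad * sigma_bc."""
    return S[a, c] * S[b, d] - S[a, d] * S[b, c]

i, x1, x2, j, u, v = 3, 1, 2, 5, 6, 7
print("t_{ij,uv} =", round(tetrad(i, j, u, v), 10))    # 0, as the reply predicts
print("t_{1j,2u} =", round(tetrad(x1, j, x2, u), 10))  # non-vanishing tetrad, for contrast
```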
EMZBuEEk7jJ
CLeaR 2022 Conference
Reviewer_1JUg
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_1JUg, and the following are the paper you need to review:\n```<<EMZBuEEk7jJ>>```", "role": "user" }, { "content": "Review Content: Summary: This work introduces a novel generative model based on a prior specified as a mixture of degenerate gaussians and provide a proof of its identifiability. The authors proposed a training procedure that minimizes a regularized Wasserstein distance. The disentanglement property of their approach is qualitatively confirmed with latent traversals. The authors also argue that their disentangle representation is more robust to some class of dataset shifts. \nMain review: The approach this paper suggests is interesting, but I have two main criticisms that prevent me from recommending acceptance. The first one being the overall lack of clarity and technical rigor of the paper. The second point is that this algorithm would be best described as a clustering algorithm than a disentanglement algorithm, and should be acknowledged as such both in its presentation and in its empirical evaluation by comparing to SOTA clustering algorithms. See below for details.\n\n### Originality:\nThe approach is interesting and, to the best of my knowledge, novel. \n\n### Significance: \nThe degenerate priors idea is simple and seems to yield good qualitative results on simple tasks like MNIST, FashionMNIST and Sprites. That being said, the baselines were not well tailored to the tasks at hand. I believe state-of-art methods in clustering would make stronger baselines, like VaDe from \"Variational Deep Embedding: An Unsupervised and Generative Approach to Clustering\" by Jiang et al.(2017). Significance would be easier to assess with a more suited set of baselines to compare the authors' algorithm to. \n\n### Clarity:\n- Section 2.1: It seems like a lot of unnecessary notation is introduced that adds almost nothing to the understanding of the algorithm, for example the submodules $M\\_{G\\_i}$'s and $M\\_{T\\_i}$'s. Figure 2 added to my confusion. The actual model proposed is explained in Section 2.2, in a much simpler way, using the formulation with the mixture of degenerate gaussians as prior. I would get to this explanation much earlier in the text, to avoid requiring unnecessary work from your reader. 
\n- The introduction could also be shortened, since a lot of what is discussed is related to the model that the reader has not encountered yet. \n\n### Technical quality:\n- I believe the presentation of the algorithm does not requires the disentanglement machinery and would be more easily explained as a clustering approach: each subspace of Z corresponds to a cluster, e.g. a digit in MNIST.\n- MNIST and FashionMNIST do not lend themselves very well to disentanglement and are better modelled by clustering approaches. This is why I am not surprised that the ICM method performs better than the VAE and $\\beta$-VAE which are disentanglement methods. I would like to see the proposed approach being compared to state-of-the-art clustering methods.\n- Definition 1 is vacuous in the sense that any pair of diffeomorphisms (i.e. differentiable bijections with a differentiable inverse) will be equivalent. A similar point can be made about Definition 2: it is a very weak notion of identifiability. Weirdly, these definitions are not referenced in a later result, which makes them superfluous.\n- The statement of Theorem 4 is imprecise. For instance, it says \"$h$ maps each $\\mathcal{Z}\\_{G\\_i}$ to $\\hat{\\mathcal{Z}}\\_{G\\_i}$, disjoint from $\\hat{\\mathcal{Z}}\\_{G_i}$, $\\forall j \\not= i$, ...\". A first problem is that, since $h$ is a map from $\\mathcal{Z}$ to $\\mathcal{Z}$, it is unclear what is meant by \"$h$ maps each $\\mathcal{Z}\\_{G\\_i}$ to...\". I think what is meant is that \"$h$ maps $\\\\{0\\\\} \\times ... \\times \\\\{0\\\\} \\times \\mathcal{Z}\\_{G_i} \\times \\\\{0\\\\} \\times ... \\times \\\\{0\\\\}$ to ...\" or something along these lines. A second problem is that the $\\hat{\\mathcal{Z}}\\_{G_i}$ where not defined anywhere and, more importantly, the statement does not allow for an indeterminacy due to the ordering of the subspaces that cannot be identified. The statement should have been \"there exists a permutation $\\sigma$ such that $h$ maps each $\\mathcal{Z}\\_{G\\_i}$ to $\\hat{\\mathcal{Z}}\\_{G\\_{\\sigma(i)}}$\", or something similar. \n- Again in Theorem 4, in \"if there exists a smooth invertible function $h$ such that $g = g^* \\circ h$, then ...\", the if-clause is always satisfied, since one can always choose $h = (g^*)^{-1} \\circ g$. But of course, the then-clause can't hold for all pair of $(g, g^*)$. What is missing is a \"Suppose $g(Z)$ has the same distribution as $g^*(Z)$, where $Z$ follows the degenerate prior defined in ...\". The proof presents a similar lack of rigor.\n- The loss in equation (4) has a huge number of terms. An ablation study showing the importance of all of them would be more convincing. I am not sure why the second and third terms are important for invertibility, the reconstruction loss (fifth term) should encourage invertibility, right? \n\n# Response to rebuttal\nI thank the authors for their response. At the time I wrote the review, the fact that the method disentangles cluster identity from \"data transformation\" was not clear to me. This is indeed a clear distinction from other clustering algorithms. I also appreciate the additional ablation study. Consequently I have decided to update my score from 4 to 5. I am not increasing my score further because the theory remains imprecise. For instance, the author's response did not address my point about definition 1 and 2 being completely vacuous. 
Also, the fact that no permutation appears in theorem 4 is worrisome and cast doubts on the validity of its proof (worries also shared by reviewer 6Xbo). I believe the approach suggested is promising and interesting, but its theoretical foundations has to be clarified before being published.\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer 1JUg\nWe thank the reviewer for the constructive feedback. Below are our responses to the reviewer's comments:\n\n&nbsp;\n\n**Comment:**\n\nIt seems like a lot of unnecessary notation is introduced that adds almost nothing to the understanding of the algorithm, for example the submodules $M_{G_i}$'s and $M_{T_i}$'s.\n\n**Response:**\n\nThe reviewer may have confused the assumed data generating process, defined in Section 2.1, and our algorithm. The $M_{G_i}$'s and $M_{T_i}$'s are modules (i.e., the independent causal mechanisms) in the data generating process, which we aim to recover using our algorithm.\n\n&nbsp;\n\n**Comment:**\n\nThe introduction could also be shortened, since a lot of what is discussed is related to the model that the reader has not encountered yet.\n\n**Response:**\n\nWe will shorten the introduction as the reviewer suggests.\n\n&nbsp;\n\n**Comment:**\n\nI believe the presentation of the algorithm does not requires the disentanglement machinery and would be more easily explained as a clustering approach: each subspace of Z corresponds to a cluster, e.g. a digit in MNIST. \n\nMNIST and FashionMNIST do not lend themselves very well to disentanglement and are better modelled by clustering approaches. This is why I am not surprised that the ICM method performs better than the VAE and beta-VAE which are disentanglement methods. I would like to see the proposed approach being compared to state-of-the-art clustering methods.\n\n**Response:**\n\nOur approach differs from the clustering approaches as we propose to disentangle the data-transforming mechanisms from the data-generating mechanisms. The clustering approaches may recover the data-generating mechanisms but do not include any data-transforming mechanism. Specifically, in addition to the subspace for each cluster, we allocate a shared subspace, which is missing in prior clustering approaches, representing the data transforming mechanisms. The shared subspace encodes the data transformations related to the environment, such as lighting conditions and rotations. Disentangling the shared subspace from the subspaces for each cluster improves the robustness of the downstream model under environment shift, as is discussed in Section 1 and empirically verified in Section 6.3.\n\nAs the reviewer suggests, we conduct additional experiments on VaDE, finding that VaDE does not outperform our method under environment shift, with the evaluation protocol in Section 6.3, as Table 1 shows. 
The reason is that VaDE does not disentangle the data transforming mechanisms, which are affected by the environment shift, from the data generating mechanisms, which are not affected by the environment shift.\n\n**&nbsp;Table 1**\n\n| MODEL| MN (T)&emsp;&emsp;&emsp;&emsp;|MN (W&R)&emsp;&emsp;&ensp;|MN (R)&emsp;&emsp;&emsp;&emsp;|FMN (D&W)&emsp;&ensp;|Sprites&emsp;&emsp;&emsp;&emsp;|Average|\n\n| ICM&emsp;&nbsp;&nbsp;| 74.63% +-2.34%| 56.91% +-3.70%|43.71% +-7.10%|46.61% +-5.24%|99.25% +-1.79%|64.22%&ensp;|\n\n| VaDE&emsp;|73.64%&nbsp; +-2.69%|48.57% +- 4.92%|48.54% +-7.16%|43.21% +-7.35%|98.25% +-2.95%|62.44%&ensp;|\n\n&nbsp;\n\n**Comment:**\n\nDefinitions 1 and 2 are not referenced in a later result, which makes them superfluous.\n\n**Response:**\n\nThe equivalence relationship defined in Definitions 1 and 2 is necessary for identifiability analysis. Our main identifiability result in Section 3.2 refers to Definitions 1 and 2. Lemma 3 first outlines conditions for $h$-identifiable, and Theorem 4 further shows that the function $h$ does not entangle the mechanisms.\n\n&nbsp;\n\n**Comment:**\n\nThe statement of Theorem 4 is imprecise.\n\n**Response:**\n\nThank you for your comment. Responses and planned additional clarification are as follows.\n\n1. We believe “$h$ maps each $\\mathcal{Z}_ {G_i}$ to $\\hat{\\mathcal{Z}}_{G_i}$” is the same as the reviewer’s suggestion.\n2. $\\hat{\\mathcal{Z}}$ is the same as $\\mathcal{Z}$. \n3. To improve clarity, we will add a permutation function $\\sigma$ as the reviewer suggests.\n4. Section 3.2 and Theorem 4 operate under the definition of the degenerate mixture prior in Section 2.2. We will add the definition in Section 2.2 to Theorem 4. Since both the true generative model $g^*$ and the estimated generative model $g$ take the same latent variable $\\mathbf{z}$ as input, $\\mathbf{z}$ and $h(\\mathbf{z})$ follow the same distribution. Therefore, $g^*(\\mathbf{z})$ and $g(\\mathbf{z}) = g^*(h(\\mathbf{z}))$ follow the same distribution.\n\n**Comment:**\n\nThe loss in equation (4) has a huge number of terms. An ablation study showing the importance of all of them would be more convincing.\n\n**Response:**\n\nWe attached the ablation result via an anonymous link: https://drive.google.com/file/d/13KnG5EdxdRhm4YA5p2lJU7BwHC4hAehP/view?usp=sharing, showing that all the terms are necessary, as is discussed in Section 4. Specifically, removing the second and third terms results in entanglement between the data generating and transforming mechanisms. Removing the fourth term results in entanglements between data generating mechanisms. If we remove the fifth term, the generative model may miss some mechanisms (e.g., the mechanism of digit 5 in the attached figure).\n\n&nbsp;\n\n**Comment:**\n\nI am not sure why the second and third terms are important for invertibility, the reconstruction loss (fifth term) should encourage invertibility, right?\n\n**Response:**\n1. The second and third terms enforce the sampled $\\mathbf{z}$ to be predictable from the generated data $\\mathbf{x}$. Then, we could go from $\\mathbf{x}$ to the sampled $\\mathbf{z}$, which implies that the function $g: \\mathcal{Z} \\rightarrow \\mathcal{X}$ is invertible.\n2. The issue with the reconstruction loss (fifth term) is that it does not include any sampled $\\mathbf{z}$. Therefore, it is possible to encode the data $\\mathbf{x}$ to $\\mathbf{z}'$, differing from the sampled $\\mathbf{z}$, while succeeding in reconstruction but violating invertibility. 
The role of the reconstruction term is discussed in Section 5.", "role": "user" }, { "content": "I value the authors' response. Most of my concerns have been addressed.", "role": "assistant" } ]
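For concreteness, one plausible reading of the prior discussed in this thread, a mixture of degenerate Gaussians in which each component is supported only on its own cluster subspace, alongside a shared subspace for the data-transforming mechanisms, can be sampled as below. The block sizes, the number of components, and the treatment of the shared block as a standard Gaussian are assumptions made for illustration and may differ from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
k, d_cluster, d_shared = 3, 2, 2        # assumed: 3 mechanisms, 2 dims each, 2 shared dims

def sample_prior(n):
    """Each point is non-zero only on its own cluster block plus the shared block."""
    d = k * d_cluster + d_shared
    z = np.zeros((n, d))
    comp = rng.integers(0, k, size=n)                    # which mechanism generated the point
    for i in range(k):
        rows = comp == i
        block = slice(i * d_cluster, (i + 1) * d_cluster)
        z[rows, block] = rng.normal(size=(rows.sum(), d_cluster))
    z[:, -d_shared:] = rng.normal(size=(n, d_shared))    # shared transformation coordinates
    return z, comp

z, comp = sample_prior(5)
print(comp)
print(np.round(z, 2))   # zeros outside the active cluster block and the shared block
```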
EMZBuEEk7jJ
CLeaR 2022 Conference
Reviewer_Na2R
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_Na2R, and the following are the paper you need to review:\n```<<EMZBuEEk7jJ>>```", "role": "user" }, { "content": "Review Content: Summary: This paper develops a method for discovering latent generating mechanisms, perhaps with post-discovery distribution shifts, from an unlabeled mixture dataset. \nMain review: This is a hard problem, and the algorithm in this paper achieves success by leveraging the assumption of independent generating mechanisms. Theoretical characterizations, a simulation study, and performance on three standard benchmarks are all provided.\n\nOverall, I felt that this was a very strong paper. It is clearly written (or as clearly as possible given the complexity of the algorithm), makes progress on a significant problem, and provides some useful tests. It would have been nice to have more systematic simulation testing, not just on a case that is particularly favorable to this method (rather than standard VAE). I was also surprised that there was no reference to recent work on clustering datapoints based on shared causal structure (e.g., Huang, et al’s 2019 NeurIPS paper on shared causal relation clustering). Obviously, that challenge is slightly different from this one, but there are many similarities and overlaps, so it would be helpful to have some connections with that type of research.\nRating: 9 strong accept, excellent work\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer Na2R\nWe thank the reviewer for the positive review! We are glad the reviewer found the paper solid and relevant.\n\n&nbsp;\n\n**Comment:**\n\nIt would have been nice to have more systematic simulation testing, not just on a case that is particularly favorable to this method (rather than standard VAE).\n\n**Response:**\n\nWe compared our method to Ada-GVAE, a SOTA identifiable VAE. 
We also included results from prior work on learning data generating mechanisms in Figure 1 as comparisons.\n\n&nbsp;\n\n**Comment:**\n\nI was also surprised that there was no reference to recent work on clustering datapoints based on shared causal structure (e.g., Huang, et al’s 2019 NeurIPS paper on shared causal relation clustering).\n\n**Response:**\n\nWe thank the reviewer for pointing out the closely related paper. We will add the paper to the related work section.", "role": "user" }, { "content": "Thanks to the authors for their clarification. The authors have answered almost all of my concerns. I uphold my recommendation to accept this paper.", "role": "assistant" } ]
GdAzRedTV7J
CLeaR 2022 Conference
Reviewer_nTnf
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_nTnf, and the following are the paper you need to review:\n```<<GdAzRedTV7J>>```", "role": "user" }, { "content": "Review Content: Summary: This paper is concerned with learning interpretable deep representations by combining DNNs with decision trees. \nMain review: The paper proposes a clever and simple-to-implement way of learning interpretable representations for deep NNs, by jointly training them with decision trees. This question is relevant for the CleAR community since learning these types of representations is central to integrating causality with deep learning. \n\nThe work is technically sound, and clearly written. Related work is discussed in detail and the problem is well-motivated. The empirical experiments are convincing, and they are a nice demonstration that superior performance and interpretability do not necessarily go hand in hand. The ablation experiments also help strengthen the paper. \n\nMy only criticism is that when the implementation of the algorithm is presented, it is not clearly stated exactly which part of their pseudo-code and objective function corresponds to intersection regularization. Furthermore, it would be great if the authors can comment on whether the intersection regularization is actually necessary, if they were to use exactly the same architecture they proposed, and whether they have performed any experiments with that?\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Thank you for your review\nThank you for your supportive review, we appreciate the time and effort.\n\n> **Reviewer:** My only criticism is that when the implementation of the algorithm is presented, it is not clearly stated exactly which part of their pseudo-code and objective function corresponds to intersection regularization. \n\n**Authors:** The notion of intersection regularization is a generic idea that can be implemented in multiple ways with different variations. The concept of intersection regularization is applied between the functions $G$ and $T$ and is carried out in lines 7, 9, 11 and 12 in Algorithm 1. 
Specifically, in line 7 we update $G$ to minimize the error of $G \\circ F$ with respect to the target function $y$, while being restricted to also be close to $T \\circ F$ (e.g., $\\min_{\\theta} Q(\\theta,\\omega)$ in (4)). In ${S}’$ we aggregate samples for training the tree $T$ on top of $F$ (line 9). These are given by pairs $(F(x),G(F(x)))$. In lines 11-12 we essentially train $T$ to match the outputs of $G$ when their input is $F(x)$. \n\n> **Reviewer:** Furthermore, it would be great if the authors can comment on whether the intersection regularization is actually necessary, if they were to use exactly the same architecture they proposed, and whether they have performed any experiments with that?\n\n**Authors:** We experimented with the necessity of the proposed regularization. As we discussed in Section 6, paragraph ‘Ablation Study’, we conducted an experiment when training the same network architecture to minimize cross-entropy loss against the ground-truth labels (with and without the L1 constraint on the embedding layer). As depicted in Table 1, the fidelity scores of these variations (‘quantized net’ and ‘quantized net + L1’) are inferior to the fidelity performance of our full method. \n\n", "role": "user" }, { "content": "I appreciate the authors' efforts in addressing my concerns. It adequately addresses the points I raised. I continue to recommend this work positively.", "role": "assistant" } ]
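To make the alternation described in the exchange above concrete, here is a minimal sketch of one training epoch of intersection regularization as we read the rebuttal's description of Algorithm 1 (update $G$ toward the labels while penalizing disagreement with $T \circ F$, collect pairs $(F(x), G(F(x)))$, then refit the tree on them). All names, the choice of cross-entropy for the agreement term, and the coefficient `lam` are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of one epoch of the alternation described above (Algorithm 1, lines 7, 9, 11-12).
# Names, the agreement loss, and `lam` are illustrative assumptions.
import torch
import torch.nn.functional as nnF
from sklearn.tree import DecisionTreeClassifier

def intersection_regularization_epoch(F_net, G_net, tree, loader, optimizer, lam=1.0):
    feats, g_labels = [], []
    for x, y in loader:
        z = F_net(x)                      # quantized embedding F(x)
        logits = G_net(z)                 # MLP head G(F(x))
        loss = nnF.cross_entropy(logits, y)
        if tree is not None:              # stay close to the tree's predictions T(F(x))
            t_pred = torch.as_tensor(tree.predict(z.detach().cpu().numpy()),
                                     device=logits.device)
            loss = loss + lam * nnF.cross_entropy(logits, t_pred)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        feats.append(z.detach().cpu())                       # S': inputs F(x)
        g_labels.append(logits.argmax(dim=1).detach().cpu())  # S': targets G(F(x))
    # refit the tree so that T matches G on the quantized features
    new_tree = DecisionTreeClassifier(max_depth=8)
    new_tree.fit(torch.cat(feats).numpy(), torch.cat(g_labels).numpy())
    return new_tree
```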
GdAzRedTV7J
CLeaR 2022 Conference
Reviewer_uthu
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_uthu, and the following are the paper you need to review:\n```<<GdAzRedTV7J>>```", "role": "user" }, { "content": "Review Content: Summary: This paper proposes a novel problem named weakly supervised discovery of semantic attributes, which aims at extracting the semantic attributes based on simply image-level labels. To address this issue, this paper proposes to take advantage of MLP and decision tree to optimize the main classification network. Besides, Intersection Regularization is also proposed for training the MLP the decision tree together, and some relevant empirical results and theoretical analysis are also provided.\nMain review: This paper proposes a novel problem named weakly supervised discovery of semantic attributes, which aims at extracting the semantic attributes based on simply image-level labels. To address this issue, this paper proposes to take advantage of MLP and decision tree to optimize the main classification network. Besides, Intersection Regularization is also proposed for training the MLP the decision tree together, and some relevant empirical results and theoretical analysis are also provided.\n\nPros:\n1.\tThe paper makes a concrete and comprehensive overview of relevant work, especially in the field of Interpretability.\n2.\tThe paper proposes an interesting method to solve the proposed issue. The MLP behind the feature map is proposed to represent the quantized function for semantic attributes. The decision tree is proposed to auxiliary the main classification network to learn the relationship from attributes to image-level labels. The idea is easily implemented, and such modeling for these two different signals (semantic attributes and classification labels) is reasonable. \n3.\tThis paper also proposes a method named Intersection Regularization to optimize the main classification network by interactively updating the MLP and decision tree, which is also proved to be effective by some experiment results.\n\nCons:\n1.\tThere are some detailed errors in writing. For instance, page 5 mentioned “i’th elementary vector” and “ith attributes” at the same time. Besides, some nouns and sentences need to be clarified and defined so as to eliminate the ambiguity in this paper. 
For example, “feature fidelity” in Section 3.1, ”GD” in Section 5, and why “U = R, [-1,1] or {±1}”. \n2.\tThere is a confusing definition to the newly proposed problem. The title is “Weakly supervised Discovery of Semantic Attributes”, but page 2 also defines it as “Weakly Unsupervised Discovery of Semantic Attributes”. Therefore, I suggest the author should make an accurate name or definition to the proposed problem.\n3.\tThe motivation of the proposed problem seems not understandable to me. The essence of weakly supervised/unsupervised learning is to liberate humans from exhaustive annotation work, which means that the network could work with fewer or even no labels. Although this paper aims to find the attributes without annotating them, the image-level labels are utilized during the training process. It may sound more reasonable to utilize noisy or fewer labels for semantic attributes during the training phase so that the problem could be defined as “weakly supervised discovery of attributes”. \n4.\tThe meaning of exploring the potential semantic attributes should serve to better classification. However, the experiment results show that the classification accuracy of proposed method is way lower than some baseline methods, it may not convincing that only the evaluation to the recovered attributes achieves higher performance. Besides, the sole evaluation metric is feature fidelity score, which is newly proposed in this paper. Therefore, the author should show more solid and fair experiment results.\n5.\tLack of detailed implementation about the experiments including but not limited to the relevant data augmentation and training hyper-parameters such as batch size.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Thank you for your review\nThank you for your review, we appreciate the time and effort.\n\n> **Reviewer:** 1. There are some detailed errors in writing. For instance, page 5 mentioned “i’th elementary vector” and “ith attributes” at the same time. Besides, some nouns and sentences need to be clarified and defined so as to eliminate the ambiguity in this paper. For example, “feature fidelity” in Section 3.1, ”GD” in Section 5, and why “U = R, [-1,1] or {±1}”.\n\n**Authors:** Following the reviews, we fixed all of the mentioned typos. Feature fidelity stands for the metric proposed in Section 3.1. We refer to $d_D(f;g)$ as the fidelity of the feature $f$ with respect to $g$. By GD we meant Gradient Descent. Indeed, the setting should not be restricted to $U=\\mathbb{R}$, $[-1,1]$ and $\\{\\pm 1\\}$. Following the review, we removed this comment from the paper. \n\n> **Reviewer:** 2. There is a confusing definition to the newly proposed problem. The title is “Weakly supervised Discovery of Semantic Attributes”, but page 2 also defines it as “Weakly Unsupervised Discovery of Semantic Attributes”. Therefore, I suggest the author should make an accurate name or definition to the proposed problem.\n\n**Authors:** The term should be “Weakly Supervised Discovery of Semantic Attributes”. The other term is a typo, which we fixed following the reviews. \n\n> **Reviewer:** 3. The motivation of the proposed problem seems not understandable to me. The essence of weakly supervised/unsupervised learning is to liberate humans from exhaustive annotation work, which means that the network could work with fewer or even no labels. 
Although this paper aims to find the attributes without annotating them, the image-level labels are utilized during the training process. It may sound more reasonable to utilize noisy or fewer labels for semantic attributes during the training phase so that the problem could be defined as “weakly supervised discovery of attributes”.\n\n**Authors:** We agree that in many weakly/unsupervised learning settings the goal is to predict the labels in scenarios in which they are not present or partially present. As discussed in Section 3, in the proposed setting, the learning algorithm is provided with the class-labels (e.g., cat/ dog/ airplane/ etc) but is not provided with the semantic features $f(x)$ that are associated with pairs of samples $(x,y(x))$, where $x$ is an example and $y$ is its class-label. The goal is to be able to recover the semantic features $f(x)$, which are hidden from the learning algorithm. In that sense, the actual labels that we would like to predict is $f(x)$, rather than $y(x)$. In other words, $y(x)$ serves as auxiliary information, rather than the actual label in this setting. Therefore, we refer to this setting as a weakly supervised recovery of the attributes $f(x)$, since they are not provided to the learning algorithm, but some auxiliary labels (i.e., $y(x)$) that are related to $f(x)$ are provided instead.\n\nThis scenario is very common: in almost all scholarly fields there are scenarios in which one has labels but needs to find out the separating features. In literature, one knows the author but not the characteristics of their writings. In chemistry, one knows the melting temperature of the molecule but not the sub-molecules that lead to this value. Section 1 provides other examples in paleography, botany, and zoology.\n\n\n> **Reviewer:** 4a. The meaning of exploring the potential semantic attributes should serve to better classification. However, the experiment results show that the classification accuracy of the proposed method is way lower than some baseline methods...\n\n**Authors:** It is often the case that adding explainability hurts overall performance [1]. However, in our case, in a direct comparison between the same generic architecture with and without Intersection Regularization (i.e., ‘quantized net’ and ‘quantized net + L1’), the hit in performance is modest. \n\nWhile some baselines in the literature apply very specialized losses (Zhuang et al. 2020 , Chen et al , 2019) or augmentations (T. Hu et al. 2019) in order to drive accuracy in specific datasets, we decided to avoid these in favor of a more generic approach.\n[1] G. K. Dziugaite, S. B. David, D. Roy, Enforcing Interpretability and its Statistical Impacts: Trade-offs between Accuracy and Interpretability, Arxiv preprint, 2020.\n\n> **Reviewer:** 4b. ...it may not be convincing that only the evaluation to the recovered attributes achieves higher performance. Besides, the sole evaluation metric is feature fidelity score, which is newly proposed in this paper. Therefore, the author should show more solid and fair experiment results.\n\n**Authors:** We believe that the problem introduced in the paper is an interesting and important one. As the setting itself is novel, there are no existing metrics or solutions. 
Since the setting itself is novel, and (as far as we know) there are no other contributions that try to recover the ground-truth attributes in an unsupervised manner, we have no direct comparison with other work that tries to optimize the overlap between the extracted features and a set of ground truth attributes. We do, however, perform evaluation of multiple baselines and show the advantage of our method in the newly introduced settings. \n\nWe believe that the most solid experiments are those in which we add intersection regularization to a well established architecture and compare the results (last four rows of Tab. 1). Across all datasets we improve the feature fidelity while the degradation in accuracy is minimal. \n\n> **Reviewer:** 5. Lack of detailed implementation about the experiments including but not limited to the relevant data augmentation and training hyper-parameters such as batch size.\n\n**Authors:** In the experiments section, we provide the various loss coefficients,optimizer, stopping criterion, initialization and the evaluation process. We apologize for neglecting to mention some details. These are listed below and will be added to the paper. \n\n* Data Augmentations: we used the standard augmentations used in fine-grained classification tasks: (i) RandomHorizontalFlip with flipping probability p=0.5, (ii) RandomRotation with degrees divisible by 15 and (iii) RandomCrop with crop size 224. \n\n* We use a batch of size 64 across all of the experiments.\n\n* The learning rate is 0.001.\n", "role": "user" }, { "content": "I'm grateful for the authors' response. Most of my concerns have been addressed.", "role": "assistant" } ]
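The implementation details listed in the reply above (flip with p=0.5, rotations by multiples of 15 degrees, 224 random crop, batch size 64, learning rate 0.001) can be summarized in a short pipeline sketch. The multiple-of-15 rotation is implemented below with a custom transform, which is our reading of the description rather than the authors' code; the optimizer class is not specified in this exchange and is left as a comment.

```python
# Sketch of the augmentation pipeline and loader settings stated in the rebuttal.
import random
from torchvision import transforms
from torchvision.transforms import functional as TF

class RandomRotationMultipleOf15:
    """Rotate by an angle drawn from {0, 15, ..., 345} degrees (our reading of
    'RandomRotation with degrees divisible by 15')."""
    def __call__(self, img):
        return TF.rotate(img, angle=float(random.choice(range(0, 360, 15))))

train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    RandomRotationMultipleOf15(),
    transforms.RandomCrop(224),
    transforms.ToTensor(),
])

# assuming `train_set` is a dataset built with `train_transform`:
# loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)
# learning rate 0.001 as stated; the optimizer type is not given in this exchange
```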
GdAzRedTV7J
CLeaR 2022 Conference
Reviewer_jvUt
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_jvUt, and the following are the paper you need to review:\n```<<GdAzRedTV7J>>```", "role": "user" }, { "content": "Review Content: Summary: The paper proposes to learn an interpretable discrete layer with both quantization and regularization with decision trees.\nMain review: Strengths:\nThe results appear promising: according to the newly defined measure of feature fidelity, the proposed model learns more interpretable discrete representations without much loss on the performance.\n\nThe idea of hypothesis-space regularization is interesting (but it could be better motivated, see below).\n\nThe set of baselines and datasets makes sense.\n\nWeaknesses:\nIt is not clear that the paper fits well with the causal learning and reasoning theme. While learning discrete representations is likely to be useful in the field of causal ML, the paper does not discuss these connections.\n\nThe paper does not motivate the intuition behind the hypothesis-regularization idea. Why should we expect it to work? How the proposed training dynamic is expected to enforce feature fidelity in the discrete representation.\n\nAn important aspect of the approach is the training dynamic arising from alternating between updates on the two losses and the tree retraining. But the experiments do not show this dynamic, they only focus on the final outcome.\n\nThe number of dimensions in the discrete layer seems to be chosen much larger than the annotations used for evaluation (256, 512 in networks versus 64, 50 in datasets). How does the proposed model behave when the number of dimensions is the same or smaller than in the annotations? In Figure 3. some ablation is performed but the number of dimensions still remains >= 128.\n\n\n\nMinor comments:\nIn the computation of feature fidelity: what strategy is used to compare 4-bits attributes with binary attributes, or to compare with numeric attributes?\n\nIn section 3.1\nmax{r(q1, q2;D), r(q1, 1 − q2;D)} shouldn't it be 1 - q1 as well in the second term to have invariance w.r.t. 
whether the positive are marked as 1 or 0.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Thank you for your review\nThank you for your supportive review, we appreciate the time and effort.\n\n> **Reviewer:** It is not clear that the paper fits well with the causal learning and reasoning theme. While learning discrete representations is likely to be useful in the field of causal ML, the paper does not discuss these connections.\n\n**Authors:** The paper fits well within the topics listed in the call for papers under “Fairness, accountability, transparency, explainability, trustworthiness, and recourse” and the links to transparency and explainability are made within the manuscript (e.g., Sections 1 and 2). More broadly, we believe that the discovery of meaningful intermediate features is at the heart of the scientific methodology and is therefore crucial for the study of reasoning.\n\n> **Reviewer:** The paper does not motivate the intuition behind the hypothesis-regularization idea. Why should we expect it to work? How the proposed training dynamic is expected to enforce feature fidelity in the discrete representation.\n\n**Answer:** Motivation: As mentioned in the introduction, we look for features that are both evidence-based and distinctive. By evidence-based we mean that there is a mapping $f$ such that every input $x$ is mapped to a vector $f(x)$ of binary values, indicating the presence of each attribute. Distinctiveness means that there should be simple rules that determine the class label $y(x)$ based on the obtained attributes $f(x)$. Since deep networks excel in the creating mappings such as $f$ and decision trees are designed to create logical rules that recover labels from attributes, finding a classifier that is in the intersection of both hypothesis classes (the one of deep networks and the one of decision trees) seem like a plausible way to obtain the desired properties.\n\nWe prove, in Sec. 4 that the proposed method for intersecting the hypothesis spaces converges under some assumptions. \n\nFor dynamics, please see below.\n\n> **Reviewer:** An important aspect of the approach is the training dynamic arising from alternating between updates on the two losses and the tree retraining. But the experiments do not show this dynamic, they only focus on the final outcome.\n\n**Authors:** Following the reviews, we provide plots of the dynamics of the IR loss and the fidelity score over training epochs for various datasets.\n\naPascal dataset: https://imgur.com/a/hVPOZ6J (anonymous image)\n\naYahoo dataset : https://imgur.com/a/LPID9Cm (anonymous image)\n\nAs can be seen, as a general tendency, the fidelity score increases in correlation with the fact that the intersection regularization loss (IR Loss) decreases. While during most of the training process both metrics improve together, at later stages they seem to fluctuate in a narrow band in a matter that is not always correlated. The cross-entropy (CE) loss and the accuracy constantly improve from epoch to epoch. There does not seem to be a clear tradeoff between the IR loss and the CE loss, at least in the first 12 epochs.\n\n\n> **Reviewer:** The number of dimensions in the discrete layer seems to be chosen much larger than the annotations used for evaluation (256, 512 in networks versus 64, 50 in datasets). 
How does the proposed model behave when the number of dimensions is the same or smaller than in the annotations? In Figure 3. some ablation is performed but the number of dimensions still remains $\\geq 128$.\n\n**Authors:** For most datasets, the number of ground-truth attributes in the datasets is between 60-85 (except for CUB-200-2011, where we have 312 attributes), and thus lowering the latent dimension to less than 100 might lead to an excessive loss in expressivity, which would decrease both the accuracy and the fidelity score. \n\nIn our experiments, we chose the dimension to be the minimal dimension (that is a power of 2) that allows a reasonable accuracy rate. This way we can ensure that the model is able to perform reasonably on the classification task, while using a relatively small number of dimensions for recovering the attributes.\n\nWith the default parameters of the paper (see also parameter sensitivity study in Sec. 7), we obtain for aYahoo an accuracy of 96.86%, and fidelity score of: 61.46.\n\nFor latent dimension of 64 and quantization with 1 bit both the accuracy and the fidelity are hurt: accuracy: 92.22%, and fidelity score of: 48.18.\n\nReducing the dimension further, this becomes worse: For latent dimension of 32 and quantization with 1 bit we get: accuracy: 91.10%, and fidelity score of: 51.36. For latent dimension of 16 and quantization with 1 bit we get: accuracy: 58.47%, and fidelity score of: 35.72.\n\nIn our method, when the accuracy is low, the distinctiveness (accuracy obtained with simple rules over the attributes; see introduction) is low, since the two are tied through intersection regularization. Therefore, assuming that one can obtain high accuracy based on the ground truth features, the fidelity (agreement with the ground truth features) is also expected to be low, when the accuracy is low. In conclusion, the accuracy must be high enough in order to support high fidelity, requiring the deep neural network to have enough capacity.\n\n> **Reviewer:** In the computation of feature fidelity: what strategy is used to compare 4-bits attributes with binary attributes, or to compare with numeric attributes?\n\n**Authors:** The strategy is described in the last paragraph of ‘Quantization and Binarization’ paragraph of Section 5 (on page 8). For multi-bit quantization, each value of each attribute is considered a separate (binary) feature, e.g., for 2-bit quantization possible values of feature $f_i$ are 0,1,2,3; this generates 4 positive boolean (binary) features $f_i=0$, $f_i=1$, $f_i=2$, and $f_i=3$, and four negative ones: $f_i\\neq 0$, $f_i\\neq 1$, $f_i\\neq 2$, and $f_i\\neq 3$. For instance, in the 2-bit quantization, the binary attribute $f_i=0$ is encoded as (1,0,0,0), the attribute $f_i=1$ is encoded (0,1,0,0) and the binary attribute $f_i=3$ is encoded as (0,0,0,1). \n\n> **Reviewer:** In section 3.1 $\\max(r(q_1, q_2;D), r(q_1, 1 − q_2;D))$ shouldn't it be $1 - q_1$ as well in the second term to have invariance w.r.t. whether the positive are marked as 1 or 0.\n\n**Authors:** The current expression is correct. Intuitively, we want to compare $q_1$ with $q_2$ and also with $q_2$ as if its “bits were flipped”, $(1-q_2)$. This is given by $\\max(r(q_1, q_2;D), r(q_1, 1 − q_2;D))$. For instance, suppose $q_2$ is a function that classifies dogs as 1 and cats as 0, and $q_1$ is the same function but flipped (i.e., classifies dogs as 0 and cats as 1). 
In this case, $r(q_1, q_2; D)=0$ and likewise $r(1-q_1, 1-q_2; D)=0$, even though $q_1$ and $q_2$ are essentially the same function (up to a bit flip). The flipped term in the maximum recovers this agreement: $r(q_1, 1-q_2;D)=r(q_1,q_1;D)=1$. \n", "role": "user" }, { "content": "I'm grateful for the authors' response. It adequately addresses the points I raised. I stand by my favorable recommendation for this paper.", "role": "assistant" } ]
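The flip-invariant comparison and the per-value binarization for multi-bit quantization discussed above can be written compactly. In the sketch below, $r$ is taken to be plain 0/1 agreement over the dataset, a simplifying assumption; the paper's $r$ may be defined differently.

```python
# Illustration of max(r(q1, q2), r(q1, 1 - q2)) and of the per-value binarization
# described for multi-bit quantization. r is plain agreement here (an assumption).
import numpy as np

def r(q1, q2):
    """Fraction of samples on which two binary predictors agree."""
    return float(np.mean(np.asarray(q1) == np.asarray(q2)))

def flip_invariant_score(q1, q2):
    # a predictor and its bit-flipped copy score 1.0 under this measure
    return max(r(q1, q2), r(q1, 1 - np.asarray(q2)))

def binarize_multibit(values, n_levels=4):
    """2-bit example: value v in {0,...,3} -> positive features [v==0,...,v==3]
    and their negations, matching the encoding described in the reply."""
    pos = np.eye(n_levels, dtype=int)[np.asarray(values)]
    return pos, 1 - pos
```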
HsSLdHuAmnY
CLeaR 2022 Conference
Reviewer_tEcR
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_tEcR, and the following are the paper you need to review:\n```<<HsSLdHuAmnY>>```", "role": "user" }, { "content": "Review Content: Summary: This paper studies the identifiability of the causal relationships between continuous and discrete variables.\nMain review: # Quality and clarity\nThe paper is not clear, not well-organized, and very hard to read, and it seems to be missing some important points.\n- The aim of this study is not clearly stated in Abstract or Introduction. The aim of this study seems extending the studies by Li and Shimizu (2018) and Wei et al. (2018). However, it is not clear in Abstract or Introduction. In addition, there is no explanation about what is the gap in Li and Shimizu (2018) and Wei et al. (2018).\n- The details about the models of Li and Shimizu (2018) and Wei et al. (2018) can be found in Section 2 (Model definition), but should be moved to Section 1 (Introduction) or an additional section about related studies.\n- There is no definition about the probabilistic distribution of $e$ in Definition 1.\n- The subscript of function $f$ is abused in Definition 1. $f_c$ and $f_d$ refer to the functions of continuous variables and discrete variables respectively. However, the subscript of $f_i$ corresponds to the index of $x_i$.\n- $c$ is also abused: the subscript of $f_c$ means \"continuous\", and variable $c$ in \"$bx+e+c$\" means the intercept.\n- The last sentence in page 3 says \"Now we characterize the condition about the non-Gaussian distributions.\", but page 4 says \"it’s noteworthy that basic non-Gaussian distributions satisfy the Condition 1\". Therefore, it is hard to see if the authors are saying that all the non-Gaussian variables satisfy Condition 1 or not. In addition, it is not clear what \"basic non-Gaussian distributions\" means.\n- The authors \"assume that the two models give the same joint distribution of observed variables $x_i$ and $x_j$.\" in page 5, but there is no explanation about why it does not lose generality.\n- The proof (ii) of Theorem 2 aims to prove the identifiability between discrete variables. However, it is not clear how the causal direction is determined by the proof.\n- In Equation (11), $e_j$ is limited to follow Logistic(0,1). 
However, such definition cannot be found in Definition 1.\n- There is no explanation about \"Notears\" in page 10.\n- It is hard to evaluate the result shown in Figure 3 because there is no information about the ground truth. In addition, the experiment on the real-world data does not include comparative evaluation with previous methods.\n\n# Originality and significance\nAs I wrote above, the originality and the significance of this study is not clearly stated in this paper. The contribution of this study seems that the study extended the work of Li and Shimizu (2018) and Wei et al. (2018). However, the significance of it is still unclear.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer tEcR\nWe would like to thank the reviewer tEcR for giving helpful comments. We will address the question point by point below.\n\nQ1: The aim of this study is not clearly stated in Abstract or Introduction. The aim of this study seems extending the studies by Li and Shimizu (2018) and Wei et al. (2018). However, it is not clear in Abstract or Introduction. In addition, there is no explanation about what is the gap in Li and Shimizu (2018) and Wei et al. (2018).\n\nA1: The aim of this study is to provide identifiability guarantees for causal discovery with mixed variables. Li and Shimizu (2018) proposed a causal model that consisted of both continuous and discrete variables. Their experiment based on artificial data empirically implied that the model is identifiable, but any mathematical proof has not been given. Wei et al. (2018) developed a mixed causal model and proved its identifiability only in the bivariate cases. In their bivariate identifiability proof, they assumed the noises of continuous variables followed the Laplace distributions and the intercepts for discrete variables are zeros, i.e., $c_i = 0$ in Eq.(2). \nThus, we aim at enriching the identifiability results for causal discovery with mixed variables, i.e., allowing the proposed model in the proof to have more non-Gaussian distributions to be followed by the noises of continuous variables and possibly nonzero intercepts $c_i$ in Eq.(2), and proving the identifiability in both bivariate and multivariate cases. Please see the last three paragraphs in the Introduction. Thanks for your feedback. We will make them clearer in the Introduction.\n\nQ2: The details about the models of Li and Shimizu (2018) and Wei et al. (2018) can be found in Section 2 (Model definition), but should be moved to Section 1 (Introduction) or an additional section about related studies.\n\nA2: Compared with the models of Li and Shimizu (2018) and Wei et al. (2018), our model is additionally described using an integrated framework shown in Eq. (3) for the concise representation in the identifiability proof. So, we think it is better to move the Model definition to an additional section.\n\nQ3: There is no definition about the probabilistic distribution of $e$ in Definition 1.\n\nA3: From Assumptions A2 and A3, the probabilistic distributions of errors $e_i$ are defined as non-Gaussian and Logistic distributions for continuous and discrete variables, respectively. Please see Definition 1 and the corresponding assumptions A1-A3. We will make it clear in Eq. (3).\n\nQ4: The subscript of function $f$ is abused in Definition 1. 
$f_c$ and $f_d$ refer to the functions of continuous variables and discrete variables respectively. However, the subscript of $f_i$ corresponds to the index of $x_i$. $c$ is also abused: the subscript of $f_c$ means \"continuous\", and variable $c$ in \"$bx+e+c$\" means the intercept.\n\nA4: Thanks for your review. We will replace $f_c, f_d, P_c$ and $P_d$ with $f^{con}, f^{dis}, P^{con}$ and $P^{dis}$ using superscripts.\n\nQ5: The last sentence in page 3 says \"Now we characterize the condition about the non-Gaussian distributions.\", but page 4 says \"it’s noteworthy that basic non-Gaussian distributions satisfy the Condition 1\". Therefore, it is hard to see if the authors are saying that all the non-Gaussian variables satisfy Condition 1 or not. In addition, it is not clear what \"basic non-Gaussian distributions\" means.\n\nA5: Thanks for pointing out this issue. We attempted to characterize Condition 1 that non-Gaussian distributions in our model need to follow so that the identifiability holds. “Basic non-Gaussian distributions” mean those distributions which are commonly used in non-Gaussian data analysis methods including independent component analysis and non-Gaussian causal discovery. We will clarify them.\n\nQ6: The authors \"assume that the two models give the same joint distribution of observed variables $x_i$ and $x_j$.\" in page 5, but there is no explanation about why it does not lose generality.\n\nA6: In the proof of Theorem 2, we’d like to prove that under some conditions, the model for bivariate variables is identifiable (The two models in causal and anti-causal directions entail different joint distributions). Thus, we prove it by the contradictory method, i.e., assuming two models give the same joint distribution, then some conclusion would be drawn to contradict our model’s conditions. We will make it clear at the beginning of the proof.\n\nQ7: The proof (ii) of Theorem 2 aims to prove the identifiability between discrete variables. However, it is not clear how the causal direction is determined by the proof.\n\nA7: Theorem 2 demonstrates the identifiability of our model in the bivariate case, which does not focus on the procedure on how to determine the direction. To know how the causal direction is determined, please see Section 4 Optimization method.\n\n\n\n\n\n\n\n\n\nQ8: In Equation (11), $e_j$ is limited to follow Logistic(0,1). However, such definition cannot be found in Definition 1.\n\nA8: In Definition 1, the LiM model is defined to satisfy Assumptions A1-A3. Please see the Assumption A3, where the distribution of $e_j$ is assumed to follow Logistic(0,1) in Eq.(2).\n\nQ9: There is no explanation about \"Notears\" in page 10.\n\nA9: Thanks! We will add an explanation about the compared method “Notears”.\n\nQ10: It is hard to evaluate the result shown in Figure 3 because there is no information about the ground truth. In addition, the experiment on the real-world data does not include comparative evaluation with previous methods.\n\nA10: It is commonplace for us to encounter the fact that the datasets do not have ground truths. Though there is no public open ground truth towards this dataset, we could use expert or domain knowledge as a reference for evaluation. Many scholars have employed this dataset to verify their methods’ performance (Zhang et al. 2011; Margaritis, 2005). 
Specifically, in our real-world experiment, the finding “there is no direct link between NOX and MED but they are dependent through intermediate causal relationships” is consistent with the one in Margaritis (2005); the finding that “TAX, which reflects the government’s housing policy, influences IND, LST, and CRI” is consistent with the one summarized in Kenyon et al. (2012). In addition, we applied the comparison methods to the real-world dataset. Overall, we see that the logistic method estimated more than 40 edges and LiNGAM estimated nearly 20 edges, both containing more spurious edges than the other methods; the laplace and PC methods estimated TAX as an effect variable, which is not consistent with our common understanding (Kenyon et al., 2012). Due to the large networks and limited space, we will present these results in the Supplementary Materials.\n\n[Kenyon et al., 2012] Kenyon, D. A., Langley, A. H., & Paquin, B. P. Rethinking property tax incentives for business. Cambridge, MA: Lincoln Institute of Land Policy, 2012.\n\n", "role": "user" }, { "content": "Thank you for the authors' reply. It satisfactorily addresses the major issues I had.", "role": "assistant" } ]
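The bivariate LiM-style model discussed in this exchange can be illustrated with a toy data generator: a continuous cause with non-Gaussian noise (Laplace is used below as one admissible choice) and a binary effect obtained by thresholding $b_{ji} x_i + c_j + e_j$ at zero with $e_j \sim \mathrm{Logistic}(0,1)$. This thresholding form is our reading of Eq. (2), and the coefficient values are arbitrary illustration choices.

```python
# Toy generator for a bivariate LiM-style model (continuous cause, binary effect).
# Laplace noise and the specific coefficients are illustrative assumptions.
import numpy as np

def simulate_bivariate_lim(n, b_ji=1.5, c_j=0.3, seed=0):
    rng = np.random.default_rng(seed)
    x_i = rng.laplace(loc=0.0, scale=1.0, size=n)    # continuous cause, non-Gaussian noise
    e_j = rng.logistic(loc=0.0, scale=1.0, size=n)   # Logistic(0, 1) noise
    x_j = (b_ji * x_i + c_j + e_j > 0).astype(int)   # binary effect via thresholding
    return x_i, x_j
```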
HsSLdHuAmnY
CLeaR 2022 Conference
Reviewer_eCeE
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_eCeE, and the following are the paper you need to review:\n```<<HsSLdHuAmnY>>```", "role": "user" }, { "content": "Review Content: Summary: The authors propose a method for causal discovery from mixed data under the linearity and causal sufficiency assumptions. The proposed method relies on assuming non-Gaussian error terms for continuous variables and logistic error terms for discrete variables. The authors show that, under certain conditions, the generating causal DAG in their Linear Mixed causal model (LiM) is fully identifiable. Finally, they showcase the performance of their LiM model on a number of simulations and in a real-world experiment on the Boston housing dataset.\nMain review: **Update**: After reading the other reviews and discussing with the authors, I have increased my score from 5 to 6.\n\nThe authors build upon the work of Shimizu et al. (2006) for handling continuous variables, and on the work of Wei et al. (2018) for handling discrete variables. The extension over Wei's approach is that the LiM model can incorporate more types of non-Gaussian distributions for the continuous variables. However, the method's applicability is severely limited by the causal sufficiency assumption, which will not hold for most realistic settings. I would have liked to have seen this limitiation discussed more explicitly in the paper. \n\nI think the paper could be significantly improved in terms of clarity. The paper is unfortunately pockmarked with writing errors, which hinder readability. Moreover, there are parts such as Corollary 1 whose purpose in the text is unclear to me. The authors require Condition 1 on the tails of the distribution to prove Theorem 2, but I am not sure why the condition of having a finite non-zero limit needs to be separated in the two cases S1 and S2. Conversely, some parts are not explained in enough detail, as far as I am concerned. For instance, the authors claim, if I understand correctly, that by setting $x_i = x_j = 0$ in equations (9) and (10), which are actually supposed to be a single equation, we conclude that $k_i = k_j$. However, what we get is that $\\frac{1-k_i}{1 - e^{-c_i}} = \\frac{1-k_j}{1 - e^{-c_j}}$. 
It seems to me that some non-obvious steps are missing there.\n\nOn page 2, what do you mean by \"The output causal structure here may fall into the solutions up to the ground truth's skeleton\"? Similarly, on page 9, what do you mean by \"an optimization method may easily fall into its skeleton solutions\" and what is the \"sub-optimality\" you are referring to?\n\nOn page 7, the authors relate their reasoning to the proof of Theorem 28 in Peters et al. (2014). I do not think it is necessary to copy the text of Remark 30 from Peters et al. (2014), especially since it is hard to understand without context. \n\nOn page 10, in the Experiments section, the authors mention that they take the instance of a PDAG returned by PC for evaluation. Which instance do they use exactly? Depending on how the evaluation is performed, one instance might perform better than another. It is also not exactly clear to me how the performance is being evaluated. Are the precision, recall, and F1 score computed in terms of the skeleton, that is, having the corrected edges, or do you also consider the edgemarks? When is a discovery correct and when is it wrong? \n \nOverall, I think that while the LiM model may improve over the state-of-the-art, albeit incrementally, the paper lacks clarity and rigor, and is need of some serious rewriting before it is ready for submission.\n\nTypos, writing errors, and other comments:\n- In the introduction, the categorization of causal discovery algorithms into constraint-based and score-based is introduced twice (on page 1, and then again at the beginning of page 2).\n- Page 1: \"may output Markov equivalence class\" -> \"output Markov equivalence classes\"; \"same conditional independence\" -> \"same conditional independences\"; \"These methods including\" -> \"These methods, including\"; \"on cases when\" -> \"on cases where\"\n- Page 2: \"which are naive and are easy to lose data information\" -> \"which is naive and can make it easy to lose data information\"; \"can not\" -> \"cannot\"; \"may return Markov equivalence class\" -> \"may return Markov equivalence classes\"; \"accompanied with\" -> \"accompanied by\"; \"ratio by\" -> \"ratio of\"; \"T class\" -> \"T categories\"; \"And further\" -> \"Further\"\n- Page 3: \"equals to 1\" -> \"equals 1\"; \"equals to 0\" -> \"equals 0\"; \"the assumptions A1-A3\" -> \"assumptions A1-A3\"\n- Page 4: \"condition probability\" -> \"conditional probability\"\n- Page 5, Equation (17): $b_{ij}$ should be $b_{ji}$ for both branches\n- Page 6: \"Due to the Condition 1\" -> \"Due to Condition 1\"\n- Page 6, Definition 3: The authors use the notation $\\perp$ presumably to denote (conditional) independence, but do not explain what it means. \n- Page 7, Definition 4: It would be nice to introduce the sets **pa** and especially **nd** for those who are not so familiar with the notation. 
What do you mean by \"$x_j$'s component with $f_i$\" when explaining the underbrace?\n- Page 7, Theorem 5: \"faithful\" -> \"faithfulness\"\n- Page 9: \"that the estimated $b_{ij}$ is hard to receive absolute zeros\" -> \"that it is hard for the estimated $b_{ij}$ to receive absolute zeros\";\n- Page 10: \"Logistic (logistic)\" -> \"logistic\"; \"Laplace (laplace)\" -> \"Laplace\"; \"Re.\" -> \"recall\" & \"Pre.\" -> \"precision\" (The abbreviations seem completely unnecessary.); \"to the sample sizes\" -> \"to sample sizes\"\n- Page 12: \"lower status of the population\" -> \"percentage lower status of the population\"; \"may not an effect variable\" -> \"may not be an effect variable\"; The variable INDUS is later referred to as IND, which is inconsistent.\n- Page 14: \"form\" should be \"from\"\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer eCeE\nWe would like to thank the reviewer eCeE for the thoughtful feedback and constructive comments. Please see below our responses.\n\nQ1: the method's applicability is severely limited by the causal sufficiency assumption, which will not hold for most realistic settings. I would have liked to have seen this limitation discussed more explicitly in the paper.\n\nA1: Thanks for pointing out this issue. Causal discovery without the causal sufficiency assumption is still an interesting and open challenge, especially when the observed data are mixed with both continuous and discrete variables. When the observed data are mixed with both continuous and discrete variables, to our best knowledge, though there is some progress on this challenge [1], no mathematical proof of the identifiability for multivariate cases has been provided even under the causal sufficiency. Therefore, we believe that it is an important contribution to first give proof of the identifiability under the causal sufficiency. We will discuss it more in Section 6 Conclusions.\n[1] Raghu, V.K., Ramsey, J.D., Morris, A. et al. Comparison of strategies for scalable causal discovery of latent variable models from mixed data. Int J Data Sci Anal 6, 33–45 (2018). https://doi.org/10.1007/s41060-018-0104-3\n\nQ2: There are parts such as Corollary 1 whose purpose in the text is unclear to me. The authors require Condition 1 on the tails of the distribution to prove Theorem 2, but I am not sure why the condition of having a finite non-zero limit needs to be separated in the two cases S1 and S2.\n\nA2: Corollary 1 is designed to help demonstrate in detail that the widely-used non-Gaussian distributions could satisfy Condition 1. We believed that Corollary 1 will promote the interpretation of Condition 1.\n\nQ3: The authors claim, if I understand correctly, that by setting $xi=xj=0$ in equations (9) and (10), which are actually supposed to be a single equation, we conclude that $ki=kj$. However, what we get is that $\\frac{1−k_i }{1−e^{-c_i}}=\\frac{1−k_j}{1−e^{-c_j}}$. It seems to me that some non-obvious steps are missing there.\n\nA3: You are correct. Thanks for your excellent comment! When setting $xi=xj=0$, we obtain $(1-k_i)(1-\\frac{1}{1+e^{-c_j}}) = (1-k_j)(1-\\frac{1}{1+e^{-c_i}})$ from Eq.(9) and (10) shown below. Then we introduce another assumption, i.e., $p(x_j=1|x_i=0) = p(x_i=1|x_j=0)$, which implies $\\frac{1}{1+e^{-c_j}} = \\frac{1}{1+e^{-c_i}}$ holds and thus $c_i=c_j$ holds. 
With $c_i=c_j$, it can be induced that $ k_i = k_j$ (Please note that the model of Wei et al. (2018) assumes $c_i=c_j=0$, which is a more restrict assumption than ours). In this case, Wei’s model (Wei et al., 2018) is a special case of our model. Intuitively, this assumption “ $p(x_j=1|x_i=0) = p(x_i=1|x_j=0)$” means the baselines are the same when predicting one discrete variable from the other. It can be tested easily by computing their values from the raw data. Besides, it is worth mentioning that our simulations used different $c_i$ and $c_j$, and a reason why the experimental results could not achieve 100% accuracy might be the violation of this assumption. Despite this, our method still performs better than others. We will revise our proof and experiments.\n\n$k_i^{x_i}(1-k_i)^{1-x_i}(\\frac{1}{1+e^{-(c_j + b_{ji} x_i)}})^{x_j}(1-\\frac{1}{1+e^{-(c_j + b_{ji} x_i)}})^{1-x_j} $ (9)\n\n$k_j^{x_j}(1-k_j)^{1-x_j}(\\frac{1}{1+e^{-(c_i + b_{ij} x_j)}})^{x_i}(1-\\frac{1}{1+e^{-(c_i + b_{ij} x_j)}})^{1-x_i} $ (10) \n\nQ4: On page 2, what do you mean by \"The output causal structure here may fall into the solutions up to the ground truth's skeleton\"? Similarly, on page 9, what do you mean by \"an optimization method may easily fall into its skeleton solutions\" and what is the \"sub-optimality\" you are referring to?\n\nA4: The “sub-optimality” means that the estimated skeleton is consistent with the ground-truth skeleton; while the estimated causal structure with causal directions may not be consistent with the ground-truth one.\n\nQ5: On page 7, the authors relate their reasoning to the proof of Theorem 28 in Peters et al. (2014). I do not think it is necessary to copy the text of Remark 30 from Peters et al. (2014), especially since it is hard to understand without context.\n\nQ5: Thanks for your advice. We will replace Remark 30 with its intuitive description.\n\n\n\nQ6: On page 10, in the Experiments section, the authors mention that they take the instance of a PDAG returned by PC for evaluation. Which instance do they use exactly? Depending on how the evaluation is performed, one instance might perform better than another. It is also not exactly clear to me how the performance is being evaluated. Are the precision, recall, and F1 score computed in terms of the skeleton, that is, having the corrected edges, or do you also consider the edgemarks? When is a discovery correct and when is it wrong?\n\nA6: Since the comparison method PC may return a PDAG, we return one possible DAG for the evaluation. The implementation code for this procedure can be found in the pgmpy python library. For the evaluation of performances, we used the estimated DAG, i.e., the precision, recall, and F1 scores were computed in terms of both edges and edgemarks. Only when the edge with its causal direction was estimated correctly, will it be considered as a correct discovery. We will clarify it.\n\nQ7: In the introduction, the categorization of causal discovery algorithms into constraint-based and score-based is introduced twice (on page 1, and then again at the beginning of page 2).\n\nA7: The first categorization is introduced for causal discovery algorithms, while the second introduced categorization is mainly and especially for those algorithms with mixed data. We will rephrase the introduction.\n\nQ8: Typos, writing errors.\n\nA8: Thanks so much for your meticulous review! 
We will fit them.\n\nQ9: What do you mean by \"$x_j$'s component with $f_i$\" when explaining the underbrace?\n\nA9: The underbrace with $x_j$ represents the input component of $f_i$ for the variable $x_j$. We will make it clear.\n\n", "role": "user" }, { "content": "Title: Acknowledgment of response and further comments\nThank you to the authors for the detailed rebuttal. I still have a few concerns:\n\nQ2: I still think you could do away with Corollary 1. I would say it is just as easy to show that the \"basic non-Gaussian distributions\" respect Condition 1 as it is to show that they follow Corollary 1, and I do not see that added benefit of knowing that the Gamma distribution follows S2, whereas the Laplace, Uniform, and Exponential follow S1.\n\nQ6: If you are using only one possible DAG for the evaluation, then the metrics you mentioned will depend on the specific DAG. Why not average over all DAGs in the equivalence class (PDAG)? I think that would provide a more accurate assessment of performance.\n\nFurther remarks:\nQ3: Explicitly adding that assumption would indeed make things much clearer.\nQ4: I would just write out what sub-optimality means in that case, that is, use better-known terms like consistency.", "role": "assistant" }, { "content": "Title: Response to Reviewer eCeE\nWe greatly appreciate the Reviewer eCeE's comments and suggestions! Please see below our response.\n\nQ2: I still think you could do away with Corollary 1. I would say it is just as easy to show that the \"basic non-Gaussian distributions\" respect Condition 1 as it is to show that they follow Corollary 1, and I do not see that added benefit of knowing that the Gamma distribution follows S2, whereas the Laplace, Uniform, and Exponential follow S1.\n\nA2: We will show that the \"basic non-Gaussian distributions\" respect Condition 1 without Corollary 1.\n\nQ6: If you are using only one possible DAG for the evaluation, then the metrics you mentioned will depend on the specific DAG. Why not average over all DAGs in the equivalence class (PDAG)? I think that would provide a more accurate assessment of performance.\n\nA6: We will average all DAGs for the evaluation of PC’s performance.\n\nQ3: Explicitly adding that assumption would indeed make things much clearer. \n\nA3: Yes, we will add this assumption to make things clearer.\n\nQ4: I would just write out what sub-optimality means in that case, that is, use better-known terms like consistency.\n\nA4: We will use “local optima” to represent the estimated structures that belong to the equivalence class of the global optima (the ground truth DAG), and will make it clear in the paper. Thanks!\n\n", "role": "user" }, { "content": "Title: Acknowledgment\nThank you for addressing my final concerns.", "role": "assistant" } ]
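The evaluation agreed on in this exchange (precision, recall, and F1 over directed edges, averaged over all DAGs in a PC-output equivalence class) can be written library-agnostically as below. DAGs are represented as sets of (cause, effect) tuples; this is an illustrative sketch, not the authors' evaluation code.

```python
# Directed-edge precision/recall/F1, averaged over all DAGs in an equivalence class.
# An edge counts as correct only when its orientation matches the ground truth.
def directed_edge_scores(estimated, truth):
    tp = len(estimated & truth)
    precision = tp / len(estimated) if estimated else 0.0
    recall = tp / len(truth) if truth else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def average_over_equivalence_class(candidate_dags, truth):
    scores = [directed_edge_scores(set(dag), set(truth)) for dag in candidate_dags]
    n = len(scores)
    return tuple(sum(s[k] for s in scores) / n for k in range(3))

# example usage with two members of a hypothetical equivalence class:
# avg_p, avg_r, avg_f1 = average_over_equivalence_class(
#     [{("A", "B"), ("B", "C")}, {("B", "A"), ("B", "C")}], {("A", "B"), ("B", "C")})
```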
HsSLdHuAmnY
CLeaR 2022 Conference
Reviewer_Gufh
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_Gufh, and the following are the paper you need to review:\n```<<HsSLdHuAmnY>>```", "role": "user" }, { "content": "Review Content: Summary: The main contribution of the paper is an introduction into a novel causal discovery algorithm for mixed (i.e., continuous and categorical) data. In the present approach, continuous variables are assumed to deviate from the Gaussian distribution and categorical variables are assumed to be the result of discretization of a latent variable that follows a logistic distribution. The authors show that the presented approach enables one to uncover causal structures beyond Markov equivalent classes in both, bivariate and multivariate data settings. A two-step hybrid approach is presented that, in the first step, estimates the log-likelihood score function for the joint distribution of mixed data, and, in the second step, identifies the causal graph with the best score value among structures of the skeleton space. The performance of the algorithm is evaluated using artificial data experiments. Here, simulation results suggest that the presented approach outperforms five competing algorithms (2 variants of the NOTEARS algorithm considering logistics and Laplace distributions, the PC algorithm, LiNGAM originally proposed for continuous variables only, and a second variant of the proposed algorithm without the second search phase). The Boston housing dataset is used to illustrate the presented approach in practice. Here, the results are mixed. Some of the identified paths come with high face validity, other paths are more likely to be spurious due to logical inconsistencies. \nMain review: The paper introduces a novel approach to discover causal structures in mixed (continuous/categorical) data. Such data occur across all empirical sciences. Thus, the presented approach can be considered of high relevance for the audience of causal machine learning and beyond. However, there is a small number of aspects (summarized below) that requires additional attention prior to recommending acceptance of the manuscript in the Proceeding of Machine Learning Research. \n\nFirst, the paper is well organized. However, in general, the paper might benefit from careful proofreading of a professional editor. 
Some parts of the paper are a bit hard to follow which may mitigate clarity.\n\np. 2, 2nd paragraph: For sake of completeness, I suggest including another mixed data causal discovery algorithm proposed by Yamayoshi et al. (2020). These authors proposed a Latent-LiNGAM (L-LiNGAM) algorithm. Here, it is assumed that observed categorical variables are the result of discretizing latent continuous variables via link functions (in case of the identity link L-LiNGAM reduces to the classic LiNGAM). The paper would benefit from incorporating this previous work. Also, I recommend including this algorithm in the artificial data experiment as a benchmark. \n\np. 3, last paragraph: Similar to Wei et al. (2018), the presented algorithm makes use of the fact that causal models are identifiable due to systematic differences in the marginal distributions of binary variables. It may be worth noting that a similar result has already been discussed by Wiedermann and von Eye (2017) in the domain of skewed latent variables. Following the idea of Dodge and Rousson (2000) – who showed that, when the error is Gaussian, the skewness of the true outcome variable will always be smaller than the skewness of the true predictor –these authors showed that, for the bivariate case, asymmetry exists between causally competing non-hierarchical log-linear models (i.e., models without one of the two main effects). Because the marginal distribution of the true outcome will be closer to symmetry than the marginal distribution of the predictor, causal model selection can be performed. \n\np. 10, Section 5.1 (Synthetic data): The selection of competing algorithms seems somewhat suboptimal. For example, why was LiNGAM included in the simulation? Since this algorithm has been designed for continuous variables only, it does not serve as a reliable benchmark. Instead, I suggest including previously suggested algorithms for mixed data (e.g., the L-LiNGAM approach mentioned above). \n\np. 10: Please provide a citation for the NOTEARS algorithm. \n\np. 10 & 11 (Results section): While the accuracy results for the proposed approach are impressive and convincing, I do not fully agree with some of the write-up in the corresponding results section. For example, the authors state that accuracies of the proposed approach and PC “…increase remarkably along with the sample sizes”, this statement only holds for sample sizes up to 1000. Beyond n = 1000, it seems that the accuracy of the proposed approach slightly decreases again. Here, the authors should clarify whether this is an ignorable instability due to the selected number of iterations, or whether this is a phenomenon inherent to the approach. Also, please note that the performance of the PC algorithm seems to be stable for the bivariate case which contradicts the authors’ statement cited above. Finally, when discussing results of varying the number of discrete variables, it may be worth noting that the accuracy of the proposed method tends to decrease, suggesting that the accuracy may be affected by the number of categorical variables. \n\np. 12 (Conclusion section): I fully agree that future work needs to be done with respect to nonlinear causal relations. However, given that the authors state that identification results also hold for the ANM, it seems that this work has already been done. Thus, I suggest being a little more specific. Which aspects of causally modeling nonlinear relations requires future attention? 
Finally, another important future direction may be the inclusion of categorical variables with more than two categories. Currently, the authors make use of the fact that multi-categorical variables can be represented by k – 1 binary variables. However, such an approach might be suboptimal, in particular, in the presence of ordinal variables. It is well-known that ignoring ordinal information can have adverse effects on model estimation (see von Eye and Wiedermann, 2018, for a recent illustration). The paper might benefit from incorporating these perspectives. \n\nReferences \n\nDodge, Y., & Rousson, V. (2000). Direction dependence in a regression line. Communications in Statistics-Theory and Methods, 29(9–10), 1957–1972. https://doi.org/10.1080/03610920008832589\n\nvon Eye, A., & Wiedermann, W. (2018). Strengthening arguments based on scale levels? Journal for Person-Oriented Research, 4(1), 45-47.\n\nWiedermann, W., & von Eye, A. (2017). Log-linear models to evaluate direction of effect in binary variables. Statistical Papers. (in press).\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer Gufh\nWe would like to thank the reviewer Gufh for the inspiring feedback and useful comments. Please see below our responses.\n\nQ1: p. 2, 2nd paragraph: For sake of completeness, I suggest including another mixed data causal discovery algorithm proposed by Yamayoshi et al. (2020). These authors proposed a Latent-LiNGAM (L-LiNGAM) algorithm. Here, it is assumed that observed categorical variables are the result of discretizing latent continuous variables via link functions (in case of the identity link L-LiNGAM reduces to the classic LiNGAM). The paper would benefit from incorporating this previous work. Also, I recommend including this algorithm in the artificial data experiment as a benchmark. \n\nA1: Thanks a lot for your valuable comments. The major difference between both methods is that they (Yamayoshi et al. 2020) do not concentrate on the theoretical analysis of the model with mixed data but the estimation method. However, our work aims at giving identifiability results for the mixed model and proposing an estimation method. We will add this work as a reference. Also, we could not find the available code online up until now. If we obtain the code, we will be ready to include it as a benchmark in the simulation. \n\nQ2: p. 3, last paragraph: Similar to Wei et al. (2018), the presented algorithm makes use of the fact that causal models are identifiable due to systematic differences in the marginal distributions of binary variables. It may be worth noting that a similar result has already been discussed by Wiedermann and von Eye (2017) in the domain of skewed latent variables. Following the idea of Dodge and Rousson (2000) – who showed that, when the error is Gaussian, the skewness of the true outcome variable will always be smaller than the skewness of the true predictor –these authors showed that, for the bivariate case, asymmetry exists between causally competing non-hierarchical log-linear models (i.e., models without one of the two main effects). Because the marginal distribution of the true outcome will be closer to symmetry than the marginal distribution of the predictor, causal model selection can be performed.\n\nA2: Thanks so much for your information. We will mention these works for bivariate cases. \n\nQ3: p. 
10: Please provide a citation for the NOTEARS algorithm.\n\nA3: We will provide a citation for the NOTEARS algorithm.\n\nQ4: p. 10 & 11 (Results section): While the accuracy results for the proposed approach are impressive and convincing, I do not fully agree with some of the write-up in the corresponding results section. For example, the authors state that accuracies of the proposed approach and PC “…increase remarkably along with the sample sizes”, this statement only holds for sample sizes up to 1000. Beyond n = 1000, it seems that the accuracy of the proposed approach slightly decreases again. Here, the authors should clarify whether this is an ignorable instability due to the selected number of iterations, or whether this is a phenomenon inherent to the approach. Also, please note that the performance of the PC algorithm seems to be stable for the bivariate case which contradicts the authors’ statement cited above. Finally, when discussing results of varying the number of discrete variables, it may be worth noting that the accuracy of the proposed method tends to decrease, suggesting that the accuracy may be affected by the number of categorical variables.\n\nA4: We will rephrase the experimental analysis on the sensitivity to the sample sizes. As shown in Fig. 1, our proposed method increases remarkably along with the sample sizes up to 1000. When the sample size reaches 1000, it looks to decrease slightly with increasing sample sizes. This phenomenon might be originated from the overfitting problem of optimization. Though our method is sensitive to sample sizes for learning causal networks, it does perform better than other comparisons. As for the effects of the number of categorical variables, we performed experiments on a causal network with 5 variables and 5000 sample sizes. A part of the result is presented below. \n\n|No. of dis. var.|PC|LiNGAM|mixed|laplace|logistic|ours| \n|-|-|-|-|-|-|-|\n|1\t|0.330229\t|0.799588\t|0.621185\t|0.840026\t|0.574885\t|$\\textbf{0.855555}$|\n|2\t|0.362768\t|0.595599\t|0.480997\t|0.629606\t|0.536318\t|$\\textbf{0.812429}$|\n|3\t|0.340048\t|0.568806\t|0.454009\t|0.521855\t|0.514909\t|$\\textbf{0.782167}$|\n|4\t|0.299163\t|0.592565\t|0.432868\t|0.492097\t|0.613164\t|$\\textbf{0.638836}$|\n\nWe see that the proposed method’s performance tends to decrease as the number of discrete variables increases. But our method overall performs better than other comparisons. We will add this description in Section 5.1.\n\n\n\nQ5: p. 12 (Conclusion section): I fully agree that future work needs to be done with respect to nonlinear causal relations. However, given that the authors state that identification results also hold for the ANM, it seems that this work has already been done. Thus, I suggest being a little more specific. Which aspects of causally modeling nonlinear relations requires future attention? Finally, another important future direction may be the inclusion of categorical variables with more than two categories. Currently, the authors make use of the fact that multi-categorical variables can be represented by k – 1 binary variables. However, such an approach might be suboptimal, in particular, in the presence of ordinal variables. It is well-known that ignoring ordinal information can have adverse effects on model estimation (see von Eye and Wiedermann, 2018, for a recent illustration). The paper might benefit from incorporating these perspectives.\n\nA5: We agree with you. 
We will add the inclusion of categorical variables with more than two categories as one of our future lines of work. As for nonlinear causal modeling, there are several aspects that deserve deeper investigation. For instance, we could focus on cases where, for mixed data, the continuous variables are generated by the Post-Nonlinear Model (PNL, which has a very general form of data-generating mechanism); or where the continuous variables are generated by a nonlinear deterministic relation $x_i = f(x_{pa(i)})$ without noise; or where a discrete variable is generated by a mechanism whose value equals 1 if a nonlinear function of its parent variables plus an error term is larger than 0, and equals 0 otherwise. These cases are more challenging than they might appear and are therefore worth exploring. Thank you for your valuable suggestions!\n", "role": "user" }, { "content": "I appreciate the authors' efforts in addressing my concerns. Most of my concerns have been addressed. I still support my favorable recommendation for this manuscript.", "role": "assistant" } ]
JjCi8ON-C3N
CLeaR 2022 Conference
Reviewer_SghD
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_SghD, and the following are the paper you need to review:\n```<<JjCi8ON-C3N>>```", "role": "user" }, { "content": "Review Content: Summary: This paper studies estimating the directionality of cause-effect relationships between two time series using the principle of Independence of Causal Mechanisms (ICM). ICM leads to the Spectral Independence Criterion (SIC), postulating that the power spectral density (PSD) of the cause time series is uncorrelated with the squared modulus of the frequency response of the filter generating the effect. This work provids an information theoretic interpretation of SIC, presents an identifiability result, and demonstrates the robustness of SIC to downsampling.\nMain review: The paper is well written, an easy read. \nThe problem is quite interesting to the community and has plenty of applications in different areas. Exploring the idea of SIC and establishing its information geometric interpretation is valuable. \n\nSIC seems to be a strong assumption which limits the applicability of this method. Is SIC verifiable from observed data? A discussion on what classes of dynamical systems satisfy this assumption would improve the acceptability of SIC. \n\nMy main concern is about the contribution and novelty. Comparing the result with [Shajarisales et al. (2015)], this work offers small amount of contribution (e.g., information geometric interpretation and Robustness with respect to Decimation procedure). \n\nGeneralizability of this approach is questionable. Consider a simple scenario in which X->Z->Y. In this case, does SIC-based approach detect a direct influence from X to Y? and how does this approach take information of time series Z into account for detecting causal-effects?\nWhat if there are hidden confounder between X and Y? Can one still apply SIC approach?\n\nIn Theorem 2, \\epsilon cannot be arbitrary small. If it goes to zero, then the probability becomes -1. Hence, given that <S_{xx}> is related to m and m^3\\epsilon^2 cannot be arbitrary small, is the proposed bound in this theorem a tight bound?\n\nThe experiments are not convincing that this method can be promising. 
Is it better than other approaches for identifying cause vs effect e.g., the Wiener filter approach in “On the Problem of Reconstructing an Unknown Topology via Locality Properties of the Wiener Filter,” by Donatello Materassi et al. 2012. This is yet another concern that the cited papers are not diverse (limited and seems to ignore other literature in causal discovery in time series using PSD. \n\nMinor comments:\nEquation after Eq. (2) is incorrect. \nThere are several missing references in the Appendix.\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Reply to Reviewer SghD\nThank you for your comments. We address them below.\n\n*“SIC seems to be a strong assumption which limits the applicability of this method. Is SIC verifiable from observed data?”*\n\nAs described in sec. 7, we can check that the distribution of SDR values is centered around one in empirical data. A deviation from this value suggests that SIC assumptions are not met and need to be adapted. See illustration on Fig. 4 c-d for power law like neural signals.\n\n*“A discussion on what classes of dynamical systems satisfy this assumption would improve the acceptability of SIC.”*\n\nAs stated in the background section, our theory and the original paper rely on linearity, stationarity and deterministic mechanism assumptions. We will emphasize it more in the introduction and background. \n\n*“My main concern is about the contribution and novelty. Comparing the result with [Shajarisales et al. (2015)], this work offers small amount of contribution (e.g., information geometric interpretation and Robustness with respect to Decimation procedure).”*\n\nIn addition to these two results, we offer for the first time an identifiability result, which is a (if the not “the”) central goal of theories of causal inference. In addition, sec. 7 provides an extension of this setting which is highly relevant in applications to experimental data (as they often comprise power law signals).\n\n*“Generalizability of this approach is questionable. Consider a simple scenario in which X->Z->Y. In this case, does SIC-based approach detect a direct influence from X to Y? and how does this approach take information of time series Z into account for detecting causal-effects? What if there are hidden confounder between X and Y? Can one still apply SIC approach?”*\n\nConfounding is a very interesting question but out of the scope of the present work, we will emphasize this in the background section (this is also mentioned in the discussion). The present paper focuses on inferring the direction of causation for cause-effect pairs and can be applied to unconfounded pairs of nodes satisfying our assumptions. In the case that you mention, we would thus test X->Z, Z->Y or even X->Y, depending on the question asked. Some further aspects of the case of X->Z->Y have been investigated empirically in Besserve et al. 2021. \n\n*“In Theorem 2, \\epsilon cannot be arbitrary small. If it goes to zero, then the probability becomes -1. Hence, given that <S_{xx}> is related to m and m^3\\epsilon^2 cannot be arbitrary small, is the proposed bound in this theorem a tight bound?”*\n\nWe did not perform any analysis of the tightness of the bound and are not aware of concentration of measure results which provide such guaranties. 
Note that tightness of the bound is not required to achieve our purpose: identifiability.\n\n*“The experiments are not convincing that this method can be promising. Is it better than other approaches for identifying cause vs effect e.g., the Wiener filter approach in “On the Problem of Reconstructing an Unknown Topology via Locality Properties of the Wiener Filter,” by Donatello Materassi et al. 2012. ”*\n\nFirst, let us mention that the paper by Materassi et al. (2012) looks like a very interesting theoretical work, but it does not provide quantitative comparisons with classical baselines, nor a link to code for testing it. Please refer to Shajarisales et al. (2015) for a comparison of SIC with competing baselines such as Granger causality. In the present work, we focus instead on better understanding the theoretical foundations of this approach. At a high level, while Granger-like approaches lend themselves to fairly classical identifiability analysis of auto-regressive models, this paper and Shajarisales et al. (2015) suggest SIC can be useful under a set of misspecifications of this model.\n\n*\" This is yet another concern that the cited papers are not diverse (limited and seems to ignore other literature in causal discovery in time series using PSD.\"* \nWe are happy to include a greater variety of references, although we are not aware of other methods relying on PSD in the strict sense. However, we can include references to partial directed coherence and other frequency-domain approaches exploiting cross-spectral densities. \n\n*“Minor comments: Equation after Eq. (2) is incorrect. There are several missing references in the Appendix.”*\n\nThank you, we have fixed these typos.\n", "role": "user" }, { "content": "Thanks for the authors' response. They have adequately addressed the majority of my concerns, and I may consider adjusting the rating upwards.", "role": "assistant" } ]
JjCi8ON-C3N
CLeaR 2022 Conference
Reviewer_Rp12
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_Rp12, and the following are the paper you need to review:\n```<<JjCi8ON-C3N>>```", "role": "user" }, { "content": "Review Content: Summary: The paper provides an information geometric interpretation for the Spectral Independence Criterion (SIC) based on the connection between SIC and the Trace Method and that between the Trace Method and Information Geometric Causal Inference established in the literature. After that, the authors present some identifiability results in a toy generative model and show the robustness of SIC to downsampling. Finally, the paper discusses a way to generalise the spectral independence assumption through the invariance perspective. \nMain review: To be honest, I am not familiar with the field. Therefore, it is hard for me to comment on the contribution and significance of the paper. For example, is SIC an important state-of-the-art method in causal direction inference? How useful is the information geometric interpretation? How broad is the generative model framework? \n\nRegarding the technical details, I have the following questions:\n\n1. Since confounding issues may present under the model in equation (1), I wonder when the authors discuss the causal mechanism using equation (1), do they assume that the causal relationship between $\\boldsymbol{x}$ and $\\boldsymbol{y}$ has been confirmed, and the goal is only to determine the causal direction between $\\boldsymbol{x}$ and $\\boldsymbol{y}$? If so, please specify that in the paper.\n2. How can we generalise the definite of $\\overrightarrow{P}_Y$ below the first equation on page 6 to the notations $\\overrightarrow{P_YU_Y}$, $\\overrightarrow{P_Y\\overrightarrow{P}_Y}$ and $\\overrightarrow{\\overrightarrow{P}_YU_Y}$? Could the authors be more specific on this?\n3. In section 5.1, $\\boldsymbol{b}$ is assumed to be sampled from a random variable $\\boldsymbol{B}$, where $\\boldsymbol{B}=R\\boldsymbol{U}$, with $R\\geq 0$ a real valued radius distribution and $\\boldsymbol{U}$ uniformly distributed on the unit sphere in $\\mathbb{R}^m$. Here, what does $R$ really mean, a distribution function or a random variable?\n\nIn terms of paper writing, there seems a lot of typos in the main text and Latex compiling errors in the appendix. 
For example,\n\n1. In the equation at the end of page 3, do the authors actually mean $\\hat{\\boldsymbol{h}}_{\\boldsymbol{Y} \\rightarrow \\boldsymbol{X}} $ in the denominator on the right-hand side?\n \n2. In the fourth line on page 5, do the authors mean ''define $\\rho_{\\boldsymbol{Y}\\rightarrow\\boldsymbol{X}}$ by exchanging the roles of $\\boldsymbol{X}$ and $\\boldsymbol{Y}$''?\n3. In the fourth line of section 5.4, it should be ''One key issue'' instead of ''On key issue''.\n4. Latex compiling errors can be found in Appendix A.2 and A.9.\n5. Many notations are not consistently presented in the paper. For example, the impulse response is sometimes presented in bold as $\\boldsymbol{h}$ and sometimes as an ordinary $h$.\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\n", "role": "assistant" }, { "content": "Title: Reply to Reviewer Rp12\nThank you for your comments that we address below. \n*“Since confounding issues may present under the model in equation (1), I wonder when the authors discuss the causal mechanism using equation (1), do they assume that the causal relationship has been confirmed, and the goal is only to determine the causal direction? “*\n\nYes, only determining the causal direction is addressed. We will specify more clearly the cause-effect pair inference setting in section 2.\n\n*”How can we generalise the definition of $\\overrightarrow{P_Y}$\nbelow the first equation on page 6 to the notations $\\overrightarrow{P_Y U_Y}$? Could the authors be more specific on this?”*\n\nThese last notations above equation (7) are generalized vector notations for oriented pairs of points (each point being a probability distribution) in the statistical manifold. The squared norm of these vectors corresponds to the KL divergence, and a generalized form of orthogonality between is specified by generalized Pythagorean theorems in the form of the formula at the top of page 6. These vector representations are illustrated on Figure 1. \nWe will clarify this in section 4.1 and change the notation for $\\overrightarrow{P_Y}$ as suggested by another reviewer. \n\n*”What does R really mean, a distribution function or a random variable?”*\n\nWe mean a random variable, we will fix it, thanks for pointing that out. \n\n*”In the equation at the end of page 3, do the authors actually mean $\\boldsymbol{h}_{Y\\rightarrow X}$ in the denominator on the right-hand side?”*\n\nYes we fixed this typo, thank you.\n\n*“In the fourth line on page 5, do the authors mean ''define\nby exchanging the roles of and ''?”*\n\nYes, we fixed this typo.\n\n*“In the fourth line of section 5.4, it should be ''One key issue'' instead of ''On key issue''.”*\n\nThank you. this is corrected.\n\n*“Latex compiling errors can be found in Appendix A.2 and A.9.”*\n\nThank you we will fix them.\n\n*”Many notations are not consistently presented in the paper. For example, the impulse response is sometimes presented in bold as and sometimes as an ordinary”*\n\nWe denoted the impulse response sequence in bold and its Fourier transform (with a hat) in ordinary font (as it is a function of a real variable). We will do a careful pass to check the consistency.\n", "role": "user" }, { "content": "I'm grateful for the authors' response. They have addressed most of the issues I raised, and I will likely raise the rating.", "role": "assistant" } ]
JjCi8ON-C3N
CLeaR 2022 Conference
Reviewer_P6NT
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_P6NT, and the following are the paper you need to review:\n```<<JjCi8ON-C3N>>```", "role": "user" }, { "content": "Review Content: Summary: Official Review\nMain review: The authors take the SIC-based causal discovery framework for time series and proves that under certain generative modeling assumptions the causal direction between two time series is identifiable. They also show robustness results to downsampling. I really like the work and thank the authors for their submission. I have several minor comments and clarification questions below and would be happy to hear authors' comments on them.\n\nDetailed Comments: \nI would explicitly mention that the proposed framework assumes a linear relation between the time series' in the Introduction. \n\nReferring to a classical textbook in signal processing might help with the background exposition such as Fourier transform, e.g., Signals and Systems by Oppenheim and Willsky.\n\n\"If such a BIBO-stable filtering relationship exists in only one direction (i.e. when the frequency response is not invertible at some frequency), it is natural to assign causality to the stable direction.\" \nCould you please elaborate on this? I believe the reasoning relies on the assumption that the true model satisfies this property. \n\nBIBO stability seems to be a central concept here, as it appears in Assumption 1. It might be better to provide the definition in a Definition environment rather than in-text for clarity.\n\n\"This suggests that the amplification implemented by the mechanism is not adapted to the specific values...\" \nThis statement might be misunderstood. I suggest rephrasing as \".. amplification implemented by the mechanism is not dependent on the specific values...\"\n\n\"white noise\" \nPlease define white noise in relation to DTFT.\n\n\"the ICM-based Trace Condition (Janzing et al., 2010b) has been shown to be equivalent to\" \nCould you also state the condition?\n\nI find the notation \\vec{P_Y} slightly confusing since it is used together with the vector notation. Pushforward measure is typically shown with \\mu in the subscript. 
Authors could consider using this notation instead.\n\nWhy is the distance between \\vec{P_Y} and U_Y called \"irregularity of mechanism\" in Figure 1 and last paragraph before Section 4.1? Could you provide some intuition and reasoning for this statement?\n\nCould you comment on why the deterministic mechanism (Y=f(X)) assumption is necessary in Section 4.1 but not for Gaussian processes in Section 4.3? I don't think this is necessary based on the proof; but some insight on what makes it possible that it wasn't before while working with distributions would be appreciated.\n\nA cross-citation is missing from page 17.\n\nPlease define \"radius distribution\". \n\n\"U and B are stochastically independent\" \nCould you clarify this statement? They don't seem to be independent in the probabilistic sence since B=RU so I believe I am missing something here.\n\nguaranties->guarantees\n\nas number of filter coefficients->as the number of filter coefficients\n\ninstead m inside the exponential->instead of m inside the exponential\n\nas there product->as their product\n\nIf->if in Proposition 2. Please go over Propositon 2 in general. There are several typos.\n\nCould you comment on the assumption between \\hat{h} and \\textbf{h} in Propositon 2?\n\nSection 5.3 and Propositon 2: It is true that \\eta>0 but is there a way to comment on the gap? What do you intuitively expect this gap to depend on? Furthermore, this gap will not be available at the time of inference, correct?\n\nguaranty->guarantee\n\nOn key issue->One key issue\n\nTheorem 3 is very nice to have.\n\nIt is also very good to see that the method has some robustness against down-sampling.\n\n\"where signals which such invariance\"->\"where signals with such invariance\"\nRating: 9 strong accept, excellent work\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n", "role": "assistant" }, { "content": "Title: Reply to reviewer P6NT\nThank you for appreciating the value of our contribution! Please find your comments addressed below.\n\n*”I would explicitly mention that the proposed framework assumes a linear relation between the time series' in the Introduction.”*\n\nThis is a good point, we will also specify it in the abstract as well.\n\n*”Referring to a classical textbook in signal processing might help with the background exposition such as Fourier transform, e.g., Signals and Systems by Oppenheim and Willsky.”*\n\nWe will add this reference in the background section.\n\n*”\"If such a BIBO-stable filtering relationship exists in only one direction (i.e. when the frequency response is not invertible at some frequency), it is natural to assign causality to the stable direction.\"\nCould you please elaborate on this? I believe the reasoning relies on the assumption that the true model satisfies this property.”*\n\nIndeed, if the system were not to be stable, the system would leave its domain of linearity as the output magnitude grows exponentially and the overall assumptions of this paper would be violated. We will mention it explicitly in the background section. \n\n*”BIBO stability seems to be a central concept here, as it appears in Assumption 1. It might be better to provide the definition in a Definition environment rather than in-text for clarity.”*\n\nWe will do it.\n\n*“ \"This suggests that the amplification implemented by the mechanism is not adapted to the specific values...\"\nThis statement might be misunderstood. I suggest rephrasing as \".. 
amplification implemented by the mechanism is not dependent on the specific values...\" ”*\n\nThank you, we will rephrase accordingly.\n\n*”Please define white noise in relation to DTFT.”*\n\nWe have defined it at the top of page 5. We can make this definition more formal but please let us know if we misunderstood your comment. \n\n*”\n\"the ICM-based Trace Condition (Janzing et al., 2010b) has been shown to be equivalent to\"\nCould you also state the condition? ”*\n\nYes we can restate it formally in the paper (in main text if space constraints allow, or in supplemental). In short, the trace condition postulated that the trace of the effect covariance factorizes as the production of the trace of the cause covariance and another trace factor function of the linear operator linking cause to effect.\n\n*“ I find the notation \\vec{P_Y} slightly confusing since it is used together with the vector notation. Pushforward measure is typically shown with \\mu in the subscript. Authors could consider using this notation instead.”*\n\nThis is a good idea, we will reformulate based on a pushforward notation to avoid confusion with the vector notation.\n\n*” Why is the distance between \\vec{P_Y} and U_Y called \"irregularity of mechanism\" in Figure 1 and last paragraph before Section 4.1? Could you provide some intuition and reasoning for this statement? ”*\n\nIrregularity stems from the fact that it is define as the distance between a point and a manifold of so-called “regular” distributions. In the SIC case, regular distributions are white noises. \n\n*“Could you comment on why the deterministic mechanism (Y=f(X)) assumption is necessary in Section 4.1 but not for Gaussian processes in Section 4.3? I don't think this is necessary based on the proof; but some insight on what makes it possible that it wasn't before while working with distributions would be appreciated.”*\n\nThe deterministic mechanism assumption is assumed through all the paper, due to the filtering relation specified in eq. (1). Proofs also rely on this assumption. We can further emphasize this in the background and introduction. Extensions of theoretical analyses of ICM criteria to the noisy mechanism case are actually very challenging. \n\n*”Please define \"radius distribution\".”*\n\nWe actually meant a radius random variable. We will modify it accordingly.\n*” \"U and B are stochastically independent\"\nCould you clarify this statement? They don't seem to be independent in the probabilistic sence since B=RU so I believe I am missing something here. ”*\n\nThis is a synonym for statistical independence (we will replace by this term), as opposed to independence of mechanism. But confusion comes from a typo: we meant $\\boldsymbol{U}$ and $R$ are statistically independent. We fixed it.\n\n*“Could you comment on the assumption between \\hat{h} and \\textbf{h} in Propositon 2?”*\n\nDue to Parseval equality, the squared norm of $\\textbf{h}$ also corresponds to the average of $\\hat{h}$ in the frequency domain, such that the assumption essentially states that the absolute squared Fourier transform remains below twice its average.\n\n*“Section 5.3 and Propositon 2: It is true that \\eta>0 but is there a way to comment on the gap? [...] Furthermore, this gap will not be available at the time of inference, correct?”* \n\nWe agree this gap is challenging to obtain, and this has been a difficulty to prove identifiability. 
At a high level, Theorem 3 relies on properties of the generative model to assess this gap.\n", "role": "user" }, { "content": "Thanks for the authors' response. It satisfactorily addresses the major issues I had. I reaffirm my positive endorsement for this paper.", "role": "assistant" } ]
JqbZ9JQz0wV
CLeaR 2022 Conference
Reviewer_upxr
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_upxr, and the following are the paper you need to review:\n```<<JqbZ9JQz0wV>>```", "role": "user" }, { "content": "Review Content: Summary: review\nMain review: The paper studies an approach to simplify the computation of prediction functions under constraints on path-specific effects. For the case where the the direct effect of the treatment to the outcome variable is in the paths that must be controlled, they present a reparametrization of the likelihood function that makes explicit the total path-specific effect that should be controlled. The advantage of this reformulation is to allow for simpler algorithm for the optimization of the likelihood under fairness constraints. The authors then discuss an extension of their model to learn the full distribution (including P(x))\n\npros: the paper presents the problem and approach thoroughly. Theorem 2 seems non-trivial and rather strong.\n\ncons: overall I found the paper hard to follow and I had difficulties understanding the conclusions of the experiments:\n* As for many papers on controlling path-specific effects, the notation and the formulas are quite heavy, but I guess this is somewhat unavoidable. Also, and similarly to other papers on controlling path-specific effects, the paper discusses only a few very simple causal models. While the generality of the theoretical results is still there, it is unclear how practical that is.\n* I did not understand the take-away message of Simulation 1 (Actually, I am not sure to understand what part of the paper is assessed in Simulation 1 and how it relates to simulation 2). \n* The authors consider a transductive setting (we know the points to be labeled), and there is no reason for transductive learning to be better than inductive learning in general. It is unclear to me why the estimation of P(X) turns out to improve predictive performances on this problem.\n* In general, I found it unclear what was the aim of the authors -- improving predictive performance given a fairness constraint, or finding a better approximation to the true distribution that respects the fairness constraints. 
From the experimental results, there is no algorithm that performs better on the two criteria, and it is unclear to me whether this was expected or whether the experimental results are inconclusive.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\n", "role": "assistant" }, { "content": "Title: Reviewer upxr \n\nWe thank the reviewer for their positive comments, and questions. \n\nQ1: Notations and examples\n\nA: We understand expressing counterfactual path-specific effects can be notationally hard to digest. However, they have clear interpretations in terms of decomposing the total effect along various causal mechanisms. We provided a few examples, both in the main draft and in the appendix, to better illustrate our points and help the readers with the notations. This however, does not imply our proposed methods are only limited to the examples we provided. We are happy to discuss more complicated models in the appendix (or in the main draft as space permits).\n\n\nQ2: What are the take-away messages of simulation 1? (Actually, I am not sure to understand what part of the paper is assessed in Simulation 1 and how it relates to simulation 2)\n\nA: Simulation 1 is related to Theorem 1. There are various ways of estimating an identified causal functional (likelihood-based methods, inverse probability weighting approaches, influence function based methods, etc). Our choice of estimator for computing unfair effects will affect the performance of our constrained optimizer. This is because different estimators rely on different parts of the joint distribution. What we showed in Theorem 1 is that if a larger part of the joint is being constrained, the constrained distribution then lies closer (in the KL-divergence sense) to the unconstrained distribution. In Simulation 1, for illustration our unfair effect is the direct effect. Estimating the direct effect with the influence function based estimator relies on most of the joint distribution; and hence leads to an approximation of the fair distribution that is the closest to the unfair distribution, as opposed to other scenarios where we pick a different estimator. \nSimulation 2 is related to Theorem 2. This shows that if we fix our estimator (say we use g-formula) then additionally constraining p(X), on top of other parts that appear in g-formula, moves our approximation of the fair distribution closer to the unfair distribution. \n\nQ3: Why does estimation of P(X) improve predictive performance?\n\nA: This is a consequence of what we show in Theorem 1. On one hand, the unconstrained distribution has the best predictive performance, but it does not enforce our fairness constraints. On the other hand, constraining more parts of the joint distribution moves us closer to the unconstrained distribution. Hence, by constraining P(X) we hope to move closer to the unconstrained distribution and achieve better predictive performance. \n\nQ4: Is the objective to improve predictive performance given a fairness constraint, or finding a better approximation to the true distribution that respects the fairness constraints?\n\nA: It’s a mix of the two. We want to achieve as best predictive performance as possible while satisfying our (hard) fairness constraints. 
Nearer approximations to the unconstrained distribution will achieve better predictive performance (since the unconstrained version will always do best). \n", "role": "user" }, { "content": "I'm grateful for the authors' response. Most of my concerns have been addressed. I continue to recommend this work for acceptance.", "role": "assistant" } ]
JqbZ9JQz0wV
CLeaR 2022 Conference
Reviewer_sqKv
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_sqKv, and the following are the paper you need to review:\n```<<JqbZ9JQz0wV>>```", "role": "user" }, { "content": "Review Content: Summary: A very strong piece of work!\nMain review: In this paper, the authors proposed a new method to train the fair predictive models. The new approach is guaranteed to achieve more accurate estimate and is easier to optimize than the one from (Nabi and Shpitser 2018).\n\n1. The paper is very well written and easy to understand. I did not check the proof in details, but as far as I am aware, the proofs are correct and technically sound;\n\n2. The simulations are solid and very comprehensive. It would be nicer if the authors can provide a easier to use computational toolbox from their proposed method. This can create a broader impact of the proposed work;\n\n3. The authors can consider citing the other results aiming for fairness in computer assisted decision making, such as the one from\nhttps://arxiv.org/abs/2110.15310 and the references therein. In particular, it would also be interesting to compare the criteria of fairness used in the two papers.\n\nOverall, as I am not an expert of fairness, and I did not keep a close track of the most recent publications in fairness, my comments might be biased. Still I believe this is a good paper!\nRating: 8 accept, good paper\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Reviewer sqKv\nWe thank the reviewer for their positive comments. \n\nRegarding the toolbox: We will make our code publicly available. We have thought about creating a fairness toolbox, but it certainly requires more time. \n\nRegarding the citation and discussion of the suggested paper: Thanks for sharing the reference. We are happy to add it to our references and discuss it in the paper. \n", "role": "user" }, { "content": "I value the authors' response. It resolves the majority of my concerns. I uphold my recommendation to accept this paper.", "role": "assistant" } ]
JqbZ9JQz0wV
CLeaR 2022 Conference
Reviewer_ujaL
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_ujaL, and the following are the paper you need to review:\n```<<JqbZ9JQz0wV>>```", "role": "user" }, { "content": "Review Content: Summary: In this work, based on Nabi and Shpitser (2018), the authors introduce new theoretical results and optimization techniques to make model training easier and more accurate. Experiments on simulations show the effectiveness of this method.\nMain review: The topic of this work is significant. How to impose hard fairness constraints on predictive models and optimize them efficiently is a bottleneck in fairness learning. Below are some concerns: \n\nConcerns:\n1.\tThe problem this work want to address is not clearly defined. In the second paragraph of Section 3, the authors aim to predict the outcome labels for new instances, subject to fairness constraints. Are new instances the test data? Then, the authors train on the combination of historical data and new instances. Does this mean the test data is used in the training phase?\n2.\tThe motivation of this work is confusing. What is the distribution $p^{*}(Y, Z)$ of the fair world? What is the distribution $p(Y, Z)$ of the unfair world? Z is the features. Why would Z change in the two worlds?\n3.\tWhat is the definition of $g(p(Y, Z))$? Why in practice, it should be strictly bounded as 0? What is the variables that appear in the estimator $\\widehat{g}(\\mathcal{D})$? Doesn’t $\\mathcal{D}$ contain the whole Z (Z1+Z2)?\n4.\tIn your Theorem, the observed data distribution $p(Y, Z)$ is induced by a causal model. Is there any requirements for the causal model? Has it to be in the form of Figure 1?\n5.\tThis paper is not well organized, e.g., the Fig 1 is referred to in page 2 but located in page 5. 
The logic and connection among those sections is ambiguous.\n6.\tSome grammatical issues, e.g., `… pathways are … context-specific, hence require input ...’ and typos, e.g., $R_i$ in the formula at the bottom of page 5 is out of the bracket, should be fixed.\n7.\tHow will this method perform on real-world dataset?\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Reviewer ujaL\nWe thank the reviewer for their positive comments, and questions. \n\nQ1: Training and test data\n\nA: Our training and test data do not overlap. \n\nQ2: Differences between fair p*(Y, Z) and unfair p(Y, Z) worlds. Does Z change?\n\nA: The covariates Z do not change between the two worlds. However, their distribution may change depending on how we approximate the fair distribution p*(Y, Z). Intuitively, the idea is that in the counterfactual world where our fairness constraint is satisfied, there are associations or dependencies among variables that are different somehow, and this induces a possible different joint distribution over Y, Z.\n\n\nQ3: What is the definition of g(p(Y, Z))? Why in practice, it should be strictly bounded as 0? What is the variables that appear in the estimator g^(D)? Doesn’t D contain the whole Z (Z1+Z2)?\n\nA: Our measure of fairness is the effect along unfair pathways. A path-specific effect is a counterfactual quantity that first needs to be identified as a function of observed data. We denote the identified functional for our unfair effect generically via g(p(Y, Z)), where p(Y, Z) is the observed data distribution. Different specific choices of g correspond to different functionals. The functional g(p(Y, Z)) needs to be computed/estimated from data which we denote by g^(D). D here just highlights the fact that the g functional is computable using samples in D. In practice, bounds near zero (or “null” value) are desirable, though the tolerance bounds (\\epsilon^-, \\epsilon^+ )are left up to the user.\n\nQ4: Is there any requirement for the causal model?\n\nA: Our assumption is that the observed data distribution factorizes wrt a directed acyclic graph (where some variables may be unmeasured).\n\nQ5: Fig 1 is referred to in page 2 but located in page 5. The logic and connection among those sections is ambiguous.\n\nA: This is an artifact of latex compilation. We are happy to move the figure closer to where it’s first referenced. \n\nQ6: Grammatical issues and typos \n\nA: Thanks for catching these. We will fix them in the camera-ready. \n\nQ7: How will this method perform on real-world dataset?\n\nA: Given the outlined theory, our expectation is that the methods we proposed should lead to better predictive performance in fair predictive modeling. Our simulations confirm that even in simple scenarios, this is true. Though we do not have space to do so here, we hope to apply these methods to real data in a nearby future work. \n", "role": "user" }, { "content": "Thanks to the authors for their clarification. The reply answers almost all of my questions. I will continue to recommend accepting this work.", "role": "assistant" } ]
LAAZLZIMN-o
CLeaR 2022 Conference
Reviewer_7tou
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_7tou, and the following are the paper you need to review:\n```<<LAAZLZIMN-o>>```", "role": "user" }, { "content": "Review Content: Summary: This paper presents a method that transforms images from one class to another based on diffusion models.\nMain review: Summary\n=======\n\nI would characterize this work as an extension of generative image diffusion models where, instead of generating images from one class, we input an image of class A and want to generate an output image of class B. This seems (?) to be motivated by a desire to connect this class of models to counterfactual estimation and causal inference in structural causal models (SCM). However, to me the task appears more similar to what has previously been described as \"counterfactual explanation\" (CE) in this context (e.g. Schut et al, 2021): change an image in such a way that it retains features of the original image, remains interpretable and realistic, but will be accepted as a member of the new class. The authors show experimental results on MNIST that seem to indicate that this method outperforms Schut et al and another baseline (Van Looveren and Klaise 2021). \n\nThe algorithm essentially operates in two steps:\n(1) gradually add noise to an input image from class A using a diffusion model (forward diffusion) until we have reached the point where the image consists \"only\" of noise;\n(2) apply a (reverse) diffusion model to the result of (1) to generate an output of class B. \n\nInterestingly, the \"noise\" produced in step (1) seems to retain some features of the original image, as the resulting image indeed appears to be \"close\" to the input class A image in many cases. This at least to me was somewhat unexpected, since I do not see where the method is particularly designed to retain input information in this way, but I found this very interesting to observe nevertheless. However, the qualitative results shown on the ImageNet data suggest that this technique no longer works reasonably in this more complex setting, as the resulting images would not be recognizable as members of the desired target class (they would be unrealistic counterfactual explanations). 
An alternative explanation of these results, which I would like to see the authors discuss, would be that step (1) is actually not successful in the sense that it does not achieve the stated goal of \"recovering the exogenous noise\".\n\nPros\n===\n\nOriginality: I found this to be an interesting and thought-provoking idea, with some partly unexpected results. \nQuality: The algorithm is well developed and seems to be performing well at least on the MNIST data. This indicates that there is something to gain by employing diffusion models in this context of counterfactual explanation. \n\nCons\n====\n\nQuality/Clarity: I have doubts about the stated connection to counterfactual prediction in structural causal models (SCMs). The introduction starts with some general ideas about how SCMs are defined, and how we could also use SCMs to represent (networks of) stochastic diffusion equations. While I can follow these general ideas, I do not see how they are connected to the algorithm presented in Algorithm 1, which appears to be much more close to the CE problem as discussed in Schut et al. It appears that Proposition 1 is where the authors are trying to make this connection. However I don't follow their reasoning at that critical point. How is it obvious that computing a counterfactual probability is equivalent to solving this reverse diffusion process? If this works for any structural causal model, as you seem to claim, then can you perhaps give a \"toy problem\" of a simple binary counterfactual quantity in a simple SCM, like one of the examples given in Pearl's \"Primer\"? The example below the proposition only adds to the confusion. Here the specific SCM x2 -> x1 is discussed where presumably, x2 represents the source class of an image and x1 the target class. In this specific case (which is the setting of the paper) we have P( x1 | do(x2) ) = P( x1 | x2 ), so there is no difference between the causal and observational distributions. Then why use causal language in the first place? Adding further to the confusion, the authors write \"The causal effect of an intervention can, therefore, be written as p(x(1) | do(x(2) = x(2)))\" -- why \"therefore\"? This is simply the notation that is used for causal effects, which would be valid in any model structure. Given these inaccuracies and omissions, I am currently not convinced that there is actually a connection between Algorithm 1 and counterfactuals in SCM. It rather appears that the authors are simply labelling their method as \"counterfactual predictions\" without really establishing carefully and in detail how these two concepts relate. To build their case for this connection between SCM and diffusion models, the authors also cite various papers e.g. by Mooij and Blom, but the specific connections made in those papers are very different from what is being done here and I don't see why this would be immediately applicable. I feel it would be important to clarify this, or alternatively revert to the much clearer concept of \"counterfactual explanation\" as used e.g. in Schut et al, which doesn't attempt (or need) to make the connection to SCMs.\n\nSignificance: Although it is reassuring that the method seems to slightly outperform the baselines, the theoretical foundations do not give me much confidence that we should expect this to be the case, given that the baseline methods are directly designed to solve the desired problem of counterfactual explanation. 
I can't therefore judge very well at this point how useful or influential this method will turn out to be. \n\nOther comments\n============\n\nIn the empirical evaluation, the results you show for Schut et al. seem to be worse than what they show in their paper, why? \n\nIt is unclear why you are mentioning conditional ignorability in this context. \n\n“All endogenous variables in the training set are annotated but labels at inference time are not required” – could you clarify what you mean with the second part of that sentence?\n\np6 “...following Dhariwal and Nichol (2021).” This is a bit vague; what aspects of their work are you following? What did they do, and how is your work the same or different?\n\nResults could use some more context. For example:\na. Could you explain briefly how CLD is different from IM1 and IM2?\nb. Table 1 page 9: what are the subscripts?\nc. What is the l1 distance (p10)?\nd. Small note: abbreviation VAE is used in figure 3 before it is \nintroduced on page 11.\ne. P11: what is E_phi?\nf. Could you put the results into context? E.g. the results in Figure 3 are better than other methods, but what currently are still the weaknesses that future versions should improve on?\ng. Related: in Figure 1, the method is applied to ImageNet – but this is not described in the results at all. I get that previous methods could not handle this input so that no direct comparison is possible, but again, I would have liked to see a discussion of how well the method performs. For example, while do(cliff) seems to have worked reasonably well, the same cannot be said for the espresso maker and the waffle \niron.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to reviewer 7tou\n\nWe really appreciate your comments and attentiveness to details while reviewing. Your comments will be incorporated in the final version. \n\n### Main comments\n\n> appears more similar to what has previously been described as \"counterfactual explanation\" (CE)\n\nWe want to re-interpret an anti-causal classifier as part of a causal mechanism for generation. It goes beyond just giving explanations towards estimating the result of interventions with the gradients of the marginal and conditional distributions.\n\n> It appears that Proposition 1 is where the authors are trying to make this connection. However I don't follow their reasoning at that critical point.\n\nWe agree. The transition can be further developed, and we will make the changes in the paper. See global comment for a complete explanation.\n\n> How is it obvious that computing a counterfactual probability is equivalent to solving this reverse diffusion process?\n\nReversing the diffusion process while applying an intervention with the gradients of the classifier results in a counterfactual (if you begin with the exogenous noise of a factual observation instead of noise sampled from a Gaussian prior). \n\n> If this works for any structural causal model, as you seem to claim, then can you perhaps give a \"toy problem\".\n\nThe theory of Diff-SCM is generic enough to work for any structural causal problem using energy-based models for causal estimation. However, the technical application requires additional techniques. Please refer to global comments for an explanation of what is required.
We appreciate that at present we cannot scale up and we will highlight this limitation in the paper alongside suggestions on how the community can overcome it. We believe that the theory of Diff-SCM is generic enough to inspire further applications and developments in using energy-based models for causal estimation.\n\n### Other comments\n\n> results you show for Schut et al. seem to be worse than what they show in their paper\n\nPlease see the answer in footnote 4, page 10, where we explain the origins of this discrepancy. \n\n> conditional ignorability\n\nIndeed, we want the intervention to be identifiable before causal estimation. Conditional ignorability is not required. We will withdraw this requirement from the paper. \n\n> \"labels at inference time are not required” – could you clarify?\n\nPrevious works required the use of the factual class labels at inference time for recovering exogenous noise. This phrase can be withdrawn as we can see it led to confusion. It was only included to highlight differences with other works. \n\n> “p6 “...following Dhariwal and Nichol (2021).” This is a bit vague”\n\nWe answer this question in the global comment. We will make the differences and our contributions more explicit in the paper. \n\n> Could you explain briefly how CLD is different from IM1 and IM2?\n\nSection 5.2 para. 2 “Note that the metrics IM1 and IM2 from Sec. 5.1 do not take minimality into account.” We refer the reviewer to section 5.2 of the paper for a complete explanation of why minimality is needed. To summarize, we wish that the estimated counterfactual maintains characteristics that are not causally related to the intervention. In many cases, that cannot be measured directly. Hence, we propose the metric CLD.\n\n> Table 1 page 9: what are the subscripts?\n\nStandard deviation over the test set.\n\n> What is the l1 distance (p10)?\n\nMean absolute error.\n\n> P11: what is E_phi?\n\nThe encoder of the VAE.\n\n> Could you put the results into context?\n\nWe will add a more extensive discussion of the results and further highlight limitations such as the risk of a biased classifier. \n\n> Related: in Figure 1, the method is applied to ImageNet – but this is not described in the results at all. I would have liked to see a discussion of how well the method performs.\n>... and ...\n> An alternative explanation of these results, which I would like to see the authors discuss, would be that step (1) is actually not successful in the sense that it does not achieve the stated goal of \"recovering the exogenous noise\".\n\nOur understanding is that the model recovers the exogenous noise properly. However, there is a trade-off between belonging to the original class vs. the target class. There is a point in the intersection of the two distributions where the generated image belongs to neither. In these cases, the diffusion model is failing (not expressive enough) to force the output to belong to the data distribution. \n\n Global/general Response:\n```\n\nWe thank the reviewers for their time and providing useful comments. \n\n### Main concerns \n\nWe want to quickly address two main points, that set apart our work.\n\n> Clarify the novelty when compared to Dhariwal and Nichol, 2021\n\nThe work of Dhariwal and Nichol, 2021 introduces DDIM (Song et al, 2021) + classifier for conditional generation. Our work devises a causal formulation using diffusion. The main, but key difference, is that we add the **abduction of exogenous noise** (first part of Algorithm 1).
Without this abduction, we cannot ensure that the resulting image will match other aspects of the original image whilst altering only the intended aspect (ie. Where we want to intervene). In causality speak, with abduction, we can sample from a counterfactual distribution instead of the interventional distribution. The abduction is done by reversing the time direction on Equation 12 in Song et. al. 2021 \n\n> Clarify the transition between Diff-SCM (a general framework) towards 2-variable counterfactual estimation\n\nIndeed, Section 3 flows from a general theoretical framework towards an algorithm applied to a simpler 2-variable model. In addition, experiments are restricted to the latter. Our primary goal with the general framework is to inspire the causal ML community to consider diffusion models (using the gradients of conditional and marginal distributions) for causal modelling. Section 3.4 gives the first steps towards this direction with applications on a simpler causal model. \n\nApplying Diff-SCM to more complex causal models would require the use of additional techniques that are beyond the scope of this paper. For instance, consider the SCM depicted in Fig. 2, a classifier naively trained to predict $x^{(2)}$ (class) from $x^{(1)}$ (image) would be biased towards the confounder $x^{(3)}$. Therefore, the gradient of the classifier w.r.t the image would also be biased. This would make the intervention do($x^{(2)}$) not correct. In this case, the graph mutilation (removing edges from parents of node intervened on) would not happen because the gradients from the classifier would pass information about $x^{(3)}$. As detailed in Pearl et al. 2016 section 4.2.4, graph mutilation is essential for counterfactual estimation. We acknowledge this limitation of our method and leave this as future work. We will ensure to include this in the paper in section 3. \n\nWe agree with the reviewers that the transition is abrupt in Section 3. We will make the appropriate modifications after Proposition 1. Reviewer Zhqw accurately described what we propose in the last bullet point. That is, we aim to train a diffusion model for each node in the graph (over which one wishes to estimate a counterfactual). Then, we train classifiers in the anti-causal direction (given the path between intervention node and desired counterfactual node). We compute exogenous noise with the diffusion models. Finally, we use the gradients of the classifiers and diffusion models to propagate the intervention in the causal direction over the nodes. \n\n### Conclusion\n\nWe propose a theoretical framework for causal estimation with diffusion models. While the experimental results are shown in a subset of possible applications, we believe that Diff-SCM can inspire future work combining energy-based models into causal estimation. In addition, we augment the state-of-the-art diffusion models for counterfactual estimation (which had not been done before). \n\n### References\n\n - Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising Diffusion Probabilistic Models. In Advances on Neural Information Processing Systems, 2020.\n - Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising Diffusion Implicit Models. In Proc. of International Conference on Learning Representations, 2021\n - Prafulla Dhariwal and Alex Nichol. Diffusion Models Beat GANs on Image Synthesis. In Advances in Neural Information Processing Systems, 2021.\n - Pearl, Judea., Madelyn. Glymour, and Nicholas P. Jewell. Causal Inference in Statistics : A Primer. 1st ed. 
Chicester: Wiley, 2016. Print. \n\n```\n", "role": "user" }, { "content": "I'm grateful for the authors' response. The response covers nearly all of the issues I raised. I still support my favorable recommendation for this manuscript.", "role": "assistant" } ]
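Editor's note: the two-step procedure debated in the record above (abduction of the exogenous noise by running the diffusion forward, then classifier-guided reverse diffusion towards the target class) can be made concrete with a short sketch. This is not the authors' code: the noise predictor `eps_model`, the anti-causal `classifier`, the linear beta schedule, and the use of the published DDIM classifier-guidance update are all stand-in assumptions, included only to illustrate the mechanics described in the rebuttal.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)          # assumed linear schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)  # \bar{alpha}_t

def guided_eps(eps_model, classifier, x, t, y_target, scale):
    """Noise estimate with classifier guidance (Dhariwal & Nichol-style update)."""
    eps = eps_model(x, t)
    if y_target is None:
        return eps
    with torch.enable_grad():
        x_in = x.detach().requires_grad_(True)
        log_p = classifier(x_in, t).log_softmax(dim=-1)[:, y_target].sum()
        grad = torch.autograd.grad(log_p, x_in)[0]
    return eps - scale * torch.sqrt(1.0 - alpha_bar[t]) * grad

def ddim_step(x, eps, t, t_next):
    """Deterministic DDIM update from time t to t_next (works in either direction)."""
    x0 = (x - torch.sqrt(1.0 - alpha_bar[t]) * eps) / torch.sqrt(alpha_bar[t])
    return torch.sqrt(alpha_bar[t_next]) * x0 + torch.sqrt(1.0 - alpha_bar[t_next]) * eps

@torch.no_grad()
def counterfactual(eps_model, classifier, x_factual, y_target, scale=1.0):
    # Step 1 (abduction): encode the factual image into "exogenous noise" u
    # by running the deterministic DDIM update forward in time.
    x = x_factual
    for t in range(T - 1):
        x = ddim_step(x, eps_model(x, t), t, t + 1)
    u = x
    # Step 2 (action + prediction): decode u with classifier guidance towards the
    # target class, i.e. sample from a counterfactual rather than an interventional
    # distribution (the latter would instead start from u drawn from N(0, I)).
    x = u
    for t in reversed(range(1, T)):
        x = ddim_step(x, guided_eps(eps_model, classifier, x, t, y_target, scale), t, t - 1)
    return x
```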
LAAZLZIMN-o
CLeaR 2022 Conference
Reviewer_Zhqw
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_Zhqw, and the following are the paper you need to review:\n```<<LAAZLZIMN-o>>```", "role": "user" }, { "content": "Review Content: Summary: Using recent advances in latent variable modeling for better counterfactual image generation.\nMain review: The paper modifies the conditional sampling DDIM procedure of [Dhariwal and Nichol, 2021] to generate counterfactuals Pr(x=x_cf | do(y), x=x_f). The key addition over the DDIM+conditional sampling procedure is to extract a suitable latent encoding of x_f which is then shared with the generation of x_cf. Diffusion models run in the forward direction yield such an encoding, and conditional sampling (diffusion models run in reverse direction, along with a classifier to bias towards interventional distribution do(y)) yields impressive counterfactual image generation results on MNIST and ImageNet. \n\nThe comments below focuses only on the weaknesses of the paper.\n\n- Eqn1: Please clarify that there are several simplifications to reach the objective form in Eqn1. For instance, there is a weighting term that Ho et al 2020 empirically find to not be significant; there is a reparametrization trick in going from ||x_t - e_theta(...)|| to ||e_theta(...) - epsilon||, etc.\n\n- How are e_theta and p_phi fitted to data? Does the training data need additional annotations (e.g., a suitable prior on the latent u^(k) to ensure separation of factual and counterfactual latent encodings)?\n\n- Baselines: Dhariwal and Nichol, 2021 should be included as a baseline. The only change from Algorithm 1 is that they would set u^(k) from N(0,I) rather than the abduction procedure from x_F. \n\n- Additional experiment: Can you please clarify why Schut et al and van Looveren et al are sufficient baselines? At first glance, the approaches to controllable image generation, which learn disentangled latent spaces, seem to be immediately relevant. Rather than building an anti-causal predictor for interventions, they would map interventions to operations on the latent space. 
Why not compare to controllable generation via latent variable models that allow us to intervene on the latent space?\n\n- Additional experiment: All the experiments are focusing on an image generation example (and a restricted 2 variable example). Illustrating the general approach of Algorithm 1 empirically on even a toy SCM will be very informative.\n\n- Clarity: Section 3.2 begins with a general framing w.r.t SCMs but abruptly goes to the 2 variable case. It may be useful to include a discussion before/after Section 3.4 that discusses the requirements for running Diff-SCM. For instance, what should the data contain. What models need to be learned. How to set T, s, {beta_t}, etc. My understanding is as follows: Given an SCM, Diff-SCM will learn {e_theta(x)} for every node in the SCM x (if we pre-specify that we only care about counterfactual outcomes of a subset of nodes, then e_theta models will only be learned for those nodes). For every e_theta(x_k), Diff-SCM will learn an anti-causal predictor P(x_j | x_k) for every other node in the SCM (if we pre-specify that we only care about interventions on some subset of nodes, then only those anti-causal predictors are learned). This approach crucially needs to know the SCM structure (i.e., corresponding causal graph). There may also be some situations where this approach is warranted (even when we know the SCM) rather than modeling the SCM mechanisms f^(k) directly [e.g. pa(x_k) are unobserved in the data but x_j \\in ancestor(x_k) is]. However, the paper does not discuss this, nor the computational and statistical issues that may be present in learning the many encoders and classifiers.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to reviewer Zhqw\n\n\nWe thank the reviewer for your thoughtful and knowledgeable comments. Your comments will be incorporated in the final version. \n\n> Eqn1: Please clarify that there are several simplifications to reach the objective form in Eqn1\n> How are e_theta and p_phi fitted to data?\n\nWe will make the simplifications more explicit. However, the theory for training the diffusion models is given in Appendix A and further implementation details are in Appendix D.\n\n> Does the training data need additional annotations (e.g., a suitable prior on the latent u^(k) to ensure separation of factual and counterfactual latent encodings)\n\nWe need class annotations for training the classifier. More complex SCMs would also require annotation of all variables in the training set too. We use a gaussian prior for the latent spaces, following the diffusion models literature. However, there is no difference between the exogenous noise from the factual and counterfactual samples, i.e. $u^{(k)}$ is the same for $x^{(k)}_{\\text{CF}}$ and $x^{(k)}_{\\text{F}}$. One might reconstruct $x^{(k)}_{\\text{F}}$ from $u^{(k)}$ or estimate the counterfactuals $x^{(k)}_{\\text{CF}}$ by using the gradients of the classifier as an intervention. \n\n> Can you please clarify why Schut et al and van Looveren et al are sufficient baselines? \n> Why not compare to controllable generation via latent variable models that allow us to intervene on the latent space?\n\nThe causal variables are not necessarily disentangled, as you suggested in “e.g. pa(x_k) are unobserved in the data but x_j \\in ancestor(x_k)”. 
Classifiers learn to become invariant to nuances but related variables can be still captured by the anti-causal mechanism. Following this idea, we decided to constrain the problems to a setting where the causal variables are learned in a discriminative fashion. We will expand on this point in the paper. \n\n> Clarity: Section 3.2 begins with a general framing w.r.t SCMs but abruptly goes to the 2 variable case\n\nWe agree. Thank you for the useful comment. Your description captures exactly what we wanted to communicate. We will improve the transition to the 2 variable case as suggested before section 3.4. We further develop this point in the global review. \n\n Global/general Response:\n```\n\nWe thank the reviewers for their time and providing useful comments. \n\n### Main concerns \n\nWe want to quickly address two main points, that set apart our work.\n\n> Clarify the novelty when compared to Dhariwal and Nichol, 2021\n\nThe work of Dhariwal and Nichol, 2021 introduces DDIM (Song et al, 2021) + classifier for conditional generation. Our work devises a causal formulation using diffusion. The main, but key difference, is that we add the **abduction of exogenous noise** (first part of Algorithm 1). Without this abduction, we cannot ensure that the resulting image will match other aspects of the original image whilst altering only the intended aspect (ie. Where we want to intervene). In causality speak, with abduction, we can sample from a counterfactual distribution instead of the interventional distribution. The abduction is done by reversing the time direction on Equation 12 in Song et. al. 2021 \n\n> Clarify the transition between Diff-SCM (a general framework) towards 2-variable counterfactual estimation\n\nIndeed, Section 3 flows from a general theoretical framework towards an algorithm applied to a simpler 2-variable model. In addition, experiments are restricted to the latter. Our primary goal with the general framework is to inspire the causal ML community to consider diffusion models (using the gradients of conditional and marginal distributions) for causal modelling. Section 3.4 gives the first steps towards this direction with applications on a simpler causal model. \n\nApplying Diff-SCM to more complex causal models would require the use of additional techniques that are beyond the scope of this paper. For instance, consider the SCM depicted in Fig. 2, a classifier naively trained to predict $x^{(2)}$ (class) from $x^{(1)}$ (image) would be biased towards the confounder $x^{(3)}$. Therefore, the gradient of the classifier w.r.t the image would also be biased. This would make the intervention do($x^{(2)}$) not correct. In this case, the graph mutilation (removing edges from parents of node intervened on) would not happen because the gradients from the classifier would pass information about $x^{(3)}$. As detailed in Pearl et al. 2016 section 4.2.4, graph mutilation is essential for counterfactual estimation. We acknowledge this limitation of our method and leave this as future work. We will ensure to include this in the paper in section 3. \n\nWe agree with the reviewers that the transition is abrupt in Section 3. We will make the appropriate modifications after Proposition 1. Reviewer Zhqw accurately described what we propose in the last bullet point. That is, we aim to train a diffusion model for each node in the graph (over which one wishes to estimate a counterfactual). 
Then, we train classifiers in the anti-causal direction (given the path between intervention node and desired counterfactual node). We compute exogenous noise with the diffusion models. Finally, we use the gradients of the classifiers and diffusion models to propagate the intervention in the causal direction over the nodes. \n\n### Conclusion\n\nWe propose a theoretical framework for causal estimation with diffusion models. While the experimental results are shown in a subset of possible applications, we believe that Diff-SCM can inspire future work combining energy-based models into causal estimation. In addition, we augment the state-of-the-art diffusion models for counterfactual estimation (which had not been done before). \n\n### References\n\n - Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising Diffusion Probabilistic Models. In Advances on Neural Information Processing Systems, 2020.\n - Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising Diffusion Implicit Models. In Proc. of International Conference on Learning Representations, 2021\n - Prafulla Dhariwal and Alex Nichol. Diffusion Models Beat GANs on Image Synthesis. In Advances in Neural Information Processing Systems, 2021.\n - Pearl, Judea., Madelyn. Glymour, and Nicholas P. Jewell. Causal Inference in Statistics : A Primer. 1st ed. Chicester: Wiley, 2016. Print. \n\n```\n", "role": "user" }, { "content": "Thank you for the authors' feedback. The response covers nearly all of the issues I raised. I continue to recommend this work positively.", "role": "assistant" } ]
LAAZLZIMN-o
CLeaR 2022 Conference
Reviewer_W982
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_W982, and the following are the paper you need to review:\n```<<LAAZLZIMN-o>>```", "role": "user" }, { "content": "Review Content: Summary: In the paper « Diffusion Causal Models for Counterfactual Estimation », the authors propose a generative diffusion model making it possible to model the effects of counterfactual interventions on a target variable. The generative diffusion model is seen from the point of view of a functional causal model where each variable can be expressed as a function of its parents in the causal graph and a random noise. In order to model the intervention, an anti-causal model is learned. Once learned the gradient of this anti-causal model is integrated in the generation model in order to take into account the effects of the intervention. The influence of the intervention can be regulated with a hyperparameter s. In the experimental section, the authors study a causal model with two variables X1 and X2 and evaluate the response of the model when the value of the hyperparameter s varies. A distance called CLD is introduced to measure a trade-off between the sample reproduction of the factual class and the counterfactual class. This distance is measured as a Kullback–Leibler divergence in the latent space of an autoencoder. For the dataset MNIST, the authors show that a specific value of 0.7 can achieve the best tradeoff between the reproduction of the factual class while taking into account the counterfactual intervention.\nMain review: 1) The paper is well written and fairly clear.\n\n2) The generative diffusion model used in the article seems to be really powerful for modeling distributions. However, in this article it is not clear to me how this kind of conditional model could actually be used for real causal applications.\n\n3) The review of the literature seems quite exhaustive with many interesting and recent references on the subject. However, I am concerned about the originality of the proposed approach as the model appears to be almost exactly the same as the model described in (Dhariwal and Nichol 2021), simply reinterpreting the conditional diffusion generative model with the language of causality. 
Is there anything different between the algorithm 1 of this paper and the algorithm proposed in (Dhariwal and Nichol 2021) in section 4, except that y is replaced by the parents of x in the causal graph?\n\n4) I understand however that the authors are proposing a more general model than (Dhariwal and Nichol 2021) because it can be extended to a complex causal model with more than two variables. However in the experimental section of this paper, the authors consider only a two-variable causal model x2 → x1. So, is there something really new? For example, the images shown in Figure 1 could also have been generated with the conditional diffusion generative model in Dhariwal and Nichol 2021, by setting y = cliff, y = espresso maker, or is there something that I don’t understand?\n\n5) Introducing the counterfactual latent divergence (CLD) distance seems interesting. The authors show that the factor s has an influence on the generated samples and allows one to control the distance with the factual and counterfactual classes.\n\nIn general, although the paper is well written and well documented, I am not convinced that it really adds anything new from a theoretical and experimental point of view.\n\nReference:\nDhariwal, P., & Nichol, A. (2021). Diffusion models beat GANs on image synthesis. arXiv preprint arXiv:2105.05233.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to reviewer W982\nWe thank the reviewer for the thoughtful and knowledgeable comments. We appreciate your positive comments towards the proposed CLD metric.\n\n> Is there anything different between the algorithm 1 of this paper and\nthe algorithm proposed in (Dhariwal and Nichol 2021) in section 4,\nexcept that y is replaced by the parents of x in the causal graph?\n\nWe highlight that we perform the **abduction of exogenous noise** using the forward diffusion, as opposed to Dhariwal, P., & Nichol, A. (2021). This fundamentally changes the concept from their paper. Instead of sampling from an interventional distribution, we sample from the counterfactual distribution. This requires a latent variable (exogenous noise) $u^{(k)}$ that retains some information about $x^{(k)}$. \n\n> For example, the images shown in Figure 1 could also have been generated with the conditional diffusion generative model in Dhariwal and Nichol 2021, by setting y = cliff, y = espresso maker, or is there something that I don’t understand?\n\nSetting y = cliff in their model would generate an arbitrary cliff. In our model, it is conditioned on the factual image; therefore, the cliff should aim to resemble the original carbonara image. \n\n> However in the experimental section of this paper, the authors consider only a two-variable causal model x2 → x1. So, is there something really new?\n\nBeyond the contribution detailed above, we believe that the theoretical framework of Diff-SCM can provide value to the causal community. We refer the reviewer to the global comment with a broader perspective on the matter.\n\n Global/general Response:\n```\n\nWe thank the reviewers for their time and providing useful comments. \n\n### Main concerns \n\nWe want to quickly address two main points, that set apart our work.\n\n> Clarify the novelty when compared to Dhariwal and Nichol, 2021\n\nThe work of Dhariwal and Nichol, 2021 introduces DDIM (Song et al, 2021) + classifier for conditional generation.
Our work devises a causal formulation using diffusion. The main, but key difference, is that we add the **abduction of exogenous noise** (first part of Algorithm 1). Without this abduction, we cannot ensure that the resulting image will match other aspects of the original image whilst altering only the intended aspect (ie. Where we want to intervene). In causality speak, with abduction, we can sample from a counterfactual distribution instead of the interventional distribution. The abduction is done by reversing the time direction on Equation 12 in Song et. al. 2021 \n\n> Clarify the transition between Diff-SCM (a general framework) towards 2-variable counterfactual estimation\n\nIndeed, Section 3 flows from a general theoretical framework towards an algorithm applied to a simpler 2-variable model. In addition, experiments are restricted to the latter. Our primary goal with the general framework is to inspire the causal ML community to consider diffusion models (using the gradients of conditional and marginal distributions) for causal modelling. Section 3.4 gives the first steps towards this direction with applications on a simpler causal model. \n\nApplying Diff-SCM to more complex causal models would require the use of additional techniques that are beyond the scope of this paper. For instance, consider the SCM depicted in Fig. 2, a classifier naively trained to predict $x^{(2)}$ (class) from $x^{(1)}$ (image) would be biased towards the confounder $x^{(3)}$. Therefore, the gradient of the classifier w.r.t the image would also be biased. This would make the intervention do($x^{(2)}$) not correct. In this case, the graph mutilation (removing edges from parents of node intervened on) would not happen because the gradients from the classifier would pass information about $x^{(3)}$. As detailed in Pearl et al. 2016 section 4.2.4, graph mutilation is essential for counterfactual estimation. We acknowledge this limitation of our method and leave this as future work. We will ensure to include this in the paper in section 3. \n\nWe agree with the reviewers that the transition is abrupt in Section 3. We will make the appropriate modifications after Proposition 1. Reviewer Zhqw accurately described what we propose in the last bullet point. That is, we aim to train a diffusion model for each node in the graph (over which one wishes to estimate a counterfactual). Then, we train classifiers in the anti-causal direction (given the path between intervention node and desired counterfactual node). We compute exogenous noise with the diffusion models. Finally, we use the gradients of the classifiers and diffusion models to propagate the intervention in the causal direction over the nodes. \n\n### Conclusion\n\nWe propose a theoretical framework for causal estimation with diffusion models. While the experimental results are shown in a subset of possible applications, we believe that Diff-SCM can inspire future work combining energy-based models into causal estimation. In addition, we augment the state-of-the-art diffusion models for counterfactual estimation (which had not been done before). \n\n### References\n\n - Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising Diffusion Probabilistic Models. In Advances on Neural Information Processing Systems, 2020.\n - Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising Diffusion Implicit Models. In Proc. of International Conference on Learning Representations, 2021\n - Prafulla Dhariwal and Alex Nichol. Diffusion Models Beat GANs on Image Synthesis. 
In Advances in Neural Information Processing Systems, 2021.\n - Pearl, Judea., Madelyn. Glymour, and Nicholas P. Jewell. Causal Inference in Statistics : A Primer. 1st ed. Chicester: Wiley, 2016. Print. \n\n```\n", "role": "user" }, { "content": "Title: Response to authors\nThank you for these explanations. In my first review, I did not understand that the sample comes from a latent variable that retains some information about $x^k$. \nNow I see a difference with Dhariwal and Nichol 2021 in the paper. \n\nI change my rating from 5 to 7.", "role": "assistant" } ]
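Editor's note: for readers following the interventional-versus-counterfactual point argued in this record, the standard three-step procedure the rebuttal alludes to (abduction, action, prediction) is, in the notation of Pearl, Glymour and Jewell (2016) rather than the paper's:

```latex
% 1. Abduction: update the exogenous distribution with the factual evidence e, P(U) -> P(U | e).
% 2. Action: replace the mechanism of X by the constant x (graph mutilation), giving M_x.
% 3. Prediction: evaluate the query in M_x under the updated noise distribution:
\[
  P\big(Y_{X=x} = y \mid e\big) \;=\; \sum_{u} P\big(Y = y \mid \mathrm{do}(X=x),\, u\big)\, P(u \mid e),
\]
% whereas the interventional distribution keeps the prior over U:
\[
  P\big(Y = y \mid \mathrm{do}(X=x)\big) \;=\; \sum_{u} P\big(Y = y \mid \mathrm{do}(X=x),\, u\big)\, P(u).
\]
```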
NpZhf5dT6zW
CLeaR 2022 Conference
Reviewer_URGT
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_URGT, and the following are the paper you need to review:\n```<<NpZhf5dT6zW>>```", "role": "user" }, { "content": "Review Content: Summary: This paper design an interactive test called i-bet. The paper deals with classic randomized experiments The idea is to test by betting from a game-theory perspective, which improves the power under heterogeneous treatment effects. \nMain review: Strength:\n\n- It follows a clear research direction from Wilcoxon rank test to CovAdj Wilcoxon test and to the proposed i-bet test.\n\n- The idea to introduce reward and use a game-theoretic way to design the test is interesting. The intuition is explained clearly by \"If an analyst can guess most treatment assignments correctly, then the treatment must have an effect\".\n\n- The cumulative products $M_t$ is well designed with good martingale properties under the null.\n\nRoom for improvements:\n\n- Current simulation setting is relatively simple. The covariates have a dimension of 3 and follow simple distribution. It is better to evaluate in more challenging case.\n\n- In the simulation, it is better to include more details such as how to compute the power in Figure 2 and 3 for the proposed method. \n\n- Why in Fig. 2 linear-CATE-test is better than i-bet but in Fig. 3 it is much worse? How to choose which one to use?\n\n- One writing suggestion is to have a minimal introduction to the full algorithm (e.g., alg. 2) and with a demonstrative example first. Current methodology is not introduced until page 6, with many discussions and related works discussed before. \n\nIn general, this paper presents an interesting idea and I recommend for acceptance. \n\nPost-rebuttal: thanks for the authors' feedback and I would maintain my recommendation.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer URGT\nWe thank the reviewer for the comments. Below are our responses. \n\n> Current simulation setting is a relatively too simple. The covariates have a dimension of 3 and follow simple distribution. 
It is better to evaluate in more challenging case.\n\nThe simulation was intended for illustration only, since we felt it was easier to highlight certain behaviors when it is clear what the test is doing. In a high-dimensional setting, with odd noise distributions, and a nonlinear signal, it is harder for us to reason about which test is failing/succeeding for what reason. In other words, the restriction is not in the applicability of the method, but in the human ability to comprehend and explain what the methods are doing. In Appendix D, we included more simulation settings on non-Gaussian heavy-tailed Cauchy noise and nonlinear signals. \n\n> In the simulation, it is better to include more details such as how to compute the power in Figure 2 and 3 for the proposed method.\n\nWe have added the details of power computation under Figure 2. The power is averaged over 500 repetitions, and estimated by the proportion of the repeated experiments where the null gets rejected.\n\n> Why in Fig. 2 linear-CATE-test is better than i-bet but in Fig. 3 it is much worse? How to choose which one to use?\n\nIn Figure 2, the linear-CATE-test has higher power because the simulated data for Figure 2 is generated such that the outcome is a linear function of the covariates (with interaction term) by equations (12) and (13), which is consistent with the assumed model in the linear-CATE-test. However, the linear-CATE-test cannot behave as well in Figure 3 because the simulated data using equation (14) does not follow a linear relationship. We recommend using the linear-CATE-test if the practitioners strongly believe the outcome indeed is a linear function of the covariates; otherwise, we would recommend our proposed i-bet test so that we can explore and choose the appropriate model as the test proceeds, for example starting with a linear model, and then re-evaluating that choice (based on a few unmasked treatment assignments) if the first few bets are as good as random chance, instead of resulting in growing capital.\n\nTo elaborate on the latter point briefly, we view the simulations with i-bet as a thought experiment: the reader must imagine that we perhaps chose a poor model for all methods at the start. Using a poor (uninformative or barely better than chance) model, our bet $w_j$ would possibly have a random sign for early subjects, and our wealth may fluctuate up and down, rather than increase reasonably steadily. The multi-step i-bet test can have higher power than other single-shot tests because, in the midst of this testing, the analyst can observe the poor start, explore and evaluate various models using the complete data for completed bets and the masked-assignment data for every other point, and try to find a better model. We mentioned the above note in \"Illustrations of adaptive modeling\" of Section 3; and to highlight the above idea, we have moved this discussion to the beginning of Section 3.\n\n> One writing suggestion is to have a minimal introduction to the full algorithm (e.g., alg. 2) and with a demonstrative example first. Current methodology is not introduced until page 6, proceed with many discussions and related works.\n\nIn the original paper, we attempted to introduce the full algorithm as the reviewer suggested: see equation (5) for the test statistic, the follow-up description that the algorithm should stop when the statistic exceeds $1/\\alpha$, and equation (6) for some intuition on the error control. 
Nevertheless, we can add a forward reference to the full algorithm in the above-mentioned paragraph, and also add an illustrative example.\n", "role": "user" }, { "content": "Thank you for the authors' detailed response. It resolves the majority of my concerns.", "role": "assistant" } ]
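Editor's note: to make the "wealth grows until it crosses $1/\alpha$" mechanics and the power computation discussed above concrete, here is an illustrative test-by-betting loop. It is not the paper's Eq. (5): it assumes Bernoulli(1/2) randomization, and `make_bet` stands in for whatever analyst- or model-chosen predictable bet is used. Under the sharp null, the wealth below is a nonnegative martingale, so stopping at $1/\alpha$ controls the type-I error.

```python
import numpy as np

def betting_test(A, make_bet, alpha=0.05):
    """A: 0/1 assignments in the analyst-chosen order; make_bet(t) returns a bet in
    [-1, 1] and may use only information revealed before A[t] is unmasked."""
    wealth = 1.0
    for t, a in enumerate(A):
        w = float(np.clip(make_bet(t), -1.0, 1.0))
        wealth *= 1.0 + w * (2 * a - 1)   # grows when the bet's sign matches the assignment
        if wealth >= 1.0 / alpha:
            return True, t + 1            # reject the null at this stopping time
    return False, len(A)

def estimate_power(simulate_experiment, n_reps=500, alpha=0.05):
    """Power as described in the rebuttal: fraction of repetitions that reject."""
    rejections = sum(betting_test(*simulate_experiment(), alpha=alpha)[0]
                     for _ in range(n_reps))
    return rejections / n_reps
```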
NpZhf5dT6zW
CLeaR 2022 Conference
Reviewer_zjzR
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_zjzR, and the following are the paper you need to review:\n```<<NpZhf5dT6zW>>```", "role": "user" }, { "content": "Review Content: Summary: The paper proposes a non-parametric tests for testing the global null hypothesis in a randomized experiment. It avoids the permutations employed in traditional nonparametric tests by interactive testing, that was recently introduced in the literature.\nMain review: The key idea in this paper is summarized in Remark 2, that whenever the test statistic is a nonnegative martingale (by design), permutations can be avoided. Based on this observation and Ville's (Doob's) maximal inequality, the authors construct a process M_t and show that it is a nonnegative martingale under the null. \n\nI am not very familiar with the interactive testing literature, but at least the application of this idea for testing the global null hypothesis appears to be new.\n\n1. In equation (1), it is unclear what the notation Y_i | A_i=1, X_i refers to. According to the context, this should refer to the potential outcomes, but the notation seems to suggest observed outcomes.\n\n2. The authors describe a specific way of choosing \\pi_t in Section 2.2. I am wondering if they could comment on how would different choices of \\pi_t would influence the power of the proposed test, and how should a practitioner make such choices in a real setting.\n\n3. It is a pity that they did not apply this test to a real data set. Such data sets seem to be readily available, and a data application would help illustrate the method better, and help clarify such practice choices needed to made by practitioners; e.g. see my comment 2 above.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\n", "role": "assistant" }, { "content": "Title: Response to Reviewer zjzR\nWe thank the reviewer for the comments. Below are our responses.\n\n> In equation (1), it is unclear what the notation $Y_i | A_i=1$, $X_i$ refers to. 
According to the context, this should refer to the potential outcomes, but the notation seems to suggest observed outcomes.\n\nTo answer the reviewer's question, we provide an alternative, equivalent description in the language of observed and potential outcomes. Let's say that each subject $i$ has potential control outcome $Y_i^C$, potential treated outcome $Y_i^T$, and the treatment indicator~$A_{i}$ for $i \\in [n] \\equiv \\{1,2,\\dots,n\\}$. The observed outcome is $Y_i = Y_i^C(1 - A_i) + Y_i^T A_i$ under the standard causal assumption of consistency ($Y_i = Y_i^T$ when $A_i = 1$ and $Y_i = Y_i^C$ when $A_i = 0$). In this setting, $Y_i | A_i=1$ is precisely $Y_i^T$. We have added the above description in Section 1.1. \n\n> The authors describe a specific way of choosing $\\pi_t$ in Section 2.2. I am wondering if they could comment on how would different choices of $\\pi_t$ would influence the power of the proposed test, and how should a practitioner make such choices in a real setting.\n\nThe choice of the ordering $\\pi_t$ affects the test power, when taken together with the choice of weight $w_t$. So let us first note that a desirable weight $w_t$ should ideally have the same sign as $A_{\\pi_t}$; this would allow $M_t$ to increase and reach the rejection threshold $1/\\alpha$ sooner. Therefore, we recommend that practitioners put upfront the subjects for which they (or the algorithm acting on their behalf) are most confident about the treatment assignment. \n\nWe have added the above recommendation as a remark in Section 2.1.\n\n> It is a pity that they did not apply this test to a real data set. Such data sets seem to be readily available, and a data application would help illustrate the method better, and help clarify such practical choices needed to be made by practitioners; e.g. see my comment 2 above.\n\nWe have now run the methods on 4 real-world datasets; in 3 of them, none of the methods rejects (data in [1], [2], and the nudge-intervention experiment in [3]), while in the fourth, all of them reject (data in the voucher-intervention experiment in [3]). Hence we continue to focus more on the simulations where the ground truth is known, in order to compare the methods in detail. The reason we did not add the real-data application earlier is that we cannot compare the power by doing repeated simulations; we can run each test once and get a binary signal about whether the null is rejected or not. Also, since we do not know the ground truth, we would not even be sure if it is a false positive or not. Nevertheless, we will add some real data experiments to the paper, and mention the above caveats.\n\nAs an example, the data in the last experiment is from Hainmueller et al. (2018) [3], which investigates whether application fees act as a barrier for low-income immigrants who want to become US citizens. A relevant part of their study collected $750$ participants who were required to pay the application fee at the time, and assigned them to two groups using complete randomization. The treated group ($n_1 = 210$) received a voucher that pays the fee for the naturalization application, while the control group ($n_0 = 540$) did not receive this voucher. The outcome $Y_i$ is whether or not participant $i$ reported having submitted the US citizenship application. The covariates $X_i$ include age, gender, years holding a green card, marital status, and educational attainment.\n\n\n---\n[1] Daren R Anderson, Samantha Horn, Dean Karlan, Amanda E Kowalski, Jody L Sindelar, and Jonathan Zinman.
Evaluation of combined financial incentives and deposit contract intervention for smoking cessation: A randomized controlled trial. Journal of Smoking Cessation, 2021.\n\n[2] Manish Gehani, Suman Kapur, Sudha D Madhuri, Vara Prasad Pittala, Sree Kala Korvi, Nagamani Kammili, and Shashwat Sharad. Effectiveness of antenatal screening of asymptomatic bacteriuria in reduction of prematurity and low birth weight: Evaluating a point-of-care rapid test in a pragmatic randomized controlled study. EClinicalMedicine, 33:100762, 2021.\n\n[3] Jens Hainmueller, Duncan Lawrence, Justin Gest, Michael Hotard, Rey Koslowski, and David D Laitin. A randomized controlled design reveals barriers to citizenship for low-income immigrants. Proceedings of the National Academy of Sciences, 115(5):939–944, 2018.", "role": "user" }, { "content": "Thanks to the authors for their clarification. The reply answers almost all of my questions. I will continue to recommend accepting this work.", "role": "assistant" } ]
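Editor's note: the maximal inequality invoked in this record (Ville's, a consequence of Doob's), which is what replaces permutation-based calibration, states:

```latex
% For any nonnegative supermartingale (M_t)_{t >= 0} with M_0 = 1 and any alpha in (0, 1],
\[
  \Pr\!\Big( \exists\, t \ge 0 : \; M_t \ge \tfrac{1}{\alpha} \Big) \;\le\; \alpha,
\]
% so the sequential rule "reject as soon as M_t >= 1/alpha" has type-I error at most alpha.
```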
NpZhf5dT6zW
CLeaR 2022 Conference
Reviewer_GSiN
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_GSiN, and the following are the paper you need to review:\n```<<NpZhf5dT6zW>>```", "role": "user" }, { "content": "Review Content: Summary: This paper proposes a novel interactive hypothesis testing framework that possesses desirable statistical properties as well as computational advantages over existing permutation based tests. The theoretical properties are proved and performance is demonstrated on simulated data.\nMain review: Originality: This paper provides a novel interactive betting hypothesis testing framework. Its novelty is justified in my opinion,and I'm not aware of existing work that already solves this problem. \n\nSignificance: The problem of detecting the causal effect of a treatment is one that is important in many fields in the natural sciences, social sciences and engineering. This proposed method gives a game theoretic insight into the problem of hypothesis testing. \n\nTechnical quality: To my best understanding, the claims are theoretically sound and substantiated by theorems put forth in the paper. \n\nClarity: The submission is clearly written as well as a comparison to existing methods is made and the performance demonstrated on simulated data.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\n", "role": "assistant" }, { "content": "Title: Response to Reviewer GSiN\nWe thank the reviewer for the positive comments.", "role": "user" }, { "content": "I appreciate the authors' efforts in addressing my concerns. The response addresses most of my concerns. I will continue to recommend accepting this work.", "role": "assistant" } ]
P0f91v5fTK
CLeaR 2022 Conference
Reviewer_DkQi
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_DkQi, and the following are the paper you need to review:\n```<<P0f91v5fTK>>```", "role": "user" }, { "content": "Review Content: Summary: This paper considers an interactive RL ting where the agent and the human may have different sensory capacities. It then shows agents may learn only sub-optimal policies if they do not take into account human advice. A counterfactual agent is proposed to include the results of actions from humans. A new RL algorithm is also proposed considering that there is a budget constraint over the available amount of human advice.\nMain review: The paper is generally well written and present interesting ideas within the specific MDPUC and DSCM models. My major concern is whether the foundation results, namely, Cor 6 and Thm 7, are indeed meaningful.\n-\tFirst, it is assumed that agent cannot access $U$, but human can. My question is why this is a rational setting? \n-\tSecond, in Thm 7, the condition “the human decision is affected only by observable variables” indicates that the human decisions cannot bring in more information than what the agent observes. In other words, as no more information is introduced, it is intuitive that human decisions are not useful if we consider the asymptotic performance.\n-\tThird, I do not find any condition on the human decision, which is a bit surprising to me. E.g., even if human can access $U$, but his/her decision is a constant (say, always 1), then it does not provide any information. \n\nMinor: $U^{(t1)}, U^{(t2)}$ seems not defined.\n\nIn summary, the paper presents interesting ideas, with the specific RL setting. However, it seems to me that whether the human can help is equivalent to whether there could be more information. In this sense, the contribution of the paper is weakened.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: A few clarifications\nWe thank the reviewer for the thoughtful feedback. We believe that a few misreadings of our work made some of the evaluations overly harsh. We respectively ask the reviewer to reconsider our paper in the light of clarifications provided below.\n\n---\n#### Q1. 
“First, it is assumed that the agent cannot access $U$, but the human can. My question is why this is a rational setting?”\n\nYes, we believe that this setting represents many challenges in real-world applications. For instance, most vehicles with “autonomous driving” features today are indeed “semi-autonomous”, i.e., they require a human operator in the driver's seat. The human driver monitors the road condition and could take over control when the situation becomes complicated. Since the human driver and the robot driver (i.e., the vehicle) have different sensory capabilities, the robot might not necessarily be able to obtain some information accessible to the human, i.e., they are unobserved confounders to the robot. For instance, the human driver could hear a siren and decide to stop the car on the roadside. However, sound information is not recorded in most autonomous driving datasets (e.g., ApolloScape, 2018). The omission of audio information could thus be consequential: in this case, the siren becomes a confounder unobserved to the robot driver. If the robot decides to ignore the human’s action (i.e., stopping the car), it may lead to accidents and even casualties.\n\n---\n#### Q2. “Second, in Thm 7, the condition “the human decision is affected only by observable variables” indicates that the human decisions cannot bring in more information than what the agent observes. In other words, as no more information is introduced, it is intuitive that human decisions are not useful if we consider the asymptotic performance.” ... “However, it seems to me that whether the human can help is equivalent to whether there could be more information. In this sense, the contribution of the paper is weakened.”\n\nWe are glad that the reviewer found our graphical condition in Theorem 7 “intuitive”. While it may appear straightforward, Theorem 7 is the first condition in the literature that determines whether full autonomy could be achieved in automated systems, which we believe is a significant contribution. Also, Theorem 7 only constitutes part of our contributions. We provide in Section 5 a novel method to resolve the trade-off between autonomy and optimality. It finds an effective semi-autonomous decision policy that optimizes the reward while subject to budget constraints on the human's involvement.\n\n---\n#### Q3. “Third, I do not find any condition on the human decision, which is a bit surprising to me. E.g., even if the human can access $U$, but his/her decision is a constant (say, always 1), then it does not provide any information.”\n\nTheorem 7 states that the human’s action could be ignored if “the human decision is affected only by observable variables.” The reviewer’s example, i.e., the human’s decision “is a constant (say, always 1)” regardless of values of $U$, satisfies this condition.\n\n---\n#### Q4. “$U^{(t1)}, U^{(t2)}$ seem not defined.”\n\n“In the MDPUC+ of Figure 2(c), the unobserved states $U^{(t)} = (U^{(t1)}, U^{(t2)})$” (Page 6, Line 6). That is, $U^{(t1)}, U^{(t2)}$ are elements in the random vector $U^{(t)}$.\n\n", "role": "user" }, { "content": "Title: Thanks for response\nThanks for your response. I have decided to slightly increase my evaluation to \"6: Marginally above acceptance threshold\". (but it seems I cannot modify the original comment. I will figure it out.)\n\nThe given example is a good motivating example, so please make it clear in the paper. And also maybe add some more realistic experiments, which should benefit the paper.", "role": "assistant" } ]
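Editor's note: a toy numerical illustration of the point debated in Q1-Q3 above (not the paper's MDPUC/DSCM construction; all names and numbers are hypothetical): when the human's intended action depends on a state U the agent cannot observe, conditioning on that intent is valuable even if the human acts on U badly.

```python
import numpy as np

rng = np.random.default_rng(0)
n, q = 100_000, 0.3                        # the human matches U only 30% of the time

U = rng.integers(0, 2, size=n)             # unobserved state (e.g., siren on/off)
H = np.where(rng.random(n) < q, U, 1 - U)  # human's intended action, driven by U
# reward = 1 iff the executed action matches U

# Agent that ignores the human: without access to U, any choice earns ~1/2.
print(np.mean(rng.integers(0, 2, size=n) == U))             # ~0.50

# Agent that treats the human's intent as an observation: pick, for each value
# of H, the action that matched U most often in past (simulated) experience.
best = np.array([np.bincount(U[H == h], minlength=2).argmax() for h in (0, 1)])
print(np.mean(best[H] == U))                                # ~0.70, despite q = 0.3
```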
P0f91v5fTK
CLeaR 2022 Conference
Reviewer_yKxJ
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_yKxJ, and the following are the paper you need to review:\n```<<P0f91v5fTK>>```", "role": "user" }, { "content": "Review Content: Summary: This paper studies the role of human instructor in the interactive setting, and shows that it is helpful to keep human in the loop as long as it has information regarding to the task, though not playing it in an optimal way.\nMain review: Significance:\nThis paper studies an interesting topic, i.e., what is the role of humans for interactive reinforcement learning settings. To formally study this, the paper defines the experimental agent and counterfactual agent and identifies the conditions that when human input is valuable to achieve optimal performance. The interesting part is that it shows when human has specific information for the task that is not accessible to the agent, though they do not perform it optimally, it could still boost the performance of the agent.\n\nQuality/Clarity:\nThis paper is easy to read. By formally defining the related concepts in the causal language, the setup is clear and makes the paper easy to follow and understand. I appreciate the identification of the conditions that when human input can provide valuable information. It would be great (if possible) to characterize this in a more quantitative way. Intuitively the quality of human play should have some effect on the learning process of the agent, this might could be achieved by imposing some human players' policy and seeing how it affects the final performance. Another interesting part I like of the paper is the trade-off between optimality and autosomally through a polynomial optimization problem, though it would be great if the paper could discuss some scalability of the optimization problem. 
\n\nMinor comments:\nMaybe I am missing something here: the paper mentions that the agents could still benefit from the human's advice though the reward and transition are fully known (with some features unobservable); are there any experiments showing this empirically?\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer yKxJ\nWe thank the reviewer for the thoughtful feedback. We agree that a quantitative condition for characterizing the value of the human's action is an interesting research direction. We summarize the reviewer's questions and provide answers below.\n\n---\n#### Q1. “Another interesting part of the paper I like is the trade-off between optimality and autonomy through a polynomial optimization problem, though it would be great if the paper could discuss the scalability of the optimization problem.”\n\nWe are glad that the reviewer appreciated the trade-off between optimality and autonomy. Our polynomial optimization problem faces the same challenges as other standard MDP planning algorithms with high-dimensional state space. One approach to addressing these challenges is exploring additional parametric assumptions about the transition distributions and the reward functions, e.g., the linearity.\n\n---\n#### Q2. “Maybe I am missing something here: the paper mentions that the agents could still benefit from the human's advice though the reward and transition are fully known (with some features unobservable); are there any experiments showing this empirically?”\n\nYes, Figure 1 shows such an experiment. More specifically, exp2 is a policy obtained from standard POMDP planning algorithms provided with the full knowledge of the transition distribution $P(s^{(t+1)}|s^{(t)}, u^{(t)}, x^{(t)})$ and the reward function $P(y^{(t)}|s^{(t)}, u^{(t)}, x^{(t)})$. It uses $U^{(t)}$ as the latent state and $S^{(t)}$ as its partial observation. The ctf represents the counterfactual policy learned using human input. Simulation results in Figure 1 reveal that the performance of the standard POMDP agent (exp2) coincides with the random coin-flipping (random). The counterfactual agent (ctf) outperforms all the other policies, even though the physician (human) performs worse than random guessing.", "role": "user" }, { "content": "I value the authors' response. It adequately addresses the points I raised. I maintain my positive recommendation for this work.", "role": "assistant" } ]
P0f91v5fTK
CLeaR 2022 Conference
Reviewer_FQqZ
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_FQqZ, and the following are the paper you need to review:\n```<<P0f91v5fTK>>```", "role": "user" }, { "content": "Review Content: Summary: This paper considers the following setting of RL: one agent can only access part of the confounders in the system. Even this agent can access all the reward function and state-transition functions, the learnt policy may still be sub-optimal. Then the authors propose the so-called \"counterfactual agent\" and a hybrid approach to improve the regret.\nMain review: Strengths:\n\nThe theory developed herein is relatively self-complete. The simulation studies are relatively comprehensive. The polynomial optimization formulation of the hybrid agent is quite novel and potentially has strong practical and theoretical relevance (from the algorithmic perspective). This may also open up the possibility of studying statistical-computational trade-offs in settings related to this paper. In particular, the hybrid setting is also related to recent literature on combining data with no unmeasured confounding with observational data with unmeasured confounding to dynamical settings. In this case, though, the trade-off between causal and non-causal can be balanced under the resource (access to human decision-makers) constraint.\n\nWeaknesses/Other comments:\n\n1. The paper did not reference many important works on time-varying causal inference (which is essentially RL + causality) in statistics: e.g. Murphy JRSSB 2003, Robins 2004, Dudík-Erhan-Langford-Li Stat. Sci. 2014.\n\n2. From the perspective of causality, it is not at all surprising that the Experimental Agents (EA) as defined in this paper may learn sub-optimal policies when EA does not have access to all confounders. I believe this should be discussed in detail in the first section and the intuition by causal thinking should also be presented. I think it is also important to explain the intuition to readers why even when human decision-makers are inferior to random guessing, it is still beneficial to have this component in the algorithm.\n\n3. 
As I mentioned, the hybrid setting seems to be closely related to the recent literature in causality that tries to combine real-world data for which strong ignorability fails to hold and data with no unmeasured confounding. Could the authors briefly comment on this?\n\n4. In the hybrid setting, which algorithm is used for the polynomial optimization problem? How long does it take to run the algorithm? I had the impression that Sherali-Adams is fast but SDP is generally not scalable to even moderately big systems. It might be helpful if the authors can further comment on the practicality of these choices, in particular during the presentation of the simulation results.\n\n5. MDP might not be a realistic assumption in particular in clinical applications. It might be helpful if the authors could make some further comments on how to extend their works to non-MDP settings in the Discussion section.\n\nOverall, the flavor of causality in this paper is not very strong and the kind of idea borrowed from causality is rudimentary. But the optimization algorithms, the theoretical justification, and the detailedness of the authors' analyses are worth sharing with the causal inference community.\n\nMinor typos:\n\nCorollary 19 proof (Appendix page 17): $F_{CTF}$ should be $\\Pi_{CTF}$.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer FQqZ\nWe thank the reviewer for the thoughtful feedback and incorporate it into the updated manuscript. We summarize the reviewer’s questions and provide answers below\n\n---\n#### Q1 + Q3 \n>\"The paper did not reference many important works on time-varying causal inference (which is essentially RL + causality) in statistics: e.g. Murphy JRSSB 2003, Robins 2004, Dudík-Erhan-Langford-Li Stat. Sci. 2014.”\n\n>“As I mentioned, the hybrid setting seems to be closely related to the recent literature in causality that tries to combine real-world data for which strong ignorability fails to hold and data with no unmeasured confounding. Could the authors briefly comment on this?”\n\nThank you for sharing the references. The problem settings the reviewer mentioned are related, but orthogonal to the one studied in our work. More specifically, these papers studied the offline learning setting where the goal is to learn an efficient interventional policy from data collected through observational studies. The learned policy is still “experimental” (Definition 3), i.e., it does not utilize the human’s action during the execution of the policy. In other words, the human’s action is only utilized before the deployment of the policy and is ignored after a policy is obtained. There is a good amount of follow-up on this work, which includes graphical models, for example, see (Zhang and Bareinboim 2017, 2019, 2020, 2021).\n\nOn the other hand, this paper studies another aspect of issues arising due to unmeasured confounders. We design a novel family of agents that models the system dynamics using a specific type of counterfactual distributions in Definition 8, similar to the effects of the treatment on the treated (ETT). Using this counterfactual representation, the agent is able to actively account for the human’s action “on the fly”, after the deployment of the policy. (Here, interventional and counterfactual policies are strictly different, as shown in Def. 3 and Def. 5. 
) \n\nFinally, we note that there exist opportunities combining both the offline and online modes of learning. For instance, in Experiment 2, we learn some entries of ETT distributions from the confounded observational distribution (Eqs. 12-13). The agent then uses the estimated ETT probabilities to learn a counterfactual policy that actively accounts for the human’s action.\n\n---\n#### Q2 “From the perspective of causality, it is not at all surprising that the Experimental Agents (EA) as defined in this paper may learn sub-optimal policies when EA does not have access to all confounders. I believe this should be discussed in detail in the first section and the intuition by causal thinking should also be presented. I think it is also important to explain the intuition to readers why even when human decision-makers are inferior to random guessing, it is still beneficial to have this component in the algorithm.”\n\nWe agree with the reviewer that experimental agents “may learn sub-optimal policies when EA does not have access to all confounders.” Humans’ suboptimal decisions are still beneficial since the agent may use them to infer the optimal decision. For instance, for a binary action X, if an intervention X = 0 is inferior to random guessing, the opposite action X = 1 is guaranteed to perform better than random guessing. We will include further explanations in the updated manuscript. Thank you for the suggestion.\n\n---\n#### Q4 “In the hybrid setting, which algorithm is used for the polynomial optimization problem? How long does it take to run the algorithm? I had the impression that Sherali-Adams is fast but SDP is generally not scalable to even moderately big systems. It might be helpful if the authors can further comment on the practicality of these choices, in particular during the presentation of the simulation results.”\n\nWe used the SDP relaxation (Lasserre, 2001) to solve the polynomial program for all experiments. It could return an optimal hybrid policy in 10-15 mins. However, the program was unable to finish for a moderately large domain due to the lack of memory. We conducted experiments on a laptop with 32GB memories. Similar to standard RL algorithms, our method could face computational challenges when the state-action space is high-dimensional. To address this issue, one approach is to consider additional parametric assumptions in the transition distribution and the reward function, e.g., see (Guestrin et al., 2003).\n---\n#### Q5 “MDP might not be a realistic assumption in particular in clinical applications. It might be helpful if the authors could make some further comments on how to extend their works to non-MDP settings in the Discussion section.”\n\nThis paper studies the problem of maximizing the cumulative discounted reward over an infinite horizon ($T = \\infty$). In clinical applications, most environmental models (e.g., dynamic treatment regimes (DTRs)) are concerned with the finite-horizon decision-making settings. For instance, every patient usually only receives a finite number (say, 10) of chemotherapy treatments. It follows immediately that one could define counterfactual agents for DTR models that actively account for the physician’s action. However, the polynomial program formulation for the trade-off between autonomy and optimality depends on the MDP properties and may require additional generalization. 
We will include further discussions about extending to non-Markov settings in the updated manuscript.\n", "role": "user" }, { "content": "Title: thank you\nI'd like to thank the authors for their comprehensive response. The responses have fully addressed my concerns. I'll increase my evaluation by one point.\n\nThank you!", "role": "assistant" } ]
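A quick way to see the rebuttal's point above about inferior interventions (Q2): if the random-guessing baseline is taken to be a fair coin flip between the two interventions, which is an assumption made here only for illustration, then the value of random guessing is the average of the two arm values, so an arm that falls below the baseline forces the other arm above it. A minimal derivation:

```latex
% Fair-coin baseline over a binary action X (illustrative assumption).
V_{\mathrm{rand}} = \tfrac{1}{2}\,\mathbb{E}[Y \mid \mathrm{do}(X=0)] + \tfrac{1}{2}\,\mathbb{E}[Y \mid \mathrm{do}(X=1)]
% Hence an arm that is worse than random guessing pins down a better-than-random arm:
\mathbb{E}[Y \mid \mathrm{do}(X=0)] < V_{\mathrm{rand}} \;\Longrightarrow\; \mathbb{E}[Y \mid \mathrm{do}(X=1)] > V_{\mathrm{rand}}
```

This is only the simplest interventional version of the argument; the counterfactual (ETT-style) agent discussed in the rebuttal applies similar reasoning while actively accounting for the human's action.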
Q102xPpYV-B
CLeaR 2022 Conference
Reviewer_DtnT
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_DtnT, and the following are the paper you need to review:\n```<<Q102xPpYV-B>>```", "role": "user" }, { "content": "Review Content: Summary: The paper studies estimation of causal effects using a combination of experimental and observational data. The authors propose the integrative R-learner, an extension of the existing R-learner estimator, for the purpose. Under the assumptions of effect transportability and ignorability in the experimental data, this is shown to lead to consistent estimation with asymptotic variance at least as good as a comparable estimator based only on experimental data. The approach is evaluated on synthetic and real-world data and compared with relevant baselines. \nMain review: **Originality:**\n\nThe paper approaches a previously explored problem under less restrictive assumptions, with an estimator that generalizes a previously known one. This provides sufficient originality. \n\n**Significance:**\n\nThe result that strict variance improvement over RCT data alone depends on the nature of basis functions for the effect and confounding functions is interesting—but not expanded upon. \n\nIt is slightly disappointing that advantage over RCT-only estimates is studied only in terms of asymptotic variance for a fixed proportion of RCT samples. If anything, this study is justified by the fact that RCT data is rare and observational data is plentiful. Although p(S=1) may be small, I find it hard to see that “n” grows linearly with “m” in practice. Admittedly, the comparison is made with an estimator that is more wasteful—which uses all N samples from an RCT. Nevertheless, it is unclear how the result scales with p(S=1) or for fixed “n” with growing “m”. Surely these questions are of greater practical concern in the settings used for motivation?\n\nThe comparison with Kallus et al. (2018) and Nie & Wager (2021) in section 5 is OK, it makes for a more self-contained paper, but does not add much new insight. For example, the instability of the U-learner is noted in existing work. Would it be possible to provide a rate comparison with the U-learner?\n\nIn simulation study I, the authors study the finite-sample properties of their estimator. 
This is a very low-dimensional setting (d=2) where the sample size from the RCT is proportionally quite large. Additionally, the model size is very small as well. The integrative R-learner is the clear winner in this setting, yes, but it does not tell us much about where this edge comes from and if/when it is lost. \n\n**Technical quality:**\n\nOverlap between observational data and RCT is not discussed other than in experiments. However, is it not necessary for objective (5) to be uniquely identified in general? Specifically, if RCT and OS are completely disjoint, it would seem c and tau could be invariant to a multiplication/division with a constant on the observational data, since \\tau does not have to be consistent with the RCT in these points. \n\nThe authors equate “If all the confounding variables are captured in the OS” with exchangeability w.r.t X. These are not equivalent, e.g., if the OS contains other post-treatment variables than the outcome. \n\nIn Assumption 1, \\pi_A(X) is not defined. \n\n**Clarity:**\n\nThe introduction is clear, but perhaps a little too long given its current content. This is not the first work studying a combination of RCT and OS data, and the problem does not need to be justified from scratch—the arguments that RCTs are expensive and OS data is plentiful but can lead to biased estimates have been made many times before. I appreciate that this gives a more self-contained description, but I think the space could be better spent. In particular, I’m missing references (in the introduction) to e.g., Bareinboim & Pearl (2016), Kallus, Puli & Shalit (2018)—especially since it is used heavily later in the paper, Peysakhovich & Lada (2016), Gentzel, Pruthi & Jensen (2021), etc. While these may not address the same exact setting as is studied here, they are more closely relevant than some of the other references in the introduction. In particular, this could help contextualize the contributions of the present work. \n\nThe Setup section clearly defines the problem and the main assumptions. \nSection 3.1 justifies the proposed learning objective well. \n\nUnless I’m mistaken, Algorithm 1 describes a special case of grid search. I’m not sure an algorithm box is necessary here—although it is certainly clear!\n\nI think the abbreviations NK_linear and NK_lasso could be interpreted as “Nathan Kallus_linear”. Surely, it would be more appropriate to also give credit to the two co-authors of that paper than to list both first and last name of the first author?\n\nThe original R-learner studies a non-parametric kernel-based estimator. Here, a series approximation is used. Was there a particular reason for this departure?\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Responses to Reviewer DtnT \nWe thank the reviewer for your careful reading of our manuscript and helpful comments. In what follows, we have the point-to-point responses to the detailed review comments.\n1. *“The result that strict variance improvement over RCT data alone ... but not expanded upon.”*\n Thanks for the constructive comments. To have a better understanding of strict variance improvement over the RCT-only estimator, we consider an extreme example where the OS does not have unmeasured confounding. In this example, the confounding function is zero. If we knew this fact a priori, we could combine the experimental and observational studies directly for HTE estimation. 
The integrative R-learner improves the efficiency of the RCT-only estimator for the obvious reason that it uses a larger sample size. As long as the confounding function has a simpler structure than the HTE, e.g. more sparse in variables or more smooth in its functional form, the integrative method outperforms the RCT-only estimator. When we lack such prior knowledge for the confounding function, we use basis functions to approximate the confounding function and use penalization to exclude the redundant basis functions and leverage the sparse or smooth structure of the confounding function to achieve the efficiency gain as the oracle one does.\n2. *“It is slightly disappointing that advantage over RCT-only estimates is studied only in terms of asymptotic variance, … in the settings used for motivation?”*\n Thanks for the inspiring feedback. Our proposed method is indeed motivated in the settings where the sample size of the RCT (n) is small but the sample size of the OS (m) is much larger. To address the practical concern, we can allow n/m to go to 0 with n and m going to infinity in our asymptotic regime. This asymptotic regime is not restrictive in some practical situations when the RCT has hundreds of patients and the OS has a much larger sample. We agree that non-asymptotic risk bounds for a fixed n and a growing m are also of great interest, and we will work on deriving such bounds in our future work. \n3. *“Would it be possible to provide a rate comparison with the U-learner?”*\n Thanks for the great comment. Yes, we can also show that the U-learner is root-N consistent for the HTE, as is our proposed integrative R-learner. The proof steps are similar: using sample splitting to estimate the nuisance functions helps to replace the estimated nuisance functions in the adjusted outcomes in the U-learner with the true ones, and then applying the theory of series approximation leads to the asymptotic properties of the U-learner. Although the U-learner and the R-learner have a comparable rate of convergence, we chose to extend the R-learner to the data integration context exactly because the R-learner is more stable than the U-learner, especially when the propensity score can be close to zero or one. \n4. *“Additionally, the model size is very small … where this edge comes from and if/when it is lost.”*\n The efficiency gain comes from the smoothness or sparsity of the confounding function (Theorem 1). When the confounding function requires more basis expansion than the HTE, then the integrative approach does not gain any efficiency on HTE estimation. We agree that additional simulation studies will be needed to provide a comprehensive evaluation of the proposed method. \n5. *“Overlap between observational data and RCT is not discussed other than in experiments.”*\n Thanks for bringing up this important question. The goal is to identify and estimate $\\tau(x)$, where x belongs to the support of X in RCT. The support of X in RCT must overlap with/nest in the support of X in OS. Outside of the support of X in RCT, $\\tau(x)$ is not identifiable. We will add this assumption in the revised paper.\n6. *“The authors equate “If all the confounding variables are captured in the OS” ... other post-treatment variables than the outcome.”*\n Thanks for the comment. We will clarify this as “If all the pre-treatment confounding variables are captured in the OS”. \n7. *“In Assumption 1, \\pi_A(X) is not defined.”*\n This is a typo: \\pi_A(X) should be changed to e(X, S). We will make the correction. \n8. 
*“Unless I’m mistaken, Algorithm 1 describes a special case of grid search. I’m not sure an algorithm box is necessary here—although it is certainly clear!”*\n We agree with the comment, and we will move Algorithm 1 to the appendix if the space is limited.\nWe will change “NK_linear” and “NK_lasso” to “KPS_linear” and “KPS_lasso” so as to include the first letter of the last names of all the authors.\n9. *“The original R-learner studies a non-parametric kernel-based estimator. Here, a series approximation is used. Was there a particular reason for this departure?”*\n Using series approximations enables us to compare the efficiency gain in Theorem 1 more easily. \n10. Thanks for the suggestion on the additional related literature in the introduction part. We will add them in the revised paper.", "role": "user" }, { "content": "Title: Re: response\nThank you for your response!", "role": "assistant" } ]
Q102xPpYV-B
CLeaR 2022 Conference
Reviewer_8X2B
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_8X2B, and the following are the paper you need to review:\n```<<Q102xPpYV-B>>```", "role": "user" }, { "content": "Review Content: Summary: This paper proposes a new method, integrative R-learner, for estimating causal effects from both observational and experimental data in the binary contextual bandit setting. Importantly, the observational data is subject to causal confounding, and both the observational and experimental data are subject to covariate shift (different distribution of the context variables). The authors show their method asymptotically converges to an unbiased estimator (under assumptions), at least as fast as the estimator that uses the experimental data.\nMain review: ### Originality\n\nThe approach proposed in the paper seems novel, as far as I can tell. However, I think a few citations to related works might be missing. How does the approach relate to, e.g.\n\nZhang and Bareinboim (2017) Transfer learning in multi-armed bandits: A causal approach.\n\nZhang and Bareinboim (2021) Bounding causal effects on continuous outcomes.\n\n### Significance\n\nThis paper addresses an important problem, that of merging observational and experimental data in causal inference, which is very relevant to CLeaR. However, it is also very dense mathematically for a quite limited in scope (binary contextual bandits). And most importantly, I remain unconvinced, from the presented experimental results, of the utility of the method. Thus, I am unsure this paper is likely to have a broad impact.\n\n### Technical quality\n\nThe technical quality of the paper is rather poor. The presented approach is backed-up by theoretical justifications, however it is hard to judge the validity of those, or how restrictive the required assumptions are. The paper is very dense and mathematically heavy (although I noted the notation was also a bit sloppy at times), and the proposed approach includes many bells and whistles (different models for different parts of the model, cross-fitting, series approximation, regularization with cross-validation) which highly obfuscate the interpretability of the paper. 
On the experimental side also, I found the experiments rather limited and potentially flawed (see my detailed comments), and thus not convincing enough to support the claims of the paper.\n\n### Clarity\n\nAlthough the proposed approach does not seem that complicated and can be summarized in two equations (5 and 6), the mathematical developments in the paper are hard to parse and interpret intuitively. Also, I found the experiments not very clear. For example, I could not clearly understand what the compared methods actually correspond to (e.g., the rct_x and os_x variants).\n\n### Detailed comments\n\np.3 Ass.1: $c_1,c_2 \\in (0, 1)$ -> This notation is confusing to me. Do you mean the interval $[0,1]$, or the set $\\\\{0,1\\\\}$ ? This should be clarified.\n\np.3 Ass.1: $\\pi_A(X)$ -> this concept has not been introduced yet. What is $\\pi_A(X)$ ? Furthermore, if I assume that this represents the policy used in the observational study, $p(A|X,S=0)$, then this statement is trivial. Since $A$ is binary, we necessarily have that $p(A=a|X,S=0) \\in [0,1], \\forall a \\in \\\\{0,1\\\\}$.\n\np.3 Ass.1: I have a hard time understanding the meaning of (c). I suppose you mean $Y=Y(a), \\forall a$ ? Then, this turns your assumption (a) into $A \\perp Y \\mid (X,S=1)$, which means that the treament has no causal effect on the outcome. Am I missing something ?\n\np.3 Ass.2: $s=0,1$ -> do you mean $\\forall s \\in \\\\{0,1\\\\}$ ?\n\np.3 eq.2: $A\\\\{\\dots\\\\}$ -> this notation is again new. What does $A\\\\{\\dots\\\\}$ mean ?\n\np.3: combining the RCT and OS can identify the HTE and confounding function -> the first part of this statement is trivial, since the RCT alone can identify the HTE.\n\np.3: $Y(a) \\perp S \\mid X,(a=0,1)$ -> this notation again is rather sloppy. Do you mean $Y(a) \\perp S \\mid X,A=a, \\forall a \\in {0,1}$ ?\n\np.3 Ass.2: $\\mathbb{E}\\\\{...\\\\}=\\tau(X)$ -> notation is inconsistent, should be $\\mathbb{E}(...)$.\n\np.6: the possible smooth or sparsity of $c(X)$ leads to the efficiency gain of the integrative R-learner compared to using the RCT alone -> Do the gains of your method only come from regularization ? And what if the RCT $\\tau(X)$, estimated alone from RCT data, is also regularized ? Do you still obtain gains with your method ?\n\np.6: Thus, it is critical to impose different regularization parameters for $\\tau(X)$ and $c(X)$ -> I do not see the implication. Why would these regularizations need to be different ?\n\np.7 Ass.3: $||\\tau(X)||_\\infty$ and $||c(X)||_\\infty$ have not been defined.\n\np.7: the integrative $R$-learner has strictly larger efficiency -> what is meant here by efficiency ? This concept has not been properly defined. Do you mean that regularizing $c(X)$ imposes an implicit regularization on $\\tau(X)$, which reduces the capacity of the RCT model, thus reducing model variance (at the cost of increasing model bias) ?\n\np.9: It is unclear to me what the \"rct_x\" and \"os_x\" methods are. The methods described in Section 3 and 5 seem to use both observational and experimental data. How are those methods modified to include only experimental (rct) or observational (os) data ?\n\np.9 Sec.6: the main claim of your paper is \"without any prior knowledge of hidden confounding in the OS, the proposed integrative R-learner is consistent and asymptotically at least as efficient as the estimator using only the RCT\". 
To support that claim experimentally, it is crucial that in your experiments, at least in the synthetic ones, you let the number of RCT samples vary from a small to a large number so as to verify that a) both estimators indeed converge to the same thing; and b) your estimator converges faster.\n\np.9: $U_{rct}$ has not been defined.\n\np.10 Tab.1: This experiment seems flawed. How is it that the unbiased RCT estimator \"rct_rf\" performs worse than the confounded one, \"os_rf\" ? It seems like the combination of RCT and OS data has a much lower impact on the final performance than the choice of the model family used to learn the estimator. I am not sure what \"rct_oa\", \"os_oa\", \"rct_rlearner\" and \"os_rlearner\" are, but if you want to evaluate the hypothesis that \"int_rlearner\" brings improvements due to the use of the OS data, then it is crucial that you also evaluate against a model that is learned from the RCT data only, which is adequate and properly regularized (e.g., using cross-validation).\n\np.10 Tab.1: Another comment, it seems crucial to me to include another important baseline, which assumes unconfoundedness in the OS data, and estimates the RCT from a naive combination of RCT and OS data. I suspect that such a baseline, with a properly regularized model, could perform very well in this experiment.\n\np.10: generate population -> generate a population\n\np.10: slection -> typo\n\np.11 Tab.2: Here again, how is it that the biased estimators, using OS data only, perform better than the unbiased estimators, which use RCT data only ?\n\np.11: taking samples with $T=0$ -> what is $T$ here ?\n\np.11: an unbiased estimate from all samples -> $X$ is missing from this formula. How can this be an unbiased estimate of the RCT ?\n\np.12: the RMSE of our integrative R-learner increases -> how is it that the best RMSE obtained by your method is at iteration 1? Why even propose an iterative learning procedure, if your method performs best without doing iterations ?\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Responses to Reviewer 8X2B\nWe thank the reviewer for your careful reading of our manuscript and helpful comments. In what follows, we have the point-to-point responses to the detailed review comments.\n1. Due to possible confusion caused by notation that varies across communities, we first provide clarifications for our notation: \n * The rule for the order of the operation we use: [{()}]. We start with (), then [], finally {}. For example, in $2\\times[1+2\\times\\\\{1+2\\times(1+1)\\\\}]=22$, we first calculate (...), then {...}, and finally [...]. In eq2, A{...} means A$\\times${...}.\n * (0, 1) means the open interval from 0 to 1 without including 0 or 1, and [0, 1] means a closed interval including 0 and 1. We assume $P(A=1| X, S)\\in(0, 1)$ instead of [0, 1] because the propensity score cannot take 0 or 1, which means that every subject has a positive probability to take either treatment.\n * s=0, 1 means $\\forall s \\in\\\\{0, 1\\\\}$.\n * Y(a) ⊥ S | X, (a=0, 1) means Y(a) ⊥ S | X, $\\forall a \\in\\\\{0, 1\\\\}$.\n * $||\\tau(X)||_\\infty$ means $\\sup_x|\\tau(x)|$.\n2. *“quite limited scope (binary contextual bandits).”* \n\n As mentioned in the Discussion, “An interesting direction of future work is to extend the integrative R-learner to multiple treatments which commonly arise in reality. 
It can be constructed similarly based on the multivariate version of Robinson’s transformation and the corresponding R-learner (Section 7 in Nie and Wager, 2021)”.\n3. *“many bells and whistles ... obfuscate the interpretability of the paper.”* \n\n The proposed estimator is an easily implemented framework which can be summarized concisely as follows:\n * (a) Estimation of nuisance functions, i.e., P(A=1| X, S) and E(Y | X, S), with cross-fitting.\n * (b) Estimation of the HTE and confounding function with regularized regression. \n\n Step (a) can be achieved by many different models as long as they satisfy Assumption 4. We consider the series approximation with penalty and machine learning models (e.g., Xgboost) in simulation studies and real-data experiments. In Step (b), we also consider both the series approximation with penalty and machine learning models (e.g., random forest). We use different models to illustrate that the proposed method is a general framework and can easily adopt suitable models in different situations. For a comparison of asymptotic variances of the integrative R-learner and the RCT-only estimator, we give the theoretical proof for the series approximation models.\n4. *“e.g., the rct_x and os_x variants”* \n\n The meanings of the comparison methods are introduced at the beginning of Section 6. “rct_x” means using the RCT alone to estimate the HTE with the method x. Similarly, “os_x” means using the OS alone. For example, “rct_oa” stands for using the RCT alone to estimate the HTE with the outcome-adjusted method which is introduced in Section 5 (i.e., regressing the estimated adjusted outcome on X to estimate the HTE and confounding function).\n5. *“p.3 Ass.1: I have a hard time understanding the meaning of (c).”* \n \n Assumption 1(c) is a causal consistency assumption which is a common assumption in causal inference literature (Rubin, 1980; Rosenbaum and Rubin, 1983). It means that an individual's potential outcome under his or her observed treatment is the outcome that will actually be observed for that person. This implies no interference: Potential outcomes for an individual are unaffected by treatments received or potential outcomes of other individuals; an exception is when the treatments are vaccines for prevention of an infectious disease. \n6. *“p.6: the possible smooth or sparsity ... Do you still obtain gains with your method ?”* \n\n We add regularization on both $\\tau(x)$ and c(x). The efficiency gain depends on the complexity of c(x) relative to $\\tau(x)$. Intuitively, we can start with one extreme example where there is no unmeasured confounding, which means that c(x)=0 (the simplest function). If we knew this fact, then we could combine the RCT and OS directly for HTE estimation, and the integrative estimator obviously improves the efficiency of the RCT-only estimator due to a larger sample size. Our proposed estimation method uses basis functions to approximate the confounding function and uses penalization to leverage the smoothness or sparsity of the confounding function to achieve the same result as the oracle one. \n7. *“To support that claim experimentally ... converges faster.”* \n \n Thanks for the constructive comment. We added a simulation for the setting in Simulation I with $\\tau(X)= 1+2X+0.75X^2, m=2000$ and increasing n below, where “int_rlearner” converges to 0, which verifies the consistency. 
\n| n |rct_OA |rwe_OA |int_OA |rct_rlearner |rwe_rlearner |int_rlearner |\n|:--|:----------|:-------|:----------|:------------|:------------|:------------|\n|200 |0.28(0.01) |0.57(0) |0.29(0.02) |0.26(0.01) |0.58(0) |0.23(0.01) |\n|500 |0.16(0.01) |0.57(0) |0.13(0.01) |0.15(0.01) |0.58(0) |0.12(0.01) |\n|1000 |0.11(0) |0.57(0) |0.08(0) |0.11(0) |0.58(0) |0.09(0.01) |\n|2000 |0.08(0) |0.57(0) |0.06(0) |0.08(0) |0.58(0) |0.06(0) |\n8. *“p.9: $U_{rct}$ has not been defined.“* \n\n $U_{rct}$ is an unobserved confounding variable which is generated from a normal distribution, please see Section 6.1.1.\n9. *“p.11: taking samples with T -> what is here ?”* \n \n This is a typo: it should be treatment A instead of T.\n10. *“p.11: an unbiased estimate from all samples -> X is missing from this formula. How can this be an unbiased estimate of the RCT?”* \n\n The propensity score in the equation is actually a special case of P(A=1 | X). We use a constant to estimate it because it is a randomized experiment. This is constructed following the method in Kallus et al. (2018).\n11. *”p.12: the RMSE of our integrative R-learner increases ... without doing iterations ?”* \n\n In the real-data experiment which has a much larger dimension of confounding variables than the simulated data because categorical variables are converted to dummy variables, machine learning approaches such as random forest have better performances than series approximation. However, unlike series approximation with explicit penalty terms, it is not straightforward to impose different regularization on $\\tau(x)$ and c(x) with machine learning methods. Therefore, we propose an iterative method which estimates $\\tau(x)$ and c(x) iteratively. The reason that the first iteration gives the best performance is that we use warm start when doing iterations, i.e., we initialize the confounding function values as the ones estimated from the method in Kallus et al. (2018) which is already a good approximation for the c(x). \n12. Thanks for pointing out the wording issues, other typos and providing two additional pieces of helpful literature. We have modified them in the revised version.", "role": "user" }, { "content": "I'm grateful for the authors' response. They have addressed most of the issues I raised, and I am considering increasing the score.", "role": "assistant" } ]
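The two-step recipe in item 3 of the rebuttal above (cross-fitted nuisance estimation followed by a regularized regression for the HTE and the confounding function) can be sketched in a few lines. The snippet below is only a schematic of step (a): the function name, the gradient-boosting learners, and the residualization noted at the end are illustrative assumptions, not the paper's implementation or its exact models.

```python
# Minimal sketch of cross-fitted nuisance estimation (step (a) in the rebuttal).
# Learner choices and names are illustrative assumptions, not the paper's code.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.ensemble import GradientBoostingRegressor, GradientBoostingClassifier

def cross_fit_nuisances(X, A, Y, S, n_splits=5, seed=0):
    """Out-of-fold estimates of m(X,S) = E[Y|X,S] and e(X,S) = P(A=1|X,S)."""
    XS = np.column_stack([X, S])   # nuisances may depend on the study indicator S
    m_hat = np.zeros(len(Y))
    e_hat = np.zeros(len(Y))
    for train_idx, test_idx in KFold(n_splits, shuffle=True, random_state=seed).split(XS):
        m_model = GradientBoostingRegressor().fit(XS[train_idx], Y[train_idx])
        e_model = GradientBoostingClassifier().fit(XS[train_idx], A[train_idx])
        m_hat[test_idx] = m_model.predict(XS[test_idx])
        e_hat[test_idx] = e_model.predict_proba(XS[test_idx])[:, 1]
    return m_hat, e_hat

# The residualized quantities then feed step (b), where the HTE and confounding
# function are fit by regularized regression with separate penalty levels:
#   y_res = Y - m_hat,   a_res = A - e_hat
```

Keeping the two penalty levels separate is what the rebuttal points to as the source of the efficiency gain when the confounding function is simpler than the HTE.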
Q102xPpYV-B
CLeaR 2022 Conference
Reviewer_YmAo
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_YmAo, and the following are the paper you need to review:\n```<<Q102xPpYV-B>>```", "role": "user" }, { "content": "Review Content: Summary: The current paper builds on the R-learner method developed by Nie & Wager. The Nie & Wager method provides a flexible framework for jointly using a variety of statistical and machine learning methods to predict treatment effects from observational data under the assumption of no unmeasured confounding of the treatment and outcome. The current paper develops the Integrative R-learner method which uses both observational (e.g., EMR) and experimental (e.g., RCT) data on a common set of real-valued covariates to predict treatment effects of a binary treatment on a real-valued outcome, when unmeasured confounders are possible. The method uses the experimental data to reduce bias in the causal predictions, and it uses the observational data to reduce variance, when doing so is possible. An evaluation using simulated and real data support that the method predicts treatment effects better than several other state-of-the-art methods.\nMain review: Originality:\n\n* The approach leverages the flexibility of the R-learner to use multiple machine learning methods, while also supporting the integration of observational and experimental data to predict treatment effects.\n\n* The approach uses different regularization terms for the HTE and confounding functions.\n\n* The paper includes in the supplement a proof that the integrative R-learner is consistent and that asymptotically it is at least as efficient as the estimator that uses only experimental data.\n\nSignificance:\n\n* The problem of leveraging both observational and experimental data to improve the prediction of treatment effects is quite important in many fields. In healthcare, for example, it could support better predictions of treatment outcomes for individual patients, which can help optimize their care.\n\n* The ability to use a wide range of supervised statistical and machine learning methods to address this problem is important, because it allows current and future methods to be immediately applied to help improve the prediction of treatment effects.\n\n* The paper does not discuss the issue of model interpretability. 
How feasible would it be to develop an automated explanation capability that describes the factors in the model that are driving its predictions? For the real-data experiments described in the paper, what were the factors driving the predictions?\n\n* To what extent are the assumptions made in the simulation studies in alignment with the assumptions made by the Integrative R-Learner method? For the simulation study, did the authors “stress test” their method by making assumptions that are (1) counter to those made by their Integrative R-Learner method, and (2) compatible with the assumptions made by other methods that were evaluated? The paper does not discuss this issue, and thus, it seems that they did not. \n\n* The Discussion section is quite brief. It would be useful to include, for instance, some discussion of the limitations of the described approach and directions for future research. For example, how would the method handle a treatment outcome that is naturally represented using a nominal variable (e.g., a binary variable representing in-hospital death)? How would it handle the common situation in which the covariates in an RCT are a small subset of the available observational covariates in the electronic medical record (EMR) system? The paper doesn’t have to solve these problems of course, but some discussion of such issues and open problems would be helpful.\n\nTechnical quality:\n\n* The work looks to be of high quality, but I did not carefully check the proofs. \n\n* In the real experiments, what is the standard error of the predictions shown in Figure 1?\n\nClarity:\n\n* Overall, the paper is grammatically well written. \n\n* The paper is written in a style that will be easiest for readers with a strong background in mathematical statistics to understand. Several parts of the paper may be difficult for readers with other backgrounds to understand clearly and completely.\nRating: 8 accept, good paper\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Responses to Reviewer YmAo\nWe thank the reviewer for your careful reading of our manuscript and helpful comments. In what follows, we have the point-to-point responses to the detailed review comments.\n\n1. *“The paper does not discuss the issue of model interpretability. How feasible would it be to develop an automated explanation capability that describes the factors in the model that are driving its predictions? For the real-data experiments described in the paper, what were the factors driving the predictions?”*\n\n Thanks for the constructive comment for model interpretability. Yes, it is feasible. For the penalized series approximation such as lasso regression, we can select important variables that drive the heterogeneity of treatment effect, i.e., the variables whose coefficients are not zero. For machine learning such as Xgboost, we can obtain feature importance with built-in functions in the R packages or Python libraries, which are calculated by the out-of-bag estimation error.\n\n2. *“To what extent are the assumptions made in the simulation studies in alignment with the assumptions made by the Integrative R-Learner method? …, it seems that they did not.”*\n\n The current simulated data satisfy the required assumptions for the proposed integrative R-learner to be consistent. For the real-data experiment, the causal assumptions, e.g. 
Assumption 2 (Transportability of the HTE) are not testable empirically and thus rely on subject matter knowledge for their justification. We will design different simulation settings where one or multiple required assumptions are violated to conduct stress tests on the consistency of the proposed estimator. \n\n3. *“how would the method handle a treatment outcome that is naturally represented using a nominal variable (e.g., a binary variable representing in-hospital death)?”*\n\n When the outcome is not a continuous variable, e.g., a binary outcome, we can extend the proposed framework to study causal odds ratios instead of average treatment effects (see *Vansteelandt, Stijn, and Marshall Joffe. \"Structural nested models and G-estimation: the partially realized promise.\" Statistical Science 29, no. 4 (2014): 707-731.*)\n\n4. *“How would it handle the common situation in which the covariates in an RCT are a small subset of the available observational covariates in the electronic medical record (EMR) system?”*\n\n Thanks for pointing out this important question. Our proposed method can handle the case when the covariates in an RCT capture all the treatment effect modifiers, and the confounding function can incorporate the additional auxiliary variables in the OS. If the treatment effect modifiers are not captured in one of the datasets, we cannot leverage the OS sample to improve the HTE estimation in the RCT sample. That being said, this is an important and interesting question for our future work.\n\n5. *“In the real experiments, what is the standard error of the predictions shown in Figure 1?”*\n\n Since the real data is fixed and given, we are not able to repeat the experiments like in the simulation studies, thus standard error is not able to be calculated.\n", "role": "user" }, { "content": "Title: Thanks to the authors for their response.\nI thank the authors for their helpful response to my comments and questions.", "role": "assistant" } ]
UnInKa0-_M9
CLeaR 2022 Conference
Reviewer_25b4
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_25b4, and the following are the paper you need to review:\n```<<UnInKa0-_M9>>```", "role": "user" }, { "content": "Review Content: Summary: Review report\nMain review: The authors consider the existing VCSGS algorithm and show that under suitable assumptions, it leads to uniformly-consistent estimation of causal strengths.\n\nAlthough the technical content and contribution seem interesting, the paper needs to undergo a significant revision for the reader to be able to appreciate the material. I also have some comments about the content below and would be happy to hear from the authors. \n\n\"k-Triangle-Faithfulness Assumption is a strictly weaker assumption that avoids some implausible implications of the Strong Causal Faithfulness Assumption and also allows for uniformly consistent estimates of Markov Equivalence Classes (in a weakened sense), and of identifiable causal effects.\" \nPlease add a citation for the k-Triangle Faithfulness assumption.\n\nPlease delete the following at the end of page 1: \n\"This is where the content of your paper goes.\"\n\n\"Kalisch and Bu ̈hlmann (2007) showed that such uniform consistency is achieved by the PC algorithm if the underlying DAG is sparse relative to the sample size\" \nCould you explain what this means? Sparsity is typically relative to number of nodes. Is this result independent from the number of nodes?\n\n\"It has the implausible consequence that it puts a lower bound on the strength of edges, since a very weak edge entails a very weak partial correlation.\" \nI don't understand this statement. It's an assumption and there are many models where such an assumption would be true. Why is this called implausible?\n\nCould you define \"k-Faithfulness Assumption\"?\n\nIs there a distinction between lowercase k and capital K?\n\n\"edges to be week anyywhere in a graph\" \ntwo typos\n\nRule 4(c) seems wrong. Checking the original version from https://arxiv.org/pdf/1502.00829.pdf, in S4.iii, we see that X->Y<-Z is the condition, not X->Y->Z.\n\nPlease use \\lim and \\sup for better visibility of formulas.\n\nThere are also typos in latex, such as \"lrangle\".\n\nIn terms of organization, I would add proof sketches to the main paper and only keep those in the main text. 
Moving the proofs will lead to better readability even if the main paper remains much shorter than 12 pages. That is not a problem.\n\n\"while holding everything else constant\" \nPlease specify that only parents of Y are conditioned on. Not descendants. \n\nmax->\\max\n\nAlso I suggest using \\limits to move limits to right under the operator for better visibility. \n\nCould you comment on why max is chosen for defining the edge strength rather than, for example, average? How will the rest of the approach be affected by such changes to the definition? Is the approach robust to this?\n\n\"Given the TV smoothness(L), p is continuous\" \nI am not sure if the terminology is accurate here. A discrete p will never be continuous. \n\nI am not sure if I follow the reasoning behind the derivation before Section 3.1 for p_y|Pa(Y)... This looks like a very loose bound and the connection to the narrative (why do we need this derivation?) is not clear to me.\n\n\"Then the test is uniformly consistent w.r.t. a set of distributions Ω for\" \n\"for\" should be replaced with \"if\" I believe.\n\nani.i.d. sample of size size n->an i.i.d. sample of size n\n\nThe equations at the end of page 7 are really hard to parse due to abrupt and unaligned line breaks. Please consider using a controlled align environment. \n\nProof of Theorem 3:\n\"Since we have proved that the probability for C(n,M) to make any of the three kinds of mistakes uniformly converges to 0, the theorem directly follows.\"\nI would appreciate seeing more details on this step. I believe the reasoning is that any error is either type I OR type II OR type III, correct? But this needs to be considered in the lim sup and at least a short statement detailing this would be nice to have.\n\n\"are true inM\"->\"are true in M\"\n\nPlease write down the theorem for which the proof is given in page 9 explicitly.\n\n\"(the number of bins)\"\nWhat is a bin? This is not explained as far as I can see.\n\nAgain, it is very very hard to parse the equations/proofs in page 10 due to the way equations are presented. Please consider using align and breaking into newlines for better readability. Same for page 11.\n\nPlease use \\mathcal{O} instead of O for big-oh notation and use \\left(\\right) instead of () for large parantheses. \n\nI believe I missed the part where the following statement became relevant in the later section: \n\"Notice that missing an edge is not listed as an mistake here, and we are going to prove later that the estimation of causal influence can still be used to correctly predict the effect of intervention even if the algorithm misses edges.\"\n\nI believe the analysis looks mostly fine but the paper suffers a lot from the clarity of presentation. As recommended above, moving proofs to the appendix and using main paper to convey intuitions behind the theorems and proofs would help greatly and make room for more discussion about connections to previous work.\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to comments and suggestions\nDear Reviewers:\n\nThank you for your comments and suggestions. We will fix the typos and change the formatting of the proofs to make them more readable. 
For your questions about this paper, please kindly find my response below:\n* \"Kalisch and Bu ̈hlmann (2007) showed that such uniform consistency is achieved by the PC algorithm if the underlying DAG is sparse relative to the sample size\"\nCould you explain what this means? Sparsity is typically relative to number of nodes. Is this result independent from the number of nodes?\n\n**Respond**: This result is not independent from the number of nodes. Kalisch and Bu ̈hlmann assumped that the maximum number of neighbors, q, in the DAG, grows as q = O(n^{1-b}), with n being the number of nodes and 0<b<1.\n\n* \"It has the implausible consequence that it puts a lower bound on the strength of edges, since a very weak edge entails a very weak partial correlation.\"\nI don't understand this statement. It's an assumption and there are many models where such an assumption would be true. Why is this called implausible?\n\n**Respond**: A better way for us to phrase this is that it is often implicitly assumed that there may be many weak causes of a variable, but even if they exist, that they can be safely ignored in estimating causal effects. The k-triangle faithfulness assumption basically formalizes that assumption, whereas the Kalisch-Buhlmann results require the stronger assumption that there are no such weak causes. That assumption is not necessary to obtain uniform consistency, and limits the applicability of their results.\n\n* Could you comment on why max is chosen for defining the edge strength rather than, for example, average? How will the rest of the approach be affected by such changes to the definition? Is the approach robust to this?\n\n**Respond**: Yes, the average can also be chosen to define the edge strength. The rest of the approach will be changed slightly, and the approach is robust to this. We choose the max because we are particularly interested in weak edges, and if an edge is weak, the maximum of change on effect when changing the state of cause is small.\n\n* \"Given the TV smoothness(L), p is continuous\"\nI am not sure if the terminology is accurate here. A discrete p will never be continuous.\n\n**Respond**: Here by continuous I mean that if a and a’ are arbitrarily close, p_y conditioning on A =a and A=a’ are arbitrarily close.\n\n* I am not sure if I follow the reasoning behind the derivation before Section 3.1 for p_y|Pa(Y)... This looks like a very loose bound and the connection to the narrative (why do we need this derivation?) is not clear to me.\n\n**Respond**: We just want to show that p_y|Pa(Y) cannot be infinitely large; it is used later in the proof of lemma 4. \n\n* Proof of Theorem 3: \"Since we have proved that the probability for C(n,M) to make any of the three kinds of mistakes uniformly converges to 0, the theorem directly follows.\" I would appreciate seeing more details on this step. I believe the reasoning is that any error is either type I OR type II OR type III, correct? But this needs to be considered in the lim sup and at least a short statement detailing this would be nice to have.\n\n**Respond**: The reasoning is correct. We will provide more details.\n\n* \"(the number of bins)\" What is a bin? This is not explained as far as I can see.\n\n\n**Respond**: To test whether x, y are independent conditioning on z, the conditional independence test we used in this work separates the z into bins. We will explain it in the paper. 
\n\n* I believe I missed the part where the following statement became relevant in the later section:\n\"Notice that missing an edge is not listed as a mistake here, and we are going to prove later that the estimation of causal influence can still be used to correctly predict the effect of intervention even if the algorithm misses edges.\"\n\n**Respond**: Here we want to say that an edge is only going to be missed if it is a weak edge; and if such an edge is missed, we can still achieve uniform consistency when estimating p_y|Pa’(Y), where Pa’(Y) denotes the parents of Y except those connected to Y through the missed weak edges. \n", "role": "user" }, { "content": "I value the authors' response. They have addressed most of the issues I raised, and I will consider raising the rating score.", "role": "assistant" } ]
UnInKa0-_M9
CLeaR 2022 Conference
Reviewer_AaJn
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_AaJn, and the following are the paper you need to review:\n```<<UnInKa0-_M9>>```", "role": "user" }, { "content": "Review Content: Summary: The authors present a result on uniform consistency for estimating causal structures in the case of of smooth distributions. By this, the authors extend existing results mainly restricted to linear Gaussian models. The way the achieve this is by replacing assumptions. Moreover, they present an algorithm for causal structure learning (VCSGS) and an algorithm for estimating causal effect size (EEA) and prove uniform consistency in the inference of the causal structure.\nMain review: Main comments\n-\tMy main concern with this paper is readability: I found it very hard to extract the information. After the acceptable introduction, the paper dives into a series of lemmas and proofs, where it is easy to lose the bigger picture. Moreover, the notation is not always consistent and the display of the main equation arrays is too much croweded to the left, so hard to read. I will give several examples below. Because of this, I think the current form of the paper is not acceptable for publication.\n-\tThere are several gaps in the proofs (claiming that this step is easy or similar to some other proof), where it is hard to assess the correctness of the proof.\n-\tI don’t think that the results presented will have a great impact on the field. They more seem like variations of existing approaches.\n-\tCould the authors mention on the practical applicability of the algorithms (e.g. computational complexity)?\n\nDetailed Comments:\n-\tThe claims and outlooks in the introduction are not well matched in the rest of the paper. E.g. the term “Generalized k-Triangle Faithfulness” is not mentioned once again after the introduction. I suggest a much more rigorous format with careful use of formal Definitions for new assumptions and an algorithmic pseudo-code for algorithms. Moreover, the main findings should be clearly marked with Theorems and the very technical lemmas (e.g. 
Lemma 4, which is presented within a proof) should be moved to the appendix or supplementary material.\n-\tI is not clear if the two algorithms proposed are new in themselves or if the consistency result is new.\n-\tDelete sentence at the end of Introduction: “This is where the content of your paper goes”\n-\tCh 2.1., end of first paragraph: “and Y is a descendant of X” (X and Y are in the wrong order in the current text)\n-\tCh 2.2: “its its”\n-\tCh 2.2, end of page 2: “the set of X variables is jointly standard Gaussian”; I am not sure if “standard” applies here, it should probably be moved to the error term?\n-\tCh 2.2: spelling in “week anyywhere”\n-\tCh 2.2: Does the “K” in the inequalities and the “k” in the last sentence of ch 2.2 mean the same value? If yes, please use same symbol, if no, please explain how the “k-triangle-Faithfulness” relates to the two versions of “k”.\n-\tCh 2.3, step 3: The term “unshielded” was not defined (although it is pretty clear what you mean)\n-\tCh 2.3, step 4a: I am wondering if you should add “unshielded” to “the triple (X,Y,Z)” ?\n-\tCh 2.3, step 5: “apparently non-adjacent” was not defined and does not occur in the algorithm before\n-\tCh 3, second bullet point: In the subscript you use both “Y” and “Y=y”. Before you also just used the subscript “y”. Please stay consistent to improve readability.\n-\tCh 3: What does the “TV” in the “TV Smoothness” mean?\n-\tCh 3: What norm are you referring to in “\\leq (1 + L |Pa(Y)|)” ? Should that really be a capital “P” in “Pa” or rather “pa”?\n-\tCh 3.1: Can you give the rationale behind using the proposed definition of epsilon? Since it characterizes conditional dependence given A, I would have thought of: p_{X,Y|A} vs. p_{X|A}*p_{Y|A}. Why are you using the joint and not the conditional probability?\n-\tCh 3.1, p.6: Typo in math expression in 3rd paragraph: “lrangle”\n-\tCh 3.1, p.6: “size size”\n-\tCh 3.1, list item 2: “G_M” instead of “G_m”\n-\tCh 3.1: “no undirected edge connecting Y AND X” (I think “and X” is missing)\n-\tCh 3.1: “intervening ON X” (“on” is missing)\n-\tCh 3.1: “Since the proofs of kind I and kind III are almost the same … we are only going to prove kind II here” – this is not obvious to me.\n-\tThe proof of Lemma 2 is hard to check: Please add some details throughout the proof on the main strategy of the proof. E.g., you prove by contradiction, but in the end there is no clear mentioning of the contradiction you achieved. Moreover, the reference “Lemma 3.1” should be “Lemma 1”. Moreover, the notation of “K” in the denominator is not consistent.\n-\tEdge Estimation Algorithm: Please set this apart clearly from the main text. I suggest pseudo-code and an algorithm display.\n-\tThe definition of “conditional probability distance” is hard to read. Is there a way to simplify the notation?\n-\tP.9: The proof starts without an easy to see claim. Since this seems to be one of the main results, you could perhaps reformulate this as a Theorem (ideally you name all your main theorems) to help the reader to filter out the main ideas from the technical details.\n-\tThe references are not complete. At least “Kalisch and Bühlmann (2007)” appears in the main text but not in the reference section.\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to comments and suggestions\nDear Reviewers:\n\nWe thank the reviewers for their comments and suggestions. 
We will fix the typos and change the formatting of the proofs to make them more readable. We will also correct mistake in the algorithm, and move the proofs of the lemmas to the appendix, to open room for more discussion of the overall strategy, and how the result relates to previous research. For your questions about this paper, please kindly find my response below:\n\n\n* I don’t think that the results presented will have a great impact on the field. They more seem like variations of existing approaches.\n\n\n**Respond**: The result of our work shows that given uniform consistency of nonparametric conditional independence tests (which itself requires a smoothness assumption), we can reach uniform consistency of estimating causal structures and effects without making parametric assumptions. The only uniform consistency results for causal search and estimation algorithms that we know of (Spirtes and Zhang, Kalisch and Buhlmann) assume linear Gaussian models. Since uniform consistency allows one to give probabilistic bounds on the size of errors given a sample size as a function of k and the smoothness parameter, this gives some small sample results for algorithms that only currently have large sample guarantees. \n\n\n* Could the authors mention on the practical applicability of the algorithms (e.g. computational complexity)?\n\n\n**Respond**: We will provide the computational complexity of the algorithms and the convergence rate of estimation. \n\n\n* not clear if the two algorithms proposed are new in themselves or if the consistency result is new.\n\n\n**Respond**: The consistency result is new.\n\n\n* Ch 3.1: Can you give the rationale behind using the proposed definition of epsilon? Since it characterizes conditional dependence given A, I would have thought of: p_{X,Y|A} vs. p_{X|A}*p_{Y|A}. Why are you using the joint and not the conditional probability?\n\n\n**Respond**: We want to characterize the $p_\\{X,Y|A\\}$ vs. $p_\\{X|A\\}$*$p_\\{Y|A\\}$ over all values of X, Y and A. You can think about our definition of epsilon as a general characterization of all $|p_\\{X,Y|A\\} -p_\\{X|A\\}*p_\\{Y|A\\}|$ over all values of X, Y and A\n\n\n* Ch 3.1: “Since the proofs of kind I and kind III are almost the same … we are only going to prove kind II here” – this is not obvious to me.\n\n\n**Respond**: We will move the proofs of the lemmas into the appendix and provide more details. The only additions that we need to make to the proofs of kind I and III are minor modifications of the corresponding proofs in Spirtes and Zhang.\n", "role": "user" }, { "content": "Title: Update\nI would like to thank the authors for their response. After digesting their answers, I see the situation in the following way:\n- The result in itself seems to be indeed potentially interesting to the community; therefore, I am willing to increase my score\n- I trust that it is possible to improve the readability of the paper as the authors indicate. However, I think this will be quite challenging and therefore my final score is still marginal.", "role": "assistant" } ]
YWRhER626PX
CLeaR 2022 Conference
Reviewer_Xryg
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_Xryg, and the following are the paper you need to review:\n```<<YWRhER626PX>>```", "role": "user" }, { "content": "Review Content: Summary: A useful contribution that could be better presented\nMain review: This paper presents a novel and useful simulation environment that can be used to evaluate certain classes of causal models. Evaluation in causal learning and reasoning is complex, largely because causal models are intended to evaluate the effects of actions, making many static data sets insufficient for evaluation. In addition, algorithms for causal learning and reasoning are increasingly designed for tasks that go beyond simple estimation of average treatment effects.\n\nThat said, several aspects of the presentation could be substantially improved. First, it takes two pages before the introduction actually identifies the key contribution of the paper (“To this end…”). This should appear much earlier, preferably on p. 1. While Figure 1 presents some details earlier, that figure is not intelligible until readers get some more general description of CausalCity.\n\nSecond, given that CausalCity is built on top of AirSim (and already very capable simulation environment), the authors would substantially improve the paper by highlighting the unique features that they added to enable CausalCity to be useful for R&D in causal inference. Such features should be the central topics addressed by each subsection rather than the current focus on the specifics of a driving simulator (e.g., “environment features”, \"vehicles\").\n\nThird, most simulators would appear to provide opportunities for doing many of the things listed as advantages of CausalCity. The paper should identify the unique advantages of CausalCity for research in causal inference that are not provided by other simulators. While it is beneficial to research on causal inference to have “another simulator”, it would be useful to highlight why this particular simulator will help research in causal inference more than others.\n\nFinally, the discussion of a model (NS-DR) begins by discussing a data set. It would be far more useful to begin by discussing the model itself. 
This is what readers will expect.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response\nEstablish what CausalCity provides above and beyond AirSim:\n\nWe agree that simulators in general are useful for causal analyses, and AirSim could be used in this way directly depending on the task. However, we have added specific features that enable more elaborate and complex modeling, which we argue makes it easier to create more realistic scenarios. The key unique features of CausalCity are: 1) the ability to define routes for any number of vehicles using spawn points and waypoints, and actions at those points (e.g., turning, changing lanes, stopping, etc.) through an easily configurable interface, and 2) the vehicles have basic agency such that it is not necessary to define the complete trajectory of every vehicle – what this enables is the ability to create much more complex scenarios more easily and focus on high-level behaviors. Of course, the low-level causal relationships that one would expect still exist, such as the consequences if two cars collide. In contrast, in AirSim only low-level controls exist and it would be necessary to define the complete trajectory of every vehicle at every time point, which is laborious.\n\nHighlight the contributions earlier:\n\nWe appreciate the suggestion regarding moving the contributions earlier in the text. We agree that this would help. We will bring a summary of the main points to the first paragraph to help situate the paper and provide more context for Fig. 1.\n\nRemove the description of the data from the discussion of NS-DR:\n\nThis is another good suggestion. We will remove the description of the CLEVRER data, as that is not particularly relevant to our experiments, and focus on the explanation of the method.\n\n Global/general Response:\n```\nWe would like to thank the reviewers for their positive and constructive feedback. We are glad they felt that the CausalCity project provided a novel, useful and welcome addition to the community. Below we have responded to each reviewer with the changes we will make to the paper to address their comments.\n```\n", "role": "user" }, { "content": "I acknowledge the authors' response. It adequately addresses the points I raised. I continue to recommend this work positively.", "role": "assistant" } ]
YWRhER626PX
CLeaR 2022 Conference
Reviewer_VwpM
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_VwpM, and the following are the paper you need to review:\n```<<YWRhER626PX>>```", "role": "user" }, { "content": "Review Content: Summary: This paper proposes a new high-fidelity simulation environment built on the AirSim software. This simulation aims to provide a true-to-life scenario, with a safety-critical context, for causal learning algorithms. They showcase the use of their simulation on the tasks of trajectory prediction and causal discovery and create a baseline with three existing algorithms. The simulation revolves around vehicles driving in an intersection with various controllable factors such as camera view, traffic lights, weather, time of day, buildings, lamps, road works, etc… Most importantly, the simulation features vehicles equipped with an AI agent that can be controlled through high-level directives such as turning at the intersection, merging, etc… The ability to control all these factors enables evaluating models on counterfactual scenarios, making the simulation much richer than others that generate a static dataset. \nMain review: This paper builds on AirSim, a simulator for drones and cars in a photo-realistic setting, and creates a driving environment with multiple autonomous agents controlled by high-level directives. The main contribution is the agency of the vehicles and a concrete implementation of the AirSim software in a causal learning setting, along with some baselines. This is an original simulation environment and it differs from others by its ability to control different aspects of the simulation, thus enabling counterfactual evaluation. The related works are discussed and compared clearly. \n\nOne of the main challenges in causal learning is the lack of a comprehensive and popular dataset on which causal learning algorithms are all tested to measure progress in the field. One of the roadblocks to such a dataset is that contrary to classic machine learning, causal learning is all about interventions and counterfactual reasoning. A static dataset can hardly cover all the possible scenarios, making simulation a powerful tool to test methods on. 
However, complex and realistic simulations are hard to create, therefore many simulated environments are often simplistic synthetic examples that can’t guarantee good performance in real-life scenarios. This work is important in that it tries to bridge this gap, and provides a realistic setting all while maintaining the ability to control many aspects of the simulation, for example, confounders. \n\nThe website presenting the simulation is well done and the examples and figures give a good idea of what the simulation is about. However, I found the code of the simulation itself to be a little under-documented and not very clear. For instance, I have not found anywhere the JSON files that form the basis of the scenarios so I have no idea how they are structured or what is expected as input. The paper talks about controlling weather and time of day etc… but it is not clear where one should start to achieve that. I realize that most of the code to do this lies in the AirSim documentation. However, it would be nice to have pointers to the commands that allow changes related to confounders and vehicle behavior. Of course, the code in itself is supplementary material to the paper, and I am sure that it is a work in progress and that the documentation could be clarified even by other people in the community and this could serve as a stepping stone to a more user-friendly simulation platform. I have not tried to reproduce the simulation so I cannot speak to how straightforward it is. \n\nThe paper itself is well organized and I have found it quite a pleasure to read. The motivation is very clear and the different simulations are well illustrated. Overall the main idea of the paper resonates easily with the interested reader. \n\nI am excited to play around with it as I think it has some potential! I hope the authors will make the code more accessible to people unfamiliar with AirSim.\nRating: 8 accept, good paper\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response.\nThank you for the helpful comments and encouraging review.\n\nProvide more examples of how to create the JSON config files:\n\nThank you for bringing this to our attention. We have added a script to the Github repo (https://github.com/causalcity/causalcity.github.io/blob/main/code/scenarioGenerator.py) showing how to create scenarios and have provided a greater explanation of how to construct the JSON files in the Github repo readme (https://github.com/causalcity/causalcity.github.io).\n\n\nProvide examples of how to control confounder variables:\n\nThe reviewer is correct: weather, road conditions, traffic signals and time of day could all be examples of confounders in the simulation. We will add examples of how to control these confounder variables directly into our documentation. Some of the confounders will only directly impact the visual appearance of the scene (e.g., time of day), while others will directly impact the movement of the vehicle (e.g., traffic signals). We will clarify which applies in each case.\n\n Global/general Response:\n```\nWe would like to thank the reviewers for their positive and constructive feedback. We are glad they felt that the CausalCity project provided a novel, useful and welcome addition to the community. 
Below we have responded to each reviewer with the changes we will make to the paper to address their comments.\n```\n", "role": "user" }, { "content": "Title: Acknowledgement\nI thank the authors for addressing my concerns!", "role": "assistant" } ]
YWRhER626PX
CLeaR 2022 Conference
Reviewer_di38
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_di38, and the following are the paper you need to review:\n```<<YWRhER626PX>>```", "role": "user" }, { "content": "Review Content: Summary: The authors introduce a high-fidelity simulation environment for driving scenarios (CausalCity) that can produce benchmark datasets for evaluating causal discovery and counterfactual reasoning algorithms. The vehicle objects have agency in navigating the environment (e.g., avoiding collisions, stopping at traffic lights), which enables the researcher to focus on defining high-level actions (interventions) to be studied.\nMain review: One important limitation when developing causal inference algorithms is the lack of real-world datasets for which the ground truth causal relations are known, which is why I think the CausalCity simulation environment is a welcome addition. It seems to me that the authors have managed to design an engine that can simulate relatively complex and reasonably realistic scenarios. The most novel aspect of this simulation, as far as I understand, is the addition of low-level _agency_ to the vehicle objects, which allows the simulation designer to focus on the high-level actions that the vehicles are supposed to take (merging lanes, turning). The paper is very well written and virtually error-free.\n\nAs the authors themselves admit, this environment is limited to driving tasks. Another limitation of the present work is that running the simulation code seems to require a bit of toil. The GitHub page has a rather nice presentation, but I would have liked it if the authors had included a dataset to be used directly in the code repository. I could not find such a dataset there. \n\nOverall, I think the authors have proposed a handy simulation environment that can be used to create more realistic synthetic datasets, which in turn can help toward the design of causal inference algorithms.\n\nMinor comments:\n- page 9, Subsection 4.2: \"highlevel\" -> \"high-level\", \"we create dataset\" -> \"we create datasets\"\n- page 10, last line before Discussion section: \"In 5\" -> \"In Fig. 
5\"\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response.\nMake the links to the dataset easier to find/add them to the Git repo:\n\nWe had links to the datasets on the project page (https://causalcity.github.io/dataset.html) these are hosted in a cloud storage account as they are too large to put in the Github repo. However, we will add the link to the Github readme, as we think this would help - https://github.com/causalcity/causalcity.github.io/blob/main/README.md.\n\n\nMinor comments: \n\nThank you for bringing those typos to our attention. We have corrected them in the manuscript.\n\n Global/general Response:\n```\nWe would like to thank the reviewers for their positive and constructive feedback. We are glad they felt that the CausalCity project provided a novel, useful and welcome addition to the community. Below we have responded to each reviewer with the changes we will make to the paper to address their comments.\n```\n", "role": "user" }, { "content": "Title: Acknowledgment\nThank you to the authors for addressing my few concerns.", "role": "assistant" } ]
ZJRRwV4lCLz
CLeaR 2022 Conference
Reviewer_eiJx
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_eiJx, and the following are the paper you need to review:\n```<<ZJRRwV4lCLz>>```", "role": "user" }, { "content": "Review Content: Summary: The paper discusses common misusage of causal tools and language in the machine learning literature, clarifying certain concepts and pointing out important flaws.\nMain review: Let me preface my review by saying that I'm not an expert in NLP nor philosophy of causality (or any kind for the matter). I do however consider myself pretty well versed in machine learning and its intersection with causality, and found this paper very useful.\n\nThe paper clarifies many definitions present in the potential outcomes theory and the assumptions that they entail. Furthermore, the authors show how other papers in the ML / NLP literature misuse these terms and implicitly or subtly fail to satisfy the assumptions of the methods they employ. At the very core of it, it shows how people in the machine learning literature constantly misuse the term \"causal\" when making claims, particularly when no manipulation can be imagined. Of particular importance is the fact that using potential outcomes algorithms without satisfying SUTVA or strong ignorability conditions ( similar to \"each unit be potentially exposable to any of the causes\") can result in wrong claims, which is seen in machine learning very often. The examples are very easy to understand and it points to very good literature for further clarification.\n\nThe paper is very well written, is correct, and extremely precise. It clarifies a lot of misuses and I personally learnt a lot from it. I would love to see a journal version of this paper with more examples of misuse / false claims of causality in the ML literature.\nRating: 9 strong accept, excellent work\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer eiJX\nThanks for reviewing our paper and for your comments. We appreciate the thoughtfulness of your comments and are glad that you find the article useful in clarifying causal principles. 
We are currently preparing a manuscript with a detailed case study on drawing causal inference using textual data, and we hope that we may further clarify important causal principles in the context of a detailed example. ", "role": "user" }, { "content": "I appreciate the authors' efforts in addressing my concerns. It adequately addresses the points I raised. I remain supportive of my positive recommendation for this paper.", "role": "assistant" } ]
ZJRRwV4lCLz
CLeaR 2022 Conference
Reviewer_aLNX
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_aLNX, and the following are the paper you need to review:\n```<<ZJRRwV4lCLz>>```", "role": "user" }, { "content": "Review Content: Summary: This paper examines the role of textual data as study units when conducting causal inference by drawing parallels between human subjects and organized texts. This paper expounds the main concepts and principles of causal inference, and reveals some vague concepts and fallacies. This paper discusses two strategies in order to facilitate better framing a causal query. \nMain review: Advantages:\n1)\tThe two strategies proposed sound reliable.\n2)\tThis paper provides a wealth of cases to illustrate points.\n3)\tThe relevant work provided in this paper is sufficient.\n\nDisadvantages:\n1)\tSome concepts lack formal definitions, for instance, the \"Study Unit\".\n2)\tThe strategy proposed in this paper lacks formal expressions and illustrations, which makes the strategy difficult to understand.\n3)\tAlthough the strategy proposed in this paper is claimed to be effective in some cases, there is a lack of experiments to prove its effect.\n4)\tOnly two strategies are proposed but the specific implementation is lacking, which makes the contribution may be insufficient\n5)\tThe presentation of this paper is not clear enough and lacks pictures to express opinions intuitively.\nRating: 4 possibly reject, but has redeeming facets\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer aLNX\n\nThanks for reviewing our paper and for your comments. We appreciate the time and effort you put in reviewing, and we found many of your points helpful in revising our paper to be clearer. However, we feel that there might have been a confusion of presentation that led to your comments that (1) the strategies proposed are difficult to understand, and (2) the contributions are insufficient. We have revised our paper to increase its clarity when presenting definitions and introducing those strategies. We hope it addresses your concerns. Please see more details in our point-to-point responses below.\n \n\n> 1. 
Some concepts lack formal definitions, for instance, the \"Study Unit.\"\n\nThanks for bringing this to our attention. I n our revision, we will add a formal definition of \"study unit\": a study unit is an entity subjected to an intervention (often, but not necessarily, independently of other units). We currently give two examples of study units in our article: human subjects (e.g., patients) as often encountered in epidemiology, biomedical sciences, and social sciences, and textual data (e.g., conference papers) as in much recent research at the intersection of machine learning, NLP, and causality. In our revision, we will highlight these two examples of study units.\n \n> 2. The strategy proposed in this paper lacks formal expressions and illustrations, which makes the strategy difficult to understand.\n \nThank you for raising this question. We have revised our flow when introducing definitions. We would also like to take this opportunity to summarize the two strategies and explain their practical implications, as will be reflected in our revision.\n \nFirst, instead of imagining manipulating some immutable aspects of textual data, imagine manipulating audiences’ perception of this aspect. This strategy has proven successful in many applications of causal inference for decades. By articulating the timing of the \"treatment,\" researchers can differentiate between \"pre-treatment\" and \"post-treatment\" variables: while \"pre-treatment\" variables need to be adjusted for, \"post-treatment\" variables should not; in fact, \"post-treatment\" variables are often \"surrogate markers\" for the main endpoint or \"mediators\" which mediate a treatment’s effect. \n \nSecond, empirical researchers are advised to shift from one trait to a collection of traits. For instance, if an NLP researcher is interested in how an article’s politeness affects some prespecified outcome, one should treat \"readers’ perception of politeness\" as a general causal variable encompassing multiple quantifiable traits. The practical implication of this strategy is that, by clearly stating the constituent parts of the intervention, empirical researchers can differentiate between \"causal variables\" and \"pre-treatment variables.\" \n \nTo reliably estimate a causal effect, two strategies described above help frame a causal query, formalize the quantity to be estimated (i.e., the estimand), and assign a role to each variable collected (i.e., causal variable or pre-treatment variable or post-treatment variable). Researchers may then use their favorite estimating strategies such as matching, weighting, outcome modeling via structural equations, and/or a combination of these strategies, to obtain an estimate for the appropriate estimand. In our revised version, we will make these points clearer and discuss the practical implications of the proposed strategies in more detail.\n \nLastly, we need to mention that both strategies are known to some degree in classical causal inference literature and are state-of-the-art; we are not claiming that we invented them out of the blue. Our main contribution is to identify cases/papers where there are ambiguities/pitfalls framing the causal query and carefully apply these principles to drawing causal conclusions from textual data.\n \n\n\n> 3. Although the strategy proposed in this paper is claimed to be effective in some cases, there is a lack of experiments to prove its effect.\n \nOur article is a perspective piece and not a theoretical/empirical paper. 
However, we do think that the article is a good fit for the conference, as our article aims to clarify \"causal principles underlying NLP.\" While new methods and algorithms are valuable, we feel that clearly stating the foundations/assumptions/estimands is of particular interest for causal inference problems. After all, methods and algorithms are meaningful only when researchers have a sound understanding of what questions these methods and algorithms are going after.\n \n> 4. Only two strategies are proposed but the specific implementation is lacking, which makes the contribution may be insufficient\n \nIn this article, we are interested in estimation problems, mostly because this is the primary focus in a lot of recent work at the intersection of NLP and causality; see, e.g., the review paper by Feder et al. (2021) and Keith et al. (2020). Our main contribution in the article is to articulate how to properly frame a causal query, including properly assigning a role to each variable. After figuring out the estimand, the timing of the treatment, causal variables, and pre-treatment variables, researchers may proceed with their favorite estimating strategies, e.g., matching, weighting, outcome modeling via structural equations. Implementation of these estimation strategies is standard and can be found in many off-the-shelf packages/libraries.\n \n> 5. The presentation of this paper is not clear enough and lacks pictures to express opinions intuitively.\n \nWe have revised our paper based on your comments and those from other reviewers, and hopefully, the paper is clearer (please see our per-point responses above). Please let us know which part in addition you think is not clear enough or less intuitive. We are happy to clarify them here and revise our paper accordingly (including adding pictures if applicable).\n \n\n### References\n\nAmir Feder, Katherine A. Keith, Emaad Manzoor, Reid Pryzant, Dhanya Sridhar, Zach Wood-Doughty, Jacob Eisenstein, Justin Grimmer, Roi Reichart, Margaret E. Roberts, Brandon M. Stewart, Victor Veitch, and Diyi Yang. Causal inference in natural language processing: Estimation, prediction, interpretation and beyond, 2021.\n\nKatherine A Keith, David Jensen, and Brendan O’Connor. Text and causal inference: A review of using text to remove confounding from causal estimates. arXiv preprint arXiv:2005.00649, 2020.\n", "role": "user" }, { "content": "Thanks for the response provided by the authors. It satisfactorily addresses most of my concerns, and I will likely adjust the score higher.", "role": "assistant" } ]
ZJRRwV4lCLz
CLeaR 2022 Conference
Reviewer_JrEN
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_JrEN, and the following are the paper you need to review:\n```<<ZJRRwV4lCLz>>```", "role": "user" }, { "content": "Review Content: Summary: The authors discuss the application of causal inference methods to textual data, and propose two strategies for framing causal questions in a more precise manner: (i) shifting from immutable traits to perceptions of them, and (ii) shifting from abstract concepts to its constituent parts. They discuss how these strategies address the pecularities pertaining to textual data and how they can help in more clearly defining causal inference problems. \nMain review: I would like to start the review by mentioning that this article reads more like a perspective piece than an original research article. This is perfectly fine as far as I am concerned, but I am not sure if the submission then fits this conference's call for papers.\n\nI find that this article is of decent quality and provides a useful perspective on how causal inference methodology can be applied to textual data. The strategies proposed are sound and have value beyond the textual domain. As far as I can tell, the authors manage to include a reasonably comprehensive overview of the current state-of-the-art causal inference methods and applications for textual data. Finally, the paper is very well written and structured, and virtually error-free.\n\nThe scope of the article is perhaps a bit limited. The authors focus on causal effect estimation via covariate adjustment, and do not touch upon tasks such as causal structure learning. It might be interesting to include more facets of causal inference into the discussion.\n\nOverall, I think this article is an interesting perspective on the current and future use of causal inference methodology on textual applications, and would be a reasonable contribution to the conference.\n\nMinor comments:\n- page 10: \"articulate treatment or causal variable\" -> \"articulate the treatment or causal variable\"\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer JrEN\n\nThanks for reviewing our paper and for your comments. 
We are glad that you found our paper interesting and appreciate your suggestions and comments. Thanks for pointing out the typo, we have corrected it in the revision. Please find below our per-point responses, and we hope they address your concerns.\n\n### Goodness-of-fit to CLeaR’s call for papers.\n\nOur motivation for this paper is two-fold. We have observed an increasing trend (especially the NLP community) of exploring the intersection of machine learning models and causal inference. On one hand, we think many of these pioneering attempts would potentially exert very significant impact towards not only the NLP domain, but also to artificial general intelligence; on the other hand, however, we feel that some work tend to use \"causality\" and \"causal inference\" too liberally and sometimes without enough justifications for causal assumptions/principles. The goals of our paper are thus\n\n1. to take several concrete examples and illustrate what causal queries might be less well-formulated, and why we should watch out for them; and\n\n2. how proper causal queries could be framed while honouring assumptions/existing wisdoms of causal inference.\n \nIn this regard, we wish to respond to the call on the “methodology” and “applications” relevant to drawing causal inference using textual data. We hope our paper could be of some help for practitioners to “build machine learning on causal principles” by pointing out potential pitfalls and danger zones, and provide mitigation strategies.\n \n### Scope of the article.\n\nThank you for raising this concern as well as pointing out potential topics! We agree with you that in this paper we only used the problem of estimating causal effects as examples while not touching upon other problems such as causal structure learning as you mentioned. The reasons for our choice are mainly as follows, and we have revised our paper to highlight these and to discuss the applicability of the them to other causal problems.\n\n1. Estimating causal effect is perhaps one of the most ubiquitous problems in causal inference, contemplated by practitioners of every school of causal formulations, whose problem statement is easy to understand. We think by using it as an example, it will be easier and clearer to illustrate potential pitfalls, and make it more available to a broader range of readers. \n\n2. Causal effect estimation is the topic that most articles at the intersection of NLP and causality are concerned and familiar with, hence we chose it as our primary focus.\n\n3. The potential pitfalls and mitigation strategies we discussed were based on the causal model itself, thus relatively less dependent on the specifics of the problem itself: no matter whether one wants to estimate the causal effect or perform causal discovery, regardless of which school of causal models is being used, fundamental questions such as mutable vs. immutable traits, perceptions of treatments, validations of the assumptions (SUTVA, positivity, strong ignorability, etc) need to be addressed. For example, in causal discovery and causal structure learning, by the same argument, researchers are advised to think twice if a certain immutable trait (e.g., race/ethnicity) is found to be a “causal” variable. 
Another implication of practical importance for causal discovery research is that researchers may need to study how to discover a collection of variables as “causal variables,” e.g., the number of theorems in a paper *plus* the number of equations *plus* references to theory papers as in our running example.\n \n\nIn this sense, we think our discussions can hopefully also benefit practitioners interested in other forms of causal problems such as causal structure learning.\n\n", "role": "user" }, { "content": "Title: Acknowledgment\nThank you to the authors for addressing my few concerns.", "role": "assistant" } ]
avZrtg416yr
CLeaR 2022 Conference
Reviewer_rX9F
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_rX9F, and the following are the paper you need to review:\n```<<avZrtg416yr>>```", "role": "user" }, { "content": "Review Content: Summary: The paper considers causal discovery via the post-nonlinear noise model and contributes a novel autoencoder-estimation technique for the multivariate case by applying the same causal order and pruning procedure as proposed for the ANM model.\nMain review: The paper follows a straightforward approach: The authors extend the autoencoder-based estimation of the bivariate PNL model by Uemura and Shimizu (2020) to the multi-cause case and then apply a causal order and pruning procedure as proposed for the ANM case in Peters et al. (2014). To learn y = g(f(x) + e) they use an auto-encoder with loss terms for the independence between x and e, here max_i(HSIC(xi, e)) and the reconstruction error of y. The approach is applied to a single (!) DAG model and real-world datasets. Overall a weak accept.\n\n\n# Pros:\n\n* (Potentially) the first estimator for the multi-cause PNL\n* Several steps to aid the stability of estimation are undertaken\n\n# Cons:\n\n* Numerical experiments extremely limited (owing to extreme computational demands)\n* Few limitations / weaknesses discussed\n* Omitted proofs despite Supplementary Material\n* No code provided\n\n# Originality\n\nThe authors state that even the publication of the PNL model was already in 2012 there hasn't been work on multi-cause estimators. I found another work (\"Post-nonlinear Causal Model with Deep Neural Networks\" by Chung etal, 2019) that also addresses a multi-cause PNL with neural networks. Hence, the authors would have to explain the theoretical difference and/or compare them numerically.\n\n# Significance\n\nThe PNL model is the to-date most general among the class of restricted structural causal models and may in many cases be the only one applicable given complex datasets. A novel estimator is, hence, an important contribution.\n\n# Quality\n\nThe introduction and preliminaries sufficiently cover the problem setting. The proposed method straightforwardly follows from the bivariate previous work and the causal order and pruning approach from the ANM. These are sufficiently described. 
For the only Theorem the authors state \"The “if“ part is obvious from the definition. The converse can be easily verified in the same manner as the ANM case (see Peters et al., 2014, A. 15).\". Since there are no space problems here, I would provide the proof for completeness. There are also no guarantees discussed for how well the autoencoder may fit in a general setting.\n\nThe authors mention the issue of stabilization in the Conclusions, but could they elaborate on this also in the Discussion?\n\nMy main concern is about the numerical comparison. In fact only one single DAG with a few SCMs is considered and the proposed approach compared to the RESIT estimator of ANM and DirectLinGAM. As mentioned above, there seems to be another relevant neural network estimator that could be considered here.\n\nThe main issue seems to be the extreme computational complexity of the method. Could the authors discuss how this can be alleviated? Also, except for computational complexity no weaknesses are discussed.\n\nI would also suggest to provide code somewhere.\n\n\n# Clarity\n\nThe style of writing suffers a bit from English grammar issues.\n\n\n# Further comments\n\n* \"Causal discovery methods assume that causal structures follow structural equation models\"\nWhat about constraint/score-based methods. These do not assume one particular SEM, rather a very general class. I suggest to frame the problem a bit more general here.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer rX9F\nThank you for your review and insightful comments.\n\nWe would like to answer your questions and comments in the following. We will reflect the answers in our camera-ready version.\n\n> The authors state that even the publication of the PNL model was already in 2012 there hasn't been work on multi-cause estimators. I found another work (\"Post-nonlinear Causal Model with Deep Neural Networks\" by Chung etal, 2019) that also addresses a multi-cause PNL with neural networks. Hence, the authors would have to explain the theoretical difference and/or compare them numerically.\n\nThank you for showing us another existing work. We found the PDF, Y. Chung et al, \"Post-nonlinear Causal Model with Deep Neural Networks\", class report, 2019. If it is the wrong one, please kindly let us know.\n\nThe key difference is that our method aims to estimate causal DAGs among variables while the existing one does not. The existing method estimates PNL models with multiple causes and a single effect by neural networks. However, as the authors discuss in Conclusion, it does not estimate causal DAGs from these results and additional algorithms/methods are required to achieve the DAG estimation.\n\n\n> For the only Theorem the authors state \"The “if“ part is obvious from the definition. The converse can be easily verified in the same manner as the ANM case (see Peters et al., 2014, A. 15).\". Since there are no space problems here, I would provide the proof for completeness.\n\nWe omitted the exact proof of this theorem because it will be long even though the most part is the same as [Peters 14a].\n\nWe would show a more detailed sketch in the following.\n\n\"If\" part:\n\nLet x be a sink variable and X' be the other variables. 
From the definition of the multivariate PNL model (1), there exists a PNL model such that x=g(f(X')+e), X'⟂e.\nNote that the function f can simply ignore the non-parent variables of x.\n\n\"Only if\" part:\n\nSuppose there exists a PNL model with an effect y that is not a sink on the true DAG G (+). Since y is not a sink, y has children on G. Let z be a sink on the subgraph consisting of y's children and D be descendants of z. Since D does not have children of y, D⟂y | S∪{z} holds. Therefore, from the assumption (+), there exists a PNL model y=g'(f'(S,z)+e'). On the other hand, since z is a child of y, there exists a PNL model z=g(f(S,y)+e).\n\nGiven the situation in which S was observed, this contradicts the identifiability of the bivariate PNL model.\n\n\n> There are also no guarantees discussed for how well the autoencoder may fit in a general setting.\n\nIn AbPNL, the autoencoder is used to impose the invertibility of the outer function g of the PNL model. Specifically, two neural networks representing g and g^{-1} are arranged in series and trained so that each becomes the inverse of the other by minimizing the reconstruction error. This is the standard use of an autoencoder in neural networks. Although we modified AbPNL for our method, the autoencoding part works in the same manner as in the original AbPNL.\n\n> The authors mention the issue of stabilization in the Conclusions, but could they elaborate on this also in the Discussion?\n\nThank you for your suggestion. We will set up a Discussion section and add a more detailed discussion.\n\nAs stated in the paper, we estimate one PNL model by training the corresponding neural networks several times to stabilize the results. Therefore, stabilizing the training is important not only for improving the performance but also for reducing the computational cost. One typical approach is to change the optimizer, although there is a trade-off between stability and speed. Another possible option is to add a regularization term to the loss. To use this idea, we need to carefully consider the impact of changing the loss, because its value is used to compare models in the sink identification.\n\n> My main concern is about the numerical comparison. In fact only one single DAG with a few SCMs is considered and the proposed approach compared to the RESIT estimator of ANM and DirectLinGAM. As mentioned above, there seems to be another relevant neural network estimator that could be considered here.\n\nIn the experiment in Section 4.1, we randomly generated 100 linear and 100 nonlinear problems, as stated in the caption of Table 1. In this procedure, not only the causal equations but also the DAG structures are generated randomly. The two examples in Fig. 2 (coincidentally) have the same DAG structure.\n\nWe apologize for the insufficient and misleading description. To make this clear, we will add descriptions in the main text and change the examples in Fig. 2.\n\n\n> The main issue seems to be the extreme computational complexity of the method. Could the authors discuss how this can be alleviated? Also, except for computational complexity no weaknesses are discussed.\n\nThank you for raising this important point.\nThe computational complexity of the sink identification is mainly due to its iterative manner and the repeated model estimations. In a single iteration, we compare k models for the given k variables. 
Therefore, let n be the number of variables, a total of n+(n-1)+...+2=n(n-1)/2-1 model estimations are required, which is the same as RESIT and DirectLiNGAM.\nAdditionally, in our method, we conducted the training of neural networks several times for one model to stabilize the results. Consequently, the total number of training is t*(n(n-1)/2-1), where t is the number of training trials per one model.\nOne effective approach to reduce the computational time is parallelization. Since all the trainings in one iteration can be performed separately, we can reduce the time by parallelizing them, which we used in the experiments. For the time of a single training, the neural networks in our method are used to represent simple scalar functions and do not need to have complex structures. Therefore, the training time is much shorter than the general deep models such as image recognition and language processing. However, as stated in the above response, we think the instability of training is the most important point to be improved.\n\nWe conducted additional experiments on larger problems of n=15. The following results show the performance of 10 generated problems due to the time limitation. We will increase the number of problems to evaluate the performance in more general and put the results in the camera-ready version with additional analyzes of the results.\n\n| | d_edit (L) | N_rev (L) | d_edit (N) | N_rev (N) |\n| ------------- | -----------:| -----------:| -----------:| -----------:|\n| AbPNL(0.95) | 11.9 (5.43) | 2.40 (1.71) | 19.2 (4.49) | 3.50 (1.27) |\n| RESIT(linear) | 2.80 (2.04) | 0.00 (0.00) | 39.2 (12.4) | 5.20 (1.93) |\n| RESIT(gam) | 3.3 (1.57) | 0.00 (0.00) | 29.9 (9.17) | 3.90 (2.23) |\n| RESIT(gp) | 68.0 (8.01) | 1.60 (1.17) | 32.5 (7.69) | 4.10 (1.52) |\n| DirectLiNGAM | 2.5 (1.78) | 0.00 (0.00) | 31.8 (9.54) | 6.70 (2.71) |\n\n(L): Results on 10 linear problems, (N): Results on 10 nonlinear problems.\n\n> I would also suggest to provide code somewhere.\n\nThank you for the suggestion.\nWe also believe code availability is important and are working on it.\n\n> Clarity\n> The style of writing suffers a bit from English grammar issues.\n\nWe apologize for the grammar issue.\nWe will carefully check the paper.\n\n> Further comments\n> \"Causal discovery methods assume that causal structures follow structural equation models\" What about constraint/score-based methods. These do not assume one particular SEM, rather a very general class. I suggest to frame the problem a bit more general here.\n\nThank you for the suggestion.\nWe will revise the abstract to state the problem in more general.\n", "role": "user" }, { "content": "Thank you for the detailed response from the authors. It satisfactorily addresses the major issues I had. I continue to recommend this work positively.", "role": "assistant" } ]
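The estimation idea discussed in the thread above — an autoencoder that enforces invertibility of the outer function g while penalizing dependence between the causes and the recovered noise via max_i HSIC(x_i, e) — can be sketched as follows. This is not the authors' implementation; the network sizes, the RBF kernel bandwidth, and the loss weight `lam` are illustrative assumptions.

```python
import torch
import torch.nn as nn

def rbf_gram(x, sigma=1.0):
    # Gram matrix of an RBF kernel; x has shape (n, d).
    sq_dists = torch.cdist(x, x) ** 2
    return torch.exp(-sq_dists / (2 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    # Biased empirical HSIC estimate between paired samples x and y.
    n = x.shape[0]
    K, L = rbf_gram(x, sigma), rbf_gram(y, sigma)
    H = torch.eye(n, device=x.device) - torch.ones(n, n, device=x.device) / n
    return torch.trace(K @ H @ L @ H) / (n - 1) ** 2

class PNLAutoencoder(nn.Module):
    """Sketch of one multi-cause PNL mechanism y = g(f(x) + e) with invertible g."""
    def __init__(self, n_causes, hidden=32):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(n_causes, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.g = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.g_inv = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x, y):
        e = self.g_inv(y) - self.f(x)   # estimated noise, shape (n, 1)
        y_rec = self.g(self.f(x) + e)   # equals g(g_inv(y)): the autoencoding path
        return e, y_rec

def pnl_loss(model, x, y, lam=1.0):
    e, y_rec = model(x, y)
    rec = ((y - y_rec) ** 2).mean()                               # invertibility of g
    dep = max(hsic(x[:, i:i + 1], e) for i in range(x.shape[1]))  # max_i HSIC(x_i, e)
    return rec + lam * dep
```

Minimizing `rec` pushes g and g^{-1} towards being mutual inverses, while the HSIC term pushes the recovered noise towards independence from each cause; the relative weight `lam` would need tuning.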
avZrtg416yr
CLeaR 2022 Conference
Reviewer_k2Ax
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_k2Ax, and the following are the paper you need to review:\n```<<avZrtg416yr>>```", "role": "user" }, { "content": "Review Content: Summary: This paper considers the problem of learning causal structure from observed data generated by the post-nonlinear model. The authors first study the estimation of the PNL model with multi-parents by autoencoder strategy of Neural network. Then, the authors propose a two-step method to estimate the post-nonlinear model, including identifying causal order and removing redundant edges.\nMain review: The paper considers an essential problem of estimating causal structure from observation data. \\\nThe paper builds on prior work in the field of post-nonlinear models. The authors put some effort into estimating the whole graph when the data follow the PNL model.\\\nThe critical challenge is how to estimate PNL efficiently. The authors use neural networks to represent and optimize the two functions of PNL. \\\nThe theoretical analysis or experimental results are presented in a logical way.\n\nThe way of literately finding a sink is my only concern. This way will affect the performance of your method when the number of variables is large. It would be nice to discuss this issue in the main paper.\nRating: 8 accept, good paper\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer k2Ax\nThank you for reviewing our paper.\n\nWe would like to answer your qustion. We will reflect the reponse in our camera-ready version.\n\n> The way of literately finding a sink is my only concern. This way will affect the performance of your method when the number of variables is large. It would be nice to discuss this issue in the main paper.\n\nThank you for your suggestion. We will set up the Discussion section and add more detailed discussions.\n\nThe computational cost in the iterative sink identification grows rapidly as the number of variables increases. To be more precise, since the algorithm finds one sink in one iteration, it requires n-1, where n is the number of variables, iterations to identify the causal order. 
In a single iteration, we compare k models for the given k variables.\nTherefore, a total of n+(n-1)+...+2=n(n-1)/2-1 model estimations are required, which is the same as for RESIT and DirectLiNGAM. Additionally, in our method, we conduct the training of the neural networks several times for each model to stabilize the results. Consequently, the total number of training runs is t*(n(n-1)/2-1), where t is the number of training trials per model.\n\nOne effective approach to reduce the computational time is parallelization. Since all the trainings in one iteration can be performed separately, we can reduce the time by parallelizing them, which is what we did in the experiments. Regarding the time of a single training run, the neural networks in our method represent simple scalar functions and do not need complex structures. Therefore, the training time is much shorter than for typical deep models used in, e.g., image recognition and language processing.\n\nWe conducted additional experiments on larger problems with n=15. Due to the time limitation, the following results show the performance on 10 generated problems. We will increase the number of problems to evaluate the performance more generally and put the results in the camera-ready version together with additional analyses.\n\n| | d_edit (L) | N_rev (L) | d_edit (N) | N_rev (N) |\n| ------------- | -----------:| -----------:| -----------:| -----------:|\n| AbPNL(0.95) | 11.9 (5.43) | 2.40 (1.71) | 19.2 (4.49) | 3.50 (1.27) |\n| RESIT(linear) | 2.80 (2.04) | 0.00 (0.00) | 39.2 (12.4) | 5.20 (1.93) |\n| RESIT(gam) | 3.3 (1.57) | 0.00 (0.00) | 29.9 (9.17) | 3.90 (2.23) |\n| RESIT(gp) | 68.0 (8.01) | 1.60 (1.17) | 32.5 (7.69) | 4.10 (1.52) |\n| DirectLiNGAM | 2.5 (1.78) | 0.00 (0.00) | 31.8 (9.54) | 6.70 (2.71) |\n\n(L): Results on 10 linear problems, (N): Results on 10 nonlinear problems.\n", "role": "user" }, { "content": "Title: Thank you for your response\nThank you for addressing my concerns with this response. I will keep my overall score.", "role": "assistant" } ]
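The counting argument in the response above (k candidate models per round, with k shrinking from n down to 2, and independent fits within a round that can run in parallel) corresponds to a loop of the following shape. This is only a structural sketch: `fit_pnl_and_score` is a hypothetical placeholder for training the PNL networks and scoring residual independence, not part of the authors' code.

```python
import numpy as np

def fit_pnl_and_score(X, target, causes):
    """Hypothetical placeholder: fit target = g(f(causes) + e) and return a
    dependence score between the causes and the estimated noise e
    (lower = more independent). A real implementation would train the
    AbPNL-style networks here, possibly several times for stability."""
    return np.random.random()

def causal_order_by_sink_identification(X):
    """RESIT-style ordering: repeatedly pick the variable whose estimated
    noise looks most independent of the remaining variables as a sink."""
    remaining = list(range(X.shape[1]))
    order, n_fits = [], 0
    while len(remaining) > 1:
        scores = {}
        for j in remaining:                              # k fits in a round with k variables;
            causes = [i for i in remaining if i != j]    # the fits are independent and
            scores[j] = fit_pnl_and_score(X, j, causes)  # can be run in parallel
            n_fits += 1
        sink = min(scores, key=scores.get)
        order.append(sink)                               # order is built from sink to source
        remaining.remove(sink)
    order.append(remaining[0])
    return order[::-1], n_fits                           # n_fits = n + (n-1) + ... + 2
```

Pruning redundant edges given the recovered order would then be a separate step, as described in the paper.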
avZrtg416yr
CLeaR 2022 Conference
Reviewer_wyQs
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_wyQs, and the following are the paper you need to review:\n```<<avZrtg416yr>>```", "role": "user" }, { "content": "Review Content: Summary: This paper extended the AbPNL method developed for bivariate causal discovery to process multivariated case. It first borrowed a widely used causal order learning method, and combined it with multivariate AbPNL to learn a causal order. An edge pruning method is then used to get a DAG based on the causal order.\nMain review: The paper is clearly written and organized. But I have the following concerns:\n1. Novelty: the paper has some small innovation points, but can not give us valuable enlightenment. It is more a combination of some existing methods/tricks with tiny adaption.\n2. Theoritical foundation: the paper lacks a systematic proof of model identifiability and inference consistency.\n3. Experiments: For the experiments on both simulation and real data, the datasets are too simple with just several variables, how the performace will be if running on large datasets? The propsed method relies on learning many DNN models to evaluate data fitting, however, when the variable and sample scale are large, the time efficiency and hyperparameter adjusting will be a problem. The author claimed the proposed method is a general one which can process both linear and nonlinear relations, however, results in table I do not support such cliam. For the accuracy showed in table 2, how much is from the deep models used in the new method, and how much is from the causal order learning?\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n", "role": "assistant" }, { "content": "Title: Response to Reviewer wyQs\nThank you for reviewing our paper and the insightful comments.\n\nWe would like to answer your questions and commens in the following. We will reflect the answers in our camera-ready version.\n\n> Novelty: the paper has some small innovation points, but can not give us valuable enlightenment. 
It is more a combination of some existing methods/tricks with tiny adaption.\n\nIn this paper, we aim to develop a new multivariate causal discovery method based on the PNL model. Despite its theoretical foundations, the practical estimation method for the multivariate PNL model is not well developed. We believe that our proposal increases the applicability of SEM-based causal discovery methods and promotes their use in real-world problems.\n\n> Theoritical foundation: the paper lacks a systematic proof of model identifiability and inference consistency.\n\nThe identifiability of the multivariate PNL model is already shown in the existing studies as stated in Section 2.2. We focused on the development of its estimation method in this paper.\n\nFor the inference consistency of RESIT, Theorem 34 in [Peters 14a] says, \"RESIT used with a consistent non-parametric regression method and an independence oracle is guaranteed to find the correct graph from the joint distribution.\" Since our method uses the iterative sink identification and the greedy pruning schema used in RESIT, the correct graph can be estimated if the consistency of the autoencoding part, i.e. the invertibility of the outer function, is guaranteed.\n\n> Experiments: For the experiments on both simulation and real data, the datasets are too simple with just several variables, how the performace will be if running on large datasets? The propsed method relies on learning many DNN models to evaluate data fitting, however, when the variable and sample scale are large, the time efficiency and hyperparameter adjusting will be a problem.\n\nThank you for the important comment on the scalability of our method. As you pointed out, the scale of the problem such as the number of variables has an impact on the performance. The iterative sink identification requires n(n-1)/2-1 model estimations, where n is the number of variables, and thus the computational time grows when the number of variables is large. Additionally, in our method, we conducted the training of neural networks several times for one model to stabilize the results, which further increases the time. Since all the trainings in one iteration can be performed separately, we used parallel computation to reduce the time in the experiments.\n\nFor the time of a single training, the neural networks in our method represent simple scalar functions and do not need to have complex structures. Therefore, the training time is much shorter than the general deep models such as image recognition and language processing. We think that hyperparameters such as network structures and the selection of optimizers are insensitive for the same reason and we used the same parameters used in [Uemura and Shimizu 20] throughout all the experiments.\n\nWe will add the Discussion section to the paper and put these discussions. Furthermore, we conducted additional experiments on larger problems of n=15. The following results show the performance of 10 generated problems due to the time limitation. 
We will increase the number of problems to evaluate the performance in more general and put the results in the camera-ready version with additional analyzes of the results.\n\n| | d_edit (L) | N_rev (L) | d_edit (N) | N_rev (N) |\n| ------------- | -----------:| -----------:| -----------:| -----------:|\n| AbPNL(0.95) | 11.9 (5.43) | 2.40 (1.71) | 19.2 (4.49) | 3.50 (1.27) |\n| RESIT(linear) | 2.80 (2.04) | 0.00 (0.00) | 39.2 (12.4) | 5.20 (1.93) |\n| RESIT(gam) | 3.3 (1.57) | 0.00 (0.00) | 29.9 (9.17) | 3.90 (2.23) |\n| RESIT(gp) | 68.0 (8.01) | 1.60 (1.17) | 32.5 (7.69) | 4.10 (1.52) |\n| DirectLiNGAM | 2.5 (1.78) | 0.00 (0.00) | 31.8 (9.54) | 6.70 (2.71) |\n\n(L): Results on 10 linear problems, (N): Results on 10 nonlinear problems.\n\n> The author claimed the proposed method is a general one which can process both linear and nonlinear relations, however, results in table I do not support such cliam.\n\nAs you pointed out, the results in Table 1 show that our method is superior to the other methods on nonlinear problems and inferior on linear problems. However, we believe that causal relations of complex real-world systems are not fit in such a simple linear model and the results on nonlinear problems are more important to evaluate the performance in practice.\n\n\n> For the accuracy showed in table 2, how much is from the deep models used in the new method, and how much is from the causal order learning?\n\nThe difference between the results of our method and RESIT suggests the effectiveness of using the deep models because they adopt the same causal order learning scheme. Although the difference between the values of d_edit is large, the difference is mainly caused by the performance of the pruning part as inferred from Fig. 3. On the other hand, the values of N_rev show the performance of causal order learning more directly. The difference suggests that our method could approximate the causal relations more precisely and thus, estimate the true causal order by adopting deep models.\n", "role": "user" }, { "content": "Thanks for the authors' clarification. It answers most of my concerns.", "role": "assistant" } ]
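For readers who want to reproduce the flavour of the randomly generated nonlinear problems mentioned in the responses above (random DAG structures plus randomly drawn PNL equations), a minimal generator might look as follows. The specific choices of inner function, outer function, and noise distribution are illustrative assumptions, not the exact settings used in the paper.

```python
import numpy as np

def sample_pnl_problem(n_vars=5, n_samples=500, edge_prob=0.5, seed=0):
    """Draw a random DAG and data from x_j = g_j(f_j(parents) + e_j)."""
    rng = np.random.default_rng(seed)
    # Strictly lower-triangular adjacency: adj[j, i] == True means i -> j.
    adj = np.tril(rng.random((n_vars, n_vars)) < edge_prob, k=-1)
    X = np.zeros((n_samples, n_vars))
    for j in range(n_vars):                                    # index order is a valid causal order
        parents = np.where(adj[j])[0]
        noise = rng.uniform(-1.0, 1.0, size=n_samples)         # non-Gaussian noise
        if len(parents):
            inner = np.tanh(X[:, parents] @ rng.normal(size=len(parents)))
        else:
            inner = 0.0
        X[:, j] = (inner + noise) ** 3                         # invertible outer function g
    return X, adj
```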
bDOU-vlPM8n
CLeaR 2022 Conference
Reviewer_ShZb
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_ShZb, and the following are the paper you need to review:\n```<<bDOU-vlPM8n>>```", "role": "user" }, { "content": "Review Content: Summary: The authors introduced the VIM for learning latent representations of objects and their transition function overtime from video. I find the paper clearly written and the experiments well executed, however, I think if the results can be put in context of the performance of other method, it would be more illustrative\nMain review: The reviewer have elementary familiarity with video prediction (graduate level textbook knowledge) and is fairly familiar with unsupervised representation learning.\n\nDetailed Comments:\n1)\tModules: I might have missed it, but what is the functional form of the modules (f_\\theta)? In the experiments? And what are the common/possible choices in general?\n\n2)\tWith regard to interpretability, I feel like many alternative ways can achieve similar level of interpretability (e.g. object tracking + unsupervised learning over the trace), so it might be useful to establish some benchmark or show that the module’s abstraction is better than some alternative.\n\n3)\tDynamic Transfer: I feel like this is the most important results since this illustrate the benefit of disentangle the underlying dynamic rules of the generation process. I think it would be useful to compare VIM with prior methods such as SCOFF and GSWM.\nRating: 8 accept, good paper\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Reply\nThank you for your encouraging feedback! We answer your comments below:\n\n* Form of the modules\n\nThe modules are implemented as 2-layer MLPs that take as input the previous slot representation $\\mathbf{h}_{t-1}$ and produce a candidate current slot representation.\n\n* Benchmarks for Interpretability\n\nThe main advantage of our approach is that i) it can model stochastic sequences and ii) does not require supervision. Approaches such as SCOFF or RIM do not require supervision but cannot model stochastic sequences since they are deterministic. On the other hand, most tracking approaches require some kind of supervision. 
We will think on how to establish a benchmark for stochastic and unsupervised object-centric sequence modelling that goes beyond the metrics of likelihood/ELBO.\n\n* Dynamic Transfer results\n\nFor the settings explored, SCOFF cannot perform dynamic transfer because it is a deterministic model and fails to model the sequences in our setting. See our Comparison to SCOFF and RIM comment.\n\nWe hope our answer helps resolve your comments and we would be happy to answer any further questions you might have.\n\n Global/general Response:\n```\nSCOFF proposes an architectural component to process a visual input such that it can be explained in terms $K$ object files (corresponding to slots in our model) and $M$ schemata (modules) processing those objects files. \nThey propose to test their architecture in several downstream tasks and one of them is video prediction. \nTo do so they train SCOFF on a next-step prediction task with an autoregressive formulation. \nRIM corresponds to a SCOFF with $M = 1$ schema (in other words, with a single module). At each time step $t$ the computations are conditioned on the previous frame. \n\nA key assumption in SCOFF and RIM is that the outcome is deterministic given previous frames. \nSimilar to SV2P and SVG-LP, we argue that this leads to blurry predictions in settings with stochastic dynamics, as the model is trying to capture the mean of all plausible future outcomes. \nTo overcome this limitation we propose to include the intuition behind SCOFF's architectural inductive biases in a variational framework, where the schemata would rather correspond to the different modes/rules of the stochastic dynamics of the environment.\n\nWe report in the table below the reconstruction MSE averaged over 5 time-steps sequences for SCOFF and RIM (with teacher forcing at each time step) and VIM (using its inference network) for our proposed random walk balls dataset. All models are trained with 4 slots and VIM and SCOFF are trained with 5 modules/schemata.\nWe expect SCOFF and RIM to fail to produce sharp samples since they assume that the future is deterministic given the previous frame, whereas we expect VIM to handle the multimodality of the future outputs. This is reflected in the Reconstruction MSE, where VIM obtains significantly better results.\n\n| Model | Reconstruction MSE | \n|--------------|-----------|\n| RIM | $91.7 \\pm 5.4$ |\n| SCOFF | $81.9 \\pm 1.0$ |\n| VIM (Ours) | $\\mathbf{4.2 \\pm 0.5}$ |\n\nAdditionally, we add [this visualization of a SCOFF reconstruction](https://imgur.com/2mMoIVp). We notice that, while SCOFF correctly identifies the different objects in the scene, it cannot reconstruct frames and instead predicts the mean of all the plausible future frames. \n\n\n```\n\n Global/general Response:\n```\nWe would like to thank all reviewers for their useful feedback. As a result, we have run additional baselines and revised some parts of our submission. Below we provide a summary of the changes:\n\n* Comparison to SCOFF and RIM\n\nWe have added RIM and SCOFF baselines to our main experiment. The results from this experiment can be found in the previous general comment, 'Comparison to SCOFF and RIM'. Both RIM and SCOFF are not able to model the stochastic sequences in our dataset, despite the visual simplicity. That is because SCOFF and RIM are deterministic models. We hope this experiment is useful to better understand the advantages of VIM and its performance compared to previous approaches in the literature. 
\n\n* Updated math for Section 3\n\nWe have revised the model formulation. Before we had inconsistencies and the random variables in our model were not clearly described. To summarize, our model has two random variables: the module selection variable $\\mathbf{r^k_t}$ and the additive update $\\mathbf{z^k_t}$. The module selection variable $\\mathbf{r^k_t}$ is discrete, and it is modeled with a Gumbel Softmax distribution. The additive update $\\mathbf{z^k_t}$ is a continuous variable that follows a gaussian distribution. The slot representation is denoted by $h^k_t$, and it is computed as $\\mathbf{h_t^k}$ = LayerNorm( $\\mathbf{h_{t-1}^k} + f_\\theta(\\mathbf{h}_{t-1}^k, \\mathbf{r}_t^k) + \\mathbf{z}_t^k) $. \n$h^k_t$ is thus a deterministic function of random variables. We have prior and posterior distributions on both random variables $\\mathbf{r^k_t}$ and $\\mathbf{z^k_t}$, and our ELBO is a function of these variables. We detail the updated math in [this image](https://postimg.cc/ZBJTNHTn). We will include the revisions in Section 3 of the main text. We hope this clarifies our model formulation.\n\n* Multiple runs and updated results\n\nWe have run multiple runs for our main experiment. Our model tends to converge to similar solutions with low variance, indicating that it is well optimized. There were only few runs where the model got stuck in a local optima where only a few modules are used and the reconstruction error is high. However, with careful hyperparameter tuning this effect can be avoided, specially when tuning the learning rate warmup and schedule, which was also important for slot attention to obtain good slot separation. Another important hyperparameter is the schedule for changing the length of the sequences that the model is trained on. \nWe will update the main results in Figure 2 and 3 where we used a better learning rate decay that resulted in overall better performance.\nAn updated Figure 3 can be found [in this link](https://imgur.com/AJsNHqi)\n\nWe hope these changes help resolve the concerns raised by the reviewers and we would be happy to provide additional clarifications if needed. \n```\n", "role": "user" }, { "content": "Thank you for the detailed response from the authors. It adequately addresses the points I raised. I reaffirm my positive endorsement for this paper.", "role": "assistant" } ]
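The updated transition h_t^k = LayerNorm(h_{t-1}^k + f_theta(h_{t-1}^k, r_t^k) + z_t^k) summarized in the general response can be sketched for a single slot-update step as below. The slot dimensionality, the number of modules, the temperature, and the toy inputs are illustrative assumptions, and the prior/posterior networks that would produce `r_logits` and the Gaussian parameters of `z` are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlotUpdate(nn.Module):
    """One VIM-style step: h_t = LayerNorm(h_{t-1} + f_theta(h_{t-1}, r_t) + z_t)."""
    def __init__(self, slot_dim=64, n_modules=5):
        super().__init__()
        # Each module is a small MLP proposing a candidate update for a slot.
        self.module_mlps = nn.ModuleList(
            nn.Sequential(nn.Linear(slot_dim, slot_dim), nn.ReLU(),
                          nn.Linear(slot_dim, slot_dim))
            for _ in range(n_modules)
        )
        self.norm = nn.LayerNorm(slot_dim)

    def forward(self, h_prev, r_logits, z, tau=1.0, hard=False):
        # r_t: (relaxed) one-hot module selection via the Gumbel-Softmax trick.
        r = F.gumbel_softmax(r_logits, tau=tau, hard=hard)                      # (B, M)
        candidates = torch.stack([m(h_prev) for m in self.module_mlps], dim=1)  # (B, M, D)
        f_out = (r.unsqueeze(-1) * candidates).sum(dim=1)                       # selected update
        return self.norm(h_prev + f_out + z)                                    # z: Gaussian additive update

# Toy usage with made-up shapes:
step = SlotUpdate()
h_prev = torch.randn(8, 64)       # 8 slots (or batch x slots flattened)
r_logits = torch.zeros(8, 5)      # would come from the prior or posterior network
z = 0.1 * torch.randn(8, 64)      # reparameterized sample of the additive update
h_next = step(h_prev, r_logits, z)
```

With `hard=False` the selection is a relaxed one-hot during training, while at test time a discrete module choice can be sampled, matching the description in the rebuttal.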
bDOU-vlPM8n
CLeaR 2022 Conference
Reviewer_4fRa
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_4fRa, and the following are the paper you need to review:\n```<<bDOU-vlPM8n>>```", "role": "user" }, { "content": "Review Content: Summary: This paper proposes Variational Independent Mechanism, a generative model trained to uncover a latent set of objects and disentangled mechanisms that act on these objects. It proposes a generative extension to previous works learning independent mechanisms and tests it on a toy 2D video setup.\nMain review: Strengths:\nThe paper exploits ideas of sparsity and compositionality in latent space to recover mechanisms explaining the data that generalizes out-of-distribution. \nThe paper builds on previous work learning independent mechanisms and makes a variational version of them that appears meaningful. The initial experimental results reported are interesting and promising (but they could be more convincing, see below).\n\nWeaknesses:\nThe ideas motivating the work and the principles behind the architecture are not particularly novel and follow closely previous related works: in particular RIM (Goyal 2019) and SCOFF (Goyal 2020).\nThe main difference with previous works is that the current paper proposes a generative model. \nWhat problem does the generative VIM solve that was out-of-reach for RIM-kind of approaches?\n\nThis justification appears important because this paper focuses on toy 2D scenarios whereas SCOFF tackles more challenging evaluation setups. Also, this paper focuses on \"unary\" modules, whereas it seems that SCOFF can deal with more arbitrary relationships between modules and objects.\n\nThus, it seems that the paper should better justify the advantages brought by the variational mechanisms compared to these previous works. It would be interesting to also discuss (and report) what RIM and/or SCOFF would produce on the same 2D toy setup considered in this work.\n\nThe experiments are limited to a 2D setup with moving balls. \nFurthermore, there is no real comparison to baselines and for each experiment, only one run is reported. There is no measure of performance across many test runs. 
Only Figures 4 and 5 have some notion of global performance, but there is no error bound and we don't know on how many runs this was computed.\nIn general, the reporting of experimental results could be more thorough and convincing. In particular, 5.2 has only a very shallow description of results.\n\nMinor comments:\nIf only up/down/left/right are modeled, can the modules encode notions of velocity? Velocities would seem quite important to track objects moving in time but are more abstract than up/down/left/right.\n\nIn equation (2), shouldn't it be p(z_0|c) instead of p(z_0^k|c)?\nThe paragraph \"module selection posterior\" overflow in the margin\nThe experiments report results when varying the number of modules, what about varying the number of slots?\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Reply\nThank you for your valuable feedback. We answer your concerns below:\n\n* What problem does VIM solve compared to SCOFF and RIM?\n\nThe main advantage of VIM over RIM and SCOFF is the ability to model stochastic sequences. RIM and SCOFF are deterministic models that assume that the future is a deterministic function of the past and trained only on a next step prediction task. Instead, VIM is a probabilistic model that can capture multimodal future outcomes. Most datasets and real world scenarios are multimodal, and therefore this is an important aspect for video prediction. Papers such as SV2P[1] and SVG[2] improved upon previous video prediction models by showing that only probabilistic models can model complex datasets such as BAIR Push. Similarly, VIM introduces a probabilistic model in the context of object-centric video prediction with independent mechanisms that can handle stochastic environments. \n\n* Dataset complexity\n\nWhile it is true that our random walk ball datasets are simple, we believe that they are adequate to show that VIM can model stochastic sequences and to serve as a proof of concept for initial research in this area. Compared to SCOFF, our datasets are of similar visual complexity. Only the RL environments in SCOFF have more complex 2D visuals, however those datasets are not used for video prediction tasks. Additionally, we have run a SCOFF baseline in our proposed dataset - SCOFF is unable to properly model sequences in this setting because it cannot model multimodal outcomes. Thus, despite the visual simplicity of our dataset, we believe it is complex enough to highlight the capabilities of VIM compared to SCOFF or RIM.\n\n* Report the performance of SCOFF and VIM on our proposed setup\n\nThank you for proposing this baseline. Following your suggestion we have run a SCOFF and a RIM baselines in our proposed dataset. The results from this experiment can be found in a general comment for the paper. We observe that both models fail to produce sharp generations and instead model the mean next frame. This is due to SCOFF and RIM being deterministic models, and highlights the advantages of the variational formulation of VIM.\n\n* Multiple runs per experiment\n\nWe have added 2 seeds per model for the main diagram results in Figure 3 (see main comment for an updated figure). We have also further improved our hyperparameter selection and re-ran the model with a better learning rate decay schedule. 
We have obtained better module interpretability and better reconstructions, and we will update the results accordingly in Figure 3 in the paper. \n\n* Minor comments\n\nThanks for catching the typo in the math, we have updated Section 3 to fix this. Regarding the number of slots, if we do not use enough slots then we do not have good object separation (as in the case of slot attention). If using more slots than needed (which is our most common setup), then slot attention learns to assign blank slots that are not used. \n\nWe hope our answer helps resolve your concerns and we would be happy to answer any further questions you might have.\n\n\nReferences:\n\n[1] ]SV2P: Babaeizadeh, Mohammad, et al. \"Stochastic variational video prediction.\" 6th International Conference on Learning Representations, ICLR 2018. 2018.\n\n[2] SVG: Denton, Emily, and Rob Fergus. \"Stochastic video generation with a learned prior.\" International Conference on Machine Learning. PMLR, 2018.\n\n Global/general Response:\n```\nSCOFF proposes an architectural component to process a visual input such that it can be explained in terms $K$ object files (corresponding to slots in our model) and $M$ schemata (modules) processing those objects files. \nThey propose to test their architecture in several downstream tasks and one of them is video prediction. \nTo do so they train SCOFF on a next-step prediction task with an autoregressive formulation. \nRIM corresponds to a SCOFF with $M = 1$ schema (in other words, with a single module). At each time step $t$ the computations are conditioned on the previous frame. \n\nA key assumption in SCOFF and RIM is that the outcome is deterministic given previous frames. \nSimilar to SV2P and SVG-LP, we argue that this leads to blurry predictions in settings with stochastic dynamics, as the model is trying to capture the mean of all plausible future outcomes. \nTo overcome this limitation we propose to include the intuition behind SCOFF's architectural inductive biases in a variational framework, where the schemata would rather correspond to the different modes/rules of the stochastic dynamics of the environment.\n\nWe report in the table below the reconstruction MSE averaged over 5 time-steps sequences for SCOFF and RIM (with teacher forcing at each time step) and VIM (using its inference network) for our proposed random walk balls dataset. All models are trained with 4 slots and VIM and SCOFF are trained with 5 modules/schemata.\nWe expect SCOFF and RIM to fail to produce sharp samples since they assume that the future is deterministic given the previous frame, whereas we expect VIM to handle the multimodality of the future outputs. This is reflected in the Reconstruction MSE, where VIM obtains significantly better results.\n\n| Model | Reconstruction MSE | \n|--------------|-----------|\n| RIM | $91.7 \\pm 5.4$ |\n| SCOFF | $81.9 \\pm 1.0$ |\n| VIM (Ours) | $\\mathbf{4.2 \\pm 0.5}$ |\n\nAdditionally, we add [this visualization of a SCOFF reconstruction](https://imgur.com/2mMoIVp). We notice that, while SCOFF correctly identifies the different objects in the scene, it cannot reconstruct frames and instead predicts the mean of all the plausible future frames. \n\n\n```\n\n Global/general Response:\n```\nWe would like to thank all reviewers for their useful feedback. As a result, we have run additional baselines and revised some parts of our submission. 
Below we provide a summary of the changes:\n\n* Comparison to SCOFF and RIM\n\nWe have added RIM and SCOFF baselines to our main experiment. The results from this experiment can be found in the previous general comment, 'Comparison to SCOFF and RIM'. Both RIM and SCOFF are not able to model the stochastic sequences in our dataset, despite the visual simplicity. That is because SCOFF and RIM are deterministic models. We hope this experiment is useful to better understand the advantages of VIM and its performance compared to previous approaches in the literature. \n\n* Updated math for Section 3\n\nWe have revised the model formulation. Before we had inconsistencies and the random variables in our model were not clearly described. To summarize, our model has two random variables: the module selection variable $\\mathbf{r^k_t}$ and the additive update $\\mathbf{z^k_t}$. The module selection variable $\\mathbf{r^k_t}$ is discrete, and it is modeled with a Gumbel Softmax distribution. The additive update $\\mathbf{z^k_t}$ is a continuous variable that follows a gaussian distribution. The slot representation is denoted by $h^k_t$, and it is computed as $\\mathbf{h_t^k}$ = LayerNorm( $\\mathbf{h_{t-1}^k} + f_\\theta(\\mathbf{h}_{t-1}^k, \\mathbf{r}_t^k) + \\mathbf{z}_t^k) $. \n$h^k_t$ is thus a deterministic function of random variables. We have prior and posterior distributions on both random variables $\\mathbf{r^k_t}$ and $\\mathbf{z^k_t}$, and our ELBO is a function of these variables. We detail the updated math in [this image](https://postimg.cc/ZBJTNHTn). We will include the revisions in Section 3 of the main text. We hope this clarifies our model formulation.\n\n* Multiple runs and updated results\n\nWe have run multiple runs for our main experiment. Our model tends to converge to similar solutions with low variance, indicating that it is well optimized. There were only few runs where the model got stuck in a local optima where only a few modules are used and the reconstruction error is high. However, with careful hyperparameter tuning this effect can be avoided, specially when tuning the learning rate warmup and schedule, which was also important for slot attention to obtain good slot separation. Another important hyperparameter is the schedule for changing the length of the sequences that the model is trained on. \nWe will update the main results in Figure 2 and 3 where we used a better learning rate decay that resulted in overall better performance.\nAn updated Figure 3 can be found [in this link](https://imgur.com/AJsNHqi)\n\nWe hope these changes help resolve the concerns raised by the reviewers and we would be happy to provide additional clarifications if needed. \n```\n", "role": "user" }, { "content": "Thanks for the response provided by the authors. They have adequately addressed the majority of my concerns, and I will likely raise the rating.", "role": "assistant" } ]
bDOU-vlPM8n
CLeaR 2022 Conference
Reviewer_hQZF
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_hQZF, and the following are the paper you need to review:\n```<<bDOU-vlPM8n>>```", "role": "user" }, { "content": "Review Content: Summary: This paper proposes an extension of current object-centric representation learning method to group transition dynamics per object in a video into multiple independent modules. The model is trained in a generative fashion where the changes in latent space over time are parameterized by those modules.\nMain review: The research question that this paper tries to solve is highly relevant for the research field of object-centric representation learning and bringing it closer to the causality domain. The initial idea of the paper is to combine an existing architecture, SCOFF (Goyal et al., 2020), with object-centric video representation learning. However, the model section contains several unclarities which makes it hard to fully grasp the proposed method:\n* The latent update prior is said to model the distribution $p(z^{k}\\_{t+1}|f(z^{k}\\_{t},r^{k}\\_{t+1}))$. However, $z^{k}\\_{t+1}$ is defined as the output of a Layer Normalization later, with some noise added to the input of the normalization layer. How does the distribution $p(z^{k}\\_{t+1}|f(z^{k}\\_{t},r^{k}\\_{t+1}))$ actually look like then? Why is the noise sampled instead of modeling an actual distribution? Is the distribution not needed in the Gaussian KL term?\n* How is the distribution $q(r|x)$ in Equation 5 defined? In Section 3.2, a module selection posterior is mentioned that is of the form $q(r\\_t|z\\_{t-1},x)$. Is $q(r|x)$ marginalized out over $z$? Or is it a different network?\n* The model selection prior $p(r_{t+1}|z_t)$ seems to be independent of previous module selections $r\\_{\\<t}$. Why is that? Is the module newly chosen at every time step, and not consistent across the time series? \n* Figure 1 shows that $p(x|z)$ is parameterized as a mixture of Gaussians, but images are naturally discrete and/or have a finite set of possible pixel values. Is a mixture of discrete logistics used as replacement, as done in e.g. Salimans et al. (2017)? Or how is a degenerated solution prevented?\n* The module selection resembles the mixture of expert setup used in other network architectures, like Fedus et al. 
(2021). However, the drawback of the proposed method is that all modules need to be run on the all inputs, even during inference, instead of only running the selected modules. Why is that required? Why isn't a learnable key for each module used without the need of running each module during inference?\n* In Section 5.1, paragraph Setup, it is mentioned that only during testing, the selection variable $r$ is categorical. Does this mean that during training, $r$ is a one-hot vector with values between 0 and 1, or is it still forced to be a one-hot vector and uses the straight-through gradient estimator?\nConsidering that no code was submitted, it would be hard to reproduce this architecture.\n\nAnother unclarity of this paper is with respect to the motivated inductive bias. In the introduction, the setup of the latent space is stated as the inductive bias. However, the inductive bias actually comes from the fact that the decoder is limited in expressiveness, such that sticking with a single module cannot achieve the same performance. Thus, the proposed model is limited to the setting where given the module, the output distribution is known/simple (e.g. a Gaussian). If more complex distributions would be possible in the output space (e.g. a mixture of logistics), the proposed model would not work anymore. Further, if the input image $x_{t-1}$ would already give clues about the module to use, it can collapse again since $z_{t-1}$ would be sufficient to predict $z_{t}$. Similarly, multiple objects with different appearances and different sets of actions could use the same modules for different actions, degrading the interpretability aspect. Thus, the use cases of this model seem very limited.\n\nThe experiments are focused on a single dataset which was created for the purpose of this paper. While it showcases the working of the model, it is very simplistic. The fact that most modules are correctly selected in only 4 out of 5 cases on average (i.e. 80%) is not entirely convincing, considering the simplicity of the dataset. A more realistic dataset should be used to showcase this, although it might be hard to find one considering the drawbacks mentioned above.\n\nIn conclusion, while I could see the paper being a valid, small contribution to the object-centric representation learning research field, the current form of the paper contains too many unclarities to fully understand the method, and the use cases seem limited. \n\n#### Minor comments\n* Figure 1: the colors in the legend for inference vs generation class with the chosen slot colors in the picture right below. It suggests that $z_1$ belong to inference, and $z_2$ to generation.\n* The citations should be checked for using the correct journal/conference (e.g. Locatello et al. 2020 was published in NeurIPS)\n\n#### References\n* Fedus, William, Barret Zoph, and Noam Shazeer. \"Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity.\" arXiv preprint arXiv:2101.03961 (2021).\n* Salimans, Tim, et al. “PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications.” arXiv preprint arXiv:1701.05517 (2017).\n* Goyal, Anirudh, et al. \"Object files and schemata: Factorizing declarative and procedural knowledge in dynamical systems.\" arXiv preprint arXiv:2006.16225 (2020).\n\n__Post-rebuttal changes__\nThe rebuttal of the authors clarified several aspects of the paper. 
Under the assumption that these clarifications will be included in the final version of the paper, I have updated my score.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Reply\nThank you for your insightful comments. We realize there were confusing terms in our model explanation. We have updated the model description section to be more consistent and clear (see general comment, and [updated model](https://postimg.cc/ZBJTNHTn) ). We answer your questions in more detail below:\n\n* Latent/additive update distribution\n\nOur model has two random variables: the module selection variable and an additive update to the slot representation. The module selection variable $\\mathbf{r}$ is discrete, whereas the additive update $\\mathbf{z}$ to the slot representation $\\mathbf{h}$ is continuous. $\\mathbf{z}$ follows a gaussian distribution. The slot representation $\\mathbf{h}$ is the result of i) updating the previous slot representation with the selected module $\\mathbf{r}$, ii) adding the additive update $\\mathbf{z}$ and iii) normalizing the result (see general comment for formulas). Therefore, the distribution of the slot representation $\\mathbf{h}$ does not follow a closed-form distribution. In terms of the training objective for our model, we have revised the formula and our prior and posterior distributions are on the two random variables $\\mathbf{r}$ and $\\mathbf{z}$. \n\n* Posterior over module selection variable\n\nThis was a mistake on our side when writing the training objective. The posterior is indeed a conditional probability on the previous slot representations $\\mathbf{h}$ and we are using the same network as described in section 3.2. We will update equation 5 to reflect this.\n\n* Independence of module selection prior\n\nThe module selection prior is conditional on previous module selections implicitly through being conditional on the previous slot representation $\\mathbf{h_{t-1}}$, which is itself a function of previous module selections $\\mathbf{r}^k_{<t}$. $\\mathbf{h_{t-1}}$indeed encodes all the past information about the slot (including how it has been modified by the previously selected modules and additive updates).\n\n* Mixture of gaussians observation model\n\nIt is true that a mixture of logistics would probably provide a tighter ELBO by reducing the probability mass on continuous pixel values. Most previous object-centric approaches, including MONET, IODINE and Slot Attention, use mask decoders with a mixture of gaussians observation models for their simplicity. We round the continuous output of the mixture of gaussians, thus making sure that there are no invalid pixel values or values outside of the output range. Note that mixtures of logistics also have to be discretized. We are unsure of degenerate solutions that would be avoided by the mixture of logistics, and we would be grateful if you could provide additional details on this point.\n\n* Similarity to Switch Transformer\n\nThe Switch Transformer is quite different from our model, its goal being to execute different subnetworks depending on the input example. In our case, the rule to apply to each slot is not a deterministic function of the input or context (i.e. 
the previous slot), but rather something that we either infer given a ground-truth observation of an observation $\\mathbf{x}_t$ (posterior), or a random variable with a certain probability distribution given by the prior. It might be possible to select a rule with a key-query mechanism, but conceptually we believe it makes sense to compute all possible future states for a slot, and then select the right transition based on the similarity of the candidate state to the ground truth observed state.\n\n* Questions about the selection variable\n\nThe module selection variable is a discrete variable. During training it is represented as a continuous relaxation of a one-hot vector, as a result of sampling from a Gumbel-Softmax distribution. Note that in this case Gumbel-Softmax does not use the straight-through estimator. At test time we sample from the softmax distribution and therefore obtain a discrete one-hot vector. We use a gumbel-softmax distribution as in general it provides better gradient estimates than using the straight-through estimator.\n\n\n* Inductive bias\n\nWe would like to note that there is a difference between the observation distribution and the capacity of the model. Mixtures of logistics are only better at allocating probability mass for discrete outputs compared to mixtures of gaussians, usually resulting in slightly better ELBOs, but at their core they are still mixture models. The inductive bias in our model does not come from the observation model, but rather from the object-centric and dynamic rule-centric scene decomposition. While it is possible for a model with *high capacity* (regardless of the observation model) to render the full scene from a single slot, it is slot attention that usually prevents such solutions and in general correctly segments the scene into its different objects even if we have high capacity models.\n\nRegarding whether the transition from $\\mathbf{h}_{t-1}$ to $\\mathbf{h}_t$ is deterministic and therefore it can be captured by a single module, that only happens in settings where the future is a deterministic function of the past. The majority of video prediction datasets as well as real world scenarios are stochastic, and the main goal of our paper is to propose a stochastic counterpart to models such as SCOFF, which fail in stochastic settings (see our comment on comparision with SCOFF and RIM that showcases this).\n\n* Updated Figure 3 \n\nAs stated in the general comment, we updated our main Figure 3 with improved mode-module matching colormaps (92% vs 80%). We realized that our learning rate decay was not optimal and resulted in suboptimal performance of VIM variations.\n\n* Minor comments\n\nThank you for catching those. We will revise the citations and modify the figure accordingly.\n\nWe hope this answer addresses your concerns and we would be happy to provide additional information if needed.\n\n Global/general Response:\n```\nSCOFF proposes an architectural component to process a visual input such that it can be explained in terms $K$ object files (corresponding to slots in our model) and $M$ schemata (modules) processing those objects files. \nThey propose to test their architecture in several downstream tasks and one of them is video prediction. \nTo do so they train SCOFF on a next-step prediction task with an autoregressive formulation. \nRIM corresponds to a SCOFF with $M = 1$ schema (in other words, with a single module). At each time step $t$ the computations are conditioned on the previous frame. 
\n\nA key assumption in SCOFF and RIM is that the outcome is deterministic given previous frames. \nSimilar to SV2P and SVG-LP, we argue that this leads to blurry predictions in settings with stochastic dynamics, as the model is trying to capture the mean of all plausible future outcomes. \nTo overcome this limitation we propose to include the intuition behind SCOFF's architectural inductive biases in a variational framework, where the schemata would rather correspond to the different modes/rules of the stochastic dynamics of the environment.\n\nWe report in the table below the reconstruction MSE averaged over 5 time-steps sequences for SCOFF and RIM (with teacher forcing at each time step) and VIM (using its inference network) for our proposed random walk balls dataset. All models are trained with 4 slots and VIM and SCOFF are trained with 5 modules/schemata.\nWe expect SCOFF and RIM to fail to produce sharp samples since they assume that the future is deterministic given the previous frame, whereas we expect VIM to handle the multimodality of the future outputs. This is reflected in the Reconstruction MSE, where VIM obtains significantly better results.\n\n| Model | Reconstruction MSE | \n|--------------|-----------|\n| RIM | $91.7 \\pm 5.4$ |\n| SCOFF | $81.9 \\pm 1.0$ |\n| VIM (Ours) | $\\mathbf{4.2 \\pm 0.5}$ |\n\nAdditionally, we add [this visualization of a SCOFF reconstruction](https://imgur.com/2mMoIVp). We notice that, while SCOFF correctly identifies the different objects in the scene, it cannot reconstruct frames and instead predicts the mean of all the plausible future frames. \n\n\n```\n\n Global/general Response:\n```\nWe would like to thank all reviewers for their useful feedback. As a result, we have run additional baselines and revised some parts of our submission. Below we provide a summary of the changes:\n\n* Comparison to SCOFF and RIM\n\nWe have added RIM and SCOFF baselines to our main experiment. The results from this experiment can be found in the previous general comment, 'Comparison to SCOFF and RIM'. Both RIM and SCOFF are not able to model the stochastic sequences in our dataset, despite the visual simplicity. That is because SCOFF and RIM are deterministic models. We hope this experiment is useful to better understand the advantages of VIM and its performance compared to previous approaches in the literature. \n\n* Updated math for Section 3\n\nWe have revised the model formulation. Before we had inconsistencies and the random variables in our model were not clearly described. To summarize, our model has two random variables: the module selection variable $\\mathbf{r^k_t}$ and the additive update $\\mathbf{z^k_t}$. The module selection variable $\\mathbf{r^k_t}$ is discrete, and it is modeled with a Gumbel Softmax distribution. The additive update $\\mathbf{z^k_t}$ is a continuous variable that follows a gaussian distribution. The slot representation is denoted by $h^k_t$, and it is computed as $\\mathbf{h_t^k}$ = LayerNorm( $\\mathbf{h_{t-1}^k} + f_\\theta(\\mathbf{h}_{t-1}^k, \\mathbf{r}_t^k) + \\mathbf{z}_t^k) $. \n$h^k_t$ is thus a deterministic function of random variables. We have prior and posterior distributions on both random variables $\\mathbf{r^k_t}$ and $\\mathbf{z^k_t}$, and our ELBO is a function of these variables. We detail the updated math in [this image](https://postimg.cc/ZBJTNHTn). We will include the revisions in Section 3 of the main text. 
We hope this clarifies our model formulation.\n\n* Multiple runs and updated results\n\nWe have performed multiple runs of our main experiment. Our model tends to converge to similar solutions with low variance, indicating that it is well optimized. There were only a few runs where the model got stuck in a local optimum where only a few modules are used and the reconstruction error is high. However, with careful hyperparameter tuning this effect can be avoided, especially when tuning the learning rate warmup and schedule, which was also important for slot attention to obtain good slot separation. Another important hyperparameter is the schedule for changing the length of the sequences that the model is trained on. \nWe will update the main results in Figures 2 and 3, where we used a better learning rate decay that resulted in overall better performance.\nAn updated Figure 3 can be found [in this link](https://imgur.com/AJsNHqi).\n\nWe hope these changes help resolve the concerns raised by the reviewers and we would be happy to provide additional clarifications if needed. \n```\n", "role": "user" }, { "content": "Title: Response to rebuttal\nI would like to thank the authors for answering my questions. I have updated my score accordingly.\n\nRegarding the mixture of Gaussians: The problem of modeling the output with a Gaussian distribution is that if you parameterize the standard deviation, then the model can make it arbitrarily small for certain pixels. MONET gets around it by fixing the scale, while if you truly want to model a distribution over the image space, the mixture of discrete logistics discretizes the output space with full support while still allowing one to take derivatives.", "role": "assistant" } ]
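The rebuttal above describes the per-slot update h_t^k = LayerNorm(h_{t-1}^k + f_theta(h_{t-1}^k, r_t^k) + z_t^k), with a discrete Gumbel-Softmax module-selection variable (relaxed during training, hard one-hot at test time) and a Gaussian additive update. The sketch below is a minimal, hypothetical PyTorch rendering of that single step, not the authors' implementation; names such as `SlotUpdate`, `slot_dim`, `num_modules`, `tau`, and `r_logits` are illustrative assumptions.

```python
# Minimal sketch (assumed names, not the authors' code) of one slot update:
# each module proposes a candidate transition, a Gumbel-Softmax sample r picks
# among them, and a Gaussian additive update z is added before LayerNorm.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlotUpdate(nn.Module):
    def __init__(self, slot_dim=64, num_modules=5, tau=1.0):
        super().__init__()
        # One small MLP per module ("rule"); each proposes a candidate update.
        self.rules = nn.ModuleList(
            [nn.Sequential(nn.Linear(slot_dim, slot_dim), nn.ReLU(),
                           nn.Linear(slot_dim, slot_dim))
             for _ in range(num_modules)])
        self.norm = nn.LayerNorm(slot_dim)
        self.tau = tau

    def forward(self, h_prev, r_logits, z):
        # h_prev: (B, D) previous slot state; r_logits: (B, M) module logits from
        # a prior or posterior network; z: (B, D) sampled Gaussian additive update.
        candidates = torch.stack([f(h_prev) for f in self.rules], dim=1)  # (B, M, D)
        if self.training:
            # Continuous relaxation, no straight-through estimator.
            r = F.gumbel_softmax(r_logits, tau=self.tau, hard=False)      # (B, M)
        else:
            # Hard one-hot sample from the softmax distribution at test time.
            idx = torch.distributions.Categorical(logits=r_logits).sample()
            r = F.one_hot(idx, num_classes=r_logits.shape[-1]).float()
        f_out = (r.unsqueeze(-1) * candidates).sum(dim=1)                 # (B, D)
        return self.norm(h_prev + f_out + z)                              # h_t

# Usage (shapes only): h_t = SlotUpdate()(h_prev, r_logits, z) per slot k and step t.
```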
bmQfuKN8g19
CLeaR 2022 Conference
Reviewer_Lefw
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_Lefw, and the following are the paper you need to review:\n```<<bmQfuKN8g19>>```", "role": "user" }, { "content": "Review Content: Summary: This article proposes an approach to identifying and estimating a principal stratum effect based on a monotonicity assumption and a logistic regression model on a set of observed and counterfactual variables whose joint distribution is not nonparametrically identifiable on its own, as was proposed by Shepherd et al. (2006) for sensitivity analysis. The authors then apply this methodology to estimate principal stratum effects of bevacizumab as a supplement to neoadjuvant chemotherapy on event-free survival and overall survival.\nMain review: Pros:\n1.\tThe article is fairly well written and precise.\n2.\tIdentification of principal stratum effects is an important topic.\n3.\tThe article has a nice application to real data.\n\nCons/issues:\n1.\tI had a bit of a difficult time understanding the novel element of this article and what distinguished it from Shepherd et al. (2006). I am confused by the logistic regression model, which along with monotonicity appears to be sufficient to identify the principal stratum effect of interest. However, my understanding is that Shepherd et al. (2006) used this model for sensitivity analysis, suggesting that this effect was not identified. So did they in fact not need to vary the $\\beta$ coefficients in this model because they were already identified? Or were they operating under a different set of assumptions in which this model and the principal stratum effect were not identified? It would help to clear up this confusion if the authors could clarify the distinction between their approach and that of Shepherd et al. (2006).\n2.\tThe main challenge taken on in this article seems to be about identification; however, the article seemed to skip over identification and mainly talk about estimation. The article would be much easier to read in my opinion if there were a sharper division between identification and estimation. That is, it would be much more enlightening to have identified quantities and estimators all written in terms of observed variables rather than potential outcomes. 
This might also help to address my previous point.\n3.\tRelatedly, it is not very clear how the joint distribution of $S(0)$ and $S(1)$ in $G_L$ is identified, and the role the monotonicity assumption plays in this.\n4.\tI think it would make more sense to move Section 3.2 after Section 4.1.\n5.\tI did not understand very well what role $X$ plays. Is it just for precision? It seems you still have identification even without it.\n6.\tDoes $Q_0^{(x)}$ depend on $y$? This is not reflected in the notation.\n7.\tDo the $\\beta_x$s in Section 6.3 correspond to a larger model with dummy variables for each level of $x$ rather than assuming linearity in $x$?\n\nShepherd, B. E., Gilbert, P. B., Jemiai, Y., & Rotnitzky, A. (2006). Sensitivity analyses comparing outcomes only existing in a subset selected post‐randomization, conditional on covariates, with application to HIV vaccine trials. Biometrics, 62(2), 332-342.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Author Response to Reviewer Lefw\nWe thank the reviewer for reviewing the paper and providing insightful feedback. In the following, we provide responses and clarification on issues raised by the reviewer.\n1. We adopted the same logistic regression model and assumptions as in Shepherd et al. (2006). In general, regression coefficients are not identifiable without any additional assumptions. In that paper and subsequent works, Shepherd and his co-authors focused on sensitivity analyses by varying model parameters in an imposed model on counterfactual outcomes. Noticing the probabilistic equation (7), we figured out how to identify the regression coefficients when the linearity assumption holds in the logistic regression. The identification was discussed in the last paragraph of page 6 in Section 4.2 and required assumptions were discussed in Section 7. We will revise the paper by providing a sub-section on the identifiability in Section 4.\n2. We agree with the reviewer that a sub-section on identifiability in Section 4 is warranted. We will revise the manuscript accordingly.\n3. The joint distribution of $S(0)$ and $S(1)$ in $G_L$ can only be estimated with maximum likelihood from the observed data under the monotonicity assumption. Details are presented in Lemma 1 and Appendix A. We will clarify its importance after Lemma 1 in the revision. \n4. We agree it would help with the flow if we move section 3.2 after section 4.1. However, imposing regression models (1) and (2) among counterfactuals is critical for developing the relevant causal estimands and we did not invent the wheel here. Having section 3.2 in its current place offers us an opportunity to give other researchers proper credits for their contribution.\n5. Having the baseline covariate $X$ is essential to the identification of the principal stratum causal effects from the observed data. Its relevance to identifiability was briefly discussed in the last paragraph of section 4.2 on page 6. We will clarify in the additional sub-section on identification in the revision.\n6. No, $Q_0^{(x)}$ does not depend on $y$. There is a summation on $y$.\n7. Yes, the regression model would become $G_M(x,y;{\\beta}) = \\displaystyle{\\frac{\\exp(\\beta_x+\\beta_{1}y)}{1+\\exp(\\beta_x+\\beta_{1}y)}}$. No linearity assumption is necessary once $\\beta_1$ is assumed known. 
For each X=x, the parameter $\\beta_x$ can be exactly identified and estimated.\n", "role": "user" }, { "content": "Title: Reviewer Response\nThank you very much for your detailed reply. I must confess that I still do not quite understand what assumptions you are invoking beyond those of Shepherd et al. Your assumptions must be stronger than those of Shepherd et al. since they do not obtain point identification, but from the article and your response it remains unclear to me how they differ. Based on Section 6.3, it seems that perhaps Shepherd et al. used a logistic model with dummy variables for each level of X, rather than a model with a single regression coefficient corresponding to X, i.e., assuming linearity in X. Is this the case? If so, the logistic model (1) ought to be modified. In any case, I hope that this ambiguity can be resolved or clarified in the new subsection on identifiability.", "role": "assistant" } ]
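The exchange above centers on the logistic model for the counterfactual outcome and on whether the covariate enters linearly or through a separate intercept per level of x. The snippet below is a hedged illustration, not code from the paper: `g_linear` is one reading of a model that is linear in x, `g_dummy` is the saturated-in-x variant G_M(x, y; beta) = exp(beta_x + beta_1 y) / (1 + exp(beta_x + beta_1 y)) quoted in the reply, and the final loop mimics a Shepherd-et-al.-style sensitivity analysis that varies the coefficient on y over a grid instead of estimating it. All function and argument names are assumptions, and the point-identification argument itself is not reproduced here.

```python
# Hedged illustration of the two logistic parameterizations discussed above.
import numpy as np

def expit(t):
    return 1.0 / (1.0 + np.exp(-t))

def g_linear(x, y, beta0, betax, beta1):
    # One reading of a model that is linear in the covariate x:
    # logit = beta0 + betax * x + beta1 * y.
    return expit(beta0 + betax * x + beta1 * y)

def g_dummy(x, y, beta_levels, beta1):
    # Saturated-in-x variant: a separate intercept beta_x per level of x,
    # logit = beta_x + beta1 * y, as quoted in the author reply above.
    return expit(beta_levels[x] + beta1 * y)

# Sensitivity-analysis style: vary the coefficient on y over a grid rather than
# estimating it from the data.
for beta1 in np.linspace(-2.0, 2.0, 5):
    prob = g_dummy(x=1, y=1, beta_levels={0: -0.5, 1: 0.3}, beta1=beta1)
    print(round(beta1, 2), round(prob, 3))
```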
bmQfuKN8g19
CLeaR 2022 Conference
Reviewer_M4f1
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_M4f1, and the following are the paper you need to review:\n```<<bmQfuKN8g19>>```", "role": "user" }, { "content": "Review Content: Summary: This paper works on analyzing the effect of treatment on survival among people who have a positive intermediate binary outcome in neoadjuvant breast cancer clinical trials. They use a principal stratum framework to approach the problem and propose an estimator for the causal effects of interests, specifically for binary outcomes.\nMain review: High level overview of comments:\n-\tThis is a very interesting, well-motivated problem and I enjoyed reading your paper. There are only two areas where I think would significantly improve your paper if you could address. They are listed in the below two bullets.\n-\tClarity of assumptions and notation: You mention that identification is tricky for principal stratum causal effects, but you do not explicitly say anywhere that the assumptions you make are sufficient for identification. Additionally, it would be helpful to explicitly define the \\beta used in section 4.2.\n-\tClarifying methodological contribution: Of course, you have a contribution in terms of the application to breast cancer trials, but it’s not entirely clear from reading (especially the related work section and the introduction) whether you are trying to say that you have a methodological contribution in this paper as well. You say at the end of section 2 that you propose a method to identify and estimate principal stratum effects under a setting similar to Shepherd et al, but for binary outcomes. What exactly is Shepherd et al. doing? How significant of an extension is your work?\n\n\nMore detailed comments:\n-\tI am wondering if your problem could be approached using a mediational analysis framework, where pCR would be a looked as a mediator for survival. 
Could you discuss why this would or would not make sense?\n-\tIt would be nice if either before or within your related work section you mathematically write out what a principal stratum causal effect is (you write this nicely in section 3.1, but it would be easier for someone who is not familiar with principal stratification, but is familiar with potential outcomes, to understand at a high level what you are doing).\n-\tAdditionally, it would be helpful to clarify exactly what your contribution is. You say at the end of section 2 that you propose a method to identify and estimate principal stratum effects under a setting similar to Shepherd et al. Could you clarify what Shepherd does differently / if that method is applicable to binary outcomes? \n-\tBefore you introduce the estimators you will use in 4.1, I think it would be helpful to formally write out the assumptions needed for identification of the causal estimand (e.g. a theorem proving identifiability using the stated assumptions). It is not clear to me from reading if the assumptions you describe in section 3 are sufficient for identification. It would especially help readers not familiar with principal stratification who are unfamiliar with what kind of assumptions are needed for identification, especially since you say in section 2 that identification of principal stratum causal effects is generally difficult!\n-\tFor Theorem 3, it would be helpful if you list/refer to explicitly all the assumptions you are using (e.g., treatment assignment is independent of potential outcomes, monotonicity of outcomes S_i). Moreover, it would be helpful to discuss what exact\n-\tIn section 4.2, you define G_M(x,y; \\beta). I think this is the same beta discussed in section 3.2. It would be helpful to explicitly define \\beta here. How many dimensions does \\beta have here? \n-\tIn section 6.3 you say that you compare the performance of your method with the sensitivity analysis similar to Gilbert et al and Shepherd et al. I am confused about exactly what you are trying to say here. Are you saying (i) that you are performing a sensitivity analysis of your method with a technique similar to that of Gilbert et al and Shepherd et al, OR (ii) that there is a completely different way to approach this problem / a different estimator that you are comparing to? Based on the first sentence in 6.3 it sounds like case (ii), but based on what you write in the rest of 6.3 it sounds like case (i).\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Author Response to Reviewer M4f1\nWe thank the reviewer for reviewing the paper and providing insightful feedback. Below are our clarifications to the questions and comments.\n\nRegarding your high-level overview of comments:\n- Even though we discussed the identification of regression coefficients in a couple of places (bottom of page 6 in Section 4.2 and Section 7), this issue is essential to this paper. We will add a sub-section on identifiability in Section 4.\n- We adopted the same logistic regression model and assumptions as in Shepherd et al. (2006). In general, regression coefficients are not identifiable without any additional assumptions. However, in that paper and subsequent works, Shepherd and his co-authors focused on sensitivity analyses by varying model parameters in an imposed model on counterfactual outcomes. 
Noticing the probabilistic equation (7), we figured out how to identify the regression coefficients when the linearity assumption holds in the logistic regression. Indeed, we developed a new method for the identification and estimation of regression coefficients and the subsequent causal estimands. \n- Although we discussed the required conditions/assumptions (a. monotonicity; b. the number of levels for X is greater than or equal to 2) for identification of regression coefficients in a couple of places (bottom of page 6 in Section 4.2 and Section 7), the presentation of this important issue needs to be improved. We will add a sub-section on identifiability in Section 4.\n\nRegarding your detailed comments:\n- Under the principal stratification framework, our causal estimands of interest in section 3 cannot be addressed via mediation analysis with pCR as a mediator.\n- The mathematical characterization of principal stratum causal effects is presented in Section 3.1. We will provide a couple of sentences to illustrate its interpretation in the revision.\n- Indeed, we provide a method to identify and estimate the regression coefficients of the counterfactual model, which, to our knowledge, had not been accomplished before. We will clarify our contribution in the revision.\n- We discussed the two required conditions in Section 4.2 and Section 7. We agree that the presentation is inadequate and will add a new subsection on the identification issue in Section 4 because it is essential to this manuscript.\n- We will add the exact conditions required for estimation in the revision.\n- The $\\beta$ in Section 3.2 and Section 4.2 is the same. The $\\beta$ is of dimension 3, including the intercept, the coefficient for $Y_i(0)$, and the coefficient for the baseline covariate $x$.\n- Unable to identify the regression coefficients, Shepherd et al. conducted a sensitivity analysis for such data. Here we propose a new method to identify and estimate the regression coefficients. Comparing sensitivity-analysis results to our method gives the reader context: estimates from a sensitivity analysis may vary widely because they do not take full advantage of the information in the observed data.\n", "role": "user" }, { "content": "I appreciate the authors' efforts in addressing my concerns. The reply answers almost all of my questions. I continue to recommend this work for acceptance.", "role": "assistant" } ]
bmQfuKN8g19
CLeaR 2022 Conference
Reviewer_veHz
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_veHz, and the following are the paper you need to review:\n```<<bmQfuKN8g19>>```", "role": "user" }, { "content": "Review Content: Summary: Under mild assumptions, this propose an approach to estimate those model parameters using empirical data and subsequently the causal estimand of interest. And apply this method to a recent clinical trial and its performance is evaluated via simulation studies. The main contribution of this paper is to propose a new method to identify and estimate principal stratum causal effects under new data settings. \nMain review: I believe the theorems and propositions in this article are interesting in their own right. The technical results appear to be correct. But it seems that this method is used to estimate the causal effect in $E_{01} \\cup E_{11}$ instead of $E_{01}$ only. Why not use principal strata \n{S(1)=1} and {S(0)=0}? Using four principal strata seems to complicate this problem. \nI might improve my score after my questions are well-answered.\n\nI am sorry. Because $P(Y(0)| S(1)=1)$ is cross-world, it seems that the principal stratification framework ( 4 principal strata) and monotonicity assumption are necessary.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Author Response to Reviewer veHz\nWe thank the reviewer for reviewing the paper and providing insightful feedback. In the paper, motivated by our real-data application, the NSABP B-40 study, we are interested in the causal effect in $E_{+1} = E_{01} \\cup E_{11}$ for those who would achieve pCR had they been treated with chemotherapy plus bevacizumab. Other principal stratum causal effects such as $\\theta_{jk}$ can be estimated using a similar approach as we outline in Section 4. For example, to estimate the causal effect in $E_{01}$ requires conditioning on $S(0)=0$ and $S(1)=1$ as in equations (4) and (5). All relevant terms can be expanded as in equation (6). Regression models for counterfactual outcomes, $Pr[S(1)=1|S(0)=0, Y(0)=y,X=x]$, and $Pr[S(0)=1|S(1)=1, Y(1)=y, X=x]$ need to be specified. 
The regression coefficients can be identified and estimated according to the proposed method. If the paper is accepted, to properly elucidate the proposed method, we’ll add a subsection in Section 4 to present the main results on the identification of the regression coefficients of the model for counterfactual outcomes, as in equation (1). You are correct that monotonicity is a key assumption in order to achieve parameter identifiability, as in many causal inference methods. This assumption is common and can be justified in many real-data applications. \n", "role": "user" }, { "content": "Thank you for the detailed response from the authors. The authors have answered almost all of my concerns. I keep my recommendation for this paper to be accepted.", "role": "assistant" } ]
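The thread above refers to the four principal strata defined by the pair (S(0), S(1)) and to the stratum E_{+1} = E_{01} ∪ E_{11} targeted by the estimand. The toy sketch below, which is not from the paper, just makes that bookkeeping explicit; it assumes monotonicity in the usual direction S(0) <= S(1), a direction the thread does not spell out verbatim.

```python
# Toy illustration (assumed monotonicity direction S(0) <= S(1)):
# enumerate the principal strata and the set E_{+1} = E_{01} union E_{11}.
from itertools import product

strata = {(s0, s1): f"E_{s0}{s1}" for s0, s1 in product([0, 1], repeat=2)}

def allowed_under_monotonicity(s0, s1):
    # Monotonicity as assumed here removes the (S(0)=1, S(1)=0) stratum.
    return s0 <= s1

allowed = {pair: name for pair, name in strata.items()
           if allowed_under_monotonicity(*pair)}
e_plus_1 = [name for (s0, s1), name in allowed.items() if s1 == 1]

print(allowed)   # {(0, 0): 'E_00', (0, 1): 'E_01', (1, 1): 'E_11'}
print(e_plus_1)  # ['E_01', 'E_11']  -> together these form E_{+1}
```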
bwmG6Xp2vD0
CLeaR 2022 Conference
Reviewer_Dzzm
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_Dzzm, and the following are the paper you need to review:\n```<<bwmG6Xp2vD0>>```", "role": "user" }, { "content": "Review Content: Summary: The authors demonstrate that if additional assumptions are made on the vertices and the edges of a causal graph the resulting Markov equivalence class becomes much smaller. Their assumptions are: that vertices belong to predefined classes and causal relations can only be in one direction between any two classes.\nMain review: The authors show that when a causal graph is assumed to be constituted of vertices belonging to specific types, and that edges are assumed to exist only in one direction between any two classes, the resulting Markov equivalence set for the data distribution in question becomes smaller. The authors' presentation is clear and easy to follow. This can be interesting for the researchers that focus on this topic. I also agree that this perspective can result in other useful results down the line with respect to integration of causality and machine learning research. \n\nOn the other hand, I believe that the current paper's contribution to this cause borders on marginal. While the results can be useful for specific applications, in the paper's current state the extent of these applications seem to be limited. These limitations include the lack of an efficient algorithm for t-edge orientation, causal sufficiency assumption, the method inheriting (some) problems associated with cond. independence-based causal discovery methods, and lack of persuasive real applications. Perhaps an improved discussion of potential current applications and experiments on real datasets that indeed has type constraints would help the paper to better justify its arguments. 
Despite its shortcomings, I recommend the acceptance of the paper, given its potential downstream improvements by the related scientific community.\n\nIn addition to my comments above, I list some of the more specific comments and questions I have about the paper:\n- How would the authors imagine their work to develop causal discovery in the presence of latent confounders?\n- How does authors' theory and proposed algorithm work when we do not know the types of certain variables and/or when there is no constraint between certain pairs of variables in either direction?\n- I think the concept t-edge is confusing. t-DAG is an augmented version of the original object (DAG). But t-edge is something completely different than the original object (a graph edge), i.e. while a t-edge is a set of edges, an edge is just a tuple. I would recommend using something similar to a t-edge-set. I imagine the authors think that this allows formulating the relationship between types as a higher-level DAG; but in its current state this terminology is more confusing than helpful.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to reviewer Dzzm \nThank you for your comments. The general response (above) contains answers to your concerns about causal sufficiency (Point 1), the limitations of conditional independence-based methods (Point 2), and an enhanced discussion of potential applications (Point 3). Your other comments are addressed below.\n\n### 1. Lack of persuasive real applications\n\nOur main objective is to show the benefits of using assumptions on variable types in causal discovery. Such benefits are clearly visible in our simulation studies. We agree that showing results on real-world datasets would have been a plus. Our results on the pseudo-real data are a step in this direction. Due to the inherent complexity of working with real-world datasets (e.g., lack of ground truth graph), we have resorted to surveying the literature for studies where our theoretical framework would have been applicable. In future work, we will strive to develop collaborations with domain experts to apply our work to real-world problems.\n\n### 2. How does authors' theory and proposed algorithm work when we do not know the types of certain variables and/or when there is no constraint between certain pairs of variables in either direction?\n\nWe answer this question in two parts. The first case, where the type of some variables is unknown, is covered by the current version of our theoretical framework. The second case, where type consistency does not apply to some variables, would require a minor extension of our framework, which we have chosen to leave out to avoid complicating the presentation.\n\n**a) When we do not know the types of certain variables.**\nVariables for which the type is unknown should be assigned to a unique type of which they are the only instance. This ensures that the type consistency assumption still constrains their interactions with other types. Alternatively, one could think of assigning all variables with unknown types to a single (common) type. However, this is incorrect, since it forces all such variables to interact with other types in the same direction. 
We will add a comment about this in the appendix.\n\n**b) When there is no constraint between certain pairs of variables.**\nTo handle this case, one would need to add an “exclusion set” to the definition of t-DAGs, which would contain vertices (i.e., variables) to which the type consistency assumption is not applied. We left out this case to avoid complicating the presentation. Note that our Theorem 12 would still hold in this case, but the statement of Corollary 13 would need to be restricted to the case where the exclusion set is empty. We will also add a comment about this in the appendix.\n\n### 3. The concept t-edge is confusing [...] I would recommend something similar to a t-edge-set.\n\nWe understand the reviewer’s concerns and agree that the word t-edge-set would clearly expose the *set* nature of t-edges. However, as mentioned by the reviewer, t-edges are abstractions that allow reasoning about the orientation of type relationships in a higher-level graph. While we expose their set nature in the first place, we quickly move on to an edge-like notation $t_i \\xrightarrow{t} t_j$, which we believe facilitates the presentation of our results. We will keep this in mind and try to improve the presentation in the camera-ready.\n\n Global/general Response:\n```\nWe thank the reviewers for their positive feedback and for their insightful questions that have led us to reflect more upon this work and its future extensions. We address questions that were raised by more than one reviewer in this common response and respond to any reviewer-specific concerns separately.\n\n### 1. Extension to semi-Markovian settings (latent confounding) [Reviewers Dzzm, Q9JM]\nA full characterization of the benefits of typing assumptions in the presence of latent confounding goes beyond the scope of this work. However, we view this as an important future direction, since an extension of our framework to this case would strengthen its practical applicability. As a specific example of how our typing assumptions could help reduce the size of the equivalence class in the presence of latent confounding, consider a PAG with the edge $a \\mathbin{{\\circ}\\mspace{-2mu}{\\rightarrow}} b$, where $\\mathbin{{\\circ}\\mspace{-2mu}{\\rightarrow}}$ denotes that some MAGs entailed by the PAG contain either $a \\rightarrow b$ or $a \\leftrightarrow b$ (confounded) and where $a$ and $b$ are respectively of type $t_1$ and $t_2$. If we know that $t_2 \\xrightarrow{t} t_1$ (e.g., inferred from a two-type fork in another part of the graph), then we know that $a \\leftrightarrow b$, i.e., the relation between $a$ and $b$ is due to a latent confounder. We will pursue this direction further and explore adaptations of algorithms for this setting (e.g., FCI) as part of future work.\n\n### 2. Other types of structure learning algorithms (other than PC) [Reviewers Dzzm, Q9JM]\nOur typing assumptions are broadly applicable and are not limited to variants of the PC algorithm. In this work, we chose to focus on PC due to the simplicity and ubiquity of this algorithm. Nonetheless, t-MEC consistent versions of other algorithms, e.g., score-based ones, are definitely possible to derive. For example, consider the score-based algorithm DCDI [(Brouillard et al. 2020)](https://arxiv.org/abs/2007.01754), which searches for the highest-scoring DAG using continuous optimization. 
In this algorithm, the adjacency matrix of the graph is parameterized by a matrix of parameters representing the probability of having an edge between each pair of variables. It would be possible to directly enforce type-consistency by adding t-edge orientation probabilities to the parametrization. While such extensions are possible, they go beyond the scope of this paper and we keep them for future work.\n### 3. Important comment about the nature of variable types [Reviewers Nb5f] and potential practical applications [Reviewer Dzzm]\n\n> I do wonder how often practitioners will find themselves in a situation where they can a) confidently partition variables into distinct types and b) have no idea how these types relate to each other. In my experience applying these methods, a) and b) tend to move together, since having a basic temporal ordering of variables is quite common in practice. (Reviewers Nb5f)\n\nWe agree that it is very plausible that an expert who would be able to attribute types to variables would also have some intuition about how these types are related. Our framework is compatible with this setting; the orientation of some t-edges may be known a priori. However, our results also work in the case where such prior knowledge is either unavailable or unreliable. Below, we give two examples of cases where an expert may be able to partition variables into types, but where their ordering may be unknown. We will add these examples to the appendix, since these may help practitioners formulate their problems into our proposed framework.\n\n**Example 1 - Time-related types with missing information.**\nConsider the setting where variables are collected over multiple days. This corresponds to a typical tiered background knowledge setting, where we let tiers correspond to types, i.e., the day on which variables were measured (e.g., Day 1 variables, …, Day n variables). Now, consider the case where the exact date of measurement for some types is missing (or unreliable), e.g., for privacy consideration. That is, we know which variables were collected simultaneously, but we don’t know when. While it is not possible to order such types with respect to the others, we know that their inter-type causal relationships follow the arrow of time and that all variables in a type are at the same position in time. In contrast with standard tiered background knowledge, our framework allows for the inclusion of such partial information.\n\n\n**Example 2 - Entities as types.**\nIn some use-cases, types could be used to represent distinct entities, each characterized by multiple measured variables. For example, such entities could be devices on a network for which telemetry data is available, users in a social network and their behavioral characteristics, or machines in a manufacturing production line and their input/output quantities (akin to [Marazopoulou et al. (2016)](https://arxiv.org/abs/1605.04056)). In such settings, the nature of the entities is not necessarily sufficient for an expert to have an intuition about their ordering. For example, we may know that two sets of variables are from two distinct network devices, but not how these devices are related within the network topology. When it is reasonable to assume that entities interact in a directional manner (e.g., producer/consumer, influencer/follower relationships), our typing assumptions can be applied by assuming variables of each entity to be of the same type.\n### 4. 
Extension: learning the types of variables [Reviewers Q9JM, Nb5f]\nThank you both for your comments and suggestions. An extension to the case where variable types are learned from data goes beyond the scope of the current paper. Several details remain to be worked out, but we attempt to give an intuition of what we have in mind below.\n\n> I could not tell from the discussion exactly what they had in mind here. I believe it’s important that types in this context are not simply categories: rather, they have a substantive causal flavor. [Reviewer Q9Mj]\n\nWhat we have in mind is a setting in which observational data is complemented with metadata for each variable (e.g., a textual description, a DNA sequence, a picture), which would be used to assign them to types (akin to clustering). We absolutely agree that assigning variables to arbitrary categories, deprived of causal semantics, is not enough. Rather, we plan to explore an approach where variable type assignments are learned in parallel with the graph structure and where being assigned to a specific type limits the possible interactions of a variable according to the type consistency assumption. Achieving this successfully will likely necessitate additional assumptions, e.g., (at the very least) that the metadata contains relevant information. A thorough study of such assumptions is kept for future work.\n\n> Is it possible to incorporate some notion of uncertainty about types? The motivation here is that if I am unsure of how types relate to each other, I am probably also less sure of which variables belong to which type. Perhaps the authors may be able to sketch out a sensitivity analysis strategy that addresses this. [Reviewer Nb5F]\n\nIdentifying the types of variables from metadata is clearly non-trivial and taking uncertainty into account will very likely be necessary. For example, the metadata available for a variable may not be sufficient to assign it to a single class with high certainty. Relying on a sensitivity analysis strategy to measure how much the equivalence class would change if a variable were assigned to one type or another is an interesting idea. For example, if two of the learned types ($t_i$ and $t_j$) correspond to partitions of a true type ($t$), one may learn that assigning a variable with type $t$ to either $t_i$ or $t_j$ has no impact on the resulting equivalence class. We will keep this in mind when further pursuing this idea.\n```\n", "role": "user" }, { "content": "Title: Thanks for the response\nI thank the authors for their further comments and clarifications, and am looking forward to reading the published version of their paper.", "role": "assistant" } ]
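The thread above hinges on the type-consistency assumption: between any two types, all inter-type edges must point in the same direction, which is what defines a t-edge orientation. The sketch below is an illustrative check of that property on a toy graph; it is not the authors' implementation, and the function and variable names are assumptions.

```python
# Illustrative type-consistency check (assumed names; not the paper's code):
# collect the orientation of each t-edge and fail if any pair of types has
# inter-type edges pointing in both directions.
def t_edge_orientations(edges, type_of):
    """edges: iterable of (u, v) meaning u -> v; type_of: dict vertex -> type."""
    orient = {}  # key: sorted type pair -> "->" (first to second) or "<-"
    for u, v in edges:
        tu, tv = type_of[u], type_of[v]
        if tu == tv:
            continue  # intra-type edges are not constrained by type consistency
        key = tuple(sorted((tu, tv)))
        direction = "->" if key == (tu, tv) else "<-"
        if orient.get(key, direction) != direction:
            raise ValueError(f"type consistency violated between types {key}")
        orient[key] = direction
    return orient

# Toy example: X1, X2 have type A, Y1 has type B; both inter-type edges go A -> B.
edges = [("X1", "Y1"), ("X2", "Y1"), ("X1", "X2")]
type_of = {"X1": "A", "X2": "A", "Y1": "B"}
print(t_edge_orientations(edges, type_of))   # {('A', 'B'): '->'}
```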
bwmG6Xp2vD0
CLeaR 2022 Conference
Reviewer_Q9JM
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_Q9JM, and the following are the paper you need to review:\n```<<bwmG6Xp2vD0>>```", "role": "user" }, { "content": "Review Content: Summary: This paper proposes a method for incorporating types into causal models, with corresponding causal structure learning algorithms for respecting typing assumptions. In short, if t1 and t2 are different types, then at most one type can cause the other. The utility of the method is shown in theory and in some test cases.\nMain review: In my view this paper checks all of the boxes: it is well written and clear, the idea is a natural one, the results are largely convincing, and one is left wanting to know more. The central idea of the paper is very simple. It is not at all surprising that the kind of inductive bias the authors pinpoint would help in causal inference, simply by reducing the space of compatible causal graphs. But the authors go beyond this basic observation to clarify how exactly it will do so, and also give suggestive case studies in which the proposed algorithms really do appear to lead to gains. I also think the authors have a good argument for why this type of qualitative information is the kind of assumption that a domain expert might realistically be comfortable making.\n\nMost of my questions about extensions and further comparisons are probably already on the authors’ radar. How does this work in the semi-Markovian setting (latent confounders)? What about other causal structure learning algorithms other than PC? And so on.\n\nOne point in the discussion that made me a bit nervous was the authors’ discussion of automated ways of discovering types. I could not tell from the discussion exactly what they had in mind here, but I believe it’s important that types in this context are not simply categories: rather, they have a substantive causal flavor. Without some further reason for thinking that categories in a given domain line up nicely with types that would support the relevant kinds of inference, I worry that trying to induce the types from correlational data could lead to systematic faulty inferences. 
Perhaps the authors could say a bit more about what they had in mind there?\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to reviewer Q9JM\nThank you for your comments. The general response (above) contains answers to your questions on an extension to the semi-Markovian setting (Point 1), other causal structure learning algorithms (Point 2), and an extension where variable types are learned (Point 4).\n Global/general Response:\n```\nWe thank the reviewers for their positive feedback and for their insightful questions that have led us to reflect more upon this work and its future extensions. We address questions that were raised by more than one reviewer in this common response and respond to any reviewer-specific concerns separately.\n\n### 1. Extension to semi-Markovian settings (latent confounding) [Reviewers Dzzm, Q9JM]\nA full characterization of the benefits of typing assumptions in the presence of latent confounding goes beyond the scope of this work. However, we view this as an important future direction, since an extension of our framework to this case would strengthen its practical applicability. As a specific example of how our typing assumptions could help reduce the size of the equivalence class in the presence of latent confounding, consider a PAG with the edge $a \\mathbin{{\\circ}\\mspace{-2mu}{\\rightarrow}} b$, where $\\mathbin{{\\circ}\\mspace{-2mu}{\\rightarrow}}$ denotes that some MAGs entailed by the PAG contain either $a \\rightarrow b$ or $a \\leftrightarrow b$ (confounded) and where $a$ and $b$ are respectively of type $t_1$ and $t_2$. If we know that $t_2 \\xrightarrow{t} t_1$ (e.g., inferred from a two-type fork in another part of the graph), then we know that $a \\leftrightarrow b$, i.e., the relation between $a$ and $b$ is due to a latent confounder. We will pursue this direction further and explore adaptations of algorithms for this setting (e.g., FCI) as part of future work.\n\n### 2. Other types of structure learning algorithms (other than PC) [Reviewers Dzzm, Q9JM]\nOur typing assumptions are broadly applicable and are not limited to variants of the PC algorithm. In this work, we chose to focus on PC due to the simplicity and ubiquity of this algorithm. Nonetheless, t-MEC consistent versions of other algorithms, e.g., score-based ones, are definitely possible to derive. For example, consider the score-based algorithm DCDI [(Brouillard et al. 2020)](https://arxiv.org/abs/2007.01754), which searches for the highest-scoring DAG using continuous optimization. In this algorithm, the adjacency matrix of the graph is parameterized by a matrix of parameters representing the probability of having an edge between each pair of variables. It would be possible to directly enforce type-consistency by adding t-edge orientation probabilities to the parametrization. While such extensions are possible, they go beyond the scope of this paper and we keep them for future work.\n### 3. Important comment about the nature of variable types [Reviewers Nb5f] and potential practical applications [Reviewer Dzzm]\n\n> I do wonder how often practitioners will find themselves in a situation where they can a) confidently partition variables into distinct types and b) have no idea how these types relate to each other. 
In my experience applying these methods, a) and b) tend to move together, since having a basic temporal ordering of variables is quite common in practice. (Reviewers Nb5f)\n\nWe agree that it is very plausible that an expert who would be able to attribute types to variables would also have some intuition about how these types are related. Our framework is compatible with this setting; the orientation of some t-edges may be known a priori. However, our results also work in the case where such prior knowledge is either unavailable or unreliable. Below, we give two examples of cases where an expert may be able to partition variables into types, but where their ordering may be unknown. We will add these examples to the appendix, since these may help practitioners formulate their problems into our proposed framework.\n\n**Example 1 - Time-related types with missing information.**\nConsider the setting where variables are collected over multiple days. This corresponds to a typical tiered background knowledge setting, where we let tiers correspond to types, i.e., the day on which variables were measured (e.g., Day 1 variables, …, Day n variables). Now, consider the case where the exact date of measurement for some types is missing (or unreliable), e.g., for privacy consideration. That is, we know which variables were collected simultaneously, but we don’t know when. While it is not possible to order such types with respect to the others, we know that their inter-type causal relationships follow the arrow of time and that all variables in a type are at the same position in time. In contrast with standard tiered background knowledge, our framework allows for the inclusion of such partial information.\n\n\n**Example 2 - Entities as types.**\nIn some use-cases, types could be used to represent distinct entities, each characterized by multiple measured variables. For example, such entities could be devices on a network for which telemetry data is available, users in a social network and their behavioral characteristics, or machines in a manufacturing production line and their input/output quantities (akin to [Marazopoulou et al. (2016)](https://arxiv.org/abs/1605.04056)). In such settings, the nature of the entities is not necessarily sufficient for an expert to have an intuition about their ordering. For example, we may know that two sets of variables are from two distinct network devices, but not how these devices are related within the network topology. When it is reasonable to assume that entities interact in a directional manner (e.g., producer/consumer, influencer/follower relationships), our typing assumptions can be applied by assuming variables of each entity to be of the same type.\n### 4. Extension: learning the types of variables [Reviewers Q9JM, Nb5f]\nThank you both for your comments and suggestions. An extension to the case where variable types are learned from data goes beyond the scope of the current paper. Several details remain to be worked out, but we attempt to give an intuition of what we have in mind below.\n\n> I could not tell from the discussion exactly what they had in mind here. I believe it’s important that types in this context are not simply categories: rather, they have a substantive causal flavor. [Reviewer Q9Mj]\n\nWhat we have in mind is a setting in which observational data is complemented with metadata for each variable (e.g., a textual description, a DNA sequence, a picture), which would be used to assign them to types (akin to clustering). 
We absolutely agree that assigning variables to arbitrary categories, deprived of causal semantics, is not enough. Rather, we plan to explore an approach where variable type assignments are learned in parallel with the graph structure and where being assigned to a specific type limits the possible interactions of a variable according to the type consistency assumption. Achieving this successfully will likely necessitate additional assumptions, e.g., (at the very least) that the metadata contains relevant information. A thorough study of such assumptions is kept for future work.\n\n> Is it possible to incorporate some notion of uncertainty about types? The motivation here is that if I am unsure of how types relate to each other, I am probably also less sure of which variables belong to which type. Perhaps the authors may be able to sketch out a sensitivity analysis strategy that addresses this. [Reviewer Nb5F]\n\nIdentifying the types of variables from metadata is clearly non-trivial and taking uncertainty into account will very likely be necessary. For example, the metadata available for a variable may not be sufficient to assign it to a single class with high certainty. Relying on a sensitivity analysis strategy to measure how much the equivalence class would change if a variable were assigned to one type or another is an interesting idea. For example, if two of the learned types ($t_i$ and $t_j$) correspond to partitions of a true type ($t$), one may learn that assigning a variable with type $t$ to either $t_i$ or $t_j$ has no impact on the resulting equivalence class. We will keep this in mind when further pursuing this idea.\n```\n", "role": "user" }, { "content": "Thank you for the detailed response from the authors. It resolves the majority of my concerns. I reaffirm my positive endorsement for this paper.", "role": "assistant" } ]
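One concrete way to picture how typing assumptions shrink an equivalence class, as discussed in the responses above, is an orientation-propagation step: once any inter-type edge between two types is oriented, every remaining undirected edge between those two types inherits the same direction. The sketch below is only a speculative illustration of that idea, not the t-edge orientation procedure from the paper; all names are assumptions.

```python
# Speculative sketch (not the paper's algorithm): propagate known inter-type
# orientations to all remaining undirected edges between the same pair of types.
def propagate_type_orientations(undirected, directed, type_of):
    """undirected: set of frozenset({u, v}); directed: set of (u, v) meaning u -> v."""
    undirected, directed = set(undirected), set(directed)
    changed = True
    while changed:
        changed = False
        # Currently known t-edge orientations, read off the directed edges.
        t_orient = {(type_of[u], type_of[v]) for u, v in directed
                    if type_of[u] != type_of[v]}
        for e in list(undirected):
            u, v = tuple(e)
            tu, tv = type_of[u], type_of[v]
            if (tu, tv) in t_orient:
                directed.add((u, v)); undirected.discard(e); changed = True
            elif (tv, tu) in t_orient:
                directed.add((v, u)); undirected.discard(e); changed = True
    return undirected, directed

# Toy example: knowing X1 -> Y1 (types A -> B) also orients X2 - Y2 as X2 -> Y2.
types = {"X1": "A", "X2": "A", "Y1": "B", "Y2": "B"}
print(propagate_type_orientations({frozenset({"X2", "Y2"})}, {("X1", "Y1")}, types))
```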
bwmG6Xp2vD0
CLeaR 2022 Conference
Reviewer_Nb5F
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_Nb5F, and the following are the paper you need to review:\n```<<bwmG6Xp2vD0>>```", "role": "user" }, { "content": "Review Content: Summary: This paper develops a new class of graphical models called typed directed acyclic graphs. These tDAGs differ from DAGs in that each node is assigned a single type. Then, assuming that causal relationships between two types of variables arises in one direction, they develop Markov equivalence class theory for t-DAGs. This serves as the foundation for causal discovery, in which the authors modify the PC algorithm to handle types. Finally, the authors consider identification for random tDAGs.\nMain review: Quality: The paper overall is of excellent quality. The technical exposition is up to the standard one would expect for a paper in causal discovery.\nClarity: This paper is very well written. The flow of the paper begins with interesting motivational ideas, and naturally opens into a formulation of the problem\nOriginality: To my knowledge, I have not seen DAGs approached in this particular manner. Similar techniques have existed (e.g. tiers in causal discovery) but none that restrict the equivalence class.\nSignificance: I believe this work will be significant to practitioners of causal discovery, since it develops tools that help incorporate additional information often present in applications.\nPros: The paper overall proposes a novel and interesting idea. The authors have done a good job developing and shaping that idea.\nCons: While I do think this work is a step in the right direction, I do wonder how often practitioners will find themselves in a situation where they can a) confidently partition variables into distinct types and b) have no idea how these types relate to each other. In my experience applying these methods, a) and b) tend to move together, since having a basic temporal ordering of variables is quite common in practice. \nThe authors do address this partially by noting that types could be learned using ML classification methods, although this poses a further question of how exactly types might be identified from data (it is not immediately obvious to me that this identification task is trivial). 
\nIs it possible to incorporate some notion of uncertainty about types? The motivation here is that if I am unsure of how types relate to each other, I am probably also less sure of which variables belong to which type. Perhaps the authors may be able to sketch out a sensitivity analysis strategy that addresses this.\nMinor notes:\nI would consider using “collider” or “V-structure” in place of “immorality” as a more gender-neutral term.\nRating: 8 accept, good paper\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to reviewer Nb5F\nThank you for your comments. The general response (above) contains answers to your questions on the plausibility of an expert being able to assign types without having an intuition of the ordering (Point 3) and an extension where variable types are learned (Point 4). As for your comment on the term *immorality*, we absolutely agree and will replace it by *v-structure*.\n Global/general Response:\n```\nWe thank the reviewers for their positive feedback and for their insightful questions that have led us to reflect more upon this work and its future extensions. We address questions that were raised by more than one reviewer in this common response and respond to any reviewer-specific concerns separately.\n\n### 1. Extension to semi-Markovian settings (latent confounding) [Reviewers Dzzm, Q9JM]\nA full characterization of the benefits of typing assumptions in the presence of latent confounding goes beyond the scope of this work. However, we view this as an important future direction, since an extension of our framework to this case would strengthen its practical applicability. As a specific example of how our typing assumptions could help reduce the size of the equivalence class in the presence of latent confounding, consider a PAG with the edge $a \\mathbin{{\\circ}\\mspace{-2mu}{\\rightarrow}} b$, where $\\mathbin{{\\circ}\\mspace{-2mu}{\\rightarrow}}$ denotes that some MAGs entailed by the PAG contain either $a \\rightarrow b$ or $a \\leftrightarrow b$ (confounded) and where $a$ and $b$ are respectively of type $t_1$ and $t_2$. If we know that $t_2 \\xrightarrow{t} t_1$ (e.g., inferred from a two-type fork in another part of the graph), then we know that $a \\leftrightarrow b$, i.e., the relation between $a$ and $b$ is due to a latent confounder. We will pursue this direction further and explore adaptations of algorithms for this setting (e.g., FCI) as part of future work.\n\n### 2. Other types of structure learning algorithms (other than PC) [Reviewers Dzzm, Q9JM]\nOur typing assumptions are broadly applicable and are not limited to variants of the PC algorithm. In this work, we chose to focus on PC due to the simplicity and ubiquity of this algorithm. Nonetheless, t-MEC consistent versions of other algorithms, e.g., score-based ones, are definitely possible to derive. For example, consider the score-based algorithm DCDI [(Brouillard et al. 2020)](https://arxiv.org/abs/2007.01754), which searches for the highest-scoring DAG using continuous optimization. In this algorithm, the adjacency matrix of the graph is parameterized by a matrix of parameters representing the probability of having an edge between each pair of variables. It would be possible to directly enforce type-consistency by adding t-edge orientation probabilities to the parametrization. 
While such extensions are possible, they go beyond the scope of this paper and we keep them for future work.\n### 3. Important comment about the nature of variable types [Reviewers Nb5f] and potential practical applications [Reviewer Dzzm]\n\n> I do wonder how often practitioners will find themselves in a situation where they can a) confidently partition variables into distinct types and b) have no idea how these types relate to each other. In my experience applying these methods, a) and b) tend to move together, since having a basic temporal ordering of variables is quite common in practice. (Reviewers Nb5f)\n\nWe agree that it is very plausible that an expert who would be able to attribute types to variables would also have some intuition about how these types are related. Our framework is compatible with this setting; the orientation of some t-edges may be known a priori. However, our results also work in the case where such prior knowledge is either unavailable or unreliable. Below, we give two examples of cases where an expert may be able to partition variables into types, but where their ordering may be unknown. We will add these examples to the appendix, since these may help practitioners formulate their problems into our proposed framework.\n\n**Example 1 - Time-related types with missing information.**\nConsider the setting where variables are collected over multiple days. This corresponds to a typical tiered background knowledge setting, where we let tiers correspond to types, i.e., the day on which variables were measured (e.g., Day 1 variables, …, Day n variables). Now, consider the case where the exact date of measurement for some types is missing (or unreliable), e.g., for privacy consideration. That is, we know which variables were collected simultaneously, but we don’t know when. While it is not possible to order such types with respect to the others, we know that their inter-type causal relationships follow the arrow of time and that all variables in a type are at the same position in time. In contrast with standard tiered background knowledge, our framework allows for the inclusion of such partial information.\n\n\n**Example 2 - Entities as types.**\nIn some use-cases, types could be used to represent distinct entities, each characterized by multiple measured variables. For example, such entities could be devices on a network for which telemetry data is available, users in a social network and their behavioral characteristics, or machines in a manufacturing production line and their input/output quantities (akin to [Marazopoulou et al. (2016)](https://arxiv.org/abs/1605.04056)). In such settings, the nature of the entities is not necessarily sufficient for an expert to have an intuition about their ordering. For example, we may know that two sets of variables are from two distinct network devices, but not how these devices are related within the network topology. When it is reasonable to assume that entities interact in a directional manner (e.g., producer/consumer, influencer/follower relationships), our typing assumptions can be applied by assuming variables of each entity to be of the same type.\n### 4. Extension: learning the types of variables [Reviewers Q9JM, Nb5f]\nThank you both for your comments and suggestions. An extension to the case where variable types are learned from data goes beyond the scope of the current paper. 
Several details remain to be worked out, but we attempt to give an intuition of what we have in mind below.\n\n> I could not tell from the discussion exactly what they had in mind here. I believe it’s important that types in this context are not simply categories: rather, they have a substantive causal flavor. [Reviewer Q9Mj]\n\nWhat we have in mind is a setting in which observational data is complemented with metadata for each variable (e.g., a textual description, a DNA sequence, a picture), which would be used to assign them to types (akin to clustering). We absolutely agree that assigning variables to arbitrary categories, deprived of causal semantics, is not enough. Rather, we plan to explore an approach where variable type assignments are learned in parallel with the graph structure and where being assigned to a specific type limits the possible interactions of a variable according to the type consistency assumption. Achieving this successfully will likely necessitate additional assumptions, e.g., (at the very least) that the metadata contains relevant information. A thorough study of such assumptions is kept for future work.\n\n> Is it possible to incorporate some notion of uncertainty about types? The motivation here is that if I am unsure of how types relate to each other, I am probably also less sure of which variables belong to which type. Perhaps the authors may be able to sketch out a sensitivity analysis strategy that addresses this. [Reviewer Nb5F]\n\nIdentifying the types of variables from metadata is clearly non-trivial and taking uncertainty into account will very likely be necessary. For example, the metadata available for a variable may not be sufficient to assign it to a single class with high certainty. Relying on a sensitivity analysis strategy to measure how much the equivalence class would change if a variable were assigned to one type or another is an interesting idea. For example, if two of the learned types ($t_i$ and $t_j$) correspond to partitions of a true type ($t$), one may learn that assigning a variable with type $t$ to either $t_i$ or $t_j$ has no impact on the resulting equivalence class. We will keep this in mind when further pursuing this idea.\n```\n", "role": "user" }, { "content": "I value the authors' response. It adequately addresses the points I raised. I still support my favorable recommendation for this manuscript.", "role": "assistant" } ]
cDxT7WYhaD
CLeaR 2022 Conference
Reviewer_DmFu
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_DmFu, and the following are the paper you need to review:\n```<<cDxT7WYhaD>>```", "role": "user" }, { "content": "Review Content: Summary: This paper studies the Out-of-Distribution Generalization classification problem. And it test several simple reweighting methods on several benchmarks. \nMain review: This paper conducts some experiments on the OOD generalization classification tasks. The authors test some sample reweighting methods on four benchmarks and find that simple data balancing baselines can achieve competitive performance. However, the technical contributions of this work are limited since it does not propose any new methods or theoretical analysis, for which I think this paper does not meet with the bar of this conference.\nRating: 3 reject, not good enough\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Official rebuttal\nWe thank the reviewer for their comment and acknowledge their point of view.\n\nThe purpose of this submission is not to propose new methods or theory, but to take a sobering look at the rapidly growing field of worst-group-accuracy optimization, and show that very simple baselines do address all the benchmarks commonly considered in our community.\n\nIn particular, recent years have provided us with a plethora of increasingly complex algorithms to solve the worst-group-accuracy benchmarks, relying on large computational budgets, multiple hyper-parameters, and blurring our understanding of why and when generalization happens. Our submission shows that incredibly simple baselines, requiring far less compute and no hyper-parameters, achieve state-of-the-art performance across most popular datasets. This is an important and novel result, calling for a closer look at both algorithms and benchmarks.\n\nIn light of our results, new researchers in the field of worst-group-accuracy optimization have a solid reason to step back, implement and compare against simple subsampling baselines before endeavoring to develop novel, complex methods.\n", "role": "user" }, { "content": "I value the authors' response. 
Their response has resolved the majority of my concerns, and I may raise the score accordingly.", "role": "assistant" } ]
cDxT7WYhaD
CLeaR 2022 Conference
Reviewer_PzXS
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_PzXS, and the following are the paper you need to review:\n```<<cDxT7WYhaD>>```", "role": "user" }, { "content": "Review Content: Summary: An insightful paper comparing important baselines to recently proposed approaches such as JTT and DRO\nMain review: Quality: The paper studies the problem that often arises when the data has groups/attributes that can be in the minority and often times get sacrificed at test time as the model can learn to rely on spurious features that work well for majority group at train time. The problem has gotten quite some attention after the work of gDRO from Sagawa et al. (2019) and IRM Arjovsky et al. (2019). \nThe paper carries out a careful empirical comparison of natural baselines that can be divided into two categories one that access group information and others do not access group information to recently proposed JTT and gDRO. \nI found the results of the paper quite insightful, which I summarize below.\n1. If we do not have access to attribute information, then rebalancing using class label is the go to method and not JTT.\n2. If we have access to attribute information, then rebalancing using group label is approximately as good as gDRO barring some cases. Given its other benefits, it should still be off the shelf method to use in these settings. \n3. Very interestingly, in the hardest problem when attribute information is also not used for validation then the benefit of using these simpler baselines is even more pronounced. \n4. Also, in the hardest problem it seems when there is no attribute information (at train or validation time) and we are comparing RWY, SUBY, ERM, and JTT the winner seems to be SUBY. Hence, the conclusion I draw is that if there is class imbalance, then SUBY is winner for worst group accuracy. So if there was no class imbalance, then SUBY may be still similar to ERM?\n5. I would like to ask authors if they have intuitions or comparisons for the setting when there is no attribute information at train time, then methods like https://openreview.net/pdf?id=YygA0yppTR or https://openreview.net/pdf?id=b9PoimzZFJ seem to have shown impressive results. I ask this because authors conclude that it seems it's quite hard to beat these simple baselines. \n6. 
Also, the authors at the end ask if the idea of worst group accuracy a good one. Does it not fundamentally boil down to asking if there is one classifier that can achieve a reasonably good performance (not far from their individually optimal ones) for all the groups? I believe in water birds and celebA this should be the case but I am not sure about civil environments and multiNLI dataset. A proxy for checking this can be that train a classifier separately on each group and find the individual optimal performance. Find how the classifier performs when transferred to other groups, if the gap is large then perhaps there is no common classifier that performs well across the groups. If the gap is small, then there exists a good classifier and thus worst group accuracy is meaningful. I wonder if authors have similar intuition?\n\nClarity: The paper is very clear.\n\nOriginality: The paper's contributions are new. The methods are natural and important baselines.\n\nSignificance: The paper is quite important.\nRating: 8 accept, good paper\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n", "role": "assistant" }, { "content": "Title: Additional clarifications and answers to reviewer's questions\nWe thank the reviewer for their thorough reading of the paper, the detailed feedback, and the interesting related work that they pointed out. We also thank the reviewer for his acknowledgment of our *careful* empirical evaluation, our *insightful* results, and our *comprehensive* choice of datasets.\n\nBelow, we do our best to address the major concerns raised by the reviewer\n\n**Assertion: “... when there is no attribute information (at train or validation time) and we are comparing RWY, SUBY, ERM, and JTT the winner seems to be SUBY.”**\n\nWe want to clarify that Table 3 shows that RWY is the least impacted by the absence of attribute information instead of SUBY. However, our main conclusion is that the presence of attribute information during validation is crucial for all methods since they all heavily suffer from its absence, even though RWY suffers less from it on average. \n\n**Question: “Hence, the conclusion I draw is that if there is class imbalance, then SUBY is winner for worst group accuracy. So if there was no class imbalance, then SUBY may be still similar to ERM?”**\n\nIn the case of class balance, SUBY indeed becomes equivalent to ERM. RWY is also very similar to ERM in the case of class balance, except that samples are drawn with replacement in RWY. \n\n**Related work**\n\nWe would like to thank the Reviewer for pointing out two related works. [Diffenderfer et al. 2021] studies different forms of corruption on CIFAR and hence is inherently different from the specific task we attack in our submission. [Ahmed et al. 2021], however, is closely related to our work as it is concerned with inferring minority and majority groups and enforcing invariant solutions on them, which could be directly applied to our setting. Given its impressive performance on Coloured MNIST, we are keen to conduct similar experiments in non-synthetic setups such as CelebA, MultiNLI, or CivilComments in future work. \n\n**Assertion: “Also, the authors at the end ask if the idea of worst group accuracy a good one”**\n\nWe are afraid that is not entirely true. 
The question we pose is whether the *current benchmarks* used in the worst group accuracy setting are enough to train models that can correctly discard the spurious correlations present in them. For instance, when a model is trained on a dataset with only red cars, we can hardly expect it to discard the color. Similarly, we wonder if the datasets here are enough to train models that would perform well on different stratifications of the same data. We will be looking for methods that disprove this by significantly outperforming our simple baselines, by starting with the two related works mentioned in the review. \n\n**Question: “Does it not fundamentally boil down to asking if there is one classifier that can achieve a reasonably good performance (not far from their individually optimal ones) for all the groups?”**\n\nWe agree.\n\n**Question: “ A proxy for checking this can be that train a classifier separately on each group and find the individual optimal performance. Find how the classifier performs when transferred to other groups, if the gap is large then perhaps there is no common classifier that performs well across the groups. If the gap is small, then there exists a good classifier and thus worst group accuracy is meaningful. I wonder if authors have similar intuition?**\n\nWe are afraid that this might not work in our precise setting as we stratify using the class values. The reason is that if we train a classifier in each group, the best one would be the constant classifier, always predicting the label associated with that group. \n", "role": "user" }, { "content": "I appreciate the authors' efforts in addressing my concerns. The response covers nearly all of the issues I raised. I uphold my recommendation to accept this paper.", "role": "assistant" } ]
cDxT7WYhaD
CLeaR 2022 Conference
Reviewer_rWjE
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_rWjE, and the following are the paper you need to review:\n```<<cDxT7WYhaD>>```", "role": "user" }, { "content": "Review Content: Summary: Empirical results showing that simple reweighting techniques can perform comparably to sota methods for worst-group accuracy.\nMain review: The paper presents some thought-provoking results. If the goal is to do well on worst-group accuracy, then simple reweighting techniques can do as well on benchmark datasets.The set of datasets is comprehensive and proposed techniques are intuitive enough. I like the direction of this work. \n\nHowever, some of the results are underwhelming, and I feel that the paper can be improved by more theory/conjecture/analysis that explains the results. \n1. In table 2, gdro is better on multi-nli. Why? What is it about this dataset that makes reweighting less effective? Same for suby on civilcomments. What can we learn from these exceptions?\n\n2. Given that there are exceptions, should we still use the complicated methods? Under what conditions would reweighting be ideal? This can be answered by either extending the simulated toy dataset or by theoretical analysis on a simple setup.\n\nOverall, I'm unclear what to take away from this paper. Is the proposal that we should adopt reweighing techniques? or that the datasets needs to be more complicated? If the former, you need to explain the exceptions in Table 2 better. If the latter, it will be good to propose a dataset where you clearly see the difference between sota methods and reweighting (even a toy data is fine).\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Some further investigations and explanations following remarks by reviewer 3\nWe are happy to hear that you have found our results “thought-provoking” and “intuitive”. Thank you for your remark on the superior performance of G-DRO in MultiNLI. It inspired some further investigation that we will add to the manuscript. We hope this helps clarify!\n\n**Question: “In table 2, gdro is better on multi-nli. Why? What is it about this dataset that makes reweighting less effective? 
Same for suby on civilcomments. What can we learn from these exceptions?”**\n\nWe believe gDRO is performing best in Multi-NLI because of the nature of the dataset. The spurious attribute “presence of: ‘no’, ‘never’, ‘nobody’ or ‘nothing’ in the second sentence” only helps in a small proportion of the data (see table 1). It is therefore not a dominating spurious correlation and the classifier still needs to extract other features to achieve good accuracy. Group dro minimizes a soft maximum of the group losses, thus enabling a more flexible way to penalize this non-dominant spurious correlation.\n\nTherefore, the simpler baselines are less effective in this case because balancing either through subsampling or reweighting is a strong measure that is meant to completely decorrelate the spurious feature with the label, which is most useful when the spurious correlation dominates the data. Therefore, the gains of getting rid of such a spurious correlation are compensated by the harsh capacity control imposed by balancing. Indeed, subsampling throws away a big proportion of the data, and reweighting has a similar effect when stopped early, which is the case for MultiNLI (training for only a few epochs).\n\nIn the table shown in this link ([click here](https://i.imgur.com/Zd7HtRt.png)) we present a particular example that we found upon further investigation that serves to illustrate the hypothesis above; it shows a sentence evaluated using the three methods “gDRO” “RWG” and “SUBG”. The highlight in green shows the gradient norm of the contradiction’s class probability with respect to the input embeddings. It is a proxy to the importance of each word in the final decision of the model as to whether it is a contradiction or not. Group DRO picked up on the crucial word “remembering” in the first sentence and therefore could correctly classify the relation between the two sentences as a contradiction. The other methods lack this “feature” and therefore fail.\n\nAs for SUBY, it seems to be underperforming for most datasets, even performing worse than ERM. We, therefore, argue that this would disqualify it from being one of the baselines that compete with more complicated methods. \n\nFrom these exceptions, we can learn that these simple balancing baselines might only work on simpler cases where the spurious correlation is clear cut and is present in a big majority of the data. In more nuanced cases such as MultiNLI, more complex methods such as gDRO might be useful. We will add this discussion in the paper and hope it clarifies things!\n\n**Question: “Given that there are exceptions, should we still use the complicated methods? Under what conditions would reweighting be ideal? This can be answered by either extending the simulated toy dataset or by theoretical analysis on a simple setup. Overall, I'm unclear what to take away from this paper. Is the proposal that we should adopt reweighing techniques? or that the datasets needs to be more complicated?”**\n\nDespite the existence of exceptions, simple data balancing baselines achieve state-of-the-art performance on average. In the end, our results show quantitatively that simple baselines are worth trying first. Furthermore, they give a very good ballpark of the performance that we should expect even from more complicated methods.\nThe ideal conditions for reweighting (in contrast to subsampling) is when different groups are more similar, since reweighting uses more data. 
An extreme case would be when majority and minority groups are the same, and subsampling would just needlessly throw away i.i.d. data.\n\nOur main takeaway is: **considering simple data balancing strategies is useful and important when optimizing for worst-group-accuracy, as they achieve competitive performance across all commonly studied benchmarks.**\n\nWe do actually believe that more complicated datasets would be necessary to illustrate the strengths and weaknesses of different worst-group-accuracy optimizers. The fact that we can achieve such good performance by throwing away a lot of data, as subsampling does, is a clear pointer towards these datasets being very simple. In fact, we believe creating more realistic benchmarks is the way forward for improving our understanding of out-of-distribution generalization right now.", "role": "user" }, { "content": "Title: thank you\nThe analysis in response to the first question is very useful. It will be great to add such an interpretation to the paper.", "role": "assistant" } ]
cU5EeCQk5LX
CLeaR 2022 Conference
Reviewer_mdbo
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_mdbo, and the following are the paper you need to review:\n```<<cU5EeCQk5LX>>```", "role": "user" }, { "content": "Review Content: Summary: The main contribution of the paper to the literature is to provide identifiability results for causal discovery under the assumption of linearity and allowing for both latent confounding and deterministic relationships.\nMain review: Originality: The identifiability results under determinism are new and relevant. The novelty compared to existing works is well justified.\n\nSignificance: The paper addresses an important problem, which is relevant for the CLeaR community. Determinism is for example implicit in constitutive relationships. Complex systems typically mix up constitutive and causal relationships. The paper for sure provides new insights. The insight of using blind source separation to find out deterministic relationships is interesting and full of potentiality. The paper is likely to have broad impacts also outside the CLeaR community, in all complex settings where causal and constitutive relationships are mixed up.\n\nTechnical quality: The proposed approach is technically sound, with claims substantiated by sound theoretical results. However, some the implications of the so-called \"unique components condition\" should be better spelled out. It seems to me that this condition implies that one observes all the variables that enter as argument in a deterministic function of the structural causal model. Suppose that, in the SCM, X=f(Y,Z). It seems to me that the unique component condition (and the proposed identification method to function) needs that one observed Y and Z. Latent confounding is allowed only when is nondeterministic. I think this should be spelled out. Moreover, at p. 6 (bottom) it is written: \"we can use this unique component to compute the total causal effect from $x_i$ to $x_k$ (by dividing the coefficient of this unique component in $x_k$ by its coefficient in $x_i$)\". What is claimed between brackets should be rigorously proved.\n\nThe paper is clearly written and well organised. 
However, in the introduction section some illustrative examples of the different SCMs would help.\nRating: 8 accept, good paper\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer mdbo\nWe thank the reviewer for the comments and we are glad that the reviewer finds our work insightful. We address the concerns below. \n\n**Regarding the comment about “observing all the variables that enter as argument in a deterministic function of the structural causal model”**: \n\nWe would like to clarify that we only observe the “observed variables” in $\\mathcal{X}$, not the source variables. All source variables in $\\mathcal{S}$ are exogenous and are not observed. Recall that in our setup we define a latent confounder as a source variable $s$ that is exogenously connected to more than one observed variable (say $x_i$ and $x_j$). In this case, we can interpret $s$ as a distinct source for an *unobserved* variable, say $L$ (i.e., $s\\rightarrow L$ with generating model $L=s$), which is a latent confounder of the corresponding observed variables (i.e., replacing the edges $s\\rightarrow (x_i,x_j)$ by $L \\rightarrow (x_i, x_j) $). \n\nFor example, consider the structure in Figure 1(a), with an extra edge from $s_3$ to $x_3$ added to the graph. Therefore, $s_3$ (or equivalently the added child, $L$) is a confounder of all $x_1$, $x_2$, and $x_3$, and $x_3=f(x_1, x_2, L)$. First, the structure of $x_3$ in this model satisfies the unique component condition, which simply requires that there should not be any edges from $s_1$ to $x_3$ and from $s_2$ to $x_3$. Second, one argument of the function that generates $x_3$ is not observed, i.e., $L$. Third, $x_3$ is a deterministic function of $x_1$, $x_2$ and latent confounder $L$. Therefore, this is an example in which (i) we have a latent confounder (shared source) in the system, (ii) an observed variable is deterministically generated from other observed variables and latent confounders, and (iii) the generating model does satisfy our unique component condition. Also, this structure can be uniquely recovered by our proposed algorithm. \n\n**Regarding the statement that we compute the total causal effect from $x_i$ to $x_k$ by dividing the coefficients**: \n\nThe rigorous proof is included in the sufficiency proof in Appendix F.2, and we further explained this claim in Remark 9 (in the proof), Page 30. We will clarify it further in the text and will refer to the proof in the revised paper. \n\n**Illustrative examples of the different SCMs**: \n\nWe thank the reviewer for this suggestion regarding improving the presentation. We will add more illustrative examples in the introduction. ", "role": "user" }, { "content": "I appreciate the authors' reply. Most of my concerns have been addressed. I keep my recommendation for this paper to be accepted.", "role": "assistant" } ]
cU5EeCQk5LX
CLeaR 2022 Conference
Reviewer_P6YW
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_P6YW, and the following are the paper you need to review:\n```<<cU5EeCQk5LX>>```", "role": "user" }, { "content": "Review Content: Summary: The paper generalizes the linear SCM setting to the case where simultaneously latent variables and deterministic relationships between endogenous variables are existent. The main result of the paper provides necessary and sufficient conditions for unique identifiability of causal effects among the variables. For the sufficiency direction, an algorithm to learn the causal structure is provided. It is shown that this algorithm recovers the true causal structure better than some of the alternative algorithms when dealing with some different subclasses of this general SCMs.\nMain review: I personally like the paper. It is clearly written and contains a clear result. The result and the algorithm, although marginally significant in my opinion, are novel and worth publishing. The technicalities seem correct to me although I did not check the Appendix carefully. A problem with the paper in current from is that a large number of examples, explanations, and numerical experiments that fit in the main body of the paper and help to understand the content are put in the Appendix. I think the authors could have chosen what goes into the main text and what goes to the Appendix better. I have just a few comments below:\n\nAt first, the paper sounds as though they have solutions for all possible deterministic relationships, but as it turns out later there are rather strong combinatorial conditions that need to be satisfied. I think the authors should be more clear about this from the beginning.\n\nThe argument about the faithfulness assumption being satisfied almost surely, although formally correct, is misleading. It is true that any additional conditional independence is of order zero, but any SCM assumes many of these order-zero constraints, so it is not totally unreasonable to wonder whether more of these independencies not lie around. In addition, even after looking at the related Appendix section, I could not see that it is shown formally that this condition is equivalent to the faithfulness assumption. 
\n\nThere should be more discussion on the two main assumptions required in the main result. What are the causal interpretation and implications of these assumptions, and also how restrictive do they become? \n\nIt seems to me that the conditions could be used beyond the linear setting, as, in principle, they do not use the assumption of linearity. It would be worthwhile to have a discussion on whether they generalize the known results on identifiability of non-linear SCMs.\nRating: 8 accept, good paper\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer P6YW\nWe thank the reviewer for the comments and suggestions. We address the concerns below. \n\n**Regarding “strong combinatorial conditions need to be satisfied for identifiability”**: \n \nWe would like to emphasize that the identifiability conditions in Section 3 are necessary and sufficient. Therefore, there cannot be any solution for the model studied in our paper with weaker identifiability conditions. We will emphasize these conditions at the beginning of the presentation in the revised paper. We thank the reviewer for this suggestion. \n\n**Faithfulness assumption in P-SCM**: \n\nPart (a) of the P-SCM faithfulness assumption can be interpreted as: any variable is marginally dependent on all its descendants. This is equivalent to the faithfulness assumption in Salehkaleybar et al. (2020) on linear SCMs with latent variables (DS-P-SCMs), and is in fact less restrictive than the common faithfulness assumption in studying causal models, which requires that any conditional independency in the distribution implies a corresponding d-separation in the causal diagram. \n \nPart (b) of the assumption is there simply to prevent the case where the coefficients of the exogenous connections from a subset of two or more sources to an observed variable are proportional to the coefficients of their exogenous connections to another observed variable. For example, suppose $x_1 = b_{11} s_1 + b_{21}s_2$ and $x_2 = b_{12} s_1 + b_{22}s_2$. Assumption (b) prevents $(b_{11}, b_{21})$ from being proportional to $(b_{12}, b_{22})$. This is not a restrictive assumption, since if we were to introduce a continuous probability distribution over the non-zero entries of $\mathbf{B}$, part (b) would merely rule out some zero-probability events. \n\nWe will add more explanation about the implications of the assumptions in the revised paper. \n\n**Regarding the causal interpretation and restrictiveness of the conditions in the main result**: \n\nThe causal interpretation of the unique component condition is that certain edges from source variables to observed variables are not allowed in the ground-truth structure. The interpretation of the marriage condition is that the ancestor set of every observed variable has a sufficient number of exogenous connections (from source variables) so that it allows for unique identifiability of the causal effects to this observed variable. We will add these interpretations in the revised paper. \n\nAs for the restrictiveness of the conditions, we provided experimental results in Section 5 in which we tested the satisfiability of the conditions for randomly generated P-SCMs and DS-P-SCMs. We evaluated what portion of them satisfy the conditions under different generating model assumptions, different numbers of causal/exogenous connections, observed variables, and source-to-observed-node ratios. Please refer to Appendix G.1 for more details.
", "role": "user" }, { "content": "I'm grateful for the authors' response. It satisfactorily addresses the major issues I had. I uphold my recommendation to accept this paper.", "role": "assistant" } ]
cU5EeCQk5LX
CLeaR 2022 Conference
Reviewer_fNiS
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_fNiS, and the following are the paper you need to review:\n```<<cU5EeCQk5LX>>```", "role": "user" }, { "content": "Review Content: Summary: This paper considers the causal discovery problem in linear structural causal models, where the observed variables are of a deterministic function of other observed variables or latent confounders, but not with an exogenous source. \nMain review: This paper considers the causal discovery problem in linear structural causal models, where the observed variables are of a deterministic function of other observed variables or latent confounders, but not with an exogenous source. The authors give necessary and sufficient conditions for the unique identifiability of the model and the relative estimation method. Overall, this paper is written well, with complete theoretical identifiability analysis for the model and sufficient supplementary materials. However, I have some confuses, listed below.\n\n1. About the motivation of the introduced deterministic linear model, I agree that “G-SCMs without distinct sources can be used to model scenarios with less factors than observed nodes;”, but why deterministic relations are implied? For example, I mean in Fig.1(a), s_1 and s_2 can be removed while x_3 could have a distinct exogenous variable. I do not think the motivation is applicable for the proposed model. Btw, if it is indeed deterministic relation, the model in Eq. (2) cannot characterize this determinism.\n\n2. I am confused about Assumption 2, which I think is contradictory with the proposed “deterministic relations”. For instance, if we consider a two-variable case and suppose x_i -> x_j, then according to Assumption 2, x_j has at least one source in x_j that is not in x_i, which implies x_j does not deterministically depend on x_i or latent confounders. Please explain it.\n\n3. For the nonlinear causal discovery methods with deterministic relations and with identifiability, please also refer to the following papers.\n[1] Chen Z, Zhang K, and Chan L, \"Nonlinear Causal Discovery for High Dimensional Data: A Kernelized Trace Method,\" 2013 IEEE 13th International Conference on Data Mining, 2013, pp. 1003-1008, doi: 10.1109/ICDM.2013.103\n[2] Zeng Y, Hao Z, Cai R, et al. 
Nonlinear Causal Discovery for High-Dimensional Deterministic Data[J]. IEEE Transactions on Neural Networks and Learning Systems, 2021.\n[3] Janzing D, Mooij J, Zhang K, et al. Information-geometric approach to inferring causal directions[J]. Artificial Intelligence, 2012, 182: 1-31.\n\n4. The sentence in the paper “when the sources are non-Gaussian random variables, the mixing matrix W can be recovered using Independent Component Analysis (ICA) or overcomplete ICA methods” is not correct, but using the under-complete ICA. \n\n5. It might be better to use the assumptions in ICA or BSS to derive Assumption 4, rather than “separability”, and the input of the algorithm is the data X, but not the mixing matrix. \n\n6. How could we know the number of latent sources or the size of the mixing matrix? \n\nI would raise my score if the authors could resolve all my concerns.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer fNiS\nWe thank the reviewer for the comments. We address the concerns below. \n\n1. We first would like to clarify the definition of a latent confounder: Consider a setup in which a source variable $s$ is exogenously connected to more than one observed variable (say $x_1, x_2$). In this case, we can interpret $s$ as a distinct source for an *unobserved* variable, say $L$ (i.e., $s\\rightarrow L \\rightarrow (x_1, x_2)$), which is a latent confounder of $x_1$ and $x_2$. For example, in Figure 1(a), $s_3$ can be interpreted as $s_3 \\rightarrow L$, where $L$ is a latent confounder of $x_1$ and $x_2$. \nThe reviewer mentioned a modification of Figure 1(a), in which $s_1$ and $s_2$ are removed and a new source is added to $x_3$. We note that by our definition of latent confounders, in this modified model $x_1$ and $x_2$ will both be deterministic functions of the latent confounder $L$. Therefore, the fact that we have less sources than observed variables results in deterministically generated variables. We would like to emphasize that an observed variable is non-deterministic only if it is exogenously connected to distinct sources—sources that are not exogenously connected to any other observed variables. If deterministic relations do not exist in the system, then each observed variable must be associated with a distinct source, and therefore, the number of the sources must be more than or equal to the number of the observed variables. \n \n2. In the special case of two-variable the reviewer is correct: If Assumption 2 is satisfied, then $x_j$ cannot be a deterministic function of $x_i$ or latent confounders (shared sources). However, when there are more than two observed variables, Assumption 2 does not always contradict non-determinism. One such example is shown in Figure 1(a). In this model, $x_3$ deterministically depends on its parents $x_1$ and $x_2$, but $x_3$ includes one more source ($s_2$) than $x_1$, and one more source ($s_1$) than $x_2$. Therefore Assumption 2 is satisfied and yet we have a deterministic relation. \n\n Please note that, we believe the two-variable example with deterministic relation is a degenerate case, and there is no way to recover the causal direction between $x_i $ and $x_j$ only based on observations. Suppose there is a latent confounder $s_k$, and the generating model is: $x_i=b_1s_i+b_2 s_k$, $x_j=ax_i+b_3 s_k$. 
Then we can always construct an alternative model $x_j=ab_1 s_i+ (a b_2 + b_3) s_k$, $x_i=1/a~ x_j - b_3/a~ s_k$ with the same observational distributions of $x_i$ and $x_j$, where $x_j$ is a parent of $x_i$. In this case the true model cannot be distinguished from the alternative model by any method. \n \n3. We thank the reviewer for the references. We will add these references in the revised paper. \n \n4. We would like to clarify that the dimension of the mixing matrix $\\mathbf{W}$ is $p\\times m$, where $p$ is the number of observed variables, and $m$ is the number of sources. That is, we observe $p$ variables in $\\mathcal{X}$, each of which is a linear combination of the $m$ independent sources in $\\mathcal{S}$. Therefore, whether to use undercomplete ICA, standard ICA or overcomplete ICA for blind source separation depends on the number of the sources and the number of the observed variables, which further depends on the model structure. We note that even if deterministic relations exist in the system, there may still be more or equal sources than observed variables (such as Figure 1(a)). \nFurther, we would like to clarify that undercomplete ICA is in fact included in standard ICA in most literature, as both can be solved using the same algorithm (e.g., FastICA). \n \n5. The reason we assume separability instead of more specific assumptions on the model (such as non-Gaussianity of the sources which is assumed in LiNGAM) is that there are numerous models and corresponding BSS methods where the true mixing matrix can be correctly recovered. We do not want to restrict our recovery approach to a specific BSS model. For the input of the algorithm, we can add a black box for BSS and then change the input to observational data $X$. \n \n6. It depends on the assumptions of the selected BSS method. In standard ICA there are always equal numbers of observed variables and latent sources. In undercomplete ICA and SBSSR model by Behr et al (2018), the number of latent sources can be recovered by additional statistical testings along with the recovery of the mixing matrix in the algorithm. In overcomplete ICA, the number of latent sources is always assumed to be known in advance. ", "role": "user" }, { "content": "Title: Response to CLeaR 2022 Conference Paper105 Authors\nThanks for the authors’ responses. I have increased my score to 7. \n\nFollowing the authors’ responses, I have the following suggestions,\n\ni) it might be better to give a definition/explanation for deterministic relations, since some scholars may have thought that such deterministic relations are those $x=f(Pa(x))$ without noise (Glymour et al., 2019). \n\nii) as for Assumption 2, it might be better to explain clearly that the model and/or theory does not hold for the two-variable cases.\n\n(Glymour et al., 2019) Glymour C, Zhang K, Spirtes P. Review of causal discovery methods based on graphical models[J]. Frontiers in genetics, 2019, 10: 524.\n", "role": "assistant" }, { "content": "Title: Thanks for the suggestions\nWe thank the reviewer for the suggestions. We will add both explanations in the revised paper.", "role": "user" }, { "content": "I am pleased to have had the opportunity to discuss the points above with the author, which has definitely improved the overall quality of the paper.", "role": "assistant" } ]
dHsFFekd_-o
CLeaR 2022 Conference
Reviewer_h9au
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_h9au, and the following are the paper you need to review:\n```<<dHsFFekd_-o>>```", "role": "user" }, { "content": "Review Content: Summary: The authors present a novel approach for discovering disentangled representations in data sets with auxiliary variables.\nMain review: # Review\nThe authors present a novel approach to discovering disentangled latent variables from observations and auxiliary data generated by a latent variable model. They present identifiability results for their approach (up to permutation), and present empirical comparisons with related work on synthetic data. Both the motivation and the presentation is very clear and easy to follow. The comparison with previous literature is sufficient (at least at the conceptual level, see below). The findings are novel and of interest to the conference audience, and overall topic of the paper is very timely.\n\n## Questions and suggestions\n\nBelow I present some questions and suggestions to the authors:\n\n- Possibly the only obvious shortcoming of the paper is comparison on real data with similar works, especially with iVAE. Do the authors plan to add such results?\n- It would be a good idea to mention the dependency of the method on the observed auxiliary variable (e.g. time) is a necessary condition for the application of this approach in the abstract of the paper.\n- Given that sparsity is a concept that can be understood differently according to the context (e.g. in compressed sensing vs. scientific computation), I think it would be a good idea to delineate what sparse means in this context early on in the paper.\n- Can the regularization on G_a and G_z undermine sufficient (time or action) variability if taken to large levels? \n\n## Minor points\n- In Thm 4, the explanation for the first assumption is given below at least partly. So in the theorem you could refer the author to the lines below, and from there to Def. 9.\n- \"This can be interpreted as having a sufficiently nonlinear transition model.\" Pg. 6. Can any linear transition model meet this criterion?\n- For an even more lively exposition (e.g. in Pg. 10), demonstrating how differently results obtained by iVAE would be in a similar example (e.g. that in Fig. 
1) could be beneficial.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response\nWe thank the reviewer again for the work they have put into reviewing our work.\n\n**Experiments on real data:** We agree that experiments demonstrating the usefulness of mechanism sparsity for disentanglement on realistic data would be very interesting. We believe the rigorous theory, the algorithm contribution, the careful writing to make non-trivial ideas approachable, and our existing experiments already constitute a novel and significant contribution that will be of interest to the community, and that publishing the paper and code as it stands now allows the research community to build on these ideas. We note that the paper is already 51 pages, half of which are careful proofs of our theory in the appendix. We are very excited to explore more realistic applications of mechanism sparsity, but leave it as future work.\n\n**Mentioning dependency on auxiliary variables in abstract:** We agree this point should be more explicit in the abstract and will adjust it in camera-ready version (see proposed abstract modification below). We would like to make a clarification. Whether our method requires auxiliary variables depends on how you define “auxiliary variables”. If you consider “previous observations” as an auxiliary variable (like in [1]), then yes our method requires it. If you only consider the random variable $A^t$ to be the auxiliary variable, then, no, our method does not require observing some $A^t$ (previous observations are sufficient). \n\nWe have changed the first sentence of the abstract for: \n“This work introduces a novel principle we call disentanglement via mechanism sparsity regularization, which can be applied when the latent factors of interest depend sparsely on past latent factors and/or observed auxiliary variables.”\n\n**Effect of excessive regularization on sufficient variability:** Thank you for this question. It is important to understand that the sufficient variability assumptions are assumptions about the ground-truth data generative process, and not the learned model itself. Thus, regularization will never affect these assumptions. That being said, excessive regularization can indeed be problematic by hindering the ability of the learned model to fit the data properly, which would prevent you from applying Thm. 5, which assumes the learned distribution is exactly equal to the ground-truth distribution. We will add a quick discussion on this phenomenon in the experimental section.\n\n**Can any linear transition model satisfy the sufficient variability assumption of Thm. 4 (linear identifiability)?** Great question. It turns out interpreting this assumption as having a sufficiently nonlinear transition model was our mistake since there are linear models that satisfy this assumption. For example, $\\lambda(z^{<t}, a^{< t}) := Wz^{t-1}$ with $W$ an invertible matrix would satisfy the sufficient variability assumption of Thm. 4. We will fix this in the text. \n\n[1] Aapo Hyvarinen, Hiroaki Sasaki, Richard E. Turner. Nonlinear ICA Using Auxiliary Variables and Generalized Contrastive Learning. AISTATS (2019).\n\n Global/general Response:\n```\nWe thank the reviewers for their valuable feedback. 
*Reviewer fiBq* concisely conveyed the intuition of our contribution: “The theory makes use of the intuition that entangled representations will require a denser adjacency matrix and that inducing sparsity on this matrix will push the latent factors to disentangle.” All three reviewers found the paper well-motivated, well-written and easy to follow. For example, *reviewer vHyw* writes “The paper is very well written and easy to follow, despite several more complex statements. The initial research statement is well motivated, and the examples given throughout the paper makes it accessible, on a higher level, to a wider audience (i.e. what the paper is about in general, and what it tries to solve).” In addition, all three reviewers found the paper of interest. For example, *reviewer h9au* mentioned “The findings are novel and of interest to the conference audience, and overall topic of the paper is very timely.”\n\nWe will make our code publicly available. The reviewers can find an anonymized version here: https://anonymous.4open.science/r/anon_disentanglement_via_mechanism_sparsity-C43C/\n\nWe will address the concerns of each reviewer individually. Note that some answers provide novel experiments.\n```\n", "role": "user" }, { "content": "Title: Thanks for the response\nI thank the authors for their further clarifications, and welcome the changes they plan to make to improve the exposition and presentation. I am looking forward to reading the published version of their paper.", "role": "assistant" } ]
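To make the mechanism-sparsity idea discussed in the thread above concrete, here is a minimal PyTorch sketch, assuming a learnable soft adjacency matrix `G_z` that gates the latent transition and an L1 penalty on that mask; the module and parameter names are illustrative and this is not the authors' released implementation.

```python
# Minimal sketch of mechanism sparsity regularization (assumed names, not the
# authors' code): each latent z_i^t is predicted from the previous latents
# masked by a learnable soft adjacency matrix G_z, and an L1 penalty on the
# mask pushes the learned mechanisms to be sparse.
import torch
import torch.nn as nn

class SparseTransition(nn.Module):
    def __init__(self, d_z: int, hidden: int = 64):
        super().__init__()
        self.g_logits = nn.Parameter(torch.zeros(d_z, d_z))  # logits of G_z[i, j]
        self.nets = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_z, hidden), nn.ReLU(), nn.Linear(hidden, 1))
             for _ in range(d_z)]
        )

    def forward(self, z_prev: torch.Tensor) -> torch.Tensor:
        gates = torch.sigmoid(self.g_logits)                  # soft mask in [0, 1]
        # Row i of the mask selects the parents of z_i^t among z^{t-1}.
        means = [net(gates[i] * z_prev) for i, net in enumerate(self.nets)]
        return torch.cat(means, dim=-1)                       # predicted mean of z^t

    def sparsity_penalty(self) -> torch.Tensor:
        return torch.sigmoid(self.g_logits).sum()             # L1 on the mask

# A training loop would minimize  -ELBO + lambda * model.sparsity_penalty().
# As noted in the rebuttal, an overly large lambda does not violate the
# sufficient-variability assumptions (which concern the ground-truth process);
# it simply prevents the learned model from fitting the data well.
```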
dHsFFekd_-o
CLeaR 2022 Conference
Reviewer_fiBq
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_fiBq, and the following are the paper you need to review:\n```<<dHsFFekd_-o>>```", "role": "user" }, { "content": "Review Content: Summary: Interesting paper achieving disentanglement by sparsity regularisation\nMain review: This paper proposes to achieve disentanglement of latent factors by simultaneously learning them with the sparse causal graph that relates them. Building on recent results in the nonlinear ICA literature, permutation-identifiability is proven under a number of assumptions. The theory makes use of the intuition that entangled representations will require a denser adjancency matrix and that inducing sparsity on this matrix will push the latent factors to disentangle.\n\n\n## Strengths\n\nThe paper is written and structured clearly and easy to follow. It provides examples throughout to illustrate the different concepts, which further helps understanding. Additionally, the theory presented is well-motivated throughout.\n\nWhile the experiments are limited to (rather unrealistic) small-scale simulated data, they show the improved disentanglement performance under the proposed approach, as well as the effect of violated assumptions consolidating the presented theory.\n\n\n## Weaknesses\n\nWhile most assumptions are well motivated, for some this motivation is missing and I would be curious to know how the authors expect these assumptions to restrict the applicability of their method to different scenarios. For example:\n1) What does it mean in practice that $\\mathbf{f}$ is assumed to be a diffeomorphism? \n2) What does assumption 1 in theorem 5 mean in practice?\n\nAdditionally, some symbols are not well-defined. Most importantly, $G$ is described to be causal, but no explanation is given for why that would be the case. Additionally, I find the usage of $\\mathbf{T}$ slightly confusing. While it generally describes the sufficient statistic, on page 5 it is also described as an element-wise nonlinearity. On the next line, an example for the Gaussian case is given, where $\\mathbf{T}(z) := z$, which is not nonlinear. Other symbols for which I would appreciate a clearer definition or separate description are $y$ and $Py$ (Definition 2), $p$ (starting in Theorem 4) and $\\tau$ (section 2.4.1). 
\n\n\n### Disclaimer\nI did not check the proofs provided in the Appendix in detail.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\n", "role": "assistant" }, { "content": "Title: Response\nWe thank the reviewer for the work they have put into reviewing our work.\n\n**Practical implications that f must be a diffeomorphism?** Thank you for this very important and interesting question. We answer it with an example that violates this assumption. Choose any vision task presenting occlusion, i.e. when an object can be completely hidden behind another one. Continuing with the tree-robot-ball example, occlusion could happen for instance when the ball is hidden behind either the tree or the robot. This situation violates the invertibility of f since given the pixel representation of the situation, we cannot infer the position of the ball, since it can either be behind the tree or behind the robot. Relaxing this invertibility assumption is a very interesting question left for future work. Diffeomorphism also means $f$ and its inverse must be differentiable, however we believe this assumption is purely technical and could probably be relaxed. Our experiments were performed with a piecewise linear $f$ (because of the ReLU activations) which are not differentiable on a set of (Lebesgue) measure 0, but this was not an issue in practice. We will happily add a discussion on these points to our manuscript.\n\n**What does assumption 1 in theorem 5 mean in practice?** This assumption says two things: First, that $k = 1$ (dimensionality of the sufficient statistic T) and, second, that T is invertible (and differentiable, but this is a purely technical assumption that could probably be relaxed). For instance if both the mean and the variance of $p(z^t \\mid z^{<t}, a^{<t})$ in the ground-truth model are assumed to vary a lot with $z^{<t}$ and $a^{<t}$, then Thm. 5 does not provide any guarantee since this situation is better modelled when $k=2$. Another counterexample, which we gave in the paper, is if the mean remains fixed and only the variance changes, i.e. $T(z) = z \\odot z$, which is not invertible. \n\nTo verify how important the $k=1$ assumption is, we decided to apply our method on a dataset with $k=2$. The data generating process is very similar to the one considered in Fig. 3 with a non-diagonal adjacency matrix, but here the variance of $z^t$ depends on $z^{t-1}$ or $a^{t-1}$. Note that this means the model is misspecified (since we do not model this dependency). The results of these experiments are presented here: https://anonymous.4open.science/r/anon_disentanglement_via_mechanism_sparsity-C43C/additional_figure/plot_k=2.pdf.\n\nThis experiment indicates that Assumption 1 of Thm. 5 might not be crucial in practice and that our approach is robust to model misspecification to some extent. We will add these experiments to the manuscript.\n\n**Why is G causal?** Thank you for asking for clarifications. Strictly speaking, our work does not require $G$ to be causal, in the sense that it can be used to predict the effect of interventions targeting the latent variables. But when the index $t$ refers to time, which is what we assume throughout, interpreting $G$ as causal makes a lot of sense, since the future cannot affect the past. The argument is analogous to why Granger causality is actual causality under some assumptions (See Thm. 
10.3 in [1] for instance). We will add a quick clarification in Section 2.\n\n**Confusion around T being a sufficient statistic and a nonlinearity.** We clarified the problematic sentence just below Definition 1 by changing “element-wise nonlinearity” to “element-wise sufficient statistic”, since, indeed, T will not necessarily be nonlinear.\n\n[1] Jonas Peters, Dominik Janzing, and Bernhard Scholkopf. Elements of Causal Inference. MIT Press (2017).\n\n Global/general Response:\n```\nWe thank the reviewers for their valuable feedback. *Reviewer fiBq* concisely conveyed the intuition of our contribution: “The theory makes use of the intuition that entangled representations will require a denser adjacency matrix and that inducing sparsity on this matrix will push the latent factors to disentangle.” All three reviewers found the paper well-motivated, well-written and easy to follow. For example, *reviewer vHyw* writes “The paper is very well written and easy to follow, despite several more complex statements. The initial research statement is well motivated, and the examples given throughout the paper makes it accessible, on a higher level, to a wider audience (i.e. what the paper is about in general, and what it tries to solve).” In addition, all three reviewers found the paper of interest. For example, *reviewer h9au* mentioned “The findings are novel and of interest to the conference audience, and overall topic of the paper is very timely.”\n\nWe will make our code publicly available. The reviewers can find an anonymized version here: https://anonymous.4open.science/r/anon_disentanglement_via_mechanism_sparsity-C43C/\n\nWe will address the concerns of each reviewer individually. Note that some answers provide novel experiments.\n```\n", "role": "user" }, { "content": "Title: Reply\nThank you for the clarifications and additional details. I would be happy to see this paper accepted.", "role": "assistant" } ]
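As a rough numerical illustration of the rebuttal's point that a linear transition $\lambda(z^{t-1}) = W z^{t-1}$ with invertible $W$ can already vary enough, the sketch below checks that differences of the map taken at generic points span the full latent space. This only conveys the flavor of the condition, not the exact assumption statement of Thm. 4.

```python
# Rough numerical illustration (an assumption-laden sketch, not the exact
# condition of Thm. 4): for lambda(z) = W z with W invertible, the differences
# lambda(z_i) - lambda(z_0) taken at generic points z_i span R^d.
import numpy as np

rng = np.random.default_rng(0)
d = 5
W = rng.normal(size=(d, d))                  # invertible with probability 1
lam = lambda z: W @ z

z0 = rng.normal(size=d)
diffs = np.stack([lam(rng.normal(size=d)) - lam(z0) for _ in range(d)])
print(np.linalg.matrix_rank(diffs))          # prints 5: full rank, i.e. spanning
```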
dHsFFekd_-o
CLeaR 2022 Conference
Reviewer_vHyw
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_vHyw, and the following are the paper you need to review:\n```<<dHsFFekd_-o>>```", "role": "user" }, { "content": "Review Content: Summary: This paper proposes a method for finding the true, latent factors and their causal relations by enforcing a sparsity among mechanisms. The method is build upon works around ICA and includes proofs under which settings it can correctly disentangle the factors. The paper also discusses a VAE-based approach to implement the proposed method.\nMain review: The paper is very well written and easy to follow, despite several more complex statements. The initial research statement is well motivated, and the examples given throughout the paper makes it accessible, on a higher level, to a wider audience (i.e. what the paper is about in general, and what it tries to solve). The approach is novel and concerns an important problem. The idea of using the sparsity of mechanisms to find the true mechanisms seems fair, since it should intuitively hold in many real-world situations. The assumptions of the method, such as that the conditional density belongs to an exponential family, the Jacobian varying sufficiently etc., might limit it in practical applications and can make it hard to verify whether they hold in a real-world dataset or not, but are necessary considering the goal of provably finding the mechanisms. Overall, I would consider the paper as a good contribution to the ICA and causality research field.\n\nThe only major drawback of this paper is the limited experimental evaluation. The proposed method is evaluated on two synthetic datasets, one with a time-sparsity setting, and another for action-sparsity. While the proposed method outperforms the shown baselines, it is not clear how robust the method is, and whether it would work on more realistic data. For instance, the noise added to the observation $X$ was chosen to have a small standard deviation of 0.01. What if this noise is increased? How does the method handle higher noise levels? Similarly, what if the noise level is increased for the ground truth transition function (covariance is chosen to be $0.0001I$). 
In terms of the adjacency matrices, more diverse graphs with random connections can be tested.\n\n#### Minor comments\n* Page 5, last paragraph: typo 'which we which we'\n* Appendix page 43, 4th line in B.1: double punctuation 'with negative slope of 0.2. .'\n* Page 7, paragraph _Sparsity_: It is stated that the minimal graph is found that 'properly fit[s] the data'. What does 'properly' mean in this case? Is it that the data is fit up to a small error $\\epsilon$ which is supposed to be lower than the regularization?\n* Page 45, paragraph _Learned mechanism_: The MLP for the mean layer is said to use 5 layers with each 512 hidden units. This seems quite large compared to the ground truth functions (f being a 3 layer MLP with 20 hidden units, mean functions often sin/cos-based). Is such a large network necessary to learn the sin/cos functions in the synthetic datasets, and get stable results?\n* Page 12, Figure 3: The chosen plot style is not very intuitive to understand at first. The y-axis represents a 0-1 scaling of three metrics which have different trends (SHD - lower is better, R/MCC - higher is better). The label 'diag. graph' and 'non-diag. graph' on this axis add to the confusion since it is a label for the row, not the y-axis itself. It also wasn't clear that the multiple points/crosses represent multiple runs, because several overlay each other. I would recommend splitting this plot into multiple ones, which makes it much easier to grasp for the reader.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response\nWe would like to thank the reviewer for the work they have put into reviewing this work.\n\n**Limited experimental evaluation:** We would like to clarify that we tested our approach on more than two datasets, more precisely 2 * 4 = 8 datasets. For both time and action-sparsity, we have a dataset with diagonal graph, non-diagonal graph, a graph violating the “graph criterion” of Thm. 5 and a dataset violating sufficient variability. The experiments on datasets violating assumptions can be found in Appendix B.3. That being said, we agree with the reviewer that more experiments would further confirm the viability of the proposed approach. Based on the reviewer’s suggestion, we performed a few experiments to verify how robust our approach is to (i) higher observation noise levels, (ii) higher latent noise levels, and (iii) more diverse (random) graphs. We will add these experiments to the camera-ready version.\n\n**(i)** Higher observation noise levels: https://anonymous.4open.science/r/anon_disentanglement_via_mechanism_sparsity-C43C/additional_figure/plot_obs_noise.pdf\n\nWe considered the same data generating process with the same non-diagonal graphs as in Fig. 3 on p.12, but varying the noise level on the observation. We consider a standard deviation of 0.01, 0.1 and 0.5. We can see that the time-sparsity experiment suffers from higher noise level but not the action-sparsity dataset. We emphasize that the theory developed in this work applies for any noise level. That being said, the gap in performance we see in the time-sparsity experiment might be explained by a worse sample complexity due to noisier data. It is also possible that our simple choice of approximate posterior $q(z^{\\leq T} \\mid x^{\\leq T}, a^{< T})$ is a worse approximation when the noise is greater. 
Investigating these questions is left as future work.\n\n**(ii)** Higher latent noise levels: https://anonymous.4open.science/r/anon_disentanglement_via_mechanism_sparsity-C43C/additional_figure/plot_latent_noise.pdf\n\nAs above, we consider the same data generating process with the same non-diagonal graphs as in Fig. 3 on p.12, but, this time, varying the noise level on the latent variables. We consider standard deviations of 0.01, 0.1 and 0.5. Again, our theory applies for any noise level on the latents. The approach performs well everywhere except on the time-sparsity dataset for the maximal standard deviation of 0.5 (the method yields good performance with std = 0.01 and 0.1). Given that the initial latent is sampled from a Nomal(0,I), we consider a std of 0.5 to be very high.\n\n**(iii)** Diverse random graphs: https://anonymous.4open.science/r/anon_disentanglement_via_mechanism_sparsity-C43C/additional_figure/plot_random_graphs.pdf \n\nIn this experiment, we used the same sin/cos data generating process, but we sampled the graphs randomly using independent Bernoulli variables with probability of sampling an edge of 0.25, 0.5 and 0.75. In this case, we do not know if the sampled graph satisfies the graphical criterion on Thm. 5. What we observe is that we obtain very good MCC (sometimes better than the experiments of Fig. 3) with probability 0.25 and 0.5, and less so with 0.75. We believe this is because a denser graph is less likely to satisfy the graphical criterion.\n\n**Page 7, paragraph “Sparsity”:** This statement was intended to be informal. We adjusted it to make it clearer and more rigorous: “This assumption is satisfied if $\\hat{G}$ is a minimal graph among all graphs that allow the model to exactly match the ground-truth generative distribution.”\n\n**Is such a large network necessary to learn the sin/cos functions in the synthetic datasets, and get stable results?** Clarification: it is true that the _ground truth_ decoder function has only 3 layers of 20 units. However, the learned decoder has 6 layers of 512 units. Preliminary experiments showed that overparameterizing both the transition network and the decoder achieved better performance. This is in line with recent observations in the deep learning community that overparameterization is beneficial for optimization, see for example [1].\n\n[1] Sanjeev Arora, Nadav Cohen, Elad Hazan. On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization. ICML (2018).\n\n Global/general Response:\n```\nWe thank the reviewers for their valuable feedback. *Reviewer fiBq* concisely conveyed the intuition of our contribution: “The theory makes use of the intuition that entangled representations will require a denser adjacency matrix and that inducing sparsity on this matrix will push the latent factors to disentangle.” All three reviewers found the paper well-motivated, well-written and easy to follow. For example, *reviewer vHyw* writes “The paper is very well written and easy to follow, despite several more complex statements. The initial research statement is well motivated, and the examples given throughout the paper makes it accessible, on a higher level, to a wider audience (i.e. what the paper is about in general, and what it tries to solve).” In addition, all three reviewers found the paper of interest. For example, *reviewer h9au* mentioned “The findings are novel and of interest to the conference audience, and overall topic of the paper is very timely.”\n\nWe will make our code publicly available. 
The reviewers can find an anonymized version here: https://anonymous.4open.science/r/anon_disentanglement_via_mechanism_sparsity-C43C/\n\nWe will address the concerns of each reviewer individually. Note that some answers provide novel experiments.\n```\n", "role": "user" }, { "content": "Title: Response to rebuttal\nThank you for your response, it clarifies all my questions. ", "role": "assistant" } ]
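The random-graph robustness setup described in the response above amounts to sampling adjacency matrices with i.i.d. Bernoulli(p) edges; the snippet below is an illustrative reconstruction of that setup, not an excerpt from the released code.

```python
# Illustrative sketch of the random-graph experiment from the response (not
# taken from the released code): adjacency matrices with i.i.d. Bernoulli(p)
# edges, for edge probabilities p in {0.25, 0.5, 0.75}.
import numpy as np

def sample_graph(d_z: int, p: float, seed: int = 0) -> np.ndarray:
    """G[i, j] = 1 means z_j^{t-1} (or a_j^{t-1}) is a parent of z_i^t."""
    rng = np.random.default_rng(seed)
    return (rng.random((d_z, d_z)) < p).astype(int)

graphs = {p: sample_graph(d_z=10, p=p) for p in (0.25, 0.5, 0.75)}
# Denser graphs (p = 0.75) are less likely to satisfy the graphical criterion
# of Thm. 5, which is consistent with the weaker MCC reported in that regime.
```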
dnPQAse2ULc
CLeaR 2022 Conference
Reviewer_n7SG
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_n7SG, and the following are the paper you need to review:\n```<<dnPQAse2ULc>>```", "role": "user" }, { "content": "Review Content: Summary: This paper proposes an hierarchical reinforcement learning algorithm based on discovering effects (mode normalized state space differences after acting) and using them to solve tasks.\nMain review: The paper is full of new interesting ideas. In particular, that of using and learning effects rather than coarser level states or options for HRL. All of the paper, however, relies on a precise definition of effects: mode normalized *differences between states* after taking actions.\n\nA big criticism of mine is that the algorithm in the tabular setting is essentially equivalent to using (random) states as goals, which is nothing new. In a deterministic tabular MDP, if s is a one-hot encoding description of the state, then e_t(s, a) = s' - s is just a +1 on s' and -1 on s. After mode-normalization, e_c(s, a) is +1 in s' and -1 in the s'(s, mode_a). Thus there are at least as many e_c as possible s', and thus the whole thing is essentially a UVFA with random states as goals.\n\nThe same criticism applies for high dimensional deep RL settings where we assume s' are essentially one to one with (s, a) pairs for non-terminal s. In this case we typically have a very bad reachability problem of using states as goals since for a given s in general it's impossible to reach a preselected s' (think of a video where we ask that we reach an exact image, this will generally be impossible since we can't control every pixel intensity that may change with the time of day / light / slight changes in motion).\n\nThus, the algorithm only makes sense for cases where we have a highly descriptive compositional state representation like (coordinate x, coordinate y, do I have the ball) and so forth. This means that for deep RL we are ignoring the representation learning issue, which is OK for a first paper on the subject. 
However, the fact that it doesn't work at all for tabular cases is in my opinion a bad omen.\n\nI also think the authors should provide themselves this kind of analysis in the paper and be more honest about the limitations of their method.\n\nOther fundamental criticisms I have are as follows:\n - since the e_t used are estimated by a neural network forward model, this is going to be incredibly hard to scale. Learning transition functions that are relevant is difficult in deep RL and typically miss very important information. See for instance https://arxiv.org/pdf/1507.08750.pdf and all this line of research.\n- e_c^goals come from a vae which means they are learned vectors. How on earth do they match *exactly* the e_c^step that's used to match in algorithms 1 and 2. This seems very fishy and quite a limitation of the method. \n\nAll in all, the paper is interesting and has a lot of new ideas, is clear, and I find the work significant. However, it has a lot of very strong problems that are not even adressed properly in the paper. If they provide a more proper analysis of the limitations in the paper I will increase my score.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to reviewer n7SG\n> A big criticism of mine is that the algorithm in the tabular setting is essentially equivalent to using (random) states as goals, which is nothing new...\n\nThis is an interesting case. If the state is a one-hot encoding, the change in state will, as the reviewer said, be -1 and +1 on s and s' respectively. The change on s will be identified as not controlled since its normative state is -1 (the agent leaves that state no matter what action it takes). Note that the agent may or may not control the change in s'; this depends on the environment dynamics and the agent's action. A peculiar case is when the agent typically would go to $\\hat{s'}$ i.e. the mode is $\\hat{s'} - s = [-1, 1, 0]$ but taking an unusual action would lead to $[-1, 0, 1]$. In this case, the agent had an effect on two states $e_c(s,a) = [-1, 0, 1] - [-1, 1, 0] = [0, -1, 1]$; that is, the agent avoided one state and made another state happen. This is consistent with, for example, the agent's oxygen decreasing by a constant value when it moves but stays the same when it does not.\n\nIn general, there seems to be a slight misunderstanding. The normative world computed in Eq. 5 (the mode) considers the current state, and thus, it is not the \"overall\" normal state of the system. In other words, the mode is only over actions and not actions and states. We will clarify this in the paper.\n\nRegarding the method being equivalent to using random states as goals. Let us take an extreme case to show that CEHRL is doing something different. If, for example, we have an environment where the agent's actions do not have any effect on the environment, Eq. 5 would output zeros everywhere since the outcome of every action is the same, the mode; even if the environment has infinite states. Thus, for CEHRL there would be only one goal but infinite for the case of the random states. More specifically, CEHRL only considers controlled states and randomly sets one of those states as a goal.\n\n> The same criticism applies for high dimensional deep RL settings where we assume s' are essentially one to one with (s, a) pairs for non-terminal s...\n\n* We indeed consider a deterministic setting. 
For stochastic environments, the mode should include all possible next states. We will add this limitation and ways to solve it in a \"limitations\" section. The key idea to estimate the mode for stochastic domains (and to avoid a call per action to the NN) is to use something similar to the mode-collapse problem/feature in GANs.\n* As in our previous answer, Eq. 5 identifies controlled changes only. Using this kind of goal instead of the entire observation/state, CEHRL alleviates the reachability problem. An example of this benefit is presented in experiments 4.4 where using the whole state instead of controlled effects leads to worse learning scalability the more dynamics in the environment. Nonetheless, we do agree that more work needs to be done to set fully achievable goals.\n\n> Thus, the algorithm only makes sense for cases where we have a highly descriptive compositional state representation like (coordinate x, coordinate y, do I have the ball) and so forth...\n\nAlthough CEHRL is not limited to compositional states (as described in previous examples), we do agree that our method will work best with highly disentangled states. This, as the reviewer mentioned, is a representation learning problem that we did not consider in this work. We will make this assumption more evident in a new \"limitations\" section.\n\n> since the e_t used are estimated by a neural network forward model, this is going to be incredibly hard to scale. Learning transition functions that are relevant is difficult in deep RL and typically miss very important information. See for instance https://arxiv.org/pdf/1507.08750.pdf and all this line of research.\n\nIndeed, predicting the next state is extremely hard, but this limitation is not exclusive to our method, and future advances in this area can be incorporated. It is important to notice that although we use a forward model ($\\hat{e_t}(s,a)$), it is OK for this model to not be perfect on uncontrolled effects. The only requirement is that it makes the same mistakes in every alternative world, e.g., if the model predicts that an uncontrolled car will continue straight, but it actually turns left, the controlled effects will still be correct if the predictions are consistent for each action. Although it helps, this does not fully solve the issue.\n\n> e_c^goals come from a vae which means they are learned vectors. How on earth do they match exactly the e_c^step that's used to match in algorithms 1 and 2. This seems very fishy and quite a limitation of the method.\n\nWe compute the euclidean distance between the two effects and use a threshold to decide if the goal has been achieved. The code refers to this as \"has_achieved\"; clearly, this function has not been defined and will be added to the paper for better understanding.", "role": "user" }, { "content": "Thank you for the authors' feedback. The response covers nearly all of the issues I raised. I uphold my recommendation to accept this paper.", "role": "assistant" } ]
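The quantities debated in this thread can be written down compactly under the stated assumptions (deterministic transitions, discrete actions, vector-valued states). In the sketch below, `step` is a hypothetical transition function, and `has_achieved` implements the Euclidean-distance-with-threshold goal test the authors describe in the rebuttal; none of this is the paper's actual code.

```python
# Sketch of total vs. controlled effects under the deterministic, discrete-action
# assumptions stated in the thread; `step` is a hypothetical transition function.
import numpy as np

def total_effect(step, s, a):
    """e_t(s, a) = s' - s."""
    return step(s, a) - s

def per_dim_mode(effects):
    """Per-dimension mode over actions: the 'normative' effect in state s."""
    modes = []
    for col in effects.T:
        values, counts = np.unique(col, return_counts=True)
        modes.append(values[np.argmax(counts)])
    return np.array(modes)

def controlled_effect(step, s, a, actions):
    """e_c(s, a) = e_t(s, a) - per-dimension mode over actions of e_t(s, .)."""
    effects = np.stack([total_effect(step, s, a_i) for a_i in actions])
    return total_effect(step, s, a) - per_dim_mode(effects)

def has_achieved(e_c_step, e_c_goal, threshold=1e-3):
    """Goal test described in the rebuttal: Euclidean distance below a threshold."""
    return np.linalg.norm(e_c_step - e_c_goal) <= threshold

# One-hot example from the discussion: s = [1,0,0], the usual next state is
# [0,1,0], but an unusual action leads to [0,0,1]; the controlled effect is
# e_c = (s' - s) - mode = [-1,0,1] - [-1,1,0] = [0,-1,1].
```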
dnPQAse2ULc
CLeaR 2022 Conference
Reviewer_B1ya
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_B1ya, and the following are the paper you need to review:\n```<<dnPQAse2ULc>>```", "role": "user" }, { "content": "Review Content: Summary: The paper proposes a new hierarchical RL method, which uses finite differences in the state space (so-called effects) as sub-goals for the training agent. The authors propose an implementation of their method, and present a series of experiments in a grid world setting.\nMain review: ### Originality\n\nThe proposed method seems original to me.\n\n### Significance\n\nThe paper is not particularly relevant or significant to the CLeaR community, even if it tries to address an important problem, that of exploration and scalability in RL. While the authors try to relate their method to existing concepts in causality, I feel that the connection is very thin, and not really justified. In the end the proposed method is very similar to model-based RL, where the transition model is learned by residual learning ($f(s'-g(s'|s)|s,a)$ instead of $f(s'|s,a)$), and where the residues are used as goals for exploration. The proposed method might be interesting to the (hierarchical) RL community, but a comparison (methodological and experimental) to existing hierarchical RL methods is missing to judge of the significance of the proposed method.\n\n### Technical quality\n\nThe paper is not technically sound. Several concepts appear to be ill-defined (see detailed comments), the assumptions are not clearly stated from the start, the experimental setup appears to be flawed (comparing a method with pre-training to a method learning from scratch) or irrelevant (comparing a learned/learning policy to a random policy), and the paper is poorly written in general (it is still unclear to me how the proposed method works, which parts of the bells-and-whistles are required for it to work, if those bells-and-whistles are also part of the baseline method, and finally what exactly is the experimental setup).\n\n### Clarity\n\nI found the paper to be poorly written overall, although it is well-written in terms of language. It is hard to parse what the core contribution of the paper is, and what are just implementation details thrown here and there (see detailed comments). 
I also found the motivation not well explained. How does eq.5 disentangle effects from the agents ?\n\n### Detailed comments\n\np.1: we propose agents that learn by estimating the causal effect of an action on the\nstate -> Isn't that what model-based RL do already ? How does your approach differ from model-based RL ?\n\np.2 Fig.1: I would suggest improving that Figure and its caption. It took me q while to understand how (a) and (b) are related, and simply to understand what this figure describes (an RL environment).\n\np.3 Fig.2: How different is that from hierarchical RL ?\n\np.3: the definition for $e_t(s,a)$ seems rather abstract, and is not mathematically correct since $s'$ appears only one side. Unless you only consider deterministic environments where $s'=f(s,a)$ ? Also, what about state spaces that are not equipped with addition ? Or state spaces where addition does not make sense ?\n\np.3 eq.1: Here again, $\\hat{x}$ only appears on the right-hand side.\n\np.4 eq. 5: $\\text{mode}_{a_i \\in A} e_t(s, a_i)$ -> this does not make sense. from what I understand $e_t$ has same dimensionality as the state space, and therefore can be multi-dimensional. How then do you decide on the mode of a multi-dimensional function ?\n\np.4 eq.6: $e_t$ is missing arguments here. I suppose you mean $e_t(s,a)$ ? Then you're basically learning a model of the world, via residual regression.\n\np.4: In our experiments, we assume [...] -> You should make it clear from the beginning what are the assumptions required by your method.\n\np.5: distribution of effects E -> is this distribution conditioned on a state $s$ ? I don't see how an effect makes any sense without the corresponding original state.\n\np.5: we store experiences of the form [...] -> why storing $e_c(s,a)$, when it can be deduced from $s'-s$ ? Also, $r'$ and $d'$ haven't been defined yet.\n\np.5: or a punishment $P$ otherwise -> why adding a punishment ? Is a necessary trick for the approach to work ? Or is there something more fundamental ?\n\np.5: Experiences are augmented [...] to each experience. -> These sound like a lot of bells and whistles. How is each of these components related to your contribution ? Are they necessary for your approach to work ? Are the observed experimental improvements due to these components, or to the fact that you model causal effects ? I suggest to either remove these components altogether, or perform a proper ablation study with those components on and off to help the reader in performing a proper credit assignment.\n\np.5: Distribution of effects -> I do not understand how this relates to your approach as described in Section 2, or how modeling a distribution of controlled effects $s'-\\hat{s}'$ without conditioning on a state and action makes sense at all.\n\np.5 $1: $(e_c^{goal}|s,e_c)$ what is $e_c$ here ? Does it come from a specific action ? Which one ?\n\np.6: Task policy -> same comment here. I do not see how aiming for a change $s'-s$ hat does not consider the current stats $s$ makes any sense at all. Why not just learning a distribution over states then, and set those states as goals ? And why do you keep $s$ in the definition of $\\pi_t$ on the left-hand-side, if in the end you don't use $s$ ?\n\np.6 alg.1: What do add_with_her or train_if_needed do ? Those routines are not defined.\n\np.7: CEHRL and the baseline -> what is the baseline ? It's not been described yet.\n\np.7: using the pre-trained models -> it seems your approach then is cheating ? 
Does the DQN baseline also have access to the samples you use for pre-training your models ?\n\np.8 Fig.4: the x-axis should report the number of interactions with the environment, not the training time, to measure sample-efficiency without bias.\n\np.8: random effect exploration performs [...] three times more often -> this is very much expected, if you compare a random policy to any policy that has learned a little bit on a non-trivial task, the random policy will achieve less complex task more often.\n\np8: we record [...] by each method -> which methods ? Do you record training samples ? Does your method start from scratch, or does it use pre-trained models ?\n\np.9: controlled effects reduce the effect space to model -> I don't see how. If I expand the controlled effect definition, I get $s'-\\hat{s}$. How is the resulting space smaller than that of the total effect $s'-s$ ?\nRating: 4 possibly reject, but has redeeming facets\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to reviewer B1ya\n> the proposed method is very similar to model-based RL...\n\nCEHRL is a model-based RL algorithm where a generative model $E$ and a low-level policy $\\pi_e$ enable temporal abstraction. A necessary clarification is that CEHRL does not use residues/total effects as goals but controlled effects. This is stated throughout the paper; moreover, section 2 is devoted to identifying these controlled effects. Note that the goal space is largely reduced by using controlled effects instead of total effects/residues as goals. We provide evidence for this in experiment 4.4. \n\n> How does eq.5 disentangle effects from the agents?\n\nSection 2 describes this in detail. In essence, Eq. 5 compares what happened with a normative world to determine if the agent controlled an effect (change on the state) or not.\n\n> p.1: Isn't that what model-based RL do already?...\n\nCEHRL falls under the model-based paradigm where a generative model $E$ and a low-level policy $\\pi_e$ enable temporal abstraction. CEHRL differentiates from other model-based approaches by 1) creating a world model that considers only controlled aspects of the environment using counterfactual measures; 2) enabling temporal abstraction by combining a generative model and a low-level policy.\n\n> p.3 Fig.2: How different is that from hierarchical RL?\n\nWe propose a hierarchical RL (HRL) method and do not claim that we are doing something different from HRL. There are different approaches to hierarchical RL; Fig. 2 reflects our approach.\n\n> p.3: the definition for $e_t(s,a)$ seems rather abstract, and is not mathematically correct since...\n\nWe indeed consider a deterministic setting. For stochastic environments, the mode should include all possible next states. We will add this limitation and ways to solve it in a new \"limitations\" section. The key idea to estimate the mode for stochastic domains (and to avoid a call per action to the NN) is to use something similar to the mode-collapse problem/feature in GANs.\n\n> p.4 eq. 5: $\\text{mode}_{a_i \\in A} e_t(s,a_i)$-> this does not make sense...\n\nThe mode is computed per dimension. We will add a clarification to section 2.\n\n> p.4 eq.6: $e_t$ is missing arguments here...\n\nThanks, we will change to $e_t(s,a)$. Indeed, we use the total effect as target of the forward model $\\hat{e_t}$.\n\n> p.4: In our experiments, we assume [...] 
-> You should make it clear from the beginning...\n\nIndeed, we will move these to the definition of $e_t$.\n\n> p.5: distribution of effects E -> is this distribution conditioned on a state?...\n\nThe generative model $E$ is not conditioned on the state. By doing this, CEHRL considers all possible controlled effects as goals. Note the generative model produces controlled effects achievable in K steps and not just the next/single step. Including the state would, in many scenarios, reduce the complexity of the goal-space. We decided to leave this addition to future iterations of the proposed framework.\n\n> p.5: we store experiences of the form [...] -> why storing, when it can be deduced...\n\nIt is true that $e_c$ is redundant information and can be deduced from s and s'. We will correct this in the manuscript and move the definition of d' and r'.\n\n> p.5: or a punishment other wise -> why adding a punishment ?...\n\nAdding a small punishment is a common approach to motivate the agent to complete goals as soon as possible. It is not a requirement for the approach but a desired behavior.\n\n> p.5: Experiences are augmented [...] to each experience. -> These sound like a lot of bells and whistles...\n\nAugmenting experiences, as in Hindsight Experience Replay, is a popular approach to learning goal-conditioned policies.\n\n> p.5: Distribution of effects -> I do not understand how this relates to your approach...\n\nThe goal is to have agents that operate at a different time scale as the environment (section 3 \"Eq. 5 can be used for temporal abstraction\"). CEHRL does not generate the next (one time step) controlled state but any possible future (K time steps) controlled by the agent. Consequently, conditioning on the action would not achieve temporal abstraction since it would make the agent myopic to a single time step.\n\n> p.5 (e_c^{goal}|s,e_c)e_c$ here ?...\n\nThere seems to be a misunderstanding here. As explained before, the generative model produces a set of temporally abstract goals. Here $e_c$ is a possible way of controlling the environment, learned by $E$ using Eq. 5.\nFor example, in Fig 1(b) $E$ could generate the candidate goals of \"move red box left\" or \"move agent right\" (among others). Note that there is no action \"move red box\", just move up, down, left and right. These candidate effects are temporally abstract; they could be completed in <K time steps (K actions) and not the next step.\n\n> p.6: Task policy -> same comment here...\n\n* As explained before, CEHRL works with controlled effects. This approach is considerably different from working with random states. \n* CEHRL has three main components: $E$, $\\pi_e$ and $\\pi_t$. $E$ and $\\pi_e$ have the same implementation during exploration and task-specific phases. In contrast, $\\pi_t$ is conditioned on the state and candidate effects; it just happens that one implementation of $\\pi_t$ uses the state (during task-specific learning) and one does not (during exploration). Furthermore, future implementations of $\\pi_t$ for exploration may use the state, but the proposed framework stays the same.\n\n> p.7: CEHRL and the baseline -> what is the baseline...\n\nTrue, we will move the baseline description to the first paragraph.\n\n> p.7: using the pre-trained models -> it seems your approach then is cheating?...\n\nNo, we are not cheating. Experiment 4.1 studies if having a hierarchy of effects benefits learning new tasks. 
As stated in experiment 4.3, the baseline took 5 hours to learn two of the three given tasks, and it was given at least 5 hours of training time for the third unsolved task. CEHRL learns the hierarchy of effects in 6 hours and takes ~1 hour to learn the three given tasks. This shows that 1) we have given the baseline even more data than CEHRL and 2) building a hierarchy of effects may seem costly a priory, but it enables the agent to reuse prior knowledge, and consequently, its cost is amortized the more knowledge is reused between tasks. To summarize, the whole point of these two experiments is to show that controlled effects can enable efficient hierarchical learning and, as a consequence, agents reuse prior knowledge to scale better with the number and complexity of the given tasks.\n\n> p.8 Fig.4: the x-axis should report the number of interactions with the environment...\n\nAs mentioned in the paper, it would be unfair to the baseline to report interactions with the environment. DQN can do more interactions per hour due to its shorter training steps (it just trains a single neural net). In other words, DQN may take 1 hour to do 1M steps but CEHRL may take 3 hours to do the same steps since it needs to call to multiple NNs (for inference or training). This would not be reflected when using interactions. Although the numbers would look better for CEHRL when using interactions, we think it is fairer to use hours in this case.\n\n> p.8: random effect exploration performs [...] three times more often -> this is very much expected...\n\nExperiment 4.2 shows that random effect exploration can explore better even without seeing any task. The generative model and effect policy are trained in an unsupervised RL manner, i.e., even without seeing any biased reward (extrinsic or intrinsic), the agent can learn meaningful exploration. We do agree that it is expected for the random effect exploration to perform better than random actions, but the central message of this experiment is that leveraging prior knowledge is important not just to learn new tasks but to explore and discover novel effects (which in turn helps to learn new tasks).\n\nAll in all, the message we want to deliver in this work is that, as with humans, controlled effects can be used to acquire prior knowledge and that this prior knowledge is essential for efficient learning.\n\n> p.9: controlled effects reduce the effect space to model -> I don't see how...\nThis question seems to be rooted in the same misunderstanding mentioned in a previous question; see our previous clarification.\n\nExperiment 4.4 shows that when using total effects, the model does not scale well with the uncontrolled dynamics of the environment but it does when using controlled effects. This is because many total effects will be clustered together by $e_c(s, a)$. For example, under total effects, the effect of the agent moving right when a demon moves left is different than when a demon moves right. In contrast, both cases are the same under controlled effects since the agent does not control the demon. This is a simple case, but since we do not control most changes in our environment (e.g., wind, leaves, cars, rain), identifying controlled effects can lead to considerable gains in efficiency.", "role": "user" }, { "content": "Thank you for the authors' reply. It satisfactorily addresses most of my concerns, and I am inclined to revise the rating upwards.", "role": "assistant" } ]
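The experience augmentation mentioned in the response (borrowed from Hindsight Experience Replay) and the reward/punishment scheme can be sketched as follows; the record fields and the simple relabel-with-achieved-effect strategy are assumptions for illustration, not the paper's exact implementation.

```python
# Hedged sketch of HER-style relabeling with controlled effects as goals
# (field names and the "relabel with the achieved effect" strategy are assumed).
from dataclasses import dataclass, replace
import numpy as np

@dataclass
class Experience:
    s: np.ndarray
    a: int
    s_next: np.ndarray
    e_c: np.ndarray      # controlled effect realised at this step
    goal: np.ndarray     # controlled effect the agent was asked to produce
    r: float
    done: bool

def goal_reward(e_c, goal, reward=1.0, punishment=-0.01, tol=1e-3):
    """Reward R when the goal effect is matched, a small punishment P otherwise."""
    return reward if np.linalg.norm(e_c - goal) <= tol else punishment

def relabel(exp: Experience) -> Experience:
    """Duplicate a stored transition pretending its achieved effect was the goal,
    so the effect policy also learns from rollouts that missed the original goal."""
    return replace(exp, goal=exp.e_c.copy(),
                   r=goal_reward(exp.e_c, exp.e_c), done=True)
```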
dnPQAse2ULc
CLeaR 2022 Conference
Reviewer_wXCW
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_wXCW, and the following are the paper you need to review:\n```<<dnPQAse2ULc>>```", "role": "user" }, { "content": "Review Content: Summary: Hierarchical RL algorithm with very little insights or contributions to causal learning\nMain review: The paper proposes an interesting hierarchical RL algorithm, that attempts to decompose a long-horizon sparse reward task into compositional sub-tasks. These are accomplished using (1) a task-specifying meta-controller; (2) a VAE to sample feasible tasks; (3) a task-conditioned low-level action policy. The only causal connection is in identifying the effect of an action -- this is done using essentially advantage functions, which are re-interpreted as causal effect estimates. There are other approaches in the literature that attempt to isolate the causal effect of actions from uncontrollable effects (e.g., the literature on intrinsic motivation, curiosity-driven exploration, solutions to the \"Noisy-TV problem\", or exogenous decompositions as in http://proceedings.mlr.press/v80/dietterich18a/dietterich18a.pdf) that are not discussed or compared to. The review below focuses only on the weaknesses of the paper.\n\n- Please clarify that s is a p-dimensional vector, transitions are deterministic, and actions are discrete (before defining e_t(s,a) = s' - s). In Eqn5, mode_a e_t(s,a) is confusing -- how are ties broken? Why is this a reasonable \"normative world\"? What are the properties needed of beta_Y the normative world, and how does mode_a ... satisfy those properties?\n\n- Eqn 3: Why is V(s) a reasonable choice for a normative world G_{tilde a}? V(s) depends on an agent's policy, while G_{tilde a} and beta_Y appear to be defined independent of policy. Can you please clarify why advantages can be viewed as causal effects (rather than the conventional view that they are variance-reduced estimates of cumulative returns Q(s,a))?\n\n- Better figure: Fig2 helps to understand the main pieces of CEHRL, but does not spell out how it connects to Alg 1 and 2. 
Rather than Fig3, I would appreciate a figure that clarifies: two different timescales -- per-environment step and per-effect step, the timescales of experiences to train hat{e}_t (assuming it is per-environment step), E (per-effect step), pi_t (per-effect step), pi_e (per-environment step); switching from exploration phase to task-specific learning phase.\n\n- Why does pi_t need e_c as input? Is identifying the next task a non-Markovian problem, and does e_c somehow convey sufficient information to make pi_t learnable?\n\n- Eqn 5, 6: This approach of disentangling controlled effect e_c from total effect e_t requires more discussion. Prior work (e.g., https://pathak22.github.io/noreward-rl/) uses a state embedding s -> z which attempts to capture the controllable state factors. This is typically accomplished by looking at what aspects of the s-s' transition are predictive of the action taken. What are the benefits/drawbacks of learning a hat{e}_t model (which will suffer aleatoric uncertainty from uncontrollable factors) and then subtracting the mode (hoping that the uncontrollable factors affect both e_t and the mode in the same additive way); as opposed to learning a controllable minimal latent state representation?\n\n- [Minor clarification] Mismatch between definition and use of e_c^goal: e_c^goal is sampled from E which is a VAE trained on all (s,a) pairs in the experience replay. e_c(s,a) from Eqn 5 gives per-environment-step controlled effect estimate. However, it is being used as a multi-step goal in Alg 1 (e_c^goal in line 5 is uniformly sampled from E and relates a goal state to s_0; has_achieved however compares e_c step which relates s_current-s_prev to e_c^goal which is s_desired - s_0?) Why is this \"multi-step\" usage of e_c^goal consistent with its one-environment-step definition?\n\n- Clarify in Alg 1 pseudocode that pi_t is uniform (Eqn 8).\n\n- Alg 2: Is the r' <- r' * (1-d) update supposed to happen before the \"if h then\" block? Also should it be r' * (1-h) [strange that we are rewarding a trajectory even when it terminates because t > K].\n\n- (See comments above for other baselines to use in experiments). The approach of Badia et al 2020 seems especially important to benchmark against.\n\n- Strange that the example using in Fig 1 is not revisited in the minigrid2D domain and tested empirically.\n\n- Please discuss learnability issues: (this is hinted in the last paragraph about conditioning E such that only effects achievable within K are selected) An implicit assumption of Alg 1 and 2 is that pi_e can be learned for all the goals that E could emit. Why is this a reasonable assumption? How delicate is CEHRL to such an assumption?\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to reviewer wXCW\n> Please clarify that s is a p-dimensional vector, transitions are deterministic, and actions are discrete (before defining e_t(s,a) = s' - s).\n\nWe agree that this information is better suited before defining $e_t$ and not after. We will change this in the manuscript.\n\n> In Eqn5, mode_a e_t(s,a) is confusing...\n\nAlthough this work explores a fixed rule (the mode) to create a normative world, this does not need to be the case. Presumably, a learned context-dependent rule can model better what is normal; possibly closer to a human measure of normality. More research needs to be done to explore how normality can be made more flexible. 
Indeed, this is an active research topic in psychology. We will add this discussion to a new \"limitations\" section.\n\n> Eqn 3: Why is V(s) a reasonable choice for a normative world G_{tilde a}?... \n\nThe advantage function computes how much the return would change with respect to a policy if we were to take action $a$ instead of another action, i.e., how much improvement the agent gets from changing its policy. Eq. 2 does not impose any restriction on what kind of normal world to use. In this case, the policy is taken into account since it is easier to estimate.\n\nAn interesting question is if there is a benefit in considering each action equally (not policy weighting at the first level of the value function) when computing the normal world. Note that this question is harder to answer since we must have a good estimate for each action independently of how often the policy uses it. Since the advantage function is typically used to improve the policy itself, there may not be many benefits from doing this, but it may in other contexts/applications.\n\n> Why does pi_t need e_c as input?...\n\n* Just to clarify, $pi_t$ is not conditioned on a single effect but multiple candidates generated by the VAE model. We describe this in section 3, but we do agree that the bold style of the candidate effects $\\boldsymbol{e_c}$ can be missed.\n* The generative model $E$ encloses the dynamics of the world and is responsible for generating plausible candidates that are task-independent. Then, $\\pi_t$ chooses which candidate to perform in the context of a specific task. This way, we can learn the expensive model $E$ once without rewards. We leave for future work the task of making the generative model take reward into account without the cost of training it for each task.\n\n> Eqn 5, 6: This approach of disentangling controlled effect e_c from total effect e_t requires more discussion...\n\nInverse models have indeed been used to identify controllable elements of the environment. Although a careful study is needed to compare these methods, we argue that these methods suffer in two cases. 1) When multiple actions map to the same effect. For example, if an agent moves to a wall or takes the \"do-nothing\" action, the model will not predict the action accurately, and consequently, its representation will most probably suffer.\n2) An inverse model does not promote modeling all controlled effects. For example, if the agent moves a box when moving left, the inverse model only needs to encode the agent's movement to predict the action accurately and can freely ignore the box's movement.\n\nIn contrast, Eq. 5 encourages to model every change in the scene by using a forward model. Additionally, our method identifies controlled effects at pixel level instead of latent representations, which can benefit interpretability and many other applications. Of course, this comes with the cost of training a forward model. We will add an overview of these methods to the related work section.\n\n> [Minor clarification] Mismatch between definition and use of e_c^goal...\n\n* Just to clarify, the VAE $(E)$ is trained on controlled effects $e_c$. This is explained in section 3.1 (learning). The idea is that $E$ will generate different ways the agent can control the environment. Note that $E$ is not conditioned on the state.\n* As mentioned by the reviewer, the generative model produces temporally abstract goals (multi-step goals). 
Note that since the VAE is trained on any discovered controlled effect and is not conditioned on the state, $E$ produces changes agnostic to the time step. In other words, \"has_achieved\" does not compare s_desired - s_0 but to e_c^goal, which is not fixed to a time step. For example, if the agent (on the exploration phase) reaches the state $s'=[0, 0, 1]$ from the state $s=[1, 0, 0]$ and let us say that $e_c = [-1, 0, 1]$. The goal set by $E$ will be $e_c^\\text{goal}=[-1, 0, 1]$ irrespective of the current time step the agent is at right now. So the has_achieved comparison uses $e_c(s_t, a_t)$ and this generic goal $e_c^\\text{goal}$. The reason to do this is that both effects are under the agent's control and can be easily compared, in contrast to using the total effects with controlled and uncontrolled changes entangled. \n> Better figure: Fig2 helps to understand the main pieces of CEHRL, but...\n\nWe agree that a figure showing lower-level mechanics of the model would be beneficial to the reader. We will add this figure to the manuscript.\n\n> Clarify in Alg 1 pseudocode that pi_t is uniform (Eqn 8).\n\nTrue, we will correct this.\n\n> Alg 2: Is the r' <- r' * (1-d) update supposed to happen before the \"if h then\" block...\n\nIt should happen after. This line in Alg. 2 accumulates the reward given at each environment step into a single reward for each effect step. The termination of an effect step is always determined by when a goal terminates either because it was accomplished or because it exceeded the time limit or because the episode finished.\n\n> The approach of Badia et al 2020 seems especially important to benchmark against.\n\nIn this work, we decided to use DQN as a baseline to understand the benefits of temporal abstraction using counterfactuals. We recognize that an inverse model in addition to our current baseline would indeed give a broader context to the reader.\n\n> Strange that the example using in Fig 1 is not revisited in the minigrid2D domain and tested empirically.\n\nWe concur that this would add continuity to the reader. There is no particular reason not to have this environment. We chose tasks where we could test agents at different levels of difficulty and dynamics.\n\n> Please discuss learnability issues...\n\nWe will add a new \"limitations\" section with the following concerns (in addition to suggestions by other reviewers): \n\n* Although Eq. 5 identifies goals that the agent controls, there may still be uncontrolled events required to happen so the agent can fulfill this goal. For example, moving a heavy box with two agents.\n* Eq. 5 uses the mode as the normative world, but this may not be ideal in some cases. More research needs to be done to find better ways to compute normative worlds.\n* CEHRL works with the state, which can be inaccessible in complex environments.", "role": "user" }, { "content": "I appreciate the authors' feedback. They've addressed nearly all of my concerns, and I will likely adjust the score higher.", "role": "assistant" } ]
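The controlled-effect machinery discussed in this exchange (Eq. 5 of the paper and the worked example in the response) can be made concrete with a short sketch. This is a minimal illustration, not the authors' implementation: it assumes deterministic transitions, discrete actions, and integer-valued vector states, and the helper names (`total_effect`, `per_dim_mode`, `controlled_effect`, `has_achieved`) are introduced here for clarity; the paper learns a forward model of e_t rather than querying the environment for every action.

```python
import numpy as np

def total_effect(step, s, a):
    """e_t(s, a) = s' - s, assuming a deterministic step function and vector states."""
    return step(s, a) - s

def per_dim_mode(effects):
    """Per-dimension mode over actions; ties broken towards the smallest value."""
    modes = []
    for d in range(effects.shape[1]):
        vals, counts = np.unique(effects[:, d], return_counts=True)
        modes.append(vals[np.argmax(counts)])
    return np.array(modes)

def controlled_effect(step, s, a, actions):
    """e_c(s, a): total effect minus the 'normative' effect, taken here as the
    per-dimension mode of e_t(s, a') across all available actions a'."""
    all_effects = np.stack([total_effect(step, s, b) for b in actions])
    return total_effect(step, s, a) - per_dim_mode(all_effects)

def has_achieved(e_c_step, e_c_goal, tol=1e-6):
    """Goal test used at each environment step: compare the current controlled
    effect against the time-step-agnostic goal effect sampled from E."""
    return bool(np.allclose(e_c_step, e_c_goal, atol=tol))
```

With the rebuttal's toy example, a step from s = [1, 0, 0] to s' = [0, 0, 1] gives e_t = [-1, 0, 1]; whatever remains after subtracting the per-dimension mode over actions is treated as the part of the change the agent actually controls, and it is this quantity that `has_achieved` compares against the sampled goal.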
hDrn2Dmb7_I
CLeaR 2022 Conference
Reviewer_gF7i
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_gF7i, and the following are the paper you need to review:\n```<<hDrn2Dmb7_I>>```", "role": "user" }, { "content": "Review Content: Summary: This paper proposes a causal discovery method using leveraging interventional data. This study extends Brouillard et al. (2020). The previous study assumes that the correspondence between samples and interventions is known. This paper relaxes the assumption and propose a method based on neural networks and variational inference.\nMain review: # Quality and clarity\n- This paper is well written, well organized, and clear.\n- It contains a theoretical reasoning of their method, and their method is likely reproducible.\n- The paper is self-contained, with a good description of previous studies and existing methods.\n- The paper contains the results of the experiments. Some of the experimental results of the proposed method are inferior to those of the existing methods, but reasonable explanations are given for the results.\n\n# Originality and significance\n- This paper extends Brouillard et al. (2020) by relaxing their assumptions. Using interventional data for causal discovery is a very important problem but little has been studied. The relaxing of the assumption is important for the causal discovery from interventional data.\n\n# Feedback to the authors\n- What variable does the S in the first line of Section 2.1 refer to?\nRating: 9 strong accept, excellent work\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to reviewer gF7i\nThank you for your positive comments about our paper. \n\n> \"What variable does the S in the first line of Section 2.1 refer to?\"\n\nGood catch; S refers to the set of assignments in Eq. 1. We will clarify this in the final version. \n\n", "role": "user" }, { "content": "I acknowledge the authors' response. The authors have answered almost all of my concerns. I will continue to recommend accepting this work.", "role": "assistant" } ]
hDrn2Dmb7_I
CLeaR 2022 Conference
Reviewer_XPfV
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_XPfV, and the following are the paper you need to review:\n```<<hDrn2Dmb7_I>>```", "role": "user" }, { "content": "Review Content: Summary: Interesting problem, sound approach, empirical evaluation should be improved\nMain review: The paper focuses on the under-explored problem of score-based causal discovery, where for each sample we don't know which was the interventional distribution that generated it. It proposes a differentiable structure learning approach based on variational inference and Dirichlet processes. The framework can be extend to the case of partially knowing the intervention targets, or at least knowing which batch of samples is generated by the same intervention.\n\nThe paper is generally well-written and to the best of my knowledge technically sound. I thought the setting was particularly interesting, with many applications.\n\nOne of my concerns is that results are only on single-node interventions on simulated data (and with single node interventions on each of the variables) and the [Sachs et al. 2005] dataset, which is not the best benchmark for this case. I would really be curious about any result on multiple-targets and in the case in which one doesn't have an intervention per variable. Even in this case, the stochastic interventions results are a bit lukewarm, and I'm confused why there are no perfect interventions.\n\n\n# Minor points\n\nCurrently the method assumes the parents cannot change in an intervention - would it then make sense to just consider as the \"original\" SCM M the one in which you can have the union of the sets of parents (assuming it is still acyclic) in the different environments and then switch them off in each specific environment? 
\n\nIf the $\\theta_j$ are shared across interventions, I'm not sure I understand exactly how the soft interventions are modelled\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\n", "role": "assistant" }, { "content": "Title: Response to reviewer XPfV\nThank you for the careful reading and overall positive comments about our paper.\n\nPlease note that our method assumes that, in an intervention, the parent set is always a subset of the parent set in the observational graph; consequently, the parent set does change, but it needs to be a subset of the original set of parents. As the reviewer hints, we can indeed extend our method to define the observational model as the one corresponding to the union of the parent sets (though this needs to be done in all variable nodes simultaneously) even if we never observe samples from it. We will mention this possibility in the final version; we thank the reviewer for pointing it out!\n\nConcerning the $\\theta_j$ being shared across interventions, this is actually one of our core contributions. We have a meta-model for all of the assignments (comprising the effects of the potentially infinite set of interventions) of each variable j. To select a particular assignment k, we define an intervention embedding ($u_k$, see Section 3.3) on which we condition on. In this way, the parameters $\\theta_j$ are shared among the interventions -- they control the meta model, and not specifically each intervention assignment function. Each assignment function therefore depends on both $\\theta_j$ and $u_k$ for intervention $k$. This is the reason why we propose the intervention embeddings and the Dirichlet process formulation. We will make this clearer in the final version. \n\nThanks for the suggestions regarding the empirical evaluation, we will add results to additional experiments to the appendix. Note that our stochastic interventions are a particular case of perfect interventions as defined by Brouillard et al. 2020: there we remove the dependencies from the parents, and define a unary distribution for the node (rather than an atomic value). We will run experiments on the flow cytometry dataset using our model with perfect/stochastic interventions as well.\n\n\n", "role": "user" }, { "content": "Thank you for the authors' feedback. The authors have answered almost all of my concerns. I stand by my favorable recommendation for this paper.", "role": "assistant" } ]
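The parameter-sharing scheme described in this response (a single meta-model per variable j with shared parameters θ_j, conditioned on a per-intervention embedding u_k) can be sketched as below. This is a hedged illustration of the idea only: the layer sizes, the Gaussian output head, and a fixed number of intervention slots are assumptions made here for brevity, whereas the paper couples the embeddings with a Dirichlet-process formulation over a potentially infinite set of interventions.

```python
import torch
import torch.nn as nn

class SharedAssignment(nn.Module):
    """Conditional assignment for variable j: theta_j (the network weights) is
    shared across interventions; intervention k enters only through u_k."""

    def __init__(self, n_parents, embed_dim=16, n_interventions=10, hidden=64):
        super().__init__()
        self.u = nn.Embedding(n_interventions, embed_dim)  # intervention embeddings u_k
        self.net = nn.Sequential(
            nn.Linear(n_parents + embed_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),  # e.g. mean and log-std of a Gaussian density
        )

    def forward(self, parent_values, k):
        # parent_values: (batch, n_parents); k: (batch,) long tensor of intervention ids
        z = torch.cat([parent_values, self.u(k)], dim=-1)
        mean, log_std = self.net(z).chunk(2, dim=-1)
        return mean, log_std
```

Because only the embedding table grows with the number of interventional regimes, the parameter count stays roughly constant as more interventions are added, which is the scaling property the response refers to.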
hDrn2Dmb7_I
CLeaR 2022 Conference
Reviewer_r53E
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_r53E, and the following are the paper you need to review:\n```<<hDrn2Dmb7_I>>```", "role": "user" }, { "content": "Review Content: Summary: The authors propose a score-based causal discovery method for the setting where interventions are performed latently (i.e., the targets are unknown and even the environments from which they are sampled are unknown). More specifically, the authors propose a continuous-constrained optimization method that relies on neural networks and variational inference. The authors show competitive results compared to the case where the environment is known but the target is unknown.\nMain review: The problem addressed is important since the setting considered is realistic and challenging. However, I have a few concerns: 1) since the authors don’t show the consistency of their methods, I would expect the experiments to be more thorough and 2) the method is not well situated within the causal literature, more particularly the link with latent confounders.\n\nIt is not clear how this setting is different from causal discovery in the presence of latent confounders. The differences should be clearly explained. If there are no differences, then several other state-of-the-art methods exist (e.g., Spirtes et al., 1999) and, moreover, compared to the proposed method, they often have guarantees to recover the equivalence class. In this case, the authors should definitely include these methods in their related work section, compare their method to at least one of these, and explain clearly how their proposed method differs. To develop the latter point, the related work is really short and doesn't situate the present work in the causal discovery literature.\n\nThis work has a limited novelty. It is based directly on Brouillard et al., 2020 that is also a continuous-constrained optimization method that relies on neural networks and can deal with interventional data, possibly with unknown targets (also it uses the same components: random matrices trained with the Gumbel-Softmax trick, normalizing flows (DSF), augmented Lagrangian, etc). The main contribution of the present work is to add variational inference in order to deal with latent interventions. 
However, contrary to previous works, in this new condition, the authors don’t have any identification results and thus, the graph with the best score can be an element that is not part of the I-Markov equivalence class. As mentioned in the previous paragraph, if their setting is equivalent to the presence of latent confounders, then the novelty is furthermore reduced since other methods exist with theoretical guarantees.\n\n**Experiments**\nSince they don’t have theoretical guarantees, I would have expected experiments to be more convincing. \n\nFor the synthetic dataset experiments, they don’t have any external baseline method. They show that “latent” is comparable to “known”. I wonder if it is simply explained by the fact that the different environments are easily distinguishable (see Fig. 3, it seems to be the case, more obviously for atomic interventions). If it is the case, then a method that first finds the environments (performing some kind of clustering for example) and then performs classical causal discovery with unknown targets would have a similar performance. This would also explain the results of the semi-supervised learning experiments where the knowledge of the environment doesn’t help.\n\nAll the experiments are performed on very small graphs (2-11 vertices). Furthermore, the authors state that their method scales well w.r.t. the number of interventions. It would be interesting to see if this method is applicable to larger graphs and how well it scales in practice. \n\nFor the real-world dataset, since their assumptions are less restrictive, I would expect to have at least another dataset than the classical Sachs et al., 2005 (which is imperfect as noted by the authors).\n\nFor the hyperparameter selection, the procedure is not clearly explained. How many experiments were performed in order to determine the optimal $\\alpha$, h, K, learning rate, etc; and was it done on only one type of dataset? If I understand correctly, once determined, the same hyperparameters are applied for all the experiments and a grid-search was only performed on $\\lambda_G$ and $\\gamma$? The authors state that the models were trained for 500 epochs for the first iteration of the augmented Lagrangian. What are the parameters used for the augmented Lagragian? Why do the authors use a fixed number of iterations?\n\nFor the reproducibility of the experiments, while the authors state that the code will be provided, all the hyperparameters used should be clearly described in the appendix (how many layers for each neural network, the hyperparameters related to the augmented lagrangian, etc).\n\n**Method**\nIn Section 3.2, if I understand correctly $\\lambda_G$ is a scalar defined by the user. Why are all the $\\lambda_{ij}$ set to this value? It seems they should be learned independently.\n\nIn Section 3.3, instead of using different neural networks for each interventional distribution, the authors use an intervention embedding $\\mathcal{U}$ and share the parameters $\\theta_j$ between all interventions. By doing this, they have a method that scales well w.r.t. the number of interventions, but they don’t fully use the invariance of the conditional for different interventional settings. 
Also, it gives them a prior that causal mechanisms should be similar when intervened upon, which is not always the case.\n\n**Typos and short comments**\n- In abstract, “a infinite” -> an infinite\n- Abstract, extra space after “models”\n- Section 2.1, what is S in the SCM?\n- Section 2.4, to be correct, Zheng et al., 2018 proposed the constraint $tr(e^{W \\odot W}) -d = 0$\n- Section 4: “All of the possible graphs share the MEC”. The same MEC? It is not true, the two first graphs share the same MEC (A -- B), while the last one has its own MEC (A B).\n- Section 4: “is worse than”\n- Section 4: “with a full batch (” (space before the parenthesis)\n- Section 5, p.12: “representation”\n- Section 5, real-world dataset: should cite Joris et al., 2020 for the observations related to Sachs.\n- Appendix B, what is $k$ and $l$ in the Table 3 caption?\n- Appendix C, “Given an vector”\n- Appendix C, “as show in Figure 4”\n- Appendix D, “a the correspondences”\n- Appendix D, there are extra spaces in parentheses\n- When citing, usually the publications are in order starting from the most recent to the oldest\nRating: 4 possibly reject, but has redeeming facets\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to reviewer r53E\nThank you for the careful reading and insightful comments. We answer your main points below, we hope that our answers alleviate your concerns.\n\n> “It is not clear how this setting is different from causal discovery in the presence of latent confounders.” \n\nThis is a good question. Note, however, that there is a very significant difference, in terms of underlying assumptions, between our method and that of Spirtes et al. (1999). As you mentioned, we propose a score-based causal discovery method for the case where the interventions are **fully** latent: i.e., we do not know what they alter or which samples are affected. Latent interventions are not the same as latent confounders, as the latter are always present, whereas the former are only present in some samples, and we don't know which samples. In our approach, we encode the latent interventions with the variables z (latent correspondences), and r (latent intervention targets), as described in Sec 3.3, using a shared intervention space. While, as you point out, we could regard these two sets of variables as hidden confounders of an “expanded” causal model, treating them in that way would disregard the structure of the problem encoded in z and r, and this structure is key to the success of our approach. In practice, this leads to inferior results, as our new results with the FCI algorithm show (see the response to the next question below). Regarding recovery guarantees of the equivalent class, while the FCI algorithm (Spirtes et al. 1999) and other constraint-based algorithms allow for latent confounders and selection bias, these methods have been designed to estimate the causal graph of the system from a single dataset corresponding to a purely observational context (not our scenario). In other words, they also do not have guarantees of recovering the I-MEC. \n\nFor the complex problem we are tackling, a guarantee of I-MEC seems unlikely for any method, without very strong additional assumptions: it would necessarily require some sort of z-faithfulness property (difficult to ensure in a real-world problem) as well as the usual I-faithfulness and faithfulness. 
While our method does not have these formal guarantees (which we point out would be very hard to get for any method), it is intuitive and performs well in practice.\n\n> “... the authors should definitely include these methods in their related work section, compare their method to at least one of these, and explain clearly how their proposed method differs. (...) the related work is really short and doesn't situate the present work in the causal discovery literature.”\n\nThanks for the suggestion, we will expand the related work section and will include a discussion about these methods, explaining the differences with respect to our method. Regarding a comparison, we followed your suggestion and tested FCI (with Fisher z-test with significance level 0.05). We used the package causal-learn (https://github.com/cmu-phil/causal-learn). We obtained the following results, on our linear Gaussian synthetic dataset (n=10, e = 1, stochastic): SHD: 20.8 $\\pm$ 4.1 vs SHD: 35 in Sachs et al. (2005), with F1=0.22, tp=4, fn=12, fp=21, rev=5. We will report these results, which are overall inferior to our method. Particularly, for the synthetic dataset, FCI tends to produce significantly denser graphs. \n\n> “All the experiments are performed on very small graphs (2-11 vertices). (...) It would be interesting to see if this method is applicable to larger graphs and how well it scales in practice.”\n\nGood suggestion, we will run experiments on larger graphs and will report them in the final version.\n\n> “For the hyperparameter selection, the procedure is not clearly explained. (...) If I understand correctly, once determined, the same hyperparameters are applied for all the experiments and a grid-search was only performed on $\\lambda_G$ and $\\gamma$”. \n\nWe searched hyperparameters for the $\\alpha$, $\\gamma$, $\\lambda$ and learning rate in a validation set based on log-evidence. Those pertaining to the architecture of the neural networks were selected based on validation datasets (data + graphs) and were held fixed for all of the experiments (the results were generally robust to them). We will include all the details in the appendix. \n\n> “For the reproducibility of the experiments, while the authors state that the code will be provided, all the hyperparameters used should be clearly described in the appendix (how many layers for each neural network, the hyperparameters related to the augmented lagrangian, etc).”\n\nWe agree, and we will follow this suggestion.\n\n> \"In Section 3.2, if I understand correctly $\\lambda_G$ is a scalar defined by the user. Why are all the $\\lambda_{ij}$ set to this value? It seems they should be learned independently.\"\n\nThis $\\lambda_G$ is from the prior distribution, not the graphs’ weighted adjacency matrix. We will clarify this by changing the notation. \n\n", "role": "user" }, { "content": "I acknowledge the authors' response. It addresses most of my concerns, and I am inclined to revise the rating upwards.", "role": "assistant" } ]
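Two quantities referenced in this exchange are easy to state concretely: the acyclicity constraint of Zheng et al. (2018) that the continuous-constrained formulation relies on, and the structural Hamming distance (SHD) used when reporting the FCI comparison. The sketch below is illustrative only; the helper names are not from the paper, and SHD conventions differ in how edge reversals are counted (here each reversal counts as one error).

```python
import numpy as np
from scipy.linalg import expm

def acyclicity(W):
    """h(W) = tr(exp(W * W)) - d, where * is elementwise; zero exactly when the
    weighted adjacency matrix W corresponds to a DAG (Zheng et al., 2018)."""
    d = W.shape[0]
    return float(np.trace(expm(W * W)) - d)

def shd(G_true, G_est):
    """Structural Hamming distance between binary adjacency matrices:
    edge additions + deletions + reversals, with each reversal counted once."""
    diff = np.abs(G_true - G_est)
    reversals = int(((G_true == 1) & (G_est == 0) & (G_est.T == 1)).sum())
    return int(diff.sum()) - reversals
```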
j5e7PielLe2
CLeaR 2022 Conference
Reviewer_Nc86
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_Nc86, and the following are the paper you need to review:\n```<<j5e7PielLe2>>```", "role": "user" }, { "content": "Review Content: Summary: The authors present two metrics to make inference about how two different brain zones are affected by the same experimental stimulus. The first metric, zone generalization, is the correlation between the predicted task-evoked signal for one zone and the actual measured task-evoked signal for another zone. The second metric, zone residuals, is the average across pairs of participants of the correlation between the residuals of regressing the actual task-evoked signal of one zone from another zone, for each participant in a pair. Using these metrics the authors propose four types of inferences that combine conclusions about the similarity of the effect of one stimulus on two different brain zones and about stimulus-related effects not captured by the stimulus model used. The authors illustrate the application of the metrics on simulations that manipulate the degree of similarity of response and the degree of stimulus properties not captured by the stimulus model. (In the supplementary material an analysis of signal to noise ratio is also presented.) The authors then apply the metrics to two empirical fMRI movie watching datasets and 34 language regions of interest, illustrating the four different inferences allowed by their approach. In the supplementary material the authors show in simulations that functional connectivity and some implementations of representational similarity analysis do not allow for the inferences proposed in this framework.\nMain review: This is a well written paper proposing a quantitative framework to expand the inferences that can be done in the study of the causal effects of task stimuli on brain signals. The originality of the proposal is based on the explicit modeling of a unique and shared task causal effect (which the zone generalization metric aims to capture), and in the explicit modeling of unique and shared real task features causal effects not captured by the chosen task stimuli model (ie. difference between stimulus properties and stimulus representation) (which the zone residuals metric aims to capture). 
The manuscript contains a supplementary material file with useful code to track simulations and analysis. The supplementary material also contains a useful guideline explaining how the metrics can be normalized according to the research problem at hand and a very useful figure (Fig. 6) illustrating the relationship between brain zones task-evoked effects, stimulus properties and stimulus representations. Some points require clarification:\n\nIf you apply a standard task linear regression for zones Y1 and Y2: Y1 = bT + e1, where b, that measures the causal effect of T on Y1, is statistically significant, and Y2 = aT + e2, where a is also statistically significant and a != b, it seems that I could claim that the same task representation T produced different causal effects on zones Y1 and Y2, given that a!=b. Similarly if a == b, I could claim that task representation T produced the same causal effect on zone Y1 and Y2. How is this inference different from what you measure with the zone generalization metric?\n\nThe zone residuals metric takes into account the number of possible pairs of participants. Does the metric approximate to its real value when the number of participants in the dataset tends to infinity? Or, can the metric have a bias problem if we do not have enough participants?\n\nA fundamental point of the paper is the ability to make the four inferences proposed by the framework based on assessments of large and small values of the metrics. In the empirical application the authors used a cutoff based on simulations, and in the discussion this point is pushed to future research. I consider that given the importance of this cutoff to make the correct inference, more should be discussed in the manuscript about how the selection of the cutoff could be done, for example data-driven, previous-knowledge, comparison to other tasks. This of course will be only ideas, but they could give the readers a sense of how this problem can be solved. In my view, without a way to solve this problem the framework loses utility.\n\nThe two metrics, zone generalization and zone residuals, are pairwise measures. Can they be thought from a multivariate perspective? For example, by controlling for the effect of a third zone? In other words, can a third zone, for example, confound the metrics, as it may occur in functional connectivity analysis?\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\n", "role": "assistant" }, { "content": "Title: Response to reviewer Nc86\nWe thank the reviewer for their thoughtful comments and positive initial evaluation. We believe that incorporating their feedback will strengthen the manuscript. We attempt to address the reviewer's questions below. \n\n### **Q1: zone generalization vs $a==b$** \n\nWhen T is high-dimensional and has possibly correlated features, it becomes difficult to interpret the difference between a and b. This is the case in our setting so we instead find it useful to measure the distance between a and b in the final transformed space ($\\hat{a}T$ vs $\\hat{b}T$). Zone generalization is in fact proportional to this distance (i.e. $corr(\\hat{Y_1}, \\hat{Y_2})$), in the limit of sufficient data (i.e. when the encoding model is close to the Bayes-optimal predictor). Further, zone generalization looks not only at how much the weights are different, but at how much this difference affects the ability to predict held-out data. 
It could be considered a more stringent method, ignoring small differences in weights that don’t affect generalization performance. We would be happy to include a discussion of this relationship in the text if the reviewer would find it useful.\n\n### **Q2: zone residuals and number of participants**\n\n**Summary:** In response to the reviewer’s question, we evaluate the stability of zone residuals with increasing numbers of participants. The zone residuals’ stability increases with increasing sample size until it stabilizes with a sample size of 5-10 participants. Therefore, we recommend using zone residuals for datasets with at least five participants. We will include this recommendation in the text. \n\n**Further details:** We evaluated the stability of the zone residuals metric by calculating this metric for 88 sample sizes (ranging from 2 to 89 participants) from the HCP dataset. For each sample size $n$ we randomly sampled $n$ participants 100 times, each time sampling without replacement. Then we calculated the zone residuals between each pair of brain zones. As it is difficult to visualize the results for all ROI pairs we present the average zone residual value for each sample size for the ROI pairs shown in Fig. 6. Open Review does not support adding images directly in the response, so we have included the new figures via an anonymized link: https://drive.google.com/file/d/18oXni0MvKIgNc2AGfLC-fHjxTIEobgJT/view?usp=sharing. These results demonstrate that the zone residual metric approximates it’s real value when the number of participants in the dataset tends to infinity when there are at least five participants.\n\n### **Q3: selecting cutoffs**\n\nWe agree with the reviewer that selecting the cutoff is an important part of applying the framework to real data. One idea is to make the decisions based on significance, similarly to the first step of the framework. Because there aren’t good priors for the chance values of zone generalization and zone residuals, the null distributions can be estimated using permutation tests. There are several parts of this analysis that are nontrivial and may influence the final outcome (e.g., permutation tests at the group-level or a participant-level, and how to aggregate results across participants), and these can be investigated in depth in future works. We will include this suggestion in the text. \n\nIndependently from these future efforts, we do believe that our current work makes important contributions by showing that common tools for inferring relationships between stimuli and brain activity are limited and to characterize these limitations, especially when considering complex stimulus representations (i.e. representations with multiple sources of variance). 
While more work is needed to achieve the perfect inference framework, we do believe our contributions reveal an important issue and present a significant step towards resolving it, and that these contributions are timely, considering the recent interest in relating complex stimuli representations obtained from deep neural networks to brain recordings (Jain and Huth, 2018; Toneva and Wehbe, 2019; Wang et al., 2019; Schrimpf et al., 2020; Caucheteux et al., 2021b; Goldstein et al., 2021; Cross et al., 2021).\n\nContinued from Part 1\n\n### **Q4: addressing possible confounders from additional brain zones**\n\n**Summary:** In the case of functional connectivity analysis where the researcher is interested in causal discovery (i.e., estimating a causal DAG for the brain regions), we agree that it is important to account for other zones that might confound the discovered relationship between two zones. However, our framework focuses on inferring if the causal effect of the stimulus on two zones is the same. In response to the reviewer's question, we showed with do-calculus that the introduction of a third zone does not change how we can estimate the causal effect of the stimulus on each brain zone. We provide more details about this analysis below.\n\n**Further details:** In our framework, we are interested in $P(Y_{1} = y_{1} |do(S= s))$ and $P(Y_{2} = y_{2} |do(S=s))$, where $Y_{i}$ represents the observation at brain zone $i$ and $S$ is the stimulus. In the case where there are only edges between both $S$ and $Y_{1}$ and $S$ and $Y_{2}$, these terms of interest would be equivalent to $P(Y_{1} = y_{1} |S = s)$ and $P(Y_{2} = y_{2} |S = s)$. We show in this link https://drive.google.com/file/d/1dCt3vffgywel8VVJ40y_Ek_BB6KK3DiC/view?usp=sharing (and in the response to reviewer czAT which includes simulations) that this is also the case if there is an additional edge between $Y_{1}$ and $Y_{2}$. To test if the introduction of a third zone confounds our metrics we consider possible configurations for causal directed acyclic graphs (DAGs) with four nodes, 3 brain zone nodes and 1 stimulus node under our experimental paradigm and assumptions (previously specified in the Introduction). Using do-calculus we find that in each case the probability that a brain zone ($Y_{i}$) equals a specific value ($Y_{i} = y_{i}$) conditioned on intervening on the stimulus ($S$) is $P(Y_{i} = y_{i} | do(S=s)) = P(Y_{i} = y_{i} | S=s)$ (proofs are available in link:https://drive.google.com/file/d/1dCt3vffgywel8VVJ40y_Ek_BB6KK3DiC/view?usp=sharing). This demonstrates that, under our assumptions (also specified in the link https://drive.google.com/file/d/1dCt3vffgywel8VVJ40y_Ek_BB6KK3DiC/view?usp=sharing and paper), the introduction of a third zone does not change the way we can estimate the causal effect of the stimulus on each brain zone. (We can still use an encoding model to estimate $P(Y_{i}|S)$, we do not need more complex models that account for other zones.) Therefore, we do not think that the presence of other zones confounds our metrics. We will add a discussion of this point to the paper. Now that we made the argument above, we have to add that in order to eventually fully understand brain function, it will be necessary (in future work) to model and understand the relationship between all the different brain zones that are affected by a given stimulus.\n", "role": "user" }, { "content": "I value the authors' response. The reply answers almost all of my questions. 
I remain supportive of my positive recommendation for this paper.", "role": "assistant" } ]
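The two metrics under discussion can be written down compactly. The sketch below is one plausible reading of the definitions summarized in the review, with deliberate simplifications: ordinary least squares in place of the cross-validated ridge regressions used in the paper, one-dimensional and mean-centered zone signals, and no normalization by inter-subject correlation; function names are illustrative.

```python
import numpy as np

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

def zone_generalization(X_train, y1_train, X_test, y2_test):
    """Train an encoding model for zone 1 from the stimulus representation X,
    then correlate its held-out predictions with the *other* zone's activity."""
    w, *_ = np.linalg.lstsq(X_train, y1_train, rcond=None)
    return corr(X_test @ w, y2_test)

def zone_residuals(y1_by_subject, y2_by_subject):
    """For each participant, regress zone 1 out of zone 2; then average the
    correlation of these residuals over all pairs of participants."""
    def residual(y2, y1):
        # no intercept: assumes the zone signals are mean-centered
        b, *_ = np.linalg.lstsq(y1[:, None], y2, rcond=None)
        return y2 - y1 * b[0]
    res = [residual(y2, y1) for y1, y2 in zip(y1_by_subject, y2_by_subject)]
    n = len(res)
    pair_corrs = [corr(res[i], res[j]) for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(pair_corrs))
```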
j5e7PielLe2
CLeaR 2022 Conference
Reviewer_wCe1
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_wCe1, and the following are the paper you need to review:\n```<<j5e7PielLe2>>```", "role": "user" }, { "content": "Review Content: Summary: This paper considers the problem of whether the stimulus properties affect different brain zones in the same way. To address this problem, the authors proposed a method (Figure 1 in the paper) which involves two new metrics: zone generalisation and zone residuals. Synthetic simulations and real data applications are implemented to illustrate the performance of the proposed method.\nMain review: Pros:\n1. the considered problem is interesting and important.\n2. the real data application in the paper is interesting.\n3. Although without any theoretical/mathematical justification, the proposed metrics are intuitively reasonable.\n\nCons:\n1. Section 2 of this paper is a bit of confusing to me, because three different models are presented in this section, and the main metrics, zone generalisation and zone residuals, are defined based on different models. So it is unclear to me about which is the model that you main method (Figure 1) based on?\n2. Although the considered problem has some \"causal flavour\", I feel that the proposed method is rather irrelevant to causality. So I am not sure whether this paper would be a good fit for this conference.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to reviewer wCe1\nWe thank the reviewer for their feedback and believe that incorporating the clarifications in response to the reviewer’s comments will strengthen the manuscript. We respond to the reviewer's comments below. \n\n### **Q1: model fitting**\n\n**Summary:** We fit all three metrics presented in Fig 1 based on the most general and most commonly-assumed model (encoding models and zone generalization are fit based on Eq1; zone residuals are fit based on Eq1 when the stimulus representation X is replaced by the brain recordings for a second brain zone Y_j). \n\n**Further details:** However, the true underlying model of how brain activity is generated as a function of the stimulus is of course unknown. 
In the remainder of Sec 2, we posit that there can be additional components in the brain activity, which are important for understanding whether two zones are affected by stimuli in a similar or different way. We introduce the two additional models (Eq2 & Eq4) to conceptually break down the independent component epsilon in Eq1 into these additional important components. Our simulations show that when these additional components are present in the brain data, our proposed metrics (zone generalization and zone residuals): 1) are indeed sensitive to these additional components, despite being fit based on the most general model in Eq 1, and 2) allow us to make one of the four main inferences in Fig 1, when used together. We thank the reviewer for pointing out this confusing part and we will clarify it in the paper. \n\n### **Q2: venue fit**\n\nWe ask the reviewer to see our response to reviewer *Nc86* (specifically to **Q4**), which more formally establishes the framework of our paper and hopefully reflects its causal aspects better.\n\nFurther, we are trying to build connections between the causal world and computational cognitive neuroscience. Our goal is to more formally establish the framework of encoding model analysis so we can begin to address issues such as what do encoding models truly capture, how much can we say about how a stimulus affects the activity in a brain region using an encoding model, what to do with inter-feature correlation, how to attribute specific information to different features when they are correlated, etc. All these concerns have always been part of the encoding model approach in neuroscience, and neuroscientists have sometimes tried to address them, but have often either simply acknowledged them or completely evaded them. We hope that featuring this paper at CLeaR will spark further connections between computational cognitive neuroscience and the world of causality. Discussing the paper in the CLeaR conference will benefit neuroscientists by bringing in ideas and feedback from causality researchers. It will also be useful for causality researchers to see what questions are open at the moment in computational cognitive neuroscience, and to understand the gap between theoretical methods for testing causal inference and the real, practical setting where they can be used, so that one day this gap can be bridged. \n", "role": "user" }, { "content": "I'm grateful for the authors' response. It satisfactorily addresses the major issues I had.", "role": "assistant" } ]
j5e7PielLe2
CLeaR 2022 Conference
Reviewer_czAT
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_czAT, and the following are the paper you need to review:\n```<<j5e7PielLe2>>```", "role": "user" }, { "content": "Review Content: Summary: An intriguing question about causal effects in neuroscience, and tools from structural equation modeling to answer them.\nMain review: The paper proposes an intriguing question, to study if two causal effects indeed share a common stimulus and if there are unobserved stimulus properties that can act as a common cause. The proposed metrics to answer the question are simple and easy to follow; the review below focuses only on the weaknesses of the paper.\n\nFigure 1 is very helpful. Consider adding a Figure 0 that illustrates encoding model performance (i.e. Stimulus properties -> Stimulus representation via e.g., ELMo; recording activity in target brain zone; measuring predictability via \"variance overlap\" as in the first row of Fig 1).\n\nEqn 3: For the zone generalization metric, it seems important to normalize by the encoding model performance corr(hat{Y}_1, Y_1) and corr(hat{Y}_2, Y_2)? If not, how should we calibrate large vs. small zone generalization values corr(hat{Y}_1, Y_2)? The text also suggests that small values for zone generalization, *in addition to* good encoding model performance, tells us that two zones respond differently. Can we define the metric using relative correlation measures? Another naive question: Should zone generalization be a symmetric metric? I.e. combine corr(hat{Y}_1, Y_2) and corr(hat{Y}_2, Y_1) somehow?\n\nFor the regression in Eqn2, should the model still be fitted using the cross-validation procedure for fitting Eqn1 (for each zone in isolation)? Eqn2 suggests that we should fit the two encoding models jointly... Please clarify how hat{g}_1 and hat{g}_2 are fitted to training data.\n\nZone residuals: It will be helpful to give intuition for why zone residuals are not suitably computed for a single participant. Is there a fundamental non-identifiability of g and h in Eqn 4 if we only get to see a single participant? (The linear assumption on g likely avoids this non-identifiability). Is there non-identifiability between h and noise epsilon_i? Would structural assumptions on h help? 
How do multiple participants help to avoid this difficulty?\n\nAgain, Eqn4 suggests that we should fit the encoding models across all participants jointly. Please clarify how hat{g} and the residual regressions Y_1,P <- Y_2,P etc. are fitted to data.\n\n\nSection 3 Simulations: All the simulations are well-specified/realizable. I.e., brain activity in each zone is indeed a function of X, Z and epsilon. It would be helpful to simulate mildly mis-specified settings: E.g. Zone 1 activity is a mediator or effect modifier for Zone 2 activity. (Does not fit the additive shared noise epsilon_12 model but could plausibly be true in actual fMRI data).\nIn general, Z being assumed as a common cause of Y_1 and Y_2 is a specific causal modeling assumption. It will be helpful to spell this out, and discuss alternate models for when zone generalization is large (e.g., that Y_1 is a cause of Y_2 or vice versa). Note that the zone residuals will have to be modified in the case of these alternate causal models. <-- This clarification around the modeling assumptions is the biggest opportunity to strengthen the paper.\n\nSection 4: Naive question (connecting to the asymmetric nature of zone generalization and zone residual metrics) -- Do the asymmetric relationships inferred between different ROIs suggest a different underlying mechanism than a common cause Z (or is it fully explained by different base SNR for the different ROIs, contributing to an asymmetric zone generalization value)?\n\n-----\n(After author response)\nI appreciate the authors' clarification to Q3 and Q8 (asymmetry in the metrics) and the response to Q7 (which was the weakest point) taken in conjunction with response Q4 to Nc86 have satisfied my concerns about the utility of these metrics in mis-specified settings.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to reviewer czAT\nWe thank the reviewer for their thoughtful suggestions and believe that adding a figure that introduces encoding models will strengthen the paper. We will add such a figure to the updated manuscript. We attempt to address the reviewer's questions below. \n### **Q1: normalization of zone generalization**\n\n**Summary:** We agree that normalization is important and we use normalization in all of the experiments, as discussed in Sec 2.4 and Appendix A. We did not include a specific normalization in the definition of zone generalization because different types of normalization may be suitable when using the metrics for investigating different scientific questions (Appendix A). We are happy to discuss the normalization earlier in Sec 2 if the reviewer would prefer that.\n\n**Further details:** In our experiments, we chose to normalize the zone generalization of zone_1 to zone_2 by the inter-subject correlation (ISC) for zone_2 (Sec 2.4). We chose this normalization for two main reasons. Firstly, ISC can be thought of as an estimate of the maximum possible performance (i.e. the noise ceiling, see Appendix A for a figure and a longer discussion). That’s not the case for encoding model performance. For example, in Fig 8 (2nd and 3rd rows, rightmost cells), we see that if the target brain zone_2 has higher SNR than the source brain zone_1 (beta_2 > beta_1), then zone generalization(zone_1, zone_2) > encoding model performance(zone_1) for some levels of similarity of response to stimulus properties (alpha=0.75 and alpha=1). 
Therefore, the encoding model performance in the source zone is not an upper bound on the zone generalization.\n\nSecondly, we normalize by ISC specifically for the target zone_2, because we find in simulations that the SNR in the target zone is much more of a limiting factor to zone generalization (Fig 8, 3rd row) than the SNR in the source zone.\n\n### **Q2: relative correlation measures** \n\nA relative correlation measure is not sufficient in all cases, because we want to distinguish the following two scenarios: 1) low encoding performance and low zone generalization, and 2) high encoding performance and high zone generalization. A relative correlation measure may make these two cases appear very similar, but we should not make any inference in the first case. One way to go around this is to threshold by the encoding model performance, which is equivalent to the first step in our framework (Fig 1, top). \n\n### **Q3: symmetric zone generalization**\n\nWe believe that the asymmetry of zone generalization can be informative (see response to **Q8** below). We agree with the reviewer that a symmetric version of zone generalization may also be informative, which can be explored by future work.\n\n### **Q4: fitting of zone generalization to training data**\n\n**Summary:** We fit all three metrics based on the most general and most commonly-assumed model which treats each brain zone independently (encoding models and zone generalization are fit based on Eq1; zone residuals are fit based on Eq1 when the stimulus representation X is replaced by the brain recordings for a second brain zone Y_j). \n\n**Further details:** However, the true underlying model of how brain activity is generated as a function of the stimulus is of course unknown. In the remainder of Sec 2, we posit that there can be additional components in the brain activity, which are important for understanding whether two zones are affected by stimuli in a similar or different way. We introduce the two additional models (Eq2 & Eq4) to conceptually break down the independent component epsilon in Eq1 into these additional important components. Our simulations show that when these additional components are present in the brain data, our proposed metrics (zone generalization and zone residuals): 1) are indeed sensitive to these additional components, despite being fit based on the most general model in Eq 1, and 2) allow us to make one of the four main inferences in Fig 1, when used together. We will clarify this in the paper. \nIn general, there can be other models that can be used to model brain zones jointly, but whether such approaches are advantageous is an open research question. For example, Wehbe et al. (AoAS 2015) showed that jointly modeling different voxels might help provide better estimates for noisy voxels, but that there is not much advantage for voxels that are already well modeled on their own.\n\nContinued from Part 1\n\n### **Q5: zone residuals and multiple participants**\n\nWe resort to zone residuals because we want to account for features of the stimulus that we cannot capture in our stimulus representations and that are processed uniquely by one zone. We assume these features affect the brain zones reliably across participants, but we cannot access them. Thus we use the fact that the residual of one zone on another is correlated across participants as an estimate for the processing of these features in one of the zones and not the other. 
\nOne other option is to correlate the residuals across repetitions of the same stimulus in the same participant (if available). One complexity that arises in this case is that repetitions of a stimulus are processed by the brain somewhat differently than the first presentation of the stimulus.\n\n### **Q6: fitting of zone residuals**\n\nPlease see response to **Q4**.\n\n### **Q7: simulations of misspecified settings** \n\n**Summary:** We agree with the reviewer that adding additional simulations to discuss plausible alternate models such as when $Y_{1}$ is a cause of $Y_{2}$ will strengthen the paper. We have implemented two alternate data generation models where $Y_{1}$ is a cause of $Y_{2}$ and evaluated if our proposed framework can help us infer if a stimulus affects two brain zones in the same way in these scenarios. We find that when used together zone generalization and zone residuals enable us to infer if a stimulus affects two brain zones with significant encoding model performance in the same way.\n\n**Further details:** In the first model $X$ (the stimulus-representation) and $Z$ (the remaining stimulus information not captured by $X$) directly affect $Y_{2}$ and indirectly affect $Y_{2}$ through mediator $Y_{1}$:\n$$ \nY_{1, P} = \n {g_{12, P}(X)} + \\delta \\times\n h_{12, P}(Z) + (1-\\delta) \\times \\epsilon_{12, P} \n$$\n$$\nY_{2, P} = \\tau \\times Y_{1, P} +\n {g_{2, P}(X)} + \\delta \\times\n h_{2, P}(Z) + (1-\\delta) \\times \\epsilon_{2, P}\n$$\n\nWe introduce an adjustable parameter $\\tau$ to simulate a wide range of scenarios. Parameter $\\tau \\in [-1, 1]$ controls the extent and direction that $Y_{2}$’s measurements are affected by $Y_{1}$. The remaining parameters and functions were previously defined in Appendix B.1. \n\nIn the second model $X$ and $Z$ only indirectly affect $Y_{2}$ through mediator $Y_{1}$:\n$$ Y_{1, P} = \n {g_{12, P}(X)} + \\delta \\times h_{12, P}(Z) + (1-\\delta) \\times \\epsilon_{12, P} $$\n$$ Y_{2, P} = \\tau \\times Y_{1, P} +\n \\epsilon_{2, P}\n$$\n\nWe evaluated the three metrics in our framework on synthetically generated data from both models similarly to Fig. 2. Open Review does not support adding images directly in the response, so we have included the new figures via an anonymized link: https://drive.google.com/file/d/142m0_efSIIn-YcP9BQFRvs3fq15g7qcV/view?usp=sharing\nFor both data generation models we find that only zone generalization is informative of how similar two brain zones respond to stimulus properties, enabling us to determine which pair of inferences (A or B vs. C or D) can be made (Rebuttal Figs. 4-5 left). We also find that for the first model only zone residuals vary when the proportion of stimulus properties that are driving the brain zones but not captured by the stimulus-representation changes (Rebuttal Figs. 4 right). For the second model, we find that when we vary the proportion of stimulus properties that are driving the brain zones but not captured by the stimulus-representation the zone residuals remain constant at $0$ (Rebuttal Fig. 5 right). This is expected as the stimulus properties that drive the brain zones are shared between the two zones. Therefore, for the second model only inference B or C is possible. This suggests that for both models, we can use zone generalizations to narrow our search down to a pair of inferences (A or B vs. 
C or D), and then zone residuals can be used to precisely infer how the stimulus affects the pair of zones.\n\n### **Q8: reasons for asymmetric metrics** \n\nThe asymmetry in both zone generalization and zone residuals can be due to a difference in SNR, and also to a difference in the proportions of all stimulus properties that similarly affect the zones. To illustrate this, consider the extreme case in which the stimulus properties that affect zone_1 are a strict subset of those that affect zone_2 and the effect on the two zones is similar. Then, zone generalization(zone1, zone2) < zone generalization(zone2, zone1) and zone residuals(zone1, zone2) < zone residuals(zone2, zone1). Disentangling this from the SNR effect on asymmetry is an interesting direction for future work. We will clarify this in the text.\n", "role": "user" }, { "content": "I appreciate the authors' feedback. They've addressed nearly all of my concerns.", "role": "assistant" } ]
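The zone-generalization and ISC-normalization steps discussed in the exchange above can be made concrete with a short sketch. Everything below is illustrative and hedged: the ridge encoding model, the Pearson-correlation scoring, the simple train/test split, and the function names are assumptions introduced here, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import RidgeCV


def zone_generalization(X, Y1, Y2, n_train):
    """Fit an encoding model from stimulus features X to zone 1 activity,
    then score its held-out predictions against zone 2 activity
    (mean per-voxel Pearson correlation)."""
    model = RidgeCV(alphas=np.logspace(-3, 3, 13))
    model.fit(X[:n_train], Y1[:n_train])
    pred = model.predict(X[n_train:])
    target = Y2[n_train:]
    r = [np.corrcoef(pred[:, v], target[:, v])[0, 1] for v in range(target.shape[1])]
    return np.nanmean(r)


def isc(Y2_subject_a, Y2_subject_b):
    """Inter-subject correlation for the target zone, used as a noise-ceiling
    estimate by which zone generalization is normalized in the response above."""
    r = [np.corrcoef(Y2_subject_a[:, v], Y2_subject_b[:, v])[0, 1]
         for v in range(Y2_subject_a.shape[1])]
    return np.nanmean(r)
```

Dividing `zone_generalization(X, Y1, Y2, n_train)` by `isc(...)` computed on the target zone mirrors the normalization choice described in the response, where target-zone ISC is taken as the noise ceiling.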
nbsWo7ob-Mm
CLeaR 2022 Conference
Reviewer_Daoa
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_Daoa, and the following are the paper you need to review:\n```<<nbsWo7ob-Mm>>```", "role": "user" }, { "content": "Review Content: Summary: An interesting first step to model social influence\nMain review: The authors describe PIF, a method to disentangle social influence from other underlying factors of prople's behavior. The question addressed is whether, knowing the connections between people, the fact that a user j bought an item yesterday makes user i buy an item today.\n\nThe authors posit a causal model which includes the social graph of \"friend\" connections, as well as confounders -- the features of the items, the individual preferences of users, the attributes that are responsible for their connection in the network and could also be responsible for their purchasing behavior. The authors propose a Poisson factorization model where the latent factors that are responsible for the graph structure -- independently of the purchases -- and the factors responsible for the previous day purchase. The authors propose an experimental validation with relevant baselines\n\nThe paper is well-written and overall clear. I liked the experiment section, and the idea that \n\n\n* temporal aspect: it seems that the model is correct under the assumption that yesterday's purchases are free of social influence. It seems unlikely to hold in practice. Is it possible to extend the model to account for multi-step dynamics that would be more relevant to the real world?\n* assumption that there is no social influence if we do not observe a link: I don't see this assumption explicitly, except hidden in Eq 11, but it seems to be used implicitly (for instance in the choice of the definition of psi_{ijk}). Could the authors discuss this assumption and its relevance (e.g., on platform such as lastfm where the social component is unlikely to be very present). Also, why define psi_{ijk} conditionned on a_ij=1? It seems that removing this conditioning would be a valid definition of social influence as well. \n* assumptions for Proposition 1: it seems that some assumptions are missing. 
For instance, I suspect the result assumes that the true distribution follows the poisson model described in Eq 11 and that we have an infinite amount of data to estimate beta?\n* \"real-world\" study: \n- admittedly the study is not really \"real-world\", in particular given the two-step dynamics. While I found it interesting that PIF obtains better performances (maybe) because other models overestimate the effect of social influence, maybe it feels like the approach only works in this case (because the assumption that yesterday's purchases do not depend on social influence approximately holds). \n- I don't have any order of magnitude in mind, but -127 for log-likelihood of counts of music pieces seems very low. What is the LL of a model that predics uniformly across all songs with the correct number of songs listened to as total average? \n* in applications where there is a recommender system which may use previous-day purchase history of \"friends\" to make recommendation (and thus influence today's purchases), to what extent the \"social influence\" we are trying to model is actually reverse-engineering the recommender system? Can the model be modified to take disentangle the influence due to the recommender system from a true \"social influence\" that would be due to social interactions between friends?\n\nother remarks:\nFigure 1:\n* the legend uses x_{ik} but the figure x_{jk}\n* the edge x_{jk} -> rho_j is in the wrong direction\n* the edge x_{jk} -> y_{ik} is missing\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to review 1\nThank you for your support of our paper. We also appreciate you pointing us to some mistakes in the causal graph in Fig 1; we have fixed these. We address the detailed questions below:\n\n+ \"temporal aspect: it seems that the model is correct under the assumption that yesterday's purchases are free of social influence. It seems unlikely to hold in practice. Is it possible to extend the model to account for multi-step dynamics that would be more relevant to the real world?\"\n\nWe agree with you that extending the notion of social influence to time-varying influence would be a fruitful next step for this line of work. However, the first step in analyzing time-varying influence is articulating how social influence spreads behaviors across just two time points. This paper is in service of that goal. Following on work such as [2], we focus on the two time-step setting to define social influence as a causal quantity, clarify assumptions, and develop estimation methods. The technical groundwork we lay down in this paper will help with estimating time-varying influence.\n\n+ \"assumption that there is no social influence if we do not observe a link: I don't see this assumption explicitly, except hidden in Eq 11, but it seems to be used implicitly (for instance in the choice of the definition of $\\psi_{ijk}$). Could the authors discuss this assumption and its relevance (e.g., on platform such as lastfm where the social component is unlikely to be very present). Also, why define $\\psi_{ijk}$ conditionned on $a_{ij}=1$? It seems that removing this conditioning would be a valid definition of social influence as well.\"\n\nIt is common in the literature on network effects (c.f. [1] and [2]) to define influence as acting only along known network ties, which is the same as conditioning on $a_{ij}=1$. We follow this convention. 
Even though it is mathematically valid to define social influence without this conditioning, there are fewer plausible mechanisms by which a person could influence someone with whom they share no network ties.\n\n+ \"assumptions for Proposition 1: it seems that some assumptions are missing. For instance, I suspect the result assumes that the true distribution follows the poisson model described in Eq 11 and that we have an infinite amount of data to estimate beta?\"\n\nThank you for pointing this confusion out. Our assumption in Proposition 1 is that\n$\\mu_{ik}(a,x) = \\mathbb{E}[y_{ik} | a_{ij}=a, x_{jk}=x, a_{i, \\setminus j}, x_{k, \\setminus j}, \\hat{c}_i, \\hat{w}_k]$.\nThis assumption says that the rates in the Poisson model truly capture the expected outcomes. This would hold when the outcomes are distributed according to the Poisson model, as you said. However, we appreciate that the assumptions can be clearer, and so we have clarified that we assume that the outcomes are distributed according to the Poisson model. You're also right that this is a result about identification, which assumes infinite data and doesn't consider estimation error. We have also clarified this in our remarks about P1.\n\n+ \"I don't have any order of magnitude in mind, but -127 for log-likelihood of counts of music pieces seems very low. What is the LL of a model that predicts uniformly across all songs with the correct number of songs listened to as total average?\"\n\nThanks for this idea to improve the Last.fm study. We implemented a baseline Poisson model that, for each held out Last.fm song listen, $y_{ik}^{\\textrm{heldout}}$ predicts with the Poisson rate $\\frac{1}{n_i}$, where $n_i$ is the number of songs user $i$ listened to in the previous time period (i.e., based on the variables $y_{ik}$). This baseline model predicts with an average Poisson log likelihood of -317.8 on the held out data.\n\n+ \"in applications where there is a recommender system which may use previous-day purchase history of \"friends\" to make recommendation (and thus influence today's purchases), to what extent the \"social influence\" we are trying to model is actually reverse-engineering the recommender system? Can the model be modified to take disentangle the influence due to the recommender system from a true \"social influence\" that would be due to social interactions between friends?}\n\nThis is an interesting thought. In the setting you describe, our notion of influence would calculate the total effect of social influence, combining the sub-effects mediated by the recommender and the network itself. Decomposing this into effects mediated by the recommender algorithm versus the network itself would require finer grained data about who did and did not receive social recommendations.\n\nReferences:\n\n[1] McFowland III, E. and Shalizi, C.R. 2016. Estimating Causal Peer Influence in Homophilous Social Networks by Inferring Latent Locations. arXiv:1607.06565.\n\n[2] Ogburn E.L., Sofrygin O., Diaz, I. and van der Laan M.J. 2017. Causal Inference for Social Network Data with Contagion. arXiv:1705.08527.\n Global/general Response:\n```\nWe thank all the reviewers for their comments. Overall, we're glad that the reviewers found our paper to be well-written and technically sound. We found that the reviewers generally had no major technical concerns. Below, in the response to each reviewer, we addressed each comment in detail, including providing extra empirical results as requested. 
We hope that reviewers will consider raising their score if we addressed their concerns or follow up with us for further discussion.\n```\n", "role": "user" }, { "content": "Thanks for the authors' response. Most of my concerns have been addressed. I continue to recommend this work positively.", "role": "assistant" } ]
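The naive baseline quoted in the response above (average held-out Poisson log-likelihood of -317.8, predicting each listen with rate 1/n_i) is simple enough to sketch. The array layout (users × songs) and the function name are assumptions made for illustration only.

```python
import numpy as np
from scipy.stats import poisson


def baseline_poisson_ll(y_heldout, y_previous):
    """Average Poisson log-likelihood of the naive baseline described in the
    response: each held-out listen count for user i is scored with rate
    1 / n_i, where n_i is the user's total listen count in the previous period."""
    n_i = y_previous.sum(axis=1, keepdims=True)              # (n_users, 1)
    rates = np.broadcast_to(1.0 / np.maximum(n_i, 1), y_heldout.shape)
    return poisson.logpmf(y_heldout, rates).mean()
```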
nbsWo7ob-Mm
CLeaR 2022 Conference
Reviewer_YCkp
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_YCkp, and the following are the paper you need to review:\n```<<nbsWo7ob-Mm>>```", "role": "user" }, { "content": "Review Content: Summary: This work extends Deconfounder with a Poisson factorization model to estimate social influence in network observational data.\nMain review: This work aims to estimate social influence, a typical type of spillover effect in social networks. It proposes PIF (Poisson Influence Factorization), which takes social network, past and current behaviors of members as its input. It works in a similar way to Deconfoudners, which first infers latent substitutes of hidden confounders and then do a linear regression to estimate potential outcomes. Semi synthetic experiments show the proposed method PIF-Net and PIF-Joint outperforms existing methods and can achieve similar performance to Oracle. The real-world experiment further verifies PIF-joint's effectiveness through a prediction task. \n\nMajor concern:\n1. It is not quite clear why it is needed to condition on a_i^{-j} and x_i^{-j} given the fact the only confounder for the causal effect of x_jk on y_ik in fig. 1 is \\tau_k.\n2. It seems there is no strong motivation behind the poisson factorization model, can we replace it with other types of matrix factorization or neural networks?\n3. In the real-world experiment, it is not very fair to use Poisson likelihood as the performance metric since we do not know whether the underlying data generating process follows the Poisson factorization model. I would suggest to directly show the accuracy or MSE of predicted outcomes or predicted treatments.\n\nComments:\n\n1. The causal graph in Fig. 1 cannot show x_jk is prior to y_ik in time.\n2. Similarly, in Eq.(1), I believe we also need to emphasize the value of y_ik must be determined after do(x_jk).\n3. It is better to make it clear what is the expectation in eq.(3) taken over.\n4. For eq.(8) to hold, it is better to specify that it needs to assume v_i and u_i are exogenous.\n5. It is better to explain the difference between c_i and z_i, it seems both of them only influences edges.\n6. In eq.(9), why x_ik is not affected by any social influence?\n7. What is the difference between eq.(10) and the second term of eq.(9)? 
Both of them factorize x to latent variables.\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to review 2\nThanks for your comments. We address them in detail below. If we've sufficiently addressed your concerns, we hope that you'll consider updating your score.\n\n+ \"It is not quite clear why it is needed to condition on $a_i^{-j}$ and $x_i^{-j}$ given the fact the only confounder for the causal effect of $x_jk$ on $y_ik$ in fig. 1 is $\\tau_k$.\"\n\nThe causal effect $\\psi_{ijk}$ is a marginal quantity that implicitly marginalizes out the other causes of $y_{ik}$, which are $a_i^{-j}$ and $x_i^{-j}$. When we write this marginal out explicitly, we get Eq. 4, where we condition on the variables $a_i^{-j}$ and $x_i^{-j}$. Note that this conditioning is not because the variables $a_i^{-j}$ and $x_i^{-j}$ are confounders. We explained this idea in Section 2.4 above Eq. 4.\n\n+ \"It seems there is no strong motivation behind the poisson factorization model, can we replace it with other types of matrix factorization or neural networks?\"\n\nThe motivation behind using Poisson community models and Poisson matrix factorization is that they are well-studied, have been successful choices for modeling network and heterogeneous data such as purchases, and admit efficient inference algorithms [1, 2, 3]. We discussed this in section 3.1. We agree that it would be an interesting area of future research to fit neural models of outcomes; we highlighted it as an area of future work in our discussion in section 5.\n\n+ \"In the real-world experiment, it is not very fair to use Poisson likelihood as the performance metric since we do not know whether the underlying data generating process follows the Poisson factorization model. I would suggest to directly show the accuracy or MSE of predicted outcomes or predicted treatments.\"\n\nWe calculated the area under the ROC curve (AUC), a metric of classification accuracy, for each of the compared methods by treating the estimated rate from the Poisson model as the prediction score. Recall that the task is to predict Last.fm song listens (binary) for a future, unseen period of time. The AUC results are:\n\n+ Unadjusted: 0.55\n+ mSPF: 0.66\n+ Net-Only: 0.55\n+ PIF: 0.67\n\nWe see that PIF is still the best performing model according to AUC.\n\nReferences:\n\n[1] Ball, B., Karrer, B. and Newman, M.E., 2011. Efficient and principled method for detecting communities in networks. Physical Review E, 84(3), p.036103.\n\n[2] Gopalan, P.K. and Blei, D.M., 2013. Efficient discovery of overlapping communities in massive networks. Proceedings of the National Academy of Sciences, 110(36), pp.14534-14539.\n\n[3] Gopalan, P., Hofman, J.M. and Blei, D.M., 2015, July. Scalable Recommendation with Hierarchical Poisson Factorization. In UAI.\n Global/general Response:\n```\nWe thank all the reviewers for their comments. Overall, we're glad that the reviewers found our paper to be well-written and technically sound. We found that the reviewers generally had no major technical concerns. Below, in the response to each reviewer, we addressed each comment in detail, including providing extra empirical results as requested. We hope that reviewers will consider raising their score if we addressed their concerns or follow up with us for further discussion.\n```\n", "role": "user" }, { "content": "I appreciate the authors' feedback. 
It satisfactorily addresses most of my concerns, and I am considering increasing the score.", "role": "assistant" } ]
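The AUC evaluation added in this response, which scores each (user, song) pair by its fitted Poisson rate and treats future listens as binary labels, amounts to a one-liner. The sketch below assumes dense arrays and is meant only to pin down the metric, not to reproduce the authors' pipeline.

```python
import numpy as np
from sklearn.metrics import roc_auc_score


def listen_auc(y_future_binary, estimated_rates):
    """Treat the fitted Poisson rate for each (user, song) pair as a prediction
    score and compute AUC for the binary 'listened in the future period' label,
    mirroring the evaluation reported in the response (PIF 0.67, mSPF 0.66)."""
    return roc_auc_score(np.ravel(y_future_binary), np.ravel(estimated_rates))
```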
nbsWo7ob-Mm
CLeaR 2022 Conference
Reviewer_4BGp
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_4BGp, and the following are the paper you need to review:\n```<<nbsWo7ob-Mm>>```", "role": "user" }, { "content": "Review Content: Summary: The paper introduces a causal model for social influence in order to estimate social influence when there are unobserved confounders. The idea is that there is an observed social network that determines the relationship between individual \"buyers\", and unobserved attributes that affect both the connections between buyers and items that are bought. The model also assumes the there exist only two different times (called yesterday and today), which attribute to the causal interpretation of the model. In order to estimate the social influence, a joint factor Poisson model is fitted for certain unifying substitutes of the confounders, and then these substitutes are estimated. There is a simulation study that shows under the setting of the simulation, this model (when fitting a joint model of purchases and the network data) performs better than two other alternative models. There is also a real world data analysis of the model using the data collected from a song-sharing platform.\nMain review: The paper is clearly written, and the technical details seem correct to me. I appreciate the use of Poisson model for social influence -- although the model itself as well as the use of a single latent variable instead of several of them in graphical models are not novel, the idea of a joint model for both network connections and purchases is quite interesting. However, I have some concerns and confusions about the model as I describe below:\n\nFirstly, the idea of two time points (today and yesterday) is rather strange to me. Of course the notion of time here (perhaps too conveniently) imposes a causal direction between two nodes of the network. However, it completely disregards the fact that, when correlation is observed and the effect of confounders are no longer a concern (as it is the case here), the causal direction could be in either direction: what if the day after the allegedly cause node buys the item again -- can not the effect be considered a cause with exactly the same setting?\n\nIn addition, the idea of hypothetical intervention is rather confusing to me. 
It seems to me in reality a counterfactual distribution is being computed not an interventional one, but this should be made clear regardless. More importantly, the use of backdoor criterion to relate the observational and interventional distributions is questionable here as firstly the causal graph is just assumed (not learned) and secondly the Markov property is not formally shown. \n\nThe causal graph of Figure 1 looks rather strange to me as well. Since $a_{ij}$ is a collider node in the graph, the model is basically saying that, given the relationship between $i$ and $j$ in the network (regardless of whether they are connected or not), The nodal attributes are correlated,and consequently, after them being marginalized over, $x_{jk}$ and $y_{ik}$ have some symmetric association (which is usually indicated by an undiredcted edge in mixed graphical modeling). I do not think this is what the authors intend for their model. At some stage (when discussing Figure 2), it seems that the assumption, when nodes are at the same time point, is dyadic independence given the attributes, but the situation is more complicated as there are two different time points in the model. Moreover, the graphical model is only described for two nodes in the network. How does this expand when one considers the whole network, and how does the social influence is measured for nodes that are not adjacent by connected via a path in the graphical models? In general, the independence structure of the whole model is not clear to me.\n\nThere are also many assumptions especially when using the Bayesian estimation for substitutes. There should be a justification of these assumptions, or at least a discursion on them.\n\nFinally, my problem with the simulation study is that, by design, the purchases are happening where the connections in networks are. This is rather counterintuitive when one wants to test a joint model for purchases and network connections. I think there should be a variety of scenarios regarding purchases to make this simulation study more reliable.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to review 3\nThanks for your comments. We addressed to the best of our ability, but we had some questions about your comments, as we note below. We hope during the open discussion, we can clarify all your comments.\n\n+ \"the notion of time here imposes a causal direction between two nodes of the network [...] what if the day after the allegedly cause node buys the item again -- can not the effect be considered a cause with exactly the same setting?\"\n\nWe follow a large body of work on network effects (c.f. [1,2]) in assuming that yesterday's outcome causes today's outcome. If the outcome $y_{ik}$ influences person $j$'s outcomes at a next time point, we would use $y_{jk}^{t+1}$ to denote the future outcome and draw a causal arrow from $y_{ik}$ to $y_{jk}^{t+1}$. This is commonly how time varying influence is defined [1,2].\n\n+ \"the idea of hypothetical intervention is rather confusing to me [...]\"\n\nAn interventional distribution over the purchase $y_{ik}$ that arises by intervening on the purchase $x_{jk}$ is a type of counterfactual distribution; in Chapter 4 of [3], Pearl discusses that interventional distributions are types of counterfactual distributions. 
In this paper, we follow Pearl in using \"do\" notation and interventional distributions to define social influence since counterfactual queries can be more complex than interventional ones.\n\nRe: backdoor (BD) criteria, it is standard to apply the criteria to assumed causal models. It is less common to apply the BD criteria to causal models learned from data. In Appendix 6.1, we discuss how the BD criteria is satisfied by the confounders $\\rho$ and $\\tau$. We ask the reviewer to please clarify what they mean by \"the Markov property is not formally shown\" so that we can discuss this point better.\n\n+ \"The causal graph of Figure 1 looks rather strange to me as well [...] In general, the independence structure of the whole model is not clear to me.}\"\n\nWe will clarify some of the assumptions encoded in the causal model in Figs 1 and 2.\n\nThe backdoor path in Fig 1 between $x_{jk}$ and $y_{ik}$ via the network tie $a_{ij}$ means: knowing (i.e., conditioning on) whether or not two people $i$ and $j$ are connected gives us information about how similar (or dissimilar) their latent traits $\\rho_i$ and $\\rho_j$ are. This is due to homophily, the tendency of similar people to form ties. If we have information about how similar or (dissimilar) two people are, then, given one person's outcome yesterday, we can predict their friend's outcome tomorrow. That is, there is an a way to predict $y_{ik}$ from $x_{jk}$ based on latent traits $\\rho$ and $\\tau$. Thus, $\\rho$ and $\\tau$ are variables that block backdoor paths when we condition on them. We stress that Fig 1 involves plates over the $m$ items, $n$ people and $n x n$ network ties. This means that the assumptions encoded in the causal model extend to all variables $y_{ik}$.\n\nIn Fig 2, we unrolled the plate over the $n$ people on the left side of Fig 1 to show explicitly that the variable $\\rho_i$ is a common cause of $a_{i1}, a_{i2}, ... a_{in}$. This common cause structure is what allows us to construct a substitute for $\\rho_i$ by fitting a probabilistic factor model that assumes the same structure. We discuss this in detail in \"why are substitutes valid\" in Section 3. \n\n+ \"There are also many assumptions especially when using the Bayesian estimation for substitutes. There should be a justification of these assumptions, or at least a discursion on them.\"\n\nIn the main result in Proposition 1, our goal is to separate the assumptions needed for parametric identification (infinite data regime) from estimation (finite data regime). We present assumptions that justify using substitutes in the context of properly specified models and infinite data. We study estimation error empirically in our experiments.\n\nWe agree that there are limitations when estimating substitutes from finite samples in practice. In particular, depending on model misspecification and convergence rates of the Bayesian model we fit, we may mismeasure the substitutes confounders and obtain biased estimates of social influence. Analyzing this bias is another avenue of future work. We have added a discussion about these limitations to Section 5.\n\n+ \"Finally, my problem with the simulation study is that, by design, the purchases are happening where the connections in networks are [...]\"\n\nWe ask the reviewer to clarify what they mean in this point so that we can address it properly.\n\nReferences:\n\n[1] McFowland III, E. and Shalizi, C.R. 2016. Estimating Causal Peer Influence in Homophilous Social Networks by Inferring Latent Locations. 
arXiv:1607.06565.\n\n[2] Ogburn E.L., Sofrygin O., Diaz, I. and van der Laan M.J. 2017. Causal Inference for Social Network Data with Contagion. arXiv:1705.08527.\n\n[3] Pearl, J., Glymour, M. and Jewell, N.P., 2016. Causal inference in statistics: A primer.\n Global/general Response:\n```\nWe thank all the reviewers for their comments. Overall, we're glad that the reviewers found our paper to be well-written and technically sound. We found that the reviewers generally had no major technical concerns. Below, in the response to each reviewer, we addressed each comment in detail, including providing extra empirical results as requested. We hope that reviewers will consider raising their score if we addressed their concerns or follow up with us for further discussion.\n```\n", "role": "user" }, { "content": "Title: Respone to the authors\nThank you very much for your response, which clarifies some of the confusions I had. To answer to authors' questions:\n\n1) Admittedly I was not aware of the literature on \"yesterday and today\", but I still find it strange that an arbitrary notion of time is imposed, and based on that causal direction is implied after observing association (with no confounder).\n\n2) In the setting of front and backdoor criteria, the model is Markovian to the graph. Is this the case here, and could you show that?\n\n3) My issue with simulation was that you simulate today’s purchases based on the social influence. Should there not be a variety of scenarios considered, e.g., when today's purchases are independent of social influence etc.?", "role": "assistant" }, { "content": "Title: Thanks for your clarifications!\nRe: Causal direction based on time: in any time-series analysis, it is standard to assume that a variable that occurs earlier in time may cause variables that occur later in time, but not vice-versa. In the context of our paper, today's purchases cannot cause yesterday's purchases. However, our notation can be easily extended to time series data where today's purchases can subsequently cause tomorrow's purchases, and so on.\n\nRe: backdoor criteria: Yes, we can show the confounders $\\rho_i$ and $\\tau_k$ satisfy the BD criteria with respect to the assumed causal graphical model in Fig. 1:\n\n One open backdoor path between the intervened variable $x_{jk}$ and the outcome variable $y_{ik}$ goes through $\\tau_k$. Thus, by conditioning on $\\tau_k$, we block this backdoor path.\n\nThe second open backdoor path between the intervened variable $x_{jk}$ and the outcome variable $y_{ik}$ arises because we condition on $a_{ij}=1$ when we define social influence. Because the variable $a_{ij}$ is a collider, when we condition on it, there is a spurious association between its parents, $\\rho_j$ and $\\rho_i$ and we create a backdoor path between $x_{jk}$ and $y_{ik}$ via $\\rho_j$ and $\\rho_i$. Thus, by conditioning on either $\\rho_j$ or $\\rho_i$, we block this backdoor path; in this paper, we choose to include $\\rho_i$ in the adjustment set.\n\nWe explain how $\\tau_k$ and $\\rho_i$ are a valid adjustment set w.r.t. to the causal graph in Fig 1 in appendix 6.1.\n\nRe: simulation: We liked your suggestion to simulate today's purchases in a setting where everyone in the network has no social influence. That is, yesterday's purchases have no effect on today's purchases. \n\nUsing the real Pokec social network, we fixed each user's social influence to 0. Then, we simulated today's purchases as described in the paper. 
\n\nIn the case where both sources of confounding are present (homophily and item attributes), we report MSE between known and estimated social influence for the low, medium and high confounding regimes we studied in the paper. Here is the table, with results averaged over 10 simulations:\n\n| | Low | Med. | High |\n|:-------------|---------:|--------:|---------:|\n| Unadjusted | 0.00209 | 0.00232 | 0.00241 |\n| MSPF | 9.39e-05 | 0.00014 | 0.000137 |\n| Network Only | 0.000346 | 0.00052 | 0.000562 |\n| PIF-Joint | 7.59e-05 | 9.4e-05 | 0.000124 |\n\nThe MSE results suggest that there is some estimation bias in all methods' social influence estimates when the simulated influence is 0. However, PIF remains the least biased in this setting compared to the other methods. The trend persisted in the settings where homophily and item attributes were the only source of confounding. We'll include these results in the paper.", "role": "user" }, { "content": "It was a pleasure to engage in the above discussion with the author, which has certainly made the work more robust.", "role": "assistant" } ]
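The backdoor argument in this exchange — that conditioning on the tie a_ij, a collider, induces a spurious association between the latent traits rho_i and rho_j — can be checked numerically. The tie model below (a logistic function of the product of two Gaussian traits) is a toy stand-in chosen for the illustration, not the Poisson community model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
rho_i = rng.normal(size=n)          # latent trait of person i
rho_j = rng.normal(size=n)          # latent trait of person j

# Homophily: a tie a_ij forms with probability increasing in latent similarity.
p_tie = 1.0 / (1.0 + np.exp(-rho_i * rho_j))
a_ij = rng.binomial(1, p_tie)

print(np.corrcoef(rho_i, rho_j)[0, 1])                        # ~0 marginally
print(np.corrcoef(rho_i[a_ij == 1], rho_j[a_ij == 1])[0, 1])  # clearly positive
```

Marginally the two traits are uncorrelated; restricting attention to pairs with a_ij = 1 produces a clearly positive correlation, which is exactly the open backdoor path the authors block by including rho_i in the adjustment set.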
pJuO-5QEKa
CLeaR 2022 Conference
Reviewer_7Td3
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_7Td3, and the following are the paper you need to review:\n```<<pJuO-5QEKa>>```", "role": "user" }, { "content": "Review Content: Summary: A useful theoretical analysis of causal concepts for explainable AI that could use substantial revisions to the front of the paper\nMain review: The introduction distinguishes between explaining the model itself and explaining the real world. These are profoundly different things targets for explanation, and much of the work in XAI has tended to focus on the first. That is, work in XAI aims to answer the question: “Why did the model make conclusion X?” rather than “Why would making conclusion X be justified by the way that the world works?” While this is made fairly clear later in the paper, the beginning of the paper reads as if the authors believe that most work in XAI targets the latter (much more difficult) question. The introduction should be revised to better reflect the later material.\n\nThe focus of the paper (“…conceptual analysis of the relevant notions of causal explanations…”) is interesting, though limited. Still, it is a sufficient contribution to warrant acceptance.\n\nThe authors claim that “If one is interested in finding an action that changes an already observed output, then Counterfactual Explanations are called for.” This is not what many researchers mean when they say “counterfactual explanation”. Rather than focusing on *outputs*, they focus on *inputs* (conditions). A counterfactual explanation says, what output would have been produced by the model given a different conditions.\n\nStrong assumptions, such as assuming perfect knowledge of a causal model M of the target domain, imply some responsibility on the part of the authors to either argue that (1) such an assumption is reasonable in practice (a hard sell, in this case); or (2) plausible violations of the assumption do not invalidate the conclusions of the analysis based on the assumptions. In this paper, we have neither. Greater discussion around this point would improve the paper.\n\nThe paper is reasonably critical of current work on XAI (e.g., “…the literature on XAI has failed to take up the most relevant lesson for XAI that the literature on causation has to offer…”). 
When leveling such criticisms, it is important to accurately characterize the goals of the work being critiqued. In at least some portions, the current paper does not accurately characterize the work. Specifically, it assumes that all work on XAI intends to explain why a given action is expected to achieve a particular outcome (how the world works) rather than to explain why a given model recommended a given action (how the model works). This difference is a fundamental one, and it is one that has been discussed extensively among (at least some) researchers working on XAI.\n\nOf course, a quite plausible question is whether it is possible to produce counterfactual explanations (about the model alone) that will be correctly interpreted. The explanation “Our model for credit scoring would have produced a different outcome if your education level had been higher” appears only slightly different than saying “You would be more worthy of credit if you had more education”. The two statements are miles apart in terms of the knowledge required of the world to make them with confidence, but users of an explanation system might easily confuse them. Furthermore, one might argue (and the current paper does) that counterfactual explanations of the model alone are really not possible without implicit reference to the world. This argument should be foregrounded rather than remain in its current (somewhat buried) position in the paper.\n\nThe discussion at the beginning of Section 5 is an extremely important one for the XAI enterprise, and the paper would be improved if the introduction of the paper did a better job of forecasting this future discussion rather than (as it does now) assuming that most XAI literature is doing far more than what Section 5 calls “the first interpretation”.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to reviewer\nThanks a lot for your positive and detailed feedback! The numbers indicate which paragraph of the review is being addressed.\n\n1: \nI agree entirely that it is crucial to distinguish between explanations of the model, and explanations of the system in the world that we are modeling, and this is why I do so in the second and third paragraphs. I also agree that most of XAI is focused on the former, and I will make this more explicit in the introduction. However, I am not so sure that most people who are reading work on XAI are aware of this, and I do believe that this in part due to the fact that most work on XAI fails to clearly specify the different types and purposes that an explanation can serve. (See note 5 and 6 below.)\n\n3: \nI agree that the current formulation fails to take into account the conditions (input). Perhaps this formulation is better: “If one is interested in finding an action that changes already observed conditions to produce a change in an already observed output, then Counterfactual Explanations are called for.” I will reverse the order: first introduce sufficient explanations, then counterfactual explanations.\n\n4:\nI agree that there is much more to be said about this, but I do want to point out two important elements that are already mentioned in the paper. First, there is definitely a strongly increasing interest in learning partial causal models that are designed specifically to aid and interact with ML-models, which is referenced at the beginning of page 2. 
Although these are just a few papers, they are all very recent, and as the current conference illustrates it is to be expected that more will follow soon. Second, the paragraph before Definition 3 (briefly) explains how the current deterministic definitions could be generalized to probabilistic definitions, and those would also be applicable to partial causal models. Of course the proof of the pudding is in the eating, and thus I agree that until such a generalization is produced the reader might remain somewhat sceptical. Still, I believe it is important not to conflate conceptual and definitional issues (which are the target of the paper) with practical empirical issues. This paper describes what it *means* to offer (counterfactual, sufficient, etc.) explanations which are action-guiding in the ideal setting of perfect knowledge, and therefore offers a starting point on which to build definitions in epistemically limited settings. \n\n5 and 6:\nAs mentioned in note 1 above, I do believe that the paper carefully distinguishes between action-guiding explanations and explanations of the ML-model itself, but I agree that the paper could also be more explicit about this distinction when levelling criticism at current work in XAI (the quotation of the reviewer offers a good case in point). However, speaking from personal experience I have come across many people who are not themselves experts in the field but would like to make use of its results (such as medical practitioners) who are not at all clear about this distinction. This latter point is very congenial to the reviewer’s point about interpreting counterfactual explanations, and I agree that it deserves more attention. I will therefore add a paragraph to the introduction clarifying that although most experts in XAI are (usually) careful not to conflate the different types of explanations, this does not mean that the same holds for those hearing and using such explanations. This is precisely why it is important to define explanations in such a manner that these ambiguities cannot arise.\n\n7:\nI am glad to hear that the reviewer values the discussion in Section 5. I agree that, given the importance of counterfactual explanations in XAI, the distinctions made should be mentioned more explicitly as novel contributions of the paper in the introduction, and will adjust this.\n", "role": "user" }, { "content": "I'm grateful for the authors' response. The response addresses most of my concerns. I maintain my positive recommendation for this work.", "role": "assistant" } ]
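The distinction this review and response keep returning to — counterfactual explanations that flip an observed output versus sufficient explanations that guarantee it — can be illustrated on a toy deterministic model. The credit-decision function and variable names below are invented for illustration and deliberately ignore the paper's finer distinctions (weak, direct and strong sufficiency).

```python
# Toy deterministic "model" of a credit decision over binary inputs.
def decide(income_high, educated):
    return int(income_high or educated)

observed = {"income_high": 0, "educated": 0}
y_obs = decide(**observed)                      # 0: application rejected

# Counterfactual-style question: which single change to the observed inputs
# would flip the observed output?
for var, val in observed.items():
    changed = dict(observed, **{var: 1 - val})
    if decide(**changed) != y_obs:
        print(f"changing {var} to {1 - val} flips the decision")

# Sufficiency-style question: does fixing income_high = 1 guarantee approval
# no matter how the remaining variables are set?
print(all(decide(income_high=1, educated=e) == 1 for e in (0, 1)))
```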
pJuO-5QEKa
CLeaR 2022 Conference
Reviewer_u6p5
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_u6p5, and the following are the paper you need to review:\n```<<pJuO-5QEKa>>```", "role": "user" }, { "content": "Review Content: Summary: This paper give discussion about sufficient explaination, counterfactual explaination, and actual causation explaination.\nMain review: This paper introduce the relations between XAI and causation, and give some clearly and insteresting definination of causation based explaination. I think such discussion contributes to make the action guiding explaination to go one step further. BTW, is it necessary to require the ML model and the causal model agree on each other? Maybe it can be relaxed.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to review\nThanks for your positive comments! Regarding whether it is necessary that the ML-model and the causal model agree on all observations, note that the paper already addresses this to some extent, for it says: “In future work both requirements can be loosened by adding probabilities to each. For the first, we can demand that both models are *likely* to agree on observations.” \n\nOf course this short comment leaves much to be determined. For example, if the causal model disagrees with the ML-model for some input-output pair, do we want to assume that at least one of them is correct? And if so, should it always be the same model? Or should we generalize such that both models come with probabilities which represent their uncertainty? The latter scenario would be most natural if both models were learnt entirely independently of each other, but that seems unrealistic (and inefficient). In fact, given that ML-methods are influential precisely because they are so good at learning associations, it seems likely that we would use the function h as a constraint when learning the solutions to the causal model in the absence of interventions, and then — per construction — both models would agree on all observations. I believe addressing these issues (and how to incorporate uncertainty more generally) merits a paper of its own. 
The reason is that otherwise one runs the risk of conflating conceptual and definitional issues with practical empirical issues.", "role": "user" }, { "content": "I value the authors' response. It resolves the majority of my concerns. I uphold my recommendation to accept this paper.", "role": "assistant" } ]
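The agreement requirement discussed in this exchange — that the ML model h and the causal model produce the same outputs on observations, possibly relaxed to agreement with high probability — is easy to state operationally. Both functions below are hypothetical placeholders over a two-bit input space.

```python
from itertools import product

def scm_observational(i_a, i_b):
    """Output the (hypothetical) causal model assigns to the observation (i_a, i_b)."""
    return int(i_a and i_b)

def h(i_a, i_b):
    """Hypothetical learned classifier being explained."""
    return int(i_a and i_b)

inputs = list(product((0, 1), repeat=2))
agreement = sum(scm_observational(*x) == h(*x) for x in inputs) / len(inputs)
print(agreement)   # 1.0 here; the probabilistic relaxation mentioned in the response
                   # would instead require high agreement under the data distribution
```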
pJuO-5QEKa
CLeaR 2022 Conference
Reviewer_77oH
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_77oH, and the following are the paper you need to review:\n```<<pJuO-5QEKa>>```", "role": "user" }, { "content": "Review Content: Summary: This paper introduces and motivates refined notions of causal explanations in deterministic structural causal models. \nMain review: This paper introduces and motivates refined notions of causal explanations in deterministic structural causal models. These are compared with alternative notions of causal explanations and discussed alongside the corresponding notions of causation. It is being argued for why the presented notion of causal explanations is useful beyond the already existing notions. The rather abstract discussion is illuminated with a few examples. Overall a good paper, accept.\n\n\n\n# Pros:\n\n* Well written\n\n# Cons:\n\n* unclear what is new in this paper.\n\n* significance for machine learning is mostly limited to this conceptual extension of causal terminoloy\n\n\n# Originality\n\nThis point is my major concern. I find it not obvious to disentangle which parts of the paper are a collection and comparison of notions that have already appeared elsewhere and which parts are new to the paper. In my opinion the paper would benefit from stating this more clearly, for example by including a list of the key novel contributions in the introduction.\nAs far as I understand, the various notions of causation (weak sufficiency, direct sufficiency, strong sufficiency, counterfactual dependence, actual causation) are taken from the literature (mostly Beckers, 2021) while the notions of causal explanations derived thereof are new to this paper. This, i.e., going from notions of causation to notions of causal explanations, is a logical step to take. Further novel aspects are the proofs that several distinct notions of causation and explanation collapse under the independence assumption as well as the (brief) application to fairness.\n\n# Significance\n\nAs showcased by the examples, the proposed notion of sufficient explanation does capture important information about how the actual value of the target variable has been obtained or how the desired value can be obtained. 
I can see this being relevant for action-guiding and thus think the discussion is relevant from a general point of view. The significance for machine learning is mostly limited to this conceptual extension of causal terminology, while it is not discussed how this new terminology can be implemented concretely to explain the decisions of machine learning models or how to inform machine learning models by causal knowledge. In this sense one might say that the paper is more about causal reasoning in general rather than about XAI, but this is a matter of perspective I suppose. Also, the demonstration of how restricting the independence assumption is as well as the discussion on fairness in section 6.1 appear relevant to XAI.\n\n\n# Quality\n\nThe paper is of good quality. The discussion and comparison of several notions of what causation can be understood to mean in a structural causal model, with fine but important variations, as well as the associated notions of causal explanations appear quite illuminating to me. Moreover, the propositions and theorems answer questions that naturally come up when reading the definitions and discussion. While I did not check the proofs, the statements seem plausible. There are no obvious errors that I can see.\n\n# Clarity\n\nDue to the level of abstraction and the long list of interdependent definitions with fine variations, the paper is in my point of view a tough read. However, I think this is mostly due to the nature of the topic rather than the clarity of writing and exposition. In fact, the paper follows a clear thread and the author takes care to motivate the definitions and to illustrate some of them with examples. That being said, there are a few details in which I think the discussion could be more clear or precise, see the section \"Other comments\" below.\n\n\n\n# Further comments\n\n* First paragraph of section 3: The term \"target domain\" is somewhat vague, what exactly needs to be known?\n* \"Technically this means that there are only two kinds of endogenous variables...\" in the first paragraph of section 3. Why is this required? Even if I_b depends on I_a and an endogeneous variable V(I_b), Y could still be deterministically determined by \\bold{I}, could it not? I suppose you want that Y does not depend on an endogeneous variable which is not in \\bold{I}.\n* In Definition 3 the statement \"I = i --> Y = y\" is used as a causal formula. I suppose \"\\bold{I} = \\bold{i} --> Y = y\" means that if \\bold{I} = \\bold{i} then Y = y. However, I do not see how this fits into the general definition of causal formulas in section 2.\n* What is the set C in definition 6? Is it C = V \\ (X \\cup Y)? Or is the statement supposed to hold for all or for one C \\subset V \\ (X \\cup Y)?\n* Typo in Theorem 16: There is an additional closing parenthesis in \"W_2 = w_2)\"\n* The fire / burning house example confuses me because I do not see how it makes a case for using actual causation. Do you mean that F = 1 is not actual cause of B = 0 while F = 0 is?\n* What does \"causes\" mean in Definition 25? 
I suppose this refers to actual causation, right?\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to review\nThank you for your overall positive feedback and careful comments!\n\nI am quite surprised though that the reviewer identifies lack of originality as their major problem with the paper, because it is the first to formally define distinct notions of causal explanations and relate them to actual causation, and it is certainly the first bridge the gap (the chasm!) between the literature on actual causation and XAI. (For sake of completeness, I should point out though that since writing the paper, I came across Chris Hitchcock’s paper “Actual causation: what’s the use?” which also addresses the connection between actual causation and action-guidance, and I will cite it. However, its discussion is entirely informal and without much detail, so it’s not a competitor to this paper.) \n\nThe reviewer claims that going from the causal notions to the corresponding explanatory notions is a logical step to take, and I fully agree, which is what makes it all the more surprising that so far nobody had taken this step. (Beckers, for example, does not mention the word explanation once, nor does he talk about action-guiding. Halpern & Pearl do have a paper on explanations which uses their definition of causation, but it doesn’t offer distinct notions of explanation, nor does it connect to XAI in any way.) Moreover, over the past twenty years a huge literature has emerged discussing what definition of actual causation is the right one, with no consensus in sight. By connecting the definition of Beckers to explanations and action-guidance, this paper offers strong and novel support for that definition. This holds in particular because it is not at all clear if and how most other definitions of actual causation can be transformed into a corresponding explanatory notion.\n\nI agree with the reviewer that the novelty of these contributions may be lost if they are not mentioned more explicitly, and thus I will take up their suggestion of listing them in the introduction.\n\nFurther comments:\n- The target domain here simply refers to that part of the world that is being described by the variables and their interaction. I will make this explicit.\n- I am not sure how the reviewer’s question relates to their comment. The idea is simply that the variables are nicely split up into two groups: those which are determined entirely by the other variables in I, and those which are not determined by any of the variables in I and are thus causal ancestors of the variables in I. The reason for this restriction is that the language of causal models (in the notation that I use) does not allow using exogenous variables in causal formulas, and thus we want to make sure that all variables about which we want to make statements are endogenous. \n- Causal formulas are said to be Boolean combinations of atomic formulas, and those include disjunctions. The implication is purely propositional here, and therefore it is equivalent to a disjunction, which is why I slightly abuse notation. \n- That was a bit sloppy… The set C is implicitly defined when mentioning c as belonging to the range of V - (X u Y). I will make this explicit.\n- Thanks for catching the typo.\n- Yes, that is indeed what I mean, and I should mention this explicitly. 
The structure of this example is a common one in the literature on actual causation. \n- Yes. Again being sloppy, as Definition 19 does not use that term… I will simply write actual cause instead.\n", "role": "user" }, { "content": "Thank you for the detailed response from the authors. The authors have answered almost all of my concerns. I stand by my favorable recommendation for this paper.", "role": "assistant" } ]
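An illustrative aside on the exchange above: the sufficiency notions discussed for deterministic structural causal models quantify over all settings of the remaining variables, which can be checked by brute force in small finite models. The sketch below is not code from the paper; the toy model (fire caused by lightning or a lit match), the function names, and the exact condition tested are assumptions chosen only to make that quantification concrete.

```python
from itertools import product

# Toy deterministic SCM over Boolean variables (not taken from the paper):
# exogenous inputs `lightning` and `match`, endogenous `fire` = lightning OR match.
EXOGENOUS = ["lightning", "match"]

def solve(setting):
    """Evaluate the structural equations given values for the exogenous variables."""
    vals = dict(setting)
    vals["fire"] = vals["lightning"] or vals["match"]
    return vals

def is_sufficient(partial_setting, target_var, target_val):
    """A crude reading of (weak) sufficiency: fixing `partial_setting` forces
    target_var = target_val for every configuration of the remaining exogenous
    variables."""
    free = [v for v in EXOGENOUS if v not in partial_setting]
    for combo in product([False, True], repeat=len(free)):
        setting = dict(partial_setting, **dict(zip(free, combo)))
        if solve(setting)[target_var] != target_val:
            return False
    return True

print(is_sufficient({"match": True}, "fire", True))    # True: a lit match suffices for fire
print(is_sufficient({"match": False}, "fire", False))  # False: lightning can still start the fire
```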
s-pcpETLpY
CLeaR 2022 Conference
Reviewer_x5wb
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_x5wb, and the following are the paper you need to review:\n```<<s-pcpETLpY>>```", "role": "user" }, { "content": "Review Content: Summary: review\nMain review: The paper proposes an analysis of learning with fairness constraints in the presence of label noise. The difficulty of the framework is that the authors study the general problem of instance-dependent noise, while previous work focused on simpler notions of noise such as class-conditional noise.\n\nThe authors first study the setup of loss minimization under equalized odds constraints. essentially assuming that the noise model is known. Then the authors study a special case of counterfactual fairness (based on the model in Figure 1 and instanciated on the Adult dataset), and propose a variant of the algorithm of Kusner et al. to perform counterfactually fair prediction when the ``true'' label is not observed.\n\nOverall the paper is well written, even though the assumptions are not entirely clear (see below). The novelty and originality is limited because there is a wide literature on both fair ML with label noise and counterfactual fairness, and the paper stays close to existing approaches. Also, a large part of the paper is devoted to calculations that could be deferred to an Appendix. Experiments are carried out thoroughly with relevant baselines. \n\npros: compared to the counterfactual fairness paper (by Kusner et al.), I like the idea of distinguishing the observed biased label, a ground truth label Y and a counterfactually fair label that is learnt. This allows to avoid a problem of the original paper that it was impossible to evaluate whether the algorithm was close to an underlying truth or not, since it was not clear what the \"ground truth\" was. \n\nsome comments:\n* The value of Section 3 to the paper is unclear to me, since my understanding was that the intention was to treat the case where the noise model is unknown. \n* It would be better to spell out from the start of section 3 that the assumption is that the transition matrices are unknown\n* In Eq 7 I don't understand what T_{-1} is. 
The dependency of T on X seems to have magically disappeared, and I do not see how that is possible without making additional assumptions (e.g., that the noise only depends on the protected group, but not on arbitrary features of X\n* The model used in the section on counterfactual fairness is clear but not clearly motivated. Since U only depends on Z, I don't understand the value of having functions of U for W and Y -> why not simply functions of Z directly? \n* the motivation underlying mixup in that setting is not clear to me, in particular it is introduced when \"f is miled by tilde{Y}\" (what does \"mislead means\") and then the justification is that \"mixup increases the sample complexity of the training set\" (what is the sample complexity of a dataset?)\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer x5wb\nThank you for your time and efforts in reviewing our paper. We would like to address your concerns as follows.\n\n\n## The value of Section 3\n\nPlease allow me to first elaborate on the logical structure of this paper. \n1. First, taking fairness and label noise issues into consideration is significant in the era of big data. \n2. Second, we notice that noisy labels, unfortunately, make original fairness-aware algorithms even more prejudiced than fairness-unaware ones, and thereby harmful. \n3. Fairness notions can be generally divided into statistical fairness notions and causality-based fairness notions.\n* For statistical fairness notions, since they share a general formulation, we can provide a general solution to them, i.e., we rewrite the classification risk and the fairness metric in terms of noisy data and thereby build robust classifiers.\n* For causality-based fairness notions, the formulations vary a lot, so we only show how to design a robust and counterfactually fair classifier.\n4. At last, we conclude our paper.\n\nOverall, our work intended to provide general frameworks for learning (statistically or causally) fair classifiers with instance-dependent label noise. Therefore, **Section 3 plays an important role to show how to design robust classifiers with statistical fairness notions.**\n\n\n## Transition matrix\nWe didn’t assume that the transition matrices are known or unknown. The transition matrix is a theoretical concept that builds the bridge between clean class posterior and noisy class posterior. In the main part of Section 3, we just **theoretically analyzed** how to build robust classifiers with it. As for how to **acquire it**, we stated that in practical implementation, we can estimate it if it is not given at the end of Section 3.\n\n## Dropping $x$ in $T$\nThanks for your valuable feedback about this point. Note that equalized odds is a group fairness notion and the term in Eq. (7) we want to rewrite does not involve $x$. Therefore, we have to consider the transition relations at group level. Basically, there are two ways to achieve this:\n1. 
We can straightforwardly take the integral of $ P(X=x, \\hat{Y}=1 \\mid A=a, Y=1) $, which involves $x$, over x:\n\\begin{equation}\n \\begin{aligned} \n&P(\\hat{Y}=1 \\mid A=a, Y=1)\\\\\\\\\n&= \\int P(X=x, \\hat{Y}=1 \\mid A=a, Y=1) \\mathrm{~d} x \\\\\\\\\n&= \\int \\frac{P(X=x, \\hat{Y}=1, A=a, Y=1)}{P(A=a, Y=1)} d x \\\\\\\\\n&= \\int \\frac{P(X=x, \\hat{Y}=1, A=a, Y=1)P(X=x, A=a, \\hat{Y}=1)}{P(A=a, Y=1)P(X=x, A=a, \\hat{Y}=1)} d x \\\\\\\\\n&= \\int \\frac{P(Y=1 \\mid X=x, A=a, \\hat{Y}=1)P(X=x, A=a, \\hat{Y}=1)}{P(A=a, Y=1)} d x \\\\\\\\\n&= \\int \\frac{\\left(P(\\tilde{Y}=1 \\mid X=x, A=a, \\hat{Y}=1) - T_{-1}(x)\\right) P(X=x, A=a, \\hat{Y}=1)}{\\left(1-T_{-1}(x)-T_{+1}(x)\\right) P(A=a, Y=1)}d x,\n \\end{aligned}\n\\end{equation}\n$ P(A=a, Y=1) $ can be further rewritten as:\n\\begin{equation}\n \\begin{aligned} \n &P(A=a, Y=1)\\\\\\\\\n &= \\int P(X=x, A=a, Y=1) d x \\\\\\\\\n &= \\int P(Y=1 \\mid X=x, A=a) P(X=x, A=a) d x \\\\\\\\\n &= \\int \\frac{\\left(P(\\tilde{Y}=1 \\mid X=x, A=a) - T_{-1}(x)\\right) P(X=x, A=a)}{\\left(1-T_{-1}(x)-T_{+1}(x)\\right)} d x\n \\end{aligned}\n\\end{equation}\nThe first way is a little bit complex, and thus we consider the average transition relations\n\n2. We define the average transition relations as $ T_{-1} = P(\\tilde{Y}=+1 \\mid Y=-1)$. Then we have:\n\\begin{equation}\n \\begin{aligned} \n T_{-1} &=P(\\tilde{Y}=+1 \\mid Y=-1)\\\\\\\\\n % &= \\frac{P(\\tilde{Y}=+1, Y=-1)}{P(Y=-1)} \\\\\\\\\n &= \\frac{\\int P(X=x, \\tilde{Y}=+1, Y=-1)d x}{\\int P(X=x, Y=-1)d x} \\\\\\\\\n &= \\frac{\\int P(\\tilde{Y}=+1 \\mid X=x, Y=-1) P(Y=-1 \\mid X=x) P(x) d x}{\\int P(Y=-1 \\mid X=x) P(x) d x} \\\\\\\\\n &= \\frac{\\int T_{-1}(x) P(Y=-1 \\mid X=x) P(x) d x}{\\int P(Y=-1 \\mid X=x) P(x) d x}, \\\\\\\\\n \\end{aligned}\n\\end{equation}\n where $P(Y=-1 \\mid X=x) = \\frac{P(\\tilde{Y}=y \\mid X=x) - T_{-y}(x)}{\\left(1-T_{-1}(x)-T_{+1}(x)\\right) P(\\tilde{Y}=y \\mid X=x)}$\n\nThus, we do not need additional assumptions to drop $X$ in $T$. We are sorry we did not make it clear in the paper and we will elaborate it in our final version. Thanks again for your constructive suggestion!\n## The model used in the section on counterfactual fairness\n\nAccording to the causal graph, U depends on Z, W, and Y. \nIn the decoding phase, since U is a parent of Z, W, and Y, U should be in the functions of Z, W, and Y of the causal decomposition Eq. (10). \nIn the encoding phase, although U depends on Z, W, and Y, we infer U only from Z because (1) Y is unknown; (2) W is the descendant of A, which cannot be used in the decision function.\n\n## Mixup\nThe problem here is that the counterfactually fair prediction Y is a latent intermediate variable, which means the map between Y (cluster id) and the semantic class label is lost. To map the prediction Y to the semantic class label, and also keep prediction Y from overfitting to noisy label (the meaning of ‘misled’), we employ Mixup to learn this map.\n\nWe misused the *sample complexity*. We will change it to *complexity*. Thanks for your valuable feedback.", "role": "user" }, { "content": "I acknowledge the authors' response. They have addressed most of the issues I raised, and I am considering revising the rating.", "role": "assistant" } ]
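An illustrative aside on the transition-matrix algebra in the rebuttal above: for binary labels, the clean posterior can be recovered from the noisy one by inverting the flip model, and the averaged rate T_{-1} = P(Ytilde = +1 | Y = -1) is a weighted average of T_{-1}(x) with weights proportional to P(Y = -1 | x). The sketch below is a minimal plug-in rendering of those two identities under the assumption that the instance-dependent flip rates are known (the paper treats them as estimable); the function names and the synthetic sanity check are mine, not the authors' implementation.

```python
import numpy as np

def clean_posterior(noisy_pos, t_neg, t_pos):
    """Invert the binary noise model
    P(Ytilde=+1|x) = (1 - T_{+1}(x)) P(Y=+1|x) + T_{-1}(x) P(Y=-1|x),
    which gives P(Y=+1|x) = (P(Ytilde=+1|x) - T_{-1}(x)) / (1 - T_{-1}(x) - T_{+1}(x)).
    All inputs are arrays evaluated at the sample points x."""
    return np.clip((noisy_pos - t_neg) / (1.0 - t_neg - t_pos), 0.0, 1.0)

def average_flip_rate(noisy_pos, t_neg, t_pos):
    """Plug-in estimate of T_{-1} = P(Ytilde=+1 | Y=-1): a weighted average of
    T_{-1}(x) with weights proportional to P(Y=-1|x), mirroring the integral
    expression in the rebuttal (Monte Carlo average over samples of x)."""
    p_clean_neg = 1.0 - clean_posterior(noisy_pos, t_neg, t_pos)
    return np.sum(t_neg * p_clean_neg) / np.sum(p_clean_neg)

# Tiny synthetic check with known instance-dependent flip rates.
rng = np.random.default_rng(0)
n = 1000
t_neg = rng.uniform(0.0, 0.2, size=n)     # T_{-1}(x)
t_pos = rng.uniform(0.0, 0.2, size=n)     # T_{+1}(x)
p_pos = rng.uniform(0.0, 1.0, size=n)     # true P(Y=+1|x)
noisy_pos = (1 - t_pos) * p_pos + t_neg * (1 - p_pos)   # implied P(Ytilde=+1|x)

print(np.allclose(clean_posterior(noisy_pos, t_neg, t_pos), p_pos))  # True
print(average_flip_rate(noisy_pos, t_neg, t_pos))
```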
s-pcpETLpY
CLeaR 2022 Conference
Reviewer_uZWX
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_uZWX, and the following are the paper you need to review:\n```<<s-pcpETLpY>>```", "role": "user" }, { "content": "Review Content: Summary: This paper proposes a novel problem that learning a fair classifier with instance-dependent label noise. The authors provide a general framework to deal with this problem for both statistical fairness notions and causality-based fairness notions. Experimental results show the effectiveness of the proposed methods.\nMain review: Pros:\n1. This paper is well motivated. Learning a fair classifier with instance-dependent label noise is a novel, interesting and significant problem. \n2. This paper is presented clearly and organized logically.\n3. This paper proposes a general framework and considers fairness including both statistical fairness notions and causality-based fairness notions. In addition, the methods are well supported by theoretic analysis.\n4. Extensive experimental results are reported on two real datasets and compared with six representative methods.\n\nCons:\n1. My main concern is the rationality of the constructed causal graph. Although the authors take ADULT dataset as an example to demonstrate the proposed causal graph interprets the data, it is uncertain whether the construction of the causal graph is complete. For example, what happens when the parents of A exist?\n2. Table 1 shows that the label noise makes fairness-aware algorithms even more prejudiced than fairness-unaware ones, and the instance-dependent label noise is a more challenging setting compared with class-dependent label noise. It would be better if the authors can provide a deeper analysis of this phenomenon.\n3. It would be better to add some discussion about the relationship and distinction between this paper and related work.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer uZWX\nThank you for your time and efforts in reviewing our paper. 
We would like to address your concerns as follows.\n\n## The completeness of causal graph\nSpecifically, in our method, A is only used to reconstruct W and $\\tilde{Y}$, i.e., $P(W \\mid A \\cdots)$, $P(\\tilde{Y} \\mid A \\cdots)$. When the parents of A exist, conditioned on A, W and $\\tilde{Y}$ (the descendants of A) are independent on the parents of A. Therefore, the existence of parents of A does not affect our method. \n\nGenerally. since the true causal graph of a specific dataset is always agnostic, we can only postulate or approximately discover the causal graph, both of which cannot guarantee the completeness of the causal graph. \n\n\n## Analysis of the effects of label noise on fairness-aware algorithms\nThanks for your valuable feedback. For UC, the method which optimizes the training loss unconstrainedly, the classifier is misled to fit the noisy data. For fairness-aware algorithms with general formulation as follows:\n\\begin{equation}\n\\begin{aligned}\n &\\text{minimize} \\quad \\sum_{n=1}^{N}L(f(x_n), y_n) \\\\\\\\\n &\\text{subject to} \\quad \\text{Fair}(X, Y, f) = 0.\n\\end{aligned}\n\\end{equation}\nThe classifier is misled by both classification loss and wrong fair metric. With different fair metrics, the classifier is misled to different degrees. Therefore, the performance of fairness-aware algorithms has no guarantee and could be worse than fairness-unaware ones.\n\n\n## Related works\nThanks for your constructive suggestion! As Reviewer 2xWR agreed, this is a new problem. Thus, to the best of my knowledge, there is no work that is related enough to be discussed and distinguished. We would definitely appreciate it if you could bring up some similar works.\n", "role": "user" }, { "content": "Thanks for the authors' response. It satisfactorily addresses the major issues I had. I keep my recommendation for this paper to be accepted.", "role": "assistant" } ]
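As a hypothetical illustration of the constrained formulation quoted in the rebuttal above (minimize the classification loss subject to Fair(X, Y, f) = 0), the constraint is often relaxed into a penalty during training. The sketch below uses a demographic-parity-style score gap as that penalty; the penalty weight, the choice of statistic, and the linear model are my assumptions, and this is not the authors' algorithm. With noisy labels, the loss term (and any label-dependent fairness metric such as equalized odds) is computed against corrupted targets, which is the failure mode the rebuttal describes.

```python
import torch

def fairness_penalty(scores, group):
    """Demographic-parity-style gap: difference in mean predicted score
    between the two sensitive groups (one common surrogate for Fair(X, Y, f))."""
    return (scores[group == 1].mean() - scores[group == 0].mean()).abs()

def train_fair_classifier(x, y, group, lam=1.0, steps=500, lr=0.1):
    """Minimize BCE(f(x), y) + lam * penalty, a Lagrangian-style relaxation of
    'minimize loss subject to Fair(...) = 0', with a linear logistic model."""
    w = torch.zeros(x.shape[1], requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    opt = torch.optim.SGD([w, b], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        scores = torch.sigmoid(x @ w + b)
        loss = torch.nn.functional.binary_cross_entropy(scores, y)
        loss = loss + lam * fairness_penalty(scores, group)
        loss.backward()
        opt.step()
    return w.detach(), b.detach()

# Example usage with synthetic data (both groups must be present):
# x = torch.randn(200, 5); y = torch.randint(0, 2, (200,)).float()
# group = torch.randint(0, 2, (200,))
# w, b = train_fair_classifier(x, y, group)
```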
s-pcpETLpY
CLeaR 2022 Conference
Reviewer_2xWR
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_2xWR, and the following are the paper you need to review:\n```<<s-pcpETLpY>>```", "role": "user" }, { "content": "Review Content: Summary: The paper considered an important machine learning direction, the fairness issue, in the context when there is labeling noise in the outcome. The paper showed the existing method may suffer from the existence of certain types of label noises and therefore the authors developed robust method for both statistical fairness and counterfactual fairness to account for those. The proposed methods are considered together with other state-of-the-art in two real datasets.\nMain review: Originality: The paper considered an interesting situation in which there are label noises on the outcome which renders most state of the art fairness-aware methods collapsing. This is a new problem to consider in the fairness literature and the authors provided an initial attempt to solve the issue.\n\nSignificance: The paper addressed an issue in the fairness background which is clearly relevant to the CLeaR community. The method did not significantly advance the state-of-the-art but rather I feel it is an immature attempt to solve the problem, but the results in the paper indeed provided new insights to the research problem.\n\nTechnical quality: The proposed approach appears reasonable and has been substantiated by empirical results but not theoretical results.\n\nClarity: The draft is badly written, at least in the current forms, especially the experiments section. Table 2 and table 4 have exactly the same caption, which did not provide enough information for the readers. I also spent some time finding where the R-p-fair is defined. I believe the draft needs rewriting, otherwise, as a reader, I have to take a lot of guesses.\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to Reviewer 2xWR\nThank you for your time and efforts in reviewing our paper. 
We would like to address your concerns as follows.\n\nOriginality: Thank you for your recognition of our originality.\n\nSignificance: Our method consistently surpasses all the baselines significantly. We are not sure which state-of-the-art method that our method did not significantly advance. Could you name one and we would love to conduct additional experiments for it?\n\nTechnical quality: There are mainly two parts of the theoretical analysis of our paper. First, we theoretically analyzed how to build robust and fair classifiers for statistical fairness notions. Second, the classifier we build in Section 4 is theoretically guaranteed to be counterfactually fair.\n\nClarity: Thanks for your constructive suggestions. We will revise them in our final version.\n", "role": "user" }, { "content": "I'm grateful for the authors' response. The response addresses most of my concerns.", "role": "assistant" } ]
sKaTh2SZ4xg
CLeaR 2022 Conference
Reviewer_JALQ
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_JALQ, and the following are the paper you need to review:\n```<<sKaTh2SZ4xg>>```", "role": "user" }, { "content": "Review Content: Summary: This paper proposes an extension of the synthetic intervention estimator to handle causal imputation. \nMain review: Originality and Significance:\nPrior work on synthetic interventions estimates the \"target\" effect by constructing the synthetic version of the target context using a weighted combination of donor contexts. This paper further extends this idea also on the action side, by constructing the target action as a weighted combination of donor actions and propose a new estimator called SI-A, which has strong theoretical property and yields stronger empirical performance.\n\nClarity:\nThis paper is well-written and easy to follow.\n\nQuality:\nThe method is well-illustrated, and it would be great to include some motivations for extending the SI idea on the action part. Theoretically, the method is justified under a factor model. Empirically, it shows stronger performance than the original SI estimator. Some minor comments here: is it possible to characterize/quantify the improvements of SI-A over SI? or under which regimes, SI-A is preferable than SI?\nRating: 6 marginally above the acceptance threshold\nConfidence: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\n", "role": "assistant" }, { "content": "Title: Response to JALQ\nThanks for the positive feedback! It may be difficult to explicitly quantify the improvement of SI-A over SI in general settings, but we would like to emphasize that Figure 5(b) demonstrates a way to empirically discern which method is more fitting.", "role": "user" }, { "content": "I appreciate the authors' reply. The reply answers almost all of my questions. I maintain my positive recommendation for this work.", "role": "assistant" } ]
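For context on the construction summarized in the review above, the synthetic-interventions idea expresses the target context as a weighted combination of donor contexts, with weights learned from actions observed in both. The snippet below is a deliberately simplified scalar-outcome sketch using ordinary least squares in place of the regularized (principal-component) regression typically used; the argument names and shapes are illustrative assumptions rather than the paper's interface.

```python
import numpy as np

def synthetic_intervention(target_on_shared, donors_on_shared, donors_on_new):
    """Impute the target context's outcome under an unobserved action.
    target_on_shared: (k,)    target context's outcomes for k shared actions
    donors_on_shared: (k, m)  m donor contexts' outcomes for those same actions
    donors_on_new:    (m,)    donor contexts' outcomes under the new action
    """
    # Learn weights expressing the target as a combination of donors ...
    weights, *_ = np.linalg.lstsq(donors_on_shared, target_on_shared, rcond=None)
    # ... and transfer those weights to the new action.
    return donors_on_new @ weights

# Tiny synthetic check: outcomes follow a rank-2 factor model, so the
# imputation recovers the held-out entry up to numerical error.
rng = np.random.default_rng(0)
contexts, actions = rng.normal(size=(4, 2)), rng.normal(size=(2, 6))
outcomes = contexts @ actions              # rows: contexts, columns: actions
est = synthetic_intervention(outcomes[0, :5], outcomes[1:, :5].T, outcomes[1:, 5])
print(est, outcomes[0, 5])                 # estimate vs. held-out ground truth
```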
sKaTh2SZ4xg
CLeaR 2022 Conference
Reviewer_LHjH
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_LHjH, and the following are the paper you need to review:\n```<<sKaTh2SZ4xg>>```", "role": "user" }, { "content": "Review Content: Summary: Review of Paper#108\nMain review: This paper studies a specific identification task where the unobserved variable is a set of potential outcomes $X^{c,a}$ where C represents a context and A means an action. For each instance C = c, A = a, only a corrupt reading $\\tilde{X}^{c, a} = X^{c, a} + e^{c, a}$ where $e^{c, a}$ is an arbitrary measurement noise. The goal is to recover the actual $X^{c, a}$ from the corrupt observations $\\tilde{X}^{c, a}$. The authors propose an algorithm to solve this task.\n\nOverall, I found this paper quite confusing. Several notations and technical concepts do not seem to be well-defined. I will summarize my questions below.\n\n>\"Observations set as $\\mathcal{O} = (\\tilde{x}^{c, a}: (c, a) \\in \\Omega )$\"\n\nDo the authors imply that the learner could only obtain a single observation for each context-acton pair (c, a)? If this is the case, it does not seem feasible to recover the actual value $x^{c, a}$ if the measurement noise $e^{c, a}\\neq 0$. \n\n> \"$e^{ca}$ denotes measurement noise\"\n\nSuppose the leaner could obtain multiple observations for every $c, a$. The problem setting still seems ill-posed. For instance, let $P(e^{c, a} = 0) = P(e^{c, a} = 1) = 0.5$ and $\\tilde{x}^{c, a} = x^{c, a} \\oplus e^{c, a}$. In such setting, it is unclear if there exists any method to recover the actual $x^{c, a}$. Do the authors assume additional assumptions about the noise $e^{c, a}$, e.g., it follows a normal distirbution? Please elaborate.\n\n> \"Definition 2 A (noiseless) linear structural causal model ...\"\n\nThis brings me to Definition 2 where the authors seem to derive the identifiability result based on a fully deterministic linear SCM. That is, the measure noise $e^{c, a} = 0$. Am I missing some nuances here? If this is true, this identification condition (noiseless) is too restrictive to be practical. It is hard to imagine a real-life system where uncertainty does not exist.\n\n> \"Assumption 3\"\n\nThe linear equation in Eq. (1) seems curious. 
It reads \n$$\nx^{ca} = A^c x^{ca} + B^c v^a.\n$$\nThat is, the underlying SCM is cyclic, while the diagram in Figure 2 seems to suggest it being an acyclic model. At the first, I thought it was a typo, but the proof for Proposition 1 seems to suggest otherwise.\nRating: 5 marginally below the acceptance threshold\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to LHjH\n\nThanks for the helpful comments!\n\n## Measurement noise and recovery of $x^{ca}$\n\nYou are correct that some requirements are needed on the measurement noise. \n\nSuch assumptions could take one of two forms. \n\n(1) If multiple observations $\\tilde{x}^{ca}$ are available for each $(c, a)$, then we can average these values to obtain a single vector. Under very weak assumptions (e.g., integrability of $e^{ca}$ in the additive noise setting), these averages are consistent estimates of their mean, $x^{ca}$.\n\n(2) if only a single observation is available for each $(c, a)$ we may use a matrix estimation method for denoising these observations, as we suggested in Section 4. There is an extensive literature on matrix denoising that can provide conditions under which consistent denoising is possible.\n\nWe give our identifiability result in the noiseless case to abstract away the particular details of how consistent estimates of $x^{ca}$ for $(c, a) \\in \\Omega$ are obtained. We will add a remark to make it more clear why we give our identifiability result in the noiseless setting. \n\n\n## Acyclicity of the model\n\nEquation (1) does not imply that the underlying SCM is cyclic, and the proof of Proposition 1 does not suggest that it is cyclic. In fact, Proposition 1 explicitly uses the fact that the underlying SCM is acyclic, and this is explicitly stated. The form of Equation (1) is a typical way of writing linear structural causal models, see for instance Section 2 of “Direct Estimation of Differences in Causal Graphs” (Wang et al., 2018).", "role": "user" }, { "content": "Thanks to the authors for their clarification. It covers most of my concerns.", "role": "assistant" } ]
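A small numerical aside on Equation (1) as discussed above: with an acyclic A^c (strictly lower triangular after topological ordering, hence nilpotent), the system x^{ca} = A^c x^{ca} + B^c v^a has the unique solution x^{ca} = (I - A^c)^{-1} B^c v^a, so the equation is a fixed-point notation rather than a cyclic model. The snippet below checks this equivalence on random toy parameters of my own choosing; it is not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(1)
p, d = 5, 3          # number of observed features, dimension of the action vector

# A strictly lower-triangular A encodes an acyclic linear SCM over the features.
A = np.tril(rng.normal(size=(p, p)), k=-1)
B = rng.normal(size=(p, d))
v = rng.normal(size=d)          # action-specific vector v^a

# Unique fixed point of x = A x + B v; (I - A) is unit-triangular, hence invertible.
x_closed_form = np.linalg.solve(np.eye(p) - A, B @ v)

# Equivalent view: iterate the structural equations; p steps suffice since A^p = 0.
x_iter = np.zeros(p)
for _ in range(p):
    x_iter = A @ x_iter + B @ v

print(np.allclose(x_closed_form, x_iter))   # True
```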
sKaTh2SZ4xg
CLeaR 2022 Conference
Reviewer_mRBv
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_mRBv, and the following are the paper you need to review:\n```<<sKaTh2SZ4xg>>```", "role": "user" }, { "content": "Review Content: Summary: The authors propose causal imputation, a method to infer the effect of actions in various contexts, when only a subset of all context-action-pairs are observed. The method is motivated by prediction of the effect of chemical compounds (actions) in cells (contexts). Prediction is performed by learning regression coefficients in a set of training contexts, where the same `donor actions’ are observed as in the target context c. Assuming that there is no noise, and that the data obeys a latent factor representation, the authors show identifiability under rank assumptions. Additionally, they connect this latent factor model to structural equation models. The paper evaluates performance in a gene expression data set, and obtains better performance than competing methods. \nMain review: #### __Conclusion__\nThe solution proposed in this paper is to the best of my knowledge original. The paper outlines a scaleable approach for imputing datasets, and although the paper would be strengthened by a theoretical analysis of the behaviour in the presence of noise, the contributions of the paper are significant. In addition to this, the paper is well-written and well-motivated, and for those reasons, I believe it would be suited for publication at CLeaR. \n\n#### __Strengths__\n1. Clarity. The paper is well written and presented, and is the method is well illustrated by the motivating example. \n2. Application. The paper proposes a framework that addresses a very concrete problem, and demonstrates real-world applicability by superior performance in the CMAP gene data set. \n\n#### __Weaknesses__\n3. Weak theoretical results. Under the latent-variable model and rank assumptions, the identifiability result is hardly surprising for the noiseless case. The paper would have been strengthened by an analysis of the noisy case. \n\n#### __Minor comments__\n4. I do not fully understand the ‘selection diagram’ that are e.g. used in Table 1. Could you be more elaborate on how this relates to a causal model?\n5. In `Additional Notations’ on p. 5, you write $\\tilde X_{\\text{train,donor}}$ twice. 
I believe the first should be $\\tilde X_{\\text{train,target}}$. Also in Assumption 2, you state that $x_{\\text{train}}, x_{\\text{test}}$ are defined in Section 4, but they are not. \n6. In Assumption 3, one could misunderstand that this is the SCM for only for a particular action $a$. Could you be more elaborate on the assumption that $x^{ca}$ follows this SCM for all actions $a$ (so in particular $A^c, B^c$ are shared across all those SCMs). \n7. Errors in the references: Agarwal (2020a, 2020b) need references to ArXiV and NeurIPS. And Chatterjee (2015a, 2015b) are the same reference.\nRating: 8 accept, good paper\nConfidence: The reviewer is fairly confident that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Response to mRBv\nThank you for your positive remarks! We are glad that you found the writing and presentation clear and the method applicable. We agree that further analysis of the noisy case would make the paper even stronger, but we believe that our method, our initial identifiability result, and our extensive application to the CMAP dataset constitute significant contributions.\n\n## Minor comments\n\n4. Please refer to e.g. “Generalized Transportability: Synthesis of Experiments from Heterogeneous Domains” (Lee et al., 2020) for a recent exposition on selection diagrams. Briefly, selection diagrams are simply causal models which record how mechanisms (i.e., conditional distributions of variables given their parents) change between different contexts, by including additional nodes to represent the contexts.\n\n5. Thank you for the catch, you are right, the first occurrence of $\\tilde{X}_{train,donor}$ should be changed, and the quantities in Assumption 2 should have the additional subscript “donor”.\n\n6. Thanks for pointing out this potential source of confusion regarding Assumption 3. We will clarify that there exists $A^c, B^c$ for each cell type $c$ and $v^a$ for each action $a$ such that for all $(c, a)$, $x^{ca} = A^c x^{ca} + B^c v^a$\n\n7. We will fix these references.\n", "role": "user" }, { "content": "Title: Response to the authors\nI thank the authors for their response and their clarifications. \nRegarding 4., it would be great if the authors could add this brief elaboration to the paper. ", "role": "assistant" } ]
xtYuTyZY1FJ
CLeaR 2022 Conference
Reviewer_5AfY
[ { "content": "\nYou are an experienced academic paper reviewer. You will receive a conference paper in the field of computer science or AI. Please analyze the paper content carefully and output your review content.\nMore importantly, the author will further explain or refute the review comments you gave, and you need to further discuss with the author based on the author's response. \nThe purpose of this rebuttal and discussion is:\n- Clarification: During the review process, the reviewer (you) may not accurately understand the contribution or method of the paper due to unclear expression or misunderstanding of certain details. Through rebuttal, the author has the opportunity to clarify these misunderstandings and further illustrate the effectiveness of their innovations or methods.\n- Respond to criticisms and suggestions: The reviewer (you) may make criticisms or suggestions on the paper, and the author can use the discussion session to provide specific responses, explain why certain criticisms may not apply, or how to improve and optimize certain aspects of the paper.\nFinally, you and the author will work together to enhance the persuasiveness of the paper and improve the quality of the paper.\n", "role": "system" }, { "content": "Now your reviewer ID is assigned as Reviewer_5AfY, and the following are the paper you need to review:\n```<<xtYuTyZY1FJ>>```", "role": "user" }, { "content": "Review Content: Summary: An interesting problem formulation and proposed method for Granger causality \nMain review: # Summary \nThe authors aim at solving an interesting problem for Granger causality, i.e., for different objects/environments/specific applications, the causal graphs can be different but the causal relationships/functional relationships/dynamics are in the same form, then how to infer the causal structures for different cases while utilizing the shared dynamics. \n\nThey propose a probabilistic implementation for modeling the graph structure with an encoder and the shared dynamics with a decoder.\nThe experiments are based on simple physical synthetic data and investigate the influence of the unknown confounding, especially, the authors propose a solution for the case with unobserved time-series data.\n\n# Pros\n(A) **An interesting view to see the problem.** \nThere are works in causal discovery focusing on solving the problem that the causal graph is shared in different environments while the causal relationships are in different forms. It is also important to consider the scenario where the causal graph is not shared. \nAlthough the ideal case would be that both of the causal graphs and the causal relationships are not fully shared, the work indeed provides a great value for the further development of causal discovery.\n\n(B) **A well-written paper.** \nThe paper is well written which makes the idea and method easy to be understood. Especially, first, introduce the model and high-level idea and then introduce the probabilistic implementation. \n\n(C) **Investigation for the presence of unknown confounding.** \nThe work also provides an extension for the case where there are unobserved time series. The solution is straightforward, makes sense, and has reasonable results.\n\n\n# Concerns / Questions \n(1) In the preliminaries, after a while, I understood what does the sample mean. I would suggest elaborating the concept sample with examples or further explanation. Different samples seem to be corresponding to different applications which can have different causal graphs.\n\n(2) Eqn. 
(8) doesn't seem to be consistent with Eqn. (13-14). Because the decoder seems to output the next time value $x_{t+1}$ based on the current time value $x_t$, but Eqn. (8) is only conditioning on $z$ without indicating the actual process.\n\n(3) Eqn. (13) use \"hard code\" to model $x_j^t$. It means that it must be the case $x_j^t$ is the direct cause of $x_j^{t+1}$; however, it may not always be the case. What if $x_j^t$ is not the direct cause?\n\n(4) One minor concern: I think it would be interesting to see how the model performs on the data generated with (non-)linear autoregressive models (which are used more often in causal discovery) instead of physical synthetic data.\n\n(5) It remains unclear for me how flexible is the model with its decoder (12-14) or how rich is the functional class? \nIt seems like modelling the function with vector as input by assuming something which can be simplified as $f([x_1, ..., x_i, ... x_N]) = f_v(\\sum_{i=1}^N f_e^i(x_i))$.\nRating: 7 accept, but needs minor improvements\nConfidence: The reviewer is confident but not absolutely certain that the evaluation is correct\n", "role": "assistant" }, { "content": "Title: Reply to Reviewer 5AfY\nWe thank the reviewer for their overall positive feedback, and would like to use this rebuttal to reply to their raised concerns:\n\n(1) We will include an example for what samples can represent in the preliminaries section for the camera-ready version. More concretely, we will describe that the $N$ time-series within sample $\\mathbf{x}_s$ may represent the recording of the spiking behaviour of $N$ neurons. Then, the $S$ samples within a dataset $\\mathbf{X} = \\\\{ \\mathbf{x}_s \\\\}_\\{s=1\\}^S$ may represent the recordings of the spiking behaviour across different neural sub-populations. As a result, the dynamics of how the time-series influence one another, i.e. how neurons cause one another to spike, stays the same across the samples, but the underlying causal graphs differ, since we look at different sub-populations of neurons.\n\n(2) We can make eq. (8) and eq. (13-14) consistent with one another by using the chain-rule of probabilities, i.e. by writing\n$p(x|z) = p(x^0|z) * p(x^1|x^0, z) * p(x^2|x^1, z) * … = p(x^0|z) * \\prod_{t=1}^T p(x^t|x^{t-1}, z)$. Then, $p(x^t|x^{t-1}, z)$ corresponds to the terms used in eq. (13-14) and we ignore the $p(x^0|z)$ term in our modelling.\n\n(3) This is correct - we use the assumption that the current step of a time series is always a direct cause of its next step. This makes modelling easier in our case since the systems we consider contain items which maintain individual state between time steps (e.g. physical systems). We note that in evaluation, we do not include the assumed self-edges in AUROC estimation. This assumption could be removed within our implementation by removing all instances of $x^t_j$ from the right hand side in Eq (13).\n\n(4) In terms of more standard causal discovery (CD) datasets, we note that we include experiments on the Netsim dataset, which has been explored in the CD literature. It contains one shared graph between all samples and is particularly challenging due to inherent noise. We show that here, despite relatively few training samples (~50) and a single underlying causal graph, ACD can match or improve on the performance of many baselines. 
Otherwise, we show that on more challenging simulations, ACD can make large gains over previous CD baselines.\n\n(5) We are not quite sure we understand the question the reviewer has around the flexibility of the model - some clarification would be appreciated. We note that if we adjusted the decoder to the simplification (we think) the reviewer is proposing, we would lose the causal edge structure provided by the $z_{ij}$ edge variables. As discussed at the end of Sec 3.1, using these variables in the decoder is essential for establishing the correspondence between the output of the encoder and (Granger) causal relationships.\n\n\n\n", "role": "user" }, { "content": "Title: Thanks for the response and further discussion about the flexibility of the decoder\nI would thank the authors for the effort in the response. \n(1) It would be very good to include this example; (2) Yes, the factorization fixed the ambiguity issue very well; (3) It sounds pretty good and it could be beneficial for the readers if including such discussion somewhere in the paper; (4) I think such experimental results and discussion would be helpful and useful for the causal discovery field as well. (5) I will elaborate on it in another comment window.\nAs shown in Eqn. (12-14), let's put aside the probabilistic formulation, for now, Eqn. (12-14) are used for modeling the aggregation/pooling process which collects the information at time $t$ and use the {$x_1^t,...,x_N^t$} for predicting $x_j^{t+1}$. If we write it in the form of function, it will be $x_j^{t+1} = f(x_1^t,...,x_N^t)$. And Eqn. (12-14) is one specific way (or a model choice) to implement such a process. I was trying to summarize some characteristics of it in the review, but essentially it takes the sum form for each variable pair in Eqn. (12) and the sum of $h$ in Eqn. (13).\n\nThen the question is that how flexible is the designed pooling/aggregation process with Eqn. (12-14) or how it can be justified to be a suitable/proper one in general?\nIt seems that (3) can be a reason to consider such question/point, e.g., why it requires an explicit modeling choice for the self-edges.\n\n\n", "role": "assistant" }, { "content": "Title: Response on decoder structure\nWe would like to thank the reviewer for their fast reply and the clarification. \n\nIndeed, equations (12-14) describe one specific way of implementing a decoding function $x_j^{t+1} = f(x_1^t, ..., x_N^t, z)$, but we believe that it is almost the most general one we could apply here. \n\nOverall, these equations follow the setup of a general graph neural network (GNN). Equation 12 describes an edge-propagation step, which allows different time-series to exchange information with one another and pushes the network to learn about the (causal) interactions between them. Equation 13 implements a node-propagation step, in which all incoming messages to a node are accumulated to determine the overall influence of other time-series on the current one. Both of these functions could be implemented with a wide range of neural networks. For example, we could replace the MLPs used in our experiments by recurrent neural networks which would enable the model to learn about causal relations of higher Markov order (i.e. with lag) as well.\n\nEquations (12,13) contain three restrictions to this very general GNN setup:\n\n1) As pointed out by the reviewer, in equation 13 we assume that the current step of a time-series is always a direct cause of its next step. 
As explained before, this assumption could be removed easily and is solely made to make training easier (in a similar way as residual connections do in other networks).\n\n2) In Equation 12, we ensure that time-series can only propagate information along their edge, if this edge was predicted to exist by the encoder (i.e. if $z_{ij,e} > 0$ for some $e>0$). This restriction is a hard requirement for our results to be interpretable as Granger causal relations, but does not actually restrict the flexibility of the decoder.\n\n3) In our decoder, we apply only a single node-edge propagation step, while in a general GNN one might employ several of these steps (essentially repeating equations (12,13)) to allow for higher-order interactions between nodes to be modeled. We believe that it might be possible to extend our decoder accordingly, but that similar effects might be achievable more easily in our setting with time-series by using recurrent neural networks.\n", "role": "user" }, { "content": "Title: Thanks for the discussion\nI would thank the authors for the discussion. After reading the other reviews and authors' responses, I believe the submission can be a good fit for the conference and bring certain interesting points after modifying the paper respectively. So I would suggest an acceptance.", "role": "assistant" } ]
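The discussion above walks through the decoder's edge- and node-propagation steps (Eqs. 12-14): pairwise messages are computed only along edges the encoder predicts to exist, summed at each receiver, and combined with the receiver's current value to predict the next step. The code below is a schematic NumPy rendering of one such propagation step with tiny MLPs; the layer sizes, the ReLU activation, the residual self-edge term, and the hard 0/1 edge gate are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mlp(x, w1, b1, w2, b2):
    """Two-layer perceptron with ReLU, standing in for f_e / f_v in Eqs. (12-13)."""
    return np.maximum(x @ w1 + b1, 0.0) @ w2 + b2

def decoder_step(x_t, z, params):
    """One edge -> node propagation step.
    x_t: (N,) current values of the N time-series
    z:   (N, N) binary gates; z[i, j] = 1 if the encoder predicts an edge i -> j
    Returns the predicted next values, shape (N,)."""
    N = x_t.shape[0]
    x_next = np.zeros(N)
    for j in range(N):
        # Edge step (Eq. 12): messages from each predicted parent i of node j.
        msgs = [mlp(np.array([x_t[i], x_t[j]]), *params["edge"])
                for i in range(N) if i != j and z[i, j] == 1]
        agg = np.sum(msgs, axis=0) if msgs else np.zeros(params["msg_dim"])
        # Node step (Eq. 13): combine aggregated messages with x_t[j]; the
        # residual term reflects the assumed self-edge discussed above.
        x_next[j] = x_t[j] + mlp(np.concatenate([agg, [x_t[j]]]), *params["node"])[0]
    return x_next

# Random toy parameters just to make the sketch runnable.
rng = np.random.default_rng(0)
h, m, N = 16, 8, 4
params = {
    "edge": (rng.normal(size=(2, h)) * 0.1, np.zeros(h),
             rng.normal(size=(h, m)) * 0.1, np.zeros(m)),
    "node": (rng.normal(size=(m + 1, h)) * 0.1, np.zeros(h),
             rng.normal(size=(h, 1)) * 0.1, np.zeros(1)),
    "msg_dim": m,
}
z = (rng.random((N, N)) > 0.5).astype(int)
print(decoder_step(rng.normal(size=N), z, params))
```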