paper_id
stringlengths 10
19
| venue
stringclasses 15
values | focused_review
stringlengths 7
9.4k
| point
stringlengths 49
654
|
---|---|---|---|
ARR_2022_266_review | ARR_2022 | 1. One of the main drawbacks of this approach is that presumably the different component black-box experts of the controlled text generation have to be manually selected and the weighted linear combination has to be fine-tuned for each task. It is also not discussed if the inference time is significantly affected by this approach.
2. For the sentiment transfer task, the model with the higher Hamming distance coefficient is considered to be the best model based on the BertScore with respect to the source, which essentially measures how much deviation has been introduced. It appears however that the model with the higher Discriminator coefficient is better, in terms of perplexity and the internal/external classifiers. Given that the Hamming distance in the reference is much higher, it may not be necessary to absolutely reduce the number of changes made, if it serves the overall purpose of the text generation to make more changes. This is somewhat true for the formality transfer task as well.
3. In Table 3, for the formality transfer task, the method sees a decline in performance for the ->Informal task. While the improvement in the ->Formal task is probably a decent tradeoff, this issue is not addressed at all.
4. Percentage preference through majority voting is reported for the human evaluation. More robust correlation/agreement metrics such as Cohen's Kappa should be reported for reliability.
- BertScore and BLEURT are inconsistently typeset through the paper (alternatively as Bertscore or Bleurt). It would be better to maintain consistency.
- Line 244 in Section 2.3 refers to $E_{gen}$ and $E_{rev}$ which have not been previously introduced. It is not easy to deduce what they mean since they are not explained until the next section. Some re-writing for clarity might help here. - Line 182: discirminate: discriminate - Line 203: This penalization token -> This penalizes token - Line 254: describe -> described - Line 376: Dathathri et al. (2020) -> (Dathathri et al, 2020) - Line 434: Ma et al citation missing year - Line 449: describedd -> described - Line 449: in the text -> in a text - Line 520: prodduct -> product - Table 3 BertScore(sc) -> BertScore (src) - Line 573: which use for -> which are used for - Line 631: similar, approaches -> similar approaches | - BertScore and BLEURT are inconsistently typeset through the paper (alternatively as Bertscore or Bleurt). It would be better to maintain consistency. |
ARR_2022_7_review | ARR_2022 | 1. The selling point of this paper is unsupervised pretrained dense retriever(LaPraDoR) can per- form on par with supervised dense retriever, but actually, LaPraDoR is a hybrid retriever rather than a pure dense retriever. In a way, it’s unfair to compare hybrid method to dense/sparse method as shown in table 1, because it’s known that the dense retriever and sparse retriever are complementary. The comparable supervised models should also be hybrid retrievers. Besides, in table 3, it seems that without lexicon enhancement, the performance of proposed unsupervised model is not competitive either on in-domain MS-MARCO or cross domain BEIR benchmark compared with supervised model.
2. In table 4, the combination of self-supervised tasks ICT and DaPI doesn’t seem to be com- plementary, the effectiveness of DaPI task, which will double the GPU memory usage, is not significant (0.434 -> 0.438) 3. ICoL is proposed to mitigate the insufficient memory on a single GPU and allow more neg- ative instances for better performance, but there are no corresponding experiments to show the influence of the number of negatives. As far as I know, the quality of negatives is more important than the quantity of negatives as shown in TAS-B. 4. It sounds unreasonable that increasing the model size can hurt the performance, as recent paper Ni et al. shows that the scaling law is also apply to dense retrieval model, so the preliminary experimental results on Wikipedia about model size should be provided in detail.
5. Thepaperarguethattheproposedapproachistocomplementlexicalmatchingwithsemantic matching, while the training procedure of proposed model is totally independent with lexical matching. Therefore, the argument ”LEDR helps filter out such noise and allows the dense retriever to focus on fine-grained semantic matching” is confusing, because there is no suc- cession relationship between LEDR and dense retriever.
Reference: * Ni et al. 2021. https://arxiv.org/abs/2112.07899
the proposed LaPraDoR achieves relative low performance on MS-MARCO while relative high per- formance on BEIR, the inductive bias of the proposed pretrain method is worth exploring.
Line 300-304: q and d are confusing | 4. It sounds unreasonable that increasing the model size can hurt the performance, as recent paper Ni et al. shows that the scaling law is also apply to dense retrieval model, so the preliminary experimental results on Wikipedia about model size should be provided in detail. |
NIPS_2020_7 | NIPS_2020 | - The transfer scenarios in Sec 3 are confusing, which in turn makes Figs 1&2 confusing. It seems like lines 108-110 state that VGG is always used as the whitebox source model and the WRN/RNXT/DN are always used as the victim blackbox target models. However, lines 128 - 130 contradict this talking about when WRN/RNXT/DN attack VGG? This critical detail is quite confusing. I would suggest to use transfer notation such as "VGG19 --> WRN" to clarify this both in the text and the figures. - Perhaps the key weakness for me is in the experiments. (1) The eps=0.1 from which the "99% average success rate" (from conclusion) is found is an unconventionally large epsilon in L_inf adversarial attacks. I would consider conforming to a more popular large epsilon such as eps=16/255 used in many papers so readers can roughly compare across works. I realize you also report eps=0.03 and 0.05 which is good, but these are not the focus of the discussion of results (e.g., paragraph line 228). (2) Testing with a variety of source models on both datasets should be strongly considered. Only using VGG19 for CIFAR10 and RN50 for ImageNet may lead to somewhat inconclusive results because the source model in transfer attacks matters. Does the method still work as well if you use a different source model? How does the choice of source model affect the transferability to each of the target models? I would be more interested to the answers of these questions for evaluations on ImageNet. Consider results of a contemporary paper: Table 3 of https://arxiv.org/pdf/2002.05990.pdf. This table shows 2 source models, 5 attacks (4 baselines + theirs), and 6 bbox target models, all of which is useful information. (3) Finally, the momentum iterative attack would be a much better baseline than IFGSM because it is designed as a simple (trivial) change to IFGSM that creates much more transferable adversarial examples essentially for free. It is used as a standard of measure in many many transfer attack papers but it is not used here at all. - There are no results discussing targeted attacks. Since this method reuses existing attack algorithms such as IFGSM, etc., creating targeted attacks is a trivial sign flip in the attack algo. For completeness of experiments, it would also be useful to report results of creating targeted attacks with the LinBP method. - Some aspects in the presentation quality of this paper are a weakness for a high quality publication (e.g. NeurIPS). For example, Figs 1&2 as discussed before, the tables with a "-" for the method, the "Dataset" columns in the tables are not informative, the management of Fig 3 and Table 2, a "*" appearing in Table 1 with no indication of meaning, etc. | - Some aspects in the presentation quality of this paper are a weakness for a high quality publication (e.g. NeurIPS). For example, Figs 1&2 as discussed before, the tables with a "-" for the method, the "Dataset" columns in the tables are not informative, the management of Fig 3 and Table 2, a "*" appearing in Table 1 with no indication of meaning, etc. |
ARR_2022_253_review | ARR_2022 | - The paper uses much analysis to justify that the information axis is a good tool to be applied. As pointed out in conclusion, I'm curious to see some related experiments that this information axis tool can help with.
- For Figure 1, I have another angle for explaining why randomly-generated n-grams are far away from the extant words: the characterBERT would explicitly maximize the probability of seen character sequence (implicitly minimize the probability of unseen character sequence). So I guess the randomly generated n-grams would have distant different PPL value with the extant words. This is justified in Section 5.4.
- It would be better to define some notations and give a clear definition of the "information axis", "word concreteness" and also "Markov chain information content".
- Other than UMAP, there are some other tools for analyzing the geometry of high-dimensional representations. I believe the idea is not highly integrated with UMAP. So it would be better to show demonstrate results with other tools like T-SNE. | - The paper uses much analysis to justify that the information axis is a good tool to be applied. As pointed out in conclusion, I'm curious to see some related experiments that this information axis tool can help with. |
ICLR_2022_2531 | ICLR_2022 | I have several concerns about the clinical utility of this task as well as the evaluation approach.
- First of all, I think clarification is needed to describe the utility of the task setup. Why is the task framed as generation of the ECG report rather than framing the task as multi-label classification or slot-filling, especially given the known faithfulness issues with text generation? There are some existing approaches for automatic ECG interpretation. How does this work fit into the existing approaches? A portion of the ECG reports from the PTB-XL dataset are actually automatically generated (See Data Acquisition under https://physionet.org/content/ptb-xl/1.0.1/). Do you filter out those notes during evaluation? How does your method compare to those automatically generated reports? - A major claim in the paper is that RTLP generates more clinically accurate reports than MLM, yet the only analysis in the paper related to this is a qualitative analysis of a single report. A more systematic analysis of the quality of generation would be useful to support the claim made in the appendix. Can you ask clinicians to evaluate the utility of the generated reports or evaluate clinical utility by using the generated reports to predict conditions identifiable from the ECG? I think that it’s fine that the RTLP method performs comparable to existing methods, but I am not sure from the current paper what the utility of using RTLP is. - More generally, I think that this paper is trying to do two things at once – present new methods for multilingual pretraining while also developing a method of ECG captioning. If the emphasis is on the former, then I would expect to see evaluation against other multilingual pretraining setups such as the Unicoder (Huang 2019a). If the core contribution is the latter, then clinical utility of the method as well as comparison to baselines for ECG captioning (or similar methods) is especially important. - I’m a bit confused as to why the diversity of the generated reports is emphasized during evaluation. While I agree that the generated reports should be faithful to the associated ECG, diversity may not actually be necessary metric to aim for in a medical context. For instance, if many of the reports are normal, you would want similar reports for each normal ECG (i.e. low diversity). - My understanding is that reports are generated in other languages using Google Translate. While this makes sense to generate multilingual reports for training, it seems a bit strange to then evaluate your model performance on these silver-standard noisy reports. Do you have a held out set of gold standard reports in different languages for evaluation (other than German)?
Other Comments: - Why do you only consider ECG segments with one label assigned to them? I would expect that the associated reports would be significantly easier than including all reports. - You might consider changing the terminology from “cardiac arrythmia” categories to something broader since hypertrophy (one of the categories) is not technically a cardiac arrythmia (although it can be detected via ECG & it does predispose you to them) - I think it’d be helpful to include an example of some of the tokens that are sampled during pretraining using your semantically similar strategy for selecting target tokens. How well does this work in languages that have very different syntactic structures compared to the source language? - Do you pretrain the cardiac signal representation learning model on the entire dataset or just the training set? If the entire set, how well does this generalize to setting where you don’t have the associated labels? - What kind of tokenization is used in the model? Which Spacy tokenizer? - It’d be helpful to reference the appendix when describing the setup in section 3/5 so that the reader knows that more detailed architecture information is there. - I’d be interested to know if other multilingual pretraining setups also struggle with Greek. - It’d be helpful to show the original ECG report with punctuation + make the ECG larger so that they are easier to read - Why do you think RTLP benefits from fine-tuning on multiple languages, but MARGE does not? | - I’d be interested to know if other multilingual pretraining setups also struggle with Greek. |
dvDi1Oc2y7 | EMNLP_2023 | 1) There are potentially numerous baselines as data augmentation for hard examples has several work. Given the closeness with this proposed work of using paraphrases (both negative and positive), some of baselines are necessary for comparison with GBT, especially counterfactual data-augmentation techniques as GBT uses (negative paraphrases in DA i.e., IBH0)
Feng et.al., A Survey of Data Augmentation Approaches for NLP
Li et.al., Data Augmentation Approaches in Natural Language Processing: A Survey
2) Some other, even more simpler baselines could be lower learning rate and training for more number of epochs. This would even strengthen the claims of GBT if improvements are significant.
3) The ablation study in table 5 seemed more like good baselines which is good to have as it shows GBT is more effective when applied to only hard examples. A better ablation study could be giving just positive paraphrases (IBH1) and just negative paraphrases (IBH0)
4) Some of the important technical details are unclear. For example, which datasets and how was the paraphraser trained to generate candidate sentences for selecting IBH0 and IBH1. In line 268-277, more details would be needed as to how and where the 50K examples were selected from.
5) The text in line 293-295 makes the above point a little bit more unclear. It would be difficult for readers to understand and evaluate – “we manually observed the generated examples and find the results acceptable.”
6) A very minute point – it may be interesting to compare with openLLM methods like LLaMa (after some instruction tuning for PI task). | 5) The text in line 293-295 makes the above point a little bit more unclear. It would be difficult for readers to understand and evaluate – “we manually observed the generated examples and find the results acceptable.” |
NIPS_2020_367 | NIPS_2020 | - Below eq (3), for the upper bound of $\delta_t$ the right-hand side should be $2\sum_s\eta_sa_s$ instead of $2\sum_s\eta_sa_s\delta_s$. - It is misleading to claim that it is the first work to address the stability of SGD for non-smooth convex loss functions as there are indeed existing work which already addressed stability of stochastic optimization with non-smooth loss. It would be interesting to add some discussions or comparison with these references mentioned below: 1. “Fine-Grained Analysis of Stability and Generalization for Stochastic Gradient Descent”. ICML 2020. In this paper, their work relaxes the smoothness to $\alpha$-Holder continuity of (sub)gradients, which include the non-smooth loss functions in this paper as $\alpha=0$. Their stability analysis also improves the optimal generalization bounds $O(1/\sqrt{n})$ for multi-pass SGD with $T=O(n^2)$. It seems to me that the main technical novelty appeared in the proof of Lemma 3 which studied \delta_t^2 (as opposed to the study of \delta_t in Hardt et al’s paper) using the approximate contraction for the gradient mapping for the non-smooth loss which has already explored in the above paper. Similar ideas have already explored in the above reference in a more general setting. 2. Private Stochastic Convex Optimization: Efficient Algorithms for Non-smooth Objectives, Arxiv preprint (2020). In this Arxiv preprint, the authors developed a different differentially private algorithm (Private FTRL) for non-smooth learning problems which can also achieve optimal generalization bounds. - The authors indicate that Theorem 5.1 on privacy guarantees follow from the same line of Theorem 2.1 in [3] but omit the proof. Furthermore, the authors mention that they replace the privacy analysis of the Gaussian mechanism with the tighter Moments Accountant method [1]. However, the analysis in [1] consider the Poisson sampling while Algorithm 2 considers uniform sampling with replacement. Furthermore, the moment bound in [1] is asymptotic. Therefore, it is not clear to me how to derive Theorem 5.1. I would recommend the authors to include the details for completeness as the differentially private SGD is an important application of the stability analysis for non-smooth loss functions. | 2. Private Stochastic Convex Optimization: Efficient Algorithms for Non-smooth Objectives, Arxiv preprint (2020). In this Arxiv preprint, the authors developed a different differentially private algorithm (Private FTRL) for non-smooth learning problems which can also achieve optimal generalization bounds. |
lYongcxaNz | ICLR_2025 | The weaknesses of this paper are summarized as follows:
* The presentation of this paper needs improvements. In particular, in Theorem 1, the role of $gamma$ is not clear to me. Why does one need to involve $\gamma$ in this theorem, is there any condition for $\gamma$ (may be the same condition in Lemma 1)?
* In Theorem 1, the statement is made on one particular instance $(X, w^*, y, x_q)$, it is indeed possible that the model cannot provide accurate prediction for all instances. However, in practice one may expect the model to have good performance in expectation or with high probability, would it be possible to extend the lower bound results to the expectation or high probability cases?
* It is not to call "matching upper and lower bound" as the upper bound has an additional $\log(1/\epsilon)$ factor. A more rigorous claim should be "the upper bound matches the lower one up to logarithmic factors."
* Theorem 6 is a bit weird. It claims that there exists a global minimizer that will have very bad robustness as $L$ increases. I just do not quite understand why this argument is made on one existing global minimizer, is it possible that for other global minimizers, the robustness can be better? This should be made very clear.
* The proof is extremely not well organized. Many proofs do not have clean logic and are very hard to follow, thus making it hard to rigorously check the correctness of the proof. For instance, in Lemma 3, does the result hold for any polynomial function $P(\gamma)$? | * The proof is extremely not well organized. Many proofs do not have clean logic and are very hard to follow, thus making it hard to rigorously check the correctness of the proof. For instance, in Lemma 3, does the result hold for any polynomial function $P(\gamma)$? |
ICLR_2023_700 | ICLR_2023 | 1. Some intuitions can be further explained, e.g., in section 2.2 the situation that breaks the factorized distribution can have a factorized support. It will be more convincing to give an example which does not have a factorized support will fail to disentangle, more intuitively show the relationship between factorized support and disentanglement. 2. Since this paper claims to aim at the realistic scenario of disentangled representation learning, it is better to conduct experiments on real world datasets instead of the synthetic datasets(at least for the out-of-distribution setting.). 3. The compared disentangled baselines seem to be out-of-date, it is better to incorporate the more recent disentangling methods. | 2. Since this paper claims to aim at the realistic scenario of disentangled representation learning, it is better to conduct experiments on real world datasets instead of the synthetic datasets(at least for the out-of-distribution setting.). |
NIPS_2017_35 | NIPS_2017 | - The applicability of the methods to real world problems is rather limited as strong assumptions are made about the availability of camera parameters (extrinsics and intrinsics are known) and object segmentation.
- The numerical evaluation is not fully convincing as the method is only evaluated on synthetic data. The comparison with [5] is not completely fair as [5] is designed for a more complex problem, i.e., no knowledge of the camera pose parameters.
- Some explanations are a little vague. For example, the last paragraph of Section 3 (lines 207-210) on the single image case. Questions/comments:
- In the Recurrent Grid Fusion, have you tried ordering the views sequentially with respect to the camera viewing sphere?
- The main weakness to me is the numerical evaluation. I understand that the hypothesis of clean segmentation of the object and known camera pose limit the evaluation to purely synthetic settings. However, it would be interesting to see how the architecture performs when the camera pose is not perfect and/or when the segmentation is noisy. Per category results could also be useful.
- Many typos (e.g., lines 14, 102, 161, 239 ), please run a spell-check. | - Some explanations are a little vague. For example, the last paragraph of Section 3 (lines 207-210) on the single image case. Questions/comments: |
ICLR_2022_3099 | ICLR_2022 | W1: The setting seems to be limited and not well justified. 1) It only consider ONE truck and ONE drone. Would it be easy to extend to multiple trucks and drones? This seems to be a more interesting and practical setting. 2) What is the difference of this setting versus settings where there are multiple trucks? Are there methods solving this setting, and why are they not working in TSP-D? 3) In the second paragraph of section 2.1, the two assumptions that "we allow the drone to fly for an unlimited distance" and that, "Only customer nodes and the depot can be used to launch, recharge, and load the drone." seem to be contradicting? If you allow unlimited distance, why would the drones still need to be recharged? Am I misunderstanding something? Because of the limited setting, it may not be of interest to a large audience.
W2: It is not clear why exactly an LSTM-decoder is better than an attention-based decoder. The paper justifies that "AM loses its strong competency in routing multiple vehicles in coordination". However, AM decoder still conditions "on the current location of the vehicle and the current state of nodes". Thus, I don't think it overlooks the interaction between different vehicles. It depends more on how you design the decoder. Compared to attention, an LSTM essentially adds to the historical decisions to the policy, not the interactions between vehicles. Therefore, it is not clear why exactly LSTM-decoder is better, and the justification is quite vague in the paper.
W3: Except for AM, NM by Nazari et al. (2018) has also been an important counterpart of the proposed HM. However, it is not compared as a baseline. Whereas I understand that not every baseline should be compared, but NM is mentioned a few times throughout. If historical information is important in decoding an action, why is it not important in encoding a state? Because of this, the empirical evaluation is not totally convincing to me. | 1) It only consider ONE truck and ONE drone. Would it be easy to extend to multiple trucks and drones? This seems to be a more interesting and practical setting. |
ARR_2022_162_review | ARR_2022 | 1. The proposed approach to pretraining has limited novelty since it more or less just follows the strategies used in ELECTRA.
2. It is not clear whether baselines participating in the comparison are built on the same datasets that are used to build XLM-E.
1. From the results in Table 1, we can see that XLM-E lags behind baselines in "Structured Prediction" tasks while outperforms baselines in other tasks. Any possible reason for such a phenomenon?
Some typos and grammatical errors.
1. " A detailed efficiency analysis in presented in Section 4.5". Here "in" --> "is" 2. " XLM-E substantially outperform XML on both tasks". Here "outperform" --> "outperforms" 3. "... using parallel corpus" --> "using parallel corpora" | 1. The proposed approach to pretraining has limited novelty since it more or less just follows the strategies used in ELECTRA. |
NIPS_2020_528 | NIPS_2020 | I would largely consider most of the weaknesses to be issues with motivation and presentation rather than with the technical content of the results. 1) The motivation/need for the Newton algorithm in section 4 was somewhat lacking I felt. This is essentially just a 1-dimensional line search on a convex function, so even something as basic as a bisecting line search will converge linearly. While of course quadratic convergence is better than linear convergence, how much of an impact does this actually make on the run-time of the algorithm? Experiments along these lines would help motivate the need for the analysis/algorithm. 2) The introduction would benefit from some simple organization. As written all of the applications on page 2 are somewhat mashed together without clear transitions between topics. Simply making a subsection like “Applications of the Matrix Perspective Function”, then having \paragraph{Gaussian Likelihood Estimation}, \paragraph{Graphical Model Selection}, etc would significantly improve the readability of the introduction in my view. 3) At a high level there is the question of whether an entire paper devoted to computing a proximal operator is warranted (as this is typically an intermediate result given in a paper that needs to solve the proximal operator for a novel model). However, given the fundamental importance of the potential applications (e.g., Gaussian likelihood estimation), I would imagine this work would be of interest to the community even given the limited scope. Minor points/typos: a) In the equations after lines 71 and 74: prox_\phi (X, Y) ---> prox_\phi (X, y) b) Adding a comment that the final equality in the equation above line 81 comes from (12) and the fact that \mu^* is a root of (12) would be beneficial to the reader. c) C(\mu) is used in Theorem 4, but C(\mu) is not defined until below Theorem 4 (lines 159-160). | 1) The motivation/need for the Newton algorithm in section 4 was somewhat lacking I felt. This is essentially just a 1-dimensional line search on a convex function, so even something as basic as a bisecting line search will converge linearly. While of course quadratic convergence is better than linear convergence, how much of an impact does this actually make on the run-time of the algorithm? Experiments along these lines would help motivate the need for the analysis/algorithm. |
38k1q1yyCe | EMNLP_2023 | - Regarding the synthetic experiment: It is impossible to tell to what extent the findings from the artificial language translation experiment generalise to natural data, where non-compositional translations are much more complex. To name 3 reasons: 1) idioms have various conventionalities (~ratio between idiomatic vs literal meaning) and the frequency above which models default to the non-compositional translation likely interacts with conventionality, 2) words and n-grams contained in the idioms themselves can appear outside of the idiom, 3) idioms require disambiguation unless the idiomatic meaning is 100% conventional, 4) many idiomatic translations can be partially compositional. The artificial experiment is interesting in itself, but to assume that this is a proxy for idiom processing seems like a stretch.
- Regarding the new dataset: While the newly created dataset could potentially be a very useful resource (idiom analyses are predominantly using English corpora), the paper is not very elaborate about how the authors ensure quality control for this corpus. Who annotates the Opensubtitles sentences for idiomaticity? The paper is slightly vague about this, which suggests the authors may have annotated this. When presenting a new dataset meant to be used by future work, the proper way to construct that would be using external annotators, ideally multiple annotators per example to be able to estimate reliability of the annotations and measures of disagreement. Moreover, the corpus is constructed using idioms from a few websites, instead of from idioms taken from idiom dictionaries.
- Regarding the proposed upweighing and KNN methods: For the majority of language and score combinations (see Figure 3), the impact that the methods have on idiomatic vs random data is similar; hence the proposed MT modelling methods seem far from idiom-specific. Therefore, the results simply appear to indicate that "better NMT systems are also better at idiomatic translations".
- Across the board, the paper appears to lack focus and a single, clear storyline. The experiments seem somewhat disconnected, which makes it hard to read and understand what the main contributions are. | - Regarding the proposed upweighing and KNN methods: For the majority of language and score combinations (see Figure 3), the impact that the methods have on idiomatic vs random data is similar; hence the proposed MT modelling methods seem far from idiom-specific. Therefore, the results simply appear to indicate that "better NMT systems are also better at idiomatic translations". |
NIPS_2018_15 | NIPS_2018 | weakness of this paper is its lack of clarity and aspects of the experimental evaluation. The ResNet baseline seems to be just as good, with no signs of overfitting. The complexity added to the hGRU model is not well motivated and better baselines could be chosen. What follows is a list 10 specific details that we would like to highlight, in no particular order: 1. Formatting: is this the original NIPS style? Spacing regarding sections titles, figures, and tables seem to deviate from the template. But we may be wrong. 2. The general process is still not 100% clear to us. The GRU, or RNNs in general, are applied to sequences. But unlike other RNNs applied to image classification which iterate over the pixels/spatial dimensions, the proposed model seems to iterate over a sequence of the same image. Is this correct? 2.1 Comment: The general high-level motivation seems to be map reading (see fig 1.c) but this is an inherently sequential problem to which we would apply sequential models so it seems odd that one would compare to pure CNNs in the first place. 3. Section 2 begins with a review of the GRU. But what follows doesn't seem to be the GRU of [17]. Compare eq.1 in the paper and eq.5 in [7]. a) there doesn't seem to be a trained transformation on the sequence input x_i and b) the model convolves the hidden state, which the standard GRU doesn't do (and afaik the convolution is usually done on the input stream, not on the hidden state). c) Since the authors extend the GRU we think it would make section 2 much more readable if they used the same/similar nomenclature and variable names. E.g., there are large variations of H which all mean different things. This makes it difficult to read. 4. It is not clear what horizontal connections are. One the one hand, it seems to be an essential part of the model, on the other hand, GRU is introduced as a method of learning horizontal connections. While the term certainly carries a lot of meaning in the neuroscience context, it is not clear to us what it means in the context of an RNN model. 5. What is a feed forward drive? The equations seem to indicate that is the input at every sequence step but the latter part of the sentence describes it as coming from a previous convolutional layer. 6. The dimensions of the tensors involved in the convolution don't seem to match. The convolution in a ConvNet is usually a 2D discrete convolution over the 2 spatial dimensions. If the image is WxHxC (width, height, and, e.g., the 3 colour channels), and one kernel is 1x1xC (line 77) then we believe the resulting volume should be WxHx1 and the bias is a scalar. The authors most certainly want to have several kernels and therefore several biases but we only found this hyper-parameter for the feed forward models that are described in section 3.4. The fact that they have C biases is confusing. 7. Looking very closely at the diagram, it seems that the ResNet architectures are as good if not even slightly better than the hGRU. Numerical measurements would probably help, but that is a minor issue. It's just that the authors claim that "neural networks and their extensions" struggle in those tasks. Since we may include ResNets in that definition, their own experiment would refute that claim. The fact that the hGRU is using many fewer parameters is indeed interesting but the ResNet is also a more general model and there is (surprisingly) no sign of overfitting due to a large model. So what is the motivation of the authors of having fewer parameters? 8. Given the fact that ResNets perform so well on this task, why didn't the authors consider the earlier and closely related highway (HW) networks [high1]? HWs use a gating mechanism which is inspired by the LSTM architecture, but for images. Resnets are a special case of HW, that is, HW might make an even stronger baseline as it would also allow for a mix and gain-like computation, unlike ResNets. 9. In general, the hGRU is quite a bit more complex than the GRU. How does it compare to a double layer GRU? Since the hGRU also introduces a two-layer like cell (inhibiton part is seperated by a nonlinearity from the exhibition part) it seems unfair to compare to the GRU with fewer layers (and therefore smaller model complexity) 10. Can the authors elaborate on the motivation behind using the scalars in eq 8-11? And why are they k-dimensional? What is k? 11. Related work: The authors focus on GRU, very similar to LSTM with recurrent forget gates [lstm2], but GRU cannot learn to count [gru2] or to solve context-free languages [gru2] and also does not work as well for translation [gru3]. So why not use "horizontal LSTM" instead of "horizontal GRU"? Did the authors try? What is the difference to PyramidLSTM [lstm3], the basis of PixelRNNs? Why no comparison? Authors compare against ResNets, a special case of the earlier highway nets [high1]. What about comparing to highway nets? See point 8 above. [gru2] Weiss et al. On the Practical Computational Power of Finite Precision RNNs for Language Recognition. Preprint arXiv:1805.04908. [gru3] Britz et al (2017). Massive Exploration of Neural Machine Translation Architectures. Preprint arXiv:1703.03906 [lstm2] Gers et al. âLearning to Forget: Continual Prediction with LSTM.â Neural Computation, 12(10):2451-2471, 2000. [lstm3] Stollenga et al. Parallel Multi-Dimensional LSTM, With Application to Fast Biomedical Volumetric Image Segmentation. NIPS 2015. Preprint: arxiv:1506.07452, June 2015. [high1] Srivastava et al. Highway networks. Preprints arXiv:1505.00387 (May 2015) and arXiv:1507.06228 (Jul 2015). Also at NIPS'2015. After the rebuttal phase, this review was edited by adding the following text: Thanks for the author feedback. However, we remain unconvinced. The baseline methods used for performance comparisons (on a problem on which few compete) are not the state of the art methods for such tasks - partially because they throw away spatial information the deeper they get, while shallower layers cannot connect the dots (literally) due to the restricted field of view. Why don't the authors compare to a state of the art baseline method that can deal with arbitrary distances between pixels - standard CNNs cannot, but the good old multi-dimensional (MD) RNN can (https://arxiv.org/abs/0705.2011). For each pixel, a 2D-RNN implicitly uses the entire image as a spatial context (and a 3D-RNN uses an entire video as context). A 2D-RNN should be a natural competitor on this simple long range 2D task. The RNN is usually LSTM (such as 2D-LSTM) but could be something else. See also MD-RNN speedups through parallelization (https://arxiv.org/abs/1506.07452). The submission, however, seems to indicate that the authors donât even fully understand multi-dimensional RNNs, writing instead about "images transformed into one-dimensional sequencesâ in this context, although the point of MD-RNNs is exactly the opposite. Note that an MD-RNN in general does have local spatial organization, like the model of the authors. For any given pixel, a 2D-RNN sees this pixel plus the internal 2D-RNN states corresponding to neighbouring pixels (which already may represent info about lots of other pixels farther away). Thatâs how the 2D-RNN can recursively infer long range information despite its local 2D spatial neighbourhood wiring. So any good old MD-RNN is in fact strongly spatially organised, and in that sense even biologically plausible to some extent, AFAIK at least as plausible as the system in the present submission. The authors basically propose an alternative local 2D spatial neighbourhood wiring, which should be experimentally compared to older wirings of that type. And to our limited knowledge of biology, it is not possible to reject one of those 2D wirings based on evidence from neuroscience - as far as we can judge, the older 2D-RNN wiring is just as compatible with neurophysiological evidence as the new proposal. Since the authors talk about GRU: they could have used a 2D-GRU as a 2D-RNN baseline, instead of their more limited feedforward baseline methods. GRU, however, is a variant of the vanilla LSTM by Gers et al 2000, but lacking one gate, thatâs why it has those problems with counting and with recognising languages. Since the task might require counting, the best baseline method might be a 2D-LSTM, which was already shown to work on challenging related problems such as brain image segmentation where the long range context is important (https://arxiv.org/abs/1506.07452), while I donât know of similar 2D-GRU successes. We also agree with the AC regarding negative weights. Despite some motivation/wording that might appeal to neuroscientists, the proposed architecture is a standard ML model that has been tweaked to work on this specific problem. So it should be compared to the most appropriate alternative ML models (in that case 2D-RNNs). For now, this is a Machine Learning paper slightly disguised as a Computational Neuroscience paper. Anyway, the paper has even more important drawbacks than the baseline dispute. Lack of clarity still makes it hard to re-implement and reproduce, and a lot of complexity is added which is not well motivated or empirically evaluated through, say, an ablation study. Nevertheless, we encourage the authors to produce a major revision of this interesting work and re-submit again to the next conference! | 77) then we believe the resulting volume should be WxHx1 and the bias is a scalar. The authors most certainly want to have several kernels and therefore several biases but we only found this hyper-parameter for the feed forward models that are described in section 3.4. The fact that they have C biases is confusing. |
HM2E7fnw2U | ICLR_2024 | - I am worried about how to ensure that s contains only static features. The authors claim that static factors can be extracted from a single frame in the sequence, which is not a necessary and sufficient condition. Otherwise, any frame from the video can be used. Why the first frame?
- In addition, in Equation 8, if s contains dynamic factors, subtracting s from the dynamic information may result in the loss of some dynamic information, making it difficult for the LSTM module to capture the complete dynamic changes.
- The method of removing static information from dynamic information is by subtraction between features, which is quite naive. | - In addition, in Equation 8, if s contains dynamic factors, subtracting s from the dynamic information may result in the loss of some dynamic information, making it difficult for the LSTM module to capture the complete dynamic changes. |
NIPS_2021_2131 | NIPS_2021 | - There is not much technical novelty. Given the distinct GPs modeling the function network, the acquisition function and sampling procedure are not novel - The theoretical guarantee is pretty weak (random search is asymptotically optimal).
The discussion of not requiring dense coverage to prove the method is asymptotically consistent is interesting, but the utility of proposition 2 is not clear because although dense coverage is a consideration for proving consistency, it is not really a practical reality in sample-efficient optimization—typically BO would not have dense coverage.
Questions/comments: - There is no discussion of observation noise, which is a practical concern in many of the real world use cases mentioned in the paper. The approach of using GPs to model nodes in function network can naturally handle noisy observations, so only the acquisition function would need to be adjusted to account for noisy observations since the best objective value would be unknown. I expect that the empirical performance would remain the same (e.g. using Noisy EI from Letham et al. 2019), but the computation would be much more expensive. It would be good to discuss and demonstrated performance under noisy observations. - How does the number of MC samples affect performance, empirically? How does the network structure affect this? - It would be interesting to see a head-to-head comparison with deep GPs. How different are the runtimes (including inference times) and empirical performances?
Since the core contribution is modeling each node in the function network with a distinct GP, it would be good to see more evaluation of the function network model's predictive performance compared to a alternative modeling choices (e.g. individual models with a compositional objective, vanilla global gp, deep gp)
Grammar: - L238 “out method” -> “our method” - L335 “structurre” -> “structure”
The discussion of the work's limitations is quite thorough, and it proposes interesting directions for future work. The authors have addressed potential negative societal impacts. | - How does the number of MC samples affect performance, empirically? How does the network structure affect this? |
NIPS_2020_832 | NIPS_2020 | The reviewer has some major concerns about the experiments. 1. The paper combines many objectives (about nine loss terms in Eq. 5, Eq. 8, and Eq. 12) to optimize the reconstruction network, but has not studied these losses in the experiments section. Such a complex loss function may weaken the contribution of the data representation. Besides, it seems unfair for the compared methods. Do some of these losses can be used for other methods such as Pixel2mesh and MeshRCNN? 2. The SDF (recent SOTA 3D representation method) based approaches (e.g., DISN [1]) have not been discussed and compared in the submission. 3. While the proposed method can not perform better than existing methods such as Pixel2mesh, MeshRCNN, and DISN [1] for 3D reconstruction from images, the paper has not analyzed the reasons. The reviewer suggests presenting some qualitative results of these SOTA methods in Figure 5. 4. The reviewer suggests showing the smoothed GT shapes in Figure. 3 and Figure. 5 so that the readers can better understand the quality of the reconstruction. A minor concern: 1. For Eq. 9~11, How about directly using the last visible surface? Dose Eq. 10 really improve the performance? For example, if f_1 is partial occluded (D_a is visible), f_2 is visible. The color attribute of the pixel I should mainly depend on f_2, right? Especially in the case that f_1 and f_2 are from different parts (e.g, chair leg and chair body), then why do you directly use the color attributes of f_2. [1] Xu, Qiangeng, Weiyue Wang, Duygu Ceylan, Radomir Mech, and Ulrich Neumann. "Disn: Deep implicit surface network for high-quality single-view 3d reconstruction." In Advances in Neural Information Processing Systems, pp. 492-502. 2019. | 4. The reviewer suggests showing the smoothed GT shapes in Figure. 3 and Figure. 5 so that the readers can better understand the quality of the reconstruction. A minor concern: |
TjfXcDgvzk | ICLR_2024 | 1. The technical novelty is relatively minor with the overall idea being a combination of prior works PRANC and NOLA. While this seems enough to provide empirical improvement, the approach itself is not that big of an innovation over prior works.
2. While the prior approach PRANC is directly modified by the authors in this work there are no direct comparisons with it in either the language or vision tasks used to evaluate the proposed approach. There is a comparison of training loss in Section 3.4 and a comparison of the rank of possible solutions of the two approaches in Section 3.5 but without a direct comparison of test accuracy it is unclear if this approach is indeed an improvement over the baseline that it directly modifies. | 2. While the prior approach PRANC is directly modified by the authors in this work there are no direct comparisons with it in either the language or vision tasks used to evaluate the proposed approach. There is a comparison of training loss in Section 3.4 and a comparison of the rank of possible solutions of the two approaches in Section 3.5 but without a direct comparison of test accuracy it is unclear if this approach is indeed an improvement over the baseline that it directly modifies. |
ACL_2017_350_review | ACL_2017 | Not much novelty in method. Not quite clear if data set is general enough for other domains.
- General Discussion: This paper describes a rule-based method for generating additional weakly labeled data for event extraction. The method has three main stages. First, it uses Freebase to find important slot fillers for matching sentences in Wikipedia (using all slot fillers is too stringent resulting in too few matches). Next, it uses FrameNet to to improve reliability of labeling trigger verbs and to find nominal triggers. Lastly, it uses a multi-instance learning to deal with the noisily generated training data.
What I like about this paper is that it improves over the state-of-the-art on a non-trival benchmark. The rules involved don't seem too obfuscated, so I think it might be useful for the practitioner who is interested to improve IE systems for other domains. On the other hand, some some manual effort is still needed, for example for mapping Freebase event types to ACE event types (as written in Section 5.3 line 578). This also makes it difficult for future work to calibrate apple-to-apple against this paper. Apart from this, the method also doesn't seem too novel.
Other comments: - I'm also concern with the generalizability of this method to other domains. Section 2 line 262 says that 21 event types are selected from Freebase. How are they selected? What is the coverage on the 33 event types in the ACE data.
- The paper is generally well-written although I have some suggestions for improvement. Section 3.1 line 316 uses "arguments liked time, location...". If you mean roles or arguments, or maybe you want to use actual realizations of time and location as examples. There are minor typos, for e.g. line 357 is missing a "that", but this is not a major concern I have for this paper. | - I'm also concern with the generalizability of this method to other domains. Section 2 line 262 says that 21 event types are selected from Freebase. How are they selected? What is the coverage on the 33 event types in the ACE data. |
NIPS_2020_373 | NIPS_2020 | - The submission would benefit from clarifying assumptions as early as possible to help categorise this work in the array of possible solutions to a practical CL problem. Specifically, as presented this is a competitive solution provided: 1. The use of memory is possible in an application of interest 2. Clear task boundaries exist and can be identified or are provided. - While the connections and differences to the most immediately related work (Section 1.1) are clearly described, I would have liked to see a boarder review of recent work in Continual Learning. Currently, references to some important past publications are scattered throughout the text (e.g. EWC and VCL in Section 2), which makes me wonder why a related work section was introduced. I suggest either expanding the related work section and moving the majority of discussion of previous work there, as well as including work not currently referenced. If the authors find the space limit in the main text constraining, I suggest moving a broader discussion in the Appendix. - While the presented results on Image datasets are good, the CL community has to start considering more challenging and realistic tasks to make impact on other areas of Machine Learning. There has been very little progress in the last 2-3 years in terms of convincing standard applications and benchmarks of Continual Learning, with current experimental protocols being hardly changed and primarily focused on Image classification. I would be happy to consider raising my score if the authors introduced an additional experiment on a challenging and convincing CL problem such as sequential decision making (e.g. Contextual Bandits or Reinforcement Learning). | 1. The use of memory is possible in an application of interest 2. Clear task boundaries exist and can be identified or are provided. |
NIPS_2021_1743 | NIPS_2021 | 1. While the paper claim the importance of language modeling capability of pre-trained models, the authors did not conduct experments on generation tasks that are more likely to require a well-performing language model. Experiments on word similarity and SquAD in section 5.3 cannot really reflect the capability of language modeling. The authors may consider to include tasks like language modeling, machine translation or text sumarization to strenghen this part, as this is one of the main motivations of COCO-LM. 2. Analysis of SCL in section 5.2 regarding few-shot abaility looks not convincing. The paper claims that a more regularized representation space by SCL may result in better generalization ability in few-shot scenarios. However, results in Figure 7(c) and (d) do not meet our expectation such that COCO-LM achieves much more improvements with less labels and the improvements will gradually disappear with more labels. Besides, the authors may check if COCO-LM brings benefits to sentence retrieval tasks with the learned anisotropy text representations. 3. The comparison with Megatron is a little overrated. The performance of Megatron and COCO-LM is close to other approaches, for examples, RoBERTa, ELECTRA, and DeBERTa, which are with similar sizes as COCO-LM. If the author claim that COCO-LM is parameter-efficient, the conclusion is also applicable to the above related works.
Questions for the Authors 1. In experimental setup, why did the authors switch the types of BPE vocabulary, i.e., uncased and cased. Will the change of BPE cause the variance of performance? 2. In Table 2, it looks like COCO-LM especially affects the performance on CoLA and RTE hence the final performance. Can the authors provide some explanation on how the proposed pre-training tasks affect the two different GLEU tasks? 3. In section 5.1, the authors say that the benefits of the stop gradient operation are more on stability. What stability, the training process? If so, are there any learning curves of COCO-LM with and without stop gradient during pre-training to support this claim? 4. In section 5.2, the term “Data Argumentation” seems wrong. Did the authors mean data augmentation?
Typos 1. Check the term “Argumentation” in line 164, 252, and 314. 2. Line 283, “a unbalanced task”, should be “an unbalanced task”. 3. Line 326, “contrast pairs”, should be “contrastive pairs” to be consistent throughout the paper? | 1. While the paper claim the importance of language modeling capability of pre-trained models, the authors did not conduct experments on generation tasks that are more likely to require a well-performing language model. Experiments on word similarity and SquAD in section 5.3 cannot really reflect the capability of language modeling. The authors may consider to include tasks like language modeling, machine translation or text sumarization to strenghen this part, as this is one of the main motivations of COCO-LM. |
Bwhd7GUyHH | ICLR_2025 | I am currently holding many confusions over the setting of this work, and thus not readily at a stage to judge this work. I will take a deeper look into the technical contributions after I found myself understood the basics.
Major questions:
1. The reward defined in Eqn. (1) is weird to me in the sense that as an expected reward, it would depend on historically pulled arms and randomly realized reward through the k-NN function. I have not seen similar formulations in the bandit studies, including the previous k-NN UCB paper (Reeve et al., 2018), where I think the k-NN is not a part of the expected reward.
2. With that, is the $\mu_t^a$ vector unknown while also time-varying in Eqn. (1) given the subscript $t$?
3. Is the exploration-exploitation tradeoff discussed around Eqn. (2) a part of the formulation or algorithmic design?
4. The optimal action can also be defined with more clarity. In particular, for each arm, Eqn. (3) says there is an optimal context; however, is the context generated by environment, or is the context (instead of the arm) that the player is selecting (if so, I do not see context selection in the algorithm)? Also, I found no definition of the decision space $D$.
5. The regret definition in Eqn. (4) and its expansion in Eqn. (6) to connect with the single-step regret in Eqn. (5) is work worth debating: Eqn. (5) is measured with respect to the randomly realized reward $Y$, while Eqn. (5) is with respect to the expected rewards? Hopefully the authors can explain Eqn. (6) a bit better, especially clarify the notations.
6. It seems that I found no description of the estimation of $\mu_t^a$ anywhere in the algorithm?
7. Section 3.2 seems to be about selecting a proper $k$ for k-NN; however, is k a parameter that given in the reward defintion?
8. Also, I in general did not understand the purpose of Theorem 2, i.e., what is its statement?
Minor questions:
9. The notations of $\hat{Y}$ and $Y$ are used in a mixed way in Section 2. | 9. The notations of $\hat{Y}$ and $Y$ are used in a mixed way in Section 2. |
NIPS_2020_389 | NIPS_2020 | 1. This paper lacks some very important references for domain adaptation. The authors should cite and discuss in the revised manuscript. - Li et al. Bidirectional Learning for Domain Adaptation of Semantic Segmentation. In CVPR, 2019. https://arxiv.org/pdf/1904.10620.pdf - Chen et al. CrDoCo: Pixel-level Domain Transfer with Cross-Domain Consistency. In CVPR, 2019. https://arxiv.org/pdf/2001.03182.pdf 2. My minor concern is that in Table 1 A and B, the proposed method seems to have inferior performance compared to MFSAN. It is a bit pity that the proposed method is not the state-of-the-art on these two datasets. | 1. This paper lacks some very important references for domain adaptation. The authors should cite and discuss in the revised manuscript. |
NIPS_2017_337 | NIPS_2017 | of the manuscript stem from the restrictive---but acceptable---assumptions made throughout the analysis in order to make it tractable. The most important one is that the analysis considers the impact of data poisoning on the training loss in lieu of the test loss. This simplification is clearly acknowledged in the writing at line 102 and defended in Appendix B. Another related assumption is made at line 121: the parameter space is assumed to be an l2-ball of radius rho.
The paper is well written. Here are some minor comments:
- The appendices are well connected to the main body, this is very much appreciated.
- Figure 2 and 3 are hard to read on paper when printed in black-and-white.
- There is a typo on line 237.
- Although the related work is comprehensive, Section 6 could benefit from comparing the perspective taken in the present manuscript to the contributions of prior efforts.
- The use of the terminology "certificate" in some contexts (for instance at line 267) might be misinterpreted, due to its strong meaning in complexity theory. | - Figure 2 and 3 are hard to read on paper when printed in black-and-white. |
NIPS_2017_114 | NIPS_2017 | Weakness-
- Comparison to other semi-supervised approaches : Other approaches such as variants of Ladder networks would be relevant models to compare to. Questions/Comments-
- In Table 3, what is the difference between \Pi and \Pi (ours) ?
- In Table 3, is EMA-weighting used for other baseline models ("Supervised", \Pi, etc) ? To ensure a fair comparison, it would be good to know that all the models being compared to make use of the EMA benefits.
- The proposed model benefits from two factors : noise and keeping an exponential moving average. It would be good to see how much each factor contributes on its own. The \Pi model captures just the noise part, so it would be useful to know how much gain can be obtained by just using a noise-free exponential moving average.
- If averaging in parameter space is being used, it seems that it should be possible to apply the consistency cost in the intermediate layers of the model as well. That could potentially provide a richer consistency gradient. Was this tried ?
Minor comments and typos-
- In the abstract : "The recently proposed Temporal Ensembling has ... ": Please cite.
- "when learning large datasets." -> "when learning on large datasets."
- "zero-dimensional data points of the input space": It may not be accurate to say that the data points are zero-dimensional.
- "barely applying", " barely replicating" : "barely" -> "merely"
- "softmax output of a model does not provide a good Bayesian approximation outside training data". Bayesian approximation to what ? Please explain. Any model will have some more generalization error outside training data. Is there another source of error being referred to here ? Overall-
The paper proposes a simple and effective way of using unlabelled data and
improving generalization with labelled data. The most attractive property is
probably the low overhead of using this in practice, so it is quite likely that
this approach could be impactful and widely used. | - In Table 3, is EMA-weighting used for other baseline models ("Supervised", \Pi, etc) ? To ensure a fair comparison, it would be good to know that all the models being compared to make use of the EMA benefits. |
NIPS_2018_630 | NIPS_2018 | - While there is not much related work, I am wondering whether more experimental comparisons would be appropriate, e.g. with min-max networks, or Dugas et al., at least on some dataset where such models can express the desired constraints. - The technical delta from monotonic models (existing) to monotonic and convex/concave seems rather small, but sufficient and valuable, in my opinion. - The explanation of lattice models (S4) is fairly opaque for readers unfamiliar with such models. - The SCNN architecture is pretty much given as-is and is pretty terse; I would appreciate a bit more explanation, comparison to ICNN, and maybe a figure. It is not obvious for me to see that it leads to a convex and monotonic model, so it would be great if the paper would guide the reader a bit more there. Questions: - Lattice models expect the input to be scaled in [0, 1]. If this is done at training time using the min/max from the training set, then some test set samples might be clipped, right? Are the constraints affected in such situations? Does convexity hold? - I know the author's motivation (unlike ICNN) is not to learn easy-to-minimize functions; but would convex lattice models be easy to minimize? - Why is this paper categorized under Fairness/Accountability/Transparency, am I missing something? - The SCNN getting "lucky" on domain pricing is suspicious given your hyperparameter tuning. Are the chosen hyperparameters ever at the end of the searched range? The distance to the next best model is suspiciously large there. Presentation suggestions: - The introduction claims that "these shape constraints do not require tuning a free parameter". While technically true, the *choice* of employing a convex or concave constraint, and an increasing/decreasing constraint, can be seen as a hyperparameter that needs to be chosen or tuned. - "We have found it easier to be confident about applying ceterus paribus convexity;" -- the word "confident" threw me off a little here, as I was not sure if this is about model confidence or human interpretability. I suspect the latter, but some slight rephrasing would be great. - Unless I missed something, unconstrained neural nets are still often the best model on half of the tasks. After thinking about it, this is not surprising. It would be nice to guide the readers toward acknowledging this. - Notation: the x[d] notation is used in eqn 1 before being defined on line 133. - line 176: "corresponds" should be "corresponding" (or alternatively, replace "GAMs, with the" -> "GAMs; the") - line 216: "was not separately run" -> "it was not separately run" - line 217: "a human can summarize the machine learned as": not sure what this means, possibly "a human can summarize what the machine (has) learned as"? or "a human can summarize the machine-learned model as"? Consider rephrasing. - line 274, 279: write out "standard deviation" instead of "std dev" - line 281: write out "diminishing returns" - "Result Scoring" strikes me as a bit too vague for a section heading, it could be perceived to be about your experiment result. Is there a more specific name for this task, maybe "query relevance scoring" or something? === I have read your feedback. Thank you for addressing my observations; moving appendix D to the main seems like a good idea. I am not changing my score. | - The SCNN getting "lucky" on domain pricing is suspicious given your hyperparameter tuning. Are the chosen hyperparameters ever at the end of the searched range? The distance to the next best model is suspiciously large there. Presentation suggestions: |
ACL_2017_792_review | ACL_2017 | 1. Unfortunately, the results are rather inconsistent and one is not left entirely convinced that the proposed models are better than the alternatives, especially given the added complexity. Negative results are fine, but there is insufficient analysis to learn from them. Moreover, no results are reported on the word analogy task, besides being told that the proposed models were not competitive - this could have been interesting and analyzed further.
2. Some aspects of the experimental setup were unclear or poorly motivated, for instance w.r.t. to corpora and datasets (see details below).
3. Unfortunately, the quality of the paper deteriorates towards the end and the reader is left a little disappointed, not only w.r.t. to the results but with the quality of the presentation and the argumentation.
- General Discussion: 1. The authors aim "to learn representations for both words and senses in a shared emerging space". This is only done in the LSTMEmbed_SW version, which rather consisently performs worse than the alternatives. In any case, what is the motivation for learning representations for words and senses in a shared semantic space? This is not entirely clear and never really discussed in the paper.
2. The motivation for, or intuition behind, predicting pre-trained embeddings is not explicitly stated. Also, are the pre-trained embeddings in the LSTMEmbed_SW model representations for words or senses, or is a sum of these used again? If different alternatives are possible, which setup is used in the experiments?
3. The importance of learning sense embeddings is well recognized and also stressed by the authors. Unfortunately, however, it seems that these are never really evaluated; if they are, this remains unclear. Most or all of the word similarity datasets considers words independent of context.
4. What is the size of the training corpora? For instance, using different proportions of BabelWiki and SEW is shown in Figure 4; however, the comparison is somewhat problematic if the sizes are substantially different. The size of SemCor is moreover really small and one would typically not use such a small corpus for learning embeddings with, e.g., word2vec. If the proposed models favor small corpora, this should be stated and evaluated.
5. Some of the test sets are not independent, i.e. WS353, WSSim and WSRel, which makes comparisons problematic, in this case giving three "wins" as opposed to one.
6. The proposed models are said to be faster to train by using pre-trained embeddings in the output layer. However, no evidence to support this claim is provided. This would strengthen the paper.
7. Table 4: why not use the same dimensionality for a fair(er) comparison?
8. A section on synonym identification is missing under similarity measurement that would describe how the multiple-choice task is approached.
9. A reference to Table 2 is missing.
10. There is no description of any training for the word analogy task, which is mentioned when describing the corresponding dataset. | 2. Some aspects of the experimental setup were unclear or poorly motivated, for instance w.r.t. to corpora and datasets (see details below). |
NIPS_2019_564 | NIPS_2019 | Weakness: 1. The improvement of the proposed method over existing RL method is not impressive. 2. Compared to OR tools and RL baselines, the time and computational cost should be reported in detail to fairly compare different methods. Comment after feedback: The authors have addressed the concerns of running time. Since the applying RL in combinatorial optimization is not new, the lack of comparisons between existing RL methods makes it less convincing. Reinforcement Learning for Solving the Vehicle Routing Problem, Mohammadreza Nazari. ATTENTION, LEARN TO SOLVE ROUTING PROBLEMS!, Max Welling. Exact Combinatorial Optimization with Graph Convolutional Neural Networks, Maxime Gasse. Learning Combinatorial Optimization Algorithms over Graphs, Le Song. | 1. The improvement of the proposed method over existing RL method is not impressive. |
NIPS_2017_217 | NIPS_2017 | - The model seems to really require the final refinement step to achieve state-of-the-art performance.
- How does the size of the model (in terms of depth or number of parameters) compare to competing approaches? The authors mention that the model consists of 4 hourglass modules, but do not say how big each hourglass module is.
- There are some implementation details that are curious and will benefit from some intuition: for example, lines 158-160: why not just impose a pairwise relationship across all pairs of keypoints? the concept of anchor joints seems needlessly complex. | - How does the size of the model (in terms of depth or number of parameters) compare to competing approaches? The authors mention that the model consists of 4 hourglass modules, but do not say how big each hourglass module is. |
NIPS_2016_395 | NIPS_2016 | - I found the application to differential privacy unconvincing (see comments below) - Experimental validation was a bit light and felt preliminary RECOMMENDATION: I think this paper should be accepted into the NIPS program on the basis of the online algorithm and analysis. However, I think the application to differential privacy, without experimental validation, should be omitted from the main paper in favor of the preliminary experimental evidence of the tensor method. The results on privacy appear too preliminary to appear in a "conference of record" like NIPS. TECHNICAL COMMENTS: 1) Section 1.2: the dimensions of the projection matrices are written as $A_i \in \mathbb{R}^{m_i \times d_i}$. I think this should be $A_i \in \mathbb{R}^{d_i \times m_i}$, otherwise you cannot project a tensor $T \in \mathbb{R}^{d_1 \times d_2 \times \ldots d_p}$ on those matrices. But maybe I am wrong about this... 2) The neighborhood condition in Definition 3.2 for differential privacy seems a bit odd in the context of topic modeling. In that setting, two tensors/databases would be neighbors if one document is different, which could induce a change of something like $\sqrt{2}$ (if there is no normalization, so I found this a bit confusing. This makes me think the application of the method to differential privacy feels a bit preliminary (at best) or naive (at worst): even if a method is robust to noise, a semantically meaningful privacy model may not be immediate. This $\sqrt{2}$ is less than the $\sqrt{6}$ suggested by the authors, which may make things better? 3) A major concern I have about the differential privacy claims in this paper is with regards to the noise level in the algorithm. For moderate values of $L$, $R$, and $K$, and small $\epsilon = 1$, the noise level will be quite high. The utility theorem provided by the author requires a lower bound on $\epsilon$ to make the noise level sufficiently low, but since everything is in "big-O" notation, it is quite possible that the algorithm may not work at all for reasonable parameter values. A similar problem exists with the Hardt-Price method for differential privacy (see a recent ICASSP paper by Imtiaz and Sarwate or an ArXiV preprint by Sheffet). For example, setting L=R=100 and K=10, \epsilon = 1, \delta = 0.01 then the noise variance is of the order of 4 x 10^4. Of course, to get differentially private machine learning methods to work in practice, one either needs large sample size or to choose larger $\epsilon$, even $\epsilon \gg 1$. Having any sense of reasonable values of $\epsilon$ for a reasonable problem size (e.g. in topic modeling) would do a lot towards justifying the privacy application. 4) Privacy-preserving eigenvector computation is pretty related to private PCA, so one would expect that the authors would have considered some of the approaches in that literature. What about (\epsilon,0) methods such as the exponential mechanism (Chaudhuri et al., Kapralov and Talwar), Laplace noise (the (\epsilon,0) version in Hardt-Price), or Wishart noise (Sheffet 2015, Jiang et al. 2016, Imtiaz and Sarwate 2016)? 5) It's not clear how to use the private algorithm given the utility bound as stated. Running the algorithm is easy: providing $\epsilon$ and $\delta$ gives a private version -- but since the $\lambda$'s are unknown, verifying if the lower bound on $\epsilon$ holds may not be possible: so while I get a differentially private output, I will not know if it is useful or not. I'm not quite sure how to fix this, but perhaps a direct connection/reduction to Assumption 2.2 as a function of $\epsilon$ could give a weaker but more interpretable result. 6) Overall, given 2)-5) I think the differential privacy application is a bit too "half-baked" at the present time and I would encourage the authors to think through it more clearly. The online algorithm and robustness is significantly interesting and novel on its own. The experimental results in the appendix would be better in the main paper. 7) Given the motivation by topic modeling and so on, I would have expected at least an experiment on one real data set, but all results are on synthetic data sets. One problem with synthetic problems versus real data (which one sees in PCA as well) is that synthetic examples often have a "jump" or eigenvalue gap in the spectrum that may not be observed in real data. While verifying the conditions for exact recovery is interesting within the narrow confines of theory, experiments are an opportunity to show that the method actually works in settings where the restrictive theoretical assumptions do not hold. I would encourage the authors to include at least one such example in future extended versions of this work. | 1) Section 1.2: the dimensions of the projection matrices are written as $A_i \in \mathbb{R}^{m_i \times d_i}$. I think this should be $A_i \in \mathbb{R}^{d_i \times m_i}$, otherwise you cannot project a tensor $T \in \mathbb{R}^{d_1 \times d_2 \times \ldots d_p}$ on those matrices. But maybe I am wrong about this... |
7fuddaTrSu | ICLR_2025 | 1. Grammatical errors and careless statements plague the manuscript. It should be carefully proofread. I'm including some of the grammar mistakes/typos at the end of the "Weaknesses" section. Here are some examples for the careless statements, just from the introduction and related work:
- "The past decade has seen superior performing data-driven weather forecasting models" -> "The past **years** have seen superior performing data-driven weather forecasting models"
- The sentence *"the medium range forecasting ability makes them unstable for climate modelling several years into the future"* doesn't make sense. Just because a model is able to perform medium-range forecasting doesn't make it unstable (see e.g. ACE).
- Citing Gupta & Brandstetter (2022) in the context of *"Climate models are governed by temporal partial differential equations (PDEs)"* doesn't make sense. If you choose to cite something, it would be better to cite a standard textbook on climate science or at least a climate science paper.
- I don't think that Table 1 shows anything related to data-efficiency, as claimed by the authors *"making it data-efficient as shown in Table 1"*. The authors might have meant computational efficiency.
- The claim that *"To address this gap, we propose PACE, which treats climate emulation as a diagnostic-type prediction"* is misleading without making clear that prior work (e.g. ClimateBench or ClimateSet) does exactly this.
- I don't think that *"Nguyen et al. (2023a) accounts for multi-model training, however it is limited to medium range forecasting."* is true. ClimaX contains climate emulation experiments on the ClimateBench dataset.
- The citation format is sometimes off. Not using brackets for non-inline citations hurts the reading flow.
2. Basic mistakes or imprecisions:
- Including ACE and LUCIE in Table 1 is unfair since they were designed for the "autoregressive" climate-dynamics emulation problem rather than the diagnostics-type emulation problem studied in this paper. The inputs-outputs are quite different between these emulation approaches. The relationship between them is more complex in the autoregressive case.
- Section 3.3.: The name of the Convolution Block Attention Module (CBAM) is misleading since it contains no attention layers. Similarly for the "channel attention map" and the "spatial attention map".
- The abstract mentions that *'While deep learning methodologies have made significant progress in weather forecasting, they are still unstable for climate emulation tasks"*. In my opinion, this statement is wrong and misleading: i) ACE, LUCIE, or Spherical DYffusion [1] are counterexamples of pure deep learning methods that perform stable climate long-term climate emulation with reasonable weather forecasting skill; ii) The statement suggest to me that the paper deals with emulation of ***temporal*** climate dynamics (and producing stable, long-term rollouts). However, this is not true since the paper deals with diagnostic-type climate emulation where the mapping from forcings (e.g. GHG) to climate states (e.g. temperatures) are learned (climate dynamics are not being emulated).
- Adaptation of SFNO architecture (especially Appendix A.2) is not consistent with the configuration from LUCIE (nor is it with the one from ACE nor the original SFNO paper) as wrongly claimed by the paper. For example, the latent dimension is 72 for LUCIE and 256 for ACE, which are both much larger than the 32 used in this paper (similarly for the number of SFNO blocks). Lastly, it's not clear to me why the paper chooses to add *"a 2D convolutional layer designed to handle inputs with 4 channels and produce outputs with 2 channels"* rather than simply changing the number of input and output channels of the original SFNO architecture.
3. The strength of the results is debatable
- I have doubts about the interpretation shown in Fig. 1. The climate models show clear increasing temperature trends, which are not properly emulated by PACE. In one case there's no clear increasing trend (is PACE simply learning the mean?), in the other case it's much smaller than the climate model one. As a side question, what SSP is this? Can you include that in the caption please?
- Fig. 4 shows that PACE's predictions are very pixelated. This is a problem in climate modeling, where high spatial resolutions are highly desirable. The climate models in CMIP6 are already relatively coarse, so it seems important to at least keep their granularity.
- No error bars are included. I strongly recommend re-training PACE (and the best baselines) with different random seeds, and reporting error bars on the corresponding RMSEs. Otherwise, it is hard to judge how significant the results are, especially since the main results (e.g. Fig. 3, 6, 7) don't seem to indicate a clear edge for PACE compared to the baselines.
- Diagnostic-type climate emulation, as studied in this paper, of temperature (and in some cases even for precipitation [2]) has been shown to work well with simple, non-neural ML approaches like Gaussian Processes (see ClimateBench and ClimateSet) and even linear regression (see [2]). Including these approaches would be crucial, given their simplicity. I appreciate the point of the authors that achieving good RMSEs on ClimateSet with a lightweight neural network is possible, but these non-neural approaches are important to include to carefully compare PACE to even more lightweight approaches.
- The title and model mention uncertainty aware climate emulation, but none of the experiments study this (e.g. ensembling and comparisons to the CMIP6 ensembles themselves).
- Can you share insights with respect to the training and inference runtimes of PACE? How does it compare to fully neural approaches that don't require ODE solvers?
4. Some method details are unclearly presented/lack explanation.
- Can you elaborate on Eq. 14? The relationship to Eq. 13 and the rest of the paper is not clear to me. Is $y$ the climate model temperature/precip. target data? What do you use for $\sigma^2$? How do you choose it?
- Information about the periodic boundary condition (PBC) is completely missing. Literally the only information that the manuscript gives is *"We implement periodic boundary condition (PBC) to simulate the entire planet"*. How this is implemented is not discussed.
- Similarly, it's not clear to me how/where the "harmonic embeddings" are used in PACE. The diagram in Fig. 2 doesn't show them and only their definition is stated in the manuscript itself. What do you use for $t$? Also, the section title says "Harmonics Spatio-Temporal Embeddings" but their definition suggests that they're temporal at most.
- Fig. 2 diagram indicates that a "Adaptive Pooling" module is used at the end of PACE. I could not find any information about this module anywhere else in the manuscript.
- I presume that including such a module is important because the "spatial attention map" outputs (which in the diagram comes just before the adaptive pooling module) are squished to (0, 1) by the sigmoid function, which does not seem to match the actual range of the standardized targets.
- How exactly is a Neural ODE used inside PACE? You say that you use the dopri5 ODE solver, which is a traditional method not based on neural networks. This seems to contradict the claim that a Neural ODE is used.
A selection of typos (but note that there are many more that should be fixed):
- "medium range" -> "**medium-range**"
- "two key phenomenon" -> "two key **phenomena**"
- "modelling key physical law" -> "modelling key physical **laws**"
- Line 159: "it's" -> "**its**"
- Line 162: "descritized" -> "**discretized**"
Also, the global maps in Figures 2 and 4 are "upside-down". References:
[1] Probabilistic Emulation of a Global Climate Model with Spherical DYffusion (https://arxiv.org/abs/2406.14798; NeurIPS 2024)
[2] The impact of internal variability on benchmarking deep learning climate emulators (https://arxiv.org/abs/2408.05288) | - The claim that *"To address this gap, we propose PACE, which treats climate emulation as a diagnostic-type prediction"* is misleading without making clear that prior work (e.g. ClimateBench or ClimateSet) does exactly this. |
ICLR_2022_2163 | ICLR_2022 | Weakness: 1. This paper only uses metric embedding to tell a story for DNN models and does not provide the specific relationship between metric learning and DNNs. For example, whether the feature transformation obtained by DNN meets the definition of metric (or part of the definition), and whether the perspective of metric embedding can bring new inspiration to the theory of DNNs. 2. The metric learning theory in this paper basically comes from the generalization theory of neural networks [Bartlett et al. (2017)]. Compared with the previous theoretical results, the metric perspective analysis proposed in this paper does not give better results. From the existing content of this paper, the part of metric learning does not seem to work. | 2. The metric learning theory in this paper basically comes from the generalization theory of neural networks [Bartlett et al. (2017)]. Compared with the previous theoretical results, the metric perspective analysis proposed in this paper does not give better results. From the existing content of this paper, the part of metric learning does not seem to work. |
NIPS_2017_370 | NIPS_2017 | - There is almost no discussion or analysis on the 'filter manifold network' (FMN) which forms the main part of the technique. Did authors experiment with any other architectures for FMN? How does the adaptive convolutions scale with the number of filter parameters? It seems that in all the experiments, the number of input and output channels is small (around 32). Can FMN scale reasonably well when the number of filter parameters is huge (say, 128 to 512 input and output channels which is common to many CNN architectures)?
- From the experimental results, it seems that replacing normal convolutions with adaptive convolutions in not always a good. In Table-3, ACNN-v3 (all adaptive convolutions) performed worse that ACNN-v2 (adaptive convolutions only in the last layer). So, it seems that the placement of adaptive convolutions is important, but there is no analysis or comments on this aspect of the technique.
- The improvements on image deconvolution is minimal with CNN-X working better than ACNN when all the dataset is considered. This shows that the adaptive convolutions are not universally applicable when the side information is available. Also, there are no comparisons with state-of-the-art network architectures for digit recognition and image deconvolution. Suggestions:
- It would be good to move some visual results from supplementary to the main paper. In the main paper, there is almost no visual results on crowd density estimation which forms the main experiment of the paper. At present, there are 3 different figures for illustrating the proposed network architecture. Probably, authors can condense it to two and make use of that space for some visual results.
- It would be great if authors can address some of the above weaknesses in the revision to make this a good paper.
Review Summary:
- Despite some drawbacks in terms of experimental analysis and the general applicability of the proposed technique, the paper has several experiments and insights that would be interesting to the community. ------------------
After the Rebuttal: ------------------
My concern with this paper is insufficient analysis of 'filter manifold network' architecture and the placement of adaptive convolutions in a given CNN. Authors partially addressed these points in their rebuttal while promising to add the discussion into a revised version and deferring some other parts to future work.
With the expectation that authors would revise the paper and also since other reviewers are fairly positive about this work, I recommend this paper for acceptance. | - It would be good to move some visual results from supplementary to the main paper. In the main paper, there is almost no visual results on crowd density estimation which forms the main experiment of the paper. At present, there are 3 different figures for illustrating the proposed network architecture. Probably, authors can condense it to two and make use of that space for some visual results. |
NIPS_2022_183 | NIPS_2022 | The writing can be improved as it causes difficulty even for experienced readers. Examples include but not limit to 1) Last column in Table 1 should refer to Theorem 7 rather than Theorem 6; 2) Using r
to denote the risk for minimization problems and primal risk for minimax problem at the same time is confusing; 3) Overall, the paper is very dense and somehow lack of organization, especially Section 4. For example, Lemma 2 and Proposition 1 can be combined. Lemma 4 is underfull box.
The necessity of proposing the "primal gap" can be checked and I think it is an artifact of writing the paper. In Line 220, we see the relation between the generalization error of the primal gap and primal risk is the extra error between the empirical minimax learner and the population one. This term is of course requiring attention since the final target is always to study the primal population risk defined in (2), but it is also algorithm-independent. The analyses of GDA and GDmax still focus on the generalization error of the primal risk. Overall, I think the main contribution should be Lemma 1 and Theorem 7 in the Appendix, and the title might be a little exaggerated.
Despite it being the first work establishing generalization bounds for ϵ
-stable algorithms without strong concavity, the dependence on ϵ
is not as good as before. While it can be argued that the assumption is weaker, it still draws back the primal population risk and primal dual population risk. For example, in the (non)convex minimization counterpart, such dependence is known to be ϵ
[Hardt et al., 2016]. Furthermore, when the empirical risk is considered together in realization of Assumption 5 to study the population risk, it seems provide worse trade-offs. | 2) Using r to denote the risk for minimization problems and primal risk for minimax problem at the same time is confusing; |
NIPS_2021_311 | NIPS_2021 | - The paper leaves some natural questions open (see questions below). - Line 170 mentions that the corpus residual can be used to detect an unsuitable corpus, but there are no experiments to support this.
After authors' response All the weakness points have been addressed by the authors' response. Consequently I have raised my score. In particular:
The left open questions have all been answered.
There indeed is an experiment to support this, thanks to the authors' for clarifying this, that connection was not clear to me previously.
Questions - Line 60: Why do you say that e.g. influence functions cannot be used to explain a prediction? The explanation of a prediction could be the training examples whose removal (as determined by the influence function) would lead to the largest score drop for a prediction. - How does the method scale as the corpus size or hidden dimension size is increased? - What happens if a too small corpus is chosen? Can this be detected? - What if we don’t know that a test example is crucially different, e.g. what if we don’t know that the patient of Figure 8 is “British” and we use the American corpus to explain it? Can this be detected with the corpus residual value? - In the supplementary material you mention how it is possible to check if a decomposition is unique. Do you do this in practice when conducting experiments? How do you choose a decomposition if it is not unique? What does it imply for the experiments (and the usage of the method in real-world applications) if the decomposition is not unique?
Typos, representation etc. - Line 50: An example of when a prototype model would be unsuitable would strengthen your argument. - Footnote 2: “or” -> “of” - Line 191: when the baseline is first introduced, [10] or other references would be helpful to support this approach - Line 319: “the the” -> “the” - Line 380: “at” -> “to”?
A broader impact section could be added. In a separate section (e.g. supplementary material), there could be an explicit discussion on when the method should not be used, e.g. as shown in Figure 8, the American corpus shouldn’t be used to explain the British patient. Also see last question above – what if we don’t know that the patient is British? Can this be detected? This should also be discussed in such a section. | - What if we don’t know that a test example is crucially different, e.g. what if we don’t know that the patient of Figure 8 is “British” and we use the American corpus to explain it? Can this be detected with the corpus residual value? |
ACL_2017_606_review | ACL_2017 | - [Choice of Dataset] The authors use WebQuestionsSP as the testbed. Why not using the most popular WebQuestions (Berant et al., 2013) benchmark set? Since NSM only requires weak supervision, using WebQuestions would be more intuitive and straightforward, plus it could facilitate direct comparison with main-stream QA research.
- [Analysis of Compositionality] One of the contribution of this work is the usage of symbolic intermediate execution results to facilitate modeling language compositionality. One interesting question is how well questions with various compositional depth are handled. Simple one-hop questions are the easiest to solve, while complex multi-hop ones that require filtering and superlative operations (argmax/min) would be highly non-trivial. The authors should present detailed analysis regarding the performance on question sets with different compositional depth.
- [Missing References] I find some relevant papers in this field missing. For example, the authors should cite previous RL-based methods for knowledge-based semantic parsing (e.g., Berant and Liang., 2015), the sequence level REINFORCE training method of (Ranzato et al., 2016) which is closely related to augmented REINFORCE, and the neural enquirer work (Yin et al., 2016) which uses continuous differentiable memories for modeling neural execution.
- Misc.
- Why is the REINFORCE algorithm randomly initialized (Algo. 1) instead of using parameters pre-trained with iterative ML?
- What is KG server in Figure 5? | - [Choice of Dataset] The authors use WebQuestionsSP as the testbed. Why not using the most popular WebQuestions (Berant et al., 2013) benchmark set? Since NSM only requires weak supervision, using WebQuestions would be more intuitive and straightforward, plus it could facilitate direct comparison with main-stream QA research. |
NIPS_2020_1824 | NIPS_2020 | - The two settings considered, the fixed design and the low-smoothness setting are both fairly restricted. In particular, requiring that the smoothness parameter beta < 1 is rather strong, as indicated by the example/discussion given in Section 4. - The machinery used for analysis, e.g., kernel-methods and differencing are known and used often in nonparametric estimation. Nevertheless, the application yields interesting results here. - There are no empirical results included in the paper. These could have been used to study the conjectured phase transition from beta < 1 to beta > 1. Given the proposed algorithms, this seems like a missed opportunity. | - The machinery used for analysis, e.g., kernel-methods and differencing are known and used often in nonparametric estimation. Nevertheless, the application yields interesting results here. |
TskzCtpMEO | ICLR_2024 | 1. the experiments are quite bare-bones for a BNN paper, there is no evaluation of predictive uncertainty besides calibration -- we don't need a Bayesian approach do well on this metric. I would either suggest adding e.g. a temperature scaling baseline applied to a sparse deterministic net or (preferably) the usual out-of-distribution and distribution shift benchmarks.
2. primarily testing at a single sparsity level as in Table 2 also seems a bit limited to me. In my view, there are broadly two possible goals when using sparsity: opitimizing sparsity at a given performance level, e.g. close to optimal, or optimizing performance at a given sparsity level. I would have liked to see more figures in the style of Figure 2 left and Figure 3 to cover both of these settings also for the baselines.
3. I would have liked to see a bit more in-depth investigation of the pruning criteria, e.g. a plot of Spearman correlations between the preferred score and the others throughout training or a correlation matrix at various stages (say beginning, halfway through and end of training). I must say that I am not overly convinced that they matter too much, the variation of accuracy in Fig 2 seems to be only about 0.5% (although see questions). So I think it might be worth saving the page discussing the criteria in fewer of more thorough experiments.
4. the paper makes some rather inaccurate claims vs the existing literature. In particular, it is not the first paper introducing a "fully sparse BNN framework that maintains a consistently sparse Bayesian model through- out the training and inference", this statement also applies to the (Ritter et al., 2021) paper, which is incorrectly cited as a post-hoc pruning paper (the paper does use post-hoc pruning as an optional step to further increase sparsity, but the core low-rank parameterization is maintained throughout training). This doesn't affect the contribution of course, but prior work needs to be contextualized correctly.
5. I don't really see the need to make such claims in the first place, it is not obvious that sparsity in training is desirable. Of course it may be the case that a larger network that would not fit into memory without sparsity performs better, but then this needs to be demonstrated (or like-wise any hypothetical training speed increases resulting from a reduced number of FLOPs - in the age of parallelized computation, that is a mostly meaningless metric if it cannot be shown that a practical implementation can lead to actual cost savings).
6. the abstract is simultaneously wordy and vague. I did not know what the paper was doing specifically after reading it, even though it really isn't hard to describe the method in 1 or 2 sentences. I would say that the low-rank/basis terminology led me in the wrong direction of thinking and a pruning-based description would have been clearer, but this may of course differ for readers with a different background. | 5. I don't really see the need to make such claims in the first place, it is not obvious that sparsity in training is desirable. Of course it may be the case that a larger network that would not fit into memory without sparsity performs better, but then this needs to be demonstrated (or like-wise any hypothetical training speed increases resulting from a reduced number of FLOPs - in the age of parallelized computation, that is a mostly meaningless metric if it cannot be shown that a practical implementation can lead to actual cost savings). |
NIPS_2021_1860 | NIPS_2021 | Please refer to Main Review for the detailed comments. 1 Novelty is limited. The design is not quite new, based on the fact that attention for motion learning has been widely used in video understanding. 2 By the way, temporal shift module [TSM: Temporal Shift Module for Efficient Video Understanding, ICCV2019] is a popular mechanism for early action recognition. It would be interesting to see how it works in Table 7. | 1 Novelty is limited. The design is not quite new, based on the fact that attention for motion learning has been widely used in video understanding. |
ICLR_2023_1400 | ICLR_2023 | - While the paper shows improvements on CIFAR derivatives, it lacks analysis or results on other datasets (e.g., ImageNet derivatives). Verifying the effectiveness of the framework on ImageNet-1k or even ImageNet-100 is important. These results ideally can be presented in the main paper.
- The authors should add some details on how to solve the optimization in the main paper. It's an important piece of information currently lacking in the paper.
- Some baselines such as [1] are not considered and should be added.
I feel that influence function can be replaced by other influence estimation methods such as datamodels[2] or tracin[3]. It will be beneficial to understand if the updated framework results in better pruning than the baselines. I am assuming it would result in better pruning results, however it would be beneficial to understand which influence based methods are particularly suitable for pruning.
[1]. https://arxiv.org/pdf/2107.07075
[2]. https://arxiv.org/abs/2202.00622
[3]. https://arxiv.org/abs/2002.08484 | - While the paper shows improvements on CIFAR derivatives, it lacks analysis or results on other datasets (e.g., ImageNet derivatives). Verifying the effectiveness of the framework on ImageNet-1k or even ImageNet-100 is important. These results ideally can be presented in the main paper. |
NIPS_2022_1523 | NIPS_2022 | Weakness:
1 Causality: I think the main drawback of this manuscript is the discussion of causality. In line 25, the authors claim that causality has been mathematically defined by Wiener et.al.. it would be nice to explicitly give the definition here, as reviewers may not familiar with this definition. Importantly, the nuance of causality definition varies from literature [1] to literature [2]. Without presenting the exact definition of causality quoted in this paper and discussing related definitions, it makes the readers hard to understand the main idea. In terms of 'classification of cause-effect', I am not sure if this terminology makes sense or not. What does it mean by classifying cause-effect (later causality detection is brought in line 38)? I believe the authors should discuss its connection to causal variable identification. This also relates to the fact the study is conducted on observational data.
[1] Peters J, Janzing D, Schölkopf B. Elements of causal inference: foundations and learning algorithms[M]. The MIT Press, 2017. [2] Hernán M A, Robins J M. Causal inference. 2010.
2 Unclear model design: The model architecture and learning details are fragmented or missing. The authors could either provide a plot of model illustration, pseudo-code table, or code repository. Considering that Neurochaos Learning is not a well-known method, it is important to demonstrate integrated details to facilitate reproductivity.
3 Experimental design: The experiments regarding Coupled autoregressive (AR) processes and Coupled 1D chaotic maps etc. don't seem to be well-motivated. Could the authors reason why particularly using such setting to investigate cause-effect of time series. Lastly, the comparison to a five-layer neural network seems to be less convincing, given the rapid developments of deep learning architectures.
Yes, the authors fairly discussed the limitations of the method. The potential negative impact may not be applicable to this study. | 2 Unclear model design: The model architecture and learning details are fragmented or missing. The authors could either provide a plot of model illustration, pseudo-code table, or code repository. Considering that Neurochaos Learning is not a well-known method, it is important to demonstrate integrated details to facilitate reproductivity. |
orefzVRWqV | EMNLP_2023 | I only have these concerns from the paper:
1. BigFive and MBTI are stated as models to be extended in Abstract and Introduction sections while they are used as mere datasets in Experiments. It's better to just state them as datasets throughout the paper unless the authors should provide an extended explanation why they are addressing them like that.
2. It's imperative to provide train/validation/test splits and statistics of data used in the experiments to aid in understanding the model performance, how the evaluation is being made, and for reproducibility.
3. Typos and presentation inconsistencies. | 1. BigFive and MBTI are stated as models to be extended in Abstract and Introduction sections while they are used as mere datasets in Experiments. It's better to just state them as datasets throughout the paper unless the authors should provide an extended explanation why they are addressing them like that. |
rGvDRT4Z60 | ICLR_2024 | - The implications of rejecting for fairness are not considered. Rejection for privacy has
implications in terms of privacy budget and likewise rejections for fairness come with
implications and ignoring them might be responsible for the observed gains on the Pareto
frontier. Consider the noted rejection example:
"If at inference-time a decision cannot be made without violating a pre-specified fairness
metric, then the model can refuse to answer, at which point that decision could be relegated
to a human judge"
The important implication here is that there will still be a judgement; it is just that the model
will not be making it. Regardless of whether the result of the human judgement will produce fair
or unfair overall statistics (that consider ultimate judgement whether by model or human), those
decisions need to be incorporated into subsequent fairness calculus. Even if a query is rejected
due to privacy, and if a decision is made for it subsequently, it would need to be accounted for
in subsequent fairness decisions.
Suggestion: incorporate ultimate decisions, whether by model or human, into the rejection
mechanism; i.e. update counts m(z, k) based on human decisions. Given that humans might put the
group counts into already violating territory, it may be necessary to rewrite Line 7 of Algorithm
1 to check whether the fairness criterion is improving or not due to the decision and allow
queries that improve statistics even though those statistics already violate γ threshold.
Handling rejection in experiments will also need to be done but unsure what the best approach
there would be. Perhaps a random human decision maker?
- In arguments for intervention points, assumptions are made which preclude solutions. They assume
the intervention need to be made independent of other mechanisms in PATE. That is, they cannot
consider information internal to decision making that is not described by Figure 1 like
individual teacher outputs. This leaves the possibility that some fairness methods might be able
to integrated with PATE in a closer manner than the options described. One example is that they
might include the teacher outputs instead of operating on the overall predicted class like
Algorithm 1 assumes presently. C3 in particular suggests that some interventions will not account
for privacy budget correctly due to special circumstances and suggests at Point 4, they can be
budgeting can be handled correctly. Nothing is stopping a design from refunding privacy budget if
a query is rejected subsequently to an intervention point.
Suggestion: rephrase arguments for why some intervention points are bad to make sure they don't
also make assumptions about how the interventions are made and whether they can interact with
privacy budget.
- Results in the Pareto frontier show small improvements, no improvements, and in some cases worse
results than prior baselines.
Suggestion: Include more experimental samples in the results to make sure the statistical
validity of any improvement claims is good. This may require larger datasets. Related, the
experiments show error bars but how they are derived is not explained.
- Comparisons against methods in which rejection due to fairness is not an option may not be fair.
Suggestion: either integrate suggestion regarding accounting for rejection above, or incorporate
some form of rejection (or simulate it) in the existing methods being compared to. It may be that
the best methodology is not FairPATE but some existing baselines if adjusted to include fairness
rejection option.
Smaller things:
- Rejection rate is not shown in any experiments. One could view a misclassification as a
rejection, however. Please include rejection rates or view them as misclassifications in the results.
- The distribution whose fairness need to be protected is left to be guessed by the reader. For
privacy, it is more clear that it is the private data that is sensitive and thus privacy
budgeting is done when accessing that private data as opposed to the public data. For fairness,
the impact on individuals in the private dataset seems to be non-existent as the decisions for
them are never made, released, or implemented in some downstream outcome. I presume, then, it is
the fairness needs to be respected on the public data.
Algorithm 1 and several points throughout the work hint at this. However, there is also the
consideration of intervention points 1,2,3 which seem odd as they points seen before any
individual for whom fairness is considered is seen. That is, fairness about public individuals
cannot be made there, independent of any other issues such as privacy budgeting. Further, Theorem
1 discusses a demographic parity pre-processor which achieves demographic parity on private data
which I presume is irrelevant.
- The statement
"PATE relies on unlabeled public data, which lacks the ground truth labels Y"
is a bit confusing unless one has already understood that fairness is with respect to public
data. PATE also relies on private labeled data to create the teachers.
- The Privacy Analysis paragraph could be greatly simplified to just the last sentence regarding post-processing.
Smallest things:
- Double "violations" near "violations of demographic disparity violations".
- The statement "DP that only protects privacy of a given sensitive feature" might be
mischaracterizing DP. It is not focused on features or even data but rather the impact of
*individuals* on visible results. | - Rejection rate is not shown in any experiments. One could view a misclassification as a rejection, however. Please include rejection rates or view them as misclassifications in the results. |
NIPS_2022_1666 | NIPS_2022 | I cannot give a clear acceptance to the current manuscript due to the following concerns:
1. Inaccurate Contribution: One claimed contribution of this work is the compact continuous parameterization of the solution space. However, as discussed in the paper, DIMES directly uses the widely-used GNN models to generate the solution heatmap for TSP[1,2] and MIS[3] problems, respectively. The credit for compact continuous parameterization should be given to the previous work [1,2,3] but not this work.
For TSP, Joshi et al.[1] have systemactilly studied the effect of different solution decoding (e.g., Autoregressive Decoding (AR) v.s. Non-autoregressive decoding (NAR, the heatmap approach) and learning methods ( supversied learning (SL) v.s. reinforcement learning (RL)). To my understanding, the combination of AR + SL, AR + RL and NAR(heatmap) + SL have been investigated in Joshi et.al. and other work (e.g., PtrNet-SL, PtrNet-RL/AM, GCN), but I am not aware of othe work on NAR(heatmap) + RL. The NAR + RL combination could be the novel contribution of this work.
2. Actual Cost of Meta-Learning: The meta-learning (meta-update/fine-tuning) approach is crucial for the proposed method's promising performance. However, its actual cost has not been clearly discussed in the main paper. For example, Table 1 reports that DIMES only needs a few minutes to solve 128 TSP500/TSP1000 and 16 TSP10000 instances. However, at inference, DIMES actually needs extra meta-gradient update steps to adapt its model parameters to each problem instance. The costs of the meta-gradient steps are 1.5h - 10h for TSP500 to TSP10000 as reported in Appendix C.1. Since all the other heuristic/learning methods do not require such meta update step, it is unfair to report that the runtime of DIMES is only a few minutes (which should be a few hours) in Table 1.
3. Generalization v.s. Testing Performance: To my understanding, all the other learning-based methods in Table 1 are trained on TSP100 instances but not TSP500-TSP10000 as for DIMES. Therefore, the results reported in Table 1 are actually their out-of-distribution generalization performance. There are two important generalization gaps compared with DIMES: 1) generalization from TSP100 to TSP10000, 2) generalization to the specific TSP instances (the fine-tuning step in DIMES). I do see it is DIMES's own advantages (direct RL training for large-scale problems + meta fine-tuning) to overcome these two generalization gaps, but the difference should be clearly clarified in the paper.
In addition, it is also interesting to see a comparison of DIMES with other methods on TSP100 (in-distribution testing performance) with/without meta-learning.
4. Advantage of NAR(heatmap) + RL + Meta-Learning: From Table 1&2, for TSP1000, the generalization performance of AM (G: 31.15, BS: 29.90) trained on TSP100 is not very far from the testing performance of DIMES without meta-learning (27.11) directly trained on TSP1000. It could be helpful to check whether the more powerful POMO approach[4] can have a smaller performance gap. Reporting the results for POMO and DIMES without meta-learning for all instances in Table 1 could make the advantage of the NAR(heatmap) + RL approach in DIMES much clearer.
Hottung et al.[5] shows that POMO + Efficient Active Search (EAS) can achieve promising generalization performance for larger TSP instances on TSP and CVRP. The comparison with POMO + EAS could be important to better evaluate the advantage of meta-learning in DIMES.
[1] Chaitanya K Joshi, Quentin Cappart, Louis-Martin Rousseau, Thomas Laurent, and Xavier Bresson. Learning tsp requires rethinking generalization. arXiv preprint arXiv:2006.07054,2020.
[2] Chaitanya K Joshi, Thomas Laurent, and Xavier Bresson. An efficient graph convolutional network technique for the travelling salesman problem. arXiv preprint arXiv:1906.01227, 2019.
[3] Zhuwen Li, Qifeng Chen, and Vladlen Koltun. Combinatorial optimization with graph convolutional networks and guided tree search. NeurIPS 2018.
[4] Yeong-Dae Kwon, Jinho Choo, Byoungjip Kim, Iljoo Yoon, Youngjune Gwon, and Seungjai Min. POMO: Policy optimization with multiple optima for reinforcement learning. NeurIPS 2020.
[5] André Hottung, Yeong-Dae Kwon, and Kevin Tierney. Efficient active search for combinatorial optimization problems. ICLR 2022.
Yes, the limitations have been adequately addressed in Section 5 Concluding Remarks. I do not see any potential negative societal impact of this work. | 2) generalization to the specific TSP instances (the fine-tuning step in DIMES). I do see it is DIMES's own advantages (direct RL training for large-scale problems + meta fine-tuning) to overcome these two generalization gaps, but the difference should be clearly clarified in the paper. In addition, it is also interesting to see a comparison of DIMES with other methods on TSP100 (in-distribution testing performance) with/without meta-learning. |
ARR_2022_98_review | ARR_2022 | 1. Human evaluations were not performed. Given the weaknesses of SARI (Vásquez-Rodríguez et al. 2021) and FKGL (Tanprasert and Kauchak, 2021), the lack of human evaluations severely limits the potential impact of the results, combined with the variability in the results on different datasets.
2. While the authors explain the need to include text generation models into the framework of (Kumar et al., 2020), it is not clear as to why only the delete operation was retained from the framework, which used multiple edit operations (reordering, deletion, lexical simplification, etc.). Further, it is not clear how including those other operations will affect the quality and performance of the system.
3. ( minor) It is unclear how the authors arrived at the different components of the "scoring function," nor is it clear how they arrived at the different threshold values/ranges.
4. Finally, one might wonder that the performance gains on Newsela are due to a domain effect, given that the system was explicitly tuned for deletion operations (that abound in Newsela) and that performance is much lower on the ASSET test corpus. It is unclear how the system would generalize to new datasets with varying levels of complexity, and peripheral content.
1. Is there any reason why 'Gold Reference' was not reported for Newsela? It makes it hard to assess the performance of the existing system. 2. Similarly, is there a reason why the effect of linguistic acceptability was not analyzed (Table 3 and Section 4.6)?
3. It will be nice to see some examples of the system on actual texts (vs. other components & models).
4. What were the final thresholds that were used for the results? It will also be good for reproducibility if the authors can share the full set of hyperparameters as well. | 4. What were the final thresholds that were used for the results? It will also be good for reproducibility if the authors can share the full set of hyperparameters as well. |
ACL_2017_148_review | ACL_2017 | - The goal of your paper is not entirely clear. I had to read the paper 4 times and I still do not understand what you are talking about!
- The article is highly ambiguous what it talks about - machine comprehension or text readability for humans - you miss important work in the readability field - Section 2.2. has completely unrelated discussion of theoretical topics.
- I have the feeling that this paper is trying to answer too many questions in the same time, by this making itself quite weak. Questions such as “does text readability have impact on RC datasets” should be analyzed separately from all these prerequisite skills.
- General Discussion: - The title is a bit ambiguous, it would be good to clarify that you are referring to machine comprehension of text, and not human reading comprehension, because “reading comprehension” and “readability” usually mean that.
- You say that your “dataset analysis suggested that the readability of RC datasets does not directly affect the question difficulty”, but this depends on the method/features used for answer detection, e.g. if you use POS/dependency parse features.
- You need to proofread the English of your paper, there are some important omissions, like “the question is easy to solve simply look..” on page 1.
- How do you annotate datasets with “metrics”??
- Here you are mixing machine reading comprehension of texts and human reading comprehension of texts, which, although somewhat similar, are also quite different, and also large areas.
- “readability of text” is not “difficulty of reading contents”. Check this: DuBay, W.H. 2004. The Principles of Readability. Costa Mesa, CA: Impact information. - it would be good if you put more pointers distinguishing your work from readability of questions for humans, because this article is highly ambiguous.
E.g. on page 1 “These two examples show that the readability of the text does not necessarily correlate with the difficulty of the questions” you should add “for machine comprehension” - Section 3.1. - Again: are you referring to such skills for humans or for machines? If for machines, why are you citing papers for humans, and how sure are you they are referring to machines too?
- How many questions the annotators had to annotate? Were the annotators clear they annotate the questions keeping in mind machines and not people? | - You say that your “dataset analysis suggested that the readability of RC datasets does not directly affect the question difficulty”, but this depends on the method/features used for answer detection, e.g. if you use POS/dependency parse features. |
NIPS_2018_66 | NIPS_2018 | of their proposed method for disentangling discrete features in different datasets. I think that the main of the paper lies in the relatively thorough experimentation. I thought the results in Figure 6 were particularly interesting in that they suggest that there is an ordering in features in terms of mutual information between data and latent variable (for which the KL is an upper bound), where higher mutual information features appear first as the capacity is increased. I also appreciate the explicit discussion of the robust of the degree of disentanglement across restarts, as well as the sensitivity to hyperparameters. Given the difficulties observed in Figure 4 in distinguishing between similar digits (such as 5s and 8s), it would be interesting to see results for this method on a dataset like dSprites, where the shapes are very similar in pixel space. The inferred chair rotations in Figure 7 are also a nice illustration of the ability of the method to generalize to the test set. The main thing that this paper lacks is a more quantitative evaluation. A number of recent papers have proposed metrics for evaluating disentangled representations. In addition the metrics proposed by Kim & Mnih (2018) and Chen et al. (2018), the work by Eastwood & Williams (2017) [1] is relevant in this context. All of these metrics presume that we have access to labels for true latent factors, which is not the case for any of the datasets considered in the experimentation. However, it would probably be worth evaluating one or more of these metrics on a dataset such as dSprites. A minor criticism is that details the training procedure and network architectures are somewhat scarce in the main text. It would be helpful to briefly describe the architectures and training setup in a bit more detail, and explicitly call out the relevant sections of the supplementary material. In particular, it would be good to list key parameters such as γ and the schedule for the capacities Cz and Cc, e.g., the figure captions. In Figure 6a, please mark the 25k iterations (e.g. with a vertical dashed line) to indicate that this is where the capacity is no longer increased further. Questions - How robust is the ordering on features Figure 6, given the noted variability across restarts in Section 4.3? I would hypothesize that the discrete variable always emerges first (given that this variable is in some sense given a âgreaterâ capacity than individual dimensions in the continuous variables). Is the ordering on the continuous variables always the same? What happens when you keep increasing the capacity beyond 25k iterations. Does the network eventually use all of the dimensions of the latent variables? - I would also appreciate some discussion of how the hyperparameters in the objective were chosen. In particular, one could imagine that the relative magnitude of Cc and Cz would matter, as well as γ. This means that there are more parameter to tune than in, e.g., a vanilla β-VAE. Can the authors comment on how they chose the reported values, and perhaps discuss the sensitivity to these particular hyperparameters in more detail? - In Figure 2, what is the range of values over which traversal is performed? Related Work In addition to the work by Eastwood & Williams, there are a couple of related references that the authors should probably cite: - Kumar et. al [2] also proposed the total correlation term along with Kim & Mnih (2018) and Chen et al. (2018). - A recent paper by Esmaeli et al. [3] employs an objective based on the Total Correlation, related to the one in Kim & Mnih (2018) and Chen et. al (2018) to induce disentangled representations that can incorporate both discrete and continuous variables. Minor Comments - As the authors write in the introduction, one of the purported advantages of VAEs over GANs is stability of training. However, as mentioned by the author, including multiple variables of different types also makes the representation unstable. Given this observation, maybe it is worth qualifying these statements in the introduction. - I would say that section 3.2 can be eliminated - I think that at this point readers can be presumed to know about the Gumbel-Softmax/Concrete distribution. - Figure 1 could be optimized to use less whitespace. - I would recommend to replace instances of (\citet{key}) with \citep{key}. References [1] Eastwood, C. & Williams, C. K. I. A Framework for the Quantitative Evaluation of Disentangled Representations. (2018). [2] Kumar, A., Sattigeri, P. & Balakrishnan, A. Variational inference of disentangled latent concepts from unlabeled observations. arXiv preprint arXiv:1711.00848 (2017). [3] Esmaeili, B. et al. Structured Disentangled Representations. arXiv:1804.02086 [cs, stat] (2018). | - Figure 1 could be optimized to use less whitespace. |
ACL_2017_494_review | ACL_2017 | - fairly straightforward extension of existing retrofitting work - would be nice to see some additional baselines (e.g. character embeddings) - General Discussion: The paper describes "morph-fitting", a type of retrofitting for vector spaces that focuses specifically on incorporating morphological constraints into the vector space. The framework is based on the idea of "attract" and "repel" constraints, where attract constraints are used to pull morphological variations close together (e.g. look/looking) and repel constraints are used to push derivational antonyms apart (e.g. responsible/irresponsible). They test their algorithm on multiple different vector spaces and several language, and show consistent improvements on intrinsic evaluation (SimLex-999, and SimVerb-3500). They also test on the extrinsic task of dialogue state tracking, and again demonstrate measurable improvements over using morphologically-unaware word embeddings.
I think this is a very nice paper. It is a simple and clean way to incorporate linguistic knowledge into distributional models of semantics, and the empirical results are very convincing. I have some questions/comments below, but nothing that I feel should prevent it from being published.
- Comments for Authors 1) I don't really understand the need for the morph-simlex evaluation set. It seems a bit suspect to create a dataset using the same algorithm that you ultimately aim to evaluate. It seems to me a no-brainer that your model will do well on a dataset that was constructed by making the same assumptions the model makes. I don't think you need to include this dataset at all, since it is a potentially erroneous evaluation that can cause confusion, and your results are convincing enough on the standard datasets.
2) I really liked the morph-fix baseline, thank you for including that. I would have liked to see a baseline based on character embeddings, since this seems to be the most fashionable way, currently, to side-step dealing with morphological variation. You mentioned it in the related work, but it would be better to actually compare against it empirically.
3) Ideally, we would have a vector space where morphological variants are just close together, but where we can assign specific semantics to the different inflections. Do you have any evidence that the geometry of the space you end with is meaningful. E.g. does "looking" - "look" + "walk" = "walking"? It would be nice to have some analysis that suggests the morphfitting results in a more meaningful space, not just better embeddings. | 3) Ideally, we would have a vector space where morphological variants are just close together, but where we can assign specific semantics to the different inflections. Do you have any evidence that the geometry of the space you end with is meaningful. E.g. does "looking" - "look" + "walk" = "walking"? It would be nice to have some analysis that suggests the morphfitting results in a more meaningful space, not just better embeddings. |
NIPS_2016_93 | NIPS_2016 | - The claims made in the introduction are far from what has been achieved by the tasks and the models. The authors call this task language learning, but evaluate on question answering. I recommend the authors tone-down the intro and not call this language learning. It is rather a feedback driven QA in the form of a dialog. - With a fixed policy, this setting is a subset of reinforcement learning. Can tasks get more complicated (like what explained in the last paragraph of the paper) so that the policy is not fixed. Then, the authors can compare with a reinforcement learning algorithm baseline. - The details of the forward-prediction model is not well explained. In particular, Figure 2(b) does not really show the schematic representation of the forward prediction model; the figure should be redrawn. It was hard to connect the pieces of the text with the figure as well as the equations. - Overall, the writing quality of the paper should be improved; e.g., the authors spend the same space on explaining basic memory networks and then the forward model. The related work has missing pieces on more reinforcement learning tasks in the literature. - The 10 sub-tasks are rather simplistic for bAbi. They could solve all the sub-tasks with their final model. More discussions are required here. - The error analysis on the movie dataset is missing. In order for other researchers to continue on this task, they need to know what are the cases that such model fails. | - Overall, the writing quality of the paper should be improved; e.g., the authors spend the same space on explaining basic memory networks and then the forward model. The related work has missing pieces on more reinforcement learning tasks in the literature. |
NIPS_2017_303 | NIPS_2017 | of their approach with respect to the previous SUCRL. The provided numerical simulation is not conclusive but supports the above considerations;
- Clarity: the paper could be clearer but is sufficiently clear. The authors provide an example and a theoretical discussion which help understanding the mathematical framework;
- Originality: the work seems to be sufficiently original with respect to its predecessor (SUCRL) and with respect to other published works in NIPS;
- Significance: the motivation of the paper is clear and relevant since it addresses a significant limitation of previous methods;
Other comments:
- Line 140: here the first column of Qo is replaced by vo to form P'o, so that the first state is not reachable anymore but from a terminating state. I assume that either Ass.1 (finite length of an option) or Ass. 2 (the starting state is a terminal state) clarify this choice. In the event this is the case, the authors should mention the connection between the two;
- Line 283: "four" -> "for";
- Line 284: "where" s-> "were"; | - Line 140: here the first column of Qo is replaced by vo to form P'o, so that the first state is not reachable anymore but from a terminating state. I assume that either Ass.1 (finite length of an option) or Ass. |
ICLR_2022_3188 | ICLR_2022 | One major concern is that using recurrent networks may increase computation complexity. Authors should include FLOPs and inference time in all tables. Computation is a very important factor in networks - one can easily have a much stronger network with fewer #parameters but more computation. On the other hand, having FLOPs is not enough, as low FLOPs does not mean low inference time. Therefore, including both FLOPs and inference time can be a fair comparison.
Authors should compare to more and latest network compression works, not just vanilla version. For example, Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M Roy, and Michael Carbin. Stabilizing the lottery ticket hypothesis.arXiv preprint arXiv:1903.01611, 2019 and Yonathan Aflalo, Asaf Noy, Ming Lin, Itamar Friedman, and Lihi Zelnik. Knapsack pruning with inner distillation.arXiv preprint arXiv:2002.08258, 2020.
Similar works such as Tied Block Convolution: Leaner and Better CNNs with Shared Thinner Filters (AAAI 2021) need to be discussed and compared.
Following comment 1) - if authors did not find improvement in FLOPs or inference time, I suggest looking at if there is any improvement on the accuracy or specific properties. For example, with the recurrent model, maybe the sequential relationship is easier to mode? | 1) - if authors did not find improvement in FLOPs or inference time, I suggest looking at if there is any improvement on the accuracy or specific properties. For example, with the recurrent model, maybe the sequential relationship is easier to mode? |
ACL_2017_588_review | ACL_2017 | and the evaluation leaves some questions unanswered. - Strengths: The proposed task requires encoding external knowledge, and the associated dataset may serve as a good benchmark for evaluating hybrid NLU systems.
- Weaknesses: 1) All the models evaluated, except the best performing model (HIERENC), do not have access to contextual information beyond a sentence. This does not seem sufficient to predict a missing entity. It is unclear whether any attempts at coreference and anaphora resolution have been made. It would generally help to see how well humans perform at the same task.
2) The choice of predictors used in all models is unusual. It is unclear why similarity between context embedding and the definition of the entity is a good indicator of the goodness of the entity as a filler.
3) The description of HIERENC is unclear. From what I understand, each input (h_i) to the temporal network is the average of the representations of all instantiations of context filled by every possible entity in the vocabulary.
This does not seem to be a good idea since presumably only one of those instantiations is correct. This would most likely introduce a lot of noise.
4) The results are not very informative. Given that this is a rare entity prediction problem, it would help to look at type-level accuracies, and analyze how the accuracies of the proposed models vary with frequencies of entities.
- Questions to the authors: 1) An important assumption being made is that d_e are good replacements for entity embeddings. Was this assumption tested?
2) Have you tried building a classifier that just takes h_i^e as inputs?
I have read the authors' responses. I still think the task+dataset could benefit from human evaluation. This task can potentially be a good benchmark for NLU systems, if we know how difficult the task is. The results presented in the paper are not indicative of this due to the reasons stated above. Hence, I am not changing my scores. | 1) An important assumption being made is that d_e are good replacements for entity embeddings. Was this assumption tested? |
ARR_2022_98_review | ARR_2022 | 1. Human evaluations were not performed. Given the weaknesses of SARI (Vásquez-Rodríguez et al. 2021) and FKGL (Tanprasert and Kauchak, 2021), the lack of human evaluations severely limits the potential impact of the results, combined with the variability in the results on different datasets.
2. While the authors explain the need to include text generation models into the framework of (Kumar et al., 2020), it is not clear as to why only the delete operation was retained from the framework, which used multiple edit operations (reordering, deletion, lexical simplification, etc.). Further, it is not clear how including those other operations will affect the quality and performance of the system.
3. ( minor) It is unclear how the authors arrived at the different components of the "scoring function," nor is it clear how they arrived at the different threshold values/ranges.
4. Finally, one might wonder that the performance gains on Newsela are due to a domain effect, given that the system was explicitly tuned for deletion operations (that abound in Newsela) and that performance is much lower on the ASSET test corpus. It is unclear how the system would generalize to new datasets with varying levels of complexity, and peripheral content.
1. Is there any reason why 'Gold Reference' was not reported for Newsela? It makes it hard to assess the performance of the existing system. 2. Similarly, is there a reason why the effect of linguistic acceptability was not analyzed (Table 3 and Section 4.6)?
3. It will be nice to see some examples of the system on actual texts (vs. other components & models).
4. What were the final thresholds that were used for the results? It will also be good for reproducibility if the authors can share the full set of hyperparameters as well. | 3. ( minor) It is unclear how the authors arrived at the different components of the "scoring function," nor is it clear how they arrived at the different threshold values/ranges. |
NIPS_2022_51 | NIPS_2022 | Weakness
My major concern for this paper is that the empirical contribution is over-claimed. However, Section 5.1 is the place I think the authors measure their work in a correct way but the corresponding results are neither significantly better nor comprehensive enough to support their claimed contribution. I will elaborate.
Training time comparisons are unfair. The cost of the CRT pipeline in terms of the per-epoch time is measured in the wrong way. The current way, excluding Section 5.1, to measure the training cost for CRT is total cost = student cost. If this paper is about to compare the training cost between baselines that train student networks, this is correct. However, all the baselines are methods that actually train the teacher network. Therefore, the correct cost of CRT should be total cost = teacher cost + student cost. It is okay to assume a robust teacher network exists but not to assume that the cost of having a teacher network is zero. The authors seem to have noticed this problem. I found the correct measurements in Section 5.1. In fact, if this paper follows the way Section 5.1 is designed, the results will be very impressive: only do robust training on a tiny/small network and use them as teachers for monster networks, which can save a significant amount of time. To do this, the current experiments in Section 5.1 is far from enough. As one may notice that the training cost of the teacher dominates the training budget of CRT because training students do not need robustness regularizations. If one ignores the training cost of teachers, this paper is just to compare the training cost between robustness regularization and standard training, which is not interesting.
The significance of the work needs more justifications. All the results show that the student networks are just marginally better or worse than the teachers. This raises a question: why in practice does one want to do this robustness transfer? Why do we not directly use the teacher network? The transfer is only between architectures and not between datasets. Unless network architectures are limited in some sense or I missed something, I don’t see why we need CRT to produce a similar network.
The discussion on the scalability of CRT is confusing. Section 5.3 aims to show the scalability of CRT, which is pretty confusing to me. My superficial understanding of the scalability of CRT is determined by the scalability of training teachers and the scalability of training students. The scalability of teachers is determined by the robustness training method proposed by other work. The training of student models is just standard training, which always scales up. However, if the authors take my advice to shift the focus of this paper to train large robust student networks from small teachers, then it is fair to claim that CRT is the way to scale robustness training for large networks.
Some figures and tables are not necessary. I find Figure 1 and Table 1 are not necessary at all because 1) I don’t see any information related to the topic this paper tries to discuss in Figure 1. This is just a very general plot for deep learning, and 2) making the factors in a table does not help convey more messages than pure text. There is no more information at all. | 2) making the factors in a table does not help convey more messages than pure text. There is no more information at all. |
NIPS_2017_434 | NIPS_2017 | ---
This paper is very clean, so I mainly have nits to pick and suggestions for material that would be interesting to see. In roughly decreasing order of importance:
1. A seemingly important novel feature of the model is the use of multiple INs at different speeds in the dynamics predictor. This design choice is not
ablated. How important is the added complexity? Will one IN do?
2. Section 4.2: To what extent should long term rollouts be predictable? After a certain amount of time it seems MSE becomes meaningless because too many small errors have accumulated. This is a subtle point that could mislead readers who see relatively large MSEs in figure 4, so perhaps a discussion should be added in section 4.2.
3. The images used in this paper sample have randomly sampled CIFAR images as backgrounds to make the task harder.
While more difficult tasks are more interesting modulo all other factors of interest, this choice is not well motivated.
Why is this particular dimension of difficulty interesting?
4. line 232: This hypothesis could be specified a bit more clearly. How do noisy rollouts contribute to lower rollout error?
5. Are the learned object state embeddings interpretable in any way before decoding?
6. It may be beneficial to spend more time discussing model limitations and other dimensions of generalization. Some suggestions:
* The number of entities is fixed and it's not clear how to generalize a model to different numbers of entities (e.g., as shown in figure 3 of INs).
* How many different kinds of physical interaction can be in one simulation?
* How sensitive is the visual encoder to shorter/longer sequence lengths? Does the model deal well with different frame rates?
Preliminary Evaluation ---
Clear accept. The only thing which I feel is really missing is the first point in the weaknesses section, but its lack would not merit rejection. | * How many different kinds of physical interaction can be in one simulation? |
NIPS_2021_894 | NIPS_2021 | Unfortunately, the weaknesses may outweigh the strengths for this submission. The paper is uncomfortably split in two between (1) the comparison of established models and (2) the introduction of the FT-Transformer. I believe that both parts would benefit from additional work.
1. Model Comparison
For a paper which lists the 'thorough' comparison of models on a 'wide range' of datasets as a key contribution, the chosen selection of datasets is not adequate for a variety of reasons:
Only one of the datasets has categorical features. All other datasets have exclusively numerical features.
Categorical features are generally regarded as more challenging (especially for deep learning), so this omission may affect conclusions.
For this one dataset, the authors do not employ one hot encoding, which may negatively affect performance for some models.
11 Datasets is definitely on the small side. (The SELU paper compares performance on 121 datasets.)
All datasets are between 20k and 1M samples.
There are a lot of UCI datasets with <20k samples. Why were they excluded? Surely these are relevant for such a general comparison?
Why were datasets cut to <1M samples? E.g. the Higgs dataset (usually 11M samples) was cut in a non-standard way to <1M samples. Further,
one of the conclusions of the paper is that GBDT outperforms DL models on 'heterogeneous' but not on 'homogeneous' data.
The separation into homogeneous and heterogeneous datasets performed by the authors perfectly aligns with this conclusion.
However, the specific reasons for how datasets where classified as homo-/heterogeneous is not explained in detail and feels rather arbitrary to me.
The authors state that a dataset is heterogenous if it describes 'different physical properties' and is of 'diverse units of measurements'. As far as I can tell, the Higgs dataset should be heterogenous by this definition, but I would be happy to hear explanations from the author. (Higgs contains a bunch of particle physics properties. Why are these less homogeneous then the housing-related attributes of California housing?)
For epsilon, I can't find any information online about the composition of its features, so why is it homogeneous then?
The perfect separation between GBDT and DL performance on heterogeneous vs homogeneous datasets as well as the lack of justification makes me suspicious that it is not impossible that the authors came up with the classification of these datasets into hetero-/homogeneous only after observing the results.
Also, no explanation of why performance of GDBT and DL differs on hetero-/homogeneous datasets is given.
2. Experiments
5.) For the Adult dataset, I have compared the reported performances in this paper to publically available results at https://www.openml.org/t/7592, which shows a large 0.1 difference between the top reported AUROC values. This might be indicative of problems with the experiment routine. I have not found such irregularities for a few other datasets that I've checked.
6.) The paper makes claims of superiority of one method (e.g ResNets / FT-Transformer) over another across all observed datasets. Why not performance in terms of average rank order across models directly, instead of eyeballing things? Additionally, I would have preferred for the authors to also quantify variance across different splits (instead just over different model seeds), especially for datasets without established test sets.
3. FT-Transformer
7.) The paper is split weirdly between 'model comparison/literature review' and introducing the 'FT-Transformer' architecture. Although both parts are individually clear, the introduction of the FT-Transformer architecture comes up somewhat abruptly in the middle of the paper, after the reader has already seen results for the FT-Transformer, affecting clarity of the overall submission.
I feel that making a big deal of the FT-Transformer as a contribution generally hurts this paper, because
It takes away space that could have been used for a more thorough comparison of architectures
The differences to AutoInt are small, despite what the authors claim. (Adding biases to the encoding sounds like a one line code change that seems to add a 0.01 improvement. Adding the mask tokens at input is more interesting but is not ablated.)
However, both the ResNet-MLP and the FT-Transformer architecture are potentially interesting as a baselines for future work and maybe the authors could have spent more time discussing the tricks needed to make them work well. (Further, the conclusion that MLPs aren't horrible for tabular DL are in line with other recent work (https://arxiv.org/abs/2106.11189, previously at https://openreview.net/forum?id=2d34y5bRWxB).)
Overall, I feel like the split between 'review comparison of prior work' and 'introduction of new models' does not work well in this submission.
4. Architecture Bias Experiments
The idea of randomly generating targets from GBDT/NN to study biases of the particular architectures is interesting. However, I feel that the idea is underdeveloped in this submission:
What's benefit of linearly interpolating targets? (In my mind, this doesn't add anything as only α ∈ [ 0 , 1 ]
has a clear meaning.)
Why not generate data from the transformer as well?
Why is XGBoost not included here? Why not extend this to all models in the spirit of the 'big comparison' paper?
Fig. 3 seems to suggest that CatBoost can accommodate all data while ResNets can't acccomodate CatBoost-data. How does this relate to the experiments (where ostensibly, ResNet > CatBoost for some datasets).
Can this be related to the statements about hetero-/homogeneous datasets? Conclusion
Making a good literature review and model comparison paper is hard.
This paper is split between trying to provide that comparison and introducing the FT-Transformer. I am not convinced that the FT-Transformer contains enough information to warrant publication. The comparison between models might be enough if appropriately extended. However, I feel that a significant number of the above comments need to be addressed first (mainly: more datasets, compare over different CV splits, show rank order, extend 4.1 to more methods).
[edit] Score updated to a 5. See discussion below. [edit] Score updated to a 7. See discussion below.
The paper discusses limitations of the FT-Transformer in a paragraph in S. 3.6, where they identify the quadratic scaling in the number of features as the main limitation of the architecture. Given that a main contribution of this paper is the comparison of architectures, a discussion of the limitations here would have been nice.
I believe the authors have misunderstood the request for discussing 'potential negative societal impacts' of submissions: in the checklist, they again reference S. 3.6 where they discuss only scaling limitations and how to (potentially) alleviate them in future work. It is unclear to me how this relates to societal impact. | 1. Model Comparison For a paper which lists the 'thorough' comparison of models on a 'wide range' of datasets as a key contribution, the chosen selection of datasets is not adequate for a variety of reasons: Only one of the datasets has categorical features. All other datasets have exclusively numerical features. Categorical features are generally regarded as more challenging (especially for deep learning), so this omission may affect conclusions. For this one dataset, the authors do not employ one hot encoding, which may negatively affect performance for some models. |
NIPS_2020_232 | NIPS_2020 | Currently I am giving a score 8, mainly because the idea, motivation and storyline are exciting. But the draft’s Sections 3 & 4 remain unclear in several ways. My final score will depend on how the authors clarify the main questions below: -Section 3 appears to be too “high level” (it shouldn’t be, for the many new things discussed). For example, I was expecting to see how backpropagation was done for the two new layers, but they were unexplained (not even in the supplementary). Also, it is surprising that “fixing shift” as an important extension towards the authors’ claimed “coarse/fine flexibility” only takes five lines in Section 3. A true gem may be overlooked! -Section 4: it is totally unclear what are the dimensions of shift and add layers? For example, when you compare “ShiftAddNet” with ResNet-20, shall I imagine either shift or add layer to have the same dimension as the conv layer, for each layer? Or else? How about DeepShift/AdderNet? Are they fair-comparable to ShiftAddNets in layer/model sizes? - Section 4: The two IoT datasets (FlatCam Face [26], Head-pose detection [11]) are unpopular, weird choices. The former is relatively recent but not substantially followed yet. The latter was published in 2004 and was no longer used much recently. I feel strange why the authors choose the two uncommon datasets, that makes their benchmarking results a bit hard to sense and evaluate. There should have been better options for IoT benchmarking, such as some wearable health or mobile activity recognition data, or even some sets in UCI. | - Section 4: The two IoT datasets (FlatCam Face [26], Head-pose detection [11]) are unpopular, weird choices. The former is relatively recent but not substantially followed yet. The latter was published in 2004 and was no longer used much recently. I feel strange why the authors choose the two uncommon datasets, that makes their benchmarking results a bit hard to sense and evaluate. There should have been better options for IoT benchmarking, such as some wearable health or mobile activity recognition data, or even some sets in UCI. |
ICLR_2021_1527 | ICLR_2021 | weakness of the paper: I am not convinced, that the efficiency of RL self-play is best measured per agent. In the appendix, it is rightfully argued, that part of the training could be parallelized. However, the conclusion that the baseline experiments thus could be repeated N times, seems to ignore that the additional compute could be used to train stronger opponents also for the baselines. The experiments don’t account for this. Further, the algorithm requires the evaluation of each agent-match-up combination for each round to choose the next opponent and thus involves policy roll-outs that are quadratic in the population size. This seems very expensive especially for larger populations. To highlight the efficiency of the approach in the title of the paper thus might be a misnomer.
Recommendation: In its current form, I thus vote for a weak reject.
Support: I acknowledge the paper and its results to be of interest for self-play in RL. However, in my opinion, the paper fails to properly account for the weakness of its approach. (I believe the paper to become stronger, if the efficiency of the approach was discussed critically.)
Rating: 5 out of 10
Confidence: 3 out of 5
CoE: I don’t see the paper in violation of the ICLR’s Code of Ethics. | 3 out of 5 CoE: I don’t see the paper in violation of the ICLR’s Code of Ethics. |
NIPS_2019_573 | NIPS_2019 | of the paper: - no theoretical guarantees for convergence/pruning - though experiments on the small networks (LeNet300 and LeNet5) are very promising: similar to DNS [16] on LeNet300, significantly better than DNS [16] on LeNet5, the ultimate goal of pruning is to reduce the compute needed for large networks. - on the large models authors only compare GSM to L-OBS. No motivation given for the choice of the competing algorithm. Based on the smaller experiments it should be DNS [16], the closest competitor, rather than L-OBS, showed quite poor performance compared to others. - Authors state that GSM can be used for automated pruning sensitivity estimation. 1) While graphs (Fig 2) show that GSM indeed correlates with layer sensitivity, it was not shown how to actually predict sensitivity, i.e. no algorithm that inputs model, runs GSM, processes GSM result and output sensitivity for each layer. 2) Authors don't explain the detail on how the ground truth of sensitivity is achieved, lines 238-239 just say "we first estimate a layer's sensitivity by pruning ...", but no details on how actual pruning was done. comments: 1) Table 1, Table 2, Table 3 - "origin/remain params|compression ratio| non-zero ratio" --- all these columns duplicate the information, only one of the is enough. 2) Figure 1 - plot 3, 4 - two lines are indistinguishable (not even sure if there are two, just a guess), would be better to plot relative error of approximation, rather than actual values; why plot 3, 4 are only for one value of beta while plot 1 and 2 are for three values? 3) All figures - unreadable in black and white 4) Pruning majorly works with large networks, which are usually trained in distributed settings, authors do not mention anything about potential necessity to find global top Q values of the metric over the average of gradients. This will potentially break big portion of acceleration techniques, such as quantization and sparsification. | 4) Pruning majorly works with large networks, which are usually trained in distributed settings, authors do not mention anything about potential necessity to find global top Q values of the metric over the average of gradients. This will potentially break big portion of acceleration techniques, such as quantization and sparsification. |
NIPS_2016_478 | NIPS_2016 | weakness is in the evaluation. The datasets used are very simple (whether artificial or real). Furthermore, there is no particularly convincing direct demonstration on real data (e.g. MNIST digits) that the network is actually robust to gain variation. Figure 3 shows that performance is worse without IP, but this is not quite the same thing. In addition, while GSM is discussed and stated as "mathematically distinct" (l.232), etc., it is not clear why GSM cannot be used on the same data and results compared to the PPG model's results. Minor comments (no need for authors to respond): - The link between IP and the terms/equations could be explained more explicitly and prominently - Pls include labels for subfigures in Figs 3 and 4, and not just state in the captions. - Have some of the subfigures in Figs 1 and 2 been swapped by mistake? | - Have some of the subfigures in Figs 1 and 2 been swapped by mistake? |
ARR_2022_340_review | ARR_2022 | My main concern is that they haven't quite demonstrated enough to validate the claim that these are demonstrating a causal role for syntactic knowledge. Two crticisms in particular: 1) The dropout probe improves sensitivity. It finds a causal role for syntactic representations where previous approaches would have missed it. Good. But all other things being equal, one should worry that this also increases the risk of false positives. I would think this should be a substantial part of the discussion. 2) Relating to the possibility of false positives, what about the probe itself? The counterfactual paradigm assumes that the probe itself is capturing syntactic structure, but I worry that training on templates could allow both the probe and the model to "cheat" and look for side information. Templates exacerbate this problem by presenting a relatively invariant sentence string structure.
Finally, there's the possiblity that syntactic structure (or not quite, see point 2)) is encoded and plays some causal role in the model's predictions, but it's secondary to some other information.
Really, the criticism comes down to a lack of baselines. How do we know that this approach isn't now overestimating the causal role of syntax in these models? Testing with a clearly non-syntactic proble, destroying syntax but keeping lexical effects by scrambling word order, or probing a model that certainly cannot encode this information. This is most of the way towards being an excellent paper, but it drops the ball in this respect.
I got garden pathed reading line 331 "One could define other Z1 and Z2..." The author seems to really like the bigram "prior art." | 1) The dropout probe improves sensitivity. It finds a causal role for syntactic representations where previous approaches would have missed it. Good. But all other things being equal, one should worry that this also increases the risk of false positives. I would think this should be a substantial part of the discussion. |
NIPS_2020_1316 | NIPS_2020 | My concerns are as follows. 1. The regret in [1] is defined on function value while this work defines it with the norm of gradient. It is better to provide the same measurement for a fair comparison. 2. The topic that reduces variance with importance sampling is not new. Besides vanilla SGD, more baselines with variance reduction or importance sampling should be included in experiments for demonstration, especially when the theoretical improvement is not significant. 3. It lacks the experiments for the main method in this work that the gradient is computed from a single example. The only experiment is for mini-batch estimator while the additional experiment for the extreme case is necessary for evaluating the effectiveness of the theorem. 4. Authors claim that the regret bound for the proposed mini-batch method is cast to appendix. However, I didn’t find the regret bound for the mini-batch estimator in the supplementary. [1] Zalan Borsos, Andreas Krause, and Kfir Y Levy. Online Variance Reduction for Stochastic Optimization. | 4. Authors claim that the regret bound for the proposed mini-batch method is cast to appendix. However, I didn’t find the regret bound for the mini-batch estimator in the supplementary. [1] Zalan Borsos, Andreas Krause, and Kfir Y Levy. Online Variance Reduction for Stochastic Optimization. |
NIPS_2019_1246 | NIPS_2019 | - The formatting of the paper seems to be off. - The paper could benefit from reorganization of the experimental section; e. g. introducing NLP experiments in the main body of the paper. - Since the paper is proposing a new interpretation of mixup training, it could benefit by extending the comparisons in Figure 2 by including [8] and model ensembles (e. g. [14]). Some suggestions on how the paper could be improved: * Paper formatting seems to be off - It does not follow the NeurIPS formatting style. The abstract font is too large and the bottom page margins seem to be altered. By fixing the paper style the authors should gain some space and the NLP experiments could be included in the main body of the paper. * It would be interesting to see plots similar to 2(h) and 2(i) for ERL and label smoothing. * Figure 2 - it would be beneficial to include comparisons to [8] and [14] in the plots. * Figure 3 - it would be beneficial to include comparisons to [8] and [14]. * Adding calibration plots (similar to Figure 2 (a:d) ) would make the paper stronger. * Section 3.4.1 - What is the intuition behind the change of trends when comparing the results in Figure 3(b) to 3(c)? * Adding calibration results for manifold mixup [24] would further improve the paper. * Figure 5: Caption is missing. The mixup-feats-only results could be included in the Figure 2 -- this would lead to additional space that would enable moving some content from the supplementary material to the main body. * Figure 2(j) is too small. * l101 - a dot is missing * l142 - The bar plots in the bottom row -> The bar plots in the middle row | * Paper formatting seems to be off - It does not follow the NeurIPS formatting style. The abstract font is too large and the bottom page margins seem to be altered. By fixing the paper style the authors should gain some space and the NLP experiments could be included in the main body of the paper. |
ICLR_2023_2869 | ICLR_2023 | Weakness:
1.The technical quality of this paper is not enough, and it seems like a direct combination with Evidential Theory and Reinforcement Learning.
2.The paper is not sound as there are many exploration methods in RL literature, such as count-based methods and intrinsic motivations(RND,ICM). But the paper does not discuss and compare these methods.
3.The theoretical analysis is not novel, as it is a direct result of RL theory.
4.The update rule of the critic network does not follow Double DQN, but follow the clipped double q-learning in the well known TD3 algorithm.
5.The paper does not provide a specification of the experimental setup. Did the authors bulid simulator? If not, how to evaluate the performance of each policy in the offline setting?
6.Why not compare SAC in Table 2 as SAC is compared in Figure 6?
7.How to verify that the performance improvement over pervious RL methods indeed comes from the evidential reward? As we can see, you choose some advanced techniques like Eq (11), and it is not deployed in previous baselines. | 2.The paper is not sound as there are many exploration methods in RL literature, such as count-based methods and intrinsic motivations(RND,ICM). But the paper does not discuss and compare these methods. |
ICLR_2021_977 | ICLR_2021 | Weakness:
Motivations behind its technical contributions can be further sharpened; comparisons to previous related studies on the inductive graph learning domain can be further improved
Some gaps between the current experiment setup and real-world recommendation senarios ##########################################################################
Detailed Comments:
I'll address the above potential weakness in details here.
I personally find a bit difficult to digest the motivations of this work and how it differentiated from previous inductive graph learning work until diving into its detailed parametrizations. Fig. 1 and its descriptions are helpful in terms of illustrating the inductive setting, but not quite informative in terms of concrete contributions of this work conceptually. My takeaway from the proposed framework is, the attentive pooling method falls into the aggregator family of inductive graph learning, despite that the aggregation and sampling scheme are performed on user side globally instead of on the user-item local neighborhoods. In this regard, it may also be helpful to highlight the (mathematical) difference between this work and existing inductive graph learning (e.g. pinSage) after eq.5/6.
Although the experimentations are executed in a good shape, there are still some gaps between the current setup and real-world recommendation requirements.
The proposed method is largely evaluated on the rating prediction setting, AUC is reported on the amazon dataset but no Top-K ranking metrics are performed during the experiments. It is acceptable given these metrics are consistent with the optimization objective, however, the notable gap between pointwise prediction setting and the real-world online top-K ranking setting needs to be called out.
Another concern about the current evaluation protocol is, it enforces the temporal dynamics on the user side and assumes item representations remains the same - again it is consistent with the proposed method (i.e., Q remains the same) thus expected to favor it. The question is, whether these assumptions are consistent with real-world senarios. As far as I know, both movieLens and Amazon datasets have associated timestamps, what the real temporal dynamics here and what would be the warm/cold item/user distribution look like if splitting data chronologically?
Minor Concerns: - Annotations in Figure 4 can be further enlarged for visibility | - Annotations in Figure 4 can be further enlarged for visibility |
ARR_2022_237_review | ARR_2022 | of the paper include: - The introduction of relation embeddings for relation extraction is not new, for example look at all Knowledge graph completion approaches that explicitly model relation embeddings or works on distantly supervised relation extraction. However, an interesting experiment would be to show the impact that such embeddings can have by comparing with a simple baseline that does not take advantage of those.
- Improvements are incremental across datasets, with the exception of WebNLG. Why mean and standard deviation are not shown for the test set of DocRED?
- It is not clear if the benefit of the method is just performance-wise. Could this particular alignment of entity and relation embeddings (that gives the most in performance) offer some interpretability? ( perhaps this could be shown with a t-SNE plot, i.e. check that their embeddings are close in space).
Comments/Suggestions: - Lines 26-27: Multiple entities typically exist in both sentences and documents and this is the case even for relation classification, not only document-level RE or joint entity and relation extraction.
- Lines 39-42: Point to figure 1 for this particular example.
- Lines 97-98: Rephrase the sentence "one that searches for ... objects" as it is currently confusing - Line 181, Equations 4: $H^s$, $E^s$, $E^o$, etc are never explained.
- Could you show ablations on EPO and SEO? You mention in the Appendix that the proposed method is able to solve all those cases but you don't show if your method is better than others.
- It would be interesting to also show how the method performs when different number of triples reside in the input sequence. Would the method help more sequences with more triples?
Questions: - Improvement still be observed with a better encoder, e.g. RoBERTa-base, instead of BERT?
- How many seeds did you use to report mean and stdev on the development set?
- For DocRED, did you consider the documents as an entire sentence? How do you deal with concepts (multiple entity mentions referring to the same entity)? This information is currently missing from the manuscript. | - Lines 26-27: Multiple entities typically exist in both sentences and documents and this is the case even for relation classification, not only document-level RE or joint entity and relation extraction. |
ICLR_2021_973 | ICLR_2021 | .
Clearly state your recommendation (accept or reject) with one or two key reasons for this choice. I recommend acceptance. The number of updates needed to learn realistic brain-like representations is a fair criticism of current models, and this paper demonstrates that this number can be greatly reduced, with moderate reduction in Brain-Score. I was surprised that it worked so well.
Ask questions you would like answered by the authors to help you clarify your understanding of the paper and provide the additional evidence you need to be confident in your assessment. - Is the third method (updating only down-sampling layers) meant to be biologically relevant? If so, can anything more specific be said about this, other than that different cortical layers learn at different rates? - Given that the brain does everything in parallel, why is the number of weight updates a better metric than the number of network updates?
Provide additional feedback with the aim to improve the paper. - Bottom of pg. 4: I think 37 bits / synapse (Zador, 2019) relates to specification of the target neuron rather than specification of the connection weight. So I’m not sure its obvious how this relates to the weight compression scheme. The target neurons are already fully specified in CORnet-S. - Pg. 5: “The training time reduction is less drastic than the parameter reduction because most gradients are still computed for early down-sampling layers (Discussion).” This seems not to have been revisited in the Discussion (which is fine, just delete “Discussion”). - Fig. 3: Did you experiment with just training the middle Conv layers (as opposed to upsample or downsample layers)? - Fig. 3: Why go to 0 trained parameters for downstream training, but minimum ~1M trained parameters for CT? - Fig. 4: On the color bar, presumably one of the labels should say “worse”. - Section B.1: How many Gaussian components were used, or how many parameters total? Or if different for each layer, what was the maximum across all layers? - Section B.3: I wasn’t clear on the numbers of parameters used in each approach. - D.1: How were CORnet-S clusters mapped to ResNet blocks? I thought different clusters were used in each layer. If not, maybe this could be highlighted in Section 4. | - Fig.4: On the color bar, presumably one of the labels should say “worse”. |
NIPS_2022_2635 | NIPS_2022 | Weakness: The writing of this paper is roughly good but could be further improved. For example, there are a few typos and mistakes in grammar: 1. Row 236 in Page 4, “…show its superiority.”: I think this sentence should be polished.
2. Row 495 in Supp. Page 15: “Hard” should be “hard”. 3. Row 757 in Supp. Page 29: “…training/validation/test” should be “…training/validation/test sets”. 4. Row 821 in Supp. Page 31: “Fig.7” should be “Fig.12”. Last but not least, each theorem and corollary appearing in the main paper should be attached to its corresponding proof link to make it easy for the reader to follow.
The primary concerns are motivation, methodology soundness, and experiment persuasion. I believe this is a qualified paper with good novelty, clear theoretical guarantees, and convincing empirical results. | 3. Row 757 in Supp. Page 29: “…training/validation/test” should be “…training/validation/test sets”. |
ARR_2022_201_review | ARR_2022 | I’m not convinced that AFiRe (the adversarial regularization) brings significant improvement, especially because - BLEU improvements are small (e.g., 27.93->28.64; would humans be able to identify the differences?)
- Hyperparameter details are missing.
- Human evaluation protocols, payment, etc. are all missing. Who are the raters? How are they "educated" and how do the authors ensure the raters provide good-faith annotations? What is the agreeement?
Other baselines are not compared against. For example, what if we just treat the explanation as a latent variable as in Zhou et al. (2021)? https://arxiv.org/pdf/2011.05268.pdf A few other points that are not fatal: - Gold-standard human explanation datasets are necessary, given the objective in line 307. - Does it mean that inference gets slowed down drastically, and there’s no way to only do inference (i.e., predict the label)? I don’t think this is fatal though. What’s the coefficient of the p(L, E | X) term in line 307? Why is it 1? Hyperparamter details are missing, so it’s not clear whether baselines are well-tuned, and whether ablation studies provide confident results. The writing is not careful, and often impedes understanding.
- Line 229: What’s t?
- Line 230: What’s n?
- Line 273: having X in the equation without defining it is a bit weird; should there be an expectation over X?
- Sometimes, the X is not bolded and not italicized (line 262). Sometimes, the X is not bolded but italicized (line 273). Sometimes, the X is bolded but not italicized (line 156). - Line 296: L and E should be defined in the immediate vicinity. Again, sometimes L, E are italicized (line 296) and sometimes not (line 302).
- Line 187: It’s best to treat Emb as a function. Having l’ and e’ as superscripts is confusing.
- In Table 4, why sometimes there are punctuations and sometimes there are no punctuations?
- Perplexity does not necessarily measure fluency. For example, an overly small perplexity may correspond to repeating common n-grams. But it’s okay to use it as a coarse approximation of fluency.
- Line 191: \cdot should be used instead of regular dot Section 2.1: It would be best to define the dimensionalities of everything.
- Line 182: A bit confusing what the superscript p means.
- Line 229: What’s t?
- Line 230: What’s n? - Line 255: Comma should not start the line. | - Does it mean that inference gets slowed down drastically, and there’s no way to only do inference (i.e., predict the label)? I don’t think this is fatal though. What’s the coefficient of the p(L, E | X) term in line 307? Why is it 1? Hyperparamter details are missing, so it’s not clear whether baselines are well-tuned, and whether ablation studies provide confident results. The writing is not careful, and often impedes understanding. |
NIPS_2021_40 | NIPS_2021 | /Questions:
I only have minor suggestions:
1.) In the discussion, it may be worth including a brief discussion on the empirical motivation for a time-varying Q ^ t and S t
, as opposed to a fixed one as in Section 4.2. For example, what is the effect on the volatility of α t
and also on the average lengths of the predictive intervals when we let Q ^ t and S t
vary with time?
2.) I found the definition of the quantile a little confusing, an extra pair of brackets around the term ( 1 | D | ∑ ( X r , Y r ) ∈ D 1 S ( X r , Y r ) ≤ s )
might help, or maybe defining the bracketed term separately if space allows.
3.) I think there are typos in Lines 93, 136, 181 (and maybe in the Appendix too): should it be Q ^ t ( 1 − α t ) instead? ##################################################################### Overall:
This is a very interesting extension to conformal prediction that no longer relies on exchangeability but is still general, which will hopefully lead to future work that guarantees coverage under weak assumptions. I believe the generality also makes this method useful in practice.
The authors have described the limitations of their theory, e.g. having a fixed Q ^
with time. | 2.) I found the definition of the quantile a little confusing, an extra pair of brackets around the term ( 1 | D | ∑ ( X r , Y r ) ∈ D 1 S ( X r , Y r ) ≤ s ) might help, or maybe defining the bracketed term separately if space allows. |
ACL_2017_433_review | ACL_2017 | - The annotation quality seems to be rather poor. They performed double annotation of 100 sentences and their inter-annotator agreement is just 75.72% in terms of LAS. This makes it hard to assess how reliable the estimate of the LAS of their model is, and the LAS of their model is in fact slightly higher than the inter-annotator agreement. UPDATE: Their rebuttal convincingly argued that the second annotator who just annotated the 100 examples to compute the IAA didn't follow the annotation guidelines for several common constructions. Once the second annotator fixed these issues, the IAA was reasonable, so I no longer consider this a real issue.
- General Discussion: I am a bit concerned about the apparently rather poor annotation quality of the data and how this might influence the results, but overall, I liked the paper a lot and I think this would be a good contribution to the conference.
- Questions for the authors: - Who annotated the sentences? You just mention that 100 sentences were annotated by one of the authors to compute inter=annotator agreement but you don't mention who annotated all the sentences.
- Why was the inter-annotator agreement so low? In which cases was there disagreement? Did you subsequently discuss and fix the sentences for which there was disagreement?
- Table A2: There seem to be a lot of discourse relations (almost as many as dobj relations) in your treebank. Is this just an artifact of the colloquial language or did you use "discourse" for things that are not considered "discourse" in other languages in UD?
- Table A3: Are all of these discourse particles or discourse + imported vocab? If the latter, perhaps put them in separate tables, and glosses would be helpful.
- Low-level comments: - It would have been interesting if you had compared your approach to the one by Martinez et al. (2017, https://arxiv.org/pdf/1701.03163.pdf). Perhaps you should mention this paper in the reference section.
- You use the word "grammar" in a slightly strange way. I think replacing "grammar" with syntactic constructions would make it clearer what you try to convey. ( e.g., line 90) - Line 291: I don't think this can be regarded as a variant of it-extraposition. But I agree with the analysis in Figure 2, so perhaps just get rid of this sentence.
- Line 152: I think the model by Dozat and Manning (2016) is no longer state-of-the art, so perhaps just replace it with "very high performing model" or something like that.
- It would be helpful if you provided glosses in Figure 2. | - Line 152: I think the model by Dozat and Manning (2016) is no longer state-of-the art, so perhaps just replace it with "very high performing model" or something like that. |
NIPS_2017_250 | NIPS_2017 | #ERROR! | 2. The proposed compression performs worse than PQ when a small code length is allowed, which is the main weakness of this method, in view of a practical side. |
NIPS_2020_1436 | NIPS_2020 | 1. For the principles for designed modules, this paper proposed three basic modules for interior image restoration, however, the interior structure of these modules is fixed, which is not so convinced to build these modules. In my opinion, an inner NAS strategy is necessary to search for a considerable structure for image restoration. Furthermore, as shown in Fig.1(c), the global flow is fixed, which consists of two parallel modules and the multiply combination of transition modules and cells. I wonder that the reason that not searching the global connections and demonstration of reasonability. Indeed, theses principled modules are common principles for other vision problems, such as image segmentation and optical flow estimation. Maybe other principled modules, such as attention modules and tasks-specific modules are vital to introduce. 2. From the experiment of this paper, some experiment settings are not very rational. As a significant contribution, the trade-off between model complexity and inference accuracy is not obviously shown when comparing with other methods for image denoising and deraining. In other words, I cannot find the final schemes for these tasks under the consideration of the trade-off between model complexity and inference accuracy. From the supplemented materials, the designed network is very deep. This paper should indicate the parameters of this paper and other networks. From the reproduction aspect, I cannot find how to search for a good structure for deraining. The various rain scenarios are also necessary to demonstrate your performance. Moreover, this paper aims to remove signal-dependent or -independent noises. Some experiments are missing, such as the performance on the real scenarios and RGB colorful images rather than deraining scenarios. 3. As a bilevel optimization method, it is not very clear the relationship between three losses and objective functions (upper and lower formulations). L_{arch} and L_{Comp} should be optimized in the upper functions. 4. Some subjective statements are inappropriate to introduce this paper. Some proofs and references are needed to demonstrate your statement. it is labor-intensive to seek an effective architecture, while the image recovery performance is sensitive to the choice of neural architecture. One more daunting task of multi-scale architecture design is unknown is that when to fuse the multi-scale feature. Besides these explicit multi-scale methods, the models with skip connections [10] could also be regarded as using multi-scale information in an implicit way. (The author should provide a detailed explanation to verify these statements.) | 4. Some subjective statements are inappropriate to introduce this paper. Some proofs and references are needed to demonstrate your statement. it is labor-intensive to seek an effective architecture, while the image recovery performance is sensitive to the choice of neural architecture. One more daunting task of multi-scale architecture design is unknown is that when to fuse the multi-scale feature. Besides these explicit multi-scale methods, the models with skip connections [10] could also be regarded as using multi-scale information in an implicit way. (The author should provide a detailed explanation to verify these statements.) |
NIPS_2022_2797 | NIPS_2022 | of this paper are 1) Why do sampled subgraphs (segments of the very large graph one wishes to learn) used in feature learning need to be similar in any way to the larger graph, the enormous discrepancy between their node/edge sizes notwithstanding, 2) what actual graph classification tasks did the computational experiments solve? and 3) How does the proposed method compare with prior art? | 3) How does the proposed method compare with prior art? |
ARR_2022_67_review | ARR_2022 | 1. Some claims in the paper lack enough groundings. For instance, in lines 246-249, "This difference in the composition of bias types explains why the bias score of BERT is higher in CrowS-Pairs, while the same is higher for SenseBERT in StereoSet." This claim will be justified if the authors can provide the specific bias scores and numbers of examples of each bias type, but I didn't find the corresponding part for analyzing this. Also, this paper mentions several times the intuition "occupations and not actions associated with those occupations are related to gender, hence can encode social biases" (lines 595-597). However, I don't really agree. Take "engineer" as an instance, in the Merriam-webster dictionary, the first meaning of "engineer" as a verb (https://www.merriam-webster.com/dictionary/engineer#:~:text=engineered%3B%20engineering%3B%20engineers,craft%20engineer%20a%20business%20deal) is "to lay out, construct, or manage as an engineer". I think it is very much biased towards the male gender as well, according to social conventions.
2. Some analyses can be more detailed. For example, in "language/nationality", the data includes Japanese, Chinese, English, Arabic, German... (~20 different types). Biases towards different languages/nationalities are different. I was wondering whether there would be some interesting observations comparing them.
3. The definition of "bias" is debatable. In SSSB, a language that is "difficult to learn/understand/write" is considered to be stereotypical, and "easy to learn/understand/write" is anti-stereotypical. In daily conversations, I think it is not widely considered as "biased". Also, I don't really understand the meaning of "<xxx language is hash>". Do you mean "harsh" by "hash"? If so, I think the conclusions derived from this dataset are less trustworthy.
4. Writing can be improved. For example, even though I read very carefully, I am not sure I fully follow the method in lines 203-210. It will be nice if you can provide some examples for s(_i) and a(_j) 5. A question: why would you use Equation 7 to derive word embeddings? From your results, I assume the sense embeddings are not normalized. This will bring an issue: the embedding will be dominated by the sense with a larger length. To make the experiments more rigorous, I think it would be nice to also use pre-trained static word embeddings (e.g. skip-gram) and normalize the embedding.
1. I think it would be more clear if you can introduce sense embeddings a bit before introducing the bias measuring procedures.
2. A number of typos exist. Mostly, it doesn't influence the reading. However, sometimes it affects my understanding of the paper (e.g., again "occupations and not actions associated with those occupations are related to gender" (lines 595-596, and -> but, if I understand it correctly)). Another round of proofreading may be needed. | 2. Some analyses can be more detailed. For example, in "language/nationality", the data includes Japanese, Chinese, English, Arabic, German... (~20 different types). Biases towards different languages/nationalities are different. I was wondering whether there would be some interesting observations comparing them. |
ICLR_2023_4236 | ICLR_2023 | Weakness: 1. Though I may be wrong, I don’t think DefRCN uses FPN. As such, if the author uses DefRCN as baseline, the author should ensure the implementation details for fair comparison. 2. I still cannot fully understand why the norms can be used to represent different features. If IoU is the only reason, one necessary comparison is to only generate features with low (or high) norm and then check the performance for better illustration. 3. A few visualization between feature norm and IoU should be provided. 4. Besides norm, is there any other property of features can be used? It is necessary and helpful for your approach design. 5. What is the semantic embedding? I didn’t find the detailed explanation in the implementation details. 6. Whether this approach can be generalized to TFA? After all, DefRCN has too much variants compared with TFA. As such, performance on TFA is necessary. | 4. Besides norm, is there any other property of features can be used? It is necessary and helpful for your approach design. |
NIPS_2020_566 | NIPS_2020 | There are several important points that need to be addressed in the paper. First, there is a non-uniform level of detail and technicality through out the paper. The authors start by trying to be very formal, and specifying that functions come from "separable Banach spaces", but quickly drop this rigor and start being vague mentioning "appropriate function-space" and "well-posed", and in the end, the actual method that they propose does not need these details, and is often explained using concepts, like "graphs", that were never formally defined. I suggest using the following heuristic: If a concept is not needed to explain how you get to eq. (12) and (13), then do not mention it at all. This will save you space that you should then use to explain what actually matters in more detail. Second, although I appreciate the comparison with other NN-based methods, it would be important for the authors to illustrate with an example how traditional methods perform. I do understand that traditional methods require recalculations for each new parameter $a$. Nonetheless, the numerical examples studied are not too complex, and I suspect that standard PDE solvers could solve them really fast. It would be good to report these time-to-solution numbers, as well as the accuracy of these classical solvers. This is also relevant for the numerical section, for one to understand how exactly the training set was generated. I assume you used some classical solver to find the training examples, yes ? Also, what is the training time? Sorry, didn't check the Appendix (as a reviewer I don't have to), maybe it is there?! In any case, a few of these numbers should be in the main text. Are you familiar with the Matlab package Chebfun? It can use Chebyshev interpolation (instead of grid-type discretizations like you do at each level) to solve PDEs very accurately, and it is really easy to use. I would be very interested in seeing how well you do in comparison with these type of tools. They also allow the computation pre-compuation of inverse operators, so, with some tricks, it could even be that as you change $a$, you do not need to recompute much stuff. Third, some of the explanations need to be clarified. I mention a few now. 1) Fig. 1 is pretty but it is not very informative. It is too generic, and it is incomprehensible at the point where it is first mentioned. I suggest that you move it to later, to where you explain your V-cycle, Section 3.2, and then also had \hat{v} and \invertedhat{v} to the different parts of the diagram. Also, I recommend not pointing the arrows inwards as we get deeper, since it is misleading. For example, in Fig. 1, in the last layer K23, K_33, and K_32 are in the same graph, while in the upper levels they are not. The whole diagram should just be a stacking of "squares", like below (hope it doesn't get distorted, since it took a while to draw). O----->---------O | | V ^ | | O----->---------O | | V ^ | | O----->---------O | | V ^ | | O----->---------O 2) The authors mention graphs several times in the paper but never really define formally what they mean by it. I am convinced that the whole paper can be written without referring to graphs ever, just sum and composition of sparse operators. However, if the authors do want to mention graphs, you should formally relate the adjacency matrix of these graphs with the sparsity patters of the kernel matrices. The same goes regarding the mentioning of GNNs. I believe the authors can avoid mentioned them at all, and just present the architerure of their NN as is. However, if you do want to mention GNNs, you need to give full details of how your method translates into a GNN, not just say that it does. E.g. from (12) and (13), it is not clear which graph you're using in the GNN. It could either be the metagraph in Fig.1, which I re-expressed above, or a fully detailed graph where, in each level, you express K as a graph operation as well. Please use a latex Definition environment. 3) The authors need to clarify what they mean by "complexity" in different parts of the paper. Is it the complexity to compute K? Is it the complexity to compute K v? Is it the complexity to compute the full map? Also, the authors need to give more detail on how these complexities are computed when more, or less, "tricks" are used. For example, at some point the authors mention that the domain of integration in eq. (3) is truncated to a ball. This corresponds to sparsifying K before doing K v. Is this truncation happening at every level of the the downward and upward pass? I.e. do we use this truncation in all matrix multiplications in eq. (12) and eq. (13)? Also, in addition to this truncation, you also use a Nyström approximation, which corresponds writing K as a product. How exactly does this affect computation time? When explaining these things, always compare what would happen if, e.g. I computed eq. (8) without any "tricks", and if I computed eq. (8) using sparsifying "tricks". Be specific, be rigors. Here it matters, when mentioning Banach spaces it doesn't. 4) At some point there is a disconnect between the flow of ideas that are guiding the development of the heuristic. You start with writing u(x) as solution for eq. (2), fine. Then you say that the r.h.s. of eq. (2) can be approximated by the action of a kernel as in eq. (3). Here I already see a problem as if u(x) = 0 in the r.h.s. of eq. (2) I still get something on the l.h.s. of eq. (2), but if u(x) = 0 in the r.h.s. of eq. (3) I get also zero on the l.h.s. of eq. (3). Then you seem to try to compensate the difference between eq. (3) and eq. (2) by introducing a matrix W. However, I still do not see how W can capture the term with f(x) in eq. (2). I also do not understand why the effect of W cannot be captured by \Kappa_a. Then you extend the dimension of the problem and you introduce activation functions. At this point, since you're learning \Kappa_a and W, you no longer are using any knowledge of the fact that you started from eq. (2). You could have started with eq. (4) directly, and things would still make sense. What's the point of the journey then? Then you explain in eq. (5) how the discretization of the domain affects calculation of eq. (4). Then you introduce the decomposition in eq. (8) as a matrix decomposition, and then clarify that these are actually tensors. Finally, you do not use eq. (8) at all as it is, but you stick an activation function in between every matrix multiplication/sum and offer eq (12) and (13) as your actual algorithm. Here again I see no reason why we need W, as this can be absorbed into K_ll. While this story is going you, you also mention here and there some truncations via B(x,r), graphs, domain discretization, and Nyström sampling, without never being very formal about these things. 5) Many solvers' algorithms are able to guarantee that some nice mathematical properties are kept. For example, that we do not lose mass, or charge, when solving a physics-related continuous PDEs via methods that are inherently discrete. They use symplectic integrators, etc. How does learning F^\dagger behave in this regard? Is it possible to do the training such that some nice conservation properties are kept? Can you illustrate, at least numerically, how well, or not, we conserve certain properties in e.g. Hamiltonian systems? | 5) Many solvers' algorithms are able to guarantee that some nice mathematical properties are kept. For example, that we do not lose mass, or charge, when solving a physics-related continuous PDEs via methods that are inherently discrete. They use symplectic integrators, etc. How does learning F^\dagger behave in this regard? Is it possible to do the training such that some nice conservation properties are kept? Can you illustrate, at least numerically, how well, or not, we conserve certain properties in e.g. Hamiltonian systems? |
ICLR_2022_331 | ICLR_2022 | .) Weaknesses:
W2: The method is mostly constructed on top of previous methods; there are no network changes or losses. There is a contribution in the signed distance function and a pipeline for transferable implicit displacement fields. Why are we using two SIRENs for f and d? Shouldn't the d be a simpler network?
W3: Considering the experimental result. I feel that the method will fail with noise because of things like the need for normal to the points. This is a very relevant fact, and it is not described in the document. There are small tests with noise in the appendices, but the level of noise added is almost 0. | .) Weaknesses:W2: The method is mostly constructed on top of previous methods; there are no network changes or losses. There is a contribution in the signed distance function and a pipeline for transferable implicit displacement fields. Why are we using two SIRENs for f and d? Shouldn't the d be a simpler network? |
bt9Ho2FMxd | EMNLP_2023 | 1) The RQ1 mentioned in the paper seems redundant. This adds no extra information for the audience. It is expected the performance will vary across multiple HS datasets when evaluated in cross-data setting. Another interesting point to analyse would've been how % of explicit hate information in the dataset affects implicit hate speech detection performance and vice-versa and it's corresponding effect on RQ2 & RQ3 t-sne plots. (Reference - https://aclanthology.org/2023.findings-eacl.9/)
2) Again it is only obvious / intuitive that employing a contrastive learning strategy would bring together the implicit and explicit hate embeddings. What would be interesting to understand is that how can these correlations be leveraged to improve the downstream classification performance ?
In it's current form the paper lacks enough significant learnings / contribution to be accepted. Incorporating the above mentioned suggestions should be sufficient | 1) The RQ1 mentioned in the paper seems redundant. This adds no extra information for the audience. It is expected the performance will vary across multiple HS datasets when evaluated in cross-data setting. Another interesting point to analyse would've been how % of explicit hate information in the dataset affects implicit hate speech detection performance and vice-versa and it's corresponding effect on RQ2 & RQ3 t-sne plots. (Reference - https://aclanthology.org/2023.findings-eacl.9/) |
ICLR_2022_2834 | ICLR_2022 | are concluded as follows:
Strengths: 1. The proposed method is novel using Fourier Transformation to measure the sample uncertainty in the limited supervision. 2. Most of the paper is easy to follow in terms of writing. 3. The experiments are comprehensively designed in three datasets and three tasks. Plus, the parameter sensitivity analysis of a l p h a
in Eq. 6, m u
in Eq. 5 and start epoch is compared.
Weaknesses: 1. Some assumptions should be justified or analyzed. For example, “the instantaneous probabilities often provide inaccurate estimation to uncertainty” in P.2 of Sec.1 is a very strong assumption and cannot be true. Besides, the four examples shown in the Figure 1 are specific cases and it is better to give statistics about the numbers of the true cases and false cases in a general case, such as average probability sequence of all training samples in a small dataset. 2. Some introduction of the previous works need suitable citations, such as “consistency-based approaches” in the P.3 of Sec. 1 and “one-bit” supervision in the P.3 of Sec. 1. 3. Some details need more clarification. For example, the Eq. 1 should have “q” in the first item or write the two items separately, as only showing the “q” in the second item misleads that the “q” will not help the training of “f”. Besides, “the iterations of stochastic gradient descent equal to a stochastic process ” in the paragraph between Eq. 2 and Eq. 3 is confusing by what “stochastic process” is. Moreover, the “predictive distribution” in the paragraph below Eq.3 is not clear in terms of the owner of distribution. Moreover, the “more accurate” has no comparison in the paragraph under Eq. 5. Plus, it is better to clarify the “consistency-based methods”. Though the work has its explanation the P.1 of Sec. 3.4, it is still unclear about the concept. Finally, the “By checking the ground-truth” … is unclear to me in the last paragraph of Sec. 3. 4. There are little typos, such as “we we” in the Sec. 3.3, “be be” in the Sec. 5. 5. It is better to answer how to choose a range of converged epochs and how to sample the some epochs from the range. | 2. Most of the paper is easy to follow in terms of writing. |
ICLR_2022_537 | ICLR_2022 | 1. The stability definition needs better justified, as the left side can be arbitrarily small under some construction of \tilde{g}. A more reasonable treatment is to make it also lower bounded. 2. It is expected to see a variety of tasks beyond link predict where PE is important. | 2. It is expected to see a variety of tasks beyond link predict where PE is important. |
ICLR_2021_1783 | ICLR_2021 | 1. The main contribution of this paper is introducing adversarial learning process between the generator and the ranker. The innovation of this paper is concerned. 2. Quality of generated images by proposed method is limited. While good continuous control is achieved, the realism of generated results showed in paper and supplemental material is limited. 3. Visual comparisons and ablation study are insufficient.
Comments/Questions: 1. Could you elaborate more on why proposed method achieves better fine-grained control over the interested attribute? Was it crucial to change the formular of ranker’s loss function from classification to regression? 2. Could you provide more visual comparisons between the proposed method and prior works? 3. There are also some other works focusing on the semantic face editing and they show the ability to achieve continuous control over different attributes, like [1]. Could you elaborate the difference between your work and these papers? 4. Statements in Section 4.2 are somewhat redundant.
Minor: 1. Missing proper expression for the third face image in Figure 2. 2. Missing close parenthesis at the bottom of Page 4. 3. Inconsistent statement and reference for Celeb Faces Attributes Dataset in experiment section.
[1] Shen, Yujun and Gu, Jinjin and Tang, Xiaoou and Zhou, Bolei. “Interpreting the Latent Space of GANs for Semantic Face Editing”, In CVPR, 2020. https://dblp.org/rec/conf/cvpr/ShenGTZ20 | 3. There are also some other works focusing on the semantic face editing and they show the ability to achieve continuous control over different attributes, like [1]. Could you elaborate the difference between your work and these papers? |
NIPS_2020_686 | NIPS_2020 | - The objective function (1): I have two concerns about the definition of this objective: 1. If the intuitive goal consists of finding a set of policies that contains an optimal policy for every test MDP in S_{test}, I would rather evaluate the quality of \overline{\Pi} with the performance in the worst MDP. In other words, I would have employed the \min over S_{test} rather than the summation. With the summation we might select a subset of policies that are very good for the majority of the MDPs in S_{test} but very bad of the remaining ones and this phenomenon would be hidden by the summation but highlighted by the \min. 2. If no conditions on the complexity of \overline{\Pi} are enforced the optimum of (1) would be exactly \Pi, or, at least, the largest subset allowed. - Latent-Conditioned Policies: How is this different from considering a hyperpolicy that is used to sample the parameters of the policy, Like in Parameter-based Exploration Policy Gradients (PGPE)? Sehnke, Frank, et al. "Policy gradients with parameter-based exploration for control." International Conference on Artificial Neural Networks. Springer, Berlin, Heidelberg, 2008. - Choice of the latent variable distribution: at line 139 the authors say that p(Z) is chosen to be the uniform distribution, while at line 150 p(Z) is a categorical distribution. Which one is actually used in the algorithm? Is there a justification for choosing one distribution rather than another? Can the authors motivate? **Minor*** - line 64: remove comma after a_i - line 66: missing expectation around the summation - line 138: what is H(Z)? - line 322: don’t -> do not - Equation (2): at this point the policy becomes a parametric function of \theta. Moreover, the dependence on s when using the policy as an argument for R_{\mathcal{M}} should be removed - Figure 3: the labels on the axis are way too small - Font size of the captions should be the same as the text | 1. If the intuitive goal consists of finding a set of policies that contains an optimal policy for every test MDP in S_{test}, I would rather evaluate the quality of \overline{\Pi} with the performance in the worst MDP. In other words, I would have employed the \min over S_{test} rather than the summation. With the summation we might select a subset of policies that are very good for the majority of the MDPs in S_{test} but very bad of the remaining ones and this phenomenon would be hidden by the summation but highlighted by the \min. |
ARR_2022_247_review | ARR_2022 | - The authors should more explicitly discuss other work/data that addresses multi-intent sentences. Footnote 6 discusses work on multi-intent identification on ATIS/MultiWOZ/DSTC4 and synthetically generated multi-intent data (MixATIS and MixSNIPS), but this is not discussed in detail in the main text. - Additionally, footnotes are used FAR too extensively in this paper -- it's actually very distracting. Much of the content is actually important and should be moved into the main body of the paper! Details around parameter settings etc. can be moved into the appendix to make space (e.g., L468).
- Some of the intents do not really confirm to standard definitions of an intent, e.g., "card" (Fig 1). This does not actually describe the "intent" behind the utterance, which might traditionally be something like "confirm_arrival". " Card" in this case could be considered more like a slot and maintain a similar level of genericness. On the other hand, intents such as "less_lower_before" may be overloaded. While it makes sense to try to make slots more generic so they can be reused across new domains, the authors can more explicitly articulate their reasoning behind overloading/over-specifying intents.
- The ontology definition and annotation scheme itself is glossed over in this paper, although it is a major contribution. The authors should help quantify the effort required and comment on the feasibility of scaling their high-quality annotation to other domains.
Comments: - The paper in general is very dense (and thus difficult to get through in parts). The authors frequently include numbered lists in the paragraphs that might be easier to read as actual lists instead of in paragraph form (where appropriate).
- 163: This statement is unsupported "First, the models went back to focusing on single-turn utterances, which..." - Footnote 6: As described in the weaknesses section, the authors should more explicitly describe these works and provide examples of how their work aims to improve on them.
- 196: Need more description here -- many parts of the proposed NLU++ ontologies are also highly domain specific (e.g., intents like "spa" and "card").
- Table 4: Should include other attempts at multi-intent datasets here (DSTC4, MixATIS, etc.).
- Table 8: Some of the "description-questions" shown are ungrammatically, e.g., "is the intent to ask about some refund?", or "is the intent to ask something related to gym?"
- Could the annotation scheme be easily scaled up to more domains? How much effort would be involved in ontology definition and annotation?
Typos: - 166: Space after footnote 5.
- 340 (and later): "Data/Domain Setups" -> "Setups" could either be "Setup", or "Settings"/"Configurations"? | - Additionally, footnotes are used FAR too extensively in this paper -- it's actually very distracting. Much of the content is actually important and should be moved into the main body of the paper! Details around parameter settings etc. can be moved into the appendix to make space (e.g., L468). |
UQpbq4v8Xi | EMNLP_2023 | There isn't a huge amount of novelty here. The main contribution, as far as I can tell, is the exploration of the capabilities of an off-the-shelf LLM for data generation. The greatest performance is gained from the inclusion of domain-specific knowledge and few-shot demonstrations to the prompt, neither of which are "engineered" by the authors; the natural language instructions, which are, appear to have the least impact on the outcome of the generator. The inclusion of agreement-based verification is interesting, but is also a fairly obvious way to validate outputs given the structured domains they are applied to.
Missing is a deeper look into how SymGen might be scaled to new domains. I assume you'd need: 1) Some domain-specific symbolic knowledge (how much? not necessarily a given for low-resource settings); 2) If applicable, a way to execute and validate generations (probably available, but not necessarily); 3) A set of few-shot demonstrations to draw from (possible to obtain, with the help of domain experts). A discussion about this would have been appreciated.
While most of the experiments are interesting and relevant, I find the inclusion of zero-shot generation results a bit strange here. I suppose this might satisfy general curiosity about the capabilities of the LLM in this setting, but it: 1) Clearly only works for symbolic domains that form a reasonable part of its training corpus; 2) This isn't a realistic setting since I assume getting 10 examples for a domain isn't that hard. A more interesting experiment may have been to explore the characteristics of the 10 examples and how that impacts final quality.
A more minor gripe I have is that the authors use the Appendix as though it were part of the paper and refer to results in the main body of the paper. The Appendix should only be used as supplementary material, and a reviewer should be able to arrive at a fair assessment of the paper without needing to actually refer to the Appendix. Otherwise what's the 8-page limit for anyway? | 3) A set of few-shot demonstrations to draw from (possible to obtain, with the help of domain experts). A discussion about this would have been appreciated. While most of the experiments are interesting and relevant, I find the inclusion of zero-shot generation results a bit strange here. I suppose this might satisfy general curiosity about the capabilities of the LLM in this setting, but it: |
NIPS_2016_192 | NIPS_2016 | Weakness: (e.g., why I am recommending poster, and not oral) - Impact: This paper makes it easier to train models using learning to search, but it doesn't really advance state-of-the-art in terms of the kind of models we can build. - Impact: This paper could be improved by explicitly showing the settings for the various knobs of this algorithm to mimic prior work: Dagger, searn, etc...it would help the community by providing a single review of the various advances in this area. - (Minor issue) What's up with Figure 3? "OAA" is never referenced in the body text. It looks like there's more content in the appendix that is missing here, or the caption is out of date. | - (Minor issue) What's up with Figure 3? "OAA" is never referenced in the body text. It looks like there's more content in the appendix that is missing here, or the caption is out of date. |
NIPS_2020_1309 | NIPS_2020 | 1. The authors must be more clear in the introduction that the proposed solution is a "fix" of [12], rather than a new PIC approach, as introduced in lines 29-30 by saying: "... This paper presents a framework which solves instance discrimination by direct parametric instance classification (PIC)". This framework has been already proposed by [12] and the authors must mention it. 2. It is not clear to me why exactly the sliding-window data sampler improves training. My understanding is that with the sliding-window sampler, an instance is repeatedly visited several (something like B/S) times in a row, and then not visited for a very long time (something like B * N / S). This means that in the expectation, a single instance class is visited as often as it would have been visited with epoch-based training. Does this mean that the improvement in training comes only from being able to "learn well" a single instance class, before moving to another one? How about the opopsit effect like forgetting this instance class [1*], since the network does not see this instance class for a much longer period after it has been repeatedly visited? The paper is lacking a clear explanation of this phenomena and hence the sliding window sampling is not well motivated. 3. While it is nice to have Section 5, the feature visualization technique used there is not limited to models with parametric classifiers. Therefore, it would be much more valuable if we could see a comparison of visualizations and statistics (Figure 3) with other methods such as MoCo and SimCLR. Otherwise, simply stating the facts only for PIC without any comparisons is not very informative. [1*] - Toneva et. al "AN EMPIRICAL STUDY OF EXAMPLE FORGETTING DURING DEEP NEURAL NETWORK LEARNING" | 1. The authors must be more clear in the introduction that the proposed solution is a "fix" of [12], rather than a new PIC approach, as introduced in lines 29-30 by saying: "... This paper presents a framework which solves instance discrimination by direct parametric instance classification (PIC)". This framework has been already proposed by [12] and the authors must mention it. |
NIPS_2018_865 | NIPS_2018 | weakness of this paper are listed: 1) The proposed method is very similar to Squeeze-and-Excitation Networks [1], but there is no comparison to the related work quantitatively. 2) There is only the results on image classification task. However, one of success for deep learning is that it allows people leverage pretrained representation. To show the effectiveness of this approach that learns better representation, more tasks are needed, such as semantic segmentation. Especially, the key idea of this method is on the context propagation, and context information plays an important role in semantic segmentation, and thus it is important to know. 3) GS module is used to propagate the context information over different spatial locations. Is the effective receptive field improved, which can be computed from [2]? It is interesting to know how the effective receptive field changed after applying GS module. 4) The analysis from line 128 to 149 is not convincing enough. From the histogram as shown in Fig 3, the GS-P-50 model has smaller class selectivity score, which means GS-P-50 shares more features and ResNet-50 learns more class specific features. And authors hypothesize that additional context may allow the network to reduce its dependency. What is the reason such an observation can indicate GS-P-50 learns better representation? Reference: [1] J. Hu, L. Shen and G. Sun, Squeeze-and-Excitation Networks, CVPR, 2018. [2] W. Luo et al., Understanding the Effective Receptive Field in Deep Convolutional Neural Networks, NIPS, 2016. | 3) GS module is used to propagate the context information over different spatial locations. Is the effective receptive field improved, which can be computed from [2]? It is interesting to know how the effective receptive field changed after applying GS module. |
ICLR_2022_2370 | ICLR_2022 | The text could use more clarity when it comes to the methods. For example, to figure out the RL part of the model, I had to explore Figure 2 instead of reading the related portions of the text. Moving the first part of the Experiments section up in the text, renaming it to Methods, and appending it with details may help.
In the adversarial RL (AIRL) part of the algorithm, it is unclear whether the authors use LSTM in their network (from the writing it follows that they don’t). Conditioning the reward function on the latest chord only will prevent the model from encouraging long-term dependencies crucial for human-produced music. I do hope, however, that it’s a typo and the authors did include their LSTM in AIRL. If there is no LSTM in AIRL, adding it there will be an easy way to improve the model’s performance. Please clarify.
The authors present the samples of music generated by various algorithms, but not the baseline human-generated pieces which were the part of the comparison reported in the paper. While the music generated by the authors' pipeline sounds better than that produced by the baselines, it still sounds artificial and seems to lack temporal structure. In that light, it is unclear how it may have the same preference score as human-generated music. Perhaps, if the authors upload some samples of the music from their training set, the results of the subjective comparison will become more transparent.
Specific comments
Please clarify whether the “Q-network” and the “Target Q-network” have the same weights and are updated simultaneously (that is, their weights are always the same, but activations may differ). Please specify it in the text where the two networks are first mentioned.
Please clarify whether LSTMs are used in AIRL and, if there are no LSTMs in AIRL, please comment on the (in)consistency of the state definitions in AIRL and DQN and the resulting (un)transferability of the reward function.
Please provide a comprehensive description of AIRL in section 4.1., e.g. describe the exact relation between the discriminator network and the reward function. Please also provide the equations for that.
Please reformat the volunteers’ preference chart a 4x4 matrix/heatmap indicating, for every pair of the algorithms, which one was preferred and in what fraction of cases. Otherwise, in the current format, it's hard to interpret.
Suggestions to the authors
If there’s no LSTM in AIRL, please consider adding it for the following reasons: 1) If the reward function, learned by the discriminator, is based on the last chord only, it does not reflect long-term dependencies, normally observed in music. Music can hardly be viewed as an MDP and not accounting for long-term dependencies would prevent the models from learning to generate human-like music. 2) In the RL part of the model, the state (as in: the latest chord) is passed through an LSTM, so the real state, for which the Q-function is computed, accounts for long-range dependencies in time and is not the same with the state in AIRL, for which the reward function has been learned.
It is unclear whether there is any utility in pre-training the LSTM at all. Having an LSTM is critical for long-term dependencies in music, so it’s great that the authors have it in their model, but the objectives are different in pre-training (where the LSTM is trained to produce actions) and in finetuning (training to produce Q-values). I guess that the model would work just fine with no pre-training. If so, it will simplify the model. If not, the direct comparison of pre-trained vs. not pre-trained LSTMs would better substantiate the design choices.
Following up on the previous point, it seems to make more sense to apply the actor-critic framework here. Soft Actor-Critic (SAC), offering continuous control required by the authors, seems to be a good candidate algorithm here. It may solve a few problems at once: 1) The objective for the LSTM part would be the same for pre-training and finetuning (as in: the probabilities of the actions); in the finetuning stage, the authors may simply add another head to the network computing the value functions for the states. 2) It will solve an issue raised by the authors that all the Q-values are similar and the model tends to simply produce as many notes in a chord as possible – for which the authors have introduced a workaround. In SAC, the action probabilities are separate from the state values, and the aforementioned issue does not emerge. With that said, I understand that the authors have limited time now, so this comment may be treated as a suggestion for future directions. | 1) The objective for the LSTM part would be the same for pre-training and finetuning (as in: the probabilities of the actions); in the finetuning stage, the authors may simply add another head to the network computing the value functions for the states. |
1OGhJCGdcP | ICLR_2025 | * The proposed method does not function as a subgoal representation learning approach but rather predicts state affinity.
* The paper lacks strong positioning within the subgoal representation learning literature. It cites only one relevant work and does not provide adequate motivation or comparison with existing methods in this area.
* The method (G4RL) shares significant similarities with HRAC, raising several concerns: 1. G4RL constructs graphs by hard-thresholding distances in state feature space, while HRAC uses K-step affinity along trajectories. As a result, G4RL is both feature- and hyperparameter-dependent, introducing limitations. 2. HRAC applies a contrastive loss to ensure that the learned subgoal space adheres to a K-step adjacency constraint while preventing subgoals from being too close. How does G4RL regularize representation learning in the latent space? 3. What is the rationale behind combining G4RL with HRAC (i.e., HRAC-G4RL)? Does G4RL require HRAC's regularization in the latent space?
* The evaluation is limited in several respects: 1. The method is only tested on the AntMaze and AntGather tasks. 2. It is only compared to two pre-2020 methods, HIRO and HRAC, without including more recent subgoal representation learning methods such as LESSON, HESS, and HLPS.
* There is insufficient analysis of the method's sensitivity to hyperparameters, such as how \epsilon depends on the environment and state space features. | 3. What is the rationale behind combining G4RL with HRAC (i.e., HRAC-G4RL)? Does G4RL require HRAC's regularization in the latent space? |
ARR_2022_8_review | ARR_2022 | 1) If I understand correctly there is a need to know the word and phoneme segment boundaries for this task. This is a pretty strong assumption and can be unreliable for many languages. The experimentation done by the authors use both ground truth and provided segmentation which I think is good to show that the technique works even with a segmental model. But the authors should rephrase term "mild assumption".
2) Details about the model training and dataset is missing. Which will make this work accessible to a smaller set of research community. It would be great if the authors can provide code or additional details about the model.
1) Regarding the related works -- "there is a long line of work that use supervised, multilingual systems" -- it would be good to acknowledge some of the older works too.
2) Following up on that, there are works that recognize articulatory features, or directly predict phones -- mentioning some of those works would also be useful.
3) For the results in 5b, it would be good to add some models from the above work for comparison. As different communities would be interested in different aspects of this paper.
4) There is a recent work on unsupervised speech recognition at Neurips 2021 (https://arxiv.org/pdf/2105.11084.pdf) which does something similar but without the need for segmental acoustic models. It would be good to make a contrast or have a discussion about that for the readers to have a better understanding. | 1) Regarding the related works -- "there is a long line of work that use supervised, multilingual systems" -- it would be good to acknowledge some of the older works too. |
NIPS_2021_537 | NIPS_2021 | Weakness: The main weakness of the approach is the lack of novelty. 1. The key contribution of the paper is to propose a framework which gradually fits the high-performing sub-space in the NAS search space using a set of weak predictors rather than fitting the whole space using one strong predictor. However, this high-level idea, though not explicitly highlighted, has been adopted in almost all query-based NAS approaches where the promising architectures are predicted and selected at each iteration and used to update the predictor model for next iteration. As the authors acknowledged in Section 2.3, their approach is exactly a simplified version of BO which has been extensively used for NAS [1,2,3,4]. However, unlike BO, the predictor doesn’t output uncertainty and thus the authors use a heuristic to trade-off exploitation and exploration rather than using more principled acquisition functions.
2. If we look at the specific components of the approach, they are not novel as well. The weak predictor used are MLP, Regression Tree or Random Forest, all of which have been used for NAS performance prediction before [2,3,7]. The sampling strategy is similar to epsilon-greedy and exactly the same as that in BRP-NAS[5]. In fact the results of the proposed WeakNAS is almost the same as BRP-NAS as shown in Table 2 in Appendix C. 3. Given the strong empirical results of the proposed method, a potentially more novel and interesting contribution would be to find out through theorical analyses or extensive experiments the reasons why simple greedy selection approach outperforms more principled acquisition functions (if that’s true) on NAS and why deterministic MLP predictors, which is often overconfident when extrapolate, outperform more robust probabilistic predictors like GPs, deep ensemble or Bayesian neural networks. However, such rigorous analyses are missing in the paper.
Detailed Comments: 1. The authors conduct some ablation studies in Section 3.2. However, a more important ablation would be to modify the proposed predictor model to get some uncertainty (by deep-ensemble or add a BLR final output layer) and then use BO acquisition functions (e.g. EI) to do the sampling. The proposed greedy sampling strategy works because the search space for NAS-Bench-201 and 101 are relatively small and as demonstrated in [6], local search even gives the SOTA performance on these benchmark search spaces. For a more realistic search space like NAS-Bench-301[7], the greedy sampling strategy which lacks a principled exploitation-exploration trade-off might not work well. 2. Following the above comment, I’ll suggest the authors to evaluate their methods on NAS-Bench-301 and compare with more recent BO methods like BANANAS[2] and NAS-BOWL[4] or predictor-based method like BRP-NAS [5] which is almost the same as the proposed approach. I’m aware that the authors have compared to BONAS and shows better performance. However, BONAS uses a different surrogate which might be worse than the options proposed in this paper. More importantly, BONAS use weight-sharing to evaluate architectures queried which may significantly underestimate the true architecture performance. This trades off its performance for time efficiency. 3. For results on open-domain search, the authors perform search based on a pre-trained super-net. Thus, the good final performance of WeakNAS on MobileNet space and NASNet space might be due to the use of a good/well-trained supernet; as shown in Table 6, OFA with evalutinary algorithm can give near top performance already. More importantly, if a super-net has been well-trained and is good, the cost of finding the good subnetwork from it is rather low as each query via weight-sharing is super cheap. Thus, the cost gain in query efficiency by WeakNAS on these open-domain experiments is rather insignificant. The query efficiency improvement is likely due to the use of a predictor to guide the subnetwork selection in contrast to the naïve model-free selection methods like evolutionary algorithm or random search. A more convincing result would be to perform the proposed method on DARTS space (I acknowledge that doing it on ImageNet would be too expensive) without using the supernet (i.e. evaluate the sampled architectures from scratch) and compare its performance with BANANAS[2] or NAS-BOWL[4]. 4. If the advantage of the proposed method is query-efficiency, I’d love to see Table 2, 3 (at least the BO baselines) in plots like Fig. 4 and 5, which help better visualise the faster convergence of the proposed method. 5. Some intuitions are provided in the paper on what I commented in Point 3 in Weakness above. However, more thorough experiments or theoretical justifications are needed to convince potential users to use the proposed heuristic (a simplified version of BO) rather than the original BO for NAS. 6. I might misunderstand something here but the results in Table 3 seem to contradicts with the results in Table 4. As in Table 4, WeakNAS takes 195 queries on average to find the best architecture on NAS-Bench-101 but in Table 3, WeakNAS cannot reach the best architecture after even 2000 queries.
7. The results in Table 2 which show linear-/exponential-decay sampling clearly underperforms uniform sampling confuse me a bit. If the predictor is accurate on the good subregion, as argued by the authors, increasing the sampling probability for top-performing predicted architectures should lead to better performance than uniform sampling, especially when the performance of architectures in the good subregion are rather close. 8. In Table 1, what does the number of predictors mean? To me, they are simply the number of search iterations. Do the authors reuse the weak predictors from previous iterations in later iterations like an ensemble?
I understand that given the time constraint, the authors are unlikely to respond to my comments. Hope those comments can help the authors for future improvement of the paper.
References: [1] Kandasamy, Kirthevasan, et al. "Neural architecture search with Bayesian optimisation and optimal transport." NeurIPS. 2018. [2] White, Colin, et al. "BANANAS: Bayesian Optimization with Neural Architectures for Neural Architecture Search." AAAI. 2021. [3] Shi, Han, et al. "Bridging the Gap between Sample-based and One-shot Neural Architecture Search with BONAS." NeurIPS. 2020. [4] Ru, Binxin, et al. "Interpretable Neural Architecture Search via Bayesian Optimisation with Weisfeiler-Lehman Kernels." ICLR. 2020. [5] Dudziak, Lukasz, et al. "BRP-NAS: Prediction-based NAS using GCNs." NeurIPS. 2020. [6] White, Colin, et al. "Local search is state of the art for nas benchmarks." arXiv. 2020. [7] Siems, Julien, et al. "NAS-Bench-301 and the case for surrogate benchmarks for neural architecture search." arXiv. 2020.
The limitation and social impacts are briefly discussed in the conclusion. | 7. The results in Table 2 which show linear-/exponential-decay sampling clearly underperforms uniform sampling confuse me a bit. If the predictor is accurate on the good subregion, as argued by the authors, increasing the sampling probability for top-performing predicted architectures should lead to better performance than uniform sampling, especially when the performance of architectures in the good subregion are rather close. |
ICLR_2022_2112 | ICLR_2022 | 1 Collaborative rating prediction is a very well-studied problem, for which there are lots of existing works. Moreover, in most real recommender systems, item ranking is more consistent with a real setting.
2 The time complexity seems rather high. First, the authors use an item-oriented autoencoder, in which there may be lots of users associated with a typical item. Second, the elementwise function is expensive. Third, the number of hidden units is much larger than a typical matrix factorization-based method.
3 The authors do not provide sufficient details or justification on using a large number of hidden units and an additional elementwise function. Moreover, treating unobserved ratings as zeros may introduce bias, which is also not justified. | 2 The time complexity seems rather high. First, the authors use an item-oriented autoencoder, in which there may be lots of users associated with a typical item. Second, the elementwise function is expensive. Third, the number of hidden units is much larger than a typical matrix factorization-based method. |
ICLR_2022_1393 | ICLR_2022 | I think that:
The comparison to baselines could be improved.
Some of the claims are not carefully backed up.
The explanation of the relationship to the existing literature could be improved.
More details on the above weaknesses:
Comparison to baselines:
"We did not find good benchmarks to compare our unsupervised, iterative inferencing algorithm against" I think this is a slightly unfair comment. The unsupervised and iterative inferencing aspects are only positives if they have the claimed benefits, as compared to other ML methods (more accurate and better generalization). There is a lot of recent work addressing the same ML task (as mentioned in the related work section.) This paper contains some comparisons to previous work, but as I detail below, there seem to be some holes.
FCNN is by far the strongest competitor for the Laplace example in the appendix. Why is this left off of the baseline comparison table in the main paper? Further, is there any reason that FCNN couldn't have been used for the other examples?
Why is FNO not applied to the Chip cooling (Temperature) example?
A major point in this paper is improved generalization across PDE conditions. However, I think that's hard to check when only looking at the test errors for each method. In other words, is CoAE-MLSim's error lower than UNet's error because the approach fit the training data better, or is it because it generalized better? Further, in some cases, it's not obvious to me if the test errors are impressive, so maybe it is having a hard time generalizing. It would be helpful to see train vs. test errors, and ideally I like to see train vs. val. vs. test.
For the second main example (vortex decay over time), looking at Figures 8 and 33 (four of the fifty test conditions), CoAE-MLSim has much lower error than the baselines in the extrapolation phase but noticeably higher in the interpolation phase. In some cases, it's hard to tell how close the FNO line is to zero - it could be that CoAE-MLSim even has orders of magnitude more error. Since we can see that there's a big difference between interpolation and extrapolation, it would be helpful to see the test error averaged over the 50 test cases but not averaged over the 50 time steps. When averaged over all 50 time steps for the table on page 9, it could be that CoAE-MLSim looks better than FNO just because of the extrapolation regime. In practice, someone might pick FNO over CoAE-MLSim if they aren't interested in extrapolating in time. Do the results in the table for vortex decay back up the claim that CoAE-MLSim is generalizing over initial conditions better than FNO, or is it just better at extrapolation in time?
Backing up claims:
The abstract says that the method is tested for a variety of cases to demonstrate a list of things, including "scalability." The list of "significant contributions" also includes "This enables scaling to arbitrary PDE conditions..." I might have missed/forgotten something, but I think this wasn't tested?
"Hence, the choice of subdomain size depends on the trade-off between speed and accuracy." This isn't clear to me from the results. It seems like 32^3 is the fastest and most accurate?
I noticed some other claims that I think are speculations, not backed up with reported experiments. If I didn't miss something, this could be fixed by adding words like "might."
"Physics constrained optimization at inference time can be used to improve convergence robustness and fidelity with physics."
"The decoupling allows for better modeling of long range time dynamics and results in improved stability and generalizability."
"Each solution variable can be trained using a different autoencoder to improve accuracy."
"Since, the PDE solutions are dependent and unique to PDE conditions, establishing this explicit dependency in the autoencoder improves robustness."
"Additionally, the CoAE-MLSim apprach solves the PDE solution in the latent space, and hence, the idea of conditioning at the bottleneck layer improves solution predictions near geometry and boundaries, especially when the solution latent vector prediction has minor deviations."
"It may be observed that the FCNN performs better than both UNet and FNO and this points to an important aspect about representation of PDE conditions and its impact on accuracy." The representation of the PDE conditions could be why, but it's hard to say without careful ablation studies. There's a lot different about the networks.
Similarly: "Furthermore, compressed representations of sparse, high-dimensional PDE conditions improves generalizability."
Relationship to literature:
The citation in this sentence is abrupt and confusing because it sounds like CoAE-MLSim is a method from that paper instead of the new method: "Figure 4 shows a schematic of the autoencoder setup used in the CoAE-MLSim (Ranade et al., 2021a)." More broadly, Ranade et al., 2021a, Ranade et al., 2021b, and Maleki, et al., 2021 are all cited and all quite related to this paper. It should be more clear how the authors are building on those papers (what exactly they are citing them for), and which parts of CoAE-MLSim are new. (The Maleki part is clearer in the appendix, but the reader shouldn't have to check the appendix to know what is new in a paper.)
I thought that otherwise the related work section was okay but was largely just summarizing some papers without giving context for how they relate to this paper.
Additional feedback (minor details, could fix in a later version, but no need to discuss in the discussion phase):
- The abstract could be clearer about what the machine learning task is that CoAE-MLSim addresses.
- The text in the figures is often too small.
- "using pre-trained decoders (g)" - probably meant g_u?
- Many of the figures would be more clear if they said pre-trained solution encoders & solution decoders, since there are multiple types of autoencoders.
- The notation is inconsistent, especially with nu. For example, the notation in Figures 2 & 3 doesn't seem to match the notation in Alg 1. Then on Page 4 & Figure 4, the notation changes again.
- Why is the error table not ordered 8^3, 16^3, 32^3 like Figure 9? The order makes it harder for the reader to reason about the tradeoff.
- Why is Err(T_max) negative sometimes? Maybe I don't understand the definition, but I would expect to use absolute value?
- I don't think the study about different subdomain sizes is an "ablation" study since they aren't removing a component of the method.
- Figure 11: I'm guessing that the y-axis is log error, but this isn't labeled as such. I didn't really understand the the legend or the figure in general until I got to the appendix, since there's little discussion of it in the main paper.
- "Figure 30 shows comparisons of CoAE-MLSim with Ansys Fluent for 4 unseen objects in addition to the example shown in the main paper." - probably from previous draft. Now this whole example is in the appendix, unless I missed something.
- My understanding is that each type of autoencoder is trained separately and that there's an ordering that makes sense to do this in, so you can use one trained autoencoder for the next one (i.e. train the PDE condition AEs, then the PDE solution AE, then the flux conservation AE, then the time integration AE). This took me a while to understand though, so maybe this could be mentioned in the body of the paper. (Or perhaps I missed that!)
- It seems that the time integration autoencoder isn't actually an autoencoder if it's outputting the solution at the next time step, not reconstructing the input.
- Either I don't understand Figure 5 or the labels are wrong.
- It's implied in the paper (like in Algorithm 1) that the boundary conditions are encoded like the other PDE conditions. In the Appendix (A.1), it's stated that "The training portion of the CoAE-MLSim approach proposed in this work corresponds to training of several autoencoders to learn the representations of PDE solutions, conditions, such as geometry, boundary conditions and PDE source terms as well as flux conservation and time integration." But then later in the appendix (A.1.3), it's stated that boundary conditions could be learned with autoencoders but are actually manually encoded for this paper. That seems misleading. | - Many of the figures would be more clear if they said pre-trained solution encoders & solution decoders, since there are multiple types of autoencoders. |
NIPS_2022_874 | NIPS_2022 | I found the presentation at times to be more complicated than it needs to be. I would suggest adding a simple running example (could be very low-dimensional) throughout the paper that already clearly shows why the proposed method clearly works and we really don't need a specialized training procedure.
It would be helpful relating the proposed method to possibly special cases of how people have previously enforced monotonicity without resorting to specialized training procedures (I mentioned already that this is done in survival analysis but I'd assume that it's also done in other fields too).
Minor: Please appropriately switch between using "citep" and "citet" for references (currently, the paper basically uses the equivalent of "citet" too often so that the text suddenly switches to author names in a manner that is not grammatically correct and detracts from reading the paper). As a few examples:
In line 19-20, I'd suggest you use "citep" so that the text says "... if the other features are equal [Potharst and Feelders, 2002]."
In lines 25-28, if you use "citep", you would get: "For example, they have a better regularization capability [Dugas et al., 2000; Fard et al., 2016; You et al., 2017] and better interpretability [Gupta et al., 2016], and they can be used for fair machine learning [Wang and Gupta, 2020]." Reference:
Kvamme H, Borgan Ø. Continuous and discrete-time survival prediction with neural networks. Lifetime Data Analysis. 2021 Oct;27(4):710-36.
Yes, the author(s) do briefly address the limitation of their approach (i.e., it doesn't handle large m
), and I found their response to the question on potential negative societal impact in the checklist to be adequate. | 2021 Oct;27(4):710-36. Yes, the author(s) do briefly address the limitation of their approach (i.e., it doesn't handle large m ), and I found their response to the question on potential negative societal impact in the checklist to be adequate. |
w5oP27fmYW | ICLR_2024 | - Main concern: While the improvement in results is clear and the implementation is simple, I'm currently not convinced by the argumentation. My concern is that the authors propose adding an explicit inductive bias, which assumes that all target models are zero-centered. This assumption may or may not hold for general 3D point cloud generation, depending on how the data is provided. For example, would this be a useful cue if the objects are articulated? In such cases, a folding back action could cause the legs to move, affecting the centering. I'm not convinced by the authors' analysis, motivation, and interpretation. It's crucial that if the paper is accepted based on its strong results, it should also come with a solid understanding of why the results are improving and whether the conditions for improvement will be applicable in other setups.
- Centering in Partial Shapes and Scenes: While centering can serve as an effective canonicalization for complete shapes, it may not be well-defined for partial shapes or scenes. The authors should address this issue. As an aside—while the authors present an experiment with up to 50% missing points, my understanding is that this is merely an augmentation and the ground-truth center is provided. However, this will not be the case in general when training with partial shapes.
- Test-Time Centering: The suggested method modifies both the training and testing processes, suggesting that the improvement is due to better utilization of network capacity. I wonder if this zero-centering could be applied solely as a test-time inductive bias instead? I would be very interested in seeing a comparison that includes this experiment, as it could help validate whether the improvement is indeed due to increased network capacity or the strong inductive bias.
- Clarity: The introduction states that there is a critical problem with the center point shifting during the denoising process. The first issue raised discusses the "wasted" capacity needed to map the center of the Gaussian noise to the center of the final shape. However, the authors do not explain why this mapping is necessary. The assertion that "It is crucial to ensure that the transition of the point cloud center from the initial Gaussian noise state to the final object reconstruction is appropriately managed" requires justification.
- In Section 3.2, the authors open with the statement, "Predicting the center bias is challenging for the network in the reverse process." Why is this the case? What makes it more challenging than any other network prediction? Again, I find the explanations in this section to be somewhat "hand-wavy." Some of the claims appear to require the explicit assumption that each target output is zero-centered—an assumption that is neither guaranteed nor necessarily desirable. Is there any relevance to images in this discussion? I'm not very familiar with the details of 2D diffusion, but I believe there is normalization of the output distribution. However, I don't think the output of each image is assumed to be centered.
- Center of Mass: The authors opted to canonicalize the shapes using the point mean. In cases where the density is non-uniform, this approach could lead to issues. Specifically, very similar shapes that are sampled differently might end up having different "centroids." Did the authors consider other forms of canonicalization, such as using the bounding box?
- Loss of Diversity: The authors mention that there is a sacrifice of diversity in the outputs. This claim needs to be both demonstrated and measured.
- Qualitative results on CO-3D would be nice to see. This data was used in the main baseline work PC^2.
- I’m missing comparison with a NeRF-based methods, like the recent Zero-1-to-3
- I also recommend comparison with point-e
- I don’t see the relevance of the occlusion experiment — it doesn’t seem like the method is proposing anything specific to occlusion. Minor:
- several statements were unclear to me and would benefit further explanation. like: “each point inside the point cloud and predicted noise is independently modeled” — aren’t the points predicted jointly? | - I’m missing comparison with a NeRF-based methods, like the recent Zero-1-to-3 - I also recommend comparison with point-e - I don’t see the relevance of the occlusion experiment — it doesn’t seem like the method is proposing anything specific to occlusion. Minor: |
NIPS_2021_1604 | NIPS_2021 | ).
Weaknesses - Some parts of the paper are difficult to follow, see also Typos etc below. - Ideally other baselines would also be included, such as the other works discussed in related work [29, 5, 6].
After the Authors' Response My weakness points after been addressed in the authors' response. Consequently I raised my score.
All unclear parts have been answered
The authors' explained why the chosen baseline makes the most sense. It would be great if this is added to the final version of the paper.
Questions - Do you think there is a way to test beforehand whether I(X_1, Y_1) would be lowered more than I(X_2, Y_1) beforehand? - Out of curiosity, did you consider first using Aug and then CF.CDA? Especially for the correlated palate result it could be interesting to see if now CF.CDA can improve. - Did both CDA and MMI have the same lambda_RL (Eq 9) value? From Figure 6 it seems the biggest difference between CDA and MMI is that MMI has more discontinuous phrase/tokens.
Typos, representation etc. - Line 69: Is X_2 defined as all features of X not in X_1? Stating this explicitly would be great. - Line 88: What ideas exactly do you take from [19] and how does your approach differ? - Eq 2: Does this mean Y is a value in [0, 1] for two possible labels? Can this be extended to more labels? This should be clarified. - 262: What are the possible Y values for TripAdvisor’s location aspect? - The definitions and usage of the various variables is sometimes difficult to follow. E.g. What exactly is the definition of X_2? (see also first point above). When does X_M become X_1? Sometimes the augmented data has a superscript, sometimes it does not. In line 131 the meaning of x_1 and x_2 are reverse, which can get confusing - maybe x’_1 and x’_2 would make it easier to follow together with a table that explains the meaning of different variables? - Section 2.3: Before line 116 mentioned the change when adding the counterfactual example, it would be helpful to first state what I(X_2, Y_1) and I(X_1, Y_1) are without it.
Minor points - Line 29: How is desired relationship between input text and target labels defined? - Line 44: What is meant by the initial rationale selector is perfect? It seems if it were perfect no additional work needs to be done. - Line 14, 47: A brief explanation of “multi-aspect” would be helpful - Figure 1: Subscripts s and t should be 1 and 2? - 184: Delete “the”
There is a broader impact section which discusses the limitations and dangers adequately. | - Line 14, 47: A brief explanation of “multi-aspect” would be helpful - Figure 1: Subscripts s and t should be 1 and 2? |
ARR_2022_108_review | ARR_2022 | 1. First of all, compared with other excellent papers, this paper is slightly less innovative.
2. The baseline is is not strong enough. Expect to see experiments that compare with the baseline of the papers you cited.
3. p indicates the proportion of documents, I would like to know how the parts of sentences and documents are extracted? Do the rules of extraction have any effect on the experiment? I hope to see a more detailed analysis.
4. It lacks case study to show which document-level translation errors are improved by the proposed method.
1. It is suggested to add a structure diagram to illustrate your proposed method.
2. It is suggested to add some case studies to clarify the problems you have solved, so as to clearly show your contribution.
3. The authors are encouraged to proofread the paper more carefully and explain their methods more clearly. | 3. p indicates the proportion of documents, I would like to know how the parts of sentences and documents are extracted? Do the rules of extraction have any effect on the experiment? I hope to see a more detailed analysis. |
NIPS_2016_450 | NIPS_2016 | . First of all, the experimental results are quite interesting, especially that the algorithm outperforms DQN on Atari. The results on the synthetic experiment are also interesting. I have three main concerns about the paper. 1. There is significant difficulty in reconstructing what is precisely going on. For example, in Figure 1, what exactly is a head? How many layers would it have? What is the "Frame"? I wish the paper would spend a lot more space explaining how exactly bootstrapped DQN operates (Appendix B cleared up a lot of my queries and I suggest this be moved into the main body). 2. The general approach involves partitioning (with some duplication) the samples between the heads with the idea that some heads will be optimistic and encouraging exploration. I think that's an interesting idea, but the setting where it is used is complicated. It would be useful if this was reduced to (say) a bandit setting without the neural network. The resulting algorithm will partition the data for each arm into K (possibly overlapping) sub-samples and use the empirical estimate from each partition at random in each step. This seems like it could be interesting, but I am worried that the partitioning will mean that a lot of data is essentially discarded when it comes to eliminating arms. Any thoughts on how much data efficiency is lost in simple settings? Can you prove regret guarantees in this setting? 3. The paper does an OK job at describing the experimental setup, but still it is complicated with a lot of engineering going on in the background. This presents two issues. First, it would take months to re-produce these experiments (besides the hardware requirements). Second, with such complicated algorithms it's hard to know what exactly is leading to the improvement. For this reason I find this kind of paper a little unscientific, but maybe this is how things have to be. I wonder, do the authors plan to release their code? Overall I think this is an interesting idea, but the authors have not convinced me that this is a principled approach. The experimental results do look promising, however, and I'm sure there would be interest in this paper at NIPS. I wish the paper was more concrete, and also that code/data/network initialisation can be released. For me it is borderline. Minor comments: * L156-166: I can barely understand this paragraph, although I think I know what you want to say. First of all, there /are/ bandit algorithms that plan to explore. Notably the Gittins strategy, which treats the evolution of the posterior for each arm as a Markov chain. Besides this, the figure is hard to understand. "Dashed lines indicate that the agent can plan ahead..." is too vague to be understood concretely. * L176: What is $x$? * L37: Might want to mention that these algorithms follow the sampled policy for awhile. * L81: Please give more details. The state-space is finite? Continuous? What about the actions? In what space does theta lie? I can guess the answers to all these questions, but why not be precise? * Can you say something about the computation required to implement the experiments? How long did the experiments take and on what kind of hardware? * Just before Appendix D.2. "For training we used an epsilon-greedy ..." What does this mean exactly? You have epsilon-greedy exploration on top of the proposed strategy? | * Can you say something about the computation required to implement the experiments? How long did the experiments take and on what kind of hardware? |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.