paper_id
stringlengths 10
19
| venue
stringclasses 15
values | focused_review
stringlengths 176
10.5k
| point
stringlengths 42
623
|
---|---|---|---|
NIPS_2021_538 | NIPS_2021 | Unfortunately, I am not convinced that POODLE works as described, and cannot judge its significance or impact. Three reasons:
Tables 1 and 2 are interesting, but not convincing. Without confidence intervals over the evaluation trials I cannot tell whether the observed improvements are significant. And more fundamentally, while using the simple CELoss model as the baseline works in demonstrating improvement, it does not indicate improvement that is meaningful. The CE baseline is an inductive model for vanilla FSL, but your evaluation settings in Tables 1 and 2 are semi-supervised SSL and transductive SSL respectively. Thus it is impossible to tell if improvement is coming from your novel loss function, or from the fundamental shift from inductive evaluation to a more friendly evaluation setting. Appropriate baselines would be other semi-supervised learning methods and losses (few-shot or otherwise) such as [1] or [2] for table 1, or other transductive methods for table 2 (as in table 4). As is, the improvement shown is deceptively large.
No analysis or convincing explanation for why this works. All results appear as accuracy scores; it is impossible to tell if POODLE works for the reasons described (suppressing distractor features). There are even some indicators that it doesn’t: the fact that uniform-POODLE is competitive and sometimes superior to base-class-POODLE indicates that the loss function may not be suppressing distractor features at all, but rather acting as a simple regularizer on labels akin to label smoothing.
Lack of simple ablations. Some design decisions for the POODLE loss go unexplored, with important baseline ablations absent. 1) What is the performance of a model that simply assigns all negative samples to a distractor class? 2) The pushing loss incentivizes negative samples to be equidistant from all class vectors. What is the performance of a model that does this explicitly, by maximizing entropy of class predictions for negative samples? And as per above: how would this differ from a similar level of label smoothing? 3) As mentioned in sec.5.4, weighting the pull loss by the class predictions can lead to problematic behavior where large classes dominate behavior. Why not use the ground-truth class assignment instead of the predicted one?
In sum, it is not clear to me why and to what degree POODLE actually works. The second point is the most fundamental issue. Additionally:
Not actually complementary to many FSL methods. Lines 38-39, 84-85 imply that POODLE can be applied to FSL methods broadly. This is not the case: it only applies to methods that fine-tune a classifier layer at test time, which precludes all metric-based approaches to FSL. Thus POODLE is not nearly as versatile as claimed.
I hesitate to call POODLE SOTA for inductive settings. In a fair comparison without bells or whistles, [3] outperforms simple-POODLE in all settings, and going strictly by the highest published numbers, [3] outperforms the best POODLE on tiered-ImageNet and CUB. Also, DeepEMD is cited as related work but not included in Table 3, though in this case that doesn’t change the results.
These last two issues are less important and can probably be fixed with small language changes.
SMALL COMMENTS (feel free to ignore in discussion):
Missing citation: [4] is a more recent and SOTA example of employing a self-supervised auxiliary loss for FSL
Where does the name POODLE come from? I assume it’s “Penalizing Out Of Distribution sampLEs” but you never actually explain it.
In sec4 and later, you use “prototype” to refer to the class representation vector in the learned classification layer. I would consider making explicit the fact that you are referring to this learned vector and not the “class prototype” in the sense of prototypical networks (even if yours are at first initialized to the same thing).
Line 220 is misleading, if not outright factually wrong: your improvement in table 1 is lower than 1pp in many settings (eg. Rot+KD improves by only .09pp on 5-shot CUB).
I suspect you have mislabeled Table 2: L_push should be L_pull and vice versa. As-is, your notation conflicts with eq.4 and text lines 224-227 (and makes results difficult to interpret).
[1] Ren et al. ICLR 2018: Meta-Learning for Semi-Supervised Few-Shot Classification
[2] Saito et al. ICCV 2019: Semi-Supervised Domain Adaptation via Minimax Entropy
[3] Wertheimer, Tang and Hariharan CVPR 2021: Few-Shot Classification with Feature Map Reconstruction Networks
[4] Doersch, Gupta and Zisserman NEURIPS 2020: CrossTransformers: spatially-aware few-shot transfer POST-REBUTTAL:
I'd like to echo the first half of reviewer e7JL's post-rebuttal comment: I really appreciate the clear effort authors put in to address our concerns, there's good value in this work, and in my eyes too novelty is sufficient. Regarding my own review: authors mostly addressed my concerns, with the sole exception that we still have no empirical analysis of POODLE behavior beyond accuracy scores, meaning that we still cannot be entirely sure what POODLE is actually doing in practice. However, after additional clarification the method does make intuitive sense so this is not as troubling to me as it was initially.
No negative impacts are discussed. That said, any risks or dangers inherent to this method are shared with few-shot learning more broadly, so a brief mention of this would likely suffice. Also, if POODLE does in fact work by suppressing distractor features, the ability to target certain distractors (i.e. age, race, gender) could be a very large impact, though of course that is outside the scope of this particular paper to examine or demonstrate. | 1) What is the performance of a model that simply assigns all negative samples to a distractor class? |
v3XXtxWKi6 | ICLR_2024 | 1. The analysis on the preference model shows that the preference model produced by RLCD is, while better than the baseline, still not very good, especially on the harmlessness attribute (Tab. 5). It is not clear how this slight advantage over chance (2.4%~5.9%) translates into a much better downstream performance after RLHF.
2. As shown in Appendix C, RLAIF-Few-30B produces both a better preference model and a better-aligned language model than RLCD-30B on the harmlessness benchmark, which is attributed to few-shot prompting by the authors. It seems that this technique can also be integrated into RLCD to enable a fairer comparison.
3. The advantage of RLCD over RLAIF shrinks going from 7B to 30B (Tab. 2). It remains to be seen whether RLCD (or RLCD-Rescore) can scale to yet larger language models that are arguably better at differentiating responses near the decision boundary. | 3. The advantage of RLCD over RLAIF shrinks going from 7B to 30B (Tab. 2). It remains to be seen whether RLCD (or RLCD-Rescore) can scale to yet larger language models that are arguably better at differentiating responses near the decision boundary. |
NIPS_2020_813 | NIPS_2020 | * The proposed NC measure takes the whole training and test datasets as input. I can hardly imagine how this method can be learned and applied to large scale datasets (e.g. ImageNet). Is there any solution to address the scalability issue? Otherwise, the practical contribution of this paper will be significantly reduced. * There are many missing details regarding the experiments, which make the proposed method hard to reproduce. See the Clarity section for more comments. | * The proposed NC measure takes the whole training and test datasets as input. I can hardly imagine how this method can be learned and applied to large scale datasets (e.g. ImageNet). Is there any solution to address the scalability issue? Otherwise, the practical contribution of this paper will be significantly reduced. |
xajif1l65R | ICLR_2025 | 1. Lack of Quantitative Analysis on Computational Gains: While the paper claims computational benefits from replacing the MAE model with a CNN-based data augmentation strategy, it lacks specific measurements or comparisons to substantiate these gains. A quantitative analysis—such as GPU hours, memory usage, or training time—would provide stronger evidence of the efficiency improvements in DQ V2.
2. Missing Baselines: I noticed that some recent coreset selection baselines for deep learning are missing: D2 Pruning[1], CCS[2], Moderate[3]. Those baselines seem to have a stronger performance than the proposed methods.
3. Missing evaluation on ImageNet-1k: the paper argues that DQ-V2 is more efficient than DQ, but the method is only evaluated on the ImageNet subset. Previous methods including DQ all conducted evaluation on ImageNet-1k. It will be good to include an ImageNet-1k evaluation to demonstrate the scalability of the proposed methods.
4. The data augmentation part is confusing: the goal of data quantization and coreset selection is to reduce the size of the training dataset, but the data augmentation method proposed in the paper expands the datasets -- the final expanded training dataset can be even larger, which is contradicted to the goal of coreset selection.
5. Ablation study on data augmentation: The paper would benefit from a more detailed ablation study to assess the effectiveness of the data augmentation method used in DQ V2. Testing different data augmentation configurations (e.g., no augmentation, alternate augmentation techniques) would clarify its impact and help refine the methodology.
[1] Maharana, Adyasha, Prateek Yadav, and Mohit Bansal. "D2 pruning: Message passing for balancing diversity and difficulty in data pruning." ICLR 2024
[2] Zheng, Haizhong, Rui Liu, Fan Lai, and Atul Prakash. "Coverage-centric coreset selection for high pruning rates." ICLR 2023
[3] Xia, Xiaobo, Jiale Liu, Jun Yu, Xu Shen, Bo Han, and Tongliang Liu. "Moderate coreset: A universal method of data selection for real-world data-efficient deep learning." ICLR 2023 | 1. Lack of Quantitative Analysis on Computational Gains: While the paper claims computational benefits from replacing the MAE model with a CNN-based data augmentation strategy, it lacks specific measurements or comparisons to substantiate these gains. A quantitative analysis—such as GPU hours, memory usage, or training time—would provide stronger evidence of the efficiency improvements in DQ V2. |
ICLR_2023_75 | ICLR_2023 | According to Appendix B.2, the authors applied different MVS networks (at least different weights) when evaluating on different datasets. Especially, when evaluating on LLFF data, they finetune the MVS model scene-by-scene. This may cause concern/confusion because 1) the time for COLMAP and scene-by-scene fine-tuning should be taken into account when comparing, rendering the method less efficient for these scenes; 2) it is unclear why not using COLMAP point clouds directly. The authors should clarify these and ideally use a same generalized MVS model for a fair comparison.
Pulsar is a very related direct baseline of this method. However, a comparison with Pulsar seems to be missing in this paper.
The intuition behind employing a U-Net for rendering is unclear. Is it possible to use per-pixel MLP to render the features (MLP: feat -> RGB)? Is the U-Net pre-trained and weight-shared across datasets? | 1) the time for COLMAP and scene-by-scene fine-tuning should be taken into account when comparing, rendering the method less efficient for these scenes; |
NIPS_2017_370 | NIPS_2017 | - There is almost no discussion or analysis on the 'filter manifold network' (FMN) which forms the main part of the technique. Did authors experiment with any other architectures for FMN? How does the adaptive convolutions scale with the number of filter parameters? It seems that in all the experiments, the number of input and output channels is small (around 32). Can FMN scale reasonably well when the number of filter parameters is huge (say, 128 to 512 input and output channels which is common to many CNN architectures)?
- From the experimental results, it seems that replacing normal convolutions with adaptive convolutions in not always a good. In Table-3, ACNN-v3 (all adaptive convolutions) performed worse that ACNN-v2 (adaptive convolutions only in the last layer). So, it seems that the placement of adaptive convolutions is important, but there is no analysis or comments on this aspect of the technique.
- The improvements on image deconvolution is minimal with CNN-X working better than ACNN when all the dataset is considered. This shows that the adaptive convolutions are not universally applicable when the side information is available. Also, there are no comparisons with state-of-the-art network architectures for digit recognition and image deconvolution. Suggestions:
- It would be good to move some visual results from supplementary to the main paper. In the main paper, there is almost no visual results on crowd density estimation which forms the main experiment of the paper. At present, there are 3 different figures for illustrating the proposed network architecture. Probably, authors can condense it to two and make use of that space for some visual results.
- It would be great if authors can address some of the above weaknesses in the revision to make this a good paper.
Review Summary:
- Despite some drawbacks in terms of experimental analysis and the general applicability of the proposed technique, the paper has several experiments and insights that would be interesting to the community. ------------------
After the Rebuttal: ------------------
My concern with this paper is insufficient analysis of 'filter manifold network' architecture and the placement of adaptive convolutions in a given CNN. Authors partially addressed these points in their rebuttal while promising to add the discussion into a revised version and deferring some other parts to future work.
With the expectation that authors would revise the paper and also since other reviewers are fairly positive about this work, I recommend this paper for acceptance. | - There is almost no discussion or analysis on the 'filter manifold network' (FMN) which forms the main part of the technique. Did authors experiment with any other architectures for FMN? How does the adaptive convolutions scale with the number of filter parameters? It seems that in all the experiments, the number of input and output channels is small (around 32). Can FMN scale reasonably well when the number of filter parameters is huge (say, 128 to 512 input and output channels which is common to many CNN architectures)? |
5vJe8XKFv0 | ICLR_2024 | - There is no information about baseline models and training/inference times. There is a link to the code repo added, but it is impossible to figure out the settings and hyperparamters of the models. Given that the proposed model performs marginally better than FNO models this makes it impossible to judge. Side remark: Pytorch FFTs on complex inputs are much slower than on real inputs (torch.fft vs torch.rfft), thus runtime comparisons would be needed.
- The main formula (Eq 3) is hardly explained. For example in the literature the Fractional Fourier Transform is often defined as $$\mathcal{F}_{\alpha}\[f\](u) = \sqrt{1 - i \cot(\alpha)} e^{i\pi\cot(\alpha)u^2} \int e^{-2\pi i \left( \csc(\alpha) u t - \frac{\cot(\alpha)}{2}x^2\right)} f(t) dt \ .$$ What is the relation of Eq. 3 to the presented formula, and more importantly how can this be implemented? In the code the CoNO model looks very similar to the FNO model, but I assume that the Fourier transform needs to be changed since there is another term which depends on $t$?
- The proposed CoNO model uses a complex UNet part after the fractional transform. It is impossible to guess what brings the claimed performance boost - the fractional transform or the UNet operation in the fractional Fourier domain, which is comparable to pointwise multiplication as done in FNOs? At least comparisons to UNets are therefore inevitable. Especially, since on regular gridded domains UNets / convolutional operators have shown strong performances, see e.g. Raonic et al or Gupta et al.
- Ablation studies are not revealing a lot, they are basically showing the same results as the main table.
- There has been work for example on Clifford Fourier Neural Operators (Brandstetter et al) which includes complex numbers and more complicated algebras. Possibly missing a few others here. Discussions of related work and comparisons against those are missing.
Raonić, B., Molinaro, R., Rohner, T., Mishra, S., & de Bezenac, E. (2023). Convolutional Neural Operators. arXiv preprint arXiv:2302.01178.
Gupta, Jayesh K., and Johannes Brandstetter. "Towards multi-spatiotemporal-scale generalized pde modeling." arXiv preprint arXiv:2209.15616 (2022).
Brandstetter, J., Berg, R. V. D., Welling, M., & Gupta, J. K. (2022). Clifford neural layers for PDE modeling. arXiv preprint arXiv:2209.04934. | - The proposed CoNO model uses a complex UNet part after the fractional transform. It is impossible to guess what brings the claimed performance boost - the fractional transform or the UNet operation in the fractional Fourier domain, which is comparable to pointwise multiplication as done in FNOs? At least comparisons to UNets are therefore inevitable. Especially, since on regular gridded domains UNets / convolutional operators have shown strong performances, see e.g. Raonic et al or Gupta et al. |
NIPS_2020_1080 | NIPS_2020 | 1. The experiments setups are not persuasive. For the gradient estimation accuracy, the author conduct experiment only on 2 classes 2D simulation data. The author does not mention how the 100 training data generated, which is in quite a small amount even in the simulation study. The network is in special design as 5-3-3 Bernoulli cases, which is insufficient to conclude the proposed method is better in gradient estimation. The reviewer expects to see more simulation results by varying the unit number in each layer. 2. The performance on the real-world dataset is not satisfying enough. The PSA method seems not to achieve the best accuracy or the fastest convergence. The ST method is previously proposed, which I think cannot be recognized as the author's contribution. Besides, only the validation results are reported. What is the performance on the testing set? 3. Important baselines are not compared. The ARM gradient is a competitive baseline, which the author only compared under a special simulation setup. What is the reason that the author does not compare with ARM on the CIFAR classification task? 4. The proposed PSA method requires more computation than baselines. In algorithm 1, when feeding forward, the PSA requires the calculation of all the flipped previous layer output into the current layer. The comparison of computation complexity is expected in the experiment part. 5. The notations of the method are difficult to follow. It is much better if the authors can begin their analysis with 1-hidden-layer SBN first, which will simplify the notation a lot. | 4. The proposed PSA method requires more computation than baselines. In algorithm 1, when feeding forward, the PSA requires the calculation of all the flipped previous layer output into the current layer. The comparison of computation complexity is expected in the experiment part. |
NIPS_2017_114 | NIPS_2017 | Weakness-
- Comparison to other semi-supervised approaches : Other approaches such as variants of Ladder networks would be relevant models to compare to. Questions/Comments-
- In Table 3, what is the difference between \Pi and \Pi (ours) ?
- In Table 3, is EMA-weighting used for other baseline models ("Supervised", \Pi, etc) ? To ensure a fair comparison, it would be good to know that all the models being compared to make use of the EMA benefits.
- The proposed model benefits from two factors : noise and keeping an exponential moving average. It would be good to see how much each factor contributes on its own. The \Pi model captures just the noise part, so it would be useful to know how much gain can be obtained by just using a noise-free exponential moving average.
- If averaging in parameter space is being used, it seems that it should be possible to apply the consistency cost in the intermediate layers of the model as well. That could potentially provide a richer consistency gradient. Was this tried ?
Minor comments and typos-
- In the abstract : "The recently proposed Temporal Ensembling has ... ": Please cite.
- "when learning large datasets." -> "when learning on large datasets."
- "zero-dimensional data points of the input space": It may not be accurate to say that the data points are zero-dimensional.
- "barely applying", " barely replicating" : "barely" -> "merely"
- "softmax output of a model does not provide a good Bayesian approximation outside training data". Bayesian approximation to what ? Please explain. Any model will have some more generalization error outside training data. Is there another source of error being referred to here ? Overall-
The paper proposes a simple and effective way of using unlabelled data and
improving generalization with labelled data. The most attractive property is
probably the low overhead of using this in practice, so it is quite likely that
this approach could be impactful and widely used. | - The proposed model benefits from two factors : noise and keeping an exponential moving average. It would be good to see how much each factor contributes on its own. The \Pi model captures just the noise part, so it would be useful to know how much gain can be obtained by just using a noise-free exponential moving average. |
wPK65O4pqS | ICLR_2024 | 1. What is the baseline model on the ablation experiments? Is the baseline model for your own architecture or other study’s baseline? The study has shown that without the STCore and SGA, the trained model already has excellent performance (80.9% on DVS-CIFAR10) while general accuracy from other studies as shown in your table are below 80%. Would this also mean that the work in accuracy boost may not be very effective (1-2% boost) while spending extra computing resources?
2. Minor mistake on 3.1 preliminaries. Equation 3 referencing.
3. Excellent drawing on figures. However, fonts could be larger fig 1. The words in grey box may be larger. V_mem, Th_i, U_i^t too small. “CTRL” long form explanation. Also, font in figure 2 is too small. (Conv5 +BN)
4. Lack of details comparison, such as epochs and number of params, with other state-of-the-art Transformer design. A “table” manner may better emphases the data for readers to justify the improved accuracy is not because of brute-force parameters increment. | 3. Excellent drawing on figures. However, fonts could be larger fig 1. The words in grey box may be larger. V_mem, Th_i, U_i^t too small. “CTRL” long form explanation. Also, font in figure 2 is too small. (Conv5 +BN) 4. Lack of details comparison, such as epochs and number of params, with other state-of-the-art Transformer design. A “table” manner may better emphases the data for readers to justify the improved accuracy is not because of brute-force parameters increment. |
NIPS_2016_241 | NIPS_2016 | /challenges of this approach. For instance... - The paper does not discuss runtime, but I assume that the VIN module adds a *lot* of computational expense. - Though f_R and f_P can be adapted over time, the experiments performed here did incorporate a great deal of domain knowledge into their structure. A less informed f_R/f_P might require an impractical amount of data to learn. - The results are only reported after a bunch of training has occurred, but in RL we are often also interested in how the agent behaves *while* learning. I presume that early in training the model parameters are essentially garbage and the planning component of the network might actually *hurt* more than it helps. This is pure speculation, but I wonder if the CNN is able to perform reasonably well with less data. - I wonder whether more could be said about when this approach is likely to be most effective. The navigation domains all have a similar property where the *dynamics* follow relatively simple, locally comprehensible rules, and the state is only complicated due to the combinatorial number of arrangements of those local dynamics. WebNav is less clear, but then the benefit of this approach is also more modest. In what kinds of problems would this approach be inappropriate to apply? ---Clarity--- I found the paper to be clear and highly readable. I thought it did a good job of motivating the approach and also clearly explaining the work at both a high level and a technical level. I thought the results presented in the main text were sufficient to make the paper's case, and the additional details and results presented in the supplementary materials were a good compliment. This is a small point, but as a reader I personally don't like the supplementary appendix to be an entire long version of the paper; it makes it harder to simply flip to the information I want to look up. I would suggest simply taking the appendices from that document and putting them up on their own. ---Summary of Review--- I think this paper presents a clever, thought-provoking idea that has the potential for practical impact. I think it would be of significant interest to a substantial portion of the NIPS audience and I recommend that it be accepted. | - The results are only reported after a bunch of training has occurred, but in RL we are often also interested in how the agent behaves *while* learning. I presume that early in training the model parameters are essentially garbage and the planning component of the network might actually *hurt* more than it helps. This is pure speculation, but I wonder if the CNN is able to perform reasonably well with less data. |
ICLR_2023_3811 | ICLR_2023 | Most of the paper is poorly written and difficult to understand.
The idea of scheduled sampling is not new, so I would categorize this paper a purely empirical contribution. However the amount of inconsistencies, and overall lack of rigor in reporting and interpreting the results, paired with the lack of clarity in the exposition significantly subtract from its empirical value.
Some claims are unsupported.
Suggestions and questions for the authors.
The whole second page is devoted to setting up the stage of the paper's contributions, however any reader not familiar with the term "coherence" will have a hard time grasping the need of the sampling strategies that you are suggesting. On the next page you mention that several previous works model coherence as Natural Language Inference (NLI). It is only by looking at equation 2 and the meaning of f c
that we understand that coherence is also modelled as NLI in this paper. I strongly suggest better defining "coherence" in the introduction to better contextualize the contributions that the paper is proposing.
Usually in dialogue literature, the words "turn" and "utterance" are ambiguous. I suggest defining both terms precisely. For example: can a turn contain more than one utterance? Does one utterance correspond to one turn? Can one utterance span several turns? Can there be adjacent turns/utterances for a single role (i.e. the same role sending several messages one after the other)? I cannot deduce any of this from reading sections 3 and 4.
Somewhat related to the previous question: do you train your models to generate both system and user responses? Or do have your models assume only one of those roles during training?
When describing the online Evaluation you formally define the coherence between response r ^ i
and context $\bf{\hat{U}^{i-1}1} a s
c_k = \sum{i=1}^{D}{\frac{\mathbb{1}(f_c(\bf{\hat{U}^{i-1}_1}, \bf{\hat{r}}_i) = 1)}{D}} w h e r e
f_c(\bf{\hat{U}^{i-1}_1}, \bf{\hat{r}}_i)$ is an entailment classifier. However NLI classification usually has 3 possible classes: "entailment", "contradiction" and "neutral". I suggest specifying that the 1 label in the numerator corresponds to the "entailment" class, and whether you consider both "contradiction" and "neutral" as a single "non-entailment" class, or you treat them separately.
Also in equation 2, I don't understand what D
is supposed to represent. Shouldn't it be k instead? 𝟙 c k = ∑ i = 1 k 1 ( f c ( U ^ 1 i − 1 , r ^ i ) = 1 ) k
. If not, then what are the "instances for evaluation" you mention after the equation? Further when i = 1
there's a U ^ 1 0
term that shows up in the numerator. How is it defined?
In the "Utterance Level Sampling" paragraph in section 4.1 you justify the use of a Geometrical distribution by saying it "tends to sample previous utterance to be replaced" but I still not understand what this means, or why it is desired. I suggest clarifying this.
In the "Coherence Rate" paragraph in section 5.1 you say you use a v g n
as the average coherence rate, but equation (2) already defines c k
as an average. Was this intended or is it a typo?
In Table 1 you report results "w/ Noise" described on page 6, "w/ Utterance" and "w/ Semi-Utterance" described on page 4, but you also mention "w/ Hierarchical". Up to this point I had understood both "Utterance Level Sampling" and "Semi-utterance Level Sampling" as two instances of Hierarchical Sampling, so I was baffled to see an additional row for Hierarchical Sampling on this Table. I suggest being more explicit about what the "w/ Hierarchical" row means. On page 8 you mention that Hierarchical sampling is the combination of both Utterance and Semi-Utterance level Sampling, but I suggest explaining this earlier, in section 4.1.
On page 6 you also mention that you measured Pearson correlation between human-annotated and automatic coherence rates. Why did you do this only for coherence and not for non-repetition?
Why did you not report the turn-level coherences in Table 2?
In Table 4 did you average the non-repetition count for unigrams, bigrams and trigrams for calculating "Rep"? I suggest clarifying this.
On page 7 in the "Sampling vs w/o Sampling" paragraph, how did you obtain the p-value? What were the null and alternative hypotheses? In the same paragraph you state that Blender improved 2.8% when using the hierarchical sampling strategy, but table 2 actually shows a 4% difference. Why this discrepancy in numbers?
Same question for the p-value reported on page 8 in the "Human Evaluation" paragraph. In the same paragraph you state that human-evaluated coherence increases from 0.96 to 1.53, while actually these numbers refer to the human-evaluated non-repetition metric. Further you conclude from a 0.78 Pearson correlation score that model-based evaluation methods are effective, but there are many relationships between human and automated metrics that can give rise to such a score (see https://janhove.github.io/teaching/2016/11/21/what-correlations-look-like for an example). I suggest at least plotting the human vs. automated metric values + their correlations before making such a strong claim.
In the "Explicit Coherence Optimization" paragraph on page 8 you conclude from figure 4 that training the model with RL outperforms training the model with MLE in terms of coherence rate. However figure 4 shows that this statement holds only for the first 5 turns, then coherence dips below the BART baseline with beam-search based reranking, so the conclusion you reach does not follow from the evidence. Also why do you think this dip in coherence happened?
In the same paragraph you describe the reranking setup. I suggest putting this description before, where you define the other experiments.
In that same paragraph you conclude that your "hierarchical-sampling based methods consistently perform better than multi-turn BART by introducing coherence reranking". Again, this cannot be concluded from Figure 4. It does perform better in terms of coherence rate, for the first 5 turns, but you did not report on the other performance metrics under the reranking scheme. To support this claim, it would be necessary to show how the fluency and non-repetition rate change when reranking based on coherence only. My intuition tells me that these two metrics would be negatively impacted, but I would like my intuition to be proved wrong and see that actually optimizing for coherence impacts fluency and non-repetition positively.
The claim made at the end of the introduction that you "demonstrate these methods make chatbots more robust in real-world testing" is not supported, as you did not test your chatbots in the real world. They were tested in a lab setting with humans that were told to follow some experimental instructions. Moving from this setting to the real world would require a considerable amount of additional effort.
Typos and minor corrections
Page 2, paragraph 1, line 3: the term "coherence" is mentioned here for the first time. However you define it in Fig. 2's caption. I suggest defining it either as a footnote or in the main text to not disrupt the reading flow. Also, the definition "Coherence rate (Equation 2) measures the percentage of responses is coherence [sic] with the corresponding contexts." is self referential. What does it mean for a response to be coherent with the corresponding contexts? Finally, "is coherence" should be "that is coherent".
P. 3, p. 4, l. 1: You write "a conventional practice [REFERENCES] for evaluating [...]."; I suggest writing "a conventional practice for evaluating [...] [REFERENCES]." to improve the reading flow.
P. 3, p. 4, l. 7: What does the sub-index "1" mean in U ^ 1 i − 1
? Does it mean "starting from index 1"? If this is the case and you never use anything other than "1" as the starting index, I suggest removing it, and simply defining U ^ i − 1
as the context up to the i − 1
-th utterance.
P. 3, p. 5, l. 7: relative -> relatively
P. 3, p. 5, l. 9: to "conduct" a classifier does not make much sense. You can either "conduct" classification or "train", "use", "create", etc. a classifier.
P. 4, p. 4, l. 4: here you say "we first ask the model to predict the response r ^ i
based on the previous context U ^ 1 ′ i − 1
but if I understand the explanation correctly, then it should be U ^ 1 i − 1
i.e. the original context.
P. 4, p. 4, l. 7: "Given a training pair U ^ 1 t − 1
" should be "Given a training pair U ^ 1 ′ t − 1
", i.e. the training pair contains an utterance replaced through the "Utterance Level Sampling" method.
P. 4, eq. 3: U ^ 1 ′ l − 1
should be U ^ 1 ′ t − 1
i.e. the super-index of U ′
should be t − 1 not l − 1
P. 4, p. 5, l. 5: I can't understand the meaning of the sentence "While a smaller j to simulate more accumulate errors along with the inference steps.", please rewrite it.
P. 5, p. 3, l. 3: "two annotators are employed" -> "two annotators were employed"
P. 5, p. 5, l. 3: The sentence starting with "As model-based methods" is ungrammatical. I suggest reformulating it.
P. 5, p. 7, l. 1: "Following previous work (Ritter et al., 2011)" -> "Following the work by Ritter et al. (2011),"
P. 5, p. 8, l. 2: "to online evaluate these two methods" -> "to evaluate these two methods online"
P. 6, p. 1, l. 2: "non-repetitive" -> "non-repetitiveness"
P. 6, p. 6, l. 1: "After sample an utterance" -> "After sampling an utterance"
P. 6, p. 8, l. 2: "generate coherence response" -> "generate coherent responses"
P. 6, p. 8, l. 4: "with the number of turns increases" -> "as the number of turns increases"
P. 7, Figure 4, a - b: The y-axis should be labeled "coherence (%)" instead of "coherent (%)". Same for figure 5 (b) on the next page.
P. 7, p. 3, l. 7: the sentence "since sampled noises are difficult to accurately simulate errors of the inference scene during training" makes no sense. Please rewrite it.
P. 8, Figure 5: Both y-axes have a typo: (a): "Contradition" -> "Contradiction"; (b): "Coherent" -> "Coherence". Is the x-axis in (a) different to the x-axis in (b) and to those in figure 4? if not, I suggest being consistent with the x-labels.
P. 8, p. 1, l. 7: "hierarchy way" -> "hierarchical way"
P. 9, p. 2, l. 1: "incoherence response" -> "incoherent response"
Overall it feels like the paper was rushed at the end. Its earlier 25% is well written and has almost no typos, while the conclusion is barely legible. I suggest proofreading the later half of the paper on top of the corrections I make above. | 5: I can't understand the meaning of the sentence "While a smaller j to simulate more accumulate errors along with the inference steps.", please rewrite it. P. 5, p. 3, l. |
ARR_2022_237_review | ARR_2022 | of the paper include: - The introduction of relation embeddings for relation extraction is not new, for example look at all Knowledge graph completion approaches that explicitly model relation embeddings or works on distantly supervised relation extraction. However, an interesting experiment would be to show the impact that such embeddings can have by comparing with a simple baseline that does not take advantage of those.
- Improvements are incremental across datasets, with the exception of WebNLG. Why mean and standard deviation are not shown for the test set of DocRED?
- It is not clear if the benefit of the method is just performance-wise. Could this particular alignment of entity and relation embeddings (that gives the most in performance) offer some interpretability? ( perhaps this could be shown with a t-SNE plot, i.e. check that their embeddings are close in space).
Comments/Suggestions: - Lines 26-27: Multiple entities typically exist in both sentences and documents and this is the case even for relation classification, not only document-level RE or joint entity and relation extraction.
- Lines 39-42: Point to figure 1 for this particular example.
- Lines 97-98: Rephrase the sentence "one that searches for ... objects" as it is currently confusing - Line 181, Equations 4: $H^s$, $E^s$, $E^o$, etc are never explained.
- Could you show ablations on EPO and SEO? You mention in the Appendix that the proposed method is able to solve all those cases but you don't show if your method is better than others.
- It would be interesting to also show how the method performs when different number of triples reside in the input sequence. Would the method help more sequences with more triples?
Questions: - Improvement still be observed with a better encoder, e.g. RoBERTa-base, instead of BERT?
- How many seeds did you use to report mean and stdev on the development set?
- For DocRED, did you consider the documents as an entire sentence? How do you deal with concepts (multiple entity mentions referring to the same entity)? This information is currently missing from the manuscript. | - For DocRED, did you consider the documents as an entire sentence? How do you deal with concepts (multiple entity mentions referring to the same entity)? This information is currently missing from the manuscript. |
gwDuW7Ok5f | ICLR_2024 | 1. The contribution looks marginal to me since all the methods used in different stage are well designed and demonstrated. Adding another stream for low-resolution might not be a major contribution for a top-tier venue like ICLR.
2. I got some questions for the experimental results which can be seen in the questions part. | 1. The contribution looks marginal to me since all the methods used in different stage are well designed and demonstrated. Adding another stream for low-resolution might not be a major contribution for a top-tier venue like ICLR. |
ICLR_2023_4605 | ICLR_2023 | 1: The main contribution is somehow a little bit unclear. From the ablation study, we can see the performance gain is mostly from PBSD. However this paper is mostly motivated by supervised contrastive learning, that is, the DSCL part. Other than improving the discriminative of the learned representation on tail classes, any other motivations for PBSD?
2: No experiments regarding smaller-size datasets (CIFAR). Both ImageNet and iNaturaList contain relatively large images. It is necessary to validate whether the proposed method, especially the PBSD part can scale to small-size images (let's say, 32*32 resolution). It would be very interesting to add an study regarding image resolution (some candidates: 32, 64, 128, 224, 384).
Minor issues:
1: Fig 1 can be modified. The size of the clouds is not identical. | 1: The main contribution is somehow a little bit unclear. From the ablation study, we can see the performance gain is mostly from PBSD. However this paper is mostly motivated by supervised contrastive learning, that is, the DSCL part. Other than improving the discriminative of the learned representation on tail classes, any other motivations for PBSD? |
NIPS_2021_1917 | NIPS_2021 | The guarantee of the efficient algorithm i.e, Theorem 5.4 seems arguably weaker than that of Theorem 5.2.
Comments/Questions for the Authors:
Line 33: that is -> that are
Line 62: from computational -> from a computational
Line 79: is devised -> was devised
Line 129: access the -> access to the
Remark 1: where is N k
used in the definition? Reference for Mahonian number?
Section 5.3: I am a bit confused about the main result of this section, as far as I understand, Alg. 2 gives a tester for the spread parameter, but then it's not clear if it immediately yields an ( ϵ , δ )
-identity tester as well? For e.g., how is it dealing with ( π , ϕ )
pairs where ϕ = ϕ 0
, but d K ( π 0 , π )
is large? | 2 gives a tester for the spread parameter, but then it's not clear if it immediately yields an ( ϵ , δ ) -identity tester as well? For e.g., how is it dealing with ( π , ϕ ) pairs where ϕ = ϕ 0 , but d K ( π 0 , π ) is large? |
NIPS_2019_663 | NIPS_2019 | of their work?"] The submission is overall reasonably sound, although I have some comments and questions: * Regarding the model itself, I am confused by the GRU-Bayes component. I must be missing something, but why is it not possible to ingest observed data using the GRU itself, as in equation 2? This confusion would perhaps be clarified by an explanation in line 89 of why continuous observations are required. As it is written, I am not sure why it you couldn't just forecast (by solving the ODE defined by equation 3) the hidden state until the next measurement arrives, at which point g(t) and z(t) can be updated to define a new evolution equation for the hidden state. I am guessing the issue here is that this update only changes the derivative of the hidden state and not its value itself, but since the absolute value of the hidden state is not necessarily meaningful, the problem with this approach isn't very clear to me. I imagine the authors have considered such a model, so I would like to understand why it wouldn't be feasible here. * In lines 143-156, it is mentioned that the KL term of the loss can be computed empirically for binomial and Gaussian distributions. I understand that in the case of an Ornstein-Uhlenbeck SDE, the distribution of the observations are known to be (conditionally) Gaussian, but in the case of arbitrary data (e.g. health data), as far as I'm aware, few assumptions can be made of the underlying process. In this case, how is the KL term managed? Is a Gaussian distribution assumption made? Line 291 indicates this is the case, but it should be made clear that this is an assumption imposed on the data. For example, in the case of lab test results as in MIMIC, these values are rarely Gaussian-distributed and may not have Gaussian-distributed observation noise. On a similar note, it's mentioned in line 154 that many real-world cases have very little observation noise relative to the predicted distribution - I assume this is because the predicted distribution has high variance, but this statement could be better qualified (e.g. which real-world cases?). * It is mentioned several times (lines 203, 215) that the GRU (and by extension GRU-ODE-Bayes) excels at long-term forecasting problems, however in both experiments (sections 5.2 and 5.3) only near-term forecasting is explored - in both cases only the next 3 observations are predicted. To support this claim, longer prediction horizons should be considered. * I find it interesting that the experiments on MIMIC do not use any regularly-measured vital signs. I assume this was done to increase the "sporadicity" of the data, but it makes the application setting very unrealistic. It would be very unusual for values such as heart rate, respiratory rate, blood pressure and temperature not to be available in a forecasting problem in the ICU. I also think it's a missed opportunity to potentially highlight the ability of the proposed model to use the relationship between the time series to refine the hidden state. I would like to know why these variables were left out, and ideally how the model would perform in their presence. * I think the experiment in Section 5.5 is quite interesting, but I think a more direct test of the "continuity prior" would be to explicitly test how the model performs (in the low v. high data cases) on data which is explicitly continuous and *not* continuous (or at least, not 2-Lipschitz). The hypothesis that this continuity prior is useful *because* it encodes prior information about the data would be more directly tested by such a setup. At present, we can see that the model outperforms the discretised version in the low data regime, but I fear this discretisation process may introduce other factors which could explain this difference. It is slightly hard to evaluate because I'm not entirely sure what the discretised version consists of , however - this should be explained (perhaps in the appendix). Furthermore, at present there is no particular reason to believe that the data in MIMIC *is* Lipschitz-2 - indeed, in the case of inputs and outputs (Table 4, Appendix), many of these values can be quite non-smooth (e.g. a patient receiving aspirin). * It is mentioned (lines 240-242, section H.1.3) that this approach can handle "non-aligned" time series well. As mentioned, this is quite a challenging problem in the healthcare setting, so I read this with some interest. Do these statements imply that this ability is unique to GRU-ODE-Bayes, and is there a way to experimentally test this claim? My intuition is that any latent-variable model could in theory capture the unobserved "stage" of a patient's disease process, but if GRU-ODE-Bayes has some unique advantage in this setting it would be a valuable contribution. At present it is not clearly demonstrated - the superior performance shown in Table 1 could arise from any number of differences between this model and the baselines. 2.c Clarity: ["Is the submission clearly written? Is it well organized? (If not, please make constructive suggestions for improving its clarity.) Does it adequately inform the reader? (Note: a superbly written paper provides enough information for an expert reader to reproduce its results.)"] While I quite like the layout of the paper (specifically placing related work after a description of the methodology, which is somewhat unusual but makes sense here) and think it is overall well written, I have some minor comments: * Section 4 is placed quite far away from the Figure it refers to (Figure 1). I realise this is because Figure 1 is mentioned in the introduction of the paper, but it makes section 4 somewhat hard to follow. A possible solution would be to place section 4 before the related research, since the only related work it draws on is the NeuralODE-VAE, which is already mentioned in the Introduction. * I appreciate the clear description of baseline methods in Section 5.1. * The comprehensive Appendix is appreciated to provide additional detail about parts of the paper. I did not carefully read additional experiments described in the Appendix (e.g. the Brusselator) out of time consideration. * How are negative log-likelihoods computed for non-probabilistic models in this paper? * Typo on line 426 ("me" instead of "we"). * It would help if the form of p was described somewhere near line 135. As per my above comment, I assume it is a Gaussian distribution, but it's not explicitly stated. 2.d Significance: ["Are the results important? Are others (researchers or practitioners) likely to use the ideas or build on them? Does the submission address a difficult task in a better way than previous work? Does it advance the state of the art in a demonstrable way? Does it provide unique data, unique conclusions about existing data, or a unique theoretical or experimental approach?"] This paper describes quite an interesting approach to the modelling of sporadically-measured time series. I think this will be of interest to the community, and appears to advance state of the art even if it is not explicitly clear where these gains come from. | * The comprehensive Appendix is appreciated to provide additional detail about parts of the paper. I did not carefully read additional experiments described in the Appendix (e.g. the Brusselator) out of time consideration. |
F0XXA9OG13 | ICLR_2024 | - The framework is quite straightforward, and there is not much technical contribution. It is mostly a combination of multiple existing models. And the idea of transferring tabular data into text is not novel at all. There are a bunch of existing works [1][2][3], including one of their baselines TabLLM[4]. The further incorporation of text information from samples from other datasets is just one trivial step forward. Furthermore, [4] actually proved that a template for transferring the tabular data works better than an LLM. Yet, in this paper, there is no comparison for such serialization methods.
- The author didn’t specify what exact features are included in these experimental datasets. Also, it is unclear how many columns are overlapped between different datasets. Yet, if there is a large portion of feature overlapping, maybe simple concatenation and removing or recoding of the missing columns will work just as well. There is no discussion regarding this whatsoever.
- The step 2 in section 2.2 is confusing:
- The authors claimed that they used active learning in step 2. Is the “active learning pipeline” method the same as traditional active learning that select informative samples to label? If not, the description can mislead the readers.
- The authors claimed that they cleaned supplementary dataset T_{1, sup} with a data audit module based on data Shapley scores. More experiments are expected to demonstrate the effectiveness of the audit module. Moreover, it would be better if the authors conducted more ablation studies to show whether the supplementary dataset improve the prediction performance.
- The datasets in Table 1 contain less than 3000 patients. It is very easy for the LLMs (e.g., BioBERT) to overfit the training set. It is unclear how the authors prevent overfitting during the fine-tuning phase.
- In Table 3, the proposed MediTab exhibits the capability to access multiple datasets during its training, in contrast to the other baseline models, which are constrained to employing a single dataset. This discrepancy in data utilization introduces an element of unfairness in the comparison. It would be more appropriate to conduct a comparison against models that have undergone training on multiple datasets. For instance, TabLLM, being a large language model, can readily undertake multi-dataset training with minor adjustments to its data preprocessing procedures. Therefore, a more equitable comparison would involve evaluating MediTab and TabLLM under identical conditions, both in the context of training on a single dataset and across multiple datasets.
- Most medical data, like MIMIC-IV, includes timestamp information of the patients’ multiple visits or collections. This framework completely ignores this part of the medical data, which limits their application to real-world clinical environments. Reference:
1. Bertsimas, Dimitris & Carballo, Kimberly & Ma, Yu & Na, Liangyuan & Boussioux, Léonard & Zeng, Cynthia & Soenksen, Luis & Fuentes, Ignacio. (2022). TabText: a Systematic Approach to Aggregate Knowledge Across Tabular Data Structures. 10.48550/arXiv.2206.10381.
2. Yin, Pengcheng & Neubig, Graham & Yih, Wen-tau & Riedel, Sebastian. TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data. ACL 2020.
3. Li, Y., Li, J., Suhara, Y., Doan, A., and Tan, W.-C. (2020). Deep entity matching with pre-trained language models. Proc. VLDB Endow., 14(1):50–60.
4. Stefan Hegselmann, Alejandro Buendia, Hunter Lang, Monica Agrawal, Xiaoyi Jiang, and David Sontag. Tabllm: Few-shot classification of tabular data with large language models. arXiv preprint arXiv:2210.10723, 2022. | - The authors claimed that they used active learning in step 2. Is the “active learning pipeline” method the same as traditional active learning that select informative samples to label? If not, the description can mislead the readers. |
1WJoJPXwiG | EMNLP_2023 | 1. The detailed distribution of the proposed dataset is unclear;
2. Only three entities (company, organization, asset class) are annotated;
3. The experiments are a bit simple. | 1. The detailed distribution of the proposed dataset is unclear; |
EraNITdn34 | ICLR_2024 | - The contrastive loss with the label has limited novelty.
- The section on related works should be integrated into the main article, as it is difficult to discern the specific improvements in comparison to previous methods.
- While the authors experimented with diverse domains of datasets, both the pretraining and finetuning datasets for each experiment originate from the same dataset. It remains uncertain whether the proposed method can be generalized across domains.
- The proposed method necessitates annotated labels for learning semantic tokens, limiting its application to supervised training. A self-supervised pretraining approach without annotations could be more appealing. | - The proposed method necessitates annotated labels for learning semantic tokens, limiting its application to supervised training. A self-supervised pretraining approach without annotations could be more appealing. |
ICLR_2022_3111 | ICLR_2022 | of the paper: - Since standard FF is used in most experiments in this paper, it seems that the exponential growth of basis in standard FF is not a severe issue for tasks considered in the paper. As a result, the motivation and necessity of proposing LFF needs more elaboration. - Although this paper shows Fourier features bring neural net training benefits, the reason why these benefits convert to better sample efficiency remain missing. Considering how these benefits improve Q value function estimation accuracy is a possible way to bridge such gaps. - Most continuous control experiments are performed on simple and low-dimensional tasks, such as cartpole or mountain car. To fully demonstrate the scalability of LFF, it’s important to show LFF can also help to solve more challenging DRL tasks with higher input dimensionality, such as locomotion of ants or humanoids. - The PPO algorithm considered in this paper is a policy optimization algorithm, not originally designed to work together with value-based DQN methods. My suggestion is to add similar discussions for some actor-critic like algorithms, such as soft actor critic (SAC), A3C, etc. For such algorithms, improvement of DQN training efficiency should bring more performance gains. - It's better to add a related work section to systematically review previous feature-based methods in DRL or classic RL scenarios. | - Most continuous control experiments are performed on simple and low-dimensional tasks, such as cartpole or mountain car. To fully demonstrate the scalability of LFF, it’s important to show LFF can also help to solve more challenging DRL tasks with higher input dimensionality, such as locomotion of ants or humanoids. |
NIPS_2018_743 | NIPS_2018 | - quality: It seems to me that the chosen "algorithm" for choosing dendrite synapses is very much like dropout with a fixed mask. Introducing this sparsity is a form of regularization, and a more fair comparison would be to do a similar regularization for the feed-forward nets (e.g. dropout, instead of bn/ln; for small networks like this as far as I know bn/ln are more helpful for optimization than regularization). It also seems to me that the proposed structure is very much like alternating layers of maxout and regular units, with this random-fixed dropout; I think this would be worth comparing to. I think there are some references missing, in the area of similar/relevant neuroscience models and in the area of learned piecewise activation functions. It would be reassuring to mention the computation time required and whether this differs from standard ff nets. Also, most notably, there are no accuracy results presented, no val/test results, and no mention is made of generalization performance for the MNIST/CIFAR experiments. - clarity: Some of the sentences are oddly constructed, long, or contain minor spelling and grammar errors .The manuscript should be further proofread for these issues. For readers not familiar with the biological language, it would be helpful to have a diagram of a neuron/dendritic arbour; in the appendix if necessary. It was not 100% clear to me from the explanations whether or not the networks compared have the same numbers of parameters; this seems like an important point to confirm. - significance: I find it hard to assess the significance without generalizaion/accuracy results for the MNIST/CIFAR experiments. REPRODUCABILITY: For the most part the experiments and hyperparameters are well-explained (some specific comments below), and I would hope the authors would make their code available. SPECIFIC COMMENTS: - in the abstract, I think it should say something like "...attain greater expressivity, as measured by the change in linear regions in output space after [citation]. " instead of just "attain greater expressivity" - it would be nice to see learning curves for all experiments, at least in an appendix. - in Figure 1, it would be very helpful to show a FNN and D-Net with the same number of parameters in each (unless I misunderstood, the FNN has 20 and the DNN has 16). - There are some "For the rest part" -> for the rest of (or rephrase) - missing references: instead of having a section just about Maxout networks, I think the related work should have a section called something like "learned piecewise-linear activation functions" which includes maxout and other works in this category, e.g. Noisy Activation Functions (Gulcehre 2016). Also, it's not really my field but I believe there is some work on two-compartment models in neuroscience and modeling these as deep nets which would be quite relevant for this work. - It always bothers me somewhat when people refer to the brain as 'sparse' and use this as a justification for sparse neural networks. Yes, overall/considering all neurons the brain as one network it would be sparse, but building a 1000 unit network to do classification is much more analogous to a functional subunit of the brain (e.g. a subsection of the visual pathway), and connections in these networks are frequently quite dense. The authors are not the first to make this argument and I am certainly not blaming them for its origin, but I take this opportunity to point it out as (I believe) flawed. :) - the definition of "neuron transition" is not clear to me - the sentence before Definition 2 suggests that it is a change in _classification_ (output space), which leads to a switch in the linear region of a piecewise linear function, but the Definition and the subsequent sentence seem to imply it is only the latter part (a change from one linear region of the activation function to another; nothing to do with the output space). If the latter, it is not clear to me how/whether or not these "transitions" say anything useful about learning. If it is the former (more similar to Raghu et al), I find the definition given unclear. - I like the expressiveness experiments, but It would be nice to see some actual numbers instead of just descriptions. - unless I missed it somehow, the "SNN" is never defined. and it is very unclear to me whether it refers to a self-organizing neural network cited in [12]. or a "sparse" neural network, and in any case what exactly this architecture is. - also possible I missed it despite looking, but I could not find what non-linearity is used on D-Nets for the non-dendrite units OVERALL ASSESSMENT: My biggest issue with the paper is the lack of mention/results about generalization on MNIST/CIFAR, and the ambiguity about fair comparison. If these issues are resolved I would be very willing to change my rating. CONFIDENCE IN MY SCORE: This is the first time I've given a confidence of 5. With due credit to the authors, I believe I've understood most things about the paper, and I am familiar with the relevant work. Of course I'm not infallible and it's possible I've missed or misunderstood something, especially relating to the things I noted finding unclear. | - in the abstract, I think it should say something like "...attain greater expressivity, as measured by the change in linear regions in output space after [citation]. " instead of just "attain greater expressivity" - it would be nice to see learning curves for all experiments, at least in an appendix. |
ICLR_2021_2929 | ICLR_2021 | Weakness - It's unclear what this paper's motivation is as I do not see a clear application from the proposed method. The paper showed results mapping one RGB image to another RGB image (with a different style). When do we need this domain adaptation, and how would this be useful? For example, it would have been better to demonstrate the methodology's use on some actual tasks involving domain adaptation, such as adapting a model trained on a synthetic dataset to a real dataset. - Apart from the motivation, there are no comparisons against any other potential baseline approaches in the evaluation. For example, from the results shown in the paper and the supplementary material, I believe that a simple photographic style transfer method would achieve similar, if not better, effects. - When reporting the semantic segmentation IoU as a metric, it would be essential to show the baseline performance. What is the IoU when applying the segmentation model to the original visual domain? How much improvement can we get by mapping the images in the input domain to the output domain? Showing results only after the image-to-image translation is not informative. - The technical novelty of the paper is somewhat limited as well. Existing work (Pix2PixHD) has shown that one can improve the visual quality of the synthesized images using an instance edge map. This paper extends that to use edges (which include both object contour and internal structures). However, from the image synthesis perspective, this EPS input needs to be obtained from some input RGB images in the first place. Since we already start with an RGB image, why should we resort to this somewhat complicated image synthesis pipeline (as opposed to simple color/style transfer)?
In sum, while the paper showcased improved visual quality on the translated images, I have concerns about this paper in its unclear motivation and limited evaluation. I would appreciate it if the authors could clarify and, if possible, provide comparisons with the baselines (e.g., style transfer). | - It's unclear what this paper's motivation is as I do not see a clear application from the proposed method. The paper showed results mapping one RGB image to another RGB image (with a different style). When do we need this domain adaptation, and how would this be useful? For example, it would have been better to demonstrate the methodology's use on some actual tasks involving domain adaptation, such as adapting a model trained on a synthetic dataset to a real dataset. |
NrI1OkZkiy | ICLR_2024 | * The motivation of this paper should be further enhanced. What issues do this paper address that previous works have not solved?
* I do not agree some statements in the introduction, e.g.’ the majority of these methods fail to consider the interaction between different senses’. There are tons of works that focus on multimodal/multisensory interactions. To name a few:
1. MISA: Modality-Invariant and -Specific Representations for Multimodal Sentiment Analysis, ACM MM 2020
2. M2FNet: Multi-modal Fusion Network for Emotion Recognition in Conversation, CVPR workshop 2022
3. MM-DFN: Multimodal Dynamic Fusion Network for Emotion Recognition in Conversations, ICASSP 2022
* The paper regards MULT as the only deep learning based baseline that considers cross-sensory interaction but MULT was proposed in 2019 and thus sort of out of fashion.
* Authors state their concern about MULT on computational efficiency, but I don’t see any discussion and comparison on efficiency so it is not clear how this method addresses this issue. | 1. MISA: Modality-Invariant and -Specific Representations for Multimodal Sentiment Analysis, ACM MM 2020 2. M2FNet: Multi-modal Fusion Network for Emotion Recognition in Conversation, CVPR workshop 2022 3. MM-DFN: Multimodal Dynamic Fusion Network for Emotion Recognition in Conversations, ICASSP 2022 * The paper regards MULT as the only deep learning based baseline that considers cross-sensory interaction but MULT was proposed in 2019 and thus sort of out of fashion. |
NIPS_2022_2592 | NIPS_2022 | - (major) I don’t agree with the limitation (ii) of current TN models: “At least one Nth-order factor is required to physically inherit the complex interactions from an Nth-order tensor”. TT and TR can model complex modes interactions if the ranks are large enough. The fact that there is a lack of direct connections from any pair of nodes is not a limitation because any nodes are fully connected through a TR or TT. However, the price to pay with TT or TR to model complex modes interactions is having bigger core tensor (larger number of parameters). The new proposed topology has also a large price to pay in terms of model size because the core tensor C grows exponentially with the number of dimensions, which makes it intractable in practice. The paper lacks from a comparison of TR/TT and TW for a fixed size of both models (see my criticism to experiments below). - The new proposed model can be used only with a small number of dimensions because of the curse of dimensionality imposed by the core tensor C. - (major) I think the proposed TW model is equivalent to TR by noting that, if the core tensor C is represented by a TR (this can be done always), then by fusing this TR with the cores G_n we can reach to TR representation equivalent to the former TW model. I would have liked to see this analysis in the paper and a discussion justifying TW over TR. - (major) Comparison against other models in the experiments are unclear. The value of the used ranks for all the models are omitted which make not possible a fair comparison. To show the superiority of TW over TT and TR, the authors must compare the tensor completion results for all the models but having the same number of model parameters. The number of model parameters can be computed by adding the number of entries of all core tensors for each model (see my question about experiment settings below). - (minor) The title should include the term “tensor completion” because that is the only application of the new model that is presented in the paper. - (minor) The absolute value operation in the definition of the Frobenius norm in line 77 is not needed because tensor entries are real numbers. - (minor) I don’t agree with the statement in line 163: “Apparently, the O(NIR^3+R^N) scales exponentially”. The exponential grow is not apparent, it is a fact.
I updated my scores after rebuttal. See my comments below
Yes, the authors have stated that the main limitation of their proposed model is its exponentionally grow of model parameters with the number of dimensions. | - (major) Comparison against other models in the experiments are unclear. The value of the used ranks for all the models are omitted which make not possible a fair comparison. To show the superiority of TW over TT and TR, the authors must compare the tensor completion results for all the models but having the same number of model parameters. The number of model parameters can be computed by adding the number of entries of all core tensors for each model (see my question about experiment settings below). |
ICLR_2022_2725 | ICLR_2022 | 1. The differences of the proposed instance normalization from other normalization methods, such as batch/group/layer normalization, should be explained in detail. What’s more, its advantages should be elaborated. 2. The memorized restitution sounds to be the most important contribution of the proposed method. For memorized restitution, why not consider to combine general feature map G with a memory bank? On the other hand, as D is updated with a memory vector, G can be computed as R - the updated D. Please clarify the logic behind using current G. 3. It would be better if more information about memory vector M is provided. To be specific, what is the relation between M and D in multiple iterations? 4. In Table 2, under the leave one out setting the proposed method only be compared to “+LFP”. As ATA is a bit better than FP according to the results in Table 1, it would be more convincing to also including it in the comparison. 5. The ablation study is helpful to check the significance of each module. It would be better to also show the results of w/o memory bank and w/o restitution. | 4. In Table 2, under the leave one out setting the proposed method only be compared to “+LFP”. As ATA is a bit better than FP according to the results in Table 1, it would be more convincing to also including it in the comparison. |
ICLR_2022_2033 | ICLR_2022 | The choice to call with the same name (EfficientPhys) two different approaches is a bit confusing. Models are indeed three: EfficientPhys-C (convolution), EfficientPhys-T1 (transformer-regular), EfficientPhys-T2 (transformer-shrinked). They share the same normalization block and arguably a similar output block, but with a different core. They also perform in a very different way: only '-C' outperforms other approaches in terms of MAE, RMSE, ro (except DualGAN), and is the Pareto optimum for MAE/latency; '-T1' is good in terms of MAE, RMSE, ro, but not in latency; '-T2' in contrast show interesting latency but not so good accuracy. It is difficult to consider these models as unique, and results can not be claimed for EfficientPhys but for EfficientPhys-C.
the lack of a complete comparison with the best other approach, DualGAN, is disappointing. As explained by the authors, this is due to the lack of results of DualGAN on the public datasets PURE, MMSE, and cannot be totally addressed to them. Nonetheless, this diminishes the claimed results since DualGAN outperforms EfficientPhys models in UBFC dataset.
Self-Attention-Shifted Network is described by eq. 3 which is too verbose and somehow obscure, and needs to be better explained as it describes a core component of the model. It fails to give an intution of how the module works.
It is not explained at all which is the final task (and consequently modules on top of architecture) of both models: probably regression?
Fig. 2: the normalization module seems different in the two versions, but reading the text it seems the very same. Figures are great value for giving an intuition of how a system works, but a standardization of the pictograms is needed.
Fig. 4 is a bit confusing in the 0/50 latency range, 2.5/4.0 MAE: the chosen symbols overlap.
minor problems about the text: -- pag. 4, after eq. 1: 'To address this, we add a batch-normalization layer followed by the difference layer' should be: 'To address this, we add a batch-normalization layer following the difference layer' -- pag. 5, second paragraph of section 3.2: the first sentence, 'Since the 2D Swin transfromer...', is not correct and needs to be rephrased -- pag. 9, last two sentences of section 6: 'However, we are aware...': they are quite involuted and the meaning is not clear. | 2: the normalization module seems different in the two versions, but reading the text it seems the very same. Figures are great value for giving an intuition of how a system works, but a standardization of the pictograms is needed. Fig. 4 is a bit confusing in the 0/50 latency range, 2.5/4.0 MAE: the chosen symbols overlap. minor problems about the text: -- pag. 4, after eq. |
ICLR_2022_1905 | ICLR_2022 | Weakness: 1.The author claim that ‘The observation consistently shows that only parts of subdivision splines are useful for decision boundary; and the goal of pruning is to remove those (redundant) subdivision splines and find winning tickets.’, however, in theoretical part, the author didn’t provide how the proposed algorithm in detail to remove the subdivision splines. Will the algorithm need extra computation cost for such space partition building? 2. When the author introduces the proposed algorithm, the author didn’t analysis if such method has the same convergence guarantee as Lottery Ticket Hypothesis. If so, what is the bound of the error probability? 3.In the experiment, the author didn’t consider Vision Transformer, which is an important SOTA model in image classification. And it is unsure if such technique is still working for larger image dataset such as ImageNet. Will the pruning strategy will be different in self attention layers? | 1.The author claim that ‘The observation consistently shows that only parts of subdivision splines are useful for decision boundary; and the goal of pruning is to remove those (redundant) subdivision splines and find winning tickets.’, however, in theoretical part, the author didn’t provide how the proposed algorithm in detail to remove the subdivision splines. Will the algorithm need extra computation cost for such space partition building? |
ICLR_2022_1912 | ICLR_2022 | (OR SUGGESTIONS) ==
There is a certain amount of minor typos (see some below) but also a lack of some term definitions (because of late copy-paste in Appendix to observe the page limitation rule probably). Table 3 is actually very useful for the understanding, I would not put it in Appendix.
Some things which could be clarified:
Reference to deep learning methods is done for state-of-the-approaches. In the paper, it seems it is more about shallow networks (max 2 layers) that are discussed.
p.3, A3, Eq. 2: W1 and W2 are not defined. I guess they denote the Encoder and the Decoder network.
p.3, A4, eq.3: W and V not defined, same as above
Eq. 4: N is not defined
p.4, A6: M not defined.
Eq. 13, p.7: operation dMat not defined (too late to put it in p.2 of supplementary material), same in eq. for W* (which should be indexed)
Also to help the reader, it would be nice to start Sections 2 and 3 with a summary of what the section plans to achieve/demonstrate. In particular, this is quite difficult in Section 3 to follow the objective. For instance, a sentence such as "Prop. 2 also leads to the following Corollary." does not help to understand the implication of such result.
Unless I missed it, differences of implication between LR-EDLAE-1 and LR-EDLAE-2 (the proposed methods) are not enough carefully detailed.
In Table 1, it would be clearer for the reader to identify the best methods for instance, 1) by putting in bold the best obtained results and 2) by underlying the 2nd best result. Table 1 refers to Table 3 for more details: I think this is a mistake because it does not let the paper "self-contained".
Am I missing something or Mult-VAE/DAE are used as baselines during experiments but are not discussed before (contrarily to the other approaches)? Why?
Some typos :
in abstract : « (surprisnig) »
abstract: missing "-" for low-rank and closed form
p.1 in introduction: "the linear autoencoders [...] which encompasses"
missing upper case : p.2, 2nd paragraph « . we generalize the »
p.2 missing "-", "These models produce closed form full-rank estimators"; "However, no closed form solutions"; "ADMM based solutions"; "the full rank W"
end of p.2 : « The weighted unclear norm »
beginning of p. 3 : missing lower case « Therefore, Nuclear-norm regularizers »
inconsistency in convention naming for equations : eq. Or Eq. / eq (1) or eq. 1 ; 5 times in p. 3, same for Table or table, Proposition or Prop.
p.3, A2 : « This approach useS »
p.3, A4 : « probabolistic »
p.4, first row: choose between "hyperparameter" (p.3 after Lemma 1) and "hyper-parameter" (p. 4)
p.4, (ii): "choices of p [...] produces" ; "all entries in X is"
p.4, (ii): inconsistency in naming convention: nuclear-norm-based and nuclear-norm based in the same sentence, sometimes it is nuclear norm (check everywhere in the paper)
Before eq.7: "Its a closed-form solution is."
choose between "Frobenius-norm regularizer" p.4,5 and "Frobenius norm regularizers" p.5
before Sec.3, p.5: "two low-rank Frobenius-norm-based modelS"
p.8 in Table 1: "Frobinius"
eq.8, missing " " in 3rd norm, extra space to remoce: "equivalently ,"
missing "-" for closed form solutions after Proposition 2 p.5, in Section 4 p.7, in Section 5 p.7, in Q1 p.8, Q.2 in p.9 twice, Q.3, in conclusion ; check for low-rank everywhere also
p.5 choose between "rearrangement" and "re-arrangement"
Corollary 1, p.6: Missing Eq. before (9)
Section 4, p.7: missing "-" for state-of-the-art
Section 4, p.7: after ADMM: \cite instead of \citep, same after closed-form and EDLAE
Table 1: refer TO
p.8 in Q1: (eq. (11))
p8, Q1: "one of the most popular implicit matrix factorization algorithmS"
Q2: "most of them reaches"
Q2, missing "-" in EDLAE based approaches
Q2, p.9: ADMM method performS slightly better ; none-the-less
p.9: "all the models add either a nuclear-norm-based [...] or a Frobenius-norm based regularizer." ; check also the abstract
p.9 "The Frobenius-norm models are more express"
Supplementary: section A autoencoder vs auto-encoder -Supp, section A: \cite instead of \citep for 2nd paragraph | 2: W1 and W2 are not defined. I guess they denote the Encoder and the Decoder network. p.3, A4, eq.3: W and V not defined, same as above Eq. |
i3e92uSZCp | ICLR_2025 | - The experimental scenarios are simple, in which the exampled prompts and semantically controlled spaces are easy to follow yet fail to demonstrate the generalizablity and scalability --- after all, the method relies much on the description of states. LGSD’s dependence on LLMs for real-time distance evaluation might limit scalability to complex, real-time environments.
- As I understand, users have to provide specified "skill constraints" (via prompts, such as "move north" etc.), then how can it still be called "skill discovery" since users are specifying skills?
- The comparison with some baselines is somehow unfair since they lack the prior knowledge of users or any language embedding computation. A better comparison should be considered. | - The comparison with some baselines is somehow unfair since they lack the prior knowledge of users or any language embedding computation. A better comparison should be considered. |
NIPS_2022_2523 | NIPS_2022 | Novelty is incremental. The major change over the baseline ResTv1 is the pixel-shuffle only, and the rest of the modifications are not new and cannot be one of the contributions.
Any intuitions or insights of why the architecture should be designed like this are missing: why the upsampling module should be involved? What can we learn from the architectural modifications from ResTv1 to RestV2 such as the block number at the first stage is halved.
Experimental justifications in Sections 3.4 and 4.3 do not seem to be enough backups for the explanation of the proposed architectural design. For example, Figure 3 tells us the upsampling module seemingly reduces the difference in the log amplitude between particular frequencies and the center frequency. However, this does not indicate the upsampling module is necessarily used. Furthermore, one may naturally ask the questions: 1) why do some specific frequencies only benefit from information recovery?; 2) If the upsampling module really helps information flow, shouldn't the entire frequency have the same effect?; 3) why the output-side layers do not benefit from it? Furthermore, Figure 4 is not clearly illustrated.
The details of Pixel-shuffle are not clearly presented. Is it the pixel-shuffle operation used in the super-resolution field? Then, why the dimensionality remains the same after upsampling in Figure 2. (b)?
The authors did not provide the limitations and potential negative societal impact of their work. | 3) why the output-side layers do not benefit from it? Furthermore, Figure 4 is not clearly illustrated. The details of Pixel-shuffle are not clearly presented. Is it the pixel-shuffle operation used in the super-resolution field? Then, why the dimensionality remains the same after upsampling in Figure 2. (b)? The authors did not provide the limitations and potential negative societal impact of their work. |
NIPS_2018_857 | NIPS_2018 | Weakness: - Long range contexts may be helpful for object detection as shown in [a, b]. For example, the sofa in Figure 1 may help detect the monitor. But in the SNIPER, images are cropped into chips, which makes the detector cannot benefit from long range contexts. Is there any idea to address this? - The writing should be improved. Some points in the paper is unclear to me. 1. In line 121, authors said partially overlapped ground-truth instances are cropped. But is there any threshold for the partial overlap? In the lower left figure of the Figure 1 right side, there is a sofa whose bounding-box is partially overlapped with the chip, but not shown in a red rectangle. 2. In line 165, authors claimed that a large object which may generate a valid small proposal after being cropped. This is a follow-up question of the previous one. In the upper left figure of the Figure 1 right side, I would imagine the corner of the sofa would make some very small proposals to be valid and labelled as sofa. Does that distract the training process since there may be too little information to classify the little proposal to sofa? 3. Are the negative chips fixed after being generated from the lightweight RPN? Or they will be updated while the RPN is trained in the later stage? Would this (alternating between generating negative chips and train the network) help the performance? 4. What are the r^i_{min}'s, r^i_{max}'s and n in line 112? 5. In the last line of table3, the AP50 is claimed to be 48.5. Is it a typo? [a] Wang et al. Non-local neural networks. In CVPR 2018. [b] Hu et al. Relation Networks for Object Detection. In CVPR 2018. ----- Authors' response addressed most of my questions. After reading the response, I'd like to remain my overall score. I think the proposed method is useful in object detection by enabling BN and improving the speed, and I vote for acceptance. The writing issues should be fixed in the later versions. | 3. Are the negative chips fixed after being generated from the lightweight RPN? Or they will be updated while the RPN is trained in the later stage? Would this (alternating between generating negative chips and train the network) help the performance? |
tSfZo6nSN1 | EMNLP_2023 | 1. The proposed approach fails to outperform existing works. For example, in Table 1, the B-4 of proposed approach is lower than the basic baseline ViT-transformer on MIMIC-ABN. Why the ViT-transformer is not evaluated on MIMIC-CXR data set.
2. What if the patients are the first time visitors without historical reports. The authors need to evaluate the proposed approach on new patients and old patients respectively.
3. The experiment setting is not fair. For the proposed approach, the historical reports of patients are used to generate reports for current input. While these data are unseen to baseline works. These historical reports should be added into the training data set of baselines.
4. One existing work which also includes historical reports in modeling should be referenced and discussed.
DeltaNet: Conditional medical report generation for COVID-19 diagnosis, Coling 2022.
5. Since the proposed approach targets to mine the progress of diseases to generate better results. Such intermediate results (the disease progress) should be evaluated in experiments.
6. The IU data set should be included in experiments. | 2. What if the patients are the first time visitors without historical reports. The authors need to evaluate the proposed approach on new patients and old patients respectively. |
NIPS_2022_1567 | NIPS_2022 | , I want to discuss the novelty aspect of this paper. On one hand, the novelty is not remarkable: the authors are adopting the Swin backbone, while concatenation fusion is also used in Stark. However, the novelty is not limited to these aspects. On the contrary, I believe that this paper brings substantial value to the tracking field by consolidating existing techniques, while investigating important details in order to achieve a simple, yet highly powerful tracking framework. Importantly the authors provide valuable insights when motivating their approach and comparing to other techniques (modifications of fusion, transformer architecture, losses, etc.). Lastly, I find the motion token a very interesting novelty that could reopen a long-forgotten direction in tracking, namely exploiting motion prediction and other dynamic information. Although seemingly incremental at the first glance, I consider the novelty to be significant based on how much this paper advances the useful knowledge in the field.
Other strong points of the paper are:
• Very strong results, clearly SOTA.
• Simple and elegant architecture.
• Insightful discussions.
• Method and motivation are easy to follow.
• Interesting ablative experiments on multiple datasets.
• Relatively fast frame-rates. Weaknesses:
Details regarding pre-training are missing. I assume that the authors use ImageNet-22k pre-training, while most trackers use ImageNet-1k. This has shown to give about 2-3% on LaSOT in previous papers. The authors should therefore analyze this in a separate experiment.
By looking into the more detailed results in the supplementary material, it seems that the improvements mostly stem from increased accuracy. That is, better bounding box regression. The robustness (low overlap scores in the success plot) seems to be on par with recent trackers. While accuracy is also important, the major challenge in tracking is to improve the robustness. It is important to discuss these aspects in the paper in order to understand where and how SwinTrack performs better compared to other trackers. Moreover, please add more high-performing trackers to the success plots in the supplementary material.
Unfortunately, the design of the motion token is motivated. For instance, why are the past box encodings concatenated in the channel dimension, and not processed in some other manner? Why add it as a single token in the transformer, instead of one per box?
Results on UAV and VOT should be added, even if the tracker does not beat SOTA.
There are quite a few language mistakes.
I could not find discussion of negative social impact or limitations. It would be good to add these. | • Insightful discussions.• Method and motivation are easy to follow. |
NIPS_2019_220 | NIPS_2019 | 1. Unclear experimental methodology. The paper states that 300W-LP is used to train the model, but later it is claimed same procedure is used as was used for baselines. Most baselines do not use 300W-LP dataset in their training. Is 300W-LP used in all experiments or just some? If it is used in all this would provide an unfair advantage to the proposed method. 2. Missing link to similar work on Continuous Conditional Random Fields [Ristovski 2013] and Continuous Conditional Neural Fields [Baltrusaitis 2014] that has a similar structure of the CRF and ability to perform exact inference. 3. What is Gaussian NLL? This seems to come out of nowhere and is not mentioned anywhere in the paper, besides the ablation study? Trivia: Consider replacing "difference mean" with "expected difference" between two landmarks (I believe it would be clearer) | 1. Unclear experimental methodology. The paper states that 300W-LP is used to train the model, but later it is claimed same procedure is used as was used for baselines. Most baselines do not use 300W-LP dataset in their training. Is 300W-LP used in all experiments or just some? If it is used in all this would provide an unfair advantage to the proposed method. |
ICLR_2023_2721 | ICLR_2023 | Weakness: 1.In 2-WAY GRADIENT TRANSFER, the client's gradient information will be passed to the server. However, issues related to data privacy during gradient tranmission do not appear to be explored in the paper. 2. Some technique behind the algorithm may not be that novel, such as computation offloading and gradient augmentation. | 2. Some technique behind the algorithm may not be that novel, such as computation offloading and gradient augmentation. |
NIPS_2019_1350 | NIPS_2019 | . Some important related works are not discussed. Multi-task (i.e., multivariate) GPs have been widely studied in machine learning community. Although most of them assume that data values are associated with points, it would be better to mention several related multi-task GPs (e.g., [1],[2],[3]). Especially, [1] designed the dependent GP by a linear mixing of latent GPs, which is similar to this submission. Also, there is an important related work missing here: [4]. I think this paper essentially addressed a related task: Predicting the fine-grained data by using auxiliary data sets with various granularities. I would like the authors to clarify the differences and advantages of this submission. [Quality] Strengths. This paper is technically sound except for some concerns. The authors evaluate the proposed model in the simple experimental setting using synthetic and real data sets. Weaknesses. My concerns about the proposed model are as follows: 1) I have understood that the integral in Equation (1) corresponds to bag observation model in [Law et al., NeurIPS'18] or spatial aggregation process in [4]. The formulation introduced by the authors assume that the observations are obtained by averaging over the corresponding support $v$. However, the data might be aggregated by another procedure, e.g., simple summation or population weighted average; actually the disease incident data are often available in count, or rate per the number of residents. 2) In order to handle various data types (e.g., count and rate), shouldn't the corresponding aggregation processes be performed at the likelihood level? 3) I think it would be more efficient to estimate ${a_{d,q}}$ instead of $B_q$ since $b^q_{d,d'} = a_{d,q}a_{d',q}$. The major weakness of this submission is in the experiments. First, the proposed model should be compared with any typical baseline, such as regression-based model with aggregation process (e.g., Law et al., NeurIPS'18, [4]) and multi-task GP with point-referenced data (e.g., [1]). I believe the previous multi-task GP can be applied via the simplification; that is, each data value at the support $v$ is assumed to be associated with the representative point (e.g., centroid) of its support (as in the previous work [4]). Second, the extensive experiments are helpful to verify the effectiveness of the proposed model. In all the experiments, the authors consider two tasks. I would like to see the experimental results considering more tasks; then it is a good idea to discuss how to determine the number of latent GPs $Q$. Short question: I was wondering if you could give me the detail of *resolution 5 \times 5* in the experimental setting of fertility rates. [Clarity] This paper is easy to understand. Some typos: 1) In line 235, *low-cost* should be *low-accuracy*? 2) In line 239, *GP process* should be *GP*. [Significance] Aggregated data with different supports are commonplace in a wide variety of applications, so I think this is an important problem to tackle. However, the major weakness of the submission in my view is that the evaluation of the proposed model is not enough, so the effectiveness/usefulness of the model is unclear from the experimental results. I think it would be great to compare the proposed model with baseline methods. [1] Y. W. Teh et al., Semiparametric Latent Factor Models, AISTATS, 333-340, 2005. [2] P. Boyle et al., Dependent Gaussian Processes, NeurIPS, 217-224, 2005. [3] E. Bonilla et al., Multi-task Gaussian Process Prediction, NeurIPS, 153-160, 2008. [4] Y. Tanaka et al., Refining Coarse-grained Spatial Data Using Auxiliary Spatial Data Sets with Various Granularities, AAAI, 2019. https://arxiv.org/abs/1809.07952 ------------------------------ After author feedback: I appreciate the responses to my questions. The new experimental results in the rebuttal is a welcome addition. In light of this, I upgraded my score. The proposal is a combination of the coregionalization and the concept of aggregation process used in block-kriging; this is a simple but effective way. I also agree that a sensor experiment is one of the applications with the proposed model. But I'm still of the opinion that there is not enough experiments and/or discussions to support the authors' claims. The authors state that the model is a general framework and has many applications related to geostatistics (lines 14-23); the support $v$ corresponds to the 2-dimensional region, e.g., borough (line 92). As described in Related work (lines 222-229), the proposed model strongly relates to spatial downscaling and disaggregation in geostatistics. If anything, I think this application that contains spatial aggregation is a more critical one for the proposed model. In the spatial data setting, a wide variety of data sets is available at various spatial granularities (for instance, New York City publish open data in [https://opendata.cityofnewyork.us]). Naturally, one would like to handle these data sets simultaneously (as in Law et al., NeurIPS'18, [4]); namely the setting with a large number of tasks. In that case, I believe the authors should discuss several issues; for example, the sensitivity of the number of latent GPs $Q$, the approximate accuracy of integral over regions, etc. I think it would be better to clarify the scope of this study and discuss the above issues. | 1) I have understood that the integral in Equation (1) corresponds to bag observation model in [Law et al., NeurIPS'18] or spatial aggregation process in [4]. The formulation introduced by the authors assume that the observations are obtained by averaging over the corresponding support $v$. However, the data might be aggregated by another procedure, e.g., simple summation or population weighted average; actually the disease incident data are often available in count, or rate per the number of residents. |
5BWvVIa5Uz | EMNLP_2023 | - The contribution is too limited. The paper only took a pre-trained model family and evaluated them on 4 existing datasets.
- No in-depth analysis. The authors found inverse scaling happens over compute, but why? It would make the paper much more solid if the authors can provide some analysis explaining such training dynamics. | - No in-depth analysis. The authors found inverse scaling happens over compute, but why? It would make the paper much more solid if the authors can provide some analysis explaining such training dynamics. |
ICLR_2021_973 | ICLR_2021 | .
Clearly state your recommendation (accept or reject) with one or two key reasons for this choice. I recommend acceptance. The number of updates needed to learn realistic brain-like representations is a fair criticism of current models, and this paper demonstrates that this number can be greatly reduced, with moderate reduction in Brain-Score. I was surprised that it worked so well.
Ask questions you would like answered by the authors to help you clarify your understanding of the paper and provide the additional evidence you need to be confident in your assessment. - Is the third method (updating only down-sampling layers) meant to be biologically relevant? If so, can anything more specific be said about this, other than that different cortical layers learn at different rates? - Given that the brain does everything in parallel, why is the number of weight updates a better metric than the number of network updates?
Provide additional feedback with the aim to improve the paper. - Bottom of pg. 4: I think 37 bits / synapse (Zador, 2019) relates to specification of the target neuron rather than specification of the connection weight. So I’m not sure its obvious how this relates to the weight compression scheme. The target neurons are already fully specified in CORnet-S. - Pg. 5: “The training time reduction is less drastic than the parameter reduction because most gradients are still computed for early down-sampling layers (Discussion).” This seems not to have been revisited in the Discussion (which is fine, just delete “Discussion”). - Fig. 3: Did you experiment with just training the middle Conv layers (as opposed to upsample or downsample layers)? - Fig. 3: Why go to 0 trained parameters for downstream training, but minimum ~1M trained parameters for CT? - Fig. 4: On the color bar, presumably one of the labels should say “worse”. - Section B.1: How many Gaussian components were used, or how many parameters total? Or if different for each layer, what was the maximum across all layers? - Section B.3: I wasn’t clear on the numbers of parameters used in each approach. - D.1: How were CORnet-S clusters mapped to ResNet blocks? I thought different clusters were used in each layer. If not, maybe this could be highlighted in Section 4. | . Clearly state your recommendation (accept or reject) with one or two key reasons for this choice. I recommend acceptance. The number of updates needed to learn realistic brain-like representations is a fair criticism of current models, and this paper demonstrates that this number can be greatly reduced, with moderate reduction in Brain-Score. I was surprised that it worked so well. Ask questions you would like answered by the authors to help you clarify your understanding of the paper and provide the additional evidence you need to be confident in your assessment. |
NIPS_2017_575 | NIPS_2017 | - While the general architecture of the model is described well and is illustrated by figures, architectural details lack mathematical definition, for example multi-head attention. Why is there a split arrow in Figure 2 right, bottom right? I assume these are the inputs for the attention layer, namely query, keys, and values. Are the same vectors used for keys and values here or different sections of them? A formal definition of this would greatly help readers understand this.
- The proposed model contains lots of hyperparameters, and the most important ones are evaluated in ablation studies in the experimental section. It would have been nice to see significance tests for the various configurations in Table 3.
- The complexity argument claims that self-attention models have a maximum path length of 1 which should help maintaining information flow between distant symbols (i.e. long-range dependencies). It would be good to see this empirically validated by evaluating performance on long sentences specifically.
Minor comments:
- Are you using dropout on the source/target embeddings?
- Line 146: There seems to be dangling "2" | - While the general architecture of the model is described well and is illustrated by figures, architectural details lack mathematical definition, for example multi-head attention. Why is there a split arrow in Figure 2 right, bottom right? I assume these are the inputs for the attention layer, namely query, keys, and values. Are the same vectors used for keys and values here or different sections of them? A formal definition of this would greatly help readers understand this. |
NIPS_2020_1759 | NIPS_2020 | - The result for the proposed algorithm seems to require an additional assumption that each individual’s data is iid drawn from the same distribution. Otherwise I don’t see how that \sqrt{m} argument in the beginning of Section 5.1 works out and how Theorem 6 can be applied to prove Theorem 7. I find such assumption unjustifiable because in practice, each users’ preferred set of “emojis” are very different. - It is unreasonable to assume each user contributes m data points. Usually users are very heterogeneous in terms of the number of data points they produce. This affects the user-complexity calculations and makes the problem more interesting. In the worst case, m can be as large as n, still with appropriate truncation, one can obtain meaningful frequency estimates with data-dependent privacy-mechanisms and utility bounds. Unfortunately this is not considered in the paper. - Han et al that the authors cited for the lower bounds of non-private estimation of L1-distance establishes an adaptive (upper and lower) bound that depends on the entropy of the distribution. While this, in the worst case, is proportional to k, it is often much smaller. The results in this paper do not seem to concern dependence on k. Replacing k with the entropy in the term that is introduced by DP will make the result much more interesting. - The research in estimating discrete distributions have evolved quite a bit nowadays. Other loss functions are of interest too, e.g., KL-divergence, \chi-square distance. Those metrics sometimes allow for more interesting rates and more interesting worst-case dependence on k (often log k). There are competitive notions of optimality that I encourage the authors to look into, see, e.g.: Orlitsky, A., & Suresh, A. T. (2015). Competitive distribution estimation: Why is good-turing good. In Advances in Neural Information Processing Systems (pp. 2143-2151). | - The result for the proposed algorithm seems to require an additional assumption that each individual’s data is iid drawn from the same distribution. Otherwise I don’t see how that \sqrt{m} argument in the beginning of Section 5.1 works out and how Theorem 6 can be applied to prove Theorem 7. I find such assumption unjustifiable because in practice, each users’ preferred set of “emojis” are very different. |
NIPS_2016_93 | NIPS_2016 | - The claims made in the introduction are far from what has been achieved by the tasks and the models. The authors call this task language learning, but evaluate on question answering. I recommend the authors tone-down the intro and not call this language learning. It is rather a feedback driven QA in the form of a dialog. - With a fixed policy, this setting is a subset of reinforcement learning. Can tasks get more complicated (like what explained in the last paragraph of the paper) so that the policy is not fixed. Then, the authors can compare with a reinforcement learning algorithm baseline. - The details of the forward-prediction model is not well explained. In particular, Figure 2(b) does not really show the schematic representation of the forward prediction model; the figure should be redrawn. It was hard to connect the pieces of the text with the figure as well as the equations. - Overall, the writing quality of the paper should be improved; e.g., the authors spend the same space on explaining basic memory networks and then the forward model. The related work has missing pieces on more reinforcement learning tasks in the literature. - The 10 sub-tasks are rather simplistic for bAbi. They could solve all the sub-tasks with their final model. More discussions are required here. - The error analysis on the movie dataset is missing. In order for other researchers to continue on this task, they need to know what are the cases that such model fails. | - With a fixed policy, this setting is a subset of reinforcement learning. Can tasks get more complicated (like what explained in the last paragraph of the paper) so that the policy is not fixed. Then, the authors can compare with a reinforcement learning algorithm baseline. |
Bou2YHsRvG | EMNLP_2023 | 1. For some experimental results, there is a lack of reasonable and sufficient explanation. For instance, in figure 3, the authors' method underperforms the baseline in the en-fr and fr-en settings. The reason and analysis for this are missing.
2. A majority of the experiments focus on the presentation of results. The analyses of the method itself and the experimental outcomes are not comprehensive enough. Given that the authors' method underperforms the baseline in some instances, one might question to what extent the performance improvement brought by this pretraining method can be attributed to the authors' claim of "moving code-switched pretraining from the word level to the sense level, by leveraging word sense-specific information from Knowledge Bases". | 2. A majority of the experiments focus on the presentation of results. The analyses of the method itself and the experimental outcomes are not comprehensive enough. Given that the authors' method underperforms the baseline in some instances, one might question to what extent the performance improvement brought by this pretraining method can be attributed to the authors' claim of "moving code-switched pretraining from the word level to the sense level, by leveraging word sense-specific information from Knowledge Bases". |
ICLR_2022_1255 | ICLR_2022 | Weakness 1. This paper mainly focuses on explaining multi-task models, which somehow limits the applicability. 2. Why does the author use $\boldsymbol{p}\otimes \mathcal{E}(\mathcal{T}\theta(\boldsymbol{p}, G)) b u t p\otimes\mathcal{E}(\mathcal{T}\theta(\boldsymbol{p},G)) . I f t h e r e i s a p e r f o r m a n c e g a p b e t w e e n t h i s t w o f o r m u l a t i o n s , I w o n d e r h o w e a c h o f t h e m a f f e c t t h e q u a l i t y o f g e n e r a t e d e x p l a n a t i o n s . 3. D u r i n g t h e s e l f − t r a i n i n g o f t h e e m b e d d i n g m o d e l ,
p$ is sampled from a multivariate Laplace distribution, while later, the input is the conditional embeddings generated by the gradient. The distributions of the two groups of inputs could be different numerically and thus may affect the specific performance of the embedding model. Can the authors comment on this a bit? 4. Some typos: last row of page 5 “as input” should be “an input”; In Section 4.1, “include four graph classification tasks” should be “include three graph classification tasks”. | 1. This paper mainly focuses on explaining multi-task models, which somehow limits the applicability. |
NIPS_2022_948 | NIPS_2022 | The main theoretical flaw is that the analysis of NEOLITHIC relies on a restrictive Assumption 5 (bounded dissimilarity): 1 n ∑ i = 1 n ‖ ∇ f i ( x ) − ∇ f ( x ) ‖ 2 ≤ b 2 , ∀ x ∈ R d
One can easily come up with an example for which this assumption does not generally hold. For example, let us consider f i ( x ) = x ⊤ A i x
, where A i ∈ R d × d
. Since ∇ f i ( x ) = B i x
, where B i = A i + A i ⊤
. The bounded dissimilarity assumption (Assumption 5), which can be written in the form 1 n ∑ i = 1 n | ( B i − 1 n ∑ j = 1 n B j ) x | 2 ≤ b 2
, also does not hold, unless B i = B j
for all i , j
, which reduces to the identical data regime, which is of limited interest.
Some details on the experimental setting are missing. See Question 1) in the next section.
Literature review ignores several papers that are seemed to be relevant [1], [2]. It seems VR-MARINA for online problems from [1] and DASHA-MVR from [2] both satisfy Assumption 2 and have a better rate than QSGD in the stochastic regime. See Question 2) in the next section.
Table 1 contains possible typos:
It mentions a paper on MEM-SGD that does not have a non-convex rate. Their rate is applicable to strongly convex functions. I would recommend here to mention another, more relevant work [3]. See Question 3) in the next section.
Similar problems with CSER, Double Squeeze, and QSGD. See corresponding Questions 4), 5) and 6) in the next section. References:
[1] Gorbunov, Eduard, Konstantin Burlachenko, Zhize Li, and Peter Richtárik. 2021. “MARINA: Faster Non-Convex Distributed Learning with Compression.” arXiv [cs.LG]. arXiv. http://arxiv.org/abs/2102.07845.
[2] Tyurin, Alexander, and Peter Richtárik. 2022. “DASHA: Distributed Nonconvex Optimization with Communication Compression, Optimal Oracle Complexity, and No Client Synchronization.” arXiv [cs.LG]. arXiv. http://arxiv.org/abs/2202.01268.
[3] Koloskova, A., Lin, T., Stich, S. U., & Jaggi, M.. Decentralized deep learning with arbitrary communication compression. arXiv preprint arXiv:1907.09356, 2019
I would recommend making the y-axis in the right subfigure of Figure 1 logarithmic scale. Otherwise, it is hard to distinguish plots corresponding to different methods.
One more possibly relevant and missing citation is [7]. The Algorithm 3PCv3 (Appendix C.6, page 26) already employs a similar nested structure as proposed in Paper8019 by FCC.
I would recommend running least squares and logistic regression experiments for a longer period. It looks like the methods on the left subfigure of Figure 1 and Figure 2 were stopped quite early and did not reach the SGD-specific oscillation region.
Some minor notes:
(line 685): instead of Cauchy-Schwarz, one needs to refer to Young's inequality for product;
(line 693): instead of Cauchy-Schwarz, one needs to refer to Jensen's inequality
FINAL REMARKS:
I would be happy to rate this paper an 8 for its solid theoretical contributions and reliable experiments.
However, at this moment, I can not do so since the paper still contains several crucial issues that are needed to be clarified or fixed.
I am ready to reconsider my current rate during rebuttals once you respond to me on Weaknesses 2-4 and Questions 1 - 6.
UPDATE: After the Authors-Reviewers discussion, I decided to increase the score since my concerns were resolved.
References: [7] Richtárik, Peter, Igor Sokolov, Ilyas Fatkhullin, Elnur Gasanov, Zhize Li, and Eduard Gorbunov. 2022. “3PC: Three Point Compressors for Communication-Efficient Distributed Training and a Better Theory for Lazy Aggregation.” arXiv [cs.LG]. arXiv. http://arxiv.org/abs/2202.00998. | 1) in the next section. Literature review ignores several papers that are seemed to be relevant [1], [2]. It seems VR-MARINA for online problems from [1] and DASHA-MVR from [2] both satisfy Assumption 2 and have a better rate than QSGD in the stochastic regime. See Question |
NIPS_2022_670 | NIPS_2022 | 1. Lack of numerical results. The reviewer is curious about how to apply it to some popular algorithms and their performance compared with existing DP algorithms. 2. The presentation of this paper is hard to follow for the reviewer. | 2. The presentation of this paper is hard to follow for the reviewer. |
ICLR_2021_2926 | ICLR_2021 | and suggestions: 1. It is not clear to me if the warm-up phase makes a difference in performance on larger, more realistic datasets like Clothing1M. More careful analysis of how the warm-up phase affects the sample separation in SSL versus a fully supervised setting would have been useful, including experiments on CIFAR-10. 2. Additional experiments on realistic noisy datasets like WebVision would have provided more support for C2D. 3. The paper is not clearly written. Important components like MixMatch are not explained. For instance, the Method section contains discussion on various design decisions, rather than a step-by-step description of the method itself. An algorithm figure detailing C2D method would be useful for exposition. In sum, the paper definitely has a good idea and interesting results, but it is not well-structured, which makes it harder to parse the method and results.
Questions and suggestions: 1. Do you have any additional insights into modest performance gains on Clothing1M 2. How does the algorithm perform on other real-world datasets like WebVision, evaluated by DivideMix? | 1. Do you have any additional insights into modest performance gains on Clothing1M 2. How does the algorithm perform on other real-world datasets like WebVision, evaluated by DivideMix? |
NIPS_2017_345 | NIPS_2017 | of the paper are mainly on the experiments:
- While not familiar with the compared models DMM and DVBF in details, the reviewer understood from the paper their differences with KVAE. However, the reviewer would appreciate a little bit more detailed presentation of the compared models. Specifically, the KVAE is simpler as the state space transition are linear, but it requires the computation of the time-dependant LGSSM parameters \gamma. Can the authors comment on the computation requirements of the 3 methods compared in Table 1 ?
- Why the authors did not test DMM and DVBF on the task of imputing missing data ? | - While not familiar with the compared models DMM and DVBF in details, the reviewer understood from the paper their differences with KVAE. However, the reviewer would appreciate a little bit more detailed presentation of the compared models. Specifically, the KVAE is simpler as the state space transition are linear, but it requires the computation of the time-dependant LGSSM parameters \gamma. Can the authors comment on the computation requirements of the 3 methods compared in Table 1 ? |
q38SZkUmUh | ICLR_2024 | 1. The authors conduct experiments on T5, PaLM and GPT series LLMs and show the influence of parameter size on benchmark score. However, I think more experiments on different famous LLMs like LLaMA, Falcon, etc are needed as benchmark baselines.
2. For better visualization, the best results in Table 1 need to be displayed in bold. | 1. The authors conduct experiments on T5, PaLM and GPT series LLMs and show the influence of parameter size on benchmark score. However, I think more experiments on different famous LLMs like LLaMA, Falcon, etc are needed as benchmark baselines. |
OE67D1Oatr | ICLR_2025 | - The novelty of the proposed extension to Sleeper Agent is limited.
- Limited datasets used in experiments. All experiments are done on CIFAR-10, except for one ASR experiment on GTSRB.
- The attack is very computationally intensive, requiring retraining a surrogate that approximates the attacked model (number of optimization cycles) * (number of cycle rounds) * (number of triggers) times. This implicitly assumes the attacker has access to very significant computational resources or is only able to attack small models.
- Standard ResNet-18 training uses random crop as a data augmentation [1], which is not use here. I suspect random crop would make this attack less effective.
- The paper does not describe what hyperparameters are used by each defense nor how those hyperparameters are derived. A maximally charitable evaluation of defenses would optimize hyperparameters against the attack and show how much clean data is required to remove the attack.
- The experiments in section 4.4 do not add very much beyond proving that a ResNet-18 has the capacity to learn 2-3 backdoors when training on CIFAR-10.
- It is unclear whether the increased ASR comes from the partitioning mechanism or from having multiple optimization cycles $S$ where the dataset containing the best randomized perturbations are returned.
- The paper does not provide an analysis on how different settings of cycle rounds $R$ and optimization cycles $S$ effects the success of the backdoor. The experiments section only examines one setting of these parameters without justifying how this setting is derived. This is a missed opportunity to demonstrate how valuable the proposed modification is to achieving a successful attack. Minor
- Line 22 of algorithm 1 contains a typo. | - The paper does not describe what hyperparameters are used by each defense nor how those hyperparameters are derived. A maximally charitable evaluation of defenses would optimize hyperparameters against the attack and show how much clean data is required to remove the attack. |
NIPS_2016_394 | NIPS_2016 | - The theoretical results don't have immediate practical implications, although this is certainly understandable given the novelty of the work. As someone who is more of an applied researcher who occasionally dabbles in theory, it would be ideal to see more take-away points for practitioners. The main take-away point that I observed is to query a cluster proportionally to the square root of its size, but it's unclear if this is a novel finding in this paper. - The proposed model produces only 1 node changing cluster per time step on average because the reassignment probability is 1/n. This allows for only very slow dynamics. Furthermore, the proposed evolution model is very simplistic in that no other edges are changed aside from edges with the (on average) 1 node changing cluster. - Motivation by the rate limits of social media APIs is a bit weak. The motivation would suggest that it examines the error given constraints on the number of queries. The paper actually examines the number of probes/queries necessary to achieve a near-optimal error, which is a related problem but not necessarily applicable to the social media API motivation. The resource-constrained sampling motivation is more general and a better fit to the problem actually considered in this paper, in my opinion. Suggestions: Please comment on optimality in the general case. From the discussion in the last paragraph in Section 4.3, it appears that the proposed queue algorithm would is a multiplicative factor of 1/beta from optimality. Is this indeed the case? Why not also show experiment results for just using the algorithm of Theorem 4 in addition to the random baselines? This would allow the reader to see how much practical benefit the queue algorithm provides. Line 308: You state that you show the average and standard deviation, but standard deviation is not visible in Figure 1. Are error bars present but just too small to be visible? If so, state that it is the case. Line 93: "asymptoticall" -> "asymptotically" Line 109: "the some relevant features" -> Remove "the" or "some" Line 182: "queries per steps" -> "queries per step" Line 196-197: "every neighbor of neighbor of v" -> "neighbor of" repeated Line 263: Reference to Appendix in supplementary material shows ?? Line 269: In the equation for \epsilon, perhaps it would help to put parentheses around log n, i.e. (log n)/n rather than log n/n. Line 276: "issues query" -> I believe this should be "issues 1 query" Line 278: "loosing" -> "losing" I have read the author rebuttal and other reviews and have decided not to change my scores. | - The theoretical results don't have immediate practical implications, although this is certainly understandable given the novelty of the work. As someone who is more of an applied researcher who occasionally dabbles in theory, it would be ideal to see more take-away points for practitioners. The main take-away point that I observed is to query a cluster proportionally to the square root of its size, but it's unclear if this is a novel finding in this paper. |
ACL_2017_108_review | ACL_2017 | Clarification is needed in several places.
1. In section 3, in addition to the description of the previous model, MH, you need point out the issues of MH which motivate you to propose a new model.
2. In section 4, I don't see the reason why separators are introduced. what additional info they convene beyond T/I/O?
3. section 5.1 does not seem to provide useful info regarding why the new model is superior.
4. the discussion in section 5.2 is so abstract that I don't get the insights why the new model is better than MH. can you provide examples of spurious structures? - General Discussion: The paper presents a new model for detecting overlapping entities in text. The new model improves the previous state-of-the-art, MH, in the experiments on a few benchmark datasets. But it is not clear why and how the new model works better. | 2. In section 4, I don't see the reason why separators are introduced. what additional info they convene beyond T/I/O? |
XWPp9FJ0uJ | ICLR_2025 | - L187-188: $x=\{(k_i, v_i)|i=1\ldots,n\}$ where $n$ denotes the number of features, $k$ and $v$ denote feature-name and value-pairs should be defined more clearly. I understand that you are trying to refer to a specific feature having a single possible value i.e. age = 50. However, an alternative way of interpreting this is where you can have $n$ features for $k_i$ i.e. age, education etc. but there can be more than $n$ different value-pairs. For example, for age where age = {1,2,3,…} where len(age) > $n$.
- L199-208: No citations of relevant papers i.e. justification for “LLMs often attend more strongly to tokens at the end of a sequence”.
- L235: There could be different approaches to pooling the tokens. For instance, why is it that mean pooling works? What about other pooling strategies?
- L300: Although the datasets are extensive, I would like to inquire regarding results and ablations on the most popular datasets such as the UCI [Adult](https://archive.ics.uci.edu/dataset/2/adult), [Bank](https://archive.ics.uci.edu/dataset/222/bank+marketing), and [Default](https://archive.ics.uci.edu/dataset/350/default+of+credit+card+clients) datasets.
- L368: Baselines from recent SOTA methods are missing. This includes the mentioned [TabLLM](https://arxiv.org/abs/2210.10723), and other methods such as [TabTransformer](https://arxiv.org/abs/2012.06678), [InterpreTabNet](https://arxiv.org/abs/2406.00426), [SAINT](https://arxiv.org/abs/2106.01342), [TabPFN](https://arxiv.org/abs/2207.01848) etc.
- L460: It is unclear how the ablation is conducted. What is the baseline model that the serialization is applied to? If it is ZET-LLM, would you please elaborate on the whole ablation process?
- Although “feature-wise serialization consistently outperforms sample-wise serialization”, can I clarify that this can only be applied to your framework where you are required to first encode text into embeddings?
- L467: For w/o mask results, does that mean that you drop the samples instead of ignoring the masked features? On the other hand, in cases where missing values convey implicit information (e.g., non-response bias), could masking be suboptimal compared to other techniques, such as imputation or attention-based weighting?
- Given the adaptation of a transformer-based model for tabular data, is there an assessment of computational efficiency compared to traditional tabular models? | - L235: There could be different approaches to pooling the tokens. For instance, why is it that mean pooling works? What about other pooling strategies? |
ICLR_2022_2660 | ICLR_2022 | 1. There is an assumption “graphs are topological close should have also comparable performance”. Nevertheless, it may not hold for architectures. For example, by only modifying one node/edge (add or remove skip connection), the architecture may incur significant performance drop. Thus, it is questionable to use spectral distance to evaluate the similarity. 2. In Table 3, besides the number of queries, it would be better to compare the real search cost (e.g. in terms of GPU days). 3. This paper only considers small search spaces, e.g. NASBench. Can the proposed method be used in DARTS and MobileNet search spaces? It would be better to report the results on these spaces. 4. In Section 4.2, the authors claimed that “our model selection approach is very stable”. However, there seem no empirical results to support it. 5. Since this paper focuses on learning predictors, several recent related work [a,b] should be discussed and/or compared.
Reference: [a] ReNAS: Relativistic evaluation of neural architecture search. CVPR 2021. [b] Contrastive Neural Architecture Search with Neural Architecture Comparators. CVPR 2021. | 2. In Table 3, besides the number of queries, it would be better to compare the real search cost (e.g. in terms of GPU days). |
NIPS_2022_1913 | NIPS_2022 | 1. I believe the paper would benefit from a comparison with a one-shot baseline. While the proposed approach does not require fine-tuning, it would be interesting to see how it competes.
2. The prompt engineering details are unclear. I was only able to get a vague idea on how to construct the visual prompt. Is it possible to document them more precisely? Additionally, how are different image sizes being handled? Are the images being resized?
3. Missing training details? Specifically, I am wondering if the VQGAN is pre-trained? Or only trained on the 88,635 images from the Computer Vision Figures dataset.
4. The authors have stated that they repeated the experiment with four random seed (line. 194). It would be great to report the standard deviation on the quantitative results. Also, I wonder how sensitive the approach is to the prompted image. For example, would the approach still work if the cat, in Fig.3, is prompted by an image of a white cat that's outdoor? Misc.
Line 173: "224x224" --> "224 \times 224"
The paper contains a limitation section and adequately discussed the shortcomings of the approach. Specifically, the inherit ambiguity when prompting from a single image for a task. | 3. Missing training details? Specifically, I am wondering if the VQGAN is pre-trained? Or only trained on the 88,635 images from the Computer Vision Figures dataset. |
NIPS_2018_894 | NIPS_2018 | - As this work has the perspective of task-oriented recommendation, it seems that works such as [] Li, Xiujun, et al. "End-to-end task-completion neural dialogue systems." arXiv preprint arXiv:1703.01008 (2017). are important to include, and compare to, at least conceptually. Also, discussion in general on how their work differs from other chatbox research works e.g. [] He, Ji, et al. "Deep reinforcement learning with a natural language action space." arXiv preprint arXiv:1511.04636(2015). would be very useful. - It is important that the authors highlight the strengths as well as the weaknesses of their released dataset: e.g. what are scenarios under which such a dataset would not work well? are 10,000 conversations enough for proper training? Similarly, a discussion on their approaches, in terms of things to further improve would be useful for the research community to extend -- e.g. a discussion on how the domain of movie recommendation can differ from other tasks, or a discussion on the exploration-exploitation trade-off. Particularly, it seems that this paper envisions conversational recommendations as a goal oriented chat dialogue. However, conversational recommendations could be more ambiguous.. - Although it is great that the authors have included these different modules capturing recommendations, sentiment analysis and natural language, more clear motivation on why each component is needed would help the reader. For example, the cold-start setting, and the sampling aspect of it, is not really explained. The specific choices for the models for each module are not explained in detail (why were they chosen? Why is a sentiment analysis model even needed -- can't we translate the like/dislike as ratings for the recommender?) - Evaluation -- since one of the contributions argued in the paper is "deep conversational recommenders", evaluation-wise, a quantitative analysis is needed, apart from user study results provided (currently the other quantitative results evaluate the different sub-modules independently). Also, the authors should make clearer the setup of how exactly the dataset is used to train/evaluate on the Amazon Turk conversations -- is beam-search used as in other neural language models? Overall, although I think that this paper is a nice contribution in the domain of movie conversational recommendation, I believe that the authors should better position their paper, highlighting also the weaknesses/ things to improve in their work, relating it to work on neural dialogue systems, and expanding on the motivation and details of their sub-modules and overall architecture. Some discussion also on how quantitative evaluation of the overall dialogue quality should happen would be very useful. == I've read the authors' rebuttal. It would be great if the authors add some of their comments from the rebuttal in the revised paper regarding the size of the dataset, comparison with goal-oriented chatbots and potential for quantitative evaluation. | - As this work has the perspective of task-oriented recommendation, it seems that works such as [] Li, Xiujun, et al. "End-to-end task-completion neural dialogue systems." arXiv preprint arXiv:1703.01008 (2017). are important to include, and compare to, at least conceptually. Also, discussion in general on how their work differs from other chatbox research works e.g. [] He, Ji, et al. "Deep reinforcement learning with a natural language action space." arXiv preprint arXiv:1511.04636(2015). would be very useful. |
Md1YdfqAed | EMNLP_2023 | * The proposed methods (DualIS and DualDIS) are not generic on some cross-model retrieval tasks, i.e., the performance in MSVD (Table 3) shows minor improvements.
* I think the proposed gallery bank is supplementary and less effective compared to the query bank to address hubness issues in cross-model retrieval tasks. This conclusion is also verified by the authors' experiments. | * The proposed methods (DualIS and DualDIS) are not generic on some cross-model retrieval tasks, i.e., the performance in MSVD (Table 3) shows minor improvements. |
NIPS_2019_1346 | NIPS_2019 | below. 2. Theorem 3.1 is interesting in itself, because it applies to vectors which are not in the range of the generative model. Clarity: The paper is well written and ideas clearly expressed. I believe that others can reproduce the algorithm described. I only have a problem with the way the set $S(x,theta, tau)$ is defined in line 177, since the authors do not require the signs to strictly differ on this set. Significance: I think other researchers can build on Theorem 3.1. The conditions proposed for Theorem 3.3 are novel and could be used for future results. Weaknesses: 1. I am not completely convinced by the experimental strengths of this approach. To run the proposed algorithm, the authors need to run a descent procedure for 40 different networks from the training phase. In contrast, you could simply run vanilla Adam on the final network with 40 random initial points, and one of these restarts would reach the global minimum. It is not important that EACH initialization reach the global minimum, as long as AT LEAST one initialization reaches the global minimum. 2. While Theorem 3.3 is interesting, it does not directly influence the experiments because the authors never perform the search operation in line 3 of algorithm 2. Because of this, it provides a proof of correctness for an algorithm that is quite different from the algorithm used in practice. Although Algorithm 2 and the empirical algorithm are similar in spirit, lines 1 and 3 in algorithm 2 are crucial for proof of correctness. Clarifications: 1. For the case where $y= G(z) + noise$, where noise has sufficiently low energy, you would expect a local minimum close to $z$. Would this not contradict the result of Theorem 3.1? ---Edit after author response--- Thank you for your response. After reading your rebuttal and other reviews, I have updated my score to a 8. I think Table in the rebuttal and Theorem 3.1 are solid contributions. Regarding my criticism of the definition of S(x,tau,theta)- I only meant that defining the complement of this set may make things clearer, since you only seem to work with its complement later on (this did not influence my score). | 1. I am not completely convinced by the experimental strengths of this approach. To run the proposed algorithm, the authors need to run a descent procedure for 40 different networks from the training phase. In contrast, you could simply run vanilla Adam on the final network with 40 random initial points, and one of these restarts would reach the global minimum. It is not important that EACH initialization reach the global minimum, as long as AT LEAST one initialization reaches the global minimum. |
FTSUDBM6lu | ICLR_2024 | 1. The novelty is limited, the proposed model mostly involves straight application existing feature selection methods.
2. The evaluation is limited (only on one dataset)
3. Presentations could also be improved.
4. Incomplete study, the relationship between the top selected patches and the disease is not yet established | 4. Incomplete study, the relationship between the top selected patches and the disease is not yet established |
ICLR_2021_872 | ICLR_2021 | The authors push on the idea of scalable approximate inference, yet the largest experiment shown is on CIFAR-10. Given this focus on scalability, and the experiments in recent literature in this space, I think experiments on ImageNet would greatly strengthen the paper (though I sympathize with the idea that this can a high bar from a resources standpoint).
As I noted down below, the experiments currently lack results for the standard variational BNN with mean-field Gaussians. More generally, I think it would be great to include the remaining models from Ovadia et al. (2019). More recent results from ICML could also useful to include (as referenced in the related works sections). Recommendation
Overall, I believe this is a good paper, but the current lack of experiments on a dataset larger than CIFAR-10, while also focusing on scalability, make it somewhat difficult to fully recommend acceptance. Therefore, I am currently recommending marginal acceptance for this paper.
Additional comments
p. 5-7: Including tables of results for each experiment (containing NLL, ECE, accuracy, etc.) in the main text would be helpful to more easily assess
p. 7: For the MNIST experiments, in Ovadia et al. (2019) they found that variational BNNs (SVI) outperformed all other methods (including deep ensembles) on all shifted and OOD experiments. How does your proposed method compare? I think this would be an interesting experiment to include, especially since the consensus in Ovadia et al. (2019) (and other related literature) is that full variational BNNs are quite promising but generally methodologically difficult to scale to large problems, with relative performance degrading even on CIFAR-10. Minor
p. 6: In the phrase "for 'in-between' uncertainty", the first quotation mark on 'in-between' needs to be the forward mark rather than the backward mark (i.e., ‘ i n − b e t w e e n ′ ).
p. 7: s/out of sitribution/out of distribution/
p. 8: s/expensive approaches 2) allows/expensive approaches, 2) allows/
p. 8: s/estimates 3) is/estimates, and 3) is/
In the references:
Various words in many of the references need capitalization, such as "ai" in Amodei et al. (2016), "bayesian" in many of the papers, and "Advances in neural information processing systems" in several of the papers.
Dusenberry et al. (2020) was published in ICML 2020
Osawa et al. (2019) was published in NeurIPS 2019
Swiatkowski et al. (2020) was published in ICML 2020
p. 13, supplement, Fig. 5: error bar regions should be upper and lowered bounded by [0, 1] for accuracy.
p. 13, Table 2: Splitting this into two tables, one for MNIST and one for CIFAR-10, would be easier to read. | 6: In the phrase "for 'in-between' uncertainty", the first quotation mark on 'in-between' needs to be the forward mark rather than the backward mark (i.e., ‘ i n − b e t w e e n ′ ). p. |
57yfvVESPE | EMNLP_2023 | 1. The writing is hard to follow. There is no contribution list at the end of Introduction part. I read it several times but I am sorry that I cannot catch your theme. Is this paper mainly about model privacy in FL (FedSP) or soft prompt usage?
2. If the theme is mainly about FedSP, the performance of FedSP is not the best in Table 1 and Table 2 on some datasets.
3. If the paper is about FedSP, the author should do more experiments about FL settings, like the number of clients, communication rounds, etc.
4. The settings of the global and client model are not clear, like the model structure, etc. | 2. If the theme is mainly about FedSP, the performance of FedSP is not the best in Table 1 and Table 2 on some datasets. |
NIPS_2016_386 | NIPS_2016 | , however. For of all, there is a lot of sloppy writing, typos and undefined notation. See the long list of minor comments below. A larger concern is that some parts of the proof I could not understand, despite trying quite hard. The authors should focus their response to this review on these technical concerns, which I mark with ** in the minor comments below. Hopefully I am missing something silly. One also has to wonder about the practicality of such algorithms. The main algorithm relies on an estimate of the payoff for the optimal policy, which can be learnt with sufficient precision in a "short" initialisation period. Some synthetic experiments might shed some light on how long the horizon needs to be before any real learning occurs. A final note. The paper is over length. Up to the two pages of references it is 10 pages, but only 9 are allowed. The appendix should have been submitted as supplementary material and the reference list cut down. Despite the weaknesses I am quite positive about this paper, although it could certainly use quite a lot of polishing. I will raise my score once the ** points are addressed in the rebuttal. Minor comments: * L75. Maybe say that pi is a function from R^m \to \Delta^{K+1} * In (2) you have X pi(X), but the dimensions do not match because you dropped the no-op action. Why not just assume the 1st column of X_t is always 0? * L177: "(OCO )" -> "(OCO)" and similar things elsewhere * L176: You might want to mention that the learner observes the whole concave function (full information setting) * L223: I would prefer to see a constant here. What does the O(.) really mean here? * L240 and L428: "is sufficient" for what? I guess you want to write that the sum of the "optimistic" hoped for rewards is close to the expected actual rewards. * L384: Could mention that you mean |Y_t - Y_{t-1}| \leq c_t almost surely. ** L431: \mu_t should be \tilde \mu_t, yes? * The algorithm only stops /after/ it has exhausted its budget. Don't you need to stop just before? (the regret is only trivially affected, so this isn't too important). * L213: \tilde \mu is undefined. I guess you mean \tilde \mu_t, but that is also not defined except in Corollary 1, where it just given as some point in the confidence ellipsoid in round t. The result holds for all points in the ellipsoid uniformly with time, so maybe just write that, or at least clarify somehow. ** L435: I do not see how this follows from Corollary 2 (I guess you meant part 1, please say so). So first of all mu_t(a_t) is not defined. Did you mean tilde mu_t(a_t)? But still I don't understand. pi^*(X_t) is (possibly random) optimal static strategy while \tilde \mu_t(a_t) is the optimistic mu for action a_t, which may not be optimistic for pi^*(X_t)? I have similar concerns about the claim on the use of budget as well. * L434: The \hat v^*_t seems like strange notation. Elsewhere the \hat is used for empirical estimates (as is standard), but here it refers to something else. * L178: Why not say what Omega is here. Also, OMD is a whole family of algorithms. It might be nice to be more explicit. What link function? Which theorem in [32] are you referring to for this regret guarantee? * L200: "for every arm a" implies there is a single optimistic parameter, but of course it depends on a ** L303: Why not choose T_0 = m Sqrt(T)? Then the condition becomes B > Sqrt(m) T^(3/4), which improves slightly on what you give. * It would be nice to have more interpretation of theta (I hope I got it right), since this is the most novel component of the proof/algorithm. | * L178: Why not say what Omega is here. Also, OMD is a whole family of algorithms. It might be nice to be more explicit. What link function? Which theorem in [32] are you referring to for this regret guarantee? |
ICLR_2022_113 | ICLR_2022 | - The part of the contrastive loss is not totally clear. The authors should provide a better intuition of why the contrastive loss improves the feature representation. For example, how are image-latent pairs defined as positive? - The method focuses on learning cluster granularity for the object only, and not for the background. - It's unclear why the transformation matrix is used (other than the fact that it's part of PerturbGAN's pipeline)
A few comments on the text: - The phrase "coarse-grained images" is inaccurate, the "coarse-grained" adjective should refer to the clustering and not the images (in the intro). - The authors should share more details about the auxiliary distribution mentioned in the abstract and the intro. - Overall proofreading is required. It would be great to add some of the model's notations to figure 2 (e.g. D_base, psi_r, psi_h) | - It's unclear why the transformation matrix is used (other than the fact that it's part of PerturbGAN's pipeline) A few comments on the text: |
ICLR_2023_226 | ICLR_2023 | The world modelling task is definitely interesting but it is hard to see how it is directly relevant outside of this environment. We would likely never have access to a Markovian state in such a controlled setting. The section appears to be motivated by works such as World Models and Dreamer, but in those cases 1) the models are learned directly from pixels without a Markovian state 2) there is an agent taking actions in the world. So this is a totally different paradigm. The fact that the model generalizes better with more data here is expected, as the authors note this has been the case in a variety of other settings already.
How can we be sure the hand designed tasks are unbiased? For all we know they could be somewhat arbitrary.
While the motivation in the intro is that this world is more general than others such as MiniGrid/Crafter/MiniHack, the only RL task presented is just sand pushing. How is this more diverse and useful than for example the tasks in Crafter/MiniHack which vary from navigation to tool use?
It looks like the experiments were all just one seed. When we know RL training is volatile, it seems like an oversight to have done this given the environment is meant to be fast.
One of the motivations in the intro is the potential use for UED, but there is no demonstration of this. It would be interesting to see if this environment offers something unique here vs. the alternatives. It may be beyond the scope to run this for a rebuttal but it would likely see an increased score. | 1) the models are learned directly from pixels without a Markovian state |
NIPS_2019_656 | NIPS_2019 | Despite the shown results and the details added in the appendix K, I think that the experimental part remains the weak part of this paper. The results displayed are convincing but I am disappointed that the authors did not tried their approach on more popular problems mentioned in the supplementary such as hierarchical classification. Even if this could be improved (in order to be at the level of the theoretical treatment), the proposed content is already solid and does not change my decision concerning the quality of this work. Remark: - The use of the sequence example at different step of the paper is really useful, however I'm a bit surprised that you mention in Example 2 a 'common' practice in the context of CRF corresponding to using as a scoring loss the Hamming distance over entire parts of the sequence. I've never seen this type of approach and am only aware of works reporting the hamming loss defined node wise. It would be great if you could point out some references there. - After reading the paper a few times, I still think that the notation $\Delta(z,y|x)$ is a bit strange and I would have preferred something of the form $\Delta(z,y)$ since in practice the losses you mention never takes into account the input data and $z$ is already a function of $x$. Maybe this is only personal taste and will be contradicted by the other reviewers. Minor remarks : missing brackets [ ] in theorem 4. | - The use of the sequence example at different step of the paper is really useful, however I'm a bit surprised that you mention in Example 2 a 'common' practice in the context of CRF corresponding to using as a scoring loss the Hamming distance over entire parts of the sequence. I've never seen this type of approach and am only aware of works reporting the hamming loss defined node wise. It would be great if you could point out some references there. |
ARR_2022_329_review | ARR_2022 | Although the paper is mostly easy to follow due to its simple and clear organization, it is not very clearly written. Some sentences are not clear and the text contains many typos. Although the included tasks can definitely be helpful, the proposed benchmark does not include many important tasks that require higher-level language understanding such as natural language inference, question answering, coreference resolution, etc. Also, the authors do not mention making the benchmark publicly available by providing a link which is very important and I hope will be the case.
General comments and suggestions: 1) The name of the "Evaluation" element can be changed to "Metrics" since 'evaluation' can have a more general meaning. Even better, the corresponding sections can be removed and the metrics can be briefly mentioned along with the datasets or in the captions of the tables since most, if not all, of the metrics are well-known and used as standard practice. 2) I feel like sentence segmentation and spellchecking & correction tasks do not fit well in the same benchmark alongside the other tasks that focus more on semantics. Ideally, one should be able to use the same model/architecture for all of the tasks in the benchmark by training it on their corresponding training sets. However, these two tasks feel lower level than the other tasks probably better handled somewhat separately than others. 3) Are there any reasons behind the selected baselines? For instance, why the Haded attention- RNN and Adaptive transformer instead of more traditional LSTMs and normal encoder-decoder transformers (or decoder only transformers like GPT models)?
Specific questions, comments, and typos: - Lines 89-90: -> In Mukayese we focus on - Lines 99-100: rewrite more clearly using full sentences or more clear format (i.e., We define two requirements ...: (i) accessible with ... (ii) sharable format) - Line 100: what is 'sharable format'?
- Lines 104-105: I believe the argument about inefficiency and cost of large supervised datasets is not correct. Finetuning a model on labeled training data for the target task is usually computationally not so costly and having more diverse supervised data almost always greatly helps. - Line 119: -> one or more metrics - Line 119-120: please rewrite (a) - Lines 127-129: please rewrite in a more clear way.
- Line 135: -> for each of the following ... - Lines 138-145: The paragraph starts with talking about general language modeling and formulates it, and then claims this is only a special type of language modeling (i.e., autoregressive language modeling).
- Line 151: -> AR language modeling Line 156: What exactly is a language modeling dataset? Is it simply a text corpus? - Table 2 caption:"mean average" ?
- Lines 173-174: What is a character language modeling dataset? What is the difference from the normal language modeling dataset? If there is no difference, is there a need for a separate dataset?
- Line 188 -> generalize to - Lines 201-204: please rewrite in a more clear way.
- Lines 214-215: Are the same models trained for both normal language modeling and character-level language modeling without any architectural changes? Is this possible? More details are needed.
- Table 4 caption: these are not the results - Line 247: "Where" cannot be in a new sentence - Table 7 caption: metric (F1) should be mentioned in the caption as well - Line 338: "converting them to" converting what?
- Line 344: with -> on - Line 363: no space before "In" - Lines 396-400: Do you remove the remaining 9% from the dataset? If annotators could not come up with the correct words after 10 predictions probably it means the target word is not recoverable, right?
- Line 465: -> perform - Line 471: -> such as - Table 13 is not referenced in the text - Lines 497-498: one with (pre-training) and two without pre-training - Line 509: -> in | 1) The name of the "Evaluation" element can be changed to "Metrics" since 'evaluation' can have a more general meaning. Even better, the corresponding sections can be removed and the metrics can be briefly mentioned along with the datasets or in the captions of the tables since most, if not all, of the metrics are well-known and used as standard practice. |
ARR_2022_295_review | ARR_2022 | - The paper would be easy to follow with an English-proofreading even though the overall idea is still understandable.
- The new proposed dataset, DRRI, could have been explored more in the paper.
- It is not clear how named entities were extracted from the datasets.
An English-proofreading would significantly improve the readability of the paper. | - The new proposed dataset, DRRI, could have been explored more in the paper. |
NIPS_2018_83 | NIPS_2018 | - An argument against DEN, a competitor, is hyper-parameter sensitivity. First, this isn't really shown, but second (and more importantly) reinforcement learning is well-known to be extremely unstable and require a great deal of tuning. For example, even random seed changes are known to change the behavior of the same algorithm, and different implementation of the same algorithm can get very different results (this has been heavily discussed in the community; see keynote ICLR talk by Joelle Pineau as an example). This is not to say the proposed method doesn't have an advantage, but the argument that other methods require more tuning is not shown or consistent with known characteristics of RL. * Related to this, I am not sure I understand experiments for Figure 3. The authors say they vary the hyper-parameters but then show results with respect to # of parameters. Is that # of parameters of the final models at each timestep? Isn't that just varying one hyperparameter? I am not sure how this shows that RCL is more stable. - Newer approaches such as FearNet [1] should be compared to, as they demonstrated significant improvement in performance (although they did not compare to all of the methods compared to here). [1] FearNet: Brain-Inspired Model for Incremental Learning, Ronald Kemker, Christopher Kanan, ICLR 2018. - There is a deeper tie to meta-learning, which has several approaches as well. While these works don't target continual learning directly, they should be cited and the authors should try to distinguish those approaches. The work on RL for architecture search and/or as optimizers for learning (which are already cited) should be more heavily linked to this work, as it seems to directly follow as an application to continual learning. - It seems to me that continuously adding capacity while not fine-tuning the underlying features (which training of task 1 will determine) is extremely limiting. If the task is too different and the underlying feature space in the early layers are not appropriate to new tasks, then the method will never be able to overcome the performance gap. Perhaps the authors can comment on this. - Please review the language in the paper and fix typos/grammatical issues; a few examples: * [1] "have limitation to solve" => "are limited in their ability to solve" * [18] "In deep learning community" => "In THE deep learning community" * [24] "incrementally matche" => "incrementally MATCH" * [118] "we have already known" => "we already know" * and so on Some more specific comments/questions: - This sentence is confusing [93-95] "After we have trained the model for task t, we memorize each newly added filter by the shape of every layer to prevent the caused semantic drift." I believe I understood it after re-reading it and the subsequent sentences but it is not immediately obvious what is meant. - [218] Please use more objective terms than remarkable: "and remarkable accuracy improvement with same size of networks". Looking at the axes, which are rather squished, the improvement is definitely there but it would be difficult to characterize it as remarkable. - The symbols in the graphs across the conditions/algorithms is sometimes hard to distinguish (e.g. + vs *). Please make the graphs more readable in that regard. Overall, the idea of using reinforcement learning for continual learning is an interesting one, and one that makes sense considering recent advances in architecture search using RL. However, this paper could be strengthened by 1) Strengthening the analysis in terms of the claims made, especially with respect to not requiring as much hyper-parameter tuning, which requires more evidence given that RL often does require significant tuning, and 2) comparison to more recent methods and demonstration of more challenging continual learning setups where tasks can differ more widely. It would be good to have more in-depth analysis of the trade-offs between three approaches (regularization of large-capacity networks, growing networks, and meta-learning). ============================================== Update after rebuttal: Thank you for the rebuttal. However, there wasn't much new information in the rebuttal to change the overall conclusions. In terms of hyper-parameters, there are actually more hyper-parameters for reinforcement learning that you are not mentioning (gamma, learning rate, etc.) which your algorithm might still be sensitive to. You cannot consider only the hyper-parameter related to the continual learning part. Given this and the other limitations mentioned, overall this paper is marginally above acceptance so the score has been kept the same. | - [218] Please use more objective terms than remarkable: "and remarkable accuracy improvement with same size of networks". Looking at the axes, which are rather squished, the improvement is definitely there but it would be difficult to characterize it as remarkable. |
ICLR_2021_140 | ICLR_2021 | Weakness
When discussing the difference over [Tulyakov et al. 2018], the paper states “…applies h_t as the motion code for the frame to be generated, while the content code is fixed for all frames. However, such a design requires a recurrent network to estimate the motion while preserving consistent content from the latent vector, … difficult to learn in practice”. I do not fully understand why this is the case. It would be clearer if the paper can explain why such a design causes difficulty in learning and why the proposed design could alleviate such problems.
For motion diversity, why maximizing the mutual information between the hidden vector and the noise vector can prevent mode collapse?
It seems to me that the proposed method can only handle 1) “subtle” motion, such as facial expressions and 2) short video sequences (e.g., 16 frames). One can see the problem in the synthesized results for UCF-101: inconsistent motion, changing color, or object disappearing over time. It would be interesting to videos with a longer duration (by running the LSTM over many time steps).
In sum, this is a paper with an interesting idea and extensive experiments. While the results are still not perfect and seem to handle subtle motion, the quantitative and qualitative evaluation show clearly improved results over the previous state-of-the-art. | 2) short video sequences (e.g., 16 frames). One can see the problem in the synthesized results for UCF-101: inconsistent motion, changing color, or object disappearing over time. It would be interesting to videos with a longer duration (by running the LSTM over many time steps). In sum, this is a paper with an interesting idea and extensive experiments. While the results are still not perfect and seem to handle subtle motion, the quantitative and qualitative evaluation show clearly improved results over the previous state-of-the-art. |
NIPS_2020_1710 | NIPS_2020 | - While the baselines are strong, the way they are reported may be a bit misleading. In particular, models are compared based on the sparsity percentage, which puts models with fewer parameters (e.g., MiniBERT) at a disadvantage. - As with most work on pruning, it is not yet possible to realize efficiency gains on GPU. | - As with most work on pruning, it is not yet possible to realize efficiency gains on GPU. |
NIPS_2017_35 | NIPS_2017 | - The applicability of the methods to real world problems is rather limited as strong assumptions are made about the availability of camera parameters (extrinsics and intrinsics are known) and object segmentation.
- The numerical evaluation is not fully convincing as the method is only evaluated on synthetic data. The comparison with [5] is not completely fair as [5] is designed for a more complex problem, i.e., no knowledge of the camera pose parameters.
- Some explanations are a little vague. For example, the last paragraph of Section 3 (lines 207-210) on the single image case. Questions/comments:
- In the Recurrent Grid Fusion, have you tried ordering the views sequentially with respect to the camera viewing sphere?
- The main weakness to me is the numerical evaluation. I understand that the hypothesis of clean segmentation of the object and known camera pose limit the evaluation to purely synthetic settings. However, it would be interesting to see how the architecture performs when the camera pose is not perfect and/or when the segmentation is noisy. Per category results could also be useful.
- Many typos (e.g., lines 14, 102, 161, 239 ), please run a spell-check. | - The numerical evaluation is not fully convincing as the method is only evaluated on synthetic data. The comparison with [5] is not completely fair as [5] is designed for a more complex problem, i.e., no knowledge of the camera pose parameters. |
NIPS_2019_776 | NIPS_2019 | 1. When the authors say `white box attacks`, I assume this means that the adversary can see the full network with the final layers, every network in the ensemble, every rotation used by networks in the ensemble. I would like them to confirm this is correct. 2. Did the authors study numbers of bits in logits helps against a larger epsilon in the PGD attack? Because intuition suggests that having a 32 bit logit should improve robustness against a more powerful adversary. This experiment isn't absolutely necessary, but does strengthen the paper. 3. Did the authors study the same approach on Cifar? It seems like this approach should be readily applicable there as well. ---Edit after rebuttal--- I am updating my score to 8. The improved experiments on Cifar10 make a convincing argument for your method. | 2. Did the authors study numbers of bits in logits helps against a larger epsilon in the PGD attack? Because intuition suggests that having a 32 bit logit should improve robustness against a more powerful adversary. This experiment isn't absolutely necessary, but does strengthen the paper. |
Q7uE3M5aMD | ICLR_2025 | The experiments and evaluation section in this paper claims to show that the method "achieves fair pricing effectively", but it answers none of the questions that would allow us to determine if such pricing is fair, effective, or desirable.
* How much do men and women pay for insurance after this method is applied?
* How does this compare to the benefit they receive from insurance payouts?
* Which other subgroups benefit or are made worse off by this method?
* If insurance is under/overpriced it could lead to adverse selection, where, for example, high-risk male drivers buy more insurance because it’s cheap, increasing premiums for everyone else. It could even lead to people driving more dangerously at the margin because they know that an incident won’t increase their premiums much. Is there risk of adverse selection or other negative equilibrium effects from using this pricing?
This paper paper is clearly trying to develop a method for pricing that can be used by real insurers. Unfortunately, though, it treats insurance pricing as an exercise in privacy math, and not as an input to a crucially important product for people's physical and financial health. | * How much do men and women pay for insurance after this method is applied? |
NIPS_2017_401 | NIPS_2017 | Weakness:
1. There are no collaborative games in experiments. It would be interesting to see how the evaluated methods behave in both collaborative and competitive settings.
2. The meta solvers seem to be centralized controllers. The authors should clarify the difference between the meta solvers and the centralized RL where agents share the weights. For instance, Foester et al., Learning to communicate with deep multi-agent reinforcement learning, NIPS 2016.
3. There is not much novelty in the methodology. The proposed meta algorithm is basically a direct extension of existing methods.
4. The proposed metric only works in the case of two players. The authors have not discussed if it can be applied to more players.
Initial Evaluation:
This paper offers an analysis of the effectiveness of the policy learning by existing approaches with little extension in two player competitive games. However, the authors should clarify the novelty of the proposed approach and other issues raised above. Reproducibility:
Appears to be reproducible. | 2. The meta solvers seem to be centralized controllers. The authors should clarify the difference between the meta solvers and the centralized RL where agents share the weights. For instance, Foester et al., Learning to communicate with deep multi-agent reinforcement learning, NIPS 2016. |
qhwYFIrSm7 | EMNLP_2023 | 1. The paper split the papers according to their publication years on the ACL anthology. However, many papers are posted on arxiv much earlier than ACL anthology. For example, the BERT paper is available on arxiv from Oct. 2018 (when its influence started), but is on ACL Anthology from 2019. This may more or less influence the soundness of the causal analysis.
2. The paper claims that the framework aids the literature survey, which is true. However, the listed overviews are mostly "Global Research Trends" that are well-known by the community. For example, it is known that BLEU and Transformers fuel machine translation. The paper didn't illustrate how to use the framework to identify "Local Research Trends" which are more useful for daily research: for example, what is the impact of instruction-following LLMs and in-context learning on NLP tasks? How do multi-task learning and reinforcement learning transfer to instruction learning? | 1. The paper split the papers according to their publication years on the ACL anthology. However, many papers are posted on arxiv much earlier than ACL anthology. For example, the BERT paper is available on arxiv from Oct. |
ICLR_2022_1267 | ICLR_2022 | Weakness
1. The proposed model's parameterization depends on the number of events and predicates making it difficult to generalize to unseen events or required retraining.
2. The writing needs to be improved to clearly discuss the proposed approach.
3. The experiments baselines are of the authors' own design; it lacks a comparison to the literature baselines using the same dataset. If there is no such baseline, please discuss the criteria in choosing such baselines. Details:
1. Page 1, "causal mechanisms", causality is different from temporal relationship. Please use the terms carefully.
2. Page 3, it seems to me that M_T is defined over the probabilities of atomic events. The notation as it is used not making it difficult to make sense of this concept. Please consider providing examples to explain M_T.
3. Page 4, equation (2), it is not usual to feed probabilities to convolution.
a. Please discuss in section 3 how your framework can handle raw inputs, such as video or audio? Do you need an atomic event predictor or human label to use your proposed system? If so, is it possible to extend your framework to directly have video as input instead of event probability distributions? Can you do end2end training from raw inputs, such as video or audio? (although you mentioned Faster R-CNN in the experiment section, it is better to discuss the whole pipeline in the methodology).
b. Have you tried discrete event embeddings to represent the atomic and composite events so as the framework can learn distributional embedding representation of events so as to learn the temporal rules?
4. Page 4, please explain what you want to achieve with M_A = M_C \otimes M_D. It is unusual to multiple length by conv1D output. Also please define \otimes here. I am guessing it is elementwise multiplication from the context.
5. Page 4, "M_{D:,:,l}=l. This can be thought as a positional encoding. It is not clear to me why this can be taken as positional encoding?
6. Page 6, please detail how do you sample top c predicates. Please define what is s in a = softmax(s). It seems to me the dimension of s with \sum_i (c i) can be quite large making it softmax(s) very costly. | 2. Page 3, it seems to me that M_T is defined over the probabilities of atomic events. The notation as it is used not making it difficult to make sense of this concept. Please consider providing examples to explain M_T. |
ICLR_2022_1678 | ICLR_2022 | 1 - The authors propose a relaxation of rejection sampling which is using an arbitrary parameter β
instead of the true upper bound of the ratio p q
when the latter cannot be computed. The reviewer fails to understand why the authors did not directly use Importance sampling in the first place.
2- In algorithm 1, the reviewer fails to see a difference between QRS and RS, and will change their opinion if the authors can point out a value of u for which QRS and RS will behave differently.
3 - Uninteresting Section 2.2: - Equation 1 needs a parenthesis to avoid confusion - Equation 3 is pretty much obvious from the definition of TVD (Lemma 2.19 Aldous and Fill https://www.stat.berkeley.edu/users/aldous/RWG/book.html) - Equation 4 is obvious since using the apropriate upper bound gives you rejection sampling which is a perfect sampling algorithm.
4 - In the abstract the authors require the proposal distribution to upper bound the target everywhere which is not true as the authors themselves clarify in the text.
5 - While Equation 9 and 10 are great in that they can be used to compute TVD and KL between the true and QRS distributions, there are multiple issues which are neither stated as assumptions nor addressed appropriately, namely: - They're not unbiased estimators since Z
is not known and needs to be estimated, this point is not explicitly stated. - It is assumed that the normalizing constant of q
is known, which is not always the case. - They rely on importance sampling, which begs question 1. | 1 - The authors propose a relaxation of rejection sampling which is using an arbitrary parameter β instead of the true upper bound of the ratio p q when the latter cannot be computed. The reviewer fails to understand why the authors did not directly use Importance sampling in the first place. 2- In algorithm 1, the reviewer fails to see a difference between QRS and RS, and will change their opinion if the authors can point out a value of u for which QRS and RS will behave differently. |
ZJua8VeHCh | EMNLP_2023 | - Conceptually the proposed idea is similar to existing techniques and it could be viewed as incremental.
- The observed performance enhancements are somewhat modest, suggesting room for further refinement in the future. | - The observed performance enhancements are somewhat modest, suggesting room for further refinement in the future. |
ACL_2017_333_review | ACL_2017 | There are some few details on the implementation and on the systems to which the authors compared their work that need to be better explained. - General Discussion: - Major review: - I wonder if the summaries obtained using the proposed methods are indeed abstractive. I understand that the target vocabulary is build out of the words which appear in the summaries in the training data. But given the example shown in Figure 4, I have the impression that the summaries are rather extractive.
The authors should choose a better example for Figure 4 and give some statistics on the number of words in the output sentences which were not present in the input sentences for all test sets.
- page 2, lines 266-272: I understand the mathematical difference between the vector hi and s, but I still have the feeling that there is a great overlap between them. Both "represent the meaning". Are both indeed necessary? Did you trying using only one of them.
- Which neural network library did the authors use for implementing the system?
There is no details on the implementation.
- page 5, section 44: Which training data was used for each of the systems that the authors compare to? Diy you train any of them yourselves?
- Minor review: - page 1, line 44: Although the difference between abstractive and extractive summarization is described in section 2, this could be moved to the introduction section. At this point, some users might no be familiar with this concept.
- page 1, lines 93-96: please provide a reference for this passage: "This approach achieves huge success in tasks like neural machine translation, where alignment between all parts of the input and output are required."
- page 2, section 1, last paragraph: The contribution of the work is clear but I think the authors should emphasize that such a selective encoding model has never been proposed before (is this true?). Further, the related work section should be moved to before the methods section.
- Figure 1 vs. Table 1: the authors show two examples for abstractive summarization but I think that just one of them is enough. Further, one is called a figure while the other a table.
- Section 3.2, lines 230-234 and 234-235: please provide references for the following two passages: "In the sequence-to-sequence machine translation (MT) model, the encoder and decoder are responsible for encoding input sentence information and decoding the sentence representation to generate an output sentence"; "Some previous works apply this framework to summarization generation tasks."
- Figure 2: What is "MLP"? It seems not to be described in the paper.
- page 3, lines 289-290: the sigmoid function and the element-wise multiplication are not defined for the formulas in section 3.1.
- page 4, first column: many elements of the formulas are not defined: b (equation 11), W (equation 12, 15, 17) and U (equation 12, 15), V (equation 15).
- page 4, line 326: the readout state rt is not depicted in Figure 2 (workflow).
- Table 2: what does "#(ref)" mean?
- Section 4.3, model parameters and training. Explain how you achieved the values to the many parameters: word embedding size, GRU hidden states, alpha, beta 1 and 2, epsilon, beam size.
- Page 5, line 450: remove "the" word in this line? " SGD as our optimizing algorithms" instead of "SGD as our the optimizing algorithms."
- Page 5, beam search: please include a reference for beam search.
- Figure 4: Is there a typo in the true sentence? " council of europe again slams french prison conditions" (again or against?)
- typo "supper script" -> "superscript" (4 times) | - Section 3.2, lines 230-234 and 234-235: please provide references for the following two passages: "In the sequence-to-sequence machine translation (MT) model, the encoder and decoder are responsible for encoding input sentence information and decoding the sentence representation to generate an output sentence"; "Some previous works apply this framework to summarization generation tasks." - Figure 2: What is "MLP"? It seems not to be described in the paper. |
ICLR_2021_2455 | ICLR_2021 | In spite of the strengths mentioned above, there are a few questions that are confusing. 1. As for the simulated experiment: What is the purpose of the third figure in Figure 1? It shows that the perfect causal model performs bad under unobserved, while the other three methods performs almost the same. Further, the performance of the proposed DIRM and DRO is quite similar in this setting, which does not account for the effectiveness of the method. Besides, the result of IRM for this experiment is missed. 2. As for the theoretical analysis: a) For Theorem 1, the right hand equation uses L_2 norm of a function of beta. I read the prove and I think this norm is defined as an integral which has nothing do with beta any more. Therefore, I wonder what does the regularizer proposed in equation(6) means since beta has already been integrated. b) For Theorem 1, the core assumption is ‘the expected loss function as a function of beta belongs to a Sobolev space’, which is confusing. Could you provide some explanations of this assumption or give some examples of it? c) Theorem 1 provides an upper bound for one specific kind of DRO problem whose uncertainty set is formulated as an affine combination of training distributions. However, in this article, the authors do not state what is the definition of the invariance here and why solve such DRO problem could achieve the invariance. 3. As for the proposed objective function: a) As mentioned above, the L_2 norm is taken over a function of beta, which I think is not the Euclidean norm of the vector. Beta has already been integrated and this regularizer has nothing do with beta. I wonder how to compute this when optimizing? b) I wonder how this objective function can be optimized efficiently? The first concern is mentioned above as the computation of L_2 norm. The second concern is how to optimize the variance which is non-convex and hard to optimize. Namkoong et al. [1] convert the optimization of a variance-regularized problem to a f-divergence DRO for better optimization, while in this paper the authors take the opposite way. I wonder is there any theoretical guarantee of the optimization of the objective function(6). 4. As for the experiments: a) The experimental results on the last two datasets are not convincing enough to validate the effectiveness of the proposed method, since the performance is similar to IRM, which I wonder if it is caused by the problems mentioned above(in 3).
[1] Duchi, J. , & Namkoong, H. . (2016). Variance-based regularization with convex objectives. | 4. As for the experiments: a) The experimental results on the last two datasets are not convincing enough to validate the effectiveness of the proposed method, since the performance is similar to IRM, which I wonder if it is caused by the problems mentioned above(in |
ACL_2017_108_review | ACL_2017 | The problem itself is not really well motivated. Why is it important to detect China as an entity within the entity Bank of China, to stay with the example in the introduction? I do see a point for crossing entities but what is the use case for nested entities? This could be much more motivated to make the reader interested. As for the approach itself, some important details are missing in my opinion: What is the decision criterion to include an edge or not? In lines 229--233 several different options for the I^k_t nodes are mentioned but it is never clarified which edges should be present!
As for the empirical evaluation, the achieved results are better than some previous approaches but not really by a large margin. I would not really call the slight improvements as "outperformed" as is done in the paper. What is the effect size? Does it really matter to some user that there is some improvement of two percentage points in F_1? What is the actual effect one can observe? How many "important" entities are discovered, that have not been discovered by previous methods? Furthermore, what performance would some simplistic dictionary-based method achieve that could also be used to find overlapping things? And in a similar direction: what would some commercial system like Google's NLP cloud that should also be able to detect and link entities would have achieved on the datasets. Just to put the results also into contrast of existing "commercial" systems.
As for the result discussion, I would have liked to see some more emphasis on actual crossing entities. How is the performance there? This in my opinion is the more interesting subset of overlapping entities than the nested ones. How many more crossing entities are detected than were possible before? Which ones were missed and maybe why? Is the performance improvement due to better nested detection only or also detecting crossing entities? Some general error discussion comparing errors made by the suggested system and previous ones would also strengthen that part.
General Discussion: I like the problems related to named entity recognition and see a point for recognizing crossing entities. However, why is one interested in nested entities? The paper at hand does not really motivate the scenario and also sheds no light on that point in the evaluation. Discussing errors and maybe advantages with some example cases and an emphasis on the results on crossing entities compared to other approaches would possibly have convinced me more.
So, I am only lukewarm about the paper with maybe a slight tendency to rejection. It just seems yet another try without really emphasizing the in my opinion important question of crossing entities.
Minor remarks: - first mention of multigraph: some readers may benefit if the notion of a multigraph would get a short description - previously noted by ... many previous: sounds a little odd - Solving this task: which one?
- e.g.: why in italics?
- time linear in n: when n is sentence length, does it really matter whether it is linear or cubic?
- spurious structures: in the introduction it is not clear, what is meant - regarded as _a_ chunk - NP chunking: noun phrase chunking?
- Since they set: who?
- pervious -> previous - of Lu and Roth~(2015) - the following five types: in sentences with no large numbers, spell out the small ones, please - types of states: what is a state in a (hyper-)graph? later state seems to be used analogous to node?!
- I would place commas after the enumeration items at the end of page 2 and a period after the last one - what are child nodes in a hypergraph?
- in Figure 2 it was not obvious at first glance why this is a hypergraph.
colors are not visible in b/w printing. why are some nodes/edges in gray. it is also not obvious how the highlighted edges were selected and why the others are in gray ... - why should both entities be detected in the example of Figure 2? what is the difference to "just" knowing the long one?
- denoting ...: sometimes in brackets, sometimes not ... why?
- please place footnotes not directly in front of a punctuation mark but afterwards - footnote 2: due to the missing edge: how determined that this one should be missing?
- on whether the separator defines ...: how determined?
- in _the_ mention hypergraph - last paragraph before 4.1: to represent the entity separator CS: how is the CS-edge chosen algorithmically here?
- comma after Equation 1?
- to find out: sounds a little odd here - we extract entities_._\footnote - we make two: sounds odd; we conduct or something like that?
- nested vs. crossing remark in footnote 3: why is this good? why not favor crossing? examples to clarify?
- the combination of states alone do_es_ not?
- the simple first order assumption: that is what?
- In _the_ previous section - we see that our model: demonstrated? have shown?
- used in this experiments: these - each of these distinct interpretation_s_ - published _on_ their website - The statistics of each dataset _are_ shown - allows us to use to make use: omit "to use" - tried to follow as close ... : tried to use the features suggested in previous works as close as possible?
- Following (Lu and Roth, 2015): please do not use references as nouns: Following Lu and Roth (2015) - using _the_ BILOU scheme - highlighted in bold: what about the effect size?
- significantly better: in what sense? effect size?
- In GENIA dataset: On the GENIA dataset - outperforms by about 0.4 point_s_: I would not call that "outperform" - that _the_ GENIA dataset - this low recall: which one?
- due to _an_ insufficient - Table 5: all F_1 scores seems rather similar to me ... again, "outperform" seems a bit of a stretch here ... - is more confident: why does this increase recall?
- converge _than_ the mention hypergraph - References: some paper titles are lowercased, others not, why? | - why should both entities be detected in the example of Figure 2? what is the difference to "just" knowing the long one? |
NIPS_2018_43 | NIPS_2018 | - Theoretical analyses are not particularly difficult, even if they do provide some insights. That is, the analyses are what I would expect any competent grad student to be able to come up with within the context of a homework assignment. I would consider the contributions there to be worthy of a posted note / arXiv article. - Section 4 is interesting, but does not provide any actionable advice to the practitioner, unlike Theorem 4. The conclusion I took was that the learned function f needs to achieve a compression rate of \zeta / m with a false positive rate F_p and false negative rate F_n. To know if my deep neural network (for example) can do that, I would have to actually train a fixed size network and then empirically measure its errors. But if I have to do that, the current theory on standard Bloom filters would provide me with an estimate of the equivalent Bloom filter that achieves the same error false positive as the learned Bloom filter. - To reiterate the above point, the analysis of Section 4 doesn't change how I would build, evaluate, and decide on whether to use learned Bloom filters. - The analytical approach of Section 4 gets confusing by starting with a fixed f with known \zeta, F_p, F_n, and then drawing the conclusion for an a priori fixed F_p, F_n (lines 231-233) before fixing the learned function f (lines 235-237). In practice, one typically fixes the function class (e.g. parameterized neural networks with the same architecture) *first* and measures F_p, F_n after. For such settings where \zeta and b are fixed a priori, one would be advised to minimize the learned Bloom filter's overall false positive (F_p + (1-F_p)\alpha^{b/F_n}) in the function class. An interesting analysis would then be to say whether this is feasible, and how it compares to the log loss function. Experiments can then conducted to back this up. This could constitute actionable advice to practitioners. Similarly for the sandwiched learned Bloom filter. - Claim (first para of Section 3.2) that "this methodology requires significant additional assumptions" seems too extreme to me. The only additional assumption is that the test set be drawn from the same distribution as the query set, which is natural for many machine learning settings where the train, validation, test sets are typically assumed to be from the same iid distribution. (If this assumption is in fact too hard to satisfy, then Theorem 4 isn't very useful too.) - Inequality on line 310 has wrong sign; compare inequality line 227 --- base \alpha < 1. - No empirical validation. I would have like to see some experiments where the bounds are validated. | - No empirical validation. I would have like to see some experiments where the bounds are validated. |
ACL_2017_49_review | ACL_2017 | There are some minor points, listed as follows: 1) Figure 1: I am a bit surprised that the function words dominate the content ones in a Japanese sentence. Sorry I may not understand Japanese. 2) In all equations, sequences/vectors (like matrices) should be represented as bold texts to distinguish from scalars, e.g., hi, xi, c, s, ... 3) Equation 12: s_j-1 instead of s_j.
4) Line 244: all encoder states should be referred to bidirectional RNN states.
5) Line 285: a bit confused about the phrase "non-sequential information such as chunks". Is chunk still sequential information???
6) Equation 21: a bit confused, e.g, perhaps insert k into s1(w) like s1(w)(k) to indicate the word in a chunk. 7) Some questions for the experiments: Table 1: source language statistics? For the baselines, why not running a baseline (without using any chunk information) instead of using (Li et al., 2016) baseline (|V_src| is different)? It would be easy to see the effect of chunk-based models. Did (Li et al., 2016) and other baselines use the same pre-processing and post-processing steps? Other baselines are not very comparable. After authors's response, I still think that (Li et al., 2016) baseline can be a reference but the baseline from the existing model should be shown. Figure 5: baseline result will be useful for comparison? chunks in the translated examples are generated *automatically* by the model or manually by the authors? Is it possible to compare the no. of chunks generated by the model and by the bunsetsu-chunking toolkit? In that case, the chunk information for Dev and Test in Table 1 will be required. BTW, the authors's response did not address my point here. 8) I am bit surprised about the beam size 20 used in the decoding process. I suppose large beam size is likely to make the model prefer shorter generated sentences. 9) Past tenses should be used in the experiments, e.g., Line 558: We *use* (used) ... Line 579-584: we *perform* (performed) ... *use* (used) ... ... - General Discussion: Overall, this is a solid work - the first one tackling the chunk-based NMT; and it well deserves a slot at ACL. | 5) Line 285: a bit confused about the phrase "non-sequential information such as chunks". Is chunk still sequential information??? |
NIPS_2019_634 | NIPS_2019 | see section 5 ("improvements") below. Originality: while the methods are not particularly novel (autoregressive and masked language modelling pretraining have both been used before for ELMo and BERT; this work extends these objectives to the multi-lingual case), the performance gains on all four tasks are still very impressive. - Quality: This paper's contributions are mostly empirical. The empirical results are strong, and the methodology is sound and explained in sufficient technical details. - Clarity: The paper is well-written, makes the connections with the relevant earlier work, and includes important details that can facilitate reproducibility (e.g. the learning rate, number of layers, etc.). - Significance: The empirical results constitute a new state of the art and are important to drive progress in the field. ---------- Update after authors' response: the response clearly addressed most of my concerns. I look forward to the addition of supervised MT experiments on other languages (beyond the relatively small Romanian-English dataset) on subsequent versions of the paper. I maintain my initial assessment that this is a strong submission with impressive empirical results, which would be useful for the community. I maintain my final recommendation of "8". | - Clarity: The paper is well-written, makes the connections with the relevant earlier work, and includes important details that can facilitate reproducibility (e.g. the learning rate, number of layers, etc.). |
NIPS_2019_1338 | NIPS_2019 | , this paper is a solid submission. The idea is interesting and effective. It outperforms the state of the art. Strength: + The paper is well written and the explanations are clear. + The quantitative results (especially Table 2) clearly demonstrate the effectiveness of the proposed method. + Figure 1 is well designed and useful to understand the model. + Qualitative results in Figure 2 is convincing and demonstrates the consistency of the attention module across different classes. Weakness: - Motivation behind 3.2 Section 3.2 describes the cropping network that uses a 2d continuous boxcar function. Motivation for this design choice is weak, as previous attempts in local attention have used Gaussian masks [a], simple bilinear sampling using spatial transformers [b], or even pooling methods [c]. If this makes a difference, it would be great to demonstate it in an experiment. At minimum, bilinear sampling should be compared against. [a] Gregor, Karol, et al. "Draw: A recurrent neural network for image generation." ICML, 2015. [b] Jaderberg, Max, Karen Simonyan, and Andrew Zisserman. "Spatial transformer networks." NeurIPS, 2015. [c] He, Kaiming, et al. "Mask r-cnn." Proceedings of the IEEE international conference on computer vision. 2017. - Discrepancy between eq. 9 and Figure 1. From eq. 9, it seems like the output patches are not cropped parts of the input image but just masked versions of the input image where most pixels are black. Is this correct? In this case, Figure 1 is misleading. And if so, wouldn't zooming on the region of interest using bilinear sampling provide better results? - Class-Center Triplet Loss The formulation of class-center triplet loss (L_CCT) is not entirely convincing. While the authors claim L2 normalization is introduced to ease the setting of a proper margin, this also has a different effect. This would in fact, divert the formulation to be different from the traditional definition of a margin. For example, these two points in the semantic feature space could be close, but far away after the normalization that projects them on a unit hypersphere. And the other way around is also true. Especially given the fact that the unnormalized version of phi is used also in L_CLS, the effect of this formulation is not obvious. In fact, the formulation resembles the cosine distance in an inner product, and the margin would be set -- roughly speaking -- on the cosine angle. The authors should discuss this in their paper. I find the current explanation misleading. - Backbone CNN Although I assume so, in Section 3.3 / Figure 1, it is not clear which backbone CNNs share their weights, and which don't (if some don't). Is the input image going through the same CNN as the local patches? Are the local patches going through the same CNN? I suggest some coloring to make it clear if not all are shared. - Minor issues L15: "must be limited to one paragraph". L193: L_CAT --> L_CCT Equation 11: it would be clearer with indices under the max function. L215: "unit sphere" -> "unit hypersphere". Unless the dimension of the semantic feature space is 3, which in this case should be mentioned. Potential Enhancements: * This paper is targeting zero-shot classification but since the multi-attention module is a major contribution by itself, it could have been validated on other tasks. An obvious one is fine-grained classification, on CUB-200 for instance. It is maybe possible for the authors to report this result since they already use CUB-200, but I would understand if it is not done in the rebuttal. ==== POST REBUTTAL ==== The additional results have made the submission even stronger than before. I am therefore more confident in the rating. | - Discrepancy between eq. 9 and Figure 1. From eq. 9, it seems like the output patches are not cropped parts of the input image but just masked versions of the input image where most pixels are black. Is this correct? In this case, Figure 1 is misleading. And if so, wouldn't zooming on the region of interest using bilinear sampling provide better results? |
ICLR_2022_234 | ICLR_2022 | Weakness and Questions: 1. The analysis is only limited to GCNs, while the paper title is too general (GNNs). Not very 2. One concern is that the performance of GCN on Chameleon and Squirrel (Table1) differs a lot from the one reported in other papers (e.g. Geom-GCN (Pei et al. 2020), H2GNN (Zhu et al. 2020)). It seems the settings are the same as Pei et al. 2020, why is the result on these two datasets so different (2 to 3 times different)? Generally, I do not think the hyperprameter tunning should impact so much. Could the authors explain more details about it? In fact I also run some experiments on those datasets before and I cannot get those high numbers either. 3. The cross-class neighborhood similarity metric is intuitive and a good idea. However, it lacks of a direct theoretic connection to GCNs’ performance. Same as the heterophily metric, I do not think it can completely decide the GCNs’ performance, because the node feature distribution is also important here. In fact, the assumptions in Theorem 1 are quite strong. If the nodes with the same label are sampled from the same feature distribution, it means generally MLP can also have a good performance. When this assumption does not meet, the analysis will become very complex. That is why I think Figure5 may be not enough to explain everything (but it is still interesting to see this empirical result). 4. Although Theorem 1 seems correct to me, I have a question here. Assume we have a separate node with 0 neighbors, that means the upper bound here is 0. It is obviously not true. So, how to explain this exception? | 4. Although Theorem 1 seems correct to me, I have a question here. Assume we have a separate node with 0 neighbors, that means the upper bound here is 0. It is obviously not true. So, how to explain this exception? |
CsCRTvEZg1 | EMNLP_2023 | 1. Limited technical novelty. Compare with the two mentioned papers (Xing and Tsang, 2022a, b), although the previous papers focus on graph-based approaches, the idea, co-attention mechanism, and architecture of this paper are quite similar to the previous.
2. Calculating and presenting the averaged overall accuracies over two datasets (the Avg columns) in table 1 and table 2 seems kind of unfounded.
3. In the part of Task-shared encoder, a character-level word embedding has been concatenated to the vector. It might be better to briefly explain the purpose of the concatenation.
4. In the PLM part, It might be better to add an experiment with MISCA + BERT, since several strong baselines have not been applied to RoBERTa. Only one experiment with PLM seems to be quite few. . | 1. Limited technical novelty. Compare with the two mentioned papers (Xing and Tsang, 2022a, b), although the previous papers focus on graph-based approaches, the idea, co-attention mechanism, and architecture of this paper are quite similar to the previous. |
ICLR_2021_973 | ICLR_2021 | .
Clearly state your recommendation (accept or reject) with one or two key reasons for this choice. I recommend acceptance. The number of updates needed to learn realistic brain-like representations is a fair criticism of current models, and this paper demonstrates that this number can be greatly reduced, with moderate reduction in Brain-Score. I was surprised that it worked so well.
Ask questions you would like answered by the authors to help you clarify your understanding of the paper and provide the additional evidence you need to be confident in your assessment. - Is the third method (updating only down-sampling layers) meant to be biologically relevant? If so, can anything more specific be said about this, other than that different cortical layers learn at different rates? - Given that the brain does everything in parallel, why is the number of weight updates a better metric than the number of network updates?
Provide additional feedback with the aim to improve the paper. - Bottom of pg. 4: I think 37 bits / synapse (Zador, 2019) relates to specification of the target neuron rather than specification of the connection weight. So I’m not sure its obvious how this relates to the weight compression scheme. The target neurons are already fully specified in CORnet-S. - Pg. 5: “The training time reduction is less drastic than the parameter reduction because most gradients are still computed for early down-sampling layers (Discussion).” This seems not to have been revisited in the Discussion (which is fine, just delete “Discussion”). - Fig. 3: Did you experiment with just training the middle Conv layers (as opposed to upsample or downsample layers)? - Fig. 3: Why go to 0 trained parameters for downstream training, but minimum ~1M trained parameters for CT? - Fig. 4: On the color bar, presumably one of the labels should say “worse”. - Section B.1: How many Gaussian components were used, or how many parameters total? Or if different for each layer, what was the maximum across all layers? - Section B.3: I wasn’t clear on the numbers of parameters used in each approach. - D.1: How were CORnet-S clusters mapped to ResNet blocks? I thought different clusters were used in each layer. If not, maybe this could be highlighted in Section 4. | - Pg.5: “The training time reduction is less drastic than the parameter reduction because most gradients are still computed for early down-sampling layers (Discussion).” This seems not to have been revisited in the Discussion (which is fine, just delete “Discussion”). |
ICLR_2023_879 | ICLR_2023 | The ablations for the different pre-training tasks in section 4.5 / Figure 6 are a bit puzzling. It does seem that the CRD task has destructive value on that particular binding affinity prediction task since: a) the performance of CRD + MLM or CRD + PPI leads to both lower performance Vs MLM or PPI alone respectively b) the performance of CRD + MLM + PPI is also lower vs just using MLM + PPI. This seems particularly important from a practical standpoint, and additional experiments are needed to confirm whether: 1) that problem applies to other downstream tasks or is just specific to binding affinity prediction — and if so, why? 2) there is something fundamentally wrong with the CRD pre-training as currently implemented? 3) there is a way to anticipate ex ante (or post fine tuning) which tokens should be used to ensure optimal task performance ?
The ablation in section in Table 3 is a bit puzzling as well: it appears that the performance of PromptProtein without layer skip is lower than the performance from the conventional MTL. Could you please explain why that might be the case? (I would have assumed intermediate performance between conventional MTL and full PromptProtein as I presume the attention masks are still used in that ablation?)
Several points (in section 4 primarily) were not fully clear (see clarity paragraph below).
The following claim in conclusion does not seem fully substantiated: “PromptProtein beats state-of-the-art baselines by significant margins”. Authors do report the relevant baselines listed in the FLIP paper [1]. But since that paper was released, several methods have shown markedly superior performance for protein modeling & achieving high spearman with deep mutational scanning assays — see for example, [2] and [3]. I would suggest adding these two baselines to the analysis or tone done the SOTA claims.
[1] Dallago, C., Mou, J., Johnston, K.E., Wittmann, B.J., Bhattacharya, N., Goldman, S., Madani, A., & Yang, K.K. (2022). FLIP: Benchmark tasks in fitness landscape inference for proteins. bioRxiv.
[2] Hsu, C., Verkuil, R., Liu, J., Lin, Z., Hie, B.L., Sercu, T., Lerer, A., & Rives, A. (2022). Learning inverse folding from millions of predicted structures. bioRxiv.
[3] Notin, P., Dias, M., Frazer, J., Marchena-Hurtado, J., Gomez, A.N., Marks, D.S., & Gal, Y. (2022). Tranception: Protein Fitness Prediction with Autoregressive Transformers and Inference-time Retrieval. ICML. | 1) that problem applies to other downstream tasks or is just specific to binding affinity prediction — and if so, why? |
D6zn6ozJs7 | ICLR_2025 | 1. Limited exploration of external knowledge sources: The paper acknowledges that reliance on sources such as Wikipedia is a limitation but can further explore its impact on detection performance and possible solutions.
2. Evaluation depth: Although the paper evaluated multiple models, it mainly focused on the zero-shot setting. If fine-tuning models or more ablation experiments are added, the experimental results may be more convincing.
3. The handling of rumors generated by GPT: The paper points out the challenges of detecting rumors generated by GPT, but further analysis or solutions can be proposed. There is no analysis of why GPT-generated Rumor is closer to Natural Rumor, or in other words, why is GPT-generated Rumor about as difficult to detect as Natural Rumor? After all, Artificial Rumor is also written by humans, so it should be about the same difficulty as Natural Rumor, but the experimental result is that Natural Rumor is the easiest to detect.
4. It is suggested to discuss and compare more related works such as [1] in this paper.
[1] Detecting and Grounding Multi-Modal Media Manipulation and Beyond. TPAMI 2024. | 3. The handling of rumors generated by GPT: The paper points out the challenges of detecting rumors generated by GPT, but further analysis or solutions can be proposed. There is no analysis of why GPT-generated Rumor is closer to Natural Rumor, or in other words, why is GPT-generated Rumor about as difficult to detect as Natural Rumor? After all, Artificial Rumor is also written by humans, so it should be about the same difficulty as Natural Rumor, but the experimental result is that Natural Rumor is the easiest to detect. |
ICLR_2023_2640 | ICLR_2023 | Weakness: 1 For key issues in federated recommendation, the authors do not contribute/discuss much, e.g., communication cost, privacy protection, time complexity.
2 The technical contribution is limited. For example, the contents of Section 4 are not about a formal and principled solution, but most about heuristics.
3 For the studied problem, there are many recent works, which are not studied in the experiments. | 2 The technical contribution is limited. For example, the contents of Section 4 are not about a formal and principled solution, but most about heuristics. |
ARR_2022_113_review | ARR_2022 | - Although BFS is briefly introduced in Section 3, it's still uneasy to understand for people who have not studied the problem. More explanation is preferable.
- Algorithm 1, line 11: the function s(·) should accept a single argument according to line 198.
- Figure 6: the font size is a little bit small. | - Figure 6: the font size is a little bit small. |
NIPS_2020_936 | NIPS_2020 | I have a few comments on this paper, even though it would be unfair to call them weaknesses. They are listed below in no particular order. - It's regrettable that the probability mass function is practically unexploited. In MixBoost it is set to a quasi-uniform distribution, which depends on only one single parameter. Intuitively, each learner class should be considered individually, even in the case of BDT of different depths. I think that considering various probability mass function would've added further depth to the experimental setting (unless I'm missing an obvious reason why the quasi-uniform distribution is well suited...). - Continuing from the previous point, it would have been interesting to have a discussion on how the choice of the probability mass function influences the theoretical guarantees of in section 2.4. - The main strength of HNBM resides in using arbitrary mass functions, yet MixBoost only relies in BDT and LR. I strongly think that combining other types of classifiers should provide further insight on MixBoost. | - It's regrettable that the probability mass function is practically unexploited. In MixBoost it is set to a quasi-uniform distribution, which depends on only one single parameter. Intuitively, each learner class should be considered individually, even in the case of BDT of different depths. I think that considering various probability mass function would've added further depth to the experimental setting (unless I'm missing an obvious reason why the quasi-uniform distribution is well suited...). |
AGVANImv7S | EMNLP_2023 | **Weaknesses:**
1. The proposed evaluation pipeline is similar to prior works.
2. ChatGPT shows a great percentage of abstention than other models. Is that fair to compare their accuracies?
3. Reproducibility. The author doesn't mention if the code will be available to the public. | 2. ChatGPT shows a great percentage of abstention than other models. Is that fair to compare their accuracies? |
NIPS_2018_947 | NIPS_2018 | weakness of the paper, in its current version, is the experimental results. This is not to say that the proposed method is not promising - it definitely is. However, I have some questions that I hope the authors can address. - Time limit of 10 seconds: I am quite intrigued as to the particular choice of time limit, which seems really small. In comparison, when I look at the SMT Competition of 2017, specifically the QF_NIA division (http://smtcomp.sourceforge.net/2017/results-QF_NIA.shtml?v=1500632282), I find that all 5 solvers listed require 300-700 seconds. The same can be said about QF_BF and QF_NRA (links to results here http://smtcomp.sourceforge.net/2017/results-toc.shtml). While the learned model definitely improves over Z3 under the time limit of 10 seconds, the discrepancy with the competition results on similar formula types is intriguing. Can you please clarify? I should note that while researching this point, I found that the SMT Competition of 2018 will have a "10 Second wonder" category (http://smtcomp.sourceforge.net/2018/rules18.pdf). - Pruning via equivalence classes: I could not understand what is the partial "current cost" you mention here. Thanks for clarifying. - Figure 3: please annotate the axes!! - Bilinear model: is the label y_i in {-1,+1}? - Dataset statistics: please provide statistics for each of the datasets: number of formulas, sizes of the formulas, etc. - Search models comparison 5.1: what does 100 steps here mean? Is it 100 sampled strategies? - Missing references: the references below are relevant to your topic, especially [a]. Please discuss connections with [a], which uses supervised learning in QBF solving, where QBF generalizes SMT, in my understanding. [a] Samulowitz, Horst, and Roland Memisevic. "Learning to solve QBF." AAAI. Vol. 7. 2007. [b] Khalil, Elias Boutros, et al. "Learning to Branch in Mixed Integer Programming." AAAI. 2016. Minor typos: - Line 283: looses -> loses | - Missing references: the references below are relevant to your topic, especially [a]. Please discuss connections with [a], which uses supervised learning in QBF solving, where QBF generalizes SMT, in my understanding. [a] Samulowitz, Horst, and Roland Memisevic. "Learning to solve QBF." AAAI. Vol. |
ICLR_2023_2237 | ICLR_2023 | 1.Similar methods have already been proposed for multi-task learning and has not been disccussed in this paper [1].
1.When sampling on the convex hull parameterization, authors choose to adopt the Dirichlet distribution since its support is the T-dimensional simplex. Does this distribution have other properties. Why using this distribution? If p≫1,how the ensemble will change.
2.When training, a mono tonic relationship is imposed between the degree of a single-task predictor participation and the weight of the corresponding task loss. As a result, the ensemble engenders a subspace that explicitly encodes tradeoffs and results in a continuous parameterization of the Pareto Front. Whether the mono tonic relationship can be replaced by other relationships? Explaining this point may be better.
[1]Navon A, Shamsian A, Fetaya E, et al. Learning the Pareto Front with Hypernetworks[C]//International Conference on Learning Representations. 2020. | 2.When training, a mono tonic relationship is imposed between the degree of a single-task predictor participation and the weight of the corresponding task loss. As a result, the ensemble engenders a subspace that explicitly encodes tradeoffs and results in a continuous parameterization of the Pareto Front. Whether the mono tonic relationship can be replaced by other relationships? Explaining this point may be better. [1]Navon A, Shamsian A, Fetaya E, et al. Learning the Pareto Front with Hypernetworks[C]//International Conference on Learning Representations. 2020. |
NIPS_2020_576 | NIPS_2020 | 1. Although the problem studied in this paper is interesting, this pape is not easy to follow. 2. More baselines (sampling approaches designed for GNNs) are needed. In Table 2, S-GCN [5] is a simple sampler. ClusterGCN and GraphSAINT are designed for sampling (sub)graphs. The same for Table 3. 3. To be honest, I am kind of confused about Table 3. It would be better if the authors provide more analysis for Table 3. And more analysis when Tables 2,3 are considered together. 4. How did the authors determine the hyper-parameter settings? 5. I am interested in seeing more experimental comparison on the datasets with a large number of nodes. 6. How about the comparison in terms of computation cost / running time? | 6. How about the comparison in terms of computation cost / running time? |
NIPS_2018_330 | NIPS_2018 | weakness of the paper is its insufficient motivation of the problem, and thus, I fail to see the relevance. Specifically, the authors do not show that the process communication is indeed a problem in practice. This is particularly surprising as typical empirical risk minimization (ERM) problems naturally decompose as sum of losses and the gradient is the sum of the local loss gradients. Thus, the required inter-process communication limits to one map-reduce and one broadcast per iteration. The paper lacks experiments reporting runtime vs. number of processors, in particular with respect to different communication strategies. In effect, I cannot follow _why_ the authors would like to proof convergence results for this setting, as I am not convinced _why_ this setting is preferable to other communication and optimization strategies. I am uncertain whether this is only a problem of presentation and not of significance, but it is unfortunately insufficient in either case. Here are some suggetions which I hope the authors might find useful for future presentations of the subject: - I did not get a clear picture from the goal of the paper in the introduction. My guess is that the examples chosen did not convince me that there are problems which require a lot of inter-process communication. This holds particularly for the second paragraph where sampling-based Bayesian methods are particularly mentioned as an example where the paper's results are irrelevant as they are already embarrassingly parallel. Instead, I suggest the authors try to focus on problems where the loss function does not decompose as the sum of sample losses and other ERM-based distributed algorithms such as Hogwild. - From the discussion in lines 60-74 (beginning of Sect. 2), I had the impression that the authors want to focus on a situation where the gradient of the sum is not the sum of the individual gradients, but this is not communicated in the text. In particular, the paragraph lines 70-74 is a setting that is shared in all ERM approaches and could be discussed in less space. A situation where the the gradient of the sum of the losses is not the sum of the individual loss gradients is rare and could require some space. - Line 114, the authors should introduce the notation for audiences not familiar with manifolds ('D' in "D R_theta ...") - From the presentations in the supplement, I cannot see how the consideration of the events immediately imply the Theorem. I assume this presentation is incomplete? Generally, the authors could point which parts explicitely of the proof need to be adapted, why, and how it's done. The authors also seem to refer to statements in Lemmata defined in other texts (Lemma 3, Lemma 4). These should be restated or more clearly referenced. - In algorithm 1, the subscript 's' denotes the master machine, but also iteration times. I propose to use subscript '1' to denote to the master machine (problematic lines are line 6 "Transmit the local ..." and lines 8-9 "Form the surrogate function ...") - the text should be checked for typos ("addictive constant" and others) --- Post-rebuttal update: I appreciate the author's feedback. I am afraid my missing point was on a deeper level than the authors anticipated: I fail to see in what situations the access to the global higher-derivatives is required which seems to be the crux and the motivation of the analysis. In particular, to me this is still in stark contrast with the rebuttal to comment 2. If I'm using gradient descent and if the gradient of the distributed sum is the sum of the distributed gradients, how is the implementation more costly than a map-reduce followed-up by a broadcast? For gradient descent to converge, no global higher-order derivative information is required, no? So this could be me not having a good overview of more elaborate optimization algorithms or some other misunderstanding of the paper. But I'm seeing that my colleagues seem to grasp the significance so I'm looking forward to be convinced in the future. | - I did not get a clear picture from the goal of the paper in the introduction. My guess is that the examples chosen did not convince me that there are problems which require a lot of inter-process communication. This holds particularly for the second paragraph where sampling-based Bayesian methods are particularly mentioned as an example where the paper's results are irrelevant as they are already embarrassingly parallel. Instead, I suggest the authors try to focus on problems where the loss function does not decompose as the sum of sample losses and other ERM-based distributed algorithms such as Hogwild. |
tUiYbVqcuQ | ICLR_2024 | * The claims of the paper are unclear. What does it mean that the "optimal actions are personalized?". How do we measure personalization?
* What kind of communication overhead we are talking about, during training or during inference? How big is the communication cost for a stoplight? Although this is one of the three claims, the paper does not seem to measure communication cost anywhere in the rest of the paper.
* Why is this approach more privacy preserving than other federated learning approaches? Is privacy preservation an issue for traffic signal control, i.e. one traffic signal not to know what is the color of the next one? One would think that this is a very bad example of an application of federated learning. | * Why is this approach more privacy preserving than other federated learning approaches? Is privacy preservation an issue for traffic signal control, i.e. one traffic signal not to know what is the color of the next one? One would think that this is a very bad example of an application of federated learning. |
0DkaimvWs0 | EMNLP_2023 | 1) Inadequate method details: The "expert ID embedding" mentioned in Section 4.3 is somewhat confusing as it lacks specific clarification. It remains unclear whether this ID refers to the registered name of the expert or some other form of identification. If it simply represents the expert's registered name, its ability to capture the personalized characteristics of the expert is questionable.
2) Insufficient experiments: The "Further Analysis" section of the paper and the experiments of "ExpertPLM: Pre-training Expert Representation for Expert Finding (https://aclanthology.org/2022.findings-emnlp.74.pdf)" share significant similarities. However, there is a lack of experimental analysis specific to certain parameters within the paper itself, such as the length of the question body and title and the number of negative samples (K value).
3) Lack of fair comparison: In Figure 3, the authors compared CPEF with PMEF to demonstrate the advantages of the pre-trained question representation model under data scarcity conditions (line 529-lin534). However, emphasizing the advantages of CPEF through this comparison is unjust since PMEF lacks a pre-training module. To ensure fairness, it is recommended to compare CPEF with another pre-trained model, such as ExpertBert, to showcase the advantage of the innovative pre-training module design of CPEF. | 3) Lack of fair comparison: In Figure 3, the authors compared CPEF with PMEF to demonstrate the advantages of the pre-trained question representation model under data scarcity conditions (line 529-lin534). However, emphasizing the advantages of CPEF through this comparison is unjust since PMEF lacks a pre-training module. To ensure fairness, it is recommended to compare CPEF with another pre-trained model, such as ExpertBert, to showcase the advantage of the innovative pre-training module design of CPEF. |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.