Column schema: id (string, 9-16 chars), submitter (string, 3-64 chars, nullable), authors (string, 5-6.63k chars), title (string, 7-245 chars), comments (string, 1-482 chars, nullable), journal-ref (string, 4-382 chars, nullable), doi (string, 9-151 chars, nullable), report-no (string, 984 classes), categories (string, 5-108 chars), license (string, 9 classes), abstract (string, 83-3.41k chars), versions (list, length 1-20), update_date (timestamp[s], 2007-05-23 to 2025-04-11), authors_parsed (sequence, length 1-427), prompt (string, 166-3.49k chars), label (string, 2 classes), prob (float64, 0.5-0.98)

id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prompt | label | prob |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1107.3765 | Jordan Boyd-Graber | Ke Zhai, Jordan Boyd-Graber, and Nima Asadi | Using Variational Inference and MapReduce to Scale Topic Modeling | null | null | null | null | cs.AI cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Latent Dirichlet Allocation (LDA) is a popular topic modeling technique for
exploring document collections. Because of the increasing prevalence of large
datasets, there is a need to improve the scalability of inference of LDA. In
this paper, we propose a technique called ~\emph{MapReduce LDA} (Mr. LDA) to
accommodate very large corpus collections in the MapReduce framework. In
contrast to other techniques to scale inference for LDA, which use Gibbs
sampling, we use variational inference. Our solution efficiently distributes
computation and is relatively simple to implement. More importantly, this
variational implementation, unlike highly tuned and specialized
implementations, is easily extensible. We demonstrate two extensions of the
model possible with this scalable framework: informed priors to guide topic
discovery and modeling topics from a multilingual corpus.
| [
{
"version": "v1",
"created": "Tue, 19 Jul 2011 16:32:22 GMT"
}
] | 2011-07-20T00:00:00 | [
[
"Zhai",
"Ke",
""
],
[
"Boyd-Graber",
"Jordan",
""
],
[
"Asadi",
"Nima",
""
]
] | TITLE: Using Variational Inference and MapReduce to Scale Topic Modeling
ABSTRACT: Latent Dirichlet Allocation (LDA) is a popular topic modeling technique for
exploring document collections. Because of the increasing prevalence of large
datasets, there is a need to improve the scalability of inference of LDA. In
this paper, we propose a technique called ~\emph{MapReduce LDA} (Mr. LDA) to
accommodate very large corpus collections in the MapReduce framework. In
contrast to other techniques to scale inference for LDA, which use Gibbs
sampling, we use variational inference. Our solution efficiently distributes
computation and is relatively simple to implement. More importantly, this
variational implementation, unlike highly tuned and specialized
implementations, is easily extensible. We demonstrate two extensions of the
model possible with this scalable framework: informed priors to guide topic
discovery and modeling topics from a multilingual corpus.
| no_new_dataset | 0.947575 |
1103.5112 | Mikail Rubinov | Mikail Rubinov and Olaf Sporns | Weight-conserving characterization of complex functional brain networks | NeuroImage, in press | Neuroimage. 2011 Jun 15;56(4):2068-79. Epub 2011 Apr 1 | 10.1016/j.neuroimage.2011.03.069 | null | q-bio.NC cond-mat.dis-nn physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Complex functional brain networks are large networks of brain regions and
functional brain connections. Statistical characterizations of these networks
aim to quantify global and local properties of brain activity with a small
number of network measures. Important functional network measures include
measures of modularity (measures of the goodness with which a network is
optimally partitioned into functional subgroups) and measures of centrality
(measures of the functional influence of individual brain regions).
Characterizations of functional networks are increasing in popularity, but are
associated with several important methodological problems. These problems
include the inability to characterize densely connected and weighted functional
networks, the neglect of degenerate topologically distinct high-modularity
partitions of these networks, and the absence of a network null model for
testing hypotheses of association between observed nontrivial network
properties and simple weighted connectivity properties. In this study we
describe a set of methods to overcome these problems. Specifically, we
generalize measures of modularity and centrality to fully connected and
weighted complex networks, describe the detection of degenerate high-modularity
partitions of these networks, and introduce a weighted-connectivity null model
of these networks. We illustrate our methods by demonstrating degenerate
high-modularity partitions and strong correlations between two complementary
measures of centrality in resting-state functional magnetic resonance imaging
(MRI) networks from the 1000 Functional Connectomes Project, an open-access
repository of resting-state functional MRI datasets. Our methods may allow more
sound and reliable characterizations and comparisons of functional brain
networks across conditions and subjects.
| [
{
"version": "v1",
"created": "Sat, 26 Mar 2011 06:57:37 GMT"
}
] | 2011-07-19T00:00:00 | [
[
"Rubinov",
"Mikail",
""
],
[
"Sporns",
"Olaf",
""
]
] | TITLE: Weight-conserving characterization of complex functional brain networks
ABSTRACT: Complex functional brain networks are large networks of brain regions and
functional brain connections. Statistical characterizations of these networks
aim to quantify global and local properties of brain activity with a small
number of network measures. Important functional network measures include
measures of modularity (measures of the goodness with which a network is
optimally partitioned into functional subgroups) and measures of centrality
(measures of the functional influence of individual brain regions).
Characterizations of functional networks are increasing in popularity, but are
associated with several important methodological problems. These problems
include the inability to characterize densely connected and weighted functional
networks, the neglect of degenerate topologically distinct high-modularity
partitions of these networks, and the absence of a network null model for
testing hypotheses of association between observed nontrivial network
properties and simple weighted connectivity properties. In this study we
describe a set of methods to overcome these problems. Specifically, we
generalize measures of modularity and centrality to fully connected and
weighted complex networks, describe the detection of degenerate high-modularity
partitions of these networks, and introduce a weighted-connectivity null model
of these networks. We illustrate our methods by demonstrating degenerate
high-modularity partitions and strong correlations between two complementary
measures of centrality in resting-state functional magnetic resonance imaging
(MRI) networks from the 1000 Functional Connectomes Project, an open-access
repository of resting-state functional MRI datasets. Our methods may allow more
sound and reliable characterizations and comparisons of functional brain
networks across conditions and subjects.
| no_new_dataset | 0.946695 |
1107.2859 | Jinhui Tang | Jinhui Tang, Shuicheng Yan, Tat-Seng Chua and Ramesh Jain | Label-Specific Training Set Construction from Web Resource for Image
Annotation | 4 pages, 5 figures | null | null | null | cs.MM cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently many research efforts have been devoted to image annotation by
leveraging on the associated tags/keywords of web images as training labels. A
key issue to resolve is the relatively low accuracy of the tags. In this paper,
we propose a novel semi-automatic framework to construct a more accurate and
effective training set from these web media resources for each label that we
want to learn. Experiments conducted on a real-world dataset demonstrate that
the constructed training set can result in higher accuracy for image
annotation.
| [
{
"version": "v1",
"created": "Thu, 14 Jul 2011 15:52:21 GMT"
}
] | 2011-07-15T00:00:00 | [
[
"Tang",
"Jinhui",
""
],
[
"Yan",
"Shuicheng",
""
],
[
"Chua",
"Tat-Seng",
""
],
[
"Jain",
"Ramesh",
""
]
] | TITLE: Label-Specific Training Set Construction from Web Resource for Image
Annotation
ABSTRACT: Recently many research efforts have been devoted to image annotation by
leveraging on the associated tags/keywords of web images as training labels. A
key issue to resolve is the relatively low accuracy of the tags. In this paper,
we propose a novel semi-automatic framework to construct a more accurate and
effective training set from these web media resources for each label that we
want to learn. Experiments conducted on a real-world dataset demonstrate that
the constructed training set can result in higher accuracy for image
annotation.
| no_new_dataset | 0.942348 |
1107.2553 | Toufiq Parag | Toufiq Parag and Vladimir Pavlovic and Ahmed Elgammal | Learning Hypergraph Labeling for Feature Matching | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study poses the feature correspondence problem as a hypergraph node
labeling problem. Candidate feature matches and their subsets (usually of size
larger than two) are considered to be the nodes and hyperedges of a hypergraph.
A hypergraph labeling algorithm, which models the subset-wise interaction by an
undirected graphical model, is applied to label the nodes (feature
correspondences) as correct or incorrect. We describe a method to learn the
cost function of this labeling algorithm from labeled examples using a
graphical model training algorithm. The proposed feature matching algorithm is
different from the most of the existing learning point matching methods in
terms of the form of the objective function, the cost function to be learned
and the optimization method applied to minimize it. The results on standard
datasets demonstrate how learning over a hypergraph improves the matching
performance over existing algorithms, notably one that also uses higher order
information without learning.
| [
{
"version": "v1",
"created": "Wed, 13 Jul 2011 14:01:50 GMT"
}
] | 2011-07-14T00:00:00 | [
[
"Parag",
"Toufiq",
""
],
[
"Pavlovic",
"Vladimir",
""
],
[
"Elgammal",
"Ahmed",
""
]
] | TITLE: Learning Hypergraph Labeling for Feature Matching
ABSTRACT: This study poses the feature correspondence problem as a hypergraph node
labeling problem. Candidate feature matches and their subsets (usually of size
larger than two) are considered to be the nodes and hyperedges of a hypergraph.
A hypergraph labeling algorithm, which models the subset-wise interaction by an
undirected graphical model, is applied to label the nodes (feature
correspondences) as correct or incorrect. We describe a method to learn the
cost function of this labeling algorithm from labeled examples using a
graphical model training algorithm. The proposed feature matching algorithm is
different from the most of the existing learning point matching methods in
terms of the form of the objective function, the cost function to be learned
and the optimization method applied to minimize it. The results on standard
datasets demonstrate how learning over a hypergraph improves the matching
performance over existing algorithms, notably one that also uses higher order
information without learning.
| no_new_dataset | 0.953232 |
1106.2603 | Sam Ma | Chengxi Ye, Zhanshan Sam Ma, Charles H. Cannon, Mihai Pop, Douglas W.
Yu | SparseAssembler: de novo Assembly with the Sparse de Bruijn Graph | Corresponding author: Douglas W. Yu, [email protected] | null | null | null | cs.DS q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | de Bruijn graph-based algorithms are one of the two most widely used
approaches for de novo genome assembly. A major limitation of this approach is
the large computational memory space requirement to construct the de Bruijn
graph, which scales with k-mer length and total diversity (N) of unique k-mers
in the genome expressed in base pairs or roughly (2k+8)N bits. This limitation
is particularly important with large-scale genome analysis and for sequencing
centers that simultaneously process multiple genomes. We present a sparse de
Bruijn graph structure, based on which we developed SparseAssembler that
greatly reduces memory space requirements. The structure also allows us to
introduce a novel method for the removal of substitution errors introduced
during sequencing. The sparse de Bruijn graph structure skips g intermediate
k-mers, therefore reducing the theoretical memory space requirement to
~(2k/g+8)N. We have found that a practical value of g=16 consumes approximately
10% of the memory required by standard de Bruijn graph-based algorithms but
yields comparable results. A high error rate could potentially derail the
SparseAssembler. Therefore, we developed a sparse de Bruijn graph-based
denoising algorithm that can remove more than 99% of substitution errors from
datasets with a \leq 2% error rate. Given that substitution error rates for the
current generation of sequencers is lower than 1%, our denoising procedure is
sufficiently effective to safeguard the performance of our algorithm. Finally,
we also introduce a novel Dijkstra-like breadth-first search algorithm for the
sparse de Bruijn graph structure to circumvent residual errors and resolve
polymorphisms.
| [
{
"version": "v1",
"created": "Tue, 14 Jun 2011 04:06:06 GMT"
}
] | 2011-07-11T00:00:00 | [
[
"Ye",
"Chengxi",
""
],
[
"Ma",
"Zhanshan Sam",
""
],
[
"Cannon",
"Charles H.",
""
],
[
"Pop",
"Mihai",
""
],
[
"Yu",
"Douglas W.",
""
]
] | TITLE: SparseAssembler: de novo Assembly with the Sparse de Bruijn Graph
ABSTRACT: de Bruijn graph-based algorithms are one of the two most widely used
approaches for de novo genome assembly. A major limitation of this approach is
the large computational memory space requirement to construct the de Bruijn
graph, which scales with k-mer length and total diversity (N) of unique k-mers
in the genome expressed in base pairs or roughly (2k+8)N bits. This limitation
is particularly important with large-scale genome analysis and for sequencing
centers that simultaneously process multiple genomes. We present a sparse de
Bruijn graph structure, based on which we developed SparseAssembler that
greatly reduces memory space requirements. The structure also allows us to
introduce a novel method for the removal of substitution errors introduced
during sequencing. The sparse de Bruijn graph structure skips g intermediate
k-mers, therefore reducing the theoretical memory space requirement to
~(2k/g+8)N. We have found that a practical value of g=16 consumes approximately
10% of the memory required by standard de Bruijn graph-based algorithms but
yields comparable results. A high error rate could potentially derail the
SparseAssembler. Therefore, we developed a sparse de Bruijn graph-based
denoising algorithm that can remove more than 99% of substitution errors from
datasets with a \leq 2% error rate. Given that substitution error rates for the
current generation of sequencers is lower than 1%, our denoising procedure is
sufficiently effective to safeguard the performance of our algorithm. Finally,
we also introduce a novel Dijkstra-like breadth-first search algorithm for the
sparse de Bruijn graph structure to circumvent residual errors and resolve
polymorphisms.
| no_new_dataset | 0.95222 |
1107.1104 | Samur Araujo | Samur Araujo, Jan Hidders, Daniel Schwabe and Arjen P. de Vries | SERIMI - Resource Description Similarity, RDF Instance Matching and
Interlinking | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The interlinking of datasets published in the Linked Data Cloud is a
challenging problem and a key factor for the success of the Semantic Web.
Manual rule-based methods are the most effective solution for the problem, but
they require skilled human data publishers going through a laborious, error
prone and time-consuming process for manually describing rules mapping
instances between two datasets. Thus, an automatic approach for solving this
problem is more than welcome. In this paper, we propose a novel interlinking
method, SERIMI, for solving this problem automatically. SERIMI matches
instances between a source and a target datasets, without prior knowledge of
the data, domain or schema of these datasets. Experiments conducted with
benchmark collections demonstrate that our approach considerably outperforms
state-of-the-art automatic approaches for solving the interlinking problem on
the Linked Data Cloud.
| [
{
"version": "v1",
"created": "Wed, 6 Jul 2011 11:56:34 GMT"
}
] | 2011-07-07T00:00:00 | [
[
"Araujo",
"Samur",
""
],
[
"Hidders",
"Jan",
""
],
[
"Schwabe",
"Daniel",
""
],
[
"de Vries",
"Arjen P.",
""
]
] | TITLE: SERIMI - Resource Description Similarity, RDF Instance Matching and
Interlinking
ABSTRACT: The interlinking of datasets published in the Linked Data Cloud is a
challenging problem and a key factor for the success of the Semantic Web.
Manual rule-based methods are the most effective solution for the problem, but
they require skilled human data publishers going through a laborious, error
prone and time-consuming process for manually describing rules mapping
instances between two datasets. Thus, an automatic approach for solving this
problem is more than welcome. In this paper, we propose a novel interlinking
method, SERIMI, for solving this problem automatically. SERIMI matches
instances between a source and a target datasets, without prior knowledge of
the data, domain or schema of these datasets. Experiments conducted with
benchmark collections demonstrate that our approach considerably outperforms
state-of-the-art automatic approaches for solving the interlinking problem on
the Linked Data Cloud.
| no_new_dataset | 0.949482 |
1107.1128 | Seeja K. R. | K.R Seeja | AISMOTIF-An Artificial Immune System for DNA Motif Discovery | 7 pages | IJCSI International Journal of Computer Science Issues, Vol. 8, Issue 2,
March 2011, ISSN (Online): 1694-0814, pages 143-149 | null | null | cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Discovery of transcription factor binding sites is a much explored and still
exploring area of research in functional genomics. Many computational tools
have been developed for finding motifs and each of them has their own
advantages as well as disadvantages. Most of these algorithms need prior
knowledge about the data to construct background models. However there is not a
single technique that can be considered as best for finding regulatory motifs.
This paper proposes an artificial immune system based algorithm for finding the
transcription factor binding sites or motifs and two new weighted scores for
motif evaluation. The algorithm is enumerative, but sufficient pruning of the
pattern search space has been incorporated using immune system concepts. The
performance of AISMOTIF has been evaluated by comparing it with eight state of
art composite motif discovery algorithms and found that AISMOTIF predicts known
motifs as well as new motifs from the benchmark dataset without any prior
knowledge about the data.
| [
{
"version": "v1",
"created": "Tue, 5 Jul 2011 06:01:20 GMT"
}
] | 2011-07-07T00:00:00 | [
[
"Seeja",
"K. R",
""
]
] | TITLE: AISMOTIF-An Artificial Immune System for DNA Motif Discovery
ABSTRACT: Discovery of transcription factor binding sites is a much explored and still
exploring area of research in functional genomics. Many computational tools
have been developed for finding motifs and each of them has their own
advantages as well as disadvantages. Most of these algorithms need prior
knowledge about the data to construct background models. However there is not a
single technique that can be considered as best for finding regulatory motifs.
This paper proposes an artificial immune system based algorithm for finding the
transcription factor binding sites or motifs and two new weighted scores for
motif evaluation. The algorithm is enumerative, but sufficient pruning of the
pattern search space has been incorporated using immune system concepts. The
performance of AISMOTIF has been evaluated by comparing it with eight state of
art composite motif discovery algorithms and found that AISMOTIF predicts known
motifs as well as new motifs from the benchmark dataset without any prior
knowledge about the data.
| no_new_dataset | 0.947866 |
1107.1229 | Daniel Rockmore | Sean Brocklebank, Scott Pauls, Daniel Rockmore, Timothy C. Bates | Characteristic Characteristics | 23 pages, 5 Figures, 3 Tables | null | null | null | stat.AP cs.IR physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While five-factor models of personality are widespread, there is still not
universal agreement on this as a structural framework. Part of the reason for
the lingering debate is its dependence on factor analysis. In particular,
derivation or refutation of the model via other statistical means is a
worthwhile project. In this paper we use the methodology of spectral clustering
to articulate the structure in the dataset of responses of 20,993 subjects on a
300-item version of the IPIP NEO personality questionnaire, and we compare
our results to those obtained from a factor analytic solution. We found support
for five- and six-cluster solutions. The five-cluster solution was similar to a
conventional five-factor solution, but the six-cluster and six-factor solutions
differed significantly, and only the six-cluster solution was readily
interpretable: it gave a model similar to the HEXACO model. We suggest that
spectral clustering provides a robust alternative view of personality data.
| [
{
"version": "v1",
"created": "Wed, 6 Jul 2011 19:45:14 GMT"
}
] | 2011-07-07T00:00:00 | [
[
"Brocklebank",
"Sean",
""
],
[
"Pauls",
"Scott",
""
],
[
"Rockmore",
"Daniel",
""
],
[
"Bates",
"Timothy C.",
""
]
] | TITLE: Characteristic Characteristics
ABSTRACT: While five-factor models of personality are widespread, there is still not
universal agreement on this as a structural framework. Part of the reason for
the lingering debate is its dependence on factor analysis. In particular,
derivation or refutation of the model via other statistical means is a
worthwhile project. In this paper we use the methodology of spectral clustering
to articulate the structure in the dataset of responses of 20,993 subjects on a
300-item version of the IPIP NEO personality questionnaire, and we compare
our results to those obtained from a factor analytic solution. We found support
for five- and six-cluster solutions. The five-cluster solution was similar to a
conventional five-factor solution, but the six-cluster and six-factor solutions
differed significantly, and only the six-cluster solution was readily
interpretable: it gave a model similar to the HEXACO model. We suggest that
spectral clustering provides a robust alternative view of personality data.
| no_new_dataset | 0.944638 |
1107.0922 | Danny Bickson | Yucheng Low, Joseph Gonzalez, Aapo Kyrola, Danny Bickson, Carlos
Guestrin | GraphLab: A Distributed Framework for Machine Learning in the Cloud | CMU Tech Report, GraphLab project webpage: http://graphlab.org | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine Learning (ML) techniques are indispensable in a wide range of fields.
Unfortunately, the exponential increase of dataset sizes are rapidly extending
the runtime of sequential algorithms and threatening to slow future progress in
ML. With the promise of affordable large-scale parallel computing, Cloud
systems offer a viable platform to resolve the computational challenges in ML.
However, designing and implementing efficient, provably correct distributed ML
algorithms is often prohibitively challenging. To enable ML researchers to
easily and efficiently use parallel systems, we introduced the GraphLab
abstraction which is designed to represent the computational patterns in ML
algorithms while permitting efficient parallel and distributed implementations.
In this paper we provide a formal description of the GraphLab parallel
abstraction and present an efficient distributed implementation. We conduct a
comprehensive evaluation of GraphLab on three state-of-the-art ML algorithms
using real large-scale data and a 64 node EC2 cluster of 512 processors. We
find that GraphLab achieves orders of magnitude performance gains over Hadoop
while performing comparably or superior to hand-tuned MPI implementations.
| [
{
"version": "v1",
"created": "Tue, 5 Jul 2011 16:56:53 GMT"
}
] | 2011-07-06T00:00:00 | [
[
"Low",
"Yucheng",
""
],
[
"Gonzalez",
"Joseph",
""
],
[
"Kyrola",
"Aapo",
""
],
[
"Bickson",
"Danny",
""
],
[
"Guestrin",
"Carlos",
""
]
] | TITLE: GraphLab: A Distributed Framework for Machine Learning in the Cloud
ABSTRACT: Machine Learning (ML) techniques are indispensable in a wide range of fields.
Unfortunately, the exponential increase of dataset sizes are rapidly extending
the runtime of sequential algorithms and threatening to slow future progress in
ML. With the promise of affordable large-scale parallel computing, Cloud
systems offer a viable platform to resolve the computational challenges in ML.
However, designing and implementing efficient, provably correct distributed ML
algorithms is often prohibitively challenging. To enable ML researchers to
easily and efficiently use parallel systems, we introduced the GraphLab
abstraction which is designed to represent the computational patterns in ML
algorithms while permitting efficient parallel and distributed implementations.
In this paper we provide a formal description of the GraphLab parallel
abstraction and present an efficient distributed implementation. We conduct a
comprehensive evaluation of GraphLab on three state-of-the-art ML algorithms
using real large-scale data and a 64 node EC2 cluster of 512 processors. We
find that GraphLab achieves orders of magnitude performance gains over Hadoop
while performing comparably or superior to hand-tuned MPI implementations.
| no_new_dataset | 0.946349 |
1107.0414 | Francois Meyer | Kye M. Taylor and Francois G. Meyer | A random walk on image patches | null | null | null | null | physics.data-an cs.DM stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we address the problem of understanding the success of
algorithms that organize patches according to graph-based metrics. Algorithms
that analyze patches extracted from images or time series have led to
state-of-the art techniques for classification, denoising, and the study of
nonlinear dynamics. The main contribution of this work is to provide a
theoretical explanation for the above experimental observations. Our approach
relies on a detailed analysis of the commute time metric on prototypical graph
models that epitomize the geometry observed in general patch graphs. We prove
that a parametrization of the graph based on commute times shrinks the mutual
distances between patches that correspond to rapid local changes in the signal,
while the distances between patches that correspond to slow local changes
expand. In effect, our results explain why the parametrization of the set of
patches based on the eigenfunctions of the Laplacian can concentrate patches
that correspond to rapid local changes, which would otherwise be shattered in
the space of patches. While our results are based on a large sample analysis,
numerical experimentations on synthetic and real data indicate that the results
hold for datasets that are very small in practice.
| [
{
"version": "v1",
"created": "Sat, 2 Jul 2011 20:37:07 GMT"
}
] | 2011-07-05T00:00:00 | [
[
"Taylor",
"Kye M.",
""
],
[
"Meyer",
"Francois G.",
""
]
] | TITLE: A random walk on image patches
ABSTRACT: In this paper we address the problem of understanding the success of
algorithms that organize patches according to graph-based metrics. Algorithms
that analyze patches extracted from images or time series have led to
state-of-the art techniques for classification, denoising, and the study of
nonlinear dynamics. The main contribution of this work is to provide a
theoretical explanation for the above experimental observations. Our approach
relies on a detailed analysis of the commute time metric on prototypical graph
models that epitomize the geometry observed in general patch graphs. We prove
that a parametrization of the graph based on commute times shrinks the mutual
distances between patches that correspond to rapid local changes in the signal,
while the distances between patches that correspond to slow local changes
expand. In effect, our results explain why the parametrization of the set of
patches based on the eigenfunctions of the Laplacian can concentrate patches
that correspond to rapid local changes, which would otherwise be shattered in
the space of patches. While our results are based on a large sample analysis,
numerical experimentations on synthetic and real data indicate that the results
hold for datasets that are very small in practice.
| no_new_dataset | 0.947721 |
1102.5499 | Linyuan Lu | Linyuan Lu, Weiping Liu | Information filtering via preferential diffusion | 12 pages, 10 figures, 2 tables | Physical Review E 83, 066119 (2011) | 10.1103/PhysRevE.83.066119 | null | physics.data-an cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recommender systems have shown great potential to address information
overload problem, namely to help users in finding interesting and relevant
objects within a huge information space. Some physical dynamics, including heat
conduction process and mass or energy diffusion on networks, have recently
found applications in personalized recommendation. Most of the previous studies
focus overwhelmingly on recommendation accuracy as the only important factor,
while overlook the significance of diversity and novelty which indeed provide
the vitality of the system. In this paper, we propose a recommendation
algorithm based on the preferential diffusion process on user-object bipartite
network. Numerical analyses on two benchmark datasets, MovieLens and Netflix,
indicate that our method outperforms the state-of-the-art methods.
Specifically, it can not only provide more accurate recommendations, but also
generate more diverse and novel recommendations by accurately recommending
unpopular objects.
| [
{
"version": "v1",
"created": "Sun, 27 Feb 2011 13:12:53 GMT"
}
] | 2011-07-04T00:00:00 | [
[
"Lu",
"Linyuan",
""
],
[
"Liu",
"Weiping",
""
]
] | TITLE: Information filtering via preferential diffusion
ABSTRACT: Recommender systems have shown great potential to address information
overload problem, namely to help users in finding interesting and relevant
objects within a huge information space. Some physical dynamics, including heat
conduction process and mass or energy diffusion on networks, have recently
found applications in personalized recommendation. Most of the previous studies
focus overwhelmingly on recommendation accuracy as the only important factor,
while overlook the significance of diversity and novelty which indeed provide
the vitality of the system. In this paper, we propose a recommendation
algorithm based on the preferential diffusion process on user-object bipartite
network. Numerical analyses on two benchmark datasets, MovieLens and Netflix,
indicate that our method outperforms the state-of-the-art methods.
Specifically, it can not only provide more accurate recommendations, but also
generate more diverse and novel recommendations by accurately recommending
unpopular objects.
| no_new_dataset | 0.949153 |
1106.5917 | Jitesh Dundas | Jitesh Dundas and David Chik | Implementing Human-like Intuition Mechanism in Artificial Intelligence | 14 pages with 1 figure + 1 table | null | null | null | cs.AI cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human intuition has been simulated by several research projects using
artificial intelligence techniques. Most of these algorithms or models lack the
ability to handle complications or diversions. Moreover, they also do not
explain the factors influencing intuition and the accuracy of the results from
this process. In this paper, we present a simple series based model for
implementation of human-like intuition using the principles of connectivity and
unknown entities. By using Poker hand datasets and Car evaluation datasets, we
compare the performance of some well-known models with our intuition model. The
aim of the experiment was to predict the maximum accurate answers using
intuition based models. We found that the presence of unknown entities,
diversion from the current problem scenario, and identifying weakness without
the normal logic based execution, greatly affects the reliability of the
answers. Generally, the intuition based models cannot be a substitute for the
logic based mechanisms in handling such problems. The intuition can only act as
a support for an ongoing logic based model that processes all the steps in a
sequential manner. However, when time and computational cost are very strict
constraints, this intuition based model becomes extremely important and useful,
because it can give a reasonably good performance. Factors affecting intuition
are analyzed and interpreted through our model.
| [
{
"version": "v1",
"created": "Wed, 29 Jun 2011 12:03:33 GMT"
}
] | 2011-06-30T00:00:00 | [
[
"Dundas",
"Jitesh",
""
],
[
"Chik",
"David",
""
]
] | TITLE: Implementing Human-like Intuition Mechanism in Artificial Intelligence
ABSTRACT: Human intuition has been simulated by several research projects using
artificial intelligence techniques. Most of these algorithms or models lack the
ability to handle complications or diversions. Moreover, they also do not
explain the factors influencing intuition and the accuracy of the results from
this process. In this paper, we present a simple series based model for
implementation of human-like intuition using the principles of connectivity and
unknown entities. By using Poker hand datasets and Car evaluation datasets, we
compare the performance of some well-known models with our intuition model. The
aim of the experiment was to predict the maximum accurate answers using
intuition based models. We found that the presence of unknown entities,
diversion from the current problem scenario, and identifying weakness without
the normal logic based execution, greatly affects the reliability of the
answers. Generally, the intuition based models cannot be a substitute for the
logic based mechanisms in handling such problems. The intuition can only act as
a support for an ongoing logic based model that processes all the steps in a
sequential manner. However, when time and computational cost are very strict
constraints, this intuition based model becomes extremely important and useful,
because it can give a reasonably good performance. Factors affecting intuition
are analyzed and interpreted through our model.
| no_new_dataset | 0.946448 |
1106.6024 | Indraneel Mukherjee | Indraneel Mukherjee and Cynthia Rudin and Robert E. Schapire | The Rate of Convergence of AdaBoost | A preliminary version will appear in COLT 2011 | null | null | null | math.OC cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The AdaBoost algorithm was designed to combine many "weak" hypotheses that
perform slightly better than random guessing into a "strong" hypothesis that
has very low error. We study the rate at which AdaBoost iteratively converges
to the minimum of the "exponential loss." Unlike previous work, our proofs do
not require a weak-learning assumption, nor do they require that minimizers of
the exponential loss are finite. Our first result shows that at iteration $t$,
the exponential loss of AdaBoost's computed parameter vector will be at most
$\epsilon$ more than that of any parameter vector of $\ell_1$-norm bounded by
$B$ in a number of rounds that is at most a polynomial in $B$ and $1/\epsilon$.
We also provide lower bounds showing that a polynomial dependence on these
parameters is necessary. Our second result is that within $C/\epsilon$
iterations, AdaBoost achieves a value of the exponential loss that is at most
$\epsilon$ more than the best possible value, where $C$ depends on the dataset.
We show that this dependence of the rate on $\epsilon$ is optimal up to
constant factors, i.e., at least $\Omega(1/\epsilon)$ rounds are necessary to
achieve within $\epsilon$ of the optimal exponential loss.
| [
{
"version": "v1",
"created": "Wed, 29 Jun 2011 18:53:46 GMT"
}
] | 2011-06-30T00:00:00 | [
[
"Mukherjee",
"Indraneel",
""
],
[
"Rudin",
"Cynthia",
""
],
[
"Schapire",
"Robert E.",
""
]
] | TITLE: The Rate of Convergence of AdaBoost
ABSTRACT: The AdaBoost algorithm was designed to combine many "weak" hypotheses that
perform slightly better than random guessing into a "strong" hypothesis that
has very low error. We study the rate at which AdaBoost iteratively converges
to the minimum of the "exponential loss." Unlike previous work, our proofs do
not require a weak-learning assumption, nor do they require that minimizers of
the exponential loss are finite. Our first result shows that at iteration $t$,
the exponential loss of AdaBoost's computed parameter vector will be at most
$\epsilon$ more than that of any parameter vector of $\ell_1$-norm bounded by
$B$ in a number of rounds that is at most a polynomial in $B$ and $1/\epsilon$.
We also provide lower bounds showing that a polynomial dependence on these
parameters is necessary. Our second result is that within $C/\epsilon$
iterations, AdaBoost achieves a value of the exponential loss that is at most
$\epsilon$ more than the best possible value, where $C$ depends on the dataset.
We show that this dependence of the rate on $\epsilon$ is optimal up to
constant factors, i.e., at least $\Omega(1/\epsilon)$ rounds are necessary to
achieve within $\epsilon$ of the optimal exponential loss.
| no_new_dataset | 0.941868 |
1106.5186 | Ula\c{s} Ba\u{g}ci | Ulas Bagci, Jianhua Yao, Jesus Caban, Anthony F. Suffredini, Tara N.
Palmore, Daniel J. Mollura | Learning Shape and Texture Characteristics of CT Tree-in-Bud Opacities
for CAD Systems | 7 pages, 4 figures. Published in Proc. of Medical Image Computing and
Computer Assisted Interventions (MICCAI), 2011 | null | null | NIH-CIDI-MICCAI2011 | cs.CV | http://creativecommons.org/licenses/publicdomain/ | Although radiologists can employ CAD systems to characterize malignancies,
pulmonary fibrosis and other chronic diseases; the design of imaging techniques
to quantify infectious diseases continue to lag behind. There exists a need to
create more CAD systems capable of detecting and quantifying characteristic
patterns often seen in respiratory tract infections such as influenza,
bacterial pneumonia, or tuberculosis. One of such patterns is Tree-in-bud (TIB)
which presents \textit{thickened} bronchial structures surrounded by clusters
of \textit{micro-nodules}. Automatic detection of TIB patterns is a challenging
task because of their weak boundary, noisy appearance, and small lesion size.
In this paper, we present two novel methods for automatically detecting TIB
patterns: (1) a fast localization of candidate patterns using information from
local scale of the images, and (2) a M\"{o}bius invariant feature extraction
method based on learned local shape and texture properties. A comparative
evaluation of the proposed methods is presented with a dataset of 39 laboratory
confirmed viral bronchiolitis human parainfluenza (HPIV) CTs and 21 normal lung
CTs. Experimental results demonstrate that the proposed CAD system can achieve
high detection rate with an overall accuracy of 90.96%.
| [
{
"version": "v1",
"created": "Sun, 26 Jun 2011 03:35:08 GMT"
}
] | 2011-06-28T00:00:00 | [
[
"Bagci",
"Ulas",
""
],
[
"Yao",
"Jianhua",
""
],
[
"Caban",
"Jesus",
""
],
[
"Suffredini",
"Anthony F.",
""
],
[
"Palmore",
"Tara N.",
""
],
[
"Mollura",
"Daniel J.",
""
]
] | TITLE: Learning Shape and Texture Characteristics of CT Tree-in-Bud Opacities
for CAD Systems
ABSTRACT: Although radiologists can employ CAD systems to characterize malignancies,
pulmonary fibrosis and other chronic diseases; the design of imaging techniques
to quantify infectious diseases continue to lag behind. There exists a need to
create more CAD systems capable of detecting and quantifying characteristic
patterns often seen in respiratory tract infections such as influenza,
bacterial pneumonia, or tuberculosis. One of such patterns is Tree-in-bud (TIB)
which presents \textit{thickened} bronchial structures surrounded by clusters
of \textit{micro-nodules}. Automatic detection of TIB patterns is a challenging
task because of their weak boundary, noisy appearance, and small lesion size.
In this paper, we present two novel methods for automatically detecting TIB
patterns: (1) a fast localization of candidate patterns using information from
local scale of the images, and (2) a M\"{o}bius invariant feature extraction
method based on learned local shape and texture properties. A comparative
evaluation of the proposed methods is presented with a dataset of 39 laboratory
confirmed viral bronchiolitis human parainfluenza (HPIV) CTs and 21 normal lung
CTs. Experimental results demonstrate that the proposed CAD system can achieve
high detection rate with an overall accuracy of 90.96%.
| new_dataset | 0.968321 |
1106.4880 | Ying Ding | Qian Zhu, Yuyin Sun, Sashikiran Challa, Ying Ding, Michael S.
Lajiness, David J. Wild | Semantic Inference using Chemogenomics Data for Drug Discovery | 23 pages, 9 figures, 4 tables | null | 10.1186/1471-2105-12-256 | null | q-bio.QM cs.DL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background Semantic Web Technology (SWT) makes it possible to integrate and
search the large volume of life science datasets in the public domain, as
demonstrated by well-known linked data projects such as LODD, Bio2RDF, and
Chem2Bio2RDF. Integration of these sets creates large networks of information.
We have previously described a tool called WENDI for aggregating information
pertaining to new chemical compounds, effectively creating evidence paths
relating the compounds to genes, diseases and so on. In this paper we examine
the utility of automatically inferring new compound-disease associations (and
thus new links in the network) based on semantically marked-up versions of
these evidence paths, rule-sets and inference engines.
Results Through the implementation of a semantic inference algorithm, rule
set, Semantic Web methods (RDF, OWL and SPARQL) and new interfaces, we have
created a new tool called Chemogenomic Explorer that uses networks of
ontologically annotated RDF statements along with deductive reasoning tools to
infer new associations between the query structure and genes and diseases from
WENDI results. The tool then permits interactive clustering and filtering of
these evidence paths.
Conclusions We present a new aggregate approach to inferring links between
chemical compounds and diseases using semantic inference. This approach allows
multiple evidence paths between compounds and diseases to be identified using a
rule-set and semantically annotated data, and for these evidence paths to be
clustered to show overall evidence linking the compound to a disease. We
believe this is a powerful approach, because it allows compound-disease
relationships to be ranked by the amount of evidence supporting them.
| [
{
"version": "v1",
"created": "Fri, 24 Jun 2011 03:21:56 GMT"
}
] | 2011-06-27T00:00:00 | [
[
"Zhu",
"Qian",
""
],
[
"Sun",
"Yuyin",
""
],
[
"Challa",
"Sashikiran",
""
],
[
"Ding",
"Ying",
""
],
[
"Lajiness",
"Michael S.",
""
],
[
"Wild",
"David J.",
""
]
] | TITLE: Semantic Inference using Chemogenomics Data for Drug Discovery
ABSTRACT: Background Semantic Web Technology (SWT) makes it possible to integrate and
search the large volume of life science datasets in the public domain, as
demonstrated by well-known linked data projects such as LODD, Bio2RDF, and
Chem2Bio2RDF. Integration of these sets creates large networks of information.
We have previously described a tool called WENDI for aggregating information
pertaining to new chemical compounds, effectively creating evidence paths
relating the compounds to genes, diseases and so on. In this paper we examine
the utility of automatically inferring new compound-disease associations (and
thus new links in the network) based on semantically marked-up versions of
these evidence paths, rule-sets and inference engines.
Results Through the implementation of a semantic inference algorithm, rule
set, Semantic Web methods (RDF, OWL and SPARQL) and new interfaces, we have
created a new tool called Chemogenomic Explorer that uses networks of
ontologically annotated RDF statements along with deductive reasoning tools to
infer new associations between the query structure and genes and diseases from
WENDI results. The tool then permits interactive clustering and filtering of
these evidence paths.
Conclusions We present a new aggregate approach to inferring links between
chemical compounds and diseases using semantic inference. This approach allows
multiple evidence paths between compounds and diseases to be identified using a
rule-set and semantically annotated data, and for these evidence paths to be
clustered to show overall evidence linking the compound to a disease. We
believe this is a powerful approach, because it allows compound-disease
relationships to be ranked by the amount of evidence supporting them.
| no_new_dataset | 0.948394 |
1106.3791 | Shanika Kuruppu Ms | Shanika Kuruppu, Simon Puglisi and Justin Zobel | Reference Sequence Construction for Relative Compression of Genomes | 12 pages, 2 figures, to appear in the Proceedings of SPIRE2011 as a
short paper | null | null | null | q-bio.QM cs.CE cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Relative compression, where a set of similar strings are compressed with
respect to a reference string, is a very effective method of compressing DNA
datasets containing multiple similar sequences. Relative compression is fast to
perform and also supports rapid random access to the underlying data. The main
difficulty of relative compression is in selecting an appropriate reference
sequence. In this paper, we explore using the dictionary of repeats generated
by Comrad, Re-pair and Dna-x algorithms as reference sequences for relative
compression. We show this technique allows better compression and supports
random access just as well. The technique also allows more general repetitive
datasets to be compressed using relative compression.
| [
{
"version": "v1",
"created": "Mon, 20 Jun 2011 01:10:01 GMT"
}
] | 2011-06-21T00:00:00 | [
[
"Kuruppu",
"Shanika",
""
],
[
"Puglisi",
"Simon",
""
],
[
"Zobel",
"Justin",
""
]
] | TITLE: Reference Sequence Construction for Relative Compression of Genomes
ABSTRACT: Relative compression, where a set of similar strings are compressed with
respect to a reference string, is a very effective method of compressing DNA
datasets containing multiple similar sequences. Relative compression is fast to
perform and also supports rapid random access to the underlying data. The main
difficulty of relative compression is in selecting an appropriate reference
sequence. In this paper, we explore using the dictionary of repeats generated
by Comrad, Re-pair and Dna-x algorithms as reference sequences for relative
compression. We show this technique allows better compression and supports
random access just as well. The technique also allows more general repetitive
datasets to be compressed using relative compression.
| no_new_dataset | 0.946001 |
1106.3395 | Remi Flamary | R\'emi Flamary (LITIS), Alain Rakotomamonjy (LITIS) | Decoding finger movements from ECoG signals using switching linear
models | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the major challenges of ECoG-based Brain-Machine Interfaces is the
movement prediction of a human subject. Several methods exist to predict an arm
2-D trajectory. The fourth BCI Competition gives a dataset in which the aim is
to predict individual finger movements (5-D trajectory). The difficulty lies in
the fact that there is no simple relation between ECoG signals and finger
movement. We propose in this paper to decode finger flexions using switching
models. This method permits to simplify the system as it is now described as an
ensemble of linear models depending on an internal state. We show that an
interesting accuracy prediction can be obtained by such a model.
| [
{
"version": "v1",
"created": "Fri, 17 Jun 2011 06:53:47 GMT"
}
] | 2011-06-20T00:00:00 | [
[
"Flamary",
"Rémi",
"",
"LITIS"
],
[
"Rakotomamonjy",
"Alain",
"",
"LITIS"
]
] | TITLE: Decoding finger movements from ECoG signals using switching linear
models
ABSTRACT: One of the major challenges of ECoG-based Brain-Machine Interfaces is the
movement prediction of a human subject. Several methods exist to predict an arm
2-D trajectory. The fourth BCI Competition gives a dataset in which the aim is
to predict individual finger movements (5-D trajectory). The difficulty lies in
the fact that there is no simple relation between ECoG signals and finger
movement. We propose in this paper to decode finger flexions using switching
models. This method permits to simplify the system as it is now described as an
ensemble of linear models depending on an internal state. We show that an
interesting accuracy prediction can be obtained by such a model.
| no_new_dataset | 0.919787 |
1106.3396 | Remi Flamary | R\'emi Flamary (LITIS), Benjamin Labb\'e (LITIS), Alain Rakotomamonjy
(LITIS) | Large margin filtering for signal sequence labeling | IEEE International Conference on Acoustics Speech and Signal
Processing (ICASSP), 2010, Dallas : United States (2010) | null | 10.1109/ICASSP.2010.5495281 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Signal Sequence Labeling consists in predicting a sequence of labels given an
observed sequence of samples. A naive way is to filter the signal in order to
reduce the noise and to apply a classification algorithm on the filtered
samples. We propose in this paper to jointly learn the filter with the
classifier leading to a large margin filtering for classification. This method
allows to learn the optimal cutoff frequency and phase of the filter that may
be different from zero. Two methods are proposed and tested on a toy dataset
and on a real life BCI dataset from BCI Competition III.
| [
{
"version": "v1",
"created": "Fri, 17 Jun 2011 06:54:35 GMT"
}
] | 2011-06-20T00:00:00 | [
[
"Flamary",
"Rémi",
"",
"LITIS"
],
[
"Labbé",
"Benjamin",
"",
"LITIS"
],
[
"Rakotomamonjy",
"Alain",
"",
"LITIS"
]
] | TITLE: Large margin filtering for signal sequence labeling
ABSTRACT: Signal Sequence Labeling consists in predicting a sequence of labels given an
observed sequence of samples. A naive way is to filter the signal in order to
reduce the noise and to apply a classification algorithm on the filtered
samples. We propose in this paper to jointly learn the filter with the
classifier leading to a large margin filtering for classification. This method
allows to learn the optimal cutoff frequency and phase of the filter that may
be different from zero. Two methods are proposed and tested on a toy dataset
and on a real life BCI dataset from BCI Competition III.
| no_new_dataset | 0.953013 |
1106.3467 | Debotosh Bhattacharjee | Arindam Kar, Debotosh Bhattacharjee, Dipak Kumar Basu, Mita Nasipuri,
Mahantapas Kundu | High Performance Human Face Recognition using Independent High Intensity
Gabor Wavelet Responses: A Statistical Approach | Keywords: Feature extraction; Gabor Wavelets; independent
high-intensity feature (IHIF); Independent Component Analysis (ICA);
Specificity; Sensitivity; Cosine Similarity Measure; E-ISSN: 2044-6004 | International Journal of Computer Science & Emerging Technologies
pp 178-187, Volume 2, Issue 1, February 2011 | null | null | cs.CV | http://creativecommons.org/licenses/by/3.0/ | In this paper, we present a technique by which high-intensity feature vectors
extracted from the Gabor wavelet transformation of frontal face images, is
combined together with Independent Component Analysis (ICA) for enhanced face
recognition. Firstly, the high-intensity feature vectors are automatically
extracted using the local characteristics of each individual face from the
Gabor transformed images. Then ICA is applied on these locally extracted
high-intensity feature vectors of the facial images to obtain the independent
high intensity feature (IHIF) vectors. These IHIF forms the basis of the work.
Finally, the image classification is done using these IHIF vectors, which are
considered as representatives of the images. The importance behind implementing
ICA along with the high-intensity features of Gabor wavelet transformation is
twofold. On the one hand, selecting peaks of the Gabor transformed face images
exhibit strong characteristics of spatial locality, scale, and orientation
selectivity. Thus these images produce salient local features that are most
suitable for face recognition. On the other hand, as the ICA employs locally
salient features from the high informative facial parts, it reduces redundancy
and represents independent features explicitly. These independent features are
most useful for subsequent facial discrimination and associative recall. The
efficiency of IHIF method is demonstrated by the experiment on frontal facial
images dataset, selected from the FERET, FRAV2D, and the ORL database.
| [
{
"version": "v1",
"created": "Fri, 17 Jun 2011 12:42:26 GMT"
}
] | 2011-06-20T00:00:00 | [
[
"Kar",
"Arindam",
""
],
[
"Bhattacharjee",
"Debotosh",
""
],
[
"Basu",
"Dipak Kumar",
""
],
[
"Nasipuri",
"Mita",
""
],
[
"Kundu",
"Mahantapas",
""
]
] | TITLE: High Performance Human Face Recognition using Independent High Intensity
Gabor Wavelet Responses: A Statistical Approach
ABSTRACT: In this paper, we present a technique by which high-intensity feature vectors
extracted from the Gabor wavelet transformation of frontal face images, is
combined together with Independent Component Analysis (ICA) for enhanced face
recognition. Firstly, the high-intensity feature vectors are automatically
extracted using the local characteristics of each individual face from the
Gabor transformed images. Then ICA is applied on these locally extracted
high-intensity feature vectors of the facial images to obtain the independent
high intensity feature (IHIF) vectors. These IHIF forms the basis of the work.
Finally, the image classification is done using these IHIF vectors, which are
considered as representatives of the images. The importance behind implementing
ICA along with the high-intensity features of Gabor wavelet transformation is
twofold. On the one hand, selecting peaks of the Gabor transformed face images
exhibit strong characteristics of spatial locality, scale, and orientation
selectivity. Thus these images produce salient local features that are most
suitable for face recognition. On the other hand, as the ICA employs locally
salient features from the high informative facial parts, it reduces redundancy
and represents independent features explicitly. These independent features are
most useful for subsequent facial discrimination and associative recall. The
efficiency of IHIF method is demonstrated by the experiment on frontal facial
images dataset, selected from the FERET, FRAV2D, and the ORL database.
| no_new_dataset | 0.95253 |
1106.3166 | Jan Buytaert | J.A.N. Buytaert, W.H.M. Salih, M. Dierick, P. Jacobs and J.J.J. Dirckx | Realistic 3D computer model of the gerbil middle ear, featuring accurate
morphology of bone and soft tissue structures | 41 pages, 14 figures, to be published in JARO - Journal of the
Association for Research in Otolaryngology | null | null | null | q-bio.TO physics.bio-ph | http://creativecommons.org/licenses/by-nc-sa/3.0/ | In order to improve realism in middle ear (ME) finite element modeling (FEM),
comprehensive and precise morphological data are needed. To date, micro-scale
X-ray computed tomography (\mu CT) recordings have been used as geometric input
data for FEM models of the ME ossicles. Previously, attempts were made to
obtain this data on ME soft tissue structures as well. However, due to low
X-ray absorption of soft tissue, quality of these images is limited. Another
popular approach is using histological sections as data for 3D models,
delivering high in-plane resolution for the sections, but the technique is
destructive in nature and registration of the sections is difficult. We combine
data from high-resolution \mu CT recordings with data from high-resolution
orthogonal-plane fluorescence optical-sectioning microscopy (OPFOS), both
obtained on the same gerbil specimen. State-of-the-art \mu CT delivers
high-resolution data on the three-dimensional shape of ossicles and other ME
bony structures, while the OPFOS setup generates data of unprecedented quality
both on bone and soft tissue ME structures. Each of these techniques is
tomographic and non-destructive, and delivers sets of automatically aligned
virtual sections. The datasets coming from different techniques need to be
registered with respect to each other. By combining both datasets, we obtain a
complete high-resolution morphological model of all functional components in
the gerbil ME. The resulting three-dimensional model can be readily imported in
FEM software and is made freely available to the research community. In this
paper, we discuss the methods used, present the resulting merged model and
discuss morphological properties of the soft tissue structures, such as muscles
and ligaments.
| [
{
"version": "v1",
"created": "Thu, 16 Jun 2011 08:26:53 GMT"
}
] | 2011-06-17T00:00:00 | [
[
"Buytaert",
"J. A. N.",
""
],
[
"Salih",
"W. H. M.",
""
],
[
"Dierick",
"M.",
""
],
[
"Jacobs",
"P.",
""
],
[
"Dirckx",
"J. J. J.",
""
]
] | TITLE: Realistic 3D computer model of the gerbil middle ear, featuring accurate
morphology of bone and soft tissue structures
ABSTRACT: In order to improve realism in middle ear (ME) finite element modeling (FEM),
comprehensive and precise morphological data are needed. To date, micro-scale
X-ray computed tomography (\mu CT) recordings have been used as geometric input
data for FEM models of the ME ossicles. Previously, attempts were made to
obtain this data on ME soft tissue structures as well. However, due to low
X-ray absorption of soft tissue, the quality of these images is limited. Another
popular approach is using histological sections as data for 3D models,
delivering high in-plane resolution for the sections, but the technique is
destructive in nature and registration of the sections is difficult. We combine
data from high-resolution \mu CT recordings with data from high-resolution
orthogonal-plane fluorescence optical-sectioning microscopy (OPFOS), both
obtained on the same gerbil specimen. State-of-the-art \mu CT delivers
high-resolution data on the three-dimensional shape of ossicles and other ME
bony structures, while the OPFOS setup generates data of unprecedented quality
both on bone and soft tissue ME structures. Each of these techniques is
tomographic and non-destructive, and delivers sets of automatically aligned
virtual sections. The datasets coming from different techniques need to be
registered with respect to each other. By combining both datasets, we obtain a
complete high-resolution morphological model of all functional components in
the gerbil ME. The resulting three-dimensional model can be readily imported in
FEM software and is made freely available to the research community. In this
paper, we discuss the methods used, present the resulting merged model and
discuss morphological properties of the soft tissue structures, such as muscles
and ligaments.
| no_new_dataset | 0.953101 |
1106.2312 | Rathipriya R | R.Rathipriya, Dr. K.Thangavel and J.Bagyamani | Evolutionary Biclustering of Clickstream Data | null | null | null | null | cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biclustering is a two way clustering approach involving simultaneous
clustering along two dimensions of the data matrix. Finding biclusters of web
objects (i.e. web users and web pages) is an emerging topic in the context of
web usage mining. It overcomes the problem associated with traditional
clustering methods by allowing automatic discovery of browsing pattern based on
a subset of attributes. A coherent bicluster of clickstream data is a local
browsing pattern such that users in bicluster exhibit correlated browsing
pattern through a subset of pages of a web site. This paper proposed a new
application of biclustering to web data using a combination of heuristics and
meta-heuristics such as K-means, Greedy Search Procedure and Genetic Algorithms
to identify the coherent browsing pattern. Experiment is conducted on the
benchmark clickstream msnbc dataset from UCI repository. Results demonstrate
the efficiency and beneficial outcome of the proposed method by correlating the
users and pages of a web site in high degree.This approach shows excellent
performance at finding high degree of overlapped coherent biclusters from web
data.
| [
{
"version": "v1",
"created": "Sun, 12 Jun 2011 14:34:16 GMT"
}
] | 2011-06-14T00:00:00 | [
[
"Rathipriya",
"R.",
""
],
[
"Thangavel",
"Dr. K.",
""
],
[
"Bagyamani",
"J.",
""
]
] | TITLE: Evolutionary Biclustering of Clickstream Data
ABSTRACT: Biclustering is a two-way clustering approach involving simultaneous
clustering along two dimensions of the data matrix. Finding biclusters of web
objects (i.e. web users and web pages) is an emerging topic in the context of
web usage mining. It overcomes the problem associated with traditional
clustering methods by allowing automatic discovery of browsing patterns based
on a subset of attributes. A coherent bicluster of clickstream data is a local
browsing pattern such that the users in the bicluster exhibit correlated
browsing patterns through a subset of pages of a web site. This paper proposes
a new application of biclustering to web data using a combination of heuristics
and meta-heuristics such as K-means, Greedy Search Procedure and Genetic
Algorithms to identify coherent browsing patterns. Experiments are conducted on
the benchmark clickstream msnbc dataset from the UCI repository. Results
demonstrate the efficiency and beneficial outcome of the proposed method by
correlating the users and pages of a web site to a high degree. The approach
shows excellent performance at finding highly overlapping coherent biclusters
from web data.
| no_new_dataset | 0.948822 |
1102.3937 | Victor Lee | Ruoming Jin, Victor E. Lee, Hui Hong | Axiomatic Ranking of Network Role Similarity | 17 pages, two-column. Version 2 of this technical report fixes minor
errors in the Triangle Inequality proof, grammatical errors, and other typos.
Edited and more polished version to be published in KDD'11, August 2011 | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A key task in social network and other complex network analysis is role
analysis: describing and categorizing nodes according to how they interact with
other nodes. Two nodes have the same role if they interact with equivalent sets
of neighbors. The most fundamental role equivalence is automorphic equivalence.
Unfortunately, the fastest algorithms known for graph automorphism are
nonpolynomial. Moreover, since exact equivalence may be rare, a more meaningful
task is to measure the role similarity between any two nodes. This task is
closely related to the structural or link-based similarity problem that SimRank
attempts to solve. However, SimRank and most of its offshoots are not
sufficient because they do not fully recognize automorphically or structurally
equivalent nodes. In this paper we tackle two problems. First, what are the
necessary properties for a role similarity measure or metric? Second, how can
we derive a role similarity measure satisfying these properties? For the first
problem, we justify several axiomatic properties necessary for a role
similarity measure or metric: range, maximal similarity, automorphic
equivalence, transitive similarity, and the triangle inequality. For the second
problem, we present RoleSim, a new similarity metric with a simple iterative
computational method. We rigorously prove that RoleSim satisfies all the
axiomatic properties. We also introduce an iceberg RoleSim algorithm which can
guarantee to discover all pairs with RoleSim score no less than a user-defined
threshold $\theta$ without computing the RoleSim for every pair. We demonstrate
the superior interpretative power of RoleSim on both synthetic and real
datasets.
| [
{
"version": "v1",
"created": "Fri, 18 Feb 2011 23:36:05 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Jun 2011 03:06:15 GMT"
}
] | 2011-06-13T00:00:00 | [
[
"Jin",
"Ruoming",
""
],
[
"Lee",
"Victor E.",
""
],
[
"Hong",
"Hui",
""
]
] | TITLE: Axiomatic Ranking of Network Role Similarity
ABSTRACT: A key task in social network and other complex network analysis is role
analysis: describing and categorizing nodes according to how they interact with
other nodes. Two nodes have the same role if they interact with equivalent sets
of neighbors. The most fundamental role equivalence is automorphic equivalence.
Unfortunately, the fastest algorithms known for graph automorphism are
nonpolynomial. Moreover, since exact equivalence may be rare, a more meaningful
task is to measure the role similarity between any two nodes. This task is
closely related to the structural or link-based similarity problem that SimRank
attempts to solve. However, SimRank and most of its offshoots are not
sufficient because they do not fully recognize automorphically or structurally
equivalent nodes. In this paper we tackle two problems. First, what are the
necessary properties for a role similarity measure or metric? Second, how can
we derive a role similarity measure satisfying these properties? For the first
problem, we justify several axiomatic properties necessary for a role
similarity measure or metric: range, maximal similarity, automorphic
equivalence, transitive similarity, and the triangle inequality. For the second
problem, we present RoleSim, a new similarity metric with a simple iterative
computational method. We rigorously prove that RoleSim satisfies all the
axiomatic properties. We also introduce an iceberg RoleSim algorithm which can
guarantee to discover all pairs with RoleSim score no less than a user-defined
threshold $\theta$ without computing the RoleSim for every pair. We demonstrate
the superior interpretative power of RoleSim on both synthetic and real
datasets.
| no_new_dataset | 0.946051 |
1106.1811 | Arnab Bhattacharya | Arnab Bhattacharya and B. Palvali Teja and Sourav Dutta | Caching Stars in the Sky: A Semantic Caching Approach to Accelerate
Skyline Queries | 11 pages; will be published in DEXA 2011 | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-criteria decision making has been made possible with the advent of
skyline queries. However, processing such queries for high-dimensional datasets
remains a time-consuming task. Real-time applications are thus infeasible,
especially for non-indexed skyline techniques where the datasets arrive online.
In this paper, we propose a caching mechanism that uses the semantics of
previous skyline queries to improve the processing time of a new query. In
addition to exact queries, utilizing such special semantics allows accelerating
related queries. We achieve this by generating partial result sets guaranteed
to be in the skyline sets. We also propose an index structure for efficient
organization of the cached queries. Experiments on synthetic and real datasets
show the effectiveness and scalability of our proposed methods.
| [
{
"version": "v1",
"created": "Thu, 9 Jun 2011 13:47:34 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Jun 2011 07:32:04 GMT"
}
] | 2011-06-13T00:00:00 | [
[
"Bhattacharya",
"Arnab",
""
],
[
"Teja",
"B. Palvali",
""
],
[
"Dutta",
"Sourav",
""
]
] | TITLE: Caching Stars in the Sky: A Semantic Caching Approach to Accelerate
Skyline Queries
ABSTRACT: Multi-criteria decision making has been made possible with the advent of
skyline queries. However, processing such queries for high-dimensional datasets
remains a time-consuming task. Real-time applications are thus infeasible,
especially for non-indexed skyline techniques where the datasets arrive online.
In this paper, we propose a caching mechanism that uses the semantics of
previous skyline queries to improve the processing time of a new query. In
addition to exact queries, utilizing such special semantics allows accelerating
related queries. We achieve this by generating partial result sets guaranteed
to be in the skyline sets. We also propose an index structure for efficient
organization of the cached queries. Experiments on synthetic and real datasets
show the effectiveness and scalability of our proposed methods.
| no_new_dataset | 0.936692 |
1106.1684 | Mehmet Umut Sen Mr. | Mehmet Umut Sen and Hakan Erdogan | Max-Margin Stacking and Sparse Regularization for Linear Classifier
Combination and Selection | 8 pages, 3 figures, 6 tables, journal | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The main principle of stacked generalization (or Stacking) is using a
second-level generalizer to combine the outputs of base classifiers in an
ensemble. In this paper, we investigate different combination types under the
stacking framework; namely weighted sum (WS), class-dependent weighted sum
(CWS) and linear stacked generalization (LSG). For learning the weights, we
propose using regularized empirical risk minimization with the hinge loss. In
addition, we propose using group sparsity for regularization to facilitate
classifier selection. We performed experiments using two different ensemble
setups with differing diversities on 8 real-world datasets. Results show the
power of regularized learning with the hinge loss function. Using sparse
regularization, we are able to reduce the number of selected classifiers of the
diverse ensemble without sacrificing accuracy. With the non-diverse ensembles,
we even gain accuracy on average by using sparse regularization.
| [
{
"version": "v1",
"created": "Wed, 8 Jun 2011 23:03:47 GMT"
}
] | 2011-06-10T00:00:00 | [
[
"Sen",
"Mehmet Umut",
""
],
[
"Erdogan",
"Hakan",
""
]
] | TITLE: Max-Margin Stacking and Sparse Regularization for Linear Classifier
Combination and Selection
ABSTRACT: The main principle of stacked generalization (or Stacking) is using a
second-level generalizer to combine the outputs of base classifiers in an
ensemble. In this paper, we investigate different combination types under the
stacking framework; namely weighted sum (WS), class-dependent weighted sum
(CWS) and linear stacked generalization (LSG). For learning the weights, we
propose using regularized empirical risk minimization with the hinge loss. In
addition, we propose using group sparsity for regularization to facilitate
classifier selection. We performed experiments using two different ensemble
setups with differing diversities on 8 real-world datasets. Results show the
power of regularized learning with the hinge loss function. Using sparse
regularization, we are able to reduce the number of selected classifiers of the
diverse ensemble without sacrificing accuracy. With the non-diverse ensembles,
we even gain accuracy on average by using sparse regularization.
| no_new_dataset | 0.950134 |
0911.4046 | Ryota Tomioka | Ryota Tomioka, Taiji Suzuki, Masashi Sugiyama | Super-Linear Convergence of Dual Augmented-Lagrangian Algorithm for
Sparsity Regularized Estimation | 51 pages, 9 figures | Journal of Machine Learning Research, 12(May):1537-1586, 2011 | null | null | stat.ML cs.LG stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyze the convergence behaviour of a recently proposed algorithm for
regularized estimation called Dual Augmented Lagrangian (DAL). Our analysis is
based on a new interpretation of DAL as a proximal minimization algorithm. We
theoretically show under some conditions that DAL converges super-linearly in a
non-asymptotic and global sense. Due to a special modelling of sparse
estimation problems in the context of machine learning, the assumptions we make
are milder and more natural than those made in conventional analysis of
augmented Lagrangian algorithms. In addition, the new interpretation enables us
to generalize DAL to wide varieties of sparse estimation problems. We
experimentally confirm our analysis in a large scale $\ell_1$-regularized
logistic regression problem and extensively compare the efficiency of DAL
algorithm to previously proposed algorithms on both synthetic and benchmark
datasets.
| [
{
"version": "v1",
"created": "Fri, 20 Nov 2009 13:44:28 GMT"
},
{
"version": "v2",
"created": "Wed, 12 May 2010 12:33:07 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Jan 2011 07:04:21 GMT"
}
] | 2011-06-07T00:00:00 | [
[
"Tomioka",
"Ryota",
""
],
[
"Suzuki",
"Taiji",
""
],
[
"Sugiyama",
"Masashi",
""
]
] | TITLE: Super-Linear Convergence of Dual Augmented-Lagrangian Algorithm for
Sparsity Regularized Estimation
ABSTRACT: We analyze the convergence behaviour of a recently proposed algorithm for
regularized estimation called Dual Augmented Lagrangian (DAL). Our analysis is
based on a new interpretation of DAL as a proximal minimization algorithm. We
theoretically show under some conditions that DAL converges super-linearly in a
non-asymptotic and global sense. Due to a special modelling of sparse
estimation problems in the context of machine learning, the assumptions we make
are milder and more natural than those made in conventional analysis of
augmented Lagrangian algorithms. In addition, the new interpretation enables us
to generalize DAL to wide varieties of sparse estimation problems. We
experimentally confirm our analysis in a large scale $\ell_1$-regularized
logistic regression problem and extensively compare the efficiency of DAL
algorithm to previously proposed algorithms on both synthetic and benchmark
datasets.
| no_new_dataset | 0.949342 |
1106.0967 | Ping Li | Ping Li, Anshumali Shrivastava, Joshua Moore, Arnd Christian Konig | Hashing Algorithms for Large-Scale Learning | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we first demonstrate that b-bit minwise hashing, whose
estimators are positive definite kernels, can be naturally integrated with
learning algorithms such as SVM and logistic regression. We adopt a simple
scheme to transform the nonlinear (resemblance) kernel into linear (inner
product) kernel; and hence large-scale problems can be solved extremely
efficiently. Our method provides a simple effective solution to large-scale
learning in massive and extremely high-dimensional datasets, especially when
data do not fit in memory.
We then compare b-bit minwise hashing with the Vowpal Wabbit (VW) algorithm
(which is related to the Count-Min (CM) sketch). Interestingly, VW has the same
variances as random projections. Our theoretical and empirical comparisons
illustrate that usually $b$-bit minwise hashing is significantly more accurate
(at the same storage) than VW (and random projections) in binary data.
Furthermore, $b$-bit minwise hashing can be combined with VW to achieve further
improvements in terms of training speed, especially when $b$ is large.
| [
{
"version": "v1",
"created": "Mon, 6 Jun 2011 06:38:20 GMT"
}
] | 2011-06-07T00:00:00 | [
[
"Li",
"Ping",
""
],
[
"Shrivastava",
"Anshumali",
""
],
[
"Moore",
"Joshua",
""
],
[
"Konig",
"Arnd Christian",
""
]
] | TITLE: Hashing Algorithms for Large-Scale Learning
ABSTRACT: In this paper, we first demonstrate that b-bit minwise hashing, whose
estimators are positive definite kernels, can be naturally integrated with
learning algorithms such as SVM and logistic regression. We adopt a simple
scheme to transform the nonlinear (resemblance) kernel into linear (inner
product) kernel; and hence large-scale problems can be solved extremely
efficiently. Our method provides a simple effective solution to large-scale
learning in massive and extremely high-dimensional datasets, especially when
data do not fit in memory.
We then compare b-bit minwise hashing with the Vowpal Wabbit (VW) algorithm
(which is related to the Count-Min (CM) sketch). Interestingly, VW has the same
variances as random projections. Our theoretical and empirical comparisons
illustrate that usually $b$-bit minwise hashing is significantly more accurate
(at the same storage) than VW (and random projections) in binary data.
Furthermore, $b$-bit minwise hashing can be combined with VW to achieve further
improvements in terms of training speed, especially when $b$ is large.
| no_new_dataset | 0.946941 |
1008.4815 | Alberto Costa | Alberto Costa, Fabio Roda | Recommender Systems by means of Information Retrieval | null | null | 10.1145/1988688.1988755 | null | cs.IR | http://creativecommons.org/licenses/by-nc-sa/3.0/ | In this paper we present a method for reformulating the Recommender Systems
problem in an Information Retrieval one. In our tests we have a dataset of
users who give ratings for some movies; we hide some values from the dataset,
and we try to predict them again using its remaining portion (the so-called
"leave-n-out approach"). In order to use an Information Retrieval algorithm, we
reformulate this Recommender Systems problem in this way: a user corresponds to
a document, a movie corresponds to a term, the active user (whose rating we
want to predict) plays the role of the query, and the ratings are used as
weights, in place of the weighting schema of the original IR algorithm. The
output is the ranking list of the documents ("users") relevant for the query
("active user"). We use the ratings of these users, weighted according to the
rank, to predict the rating of the active user. We carry out the comparison by
means of a typical metric, namely the accuracy of the predictions returned by
the algorithm, and we compare this to the real ratings from users. In our first
tests, we use two different Information Retrieval algorithms: LSPR, a recently
proposed model based on Discrete Fourier Transform, and a simple vector space
model.
| [
{
"version": "v1",
"created": "Fri, 27 Aug 2010 22:24:25 GMT"
}
] | 2011-06-03T00:00:00 | [
[
"Costa",
"Alberto",
""
],
[
"Roda",
"Fabio",
""
]
] | TITLE: Recommender Systems by means of Information Retrieval
ABSTRACT: In this paper we present a method for reformulating the Recommender Systems
problem in an Information Retrieval one. In our tests we have a dataset of
users who give ratings for some movies; we hide some values from the dataset,
and we try to predict them again using its remaining portion (the so-called
"leave-n-out approach"). In order to use an Information Retrieval algorithm, we
reformulate this Recommender Systems problem in this way: a user corresponds to
a document, a movie corresponds to a term, the active user (whose rating we
want to predict) plays the role of the query, and the ratings are used as
weights, in place of the weighting schema of the original IR algorithm. The
output is the ranking list of the documents ("users") relevant for the query
("active user"). We use the ratings of these users, weighted according to the
rank, to predict the rating of the active user. We carry out the comparison by
means of a typical metric, namely the accuracy of the predictions returned by
the algorithm, and we compare this to the real ratings from users. In our first
tests, we use two different Information Retrieval algorithms: LSPR, a recently
proposed model based on Discrete Fourier Transform, and a simple vector space
model.
| no_new_dataset | 0.943504 |
1106.0357 | Mohamad Tarifi | Mohamad Tarifi, Meera Sitharam, Jeffery Ho | Learning Hierarchical Sparse Representations using Iterative Dictionary
Learning and Dimension Reduction | null | null | null | null | cs.LG cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces an elemental building block which combines Dictionary
Learning and Dimension Reduction (DRDL). We show how this foundational element
can be used to iteratively construct a Hierarchical Sparse Representation (HSR)
of a sensory stream. We compare our approach to existing models showing the
generality of our simple prescription. We then perform preliminary experiments
using this framework, illustrating with the example of an object recognition
task using standard datasets. This work introduces the very first steps towards
an integrated framework for designing and analyzing various computational tasks
from learning to attention to action. The ultimate goal is building a
mathematically rigorous, integrated theory of intelligence.
| [
{
"version": "v1",
"created": "Thu, 2 Jun 2011 02:31:04 GMT"
}
] | 2011-06-03T00:00:00 | [
[
"Tarifi",
"Mohamad",
""
],
[
"Sitharam",
"Meera",
""
],
[
"Ho",
"Jeffery",
""
]
] | TITLE: Learning Hierarchical Sparse Representations using Iterative Dictionary
Learning and Dimension Reduction
ABSTRACT: This paper introduces an elemental building block which combines Dictionary
Learning and Dimension Reduction (DRDL). We show how this foundational element
can be used to iteratively construct a Hierarchical Sparse Representation (HSR)
of a sensory stream. We compare our approach to existing models showing the
generality of our simple prescription. We then perform preliminary experiments
using this framework, illustrating with the example of an object recognition
task using standard datasets. This work introduces the very first steps towards
an integrated framework for designing and analyzing various computational tasks
from learning to attention to action. The ultimate goal is building a
mathematically rigorous, integrated theory of intelligence.
| no_new_dataset | 0.948346 |
1106.0219 | C. E. Brodley | C. E. Brodley, M. A. Friedl | Identifying Mislabeled Training Data | null | Journal Of Artificial Intelligence Research, Volume 11, pages
131-167, 1999 | 10.1613/jair.606 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a new approach to identifying and eliminating mislabeled
training instances for supervised learning. The goal of this approach is to
improve classification accuracies produced by learning algorithms by improving
the quality of the training data. Our approach uses a set of learning
algorithms to create classifiers that serve as noise filters for the training
data. We evaluate single algorithm, majority vote and consensus filters on five
datasets that are prone to labeling errors. Our experiments illustrate that
filtering significantly improves classification accuracy for noise levels up to
30 percent. An analytical and empirical evaluation of the precision of our
approach shows that consensus filters are conservative at throwing away good
data at the expense of retaining bad data and that majority filters are better
at detecting bad data at the expense of throwing away good data. This suggests
that for situations in which there is a paucity of data, consensus filters are
preferable, whereas majority vote filters are preferable for situations with an
abundance of data.
| [
{
"version": "v1",
"created": "Wed, 1 Jun 2011 16:15:28 GMT"
}
] | 2011-06-02T00:00:00 | [
[
"Brodley",
"C. E.",
""
],
[
"Friedl",
"M. A.",
""
]
] | TITLE: Identifying Mislabeled Training Data
ABSTRACT: This paper presents a new approach to identifying and eliminating mislabeled
training instances for supervised learning. The goal of this approach is to
improve classification accuracies produced by learning algorithms by improving
the quality of the training data. Our approach uses a set of learning
algorithms to create classifiers that serve as noise filters for the training
data. We evaluate single algorithm, majority vote and consensus filters on five
datasets that are prone to labeling errors. Our experiments illustrate that
filtering significantly improves classification accuracy for noise levels up to
30 percent. An analytical and empirical evaluation of the precision of our
approach shows that consensus filters are conservative at throwing away good
data at the expense of retaining bad data and that majority filters are better
at detecting bad data at the expense of throwing away good data. This suggests
that for situations in which there is a paucity of data, consensus filters are
preferable, whereas majority vote filters are preferable for situations with an
abundance of data.
| no_new_dataset | 0.95511 |
1105.6118 | Amani Tahat | Amani Tahat, Maurice HT Ling | Mapping Relational Operations onto Hypergraph Model | 21 pages | The Python Papers 6(1): 4,2011 | null | null | cs.DB cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The relational model is the most commonly used data model for storing large
datasets, perhaps due to the simplicity of the tabular format which had
revolutionized database management systems. However, many real world objects
are recursive and associative in nature which makes storage in the relational
model difficult. The hypergraph model is a generalization of a graph model,
where each hypernode can be made up of other nodes or graphs and each hyperedge
can be made up of one or more edges. It may address the recursive and
associative limitations of relational model. However, the hypergraph model is
non-tabular; thus, loses the simplicity of the relational model. In this study,
we consider the means to convert a relational model into a hypergraph model in
two layers. At the bottom layer, each relational tuple can be considered as a
star graph centered where the primary key node is surrounded by non-primary key
attributes. At the top layer, each tuple is a hypernode, and a relation is a
set of hypernodes. We presented a reference implementation of relational
operators (project, rename, select, inner join, natural join, left join, right
join, outer join and Cartesian join) on a hypergraph model. Using a simple
example, we demonstrate that a relation and relational operators can be
implemented on this hypergraph model.
| [
{
"version": "v1",
"created": "Mon, 30 May 2011 21:34:51 GMT"
}
] | 2011-06-01T00:00:00 | [
[
"Tahat",
"Amani",
""
],
[
"Ling",
"Maurice HT",
""
]
] | TITLE: Mapping Relational Operations onto Hypergraph Model
ABSTRACT: The relational model is the most commonly used data model for storing large
datasets, perhaps due to the simplicity of the tabular format which had
revolutionized database management systems. However, many real world objects
are recursive and associative in nature which makes storage in the relational
model difficult. The hypergraph model is a generalization of a graph model,
where each hypernode can be made up of other nodes or graphs and each hyperedge
can be made up of one or more edges. It may address the recursive and
associative limitations of relational model. However, the hypergraph model is
non-tabular; thus, loses the simplicity of the relational model. In this study,
we consider the means to convert a relational model into a hypergraph model in
two layers. At the bottom layer, each relational tuple can be considered as a
star graph centered where the primary key node is surrounded by non-primary key
attributes. At the top layer, each tuple is a hypernode, and a relation is a
set of hypernodes. We presented a reference implementation of relational
operators (project, rename, select, inner join, natural join, left join, right
join, outer join and Cartesian join) on a hypergraph model. Using a simple
example, we demonstrate that a relation and relational operators can be
implemented on this hypergraph model.
| no_new_dataset | 0.948106 |
1105.4151 | Gautam Thakur | Gautam S. Thakur, Pan Hui, Hamed Ketabdar, Ahmed Helmy | Towards Realistic Vehicular Network Modeling Using Planet-scale Public
Webcams | null | null | null | null | cs.NI stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Realistic modeling of vehicular mobility has been particularly challenging
due to a lack of large libraries of measurements in the research community. In
this paper we introduce a novel method for large-scale monitoring, analysis,
and identification of spatio-temporal models for vehicular mobility using the
freely available online webcams in cities across the globe. We collect
vehicular mobility traces from 2,700 traffic webcams in 10 different cities for
several months and generate a mobility dataset of 7.5 Terabytes consisting of
125 million of images. To the best of our knowl- edge, this is the largest data
set ever used in such study. To process and analyze this data, we propose an
efficient and scalable algorithm to estimate traffic density based on
background image subtraction. Initial results show that at least 82% of
individual cameras with less than 5% deviation from four cities follow
Loglogistic distribution and also 94% cameras from Toronto follow gamma
distribution. The aggregate results from each city also demonstrate that Log-
Logistic and gamma distribution pass the KS-test with 95% confidence.
Furthermore, many of the camera traces exhibit long range dependence, with
self-similarity evident in the aggregates of traffic (per city). We believe our
novel data collection method and dataset provide a much needed contribution to
the research community for realistic modeling of vehicular networks and
mobility.
| [
{
"version": "v1",
"created": "Thu, 19 May 2011 12:36:46 GMT"
}
] | 2011-05-26T00:00:00 | [
[
"Thakur",
"Gautam S.",
""
],
[
"Hui",
"Pan",
""
],
[
"Ketabdar",
"Hamed",
""
],
[
"Helmy",
"Ahmed",
""
]
] | TITLE: Towards Realistic Vehicular Network Modeling Using Planet-scale Public
Webcams
ABSTRACT: Realistic modeling of vehicular mobility has been particularly challenging
due to a lack of large libraries of measurements in the research community. In
this paper we introduce a novel method for large-scale monitoring, analysis,
and identification of spatio-temporal models for vehicular mobility using the
freely available online webcams in cities across the globe. We collect
vehicular mobility traces from 2,700 traffic webcams in 10 different cities for
several months and generate a mobility dataset of 7.5 Terabytes consisting of
125 million of images. To the best of our knowl- edge, this is the largest data
set ever used in such study. To process and analyze this data, we propose an
efficient and scalable algorithm to estimate traffic density based on
background image subtraction. Initial results show that at least 82% of
individual cameras with less than 5% deviation from four cities follow
Loglogistic distribution and also 94% cameras from Toronto follow gamma
distribution. The aggregate results from each city also demonstrate that Log-
Logistic and gamma distribution pass the KS-test with 95% confidence.
Furthermore, many of the camera traces exhibit long range dependence, with
self-similarity evident in the aggregates of traffic (per city). We believe our
novel data collection method and dataset provide a much needed contribution to
the research community for realistic modeling of vehicular networks and
mobility.
| new_dataset | 0.934155 |
1105.4256 | Gianmarco De Francisci Morales | Gianmarco De Francisci Morales (IMT Lucca), Aristides Gionis (Yahoo!
Research), Mauro Sozio (MPI Saarbruecken) | Social content matching in MapReduce | VLDB2011 | Proceedings of the VLDB Endowment (PVLDB), Vol. 4, No. 7, pp.
460-469 (2011) | null | null | cs.SI cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Matching problems are ubiquitous. They occur in economic markets, labor
markets, internet advertising, and elsewhere. In this paper we focus on an
application of matching for social media. Our goal is to distribute content
from information suppliers to information consumers. We seek to maximize the
overall relevance of the matched content from suppliers to consumers while
regulating the overall activity, e.g., ensuring that no consumer is overwhelmed
with data and that all suppliers have chances to deliver their content.
We propose two matching algorithms, GreedyMR and StackMR, geared for the
MapReduce paradigm. Both algorithms have provable approximation guarantees, and
in practice they produce high-quality solutions. While both algorithms scale
extremely well, we can show that StackMR requires only a poly-logarithmic
number of MapReduce steps, making it an attractive option for applications with
very large datasets. We experimentally show the trade-offs between quality and
efficiency of our solutions on two large datasets coming from real-world
social-media web sites.
| [
{
"version": "v1",
"created": "Sat, 21 May 2011 12:11:12 GMT"
}
] | 2011-05-24T00:00:00 | [
[
"Morales",
"Gianmarco De Francisci",
"",
"IMT Lucca"
],
[
"Gionis",
"Aristides",
"",
"Yahoo!\n Research"
],
[
"Sozio",
"Mauro",
"",
"MPI Saarbruecken"
]
] | TITLE: Social content matching in MapReduce
ABSTRACT: Matching problems are ubiquitous. They occur in economic markets, labor
markets, internet advertising, and elsewhere. In this paper we focus on an
application of matching for social media. Our goal is to distribute content
from information suppliers to information consumers. We seek to maximize the
overall relevance of the matched content from suppliers to consumers while
regulating the overall activity, e.g., ensuring that no consumer is overwhelmed
with data and that all suppliers have chances to deliver their content.
We propose two matching algorithms, GreedyMR and StackMR, geared for the
MapReduce paradigm. Both algorithms have provable approximation guarantees, and
in practice they produce high-quality solutions. While both algorithms scale
extremely well, we can show that StackMR requires only a poly-logarithmic
number of MapReduce steps, making it an attractive option for applications with
very large datasets. We experimentally show the trade-offs between quality and
efficiency of our solutions on two large datasets coming from real-world
social-media web sites.
| no_new_dataset | 0.944074 |
1105.4004 | Miguel A. Martinez-Prieto | Sandra \'Alvarez-Garc\'ia and Nieves R. Brisaboa and Javier D.
Fern\'andez and Miguel A. Mart\'inez-Prieto | Compressed k2-Triples for Full-In-Memory RDF Engines | In Proc. of AMCIS'2011 | null | null | null | cs.IR cs.DB | http://creativecommons.org/licenses/by/3.0/ | Current "data deluge" has flooded the Web of Data with very large RDF
datasets. They are hosted and queried through SPARQL endpoints which act as
nodes of a semantic net built on the principles of the Linked Data project.
Although this is a realistic philosophy for global data publishing, its query
performance is diminished when the RDF engines (behind the endpoints) manage
these huge datasets. Their indexes cannot be fully loaded in main memory, hence
these systems need to perform slow disk accesses to solve SPARQL queries. This
paper addresses this problem by a compact indexed RDF structure (called
k2-triples) applying compact k2-tree structures to the well-known
vertical-partitioning technique. It obtains an ultra-compressed representation
of large RDF graphs and allows SPARQL queries to be full-in-memory performed
without decompression. We show that k2-triples clearly outperforms
state-of-the-art compressibility and traditional vertical-partitioning query
resolution, remaining very competitive with multi-index solutions.
| [
{
"version": "v1",
"created": "Fri, 20 May 2011 02:11:20 GMT"
}
] | 2011-05-23T00:00:00 | [
[
"Álvarez-García",
"Sandra",
""
],
[
"Brisaboa",
"Nieves R.",
""
],
[
"Fernández",
"Javier D.",
""
],
[
"Martínez-Prieto",
"Miguel A.",
""
]
] | TITLE: Compressed k2-Triples for Full-In-Memory RDF Engines
ABSTRACT: Current "data deluge" has flooded the Web of Data with very large RDF
datasets. They are hosted and queried through SPARQL endpoints which act as
nodes of a semantic net built on the principles of the Linked Data project.
Although this is a realistic philosophy for global data publishing, its query
performance is diminished when the RDF engines (behind the endpoints) manage
these huge datasets. Their indexes cannot be fully loaded in main memory, hence
these systems need to perform slow disk accesses to solve SPARQL queries. This
paper addresses this problem by a compact indexed RDF structure (called
k2-triples) applying compact k2-tree structures to the well-known
vertical-partitioning technique. It obtains an ultra-compressed representation
of large RDF graphs and allows SPARQL queries to be full-in-memory performed
without decompression. We show that k2-triples clearly outperforms
state-of-the-art compressibility and traditional vertical-partitioning query
resolution, remaining very competitive with multi-index solutions.
| no_new_dataset | 0.940024 |
1105.3882 | Paolo Bajardi | Paolo Bajardi, Alain Barrat, Fabrizio Natale, Lara Savini, Vittoria
Colizza | Dynamical Patterns of Cattle Trade Movements | null | PLoS ONE 6(5): e19869(2011) | 10.1371/journal.pone.0019869 | null | physics.soc-ph cond-mat.stat-mech q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite their importance for the spread of zoonotic diseases, our
understanding of the dynamical aspects characterizing the movements of farmed
animal populations remains limited as these systems are traditionally studied
as static objects and through simplified approximations. By leveraging the
network science approach, here we are able for the first time to fully analyze
the longitudinal dataset of Italian cattle movements that reports the mobility
of individual animals among farms on a daily basis. The complexity and
inter-relations between topology, function and dynamical nature of the system
are characterized at different spatial and time resolutions, in order to
uncover patterns and vulnerabilities fundamental for the definition of targeted
prevention and control measures for zoonotic diseases. Results show how the
stationarity of statistical distributions coexists with a strong and
non-trivial evolutionary dynamics at the node and link levels, on all
timescales. Traditional static views of the displacement network hide important
patterns of structural changes affecting nodes' centrality and farms' spreading
potential, thus limiting the efficiency of interventions based on partial
longitudinal information. By fully taking into account the longitudinal
dimension, we propose a novel definition of dynamical motifs that is able to
uncover the presence of a temporal arrow describing the evolution of the system
and the causality patterns of its displacements, shedding light on mechanisms
that may play a crucial role in the definition of preventive actions.
| [
{
"version": "v1",
"created": "Thu, 19 May 2011 14:25:39 GMT"
}
] | 2011-05-20T00:00:00 | [
[
"Bajardi",
"Paolo",
""
],
[
"Barrat",
"Alain",
""
],
[
"Natale",
"Fabrizio",
""
],
[
"Savini",
"Lara",
""
],
[
"Colizza",
"Vittoria",
""
]
] | TITLE: Dynamical Patterns of Cattle Trade Movements
ABSTRACT: Despite their importance for the spread of zoonotic diseases, our
understanding of the dynamical aspects characterizing the movements of farmed
animal populations remains limited as these systems are traditionally studied
as static objects and through simplified approximations. By leveraging the
network science approach, here we are able for the first time to fully analyze
the longitudinal dataset of Italian cattle movements that reports the mobility
of individual animals among farms on a daily basis. The complexity and
inter-relations between topology, function and dynamical nature of the system
are characterized at different spatial and time resolutions, in order to
uncover patterns and vulnerabilities fundamental for the definition of targeted
prevention and control measures for zoonotic diseases. Results show how the
stationarity of statistical distributions coexists with a strong and
non-trivial evolutionary dynamics at the node and link levels, on all
timescales. Traditional static views of the displacement network hide important
patterns of structural changes affecting nodes' centrality and farms' spreading
potential, thus limiting the efficiency of interventions based on partial
longitudinal information. By fully taking into account the longitudinal
dimension, we propose a novel definition of dynamical motifs that is able to
uncover the presence of a temporal arrow describing the evolution of the system
and the causality patterns of its displacements, shedding light on mechanisms
that may play a crucial role in the definition of preventive actions.
| no_new_dataset | 0.941493 |
1105.3685 | Afzal Godil | Afzal Godil, Zhouhui Lian, Helin Dutagaci, Rui Fang, Vanamali T.P.,
Chun Pan Cheung | Benchmarks, Performance Evaluation and Contests for 3D Shape Retrieval | Performance Metrics for Intelligent Systems (PerMIS'10) Workshop,
September, 2010 | null | null | null | cs.CV cs.CG | http://creativecommons.org/licenses/publicdomain/ | Benchmarking of 3D Shape retrieval allows developers and researchers to
compare the strengths of different algorithms on a standard dataset. Here we
describe the procedures involved in developing a benchmark and the issues that
arise. We then discuss some of the current 3D shape retrieval benchmark efforts of
our group and others. We also review the different performance evaluation
measures that are developed and used by researchers in the community. After
that we give an overview of the 3D shape retrieval contest (SHREC) tracks run
under the EuroGraphics Workshop on 3D Object Retrieval and give details of
tracks that we organized for SHREC 2010. Finally we demonstrate some of the
results based on the different SHREC contest tracks and the NIST shape
benchmark.
| [
{
"version": "v1",
"created": "Wed, 18 May 2011 16:48:47 GMT"
}
] | 2011-05-19T00:00:00 | [
[
"Godil",
"Afzal",
""
],
[
"Lian",
"Zhouhui",
""
],
[
"Dutagaci",
"Helin",
""
],
[
"Fang",
"Rui",
""
],
[
"P.",
"Vanamali T.",
""
],
[
"Cheung",
"Chun Pan",
""
]
] | TITLE: Benchmarks, Performance Evaluation and Contests for 3D Shape Retrieval
ABSTRACT: Benchmarking of 3D Shape retrieval allows developers and researchers to
compare the strengths of different algorithms on a standard dataset. Here we
describe the procedures involved in developing a benchmark and the issues that
arise. We then discuss some of the current 3D shape retrieval benchmark efforts of
our group and others. We also review the different performance evaluation
measures that are developed and used by researchers in the community. After
that we give an overview of the 3D shape retrieval contest (SHREC) tracks run
under the EuroGraphics Workshop on 3D Object Retrieval and give details of
tracks that we organized for SHREC 2010. Finally we demonstrate some of the
results based on the different SHREC contest tracks and the NIST shape
benchmark.
| no_new_dataset | 0.947575 |
1105.2797 | Afzal Godil | Afzal Godil, Sandy Ressler and Patrick Grother | Face Recognition using 3D Facial Shape and Color Map Information:
Comparison and Combination | Proceedings of SPIE Vol. 5404 Biometric Technology for Human
Identification, Anil K. Jain; Nalini K. Ratha, Editors, pp.351-361, ISBN:
9780819453273 Date: 25 August 2004 | null | 10.1117/12.540754 | null | cs.CV | http://creativecommons.org/licenses/publicdomain/ | In this paper, we investigate the use of 3D surface geometry for face
recognition and compare it to one based on color map information. The 3D
surface and color map data are from the CAESAR anthropometric database. We find
that the recognition performance is not very different between 3D surface and
color map information using a principal component analysis algorithm. We also
discuss the different techniques for the combination of the 3D surface and
color map information for multi-modal recognition by using different fusion
approaches and show that there is significant improvement in results. The
effectiveness of various techniques is compared and evaluated on a dataset with
200 subjects in two different positions.
| [
{
"version": "v1",
"created": "Fri, 13 May 2011 18:25:28 GMT"
}
] | 2011-05-16T00:00:00 | [
[
"Godil",
"Afzal",
""
],
[
"Ressler",
"Sandy",
""
],
[
"Grother",
"Patrick",
""
]
] | TITLE: Face Recognition using 3D Facial Shape and Color Map Information:
Comparison and Combination
ABSTRACT: In this paper, we investigate the use of 3D surface geometry for face
recognition and compare it to one based on color map information. The 3D
surface and color map data are from the CAESAR anthropometric database. We find
that the recognition performance is not very different between 3D surface and
color map information using a principal component analysis algorithm. We also
discuss the different techniques for the combination of the 3D surface and
color map information for multi-modal recognition by using different fusion
approaches and show that there is significant improvement in results. The
effectiveness of various techniques is compared and evaluated on a dataset with
200 subjects in two different positions.
| no_new_dataset | 0.919353 |
1012.5815 | Tamal Ghosh Tamal Ghosh | Tamal Ghosh, Mousumi Modak and Pranab K Dan | SAPFOCS: a metaheuristic based approach to part family formation
problems in group technology | 10 pages; 6 figures; 12 tables | nternational Journal of Management Science International Journal
of Management Science and Engineering Management, 6(3): 231-240, 2011 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article deals with Part family formation problem which is believed to be
moderately complicated to be solved in polynomial time in the vicinity of Group
Technology (GT). In the past literature researchers investigated that the part
family formation techniques are principally based on production flow analysis
(PFA) which usually considers operational requirements, sequences and time.
Part Coding Analysis (PCA) is merely considered in GT which is believed to be
the proficient method to identify the part families. PCA classifies parts by
allotting them to different families based on their resemblances in: (1) design
characteristics such as shape and size, and/or (2) manufacturing
characteristics (machining requirements). A novel approach based on simulated
annealing namely SAPFOCS is adopted in this study to develop effective part
families exploiting the PCA technique. Thereafter Taguchi's orthogonal design
method is employed to solve the critical issues on the subject of parameters
selection for the proposed metaheuristic algorithm. The adopted technique is
therefore tested on 5 different datasets of size 5 {\times} 9 to 27 {\times} 9
and the obtained results are compared with C-Linkage clustering technique. The
experimental results reported that the proposed metaheuristic algorithm is
extremely effective in terms of the quality of the solution obtained and has
outperformed C-Linkage algorithm in most instances.
| [
{
"version": "v1",
"created": "Tue, 28 Dec 2010 18:57:04 GMT"
},
{
"version": "v2",
"created": "Wed, 11 May 2011 07:18:26 GMT"
}
] | 2011-05-12T00:00:00 | [
[
"Ghosh",
"Tamal",
""
],
[
"Modak",
"Mousumi",
""
],
[
"Dan",
"Pranab K",
""
]
] | TITLE: SAPFOCS: a metaheuristic based approach to part family formation
problems in group technology
ABSTRACT: This article deals with the part family formation problem, which is believed to be
moderately complicated to solve in polynomial time, in the area of Group
Technology (GT). The past literature shows that part family formation
techniques are principally based on production flow analysis (PFA), which
usually considers operational requirements, sequences and time. Part Coding
Analysis (PCA) is rarely considered in GT, although it is believed to be a
proficient method to identify part families. PCA classifies parts by allotting
them to different families based on their resemblances in: (1) design
characteristics such as shape and size, and/or (2) manufacturing
characteristics (machining requirements). A novel approach based on simulated
annealing, namely SAPFOCS, is adopted in this study to develop effective part
families exploiting the PCA technique. Thereafter, Taguchi's orthogonal design
method is employed to address the critical issue of parameter selection for
the proposed metaheuristic algorithm. The adopted technique is tested on 5
different datasets of size 5 {\times} 9 to 27 {\times} 9, and the obtained
results are compared with the C-Linkage clustering technique. The experimental
results report that the proposed metaheuristic algorithm is extremely effective
in terms of the quality of the solutions obtained and outperforms the C-Linkage
algorithm in most instances.
| no_new_dataset | 0.947186 |
1105.1926 | G\'abor Bortel | G\'abor Bortel, Mikl\'os Tegze | Common Arc Method for Diffraction Pattern Orientation | 16 pages, 10 figures | null | null | null | physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Very short pulses of x-ray free-electron lasers opened the way to obtain
diffraction signal from single particles beyond the radiation dose limit. For
3D structure reconstruction many patterns are recorded in the object's unknown
orientation. We describe a method for orientation of continuous diffraction
patterns of non-periodic objects, utilizing intensity correlations in the
curved intersections of the corresponding Ewald spheres, hence named Common Arc
orientation. Present implementation of the algorithm optionally takes into
account the Friedel law, handles missing data and is capable to determine the
point group of symmetric objects. Its performance is demonstrated on simulated
diffraction datasets and verification of the results indicates high orientation
accuracy even at low signal levels. The Common Arc method fills a gap in the
wide palette of the orientation methods.
| [
{
"version": "v1",
"created": "Tue, 10 May 2011 12:15:44 GMT"
}
] | 2011-05-11T00:00:00 | [
[
"Bortel",
"Gábor",
""
],
[
"Tegze",
"Miklós",
""
]
] | TITLE: Common Arc Method for Diffraction Pattern Orientation
ABSTRACT: Very short pulses of x-ray free-electron lasers opened the way to obtain
diffraction signal from single particles beyond the radiation dose limit. For
3D structure reconstruction many patterns are recorded in the object's unknown
orientation. We describe a method for orientation of continuous diffraction
patterns of non-periodic objects, utilizing intensity correlations in the
curved intersections of the corresponding Ewald spheres, hence named Common Arc
orientation. Present implementation of the algorithm optionally takes into
account the Friedel law, handles missing data and is capable to determine the
point group of symmetric objects. Its performance is demonstrated on simulated
diffraction datasets and verification of the results indicates high orientation
accuracy even at low signal levels. The Common Arc method fills a gap in the
wide palette of the orientation methods.
| no_new_dataset | 0.952175 |
0809.0490 | Alexander Gorban | A. N. Gorban, A. Y. Zinovyev | Principal Graphs and Manifolds | 36 pages, 6 figures, minor corrections | Handbook of Research on Machine Learning Applications and Trends:
Algorithms, Methods and Techniques, Ch. 2, Information Science Reference,
2009. 28-59 | 10.4018/978-1-60566-766-9 | null | cs.LG cs.NE stat.ML | http://creativecommons.org/licenses/by/3.0/ | In many physical, statistical, biological and other investigations it is
desirable to approximate a system of points by objects of lower dimension
and/or complexity. For this purpose, Karl Pearson invented principal component
analysis in 1901 and found 'lines and planes of closest fit to system of
points'. The famous k-means algorithm solves the approximation problem too, but
by finite sets instead of lines and planes. This chapter gives a brief
practical introduction into the methods of construction of general principal
objects, i.e. objects embedded in the 'middle' of the multidimensional data
set. As a basis, the unifying framework of mean squared distance approximation
of finite datasets is selected. Principal graphs and manifolds are constructed
as generalisations of principal components and k-means principal points. For
this purpose, the family of expectation/maximisation algorithms with nearest
generalisations is presented. Construction of principal graphs with controlled
complexity is based on the graph grammar approach.
| [
{
"version": "v1",
"created": "Tue, 2 Sep 2008 18:04:53 GMT"
},
{
"version": "v2",
"created": "Mon, 9 May 2011 13:23:08 GMT"
}
] | 2011-05-10T00:00:00 | [
[
"Gorban",
"A. N.",
""
],
[
"Zinovyev",
"A. Y.",
""
]
] | TITLE: Principal Graphs and Manifolds
ABSTRACT: In many physical, statistical, biological and other investigations it is
desirable to approximate a system of points by objects of lower dimension
and/or complexity. For this purpose, Karl Pearson invented principal component
analysis in 1901 and found 'lines and planes of closest fit to system of
points'. The famous k-means algorithm solves the approximation problem too, but
by finite sets instead of lines and planes. This chapter gives a brief
practical introduction to the methods of construction of general principal
objects, i.e. objects embedded in the 'middle' of the multidimensional data
set. As a basis, the unifying framework of mean squared distance approximation
of finite datasets is selected. Principal graphs and manifolds are constructed
as generalisations of principal components and k-means principal points. For
this purpose, the family of expectation/maximisation algorithms with nearest
generalisations is presented. Construction of principal graphs with controlled
complexity is based on the graph grammar approach.
| no_new_dataset | 0.94887 |
1009.5168 | Konstantin Voevodski | Konstantin Voevodski, Maria-Florina Balcan, Heiko Roglin, Shang-Hua
Teng, Yu Xia | Efficient Clustering with Limited Distance Information | Full version of UAI 2010 paper | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given a point set S and an unknown metric d on S, we study the problem of
efficiently partitioning S into k clusters while querying few distances between
the points. In our model we assume that we have access to one versus all
queries that, given a point s in S, return the distances between s and all other
points. We show that given a natural assumption about the structure of the
instance, we can efficiently find an accurate clustering using only O(k)
distance queries. Our algorithm uses an active selection strategy to choose a
small set of points that we call landmarks, and considers only the distances
between landmarks and other points to produce a clustering. We use our
algorithm to cluster proteins by sequence similarity. This setting nicely fits
our model because we can use a fast sequence database search program to query a
sequence against an entire dataset. We conduct an empirical study that shows
that even though we query a small fraction of the distances between the points,
we produce clusterings that are close to a desired clustering given by manual
classification.
| [
{
"version": "v1",
"created": "Mon, 27 Sep 2010 06:29:35 GMT"
},
{
"version": "v2",
"created": "Mon, 9 May 2011 04:03:47 GMT"
}
] | 2011-05-10T00:00:00 | [
[
"Voevodski",
"Konstantin",
""
],
[
"Balcan",
"Maria-Florina",
""
],
[
"Roglin",
"Heiko",
""
],
[
"Teng",
"Shang-Hua",
""
],
[
"Xia",
"Yu",
""
]
] | TITLE: Efficient Clustering with Limited Distance Information
ABSTRACT: Given a point set S and an unknown metric d on S, we study the problem of
efficiently partitioning S into k clusters while querying few distances between
the points. In our model we assume that we have access to one versus all
queries that, given a point s in S, return the distances between s and all other
points. We show that given a natural assumption about the structure of the
instance, we can efficiently find an accurate clustering using only O(k)
distance queries. Our algorithm uses an active selection strategy to choose a
small set of points that we call landmarks, and considers only the distances
between landmarks and other points to produce a clustering. We use our
algorithm to cluster proteins by sequence similarity. This setting nicely fits
our model because we can use a fast sequence database search program to query a
sequence against an entire dataset. We conduct an empirical study that shows
that even though we query a small fraction of the distances between the points,
we produce clusterings that are close to a desired clustering given by manual
classification.
| no_new_dataset | 0.949435 |
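The landmark idea described in the abstract above can be illustrated with a much-simplified sketch (Python/numpy). This is not the algorithm analyzed in the paper: the one_versus_all query function, the farthest-first selection rule, and the use of exactly k landmarks are assumptions made here for illustration.

import numpy as np

def landmark_clustering(one_versus_all, k):
    # Toy landmark-based clustering using only k one-versus-all distance queries.
    # one_versus_all(i) returns an array of distances from point i to all points.
    landmarks = [0]                             # arbitrary first landmark
    D = [np.asarray(one_versus_all(0), dtype=float)]
    min_dist = D[0].copy()
    for _ in range(k - 1):
        nxt = int(np.argmax(min_dist))          # farthest-first landmark selection
        landmarks.append(nxt)
        D.append(np.asarray(one_versus_all(nxt), dtype=float))
        min_dist = np.minimum(min_dist, D[-1])
    # each point joins the cluster of its nearest landmark
    return np.argmin(np.vstack(D), axis=0)

With k landmarks this sketch issues only k one-versus-all queries, in line with the O(k) query budget mentioned in the abstract.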
1011.4632 | Christos Boutsidis | Christos Boutsidis, Anastasios Zouzias, Petros Drineas | Random Projections for $k$-means Clustering | Neural Information Processing Systems (NIPS) 2010 | null | null | null | cs.AI cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper discusses the topic of dimensionality reduction for $k$-means
clustering. We prove that any set of $n$ points in $d$ dimensions (rows in a
matrix $A \in \RR^{n \times d}$) can be projected into $t = \Omega(k / \eps^2)$
dimensions, for any $\eps \in (0,1/3)$, in $O(n d \lceil \eps^{-2} k/ \log(d)
\rceil )$ time, such that with constant probability the optimal $k$-partition
of the point set is preserved within a factor of $2+\eps$. The projection is
done by post-multiplying $A$ with a $d \times t$ random matrix $R$ having
entries $+1/\sqrt{t}$ or $-1/\sqrt{t}$ with equal probability. A numerical
implementation of our technique and experiments on a large face images dataset
verify the speed and the accuracy of our theoretical results.
| [
{
"version": "v1",
"created": "Sun, 21 Nov 2010 02:37:10 GMT"
}
] | 2011-05-05T00:00:00 | [
[
"Boutsidis",
"Christos",
""
],
[
"Zouzias",
"Anastasios",
""
],
[
"Drineas",
"Petros",
""
]
] | TITLE: Random Projections for $k$-means Clustering
ABSTRACT: This paper discusses the topic of dimensionality reduction for $k$-means
clustering. We prove that any set of $n$ points in $d$ dimensions (rows in a
matrix $A \in \RR^{n \times d}$) can be projected into $t = \Omega(k / \eps^2)$
dimensions, for any $\eps \in (0,1/3)$, in $O(n d \lceil \eps^{-2} k/ \log(d)
\rceil )$ time, such that with constant probability the optimal $k$-partition
of the point set is preserved within a factor of $2+\eps$. The projection is
done by post-multiplying $A$ with a $d \times t$ random matrix $R$ having
entries $+1/\sqrt{t}$ or $-1/\sqrt{t}$ with equal probability. A numerical
implementation of our technique and experiments on a large face images dataset
verify the speed and the accuracy of our theoretical results.
| no_new_dataset | 0.940353 |
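A minimal sketch of the projection step described in the abstract above, assuming numpy and scikit-learn are available; the symbols A, R, t and k follow the abstract's notation, and the concrete sizes in the usage lines are arbitrary examples.

import numpy as np
from sklearn.cluster import KMeans

def random_projection_kmeans(A, k, t, seed=0):
    # Post-multiply the n x d data matrix A by a d x t matrix R whose entries
    # are +1/sqrt(t) or -1/sqrt(t) with equal probability, then cluster the
    # projected points with k-means.
    rng = np.random.default_rng(seed)
    d = A.shape[1]
    R = rng.choice([1.0, -1.0], size=(d, t)) / np.sqrt(t)
    A_proj = A @ R
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(A_proj)

# usage on synthetic data: 1000 points in 500 dimensions projected to t = 60
A = np.random.default_rng(1).normal(size=(1000, 500))
labels = random_projection_kmeans(A, k=5, t=60)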
1012.2363 | Santo Fortunato Dr | Andrea Lancichinetti, Filippo Radicchi, Jose' Javier Ramasco, Santo
Fortunato | Finding statistically significant communities in networks | 24 pages, 25 figures, 1 table. Final version published in PLoS One.
The code of OSLOM is freely available at http://www.oslom.org | PLoS One 6(4), e18961 (2011) | 10.1371/journal.pone.0018961 | null | physics.soc-ph cs.IR cs.SI q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Community structure is one of the main structural features of networks,
revealing both their internal organization and the similarity of their
elementary units. Despite the large variety of methods proposed to detect
communities in graphs, there is a great need for multi-purpose techniques able
to handle different types of datasets and the subtleties of community
structure. In this paper we present OSLOM (Order Statistics Local Optimization
Method), the first method capable of detecting clusters in networks while
accounting for
edge directions, edge weights, overlapping communities, hierarchies and
community dynamics. It is based on the local optimization of a fitness function
expressing the statistical significance of clusters with respect to random
fluctuations, which is estimated with tools of Extreme and Order Statistics.
OSLOM can be used alone or as a refinement procedure of partitions/covers
delivered by other techniques. We have also implemented sequential algorithms
combining OSLOM with other fast techniques, so that the community structure of
very large networks can be uncovered. Our method performs comparably to the
best existing algorithms on artificial benchmark graphs. Several applications
to real networks are shown as well. OSLOM is implemented in freely available
software (http://www.oslom.org), and we believe it will be a
valuable tool in the analysis of networks.
| [
{
"version": "v1",
"created": "Fri, 10 Dec 2010 19:52:21 GMT"
},
{
"version": "v2",
"created": "Wed, 4 May 2011 16:00:19 GMT"
}
] | 2011-05-05T00:00:00 | [
[
"Lancichinetti",
"Andrea",
""
],
[
"Radicchi",
"Filippo",
""
],
[
"Ramasco",
"Jose' Javier",
""
],
[
"Fortunato",
"Santo",
""
]
] | TITLE: Finding statistically significant communities in networks
ABSTRACT: Community structure is one of the main structural features of networks,
revealing both their internal organization and the similarity of their
elementary units. Despite the large variety of methods proposed to detect
communities in graphs, there is a great need for multi-purpose techniques able
to handle different types of datasets and the subtleties of community
structure. In this paper we present OSLOM (Order Statistics Local Optimization
Method), the first method capable of detecting clusters in networks while
accounting for
edge directions, edge weights, overlapping communities, hierarchies and
community dynamics. It is based on the local optimization of a fitness function
expressing the statistical significance of clusters with respect to random
fluctuations, which is estimated with tools of Extreme and Order Statistics.
OSLOM can be used alone or as a refinement procedure of partitions/covers
delivered by other techniques. We have also implemented sequential algorithms
combining OSLOM with other fast techniques, so that the community structure of
very large networks can be uncovered. Our method performs comparably to the
best existing algorithms on artificial benchmark graphs. Several applications
to real networks are shown as well. OSLOM is implemented in freely available
software (http://www.oslom.org), and we believe it will be a
valuable tool in the analysis of networks.
| no_new_dataset | 0.942612 |
1105.0673 | Cristian Danescu-Niculescu-Mizil | Cristian Danescu-Niculescu-Mizil, Michael Gamon, Susan Dumais | Mark My Words! Linguistic Style Accommodation in Social Media | Talk slides available at http://www.cs.cornell.edu/~cristian/www2011 | Proceedings of WWW, pp. 141--150, 2009 | 10.1145/1963405.1963509 | null | cs.CL cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The psycholinguistic theory of communication accommodation accounts for the
general observation that participants in conversations tend to converge to one
another's communicative behavior: they coordinate in a variety of dimensions
including choice of words, syntax, utterance length, pitch and gestures. In its
almost forty years of existence, this theory has been empirically supported
exclusively through small-scale or controlled laboratory studies. Here we
address this phenomenon in the context of Twitter conversations. Undoubtedly,
this setting is unlike any other in which accommodation was observed and, thus,
challenging to the theory. Its novelty comes not only from its size, but also
from the non real-time nature of conversations, from the 140 character length
restriction, from the wide variety of social relation types, and from a design
that was initially not geared towards conversation at all. Given such
constraints, it is not clear a priori whether accommodation is robust enough to
occur in this new environment. To investigate this, we
develop a probabilistic framework that can model accommodation and measure its
effects. We apply it to a large Twitter conversational dataset specifically
developed for this task. This is the first time the hypothesis of linguistic
style accommodation has been examined (and verified) in a large scale, real
world setting. Furthermore, when investigating concepts such as stylistic
influence and symmetry of accommodation, we discover a complexity of the
phenomenon which was never observed before. We also explore the potential
relation between stylistic influence and network features commonly associated
with social status.
| [
{
"version": "v1",
"created": "Tue, 3 May 2011 20:00:05 GMT"
}
] | 2011-05-05T00:00:00 | [
[
"Danescu-Niculescu-Mizil",
"Cristian",
""
],
[
"Gamon",
"Michael",
""
],
[
"Dumais",
"Susan",
""
]
] | TITLE: Mark My Words! Linguistic Style Accommodation in Social Media
ABSTRACT: The psycholinguistic theory of communication accommodation accounts for the
general observation that participants in conversations tend to converge to one
another's communicative behavior: they coordinate in a variety of dimensions
including choice of words, syntax, utterance length, pitch and gestures. In its
almost forty years of existence, this theory has been empirically supported
exclusively through small-scale or controlled laboratory studies. Here we
address this phenomenon in the context of Twitter conversations. Undoubtedly,
this setting is unlike any other in which accommodation was observed and, thus,
challenging to the theory. Its novelty comes not only from its size, but also
from the non real-time nature of conversations, from the 140 character length
restriction, from the wide variety of social relation types, and from a design
that was initially not geared towards conversation at all. Given such
constraints, it is not clear a priori whether accommodation is robust enough to
occur in this new environment. To investigate this, we
develop a probabilistic framework that can model accommodation and measure its
effects. We apply it to a large Twitter conversational dataset specifically
developed for this task. This is the first time the hypothesis of linguistic
style accommodation has been examined (and verified) in a large scale, real
world setting. Furthermore, when investigating concepts such as stylistic
influence and symmetry of accommodation, we discover a complexity of the
phenomenon which was never observed before. We also explore the potential
relation between stylistic influence and network features commonly associated
with social status.
| new_dataset | 0.969266 |
1105.0470 | Jai Sukhatme | Jai Sukhatme and William R. Young | The advection-condensation model and water vapour PDFs | 13 pages, 8 figures, submitted to QJRMS | null | null | null | physics.flu-dyn physics.ao-ph physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The statistically steady humidity distribution resulting from an interaction
of advection, modeled as an uncorrelated random walk of moist parcels on an
isentropic surface, and a vapour sink, modeled as immediate condensation
whenever the specific humidity exceeds a specified saturation humidity, is
explored with theory and simulation. A source supplies moisture at the
deep-tropical southern boundary of the domain, and the saturation humidity is
specified as a monotonically decreasing function of distance from the boundary.
The boundary source balances the interior condensation sink, so that a
stationary spatially inhomogeneous humidity distribution emerges. An exact
solution of the Fokker-Planck equation delivers a simple expression for the
resulting probability density function (PDF) of the water vapour field and also
of the relative humidity. This solution agrees completely with a numerical
simulation of the process, and the humidity PDF exhibits several features of
interest, such as bimodality close to the source and unimodality further from
the source. The PDFs of specific and relative humidity are broad and
non-Gaussian. The domain averaged relative humidity PDF is bimodal with
distinct moist and dry peaks, a feature which we show agrees with middleworld
isentropic PDFs derived from the ERA interim dataset.
| [
{
"version": "v1",
"created": "Tue, 3 May 2011 03:13:07 GMT"
}
] | 2011-05-04T00:00:00 | [
[
"Sukhatme",
"Jai",
""
],
[
"Young",
"William R.",
""
]
] | TITLE: The advection-condensation model and water vapour PDFs
ABSTRACT: The statistically steady humidity distribution resulting from an interaction
of advection, modeled as an uncorrelated random walk of moist parcels on an
isentropic surface, and a vapour sink, modeled as immediate condensation
whenever the specific humidity exceeds a specified saturation humidity, is
explored with theory and simulation. A source supplies moisture at the
deep-tropical southern boundary of the domain, and the saturation humidity is
specified as a monotonically decreasing function of distance from the boundary.
The boundary source balances the interior condensation sink, so that a
stationary spatially inhomogeneous humidity distribution emerges. An exact
solution of the Fokker-Planck equation delivers a simple expression for the
resulting probability density function (PDF) of the water vapour field and also
of the relative humidity. This solution agrees completely with a numerical
simulation of the process, and the humidity PDF exhibits several features of
interest, such as bimodality close to the source and unimodality further from
the source. The PDFs of specific and relative humidity are broad and
non-Gaussian. The domain averaged relative humidity PDF is bimodal with
distinct moist and dry peaks, a feature which we show agrees with middleworld
isentropic PDFs derived from the ERA interim dataset.
| no_new_dataset | 0.957158 |
1104.4605 | Xiaoye Jiang | Xiaoye Jiang and Yuan Yao and Han Liu and Leonidas Guibas | Compressive Network Analysis | null | null | null | null | stat.ML cs.DM cs.LG cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modern data acquisition routinely produces massive amounts of network data.
Though many methods and models have been proposed to analyze such data, the
research on network data is largely disconnected from the classical theory of
statistical learning and signal processing. In this paper, we present a new
framework for modeling network data, which connects two seemingly different
areas: network data analysis and compressed sensing. From a nonparametric
perspective, we model an observed network using a large dictionary. In
particular, we consider the network clique detection problem and show
connections between our formulation and a new algebraic tool, namely Radon
basis pursuit in homogeneous spaces. Such a connection allows us to identify
rigorous recovery conditions for clique detection problems. Though this paper
is mainly conceptual, we also develop practical approximation algorithms for
solving empirical problems and demonstrate their usefulness on real-world
datasets.
| [
{
"version": "v1",
"created": "Sun, 24 Apr 2011 06:06:12 GMT"
}
] | 2011-04-26T00:00:00 | [
[
"Jiang",
"Xiaoye",
""
],
[
"Yao",
"Yuan",
""
],
[
"Liu",
"Han",
""
],
[
"Guibas",
"Leonidas",
""
]
] | TITLE: Compressive Network Analysis
ABSTRACT: Modern data acquisition routinely produces massive amounts of network data.
Though many methods and models have been proposed to analyze such data, the
research on network data is largely disconnected from the classical theory of
statistical learning and signal processing. In this paper, we present a new
framework for modeling network data, which connects two seemingly different
areas: network data analysis and compressed sensing. From a nonparametric
perspective, we model an observed network using a large dictionary. In
particular, we consider the network clique detection problem and show
connections between our formulation and a new algebraic tool, namely Radon
basis pursuit in homogeneous spaces. Such a connection allows us to identify
rigorous recovery conditions for clique detection problems. Though this paper
is mainly conceptual, we also develop practical approximation algorithms for
solving empirical problems and demonstrate their usefulness on real-world
datasets.
| no_new_dataset | 0.943086 |
1104.4153 | Salah Rifai | Salah Rifai, Xavier Muller, Xavier Glorot, Gregoire Mesnil, Yoshua
Bengio and Pascal Vincent | Learning invariant features through local space contraction | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present in this paper a novel approach for training deterministic
auto-encoders. We show that by adding a well chosen penalty term to the
classical reconstruction cost function, we can achieve results that equal or
surpass those attained by other regularized auto-encoders as well as denoising
auto-encoders on a range of datasets. This penalty term corresponds to the
Frobenius norm of the Jacobian matrix of the encoder activations with respect
to the input. We show that this penalty term results in a localized space
contraction which in turn yields robust features on the activation layer.
Furthermore, we show how this penalty term is related to both regularized
auto-encoders and denoising encoders and how it can be seen as a link between
deterministic and non-deterministic auto-encoders. We find empirically that
this penalty helps to carve a representation that better captures the local
directions of variation dictated by the data, corresponding to a
lower-dimensional non-linear manifold, while being more invariant to the vast
majority of directions orthogonal to the manifold. Finally, we show that by
using the learned features to initialize a MLP, we achieve state of the art
classification error on a range of datasets, surpassing other methods of
pre-training.
| [
{
"version": "v1",
"created": "Thu, 21 Apr 2011 01:39:25 GMT"
}
] | 2011-04-22T00:00:00 | [
[
"Rifai",
"Salah",
""
],
[
"Muller",
"Xavier",
""
],
[
"Glorot",
"Xavier",
""
],
[
"Mesnil",
"Gregoire",
""
],
[
"Bengio",
"Yoshua",
""
],
[
"Vincent",
"Pascal",
""
]
] | TITLE: Learning invariant features through local space contraction
ABSTRACT: We present in this paper a novel approach for training deterministic
auto-encoders. We show that by adding a well chosen penalty term to the
classical reconstruction cost function, we can achieve results that equal or
surpass those attained by other regularized auto-encoders as well as denoising
auto-encoders on a range of datasets. This penalty term corresponds to the
Frobenius norm of the Jacobian matrix of the encoder activations with respect
to the input. We show that this penalty term results in a localized space
contraction which in turn yields robust features on the activation layer.
Furthermore, we show how this penalty term is related to both regularized
auto-encoders and denoising encoders and how it can be seen as a link between
deterministic and non-deterministic auto-encoders. We find empirically that
this penalty helps to carve a representation that better captures the local
directions of variation dictated by the data, corresponding to a
lower-dimensional non-linear manifold, while being more invariant to the vast
majority of directions orthogonal to the manifold. Finally, we show that by
using the learned features to initialize a MLP, we achieve state of the art
classification error on a range of datasets, surpassing other methods of
pre-training.
| no_new_dataset | 0.943608 |
1104.4038 | Guido Boffetta | G. Boffetta | El Nino signature in Alaskan river breakups | 4 pages, 2 figures; never able to publish | null | null | null | physics.geo-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A signature of El Nino-Southern Oscillation is found in the historical
dataset of the Alaskan Tanana river breakups where the average ice breaking day
is found to occur about 3.4 days earlier when conditioned on El Nino years.
This result represents a statistically significant example of ENSO
teleconnection in regions remote from the tropical Pacific.
| [
{
"version": "v1",
"created": "Wed, 20 Apr 2011 14:36:30 GMT"
}
] | 2011-04-21T00:00:00 | [
[
"Boffetta",
"G.",
""
]
] | TITLE: El Nino signature in Alaskan river breakups
ABSTRACT: A signature of El Nino-Southern Oscillation is found in the historical
dataset of the Alaskan Tanana river breakups where the average ice breaking day
is found to occur about 3.4 days earlier when conditioned on El Nino years.
This result represents a statistically significant example of ENSO
teleconnection in regions remote from the tropical Pacific.
| no_new_dataset | 0.919208 |
1104.3216 | Feng Niu | Feng Niu (University of Wisconsin-Madison), Christopher R\'e
(University of Wisconsin-Madison), AnHai Doan (University of
Wisconsin-Madison), Jude Shavlik (University of Wisconsin-Madison) | Tuffy: Scaling up Statistical Inference in Markov Logic Networks using
an RDBMS | VLDB2011 | Proceedings of the VLDB Endowment (PVLDB), Vol. 4, No. 6, pp.
373-384 (2011) | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Markov Logic Networks (MLNs) have emerged as a powerful framework that
combines statistical and logical reasoning; they have been applied to many data
intensive problems including information extraction, entity resolution, and
text mining. Current implementations of MLNs do not scale to large real-world
data sets, which is preventing their wide-spread adoption. We present Tuffy
that achieves scalability via three novel contributions: (1) a bottom-up
approach to grounding that allows us to leverage the full power of the
relational optimizer, (2) a novel hybrid architecture that allows us to perform
AI-style local search efficiently using an RDBMS, and (3) a theoretical insight
that shows when one can (exponentially) improve the efficiency of stochastic
local search. We leverage (3) to build novel partitioning, loading, and
parallel algorithms. We show that our approach outperforms state-of-the-art
implementations in both quality and speed on several publicly available
datasets.
| [
{
"version": "v1",
"created": "Sat, 16 Apr 2011 08:52:25 GMT"
}
] | 2011-04-19T00:00:00 | [
[
"Niu",
"Feng",
"",
"University of Wisconsin-Madison"
],
[
"Ré",
"Christopher",
"",
"University of Wisconsin-Madison"
],
[
"Doan",
"AnHai",
"",
"University of\n Wisconsin-Madison"
],
[
"Shavlik",
"Jude",
"",
"University of Wisconsin-Madison"
]
] | TITLE: Tuffy: Scaling up Statistical Inference in Markov Logic Networks using
an RDBMS
ABSTRACT: Markov Logic Networks (MLNs) have emerged as a powerful framework that
combines statistical and logical reasoning; they have been applied to many data
intensive problems including information extraction, entity resolution, and
text mining. Current implementations of MLNs do not scale to large real-world
data sets, which is preventing their wide-spread adoption. We present Tuffy
that achieves scalability via three novel contributions: (1) a bottom-up
approach to grounding that allows us to leverage the full power of the
relational optimizer, (2) a novel hybrid architecture that allows us to perform
AI-style local search efficiently using an RDBMS, and (3) a theoretical insight
that shows when one can (exponentially) improve the efficiency of stochastic
local search. We leverage (3) to build novel partitioning, loading, and
parallel algorithms. We show that our approach outperforms state-of-the-art
implementations in both quality and speed on several publicly available
datasets.
| no_new_dataset | 0.943348 |
1104.1892 | Kallam Suresh | K. Suresh | "Improved FCM algorithm for Clustering on Web Usage Mining" | ISSN(Online):1694-0814.
http://www.ijcsi.org/papers/IJCSI-8-1-42-45.pdf | IJCSI International Journal of Computer Science Issues, Vol.8
Issue 1, January 2011, p42-46 | null | null | cs.IR cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we address the problem that the standard clustering method is
very sensitive to the initial center values, places high requirements on the
data set, and cannot handle noisy data. The proposed method uses information
entropy to initialize the cluster centers and introduces weighting parameters
to adjust the location of the cluster centers and to cope with noise. The web
navigation datasets are sequential in nature; clustering web data means finding
the groups which share common interests and behavior by analyzing the data
collected on the web servers, and the improved fuzzy c-means (FCM) clustering
does this efficiently. Web usage mining is the application of data mining
techniques to web log data repositories. It is used to find user access
patterns from web access logs. Web data clusters are formed using the MSNBC web
navigation dataset.
| [
{
"version": "v1",
"created": "Mon, 11 Apr 2011 09:38:47 GMT"
}
] | 2011-04-12T00:00:00 | [
[
"Suresh",
"K.",
""
]
] | TITLE: "Improved FCM algorithm for Clustering on Web Usage Mining"
ABSTRACT: In this paper we address the problem that the standard clustering method is
very sensitive to the initial center values, places high requirements on the
data set, and cannot handle noisy data. The proposed method uses information
entropy to initialize the cluster centers and introduces weighting parameters
to adjust the location of the cluster centers and to cope with noise. The web
navigation datasets are sequential in nature; clustering web data means finding
the groups which share common interests and behavior by analyzing the data
collected on the web servers, and the improved fuzzy c-means (FCM) clustering
does this efficiently. Web usage mining is the application of data mining
techniques to web log data repositories. It is used to find user access
patterns from web access logs. Web data clusters are formed using the MSNBC web
navigation dataset.
| no_new_dataset | 0.951369 |
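For reference, a compact numpy sketch of the standard fuzzy c-means iteration that the abstract above builds on; the entropy-based initialization and the weighting parameters of the proposed variant are not reproduced here, and the fuzzifier m = 2 and the tolerance are illustrative choices.

import numpy as np

def fuzzy_c_means(X, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
    # Standard FCM: alternate the center update and the membership update.
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)           # memberships of each point sum to 1
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = dist ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        converged = np.abs(U_new - U).max() < tol
        U = U_new
        if converged:
            break
    return centers, U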
1103.0120 | Srimanta Kundu | Srimanta Kundu (1), Nibaran Das and Mita Nasipuri | Automatic Detection of Ringworm using Local Binary Pattern (LBP) | International Symposium on Medical Imaging: Perspectives on
Perception and Diagnostics (MED-IMAGE 2010) organized in conjunction with the
Seventh Indian Conference on Computer Vision, Graphics and Image Processing
(ICVGIP), 9-10th December, 2010 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present a novel approach for automatic recognition of ring
worm skin disease based on LBP (Local Binary Pattern) features extracted from
the affected skin images. The proposed method is evaluated by extensive
experiments on skin images collected from the internet. The dataset is tested
using three different classifiers, i.e. Bayesian, MLP and SVM. Experimental
results show that the proposed methodology efficiently discriminates between
ringworm-affected skin and normal skin. It is a low-cost technique and does not
require any special imaging devices.
| [
{
"version": "v1",
"created": "Tue, 1 Mar 2011 10:06:31 GMT"
},
{
"version": "v2",
"created": "Sun, 3 Apr 2011 20:04:52 GMT"
}
] | 2011-04-05T00:00:00 | [
[
"Kundu",
"Srimanta",
""
],
[
"Das",
"Nibaran",
""
],
[
"Nasipuri",
"Mita",
""
]
] | TITLE: Automatic Detection of Ringworm using Local Binary Pattern (LBP)
ABSTRACT: In this paper we present a novel approach for automatic recognition of ring
worm skin disease based on LBP (Local Binary Pattern) features extracted from
the affected skin images. The proposed method is evaluated by extensive
experiments on skin images collected from the internet. The dataset is tested
using three different classifiers, i.e. Bayesian, MLP and SVM. Experimental
results show that the proposed methodology efficiently discriminates between
ringworm-affected skin and normal skin. It is a low-cost technique and does not
require any special imaging devices.
| no_new_dataset | 0.949902 |
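A minimal sketch of an LBP-histogram pipeline in the spirit of the abstract above, assuming scikit-image and scikit-learn are available; the neighbourhood parameters P = 8, R = 1 and the RBF-kernel SVM are illustrative choices, not the authors' exact settings.

import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray_image, P=8, R=1):
    # Uniform LBP codes of a grayscale image, summarized as a normalized histogram.
    codes = local_binary_pattern(gray_image, P, R, method="uniform")
    n_bins = P + 2                              # number of distinct uniform patterns
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def train_skin_classifier(images, labels):
    # images: list of 2D grayscale arrays; labels: 1 for ringworm, 0 for normal skin.
    features = np.array([lbp_histogram(img) for img in images])
    return SVC(kernel="rbf", gamma="scale").fit(features, labels)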
1104.0579 | Michael Lew | Ye Ji | Image Retrieval Method Using Top-surf Descriptor | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This report presents the results and details of a content-based image
retrieval project using the Top-surf descriptor. The experimental results are
preliminary; however, they show the capability of deducing objects from parts
of the objects or from objects that are similar. This paper uses a dataset
consisting of 1200 images of which 800 images are equally divided into 8
categories, namely airplane, beach, motorbike, forest, elephants, horses, bus
and building, while the other 400 images are randomly picked from the Internet.
The best results achieved are from the building category.
| [
{
"version": "v1",
"created": "Mon, 4 Apr 2011 14:14:47 GMT"
}
] | 2011-04-05T00:00:00 | [
[
"Ji",
"Ye",
""
]
] | TITLE: Image Retrieval Method Using Top-surf Descriptor
ABSTRACT: This report presents the results and details of a content-based image
retrieval project using the Top-surf descriptor. The experimental results are
preliminary; however, they show the capability of deducing objects from parts
of the objects or from objects that are similar. This paper uses a dataset
consisting of 1200 images of which 800 images are equally divided into 8
categories, namely airplane, beach, motorbike, forest, elephants, horses, bus
and building, while the other 400 images are randomly picked from the Internet.
The best results achieved are from the building category.
| new_dataset | 0.906156 |
1102.4016 | Gunnar W. Klau | Sandro Andreotti, Gunnar W. Klau, Knut Reinert | Antilope - A Lagrangian Relaxation Approach to the de novo Peptide
Sequencing Problem | null | null | 10.1109/TCBB.2011.59 | null | cs.DS q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Peptide sequencing from mass spectrometry data is a key step in proteome
research. Especially de novo sequencing, the identification of a peptide from
its spectrum alone, is still a challenge even for state-of-the-art algorithmic
approaches. In this paper we present Antilope, a new fast and flexible approach
based on mathematical programming. It builds on the spectrum graph model and
works with a variety of scoring schemes. Antilope combines Lagrangian
relaxation for solving an integer linear programming formulation with an
adaptation of Yen's k shortest paths algorithm. It shows a significant
improvement in running time compared to mixed integer optimization and performs
at the same speed as other state-of-the-art tools. We also implemented a
generic probabilistic scoring scheme that can be trained automatically for a
dataset of annotated spectra and is independent of the mass spectrometer type.
Evaluations on benchmark data show that Antilope is competitive with the popular
state-of-the-art programs PepNovo and NovoHMM both in terms of run time and
accuracy. Furthermore, it offers increased flexibility in the number of
considered ion types. Antilope will be freely available as part of the open
source proteomics library OpenMS.
| [
{
"version": "v1",
"created": "Sat, 19 Feb 2011 19:36:34 GMT"
}
] | 2011-03-29T00:00:00 | [
[
"Andreotti",
"Sandro",
""
],
[
"Klau",
"Gunnar W.",
""
],
[
"Reinert",
"Knut",
""
]
] | TITLE: Antilope - A Lagrangian Relaxation Approach to the de novo Peptide
Sequencing Problem
ABSTRACT: Peptide sequencing from mass spectrometry data is a key step in proteome
research. Especially de novo sequencing, the identification of a peptide from
its spectrum alone, is still a challenge even for state-of-the-art algorithmic
approaches. In this paper we present Antilope, a new fast and flexible approach
based on mathematical programming. It builds on the spectrum graph model and
works with a variety of scoring schemes. Antilope combines Lagrangian
relaxation for solving an integer linear programming formulation with an
adaptation of Yen's k shortest paths algorithm. It shows a significant
improvement in running time compared to mixed integer optimization and performs
at the same speed as other state-of-the-art tools. We also implemented a
generic probabilistic scoring scheme that can be trained automatically for a
dataset of annotated spectra and is independent of the mass spectrometer type.
Evaluations on benchmark data show that Antilope is competitive with the popular
state-of-the-art programs PepNovo and NovoHMM both in terms of run time and
accuracy. Furthermore, it offers increased flexibility in the number of
considered ion types. Antilope will be freely available as part of the open
source proteomics library OpenMS.
| no_new_dataset | 0.941007 |
1103.4896 | Hugo Larochelle | J\'er\^ome Louradour and Hugo Larochelle | Classification of Sets using Restricted Boltzmann Machines | 17 pages, 4 figures | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of classification when inputs correspond to sets of
vectors. This setting occurs in many problems such as the classification of
pieces of mail containing several pages, of web sites with several sections or
of images that have been pre-segmented into smaller regions. We propose
generalizations of the restricted Boltzmann machine (RBM) that are appropriate
in this context and explore how to incorporate different assumptions about the
relationship between the input sets and the target class within the RBM. In
experiments on standard multiple-instance learning datasets, we demonstrate the
competitiveness of approaches based on RBMs and apply the proposed variants to
the problem of incoming mail classification.
| [
{
"version": "v1",
"created": "Fri, 25 Mar 2011 02:33:27 GMT"
}
] | 2011-03-28T00:00:00 | [
[
"Louradour",
"Jérôme",
""
],
[
"Larochelle",
"Hugo",
""
]
] | TITLE: Classification of Sets using Restricted Boltzmann Machines
ABSTRACT: We consider the problem of classification when inputs correspond to sets of
vectors. This setting occurs in many problems such as the classification of
pieces of mail containing several pages, of web sites with several sections or
of images that have been pre-segmented into smaller regions. We propose
generalizations of the restricted Boltzmann machine (RBM) that are appropriate
in this context and explore how to incorporate different assumptions about the
relationship between the input sets and the target class within the RBM. In
experiments on standard multiple-instance learning datasets, we demonstrate the
competitiveness of approaches based on RBMs and apply the proposed variants to
the problem of incoming mail classification.
| no_new_dataset | 0.945096 |
1103.4778 | Jos\'e L Balc\'azar Navarro | Jos\'e L. Balc\'azar | Formal and Computational Properties of the Confidence Boost of
Association Rules | null | null | null | null | cs.DB cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Some existing notions of redundancy among association rules allow for a
logical-style characterization and lead to irredundant bases of absolutely
minimum size. One can push the intuition of redundancy further and find an
intuitive notion of interest of an association rule, in terms of its "novelty"
with respect to other rules. Namely: an irredundant rule is so because its
confidence is higher than what the rest of the rules would suggest; then, one
can ask: how much higher? We propose to measure such a sort of "novelty"
through the confidence boost of a rule, which encompasses two previous similar
notions (confidence width and rule blocking, of which the latter is closely
related to the earlier measure "improvement"). Acting as a complement to
confidence and support, the confidence boost helps to obtain small and crisp
sets of mined association rules, and solves the well-known problem that, in
certain cases, rules of negative correlation may pass the confidence bound. We
analyze the properties of two versions of the notion of confidence boost, one
of them a natural generalization of the other. We develop efficient
algorithmics to filter rules according to their confidence boost, compare the
concept to some similar notions in the bibliography, and describe the results
of some experimentation employing the new notions on standard benchmark
datasets. We describe an open-source association mining tool that embodies one
of our variants of confidence boost in such a way that the data mining process
does not require the user to select any value for any parameter.
| [
{
"version": "v1",
"created": "Thu, 24 Mar 2011 14:45:50 GMT"
}
] | 2011-03-25T00:00:00 | [
[
"Balcázar",
"José L.",
""
]
] | TITLE: Formal and Computational Properties of the Confidence Boost of
Association Rules
ABSTRACT: Some existing notions of redundancy among association rules allow for a
logical-style characterization and lead to irredundant bases of absolutely
minimum size. One can push the intuition of redundancy further and find an
intuitive notion of interest of an association rule, in terms of its "novelty"
with respect to other rules. Namely: an irredundant rule is so because its
confidence is higher than what the rest of the rules would suggest; then, one
can ask: how much higher? We propose to measure such a sort of "novelty"
through the confidence boost of a rule, which encompasses two previous similar
notions (confidence width and rule blocking, of which the latter is closely
related to the earlier measure "improvement"). Acting as a complement to
confidence and support, the confidence boost helps to obtain small and crisp
sets of mined association rules, and solves the well-known problem that, in
certain cases, rules of negative correlation may pass the confidence bound. We
analyze the properties of two versions of the notion of confidence boost, one
of them a natural generalization of the other. We develop efficient
algorithmics to filter rules according to their confidence boost, compare the
concept to some similar notions in the bibliography, and describe the results
of some experimentation employing the new notions on standard benchmark
datasets. We describe an open-source association mining tool that embodies one
of our variants of confidence boost in such a way that the data mining process
does not require the user to select any value for any parameter.
| no_new_dataset | 0.9462 |
1103.4480 | Kishor Barman | Kishor Barman, Onkar Dabeer | Clustered regression with unknown clusters | 9 pages, Submitted to KDD 2011, San Diego | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a collection of prediction experiments, which are clustered in
the sense that groups of experiments exhibit similar relationship between the
predictor and response variables. The experiment clusters as well as the
regression relationships are unknown. The regression relationships define
the experiment clusters, and in general, the predictor and response variables
may not exhibit any clustering. We call this prediction problem clustered
regression with unknown clusters (CRUC) and in this paper we focus on linear
regression. We study and compare several methods for CRUC, demonstrate their
applicability to the Yahoo Learning-to-rank Challenge (YLRC) dataset, and
investigate an associated mathematical model. CRUC is at the crossroads of many
prior works and we study several prediction algorithms with diverse origins: an
adaptation of the expectation-maximization algorithm, an approach inspired by
K-means clustering, the singular value thresholding approach to matrix rank
minimization under quadratic constraints, an adaptation of the Curds and Whey
method in multiple regression, and a local regression (LoR) scheme reminiscent
of neighborhood methods in collaborative filtering. Based on empirical
evaluation on the YLRC dataset as well as simulated data, we identify the LoR
method as a good practical choice: it yields best or near-best prediction
performance at a reasonable computational load, and it is less sensitive to the
choice of the algorithm parameter. We also provide some analysis of the LoR
method for an associated mathematical model, which sheds light on optimal
parameter choice and prediction performance.
| [
{
"version": "v1",
"created": "Wed, 23 Mar 2011 10:20:14 GMT"
}
] | 2011-03-24T00:00:00 | [
[
"Barman",
"Kishor",
""
],
[
"Dabeer",
"Onkar",
""
]
] | TITLE: Clustered regression with unknown clusters
ABSTRACT: We consider a collection of prediction experiments, which are clustered in
the sense that groups of experiments exhibit similar relationship between the
predictor and response variables. The experiment clusters as well as the
regression relationships are unknown. The regression relationships define
the experiment clusters, and in general, the predictor and response variables
may not exhibit any clustering. We call this prediction problem clustered
regression with unknown clusters (CRUC) and in this paper we focus on linear
regression. We study and compare several methods for CRUC, demonstrate their
applicability to the Yahoo Learning-to-rank Challenge (YLRC) dataset, and
investigate an associated mathematical model. CRUC is at the crossroads of many
prior works and we study several prediction algorithms with diverse origins: an
adaptation of the expectation-maximization algorithm, an approach inspired by
K-means clustering, the singular value thresholding approach to matrix rank
minimization under quadratic constraints, an adaptation of the Curds and Whey
method in multiple regression, and a local regression (LoR) scheme reminiscent
of neighborhood methods in collaborative filtering. Based on empirical
evaluation on the YLRC dataset as well as simulated data, we identify the LoR
method as a good practical choice: it yields best or near-best prediction
performance at a reasonable computational load, and it is less sensitive to the
choice of the algorithm parameter. We also provide some analysis of the LoR
method for an associated mathematical model, which sheds light on optimal
parameter choice and prediction performance.
| no_new_dataset | 0.943243 |
1103.3103 | Mohamed Yakout | Mohamed Yakout (Purdue University), Ahmed K. Elmagarmid (Qatar
Computing Research Institute), Jennifer Neville (Purdue University), Mourad
Ouzzani (Purdue University), Ihab F. Ilyas (University of Waterloo) | Guided Data Repair | VLDB2011 | Proceedings of the VLDB Endowment (PVLDB), Vol. 4, No. 5, pp.
279-289 (2011) | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present GDR, a Guided Data Repair framework that
incorporates user feedback in the cleaning process to enhance and accelerate
existing automatic repair techniques while minimizing user involvement. GDR
consults the user on the updates that are most likely to be beneficial in
improving data quality. GDR also uses machine learning methods to identify and
apply the correct updates directly to the database without the actual
involvement of the user on these specific updates. To rank potential updates
for consultation by the user, we first group these repairs and quantify the
utility of each group using the decision-theory concept of value of information
(VOI). We then apply active learning to order updates within a group based on
their ability to improve the learned model. User feedback is used to repair the
database and to adaptively refine the training set for the model. We
empirically evaluate GDR on a real-world dataset and show significant
improvement in data quality using our user-guided repairing process. We also
assess the trade-off between the user effort and the resulting data quality.
| [
{
"version": "v1",
"created": "Wed, 16 Mar 2011 05:51:51 GMT"
}
] | 2011-03-17T00:00:00 | [
[
"Yakout",
"Mohamed",
"",
"Purdue University"
],
[
"Elmagarmid",
"Ahmed K.",
"",
"Qatar\n Computing Research Institute"
],
[
"Neville",
"Jennifer",
"",
"Purdue University"
],
[
"Ouzzani",
"Mourad",
"",
"Purdue University"
],
[
"Ilyas",
"Ihab F.",
"",
"University of Waterloo"
]
] | TITLE: Guided Data Repair
ABSTRACT: In this paper we present GDR, a Guided Data Repair framework that
incorporates user feedback in the cleaning process to enhance and accelerate
existing automatic repair techniques while minimizing user involvement. GDR
consults the user on the updates that are most likely to be beneficial in
improving data quality. GDR also uses machine learning methods to identify and
apply the correct updates directly to the database without the actual
involvement of the user on these specific updates. To rank potential updates
for consultation by the user, we first group these repairs and quantify the
utility of each group using the decision-theory concept of value of information
(VOI). We then apply active learning to order updates within a group based on
their ability to improve the learned model. User feedback is used to repair the
database and to adaptively refine the training set for the model. We
empirically evaluate GDR on a real-world dataset and show significant
improvement in data quality using our user-guided repairing process. We also
assess the trade-off between the user effort and the resulting data quality.
| no_new_dataset | 0.95018 |
1103.2410 | Vibhor Rastogi | Vibhor Rastogi (Yahoo! Research), Nilesh Dalvi (Yahoo! Research),
Minos Garofalakis (Technical University of Crete) | Large-Scale Collective Entity Matching | VLDB2011 | Proceedings of the VLDB Endowment (PVLDB), Vol. 4, No. 4, pp.
208-218 (2011) | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There have been several recent advancements in the Machine Learning community on
the Entity Matching (EM) problem. However, their lack of scalability has
prevented them from being applied in practical settings on large real-life
datasets. Towards this end, we propose a principled framework to scale any
generic EM algorithm. Our technique consists of running multiple instances of
the EM algorithm on small neighborhoods of the data and passing messages across
neighborhoods to construct a global solution. We prove formal properties of our
framework and experimentally demonstrate the effectiveness of our approach in
scaling EM algorithms.
| [
{
"version": "v1",
"created": "Sat, 12 Mar 2011 01:09:30 GMT"
}
] | 2011-03-15T00:00:00 | [
[
"Rastogi",
"Vibhor",
"",
"Yahoo! Research"
],
[
"Dalvi",
"Nilesh",
"",
"Yahoo! Research"
],
[
"Garofalakis",
"Minos",
"",
"Technical University of Crete"
]
] | TITLE: Large-Scale Collective Entity Matching
ABSTRACT: There have been several recent advancements in the Machine Learning community on
the Entity Matching (EM) problem. However, their lack of scalability has
prevented them from being applied in practical settings on large real-life
datasets. Towards this end, we propose a principled framework to scale any
generic EM algorithm. Our technique consists of running multiple instances of
the EM algorithm on small neighborhoods of the data and passing messages across
neighborhoods to construct a global solution. We prove formal properties of our
framework and experimentally demonstrate the effectiveness of our approach in
scaling EM algorithms.
| no_new_dataset | 0.950273 |
1103.1777 | Jan Egger | Jan Egger, Miriam H. A. Bauer, Daniela Kuhnt, Christoph Kappus,
Barbara Carl, Bernd Freisleben, Christopher Nimsky | A Flexible Semi-Automatic Approach for Glioblastoma multiforme
Segmentation | 4 pages, 4 figures, BIOSIGNAL, Berlin, 2010 | null | null | null | cs.CE physics.med-ph q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gliomas are the most common primary brain tumors, evolving from the cerebral
supportive cells. For clinical follow-up, the evaluation of the preoperative
tumor volume is essential. Volumetric assessment of tumor volume with manual
segmentation of its outlines is a time-consuming process that can be overcome
with the help of segmentation methods. In this paper, a flexible semi-automatic
approach for grade IV glioma segmentation is presented. The approach uses a
novel segmentation scheme for spherical objects that creates a directed 3D
graph. Thereafter, the minimal cost closed set on the graph is computed via a
polynomial time s-t cut, creating an optimal segmentation of the tumor. The
user can improve the results by specifying an arbitrary number of additional
seed points to support the algorithm with grey value information and
geometrical constraints. The presented method is tested on 12 magnetic
resonance imaging datasets. The ground truth of the tumor boundaries are
manually extracted by neurosurgeons. The segmented gliomas are compared with a
one click method, and the semi-automatic approach yields an average Dice
Similarity Coefficient (DSC) of 77.72% and 83.91%, respectively.
| [
{
"version": "v1",
"created": "Wed, 9 Mar 2011 13:27:22 GMT"
}
] | 2011-03-10T00:00:00 | [
[
"Egger",
"Jan",
""
],
[
"Bauer",
"Miriam H. A.",
""
],
[
"Kuhnt",
"Daniela",
""
],
[
"Kappus",
"Christoph",
""
],
[
"Carl",
"Barbara",
""
],
[
"Freisleben",
"Bernd",
""
],
[
"Nimsky",
"Christopher",
""
]
] | TITLE: A Flexible Semi-Automatic Approach for Glioblastoma multiforme
Segmentation
ABSTRACT: Gliomas are the most common primary brain tumors, evolving from the cerebral
supportive cells. For clinical follow-up, the evaluation of the preoperative
tumor volume is essential. Volumetric assessment of tumor volume with manual
segmentation of its outlines is a time-consuming process that can be overcome
with the help of segmentation methods. In this paper, a flexible semi-automatic
approach for grade IV glioma segmentation is presented. The approach uses a
novel segmentation scheme for spherical objects that creates a directed 3D
graph. Thereafter, the minimal cost closed set on the graph is computed via a
polynomial time s-t cut, creating an optimal segmentation of the tumor. The
user can improve the results by specifying an arbitrary number of additional
seed points to support the algorithm with grey value information and
geometrical constraints. The presented method is tested on 12 magnetic
resonance imaging datasets. The ground truth of the tumor boundaries are
manually extracted by neurosurgeons. The segmented gliomas are compared with a
one click method, and the semi-automatic approach yields an average Dice
Similarity Coefficient (DSC) of 77.72% and 83.91%, respectively.
| no_new_dataset | 0.949295 |
1103.0825 | Thanh Tran | Graham Cormode, Magda Procopiuc, Divesh Srivastava, Thanh T. L. Tran | Differentially Private Publication of Sparse Data | null | null | null | null | cs.DB | http://creativecommons.org/licenses/by-nc-sa/3.0/ | The problem of privately releasing data is to provide a version of a dataset
without revealing sensitive information about the individuals who contribute to
the data. The model of differential privacy allows such private release while
providing strong guarantees on the output. A basic mechanism achieves
differential privacy by adding noise to the frequency counts in the contingency
tables (or, a subset of the count data cube) derived from the dataset. However,
when the dataset is sparse in its underlying space, as is the case for most
multi-attribute relations, then the effect of adding noise is to vastly
increase the size of the published data: it implicitly creates a huge number of
dummy data points to mask the true data, making it almost impossible to work
with.
We present techniques to overcome this roadblock and allow efficient private
release of sparse data, while maintaining the guarantees of differential
privacy. Our approach is to release a compact summary of the noisy data.
Generating the noisy data and then summarizing it would still be very costly,
so we show how to shortcut this step, and instead directly generate the summary
from the input data, without materializing the vast intermediate noisy data. We
instantiate this outline for a variety of sampling and filtering methods, and
show how to use the resulting summary for approximate, private, query
answering. Our experimental study shows that this is an effective, practical
solution, with comparable and occasionally improved utility over the costly
materialization approach.
| [
{
"version": "v1",
"created": "Fri, 4 Mar 2011 05:02:47 GMT"
}
] | 2011-03-07T00:00:00 | [
[
"Cormode",
"Graham",
""
],
[
"Procopiuc",
"Magda",
""
],
[
"Srivastava",
"Divesh",
""
],
[
"Tran",
"Thanh T. L.",
""
]
] | TITLE: Differentially Private Publication of Sparse Data
ABSTRACT: The problem of privately releasing data is to provide a version of a dataset
without revealing sensitive information about the individuals who contribute to
the data. The model of differential privacy allows such private release while
providing strong guarantees on the output. A basic mechanism achieves
differential privacy by adding noise to the frequency counts in the contingency
tables (or, a subset of the count data cube) derived from the dataset. However,
when the dataset is sparse in its underlying space, as is the case for most
multi-attribute relations, then the effect of adding noise is to vastly
increase the size of the published data: it implicitly creates a huge number of
dummy data points to mask the true data, making it almost impossible to work
with.
We present techniques to overcome this roadblock and allow efficient private
release of sparse data, while maintaining the guarantees of differential
privacy. Our approach is to release a compact summary of the noisy data.
Generating the noisy data and then summarizing it would still be very costly,
so we show how to shortcut this step, and instead directly generate the summary
from the input data, without materializing the vast intermediate noisy data. We
instantiate this outline for a variety of sampling and filtering methods, and
show how to use the resulting summary for approximate, private, query
answering. Our experimental study shows that this is an effective, practical
solution, with comparable and occasionally improved utility over the costly
materialization approach.
| no_new_dataset | 0.942029 |
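For reference, the basic mechanism mentioned in the abstract above, adding noise to frequency counts, takes only a few lines when instantiated with the standard Laplace noise of scale 1/epsilon; the summarization techniques that avoid materializing the noisy contingency table are not shown, and the epsilon in the usage line is an arbitrary example.

import numpy as np

def laplace_noisy_counts(counts, epsilon, seed=0):
    # Epsilon-differentially private release of a vector of frequency counts.
    # One individual changes each count by at most 1 (sensitivity 1), so
    # Laplace noise with scale 1/epsilon suffices.
    rng = np.random.default_rng(seed)
    counts = np.asarray(counts, dtype=float)
    return counts + rng.laplace(loc=0.0, scale=1.0 / epsilon, size=counts.shape)

# usage: a sparse count vector from a small contingency table
noisy = laplace_noisy_counts([5, 0, 0, 2, 0, 1], epsilon=0.5)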
1103.0102 | Dacheng Tao | Tianyi Zhou and Dacheng Tao | Multi-label Learning via Structured Decomposition and Group Sparsity | 13 pages, 3 tables | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by-nc-sa/3.0/ | In multi-label learning, each sample is associated with several labels.
Existing works indicate that exploring correlations between labels improves the
prediction performance. However, embedding the label correlations into the
training process significantly increases the problem size. Moreover, the
mapping of the label structure in the feature space is not clear. In this
paper, we propose a novel multi-label learning method "Structured Decomposition
+ Group Sparsity (SDGS)". In SDGS, we learn a feature subspace for each label
from the structured decomposition of the training data, and predict the labels
of a new sample from its group sparse representation on the multi-subspace
obtained from the structured decomposition. In particular, in the training
stage, we decompose the data matrix $X\in R^{n\times p}$ as
$X=\sum_{i=1}^kL^i+S$, wherein the rows of $L^i$ associated with samples that
belong to label $i$ are nonzero and constitute a low-rank matrix, while the other
rows are all-zeros, the residual $S$ is a sparse matrix. The row space of $L_i$
is the feature subspace corresponding to label $i$. This decomposition can be
efficiently obtained via randomized optimization. In the prediction stage, we
estimate the group sparse representation of a new sample on the multi-subspace
via group \emph{lasso}. The nonzero representation coefficients tend to
concentrate on the subspaces of labels that the sample belongs to, and thus an
effective prediction can be obtained. We evaluate SDGS on several real datasets
and compare it with popular methods. Results verify the effectiveness and
efficiency of SDGS.
| [
{
"version": "v1",
"created": "Tue, 1 Mar 2011 08:15:28 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Mar 2011 00:00:13 GMT"
}
] | 2011-03-04T00:00:00 | [
[
"Zhou",
"Tianyi",
""
],
[
"Tao",
"Dacheng",
""
]
] | TITLE: Multi-label Learning via Structured Decomposition and Group Sparsity
ABSTRACT: In multi-label learning, each sample is associated with several labels.
Existing works indicate that exploring correlations between labels improves the
prediction performance. However, embedding the label correlations into the
training process significantly increases the problem size. Moreover, the
mapping of the label structure in the feature space is not clear. In this
paper, we propose a novel multi-label learning method "Structured Decomposition
+ Group Sparsity (SDGS)". In SDGS, we learn a feature subspace for each label
from the structured decomposition of the training data, and predict the labels
of a new sample from its group sparse representation on the multi-subspace
obtained from the structured decomposition. In particular, in the training
stage, we decompose the data matrix $X\in R^{n\times p}$ as
$X=\sum_{i=1}^kL^i+S$, wherein the rows of $L^i$ associated with samples that
belong to label $i$ are nonzero and constitute a low-rank matrix, while the other
rows are all-zeros, the residual $S$ is a sparse matrix. The row space of $L_i$
is the feature subspace corresponding to label $i$. This decomposition can be
efficiently obtained via randomized optimization. In the prediction stage, we
estimate the group sparse representation of a new sample on the multi-subspace
via group \emph{lasso}. The nonzero representation coefficients tend to
concentrate on the subspaces of labels that the sample belongs to, and thus an
effective prediction can be obtained. We evaluate SDGS on several real datasets
and compare it with popular methods. Results verify the effectiveness and
efficiency of SDGS.
| no_new_dataset | 0.942718 |
1103.0086 | Xin Liu | Xin Liu and Gilles Tredan and Anwitaman Datta | A generic trust framework for large-scale open systems using machine
learning | 30 pages | null | null | null | cs.DC cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many large scale distributed systems and on the web, agents need to
interact with other unknown agents to carry out some tasks or transactions. The
ability to reason about and assess the potential risks in carrying out such
transactions is essential for providing a safe and reliable environment. A
traditional approach to reason about the trustworthiness of a transaction is to
determine the trustworthiness of the specific agent involved, derived from the
history of its behavior. As a departure from such traditional trust models, we
propose a generic, machine learning approach based trust framework where an
agent uses its own previous transactions (with other agents) to build a
knowledge base, and utilizes this to assess the trustworthiness of a transaction
based on associated features, which are capable of distinguishing successful
transactions from unsuccessful ones. These features are harnessed using
appropriate machine learning algorithms to extract relationships between the
potential transaction and previous transactions. The trace driven experiments
using a real auction dataset show that this approach provides good accuracy and
is highly efficient compared to other trust mechanisms, especially when
historical information of the specific agent is rare, incomplete or inaccurate.
| [
{
"version": "v1",
"created": "Tue, 1 Mar 2011 06:03:15 GMT"
}
] | 2011-03-02T00:00:00 | [
[
"Liu",
"Xin",
""
],
[
"Tredan",
"Gilles",
""
],
[
"Datta",
"Anwitaman",
""
]
] | TITLE: A generic trust framework for large-scale open systems using machine
learning
ABSTRACT: In many large scale distributed systems and on the web, agents need to
interact with other unknown agents to carry out some tasks or transactions. The
ability to reason about and assess the potential risks in carrying out such
transactions is essential for providing a safe and reliable environment. A
traditional approach to reason about the trustworthiness of a transaction is to
determine the trustworthiness of the specific agent involved, derived from the
history of its behavior. As a departure from such traditional trust models, we
propose a generic, machine learning approach based trust framework where an
agent uses its own previous transactions (with other agents) to build a
knowledge base, and utilizes this to assess the trustworthiness of a transaction
based on associated features, which are capable of distinguishing successful
transactions from unsuccessful ones. These features are harnessed using
appropriate machine learning algorithms to extract relationships between the
potential transaction and previous transactions. The trace driven experiments
using a real auction dataset show that this approach provides good accuracy and
is highly efficient compared to other trust mechanisms, especially when
historical information of the specific agent is rare, incomplete or inaccurate.
| no_new_dataset | 0.947527 |
1102.4770 | Juan Guan | Juan Guan, Bo Wang, and Steve Granick | Automated Line Tracking of lambda-DNA for Single-Molecule Imaging | null | null | null | null | physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe a straightforward, automated line tracking method to visualize
within optical resolution the contour of linear macromolecules as they
rearrange shape as a function of time by Brownian diffusion and under external
fields such as electrophoresis. Three sequential stages of analysis underpin
this method: first, "feature finding" to discriminate signal from noise;
second, "line tracking" to approximate those shapes as lines; third, "temporal
consistency check" to discriminate reasonable from unreasonable fitted
conformations in the time domain. The automated nature of this data analysis
makes it straightforward to accumulate vast quantities of data while excluding
the unreliable parts of it. We implement the analysis on fluorescence images of
lambda-DNA molecules in agarose gel to demonstrate its capability to produce
large datasets for subsequent statistical analysis.
| [
{
"version": "v1",
"created": "Wed, 23 Feb 2011 16:00:56 GMT"
}
] | 2011-02-24T00:00:00 | [
[
"Guan",
"Juan",
""
],
[
"Wang",
"Bo",
""
],
[
"Granick",
"Steve",
""
]
] | TITLE: Automated Line Tracking of lambda-DNA for Single-Molecule Imaging
ABSTRACT: We describe a straightforward, automated line tracking method to visualize
within optical resolution the contour of linear macromolecules as they
rearrange shape as a function of time by Brownian diffusion and under external
fields such as electrophoresis. Three sequential stages of analysis underpin
this method: first, "feature finding" to discriminate signal from noise;
second, "line tracking" to approximate those shapes as lines; third, "temporal
consistency check" to discriminate reasonable from unreasonable fitted
conformations in the time domain. The automated nature of this data analysis
makes it straightforward to accumulate vast quantities of data while excluding
the unreliable parts of it. We implement the analysis on fluorescence images of
lambda-DNA molecules in agarose gel to demonstrate its capability to produce
large datasets for subsequent statistical analysis.
| no_new_dataset | 0.95222 |
1102.4104 | Gang Fang | Gang Fang, Wen Wang, Benjamin Oatley, Brian Van Ness, Michael
Steinbach, Vipin Kumar | Characterizing Discriminative Patterns | null | null | null | null | cs.DB cs.IT math.IT q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Discriminative patterns are association patterns that occur with
disproportionate frequency in some classes versus others, and have been studied
under names such as emerging patterns and contrast sets. Such patterns have
demonstrated considerable value for classification and subgroup discovery, but
a detailed understanding of the types of interactions among items in a
discriminative pattern is lacking. To address this issue, we propose to
categorize discriminative patterns according to four types of item interaction:
(i) driver-passenger, (ii) coherent, (iii) independent additive and (iv)
synergistic beyond independent additive. Either of the last three is of
practical importance, with the latter two representing a gain in the
discriminative power of a pattern over its subsets. Synergistic patterns are
most restrictive, but perhaps the most interesting since they capture a
cooperative effect. For domains such as genetic research, differentiating among
these types of patterns is critical since each yields very different biological
interpretations. For general domains, the characterization provides a novel
view of the nature of the discriminative patterns in a dataset, which yields
insights beyond those provided by current approaches that focus mostly on
pattern-based classification and subgroup discovery. This paper presents a
comprehensive discussion that defines these four pattern types and investigates
their properties and their relationship to one another. In addition, these
ideas are explored for a variety of datasets (ten UCI datasets, one gene
expression dataset and two genetic-variation datasets). The results demonstrate
the existence, characteristics and statistical significance of the different
types of patterns. They also illustrate how pattern characterization can
provide novel insights into discriminative pattern mining and the
discriminative structure of different datasets.
| [
{
"version": "v1",
"created": "Sun, 20 Feb 2011 21:34:52 GMT"
}
] | 2011-02-22T00:00:00 | [
[
"Fang",
"Gang",
""
],
[
"Wang",
"Wen",
""
],
[
"Oatley",
"Benjamin",
""
],
[
"Van Ness",
"Brian",
""
],
[
"Steinbach",
"Michael",
""
],
[
"Kumar",
"Vipin",
""
]
] | TITLE: Characterizing Discriminative Patterns
ABSTRACT: Discriminative patterns are association patterns that occur with
disproportionate frequency in some classes versus others, and have been studied
under names such as emerging patterns and contrast sets. Such patterns have
demonstrated considerable value for classification and subgroup discovery, but
a detailed understanding of the types of interactions among items in a
discriminative pattern is lacking. To address this issue, we propose to
categorize discriminative patterns according to four types of item interaction:
(i) driver-passenger, (ii) coherent, (iii) independent additive and (iv)
synergistic beyond independent additive. Either of the last three is of
practical importance, with the latter two representing a gain in the
discriminative power of a pattern over its subsets. Synergistic patterns are
most restrictive, but perhaps the most interesting since they capture a
cooperative effect. For domains such as genetic research, differentiating among
these types of patterns is critical since each yields very different biological
interpretations. For general domains, the characterization provides a novel
view of the nature of the discriminative patterns in a dataset, which yields
insights beyond those provided by current approaches that focus mostly on
pattern-based classification and subgroup discovery. This paper presents a
comprehensive discussion that defines these four pattern types and investigates
their properties and their relationship to one another. In addition, these
ideas are explored for a variety of datasets (ten UCI datasets, one gene
expression dataset and two genetic-variation datasets). The results demonstrate
the existence, characteristics and statistical significance of the different
types of patterns. They also illustrate how pattern characterization can
provide novel insights into discriminative pattern mining and the
discriminative structure of different datasets.
| no_new_dataset | 0.947866 |
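A discriminative pattern is one whose support differs sharply between classes.
The sketch below scores candidate itemsets by a plain difference of per-class
supports; the toy transactions, class names, and 2-item pattern size are
illustrative assumptions, and the score is a generic contrast measure rather
than any of the four interaction categories defined in the paper.

```python
from itertools import combinations

# Toy transactional dataset with class labels (illustrative data).
transactions = [
    ({"a", "b", "c"}, "case"),
    ({"a", "b"},      "case"),
    ({"a", "c"},      "case"),
    ({"b", "c"},      "control"),
    ({"c"},           "control"),
    ({"a"},           "control"),
]

def support(itemset, label):
    """Fraction of transactions of the given class that contain the itemset."""
    rows = [t for t, y in transactions if y == label]
    return sum(itemset <= t for t in rows) / len(rows)

# Score all 2-item patterns by the difference in support between classes.
items = sorted(set().union(*(t for t, _ in transactions)))
for pattern in map(set, combinations(items, 2)):
    diff = support(pattern, "case") - support(pattern, "control")
    print(sorted(pattern), round(diff, 2))
```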
1102.3828 | Herve Jegou | Herv\'e J\'egou (INRIA - IRISA), Romain Tavenard (INRIA - IRISA),
Matthijs Douze (INRIA Rh\^one-Alpes / LJK Laboratoire Jean Kuntzmann, SED),
Laurent Amsaleg (INRIA - IRISA) | Searching in one billion vectors: re-rank with source coding | International Conference on Acoustics, Speech and Signal Processing,
Prague : Czech Republic (2011) | null | null | null | cs.IR cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent indexing techniques inspired by source coding have been shown
successful to index billions of high-dimensional vectors in memory. In this
paper, we propose an approach that re-ranks the neighbor hypotheses obtained by
these compressed-domain indexing methods. In contrast to the usual
post-verification scheme, which performs exact distance calculation on the
short-list of hypotheses, the estimated distances are refined based on short
quantization codes, to avoid reading the full vectors from disk. We have
released a new public dataset of one billion 128-dimensional vectors and
proposed an experimental setup to evaluate high dimensional indexing algorithms
on a realistic scale. Experiments show that our method accurately and
efficiently re-ranks the neighbor hypotheses using little memory compared to
the full vectors representation.
| [
{
"version": "v1",
"created": "Fri, 18 Feb 2011 13:15:37 GMT"
}
] | 2011-02-21T00:00:00 | [
[
"Jégou",
"Hervé",
"",
"INRIA - IRISA"
],
[
"Tavenard",
"Romain",
"",
"INRIA - IRISA"
],
[
"Douze",
"Matthijs",
"",
"INRIA Rhône-Alpes / LJK Laboratoire Jean Kuntzmann, SED"
],
[
"Amsaleg",
"Laurent",
"",
"INRIA - IRISA"
]
] | TITLE: Searching in one billion vectors: re-rank with source coding
ABSTRACT: Recent indexing techniques inspired by source coding have been shown
successful to index billions of high-dimensional vectors in memory. In this
paper, we propose an approach that re-ranks the neighbor hypotheses obtained by
these compressed-domain indexing methods. In contrast to the usual
post-verification scheme, which performs exact distance calculation on the
short-list of hypotheses, the estimated distances are refined based on short
quantization codes, to avoid reading the full vectors from disk. We have
released a new public dataset of one billion 128-dimensional vectors and
proposed an experimental setup to evaluate high dimensional indexing algorithms
on a realistic scale. Experiments show that our method accurately and
efficiently re-ranks the neighbor hypotheses using little memory compared to
the full vectors representation.
| new_dataset | 0.954009 |
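The re-ranking idea above rests on estimating query-to-vector distances from
short quantization codes instead of reading full vectors. The sketch below is a
generic product-quantization-style asymmetric distance estimate, not the
authors' exact method or their released billion-vector dataset; the
dimensionality, number of sub-vectors, codebook size, and toy data are
assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
d, m, ks = 128, 8, 32        # dimension, sub-vectors, centroids per sub-vector
sub = d // m

X = rng.standard_normal((5000, d)).astype(np.float32)   # toy database
q = rng.standard_normal(d).astype(np.float32)            # toy query

# Train one small codebook per sub-vector and encode the database compactly.
codebooks, codes = [], np.empty((len(X), m), dtype=np.uint8)
for j in range(m):
    block = X[:, j * sub:(j + 1) * sub]
    km = KMeans(n_clusters=ks, n_init=4, random_state=0).fit(block)
    codebooks.append(km.cluster_centers_)
    codes[:, j] = km.labels_

# Asymmetric distance: per-sub-vector tables of squared distances between the
# query and every centroid, summed over the stored code entries.
tables = np.stack([((codebooks[j] - q[j * sub:(j + 1) * sub]) ** 2).sum(axis=1)
                   for j in range(m)])                    # shape (m, ks)
est = tables[np.arange(m), codes].sum(axis=1)             # estimated distances

exact = ((X - q) ** 2).sum(axis=1)
print("correlation(estimated, exact):", np.corrcoef(est, exact)[0, 1])
```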
1102.2915 | Filippo Utro | Filippo Utro | Algorithms for Internal Validation Clustering Measures in the Post
Genomic Era | null | PhD Thesis, University of Palermo, Italy, 2011 | null | null | cs.DS q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inferring cluster structure in microarray datasets is a fundamental task for
the -omic sciences. A fundamental question in Statistics, Data Analysis and
Classification, is the prediction of the number of clusters in a dataset,
usually established via internal validation measures. Despite the wealth of
internal measures available in the literature, new ones have been recently
proposed, some of them specifically for microarray data. In this dissertation,
a study of internal validation measures is given, paying particular attention
to the stability based ones. Indeed, this class of measures is particularly
prominent and promising for obtaining a reliable estimate of the number of
clusters in a dataset. For those measures, a new general algorithmic paradigm
is proposed here that highlights the richness of measures in this class and
accounts for the ones already available in the literature. Moreover, some of
the most representative validation measures are also considered. Experiments on
12 benchmark datasets are performed in order to assess both the intrinsic
ability of a measure to predict the correct number of clusters in a dataset and
its merit relative to the other measures. The main result is a hierarchy of
internal validation measures in terms of precision and speed, highlighting some
of their merits and limitations not reported before in the literature. This
hierarchy shows that the faster the measure, the less accurate it is. In order
to reduce the time performance gap between the fastest and the most precise
measures, the technique of designing fast approximation algorithms is
systematically applied. The end result is a speed-up of many of the measures
studied here that brings the gap between the fastest and the most precise
within one order of magnitude in time, with no degradation in their prediction
power. Prior to this work, the time gap was at least two orders of magnitude.
| [
{
"version": "v1",
"created": "Mon, 14 Feb 2011 22:13:47 GMT"
}
] | 2011-02-16T00:00:00 | [
[
"Utro",
"Filippo",
""
]
] | TITLE: Algorithms for Internal Validation Clustering Measures in the Post
Genomic Era
ABSTRACT: Inferring cluster structure in microarray datasets is a fundamental task for
the -omic sciences. A fundamental question in Statistics, Data Analysis and
Classification, is the prediction of the number of clusters in a dataset,
usually established via internal validation measures. Despite the wealth of
internal measures available in the literature, new ones have been recently
proposed, some of them specifically for microarray data. In this dissertation,
a study of internal validation measures is given, paying particular attention
to the stability based ones. Indeed, this class of measures is particularly
prominent and promising for obtaining a reliable estimate of the number of
clusters in a dataset. For those measures, a new general algorithmic paradigm
is proposed here that highlights the richness of measures in this class and
accounts for the ones already available in the literature. Moreover, some of
the most representative validation measures are also considered. Experiments on
12 benchmark datasets are performed in order to assess both the intrinsic
ability of a measure to predict the correct number of clusters in a dataset and
its merit relative to the other measures. The main result is a hierarchy of
internal validation measures in terms of precision and speed, highlighting some
of their merits and limitations not reported before in the literature. This
hierarchy shows that the faster the measure, the less accurate it is. In order
to reduce the time performance gap between the fastest and the most precise
measures, the technique of designing fast approximation algorithms is
systematically applied. The end result is a speed-up of many of the measures
studied here that brings the gap between the fastest and the most precise
within one order of magnitude in time, with no degradation in their prediction
power. Prior to this work, the time gap was at least two orders of magnitude.
| no_new_dataset | 0.947381 |
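Stability-based validation measures of the kind studied in this dissertation
typically recluster subsamples of the data and reward values of k whose
clusterings agree. Below is a minimal generic sketch, assuming k-means as the
base clusterer and the adjusted Rand index as the agreement score; neither
choice, nor the toy data, is prescribed by the thesis.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
X, _ = make_blobs(n_samples=300, centers=4, random_state=0)   # toy data

def stability(X, k, n_pairs=10, frac=0.8):
    """Mean agreement (ARI) of clusterings over pairs of random subsamples."""
    n = len(X)
    scores = []
    for _ in range(n_pairs):
        a = rng.choice(n, size=int(frac * n), replace=False)
        b = rng.choice(n, size=int(frac * n), replace=False)
        la = KMeans(n_clusters=k, n_init=5, random_state=0).fit_predict(X[a])
        lb = KMeans(n_clusters=k, n_init=5, random_state=0).fit_predict(X[b])
        common, ia, ib = np.intersect1d(a, b, return_indices=True)
        scores.append(adjusted_rand_score(la[ia], lb[ib]))
    return float(np.mean(scores))

for k in range(2, 8):
    print(k, round(stability(X, k), 3))
# The k with the most stable (highest) score is taken as the predicted number
# of clusters; the thesis benchmarks many refinements and speed-ups of this idea.
```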
1102.3047 | Loet Leydesdorff | Robert D. Shelton and Loet Leydesdorff | Publish or Patent: Bibliometric evidence for empirical trade-offs in
national funding strategies | null | null | null | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multivariate linear regression models suggest a trade-off in allocations of
national R&D investments. Government funding, and spending in the higher
education sector, seem to encourage publications, whereas other components such
as industrial funding, and spending in the business sector, encourage
patenting. Our results help explain why the US trails the EU in publications,
because of its focus on industrial funding - some 70% of its total R&D
investment. Conversely, it also helps explain why the EU trails the US in
patenting. Government funding is indicated as a negative incentive to
high-quality patenting. The models here can also be used to predict an output
indicator for a country, once the appropriate input indicator is known. This
usually is done within a dataset for a single year, but the process can be
extended to predict outputs a few years into the future, if reasonable
forecasts can be made of the input indicators. We provide new forecasts about
the further relationships of the US, the EU-27, and the PRC in the case of
publishing. Models for individual countries may be more successful, however,
than regression models whose parameters are averaged over a set of countries.
| [
{
"version": "v1",
"created": "Tue, 15 Feb 2011 12:27:59 GMT"
}
] | 2011-02-16T00:00:00 | [
[
"Shelton",
"Robert D.",
""
],
[
"Leydesdorff",
"Loet",
""
]
] | TITLE: Publish or Patent: Bibliometric evidence for empirical trade-offs in
national funding strategies
ABSTRACT: Multivariate linear regression models suggest a trade-off in allocations of
national R&D investments. Government funding, and spending in the higher
education sector, seem to encourage publications, whereas other components such
as industrial funding, and spending in the business sector, encourage
patenting. Our results help explain why the US trails the EU in publications,
because of its focus on industrial funding - some 70% of its total R&D
investment. Conversely, it also helps explain why the EU trails the US in
patenting. Government funding is indicated as a negative incentive to
high-quality patenting. The models here can also be used to predict an output
indicator for a country, once the appropriate input indicator is known. This
usually is done within a dataset for a single year, but the process can be
extended to predict outputs a few years into the future, if reasonable
forecasts can be made of the input indicators. We provide new forecasts about
the further relationships of the US, the EU-27, and the PRC in the case of
publishing. Models for individual countries may be more successful, however,
than regression models whose parameters are averaged over a set of countries.
| no_new_dataset | 0.933005 |
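The regression models referred to above relate a national output indicator to
input indicators; a minimal ordinary-least-squares sketch of that idea follows.
The synthetic country data, coefficient values, and the choice of three funding
components are illustrative assumptions, not the paper's dataset or fitted
model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic country-level indicators (illustrative):
# columns = government R&D, industrial R&D, higher-education spending.
n_countries = 40
inputs = rng.uniform(0.1, 3.0, size=(n_countries, 3))
true_coef = np.array([1.5, -0.3, 2.0])                 # assumed trade-off pattern
publications = inputs @ true_coef + rng.normal(0, 0.2, n_countries)

# Ordinary least squares with an intercept term.
design = np.column_stack([np.ones(n_countries), inputs])
coef, *_ = np.linalg.lstsq(design, publications, rcond=None)
print("intercept and coefficients:", np.round(coef, 2))

# Predicting the output indicator for a new country once its inputs are known.
new_inputs = np.array([1.0, 2.0, 0.5])
print("predicted publications:", float(np.r_[1.0, new_inputs] @ coef))
```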
1102.2878 | Dongryeol Lee | Dongryeol Lee, Alexander G. Gray, and Andrew W. Moore | Dual-Tree Fast Gauss Transforms | Extended version of a conference paper. Submitted to a journal | null | null | null | stat.CO cs.DS stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Kernel density estimation (KDE) is a popular statistical technique for
estimating the underlying density distribution with minimal assumptions.
Although they can be shown to achieve asymptotic estimation optimality for any
input distribution, cross-validating for an optimal parameter requires
significant computation dominated by kernel summations. In this paper we
present an improvement to the dual-tree algorithm, the first practical kernel
summation algorithm for general dimension. Our extension is based on the
series-expansion for the Gaussian kernel used by fast Gauss transform. First,
we derive two additional pieces of analytical machinery for extending the original
algorithm to utilize a hierarchical data structure, demonstrating the first
truly hierarchical fast Gauss transform. Second, we show how to integrate the
series-expansion approximation within the dual-tree approach to compute kernel
summations with a user-controllable relative error bound. We evaluate our
algorithm on real-world datasets in the context of optimal bandwidth selection
in kernel density estimation. Our results demonstrate that our new algorithm is
the only one that guarantees a hard relative error bound and offers fast
performance across a wide range of bandwidths evaluated in cross validation
procedures.
| [
{
"version": "v1",
"created": "Mon, 14 Feb 2011 20:24:01 GMT"
}
] | 2011-02-15T00:00:00 | [
[
"Lee",
"Dongryeol",
""
],
[
"Gray",
"Alexander G.",
""
],
[
"Moore",
"Andrew W.",
""
]
] | TITLE: Dual-Tree Fast Gauss Transforms
ABSTRACT: Kernel density estimation (KDE) is a popular statistical technique for
estimating the underlying density distribution with minimal assumptions.
Although they can be shown to achieve asymptotic estimation optimality for any
input distribution, cross-validating for an optimal parameter requires
significant computation dominated by kernel summations. In this paper we
present an improvement to the dual-tree algorithm, the first practical kernel
summation algorithm for general dimension. Our extension is based on the
series-expansion for the Gaussian kernel used by fast Gauss transform. First,
we derive two additional pieces of analytical machinery for extending the original
algorithm to utilize a hierarchical data structure, demonstrating the first
truly hierarchical fast Gauss transform. Second, we show how to integrate the
series-expansion approximation within the dual-tree approach to compute kernel
summations with a user-controllable relative error bound. We evaluate our
algorithm on real-world datasets in the context of optimal bandwidth selection
in kernel density estimation. Our results demonstrate that our new algorithm is
the only one that guarantees a hard relative error bound and offers fast
performance across a wide range of bandwidths evaluated in cross validation
procedures.
| no_new_dataset | 0.949153 |
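The quantity being accelerated is a Gaussian kernel summation over all
reference points for every query. The naive O(NM) baseline that the dual-tree
fast Gauss transform approximates (with a controllable relative error bound)
looks like the sketch below; the toy data and bandwidth value are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 3))       # reference points (toy data)
Q = rng.standard_normal((5, 3))          # query points
h = 0.5                                  # candidate bandwidth (assumption)

def kde(queries, refs, h):
    """Naive O(N*M) Gaussian kernel summation that fast methods approximate."""
    d = refs.shape[1]
    norm = (2 * np.pi * h ** 2) ** (-d / 2) / len(refs)
    sq = ((queries[:, None, :] - refs[None, :, :]) ** 2).sum(axis=2)
    return norm * np.exp(-sq / (2 * h ** 2)).sum(axis=1)

print(kde(Q, X, h))
# Cross-validating h repeats this summation for many bandwidths, which is the
# cost that tree-based series-expansion methods are designed to reduce.
```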
1010.2225 | Jere Jenkins | P.A. Sturrock, J.B. Buncher, E. Fischbach, J.T. Gruenwald, D. Javorsek
II, J.H. Jenkins, R.H. Lee, J.J. Mattes, J.R. Newport | Power Spectrum Analysis of Physikalisch-Technische Bundesanstalt
Decay-Rate Data: Evidence for Solar Rotational Modulation | 15 pages, 13 figures | Solar Physics, 2010. 267(2): p. 251-265 | 10.1007/s11207-010-9659-4 | null | astro-ph.SR nucl-ex physics.ins-det | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evidence for an anomalous annual periodicity in certain nuclear decay data
has led to speculation concerning a possible solar influence on nuclear
processes. We have recently analyzed data concerning the decay rates of Cl-36
and Si-32, acquired at the Brookhaven National Laboratory (BNL), to search for
evidence that might be indicative of a process involving solar rotation.
Smoothing of the power spectrum by weighted-running-mean analysis leads to a
significant peak at frequency 11.18/yr, which is lower than the equatorial
synodic rotation rates of the convection and radiative zones. This article
concerns measurements of the decay rates of Ra-226 acquired at the
Physikalisch-Technische Bundesanstalt (PTB) in Germany. We find that a similar
(but not identical) analysis yields a significant peak in the PTB dataset at
frequency 11.21/yr, and a peak in the BNL dataset at 11.25/yr. The change in
the BNL result is not significant since the uncertainties in the BNL and PTB
analyses are estimated to be 0.13/yr and 0.07/yr, respectively. Combining the
two running means by forming the joint power statistic leads to a highly
significant peak at frequency 11.23/yr. We comment briefly on the possible
implications of these results for solar physics and for particle physics.
| [
{
"version": "v1",
"created": "Mon, 11 Oct 2010 20:32:52 GMT"
}
] | 2011-02-08T00:00:00 | [
[
"Sturrock",
"P. A.",
""
],
[
"Buncher",
"J. B.",
""
],
[
"Fischbach",
"E.",
""
],
[
"Gruenwald",
"J. T.",
""
],
[
"Javorsek",
"D.",
"II"
],
[
"Jenkins",
"J. H.",
""
],
[
"Lee",
"R. H.",
""
],
[
"Mattes",
"J. J.",
""
],
[
"Newport",
"J. R.",
""
]
] | TITLE: Power Spectrum Analysis of Physikalisch-Technische Bundesanstalt
Decay-Rate Data: Evidence for Solar Rotational Modulation
ABSTRACT: Evidence for an anomalous annual periodicity in certain nuclear decay data
has led to speculation concerning a possible solar influence on nuclear
processes. We have recently analyzed data concerning the decay rates of Cl-36
and Si-32, acquired at the Brookhaven National Laboratory (BNL), to search for
evidence that might be indicative of a process involving solar rotation.
Smoothing of the power spectrum by weighted-running-mean analysis leads to a
significant peak at frequency 11.18/yr, which is lower than the equatorial
synodic rotation rates of the convection and radiative zones. This article
concerns measurements of the decay rates of Ra-226 acquired at the
Physikalisch-Technische Bundesanstalt (PTB) in Germany. We find that a similar
(but not identical) analysis yields a significant peak in the PTB dataset at
frequency 11.21/yr, and a peak in the BNL dataset at 11.25/yr. The change in
the BNL result is not significant since the uncertainties in the BNL and PTB
analyses are estimated to be 0.13/yr and 0.07/yr, respectively. Combining the
two running means by forming the joint power statistic leads to a highly
significant peak at frequency 11.23/yr. We comment briefly on the possible
implications of these results for solar physics and for particle physics.
| no_new_dataset | 0.947866 |
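Power spectrum analysis of a decay-rate time series can be illustrated with a
plain periodogram. The published analyses use more careful estimators (weighted
running means, joint power statistics) on real, unevenly sampled data, so the
synthetic daily series, modulation amplitude, and noise level below are purely
illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily "decay-rate" series with a small periodic modulation near
# 11.2 cycles/yr (amplitude and noise level are assumptions).
days = np.arange(4 * 365)
t_years = days / 365.25
signal = 1.0 + 1e-3 * np.sin(2 * np.pi * 11.2 * t_years)
series = signal + 5e-4 * rng.standard_normal(len(days))

# Plain periodogram: power versus frequency in cycles per year.
detrended = series - series.mean()
power = np.abs(np.fft.rfft(detrended)) ** 2
freq = np.fft.rfftfreq(len(days), d=1 / 365.25)   # cycles per year

peak = freq[np.argmax(power[1:]) + 1]             # skip the zero-frequency bin
print(f"strongest periodicity near {peak:.2f} cycles/yr")
```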
0907.3874 | Nikolaos Laoutaris | Ruben Cuevas, Nikolaos Laoutaris, Xiaoyuan Yang, Georgos Siganos,
Pablo Rodriguez | Deep Diving into BitTorrent Locality | Please cite the conference version of this paper appearing in the
Proceedings of IEEE INFOCOM'11 | null | null | null | cs.NI cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A substantial amount of work has recently gone into localizing BitTorrent
traffic within an ISP in order to avoid excessive and often times unnecessary
transit costs. Several architectures and systems have been proposed and the
initial results from specific ISPs and a few torrents have been encouraging. In
this work we attempt to deepen and scale our understanding of locality and its
potential. Looking at specific ISPs, we consider tens of thousands of
concurrent torrents, and thus capture ISP-wide implications that cannot be
appreciated by looking at only a handful of torrents. Secondly, we go beyond
individual case studies and present results for the top 100 ISPs in terms of
number of users represented in our dataset of up to 40K torrents involving more
than 3.9M concurrent peers and more than 20M in the course of a day spread in
11K ASes. We develop scalable methodologies that permit us to process this huge
dataset and answer questions such as: "\emph{what is the minimum and the
maximum transit traffic reduction across hundreds of ISPs?}", "\emph{what are
the win-win boundaries for ISPs and their users?}", "\emph{what is the maximum
amount of transit traffic that can be localized without requiring fine-grained
control of inter-AS overlay connections?}", "\emph{what is the impact to
transit traffic from upgrades of residential broadband speeds?}".
| [
{
"version": "v1",
"created": "Wed, 22 Jul 2009 16:18:44 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Jul 2009 08:35:39 GMT"
},
{
"version": "v3",
"created": "Tue, 10 Nov 2009 19:40:27 GMT"
},
{
"version": "v4",
"created": "Tue, 1 Feb 2011 19:01:18 GMT"
}
] | 2011-02-02T00:00:00 | [
[
"Cuevas",
"Ruben",
""
],
[
"Laoutaris",
"Nikolaos",
""
],
[
"Yang",
"Xiaoyuan",
""
],
[
"Siganos",
"Georgos",
""
],
[
"Rodriguez",
"Pablo",
""
]
] | TITLE: Deep Diving into BitTorrent Locality
ABSTRACT: A substantial amount of work has recently gone into localizing BitTorrent
traffic within an ISP in order to avoid excessive and often times unnecessary
transit costs. Several architectures and systems have been proposed and the
initial results from specific ISPs and a few torrents have been encouraging. In
this work we attempt to deepen and scale our understanding of locality and its
potential. Looking at specific ISPs, we consider tens of thousands of
concurrent torrents, and thus capture ISP-wide implications that cannot be
appreciated by looking at only a handful of torrents. Secondly, we go beyond
individual case studies and present results for the top 100 ISPs in terms of
number of users represented in our dataset of up to 40K torrents involving more
than 3.9M concurrent peers and more than 20M in the course of a day spread in
11K ASes. We develop scalable methodologies that permit us to process this huge
dataset and answer questions such as: "\emph{what is the minimum and the
maximum transit traffic reduction across hundreds of ISPs?}", "\emph{what are
the win-win boundaries for ISPs and their users?}", "\emph{what is the maximum
amount of transit traffic that can be localized without requiring fine-grained
control of inter-AS overlay connections?}", "\emph{what is the impact to
transit traffic from upgrades of residential broadband speeds?}".
| new_dataset | 0.738999 |
1009.1003 | Lorenzo Moneta | Lorenzo Moneta, Kevin Belasco, Kyle Cranmer, Sven Kreiss, Alfio
Lazzaro, Danilo Piparo, Gregory Schott, Wouter Verkerke, Matthias Wolf | The RooStats Project | 11 pages, 3 figures, ACAT2010 Conference Proceedings | null | null | null | physics.data-an | http://creativecommons.org/licenses/by-nc-sa/3.0/ | RooStats is a project to create advanced statistical tools required for the
analysis of LHC data, with emphasis on discoveries, confidence intervals, and
combined measurements. The idea is to provide the major statistical techniques
as a set of C++ classes with coherent interfaces, so that they can be used on
arbitrary models and datasets in a common way. The classes are built on top of
the RooFit package, which provides functionality for easily creating
probability models, for analysis combinations and for digital publications of
the results. We will present in detail the design and the implementation of the
different statistical methods of RooStats. We will describe the various classes
for interval estimation and for hypothesis test depending on different
statistical techniques such as those based on the likelihood function, or on
frequentists or bayesian statistics. These methods can be applied in complex
problems, including cases with multiple parameters of interest and various
nuisance parameters.
| [
{
"version": "v1",
"created": "Mon, 6 Sep 2010 09:38:44 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Feb 2011 11:04:00 GMT"
}
] | 2011-02-02T00:00:00 | [
[
"Moneta",
"Lorenzo",
""
],
[
"Belasco",
"Kevin",
""
],
[
"Cranmer",
"Kyle",
""
],
[
"Kreiss",
"Sven",
""
],
[
"Lazzaro",
"Alfio",
""
],
[
"Piparo",
"Danilo",
""
],
[
"Schott",
"Gregory",
""
],
[
"Verkerke",
"Wouter",
""
],
[
"Wolf",
"Matthias",
""
]
] | TITLE: The RooStats Project
ABSTRACT: RooStats is a project to create advanced statistical tools required for the
analysis of LHC data, with emphasis on discoveries, confidence intervals, and
combined measurements. The idea is to provide the major statistical techniques
as a set of C++ classes with coherent interfaces, so that they can be used on
arbitrary models and datasets in a common way. The classes are built on top of
the RooFit package, which provides functionality for easily creating
probability models, for analysis combinations and for digital publications of
the results. We will present in detail the design and the implementation of the
different statistical methods of RooStats. We will describe the various classes
for interval estimation and for hypothesis test depending on different
statistical techniques such as those based on the likelihood function, or on
frequentist or Bayesian statistics. These methods can be applied in complex
problems, including cases with multiple parameters of interest and various
nuisance parameters.
| no_new_dataset | 0.944638 |
1011.2825 | Niklaus Berger | N. Berger, K. Zhu, Z. A. Liu, D. P. Jin, H. Xu, W. X. Gong, K. Wang,
G. F. Cao | Trigger efficiencies at BES III | 6 pages, 4 figures | Chin.Phys.C34:1779-1784,2010 | 10.1088/1674-1137/34/12/001 | null | hep-ex physics.ins-det | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Trigger efficiencies at BES III were determined for both the J/psi and psi'
data taking of 2009. Both dedicated runs and physics datasets are used;
efficiencies are presented for Bhabha-scattering events, generic hadronic decay
events involving charged tracks, dimuon events and psi' -> pi+pi-J/psi, J/psi
-> l+l- events (l an electron or muon). The efficiencies are found to lie well
above 99% for all relevant physics cases, thus fulfilling the BES III design
specifications.
| [
{
"version": "v1",
"created": "Fri, 12 Nov 2010 04:43:48 GMT"
}
] | 2011-01-27T00:00:00 | [
[
"Berger",
"N.",
""
],
[
"Zhu",
"K.",
""
],
[
"Liu",
"Z. A.",
""
],
[
"Jin",
"D. P.",
""
],
[
"Xu",
"H.",
""
],
[
"Gong",
"W. X.",
""
],
[
"Wang",
"K.",
""
],
[
"Cao",
"G. F.",
""
]
] | TITLE: Trigger efficiencies at BES III
ABSTRACT: Trigger efficiencies at BES III were determined for both the J/psi and psi'
data taking of 2009. Both dedicated runs and physics datasets are used;
efficiencies are presented for Bhabha-scattering events, generic hadronic decay
events involving charged tracks, dimuon events and psi' -> pi+pi-J/psi, J/psi
-> l+l- events (l an electron or muon). The efficiencies are found to lie well
above 99% for all relevant physics cases, thus fulfilling the BES III design
specifications.
| no_new_dataset | 0.952926 |
1101.4924 | Ridwan Al Iqbal | Ridwan Al Iqbal | A Generalized Method for Integrating Rule-based Knowledge into Inductive
Methods Through Virtual Sample Creation | null | null | null | null | cs.LG cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hybrid learning methods use theoretical knowledge of a domain and a set of
classified examples to develop a method for classification. Methods that use
domain knowledge have been shown to perform better than inductive learners.
However, there is no general method to include domain knowledge into all
inductive learning algorithms as all hybrid methods are highly specialized for
a particular algorithm. We present an algorithm that will take domain knowledge
in the form of propositional rules, generate artificial examples from the rules
and also remove instances likely to be flawed. This enriched dataset then can
be used by any learning algorithm. Experimental results of different scenarios
are shown that demonstrate this method to be more effective than simple
inductive learning.
| [
{
"version": "v1",
"created": "Tue, 25 Jan 2011 20:42:01 GMT"
}
] | 2011-01-26T00:00:00 | [
[
"Iqbal",
"Ridwan Al",
""
]
] | TITLE: A Generalized Method for Integrating Rule-based Knowledge into Inductive
Methods Through Virtual Sample Creation
ABSTRACT: Hybrid learning methods use theoretical knowledge of a domain and a set of
classified examples to develop a method for classification. Methods that use
domain knowledge have been shown to perform better than inductive learners.
However, there is no general method to include domain knowledge into all
inductive learning algorithms as all hybrid methods are highly specialized for
a particular algorithm. We present an algorithm that will take domain knowledge
in the form of propositional rules, generate artificial examples from the rules
and also remove instances likely to be flawed. This enriched dataset then can
be used by any learning algorithm. Experimental results of different scenarios
are shown that demonstrate this method to be more effective than simple
inductive learning.
| no_new_dataset | 0.941761 |
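The core step described above, turning propositional rules into artificial
labelled examples, can be sketched as follows. The feature domain, the two
rules, and the policy of discarding instances covered by no rule are
illustrative assumptions rather than the paper's exact procedure.

```python
import random

random.seed(0)

# Domain knowledge as propositional rules: (conditions on features) -> label.
# Features, values and rules below are illustrative assumptions.
FEATURES = {"outlook": ["sunny", "rain"], "windy": [True, False]}
RULES = [
    ({"outlook": "rain", "windy": True}, "stay_home"),
    ({"outlook": "sunny"},               "go_out"),
]

def virtual_samples(n):
    """Generate artificial labelled examples consistent with the rules."""
    samples = []
    while len(samples) < n:
        x = {f: random.choice(v) for f, v in FEATURES.items()}
        for conditions, label in RULES:
            if all(x.get(f) == val for f, val in conditions.items()):
                samples.append((x, label))
                break            # instances no rule covers are simply dropped
    return samples

for x, y in virtual_samples(5):
    print(x, "->", y)
# The enriched set of (x, y) pairs can then be merged with the real training
# data and fed to any inductive learner.
```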
1101.4573 | Alfredo Braunstein | M. Bailly-Bechet, C. Borgs, A. Braunstein, J. Chayes, A.
Dagkessamanskaia, J.-M. Fran\c{c}ois, and R. Zecchina | Finding undetected protein associations in cell signaling by belief
propagation | 6 pages, 3 figures, 1 table, Supporting Information | Published online before print December 27, 2010, doi:
10.1073/pnas.1004751108 PNAS January 11, 2011 vol. 108 no. 2 882-887 | 10.1073/pnas.1004751108 | null | q-bio.MN cond-mat.stat-mech cs.AI cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | External information propagates in the cell mainly through signaling cascades
and transcriptional activation, allowing it to react to a wide spectrum of
environmental changes. High throughput experiments identify numerous molecular
components of such cascades that may, however, interact through unknown
partners. Some of them may be detected using data coming from the integration
of a protein-protein interaction network and mRNA expression profiles. This
inference problem can be mapped onto the problem of finding appropriate optimal
connected subgraphs of a network defined by these datasets. The optimization
procedure turns out to be computationally intractable in general. Here we
present a new distributed algorithm for this task, inspired from statistical
physics, and apply this scheme to alpha factor and drug perturbations data in
yeast. We identify the role of the COS8 protein, a member of a gene family of
previously unknown function, and validate the results by genetic experiments.
The algorithm we present is specially suited for very large datasets, can run
in parallel, and can be adapted to other problems in systems biology. On
renowned benchmarks it outperforms other algorithms in the field.
| [
{
"version": "v1",
"created": "Mon, 24 Jan 2011 15:57:48 GMT"
}
] | 2011-01-25T00:00:00 | [
[
"Bailly-Bechet",
"M.",
""
],
[
"Borgs",
"C.",
""
],
[
"Braunstein",
"A.",
""
],
[
"Chayes",
"J.",
""
],
[
"Dagkessamanskaia",
"A.",
""
],
[
"François",
"J. -M.",
""
],
[
"Zecchina",
"R.",
""
]
] | TITLE: Finding undetected protein associations in cell signaling by belief
propagation
ABSTRACT: External information propagates in the cell mainly through signaling cascades
and transcriptional activation, allowing it to react to a wide spectrum of
environmental changes. High throughput experiments identify numerous molecular
components of such cascades that may, however, interact through unknown
partners. Some of them may be detected using data coming from the integration
of a protein-protein interaction network and mRNA expression profiles. This
inference problem can be mapped onto the problem of finding appropriate optimal
connected subgraphs of a network defined by these datasets. The optimization
procedure turns out to be computationally intractable in general. Here we
present a new distributed algorithm for this task, inspired from statistical
physics, and apply this scheme to alpha factor and drug perturbations data in
yeast. We identify the role of the COS8 protein, a member of a gene family of
previously unknown function, and validate the results by genetic experiments.
The algorithm we present is specially suited for very large datasets, can run
in parallel, and can be adapted to other problems in systems biology. On
renowned benchmarks it outperforms other algorithms in the field.
| no_new_dataset | 0.940735 |
1101.2987 | Mahesh Pal Dr. | Mahesh Pal | Support vector machines/relevance vector machine for remote sensing
classification: A review | 19 pages | Proceeding of the Workshop on Application of advanced soft
computing Techniques in Geo-spatial Data Analysis. Department of Civil
Engineering, IIT Bombay, Sept. 22-23,2008, 211-227 | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Kernel-based machine learning algorithms are based on mapping data from the
original input feature space to a kernel feature space of higher dimensionality
to solve a linear problem in that space. Over the last decade, kernel based
classification and regression approaches such as support vector machines have
widely been used in remote sensing as well as in various civil engineering
applications. In spite of their better performance with different datasets,
support vector machines still suffer from shortcomings such as
visualization/interpretation of model, choice of kernel and kernel specific
parameter as well as the regularization parameter. Relevance vector machines
are another kernel based approach being explored for classification and
regression within the last few years. The advantages of the relevance vector
machines over the support vector machines are the availability of probabilistic
predictions, the use of arbitrary kernel functions, and no need to set the
regularization parameter. This paper presents a state-of-the-art review of SVM
and RVM in remote sensing and provides some details of their use in other civil
engineering applications as well.
| [
{
"version": "v1",
"created": "Sat, 15 Jan 2011 13:29:12 GMT"
}
] | 2011-01-18T00:00:00 | [
[
"Pal",
"Mahesh",
""
]
] | TITLE: Support vector machines/relevance vector machine for remote sensing
classification: A review
ABSTRACT: Kernel-based machine learning algorithms are based on mapping data from the
original input feature space to a kernel feature space of higher dimensionality
to solve a linear problem in that space. Over the last decade, kernel based
classification and regression approaches such as support vector machines have
widely been used in remote sensing as well as in various civil engineering
applications. In spite of their better performance with different datasets,
support vector machines still suffer from shortcomings such as
visualization/interpretation of model, choice of kernel and kernel specific
parameter as well as the regularization parameter. Relevance vector machines
are another kernel based approach being explored for classification and
regression within the last few years. The advantages of the relevance vector
machines over the support vector machines are the availability of probabilistic
predictions, the use of arbitrary kernel functions, and no need to set the
regularization parameter. This paper presents a state-of-the-art review of SVM
and RVM in remote sensing and provides some details of their use in other civil
engineering applications as well.
| no_new_dataset | 0.953101 |
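A typical SVM workflow of the kind surveyed in the review, including the kernel
and regularization-parameter choices it identifies as pain points, might look
like the sketch below. The stand-in dataset and the C/gamma grid are
assumptions, and only the SVM side is shown.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Stand-in for a remote-sensing dataset: any (features, labels) pair works.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The kernel choice and the C / gamma grid are exactly the settings the review
# flags as requiring care; the values below are illustrative.
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [1, 10, 100], "gamma": [1e-4, 1e-3, 1e-2]},
                    cv=5)
grid.fit(X_train, y_train)
print("best parameters:", grid.best_params_)
print("test accuracy:  ", grid.score(X_test, y_test))
```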
1012.6009 | Rahmat Widia Sembiring | Rahmat Widia Sembiring, Jasni Mohamad Zain | Cluster Evaluation of Density Based Subspace Clustering | 6 pages, 15 figures | Journal of Computing, Volume 2, Issue 11, November 2010, ISSN
2151-9617 | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Clustering real-world data often faces the curse of dimensionality, since such
data typically consist of many dimensions. Multidimensional data clustering can
be evaluated through a density-based approach, following the paradigm
introduced by DBSCAN. In this approach, the density of each object's
neighbourhood is calculated with respect to MinPoints, and clusters change in
accordance with changes in the neighbourhood density of each object. The
neighbours of each object are typically determined using a distance function,
for example the Euclidean distance. In this paper the SUBCLU, FIRES and INSCY
methods are applied to clustering 6x1595-dimension synthetic datasets. IO
Entropy, F1 Measure, coverage, accuracy and time consumption are used as
evaluation parameters. The evaluation shows that the SUBCLU method requires
considerable time for subspace clustering but achieves better coverage, while
the INSCY method is more accurate than the other two methods, although its
computation time is longer.
| [
{
"version": "v1",
"created": "Wed, 29 Dec 2010 19:34:11 GMT"
}
] | 2010-12-30T00:00:00 | [
[
"Sembiring",
"Rahmat Widia",
""
],
[
"Zain",
"Jasni Mohamad",
""
]
] | TITLE: Cluster Evaluation of Density Based Subspace Clustering
ABSTRACT: Clustering real-world data often faces the curse of dimensionality, since such
data typically consist of many dimensions. Multidimensional data clustering can
be evaluated through a density-based approach, following the paradigm
introduced by DBSCAN. In this approach, the density of each object's
neighbourhood is calculated with respect to MinPoints, and clusters change in
accordance with changes in the neighbourhood density of each object. The
neighbours of each object are typically determined using a distance function,
for example the Euclidean distance. In this paper the SUBCLU, FIRES and INSCY
methods are applied to clustering 6x1595-dimension synthetic datasets. IO
Entropy, F1 Measure, coverage, accuracy and time consumption are used as
evaluation parameters. The evaluation shows that the SUBCLU method requires
considerable time for subspace clustering but achieves better coverage, while
the INSCY method is more accurate than the other two methods, although its
computation time is longer.
| no_new_dataset | 0.947332 |
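The density-based paradigm referenced above (an eps-neighbourhood combined with
a MinPoints test, as in DBSCAN) can be illustrated in full-dimensional space in
a few lines; the subspace methods compared in the paper apply the same test
within axis-parallel projections. The toy blobs and the eps and min_samples
values are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs

# Toy data; eps and min_samples (the "MinPoints" above) are illustrative.
X, _ = make_blobs(n_samples=600, centers=3, cluster_std=0.6, random_state=0)

labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print("clusters found:", n_clusters,
      "| noise points:", int(np.sum(labels == -1)))
# Subspace variants such as SUBCLU, FIRES and INSCY apply this density test
# inside lower-dimensional projections rather than in the full feature space.
```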
1008.4619 | Richard Hill | Richard J. Hill and Gil Paz | Model independent extraction of the proton charge radius from electron
scattering | 17 pages, 4 figures. v2: references added, minor typos corrected,
version to appear in PRD | Phys.Rev.D82:113005,2010 | 10.1103/PhysRevD.82.113005 | EFI Preprint 10-21 | hep-ph nucl-ex nucl-th physics.atom-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Constraints from analyticity are combined with experimental electron-proton
scattering data to determine the proton charge radius. In contrast to previous
determinations, we provide a systematic procedure for analyzing arbitrary data
without model-dependent assumptions on the form factor shape. We also
investigate the impact of including electron-neutron scattering data, and
$\pi\pi\to N\bar{N}$ data. Using representative datasets we find r_E^p=0.870
+/- 0.023 +/- 0.012 fm using just proton scattering data;
r_E^p=0.880^{+0.017}_{-0.020} +/- 0.007 fm adding neutron data; and r_E^p=0.871
+/- 0.009 +/- 0.002 +/- 0.002 fm adding $\pi\pi$ data. The analysis can be
readily extended to other nucleon form factors and derived observables.
| [
{
"version": "v1",
"created": "Thu, 26 Aug 2010 23:38:09 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Nov 2010 18:14:05 GMT"
}
] | 2010-12-24T00:00:00 | [
[
"Hill",
"Richard J.",
""
],
[
"Paz",
"Gil",
""
]
] | TITLE: Model independent extraction of the proton charge radius from electron
scattering
ABSTRACT: Constraints from analyticity are combined with experimental electron-proton
scattering data to determine the proton charge radius. In contrast to previous
determinations, we provide a systematic procedure for analyzing arbitrary data
without model-dependent assumptions on the form factor shape. We also
investigate the impact of including electron-neutron scattering data, and
$\pi\pi\to N\bar{N}$ data. Using representative datasets we find r_E^p=0.870
+/- 0.023 +/- 0.012 fm using just proton scattering data;
r_E^p=0.880^{+0.017}_{-0.020} +/- 0.007 fm adding neutron data; and r_E^p=0.871
+/- 0.009 +/- 0.002 +/- 0.002 fm adding $\pi\pi$ data. The analysis can be
readily extended to other nucleon form factors and derived observables.
| no_new_dataset | 0.945901 |
1012.4759 | Ying Ding | Bin Chen (1), David J Wild (1), Qian Zhu (1), Ying Ding (2), Xiao Dong
(1), Madhuvanthi Sankaranarayanan (1), Huijun Wang (1), Yuyin Sun (2) ((1)
School of Informatics and Computing, Indiana University, Bloomington, IN,
USA, (2) School of Library and Information Science, Indiana University,
Bloomington, IN, USA) | Chem2Bio2RDF: A Linked Open Data Portal for Chemical Biology | 8 pages, 10 figures | null | null | null | cs.IR q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Chem2Bio2RDF portal is a Linked Open Data (LOD) portal for systems
chemical biology that aims to facilitate drug discovery. It converts around 25
different datasets on genes, compounds, drugs, pathways, side effects,
diseases, and MEDLINE/PubMed documents into RDF triples and links them to other
LOD bubbles, such as Bio2RDF, LODD and DBPedia. The portal is based on D2R
server and provides a SPARQL endpoint, but adds on few unique features like RDF
faceted browser, user-friendly SPARQL query generator, MEDLINE/PubMed cross
validation service, and Cytoscape visualization plugin. Three use cases
demonstrate the functionality and usability of this portal.
| [
{
"version": "v1",
"created": "Tue, 21 Dec 2010 18:29:54 GMT"
}
] | 2010-12-22T00:00:00 | [
[
"Chen",
"Bin",
""
],
[
"Wild",
"David J",
""
],
[
"Zhu",
"Qian",
""
],
[
"Ding",
"Ying",
""
],
[
"Dong",
"Xiao",
""
],
[
"Sankaranarayanan",
"Madhuvanthi",
""
],
[
"Wang",
"Huijun",
""
],
[
"Sun",
"Yuyin",
""
]
] | TITLE: Chem2Bio2RDF: A Linked Open Data Portal for Chemical Biology
ABSTRACT: The Chem2Bio2RDF portal is a Linked Open Data (LOD) portal for systems
chemical biology that aims to facilitate drug discovery. It converts around 25
different datasets on genes, compounds, drugs, pathways, side effects,
diseases, and MEDLINE/PubMed documents into RDF triples and links them to other
LOD bubbles, such as Bio2RDF, LODD and DBPedia. The portal is based on D2R
server and provides a SPARQL endpoint, but adds a few unique features such as an RDF
faceted browser, user-friendly SPARQL query generator, MEDLINE/PubMed cross
validation service, and Cytoscape visualization plugin. Three use cases
demonstrate the functionality and usability of this portal.
| no_new_dataset | 0.959724 |
1012.4396 | Walter Quattrociocchi | Walter Quattrociocchi, Frederic Amblard | Selection in Scientific Networks | 17 pages, 8 Figure, social network analysis, evolving structures | null | null | null | cs.SI cs.CY cs.DL nlin.AO physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the most interesting scientific challenges nowadays deals with the
analysis and the understanding of complex networks' dynamics. A major issue is
the definition of new frameworks for the exploration of the dynamics at play in
real dynamic networks. Here, we focus on scientific communities by analyzing
the "social part" of Science through a descriptive approach that aims at
identifying the social determinants (e.g. goals and potential interactions
among individuals) behind the emergence and the resilience of scientific
communities. We consider that scientific communities are at the same time
communities of practice (through co-authorship) and that they exist also as
representations in the scientists' mind, since references to other scientists'
works is not merely an objective link to a relevant work, but it reveals social
objects that one manipulates and refers to. In this paper we identify the
patterns about the evolution of a scientific field by analyzing a portion of
the arXiv repository covering a period of 10 years of publications in physics.
As a citation represents a deliberative selection related to the relevance of a
work in its scientific domain, our analysis approaches the co-existence between
co-authorship and citation behaviors in a community by focusing on the most
proficient and cited authors interactions patterns. We focus in turn, on how
these patterns are affected by the selection process of citations. Such a
selection a) produces self-organization because it is played by a group of
individuals which act, compete and collaborate in a common environment in order
to advance Science and b) determines the success (emergence) of both topics and
scientists working on them. The dataset is analyzed a) at a global level, e.g.
the network evolution, b) at the meso-level, e.g. communities emergence, and c)
at a micro-level, e.g. nodes' aggregation patterns.
| [
{
"version": "v1",
"created": "Mon, 20 Dec 2010 16:47:57 GMT"
}
] | 2010-12-21T00:00:00 | [
[
"Quattrociocchi",
"Walter",
""
],
[
"Amblard",
"Frederic",
""
]
] | TITLE: Selection in Scientific Networks
ABSTRACT: One of the most interesting scientific challenges nowadays deals with the
analysis and the understanding of complex networks' dynamics. A major issue is
the definition of new frameworks for the exploration of the dynamics at play in
real dynamic networks. Here, we focus on scientific communities by analyzing
the "social part" of Science through a descriptive approach that aims at
identifying the social determinants (e.g. goals and potential interactions
among individuals) behind the emergence and the resilience of scientific
communities. We consider that scientific communities are at the same time
communities of practice (through co-authorship) and that they exist also as
representations in the scientists' mind, since references to other scientists'
works are not merely objective links to relevant works, but reveal social
objects that one manipulates and refers to. In this paper we identify the
patterns in the evolution of a scientific field by analyzing a portion of
the arXiv repository covering a period of 10 years of publications in physics.
As a citation represents a deliberative selection related to the relevance of a
work in its scientific domain, our analysis approaches the co-existence between
co-authorship and citation behaviors in a community by focusing on the
interaction patterns of the most proficient and most cited authors. We focus, in turn, on how
these patterns are affected by the selection process of citations. Such a
selection a) produces self-organization because it is played by a group of
individuals who act, compete and collaborate in a common environment in order
to advance Science and b) determines the success (emergence) of both topics and
scientists working on them. The dataset is analyzed a) at a global level, e.g.
the network evolution, b) at the meso-level, e.g. the emergence of communities, and c)
at a micro-level, e.g. nodes' aggregation patterns.
| no_new_dataset | 0.945801 |
1012.3805 | Yang Wang | Yang Wang, Zhikui Chen, Xiaodi Huang | Element Retrieval using Namespace Based on keyword search over XML
Documents | 9 pages | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Querying over XML elements using keyword search is steadily gaining
popularity. The traditional similarity measure is widely employed in order to
effectively retrieve various XML documents. A number of authors have already
proposed different similarity-measure methods that take advantage of the
structure and content of XML documents. They do not, however, consider the
similarity between latent semantic information of element texts and that of
keywords in a query. Although many algorithms on XML element search are
available, some of them have high computational complexity due to searching
a huge number of elements. In this paper, we propose a new algorithm that makes
use of the semantic similarity between elements instead of between entire XML
documents, considering not only the structure and content of an XML document,
but also semantic information of namespaces in elements. We compare our
algorithm with three other algorithms by testing on real datasets. The
experiments have demonstrated that our proposed method is able to improve the
query accuracy, as well as to reduce the running time.
| [
{
"version": "v1",
"created": "Fri, 17 Dec 2010 04:00:10 GMT"
}
] | 2010-12-20T00:00:00 | [
[
"Wang",
"Yang",
""
],
[
"Chen",
"Zhikui",
""
],
[
"Huang",
"Xiaodi",
""
]
] | TITLE: Element Retrieval using Namespace Based on keyword search over XML
Documents
ABSTRACT: Querying over XML elements using keyword search is steadily gaining
popularity. The traditional similarity measure is widely employed in order to
effectively retrieve various XML documents. A number of authors have already
proposed different similarity-measure methods that take advantage of the
structure and content of XML documents. They do not, however, consider the
similarity between latent semantic information of element texts and that of
keywords in a query. Although many algorithms on XML element search are
available, some of them have high computational complexity due to searching
a huge number of elements. In this paper, we propose a new algorithm that makes
use of the semantic similarity between elements instead of between entire XML
documents, considering not only the structure and content of an XML document,
but also semantic information of namespaces in elements. We compare our
algorithm with three other algorithms by testing on real datasets. The
experiments have demonstrated that our proposed method is able to improve the
query accuracy, as well as to reduce the running time.
| no_new_dataset | 0.951639 |
0904.1366 | Jian Li | Jian Li, Barna Saha, Amol Deshpande | A Unified Approach to Ranking in Probabilistic Databases | null | null | null | null | cs.DB cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The dramatic growth in the number of application domains that naturally
generate probabilistic, uncertain data has resulted in a need for efficiently
supporting complex querying and decision-making over such data. In this paper,
we present a unified approach to ranking and top-k query processing in
probabilistic databases by viewing it as a multi-criteria optimization problem,
and by deriving a set of features that capture the key properties of a
probabilistic dataset that dictate the ranked result. We contend that a single,
specific ranking function may not suffice for probabilistic databases, and we
instead propose two parameterized ranking functions, called PRF-w and PRF-e,
that generalize or can approximate many of the previously proposed ranking
functions. We present novel generating functions-based algorithms for
efficiently ranking large datasets according to these ranking functions, even
if the datasets exhibit complex correlations modeled using probabilistic
and/xor trees or Markov networks. We further propose that the parameters of the
ranking function be learned from user preferences, and we develop an approach
to learn those parameters. Finally, we present a comprehensive experimental
study that illustrates the effectiveness of our parameterized ranking
functions, especially PRF-e, at approximating other ranking functions and the
scalability of our proposed algorithms for exact or approximate ranking.
| [
{
"version": "v1",
"created": "Wed, 8 Apr 2009 15:30:58 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Apr 2009 04:42:10 GMT"
},
{
"version": "v3",
"created": "Tue, 14 Dec 2010 19:25:42 GMT"
},
{
"version": "v4",
"created": "Wed, 15 Dec 2010 21:12:37 GMT"
}
] | 2010-12-17T00:00:00 | [
[
"Li",
"Jian",
""
],
[
"Saha",
"Barna",
""
],
[
"Deshpande",
"Amol",
""
]
] | TITLE: A Unified Approach to Ranking in Probabilistic Databases
ABSTRACT: The dramatic growth in the number of application domains that naturally
generate probabilistic, uncertain data has resulted in a need for efficiently
supporting complex querying and decision-making over such data. In this paper,
we present a unified approach to ranking and top-k query processing in
probabilistic databases by viewing it as a multi-criteria optimization problem,
and by deriving a set of features that capture the key properties of a
probabilistic dataset that dictate the ranked result. We contend that a single,
specific ranking function may not suffice for probabilistic databases, and we
instead propose two parameterized ranking functions, called PRF-w and PRF-e,
that generalize or can approximate many of the previously proposed ranking
functions. We present novel generating functions-based algorithms for
efficiently ranking large datasets according to these ranking functions, even
if the datasets exhibit complex correlations modeled using probabilistic
and/xor trees or Markov networks. We further propose that the parameters of the
ranking function be learned from user preferences, and we develop an approach
to learn those parameters. Finally, we present a comprehensive experimental
study that illustrates the effectiveness of our parameterized ranking
functions, especially PRF-e, at approximating other ranking functions and the
scalability of our proposed algorithms for exact or approximate ranking.
| no_new_dataset | 0.949623 |
1012.3476 | Guillaume Desjardins | Guillaume Desjardins, Aaron Courville, Yoshua Bengio | Adaptive Parallel Tempering for Stochastic Maximum Likelihood Learning
of RBMs | Presented at the "NIPS 2010 Workshop on Deep Learning and
Unsupervised Feature Learning" | null | null | null | stat.ML cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Restricted Boltzmann Machines (RBM) have attracted a lot of attention of
late, as one of the principal building blocks of deep networks. Training RBMs
remains problematic, however, because of the intractability of their partition
function. The maximum likelihood gradient requires a very robust sampler which
can accurately sample from the model despite the loss of ergodicity often
incurred during learning. While using Parallel Tempering in the negative phase
of Stochastic Maximum Likelihood (SML-PT) helps address the issue, it imposes a
trade-off between computational complexity and high ergodicity, and requires
careful hand-tuning of the temperatures. In this paper, we show that this
trade-off is unnecessary. The choice of optimal temperatures can be automated
by minimizing average return time (a concept first proposed by [Katzgraber et
al., 2006]) while chains can be spawned dynamically, as needed, thus minimizing
the computational overhead. We show, on a synthetic dataset, that this results
in better likelihood scores.
| [
{
"version": "v1",
"created": "Wed, 15 Dec 2010 21:23:09 GMT"
}
] | 2010-12-17T00:00:00 | [
[
"Desjardins",
"Guillaume",
""
],
[
"Courville",
"Aaron",
""
],
[
"Bengio",
"Yoshua",
""
]
] | TITLE: Adaptive Parallel Tempering for Stochastic Maximum Likelihood Learning
of RBMs
ABSTRACT: Restricted Boltzmann Machines (RBM) have attracted a lot of attention of
late, as one of the principal building blocks of deep networks. Training RBMs
remains problematic, however, because of the intractability of their partition
function. The maximum likelihood gradient requires a very robust sampler which
can accurately sample from the model despite the loss of ergodicity often
incurred during learning. While using Parallel Tempering in the negative phase
of Stochastic Maximum Likelihood (SML-PT) helps address the issue, it imposes a
trade-off between computational complexity and high ergodicity, and requires
careful hand-tuning of the temperatures. In this paper, we show that this
trade-off is unnecessary. The choice of optimal temperatures can be automated
by minimizing average return time (a concept first proposed by [Katzgraber et
al., 2006]) while chains can be spawned dynamically, as needed, thus minimizing
the computational overhead. We show, on a synthetic dataset, that this results
in better likelihood scores.
| no_new_dataset | 0.946941 |
1010.4843 | Massimo Brescia Dr | Massimo Brescia, Giuseppe Longo, George S. Djorgovski, Stefano
Cavuoti, Raffaele D'Abrusco, Ciro Donalek, Alessandro Di Guido, Michelangelo
Fiore, Mauro Garofalo, Omar Laurino, Ashish Mahabal, Francesco Manna, Alfonso
Nocella, Giovanni d'Angelo, Maurizio Paolillo | DAME: A Web Oriented Infrastructure for Scientific Data Mining &
Exploration | 16 pages, 9 figures, software available at
http://voneural.na.infn.it/beta_info.html | null | null | null | astro-ph.IM astro-ph.GA cs.DB cs.DC cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nowadays, many scientific areas share the same need of being able to deal
with massive and distributed datasets and to perform on them complex knowledge
extraction tasks. This simple consideration is behind the international efforts
to build virtual organizations such as, for instance, the Virtual Observatory
(VObs). DAME (DAta Mining & Exploration) is an innovative, general purpose,
Web-based, VObs compliant, distributed data mining infrastructure specialized
in Massive Data Sets exploration with machine learning methods. Initially fine
tuned to deal with astronomical data only, DAME has evolved into a general
purpose platform which has found applications also in other domains of human
endeavor. We present the products and a short outline of a science case,
together with a detailed description of the main features available in the beta
version of the web application now released.
| [
{
"version": "v1",
"created": "Sat, 23 Oct 2010 04:43:13 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Dec 2010 04:48:34 GMT"
}
] | 2010-12-09T00:00:00 | [
[
"Brescia",
"Massimo",
""
],
[
"Longo",
"Giuseppe",
""
],
[
"Djorgovski",
"George S.",
""
],
[
"Cavuoti",
"Stefano",
""
],
[
"D'Abrusco",
"Raffaele",
""
],
[
"Donalek",
"Ciro",
""
],
[
"Di Guido",
"Alessandro",
""
],
[
"Fiore",
"Michelangelo",
""
],
[
"Garofalo",
"Mauro",
""
],
[
"Laurino",
"Omar",
""
],
[
"Mahabal",
"Ashish",
""
],
[
"Manna",
"Francesco",
""
],
[
"Nocella",
"Alfonso",
""
],
[
"d'Angelo",
"Giovanni",
""
],
[
"Paolillo",
"Maurizio",
""
]
] | TITLE: DAME: A Web Oriented Infrastructure for Scientific Data Mining &
Exploration
ABSTRACT: Nowadays, many scientific areas share the same need of being able to deal
with massive and distributed datasets and to perform on them complex knowledge
extraction tasks. This simple consideration is behind the international efforts
to build virtual organizations such as, for instance, the Virtual Observatory
(VObs). DAME (DAta Mining & Exploration) is an innovative, general purpose,
Web-based, VObs compliant, distributed data mining infrastructure specialized
in Massive Data Sets exploration with machine learning methods. Initially fine
tuned to deal with astronomical data only, DAME has evolved into a general
purpose platform which has found applications also in other domains of human
endeavor. We present the products and a short outline of a science case,
together with a detailed description of the main features available in the beta
version of the web application now released.
| no_new_dataset | 0.951818 |
1011.3728 | Curzio Basso | Curzio Basso and Matteo Santoro and Alessandro Verri and Silvia Villa | PADDLE: Proximal Algorithm for Dual Dictionaries LEarning | null | null | null | DISI-TR-2010-06 | cs.LG cs.IT math.IT stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, considerable research efforts have been devoted to the design of
methods to learn from data overcomplete dictionaries for sparse coding.
However, learned dictionaries require the solution of an optimization problem
for coding new data. In order to overcome this drawback, we propose an
algorithm aimed at learning both a dictionary and its dual: a linear mapping
directly performing the coding. By leveraging proximal methods, our
algorithm jointly minimizes the reconstruction error of the dictionary and the
coding error of its dual; the sparsity of the representation is induced by an
$\ell_1$-based penalty on its coefficients. The results obtained on synthetic
data and real images show that the algorithm is capable of recovering the
expected dictionaries. Furthermore, on a benchmark dataset, we show that the
image features obtained from the dual matrix yield state-of-the-art
classification performance while being much less computationally intensive.
| [
{
"version": "v1",
"created": "Tue, 16 Nov 2010 15:31:25 GMT"
}
] | 2010-11-17T00:00:00 | [
[
"Basso",
"Curzio",
""
],
[
"Santoro",
"Matteo",
""
],
[
"Verri",
"Alessandro",
""
],
[
"Villa",
"Silvia",
""
]
] | TITLE: PADDLE: Proximal Algorithm for Dual Dictionaries LEarning
ABSTRACT: Recently, considerable research efforts have been devoted to the design of
methods to learn from data overcomplete dictionaries for sparse coding.
However, learned dictionaries require the solution of an optimization problem
for coding new data. In order to overcome this drawback, we propose an
algorithm aimed at learning both a dictionary and its dual: a linear mapping
directly performing the coding. By leveraging proximal methods, our
algorithm jointly minimizes the reconstruction error of the dictionary and the
coding error of its dual; the sparsity of the representation is induced by an
$\ell_1$-based penalty on its coefficients. The results obtained on synthetic
data and real images show that the algorithm is capable of recovering the
expected dictionaries. Furthermore, on a benchmark dataset, we show that the
image features obtained from the dual matrix yield state-of-the-art
classification performance while being much less computationally intensive.
| no_new_dataset | 0.942295 |
1011.2807 | Jijie Wang | Jijie Wang, Lei Lin, Ting Huang, Jingjing Wang and Zengyou He | Efficient K-Nearest Neighbor Join Algorithms for High Dimensional Sparse
Data | 12 pages, This paper has been submitted to PAKDD2011 | null | null | null | cs.DB cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The K-Nearest Neighbor (KNN) join is an expensive but important operation in
many data mining algorithms. Several recent applications need to perform KNN
join for high dimensional sparse data. Unfortunately, all existing KNN join
algorithms are designed for low dimensional data. To fill this void, we
investigate the KNN join problem for high dimensional sparse data.
In this paper, we propose three KNN join algorithms: a brute force (BF)
algorithm, an inverted index-based(IIB) algorithm and an improved inverted
index-based(IIIB) algorithm. Extensive experiments on both synthetic and
real-world datasets were conducted to demonstrate the effectiveness of our
algorithms for high dimensional sparse data.
| [
{
"version": "v1",
"created": "Fri, 12 Nov 2010 01:35:39 GMT"
}
] | 2010-11-15T00:00:00 | [
[
"Wang",
"Jijie",
""
],
[
"Lin",
"Lei",
""
],
[
"Huang",
"Ting",
""
],
[
"Wang",
"Jingjing",
""
],
[
"He",
"Zengyou",
""
]
] | TITLE: Efficient K-Nearest Neighbor Join Algorithms for High Dimensional Sparse
Data
ABSTRACT: The K-Nearest Neighbor (KNN) join is an expensive but important operation in
many data mining algorithms. Several recent applications need to perform KNN
join for high dimensional sparse data. Unfortunately, all existing KNN join
algorithms are designed for low dimensional data. To fill this void, we
investigate the KNN join problem for high dimensional sparse data.
In this paper, we propose three KNN join algorithms: a brute force (BF)
algorithm, an inverted index-based (IIB) algorithm and an improved inverted
index-based (IIIB) algorithm. Extensive experiments on both synthetic and
real-world datasets were conducted to demonstrate the effectiveness of our
algorithms for high dimensional sparse data.
| no_new_dataset | 0.952882 |
1011.2107 | Jocelyne Troccaz | Janssoone Thomas (TIMC), Gr\'egoire Chevreau (TIMC), Lucile Vadcard
(LSE), Pierre Mozer, Jocelyne Troccaz (TIMC) | Biopsym : a learning environment for transrectal ultrasound guided
prostate biopsies | null | 18th "Medicine Meets Virtual Reality", Newport Beach : United
States (2011) | null | null | cs.OH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes a learning environment for image-guided prostate
biopsies in cancer diagnosis; it is based on an ultrasound probe simulator
virtually exploring real datasets obtained from patients. The aim is to make
the training of young physicians easier and faster with a tool that combines
lectures, biopsy simulations and recommended exercises to master this medical
gesture. It will particularly help acquiring the three-dimensional
representation of the prostate needed for practicing biopsy sequences. The
simulator uses a haptic feedback to compute the position of the virtual probe
from three-dimensional (3D) ultrasound recorded data. This paper presents the
current version of this learning environment.
| [
{
"version": "v1",
"created": "Mon, 8 Nov 2010 15:54:10 GMT"
}
] | 2010-11-10T00:00:00 | [
[
"Thomas",
"Janssoone",
"",
"TIMC"
],
[
"Chevreau",
"Grégoire",
"",
"TIMC"
],
[
"Vadcard",
"Lucile",
"",
"LSE"
],
[
"Mozer",
"Pierre",
"",
"TIMC"
],
[
"Troccaz",
"Jocelyne",
"",
"TIMC"
]
] | TITLE: Biopsym : a learning environment for transrectal ultrasound guided
prostate biopsies
ABSTRACT: This paper describes a learning environment for image-guided prostate
biopsies in cancer diagnosis; it is based on an ultrasound probe simulator
virtually exploring real datasets obtained from patients. The aim is to make
the training of young physicians easier and faster with a tool that combines
lectures, biopsy simulations and recommended exercises to master this medical
gesture. It will particularly help acquiring the three-dimensional
representation of the prostate needed for practicing biopsy sequences. The
simulator uses a haptic feedback to compute the position of the virtual probe
from three-dimensional (3D) ultrasound recorded data. This paper presents the
current version of this learning environment.
| no_new_dataset | 0.947235 |
1011.1127 | Dan Tavrov | Oleg Chertov, Dan Tavrov | Group Anonymity: Problems and Solutions | 13 pages, 6 figures, 1 table. Published by "Lviv Polytechnica
Publishing House" in "Information Systems and Networks"
(http://vlp.com.ua/taxonomy/term/3136) | Information Systems and Networks, No. 673, pp. 3-15, 2010 | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing methods of providing data anonymity preserve individual privacy,
but the task of protecting respondent groups' information in publicly
available datasets remains open. Group anonymity lies in hiding (masking) data
patterns that cannot be revealed by analyzing individual records. We discuss
main corresponding problems, and provide methods for solving each one.
Keywords: group anonymity, wavelet transform.
| [
{
"version": "v1",
"created": "Thu, 4 Nov 2010 12:02:53 GMT"
}
] | 2010-11-05T00:00:00 | [
[
"Chertov",
"Oleg",
""
],
[
"Tavrov",
"Dan",
""
]
] | TITLE: Group Anonymity: Problems and Solutions
ABSTRACT: Existing methods of providing data anonymity preserve individual privacy,
but the task of protecting respondent groups' information in publicly
available datasets remains open. Group anonymity lies in hiding (masking) data
patterns that cannot be revealed by analyzing individual records. We discuss
main corresponding problems, and provide methods for solving each one.
Keywords: group anonymity, wavelet transform.
| no_new_dataset | 0.945951 |
astro-ph/0601073 | Gregory V. Vereshchagin | G. V. Vereshchagin and G. Yegorian | Cosmological models with Gurzadyan-Xue dark energy | figure 3 and typos are corrected, version matches the one to appear
in Classical and Quantum Gravity | Class.Quant.Grav.23:5049-5062,2006 | 10.1088/0264-9381/23/15/020 | null | astro-ph hep-th physics.class-ph | null | The formula for dark energy density derived by Gurzadyan and Xue provides a
value of density parameter of dark energy in remarkable agreement with current
cosmological datasets, unlike numerous phenomenological dark energy scenarios
where the corresponding value is postulated. This formula suggests the
possibility of variation of physical constants such as the speed of light and
the gravitational constant. Considering several cosmological models based on
that formula and deriving the cosmological equations for each case, we show
that, in all models, source terms appear in the continuity equation. So, on one
hand, GX models make up a rich set covering a lot of currently proposed models
of dark energy, on the other hand, they reveal hidden symmetries, with a
particular role of the separatrix $\Omega_m=2/3$, and link with the issue of
the content of physical constants.
| [
{
"version": "v1",
"created": "Wed, 4 Jan 2006 09:55:28 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Feb 2006 15:27:28 GMT"
},
{
"version": "v3",
"created": "Wed, 31 May 2006 16:26:22 GMT"
},
{
"version": "v4",
"created": "Fri, 7 Jul 2006 14:27:59 GMT"
}
] | 2010-11-05T00:00:00 | [
[
"Vereshchagin",
"G. V.",
""
],
[
"Yegorian",
"G.",
""
]
] | TITLE: Cosmological models with Gurzadyan-Xue dark energy
ABSTRACT: The formula for dark energy density derived by Gurzadyan and Xue provides a
value of density parameter of dark energy in remarkable agreement with current
cosmological datasets, unlike numerous phenomenological dark energy scenarios
where the corresponding value is postulated. This formula suggests the
possibility of variation of physical constants such as the speed of light and
the gravitational constant. Considering several cosmological models based on
that formula and deriving the cosmological equations for each case, we show
that, in all models, source terms appear in the continuity equation. So, on one
hand, GX models make up a rich set covering a lot of currently proposed models
of dark energy, on the other hand, they reveal hidden symmetries, with a
particular role of the separatrix $\Omega_m=2/3$, and link with the issue of
the content of physical constants.
| no_new_dataset | 0.948489 |
1010.5943 | Szymon Chojnacki Mr | Szymon Chojnacki and Mieczys{\l}aw K{\l}opotek | Random Graph Generator for Bipartite Networks Modeling | null | null | null | null | cs.AI cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The purpose of this article is to introduce a new iterative algorithm with
properties resembling real life bipartite graphs. The algorithm enables us to
generate a wide range of random bigraphs, whose features are determined by a set
of parameters. We adapt the advances of the last decade in unipartite complex
networks modeling to the bigraph setting. This data structure can be observed
in several situations. However, only a few datasets are freely available to
test the algorithms (e.g. community detection, influential nodes
identification, information retrieval) which operate on such data. Therefore,
artificial datasets are needed to enhance development and testing of the
algorithms. We are particularly interested in applying the generator to the
analysis of recommender systems. Therefore, we focus on two characteristics
that, besides simple statistics, are in our opinion responsible for the
performance of neighborhood based collaborative filtering algorithms. The
features are the node degree distribution and the local clustering coefficient.
| [
{
"version": "v1",
"created": "Thu, 28 Oct 2010 12:39:10 GMT"
},
{
"version": "v2",
"created": "Tue, 2 Nov 2010 16:47:39 GMT"
}
] | 2010-11-03T00:00:00 | [
[
"Chojnacki",
"Szymon",
""
],
[
"Kłopotek",
"Mieczysław",
""
]
] | TITLE: Random Graph Generator for Bipartite Networks Modeling
ABSTRACT: The purpose of this article is to introduce a new iterative algorithm with
properties resembling real life bipartite graphs. The algorithm enables us to
generate a wide range of random bigraphs, whose features are determined by a set
of parameters. We adapt the advances of the last decade in unipartite complex
networks modeling to the bigraph setting. This data structure can be observed
in several situations. However, only a few datasets are freely available to
test the algorithms (e.g. community detection, influential nodes
identification, information retrieval) which operate on such data. Therefore,
artificial datasets are needed to enhance development and testing of the
algorithms. We are particularly interested in applying the generator to the
analysis of recommender systems. Therefore, we focus on two characteristics
that, besides simple statistics, are in our opinion responsible for the
performance of neighborhood based collaborative filtering algorithms. The
features are the node degree distribution and the local clustering coefficient.
| no_new_dataset | 0.921145 |
1010.5954 | Szymon Chojnacki Mr | Szymon Chojnacki and Mieczys{\l}aw K{\l}opotek | Random Graphs for Performance Evaluation of Recommender Systems | null | null | null | null | cs.AI cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The purpose of this article is to introduce a new analytical framework
dedicated to measuring performance of recommender systems. The standard
approach is to assess the quality of a system by means of accuracy related
statistics. However, the specificity of the environments in which recommender
systems are deployed requires to pay much attention to speed and memory
requirements of the algorithms. Unfortunately, it is infeasible to assess
accurately the complexity of various algorithms with formal tools. This can be
attributed to the fact that such analyses are usually based on an assumption of
dense representation of underlying data structures. Whereas, in real life the
algorithms operate on sparse data and are implemented with collections
dedicated for them. Therefore, we propose to measure the complexity of
recommender systems with artificial datasets that possess real-life properties.
We utilize recently developed bipartite graph generator to evaluate how
state-of-the-art recommender systems' behavior is determined and diversified by
topological properties of the generated datasets.
| [
{
"version": "v1",
"created": "Thu, 28 Oct 2010 13:10:03 GMT"
}
] | 2010-10-29T00:00:00 | [
[
"Chojnacki",
"Szymon",
""
],
[
"Kłopotek",
"Mieczysław",
""
]
] | TITLE: Random Graphs for Performance Evaluation of Recommender Systems
ABSTRACT: The purpose of this article is to introduce a new analytical framework
dedicated to measuring performance of recommender systems. The standard
approach is to assess the quality of a system by means of accuracy related
statistics. However, the specificity of the environments in which recommender
systems are deployed requires to pay much attention to speed and memory
requirements of the algorithms. Unfortunately, it is infeasible to assess
accurately the complexity of various algorithms with formal tools. This can be
attributed to the fact that such analyses are usually based on an assumption of
dense representation of underlying data structures. Whereas, in real life the
algorithms operate on sparse data and are implemented with collections
dedicated for them. Therefore, we propose to measure the complexity of
recommender systems with artificial datasets that possess real-life properties.
We utilize recently developed bipartite graph generator to evaluate how
state-of-the-art recommender systems' behavior is determined and diversified by
topological properties of the generated datasets.
| no_new_dataset | 0.906901 |
1010.5610 | Ju Sun | Ju Sun, Qiang Chen, Shuicheng Yan, Loong-Fah Cheong | Selective Image Super-Resolution | 20 pages, 5 figures. Submitted to Computer Vision and Image
Understanding in March 2010. Keywords: image super resolution, semantic image
segmentation, vision system, vision application | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we propose a vision system that performs image Super Resolution
(SR) with selectivity. Conventional SR techniques, either by multi-image fusion
or example-based construction, have failed to capitalize on the intrinsic
structural and semantic context in the image, and performed "blind" resolution
recovery to the entire image area. By comparison, we advocate example-based
selective SR whereby selectivity is exemplified in three aspects: region
selectivity (SR only at object regions), source selectivity (object SR with
trained object dictionaries), and refinement selectivity (object boundaries
refinement using matting). The proposed system takes over-segmented
low-resolution images as inputs, assimilates recent learning techniques of
sparse coding (SC) and grouped multi-task lasso (GMTL), and leads eventually to
a framework for joint figure-ground separation and interest object SR. The
efficiency of our framework is manifested in our experiments with subsets of
the VOC2009 and MSRC datasets. We also demonstrate several interesting vision
applications that can build on our system.
| [
{
"version": "v1",
"created": "Wed, 27 Oct 2010 08:58:48 GMT"
}
] | 2010-10-28T00:00:00 | [
[
"Sun",
"Ju",
""
],
[
"Chen",
"Qiang",
""
],
[
"Yan",
"Shuicheng",
""
],
[
"Cheong",
"Loong-Fah",
""
]
] | TITLE: Selective Image Super-Resolution
ABSTRACT: In this paper we propose a vision system that performs image Super Resolution
(SR) with selectivity. Conventional SR techniques, either by multi-image fusion
or example-based construction, have failed to capitalize on the intrinsic
structural and semantic context in the image, and performed "blind" resolution
recovery to the entire image area. By comparison, we advocate example-based
selective SR whereby selectivity is exemplified in three aspects: region
selectivity (SR only at object regions), source selectivity (object SR with
trained object dictionaries), and refinement selectivity (object boundaries
refinement using matting). The proposed system takes over-segmented
low-resolution images as inputs, assimilates recent learning techniques of
sparse coding (SC) and grouped multi-task lasso (GMTL), and leads eventually to
a framework for joint figure-ground separation and interest object SR. The
efficiency of our framework is manifested in our experiments with subsets of
the VOC2009 and MSRC datasets. We also demonstrate several interesting vision
applications that can build on our system.
| no_new_dataset | 0.951953 |
1010.5426 | Shuai Zheng | Shuai Zheng and Kaiqi Huang and Tieniu Tan | Translation-Invariant Representation for Cumulative Foot Pressure Images | 6 pages | Shuai Zheng, Kaiqi Huang and Tieniu Tan. Translation Invariant
Representation for Cumulative foot pressure Image, The second CJK Joint
Workshop on Pattern Recognition (CJKPR), 2010 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Humans can be distinguished by different limb movements and unique ground
reaction force. A cumulative foot pressure image is a 2-D cumulative ground
reaction force during one gait cycle. Although it contains pressure spatial
distribution information and pressure temporal distribution information, it
suffers from several problems including different shoes and noise, when putting
it into practice as a new biometric for pedestrian identification. In this
paper, we propose a hierarchical translation-invariant representation for
cumulative foot pressure images, inspired by the success of Convolutional deep
belief networks for digit classification. The key contribution of our approach is
a discriminative hierarchical sparse coding scheme, which helps to learn useful
discriminative high-level visual features. Based on the feature representation
of cumulative foot pressure images, we develop a pedestrian recognition system
which is invariant to three different shoes and slight local shape change.
Experiments are conducted on a proposed open dataset that contains more than
2800 cumulative foot pressure images from 118 subjects. Evaluations suggest the
effectiveness of the proposed method and the potential of cumulative foot
pressure images as a biometric.
| [
{
"version": "v1",
"created": "Tue, 26 Oct 2010 15:16:50 GMT"
}
] | 2010-10-27T00:00:00 | [
[
"Zheng",
"Shuai",
""
],
[
"Huang",
"Kaiqi",
""
],
[
"Tan",
"Tieniu",
""
]
] | TITLE: Translation-Invariant Representation for Cumulative Foot Pressure Images
ABSTRACT: Humans can be distinguished by different limb movements and unique ground
reaction force. A cumulative foot pressure image is a 2-D cumulative ground
reaction force during one gait cycle. Although it contains pressure spatial
distribution information and pressure temporal distribution information, it
suffers from several problems including different shoes and noise, when putting
it into practice as a new biometric for pedestrian identification. In this
paper, we propose a hierarchical translation-invariant representation for
cumulative foot pressure images, inspired by the success of Convolutional deep
belief networks for digit classification. The key contribution of our approach is
a discriminative hierarchical sparse coding scheme, which helps to learn useful
discriminative high-level visual features. Based on the feature representation
of cumulative foot pressure images, we develop a pedestrian recognition system
which is invariant to three different shoes and slight local shape change.
Experiments are conducted on a proposed open dataset that contains more than
2800 cumulative foot pressure images from 118 subjects. Evaluations suggest the
effectiveness of the proposed method and the potential of cumulative foot
pressure images as a biometric.
| new_dataset | 0.953708 |
1010.3796 | Massimo Brescia Dr | M. Brescia, G. Longo, F. Pasian | Mining Knowledge in Astrophysical Massive Data Sets | Pages 845-849 1rs International Conference on Frontiers in
Diagnostics Technologies | Elsevier, Nuclear Instruments and Methods in Physics Research
Section A: Accelerators, Spectrometers, Detectors and Associated Equipment
Volume 623, Issue 2, 11 November 2010 | 10.1016/j.nima.2010.02.002 | null | astro-ph.IM cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modern scientific data mainly consist of huge datasets gathered by a very
large number of techniques and stored in very diversified and often
incompatible data repositories. More generally, in the e-science environment,
it is considered as a critical and urgent requirement to integrate services
across distributed, heterogeneous, dynamic "virtual organizations" formed by
different resources within a single enterprise. In the last decade, Astronomy
has become an immensely data rich field due to the evolution of detectors
(plates to digital to mosaics), telescopes and space instruments. The Virtual
Observatory approach consists in the federation, under common standards, of all
astronomical archives available worldwide, as well as data analysis, data
mining and data exploration applications. The main drive behind such an effort
is that once the infrastructure is completed, it will allow a new type
of multi-wavelength, multi-epoch science which can only be barely imagined.
Data Mining, or Knowledge Discovery in Databases, while being the main
methodology to extract the scientific information contained in such MDS
(Massive Data Sets), poses crucial problems since it has to orchestrate complex
problems posed by transparent access to different computing environments,
scalability of algorithms, reusability of resources, etc. In the present paper
we summarize the present status of the MDS in the Virtual Observatory and what
is currently done and planned to bring advanced Data Mining methodologies in
the case of the DAME (DAta Mining & Exploration) project.
| [
{
"version": "v1",
"created": "Tue, 19 Oct 2010 04:48:19 GMT"
}
] | 2010-10-20T00:00:00 | [
[
"Brescia",
"M.",
""
],
[
"Longo",
"G.",
""
],
[
"Pasian",
"F.",
""
]
] | TITLE: Mining Knowledge in Astrophysical Massive Data Sets
ABSTRACT: Modern scientific data mainly consist of huge datasets gathered by a very
large number of techniques and stored in very diversified and often
incompatible data repositories. More generally, in the e-science environment,
it is considered as a critical and urgent requirement to integrate services
across distributed, heterogeneous, dynamic "virtual organizations" formed by
different resources within a single enterprise. In the last decade, Astronomy
has become an immensely data rich field due to the evolution of detectors
(plates to digital to mosaics), telescopes and space instruments. The Virtual
Observatory approach consists in the federation, under common standards, of all
astronomical archives available worldwide, as well as data analysis, data
mining and data exploration applications. The main drive behind such an effort
is that once the infrastructure is completed, it will allow a new type
of multi-wavelength, multi-epoch science which can only be barely imagined.
Data Mining, or Knowledge Discovery in Databases, while being the main
methodology to extract the scientific information contained in such MDS
(Massive Data Sets), poses crucial problems since it has to orchestrate complex
problems posed by transparent access to different computing environments,
scalability of algorithms, reusability of resources, etc. In the present paper
we summarize the present status of the MDS in the Virtual Observatory and what
is currently done and planned to bring advanced Data Mining methodologies in
the case of the DAME (DAta Mining & Exploration) project.
| no_new_dataset | 0.941708 |
1010.3053 | Lars Kolb | Lars Kolb, Andreas Thor, Erhard Rahm | Parallel Sorted Neighborhood Blocking with MapReduce | null | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cloud infrastructures enable the efficient parallel execution of
data-intensive tasks such as entity resolution on large datasets. We
investigate challenges and possible solutions of using the MapReduce
programming model for parallel entity resolution. In particular, we propose and
evaluate two MapReduce-based implementations for Sorted Neighborhood blocking
that either use multiple MapReduce jobs or apply a tailored data replication.
| [
{
"version": "v1",
"created": "Fri, 15 Oct 2010 00:28:44 GMT"
}
] | 2010-10-18T00:00:00 | [
[
"Kolb",
"Lars",
""
],
[
"Thor",
"Andreas",
""
],
[
"Rahm",
"Erhard",
""
]
] | TITLE: Parallel Sorted Neighborhood Blocking with MapReduce
ABSTRACT: Cloud infrastructures enable the efficient parallel execution of
data-intensive tasks such as entity resolution on large datasets. We
investigate challenges and possible solutions of using the MapReduce
programming model for parallel entity resolution. In particular, we propose and
evaluate two MapReduce-based implementations for Sorted Neighborhood blocking
that either use multiple MapReduce jobs or apply a tailored data replication.
| no_new_dataset | 0.946051 |
1010.1437 | Mahdi Shafiei | Mahdi Shafiei and Hugh Chipman | Mixed-Membership Stochastic Block-Models for Transactional Networks | 22 pages | null | null | null | stat.ML cs.AI cs.SI stat.AP stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transactional network data can be thought of as a list of one-to-many
communications(e.g., email) between nodes in a social network. Most social
network models convert this type of data into binary relations between pairs of
nodes. We develop a latent mixed membership model capable of modeling richer
forms of transactional network data, including relations between more than two
nodes. The model can cluster nodes and predict transactions. The block-model
nature of the model implies that groups can be characterized in very general
ways. This flexible notion of group structure enables discovery of rich
structure in transactional networks. Estimation and inference are accomplished
via a variational EM algorithm. Simulations indicate that the learning
algorithm can recover the correct generative model. Interesting structure is
discovered in the Enron email dataset and another dataset extracted from the
Reddit website. Analysis of the Reddit data is facilitated by a novel
performance measure for comparing two soft clusterings. The new model is
superior at discovering mixed membership in groups and in predicting
transactions.
| [
{
"version": "v1",
"created": "Thu, 7 Oct 2010 14:16:38 GMT"
}
] | 2010-10-08T00:00:00 | [
[
"Shafiei",
"Mahdi",
""
],
[
"Chipman",
"Hugh",
""
]
] | TITLE: Mixed-Membership Stochastic Block-Models for Transactional Networks
ABSTRACT: Transactional network data can be thought of as a list of one-to-many
communications(e.g., email) between nodes in a social network. Most social
network models convert this type of data into binary relations between pairs of
nodes. We develop a latent mixed membership model capable of modeling richer
forms of transactional network data, including relations between more than two
nodes. The model can cluster nodes and predict transactions. The block-model
nature of the model implies that groups can be characterized in very general
ways. This flexible notion of group structure enables discovery of rich
structure in transactional networks. Estimation and inference are accomplished
via a variational EM algorithm. Simulations indicate that the learning
algorithm can recover the correct generative model. Interesting structure is
discovered in the Enron email dataset and another dataset extracted from the
Reddit website. Analysis of the Reddit data is facilitated by a novel
performance measure for comparing two soft clusterings. The new model is
superior at discovering mixed membership in groups and in predicting
transactions.
| no_new_dataset | 0.947478 |
1009.3980 | Eiko Yoneki | Mervyn P. Freeman, Nicholas W. Watkins, Eiko Yoneki, Jon Crowcroft | Rhythm and Randomness in Human Contact | null | International Conference on Advances in Social Networks Analysis
and Mining, 2010 | 10.1109/ASONAM.2010.57 | null | physics.data-an physics.bio-ph physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is substantial interest in the effect of human mobility patterns on
opportunistic communications. Inspired by recent work revisiting some of the
early evidence for a L\'evy flight foraging strategy in animals, we analyse
datasets on human contact from real world traces. By analysing the distribution
of inter-contact times on different time scales and using different graphical
forms, we find not only the highly skewed distributions of waiting times
highlighted in previous studies but also clear circadian rhythm. The relative
visibility of these two components depends strongly on which graphical form is
adopted and the range of time scales. We use a simple model to reconstruct the
observed behaviour and discuss the implications of this for forwarding
efficiency.
| [
{
"version": "v1",
"created": "Tue, 21 Sep 2010 01:20:07 GMT"
}
] | 2010-10-01T00:00:00 | [
[
"Freeman",
"Mervyn P.",
""
],
[
"Watkins",
"Nicholas W.",
""
],
[
"Yoneki",
"Eiko",
""
],
[
"Crowcroft",
"Jon",
""
]
] | TITLE: Rhythm and Randomness in Human Contact
ABSTRACT: There is substantial interest in the effect of human mobility patterns on
opportunistic communications. Inspired by recent work revisiting some of the
early evidence for a L\'evy flight foraging strategy in animals, we analyse
datasets on human contact from real world traces. By analysing the distribution
of inter-contact times on different time scales and using different graphical
forms, we find not only the highly skewed distributions of waiting times
highlighted in previous studies but also clear circadian rhythm. The relative
visibility of these two components depends strongly on which graphical form is
adopted and the range of time scales. We use a simple model to reconstruct the
observed behaviour and discuss the implications of this for forwarding
efficiency.
| no_new_dataset | 0.949201 |
0710.4975 | Yoshiharu Maeno | Yoshiharu Maeno | Node discovery problem for a social network | null | Connections vol.29, pp.62-76 (2009) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Methods to solve a node discovery problem for a social network are presented.
Covert nodes refer to the nodes which are not observable directly. They
transmit the influence and affect the resulting collaborative activities among
the persons in a social network, but do not appear in the surveillance logs
which record the participants of the collaborative activities. Discovering the
covert nodes means identifying the suspicious logs where the covert nodes would
appear if the covert nodes became overt. The performance of the methods is
demonstrated with a test dataset generated from computationally synthesized
networks and a real organization.
| [
{
"version": "v1",
"created": "Fri, 26 Oct 2007 01:32:47 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Aug 2009 04:18:43 GMT"
}
] | 2010-09-28T00:00:00 | [
[
"Maeno",
"Yoshiharu",
""
]
] | TITLE: Node discovery problem for a social network
ABSTRACT: Methods to solve a node discovery problem for a social network are presented.
Covert nodes refer to the nodes which are not observable directly. They
transmit the influence and affect the resulting collaborative activities among
the persons in a social network, but do not appear in the surveillance logs
which record the participants of the collaborative activities. Discovering the
covert nodes means identifying the suspicious logs where the covert nodes would
appear if the covert nodes became overt. The performance of the methods is
demonstrated with a test dataset generated from computationally synthesized
networks and a real organization.
| new_dataset | 0.952131 |
1009.4823 | Adrian Ion | Joao Carreira, Adrian Ion, and Cristian Sminchisescu | Image Segmentation by Discounted Cumulative Ranking on Maximal Cliques | 11 pages, 5 figures | null | null | TR-06-2010 | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a mid-level image segmentation framework that combines multiple
figure-ground hypotheses (FG) constrained at different locations and scales,
into interpretations that tile the entire image. The problem is cast as
optimization over sets of maximal cliques sampled from the graph connecting
non-overlapping, putative figure-ground segment hypotheses. Potential functions
over cliques combine unary Gestalt-based figure quality scores and pairwise
compatibilities among spatially neighboring segments, constrained by
T-junctions and the boundary interface statistics resulting from projections of
real 3d scenes. Learning the model parameters is formulated as rank
optimization, alternating between sampling image tilings and optimizing their
potential function parameters. State of the art results are reported on both
the Berkeley and the VOC2009 segmentation datasets, where a 28% improvement was
achieved.
| [
{
"version": "v1",
"created": "Fri, 24 Sep 2010 12:32:02 GMT"
}
] | 2010-09-27T00:00:00 | [
[
"Carreira",
"Joao",
""
],
[
"Ion",
"Adrian",
""
],
[
"Sminchisescu",
"Cristian",
""
]
] | TITLE: Image Segmentation by Discounted Cumulative Ranking on Maximal Cliques
ABSTRACT: We propose a mid-level image segmentation framework that combines multiple
figure-ground hypotheses (FG) constrained at different locations and scales,
into interpretations that tile the entire image. The problem is cast as
optimization over sets of maximal cliques sampled from the graph connecting
non-overlapping, putative figure-ground segment hypotheses. Potential functions
over cliques combine unary Gestalt-based figure quality scores and pairwise
compatibilities among spatially neighboring segments, constrained by
T-junctions and the boundary interface statistics resulting from projections of
real 3d scenes. Learning the model parameters is formulated as rank
optimization, alternating between sampling image tilings and optimizing their
potential function parameters. State of the art results are reported on both
the Berkeley and the VOC2009 segmentation datasets, where a 28% improvement was
achieved.
| no_new_dataset | 0.95388 |
1009.3984 | Hieu Dinh | Hieu Dinh and Sanguthevar Rajasekaran | A memory-efficient data structure representing exact-match overlap
graphs with application for next generation DNA assembly | null | null | null | null | cs.DS cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An exact-match overlap graph of $n$ given strings of length $\ell$ is an
edge-weighted graph in which each vertex is associated with a string and there
is an edge $(x,y)$ of weight $\omega = \ell - |ov_{max}(x,y)|$ if and only if
$\omega \leq \lambda$, where $|ov_{max}(x,y)|$ is the length of $ov_{max}(x,y)$
and $\lambda$ is a given threshold. In this paper, we show that the exact-match
overlap graphs can be represented by a compact data structure that can be
stored using at most $(2\lambda -1 )(2\lceil\log n\rceil +
\lceil\log\lambda\rceil)n$ bits with a guarantee that the basic operation of
accessing an edge takes $O(\log \lambda)$ time.
Exact-match overlap graphs have been broadly used in the context of DNA
assembly and the \emph{shortest super string problem} where the number of
strings $n$ ranges from a couple of thousands to a couple of billions, the
length $\ell$ of the strings is from 25 to 1000, depending on DNA sequencing
technologies. However, many DNA assemblers using overlap graphs are facing a
major problem of constructing and storing them. Especially, it is impossible
for these DNA assemblers to handle the huge amount of data produced by the next
generation sequencing technologies where the number of strings $n$ is usually
very large ranging from hundred million to a couple of billions. In fact, to
our best knowledge there is no DNA assemblers that can handle such a large
number of strings. Fortunately, with our compact data structure, the major
problem of constructing and storing overlap graphs is practically solved since
it only requires linear time and linear memory. As a result, it opens the
door to the possibility of building a DNA assembler that can handle large-scale
datasets efficiently.
| [
{
"version": "v1",
"created": "Tue, 21 Sep 2010 02:39:34 GMT"
}
] | 2010-09-22T00:00:00 | [
[
"Dinh",
"Hieu",
""
],
[
"Rajasekaran",
"Sanguthevar",
""
]
] | TITLE: A memory-efficient data structure representing exact-match overlap
graphs with application for next generation DNA assembly
ABSTRACT: An exact-match overlap graph of $n$ given strings of length $\ell$ is an
edge-weighted graph in which each vertex is associated with a string and there
is an edge $(x,y)$ of weight $\omega = \ell - |ov_{max}(x,y)|$ if and only if
$\omega \leq \lambda$, where $|ov_{max}(x,y)|$ is the length of $ov_{max}(x,y)$
and $\lambda$ is a given threshold. In this paper, we show that the exact-match
overlap graphs can be represented by a compact data structure that can be
stored using at most $(2\lambda -1 )(2\lceil\log n\rceil +
\lceil\log\lambda\rceil)n$ bits with a guarantee that the basic operation of
accessing an edge takes $O(\log \lambda)$ time.
Exact-match overlap graphs have been broadly used in the context of DNA
assembly and the \emph{shortest super string problem} where the number of
strings $n$ ranges from a couple of thousands to a couple of billions, the
length $\ell$ of the strings is from 25 to 1000, depending on DNA sequencing
technologies. However, many DNA assemblers using overlap graphs are facing a
major problem of constructing and storing them. Especially, it is impossible
for these DNA assemblers to handle the huge amount of data produced by the next
generation sequencing technologies where the number of strings $n$ is usually
very large, ranging from hundreds of millions to a couple of billions. In fact, to
the best of our knowledge there are no DNA assemblers that can handle such a large
number of strings. Fortunately, with our compact data structure, the major
problem of constructing and storing overlap graphs is practically solved since
it only requires linear time and linear memory. As a result, it opens the
door to the possibility of building a DNA assembler that can handle large-scale
datasets efficiently.
| no_new_dataset | 0.943295 |
1008.0135 | Amr Hassan | A.H. Hassan, C.J. Fluke, D.G. Barnes | Interactive Visualization of the Largest Radioastronomy Cubes | 15 pages, 12 figures, Accepted New Astronomy July 2010 | New Astronomy 16 (2011), pp. 100-109 | 10.1016/j.newast.2010.07.009 | null | astro-ph.IM cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D visualization is an important data analysis and knowledge discovery tool,
but interactive visualization of large 3D astronomical datasets poses a
challenge for many existing data visualization packages. We present a solution
to interactively visualize larger-than-memory 3D astronomical data cubes by
utilizing a heterogeneous cluster of CPUs and GPUs. The system partitions the
data volume into smaller sub-volumes that are distributed over the rendering
workstations. A GPU-based ray casting volume rendering is performed to generate
images for each sub-volume, which are composited to generate the whole volume
output, and returned to the user. Datasets including the HI Parkes All Sky
Survey (HIPASS - 12 GB) southern sky and the Galactic All Sky Survey (GASS - 26
GB) data cubes were used to demonstrate our framework's performance. The
framework can render the GASS data cube with a maximum render time < 0.3 second
with 1024 x 1024 pixels output resolution using 3 rendering workstations and 8
GPUs. Our framework will scale to visualize larger datasets, even of Terabyte
order, if proper hardware infrastructure is available.
| [
{
"version": "v1",
"created": "Sun, 1 Aug 2010 00:55:23 GMT"
}
] | 2010-09-21T00:00:00 | [
[
"Hassan",
"A. H.",
""
],
[
"Fluke",
"C. J.",
""
],
[
"Barnes",
"D. G.",
""
]
] | TITLE: Interactive Visualization of the Largest Radioastronomy Cubes
ABSTRACT: 3D visualization is an important data analysis and knowledge discovery tool,
but interactive visualization of large 3D astronomical datasets poses a
challenge for many existing data visualization packages. We present a solution
to interactively visualize larger-than-memory 3D astronomical data cubes by
utilizing a heterogeneous cluster of CPUs and GPUs. The system partitions the
data volume into smaller sub-volumes that are distributed over the rendering
workstations. A GPU-based ray casting volume rendering is performed to generate
images for each sub-volume, which are composited to generate the whole volume
output, and returned to the user. Datasets including the HI Parkes All Sky
Survey (HIPASS - 12 GB) southern sky and the Galactic All Sky Survey (GASS - 26
GB) data cubes were used to demonstrate our framework's performance. The
framework can render the GASS data cube with a maximum render time below 0.3
seconds at 1024 x 1024 pixel output resolution, using 3 rendering workstations
and 8 GPUs. Our framework will scale to visualize larger datasets, even of Terabyte
order, if proper hardware infrastructure is available.
| no_new_dataset | 0.943086 |
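The partition step described in the abstract above (splitting a data cube into sub-volumes that are distributed over rendering workstations) can be illustrated with a minimal NumPy sketch. The split factors, cube size, and function name below are assumptions for illustration only; the GPU ray-casting and image-compositing stages of the actual framework are not shown.

```python
import numpy as np

def partition_cube(cube: np.ndarray, splits=(2, 2, 2)):
    """Split a 3D data cube into sub-volumes, one per rendering worker."""
    sub_volumes = []
    for z_block in np.array_split(cube, splits[0], axis=0):
        for y_block in np.array_split(z_block, splits[1], axis=1):
            for x_block in np.array_split(y_block, splits[2], axis=2):
                sub_volumes.append(x_block)
    return sub_volumes

# Small synthetic cube for demonstration (the HIPASS/GASS cubes are far larger).
cube = np.random.rand(64, 64, 64).astype(np.float32)
parts = partition_cube(cube)
print(len(parts), parts[0].shape)  # 8 sub-volumes of shape (32, 32, 32)
```

In the described system each sub-volume would be rendered independently (e.g. by GPU ray casting) and the partial images composited into the final output; only the data decomposition is sketched here.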
1009.3711 | EPTCS | Yi-Hsun Wang, Ching-Hao Mao, Hahn-Ming Lee | Structural Learning of Attack Vectors for Generating Mutated XSS Attacks | In Proceedings TAV-WEB 2010, arXiv:1009.3306 | EPTCS 35, 2010, pp. 15-26 | 10.4204/EPTCS.35.2 | null | cs.SE cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Web applications suffer from cross-site scripting (XSS) attacks that
result from incomplete or incorrect input sanitization. Learning the
structure of attack vectors could enrich the variety of manifestations in
generated XSS attacks. In this study, we focus on generating more threatening
XSS attacks for the state-of-the-art detection approaches that can find
potential XSS vulnerabilities in Web applications, and propose a mechanism for
structural learning of attack vectors with the aim of generating mutated XSS
attacks in a fully automatic way. Mutated XSS attack generation depends on the
analysis of attack vectors and the structural learning mechanism. For the
kernel of the learning mechanism, we use a Hidden Markov model (HMM) as the
structure of the attack vector model to capture the implicit manner of the
attack vector, and this manner benefits from the syntax meanings that are
labeled by the proposed tokenizing mechanism. Bayes' theorem is used to
determine the number of hidden states in the model for generalizing the
structure model. The contributions of this paper are as follows: (1)
automatically learning the structure of attack vectors from practical data
analysis in order to build a structural model of attack vectors, (2) mimicking
the manners and elements of attack vectors to extend the ability of testing
tools to identify XSS vulnerabilities, and (3) helping to verify the flaws of
blacklist sanitization procedures in Web applications. We evaluated the
proposed mechanism using Burp Intruder with a dataset collected from public XSS
archives. The results show that mutated XSS attack generation can identify
potential vulnerabilities.
| [
{
"version": "v1",
"created": "Mon, 20 Sep 2010 07:19:27 GMT"
}
] | 2010-09-21T00:00:00 | [
[
"Wang",
"Yi-Hsun",
""
],
[
"Mao",
"Ching-Hao",
""
],
[
"Lee",
"Hahn-Ming",
""
]
] | TITLE: Structural Learning of Attack Vectors for Generating Mutated XSS Attacks
ABSTRACT: Web applications suffer from cross-site scripting (XSS) attacks that
result from incomplete or incorrect input sanitization. Learning the
structure of attack vectors could enrich the variety of manifestations in
generated XSS attacks. In this study, we focus on generating more threatening
XSS attacks for the state-of-the-art detection approaches that can find
potential XSS vulnerabilities in Web applications, and propose a mechanism for
structural learning of attack vectors with the aim of generating mutated XSS
attacks in a fully automatic way. Mutated XSS attack generation depends on the
analysis of attack vectors and the structural learning mechanism. For the
kernel of the learning mechanism, we use a Hidden Markov model (HMM) as the
structure of the attack vector model to capture the implicit manner of the
attack vector, and this manner benefits from the syntax meanings that are
labeled by the proposed tokenizing mechanism. Bayes' theorem is used to
determine the number of hidden states in the model for generalizing the
structure model. The contributions of this paper are as follows: (1)
automatically learning the structure of attack vectors from practical data
analysis in order to build a structural model of attack vectors, (2) mimicking
the manners and elements of attack vectors to extend the ability of testing
tools to identify XSS vulnerabilities, and (3) helping to verify the flaws of
blacklist sanitization procedures in Web applications. We evaluated the
proposed mechanism using Burp Intruder with a dataset collected from public XSS
archives. The results show that mutated XSS attack generation can identify
potential vulnerabilities.
| no_new_dataset | 0.946498 |
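The tokenizing step described in the abstract above (labeling attack vectors with syntax meanings before fitting an HMM) can be sketched as follows. The token classes and regular expressions are hypothetical and do not reproduce the paper's actual mechanism; the HMM training and the Bayes-based selection of the number of hidden states are omitted.

```python
import re

# Hypothetical token classes for illustration only.
TOKEN_SPEC = [
    ("TAG_OPEN",  r"<\s*\w+"),
    ("TAG_CLOSE", r"</\s*\w+\s*>|/?>"),
    ("EVENT",     r"on\w+\s*="),
    ("SCRIPT",    r"javascript\s*:"),
    ("QUOTE",     r"['\"]"),
    ("EQUALS",    r"="),
    ("WORD",      r"[^<>'\"=\s]+"),
    ("WS",        r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC),
                    re.IGNORECASE)

def tokenize(vector: str):
    """Map an attack vector to a sequence of coarse syntax tokens (whitespace dropped)."""
    return [m.lastgroup for m in MASTER.finditer(vector) if m.lastgroup != "WS"]

print(tokenize('<img src=x onerror="alert(1)">'))
# ['TAG_OPEN', 'WORD', 'EQUALS', 'WORD', 'EVENT', 'QUOTE', 'WORD', 'QUOTE', 'TAG_CLOSE']
```

Token sequences of this kind would then serve as the observation sequences for an HMM-based structural model of attack vectors.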