| id (string, 9-16 chars) | submitter (string, 3-64 chars, nullable ⌀) | authors (string, 5-6.63k chars) | title (string, 7-245 chars) | comments (string, 1-482 chars, nullable ⌀) | journal-ref (string, 4-382 chars, nullable ⌀) | doi (string, 9-151 chars, nullable ⌀) | report-no (string, 984 classes) | categories (string, 5-108 chars) | license (string, 9 classes) | abstract (string, 83-3.41k chars) | versions (list, 1-20 items) | update_date (timestamp[s], 2007-05-23 to 2025-04-11) | authors_parsed (sequence, 1-427 items) | prompt (string, 166-3.49k chars) | label (string, 2 classes) | prob (float64, 0.5-0.98) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1009.0892 | Chunhua Shen | Yongbin Zheng, Chunhua Shen, Richard Hartley, Xinsheng Huang | Effective Pedestrian Detection Using Center-symmetric Local
Binary/Trinary Patterns | 11 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurately detecting pedestrians in images plays a critically important role
in many computer vision applications. Extraction of effective features is the
key to this task. Promising features should be discriminative, robust to
various variations and easy to compute. In this work, we present novel
features, termed dense center-symmetric local binary patterns (CS-LBP) and
pyramid center-symmetric local binary/ternary patterns (CS-LBP/LTP), for
pedestrian detection. The standard LBP proposed by Ojala et al. \cite{c4}
mainly captures the texture information. The proposed CS-LBP feature, in
contrast, captures the gradient information and some texture information.
Moreover, the proposed dense CS-LBP and the pyramid CS-LBP/LTP are easy to
implement and computationally efficient, which is desirable for real-time
applications. Experiments on the INRIA pedestrian dataset show that the dense
CS-LBP feature with linear support vector machines (SVMs) is comparable with
the histograms of oriented gradients (HOG) feature with linear SVMs, and the
pyramid CS-LBP/LTP features outperform both HOG features with linear SVMs and
the state-of-the-art pyramid HOG (PHOG) feature with the histogram intersection
kernel SVMs. We also demonstrate that the combination of our pyramid CS-LBP
feature and the PHOG feature could significantly improve the detection
performance, producing state-of-the-art accuracy on the INRIA pedestrian
dataset.
| [
{
"version": "v1",
"created": "Sun, 5 Sep 2010 05:16:11 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Sep 2010 01:58:29 GMT"
}
] | 2010-09-20T00:00:00 | [
[
"Zheng",
"Yongbin",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Hartley",
"Richard",
""
],
[
"Huang",
"Xinsheng",
""
]
] | TITLE: Effective Pedestrian Detection Using Center-symmetric Local
Binary/Trinary Patterns
ABSTRACT: Accurately detecting pedestrians in images plays a critically important role
in many computer vision applications. Extraction of effective features is the
key to this task. Promising features should be discriminative, robust to
various variations and easy to compute. In this work, we present novel
features, termed dense center-symmetric local binary patterns (CS-LBP) and
pyramid center-symmetric local binary/ternary patterns (CS-LBP/LTP), for
pedestrian detection. The standard LBP proposed by Ojala et al. \cite{c4}
mainly captures the texture information. The proposed CS-LBP feature, in
contrast, captures the gradient information and some texture information.
Moreover, the proposed dense CS-LBP and the pyramid CS-LBP/LTP are easy to
implement and computationally efficient, which is desirable for real-time
applications. Experiments on the INRIA pedestrian dataset show that the dense
CS-LBP feature with linear support vector machines (SVMs) is comparable with
the histograms of oriented gradients (HOG) feature with linear SVMs, and the
pyramid CS-LBP/LTP features outperform both HOG features with linear SVMs and
the state-of-the-art pyramid HOG (PHOG) feature with the histogram intersection
kernel SVMs. We also demonstrate that the combination of our pyramid CS-LBP
feature and the PHOG feature could significantly improve the detection
performance, producing state-of-the-art accuracy on the INRIA pedestrian
dataset.
| no_new_dataset | 0.95096 |
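The record above describes CS-LBP only informally. As a rough illustration of the idea, here is a minimal dense CS-LBP operator in Python: each interior pixel compares its four center-symmetric neighbor pairs, yielding a 4-bit code in 0-15. The radius-1, 8-neighbor layout and the threshold value are our assumptions, not the paper's settings.

```python
import numpy as np

def cs_lbp(image, threshold=0.01):
    """Minimal dense CS-LBP sketch: compare the 4 center-symmetric
    neighbor pairs of each interior pixel, yielding a 4-bit code (0-15).
    Radius and threshold are illustrative assumptions."""
    img = image.astype(np.float64)
    # The 8 neighbors of each interior pixel, in circular order.
    n = [img[0:-2, 1:-1], img[0:-2, 2:],  img[1:-1, 2:],  img[2:, 2:],
         img[2:,  1:-1], img[2:,  0:-2], img[1:-1, 0:-2], img[0:-2, 0:-2]]
    code = np.zeros_like(n[0], dtype=np.uint8)
    for i in range(4):  # 4 center-symmetric pairs: (i, i + 4)
        code |= ((n[i] - n[i + 4]) > threshold).astype(np.uint8) << i
    return code  # one 0-15 code per interior pixel; histogram per cell for features
```

The dense descriptor would then be built by histogramming these codes over a grid of cells, analogous to how HOG pools gradient orientations.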
1009.2722 | Myung Jin Choi | Myung Jin Choi, Vincent Y. F. Tan, Animashree Anandkumar, Alan S.
Willsky | Learning Latent Tree Graphical Models | null | null | null | null | stat.ML cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of learning a latent tree graphical model where samples
are available only from a subset of variables. We propose two consistent and
computationally efficient algorithms for learning minimal latent trees, that
is, trees without any redundant hidden nodes. Unlike many existing methods, the
observed nodes (or variables) are not constrained to be leaf nodes. Our first
algorithm, recursive grouping, builds the latent tree recursively by
identifying sibling groups using so-called information distances. One of the
main contributions of this work is our second algorithm, which we refer to as
CLGrouping. CLGrouping starts with a pre-processing procedure in which a tree
over the observed variables is constructed. This global step groups the
observed nodes that are likely to be close to each other in the true latent
tree, thereby guiding subsequent recursive grouping (or equivalent procedures)
on much smaller subsets of variables. This results in more accurate and
efficient learning of latent trees. We also present regularized versions of our
algorithms that learn latent tree approximations of arbitrary distributions. We
compare the proposed algorithms to other methods by performing extensive
numerical experiments on various latent tree graphical models such as hidden
Markov models and star graphs. In addition, we demonstrate the applicability of
our methods on real-world datasets by modeling the dependency structure of
monthly stock returns in the S&P index and of the words in the 20 newsgroups
dataset.
| [
{
"version": "v1",
"created": "Tue, 14 Sep 2010 17:37:44 GMT"
}
] | 2010-09-15T00:00:00 | [
[
"Choi",
"Myung Jin",
""
],
[
"Tan",
"Vincent Y. F.",
""
],
[
"Anandkumar",
"Animashree",
""
],
[
"Willsky",
"Alan S.",
""
]
] | TITLE: Learning Latent Tree Graphical Models
ABSTRACT: We study the problem of learning a latent tree graphical model where samples
are available only from a subset of variables. We propose two consistent and
computationally efficient algorithms for learning minimal latent trees, that
is, trees without any redundant hidden nodes. Unlike many existing methods, the
observed nodes (or variables) are not constrained to be leaf nodes. Our first
algorithm, recursive grouping, builds the latent tree recursively by
identifying sibling groups using so-called information distances. One of the
main contributions of this work is our second algorithm, which we refer to as
CLGrouping. CLGrouping starts with a pre-processing procedure in which a tree
over the observed variables is constructed. This global step groups the
observed nodes that are likely to be close to each other in the true latent
tree, thereby guiding subsequent recursive grouping (or equivalent procedures)
on much smaller subsets of variables. This results in more accurate and
efficient learning of latent trees. We also present regularized versions of our
algorithms that learn latent tree approximations of arbitrary distributions. We
compare the proposed algorithms to other methods by performing extensive
numerical experiments on various latent tree graphical models such as hidden
Markov models and star graphs. In addition, we demonstrate the applicability of
our methods on real-world datasets by modeling the dependency structure of
monthly stock returns in the S&P index and of the words in the 20 newsgroups
dataset.
| no_new_dataset | 0.948585 |
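The pre-processing step of CLGrouping described above builds a tree over the observed variables from pairwise information distances. A minimal sketch under a Gaussian assumption, where the information distance is d_ij = -log |rho_ij|, follows; the MST-based construction shown here is one standard choice (the Chow-Liu tree), not necessarily the paper's exact procedure.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def observed_variable_tree(samples):
    """Sketch of a pre-processing tree over observed variables, assuming
    jointly Gaussian variables with information distance -log |rho_ij|.
    A minimum spanning tree on these distances is the Chow-Liu tree.
    `samples`: array of shape (n_samples, n_variables)."""
    rho = np.corrcoef(samples, rowvar=False)
    dist = -np.log(np.clip(np.abs(rho), 1e-12, 1.0))  # information distances
    np.fill_diagonal(dist, 0.0)
    mst = minimum_spanning_tree(dist)                 # sparse (p x p) tree
    return list(zip(*mst.nonzero()))                  # tree edges over observed nodes
```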
1009.0861 | Ameet Talwalkar | Mehryar Mohri, Ameet Talwalkar | On the Estimation of Coherence | null | null | null | null | stat.ML cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Low-rank matrix approximations are often used to help scale standard machine
learning algorithms to large-scale problems. Recently, matrix coherence has
been used to characterize the ability to extract global information from a
subset of matrix entries in the context of these low-rank approximations and
other sampling-based algorithms, e.g., matrix completion, robust PCA. Since
coherence is defined in terms of the singular vectors of a matrix and is
expensive to compute, the practical significance of these results largely
hinges on the following question: Can we efficiently and accurately estimate
the coherence of a matrix? In this paper we address this question. We propose a
novel algorithm for estimating coherence from a small number of columns,
formally analyze its behavior, and derive a new coherence-based matrix
approximation bound based on this analysis. We then present extensive
experimental results on synthetic and real datasets that corroborate our
worst-case theoretical analysis, yet provide strong support for the use of our
proposed algorithm whenever low-rank approximation is being considered. Our
algorithm efficiently and accurately estimates matrix coherence across a wide
range of datasets, and these coherence estimates are excellent predictors of
the effectiveness of sampling-based matrix approximation on a case-by-case
basis.
| [
{
"version": "v1",
"created": "Sat, 4 Sep 2010 19:18:54 GMT"
}
] | 2010-09-07T00:00:00 | [
[
"Mohri",
"Mehryar",
""
],
[
"Talwalkar",
"Ameet",
""
]
] | TITLE: On the Estimation of Coherence
ABSTRACT: Low-rank matrix approximations are often used to help scale standard machine
learning algorithms to large-scale problems. Recently, matrix coherence has
been used to characterize the ability to extract global information from a
subset of matrix entries in the context of these low-rank approximations and
other sampling-based algorithms, e.g., matrix completion, robust PCA. Since
coherence is defined in terms of the singular vectors of a matrix and is
expensive to compute, the practical significance of these results largely
hinges on the following question: Can we efficiently and accurately estimate
the coherence of a matrix? In this paper we address this question. We propose a
novel algorithm for estimating coherence from a small number of columns,
formally analyze its behavior, and derive a new coherence-based matrix
approximation bound based on this analysis. We then present extensive
experimental results on synthetic and real datasets that corroborate our
worst-case theoretical analysis, yet provide strong support for the use of our
proposed algorithm whenever low-rank approximation is being considered. Our
algorithm efficiently and accurately estimates matrix coherence across a wide
range of datasets, and these coherence estimates are excellent predictors of
the effectiveness of sampling-based matrix approximation on a case-by-case
basis.
| no_new_dataset | 0.945601 |
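For reference, the quantity being estimated above has a standard closed form: the coherence of the rank-k left singular subspace U is mu(U) = (n/k) max_i ||U_i||^2. The sketch below computes it exactly via an SVD and adds a naive column-sampling estimate; the latter is only an illustrative stand-in for the paper's analyzed estimator.

```python
import numpy as np

def coherence(M, k):
    """mu(U) = (n/k) * max_i ||U_i||^2 for the rank-k left singular
    subspace of M -- the standard definition of the quantity whose
    estimation the paper studies."""
    U, _, _ = np.linalg.svd(M, full_matrices=False)
    lev = np.sum(U[:, :k] ** 2, axis=1)   # leverage scores, one per row
    return (M.shape[0] / k) * lev.max()   # lies in [1, n/k]

def coherence_from_columns(M, k, n_cols, seed=0):
    """Naive column-sampling estimate (illustrative stand-in, not the
    paper's analyzed estimator)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(M.shape[1], size=n_cols, replace=False)
    return coherence(M[:, idx], k)
```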
1009.0384 | Rahmat Widia Sembiring | Rahmat Widia Sembiring, Jasni Mohamad Zain, Abdullah Embong | Clustering high dimensional data using subspace and projected clustering
algorithms | 9 pages, 6 figures | International journal of computer science & information Technology
(IJCSIT) Vol.2, No.4, August 2010, p.162-170 | 10.5121/ijcsit.2010.2414 | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Problem statement: Clustering has a number of techniques that have been
developed in statistics, pattern recognition, data mining, and other fields.
Subspace clustering enumerates clusters of objects in all subspaces of a
dataset. It tends to produce many overlapping clusters. Approach: Subspace
clustering and projected clustering are research areas for clustering in high
dimensional spaces. In this research we experiment with three clustering-oriented
algorithms, PROCLUS, P3C and STATPC. Results: In general, PROCLUS performs
better in terms of computation time and produces the least un-clustered data,
while STATPC outperforms PROCLUS and P3C in the accuracy of
both cluster points and relevant attributes found. Conclusions/Recommendations:
In this study, we analyze in detail the properties of different data clustering
methods.
| [
{
"version": "v1",
"created": "Thu, 2 Sep 2010 10:47:11 GMT"
}
] | 2010-09-03T00:00:00 | [
[
"Sembiring",
"Rahmat Widia",
""
],
[
"Zain",
"Jasni Mohamad",
""
],
[
"Embong",
"Abdullah",
""
]
] | TITLE: Clustering high dimensional data using subspace and projected clustering
algorithms
ABSTRACT: Problem statement: Clustering has a number of techniques that have been
developed in statistics, pattern recognition, data mining, and other fields.
Subspace clustering enumerates clusters of objects in all subspaces of a
dataset. It tends to produce many overlapping clusters. Approach: Subspace
clustering and projected clustering are research areas for clustering in high
dimensional spaces. In this research we experiment with three clustering-oriented
algorithms, PROCLUS, P3C and STATPC. Results: In general, PROCLUS performs
better in terms of computation time and produces the least un-clustered data,
while STATPC outperforms PROCLUS and P3C in the accuracy of
both cluster points and relevant attributes found. Conclusions/Recommendations:
In this study, we analyze in detail the properties of different data clustering
methods.
| no_new_dataset | 0.953492 |
1008.4938 | Randen Patterson | Yoojin Hong, Kyung Dae Ko, Gaurav Bhardwaj, Zhenhai Zhang, Damian B.
van Rossum, and Randen L. Patterson | Towards Solving the Inverse Protein Folding Problem | 22 pages, 11 figures | null | null | null | q-bio.QM cs.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurately assigning folds for divergent protein sequences is a major
obstacle to structural studies and underlies the inverse protein folding
problem. Herein, we outline our theories for fold-recognition in the
"twilight-zone" of sequence similarity (<25% identity). Our analyses
demonstrate that structural sequence profiles built using Position-Specific
Scoring Matrices (PSSMs) significantly outperform multiple popular
homology-modeling algorithms for relating and predicting structures given only
their amino acid sequences. Importantly, structural sequence profiles
reconstitute SCOP fold classifications in control and test datasets. Results
from our experiments suggest that structural sequence profiles can be used to
rapidly annotate protein folds at proteomic scales. We propose that encoding
the entire Protein DataBank (~1070 folds) into structural sequence profiles
would extract interoperable information capable of improving most if not all
methods of structural modeling.
| [
{
"version": "v1",
"created": "Sun, 29 Aug 2010 15:34:02 GMT"
}
] | 2010-08-31T00:00:00 | [
[
"Hong",
"Yoojin",
""
],
[
"Ko",
"Kyung Dae",
""
],
[
"Bhardwaj",
"Gaurav",
""
],
[
"Zhang",
"Zhenhai",
""
],
[
"van Rossum",
"Damian B.",
""
],
[
"Patterson",
"Randen L.",
""
]
] | TITLE: Towards Solving the Inverse Protein Folding Problem
ABSTRACT: Accurately assigning folds for divergent protein sequences is a major
obstacle to structural studies and underlies the inverse protein folding
problem. Herein, we outline our theories for fold-recognition in the
"twilight-zone" of sequence similarity (<25% identity). Our analyses
demonstrate that structural sequence profiles built using Position-Specific
Scoring Matrices (PSSMs) significantly outperform multiple popular
homology-modeling algorithms for relating and predicting structures given only
their amino acid sequences. Importantly, structural sequence profiles
reconstitute SCOP fold classifications in control and test datasets. Results
from our experiments suggest that structural sequence profiles can be used to
rapidly annotate protein folds at proteomic scales. We propose that encoding
the entire Protein DataBank (~1070 folds) into structural sequence profiles
would extract interoperable information capable of improving most if not all
methods of structural modeling.
| no_new_dataset | 0.946151 |
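As context for the structural sequence profiles discussed above, a toy PSSM construction from an alignment (pseudocounted residue frequencies per column, then log-odds against a uniform background) might look like the following; real profile builders such as PSI-BLAST add sequence weighting and gap handling that this sketch omits.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def pssm(aligned_seqs, pseudocount=1.0):
    """Toy PSSM: per-column log-odds of residue frequencies (with
    pseudocounts) against a uniform 1/20 background. Illustrative only."""
    L = len(aligned_seqs[0])
    counts = np.full((L, 20), pseudocount)
    for seq in aligned_seqs:
        for j, res in enumerate(seq):
            if res in AMINO_ACIDS:       # skip gaps and unknown residues
                counts[j, AMINO_ACIDS.index(res)] += 1.0
    freqs = counts / counts.sum(axis=1, keepdims=True)
    return np.log2(freqs * 20.0)         # positive = enriched over background
```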
1008.3629 | Dhouha Grissa | Dhouha Grissa, Sylvie Guillaume and Engelbert Mephu Nguifo | Combining Clustering techniques and Formal Concept Analysis to
characterize Interestingness Measures | 13 pages, 2 figures | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Formal Concept Analysis "FCA" is a data analysis method which enables to
discover hidden knowledge existing in data. A kind of hidden knowledge
extracted from data is association rules. Different quality measures were
reported in the literature to extract only relevant association rules. Given a
dataset, the choice of a good quality measure remains a challenging task for a
user. Given a quality measures evaluation matrix according to semantic
properties, this paper describes how FCA can highlight quality measures with
similar behavior in order to help the user during his choice. The aim of this
article is the discovery of Interestingness Measures "IM" clusters, able to
validate those found due to the hierarchical and partitioning clustering
methods "AHC" and "k-means". Then, based on the theoretical study of sixty one
interestingness measures according to nineteen properties, proposed in a recent
study, "FCA" describes several groups of measures.
| [
{
"version": "v1",
"created": "Sat, 21 Aug 2010 13:23:23 GMT"
}
] | 2010-08-24T00:00:00 | [
[
"Grissa",
"Dhouha",
""
],
[
"Guillaume",
"Sylvie",
""
],
[
"Nguifo",
"Engelbert Mephu",
""
]
] | TITLE: Combining Clustering techniques and Formal Concept Analysis to
characterize Interestingness Measures
ABSTRACT: Formal Concept Analysis "FCA" is a data analysis method which enables to
discover hidden knowledge existing in data. A kind of hidden knowledge
extracted from data is association rules. Different quality measures were
reported in the literature to extract only relevant association rules. Given a
dataset, the choice of a good quality measure remains a challenging task for a
user. Given a quality measures evaluation matrix according to semantic
properties, this paper describes how FCA can highlight quality measures with
similar behavior in order to help the user during his choice. The aim of this
article is the discovery of Interestingness Measures "IM" clusters, able to
validate those found by the hierarchical and partitioning clustering
methods "AHC" and "k-means". Then, based on the theoretical study of sixty-one
interestingness measures according to nineteen properties, proposed in a recent
study, "FCA" describes several groups of measures.
| no_new_dataset | 0.947381 |
1007.0437 | Adrian Melott | Adrian L. Melott (University of Kansas) and Richard K. Bambach
(Smithsonian Institution Museum of Natural History) | Nemesis Reconsidered | 10 pages, 2 figures, accepted for publication in Monthly Notices of
the Royal Astronomical Society | Monthly Notices of the Royal Astronomical Society Letters 407,
L99-L102 (2010) | 10.1111/j.1745-3933.2010.00913.x | null | astro-ph.SR astro-ph.EP astro-ph.GA physics.geo-ph q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The hypothesis of a companion object (Nemesis) orbiting the Sun was motivated
by the claim of a terrestrial extinction periodicity, thought to be mediated by
comet showers. The orbit of a distant companion to the Sun is expected to be
perturbed by the Galactic tidal field and encounters with passing stars, which
will induce variation in the period. We examine the evidence for the previously
proposed periodicity, using two modern, greatly improved paleontological
datasets of fossil biodiversity. We find that there is a narrow peak at 27 My
in the cross-spectrum of extinction intensity time series between these
independent datasets. This periodicity extends over a time period nearly twice
that for which it was originally noted. An excess of extinction events is
associated with this periodicity at 99% confidence. In this sense we confirm
the originally noted feature in the time series for extinction. However, we
find that it displays extremely regular timing for about 0.5 Gy. The regularity
of the timing compared with earlier calculations of orbital perturbation would
seem to exclude the Nemesis hypothesis as a causal factor.
| [
{
"version": "v1",
"created": "Fri, 2 Jul 2010 19:59:47 GMT"
}
] | 2010-08-20T00:00:00 | [
[
"Melott",
"Adrian L.",
"",
"University of Kansas"
],
[
"Bambach",
"Richard K.",
"",
"Smithsonian Institution Museum of Natural History"
]
] | TITLE: Nemesis Reconsidered
ABSTRACT: The hypothesis of a companion object (Nemesis) orbiting the Sun was motivated
by the claim of a terrestrial extinction periodicity, thought to be mediated by
comet showers. The orbit of a distant companion to the Sun is expected to be
perturbed by the Galactic tidal field and encounters with passing stars, which
will induce variation in the period. We examine the evidence for the previously
proposed periodicity, using two modern, greatly improved paleontological
datasets of fossil biodiversity. We find that there is a narrow peak at 27 My
in the cross-spectrum of extinction intensity time series between these
independent datasets. This periodicity extends over a time period nearly twice
that for which it was originally noted. An excess of extinction events is
associated with this periodicity at 99% confidence. In this sense we confirm
the originally noted feature in the time series for extinction. However, we
find that it displays extremely regular timing for about 0.5 Gy. The regularity
of the timing compared with earlier calculations of orbital perturbation would
seem to exclude the Nemesis hypothesis as a causal factor.
| no_new_dataset | 0.942718 |
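The cross-spectrum analysis mentioned above can be illustrated in a few lines of numpy: given two equally sampled extinction-intensity series, the product of one FFT with the conjugate of the other highlights periods present in both. Detrending, tapering, and the 99% significance test used in the actual analysis are omitted here.

```python
import numpy as np

def dominant_shared_period(x, y, dt):
    """Cross-spectrum sketch for two equally sampled series of the same
    length (spacing dt, e.g. in Myr). Preprocessing and significance
    testing are omitted."""
    cross = np.fft.rfft(x - x.mean()) * np.conj(np.fft.rfft(y - y.mean()))
    power = np.abs(cross)
    freqs = np.fft.rfftfreq(len(x), d=dt)
    best = power[1:].argmax() + 1        # skip the zero-frequency bin
    return 1.0 / freqs[best]             # period with the strongest shared power
```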
1008.2877 | Dr. Wolfgang A. Rolke | Wolfgang Rolke and Angel Lopez | A Test for Equality of Distributions in High Dimensions | 12 pages, 4 figures | null | null | null | physics.data-an astro-ph.IM hep-ex stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a method which tests whether or not two datasets (one of which
could be Monte Carlo generated) might come from the same distribution. Our
method works in arbitrarily high dimensions.
| [
{
"version": "v1",
"created": "Tue, 17 Aug 2010 12:27:16 GMT"
}
] | 2010-08-18T00:00:00 | [
[
"Rolke",
"Wolfgang",
""
],
[
"Lopez",
"Angel",
""
]
] | TITLE: A Test for Equality of Distributions in High Dimensions
ABSTRACT: We present a method which tests whether or not two datasets (one of which
could be Monte Carlo generated) might come from the same distribution. Our
method works in arbitrarily high dimensions.
| no_new_dataset | 0.955319 |
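The abstract above does not specify the test statistic. As a generic stand-in that likewise works in arbitrary dimension, here is a permutation test built on the energy distance; this is explicitly not claimed to be the paper's method.

```python
import numpy as np
from scipy.spatial.distance import cdist

def energy_stat(a, b):
    """Energy-distance two-sample statistic (Szekely-Rizzo style)."""
    return 2 * cdist(a, b).mean() - cdist(a, a).mean() - cdist(b, b).mean()

def permutation_pvalue(a, b, n_perm=999, seed=0):
    """Generic permutation test for equality of distributions in any
    dimension -- one standard choice, not necessarily the paper's."""
    rng = np.random.default_rng(seed)
    obs = energy_stat(a, b)
    pooled = np.vstack([a, b])
    n = len(a)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        if energy_stat(pooled[perm[:n]], pooled[perm[n:]]) >= obs:
            count += 1
    return (count + 1) / (n_perm + 1)
```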
1008.2574 | Jinyoung Han | Jinyoung Han, Taejoong Chung, Seungbae Kim, Hyun-chul Kim, Ted
"Taekyoung" Kwon, Yanghee Choi | An Empirical Study on Content Bundling in BitTorrent Swarming System | null | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the tremendous success of BitTorrent, its swarming system suffers
from a fundamental limitation: lower or no availability of unpopular contents.
Recently, Menasche et al. have shown that bundling is a promising solution to
mitigate this availability problem; it improves the availability and reduces
download times for unpopular contents by combining multiple files into a single
swarm. There also have been studies on bundling strategies and performance
issues in bundled swarms. In spite of the recent surge of interest in the
benefits of and strategies for bundling, there are still little empirical
grounding for understanding, describing, and modeling it. This is the first
empirical study that measures and analyzes how prevalent contents bundling is
in BitTorrent and how peers access the bundled contents, in comparison to the
other non-bundled (i.e., single-filed) ones. To our surprise, we found that
around 70% of BitTorrent swarms contain multiple files, which indicates that
bundling has become widespread for contents sharing. We also show that the
amount of bytes shared in bundled swarms is estimated to be around 85% out of
all the BitTorrent contents logged in our datasets. Inspired from our findings,
we raise and discuss three important research questions in the field of file
sharing systems as well as future contents-oriented networking: i) bundling
strategies, ii) bundling-aware sharing systems in BitTorrent, and iii)
implications on content-oriented networking.
| [
{
"version": "v1",
"created": "Mon, 16 Aug 2010 05:25:19 GMT"
}
] | 2010-08-17T00:00:00 | [
[
"Han",
"Jinyoung",
""
],
[
"Chung",
"Taejoong",
""
],
[
"Kim",
"Seungbae",
""
],
[
"Kim",
"Hyun-chul",
""
],
[
"Kwon",
"Ted \"Taekyoung\"",
""
],
[
"Choi",
"Yanghee",
""
]
] | TITLE: An Empirical Study on Content Bundling in BitTorrent Swarming System
ABSTRACT: Despite the tremendous success of BitTorrent, its swarming system suffers
from a fundamental limitation: lower or no availability of unpopular contents.
Recently, Menasche et al. have shown that bundling is a promising solution to
mitigate this availability problem; it improves the availability and reduces
download times for unpopular contents by combining multiple files into a single
swarm. There also have been studies on bundling strategies and performance
issues in bundled swarms. In spite of the recent surge of interest in the
benefits of and strategies for bundling, there is still little empirical
grounding for understanding, describing, and modeling it. This is the first
empirical study that measures and analyzes how prevalent contents bundling is
in BitTorrent and how peers access the bundled contents, in comparison to the
other non-bundled (i.e., single-filed) ones. To our surprise, we found that
around 70% of BitTorrent swarms contain multiple files, which indicates that
bundling has become widespread for contents sharing. We also show that the
amount of bytes shared in bundled swarms is estimated to be around 85% out of
all the BitTorrent contents logged in our datasets. Inspired from our findings,
we raise and discuss three important research questions in the field of file
sharing systems as well as future contents-oriented networking: i) bundling
strategies, ii) bundling-aware sharing systems in BitTorrent, and iii)
implications on content-oriented networking.
| no_new_dataset | 0.930395 |
1008.2626 | Jan Van den Bussche | Eveline Hoekx and Jan Van den Bussche | Mining tree-query associations in graphs | Full version of two earlier conference papers presented at KDD 2005
and ICDM 2006 | null | null | null | cs.DB cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | New applications of data mining, such as in biology, bioinformatics, or
sociology, are faced with large datasets structured as graphs. We introduce a
novel class of tree-shaped patterns called tree queries, and present algorithms
for mining tree queries and tree-query associations in a large data graph. Novel
about our class of patterns is that they can contain constants, and can contain
existential nodes which are not counted when determining the number of
occurrences of the pattern in the data graph. Our algorithms have a number of
provable optimality properties, which are based on the theory of conjunctive
database queries. We propose a practical, database-oriented implementation in
SQL, and show that the approach works in practice through experiments on data
about food webs, protein interactions, and citation analysis.
| [
{
"version": "v1",
"created": "Mon, 16 Aug 2010 11:35:59 GMT"
}
] | 2010-08-17T00:00:00 | [
[
"Hoekx",
"Eveline",
""
],
[
"Bussche",
"Jan Van den",
""
]
] | TITLE: Mining tree-query associations in graphs
ABSTRACT: New applications of data mining, such as in biology, bioinformatics, or
sociology, are faced with large datasets structured as graphs. We introduce a
novel class of tree-shaped patterns called tree queries, and present algorithms
for mining tree queries and tree-query associations in a large data graph. Novel
about our class of patterns is that they can contain constants, and can contain
existential nodes which are not counted when determining the number of
occurrences of the pattern in the data graph. Our algorithms have a number of
provable optimality properties, which are based on the theory of conjunctive
database queries. We propose a practical, database-oriented implementation in
SQL, and show that the approach works in practice through experiments on data
about food webs, protein interactions, and citation analysis.
| no_new_dataset | 0.945551 |
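To make the SQL-oriented implementation above concrete, the following hypothetical sqlite3 snippet evaluates one tiny tree query over an edge-table data graph: matches are counted over the non-existential node X only, mirroring the semantics described, though the schema and query here are ours, not the paper's.

```python
import sqlite3

# Hypothetical schema: the data graph as an edge table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE edge (src TEXT, dst TEXT)")
con.executemany("INSERT INTO edge VALUES (?, ?)",
                [("a", "c"), ("a", "d"), ("b", "c"), ("e", "f")])

# Tree query: X has an edge to the constant 'c' and an edge to some
# existential node Z != 'c'. Z is existential, so occurrences are
# counted over bindings of X alone.
rows = con.execute("""
    SELECT DISTINCT e1.src
    FROM edge e1
    WHERE e1.dst = 'c'
      AND EXISTS (SELECT 1 FROM edge e2
                  WHERE e2.src = e1.src AND e2.dst <> 'c')
""").fetchall()
print(rows)   # [('a',)] -- 'b' lacks the existential branch
```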
1008.1253 | Wojciech Galuba | Daniel M. Romero, Wojciech Galuba, Sitaram Asur and Bernardo A.
Huberman | Influence and Passivity in Social Media | null | null | null | null | cs.CY physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ever-increasing amount of information flowing through Social Media forces
the members of these networks to compete for attention and influence by relying
on other people to spread their message. A large study of information
propagation within Twitter reveals that the majority of users act as passive
information consumers and do not forward the content to the network. Therefore,
in order for individuals to become influential they must not only obtain
attention and thus be popular, but also overcome user passivity. We propose an
algorithm that determines the influence and passivity of users based on their
information forwarding activity. An evaluation performed with a 2.5 million
user dataset shows that our influence measure is a good predictor of URL
clicks, outperforming several other measures that do not explicitly take user
passivity into account. We also explicitly demonstrate that high popularity
does not necessarily imply high influence and vice-versa.
| [
{
"version": "v1",
"created": "Fri, 6 Aug 2010 18:54:10 GMT"
}
] | 2010-08-09T00:00:00 | [
[
"Romero",
"Daniel M.",
""
],
[
"Galuba",
"Wojciech",
""
],
[
"Asur",
"Sitaram",
""
],
[
"Huberman",
"Bernardo A.",
""
]
] | TITLE: Influence and Passivity in Social Media
ABSTRACT: The ever-increasing amount of information flowing through Social Media forces
the members of these networks to compete for attention and influence by relying
on other people to spread their message. A large study of information
propagation within Twitter reveals that the majority of users act as passive
information consumers and do not forward the content to the network. Therefore,
in order for individuals to become influential they must not only obtain
attention and thus be popular, but also overcome user passivity. We propose an
algorithm that determines the influence and passivity of users based on their
information forwarding activity. An evaluation performed with a 2.5 million
user dataset shows that our influence measure is a good predictor of URL
clicks, outperforming several other measures that do not explicitly take user
passivity into account. We also explicitly demonstrate that high popularity
does not necessarily imply high influence and vice-versa.
| no_new_dataset | 0.940463 |
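The influence/passivity computation above can be pictured as a HITS-style mutual-reinforcement iteration. The sketch below is in that spirit only; the weight matrix and update rules are our assumptions, not the paper's exact algorithm.

```python
import numpy as np

def influence_passivity(W, n_iter=100):
    """HITS-style mutual-reinforcement sketch (illustrative only).
    W[i, j] is an assumed forwarding rate: how readily user j forwards
    content produced by user i. Influence accrues from being forwarded
    by passive users; passivity accrues from rejecting influential ones."""
    n = W.shape[0]
    I = np.full(n, 1.0 / n)   # influence scores
    P = np.full(n, 1.0 / n)   # passivity scores
    R = 1.0 - W               # assumed "rejection" rates
    for _ in range(n_iter):
        I = W @ P             # influential: forwarded even by the passive
        P = R.T @ I           # passive: rejects content of the influential
        I /= I.sum()
        P /= P.sum()
    return I, P
```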
1007.3564 | Dacheng Tao | Tianyi Zhou, Dacheng Tao, Xindong Wu | Manifold Elastic Net: A Unified Framework for Sparse Dimension Reduction | 33 pages, 12 figures | Journal of Data Mining and Knowledge Discovery, 2010 | 10.1007/s10618-010-0182-x | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is difficult to find the optimal sparse solution of a manifold learning
based dimensionality reduction algorithm. The lasso or the elastic net
penalized manifold learning based dimensionality reduction is not directly a
lasso penalized least square problem and thus the least angle regression (LARS)
(Efron et al. \cite{LARS}), one of the most popular algorithms in sparse
learning, cannot be applied. Therefore, most current approaches take indirect
ways or have strict settings, which can be inconvenient for applications. In
this paper, we propose the manifold elastic net, or MEN for short. MEN
incorporates the merits of both the manifold learning based dimensionality
reduction and the sparse learning based dimensionality reduction. By using a
series of equivalent transformations, we show MEN is equivalent to the lasso
penalized least square problem and thus LARS is adopted to obtain the optimal
sparse solution of MEN. In particular, MEN has the following advantages for
subsequent classification: 1) the local geometry of samples is well preserved
for low dimensional data representation, 2) both the margin maximization and
the classification error minimization are considered for sparse projection
calculation, 3) the projection matrix of MEN improves the parsimony in
computation, 4) the elastic net penalty reduces the over-fitting problem, and
5) the projection matrix of MEN can be interpreted psychologically and
physiologically. Experimental evidence on face recognition over various popular
datasets suggests that MEN is superior to top level dimensionality reduction
algorithms.
| [
{
"version": "v1",
"created": "Wed, 21 Jul 2010 05:50:47 GMT"
},
{
"version": "v2",
"created": "Sat, 24 Jul 2010 03:48:30 GMT"
},
{
"version": "v3",
"created": "Tue, 27 Jul 2010 03:01:09 GMT"
}
] | 2010-07-28T00:00:00 | [
[
"Zhou",
"Tianyi",
""
],
[
"Tao",
"Dacheng",
""
],
[
"Wu",
"Xindong",
""
]
] | TITLE: Manifold Elastic Net: A Unified Framework for Sparse Dimension Reduction
ABSTRACT: It is difficult to find the optimal sparse solution of a manifold learning
based dimensionality reduction algorithm. The lasso or the elastic net
penalized manifold learning based dimensionality reduction is not directly a
lasso penalized least square problem and thus the least angle regression (LARS)
(Efron et al. \cite{LARS}), one of the most popular algorithms in sparse
learning, cannot be applied. Therefore, most current approaches take indirect
ways or have strict settings, which can be inconvenient for applications. In
this paper, we propose the manifold elastic net, or MEN for short. MEN
incorporates the merits of both the manifold learning based dimensionality
reduction and the sparse learning based dimensionality reduction. By using a
series of equivalent transformations, we show MEN is equivalent to the lasso
penalized least square problem and thus LARS is adopted to obtain the optimal
sparse solution of MEN. In particular, MEN has the following advantages for
subsequent classification: 1) the local geometry of samples is well preserved
for low dimensional data representation, 2) both the margin maximization and
the classification error minimization are considered for sparse projection
calculation, 3) the projection matrix of MEN improves the parsimony in
computation, 4) the elastic net penalty reduces the over-fitting problem, and
5) the projection matrix of MEN can be interpreted psychologically and
physiologically. Experimental evidence on face recognition over various popular
datasets suggests that MEN is superior to top level dimensionality reduction
algorithms.
| no_new_dataset | 0.954351 |
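The key reduction above, that MEN is equivalent to a lasso-penalized least squares problem, means the whole solution path is obtainable with LARS. A sketch using scikit-learn's lars_path, on synthetic stand-ins for MEN's transformed design matrix and response:

```python
import numpy as np
from sklearn.linear_model import lars_path

# Synthetic stand-ins for MEN's transformed design matrix and response.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
y = X[:, :3].sum(axis=1) + 0.1 * rng.standard_normal(100)

# LARS traces the entire lasso regularization path at roughly the cost
# of a single least-squares fit.
alphas, active, coefs = lars_path(X, y, method="lasso")
print(active[:5])    # order in which features enter the active set
print(coefs.shape)   # (n_features, n_path_points): the sparse path
```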
1001.1122 | Alexander Gorban | A. N. Gorban, A. Zinovyev | Principal manifolds and graphs in practice: from molecular biology to
dynamical systems | 12 pages, 9 figures | International Journal of Neural Systems, Vol. 20, No. 3 (2010)
219-232 | 10.1142/S0129065710002383 | null | cs.NE cs.AI | http://creativecommons.org/licenses/by/3.0/ | We present several applications of non-linear data modeling, using principal
manifolds and principal graphs constructed using the metaphor of elasticity
(elastic principal graph approach). These approaches are generalizations of the
Kohonen's self-organizing maps, a class of artificial neural networks. On
several examples we show advantages of using non-linear objects for data
approximation in comparison to the linear ones. We propose four numerical
criteria for comparing linear and non-linear mappings of datasets into the
spaces of lower dimension. The examples are taken from comparative political
science, from analysis of high-throughput data in molecular biology, from
analysis of dynamical systems.
| [
{
"version": "v1",
"created": "Thu, 7 Jan 2010 17:46:17 GMT"
},
{
"version": "v2",
"created": "Sun, 25 Jul 2010 19:30:37 GMT"
}
] | 2010-07-27T00:00:00 | [
[
"Gorban",
"A. N.",
""
],
[
"Zinovyev",
"A.",
""
]
] | TITLE: Principal manifolds and graphs in practice: from molecular biology to
dynamical systems
ABSTRACT: We present several applications of non-linear data modeling, using principal
manifolds and principal graphs constructed using the metaphor of elasticity
(elastic principal graph approach). These approaches are generalizations of the
Kohonen's self-organizing maps, a class of artificial neural networks. On
several examples we show advantages of using non-linear objects for data
approximation in comparison to the linear ones. We propose four numerical
criteria for comparing linear and non-linear mappings of datasets into the
spaces of lower dimension. The examples are taken from comparative political
science, from analysis of high-throughput data in molecular biology, from
analysis of dynamical systems.
| no_new_dataset | 0.949809 |
1007.0824 | Remi Flamary | R\'emi Flamary (LITIS), Benjamin Labb\'e (LITIS), Alain Rakotomamonjy
(LITIS) | Filtrage vaste marge pour l'\'etiquetage s\'equentiel \`a noyaux de
signaux | null | Conf\'erence Francophone sur l'Apprentissage Automatique, Clermont
Ferrand : France (2010) | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address in this paper the problem of multi-channel signal sequence
labeling. In particular, we consider the problem where the signals are
contaminated by noise or may present some dephasing with respect to their
labels. For that, we propose to jointly learn a SVM sample classifier with a
temporal filtering of the channels. This will lead to a large margin filtering
that is adapted to the specificity of each channel (noise and time-lag). We
derive algorithms to solve the optimization problem and we discuss different
filter regularizations for automated scaling or selection of channels. Our
approach is tested on a non-linear toy example and on a BCI dataset. Results
show that the classification performance on these problems can be improved by
learning a large margin filtering.
| [
{
"version": "v1",
"created": "Tue, 6 Jul 2010 07:47:00 GMT"
}
] | 2010-07-26T00:00:00 | [
[
"Flamary",
"Rémi",
"",
"LITIS"
],
[
"Labbé",
"Benjamin",
"",
"LITIS"
],
[
"Rakotomamonjy",
"Alain",
"",
"LITIS"
]
] | TITLE: Filtrage vaste marge pour l'\'etiquetage s\'equentiel \`a noyaux de
signaux
ABSTRACT: We address in this paper the problem of multi-channel signal sequence
labeling. In particular, we consider the problem where the signals are
contaminated by noise or may present some dephasing with respect to their
labels. For that, we propose to jointly learn a SVM sample classifier with a
temporal filtering of the channels. This will lead to a large margin filtering
that is adapted to the specificity of each channel (noise and time-lag). We
derive algorithms to solve the optimization problem and we discuss different
filter regularizations for automated scaling or selection of channels. Our
approach is tested on a non-linear toy example and on a BCI dataset. Results
show that the classification performance on these problems can be improved by
learning a large margin filtering.
| no_new_dataset | 0.950041 |
1003.0470 | Krishnakumar Balasubramanian | Krishnakumar Balasubramanian, Pinar Donmez, Guy Lebanon | Unsupervised Supervised Learning II: Training Margin Based Classifiers
without Labels | 22 pages, 43 figures | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many popular linear classifiers, such as logistic regression, boosting, or
SVM, are trained by optimizing a margin-based risk function. Traditionally,
these risk functions are computed based on a labeled dataset. We develop a
novel technique for estimating such risks using only unlabeled data and the
marginal label distribution. We prove that the proposed risk estimator is
consistent on high-dimensional datasets and demonstrate it on synthetic and
real-world data. In particular, we show how the estimate is used for evaluating
classifiers in transfer learning, and for training classifiers with no labeled
data whatsoever.
| [
{
"version": "v1",
"created": "Mon, 1 Mar 2010 22:32:18 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Jul 2010 21:19:35 GMT"
}
] | 2010-07-23T00:00:00 | [
[
"Balasubramanian",
"Krishnakumar",
""
],
[
"Donmez",
"Pinar",
""
],
[
"Lebanon",
"Guy",
""
]
] | TITLE: Unsupervised Supervised Learning II: Training Margin Based Classifiers
without Labels
ABSTRACT: Many popular linear classifiers, such as logistic regression, boosting, or
SVM, are trained by optimizing a margin-based risk function. Traditionally,
these risk functions are computed based on a labeled dataset. We develop a
novel technique for estimating such risks using only unlabeled data and the
marginal label distribution. We prove that the proposed risk estimator is
consistent on high-dimensional datasets and demonstrate it on synthetic and
real-world data. In particular, we show how the estimate is used for evaluating
classifiers in transfer learning, and for training classifiers with no labeled
data whatsoever.
| no_new_dataset | 0.949995 |
1007.3553 | Francois Meyer | Kye M. Taylor, Michael J. Procopio, Christopher J. Young, and Francois
G. Meyer | Exploring the Manifold of Seismic Waves: Application to the Estimation
of Arrival-Times | 21 pages, 13 figures | null | null | null | physics.data-an nlin.CD physics.geo-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new method to analyze seismic time series and estimate the
arrival-times of seismic waves. Our approach combines two ingredients: the
time series are first lifted into a high-dimensional space using time-delay
embedding; the resulting phase space is then parametrized using a nonlinear
method based on the eigenvectors of the graph Laplacian. We validate our
approach using a dataset of seismic events that occurred in Idaho, Montana,
Wyoming, and Utah, between 2005 and 2006. Our approach outperforms methods
based on singular-spectrum analysis, wavelet analysis, and STA/LTA.
| [
{
"version": "v1",
"created": "Wed, 21 Jul 2010 02:46:30 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Jul 2010 00:39:21 GMT"
}
] | 2010-07-23T00:00:00 | [
[
"Taylor",
"Kye M.",
""
],
[
"Procopio",
"Michael J.",
""
],
[
"Young",
"Christopher J.",
""
],
[
"Meyer",
"Francois G.",
""
]
] | TITLE: Exploring the Manifold of Seismic Waves: Application to the Estimation
of Arrival-Times
ABSTRACT: We propose a new method to analyze seismic time series and estimate the
arrival-times of seismic waves. Our approach combines two ingredients: the
time series are first lifted into a high-dimensional space using time-delay
embedding; the resulting phase space is then parametrized using a nonlinear
method based on the eigenvectors of the graph Laplacian. We validate our
approach using a dataset of seismic events that occurred in Idaho, Montana,
Wyoming, and Utah, between 2005 and 2006. Our approach outperforms methods
based on singular-spectrum analysis, wavelet analysis, and STA/LTA.
| new_dataset | 0.9462 |
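The two ingredients named above, time-delay embedding followed by a graph-Laplacian parametrization, can be sketched directly in numpy/scipy. The Gaussian-kernel affinity and its median-heuristic bandwidth are our assumptions; the paper's graph construction may differ.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import laplacian
from scipy.linalg import eigh

def laplacian_embedding(x, dim=10, n_eig=2):
    """Sketch: (1) lift the scalar series into R^dim by time-delay
    embedding; (2) parametrize the resulting point cloud with low
    eigenvectors of a graph Laplacian. Affinity choice is assumed."""
    emb = np.lib.stride_tricks.sliding_window_view(x, dim)  # (N-dim+1, dim)
    d2 = squareform(pdist(emb, "sqeuclidean"))
    W = np.exp(-d2 / np.median(d2))      # dense affinity graph (fine for a sketch)
    L = laplacian(W, normed=True)
    vals, vecs = eigh(L)                 # ascending eigenvalues
    return vecs[:, 1:1 + n_eig]          # skip the near-constant first mode
```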
1007.3680 | Alain Barrat | Ciro Cattuto, Wouter Van den Broeck, Alain Barrat, Vittoria Colizza,
Jean-Fran\c{c}ois Pinton, Alessandro Vespignani | Dynamics of person-to-person interactions from distributed RFID sensor
networks | see also http://www.sociopatterns.org | PLoS ONE 5(7): e11596 (2010) | 10.1371/journal.pone.0011596 | null | physics.soc-ph cond-mat.stat-mech cs.HC q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Digital networks, mobile devices, and the possibility of mining the
ever-increasing amount of digital traces that we leave behind in our daily
activities are changing the way we can approach the study of human and social
interactions. Large-scale datasets, however, are mostly available for
collective and statistical behaviors, at coarse granularities, while
high-resolution data on person-to-person interactions are generally limited to
relatively small groups of individuals. Here we present a scalable experimental
framework for gathering real-time data resolving face-to-face social
interactions with tunable spatial and temporal granularities. We use active
Radio Frequency Identification (RFID) devices that assess mutual proximity in a
distributed fashion by exchanging low-power radio packets. We analyze the
dynamics of person-to-person interaction networks obtained in three
high-resolution experiments carried out at different orders of magnitude in
community size. The data sets exhibit common statistical properties and lack of
a characteristic time scale from 20 seconds to several hours. The association
between the number of connections and their duration shows an interesting
super-linear behavior, which indicates the possibility of defining
super-connectors both in the number and intensity of connections. Taking
advantage of scalability and resolution, this experimental framework allows the
monitoring of social interactions, uncovering similarities in the way
individuals interact in different contexts, and identifying patterns of
super-connector behavior in the community. These results could impact our
understanding of all phenomena driven by face-to-face interactions, such as the
spreading of transmissible infectious diseases and information.
| [
{
"version": "v1",
"created": "Wed, 21 Jul 2010 15:35:18 GMT"
}
] | 2010-07-22T00:00:00 | [
[
"Cattuto",
"Ciro",
""
],
[
"Broeck",
"Wouter Van den",
""
],
[
"Barrat",
"Alain",
""
],
[
"Colizza",
"Vittoria",
""
],
[
"Pinton",
"Jean-François",
""
],
[
"Vespignani",
"Alessandro",
""
]
] | TITLE: Dynamics of person-to-person interactions from distributed RFID sensor
networks
ABSTRACT: Digital networks, mobile devices, and the possibility of mining the
ever-increasing amount of digital traces that we leave behind in our daily
activities are changing the way we can approach the study of human and social
interactions. Large-scale datasets, however, are mostly available for
collective and statistical behaviors, at coarse granularities, while
high-resolution data on person-to-person interactions are generally limited to
relatively small groups of individuals. Here we present a scalable experimental
framework for gathering real-time data resolving face-to-face social
interactions with tunable spatial and temporal granularities. We use active
Radio Frequency Identification (RFID) devices that assess mutual proximity in a
distributed fashion by exchanging low-power radio packets. We analyze the
dynamics of person-to-person interaction networks obtained in three
high-resolution experiments carried out at different orders of magnitude in
community size. The data sets exhibit common statistical properties and lack of
a characteristic time scale from 20 seconds to several hours. The association
between the number of connections and their duration shows an interesting
super-linear behavior, which indicates the possibility of defining
super-connectors both in the number and intensity of connections. Taking
advantage of scalability and resolution, this experimental framework allows the
monitoring of social interactions, uncovering similarities in the way
individuals interact in different contexts, and identifying patterns of
super-connector behavior in the community. These results could impact our
understanding of all phenomena driven by face-to-face interactions, such as the
spreading of transmissible infectious diseases and information.
| no_new_dataset | 0.943243 |
1007.2958 | Hoang Trinh | Hoang Trinh | A Machine Learning Approach to Recovery of Scene Geometry from Images | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recovering the 3D structure of the scene from images yields useful
information for tasks such as shape and scene recognition, object detection, or
motion planning and object grasping in robotics. In this thesis, we introduce a
general machine learning approach called unsupervised CRF learning based on
maximizing the conditional likelihood. We apply our approach to computer vision
systems that recover the 3-D scene geometry from images. We focus on recovering
3D geometry from single images, stereo pairs and video sequences. Building
these systems requires algorithms for doing inference as well as learning the
parameters of conditional Markov random fields (MRF). Our system is trained
unsupervisedly without using ground-truth labeled data. We employ a
slanted-plane stereo vision model in which we use a fixed over-segmentation to
segment the left image into coherent regions called superpixels, then assign a
disparity plane for each superpixel. Plane parameters are estimated by solving
an MRF labelling problem, through minimizing an energy function. We demonstrate
the use of our unsupervised CRF learning algorithm for a parameterized
slanted-plane stereo vision model involving shape from texture cues. Our stereo
model with texture cues, only by unsupervised training, outperforms the results
in related work on the same stereo dataset. In this thesis, we also formulate
structure and motion estimation as an energy minimization problem, in which the
model is an extension of our slanted-plane stereo vision model that also
handles surface velocity. Velocity estimation is achieved by solving an MRF
labeling problem using Loopy BP. Performance analysis is done using our novel
evaluation metrics based on the notion of view prediction error. Experiments on
road-driving stereo sequences show encouraging results.
| [
{
"version": "v1",
"created": "Sat, 17 Jul 2010 19:59:11 GMT"
}
] | 2010-07-20T00:00:00 | [
[
"Trinh",
"Hoang",
""
]
] | TITLE: A Machine Learning Approach to Recovery of Scene Geometry from Images
ABSTRACT: Recovering the 3D structure of the scene from images yields useful
information for tasks such as shape and scene recognition, object detection, or
motion planning and object grasping in robotics. In this thesis, we introduce a
general machine learning approach called unsupervised CRF learning based on
maximizing the conditional likelihood. We apply our approach to computer vision
systems that recover the 3-D scene geometry from images. We focus on recovering
3D geometry from single images, stereo pairs and video sequences. Building
these systems requires algorithms for doing inference as well as learning the
parameters of conditional Markov random fields (MRF). Our system is trained
in an unsupervised manner, without ground-truth labeled data. We employ a
slanted-plane stereo vision model in which we use a fixed over-segmentation to
segment the left image into coherent regions called superpixels, then assign a
disparity plane for each superpixel. Plane parameters are estimated by solving
an MRF labelling problem, through minimizing an energy function. We demonstrate
the use of our unsupervised CRF learning algorithm for a parameterized
slanted-plane stereo vision model involving shape from texture cues. Our stereo
model with texture cues, only by unsupervised training, outperforms the results
in related work on the same stereo dataset. In this thesis, we also formulate
structure and motion estimation as an energy minimization problem, in which the
model is an extension of our slanted-plane stereo vision model that also
handles surface velocity. Velocity estimation is achieved by solving an MRF
labeling problem using Loopy BP. Performance analysis is done using our novel
evaluation metrics based on the notion of view prediction error. Experiments on
road-driving stereo sequences show encouraging results.
| no_new_dataset | 0.954563 |
1007.2545 | Anirban Chakraborti | Kimmo Kaski | Social Complexity: can it be analyzed and modelled? | 5 pages, 2 figures, REVTeX. To appear in "Econophysics", a special
issue in Science and Culture (Kolkata, India) to celebrate 15 years of
Econophysics | null | null | null | physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Over the past decade network theory has turned out to be a powerful
methodology to investigate complex systems of various sorts. Through data
analysis, modeling, and simulation quite an unparalleled insight into their
structure, function, and response can be obtained. In human societies
individuals are linked through social interactions, which today are
increasingly mediated electronically by modern Information Communication
Technology thus leaving "footprints" of human behaviour as digital records. For
these datasets the network theory approach is a natural one as we have
demonstrated by analysing the dataset of multi-million user mobile phone
communication-logs. This social network turned out to be modular in structure
showing communities where individuals are connected with stronger ties and
between communities with weaker ties. Also the network topology and the
weighted links for pairs of individuals turned out to be related. These
empirical findings inspired us to take the next step in network theory, by
developing a simple network model based on basic network sociology mechanisms
to get friends in order to catch some salient features of mesoscopic community
and macroscopic topology formation. Our model turned out to produce many
empirically observed features of large-scale social networks. Thus we believe
that the network theory approach combining data analysis with modeling and
simulation could open a new perspective for studying and even predicting
various collective social phenomena such as information spreading, formation of
societal structures, and evolutionary processes in them.
| [
{
"version": "v1",
"created": "Thu, 15 Jul 2010 12:43:35 GMT"
}
] | 2010-07-16T00:00:00 | [
[
"Kaski",
"Kimmo",
""
]
] | TITLE: Social Complexity: can it be analyzed and modelled?
ABSTRACT: Over the past decade network theory has turned out to be a powerful
methodology to investigate complex systems of various sorts. Through data
analysis, modeling, and simulation quite an unparalleled insight into their
structure, function, and response can be obtained. In human societies
individuals are linked through social interactions, which today are
increasingly mediated electronically by modern Information Communication
Technology thus leaving "footprints" of human behaviour as digital records. For
these datasets the network theory approach is a natural one as we have
demonstrated by analysing the dataset of multi-million user mobile phone
communication-logs. This social network turned out to be modular in structure
showing communities where individuals are connected with stronger ties and
between communities with weaker ties. Also the network topology and the
weighted links for pairs of individuals turned out to be related. These
empirical findings inspired us to take the next step in network theory, by
developing a simple network model based on basic network sociology mechanisms
to get friends in order to catch some salient features of mesoscopic community
and macroscopic topology formation. Our model turned out to produce many
empirically observed features of large-scale social networks. Thus we believe
that the network theory approach combining data analysis with modeling and
simulation could open a new perspective for studying and even predicting
various collective social phenomena such as information spreading, formation of
societal structures, and evolutionary processes in them.
| no_new_dataset | 0.945751 |
1005.4496 | Secretary Aircc Journal | Dewan Md. Farid(1), Nouria Harbi(1), and Mohammad Zahidur Rahman(2),
((1)University Lumiere Lyon 2 - France, (2)Jahangirnagar University,
Bangladesh) | Combining Naive Bayes and Decision Tree for Adaptive Intrusion Detection | 14 Pages, IJNSA | International Journal of Network Security & Its Applications 2.2
(2010) 12-25 | 10.5121/ijnsa.2010.2202 | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | In this paper, a new learning algorithm for adaptive network intrusion
detection using a naive Bayesian classifier and a decision tree is presented, which
performs balanced detection and keeps false positives at an acceptable level for
different types of network attacks, and eliminates redundant attributes as well
as contradictory examples from training data that make the detection model
complex. The proposed algorithm also addresses some difficulties of data mining
such as handling continuous attribute, dealing with missing attribute values,
and reducing noise in training data. Due to the large volumes of security audit
data as well as the complex and dynamic properties of intrusion behaviours,
several data mining-based intrusion detection techniques have been applied to
network-based traffic data and host-based data in the last decades. However,
there remain various issues needed to be examined towards current intrusion
detection systems (IDS). We tested the performance of our proposed algorithm
with existing learning algorithms by employing on the KDD99 benchmark intrusion
detection dataset. The experimental results prove that the proposed algorithm
achieved high detection rates (DR) and significant reduce false positives (FP)
for different types of network intrusions using limited computational
resources.
| [
{
"version": "v1",
"created": "Tue, 25 May 2010 07:47:00 GMT"
}
] | 2010-07-15T00:00:00 | [
[
"Farid",
"Dewan Md.",
""
],
[
"Harbi",
"Nouria",
""
],
[
"Rahman",
"Mohammad Zahidur",
""
]
] | TITLE: Combining Naive Bayes and Decision Tree for Adaptive Intrusion Detection
ABSTRACT: In this paper, a new learning algorithm for adaptive network intrusion
detection using a naive Bayesian classifier and a decision tree is presented.
It performs balanced detection and keeps false positives at an acceptable
level for different types of network attacks, and it eliminates redundant
attributes as well as contradictory examples from the training data that make
the detection model complex. The proposed algorithm also addresses some
difficulties of data mining such as handling continuous attributes, dealing
with missing attribute values, and reducing noise in training data. Due to the
large volumes of security audit data as well as the complex and dynamic
properties of intrusion behaviours, several data mining-based intrusion
detection techniques have been applied to network-based traffic data and
host-based data over the last decades. However, various issues remain to be
examined in current intrusion detection systems (IDS). We tested the
performance of our proposed algorithm against existing learning algorithms by
applying them to the KDD99 benchmark intrusion detection dataset. The
experimental results show that the proposed algorithm achieves high detection
rates (DR) and significantly reduces false positives (FP) for different types
of network intrusions while using limited computational resources.
| no_new_dataset | 0.947186 |
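The naive Bayes plus decision tree hybrid in this record can be sketched
minimally in Python. The paper's exact combination scheme is not reproduced
here; soft voting between the two classifiers, and the synthetic stand-in
features and labels used in place of KDD99, are assumptions:

```python
# A minimal stand-in sketch: combine a naive Bayes classifier and a decision
# tree by soft voting. Features and labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))           # stand-in for connection features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # stand-in attack/normal labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
hybrid = VotingClassifier(
    estimators=[("nb", GaussianNB()),
                ("dt", DecisionTreeClassifier(max_depth=8, random_state=0))],
    voting="soft",   # average the two predicted class probabilities
)
hybrid.fit(X_tr, y_tr)
print("test accuracy:", hybrid.score(X_te, y_te))
```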
1005.5434 | Secretary Aircc Journal | B.N. Keshavamurthy, Mitesh Sharma and Durga Toshniwal | Efficient Support Coupled Frequent Pattern Mining Over Progressive
Databases | 10 Pages, IJDMS | International Journal of Database Management Systems 2.2 (2010)
73-82 | 10.5121/ijdms.2010.2205 | null | cs.DB | http://creativecommons.org/licenses/by-nc-sa/3.0/ | There have been many recent studies on sequential pattern mining. The
sequential pattern mining on progressive databases is relatively new; here we
progressively discover the sequential patterns within a period of interest.
The period of interest is a sliding window that continuously advances as time
goes by. As the focus of the sliding window changes, new items are added to
the dataset of interest and obsolete items are removed from it, keeping it up
to date. In general, the existing proposals do not fully explore real-world
scenarios, such as items associated with support in data stream applications
like market basket analysis. Thus, mining important knowledge from supported
frequent items becomes a non-trivial research issue. Our proposed novel
approach efficiently mines frequent sequential patterns coupled with support
using a progressive mining tree.
| [
{
"version": "v1",
"created": "Sat, 29 May 2010 07:38:51 GMT"
}
] | 2010-07-15T00:00:00 | [
[
"Keshavamurthy",
"B. N.",
""
],
[
"Sharma",
"Mitesh",
""
],
[
"Toshniwal",
"Durga",
""
]
] | TITLE: Efficient Support Coupled Frequent Pattern Mining Over Progressive
Databases
ABSTRACT: There have been many recent studies on sequential pattern mining. The
sequential pattern mining on progressive databases is relatively new; here we
progressively discover the sequential patterns within a period of interest.
The period of interest is a sliding window that continuously advances as time
goes by. As the focus of the sliding window changes, new items are added to
the dataset of interest and obsolete items are removed from it, keeping it up
to date. In general, the existing proposals do not fully explore real-world
scenarios, such as items associated with support in data stream applications
like market basket analysis. Thus, mining important knowledge from supported
frequent items becomes a non-trivial research issue. Our proposed novel
approach efficiently mines frequent sequential patterns coupled with support
using a progressive mining tree.
| no_new_dataset | 0.949106 |
1007.1268 | Huy Nguyen | Huy Nguyen and Deokjai Choi | Application of Data Mining to Network Intrusion Detection: Classifier
Selection Model | Presented at The 11th Asia-Pacific Network Operations and Management
Symposium (APNOMS 2008) | null | null | null | cs.NI cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As network attacks have increased in number and severity over the past few
years, intrusion detection systems (IDS) are increasingly becoming a critical
component in securing the network. Due to the large volumes of security audit
data as well as the complex and dynamic properties of intrusion behaviors,
optimizing the performance of IDS becomes an important open problem that is
receiving more and more attention from the research community. The uncertainty
about whether certain algorithms perform better for certain attack classes
motivates the work reported herein. In this paper, we evaluate the performance
of a comprehensive set of classifier algorithms using the KDD99 dataset. Based
on the evaluation results, the best algorithm for each attack category is
chosen and two classifier algorithm selection models are proposed. The
simulation result comparison indicates that a noticeable performance
improvement and real-time intrusion detection can be achieved when we apply
the proposed models to detect different kinds of network attacks.
| [
{
"version": "v1",
"created": "Thu, 8 Jul 2010 00:23:40 GMT"
}
] | 2010-07-09T00:00:00 | [
[
"Nguyen",
"Huy",
""
],
[
"Choi",
"Deokjai",
""
]
] | TITLE: Application of Data Mining to Network Intrusion Detection: Classifier
Selection Model
ABSTRACT: As network attacks have increased in number and severity over the past few
years, intrusion detection systems (IDS) are increasingly becoming a critical
component in securing the network. Due to the large volumes of security audit
data as well as the complex and dynamic properties of intrusion behaviors,
optimizing the performance of IDS becomes an important open problem that is
receiving more and more attention from the research community. The uncertainty
about whether certain algorithms perform better for certain attack classes
motivates the work reported herein. In this paper, we evaluate the performance
of a comprehensive set of classifier algorithms using the KDD99 dataset. Based
on the evaluation results, the best algorithm for each attack category is
chosen and two classifier algorithm selection models are proposed. The
simulation result comparison indicates that a noticeable performance
improvement and real-time intrusion detection can be achieved when we apply
the proposed models to detect different kinds of network attacks.
| no_new_dataset | 0.948775 |
0803.1568 | Uwe Aickelin | Qi Chen and Uwe Aickelin | Dempster-Shafer for Anomaly Detection | null | Proceedings of the International Conference on Data Mining (DMIN
2006), pp 232-238, Las Vegas, USA 2006 | null | null | cs.NE cs.AI cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we implement an anomaly detection system using the
Dempster-Shafer method. Using two standard benchmark problems we show that by
combining multiple signals it is possible to achieve better results than by
using a single signal. We further show, by applying this approach to a
real-world email dataset, that the algorithm works for email worm detection.
Dempster-Shafer can be a promising method for anomaly detection problems with
multiple features (data sources) and two or more classes.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2008 12:39:01 GMT"
}
] | 2010-07-05T00:00:00 | [
[
"Chen",
"Qi",
""
],
[
"Aickelin",
"Uwe",
""
]
] | TITLE: Dempster-Shafer for Anomaly Detection
ABSTRACT: In this paper, we implement an anomaly detection system using the
Dempster-Shafer method. Using two standard benchmark problems we show that by
combining multiple signals it is possible to achieve better results than by
using a single signal. We further show, by applying this approach to a
real-world email dataset, that the algorithm works for email worm detection.
Dempster-Shafer can be a promising method for anomaly detection problems with
multiple features (data sources) and two or more classes.
| no_new_dataset | 0.945701 |
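Dempster's rule of combination, as used in this record to fuse evidence from
multiple signals, has a compact implementation. In the sketch below, the two
mass functions (attachment size and sending rate) are illustrative
assumptions, not values from the paper:

```python
# A minimal sketch of Dempster's rule of combination for two mass functions
# over the same frame of discernment; focal elements are frozensets.
from itertools import product

def combine(m1, m2):
    """Combine two basic probability assignments with Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:                       # mass flows to the intersection
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:                           # empty intersection: conflicting mass
            conflict += w1 * w2
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

N, A = frozenset({"normal"}), frozenset({"anomalous"})
either = N | A                           # total ignorance
m_size = {A: 0.6, N: 0.1, either: 0.3}   # evidence from attachment size
m_rate = {A: 0.5, N: 0.2, either: 0.3}   # evidence from sending rate
print(combine(m_size, m_rate))
```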
0803.2973 | Uwe Aickelin | Uwe Aickelin, Jamie Twycross and Thomas Hesketh-Roberts | Rule Generalisation in Intrusion Detection Systems using Snort | null | International Journal of Electronic Security and Digital
Forensics, 1 (1), pp 101-116, 2007 | 10.1504/IJESDF.2007.013596 | null | cs.NE cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Intrusion Detection Systems (IDS) provide an important layer of security for
computer systems and networks, and are becoming more and more necessary as
reliance on Internet services increases and systems with sensitive data are
more commonly open to Internet access. An IDS's responsibility is to detect
suspicious or unacceptable system and network activity and to alert a systems
administrator to this activity. The majority of IDS use a set of signatures
that define what suspicious traffic is, and Snort is one popular and actively
developed open-source IDS that uses such a set of signatures, known as Snort
rules. Our aim is to identify a way in which Snort could be developed further
by generalising rules to identify novel attacks. In particular, we attempted
to relax and vary the conditions and parameters of current Snort rules, using
an approach similar to classic rule learning operators such as generalisation
and specialisation. We demonstrate the effectiveness of our approach through
experiments with standard datasets and show that we are able to detect
previously undetected variants of various attacks. We conclude by discussing
the general effectiveness and appropriateness of generalisation in
Snort-based IDS rule processing.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2008 11:59:27 GMT"
},
{
"version": "v2",
"created": "Fri, 16 May 2008 10:42:09 GMT"
}
] | 2010-07-05T00:00:00 | [
[
"Aickelin",
"Uwe",
""
],
[
"Twycross",
"Jamie",
""
],
[
"Hesketh-Roberts",
"Thomas",
""
]
] | TITLE: Rule Generalisation in Intrusion Detection Systems using Snort
ABSTRACT: Intrusion Detection Systems (IDS) provide an important layer of
security for computer systems and networks, and are becoming more and more
necessary as reliance on Internet services increases and systems with
sensitive data are more commonly open to Internet access. An IDS's
responsibility is to detect suspicious or unacceptable system and network
activity and to alert a systems administrator to this activity. The majority
of IDS use a set of signatures that define what suspicious traffic is, and
Snort is one popular and actively developed open-source IDS that uses such a
set of signatures, known as Snort rules. Our aim is to identify a way in which
Snort could be developed further by generalising rules to identify novel
attacks. In particular, we attempted to relax and vary the conditions and
parameters of current Snort rules, using an approach similar to classic rule
learning operators such as generalisation and specialisation. We demonstrate
the effectiveness of our approach through experiments with standard datasets
and show that we are able to detect previously undetected variants of various
attacks. We conclude by discussing the general effectiveness and
appropriateness of generalisation in Snort-based IDS rule processing.
| no_new_dataset | 0.940626 |
1004.3708 | Uwe Aickelin | Yongnan Ji, Pierre-Yves Herve, Uwe Aickelin, Alain Pitiot | Parcellation of fMRI Datasets with ICA and PLS-A Data Driven Approach | 8 pages, 5 figures, 12th International Conference of Medical Image
Computing and Computer-Assisted Intervention (MICCAI 2009) | Proceedings of the 12th International Conference of Medical Image
Computing and Computer-Assisted Intervention (MICCAI 2009), Part I, Lecture
Notes in Computer Science 5761, London, UK, 2009 | null | null | cs.CV cs.AI cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inter-subject parcellation of functional Magnetic Resonance Imaging (fMRI)
data based on a standard General Linear Model (GLM) and spectral clustering
was recently proposed as a means to alleviate the issues associated with
spatial normalization in fMRI. However, for all its appeal, a GLM-based
parcellation approach introduces its own biases, in the form of a priori
knowledge about the shape of the Hemodynamic Response Function (HRF) and
task-related signal changes, or about the subject's behaviour during the task.
In this paper, we introduce a data-driven version of the spectral clustering
parcellation, based on Independent Component Analysis (ICA) and Partial Least
Squares (PLS) instead of the GLM. First, a number of independent components
are automatically selected. Seed voxels are then obtained from the associated
ICA maps, and we compute the PLS latent variables between the fMRI signal of
the seed voxels (which covers regional variations of the HRF) and the
principal components of the signal across all voxels. Finally, we parcellate
all subjects' data with a spectral clustering of the PLS latent variables. We
present results of the application of the proposed method on both
single-subject and multi-subject fMRI datasets. Preliminary experimental
results, evaluated with the intra-parcel variance of GLM t-values and
PLS-derived t-values, indicate that this data-driven approach offers an
improvement in parcellation accuracy over GLM-based techniques.
| [
{
"version": "v1",
"created": "Wed, 21 Apr 2010 13:50:55 GMT"
}
] | 2010-07-05T00:00:00 | [
[
"Ji",
"Yongnan",
""
],
[
"Herve",
"Pierre-Yves",
""
],
[
"Aickelin",
"Uwe",
""
],
[
"Pitiot",
"Alain",
""
]
] | TITLE: Parcellation of fMRI Datasets with ICA and PLS-A Data Driven Approach
ABSTRACT: Inter-subject parcellation of functional Magnetic Resonance Imaging (fMRI)
data based on a standard General Linear Model (GLM) and spectral clustering
was recently proposed as a means to alleviate the issues associated with
spatial normalization in fMRI. However, for all its appeal, a GLM-based
parcellation approach introduces its own biases, in the form of a priori
knowledge about the shape of the Hemodynamic Response Function (HRF) and
task-related signal changes, or about the subject's behaviour during the task.
In this paper, we introduce a data-driven version of the spectral clustering
parcellation, based on Independent Component Analysis (ICA) and Partial Least
Squares (PLS) instead of the GLM. First, a number of independent components
are automatically selected. Seed voxels are then obtained from the associated
ICA maps, and we compute the PLS latent variables between the fMRI signal of
the seed voxels (which covers regional variations of the HRF) and the
principal components of the signal across all voxels. Finally, we parcellate
all subjects' data with a spectral clustering of the PLS latent variables. We
present results of the application of the proposed method on both
single-subject and multi-subject fMRI datasets. Preliminary experimental
results, evaluated with the intra-parcel variance of GLM t-values and
PLS-derived t-values, indicate that this data-driven approach offers an
improvement in parcellation accuracy over GLM-based techniques.
| no_new_dataset | 0.946101 |
1006.1512 | Uwe Aickelin | Julie Greensmith, Uwe Aickelin | The Deterministic Dendritic Cell Algorithm | 12 pages, 1 algorithm, 1 figure, 2 tables, 7th International
Conference on Artificial Immune Systems (ICARIS 2008) | Proceedings of the 7th International Conference on Artificial
Immune Systems (ICARIS 2008), Phuket, Thailand, p 291-303 | null | null | cs.AI cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Dendritic Cell Algorithm is an immune-inspired algorithm originally
based on the function of natural dendritic cells. The original instantiation
of the algorithm is highly stochastic. While the performance of the algorithm
is good when applied to large real-time datasets, it is difficult to analyse
due to the number of random elements. In this paper a deterministic version of
the algorithm is proposed, implemented and tested using a port scan dataset to
provide a controllable system. This version has a controllable number of
parameters, which are experimented with in this paper. In addition, the
effects of the use of time windows and of variation in the number of cells are
examined, both of which are shown to influence the algorithm. Finally, a novel
metric for the assessment of the algorithm's output is introduced and proves
to be more sensitive than the metric used with the original Dendritic Cell
Algorithm.
| [
{
"version": "v1",
"created": "Tue, 8 Jun 2010 10:07:34 GMT"
}
] | 2010-07-05T00:00:00 | [
[
"Greensmith",
"Julie",
""
],
[
"Aickelin",
"Uwe",
""
]
] | TITLE: The Deterministic Dendritic Cell Algorithm
ABSTRACT: The Dendritic Cell Algorithm is an immune-inspired algorithm
originally based on the function of natural dendritic cells. The original
instantiation of the algorithm is highly stochastic. While the performance of
the algorithm is good when applied to large real-time datasets, it is
difficult to analyse due to the number of random elements. In this paper a
deterministic version of the algorithm is proposed, implemented and tested
using a port scan dataset to provide a controllable system. This version has a
controllable number of parameters, which are experimented with in this paper.
In addition, the effects of the use of time windows and of variation in the
number of cells are examined, both of which are shown to influence the
algorithm. Finally, a novel metric for the assessment of the algorithm's
output is introduced and proves to be more sensitive than the metric used with
the original Dendritic Cell Algorithm.
| no_new_dataset | 0.948537 |
1006.5060 | Xiaohui Xie | Gui-Bo Ye and Xiaohui Xie | Learning sparse gradients for variable selection and dimension reduction | null | null | null | null | stat.ML cs.LG stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Variable selection and dimension reduction are two commonly adopted
approaches for high-dimensional data analysis, but have traditionally been
treated separately. Here we propose an integrated approach, called sparse
gradient learning (SGL), for variable selection and dimension reduction via
learning the gradients of the prediction function directly from samples. By
imposing a sparsity constraint on the gradients, variable selection is achieved
by selecting variables corresponding to non-zero partial derivatives, and
effective dimensions are extracted based on the eigenvectors of the derived
sparse empirical gradient covariance matrix. An error analysis is given for the
convergence of the estimated gradients to the true ones in both the Euclidean
and the manifold setting. We also develop an efficient forward-backward
splitting algorithm to solve the SGL problem, making the framework practically
scalable for medium or large datasets. The utility of SGL for variable
selection and feature extraction is explicitly given and illustrated on
artificial data as well as real-world examples. The main advantages of our
method include variable selection for both linear and nonlinear predictions,
effective dimension reduction with sparse loadings, and an efficient algorithm
for large p, small n problems.
| [
{
"version": "v1",
"created": "Fri, 25 Jun 2010 20:27:00 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Jul 2010 05:06:43 GMT"
}
] | 2010-07-02T00:00:00 | [
[
"Ye",
"Gui-Bo",
""
],
[
"Xie",
"Xiaohui",
""
]
] | TITLE: Learning sparse gradients for variable selection and dimension reduction
ABSTRACT: Variable selection and dimension reduction are two commonly adopted
approaches for high-dimensional data analysis, but have traditionally been
treated separately. Here we propose an integrated approach, called sparse
gradient learning (SGL), for variable selection and dimension reduction via
learning the gradients of the prediction function directly from samples. By
imposing a sparsity constraint on the gradients, variable selection is achieved
by selecting variables corresponding to non-zero partial derivatives, and
effective dimensions are extracted based on the eigenvectors of the derived
sparse empirical gradient covariance matrix. An error analysis is given for the
convergence of the estimated gradients to the true ones in both the Euclidean
and the manifold setting. We also develop an efficient forward-backward
splitting algorithm to solve the SGL problem, making the framework practically
scalable for medium or large datasets. The utility of SGL for variable
selection and feature extraction is explicitly given and illustrated on
artificial data as well as real-world examples. The main advantages of our
method include variable selection for both linear and nonlinear predictions,
effective dimension reduction with sparse loadings, and an efficient algorithm
for large p, small n problems.
| no_new_dataset | 0.947039 |
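The forward-backward splitting algorithm mentioned in this record alternates
a gradient step on the smooth data-fit term with a proximal step on the
sparsity penalty. The sketch below illustrates that optimization pattern on a
simplified group-sparse least-squares problem, not the paper's exact SGL
objective; the step-size rule and the grouping are assumptions:

```python
# A minimal sketch of forward-backward splitting: smooth least-squares term
# plus a group-sparsity penalty (block soft-thresholding as its prox).
import numpy as np

def prox_group_l2(v, groups, t):
    """Proximal operator of t * sum_g ||v_g||_2."""
    out = v.copy()
    for g in groups:
        norm = np.linalg.norm(v[g])
        out[g] = 0.0 if norm <= t else (1 - t / norm) * v[g]
    return out

def forward_backward(A, y, groups, lam, n_iter=200):
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L for the smooth term
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                    # forward (gradient) step
        x = prox_group_l2(x - step * grad, groups, step * lam)  # backward
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 6))
y = A[:, :2] @ np.array([1.5, -2.0])                # only group 0 is active
groups = [slice(0, 2), slice(2, 4), slice(4, 6)]
print(np.round(forward_backward(A, y, groups, lam=1.0), 2))
```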
1005.4032 | Debotosh Bhattacharjee | Sandhya Arora, Debotosh Bhattacharjee, Mita Nasipuri, Dipak Kumar
Basu, and Mahantapas Kundu | Combining Multiple Feature Extraction Techniques for Handwritten
Devnagari Character Recognition | 6 pages, 8-10 December 2008 | ICIIS 2008 | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present an OCR for Handwritten Devnagari Characters. Basic
symbols are recognized by a neural classifier. We have used four feature
extraction techniques, namely intersection, shadow, chain code histogram and
straight line fitting features. Shadow features are computed globally for the
character image, while intersection features, chain code histogram features
and line fitting features are computed by dividing the character image into
different segments. A weighted majority voting technique is used for
combining the classification decisions obtained from four Multi-Layer
Perceptron (MLP) based classifiers. In experiments on a dataset of 4900
samples, the overall recognition rate observed is 92.80% when the top five
choices are considered. This method is compared with other recent methods for
Handwritten Devnagari Character Recognition, and it has been observed that
this approach has a better success rate than other methods.
| [
{
"version": "v1",
"created": "Fri, 21 May 2010 17:57:50 GMT"
}
] | 2010-07-01T00:00:00 | [
[
"Arora",
"Sandhya",
""
],
[
"Bhattacharjee",
"Debotosh",
""
],
[
"Nasipuri",
"Mita",
""
],
[
"Basu",
"Dipak Kumar",
""
],
[
"Kundu",
"Mahantapas",
""
]
] | TITLE: Combining Multiple Feature Extraction Techniques for Handwritten
Devnagari Character Recognition
ABSTRACT: In this paper we present an OCR for Handwritten Devnagari Characters. Basic
symbols are recognized by a neural classifier. We have used four feature
extraction techniques, namely intersection, shadow, chain code histogram and
straight line fitting features. Shadow features are computed globally for the
character image, while intersection features, chain code histogram features
and line fitting features are computed by dividing the character image into
different segments. A weighted majority voting technique is used for
combining the classification decisions obtained from four Multi-Layer
Perceptron (MLP) based classifiers. In experiments on a dataset of 4900
samples, the overall recognition rate observed is 92.80% when the top five
choices are considered. This method is compared with other recent methods for
Handwritten Devnagari Character Recognition, and it has been observed that
this approach has a better success rate than other methods.
| no_new_dataset | 0.946695 |
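The weighted majority voting used in this record to combine MLP decisions
reduces to a weighted sum of per-class scores. A minimal sketch follows; the
per-classifier weights and score vectors are illustrative assumptions (three
classifiers are shown for brevity, though the record combines four):

```python
# A minimal sketch of weighted majority voting over per-classifier class
# scores, in the spirit of combining MLPs trained on different feature sets.
import numpy as np

def weighted_majority_vote(scores, weights):
    """scores: (n_classifiers, n_classes); weights: per-classifier weights.
    Returns the index of the winning class."""
    fused = np.asarray(weights, float) @ np.asarray(scores, float)
    return int(np.argmax(fused))

# Three classifiers (e.g. shadow, intersection, chain-code features) scoring
# four candidate character classes.
scores = [[0.1, 0.6, 0.2, 0.1],
          [0.3, 0.4, 0.2, 0.1],
          [0.2, 0.2, 0.5, 0.1]]
weights = [0.5, 0.3, 0.2]
print(weighted_majority_vote(scores, weights))  # -> class 1
```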
1006.5913 | Debotosh Bhattacharjee | Sandhya Arora, Debotosh Bhattacharjee, Mita Nasipuri, Dipak Kumar
Basu, and Mahantapas Kundu | Multiple Classifier Combination for Off-line Handwritten Devnagari
Character Recognition | null | ICSC 2008 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work presents the application of a weighted majority voting technique
for combining the classification decisions obtained from three Multi-Layer
Perceptron (MLP) based classifiers for recognition of Handwritten Devnagari
characters using three different feature sets. The features used are
intersection, shadow and chain code histogram features. Shadow features are
computed globally for the character image, while intersection features and
chain code histogram features are computed by dividing the character image
into different segments. In experiments on a dataset of 4900 samples, the
overall recognition rate observed is 92.16% when the top five choices are
considered. This method is compared with other recent methods for Handwritten
Devnagari Character Recognition, and it has been observed that this approach
has a better success rate than other methods.
| [
{
"version": "v1",
"created": "Wed, 30 Jun 2010 16:38:02 GMT"
}
] | 2010-07-01T00:00:00 | [
[
"Arora",
"Sandhya",
""
],
[
"Bhattacharjee",
"Debotosh",
""
],
[
"Nasipuri",
"Mita",
""
],
[
"Basu",
"Dipak Kumar",
""
],
[
"Kundu",
"Mahantapas",
""
]
] | TITLE: Multiple Classifier Combination for Off-line Handwritten Devnagari
Character Recognition
ABSTRACT: This work presents the application of a weighted majority voting
technique for combining the classification decisions obtained from three
Multi-Layer Perceptron (MLP) based classifiers for recognition of Handwritten
Devnagari characters using three different feature sets. The features used are
intersection, shadow and chain code histogram features. Shadow features are
computed globally for the character image, while intersection features and
chain code histogram features are computed by dividing the character image
into different segments. In experiments on a dataset of 4900 samples, the
overall recognition rate observed is 92.16% when the top five choices are
considered. This method is compared with other recent methods for Handwritten
Devnagari Character Recognition, and it has been observed that this approach
has a better success rate than other methods.
| no_new_dataset | 0.940626 |
1006.5927 | Debotosh Bhattacharjee | Sandhya Arora, Latesh Malik, Debotosh Bhattacharjee, and Mita Nasipuri | Classification Of Gradient Change Features Using MLP For Handwritten
Character Recognition | null | EAIT 2006 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A novel, generic scheme for off-line handwritten English alphabet character
images is proposed. The advantage of the technique is that it can be applied
in a generic manner to different applications and is expected to perform
better in uncertain and noisy environments. The recognition scheme uses
multilayer perceptron (MLP) neural networks. The system was trained and tested
on a database of 300 samples of handwritten characters. For improved
generalization and to avoid overtraining, the whole available dataset has been
divided into two subsets: a training set and a test set. We achieved 99.10%
and 94.15% correct recognition rates on the training and test sets,
respectively. The proposed scheme is robust with respect to various writing
styles and sizes as well as the presence of considerable noise.
| [
{
"version": "v1",
"created": "Wed, 30 Jun 2010 17:14:40 GMT"
}
] | 2010-07-01T00:00:00 | [
[
"Arora",
"Sandhya",
""
],
[
"Malik",
"Latesh",
""
],
[
"Bhattacharjee",
"Debotosh",
""
],
[
"Nasipuri",
"Mita",
""
]
] | TITLE: Classification Of Gradient Change Features Using MLP For Handwritten
Character Recognition
ABSTRACT: A novel, generic scheme for off-line handwritten English alphabet
character images is proposed. The advantage of the technique is that it can be
applied in a generic manner to different applications and is expected to
perform better in uncertain and noisy environments. The recognition scheme
uses multilayer perceptron (MLP) neural networks. The system was trained and
tested on a database of 300 samples of handwritten characters. For improved
generalization and to avoid overtraining, the whole available dataset has been
divided into two subsets: a training set and a test set. We achieved 99.10%
and 94.15% correct recognition rates on the training and test sets,
respectively. The proposed scheme is robust with respect to various writing
styles and sizes as well as the presence of considerable noise.
| no_new_dataset | 0.941922 |
1006.5188 | Nicola Di Mauro | Nicola Di Mauro and Teresa M.A. Basile and Stefano Ferilli and
Floriana Esposito | Feature Construction for Relational Sequence Learning | 15 pages | null | null | null | cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We tackle the problem of multi-class relational sequence learning using
relevant patterns discovered from a set of labelled sequences. To deal with
this problem, each relational sequence is first mapped into a feature vector
using the result of a feature construction method. Since the efficacy of
sequence learning algorithms strongly depends on the features used to represent
the sequences, the second step is to find an optimal subset of the constructed
features leading to high classification accuracy. This feature selection task
has been solved by adopting a wrapper approach that uses a stochastic local search
algorithm embedding a naive Bayes classifier. The performance of the proposed
method applied to a real-world dataset shows an improvement when compared to
other established methods, such as hidden Markov models, Fisher kernels and
conditional random fields for relational sequences.
| [
{
"version": "v1",
"created": "Sun, 27 Jun 2010 08:56:11 GMT"
}
] | 2010-06-29T00:00:00 | [
[
"Di Mauro",
"Nicola",
""
],
[
"Basile",
"Teresa M. A.",
""
],
[
"Ferilli",
"Stefano",
""
],
[
"Esposito",
"Floriana",
""
]
] | TITLE: Feature Construction for Relational Sequence Learning
ABSTRACT: We tackle the problem of multi-class relational sequence learning using
relevant patterns discovered from a set of labelled sequences. To deal with
this problem, each relational sequence is first mapped into a feature vector
using the result of a feature construction method. Since the efficacy of
sequence learning algorithms strongly depends on the features used to represent
the sequences, the second step is to find an optimal subset of the constructed
features leading to high classification accuracy. This feature selection task
has been solved by adopting a wrapper approach that uses a stochastic local search
algorithm embedding a naive Bayes classifier. The performance of the proposed
method applied to a real-world dataset shows an improvement when compared to
other established methods, such as hidden Markov models, Fisher kernels and
conditional random fields for relational sequences.
| no_new_dataset | 0.948298 |
1006.5041 | Yoshinobu Kawahara | Yoshinobu Kawahara, Kenneth Bollen, Shohei Shimizu and Takashi Washio | GroupLiNGAM: Linear non-Gaussian acyclic models for sets of variables | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Finding the structure of a graphical model has received much attention in
many fields. Recently, it has been reported that the non-Gaussianity of data
enables us to identify the structure of a directed acyclic graph without any
prior knowledge of the structure. In this paper, we propose a novel
non-Gaussianity-based algorithm for a more general type of model: chain
graphs. The algorithm finds an ordering of the disjoint subsets of variables
by iteratively evaluating the independence between a variable subset and the
residuals when the remaining variables are regressed on it. However, its
computational cost grows exponentially with the number of variables.
Therefore, we further discuss an efficient approximate approach for applying
the algorithm to large graphs. We illustrate the algorithm with artificial
and real-world datasets.
| [
{
"version": "v1",
"created": "Thu, 24 Jun 2010 13:09:36 GMT"
}
] | 2010-06-28T00:00:00 | [
[
"Kawahara",
"Yoshinobu",
""
],
[
"Bollen",
"Kenneth",
""
],
[
"Shimizu",
"Shohei",
""
],
[
"Washio",
"Takashi",
""
]
] | TITLE: GroupLiNGAM: Linear non-Gaussian acyclic models for sets of variables
ABSTRACT: Finding the structure of a graphical model has received much
attention in many fields. Recently, it has been reported that the
non-Gaussianity of data enables us to identify the structure of a directed
acyclic graph without any prior knowledge of the structure. In this paper, we
propose a novel non-Gaussianity-based algorithm for a more general type of
model: chain graphs. The algorithm finds an ordering of the disjoint subsets
of variables by iteratively evaluating the independence between a variable
subset and the residuals when the remaining variables are regressed on it.
However, its computational cost grows exponentially with the number of
variables. Therefore, we further discuss an efficient approximate approach
for applying the algorithm to large graphs. We illustrate the algorithm with
artificial and real-world datasets.
| no_new_dataset | 0.949201 |
1006.5051 | Ping Li | Ping Li | Fast ABC-Boost for Multi-Class Classification | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Abc-boost is a new line of boosting algorithms for multi-class
classification, by utilizing the commonly used sum-to-zero constraint. To
implement abc-boost, a base class must be identified at each boosting step.
Prior studies used a very expensive procedure based on exhaustive search for
determining the base class at each boosting step. Good testing performances of
abc-boost (implemented as abc-mart and abc-logitboost) on a variety of datasets
were reported.
For large datasets, however, the exhaustive search strategy adopted in prior
abc-boost algorithms can be prohibitive. To overcome this serious limitation,
this paper suggests a heuristic: introducing gaps when computing the base
class during training. That is, we update the choice of the base class
only for every $G$ boosting steps (i.e., G=1 in prior studies). We test this
idea on large datasets (Covertype and Poker) as well as datasets of moderate
sizes. Our preliminary results are very encouraging. On the large datasets,
even with G=100 (or larger), there is essentially no loss of test accuracy. On
the moderate datasets, no obvious loss of test accuracy is observed when G<=
20~50. Therefore, aided by this heuristic, it is promising that abc-boost will
be a practical tool for accurate multi-class classification.
| [
{
"version": "v1",
"created": "Fri, 25 Jun 2010 19:48:50 GMT"
}
] | 2010-06-28T00:00:00 | [
[
"Li",
"Ping",
""
]
] | TITLE: Fast ABC-Boost for Multi-Class Classification
ABSTRACT: Abc-boost is a new line of boosting algorithms for multi-class
classification, by utilizing the commonly used sum-to-zero constraint. To
implement abc-boost, a base class must be identified at each boosting step.
Prior studies used a very expensive procedure based on exhaustive search for
determining the base class at each boosting step. Good testing performances of
abc-boost (implemented as abc-mart and abc-logitboost) on a variety of datasets
were reported.
For large datasets, however, the exhaustive search strategy adopted in prior
abc-boost algorithms can be prohibitive. To overcome this serious limitation,
this paper suggests a heuristic: introducing gaps when computing the base
class during training. That is, we update the choice of the base class
only for every $G$ boosting steps (i.e., G=1 in prior studies). We test this
idea on large datasets (Covertype and Poker) as well as datasets of moderate
sizes. Our preliminary results are very encouraging. On the large datasets,
even with G=100 (or larger), there is essentially no loss of test accuracy. On
the moderate datasets, no obvious loss of test accuracy is observed when G<=
20~50. Therefore, aided by this heuristic, it is promising that abc-boost will
be a practical tool for accurate multi-class classification.
| no_new_dataset | 0.947137 |
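The gap heuristic in this record replaces the per-step exhaustive base-class
search with a re-search only every G boosting steps. A minimal control-flow
sketch, with the search and the boosting step stubbed out as assumptions:

```python
# A minimal sketch of the gap heuristic: re-run the expensive base-class
# search only once every G boosting steps (G=1 recovers prior abc-boost).
def abc_boost_with_gaps(n_steps, G, find_base_class, boost_step):
    base = find_base_class()            # exhaustive search, expensive
    for t in range(1, n_steps + 1):
        if t % G == 0:                  # refresh the base class every G steps
            base = find_base_class()
        boost_step(base)

# Trivial stubs to show the control flow:
abc_boost_with_gaps(
    n_steps=10, G=5,
    find_base_class=lambda: print("searching base class") or 0,
    boost_step=lambda base: None,
)
```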
1006.4540 | William Jackson | N. Suguna and K. Thanushkodi | A Novel Rough Set Reduct Algorithm for Medical Domain Based on Bee
Colony Optimization | IEEE Publication Format,
https://sites.google.com/site/journalofcomputing/ | Journal of Computing, Vol. 2, No. 6, June 2010, NY, USA, ISSN
2151-9617 | null | null | cs.LG cs.AI cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Feature selection refers to the problem of selecting relevant features which
produce the most predictive outcome. In particular, the feature selection
task arises in datasets containing a huge number of features. Rough set theory
has been one of the most successful methods used for feature selection.
However, this method is still not able to find optimal subsets. This paper
proposes a new feature selection method based on Rough set theory hybridized
with Bee Colony Optimization (BCO) in an attempt to combat this. The proposed
method is applied in the medical domain to find the minimal reducts and is
experimentally compared
with the Quick Reduct, Entropy Based Reduct, and other hybrid Rough Set methods
such as Genetic Algorithm (GA), Ant Colony Optimization (ACO) and Particle
Swarm Optimization (PSO).
| [
{
"version": "v1",
"created": "Wed, 23 Jun 2010 14:53:33 GMT"
}
] | 2010-06-24T00:00:00 | [
[
"Suguna",
"N.",
""
],
[
"Thanushkodi",
"K.",
""
]
] | TITLE: A Novel Rough Set Reduct Algorithm for Medical Domain Based on Bee
Colony Optimization
ABSTRACT: Feature selection refers to the problem of selecting relevant features which
produce the most predictive outcome. In particular, the feature selection
task arises in datasets containing a huge number of features. Rough set theory
has been one of the most successful methods used for feature selection.
However, this method is still not able to find optimal subsets. This paper
proposes a new feature selection method based on Rough set theory hybridized
with Bee Colony Optimization (BCO) in an attempt to combat this. The proposed
method is applied in the medical domain to find the minimal reducts and is
experimentally compared
with the Quick Reduct, Entropy Based Reduct, and other hybrid Rough Set methods
such as Genetic Algorithm (GA), Ant Colony Optimization (ACO) and Particle
Swarm Optimization (PSO).
| no_new_dataset | 0.951097 |
1006.3679 | Hossein Mobahi | Hossein Mobahi, Shankar R. Rao, Allen Y. Yang, Shankar S. Sastry and
Yi Ma | Segmentation of Natural Images by Texture and Boundary Compression | null | null | null | null | cs.CV cs.IT cs.LG math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel algorithm for segmentation of natural images that
harnesses the principle of minimum description length (MDL). Our method is
based on observations that a homogeneously textured region of a natural image
can be well modeled by a Gaussian distribution and the region boundary can be
effectively coded by an adaptive chain code. The optimal segmentation of an
image is the one that gives the shortest coding length for encoding all
textures and boundaries in the image, and is obtained via an agglomerative
clustering process applied to a hierarchy of decreasing window sizes as
multi-scale texture features. The optimal segmentation also provides an
accurate estimate of the overall coding length and hence the true entropy of
the image. We test our algorithm on the publicly available Berkeley
Segmentation Dataset. It achieves state-of-the-art segmentation results
compared to other existing methods.
| [
{
"version": "v1",
"created": "Fri, 18 Jun 2010 12:37:28 GMT"
}
] | 2010-06-21T00:00:00 | [
[
"Mobahi",
"Hossein",
""
],
[
"Rao",
"Shankar R.",
""
],
[
"Yang",
"Allen Y.",
""
],
[
"Sastry",
"Shankar S.",
""
],
[
"Ma",
"Yi",
""
]
] | TITLE: Segmentation of Natural Images by Texture and Boundary Compression
ABSTRACT: We present a novel algorithm for segmentation of natural images that
harnesses the principle of minimum description length (MDL). Our method is
based on observations that a homogeneously textured region of a natural image
can be well modeled by a Gaussian distribution and the region boundary can be
effectively coded by an adaptive chain code. The optimal segmentation of an
image is the one that gives the shortest coding length for encoding all
textures and boundaries in the image, and is obtained via an agglomerative
clustering process applied to a hierarchy of decreasing window sizes as
multi-scale texture features. The optimal segmentation also provides an
accurate estimate of the overall coding length and hence the true entropy of
the image. We test our algorithm on the publicly available Berkeley
Segmentation Dataset. It achieves state-of-the-art segmentation results
compared to other existing methods.
| no_new_dataset | 0.949435 |
1006.2734 | Ariel Baya | Ariel E. Baya and Pablo M. Granitto | Penalized K-Nearest-Neighbor-Graph Based Metrics for Clustering | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/3.0/ | A difficult problem in clustering is how to handle data with a manifold
structure, i.e. data that is not shaped in the form of compact clouds of
points, forming arbitrary shapes or paths embedded in a high-dimensional space.
In this work we introduce the Penalized k-Nearest-Neighbor-Graph (PKNNG) based
metric, a new tool for evaluating distances in such cases. The new metric can
be used in combination with most clustering algorithms. The PKNNG metric is
based on a two-step procedure: first it constructs the k-Nearest-Neighbor-Graph
of the dataset of interest using a low k-value and then it adds edges with an
exponentially penalized weight for connecting the sub-graphs produced by the
first step. We discuss several possible schemes for connecting the different
sub-graphs. We use three artificial datasets in four different embedding
situations to evaluate the behavior of the new metric, including a comparison
among different clustering methods. We also evaluate the new metric in a real
world application, clustering the MNIST digits dataset. In all cases the PKNNG
metric shows promising clustering results.
| [
{
"version": "v1",
"created": "Mon, 14 Jun 2010 15:07:45 GMT"
}
] | 2010-06-15T00:00:00 | [
[
"Baya",
"Ariel E.",
""
],
[
"Granitto",
"Pablo M.",
""
]
] | TITLE: Penalized K-Nearest-Neighbor-Graph Based Metrics for Clustering
ABSTRACT: A difficult problem in clustering is how to handle data with a manifold
structure, i.e. data that is not shaped in the form of compact clouds of
points, forming arbitrary shapes or paths embedded in a high-dimensional space.
In this work we introduce the Penalized k-Nearest-Neighbor-Graph (PKNNG) based
metric, a new tool for evaluating distances in such cases. The new metric can
be used in combination with most clustering algorithms. The PKNNG metric is
based on a two-step procedure: first it constructs the k-Nearest-Neighbor-Graph
of the dataset of interest using a low k-value and then it adds edges with an
exponentially penalized weight for connecting the sub-graphs produced by the
first step. We discuss several possible schemes for connecting the different
sub-graphs. We use three artificial datasets in four different embedding
situations to evaluate the behavior of the new metric, including a comparison
among different clustering methods. We also evaluate the new metric in a real
world application, clustering the MNIST digits dataset. In all cases the PKNNG
metric shows promising clustering results.
| no_new_dataset | 0.952882 |
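The two-step PKNNG construction in this record (a sparse kNN graph, then
penalized bridge edges between its components) can be sketched as below. The
penalty form d*exp(d/mu) and the closest-pair connection scheme are
assumptions picked from among the schemes the paper discusses:

```python
# A minimal sketch of PKNNG: (1) build a kNN graph with a small k; (2) connect
# the resulting components through their closest point pairs with an
# exponentially penalized weight; then read off graph geodesics.
import numpy as np
from scipy.sparse.csgraph import connected_components, shortest_path
from sklearn.neighbors import kneighbors_graph

def pknng_distances(X, k=3):
    G = kneighbors_graph(X, k, mode="distance").toarray()
    G = np.maximum(G, G.T)                      # symmetrize the kNN graph
    mu = G[G > 0].mean()                        # typical local edge length
    n_comp, labels = connected_components(G > 0, directed=False)
    for a in range(n_comp):                     # bridge every component pair
        for b in range(a + 1, n_comp):
            ia, ib = np.where(labels == a)[0], np.where(labels == b)[0]
            D = np.linalg.norm(X[ia, None, :] - X[None, ib, :], axis=2)
            i, j = np.unravel_index(D.argmin(), D.shape)
            d = D[i, j]
            G[ia[i], ib[j]] = G[ib[j], ia[i]] = d * np.exp(d / mu)
    return shortest_path(G, method="D", directed=False)

X = np.vstack([np.random.randn(20, 2), np.random.randn(20, 2) + 8.0])
print(pknng_distances(X).shape)                 # (40, 40) distance matrix
```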
1005.5516 | David J Brenes | David J. Brenes, Daniel Gayo-Avello and Rodrigo Garcia | On the Fly Query Entity Decomposition Using Snippets | Extended version of paper submitted to CERI 2010 | null | null | null | cs.IR | http://creativecommons.org/licenses/by-nc-sa/3.0/ | One of the most important issues in Information Retrieval is inferring the
intents underlying users' queries. Thus, any tool to enrich or better
contextualize queries can prove extremely valuable. Entity extraction,
provided it is done fast, can be one such tool. Such techniques usually rely
on a prior training phase involving large datasets. That training is costly,
especially in environments which are increasingly moving towards real-time
scenarios where the latency to retrieve fresh information should be minimal.
In this paper an `on-the-fly' query decomposition method is proposed. It uses
snippets which are mined by means of a na\"ive statistical algorithm. An
initial evaluation of such a method is provided, in addition to a discussion on
its applicability to different scenarios.
| [
{
"version": "v1",
"created": "Sun, 30 May 2010 11:41:43 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Jun 2010 11:36:05 GMT"
}
] | 2010-06-14T00:00:00 | [
[
"Brenes",
"David J.",
""
],
[
"Gayo-Avello",
"Daniel",
""
],
[
"Garcia",
"Rodrigo",
""
]
] | TITLE: On the Fly Query Entity Decomposition Using Snippets
ABSTRACT: One of the most important issues in Information Retrieval is inferring the
intents underlying users' queries. Thus, any tool to enrich or better
contextualize queries can prove extremely valuable. Entity extraction,
provided it is done fast, can be one such tool. Such techniques usually rely
on a prior training phase involving large datasets. That training is costly,
especially in environments which are increasingly moving towards real-time
scenarios where the latency to retrieve fresh information should be minimal.
In this paper an `on-the-fly' query decomposition method is proposed. It uses
snippets which are mined by means of a na\"ive statistical algorithm. An
initial evaluation of such a method is provided, in addition to a discussion on
its applicability to different scenarios.
| no_new_dataset | 0.946843 |
1006.2156 | Aditya Menon | Aditya Krishna Menon and Charles Elkan | Dyadic Prediction Using a Latent Feature Log-Linear Model | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In dyadic prediction, labels must be predicted for pairs (dyads) whose
members possess unique identifiers and, sometimes, additional features called
side-information. Special cases of this problem include collaborative filtering
and link prediction. We present the first model for dyadic prediction that
satisfies several important desiderata: (i) labels may be ordinal or nominal,
(ii) side-information can be easily exploited if present, (iii) with or without
side-information, latent features are inferred for dyad members, (iv) it is
resistant to sample-selection bias, (v) it can learn well-calibrated
probabilities, and (vi) it can scale to very large datasets. To our knowledge,
no existing method satisfies all the above criteria. In particular, many
methods assume that the labels are ordinal and ignore side-information when it
is present. Experimental results show that the new method is competitive with
state-of-the-art methods for the special cases of collaborative filtering and
link prediction, and that it makes accurate predictions on nominal data.
| [
{
"version": "v1",
"created": "Thu, 10 Jun 2010 21:19:28 GMT"
}
] | 2010-06-14T00:00:00 | [
[
"Menon",
"Aditya Krishna",
""
],
[
"Elkan",
"Charles",
""
]
] | TITLE: Dyadic Prediction Using a Latent Feature Log-Linear Model
ABSTRACT: In dyadic prediction, labels must be predicted for pairs (dyads) whose
members possess unique identifiers and, sometimes, additional features called
side-information. Special cases of this problem include collaborative filtering
and link prediction. We present the first model for dyadic prediction that
satisfies several important desiderata: (i) labels may be ordinal or nominal,
(ii) side-information can be easily exploited if present, (iii) with or without
side-information, latent features are inferred for dyad members, (iv) it is
resistant to sample-selection bias, (v) it can learn well-calibrated
probabilities, and (vi) it can scale to very large datasets. To our knowledge,
no existing method satisfies all the above criteria. In particular, many
methods assume that the labels are ordinal and ignore side-information when it
is present. Experimental results show that the new method is competitive with
state-of-the-art methods for the special cases of collaborative filtering and
link prediction, and that it makes accurate predictions on nominal data.
| no_new_dataset | 0.950088 |
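A latent feature log-linear model of the kind this record describes can be
sketched for the binary-label case as follows; the dimensions, learning rate
and synthetic dyads are assumptions, and the paper's model additionally
handles ordinal labels and side-information:

```python
# A minimal sketch of a latent feature log-linear model for binary dyadic
# labels: P(y=1 | u, i) = sigmoid(a_u + b_i + <p_u, q_i>), fit by SGD.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, d, lr = 30, 40, 4, 0.1
a, b = np.zeros(n_users), np.zeros(n_items)       # per-id bias terms
P = 0.1 * rng.normal(size=(n_users, d))           # latent user features
Q = 0.1 * rng.normal(size=(n_items, d))           # latent item features

# Synthetic observed dyads: (user, item, label) triples.
dyads = [(rng.integers(n_users), rng.integers(n_items), rng.integers(2))
         for _ in range(2000)]

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
for epoch in range(5):
    for u, i, y in dyads:
        err = y - sigmoid(a[u] + b[i] + P[u] @ Q[i])  # log-likelihood grad
        a[u] += lr * err
        b[i] += lr * err
        P[u], Q[i] = P[u] + lr * err * Q[i], Q[i] + lr * err * P[u]

u, i, _ = dyads[0]
print("P(y=1) for first dyad:", sigmoid(a[u] + b[i] + P[u] @ Q[i]))
```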
1006.1702 | Munmun De Choudhury | Munmun De Choudhury, Hari Sundaram, Ajita John, Doree Duncan
Seligmann, Aisling Kelliher | "Birds of a Feather": Does User Homophily Impact Information Diffusion
in Social Media? | 31 pages, 10 figures, 3 tables | null | null | null | cs.CY physics.soc-ph | http://creativecommons.org/licenses/by-nc-sa/3.0/ | This article investigates the impact of user homophily on the social process
of information diffusion in online social media. Over several decades, social
scientists have been interested in the idea that similarity breeds connection:
precisely known as "homophily". Homophily has been extensively studied in the
social sciences and refers to the idea that users in a social system tend to
bond more with ones who are similar to them than to ones who are dissimilar.
The key observation is that homophily structures the ego-networks of
individuals and impacts their communication behavior. It is therefore likely to
effect the mechanisms in which information propagates among them. To this
effect, we investigate the interplay between homophily along diverse user
attributes and the information diffusion process on social media. In our
approach, we first extract diffusion characteristics---corresponding to the
baseline social graph as well as graphs filtered on different user attributes
(e.g. location, activity). Second, we propose a Dynamic Bayesian Network based
framework to predict diffusion characteristics at a future time. Third, the
impact of attribute homophily is quantified by the ability of the predicted
characteristics in explaining actual diffusion, and external variables,
including trends in search and news. Experimental results on a large Twitter
dataset demonstrate that choice of the homophilous attribute can impact the
prediction of information diffusion, given a specific metric and a topic. In
most cases, attribute homophily is able to explain the actual diffusion and
external trends by ~15-25% over cases when homophily is not considered.
| [
{
"version": "v1",
"created": "Wed, 9 Jun 2010 04:19:20 GMT"
}
] | 2010-06-10T00:00:00 | [
[
"De Choudhury",
"Munmun",
""
],
[
"Sundaram",
"Hari",
""
],
[
"John",
"Ajita",
""
],
[
"Seligmann",
"Doree Duncan",
""
],
[
"Kelliher",
"Aisling",
""
]
] | TITLE: "Birds of a Feather": Does User Homophily Impact Information Diffusion
in Social Media?
ABSTRACT: This article investigates the impact of user homophily on the social process
of information diffusion in online social media. Over several decades, social
scientists have been interested in the idea that similarity breeds connection:
precisely known as "homophily". Homophily has been extensively studied in the
social sciences and refers to the idea that users in a social system tend to
bond more with ones who are similar to them than to ones who are dissimilar.
The key observation is that homophily structures the ego-networks of
individuals and impacts their communication behavior. It is therefore likely
to affect the mechanisms by which information propagates among them. To this
end, we investigate the interplay between homophily along diverse user
attributes and the information diffusion process on social media. In our
approach, we first extract diffusion characteristics---corresponding to the
baseline social graph as well as graphs filtered on different user attributes
(e.g. location, activity). Second, we propose a Dynamic Bayesian Network based
framework to predict diffusion characteristics at a future time. Third, the
impact of attribute homophily is quantified by the ability of the predicted
characteristics in explaining actual diffusion, and external variables,
including trends in search and news. Experimental results on a large Twitter
dataset demonstrate that choice of the homophilous attribute can impact the
prediction of information diffusion, given a specific metric and a topic. In
most cases, attribute homophily is able to explain the actual diffusion and
external trends by ~15-25% over cases when homophily is not considered.
| no_new_dataset | 0.951818 |
1006.1328 | Jonathan Huang | Jonathan Huang and Carlos Guestrin | Uncovering the Riffled Independence Structure of Rankings | 65 pages | null | null | null | cs.LG cs.AI stat.AP stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Representing distributions over permutations can be a daunting task due to
the fact that the number of permutations of $n$ objects scales factorially in
$n$. One recent way that has been used to reduce storage complexity has been to
exploit probabilistic independence, but as we argue, full independence
assumptions impose strong sparsity constraints on distributions and are
unsuitable for modeling rankings. We identify a novel class of independence
structures, called \emph{riffled independence}, encompassing a more expressive
family of distributions while retaining many of the properties necessary for
performing efficient inference and reducing sample complexity. In riffled
independence, one draws two permutations independently, then performs the
\emph{riffle shuffle}, common in card games, to combine the two permutations to
form a single permutation. Within the context of ranking, riffled independence
corresponds to ranking disjoint sets of objects independently, then
interleaving those rankings. In this paper, we provide a formal introduction to
riffled independence and present algorithms for using riffled independence
within Fourier-theoretic frameworks which have been explored by a number of
recent papers. Additionally, we propose an automated method for discovering
sets of items which are riffle independent from a training set of rankings. We
show that our clustering-like algorithms can be used to discover meaningful
latent coalitions from real preference ranking datasets and to learn the
structure of hierarchically decomposable models based on riffled independence.
| [
{
"version": "v1",
"created": "Mon, 7 Jun 2010 18:45:46 GMT"
}
] | 2010-06-08T00:00:00 | [
[
"Huang",
"Jonathan",
""
],
[
"Guestrin",
"Carlos",
""
]
] | TITLE: Uncovering the Riffled Independence Structure of Rankings
ABSTRACT: Representing distributions over permutations can be a daunting task due to
the fact that the number of permutations of $n$ objects scales factorially in
$n$. One recent way that has been used to reduce storage complexity has been to
exploit probabilistic independence, but as we argue, full independence
assumptions impose strong sparsity constraints on distributions and are
unsuitable for modeling rankings. We identify a novel class of independence
structures, called \emph{riffled independence}, encompassing a more expressive
family of distributions while retaining many of the properties necessary for
performing efficient inference and reducing sample complexity. In riffled
independence, one draws two permutations independently, then performs the
\emph{riffle shuffle}, common in card games, to combine the two permutations to
form a single permutation. Within the context of ranking, riffled independence
corresponds to ranking disjoint sets of objects independently, then
interleaving those rankings. In this paper, we provide a formal introduction to
riffled independence and present algorithms for using riffled independence
within Fourier-theoretic frameworks which have been explored by a number of
recent papers. Additionally, we propose an automated method for discovering
sets of items which are riffle independent from a training set of rankings. We
show that our clustering-like algorithms can be used to discover meaningful
latent coalitions from real preference ranking datasets and to learn the
structure of hierarchically decomposable models based on riffled independence.
| no_new_dataset | 0.946151 |
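Sampling from a riffled-independent model, as described in this record,
amounts to ranking the two item sets independently and interleaving them. A
minimal sketch, with a uniformly random interleaving as a simplifying
assumption (the model allows general interleaving distributions):

```python
# A minimal sketch of sampling a riffled-independent ranking: rank two
# disjoint sets independently, then riffle them together.
import random

def riffle(ranking_a, ranking_b):
    """Interleave two rankings, preserving each one's relative order."""
    n = len(ranking_a) + len(ranking_b)
    slots_a = set(random.sample(range(n), len(ranking_a)))
    it_a, it_b = iter(ranking_a), iter(ranking_b)
    return [next(it_a) if pos in slots_a else next(it_b) for pos in range(n)]

fruits = ["apple", "banana", "cherry"]   # one riffle-independent set
meats = ["beef", "pork"]                 # the other
random.shuffle(fruits)                   # independent ranking of each set
random.shuffle(meats)
print(riffle(fruits, meats))
```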
1005.0390 | Adam Gauci | Adam Gauci, Kristian Zarb Adami, John Abela | Machine Learning for Galaxy Morphology Classification | null | null | null | null | astro-ph.GA cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, decision tree learning algorithms and fuzzy inferencing systems
are applied for galaxy morphology classification. In particular, the CART, the
C4.5, the Random Forest and fuzzy logic algorithms are studied and reliable
classifiers are developed to distinguish between spiral galaxies, elliptical
galaxies or star/unknown galactic objects. Morphology information for the
training and testing datasets is obtained from the Galaxy Zoo project while the
corresponding photometric and spectral parameters are downloaded from the SDSS
DR7 catalogue.
| [
{
"version": "v1",
"created": "Mon, 3 May 2010 20:01:38 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Jun 2010 07:54:29 GMT"
}
] | 2010-06-02T00:00:00 | [
[
"Gauci",
"Adam",
""
],
[
"Adami",
"Kristian Zarb",
""
],
[
"Abela",
"John",
""
]
] | TITLE: Machine Learning for Galaxy Morphology Classification
ABSTRACT: In this work, decision tree learning algorithms and fuzzy inferencing systems
are applied for galaxy morphology classification. In particular, the CART, the
C4.5, the Random Forest and fuzzy logic algorithms are studied and reliable
classifiers are developed to distinguish between spiral galaxies, elliptical
galaxies or star/unknown galactic objects. Morphology information for the
training and testing datasets is obtained from the Galaxy Zoo project while the
corresponding photometric and spectral parameters are downloaded from the SDSS
DR7 catalogue.
| no_new_dataset | 0.953362 |
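As a rough illustration of the classifiers this study compares, the sketch below trains a CART-style decision tree and a Random Forest with scikit-learn. The feature matrix is a random stand-in for the SDSS photometric/spectral parameters, and the three classes mirror spiral, elliptical and star/unknown.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((1000, 5))              # stand-in photometric parameters
y = rng.integers(0, 3, 1000)           # 0=spiral, 1=elliptical, 2=star/unknown

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for clf in (DecisionTreeClassifier(random_state=0),
            RandomForestClassifier(n_estimators=100, random_state=0)):
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__, "accuracy:", clf.score(X_te, y_te))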
1005.4963 | Anon Plangprasopchok | Anon Plangprasopchok, Kristina Lerman, Lise Getoor | Integrating Structured Metadata with Relational Affinity Propagation | 6 Pages, To appear at AAAI Workshop on Statistical Relational AI | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Structured and semi-structured data describing entities, taxonomies and
ontologies appears in many domains. There is a huge interest in integrating
structured information from multiple sources; however integrating structured
data to infer complex common structures is a difficult task because the
integration must aggregate similar structures while avoiding structural
inconsistencies that may appear when the data is combined. In this work, we
study the integration of structured social metadata: shallow personal
hierarchies specified by many individual users on the Social Web, and focus on
inferring a collection of integrated, consistent taxonomies. We frame this task
as an optimization problem with structural constraints. We propose a new
inference algorithm, which we refer to as Relational Affinity Propagation (RAP)
that extends affinity propagation (Frey and Dueck 2007) by introducing
structural constraints. We validate the approach on a real-world social media
dataset, collected from the photosharing website Flickr. Our empirical results
show that our proposed approach is able to construct deeper and denser
structures compared to an approach using only the standard affinity propagation
algorithm.
| [
{
"version": "v1",
"created": "Wed, 26 May 2010 23:13:05 GMT"
}
] | 2010-05-28T00:00:00 | [
[
"Plangprasopchok",
"Anon",
""
],
[
"Lerman",
"Kristina",
""
],
[
"Getoor",
"Lise",
""
]
] | TITLE: Integrating Structured Metadata with Relational Affinity Propagation
ABSTRACT: Structured and semi-structured data describing entities, taxonomies and
ontologies appears in many domains. There is a huge interest in integrating
structured information from multiple sources; however integrating structured
data to infer complex common structures is a difficult task because the
integration must aggregate similar structures while avoiding structural
inconsistencies that may appear when the data is combined. In this work, we
study the integration of structured social metadata: shallow personal
hierarchies specified by many individual users on the Social Web, and focus on
inferring a collection of integrated, consistent taxonomies. We frame this task
as an optimization problem with structural constraints. We propose a new
inference algorithm, which we refer to as Relational Affinity Propagation (RAP)
that extends affinity propagation (Frey and Dueck 2007) by introducing
structural constraints. We validate the approach on a real-world social media
dataset, collected from the photosharing website Flickr. Our empirical results
show that our proposed approach is able to construct deeper and denser
structures compared to an approach using only the standard affinity propagation
algorithm.
| no_new_dataset | 0.948822 |
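For reference, the base algorithm that RAP extends is available in recent versions of scikit-learn; the sketch below runs plain affinity propagation (Frey and Dueck 2007) on random stand-in features. The structural constraints that distinguish RAP are not part of this library and are omitted here.

import numpy as np
from sklearn.cluster import AffinityPropagation

X = np.random.rand(60, 4)                       # stand-in feature vectors
ap = AffinityPropagation(random_state=0).fit(X)
print("exemplar indices:", ap.cluster_centers_indices_)
print("first labels:", ap.labels_[:10])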
1005.5035 | Mark Edgington | Mark Edgington, Yohannes Kassahun and Frank Kirchner | Dynamic Motion Modelling for Legged Robots | null | null | 10.1109/IROS.2009.5354026 | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An accurate motion model is an important component in modern-day robotic
systems, but building such a model for a complex system often requires an
appreciable amount of manual effort. In this paper we present a motion model
representation, the Dynamic Gaussian Mixture Model (DGMM), that alleviates the
need to manually design the form of a motion model, and provides a direct means
of incorporating auxiliary sensory data into the model. This representation and
its accompanying algorithms are validated experimentally using an 8-legged
kinematically complex robot, as well as a standard benchmark dataset. The
presented method not only learns the robot's motion model, but also improves
the model's accuracy by incorporating information about the terrain surrounding
the robot.
| [
{
"version": "v1",
"created": "Thu, 27 May 2010 11:41:36 GMT"
}
] | 2010-05-28T00:00:00 | [
[
"Edgington",
"Mark",
""
],
[
"Kassahun",
"Yohannes",
""
],
[
"Kirchner",
"Frank",
""
]
] | TITLE: Dynamic Motion Modelling for Legged Robots
ABSTRACT: An accurate motion model is an important component in modern-day robotic
systems, but building such a model for a complex system often requires an
appreciable amount of manual effort. In this paper we present a motion model
representation, the Dynamic Gaussian Mixture Model (DGMM), that alleviates the
need to manually design the form of a motion model, and provides a direct means
of incorporating auxiliary sensory data into the model. This representation and
its accompanying algorithms are validated experimentally using an 8-legged
kinematically complex robot, as well as a standard benchmark dataset. The
presented method not only learns the robot's motion model, but also improves
the model's accuracy by incorporating information about the terrain surrounding
the robot.
| no_new_dataset | 0.9357 |
1005.4454 | Bruce Berriman | Joseph C. Jacob, Daniel S. Katz, G. Bruce Berriman, John Good,
Anastasia C. Laity, Ewa Deelman, Carl Kesselman, Gurmeet Singh, Mei-Hui Su,
Thomas A. Prince, Roy Williams | Montage: a grid portal and software toolkit for science-grade
astronomical image mosaicking | 16 pages, 11 figures | Int. J. Computational Science and Engineering. 2009 | null | null | astro-ph.IM cs.DC cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Montage is a portable software toolkit for constructing custom, science-grade
mosaics by composing multiple astronomical images. The mosaics constructed by
Montage preserve the astrometry (position) and photometry (intensity) of the
sources in the input images. The mosaic to be constructed is specified by the
user in terms of a set of parameters, including dataset and wavelength to be
used, location and size on the sky, coordinate system and projection, and
spatial sampling rate. Many astronomical datasets are massive, and are stored
in distributed archives that are, in most cases, remote with respect to the
available computational resources. Montage can be run on both single- and
multi-processor computers, including clusters and grids. Standard grid tools
are used to run Montage in the case where the data or computers used to
construct a mosaic are located remotely on the Internet. This paper describes
the architecture, algorithms, and usage of Montage as both a software toolkit
and as a grid portal. Timing results are provided to show how Montage
performance scales with number of processors on a cluster computer. In
addition, we compare the performance of two methods of running Montage in
parallel on a grid.
| [
{
"version": "v1",
"created": "Mon, 24 May 2010 23:28:51 GMT"
}
] | 2010-05-26T00:00:00 | [
[
"Jacob",
"Joseph C.",
""
],
[
"Katz",
"Daniel S.",
""
],
[
"Berriman",
"G. Bruce",
""
],
[
"Good",
"John",
""
],
[
"Laity",
"Anastasia C.",
""
],
[
"Deelman",
"Ewa",
""
],
[
"Kesselman",
"Carl",
""
],
[
"Singh",
"Gurmeet",
""
],
[
"Su",
"Mei-Hui",
""
],
[
"Prince",
"Thomas A.",
""
],
[
"Williams",
"Roy",
""
]
] | TITLE: Montage: a grid portal and software toolkit for science-grade
astronomical image mosaicking
ABSTRACT: Montage is a portable software toolkit for constructing custom, science-grade
mosaics by composing multiple astronomical images. The mosaics constructed by
Montage preserve the astrometry (position) and photometry (intensity) of the
sources in the input images. The mosaic to be constructed is specified by the
user in terms of a set of parameters, including dataset and wavelength to be
used, location and size on the sky, coordinate system and projection, and
spatial sampling rate. Many astronomical datasets are massive, and are stored
in distributed archives that are, in most cases, remote with respect to the
available computational resources. Montage can be run on both single- and
multi-processor computers, including clusters and grids. Standard grid tools
are used to run Montage in the case where the data or computers used to
construct a mosaic are located remotely on the Internet. This paper describes
the architecture, algorithms, and usage of Montage as both a software toolkit
and as a grid portal. Timing results are provided to show how Montage
performance scales with number of processors on a cluster computer. In
addition, we compare the performance of two methods of running Montage in
parallel on a grid.
| no_new_dataset | 0.951908 |
0906.2883 | Petr Chaloupka | Petr Chaloupka, Pavel Jakl, Jan Kapit\'an, J\'er\^ome Lauret and
Michal Zerola | Setting up a STAR Tier 2 Site at Golias/Prague Farm | To appear in proceedings of Computing in High Energy and Nuclear
Physics 2009 | J.Phys.Conf.Ser.219:072031,2010 | 10.1088/1742-6596/219/7/072031 | null | physics.comp-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | High Energy Nuclear Physics (HENP) collaborations' experience show that the
computing resources available at a single site are often neither sufficient nor
satisfy the need of remote collaborators. From latencies in the network
connectivity to the lack of interactivity, work at distant computing centers is
often inefficient. Having a fully functional software stack on local resources is
a strong enabler of science opportunities for any local group who can afford
the time investment.
Prague's heavy-ion group, participating in the STAR experiment at RHIC, has
been a strong advocate of local computing as the most efficient means of data
processing and physics analyses. A Tier 2 computing center was set up at the
Regional Computing Center for Particle Physics called "Golias".
We report on our experience in setting up a fully functional Tier 2 center
and discuss the solutions chosen to address storage space and analysis issues
and the impact on the farm's overall functionality. This includes a locally
built STAR analysis framework, integration with a local DPM system (a cost
effective storage solution), the influence of the availability and quality of
the network connection to Tier 0 via a dedicated CESNET/ESnet link and the
development of light-weight yet fully automated data transfer tools allowing
the movement of entire datasets from BNL (Tier 0) to Golias (Tier 2).
| [
{
"version": "v1",
"created": "Tue, 16 Jun 2009 09:43:25 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Jun 2009 09:55:28 GMT"
}
] | 2010-05-25T00:00:00 | [
[
"Chaloupka",
"Petr",
""
],
[
"Jakl",
"Pavel",
""
],
[
"Kapitán",
"Jan",
""
],
[
"Lauret",
"Jérôme",
""
],
[
"Zerola",
"Michal",
""
]
] | TITLE: Setting up a STAR Tier 2 Site at Golias/Prague Farm
ABSTRACT: High Energy Nuclear Physics (HENP) collaborations' experience shows that the
computing resources available at a single site are often neither sufficient nor
able to satisfy the needs of remote collaborators. From latencies in the network
connectivity to the lack of interactivity, work at distant computing centers is
often inefficient. Having a fully functional software stack on local resources is
a strong enabler of science opportunities for any local group who can afford
the time investment.
Prague's heavy-ion group, participating in the STAR experiment at RHIC, has
been a strong advocate of local computing as the most efficient means of data
processing and physics analyses. A Tier 2 computing center was set up at the
Regional Computing Center for Particle Physics called "Golias".
We report on our experience in setting up a fully functional Tier 2 center
and discuss the solutions chosen to address storage space and analysis issues
and the impact on the farm's overall functionality. This includes a locally
built STAR analysis framework, integration with a local DPM system (a cost
effective storage solution), the influence of the availability and quality of
the network connection to Tier 0 via a dedicated CESNET/ESnet link and the
development of light-weight yet fully automated data transfer tools allowing
the movement of entire datasets from BNL (Tier 0) to Golias (Tier 2).
| no_new_dataset | 0.943764 |
1005.4270 | Chriss Romy | V.Kavitha, M. Punithavalli | Clustering Time Series Data Stream - A Literature Survey | IEEE Publication format, International Journal of Computer Science
and Information Security, IJCSIS, Vol. 8 No. 1, April 2010, USA. ISSN 1947
5500, http://sites.google.com/site/ijcsis/ | null | null | null | cs.IR | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Mining time series data has attracted tremendous interest in today's world.
To provide an overview, various implementations are studied and summarized to
identify the different problems in existing applications. Clustering time
series is a problem that has applications in an extensive assortment of fields
and has recently attracted a large amount of research. Time series data are
frequently large and may contain outliers. In addition, time series are a
special type of data set where elements have a temporal ordering. Therefore,
clustering of such data streams is an important issue in the data mining
process. Numerous techniques and clustering algorithms have been proposed
earlier to assist clustering of time series data streams. The clustering
algorithms and their effectiveness on various applications are compared to
develop a new method to solve the existing problem. This paper presents a
survey on various clustering algorithms available for time series datasets.
Moreover, the distinctiveness and limitations of previous research are
discussed and several achievable topics for future study are identified.
Furthermore, the areas that utilize time series clustering are also summarized.
| [
{
"version": "v1",
"created": "Mon, 24 May 2010 07:41:29 GMT"
}
] | 2010-05-25T00:00:00 | [
[
"Kavitha",
"V.",
""
],
[
"Punithavalli",
"M.",
""
]
] | TITLE: Clustering Time Series Data Stream - A Literature Survey
ABSTRACT: Mining time series data has attracted tremendous interest in today's world.
To provide an overview, various implementations are studied and summarized to
identify the different problems in existing applications. Clustering time
series is a problem that has applications in an extensive assortment of fields
and has recently attracted a large amount of research. Time series data are
frequently large and may contain outliers. In addition, time series are a
special type of data set where elements have a temporal ordering. Therefore,
clustering of such data streams is an important issue in the data mining
process. Numerous techniques and clustering algorithms have been proposed
earlier to assist clustering of time series data streams. The clustering
algorithms and their effectiveness on various applications are compared to
develop a new method to solve the existing problem. This paper presents a
survey on various clustering algorithms available for time series datasets.
Moreover, the distinctiveness and limitations of previous research are
discussed and several achievable topics for future study are identified.
Furthermore, the areas that utilize time series clustering are also summarized.
| no_new_dataset | 0.951953 |
1005.0919 | Rdv Ijcsis | Dewan Md. Farid, Mohammad Zahidur Rahman | Attribute Weighting with Adaptive NBTree for Reducing False Positives in
Intrusion Detection | IEEE Publication format, International Journal of Computer Science
and Information Security, IJCSIS, Vol. 8 No. 1, April 2010, USA. ISSN 1947
5500, http://sites.google.com/site/ijcsis/ | null | null | null | cs.CR | http://creativecommons.org/licenses/by-nc-sa/3.0/ | In this paper, we introduce new learning algorithms for reducing false
positives in intrusion detection. The approach is based on decision tree-based
attribute weighting with an adaptive na\"ive Bayesian tree, which not only
reduces the false positives (FP) to an acceptable level, but also scales up the
detection rates (DR) for different types of network intrusions. Due to the
tremendous growth of network-based services, intrusion detection has emerged as
an important technique for network security. Recently, data mining algorithms
have been applied to network-based traffic data and host-based program
behaviors to detect intrusions or misuse patterns, but there exist some issues
in current intrusion detection algorithms, such as unbalanced detection rates,
large numbers of false positives, and redundant attributes that lead to complex
detection models and degraded detection accuracy. The purpose of this study is
to identify important input attributes for building an intrusion detection
system (IDS) that is computationally efficient and effective. Experimental
results obtained using the KDD99 benchmark network intrusion detection dataset
indicate that the proposed approach can significantly reduce the number and
percentage of false positives and scale up the balanced detection rates for
different types of network intrusions.
| [
{
"version": "v1",
"created": "Thu, 6 May 2010 08:07:01 GMT"
}
] | 2010-05-07T00:00:00 | [
[
"Farid",
"Dewan Md.",
""
],
[
"Rahman",
"Mohammad Zahidur",
""
]
] | TITLE: Attribute Weighting with Adaptive NBTree for Reducing False Positives in
Intrusion Detection
ABSTRACT: In this paper, we introduce new learning algorithms for reducing false
positives in intrusion detection. The approach is based on decision tree-based
attribute weighting with an adaptive na\"ive Bayesian tree, which not only
reduces the false positives (FP) to an acceptable level, but also scales up the
detection rates (DR) for different types of network intrusions. Due to the
tremendous growth of network-based services, intrusion detection has emerged as
an important technique for network security. Recently, data mining algorithms
have been applied to network-based traffic data and host-based program
behaviors to detect intrusions or misuse patterns, but there exist some issues
in current intrusion detection algorithms, such as unbalanced detection rates,
large numbers of false positives, and redundant attributes that lead to complex
detection models and degraded detection accuracy. The purpose of this study is
to identify important input attributes for building an intrusion detection
system (IDS) that is computationally efficient and effective. Experimental
results obtained using the KDD99 benchmark network intrusion detection dataset
indicate that the proposed approach can significantly reduce the number and
percentage of false positives and scale up the balanced detection rates for
different types of network intrusions.
| no_new_dataset | 0.947381 |
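A loose sketch of the central idea (decision-tree-derived attribute weights feeding a naive Bayes model) is shown below. The paper's adaptive NBTree differs in detail, and the data here is a random stand-in for KDD99-style records.

import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 10))                  # stand-in network-traffic features
y = rng.integers(0, 2, 500)                # 0 = normal, 1 = intrusion

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
w = tree.feature_importances_              # tree-based attribute weights
nb = GaussianNB().fit(X * w, y)            # weight attributes before naive Bayes
print("training accuracy:", nb.score(X * w, y))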
1005.0268 | Andri Mirzal M.Sc. | Andri Mirzal and Masashi Furukawa | Node-Context Network Clustering using PARAFAC Tensor Decomposition | 6 pages, 4 figures, International Conference on Information &
Communication Technology and Systems | null | null | null | cs.IR | http://creativecommons.org/licenses/by-nc-sa/3.0/ | We describe a clustering method for labeled link networks (semantic graphs)
that can be used to group important nodes (highly connected nodes) with their
relevant link labels by using PARAFAC tensor decomposition. In this kind of
network, the adjacency matrix cannot be used to fully describe all information
about the network structure. We have to expand the matrix into a 3-way
adjacency tensor, so that it includes not only the information about which
nodes a node connects to, but also by which link labels. By applying PARAFAC
decomposition to this tensor, we get two lists for each decomposition group:
nodes and link labels, with scores attached to each node and label. The
clustering process to obtain the important nodes along with their relevant
labels can then be done simply by sorting the lists in decreasing order. To
test the method, we construct a labeled link network from a blog dataset, where
the blogs are the nodes and the labeled links are the words shared among them.
The similarity measures between the results and standard measures look
promising, at about 0.87 for the two most important tasks: finding the words
most relevant to a blog query and finding the blogs most similar to a blog
query.
| [
{
"version": "v1",
"created": "Mon, 3 May 2010 12:28:42 GMT"
}
] | 2010-05-04T00:00:00 | [
[
"Mirzal",
"Andri",
""
],
[
"Furukawa",
"Masashi",
""
]
] | TITLE: Node-Context Network Clustering using PARAFAC Tensor Decomposition
ABSTRACT: We describe a clustering method for labeled link networks (semantic graphs)
that can be used to group important nodes (highly connected nodes) with their
relevant link labels by using PARAFAC tensor decomposition. In this kind of
network, the adjacency matrix cannot be used to fully describe all information
about the network structure. We have to expand the matrix into a 3-way
adjacency tensor, so that it includes not only the information about which
nodes a node connects to, but also by which link labels. By applying PARAFAC
decomposition to this tensor, we get two lists for each decomposition group:
nodes and link labels, with scores attached to each node and label. The
clustering process to obtain the important nodes along with their relevant
labels can then be done simply by sorting the lists in decreasing order. To
test the method, we construct a labeled link network from a blog dataset, where
the blogs are the nodes and the labeled links are the words shared among them.
The similarity measures between the results and standard measures look
promising, at about 0.87 for the two most important tasks: finding the words
most relevant to a blog query and finding the blogs most similar to a blog
query.
| no_new_dataset | 0.947039 |
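The scoring step can be reproduced with any CP/PARAFAC implementation. The sketch below assumes the tensorly library (an assumption; any equivalent would do) and a random stand-in for the node x node x label adjacency tensor, then ranks nodes and labels per decomposition group by their factor scores.

import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

T = tl.tensor(np.random.rand(30, 30, 12))      # node x node x label tensor
weights, factors = parafac(T, rank=4)
nodes, _, labels = factors                     # one factor matrix per mode
for k in range(4):
    top_nodes = np.argsort(np.asarray(nodes)[:, k])[::-1][:5]
    top_labels = np.argsort(np.asarray(labels)[:, k])[::-1][:5]
    print(f"group {k}: top nodes {top_nodes}, top labels {top_labels}")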
1004.4965 | Mikhail Zaslavskiy | Mikhail Zaslavskiy (CBIO), Francis Bach (INRIA Rocquencourt, LIENS),
Jean-Philippe Vert (CBIO) | Many-to-Many Graph Matching: a Continuous Relaxation Approach | 19 | null | null | null | stat.ML cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graphs provide an efficient tool for object representation in various
computer vision applications. Once graph-based representations are constructed,
an important question is how to compare graphs. This problem is often
formulated as a graph matching problem where one seeks a mapping between
vertices of two graphs which optimally aligns their structure. In the classical
formulation of graph matching, only one-to-one correspondences between vertices
are considered. However, in many applications, graphs cannot be matched
perfectly and it is more interesting to consider many-to-many correspondences
where clusters of vertices in one graph are matched to clusters of vertices in
the other graph. In this paper, we formulate the many-to-many graph matching
problem as a discrete optimization problem and propose an approximate algorithm
based on a continuous relaxation of the combinatorial problem. We compare our
method with other existing methods on several benchmark computer vision
datasets.
| [
{
"version": "v1",
"created": "Wed, 28 Apr 2010 07:46:55 GMT"
}
] | 2010-04-30T00:00:00 | [
[
"Zaslavskiy",
"Mikhail",
"",
"CBIO"
],
[
"Bach",
"Francis",
"",
"INRIA Rocquencourt, LIENS"
],
[
"Vert",
"Jean-Philippe",
"",
"CBIO"
]
] | TITLE: Many-to-Many Graph Matching: a Continuous Relaxation Approach
ABSTRACT: Graphs provide an efficient tool for object representation in various
computer vision applications. Once graph-based representations are constructed,
an important question is how to compare graphs. This problem is often
formulated as a graph matching problem where one seeks a mapping between
vertices of two graphs which optimally aligns their structure. In the classical
formulation of graph matching, only one-to-one correspondences between vertices
are considered. However, in many applications, graphs cannot be matched
perfectly and it is more interesting to consider many-to-many correspondences
where clusters of vertices in one graph are matched to clusters of vertices in
the other graph. In this paper, we formulate the many-to-many graph matching
problem as a discrete optimization problem and propose an approximate algorithm
based on a continuous relaxation of the combinatorial problem. We compare our
method with other existing methods on several benchmark computer vision
datasets.
| no_new_dataset | 0.953188 |
1004.5370 | Dell Zhang | Dell Zhang, Jun Wang, Deng Cai, Jinsong Lu | Self-Taught Hashing for Fast Similarity Search | null | null | null | null | cs.IR | http://creativecommons.org/licenses/by/3.0/ | The ability of fast similarity search at large scale is of great importance
to many Information Retrieval (IR) applications. A promising way to accelerate
similarity search is semantic hashing which designs compact binary codes for a
large number of documents so that semantically similar documents are mapped to
similar codes (within a short Hamming distance). Although some recently
proposed techniques are able to generate high-quality codes for documents known
in advance, obtaining the codes for previously unseen documents remains to be a
very challenging problem. In this paper, we emphasise this issue and propose a
novel Self-Taught Hashing (STH) approach to semantic hashing: we first find the
optimal $l$-bit binary codes for all documents in the given corpus via
unsupervised learning, and then train $l$ classifiers via supervised learning
to predict the $l$-bit code for any query document unseen before. Our
experiments on three real-world text datasets show that the proposed approach
using binarised Laplacian Eigenmap (LapEig) and linear Support Vector Machine
(SVM) outperforms state-of-the-art techniques significantly.
| [
{
"version": "v1",
"created": "Thu, 29 Apr 2010 19:25:17 GMT"
}
] | 2010-04-30T00:00:00 | [
[
"Zhang",
"Dell",
""
],
[
"Wang",
"Jun",
""
],
[
"Cai",
"Deng",
""
],
[
"Lu",
"Jinsong",
""
]
] | TITLE: Self-Taught Hashing for Fast Similarity Search
ABSTRACT: The ability of fast similarity search at large scale is of great importance
to many Information Retrieval (IR) applications. A promising way to accelerate
similarity search is semantic hashing which designs compact binary codes for a
large number of documents so that semantically similar documents are mapped to
similar codes (within a short Hamming distance). Although some recently
proposed techniques are able to generate high-quality codes for documents known
in advance, obtaining the codes for previously unseen documents remains to be a
very challenging problem. In this paper, we emphasise this issue and propose a
novel Self-Taught Hashing (STH) approach to semantic hashing: we first find the
optimal $l$-bit binary codes for all documents in the given corpus via
unsupervised learning, and then train $l$ classifiers via supervised learning
to predict the $l$-bit code for any query document unseen before. Our
experiments on three real-world text datasets show that the proposed approach
using binarised Laplacian Eigenmap (LapEig) and linear Support Vector Machine
(SVM) outperforms state-of-the-art techniques significantly.
| no_new_dataset | 0.948965 |
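A compact sketch of the two-stage idea follows: unsupervised l-bit codes from a Laplacian eigenmap, binarised at the per-bit median, then one linear SVM per bit to predict codes for unseen queries. It uses scikit-learn stand-ins, and details of the paper's LapEig formulation are omitted.

import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.svm import LinearSVC

X = np.random.rand(200, 50)                    # stand-in document vectors
l = 8
Y = SpectralEmbedding(n_components=l).fit_transform(X)
codes = (Y > np.median(Y, axis=0)).astype(int) # binarise each bit at its median
bit_svms = [LinearSVC().fit(X, codes[:, b]) for b in range(l)]

def hash_query(x):
    # Predict the l-bit code of a previously unseen document.
    return np.array([svm.predict(x.reshape(1, -1))[0] for svm in bit_svms])

print(hash_query(np.random.rand(50)))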
1002.3724 | Francesco Silvestri | Sara Nasso (1), Francesco Silvestri (1), Francesco Tisiot (1), Barbara
Di Camillo (1), Andrea Pietracaprina (1) and Gianna Maria Toffolo (1) ((1)
Department of Information Engineering, University of Padova) | An Optimized Data Structure for High Throughput 3D Proteomics Data:
mzRTree | Paper details: 10 pages, 7 figures, 2 tables. To be published in
Journal of Proteomics. Source code available at
http://www.dei.unipd.it/mzrtree | Journal of Proteomics 73(6) (2010) 1176-1182 | 10.1016/j.jprot.2010.02.006 | null | cs.CE cs.DS q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As an emerging field, MS-based proteomics still requires software tools for
efficiently storing and accessing experimental data. In this work, we focus on
the management of LC-MS data, which are typically made available in standard
XML-based portable formats. The structures that are currently employed to
manage these data can be highly inefficient, especially when dealing with
high-throughput profile data. LC-MS datasets are usually accessed through 2D
range queries. Optimizing this type of operation could dramatically reduce the
complexity of data analysis. We propose a novel data structure for LC-MS
datasets, called mzRTree, which embodies a scalable index based on the R-tree
data structure. mzRTree can be efficiently created from the XML-based data
formats and it is suitable for handling very large datasets. We experimentally
show that, on all range queries, mzRTree outperforms other known structures
used for LC-MS data, even on those queries these structures are optimized for.
Besides, mzRTree is also more space efficient. As a result, mzRTree reduces
data analysis computational costs for very large profile datasets.
| [
{
"version": "v1",
"created": "Fri, 19 Feb 2010 17:17:02 GMT"
},
{
"version": "v2",
"created": "Mon, 22 Feb 2010 08:18:47 GMT"
}
] | 2010-04-27T00:00:00 | [
[
"Nasso",
"Sara",
""
],
[
"Silvestri",
"Francesco",
""
],
[
"Tisiot",
"Francesco",
""
],
[
"Di Camillo",
"Barbara",
""
],
[
"Pietracaprina",
"Andrea",
""
],
[
"Toffolo",
"Gianna Maria",
""
]
] | TITLE: An Optimized Data Structure for High Throughput 3D Proteomics Data:
mzRTree
ABSTRACT: As an emerging field, MS-based proteomics still requires software tools for
efficiently storing and accessing experimental data. In this work, we focus on
the management of LC-MS data, which are typically made available in standard
XML-based portable formats. The structures that are currently employed to
manage these data can be highly inefficient, especially when dealing with
high-throughput profile data. LC-MS datasets are usually accessed through 2D
range queries. Optimizing this type of operation could dramatically reduce the
complexity of data analysis. We propose a novel data structure for LC-MS
datasets, called mzRTree, which embodies a scalable index based on the R-tree
data structure. mzRTree can be efficiently created from the XML-based data
formats and it is suitable for handling very large datasets. We experimentally
show that, on all range queries, mzRTree outperforms other known structures
used for LC-MS data, even on those queries these structures are optimized for.
Besides, mzRTree is also more space efficient. As a result, mzRTree reduces
data analysis computational costs for very large profile datasets.
| no_new_dataset | 0.948298 |
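To illustrate the 2D range-query access pattern (retention time x m/z) that mzRTree is built around, the sketch below uses the Python 'rtree' package as a generic R-tree; mzRTree itself is a purpose-built structure, so this is only an analogy.

from rtree import index

idx = index.Index()
spectra = [(1, (12.5, 400.1)), (2, (13.0, 512.7)), (3, (45.2, 333.3))]
for sid, (rt, mz) in spectra:
    idx.insert(sid, (rt, mz, rt, mz))          # zero-area box per data point

window = (10.0, 300.0, 20.0, 600.0)            # rt in [10,20], m/z in [300,600]
print("ids in range:", list(idx.intersection(window)))  # ids 1 and 2 fall inside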
1004.3568 | Vishal Goyal | Vikram Singh, Sapna Nagpal | Integrating User's Domain Knowledge with Association Rule Mining | International Journal of Computer Science Issues online at
http://ijcsi.org/articles/Integrating-Users-Domain-Knowledge-with-Association-Rule-Mining.php | IJCSI, Volume 7, Issue 2, March 2010 | null | null | cs.DB cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a variation of Apriori algorithm that includes the role
of domain expert to guide and speed up the overall knowledge discovery task.
Usually, the user is interested in finding relationships between certain
attributes rather than in the whole dataset. Moreover, the user can help the
mining algorithm select the target database, which in turn takes less time to
find the desired association rules. Variants of the standard Apriori and Interactive
Apriori algorithms have been run on artificial datasets. The results show that
incorporating the user's preference in the selection of the target attribute
helps to search the association rules efficiently, both in terms of space and time.
| [
{
"version": "v1",
"created": "Tue, 20 Apr 2010 20:37:32 GMT"
}
] | 2010-04-22T00:00:00 | [
[
"Singh",
"Vikram",
""
],
[
"Nagpal",
"Sapna",
""
]
] | TITLE: Integrating User's Domain Knowledge with Association Rule Mining
ABSTRACT: This paper presents a variation of Apriori algorithm that includes the role
of domain expert to guide and speed up the overall knowledge discovery task.
Usually, the user is interested in finding relationships between certain
attributes rather than in the whole dataset. Moreover, the user can help the
mining algorithm select the target database, which in turn takes less time to
find the desired association rules. Variants of the standard Apriori and Interactive
Apriori algorithms have been run on artificial datasets. The results show that
incorporating the user's preference in the selection of the target attribute
helps to search the association rules efficiently, both in terms of space and time.
| no_new_dataset | 0.949342 |
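A toy sketch of the idea, restricting Apriori's output to itemsets that involve a user-chosen target attribute, is given below. This is illustrative code, not the authors' implementation: it keeps all frequent itemsets during the level-wise search and filters by the target at the end, whereas the paper prunes the search itself.

from itertools import combinations

def apriori_target(transactions, target, min_support):
    n = len(transactions)
    items = {i for t in transactions for i in t}
    freq, k, current = {}, 1, [frozenset([i]) for i in items]
    while current:
        counts = {c: sum(c <= t for t in transactions) for c in current}
        level = {c: v / n for c, v in counts.items() if v / n >= min_support}
        freq.update(level)
        k += 1
        current = [a | b for a, b in combinations(level, 2) if len(a | b) == k]
    return {s: v for s, v in freq.items() if target in s}  # keep target only

tx = [frozenset(t) for t in (["milk", "bread"], ["milk", "eggs"],
                             ["bread", "eggs", "milk"])]
print(apriori_target(tx, "milk", 0.5))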
0906.4582 | Patrick J. Wolfe | Mohamed-Ali Belabbas and Patrick J. Wolfe | On landmark selection and sampling in high-dimensional data analysis | 18 pages, 6 figures, submitted for publication | Philosophical Transactions of the Royal Society, Series A, vol.
367, pp. 4295-4312, 2009 | 10.1098/rsta.2009.0161 | null | stat.ML cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, the spectral analysis of appropriately defined kernel
matrices has emerged as a principled way to extract the low-dimensional
structure often prevalent in high-dimensional data. Here we provide an
introduction to spectral methods for linear and nonlinear dimension reduction,
emphasizing ways to overcome the computational limitations currently faced by
practitioners with massive datasets. In particular, a data subsampling or
landmark selection process is often employed to construct a kernel based on
partial information, followed by an approximate spectral analysis termed the
Nystrom extension. We provide a quantitative framework to analyse this
procedure, and use it to demonstrate algorithmic performance bounds on a range
of practical approaches designed to optimize the landmark selection process. We
compare the practical implications of these bounds by way of real-world
examples drawn from the field of computer vision, whereby low-dimensional
manifold structure is shown to emerge from high-dimensional video data streams.
| [
{
"version": "v1",
"created": "Wed, 24 Jun 2009 23:40:22 GMT"
}
] | 2010-04-20T00:00:00 | [
[
"Belabbas",
"Mohamed-Ali",
""
],
[
"Wolfe",
"Patrick J.",
""
]
] | TITLE: On landmark selection and sampling in high-dimensional data analysis
ABSTRACT: In recent years, the spectral analysis of appropriately defined kernel
matrices has emerged as a principled way to extract the low-dimensional
structure often prevalent in high-dimensional data. Here we provide an
introduction to spectral methods for linear and nonlinear dimension reduction,
emphasizing ways to overcome the computational limitations currently faced by
practitioners with massive datasets. In particular, a data subsampling or
landmark selection process is often employed to construct a kernel based on
partial information, followed by an approximate spectral analysis termed the
Nystrom extension. We provide a quantitative framework to analyse this
procedure, and use it to demonstrate algorithmic performance bounds on a range
of practical approaches designed to optimize the landmark selection process. We
compare the practical implications of these bounds by way of real-world
examples drawn from the field of computer vision, whereby low-dimensional
manifold structure is shown to emerge from high-dimensional video data streams.
| no_new_dataset | 0.951006 |
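The Nystrom extension mentioned above amounts to a few lines of NumPy. The sketch below builds a rank-m feature map Phi from randomly chosen landmarks so that Phi Phi^T approximates the full RBF kernel matrix; uniform landmark sampling is the simplest of the selection schemes the paper analyses.

import numpy as np

def nystrom_features(X, m, gamma=1.0):
    idx = np.random.choice(len(X), m, replace=False)     # landmark selection
    d2 = ((X[:, None, :] - X[None, idx, :]) ** 2).sum(-1)
    C = np.exp(-gamma * d2)                              # n x m kernel block
    W = C[idx, :]                                        # m x m landmark block
    vals, vecs = np.linalg.eigh(W)
    keep = vals > 1e-10                                  # drop null directions
    return C @ (vecs[:, keep] / np.sqrt(vals[keep]))     # Phi Phi^T ~ C W^-1 C^T

Phi = nystrom_features(np.random.rand(500, 3), m=50)
print(Phi.shape)        # compact features usable in downstream spectral methods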
1004.3175 | Eva Kranz | Eva Kranz | Structural Stability and Immunogenicity of Peptides | null | null | null | null | q-bio.BM cs.CG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigated the role of peptide folding stability in peptide
immunogenicity. It was the aim of this thesis to implement a stability
criterion based on energy computations using an AMBER force field, and to test
the implementation with a large dataset.
| [
{
"version": "v1",
"created": "Mon, 19 Apr 2010 12:43:55 GMT"
}
] | 2010-04-20T00:00:00 | [
[
"Kranz",
"Eva",
""
]
] | TITLE: Structural Stability and Immunogenicity of Peptides
ABSTRACT: We investigated the role of peptide folding stability in peptide
immunogenicity. It was the aim of this thesis to implement a stability
criterion based on energy computations using an AMBER force field, and to test
the implementation with a large dataset.
| no_new_dataset | 0.948728 |
1004.2447 | Jeremy Faden Mr. | J. Faden, R. S. Weigel, J. Merka, R. H. W. Friedel | Autoplot: A browser for scientific data on the web | 16 pages | null | 10.1007/s12145-010-0049-0 | null | cs.GR physics.data-an physics.space-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autoplot is software developed for the Virtual Observatories in Heliophysics
to provide intelligent and automated plotting capabilities for many typical
data products that are stored in a variety of file formats or databases.
Autoplot has proven to be a flexible tool for exploring, accessing, and viewing
data resources as typically found on the web, usually in the form of a
directory containing data files with multiple parameters contained in each
file. Data from a data source is abstracted into a common internal data model
called QDataSet. Autoplot is built from individually useful components, and can
be extended and reused to create specialized data handling and analysis
applications and is being used in a variety of science visualization and
analysis applications. Although originally developed for viewing
heliophysics-related time series and spectrograms, its flexible and generic
data representation model makes it potentially useful for the Earth sciences.
| [
{
"version": "v1",
"created": "Wed, 14 Apr 2010 16:40:41 GMT"
}
] | 2010-04-15T00:00:00 | [
[
"Faden",
"J.",
""
],
[
"Weigel",
"R. S.",
""
],
[
"Merka",
"J.",
""
],
[
"Friedel",
"R. H. W.",
""
]
] | TITLE: Autoplot: A browser for scientific data on the web
ABSTRACT: Autoplot is software developed for the Virtual Observatories in Heliophysics
to provide intelligent and automated plotting capabilities for many typical
data products that are stored in a variety of file formats or databases.
Autoplot has proven to be a flexible tool for exploring, accessing, and viewing
data resources as typically found on the web, usually in the form of a
directory containing data files with multiple parameters contained in each
file. Data from a data source is abstracted into a common internal data model
called QDataSet. Autoplot is built from individually useful components, and can
be extended and reused to create specialized data handling and analysis
applications and is being used in a variety of science visualization and
analysis applications. Although originally developed for viewing
heliophysics-related time series and spectrograms, its flexible and generic
data representation model makes it potentially useful for the Earth sciences.
| no_new_dataset | 0.940353 |
1004.1743 | Rdv Ijcsis | G. Nathiya, S. C. Punitha, M. Punithavalli | An Analytical Study on Behavior of Clusters Using K Means, EM and K*
Means Algorithm | IEEE Publication format, ISSN 1947 5500,
http://sites.google.com/site/ijcsis/ | IJCSIS, Vol. 7 No. 3, March 2010, 185-190 | null | null | cs.LG cs.IR | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Clustering is an unsupervised learning method that constitutes a cornerstone
of an intelligent data analysis process. It is used for the exploration of
inter-relationships among a collection of patterns, by organizing them into
homogeneous clusters. Clustering has been dynamically applied to a variety of
tasks in the field of Information Retrieval (IR) and has become one of the most
active areas of research and development. Clustering attempts to discover the
set of consequential groups where those within each group are more closely
related to one another than to those assigned to different groups. The
resultant clusters can provide a structure for organizing large bodies of text
for efficient browsing and searching. There exists a wide variety of clustering
algorithms that have been intensively studied for the clustering problem. Among
the algorithms that remain the most common and effectual, the iterative
optimization clustering algorithms have demonstrated reasonable performance,
e.g. the Expectation Maximization (EM) algorithm and its variants, and the well
known k-means algorithm. This paper presents an analysis of how partition-based
clustering techniques - EM, K-means and K*-means - work on the heartspect
dataset with respect to the following measures: purity, entropy, CPU time,
cluster-wise analysis, mean value analysis and inter-cluster distance. Finally,
the paper provides experimental results for five clusters, which strengthen the
finding that the quality of the clusters produced by the EM algorithm is far
better than that of the k-means and k*-means algorithms.
| [
{
"version": "v1",
"created": "Sat, 10 Apr 2010 21:58:16 GMT"
}
] | 2010-04-13T00:00:00 | [
[
"Nathiya",
"G.",
""
],
[
"Punitha",
"S. C.",
""
],
[
"Punithavalli",
"M.",
""
]
] | TITLE: An Analytical Study on Behavior of Clusters Using K Means, EM and K*
Means Algorithm
ABSTRACT: Clustering is an unsupervised learning method that constitutes a cornerstone
of an intelligent data analysis process. It is used for the exploration of
inter-relationships among a collection of patterns, by organizing them into
homogeneous clusters. Clustering has been dynamically applied to a variety of
tasks in the field of Information Retrieval (IR) and has become one of the most
active areas of research and development. Clustering attempts to discover the
set of consequential groups where those within each group are more closely
related to one another than to those assigned to different groups. The
resultant clusters can provide a structure for organizing large bodies of text
for efficient browsing and searching. There exists a wide variety of clustering
algorithms that have been intensively studied for the clustering problem. Among
the algorithms that remain the most common and effectual, the iterative
optimization clustering algorithms have demonstrated reasonable performance,
e.g. the Expectation Maximization (EM) algorithm and its variants, and the well
known k-means algorithm. This paper presents an analysis of how partition-based
clustering techniques - EM, K-means and K*-means - work on the heartspect
dataset with respect to the following measures: purity, entropy, CPU time,
cluster-wise analysis, mean value analysis and inter-cluster distance. Finally,
the paper provides experimental results for five clusters, which strengthen the
finding that the quality of the clusters produced by the EM algorithm is far
better than that of the k-means and k*-means algorithms.
| no_new_dataset | 0.950686 |
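A minimal sketch of the two baseline methods compared in the study, K-means and EM (as a Gaussian mixture), is shown below with scikit-learn; the random matrix stands in for the heartspect dataset, which is not reproduced here.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

X = np.random.rand(300, 13)                    # stand-in for heartspect data
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
em = GaussianMixture(n_components=5, random_state=0).fit(X)
print("k-means inertia:", km.inertia_)
print("EM mean log-likelihood:", em.score(X))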
1004.1982 | Dar\'io Garc\'ia-Garc\'ia | Dar\'io Garc\'ia-Garc\'ia and Emilio Parrado-Hern\'andez and Fernando
D\'iaz-de-Mar\'ia | State-Space Dynamics Distance for Clustering Sequential Data | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a novel similarity measure for clustering sequential
data. We first construct a common state-space by training a single
probabilistic model with all the sequences in order to get a unified
representation for the dataset. Then, distances are obtained attending to the
transition matrices induced by each sequence in that state-space. This approach
solves some of the usual overfitting and scalability issues of the existing
semi-parametric techniques, that rely on training a model for each sequence.
Empirical studies on both synthetic and real-world datasets illustrate the
advantages of the proposed similarity measure for clustering sequences.
| [
{
"version": "v1",
"created": "Fri, 9 Apr 2010 09:36:28 GMT"
}
] | 2010-04-13T00:00:00 | [
[
"García-García",
"Darío",
""
],
[
"Parrado-Hernández",
"Emilio",
""
],
[
"Díaz-de-María",
"Fernando",
""
]
] | TITLE: State-Space Dynamics Distance for Clustering Sequential Data
ABSTRACT: This paper proposes a novel similarity measure for clustering sequential
data. We first construct a common state-space by training a single
probabilistic model with all the sequences in order to get a unified
representation for the dataset. Then, distances are obtained attending to the
transition matrices induced by each sequence in that state-space. This approach
solves some of the usual overfitting and scalability issues of the existing
semi-parametric techniques, that rely on training a model for each sequence.
Empirical studies on both synthetic and real-world datasets illustrate the
advantages of the proposed similarity measure for clustering sequences.
| no_new_dataset | 0.950641 |
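A loose sketch of the idea follows: fit one HMM on all sequences to obtain a common state-space, then compare sequences via the transition matrices each one induces. It uses the hmmlearn library (an assumption), and a simple L1 comparison stands in for the paper's distance.

import numpy as np
from hmmlearn import hmm

seqs = [np.random.rand(80, 2), np.random.rand(60, 2)]   # toy sequences
model = hmm.GaussianHMM(n_components=3, random_state=0)
model.fit(np.vstack(seqs), lengths=[len(s) for s in seqs])

def induced_transitions(seq, k=3):
    states = model.predict(seq)                # decode in the common state-space
    T = np.zeros((k, k))
    for a, b in zip(states[:-1], states[1:]):
        T[a, b] += 1
    rows = T.sum(axis=1, keepdims=True)
    return T / np.maximum(rows, 1)             # row-stochastic where defined

d = np.abs(induced_transitions(seqs[0]) - induced_transitions(seqs[1])).sum()
print("state-space dynamics distance:", d)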
1002.1587 | Valiya Hamza M | V. M. Hamza, R. R. Cardoso, C. H. Alexandrino | A Magma Accretion Model for the Formation of Oceanic Lithosphere:
Implications for Global Heat Loss | 45 pages, 11 figures | null | null | null | physics.geo-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A simple magma accretion model of the oceanic lithosphere is proposed and its
implications for understanding the thermal field of the oceanic lithosphere are
examined. The new model (designated VBA) assumes the existence of lateral
variations in magma accretion rates and temperatures at the boundary zone
between the lithosphere and the asthenosphere. Heat flow and bathymetry
variations calculated on the basis of the VBA model provide vastly improved
fits to respective observational datasets. The improved fits have been achieved
for the entire age range and without the need to invoke the ad-hoc hypothesis
of large-scale hydrothermal circulation in stable ocean crust. The results
suggest that estimates of global heat loss need to be downsized by at least
25%.
| [
{
"version": "v1",
"created": "Mon, 8 Feb 2010 12:25:20 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Apr 2010 10:39:10 GMT"
}
] | 2010-04-08T00:00:00 | [
[
"Hamza",
"V. M.",
""
],
[
"Cardoso",
"R. R.",
""
],
[
"Alexandrino",
"C. H.",
""
]
] | TITLE: A Magma Accretion Model for the Formation of Oceanic Lithosphere:
Implications for Global Heat Loss
ABSTRACT: A simple magma accretion model of the oceanic lithosphere is proposed and its
implications for understanding the thermal field of the oceanic lithosphere are
examined. The new model (designated VBA) assumes the existence of lateral
variations in magma accretion rates and temperatures at the boundary zone
between the lithosphere and the asthenosphere. Heat flow and bathymetry
variations calculated on the basis of the VBA model provide vastly improved
fits to respective observational datasets. The improved fits have been achieved
for the entire age range and without the need to invoke the ad-hoc hypothesis
of large-scale hydrothermal circulation in stable ocean crust. The results
suggest that estimates of global heat loss need to be downsized by at least
25%.
| no_new_dataset | 0.952706 |
1004.0456 | Fabrice Rossi | Georges H\'ebrail and Bernard Hugueney and Yves Lechevallier and
Fabrice Rossi | Exploratory Analysis of Functional Data via Clustering and Optimal
Segmentation | null | Neurocomputing, Volume 73, Issues 7-9, March 2010, Pages 1125-1141 | 10.1016/j.neucom.2009.11.022 | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose in this paper an exploratory analysis algorithm for functional
data. The method partitions a set of functions into $K$ clusters and represents
each cluster by a simple prototype (e.g., piecewise constant). The total number
of segments in the prototypes, $P$, is chosen by the user and optimally
distributed among the clusters via two dynamic programming algorithms. The
practical relevance of the method is shown on two real world datasets.
| [
{
"version": "v1",
"created": "Sat, 3 Apr 2010 16:28:47 GMT"
}
] | 2010-04-06T00:00:00 | [
[
"Hébrail",
"Georges",
""
],
[
"Hugueney",
"Bernard",
""
],
[
"Lechevallier",
"Yves",
""
],
[
"Rossi",
"Fabrice",
""
]
] | TITLE: Exploratory Analysis of Functional Data via Clustering and Optimal
Segmentation
ABSTRACT: We propose in this paper an exploratory analysis algorithm for functional
data. The method partitions a set of functions into $K$ clusters and represents
each cluster by a simple prototype (e.g., piecewise constant). The total number
of segments in the prototypes, $P$, is chosen by the user and optimally
distributed among the clusters via two dynamic programming algorithms. The
practical relevance of the method is shown on two real world datasets.
| no_new_dataset | 0.948632 |
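The dynamic-programming building block, optimally segmenting one series into P pieces with constant prototypes under squared error, looks roughly like the sketch below (a hypothetical helper; the paper additionally distributes the P segments across the K clusters).

import numpy as np

def optimal_segmentation(y, P):
    n = len(y)
    csum = np.insert(np.cumsum(y), 0, 0)
    csum2 = np.insert(np.cumsum(y ** 2), 0, 0)
    def cost(i, j):                            # SSE of a constant fit on y[i:j]
        s, s2, m = csum[j] - csum[i], csum2[j] - csum2[i], j - i
        return s2 - s * s / m
    D = np.full((P + 1, n + 1), np.inf); D[0, 0] = 0
    back = np.zeros((P + 1, n + 1), dtype=int)
    for p in range(1, P + 1):
        for j in range(p, n + 1):
            cands = [(D[p - 1, i] + cost(i, j), i) for i in range(p - 1, j)]
            D[p, j], back[p, j] = min(cands)
    cuts, j = [], n                            # recover segment boundaries
    for p in range(P, 0, -1):
        cuts.append(j); j = back[p, j]
    return sorted(cuts)

print(optimal_segmentation(np.r_[np.zeros(10), np.ones(10), 3 * np.ones(10)], 3))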
1003.5886 | Sandip Rakshit | Sandip Rakshit, Subhadip Basu | Development of a multi-user handwriting recognition system using
Tesseract open source OCR engine | Proc. International Conference on C3IT (2009) 240-247 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The objective of the paper is to recognize handwritten samples of lower case
Roman script using Tesseract open source Optical Character Recognition (OCR)
engine under Apache License 2.0. Handwritten data samples containing isolated
and free-flow text were collected from different users. Tesseract is trained
with user-specific data samples of both the categories of document pages to
generate separate user-models representing a unique language-set. Each such
language-set recognizes isolated and free-flow handwritten test samples
collected from the designated user. On a three user model, the system is
trained with 1844, 1535 and 1113 isolated handwritten character samples
collected from three different users and the performance is tested on 1133,
1186 and 1204 character samples, collected form the test sets of the three
users respectively. The user specific character level accuracies were obtained
as 87.92%, 81.53% and 65.71% respectively. The overall character-level accuracy
of the system is observed as 78.39%. The system fails to segment 10.96% of the
characters and erroneously classifies 10.65% of the characters on the overall
dataset.
| [
{
"version": "v1",
"created": "Tue, 30 Mar 2010 18:22:44 GMT"
}
] | 2010-03-31T00:00:00 | [
[
"Rakshit",
"Sandip",
""
],
[
"Basu",
"Subhadip",
""
]
] | TITLE: Development of a multi-user handwriting recognition system using
Tesseract open source OCR engine
ABSTRACT: The objective of the paper is to recognize handwritten samples of lower case
Roman script using Tesseract open source Optical Character Recognition (OCR)
engine under Apache License 2.0. Handwritten data samples containing isolated
and free-flow text were collected from different users. Tesseract is trained
with user-specific data samples of both the categories of document pages to
generate separate user-models representing a unique language-set. Each such
language-set recognizes isolated and free-flow handwritten test samples
collected from the designated user. On a three user model, the system is
trained with 1844, 1535 and 1113 isolated handwritten character samples
collected from three different users and the performance is tested on 1133,
1186 and 1204 character samples, collected from the test sets of the three
users respectively. The user specific character level accuracies were obtained
as 87.92%, 81.53% and 65.71% respectively. The overall character-level accuracy
of the system is observed as 78.39%. The system fails to segment 10.96% of the
characters and erroneously classifies 10.65% of the characters on the overall
dataset.
| no_new_dataset | 0.943815 |
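Once a user-specific traineddata file exists, recognition with it is a one-liner through the pytesseract wrapper; 'usr1' below is a hypothetical model name standing in for one of the per-user language-sets described above, and the file name is illustrative.

from PIL import Image
import pytesseract

page = Image.open("sample_page.png")                   # illustrative input image
text = pytesseract.image_to_string(page, lang="usr1")  # hypothetical user model
print(text)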
1003.5897 | Sandip Rakshit | Sandip Rakshit, Debkumar Ghosal, Tanmoy Das, Subhrajit Dutta, Subhadip
Basu | Development of a Multi-User Recognition Engine for Handwritten Bangla
Basic Characters and Digits | Proc. (CD) Int. Conf. on Information Technology and Business
Intelligence (2009) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The objective of the paper is to recognize handwritten samples of basic
Bangla characters using Tesseract open source Optical Character Recognition
(OCR) engine under Apache License 2.0. Handwritten data samples containing
isolated Bangla basic characters and digits were collected from different
users. Tesseract is trained with user-specific data samples of document pages
to generate separate user-models representing a unique language-set. Each such
language-set recognizes isolated basic Bangla handwritten test samples
collected from the designated users. On a three user model, the system is
trained with 919, 928 and 648 isolated handwritten character and digit samples
and the performance is tested on 1527, 14116 and 1279 character and digit
samples, collected from the test datasets of the three users respectively. The
user specific character/digit recognition accuracies were obtained as 90.66%,
91.66% and 96.87% respectively. The overall basic character-level and digit
level accuracy of the system is observed as 92.15% and 97.37%. The system fails
to segment 12.33% of characters and 15.96% of digits, and also erroneously
classifies 7.85% of characters and 2.63% of digits on the overall dataset.
| [
{
"version": "v1",
"created": "Tue, 30 Mar 2010 18:54:57 GMT"
}
] | 2010-03-31T00:00:00 | [
[
"Rakshit",
"Sandip",
""
],
[
"Ghosal",
"Debkumar",
""
],
[
"Das",
"Tanmoy",
""
],
[
"Dutta",
"Subhrajit",
""
],
[
"Basu",
"Subhadip",
""
]
] | TITLE: Development of a Multi-User Recognition Engine for Handwritten Bangla
Basic Characters and Digits
ABSTRACT: The objective of the paper is to recognize handwritten samples of basic
Bangla characters using Tesseract open source Optical Character Recognition
(OCR) engine under Apache License 2.0. Handwritten data samples containing
isolated Bangla basic characters and digits were collected from different
users. Tesseract is trained with user-specific data samples of document pages
to generate separate user-models representing a unique language-set. Each such
language-set recognizes isolated basic Bangla handwritten test samples
collected from the designated users. On a three user model, the system is
trained with 919, 928 and 648 isolated handwritten character and digit samples
and the performance is tested on 1527, 14116 and 1279 character and digit
samples, collected from the test datasets of the three users respectively. The
user specific character/digit recognition accuracies were obtained as 90.66%,
91.66% and 96.87% respectively. The overall basic character-level and digit
level accuracy of the system is observed as 92.15% and 97.37%. The system fails
to segment 12.33% of characters and 15.96% of digits, and also erroneously
classifies 7.85% of characters and 2.63% of digits on the overall dataset.
| no_new_dataset | 0.934813 |
1003.5898 | Sandip Rakshit | Sandip Rakshit, Amitava Kundu, Mrinmoy Maity, Subhajit Mandal, Satwika
Sarkar, Subhadip Basu | Recognition of handwritten Roman Numerals using Tesseract open source
OCR engine | Proc. Int. Conf. on Advances in Computer Vision and Information
Technology (2009) 572-577 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The objective of the paper is to recognize handwritten samples of Roman
numerals using Tesseract open source Optical Character Recognition (OCR)
engine. Tesseract is trained with data samples of different persons to generate
one user-independent language model, representing the handwritten Roman
digit-set. The system is trained with 1226 digit samples collected from the
different users. The performance is tested on two different datasets, one
consisting of samples collected from the known users (those who prepared the
training data samples) and the other consisting of handwritten data samples of
unknown users. The overall recognition accuracies are obtained as 92.1% and
86.59% on these test datasets, respectively.
| [
{
"version": "v1",
"created": "Tue, 30 Mar 2010 18:59:49 GMT"
}
] | 2010-03-31T00:00:00 | [
[
"Rakshit",
"Sandip",
""
],
[
"Kundu",
"Amitava",
""
],
[
"Maity",
"Mrinmoy",
""
],
[
"Mandal",
"Subhajit",
""
],
[
"Sarkar",
"Satwika",
""
],
[
"Basu",
"Subhadip",
""
]
] | TITLE: Recognition of handwritten Roman Numerals using Tesseract open source
OCR engine
ABSTRACT: The objective of the paper is to recognize handwritten samples of Roman
numerals using the Tesseract open source Optical Character Recognition (OCR)
engine. Tesseract is trained with data samples of different persons to generate
one user-independent language model, representing the handwritten Roman
digit-set. The system is trained with 1226 digit samples collected from the
different users. The performance is tested on two different datasets, one
consisting of samples collected from the known users (those who prepared the
training data samples) and the other consisting of handwritten data samples of
unknown users. The overall recognition accuracies are obtained as 92.1% and
86.59% on these test datasets, respectively.
| no_new_dataset | 0.953101 |
1002.4007 | William Jackson | Ram Sarkar, Nibaran Das, Subhadip Basu, Mahantapas Kundu, Mita
Nasipuri, Dipak Kumar Basu | Word level Script Identification from Bangla and Devanagri Handwritten
Texts mixed with Roman Script | null | Journal of Computing, Volume 2, Issue 2, February 2010,
https://sites.google.com/site/journalofcomputing/ | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | India is a multi-lingual country where Roman script is often used alongside
different Indic scripts in a text document. To develop a script-specific
handwritten Optical Character Recognition (OCR) system, it is therefore
necessary to identify the scripts of handwritten text correctly. In this paper,
we present a system which automatically separates the scripts of handwritten
words from a document written in Bangla or Devanagri mixed with Roman scripts.
In this script separation technique, we first extract the text lines and words
from document pages using a script-independent Neighboring Component Analysis
technique. Then we have designed a Multi-Layer Perceptron (MLP) based
classifier for script separation, trained with 8 different word-level holistic
features. Two equal-sized datasets, one with Bangla and Roman scripts and the
other with Devanagri and Roman scripts, are prepared for the system evaluation.
On respective independent text samples, word-level script identification
accuracies of 99.29% and 98.43% are achieved.
| [
{
"version": "v1",
"created": "Sun, 21 Feb 2010 19:48:16 GMT"
}
] | 2010-03-25T00:00:00 | [
[
"Sarkar",
"Ram",
""
],
[
"Das",
"Nibaran",
""
],
[
"Basu",
"Subhadip",
""
],
[
"Kundu",
"Mahantapas",
""
],
[
"Nasipuri",
"Mita",
""
],
[
"Basu",
"Dipak Kumar",
""
]
] | TITLE: Word level Script Identification from Bangla and Devanagri Handwritten
Texts mixed with Roman Script
ABSTRACT: India is a multi-lingual country where Roman script is often used alongside
different Indic scripts in a text document. To develop a script-specific
handwritten Optical Character Recognition (OCR) system, it is therefore
necessary to identify the scripts of handwritten text correctly. In this paper,
we present a system which automatically separates the scripts of handwritten
words from a document written in Bangla or Devanagri mixed with Roman scripts.
In this script separation technique, we first extract the text lines and words
from document pages using a script-independent Neighboring Component Analysis
technique. Then we have designed a Multi-Layer Perceptron (MLP) based
classifier for script separation, trained with 8 different word-level holistic
features. Two equal-sized datasets, one with Bangla and Roman scripts and the
other with Devanagri and Roman scripts, are prepared for the system evaluation.
On respective independent text samples, word-level script identification
accuracies of 99.29% and 98.43% are achieved.
| no_new_dataset | 0.918991 |
1002.4048 | William Jackson | Satadal Saha, Subhadip Basu, Mita Nasipuri, Dipak Kr. Basu | A Hough Transform based Technique for Text Segmentation | null | Journal of Computing, Volume 2, Issue 2, February 2010,
https://sites.google.com/site/journalofcomputing/ | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text segmentation is an inherent part of an OCR system irrespective of the
domain of its application. The OCR system contains a segmentation module
where the text lines, words and ultimately the characters must be segmented
properly for its successful recognition. The present work implements a Hough
transform-based technique for line and word segmentation from digitized images.
The proposed technique is applied not only on a document image dataset but
also on datasets for a business card reader system and a license plate
recognition system. For standardization of the performance of the system, the
technique is also applied on a public-domain dataset published on the website
of CMATER, Jadavpur University. The document images consist of multi-script
printed and handwritten text lines with variety in script and line spacing in a
single document image. The technique performs quite satisfactorily when applied
on mobile-camera-captured business card images with low resolution. The
usefulness of the technique is verified by applying it in a commercial project
for the localization of license plates of vehicles from surveillance camera
images by the process of segmentation itself. The accuracy of the technique for word
segmentation, as verified experimentally, is 85.7% for document images, 94.6%
for business card images and 88% for surveillance camera images.
| [
{
"version": "v1",
"created": "Mon, 22 Feb 2010 03:16:55 GMT"
}
] | 2010-03-23T00:00:00 | [
[
"Saha",
"Satadal",
""
],
[
"Basu",
"Subhadip",
""
],
[
"Nasipuri",
"Mita",
""
],
[
"Basu",
"Dipak Kr.",
""
]
] | TITLE: A Hough Transform based Technique for Text Segmentation
ABSTRACT: Text segmentation is an inherent part of an OCR system irrespective of the
domain of its application. The OCR system contains a segmentation module
where the text lines, words and ultimately the characters must be segmented
properly for its successful recognition. The present work implements a Hough
transform-based technique for line and word segmentation from digitized images.
The proposed technique is applied not only on a document image dataset but
also on datasets for a business card reader system and a license plate
recognition system. For standardization of the performance of the system, the
technique is also applied on a public-domain dataset published on the website
of CMATER, Jadavpur University. The document images consist of multi-script
printed and handwritten text lines with variety in script and line spacing in a
single document image. The technique performs quite satisfactorily when applied
on mobile-camera-captured business card images with low resolution. The
usefulness of the technique is verified by applying it in a commercial project
for the localization of license plates of vehicles from surveillance camera
images by the process of segmentation itself. The accuracy of the technique for word
segmentation, as verified experimentally, is 85.7% for document images, 94.6%
for business card images and 88% for surveillance camera images.
| no_new_dataset | 0.954265 |
0812.5064 | Qiang Li | Qiang Li, Zhuo Chen, Yan He, Jing-ping Jiang | A Novel Clustering Algorithm Based Upon Games on Evolving Network | 17 pages, 5 figures, 3 tables | Expert Systems with Applications, 2010 | 10.1016/j.eswa.2010.02.050 | null | cs.LG cs.CV cs.GT nlin.AO | http://creativecommons.org/licenses/by-nc-sa/3.0/ | This paper introduces a model based upon games on an evolving network, and
develops three clustering algorithms according to it. In the clustering
algorithms, data points for clustering are regarded as players who can make
decisions in games. On the network describing relationships among data points,
an edge-removing-and-rewiring (ERR) function is employed to explore in a
neighborhood of a data point, which removes edges connecting to neighbors with
small payoffs, and creates new edges to neighbors with larger payoffs. As such,
the connections among data points vary over time. During the evolution of the
network, some strategies spread in the network. As a consequence, clusters
are formed automatically, in which data points with the same evolutionarily
stable strategy are collected as a cluster, so the number of evolutionarily
stable strategies indicates the number of clusters. Moreover, the experimental
results have demonstrated that data points in datasets are clustered reasonably
and efficiently, and the comparison with other algorithms also provides an
indication of the effectiveness of the proposed algorithms.
| [
{
"version": "v1",
"created": "Tue, 30 Dec 2008 13:22:31 GMT"
},
{
"version": "v2",
"created": "Fri, 19 Mar 2010 13:30:08 GMT"
}
] | 2010-03-22T00:00:00 | [
[
"Li",
"Qiang",
""
],
[
"Chen",
"Zhuo",
""
],
[
"He",
"Yan",
""
],
[
"Jiang",
"Jing-ping",
""
]
] | TITLE: A Novel Clustering Algorithm Based Upon Games on Evolving Network
ABSTRACT: This paper introduces a model based upon games on an evolving network, and
develops three clustering algorithms according to it. In the clustering
algorithms, data points for clustering are regarded as players who can make
decisions in games. On the network describing relationships among data points,
an edge-removing-and-rewiring (ERR) function is employed to explore in a
neighborhood of a data point, which removes edges connecting to neighbors with
small payoffs, and creates new edges to neighbors with larger payoffs. As such,
the connections among data points vary over time. During the evolution of the
network, some strategies spread in the network. As a consequence, clusters
are formed automatically, in which data points with the same evolutionarily
stable strategy are collected as a cluster, so the number of evolutionarily
stable strategies indicates the number of clusters. Moreover, the experimental
results have demonstrated that data points in datasets are clustered reasonably
and efficiently, and the comparison with other algorithms also provides an
indication of the effectiveness of the proposed algorithms.
| no_new_dataset | 0.956553 |
1003.2424 | Jure Leskovec | Jure Leskovec, Daniel Huttenlocher, Jon Kleinberg | Signed Networks in Social Media | null | CHI 2010: 28th ACM Conference on Human Factors in Computing
Systems | null | null | physics.soc-ph cs.CY cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Relations between users on social media sites often reflect a mixture of
positive (friendly) and negative (antagonistic) interactions. In contrast to
the bulk of research on social networks that has focused almost exclusively on
positive interpretations of links between people, we study how the interplay
between positive and negative relationships affects the structure of on-line
social networks. We connect our analyses to theories of signed networks from
social psychology. We find that the classical theory of structural balance
tends to capture certain common patterns of interaction, but that it is also at
odds with some of the fundamental phenomena we observe --- particularly related
to the evolving, directed nature of these on-line networks. We then develop an
alternate theory of status that better explains the observed edge signs and
provides insights into the underlying social mechanisms. Our work provides one
of the first large-scale evaluations of theories of signed networks using
on-line datasets, as well as providing a perspective for reasoning about social
media sites.
| [
{
"version": "v1",
"created": "Thu, 11 Mar 2010 21:11:26 GMT"
}
] | 2010-03-15T00:00:00 | [
[
"Leskovec",
"Jure",
""
],
[
"Huttenlocher",
"Daniel",
""
],
[
"Kleinberg",
"Jon",
""
]
] | TITLE: Signed Networks in Social Media
ABSTRACT: Relations between users on social media sites often reflect a mixture of
positive (friendly) and negative (antagonistic) interactions. In contrast to
the bulk of research on social networks that has focused almost exclusively on
positive interpretations of links between people, we study how the interplay
between positive and negative relationships affects the structure of on-line
social networks. We connect our analyses to theories of signed networks from
social psychology. We find that the classical theory of structural balance
tends to capture certain common patterns of interaction, but that it is also at
odds with some of the fundamental phenomena we observe --- particularly related
to the evolving, directed nature of these on-line networks. We then develop an
alternate theory of status that better explains the observed edge signs and
provides insights into the underlying social mechanisms. Our work provides one
of the first large-scale evaluations of theories of signed networks using
on-line datasets, as well as providing a perspective for reasoning about social
media sites.
| no_new_dataset | 0.948965 |
1003.2429 | Jure Leskovec | Jure Leskovec, Daniel Huttenlocher, Jon Kleinberg | Predicting Positive and Negative Links in Online Social Networks | null | WWW 2010: ACM WWW International conference on World Wide Web, 2010 | null | null | physics.soc-ph cs.AI cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study online social networks in which relationships can be either positive
(indicating relations such as friendship) or negative (indicating relations
such as opposition or antagonism). Such a mix of positive and negative links
arises in a variety of online settings; we study datasets from Epinions,
Slashdot and Wikipedia. We find that the signs of links in the underlying
social networks can be predicted with high accuracy, using models that
generalize across this diverse range of sites. These models provide insight
into some of the fundamental principles that drive the formation of signed
links in networks, shedding light on theories of balance and status from social
psychology; they also suggest social computing applications by which the
attitude of one user toward another can be estimated from evidence provided by
their relationships with other members of the surrounding social network.
| [
{
"version": "v1",
"created": "Thu, 11 Mar 2010 21:27:11 GMT"
}
] | 2010-03-15T00:00:00 | [
[
"Leskovec",
"Jure",
""
],
[
"Huttenlocher",
"Daniel",
""
],
[
"Kleinberg",
"Jon",
""
]
] | TITLE: Predicting Positive and Negative Links in Online Social Networks
ABSTRACT: We study online social networks in which relationships can be either positive
(indicating relations such as friendship) or negative (indicating relations
such as opposition or antagonism). Such a mix of positive and negative links
arises in a variety of online settings; we study datasets from Epinions,
Slashdot and Wikipedia. We find that the signs of links in the underlying
social networks can be predicted with high accuracy, using models that
generalize across this diverse range of sites. These models provide insight
into some of the fundamental principles that drive the formation of signed
links in networks, shedding light on theories of balance and status from social
psychology; they also suggest social computing applications by which the
attitude of one user toward another can be estimated from evidence provided by
their relationships with other members of the surrounding social network.
| no_new_dataset | 0.944485 |
0909.3472 | J\'er\^ome Kunegis | J\'er\^ome Kunegis, Alan Said, Winfried Umbrath | The Universal Recommender | 17 pages; typo and references fixed | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe the Universal Recommender, a recommender system for semantic
datasets that generalizes domain-specific recommenders such as content-based,
collaborative, social, bibliographic, lexicographic, hybrid and other
recommenders. In contrast to existing recommender systems, the Universal
Recommender applies to any dataset that allows a semantic representation. We
describe the scalable three-stage architecture of the Universal Recommender and
its application to Internet Protocol Television (IPTV). To achieve good
recommendation accuracy, several novel machine learning and optimization
problems are identified. We finally give a brief argument supporting the need
for machine learning recommenders.
| [
{
"version": "v1",
"created": "Fri, 18 Sep 2009 15:54:51 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Mar 2010 12:43:28 GMT"
}
] | 2010-03-13T00:00:00 | [
[
"Kunegis",
"Jérôme",
""
],
[
"Said",
"Alan",
""
],
[
"Umbrath",
"Winfried",
""
]
] | TITLE: The Universal Recommender
ABSTRACT: We describe the Universal Recommender, a recommender system for semantic
datasets that generalizes domain-specific recommenders such as content-based,
collaborative, social, bibliographic, lexicographic, hybrid and other
recommenders. In contrast to existing recommender systems, the Universal
Recommender applies to any dataset that allows a semantic representation. We
describe the scalable three-stage architecture of the Universal Recommender and
its application to Internet Protocol Television (IPTV). To achieve good
recommendation accuracy, several novel machine learning and optimization
problems are identified. We finally give a brief argument supporting the need
for machine learning recommenders.
| no_new_dataset | 0.948917 |
1003.1814 | Rdv Ijcsis | Alok Ranjan, Harish Verma, Eatesh Kandpal, Joydip Dhar | An Analytical Approach to Document Clustering Based on Internal
Criterion Function | Pages IEEE format, International Journal of Computer Science and
Information Security, IJCSIS, Vol. 7 No. 2, February 2010, USA. ISSN 1947
5500, http://sites.google.com/site/ijcsis/ | null | null | null | cs.IR | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Fast and high quality document clustering is an important task in organizing
information, organizing search engine results obtained from user queries,
enhancing web crawling, and improving information retrieval. With the large
amount of data available and with a goal of creating good-quality clusters, a
variety of algorithms have been developed with quality-complexity trade-offs.
Among these, some algorithms seek to minimize the computational complexity
using certain criterion functions which are defined over the whole clustering
solution. In this paper, we propose a novel document clustering algorithm based
on an internal criterion function. The most commonly used partitioning
clustering algorithms (e.g. k-means) have some drawbacks, as they suffer from
local optima and may produce empty clusters as a clustering solution. The
proposed algorithm usually does not suffer from these problems and converges to
a global optimum; its performance improves as the number of clusters increases.
We have evaluated our algorithm on three different datasets for four different
values of k (the required number of clusters).
| [
{
"version": "v1",
"created": "Tue, 9 Mar 2010 07:28:07 GMT"
}
] | 2010-03-11T00:00:00 | [
[
"Ranjan",
"Alok",
""
],
[
"Verma",
"Harish",
""
],
[
"Kandpal",
"Eatesh",
""
],
[
"Dhar",
"Joydip",
""
]
] | TITLE: An Analytical Approach to Document Clustering Based on Internal
Criterion Function
ABSTRACT: Fast and high quality document clustering is an important task in organizing
information, organizing search engine results obtained from user queries,
enhancing web crawling, and improving information retrieval. With the large
amount of data available and with a goal of creating good-quality clusters, a
variety of algorithms have been developed with quality-complexity trade-offs.
Among these, some algorithms seek to minimize the computational complexity
using certain criterion functions which are defined over the whole clustering
solution. In this paper, we propose a novel document clustering algorithm based
on an internal criterion function. The most commonly used partitioning
clustering algorithms (e.g. k-means) have some drawbacks, as they suffer from
local optima and may produce empty clusters as a clustering solution. The
proposed algorithm usually does not suffer from these problems and converges to
a global optimum; its performance improves as the number of clusters increases.
We have evaluated our algorithm on three different datasets for four different
values of k (the required number of clusters).
| no_new_dataset | 0.949248 |
1003.1795 | Rdv Ijcsis | Vidhya. K. A, G. Aghila | A Survey of Na\"ive Bayes Machine Learning approach in Text Document
Classification | Pages IEEE format, International Journal of Computer Science and
Information Security, IJCSIS, Vol. 7 No. 2, February 2010, USA. ISSN 1947
5500, http://sites.google.com/site/ijcsis/ | null | null | null | cs.LG cs.IR | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Text document classification aims at associating one or more predefined
categories with a document, based on the likelihood suggested by the training
set of labeled documents. Many machine learning algorithms play a vital role in
training the system with predefined categories, among which Na\"ive Bayes has
some intriguing merits: it is simple, easy to implement, and achieves good
accuracy on large datasets in spite of its na\"ive independence assumption. The
importance of the Na\"ive Bayes machine learning approach has been felt; hence,
this study has been taken up for text document classification and the available
statistical event models. In this survey, the various feature selection methods
are discussed and compared, along with the metrics related to text document
classification.
| [
{
"version": "v1",
"created": "Tue, 9 Mar 2010 06:41:49 GMT"
}
] | 2010-03-10T00:00:00 | [
[
"A",
"Vidhya. K.",
""
],
[
"Aghila",
"G.",
""
]
] | TITLE: A Survey of Na\"ive Bayes Machine Learning approach in Text Document
Classification
ABSTRACT: Text document classification aims at associating one or more predefined
categories with a document, based on the likelihood suggested by the training
set of labeled documents. Many machine learning algorithms play a vital role in
training the system with predefined categories, among which Na\"ive Bayes has
some intriguing merits: it is simple, easy to implement, and achieves good
accuracy on large datasets in spite of its na\"ive independence assumption. The
importance of the Na\"ive Bayes machine learning approach has been felt; hence,
this study has been taken up for text document classification and the available
statistical event models. In this survey, the various feature selection methods
are discussed and compared, along with the metrics related to text document
classification.
| no_new_dataset | 0.948489 |
0906.3585 | Arnab Bhattacharya | Vishwakarma Singh, Arnab Bhattacharya, Ambuj K. Singh | Finding Significant Subregions in Large Image Databases | 16 pages, 48 figures | Extending Database Technology (EDBT) 2010 | null | null | cs.DB cs.CV cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Images have become an important data source in many scientific and commercial
domains. Analysis and exploration of image collections often requires the
retrieval of the best subregions matching a given query. The support of such
content-based retrieval requires not only the formulation of an appropriate
scoring function for defining relevant subregions but also the design of new
access methods that can scale to large databases. In this paper, we propose a
solution to this problem of querying significant image subregions. We design a
scoring scheme to measure the similarity of subregions. Our similarity measure
extends to any image descriptor. All the images are tiled and each alignment of
the query and a database image produces a tile score matrix. We show that the
problem of finding the best connected subregion from this matrix is NP-hard and
develop a dynamic programming heuristic. With this heuristic, we develop two
index-based scalable search strategies, TARS and SPARS, to query patterns in a
large image repository. These strategies are general enough to work with other
scoring schemes and heuristics. Experimental results on real image datasets
show that TARS saves more than 87% query time on small queries, and SPARS saves
up to 52% query time on large queries as compared to linear search. Qualitative
tests on synthetic and real datasets achieve precision of more than 80%.
| [
{
"version": "v1",
"created": "Fri, 19 Jun 2009 06:57:51 GMT"
}
] | 2010-03-09T00:00:00 | [
[
"Singh",
"Vishwakarma",
""
],
[
"Bhattacharya",
"Arnab",
""
],
[
"Singh",
"Ambuj K.",
""
]
] | TITLE: Finding Significant Subregions in Large Image Databases
ABSTRACT: Images have become an important data source in many scientific and commercial
domains. Analysis and exploration of image collections often requires the
retrieval of the best subregions matching a given query. The support of such
content-based retrieval requires not only the formulation of an appropriate
scoring function for defining relevant subregions but also the design of new
access methods that can scale to large databases. In this paper, we propose a
solution to this problem of querying significant image subregions. We design a
scoring scheme to measure the similarity of subregions. Our similarity measure
extends to any image descriptor. All the images are tiled and each alignment of
the query and a database image produces a tile score matrix. We show that the
problem of finding the best connected subregion from this matrix is NP-hard and
develop a dynamic programming heuristic. With this heuristic, we develop two
index-based scalable search strategies, TARS and SPARS, to query patterns in a
large image repository. These strategies are general enough to work with other
scoring schemes and heuristics. Experimental results on real image datasets
show that TARS saves more than 87% query time on small queries, and SPARS saves
up to 52% query time on large queries as compared to linear search. Qualitative
tests on synthetic and real datasets achieve precision of more than 80%.
| no_new_dataset | 0.950134 |
0909.3169 | Purushottam Kar | Arnab Bhattacharya, Purushottam Kar and Manjish Pal | On Low Distortion Embeddings of Statistical Distance Measures into Low
Dimensional Spaces | 18 pages, The short version of this paper was accepted for
presentation at the 20th International Conference on Database and Expert
Systems Applications, DEXA 2009 | Database and Expert Systems Applications (DEXA) 2009 | 10.1007/978-3-642-03573-9_13 | null | cs.CG cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Statistical distance measures have found wide applicability in information
retrieval tasks that typically involve high dimensional datasets. In order to
reduce the storage space and ensure efficient performance of queries,
dimensionality reduction while preserving the inter-point similarity is highly
desirable. In this paper, we investigate various statistical distance measures
from the point of view of discovering low distortion embeddings into
low-dimensional spaces. More specifically, we consider the Mahalanobis distance
measure, the Bhattacharyya class of divergences and the Kullback-Leibler
divergence. We present a dimensionality reduction method based on the
Johnson-Lindenstrauss Lemma for the Mahalanobis measure that achieves
arbitrarily low distortion. By using the Johnson-Lindenstrauss Lemma again, we
further demonstrate that the Bhattacharyya distance admits dimensionality
reduction with arbitrarily low additive error. We also examine the question of
embeddability into metric spaces for these distance measures due to the
availability of efficient indexing schemes on metric spaces. We provide
explicit constructions of point sets under the Bhattacharyya and the
Kullback-Leibler divergences whose embeddings into any metric space incur
arbitrarily large distortions. We show that the lower bound presented for
Bhattacharyya distance is nearly tight by providing an embedding that
approaches the lower bound for relatively small dimensional datasets.
| [
{
"version": "v1",
"created": "Thu, 17 Sep 2009 09:29:48 GMT"
}
] | 2010-03-09T00:00:00 | [
[
"Bhattacharya",
"Arnab",
""
],
[
"Kar",
"Purushottam",
""
],
[
"Pal",
"Manjish",
""
]
] | TITLE: On Low Distortion Embeddings of Statistical Distance Measures into Low
Dimensional Spaces
ABSTRACT: Statistical distance measures have found wide applicability in information
retrieval tasks that typically involve high dimensional datasets. In order to
reduce the storage space and ensure efficient performance of queries,
dimensionality reduction while preserving the inter-point similarity is highly
desirable. In this paper, we investigate various statistical distance measures
from the point of view of discovering low distortion embeddings into
low-dimensional spaces. More specifically, we consider the Mahalanobis distance
measure, the Bhattacharyya class of divergences and the Kullback-Leibler
divergence. We present a dimensionality reduction method based on the
Johnson-Lindenstrauss Lemma for the Mahalanobis measure that achieves
arbitrarily low distortion. By using the Johnson-Lindenstrauss Lemma again, we
further demonstrate that the Bhattacharyya distance admits dimensionality
reduction with arbitrarily low additive error. We also examine the question of
embeddability into metric spaces for these distance measures due to the
availability of efficient indexing schemes on metric spaces. We provide
explicit constructions of point sets under the Bhattacharyya and the
Kullback-Leibler divergences whose embeddings into any metric space incur
arbitrarily large distortions. We show that the lower bound presented for
Bhattacharyya distance is nearly tight by providing an embedding that
approaches the lower bound for relatively small dimensional datasets.
| no_new_dataset | 0.948965 |
1001.2625 | Arnab Bhattacharya | Arnab Bhattacharya, Abhishek Bhowmick, Ambuj K. Singh | Finding top-k similar pairs of objects annotated with terms from an
ontology | 17 pages, 13 figures | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the growing focus on semantic searches and interpretations, an
increasing number of standardized vocabularies and ontologies are being
designed and used to describe data. We investigate the querying of objects
described by a tree-structured ontology. Specifically, we consider the case of
finding the top-k best pairs of objects that have been annotated with terms
from such an ontology when the object descriptions are available only at
runtime. We consider three distance measures. The first one defines the object
distance as the minimum pairwise distance between the sets of terms describing
them, and the second one defines the distance as the average pairwise term
distance. The third and most useful distance measure, earth mover's distance,
finds the best way of matching the terms and computes the distance
corresponding to this best matching. We develop lower bounds that can be
aggregated progressively and utilize them to speed up the search for top-k
object pairs when the earth mover's distance is used. For the minimum pairwise
distance, we devise an algorithm that runs in O(D + Tk log k) time, where D is
the total information size and T is the total number of terms in the ontology.
We also develop a novel best-first search strategy for the average pairwise
distance that utilizes lower bounds generated in an ordered manner. Experiments
on real and synthetic datasets demonstrate the practicality and scalability of
our algorithms.
| [
{
"version": "v1",
"created": "Fri, 15 Jan 2010 07:01:37 GMT"
},
{
"version": "v2",
"created": "Sat, 6 Mar 2010 11:23:28 GMT"
}
] | 2010-03-09T00:00:00 | [
[
"Bhattacharya",
"Arnab",
""
],
[
"Bhowmick",
"Abhishek",
""
],
[
"Singh",
"Ambuj K.",
""
]
] | TITLE: Finding top-k similar pairs of objects annotated with terms from an
ontology
ABSTRACT: With the growing focus on semantic searches and interpretations, an
increasing number of standardized vocabularies and ontologies are being
designed and used to describe data. We investigate the querying of objects
described by a tree-structured ontology. Specifically, we consider the case of
finding the top-k best pairs of objects that have been annotated with terms
from such an ontology when the object descriptions are available only at
runtime. We consider three distance measures. The first one defines the object
distance as the minimum pairwise distance between the sets of terms describing
them, and the second one defines the distance as the average pairwise term
distance. The third and most useful distance measure, earth mover's distance,
finds the best way of matching the terms and computes the distance
corresponding to this best matching. We develop lower bounds that can be
aggregated progressively and utilize them to speed up the search for top-k
object pairs when the earth mover's distance is used. For the minimum pairwise
distance, we devise an algorithm that runs in O(D + Tk log k) time, where D is
the total information size and T is the total number of terms in the ontology.
We also develop a novel best-first search strategy for the average pairwise
distance that utilizes lower bounds generated in an ordered manner. Experiments
on real and synthetic datasets demonstrate the practicality and scalability of
our algorithms.
| no_new_dataset | 0.952838 |
0910.0668 | Ahmed Abdel-Gawad | Yuan Qi, Ahmed H. Abdel-Gawad and Thomas P. Minka | Variable sigma Gaussian processes: An expectation propagation
perspective | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gaussian processes (GPs) provide a probabilistic nonparametric representation
of functions in regression, classification, and other problems. Unfortunately,
exact learning with GPs is intractable for large datasets. A variety of
approximate GP methods have been proposed that essentially map the large
dataset into a small set of basis points. The most advanced of these, the
variable-sigma GP (VSGP) (Walder et al., 2008), allows each basis point to have
its own length scale. However, VSGP was only derived for regression. We
describe how VSGP can be applied to classification and other problems, by
deriving it as an expectation propagation algorithm. In this view, sparse GP
approximations correspond to a KL-projection of the true posterior onto a
compact exponential family of GPs. VSGP constitutes one such family, and we
show how to enlarge this family to get additional accuracy. In particular, we
show that endowing each basis point with its own full covariance matrix
provides a significant increase in approximation power.
| [
{
"version": "v1",
"created": "Mon, 5 Oct 2009 03:30:13 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Oct 2009 21:52:48 GMT"
}
] | 2010-02-23T00:00:00 | [
[
"Qi",
"Yuan",
""
],
[
"Abdel-Gawad",
"Ahmed H.",
""
],
[
"Minka",
"Thomas P.",
""
]
] | TITLE: Variable sigma Gaussian processes: An expectation propagation
perspective
ABSTRACT: Gaussian processes (GPs) provide a probabilistic nonparametric representation
of functions in regression, classification, and other problems. Unfortunately,
exact learning with GPs is intractable for large datasets. A variety of
approximate GP methods have been proposed that essentially map the large
dataset into a small set of basis points. The most advanced of these, the
variable-sigma GP (VSGP) (Walder et al., 2008), allows each basis point to have
its own length scale. However, VSGP was only derived for regression. We
describe how VSGP can be applied to classification and other problems, by
deriving it as an expectation propagation algorithm. In this view, sparse GP
approximations correspond to a KL-projection of the true posterior onto a
compact exponential family of GPs. VSGP constitutes one such family, and we
show how to enlarge this family to get additional accuracy. In particular, we
show that endowing each basis point with its own full covariance matrix
provides a significant increase in approximation power.
| no_new_dataset | 0.946001 |
1002.3195 | Mahmud Hossain | M. Shahriar Hossain, Michael Narayan and Naren Ramakrishnan | Efficiently Discovering Hammock Paths from Induced Similarity Networks | null | null | null | null | cs.AI cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Similarity networks are important abstractions in many information management
applications such as recommender systems, corpora analysis, and medical
informatics. For instance, by inducing similarity networks between movies rated
similarly by users, or between documents containing common terms, or
between clinical trials involving the same themes, we can aim to find the
global structure of connectivities underlying the data, and use the network as
a basis to make connections between seemingly disparate entities. In the above
applications, composing similarities between objects of interest finds uses in
serendipitous recommendation, in storytelling, and in clinical diagnosis,
respectively. We present an algorithmic framework for traversing similarity
paths using the notion of `hammock' paths, which are generalizations of
traditional paths. Our framework is exploratory in nature so that, given
starting and ending objects of interest, it explores candidate objects for path
following, and heuristics to admissibly estimate the potential for paths to
lead to a desired destination. We present three diverse applications: exploring
movie similarities in the Netflix dataset, exploring abstract similarities
across the PubMed corpus, and exploring description similarities in a database
of clinical trials. Experimental results demonstrate the potential of our
approach for unstructured knowledge discovery in similarity networks.
| [
{
"version": "v1",
"created": "Wed, 17 Feb 2010 04:07:06 GMT"
}
] | 2010-02-18T00:00:00 | [
[
"Hossain",
"M. Shahriar",
""
],
[
"Narayan",
"Michael",
""
],
[
"Ramakrishnan",
"Naren",
""
]
] | TITLE: Efficiently Discovering Hammock Paths from Induced Similarity Networks
ABSTRACT: Similarity networks are important abstractions in many information management
applications such as recommender systems, corpora analysis, and medical
informatics. For instance, by inducing similarity networks between movies rated
similarly by users, or between documents containing common terms, or
between clinical trials involving the same themes, we can aim to find the
global structure of connectivities underlying the data, and use the network as
a basis to make connections between seemingly disparate entities. In the above
applications, composing similarities between objects of interest finds uses in
serendipitous recommendation, in storytelling, and in clinical diagnosis,
respectively. We present an algorithmic framework for traversing similarity
paths using the notion of `hammock' paths, which are generalizations of
traditional paths. Our framework is exploratory in nature so that, given
starting and ending objects of interest, it explores candidate objects for path
following, and heuristics to admissibly estimate the potential for paths to
lead to a desired destination. We present three diverse applications: exploring
movie similarities in the Netflix dataset, exploring abstract similarities
across the PubMed corpus, and exploring description similarities in a database
of clinical trials. Experimental results demonstrate the potential of our
approach for unstructured knowledge discovery in similarity networks.
| no_new_dataset | 0.946001 |
1002.2780 | Ruslan Salakhutdinov | Ruslan Salakhutdinov, Nathan Srebro | Collaborative Filtering in a Non-Uniform World: Learning with the
Weighted Trace Norm | 9 pages | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We show that matrix completion with trace-norm regularization can be
significantly hurt when entries of the matrix are sampled non-uniformly. We
introduce a weighted version of the trace-norm regularizer that works well also
with non-uniform sampling. Our experimental results demonstrate that the
weighted trace-norm regularization indeed yields significant gains on the
(highly non-uniformly sampled) Netflix dataset.
| [
{
"version": "v1",
"created": "Sun, 14 Feb 2010 16:37:04 GMT"
}
] | 2010-02-16T00:00:00 | [
[
"Salakhutdinov",
"Ruslan",
""
],
[
"Srebro",
"Nathan",
""
]
] | TITLE: Collaborative Filtering in a Non-Uniform World: Learning with the
Weighted Trace Norm
ABSTRACT: We show that matrix completion with trace-norm regularization can be
significantly hurt when entries of the matrix are sampled non-uniformly. We
introduce a weighted version of the trace-norm regularizer that works well also
with non-uniform sampling. Our experimental results demonstrate that the
weighted trace-norm regularization indeed yields significant gains on the
(highly non-uniformly sampled) Netflix dataset.
| no_new_dataset | 0.950319 |
0908.0050 | Julien Mairal | Julien Mairal (INRIA Rocquencourt), Francis Bach (INRIA Rocquencourt),
Jean Ponce (INRIA Rocquencourt, LIENS), Guillermo Sapiro | Online Learning for Matrix Factorization and Sparse Coding | revised version | Journal of Machine Learning Research 11 (2010) 19--60 | null | null | stat.ML cs.LG math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sparse coding--that is, modelling data vectors as sparse linear combinations
of basis elements--is widely used in machine learning, neuroscience, signal
processing, and statistics. This paper focuses on the large-scale matrix
factorization problem that consists of learning the basis set, adapting it to
specific data. Variations of this problem include dictionary learning in signal
processing, non-negative matrix factorization and sparse principal component
analysis. In this paper, we propose to address these tasks with a new online
optimization algorithm, based on stochastic approximations, which scales up
gracefully to large datasets with millions of training samples, and extends
naturally to various matrix factorization formulations, making it suitable for
a wide range of learning problems. A proof of convergence is presented, along
with experiments with natural images and genomic data demonstrating that it
leads to state-of-the-art performance in terms of speed and optimization for
both small and large datasets.
| [
{
"version": "v1",
"created": "Sat, 1 Aug 2009 06:09:18 GMT"
},
{
"version": "v2",
"created": "Thu, 11 Feb 2010 07:33:02 GMT"
}
] | 2010-02-11T00:00:00 | [
[
"Mairal",
"Julien",
"",
"INRIA Rocquencourt"
],
[
"Bach",
"Francis",
"",
"INRIA Rocquencourt"
],
[
"Ponce",
"Jean",
"",
"INRIA Rocquencourt, LIENS"
],
[
"Sapiro",
"Guillermo",
""
]
] | TITLE: Online Learning for Matrix Factorization and Sparse Coding
ABSTRACT: Sparse coding--that is, modelling data vectors as sparse linear combinations
of basis elements--is widely used in machine learning, neuroscience, signal
processing, and statistics. This paper focuses on the large-scale matrix
factorization problem that consists of learning the basis set, adapting it to
specific data. Variations of this problem include dictionary learning in signal
processing, non-negative matrix factorization and sparse principal component
analysis. In this paper, we propose to address these tasks with a new online
optimization algorithm, based on stochastic approximations, which scales up
gracefully to large datasets with millions of training samples, and extends
naturally to various matrix factorization formulations, making it suitable for
a wide range of learning problems. A proof of convergence is presented, along
with experiments with natural images and genomic data demonstrating that it
leads to state-of-the-art performance in terms of speed and optimization for
both small and large datasets.
| no_new_dataset | 0.947769 |
1002.1156 | Vishal Goyal | M. Babu Reddy, L. S. S. Reddy | Dimensionality Reduction: An Empirical Study on the Usability of IFE-CF
(Independent Feature Elimination- by C-Correlation and F-Correlation)
Measures | International Journal of Computer Science Issues, IJCSI, Vol. 7,
Issue 1, No. 1, January 2010, http://ijcsi.org | International Journal of Computer Science Issues, IJCSI, Vol. 7,
Issue 1, No. 1, January 2010,
http://ijcsi.org/articles/Dimensionality-Reduction-An-Empirical-Study-on-the-Usability-of-IFE-CF-(Independent-Feature-Elimination-by-C-Correlation-and-F-Correlation)-Measures.php | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recent increase in dimensionality of data has thrown a great challenge to
the existing dimensionality reduction methods in terms of their effectiveness.
Dimensionality reduction has emerged as one of the significant preprocessing
steps in machine learning applications and has been effective in removing
inappropriate data, increasing learning accuracy, and improving
comprehensibility. Feature redundancy exercises great influence on the
performance of the classification process. Towards better classification
performance, this paper addresses the usefulness of truncating highly
correlated and redundant attributes. Here, an effort has been made to verify
the utility of dimensionality reduction by applying the LVQ (Learning Vector
Quantization) method on two benchmark datasets of 'Pima Indian Diabetic
patients' and 'Lung cancer patients'.
| [
{
"version": "v1",
"created": "Fri, 5 Feb 2010 08:59:05 GMT"
}
] | 2010-02-10T00:00:00 | [
[
"Reddy",
"M. Babu",
""
],
[
"Reddy",
"L. S. S.",
""
]
] | TITLE: Dimensionality Reduction: An Empirical Study on the Usability of IFE-CF
(Independent Feature Elimination- by C-Correlation and F-Correlation)
Measures
ABSTRACT: The recent increase in dimensionality of data has thrown a great challenge to
the existing dimensionality reduction methods in terms of their effectiveness.
Dimensionality reduction has emerged as one of the significant preprocessing
steps in machine learning applications and has been effective in removing
inappropriate data, increasing learning accuracy, and improving
comprehensibility. Feature redundancy exercises great influence on the
performance of the classification process. Towards better classification
performance, this paper addresses the usefulness of truncating highly
correlated and redundant attributes. Here, an effort has been made to verify
the utility of dimensionality reduction by applying the LVQ (Learning Vector
Quantization) method on two benchmark datasets of 'Pima Indian Diabetic
patients' and 'Lung cancer patients'.
| no_new_dataset | 0.949295 |
1002.1104 | Fabio Vandin | Adam Kirsch, Michael Mitzenmacher, Andrea Pietracaprina, Geppino
Pucci, Eli Upfal, Fabio Vandin | An Efficient Rigorous Approach for Identifying Statistically Significant
Frequent Itemsets | A preliminary version of this work was presented in ACM PODS 2009. 20
pages, 0 figures | null | null | null | cs.DB cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As advances in technology allow for the collection, storage, and analysis of
vast amounts of data, the task of screening and assessing the significance of
discovered patterns is becoming a major challenge in data mining applications.
In this work, we address significance in the context of frequent itemset
mining. Specifically, we develop a novel methodology to identify a meaningful
support threshold s* for a dataset, such that the number of itemsets with
support at least s* represents a substantial deviation from what would be
expected in a random dataset with the same number of transactions and the same
individual item frequencies. These itemsets can then be flagged as
statistically significant with a small false discovery rate. We present
extensive experimental results to substantiate the effectiveness of our
methodology.
| [
{
"version": "v1",
"created": "Thu, 4 Feb 2010 23:33:47 GMT"
}
] | 2010-02-08T00:00:00 | [
[
"Kirsch",
"Adam",
""
],
[
"Mitzenmacher",
"Michael",
""
],
[
"Pietracaprina",
"Andrea",
""
],
[
"Pucci",
"Geppino",
""
],
[
"Upfal",
"Eli",
""
],
[
"Vandin",
"Fabio",
""
]
] | TITLE: An Efficient Rigorous Approach for Identifying Statistically Significant
Frequent Itemsets
ABSTRACT: As advances in technology allow for the collection, storage, and analysis of
vast amounts of data, the task of screening and assessing the significance of
discovered patterns is becoming a major challenge in data mining applications.
In this work, we address significance in the context of frequent itemset
mining. Specifically, we develop a novel methodology to identify a meaningful
support threshold s* for a dataset, such that the number of itemsets with
support at least s* represents a substantial deviation from what would be
expected in a random dataset with the same number of transactions and the same
individual item frequencies. These itemsets can then be flagged as
statistically significant with a small false discovery rate. We present
extensive experimental results to substantiate the effectiveness of our
methodology.
| no_new_dataset | 0.950732 |
1002.1144 | Vishal Goyal | M. Ramaswami, R. Bhaskaran | A CHAID Based Performance Prediction Model in Educational Data Mining | International Journal of Computer Science Issues, IJCSI, Vol. 7,
Issue 1, No. 1, January 2010,
http://ijcsi.org/articles/A-CHAID-Based-Performance-Prediction-Model-in-Educational-Data-Mining.php | International Journal of Computer Science Issues, IJCSI, Vol. 7,
Issue 1, No. 1, January 2010,
http://ijcsi.org/articles/A-CHAID-Based-Performance-Prediction-Model-in-Educational-Data-Mining.php | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The performance in higher secondary school education in India is a turning
point in the academic lives of all students. As this academic performance is
influenced by many factors, it is essential to develop a predictive data mining
model for students' performance, so as to identify the slow learners and study
the influence of the dominant factors on their academic performance. In the
present investigation, a survey cum experimental methodology was adopted to
generate a database and it was constructed from a primary and a secondary
source. While the primary data was collected from the regular students, the
secondary data was gathered from the school and office of the Chief Educational
Officer (CEO). A total of 1000 datasets of the year 2006 from five different
schools in three different districts of Tamilnadu were collected. The raw data
was preprocessed in terms of filling up missing values, transforming values in
one form into another, and relevant attribute/variable selection. As a result,
we had 772 student records, which were used for CHAID prediction model
construction. A set of prediction rules was extracted from the CHAID prediction
model, and the efficiency of the generated CHAID prediction model was evaluated.
The accuracy of the present model was compared with that of another model and
has been found to be satisfactory.
| [
{
"version": "v1",
"created": "Fri, 5 Feb 2010 08:27:17 GMT"
}
] | 2010-02-08T00:00:00 | [
[
"Ramaswami",
"M.",
""
],
[
"Bhaskaran",
"R.",
""
]
] | TITLE: A CHAID Based Performance Prediction Model in Educational Data Mining
ABSTRACT: The performance in higher secondary school education in India is a turning
point in the academic lives of all students. As this academic performance is
influenced by many factors, it is essential to develop a predictive data mining
model for students' performance, so as to identify the slow learners and study
the influence of the dominant factors on their academic performance. In the
present investigation, a survey cum experimental methodology was adopted to
generate a database and it was constructed from a primary and a secondary
source. While the primary data was collected from the regular students, the
secondary data was gathered from the school and office of the Chief Educational
Officer (CEO). A total of 1000 datasets of the year 2006 from five different
schools in three different districts of Tamilnadu were collected. The raw data
was preprocessed in terms of filling up missing values, transforming values in
one form into another, and relevant attribute/variable selection. As a result,
we had 772 student records, which were used for CHAID prediction model
construction. A set of prediction rules was extracted from the CHAID prediction
model, and the efficiency of the generated CHAID prediction model was evaluated.
The accuracy of the present model was compared with that of another model and
has been found to be satisfactory.
| no_new_dataset | 0.875787 |
1002.0414 | Dakshina Ranjan Kisku | Dakshina Ranjan Kisku, Phalguni Gupta, Jamuna Kanta Sing | Feature Level Fusion of Biometrics Cues: Human Identification with
Doddingtons Caricature | 8 pages, 3 figures | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/3.0/ | This paper presents a multimodal biometric system of fingerprint and ear
biometrics. Scale Invariant Feature Transform (SIFT) descriptor-based feature
sets extracted from fingerprint and ear are fused. The fused set is encoded by
a K-medoids partitioning approach, yielding a smaller number of feature points
in the set. K-medoids partitions the whole dataset into clusters so as to
minimize the error between the data points belonging to a cluster and its
center. The reduced feature set is used for matching between two biometric
sets. Matching scores are generated using the wolf-lamb user-dependent feature
weighting scheme introduced by Doddington. The technique is tested to exhibit
its robust performance.
| [
{
"version": "v1",
"created": "Tue, 2 Feb 2010 08:12:23 GMT"
}
] | 2010-02-03T00:00:00 | [
[
"Kisku",
"Dakshina Ranjan",
""
],
[
"Gupta",
"Phalguni",
""
],
[
"Sing",
"Jamuna Kanta",
""
]
] | TITLE: Feature Level Fusion of Biometrics Cues: Human Identification with
Doddingtons Caricature
ABSTRACT: This paper presents a multimodal biometric system of fingerprint and ear
biometrics. Scale Invariant Feature Transform (SIFT) descriptor-based feature
sets extracted from fingerprint and ear are fused. The fused set is encoded by
a K-medoids partitioning approach, yielding a smaller number of feature points
in the set. K-medoids partitions the whole dataset into clusters so as to
minimize the error between the data points belonging to a cluster and its
center. The reduced feature set is used for matching between two biometric
sets. Matching scores are generated using the wolf-lamb user-dependent feature
weighting scheme introduced by Doddington. The technique is tested to exhibit
its robust performance.
| no_new_dataset | 0.95297 |
0912.2548 | Grigorios Loukides | Grigorios Loukides, Aris Gkoulalas-Divanis and Bradley Malin | Towards Utility-driven Anonymization of Transactions | null | null | null | null | cs.DB cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Publishing person-specific transactions in an anonymous form is increasingly
required by organizations. Recent approaches ensure that potentially
identifying information (e.g., a set of diagnosis codes) cannot be used to link
published transactions to persons' identities, but all are limited in
application because they incorporate coarse privacy requirements (e.g.,
protecting a certain set of m diagnosis codes requires protecting all m-sized
sets), do not integrate utility requirements, and tend to explore a small
portion of the solution space. In this paper, we propose a more general
framework for anonymizing transactional data under specific privacy and utility
requirements. We model such requirements as constraints, investigate how these
constraints can be specified, and propose COAT (COnstraint-based Anonymization
of Transactions), an algorithm that anonymizes transactions using a flexible
hierarchy-free generalization scheme to meet the specified constraints.
Experiments with benchmark datasets verify that COAT significantly outperforms
the current state-of-the-art algorithm in terms of data utility, while being
comparable in terms of efficiency. The effectiveness of our approach is also
demonstrated in a real-world scenario, which requires disseminating a private,
patient-specific transactional dataset in a way that preserves both privacy and
utility in intended studies.
| [
{
"version": "v1",
"created": "Sun, 13 Dec 2009 23:30:24 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Jan 2010 05:26:00 GMT"
}
] | 2010-01-26T00:00:00 | [
[
"Loukides",
"Grigorios",
""
],
[
"Gkoulalas-Divanis",
"Aris",
""
],
[
"Malin",
"Bradley",
""
]
] | TITLE: Towards Utility-driven Anonymization of Transactions
ABSTRACT: Publishing person-specific transactions in an anonymous form is increasingly
required by organizations. Recent approaches ensure that potentially
identifying information (e.g., a set of diagnosis codes) cannot be used to link
published transactions to persons' identities, but all are limited in
application because they incorporate coarse privacy requirements (e.g.,
protecting a certain set of m diagnosis codes requires protecting all m-sized
sets), do not integrate utility requirements, and tend to explore a small
portion of the solution space. In this paper, we propose a more general
framework for anonymizing transactional data under specific privacy and utility
requirements. We model such requirements as constraints, investigate how these
constraints can be specified, and propose COAT (COnstraint-based Anonymization
of Transactions), an algorithm that anonymizes transactions using a flexible
hierarchy-free generalization scheme to meet the specified constraints.
Experiments with benchmark datasets verify that COAT significantly outperforms
the current state-of-the-art algorithm in terms of data utility, while being
comparable in terms of efficiency. The effectiveness of our approach is also
demonstrated in a real-world scenario, which requires disseminating a private,
patient-specific transactional dataset in a way that preserves both privacy and
utility in intended studies.
| no_new_dataset | 0.9463 |
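The flavor of COAT's constraint checking and hierarchy-free generalization can be illustrated with a toy sketch; the threshold k, the merge rule, and the diagnosis codes below are assumptions for illustration, not the published algorithm.

def violates(transactions, protected, k):
    """A privacy constraint is violated when a protected itemset (e.g. a set of
    diagnosis codes) appears in at least one but fewer than k transactions,
    since it could then be linked to a small group of individuals."""
    support = sum(1 for t in transactions if protected <= t)
    return 0 < support < k

def generalize(transactions, a, b, merged):
    """Hierarchy-free generalization: replace items a and b by one merged item
    everywhere, making the two original values indistinguishable."""
    out = []
    for t in transactions:
        s = set(t)
        if a in s or b in s:
            s -= {a, b}
            s.add(merged)
        out.append(frozenset(s))
    return out

data = [frozenset(t) for t in ({"flu", "hiv"}, {"hiv"}, {"flu"}, {"flu", "cold"})]
print(violates(data, frozenset({"hiv"}), k=3))        # True: supported by only 2 transactions
data = generalize(data, "hiv", "cold", "infection")   # merge two codes into a coarser one
print(violates(data, frozenset({"infection"}), k=3))  # False: now supported by 3 transactions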
1001.3824 | Federico Sacerdoti MSc | Federico D. Sacerdoti | Performance and Fault Tolerance in the StoreTorrent Parallel Filesystem | 13 pages, 7 figures | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With a goal of supporting the timely and cost-effective analysis of Terabyte
datasets on commodity components, we present and evaluate StoreTorrent, a
simple distributed filesystem with integrated fault tolerance for efficient
handling of small data records. Our contributions include an application-OS
pipelining technique and metadata structure to increase small write and read
performance by a factor of 1-10, and the use of peer-to-peer communication of
replica-location indexes to avoid transferring data during parallel analysis
even in a degraded state. We evaluated StoreTorrent, PVFS, and Gluster
filesystems using 70 storage nodes and 560 parallel clients on an 8-core/node
Ethernet cluster with directly attached SATA disks. StoreTorrent performed
parallel small writes at an aggregate rate of 1.69 GB/s, and supported reads
over the network at 8.47 GB/s. We ported a parallel analysis task and
demonstrate that it achieved parallel reads at the full aggregate speed of the
storage nodes' local filesystems.
| [
{
"version": "v1",
"created": "Thu, 21 Jan 2010 15:17:30 GMT"
}
] | 2010-01-22T00:00:00 | [
[
"Sacerdoti",
"Federico D.",
""
]
] | TITLE: Performance and Fault Tolerance in the StoreTorrent Parallel Filesystem
ABSTRACT: With a goal of supporting the timely and cost-effective analysis of Terabyte
datasets on commodity components, we present and evaluate StoreTorrent, a
simple distributed filesystem with integrated fault tolerance for efficient
handling of small data records. Our contributions include an application-OS
pipelining technique and metadata structure to increase small write and read
performance by a factor of 1-10, and the use of peer-to-peer communication of
replica-location indexes to avoid transferring data during parallel analysis
even in a degraded state. We evaluated StoreTorrent, PVFS, and Gluster
filesystems using 70 storage nodes and 560 parallel clients on an 8-core/node
Ethernet cluster with directly attached SATA disks. StoreTorrent performed
parallel small writes at an aggregate rate of 1.69 GB/s, and supported reads
over the network at 8.47 GB/s. We ported a parallel analysis task and
demonstrate that it achieved parallel reads at the full aggregate speed of the
storage nodes' local filesystems.
| no_new_dataset | 0.940298 |
1001.2921 | Loet Leydesdorff | Willem Halffman and Loet Leydesdorff | Is Inequality Among Universities Increasing? Gini Coefficients and the
Elusive Rise of Elite Universities | null | null | null | null | physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the unintended consequences of the New Public Management (NPM) in
universities is often feared to be a division between elite institutions
focused on research and large institutions with teaching missions. However,
institutional isomorphisms provide counter-incentives. For example, university
rankings focus on certain output parameters such as publications, but not on
others (e.g., patents). In this study, we apply Gini coefficients to university
rankings in order to assess whether universities are becoming more unequal, at
the level of both the world and individual nations. Our results do not support
the thesis that universities are becoming more unequal. If anything, we
predominantly find homogenization, both at the level of global comparisons
and nationally. In a more restricted dataset (using only publications in the
natural and life sciences), we find increasing inequality for those countries
that used NPM during the 1990s, but not during the 2000s. Our findings suggest
that increased output steering from the policy side leads to global
conformity with performance standards.
| [
{
"version": "v1",
"created": "Sun, 17 Jan 2010 19:36:29 GMT"
}
] | 2010-01-19T00:00:00 | [
[
"Halffman",
"Willem",
""
],
[
"Leydesdorff",
"Loet",
""
]
] | TITLE: Is Inequality Among Universities Increasing? Gini Coefficients and the
Elusive Rise of Elite Universities
ABSTRACT: One of the unintended consequences of the New Public Management (NPM) in
universities is often feared to be a division between elite institutions
focused on research and large institutions with teaching missions. However,
institutional isomorphisms provide counter-incentives. For example, university
rankings focus on certain output parameters such as publications, but not on
others (e.g., patents). In this study, we apply Gini coefficients to university
rankings in order to assess whether universities are becoming more unequal, at
the level of both the world and individual nations. Our results do not support
the thesis that universities are becoming more unequal. If anything, we
predominantly find homogenization, both at the level of global comparisons
and nationally. In a more restricted dataset (using only publications in the
natural and life sciences), we find increasing inequality for those countries
that used NPM during the 1990s, but not during the 2000s. Our findings suggest
that increased output steering from the policy side leads to global
conformity with performance standards.
| no_new_dataset | 0.938969 |
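The Gini coefficient applied to university rankings here has a standard closed form over a sorted sample; a brief sketch follows, where the per-university publication counts are made up for illustration.

import numpy as np

def gini(x):
    """Gini coefficient of a non-negative sample; 0 = perfect equality."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    # Standard formula over the ascending sort: G = 2*sum(i*x_i)/(n*sum(x)) - (n+1)/n
    return 2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

pubs_decade_1 = [120, 130, 140, 150, 900]   # hypothetical per-university output
pubs_decade_2 = [300, 310, 320, 330, 340]
print(gini(pubs_decade_1), gini(pubs_decade_2))  # inequality drops in the second list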
1001.2922 | Louis De Barros | Louis De Barros (UCD), Christopher J. Bean (UCD), Ivan Lokmer (UCD),
Gilberto Saccorotti, Luciano Zucarello, Gareth O'Brien (UCD), Jean-Philippe
M\'etaxian (LGIT), Domenico Patan\`e | Source geometry from exceptionally high resolution long period event
observations at Mt Etna during the 2008 eruption | null | Geophysical Research Letters 36 (2009) L24305 | 10.1029/2009GL041273 | null | physics.geo-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | During the second half of June, 2008, 50 broadband seismic stations were
deployed on Mt Etna volcano in close proximity to the summit, allowing us to
observe seismic activity with exceptionally high resolution. 129 long period
events (LP) with dominant frequencies ranging between 0.3 and 1.2 Hz, were
extracted from this dataset. These events form two families of similar
waveforms with different temporal distributions. Event locations are performed
by cross-correlating signals for all pairs of stations in a two-step scheme. In
the first step, the absolute location of the centre of the clusters was found.
In the second step, all events are located using this position. The hypocentres
are found at shallow depths (20 to 700 m deep) below the summit craters. The
very high location resolution allows us to detect the temporal migration of the
events along a dike-like structure and two pipe-shaped bodies, yielding an
unprecedented view of some elements of the shallow plumbing system at Mount
Etna. These events do not seem to be a direct indicator of the ongoing lava
flow or magma upwelling.
| [
{
"version": "v1",
"created": "Sun, 17 Jan 2010 19:43:41 GMT"
}
] | 2010-01-19T00:00:00 | [
[
"De Barros",
"Louis",
"",
"UCD"
],
[
"Bean",
"Christopher J.",
"",
"UCD"
],
[
"Lokmer",
"Ivan",
"",
"UCD"
],
[
"Saccorotti",
"Gilberto",
"",
"UCD"
],
[
"Zucarello",
"Luciano",
"",
"UCD"
],
[
"O'Brien",
"Gareth",
"",
"UCD"
],
[
"Métaxian",
"Jean-Philippe",
"",
"LGIT"
],
[
"Patanè",
"Domenico",
""
]
] | TITLE: Source geometry from exceptionally high resolution long period event
observations at Mt Etna during the 2008 eruption
ABSTRACT: During the second half of June, 2008, 50 broadband seismic stations were
deployed on Mt Etna volcano in close proximity to the summit, allowing us to
observe seismic activity with exceptionally high resolution. 129 long period
events (LP) with dominant frequencies ranging between 0.3 and 1.2 Hz, were
extracted from this dataset. These events form two families of similar
waveforms with different temporal distributions. Event locations are performed
by cross-correlating signals for all pairs of stations in a two-step scheme. In
the first step, the absolute location of the centre of the clusters was found.
In the second step, all events are located using this position. The hypocentres
are found at shallow depths (20 to 700 m deep) below the summit craters. The
very high location resolution allows us to detect the temporal migration of the
events along a dike-like structure and two pipe-shaped bodies, yielding an
unprecedented view of some elements of the shallow plumbing system at Mount
Etna. These events do not seem to be a direct indicator of the ongoing lava
flow or magma upwelling.
| no_new_dataset | 0.89303 |
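The location scheme rests on cross-correlating event waveforms across station pairs to measure relative arrival times; below is a minimal sketch of that measurement on synthetic traces (the sampling rate, wavelet, and shift are assumptions, not the study's data).

import numpy as np

def cc_delay(a, b, fs):
    """Delay (in seconds) of trace b relative to trace a, read off the
    peak of the full cross-correlation of the demeaned signals."""
    cc = np.correlate(a - a.mean(), b - b.mean(), mode="full")
    return ((len(b) - 1) - np.argmax(cc)) / fs

fs = 100.0                                                  # assumed 100 Hz sampling
t = np.arange(0.0, 20.0, 1.0 / fs)
wavelet = np.sin(2 * np.pi * 0.7 * t) * np.exp(-0.3 * t)    # ~0.7 Hz LP-like pulse
station_a = wavelet
station_b = np.roll(wavelet, 50)                            # same event, arriving 0.5 s later
print(cc_delay(station_a, station_b, fs))                   # ~0.5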
1001.1221 | Paolo Piro | Paolo Piro, Richard Nock, Frank Nielsen, Michel Barlaud | Boosting k-NN for categorization of natural scenes | under revision for IJCV | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The k-nearest neighbors (k-NN) classification rule has proven extremely
successful in countless computer vision applications. For example, image
categorization often relies on uniform voting among the nearest prototypes in
the space of descriptors. In spite of its good properties, the classic k-NN
rule suffers from high variance when dealing with sparse prototype datasets in
high dimensions. A few techniques have been proposed to improve k-NN
classification, which rely on either deforming the nearest neighborhood
relationship or modifying the input space. In this paper, we propose a novel
boosting algorithm, called UNN (Universal Nearest Neighbors), which induces
leveraged k-NN, thus generalizing the classic k-NN rule. We redefine the voting
rule as a strong classifier that linearly combines predictions from the k
closest prototypes. Weak classifiers are learned by UNN so as to minimize a
surrogate risk. A major feature of UNN is the ability to learn which prototypes
are the most relevant for a given class, thus allowing for effective data
reduction. Experimental results on the synthetic two-class dataset of Ripley
show that such a filtering strategy is able to reject "noisy" prototypes. We
carried out image categorization experiments on a database containing eight
classes of natural scenes. We show that our method outperforms significantly
the classic k-NN classification, while enabling significant reduction of the
computational cost by means of data filtering.
| [
{
"version": "v1",
"created": "Fri, 8 Jan 2010 08:30:51 GMT"
}
] | 2010-01-11T00:00:00 | [
[
"Piro",
"Paolo",
""
],
[
"Nock",
"Richard",
""
],
[
"Nielsen",
"Frank",
""
],
[
"Barlaud",
"Michel",
""
]
] | TITLE: Boosting k-NN for categorization of natural scenes
ABSTRACT: The k-nearest neighbors (k-NN) classification rule has proven extremely
successful in countless computer vision applications. For example, image
categorization often relies on uniform voting among the nearest prototypes in
the space of descriptors. In spite of its good properties, the classic k-NN
rule suffers from high variance when dealing with sparse prototype datasets in
high dimensions. A few techniques have been proposed to improve k-NN
classification, which rely on either deforming the nearest neighborhood
relationship or modifying the input space. In this paper, we propose a novel
boosting algorithm, called UNN (Universal Nearest Neighbors), which induces
leveraged k-NN, thus generalizing the classic k-NN rule. We redefine the voting
rule as a strong classifier that linearly combines predictions from the k
closest prototypes. Weak classifiers are learned by UNN so as to minimize a
surrogate risk. A major feature of UNN is the ability to learn which prototypes
are the most relevant for a given class, thus allowing for effective data
reduction. Experimental results on the synthetic two-class dataset of Ripley
show that such a filtering strategy is able to reject "noisy" prototypes. We
carried out image categorization experiments on a database containing eight
classes of natural scenes. We show that our method outperforms significantly
the classic k-NN classification, while enabling significant reduction of the
computational cost by means of data filtering.
| no_new_dataset | 0.948251 |
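The leveraged voting rule that UNN induces can be sketched as a weighted k-NN decision. This simplified version uses one scalar coefficient per prototype, whereas the paper learns class-wise coefficients by minimizing a surrogate risk; the prototypes and labels below are random placeholders.

import numpy as np

def leveraged_knn_predict(x, prototypes, proto_labels, alphas, k, n_classes):
    """Classic k-NN gives each of the k nearest prototypes one uniform vote;
    the leveraged rule instead weighs prototype j by a learned coefficient
    alphas[j], so irrelevant ("noisy") prototypes can be down-weighted."""
    d = np.linalg.norm(prototypes - x, axis=1)
    nn = np.argsort(d)[:k]
    scores = np.zeros(n_classes)
    for j in nn:
        scores[proto_labels[j]] += alphas[j]
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
P = rng.normal(size=(30, 2))       # placeholder prototypes (descriptors)
y = rng.integers(0, 2, size=30)    # their class labels
alpha = np.ones(30)                # uniform coefficients recover the classic k-NN vote
print(leveraged_knn_predict(np.zeros(2), P, y, alpha, k=5, n_classes=2))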
1001.1020 | Ping Li | Ping Li | An Empirical Evaluation of Four Algorithms for Multi-Class
Classification: Mart, ABC-Mart, Robust LogitBoost, and ABC-LogitBoost | null | null | null | null | cs.LG cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This empirical study is mainly devoted to comparing four tree-based boosting
algorithms: mart, abc-mart, robust logitboost, and abc-logitboost, for
multi-class classification on a variety of publicly available datasets. Some of
those datasets have been thoroughly tested in prior studies using a broad range
of classification algorithms including SVM, neural nets, and deep learning.
In terms of empirical classification errors, our experimental results
demonstrate:
1. Abc-mart considerably improves mart. 2. Abc-logitboost considerably
improves (robust) logitboost. 3. (Robust) logitboost considerably improves mart
on most datasets. 4. Abc-logitboost considerably improves abc-mart on most
datasets. 5. These four boosting algorithms (especially abc-logitboost)
outperform SVM on many datasets. 6. Compared to the best deep learning methods,
these four boosting algorithms (especially abc-logitboost) are competitive.
| [
{
"version": "v1",
"created": "Thu, 7 Jan 2010 06:34:21 GMT"
}
] | 2010-01-08T00:00:00 | [
[
"Li",
"Ping",
""
]
] | TITLE: An Empirical Evaluation of Four Algorithms for Multi-Class
Classification: Mart, ABC-Mart, Robust LogitBoost, and ABC-LogitBoost
ABSTRACT: This empirical study is mainly devoted to comparing four tree-based boosting
algorithms: mart, abc-mart, robust logitboost, and abc-logitboost, for
multi-class classification on a variety of publicly available datasets. Some of
those datasets have been thoroughly tested in prior studies using a broad range
of classification algorithms including SVM, neural nets, and deep learning.
In terms of empirical classification errors, our experimental results
demonstrate:
1. Abc-mart considerably improves mart. 2. Abc-logitboost considerably
improves (robust) logitboost. 3. (Robust) logitboost considerably improves mart
on most datasets. 4. Abc-logitboost considerably improves abc-mart on most
datasets. 5. These four boosting algorithms (especially abc-logitboost)
outperform SVM on many datasets. 6. Compared to the best deep learning methods,
these four boosting algorithms (especially abc-logitboost) are competitive.
| no_new_dataset | 0.949153 |
1001.1079 | Ricardo Silva | Ricardo Silva | Measuring Latent Causal Structure | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Discovering latent representations of the observed world has become
increasingly relevant in data analysis. Much of the effort concentrates on
building latent variables which can be used in prediction problems, such as
classification and regression. A related goal of learning latent structure from
data is that of identifying which hidden common causes generate the
observations, such as in applications that require predicting the effect of
policies. This will be the main problem tackled in our contribution: given a
dataset of indicators assumed to be generated by unknown and unmeasured common
causes, we wish to discover which hidden common causes are those, and how they
generate our data. This is possible under the assumption that observed
variables are linear functions of the latent causes with additive noise.
Previous results in the literature present solutions for the case where each
observed variable is a noisy function of a single latent variable. We show how
to extend the existing results for some cases where observed variables measure
more than one latent variable.
| [
{
"version": "v1",
"created": "Thu, 7 Jan 2010 14:41:21 GMT"
}
] | 2010-01-08T00:00:00 | [
[
"Silva",
"Ricardo",
""
]
] | TITLE: Measuring Latent Causal Structure
ABSTRACT: Discovering latent representations of the observed world has become
increasingly relevant in data analysis. Much of the effort concentrates on
building latent variables which can be used in prediction problems, such as
classification and regression. A related goal of learning latent structure from
data is that of identifying which hidden common causes generate the
observations, such as in applications that require predicting the effect of
policies. This will be the main problem tackled in our contribution: given a
dataset of indicators assumed to be generated by unknown and unmeasured common
causes, we wish to discover which hidden common causes are those, and how they
generate our data. This is possible under the assumption that observed
variables are linear functions of the latent causes with additive noise.
Previous results in the literature present solutions for the case where each
observed variable is a noisy function of a single latent variable. We show how
to extend the existing results for some cases where observed variables measure
more than one latent variable.
| no_new_dataset | 0.944228 |
0904.2037 | Chunhua Shen | Chunhua Shen and Hanxi Li | Boosting through Optimization of Margin Distributions | 9 pages. To publish/Published in IEEE Transactions on Neural
Networks, 21(7), July 2010 | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Boosting has attracted much research attention in the past decade. The
success of boosting algorithms may be interpreted in terms of the margin
theory. Recently it has been shown that generalization error of classifiers can
be obtained by explicitly taking the margin distribution of the training data
into account. Most of the current boosting algorithms in practice usually
optimize a convex loss function and do not make use of the margin
distribution. In this work we design a new boosting algorithm, termed
margin-distribution boosting (MDBoost), which directly maximizes the average
margin and minimizes the margin variance simultaneously. This way the margin
distribution is optimized. A totally-corrective optimization algorithm based on
column generation is proposed to implement MDBoost. Experiments on UCI datasets
show that MDBoost outperforms AdaBoost and LPBoost in most cases.
| [
{
"version": "v1",
"created": "Tue, 14 Apr 2009 01:57:12 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Nov 2009 02:24:51 GMT"
},
{
"version": "v3",
"created": "Wed, 6 Jan 2010 09:00:26 GMT"
}
] | 2010-01-06T00:00:00 | [
[
"Shen",
"Chunhua",
""
],
[
"Li",
"Hanxi",
""
]
] | TITLE: Boosting through Optimization of Margin Distributions
ABSTRACT: Boosting has attracted much research attention in the past decade. The
success of boosting algorithms may be interpreted in terms of the margin
theory. Recently it has been shown that generalization error of classifiers can
be obtained by explicitly taking the margin distribution of the training data
into account. Most of the current boosting algorithms in practice usually
optimize a convex loss function and do not make use of the margin
distribution. In this work we design a new boosting algorithm, termed
margin-distribution boosting (MDBoost), which directly maximizes the average
margin and minimizes the margin variance simultaneously. This way the margin
distribution is optimized. A totally-corrective optimization algorithm based on
column generation is proposed to implement MDBoost. Experiments on UCI datasets
show that MDBoost outperforms AdaBoost and LPBoost in most cases.
| no_new_dataset | 0.941708 |
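MDBoost's objective, maximizing the average margin while minimizing margin variance, can be written down directly. The sketch below only evaluates that objective for a fixed ensemble (the trade-off parameter and the random data are assumptions); the paper optimizes it with a totally-corrective column-generation procedure, which is not reproduced here.

import numpy as np

def margin_objective(H, y, w, lam=1.0):
    """H: (n_samples, n_weak) outputs of the weak learners in {-1, +1};
    y: labels in {-1, +1}; w: nonnegative weak-learner weights.
    Sample i's margin is y_i * (H @ w)_i; the objective rewards a large mean
    margin and penalizes the margin variance, with trade-off lam."""
    margins = y * (H @ w)
    return margins.mean() - lam * margins.var()

rng = np.random.default_rng(0)
H = rng.choice([-1.0, 1.0], size=(100, 8))   # placeholder weak-learner outputs
y = rng.choice([-1.0, 1.0], size=100)
w = np.full(8, 1.0 / 8.0)
print(margin_objective(H, y, w))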
0912.5426 | Xiaokui Xiao | Xiaokui Xiao, Ke Yi, Yufei Tao | The Hardness and Approximation Algorithms for L-Diversity | EDBT 2010 | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The existing solutions to privacy preserving publication can be classified
into the theoretical and heuristic categories. The former guarantees provably
low information loss, whereas the latter incurs gigantic loss in the worst
case, but is shown empirically to perform well on many real inputs. While
numerous heuristic algorithms have been developed to satisfy advanced privacy
principles such as l-diversity, t-closeness, etc., the theoretical category is
currently limited to k-anonymity which is the earliest principle known to have
severe vulnerability to privacy attacks. Motivated by this, we present the
first theoretical study on l-diversity, a popular principle that is widely
adopted in the literature. First, we show that optimal l-diverse generalization
is NP-hard even when there are only 3 distinct sensitive values in the
microdata. Then, an (l*d)-approximation algorithm is developed, where d is the
dimensionality of the underlying dataset. This is the first known algorithm
with a non-trivial bound on information loss. Extensive experiments with real
datasets validate the effectiveness and efficiency of the proposed solution.
| [
{
"version": "v1",
"created": "Wed, 30 Dec 2009 08:31:10 GMT"
}
] | 2009-12-31T00:00:00 | [
[
"Xiao",
"Xiaokui",
""
],
[
"Yi",
"Ke",
""
],
[
"Tao",
"Yufei",
""
]
] | TITLE: The Hardness and Approximation Algorithms for L-Diversity
ABSTRACT: The existing solutions to privacy preserving publication can be classified
into the theoretical and heuristic categories. The former guarantees provably
low information loss, whereas the latter incurs gigantic loss in the worst
case, but is shown empirically to perform well on many real inputs. While
numerous heuristic algorithms have been developed to satisfy advanced privacy
principles such as l-diversity, t-closeness, etc., the theoretical category is
currently limited to k-anonymity which is the earliest principle known to have
severe vulnerability to privacy attacks. Motivated by this, we present the
first theoretical study on l-diversity, a popular principle that is widely
adopted in the literature. First, we show that optimal l-diverse generalization
is NP-hard even when there are only 3 distinct sensitive values in the
microdata. Then, an (l*d)-approximation algorithm is developed, where d is the
dimensionality of the underlying dataset. This is the first known algorithm
with a non-trivial bound on information loss. Extensive experiments with real
datasets validate the effectiveness and efficiency of the proposed solution.
| no_new_dataset | 0.945701 |
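The l-diversity condition itself is easy to state: every equivalence class (records sharing the same generalized quasi-identifiers) must carry at least l well-represented sensitive values. A sketch of the simplest, distinct-l-diversity reading follows; the toy rows are invented, and the hard part the paper addresses, choosing generalizations optimally, is not attempted here.

from collections import defaultdict

def is_l_diverse(records, qi_index, sa_index, l):
    """records: tuples; qi_index: positions of the (generalized) quasi-identifiers;
    sa_index: position of the sensitive attribute. Distinct l-diversity holds
    when every equivalence class contains >= l distinct sensitive values."""
    classes = defaultdict(set)
    for r in records:
        key = tuple(r[i] for i in qi_index)
        classes[key].add(r[sa_index])
    return all(len(vals) >= l for vals in classes.values())

rows = [("30-40", "M", "flu"), ("30-40", "M", "hiv"),
        ("30-40", "M", "cold"), ("20-30", "F", "flu"), ("20-30", "F", "hiv")]
print(is_l_diverse(rows, qi_index=(0, 1), sa_index=2, l=3))  # False: one class has only 2 values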
0903.3257 | Marcus Hutter | Ke Zhang and Marcus Hutter and Huidong Jin | A New Local Distance-Based Outlier Detection Approach for Scattered
Real-World Data | 15 LaTeX pages, 7 figures, 2 tables, 1 algorithm, 2 theorems | Proc. 13th Pacific-Asia Conf. on Knowledge Discovery and Data
Mining (PAKDD 2009) pages 813-822 | null | null | cs.LG cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Detecting outliers which are grossly different from or inconsistent with the
remaining dataset is a major challenge in real-world KDD applications. Existing
outlier detection methods are ineffective on scattered real-world datasets due
to implicit data patterns and parameter setting issues. We define a novel
"Local Distance-based Outlier Factor" (LDOF) to measure the {outlier-ness} of
objects in scattered datasets which addresses these issues. LDOF uses the
relative location of an object to its neighbours to determine the degree to
which the object deviates from its neighbourhood. Properties of LDOF are
theoretically analysed including LDOF's lower bound and its false-detection
probability, as well as parameter settings. In order to facilitate parameter
settings in real-world applications, we employ a top-n technique in our outlier
detection approach, where only the objects with the highest LDOF values are
regarded as outliers. Compared to conventional approaches (such as top-n KNN
and top-n LOF), our method top-n LDOF is more effective at detecting outliers
in scattered data. It is also easier to set parameters, since its performance
is relatively stable over a large range of parameter values, as illustrated by
experimental results on both real-world and synthetic datasets.
| [
{
"version": "v1",
"created": "Wed, 18 Mar 2009 23:50:29 GMT"
}
] | 2009-12-30T00:00:00 | [
[
"Zhang",
"Ke",
""
],
[
"Hutter",
"Marcus",
""
],
[
"Jin",
"Huidong",
""
]
] | TITLE: A New Local Distance-Based Outlier Detection Approach for Scattered
Real-World Data
ABSTRACT: Detecting outliers which are grossly different from or inconsistent with the
remaining dataset is a major challenge in real-world KDD applications. Existing
outlier detection methods are ineffective on scattered real-world datasets due
to implicit data patterns and parameter setting issues. We define a novel
"Local Distance-based Outlier Factor" (LDOF) to measure the {outlier-ness} of
objects in scattered datasets which addresses these issues. LDOF uses the
relative location of an object to its neighbours to determine the degree to
which the object deviates from its neighbourhood. Properties of LDOF are
theoretically analysed including LDOF's lower bound and its false-detection
probability, as well as parameter settings. In order to facilitate parameter
settings in real-world applications, we employ a top-n technique in our outlier
detection approach, where only the objects with the highest LDOF values are
regarded as outliers. Compared to conventional approaches (such as top-n KNN
and top-n LOF), our method top-n LDOF is more effective at detecting outliers
in scattered data. It is also easier to set parameters, since its performance
is relatively stable over a large range of parameter values, as illustrated by
experimental results on both real-world and synthetic datasets.
| no_new_dataset | 0.951729 |
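LDOF, as defined in the paper, is the ratio of (i) the average distance from an object to its k nearest neighbours to (ii) the average pairwise distance among those neighbours; a direct sketch (the 2-D data and planted outlier are illustrative):

import numpy as np

def ldof(X, k):
    """Local Distance-based Outlier Factor for each row of X.
    A score well above 1 means the object sits farther from its neighbourhood
    than the neighbours sit from each other."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    scores = np.empty(n)
    for p in range(n):
        nn = np.argsort(D[p])[1:k + 1]        # k nearest neighbours, excluding self
        d_bar = D[p, nn].mean()               # avg distance from x_p to its neighbours
        inner = D[np.ix_(nn, nn)]
        D_bar = inner.sum() / (k * (k - 1))   # avg pairwise distance among the neighbours
        scores[p] = d_bar / D_bar
    return scores

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(50, 2)), [[8.0, 8.0]]])   # one planted outlier
s = ldof(X, k=10)
print(np.argsort(s)[-3:])   # top-n LDOF: index 50 (the planted point) should rank highest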
0912.3982 | William Jackson | D. Bhanu, S. Pavai Madeshwari | Retail Market analysis in targeting sales based on Consumer Behaviour
using Fuzzy Clustering - A Rule Based Mode | null | Journal of Computing, Volume 1, Issue 1, pp 92-99, December 2009 | null | null | cs.OH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Product Bundling and offering products to customers is of critical importance
in retail marketing. In general, product bundling and offering products to
customers involves two main issues, namely identification of product taste
according to demography and product evaluation and selection to increase sales.
The former helps to identify, analyze and understand customer needs according
to the demographic characteristics and correspondingly transform them into a
set of specifications and offerings for people. The latter concerns how
to determine the best product strategy and offerings for the customer in
helping the retail market to improve their sales. Existing research has focused
only on identifying patterns for a particular dataset and for a particular
setting. This work aims to develop an explicit decision support for the
retailers to improve their product segmentation for different settings based on
the people characteristics and thereby promoting sales by efficient knowledge
discovery from the existing sales and product records. The work presents a
framework, which models an association relation mapping between the customers
and the clusters of products they purchase in an existing location and helps in
finding rules for a new location. The methodology is based on the integration
of popular data mining approaches such as clustering and association rule
mining. It focuses on the discovery of rules that vary according to the
economic and demographic characteristics and concentrates on marketing of
products based on the population.
| [
{
"version": "v1",
"created": "Sun, 20 Dec 2009 05:18:57 GMT"
}
] | 2009-12-22T00:00:00 | [
[
"Bhanu",
"D.",
""
],
[
"Madeshwari",
"S. Pavai",
""
]
] | TITLE: Retail Market analysis in targeting sales based on Consumer Behaviour
using Fuzzy Clustering - A Rule Based Mode
ABSTRACT: Product Bundling and offering products to customers is of critical importance
in retail marketing. In general, product bundling and offering products to
customers involves two main issues, namely identification of product taste
according to demography and product evaluation and selection to increase sales.
The former helps to identify, analyze and understand customer needs according
to the demographic characteristics and correspondingly transform them into a
set of specifications and offerings for people. The latter concerns how
to determine the best product strategy and offerings for the customer in
helping the retail market to improve their sales. Existing research has focused
only on identifying patterns for a particular dataset and for a particular
setting. This work aims to develop an explicit decision support for the
retailers to improve their product segmentation for different settings based on
the people characteristics and thereby promoting sales by efficient knowledge
discovery from the existing sales and product records. The work presents a
framework, which models an association relation mapping between the customers
and the clusters of products they purchase in an existing location and helps in
finding rules for a new location. The methodology is based on the integration
of popular data mining approaches such as clustering and association rule
mining. It focuses on the discovery of rules that vary according to the
economic and demographic characteristics and concentrates on marketing of
products based on the population.
| no_new_dataset | 0.94801 |
0912.4141 | Felix Moya-Anegon Dr | Borja Gonzalez-Pereira (1), Vicente Guerrero-Bote (1) and Felix
Moya-Anegon (2) ((1) University of Extremadura, Department of Information and
Communication, Scimago Group, Spain (2) CSIC, CCHS, IPP, Scimago Group Spain) | The SJR indicator: A new indicator of journals' scientific prestige | 21 pages with graphs and tables | null | null | null | cs.DL physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes an indicator of journals' scientific prestige, the SJR
indicator, for ranking scholarly journals based on citation weighting schemes
and eigenvector centrality to be used in complex and heterogeneous citation
networks such as Scopus. Its computation methodology is described, and the results
after implementing the indicator over the Scopus 2007 dataset are compared to an
ad-hoc Journal Impact Factor both generally and inside specific scientific
areas. The results showed that SJR indicator and JIF distributions fitted well
to a power law distribution and that both metrics were strongly correlated,
although there were also major changes in rank. There was an observable general
trend that might indicate that the SJR indicator lowered the values of journals
whose citedness was greater than would correspond to their scientific
influence.
| [
{
"version": "v1",
"created": "Mon, 21 Dec 2009 11:32:08 GMT"
}
] | 2009-12-22T00:00:00 | [
[
"Gonzalez-Pereira",
"Borja",
""
],
[
"Guerrero-Bote",
"Vicente",
""
],
[
"Moya-Anegon",
"Felix",
""
]
] | TITLE: The SJR indicator: A new indicator of journals' scientific prestige
ABSTRACT: This paper proposes an indicator of journals' scientific prestige, the SJR
indicator, for ranking scholarly journals based on citation weighting schemes
and eigenvector centrality to be used in complex and heterogeneous citation
networks such as Scopus. Its computation methodology is described, and the results
after implementing the indicator over the Scopus 2007 dataset are compared to an
ad-hoc Journal Impact Factor both generally and inside specific scientific
areas. The results showed that SJR indicator and JIF distributions fitted well
to a power law distribution and that both metrics were strongly correlated,
although there were also major changes in rank. There was an observable general
trend that might indicate that the SJR indicator lowered the values of journals
whose citedness was greater than would correspond to their scientific
influence.
| no_new_dataset | 0.945349 |
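The eigenvector-centrality core of such an indicator can be sketched as a PageRank-style power iteration over a journal citation matrix; the damping factor, the normalization, and the toy matrix below are assumptions, not the published SJR computation.

import numpy as np

def prestige(C, d=0.85, tol=1e-12):
    """C[i, j] = citations from journal i to journal j. Returns a prestige
    vector: a journal is prestigious when cited by prestigious journals."""
    n = C.shape[0]
    row = C.sum(axis=1, keepdims=True)
    # Row-stochastic transition matrix; journals with no references spread uniformly.
    P = np.divide(C, row, out=np.full_like(C, 1.0 / n), where=row > 0)
    p = np.full(n, 1.0 / n)
    while True:
        p_new = (1 - d) / n + d * (p @ P)
        if np.abs(p_new - p).sum() < tol:
            return p_new
        p = p_new

C = np.array([[0, 5, 1], [2, 0, 8], [1, 9, 0]], dtype=float)  # toy citation counts
print(prestige(C))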
0912.2430 | Feng Xia | Feng Xia, Zhenzhen Xu, Lin Yao, Weifeng Sun, Mingchu Li | Prediction-Based Data Transmission for Energy Conservation in Wireless
Body Sensors | To appear in The Int Workshop on Ubiquitous Body Sensor Networks
(UBSN), in conjunction with the 5th Annual Int Wireless Internet Conf
(WICON), Singapore, March 2010 | null | null | null | cs.NI cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Wireless body sensors are becoming popular in healthcare applications. Since
they are either worn or implanted into the human body, these sensors must be very
small in size and light in weight. The energy consequently becomes an extremely
scarce resource, and energy conservation turns into a first-class design issue
for body sensor networks (BSNs). This paper deals with this issue by taking
into account the unique characteristics of BSNs in contrast to conventional
wireless sensor networks (WSNs) used for, e.g., environment monitoring. A
prediction-based data transmission approach suitable for BSNs is presented,
which combines a dual prediction framework and a low-complexity prediction
algorithm that takes advantage of PID (proportional-integral-derivative)
control. Both the framework and the algorithm are generic, making the proposed
approach widely applicable. The effectiveness of the approach is verified
through simulations using real-world health monitoring datasets.
| [
{
"version": "v1",
"created": "Sat, 12 Dec 2009 16:30:14 GMT"
}
] | 2009-12-15T00:00:00 | [
[
"Xia",
"Feng",
""
],
[
"Xu",
"Zhenzhen",
""
],
[
"Yao",
"Lin",
""
],
[
"Sun",
"Weifeng",
""
],
[
"Li",
"Mingchu",
""
]
] | TITLE: Prediction-Based Data Transmission for Energy Conservation in Wireless
Body Sensors
ABSTRACT: Wireless body sensors are becoming popular in healthcare applications. Since
they are either worn or implanted into the human body, these sensors must be very
small in size and light in weight. The energy consequently becomes an extremely
scarce resource, and energy conservation turns into a first-class design issue
for body sensor networks (BSNs). This paper deals with this issue by taking
into account the unique characteristics of BSNs in contrast to conventional
wireless sensor networks (WSNs) used for, e.g., environment monitoring. A
prediction-based data transmission approach suitable for BSNs is presented,
which combines a dual prediction framework and a low-complexity prediction
algorithm that takes advantage of PID (proportional-integral-derivative)
control. Both the framework and the algorithm are generic, making the proposed
approach widely applicable. The effectiveness of the approach is verified
through simulations using real-world health monitoring datasets.
| no_new_dataset | 0.955236 |
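In a dual prediction scheme, sensor and sink run the same predictor and the sensor transmits only when the prediction error exceeds a tolerance. The sketch below pairs that loop with a predictor built from PID terms, in the spirit of the abstract; the gains, tolerance, and heart-rate trace are all assumptions, and this is not the paper's exact framework.

import math

class PIDPredictor:
    """Next-value predictor assembled from PID terms on the last known
    prediction error; the gains below are illustrative, not tuned."""
    def __init__(self, kp=0.7, ki=0.01, kd=0.1):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.x_hat = 0.0       # current prediction
        self.integral = 0.0    # accumulated error (carries the trend forward)
        self.prev_err = 0.0

    def predict(self):
        return self.x_hat

    def update(self, err):
        """err = reading - prediction; known exactly only after a transmission,
        taken as 0 otherwise so sensor and sink stay in the same state."""
        self.integral += err
        self.x_hat += self.kp * err + self.ki * self.integral + self.kd * (err - self.prev_err)
        self.prev_err = err

def simulate(readings, e_max=0.5):
    sensor, sink = PIDPredictor(), PIDPredictor()   # identical predictors at both ends
    sent = 0
    for x in readings:
        err = x - sensor.predict()
        if abs(err) > e_max:               # prediction too poor: transmit the real sample
            sensor.update(err); sink.update(err); sent += 1
        else:                              # sink silently keeps its own prediction
            sensor.update(0.0); sink.update(0.0)
    return sent

hr = [70 + 5 * math.sin(t / 10) for t in range(300)]    # hypothetical heart-rate trace
print(simulate(hr), "of", len(hr), "samples transmitted")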
0912.0955 | Rdv Ijcsis | Nazmeen Bibi Boodoo, and R. K. Subramanian | Robust Multi biometric Recognition Using Face and Ear Images | 6 pages IEEE format, International Journal of Computer Science and
Information Security, IJCSIS November 2009, ISSN 1947 5500,
http://sites.google.com/site/ijcsis/ | International Journal of Computer Science and Information
Security, IJCSIS, Vol. 6, No. 2, pp. 164-169, November 2009, USA | null | ISSN 1947 5500 | cs.CR cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study investigates the use of ear as a biometric for authentication and
shows experimental results obtained on a newly created dataset of 420 images.
Images are passed to a quality module in order to reduce False Rejection Rate.
The Principal Component Analysis (eigen ear) approach was used, obtaining 90.7
percent recognition rate. Improvement in recognition results is obtained when
ear biometric is fused with face biometric. The fusion is done at decision
level, achieving a recognition rate of 96 percent.
| [
{
"version": "v1",
"created": "Fri, 4 Dec 2009 21:51:03 GMT"
}
] | 2009-12-08T00:00:00 | [
[
"Boodoo",
"Nazmeen Bibi",
""
],
[
"Subramanian",
"R. K.",
""
]
] | TITLE: Robust Multi biometric Recognition Using Face and Ear Images
ABSTRACT: This study investigates the use of ear as a biometric for authentication and
shows experimental results obtained on a newly created dataset of 420 images.
Images are passed to a quality module in order to reduce False Rejection Rate.
The Principal Component Analysis (eigen ear) approach was used, obtaining 90.7
percent recognition rate. Improvement in recognition results is obtained when
ear biometric is fused with face biometric. The fusion is done at decision
level, achieving a recognition rate of 96 percent.
| new_dataset | 0.953579 |
0912.1014 | Rdv Ijcsis | Shailendra Singh, Sanjay Silakari | An ensemble approach for feature selection of Cyber Attack Dataset | 6 pages IEEE format, International Journal of Computer Science and
Information Security, IJCSIS November 2009, ISSN 1947 5500,
http://sites.google.com/site/ijcsis/ | International Journal of Computer Science and Information
Security, IJCSIS, Vol. 6, No. 2, pp. 297-302, November 2009, USA | null | ISSN 1947 5500 | cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Feature selection is an indispensable preprocessing step when mining huge
datasets that can significantly improve the overall system performance.
Therefore, in this paper we focus on a hybrid approach to feature selection.
This method falls into two phases. The filter phase selects the features with
the highest information gain and guides the initialization of the search process
for the wrapper phase, whose output is the final feature subset. The final
feature subsets are passed through the K-nearest neighbor classifier for classification of
attacks. The effectiveness of this algorithm is demonstrated on DARPA KDDCUP99
cyber attack dataset.
| [
{
"version": "v1",
"created": "Sat, 5 Dec 2009 13:15:08 GMT"
}
] | 2009-12-08T00:00:00 | [
[
"Singh",
"Shailendra",
""
],
[
"Silakari",
"Sanjay",
""
]
] | TITLE: An ensemble approach for feature selection of Cyber Attack Dataset
ABSTRACT: Feature selection is an indispensable preprocessing step when mining huge
datasets that can significantly improve the overall system performance.
Therefore, in this paper we focus on a hybrid approach to feature selection.
This method falls into two phases. The filter phase selects the features with
the highest information gain and guides the initialization of the search process
for the wrapper phase, whose output is the final feature subset. The final
feature subsets are passed through the K-nearest neighbor classifier for classification of
attacks. The effectiveness of this algorithm is demonstrated on DARPA KDDCUP99
cyber attack dataset.
| no_new_dataset | 0.948442 |
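The filter phase ranks features by information gain, which is simply the entropy of the class label minus its conditional entropy given the feature; a sketch for discrete features follows (the toy records merely stand in for KDDCUP99-style fields).

import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """IG(class; feature) = H(class) - H(class | feature)."""
    n = len(labels)
    by_value = {}
    for f, y in zip(feature, labels):
        by_value.setdefault(f, []).append(y)
    cond = sum(len(ys) / n * entropy(ys) for ys in by_value.values())
    return entropy(labels) - cond

proto = ["tcp", "tcp", "udp", "udp", "icmp", "icmp"]
label = ["attack", "attack", "normal", "normal", "attack", "normal"]
print(information_gain(proto, label))   # the filter phase would rank features by this score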
0912.0717 | Karol Gregor | Karol Gregor, Gregory Griffin | Behavior and performance of the deep belief networks on image
classification | 8 pages, 9 figures | null | null | null | cs.NE cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We apply deep belief networks of restricted Boltzmann machines to bags of
words of SIFT features obtained from databases of 13 Scenes, 15 Scenes and
Caltech 256 and study experimentally their behavior and performance. We find
that the final performance in the supervised phase is reached much faster if
the system is pre-trained. Pre-training the system on a larger dataset keeping
the supervised dataset fixed improves the performance (for the 13 Scenes case).
After the unsupervised pre-training, neurons arise that form approximate
explicit representations for several categories (meaning they are mostly active
for this category). The last three facts suggest that unsupervised training
really discovers structure in these data. Pre-training can be done on a
completely different dataset (we use the Corel dataset) and we find that the
supervised phase performs just as well (on the 15 Scenes dataset). This leads
us to conjecture that one can pre-train the system once (e.g. in a factory) and
subsequently apply it to many supervised problems which then learn much faster.
The best performance is obtained with a single hidden layer system, suggesting
that the histogram of SIFT features doesn't have much high-level structure. The
overall performance is almost equal, but slightly worse than that of the
support vector machine and spatial pyramid matching.
| [
{
"version": "v1",
"created": "Thu, 3 Dec 2009 19:20:14 GMT"
}
] | 2009-12-04T00:00:00 | [
[
"Gregor",
"Karol",
""
],
[
"Griffin",
"Gregory",
""
]
] | TITLE: Behavior and performance of the deep belief networks on image
classification
ABSTRACT: We apply deep belief networks of restricted Boltzmann machines to bags of
words of SIFT features obtained from databases of 13 Scenes, 15 Scenes and
Caltech 256 and study experimentally their behavior and performance. We find
that the final performance in the supervised phase is reached much faster if
the system is pre-trained. Pre-training the system on a larger dataset keeping
the supervised dataset fixed improves the performance (for the 13 Scenes case).
After the unsupervised pre-training, neurons arise that form approximate
explicit representations for several categories (meaning they are mostly active
for this category). The last three facts suggest that unsupervised training
really discovers structure in these data. Pre-training can be done on a
completely different dataset (we use the Corel dataset) and we find that the
supervised phase performs just as well (on the 15 Scenes dataset). This leads
us to conjecture that one can pre-train the system once (e.g. in a factory) and
subsequently apply it to many supervised problems which then learn much faster.
The best performance is obtained with a single hidden layer system, suggesting
that the histogram of SIFT features doesn't have much high-level structure. The
overall performance is almost equal, but slightly worse than that of the
support vector machine and spatial pyramid matching.
| no_new_dataset | 0.947914 |
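One layer of such a deep belief network is a restricted Boltzmann machine trained with contrastive divergence; below is a compact CD-1 sketch on binary inputs. The layer sizes, learning rate, and random batch are assumptions, and a real run would feed bag-of-SIFT histograms rather than random bits.

import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, lr=0.05):
    """One contrastive-divergence (CD-1) update for a binary RBM.
    v0: (batch, n_vis) visible data; W: (n_vis, n_hid); b, c: visible/hidden biases."""
    ph0 = sigmoid(v0 @ W + c)                         # P(h = 1 | v0)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)  # sample hidden states
    pv1 = sigmoid(h0 @ W.T + b)                       # reconstruction P(v = 1 | h0)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)
    n = len(v0)
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / n           # approximate log-likelihood gradient
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)

n_vis, n_hid = 200, 64                                # e.g. a 200-word visual vocabulary
W = 0.01 * rng.normal(size=(n_vis, n_hid))
b = np.zeros(n_vis); c = np.zeros(n_hid)
data = (rng.random((32, n_vis)) < 0.1).astype(float)  # stand-in binary bag-of-words batch
for _ in range(10):
    cd1_step(data, W, b, c)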
0903.2870 | Patrick Erik Bradley | Patrick Erik Bradley | On $p$-adic Classification | 16 pages, 7 figures, 1 table; added reference, corrected typos, minor
content changes | p-Adic Numbers, Ultrametric Analysis, and Applications, Vol. 1,
No. 4 (2009), 271-285 | 10.1134/S2070046609040013 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A $p$-adic modification of the split-LBG classification method is presented
in which first clusterings and then cluster centers are computed that locally
minimise an energy function. The outcome for a fixed dataset is independent of
the prime number $p$ with finitely many exceptions. The methods are applied to
the construction of $p$-adic classifiers in the context of learning.
| [
{
"version": "v1",
"created": "Mon, 16 Mar 2009 22:52:06 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Jun 2009 14:10:45 GMT"
}
] | 2009-12-01T00:00:00 | [
[
"Bradley",
"Patrick Erik",
""
]
] | TITLE: On $p$-adic Classification
ABSTRACT: A $p$-adic modification of the split-LBG classification method is presented
in which first clusterings and then cluster centers are computed that locally
minimise an energy function. The outcome for a fixed dataset is independent of
the prime number $p$ with finitely many exceptions. The methods are applied to
the construction of $p$-adic classifiers in the context of learning.
| no_new_dataset | 0.949482 |
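The $p$-adic distance underlying such clustering comes from the $p$-adic valuation; a sketch for integers (extending to rationals would subtract the denominator's valuation):

def v_p(n, p):
    """p-adic valuation: the exponent of p dividing n (v_p(0) treated as infinity)."""
    if n == 0:
        return float("inf")
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def p_adic_dist(a, b, p):
    """|a - b|_p = p^(-v_p(a - b)): numbers are close when their difference is
    divisible by a high power of p, giving the ultrametric used for clustering."""
    return 0.0 if a == b else p ** (-v_p(a - b, p))

print(p_adic_dist(1, 1 + 3**4, 3))   # 3^-4: very close 3-adically
print(p_adic_dist(1, 2, 3))          # 1.0: as far apart as possible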
0712.2063 | Vladimir Pestov | Vladimir Pestov | An axiomatic approach to intrinsic dimension of a dataset | 10 pages, 5 figures, latex 2e with Elsevier macros, final submission
to Neural Networks with referees' comments taken into account | Neural Networks 21, 2-3 (2008), 204-213. | null | null | cs.IR | null | We perform a deeper analysis of an axiomatic approach to the concept of
intrinsic dimension of a dataset proposed by us in the IJCNN'07 paper
(arXiv:cs/0703125). The main features of our approach are that a high intrinsic
dimension of a dataset reflects the presence of the curse of dimensionality (in
a certain mathematically precise sense), and that dimension of a discrete
i.i.d. sample of a low-dimensional manifold is, with high probability, close to
that of the manifold. At the same time, the intrinsic dimension of a sample is
easily corrupted by moderate high-dimensional noise (of the same amplitude as
the size of the manifold) and suffers from prohibitively high computational
complexity (computing it is an $NP$-complete problem). We outline a possible
way to overcome these difficulties.
| [
{
"version": "v1",
"created": "Wed, 12 Dec 2007 23:39:21 GMT"
}
] | 2009-11-17T00:00:00 | [
[
"Pestov",
"Vladimir",
""
]
] | TITLE: An axiomatic approach to intrinsic dimension of a dataset
ABSTRACT: We perform a deeper analysis of an axiomatic approach to the concept of
intrinsic dimension of a dataset proposed by us in the IJCNN'07 paper
(arXiv:cs/0703125). The main features of our approach are that a high intrinsic
dimension of a dataset reflects the presence of the curse of dimensionality (in
a certain mathematically precise sense), and that dimension of a discrete
i.i.d. sample of a low-dimensional manifold is, with high probability, close to
that of the manifold. At the same time, the intrinsic dimension of a sample is
easily corrupted by moderate high-dimensional noise (of the same amplitude as
the size of the manifold) and suffers from prohibitively high computational
complexity (computing it is an $NP$-complete problem). We outline a possible
way to overcome these difficulties.
| no_new_dataset | 0.94545 |
cs/9901004 | Vladimir Pestov | Vladimir Pestov | On the geometry of similarity search: dimensionality curse and
concentration of measure | 7 pages, LaTeX 2e | Information Processing Letters 73 (2000), 47-51. | null | RP-99-01, Victoria University of Wellington, NZ | cs.IR cs.CG cs.DB cs.DS | null | We suggest that the curse of dimensionality affecting the similarity-based
search in large datasets is a manifestation of the phenomenon of concentration
of measure on high-dimensional structures. We prove that, under certain
geometric assumptions on the query domain $\Omega$ and the dataset $X$, if
$\Omega$ satisfies the so-called concentration property, then for most query
points $x^\ast$ the ball of radius $(1+\epsilon)d_X(x^\ast)$ centred at $x^\ast$
contains either all points of $X$ or else at least $C_1\exp(-C_2\epsilon^2n)$ of
them. Here $d_X(x^\ast)$ is the distance from $x^\ast$ to the nearest neighbour
in $X$ and $n$ is the dimension of $\Omega$.
| [
{
"version": "v1",
"created": "Tue, 12 Jan 1999 21:56:39 GMT"
}
] | 2009-11-17T00:00:00 | [
[
"Pestov",
"Vladimir",
""
]
] | TITLE: On the geometry of similarity search: dimensionality curse and
concentration of measure
ABSTRACT: We suggest that the curse of dimensionality affecting the similarity-based
search in large datasets is a manifestation of the phenomenon of concentration
of measure on high-dimensional structures. We prove that, under certain
geometric assumptions on the query domain $\Omega$ and the dataset $X$, if
$\Omega$ satisfies the so-called concentration property, then for most query
points $x^\ast$ the ball of radius $(1+\epsilon)d_X(x^\ast)$ centred at $x^\ast$
contains either all points of $X$ or else at least $C_1\exp(-C_2\epsilon^2n)$ of
them. Here $d_X(x^\ast)$ is the distance from $x^\ast$ to the nearest neighbour
in $X$ and $n$ is the dimension of $\Omega$.
| no_new_dataset | 0.944125 |